International Perspectives: R&D and AI Policies in the Global Landscape

“The challenges of aligning policies and fostering international cooperation are significant, but the need is evident for collaboration to harness AI’s potential for humanity’s benefit.”

Everyone’s talking about artificial intelligence (AI), but not everyone’s talking about it the same way. The tenor of the global conversation on AI ranges from dystopian fearmongering to evangelistic optimism.

It’s vital to know the prevailing mood in the territory where you plan to launch your AI-powered service, app, or consultancy. In this article, we’ll briefly tour recent legislation, ethical conversations, and economic strategies to demonstrate how varied current thinking is on this revolutionary new technology.

We’ll look at the current situation in the United States, Canada, Europe, China, Japan and beyond, as countries develop the policies, guidelines and laws necessary to regulate AI innovation without stifling creativity.

The U.S. Approach: Risk Management and Research Resourcing

The U.S. White House’s recent “National Artificial Intelligence Research and Development Strategic Plan” set out nine strategies, wide-ranging in scope and including investment in responsible AI research, safety, systems evaluation and international collaboration.

The proposals include an “AI Bill of Rights,” designed to combat the pervasive fear that AI could, if given access to substantial public datasets, have undue influence on the lives of those whose information it accesses. As the proposal paper states, “unchecked social media data collection has been used to threaten people’s opportunities, undermine their privacy, or pervasively track their activity—often without their knowledge or consent.”

It’s worth noting, however, that for all its focus on curbing any inadvertent negative consequences of AI development, the U.S. government’s approach includes provisions for increasing the AI workforce, expanding public-private partnerships and funding accelerated research.

The President’s 2024 Budget also proposes a historic investment of $209.7 billion for federal research and development (R&D), prioritizing breakthroughs in critical areas such as health, clean energy, and transportation. The budget emphasizes public access to research results, inclusive funding for disadvantaged communities, and intellectual property protection. By reinforcing and expanding the commitment to innovation, the United States aims to address societal challenges, create jobs, and ensure the benefits of scientific research reach all people and communities, fostering a resilient and competitive nation.

Overall, it’s a balanced approach that recognizes many of the ethical concerns of AI while celebrating its enormous economic and technological potential.

Canada: Ethical Focus and Thought Leadership

A little more cautious perhaps than its southern neighbor, Canada takes an ethics-first approach to the AI issue, as evidenced in its statement, “Exploring the Future of Responsible AI in Government.”

In that document, the Canadian government outlines five ethical principles that will guide its approach:

  1. To understand and measure AI’s impact.
  2. To be transparent about how it uses AI.
  3. To provide a meaningful explanation of any AI-driven government decision-making.
  4. To be transparent by providing access to underlying code, datasets, and more, whilst balancing individual privacy rights and national security.
  5. To provide training and upskilling for government employees.

Principle four is likely to prove the most challenging as it gets to the crux of the issue: proprietary secrecy versus procedural transparency.

In May 2023, the Canadian government passed the Artificial Intelligence and Data Act (AIDA), intended as “the first step towards a new regulatory system designed to guide AI innovation in a positive direction and to encourage the responsible adoption of AI technologies by Canadians and Canadian businesses.”

Furthermore, as the first country to create a national strategy for AI (in 2017) and among the foremost innovators in the sector, the government has much to lose from any missteps. However, they also have a head start in considering these issues from a governmental and regulatory standpoint and might be regarded as a thought leader in global AI collaboration.

Additionally, Canada’s National Research Council 2023–24 Departmental Plan commits to fostering Canada’s economic success by engaging in, supporting, and promoting innovation-led research and development, while advancing fundamental scientific knowledge and Canada’s standing in global research excellence. It also pledges to facilitate seamless access to scientific and technological infrastructure, services, and information for government, business, and research communities.

European Union: A Principled, Risk-Driven Approach

With 27 countries and many different cultures and languages, the EU’s challenge when considering AI regulation frameworks is significant. Like Canada, however, the EU was an early entrant into AI ethics, publishing its “Ethics Guidelines for Trustworthy AI” in April 2019, following an open consultation.

The guidelines proposed three components of trustworthy AI, stating that such systems should be:

  • Lawful: respecting laws and regulations.
  • Ethical: respecting ethical values and principles.
  • Robust: technically secure and resilient.

The latter concern proved prescient, recognizing that as AI-empowered services, apps, and decision-making become omnipresent, we will depend upon their speed and efficiency. Having such systems prove vulnerable to sudden outages or attacks could have wide-ranging consequences.

The EU’s thinking includes guidelines promoting diversity, non-discrimination, fairness, and concern for “societal and environmental well-being.”

Developing from these early principles, the European Commission is finalizing its regulatory framework for AI. This framework takes an avowedly risk-based approach, defining four layers of risk that uses of AI might entail (from minimal to unacceptable). Future systems will be evaluated against this risk hierarchy and may be sanctioned or blocked if they fall short.

The EU anticipates that its framework will be fully operational by the second half of 2024.

Horizon Europe, the European Union’s comprehensive research and innovation program, is backed by a substantial €95.5 billion. This funding serves as a vital instrument to advance the EU’s climate objectives, fortify energy resilience, and foster the development of foundational digital technologies. It also encompasses focused initiatives aimed at assisting Ukraine, enhancing economic stability, and facilitating a sustainable recovery from the adverse impacts of the COVID-19 pandemic.

China: Military and Economic Might with Cautious Collaboration

China’s President Xi Jinping sees AI as crucial to China’s military might and economic power in the coming decade. China’s State Council was an early AI adopter and innovator, publishing its “New Generation Artificial Intelligence Development Plan (AIDP)” in July 2017.

In the opening words of that strategy, “AI has become a new focus of international competition. AI is a strategic technology that will lead in the future; the world’s major developed countries are taking the development of AI as a major strategy to enhance national competitiveness and protect national security.”

China has kept an exceptionally watchful eye on U.S. innovation in AI, and keen competition is growing between these two superpowers.

In recent years, China’s bullish optimism has been tempered by concerns that AI’s sudden rise may precipitate security risks and lower the threshold for conflict, as large-scale “bloodless” cyberattacks could become possible as an alternative to human bloodshed. China’s innovation strategies also take a multifaceted approach that goes beyond mere productivity and technical-expertise enhancements to ascend the value chain. The primary objective is to outpace foreign competitors and replace imports across strategically identified industries.

As a result, there are signs that China is keen to join in global collaborative projects to regulate AI use and collaborate on ethical deployment principles.

A recent report from the Center for a New American Security (CNAS) think tank, quoting the AIDP, noted that China has publicly called to “deepen international cooperation on AI laws and regulations, international rules and so on, and jointly cope with global challenges.”

Of course, what a country publicly states on the global stage, and what its military-industrial complex develops behind the scenes may be entirely different. In the private sector, Jack Ma, Alibaba’s chairman, expressed significant concerns at the 2019 Davos World Economic Forum that an AI arms race could lead to war.

Japan: Optimism and Innovation

In 2019, Japan published its “Social Principles of Human-Centric AI,” an attempt to show how human principles might be furthered by AI rather than taking a risk-first approach.

The priorities it describes within these social principles include:

  1. Human-centricity of approach
  2. Education and literacy optimization
  3. Privacy protection
  4. National security
  5. Fair competition
  6. Fairness, accountability, and transparency
  7. Innovation

Although it’s laudable that Japan sees the potential for AI to improve lives, it currently has no sector-specific measures to regulate risk.

Instead, guidelines promote transparency and fairness, and various business-related laws enable individuals to determine data usage.

The Protection of Personal Information Act also outlines mandatory requirements for large-scale commercial data sourcing, including anonymizing such information.

The R&D and Innovation Subcommittee also compiled recommendations titled “Future Policy Directions for Promoting Innovation Circulation,” which include:

  1. Startups first
  2. Development of human resources and creation of intellectual capital
  3. More challenges and failures
  4. Intensive support for market creation
  5. Shifting to “mission-oriented innovation policies”
  6. Strengthening computing infrastructures and general-purpose technologies

South Korea: Innovation through Risk Management

South Korea is also adopting a policy of innovative freedom. However, unlike Japan, it has passed AI-specific legislation, the 2023 “Act on Promotion of AI Industry and Framework for Establishing Trustworthy AI.” This Act incorporates seven previous fragmentary pieces of AI legislation to provide an overall legal framework.

The Act clarifies that anyone can develop new AI technology without government pre-approval. It defines “high-risk AI” and actively requires the application of conditions of trustworthiness in these areas. It also establishes a basis for ethical guidelines, a future AI roadmap and the creation of an “AI Council” to be supervised by the Prime Minister.

Indeed, South Korea was one of the nations involved in developing the OECD AI Policy Observatory (OECD.AI) and its principles.

One example of how South Korea is attempting to balance an innovation-friendly environment for AI and privacy/security concerns is the establishment of a “data dam,” a secure repository of public data for use across a wide range of public and private sectors.

While unlikely to prove popular with countries more wary of government intrusion, this is at least an innovative approach to the problem inherent in all AI systems – the more data they access, the better they become.

Additionally, in October 2022, Korea announced the “National Strategic Technology Nurture Plan,” which aims to position the country as a technology leader in the global tech competition. The plan focuses on fostering twelve strategic technologies, including semiconductors, quantum, AI, and cybersecurity, through increased R&D investment, cross-border cooperation, and talent development. It emphasizes the importance of science and technology in driving economic growth, industry competitiveness, and national security. The plan also includes establishing governance systems, industry-academia-research collaboration hubs, and international cooperation to support technology sovereignty.

Global Collaboration: Standardizing and Agreeing Principles

The above nations, and many more entering the AI field, are already involved in several attempts at global cooperation. Such efforts include:

  • The AI for Good Summit, a thought leadership and research convention based in Geneva but accessible online, features dozens of keynote speakers worldwide, including such luminaries as WHO Director-General Tedros Adhanom Ghebreyesus, UN Secretary-General António Guterres and Google DeepMind’s COO Lila Ibrahim. The Summit focuses on ways AI can help solve global problems, including climate change, health inequalities and poverty.
  • GPAI – the Global Partnership on AI, a multidisciplinary, multi-stakeholder initiative for research and development in AI, focusing on trustworthiness. The OECD hosts the GPAI, and membership requires a commitment to the OECD’s Recommendation on AI. The Organization for Economic Cooperation and Development (OECD) is an entity whose brief extends beyond AI, but its AI Policy Observatory focuses specifically on evidence-based policy analysis in AI. In addition, the OECD recommends human-centric principles in this field.

In the years ahead, further conferences, resources, and collaborative agencies will deepen and strengthen global cooperation, if only to prevent a potentially damaging and risky AI arms race.

Challenges in Policy Alignment

Given the variety of government types and research approaches evident in AI development globally, it may be challenging for the world’s leaders to agree upon a single set of principles. There will be several significant challenges to overcome, including:

  • Acceptance and Recognition of Risk: Depending on how a nation values its citizens’ autonomy, different levels of privacy risk may be deemed acceptable.
  • Transparency and Openness: Economic and political factors may prevent some countries from adopting a collaborative knowledge-sharing approach.
  • Legal Complexity: As each country develops its laws surrounding AI, understanding how these interact may prove difficult. After all, one of the benefits of AI is that it works across national boundaries and cannot easily be locally restricted.
  • Trust: Put simply, nations that have every reason to mistrust one another may have to come together to agree on universal principles. Rather like agreements on nuclear non-proliferation, this may make for some uncomfortable handshakes!

Need for International Cooperation

Despite these misgivings and challenges, there will be a need for increased cooperation between nations as AI continues to develop at an unprecedented rate.

After all, at heart, international interests are aligned: almost all nations want AI to help solve their societal and economic problems rather than deepen them.

Experts in international diplomacy, AI research and development, and legal formulation will have to agree on fair principles for developing AIs that deliver economic and societal gains rather than further deepening global divisions.

Toward Collaboration

The global landscape of R&D and AI policies is characterized by diverse approaches and perspectives. While countries like the United States, Canada, the European Union, China, Japan, and South Korea have different priorities and strategies, they all recognize the importance of AI in shaping their future. The challenges of aligning policies and fostering international cooperation are significant, but the need is evident for collaboration to harness AI’s potential for humanity’s benefit. By working together, countries can establish common principles, address ethical concerns, and promote responsible AI innovation, ensuring that AI technology drives positive change in the global landscape.


Warning & Disclaimer: The pages, articles and comments on IPWatchdog.com do not constitute legal advice, nor do they create any attorney-client relationship. The articles published express the personal opinion and views of the author as of the time of publication and should not be attributed to the author’s employer, clients or the sponsors of IPWatchdog.com.
