Who’s afraid of AI?


TECH4GOOD


We are now beginning to see how artificial intelligence (AI) is transforming our world. Because it can compute, search, and formulate solutions quickly, it has shown great potential in helping humanity address societal issues related to climate change, healthcare, education, transportation, and industry. AI is now being integrated into almost every aspect of our lives. However, it has also raised concerns about the ethical, legal, and social implications of its use. Governments today are struggling with how to regulate AI to ensure its safe and ethical development and utilization.

A technology information aggregator, BluCles4U, recently released the results of a study on which countries are most afraid of AI. Surprisingly, the Philippines ranked fourth, after the United States, India, and the United Kingdom. The results offer useful insights into how the general public and institutions perceive AI and the issues surrounding it. Key concerns were danger and safety, job displacement, ethics and social impact, productivity, and economic impact. The study also touched on whether AI will ever develop sentience, consciousness, and emotions.

In the case of the Philippines, the fear factor may have been heightened by several legislative initiatives intended to mitigate job loss and AI-enabled online piracy, including proposals to create a new government agency to regulate AI. It may also have been fueled by well-publicized statements from tech personalities calling for a pause on further development of generative AI, for fear that it could lead to artificial general intelligence and, ultimately, an AI-ruled world. Then there are the recent pronouncements of people who were themselves responsible for developing the technology: Geoffrey Hinton recently quit Google so he could speak freely about the dangers of AI, saying he regrets his contribution to the field.

Regulating AI is not an easy task, considering the possible issues it can bring: bias and discrimination, privacy violations, job losses, disinformation, and the potential for AI to be weaponized or used for criminal activities. AI is a fast-evolving technology, and its risks and benefits are not yet fully understood. Regulations developed today may become obsolete tomorrow. How can the government regulate something it does not fully understand, or whose face it cannot yet picture?

AI is a multidisciplinary domain involving computer and data scientists, engineers, ethics experts, lawyers, and policymakers. Who should take responsibility for regulating it? A coordinated approach could involve all stakeholders in the regulatory process, but that will not be easy given their differing agendas and priorities. If a person is accidentally struck by an AI-operated vehicle, who should be held liable: the owner of the vehicle, its manufacturer, or the programmer who developed the algorithm that drives it?

Regulating AI also requires international cooperation and coordination because it is a technology that transcends national boundaries. However, countries have different regulatory frameworks and cultural norms regarding AI, making it challenging to develop a harmonized global framework that can address the unique concerns and priorities of each.

Regulating AI is not just about setting rules and guidelines. It also requires effective enforcement mechanisms that can ensure compliance with the regulations. This can be challenging, given the complex and dynamic nature of AI systems.

Artificial intelligence is at the forefront of global governance conversations today. Governments must balance the need for continuous innovation at one end of the spectrum with the ethical and responsible use of the technology at the other. Some governments have opted to prescribe AI governance frameworks instead of legislating regulatory policies. These frameworks are mostly voluntary and focus on enabling an ecosystem that balances support for data-driven innovation with ethical integrity and public trust. To achieve this, the use of AI should be human-centric, safe, explainable, and transparent. Implementing a voluntary governance approach requires multi-stakeholder consultation with industry, academia, and government.

AI is a complex technology that is often difficult for non-experts to understand. This can make it challenging for policymakers to develop regulations that are effective and enforceable, and difficult for the public to grasp the technology's risks and benefits, which can lead to misunderstanding and mistrust. Governments must therefore move beyond simple protectionist and defensive mindsets. A regulatory regime that is either too slow to respond to new developments or too reactive, overly restrictive, and stifling of innovation may result in missed opportunities for the Philippines. We need a regulatory framework that can adapt to the changing technological landscape. ([email protected])

(The author is the lead convenor of the Alliance for Technology Innovators for the Nation (ATIN), vice president of the Analytics Association of the Philippines, and vice president, UP System Information Technology Foundation.)