Artificial Intelligence and the future of humanity


TECH4GOOD

Monchito Ibrahim

Will Artificial Intelligence ever rule the world?

Two emerging technologies, AI and the metaverse, are now beginning to manifest globally and are expected to transform the world beyond recognition. AI, however, is likely to pose the bigger challenge to humanity if it is not properly managed early on.

AI takeover is a common theme in science fiction. It is a hypothetical scenario in which AI becomes the dominant form of intelligence on Earth, with robots effectively wresting control of the planet from humans. Will this ever happen? I believe it could, but only if humans allow it to happen. If we let AI develop at a fast, uncontrolled pace, we will see science fiction turn into reality very soon. Technology has become so intelligent, with the internet extending physical reality into the virtual, that the possibility is no longer far-fetched. When that happens, society will have to be re-imagined, re-invented, and re-constructed.

Elon Musk (whose company SpaceX is expected to open a plant here in the country) and Bill Gates have both issued warnings about AIs developing super-intelligence that could allow them to destroy Earth or wipe out humanity, whether by accident or by intent. Musk has even postulated that AI is likely to become much smarter than humans and overtake us very soon. The late physicist Stephen Hawking also warned that AI will eventually reach a level where it essentially becomes a new form of life that outperforms humans.

The challenge at hand is how to build a super-intelligent tool that helps its creators instead of harming them. In 2015, a number of AI researchers joined Stephen Hawking and Elon Musk in signing the Future of Life Institute's open letter highlighting the potential risks and benefits of Artificial Intelligence. The signatories "believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued."

The most critical question humanity needs to answer is how to manage the development of AI so that humans never lose control over it. I have read of several approaches to this issue. One is alignment, in which AI development is kept within the confines of accepted human values. The other is capability control, which aims to reduce or even eliminate an AI system's capacity to harm humans or seize control. These controls must be in place before a super-intelligent AI is created. We do not want a scenario where a badly designed super-intelligent AI seizes control of its environment and refuses to allow itself to be modified once it is operational.

It is not just AI taking over the world, or what we call "evil AI," that we need to worry about. A more immediate concern is humans using AI as a tool for bank robbery, fraud, and other crimes. It is common knowledge that black-hat hackers have been using simple AI solutions to create cybersecurity problems for individuals, businesses, and even countries. This calls for concrete steps to address the concern immediately.

One of the panels at the recent National Analytics and AI Summit organized by the Analytics Association of the Philippines (AAP) discussed ethical considerations in adopting AI. The panelists highlighted strategies, contextualized for the Philippines, designed to ensure that AI continues to develop while creating benefits for society. One suggestion was to create a multi-sectoral ethics body to develop standards and propose policies regulating the use and development of AI. The intention is to build controls without stifling AI-based innovation. Other recommendations centered on education, both for organizations and for the workforce. Many companies that have started their AI journeys are not even clear on what they are walking into or what their responsibilities are.

David Chalmers, a professor of philosophy at New York University, said, "I worry about a scenario where the future is AI and humans are left out of it. If the world is taken over by unconscious robots, that would be as disastrous and bleak a scenario as one could imagine." We need to start a continuing conversation about the ethical use of AI and how to regulate it if we must. It is very hard to predict when AI breakthroughs will happen, which makes preparation all the more important. Disruptions must be pre-empted today because, by the time they happen, it will be too late to respond.

As one of the summit panelists put it, we should always be mindful of our purpose: to build a better world where people can live in dignity and flourish. Let us all strive to use AI for that.

[email protected]

(The author is the Lead Convenor of the Alliance for Technology Innovators for the Nation (ATIN), vice president of the Analytics Association of the Philippines, and vice president, UP System Information Technology Foundation.)