Laws needed to regulate AI applications, curb apparent hazards

[Editorial cartoon, May 24, 2023]

“If this technology goes wrong, it can go quite wrong.”

With this declaration, Sam Altman, CEO of OpenAI, urged the US Congress last week to enact a law regulating the use of artificial intelligence (AI), amid serious concerns about the dangers it poses.

OpenAI, the California-based AI research company, developed ChatGPT, which stands for Chat Generative Pre-Trained Transformer. It is an AI chatbot that can process natural human language and generate a response. GPT models, pre-trained on vast amounts of text data from diverse sources, capture a wide range of linguistic patterns, facts, grammar and context.

Writing poems, passing the bar exam, preparing a five-year business plan and drafting a graduation speech are among the capabilities attributed to ChatGPT. Six ways of making money with ChatGPT are often cited: getting business ideas, freelancing, blogging, email marketing, creating videos, and writing and self-publishing e-books. However, technology experts have sounded the cybersecurity alarm: scammers could design prompts that enable ChatGPT to write phishing emails, leading to wide-scale fraud.

Altman affirms that while “AI has the potential to improve nearly every aspect of our lives, (and) address some of humanity's biggest challenges, like climate change and curing cancer,” it could also give rise to concerns about “widespread disinformation, job security and other hazards.” Hence, he advocates regulatory intervention by governments to mitigate the risks posed by increasingly powerful models.

Meanwhile, the European Parliament is poised to pass an AI Act “to ensure that AI systems are overseen by people; are safe, transparent, traceable, non-discriminatory, and environmentally friendly.” The proposed law also seeks to prescribe a technology-neutral definition applicable to present and future systems. Parliament members’ avowed objective is “to ensure a human-centric and ethical development of artificial intelligence (AI) in Europe” by crafting transparency and risk-management rules for AI systems.

The proposed law seeks to prohibit “intrusive and discriminatory” uses of AI systems, such as those involving: biometric identification systems; predictive policing systems based on profiling; and emotion recognition systems in law enforcement, border management, workplace, and educational institutions.

Providers of AI foundation models are required “to guarantee robust protection of fundamental rights, health and safety and the environment, democracy and rule of law; mitigate risks; comply with design, information and environmental requirements and register in the EU database.” Developers of foundation models similar to ChatGPT would also be required to comply with additional transparency requirements, like “disclosing that the content was generated by AI, designing the model to prevent it from generating illegal content and publishing summaries of copyrighted data used for training.”

Meanwhile, there must be continuing efforts to educate the public on the reasoned use of technology to address global concerns like climate change and environmental sustainability. Governments must heed the call to rein in the untrammeled deployment of technology and ensure that, while basic human rights are adequately protected, the guardrails do not unduly restrict technology-driven business growth and innovation.