Corporations that use artificial intelligence (AI) in their governance systems should ensure that their stakeholders’ private information is secured, a lawyer-professor from De La Salle University said.
In his paper “The Robo Corp: A.I. Robots in Philippine Corporate Governance,” James Keith Heffron said that if a corporation requires a robot director for its work, it should inform its stakeholders of the robot’s capacity, processes, and limitations.
“Examples of these measures are protocols against malicious cyberattacks, deepfakes, and rules on the use of generative AI,” Heffron said.
“Human oversight protocols should be set in place to legally override AI decisions. This can be done by retaining the usual legislated concept of stockholder or corporator oversight through votation,” he added.
Heffron recommended that such an AI robot be paired with a complementary human director who can discuss and explain the processes and analyses the AI used to arrive at its decisions.
This is crucial because an AI cannot be held liable for criminal offenses, should one occur, while human directors may be punished through dolo or culpa, wherein it is not the act itself but the malicious intent or the gross lack of care that is punished, he said.
“The AI algorithms used in both the processes and decisions should be in line with and not violate anti-discriminatory legislation,” the lawyer said.
He pointed out that a robot remains vulnerable to data breaches and hacking, which pose a real threat to the industry.
“It goes without saying that a robot’s programming may be hacked, and its code be surreptitiously altered to make decisions based on the hacker’s own interests and objectives,” he also said.
The Philippines has been adopting AI laws and policies that ensure responsible AI use, such as the National AI Strategy Roadmap 2.0.
Developed in cooperation with the Department of Trade and Industry and the Center for AI Research, the plan aims to recalibrate the country’s strategic actions in light of recent developments and to address emerging themes such as ethics and governance.