The Philippine Supreme Court (SC) has approved a governance framework regulating the use of artificial intelligence in the country's judicial system, setting clear ethical boundaries on how AI tools may be used in court proceedings and administration.
The SC En Banc passed the Governance Framework on the Use of Human-Centered Augmented Intelligence in the Judiciary through a Resolution dated February 18, 2026, under A.M. No. 25-11-28-SC.
What the Framework covers
The framework applies to all levels of the judiciary, from Supreme Court justices down to lower court employees, as well as court users and third-party vendors involved in developing or deploying AI tools for court use.
At its core, the framework rests on three ethical principles: fairness, accountability, and transparency. It draws from international standards, including guidelines from the Council of ASEAN Chief Justices and UNESCO, and notably expands its ethical scope to include environmental responsibility and sustainability. The Framework was developed by a working group chaired by Senior Associate Justice Marvic M.V.F. Leonen, with Associate Justices Ramon Paul L. Hernando and Rodil V. Zalameda as vice chairpersons. Other members of the Judiciary, experts in AI and related fields, lawyers, and the academe were all consulted during its development.
A defining feature is its use of the term "human-centered augmented intelligence," a deliberate framing that positions AI as a support tool rather than a decision-maker. As the framework states, "The use of human-centered augmented intelligence should be centered on human values, such as the promotion of the rule of law and fundamental freedoms, dignity and autonomy, privacy and data protection, fairness, nondiscrimination, and social justice."
Key rules and restrictions
Several firm rules govern AI use under the framework:
No unauthorized AI tools. No AI tool may be deployed without express approval from the SC En Banc. Rollout must follow a phased approach, beginning with pilot testing.
Mandatory disclosure. Anyone using AI to prepare court documents must declare which tool was used, its version, the reason for its use, and the degree of human oversight applied.
No AI-only decisions. AI tools and their outputs cannot serve as the sole basis for any adjudicatory ruling. Human judges retain full responsibility for independent legal reasoning and final judgments.
Data protection. Confidential, privileged, or sensitive information must not be entered into any AI tool without explicit authority. A risk assessment is required before any AI tool is used.
Non-discrimination. AI development and use must not worsen existing inequalities or introduce new forms of bias. The judiciary will provide training to address algorithmic and automation bias.
Covered AI use cases include voice-to-text transcription, legal research, document summarization, translation, optical character recognition, and copy editing, among others.
Oversight and implementation
The SC will establish a permanent Committee on Human-Centered Augmented Intelligence to serve as its main advisory body on AI design, development, and ethical use. The committee will draw members from the legal community and other fields relevant to AI in courts.
The framework forms part of the SC's Strategic Plan for Judicial Innovations 2022–2027, which aims to build a technology-driven judiciary that is transparent, accountable, and accessible to the public.