Let justice lead the code

The Supreme Court has opened the door. Now let’s walk through it

In April 2025, the Supreme Court of Pakistan delivered a ruling that may well define our legal generation. In a landmark judgment authored by Mr Justice Syed Mansoor Ali Shah and joined by Mr Justice Aqeel Ahmed Abbasi, the Court called for comprehensive, constitutional, and carefully bounded guidelines for the use of Artificial Intelligence in Pakistan’s judicial system.

It was more than a legal pronouncement; it was an invitation. An invitation to innovate responsibly. To reform without compromising integrity. And to modernize the courts while keeping the soul of justice intact.

For someone who has spent the last several years at the frontier of legal AI in Pakistan, building tools, training systems, educating the profession, and writing frameworks, this judgment felt personal. Not just because it validated much of the work my colleagues and I have done, but because it recognized that the judicial crisis of delay and inefficiency is real, and that technology, especially AI, is now part of the solution.

I have long maintained that Pakistan cannot afford to treat AI as a novelty. When our courts are burdened with millions of unresolved cases, and litigants wait years— sometimes decades— for basic relief, the constitutional promise under Article 10A (fair trial) and Article 37(d) (inexpensive and expeditious justice) begins to feel like fiction. The Supreme Court has now rightly declared that delays are a constitutional failure— and that AI, if properly governed, can help cure it.

But let us be absolutely clear: this is not a green light for techno-utopianism. The Court drew a principled boundary: AI must serve the judge, not replace the judge. It can assist in legal research, refine draft judgments, support administrative tasks, and enhance clarity— but the final moral, interpretive, and constitutional responsibility belongs only to the human judge.

That is the only legitimate model. And I have spent years building it.

In 2023, I published ChatGPT for Lawyers, which became a bestseller on Amazon and offered the first in-depth, practical guide to how legal professionals can ethically integrate generative AI into their workflows. I didn’t write it as theory; I wrote it as a working lawyer and a technologist. I wrote it because I feared we were being left behind, and because I believed we could lead.

Soon after, working with some remarkable people I had met along the way, I co-developed YourMunshi, a legal AI assistant trained specifically on Pakistani statutes, judicial language, and procedural logic. This wasn’t an imported tool. It was built for our system, our courts, our lawyers. And it came with guardrails: every output required human approval, every citation was traceable, and no conclusions were ever final without verification.

To ensure lawyers and judges weren’t left behind, we conducted countrywide events throughout 2024, from Islamabad to Karachi and beyond. We gave live demonstrations of YourMunshi and of AI in practice, offered hands-on training, and invited district judges, court staff, bar councils, and law students to understand how AI could empower, not undermine, their practice. The results were clear: the appetite is there. The system is ready. What it needs now is leadership.

This Supreme Court judgment provides precisely that.

It rightly identifies the opportunities: AI-powered legal research tools like Westlaw, Casetext, Judge-GPT, and ChatGPT can revolutionize how courts handle volume. Drafting tools like BriefCatch and WordRake can elevate the clarity and professionalism of judicial writing. And case allocation algorithms— already used in China, Kazakhstan, and several EU jurisdictions— can reduce discretion, eliminate judge-shopping, and restore public trust.

But the Court is equally aware of the dangers. It warns of “automation bias,” where human oversight weakens in the face of AI suggestions. It identifies “hallucinations”— false citations or incorrect interpretations. It stresses that AI lacks the empathy, discretion, and moral courage required for true adjudication. And it makes it crystal clear: any attempt to delegate core judicial functions to machines would not only be improper— it would be judicial misconduct.

In other words, AI must be the clerk, not the conscience of the court.

The Court has now directed the National Judicial (Policy Making) Committee and the Law and Justice Commission of Pakistan to prepare comprehensive guidelines. This is a critical step. But it must not be treated as a purely administrative exercise. We need an inclusive, technically grounded, and constitutionally aware consultation process— one that includes judges, legal technologists, AI ethicists, bar associations, and civil society.

This is where we must look outward as well. The EU AI Act rightly classifies judicial AI systems as “high-risk.” It mandates human oversight, transparency, explainability, and fairness. It prohibits fully automated legal decisions and insists that no AI tool, regardless of its accuracy, can replace the judgment of a human being. Pakistan should draw clear inspiration from this regulatory model. Our legal tradition is no less sacred than Europe’s. If anything, our procedural vulnerabilities require even stronger safeguards.

And yet— this is also our opportunity to lead.

Unlike older jurisdictions burdened by legacy systems, Pakistan can leapfrog. We’ve already deployed Judge-GPT in the district courts. We’ve piloted local tools. We have a Constitution that explicitly promises speed and fairness. If we move decisively now— if we build a framework that empowers judges, protects rights, and integrates technology transparently— we can become a regional model for responsible legal AI adoption.

As a technology lawyer, I have already been in touch with experts across the UK, the US, and Saudi Arabia. There is global interest in how emerging democracies manage this moment. The world is watching to see whether countries like ours can innovate without compromising judicial independence.

Let us prove that we can.

Justice Shah’s words resonate deeply: “The courtroom is not a site for algorithmic governance but a space for reasoned, principled deliberation.” But let us add to that: the courtroom can be a site of principled innovation—if led with courage and care.

This is not a tech project. It is a constitutional one. And I humbly offer myself, alongside my team, my tools, and our lived experience, for the national effort that must now begin. The time for position papers is over. The time for policymaking is now.

Let this judgment not be an end, but a beginning.

Let it not be an exception, but a standard.

Let it not remain on paper— but take root in policy, in training, and in practice.

Adam Jabbar
The writer is an Advocate of the High Court. He specializes in Cyber and Technology Laws, advising various local and international tech firms, government ministries, and international organizations. He can be reached at [email protected]
