Bogus case law: The unexpected consequence of AI in law

As with any technology, AI needs care in its use

The unexpected has become the expected in the realm of artificial intelligence (AI). We’ve been amazed by its innovations, from self-driving cars to voice assistants, but the recent controversy surrounding the AI model ChatGPT has presented an unexpected twist: bogus case law. This episode, while unsettling, provides an opportunity to reflect on the ethical implications and responsibilities of harnessing AI in the legal sector.

Recently, a New York lawyer, Steven Schwartz, submitted a court brief that cited six non-existent judicial decisions. These fictitious cases were not the product of intentional deceit but the output of ChatGPT, an AI language model developed by OpenAI. Schwartz had relied on the model to draft his brief, likely captivated by its ability to produce human-like responses and seemingly accurate legal content.

The incident has sent ripples across the legal community, prompting discussions about the intersection of AI and legal ethics. It’s a timely discourse, considering the accelerating adoption of AI in various sectors, including law. While AI has shown promise in streamlining legal research and document analysis, its use must be tempered with caution, and the Schwartz case serves as a stark reminder of this reality.

The American Bar Association’s (ABA) Model Rules of Professional Conduct do not explicitly address AI use. Yet, several existing ethics rules are pertinent. Lawyers are responsible for the representations they make in their practice; it’s their license at stake. This responsibility extends to three key areas when using AI: competence, confidentiality, and nonlawyer assistance.

First, the duty of competence requires lawyers to provide competent representation and stay updated on current technology. They must ensure that the technology they use provides accurate information, a concern raised by the ChatGPT incident. Over-reliance on AI tools might introduce errors and potentially misleading information, as seen in Schwartz’s case.

Second, the duty of confidentiality requires lawyers to protect their clients’ information. By using AI programs such as ChatGPT, lawyers might inadvertently provide AI companies with data that could be used to train and improve their models, potentially breaching confidentiality rules.

Lastly, the rule concerning nonlawyer assistance requires lawyers to supervise those who aid them, including AI programs, to ensure their conduct aligns with professional conduct rules. Lawyers must understand the technology well enough to guarantee it meets the ethical standards they are obligated to uphold.

This incident underlines the need for guidelines on AI use within the legal profession. As AI continues to evolve and become more integrated into our lives and professions, it’s crucial to understand and manage the risks associated with its use. It’s not enough to marvel at AI’s ability to streamline tasks; we must also be vigilant and responsible users, aware of its limitations and potential for error.

The Schwartz case is a wake-up call for the legal profession, a call to think critically about AI’s role in law and the safeguards necessary to ensure its ethical use. AI, for all its potential, is a tool, and like all tools, its use needs guidance and oversight. It’s time to consider not only how AI can benefit law but also how it can be ethically integrated into legal practice.

In the meantime, the incident serves as a cautionary tale for lawyers tempted to rely too heavily on AI. As we navigate the new frontier of AI and law, we must remember that while AI can aid us, it doesn’t replace the need for human oversight, integrity, and professional responsibility. The power of AI in law lies not in its ability to replace humans but to augment our capabilities, while the human element remains, as ever, irreplaceable.

In the wake of this incident, attention has turned to Pakistan, where a judge recently used ChatGPT in a trailblazing legal decision. The judge posed legal questions to the AI tool concerning a juvenile criminal case and noted the technology’s potential to assist the justice system in passing efficient and intelligent judicial orders and judgments in accordance with the law. However, he also emphasized that the decision to grant pre-arrest bail was based not on the answers provided by ChatGPT, but on the human judicial mind.

The Pakistani judge’s experiment underscores the need for a nuanced understanding of AI’s role in the judicial system. It demonstrates the potential value of AI as a tool to aid human decision-making and ease the burden on the judicial mind by providing relevant and reliable answers, while also highlighting the importance of human oversight and the fact that ultimate responsibility rests with legal professionals themselves.

Lawyers and judges in Pakistan, and indeed worldwide, must tread carefully. They can harness the potential of AI in their work but should be aware of its limitations and the potential for errors, as the Schwartz case illustrates. AI is a tool, not a replacement for human legal expertise, and its use must be properly supervised to prevent unintended consequences such as the citation of non-existent case law.

The use of AI in law is still a relatively new frontier, with potential for both great innovation and serious missteps. This is a moment of learning and adaptation for the legal profession, a time to strike a balance between leveraging the capabilities of AI and maintaining the high standards of ethical conduct and professionalism that the law demands.

Adam Jabbar
The writer is an Advocate of the High Court. He specializes in Cyber and Technology Laws, advising various local and international tech firms, government ministries, and international organizations. He can be reached at [email protected]
