Unregulated Intelligence

Pakistan’s AI drift in a world racing ahead

In an age when artificial intelligence is no longer a concept of the future but an ever-present force shaping economies, education, governance, and even human perception, the absence of a finalised and enforceable AI policy in Pakistan is not just an administrative oversight— it is an existential misstep.

While AI is redefining the global architecture of power and progress, Pakistan lingers on the margins with a draft National AI Policy that has remained unratified since May 2023. This drift, in the face of rapid technological adoption, leaves critical sectors— education, healthcare, finance, judiciary, and even local governance— vulnerable to unregulated experimentation and unintended consequences.

The question is no longer whether we need an AI policy, but whether we are willing to let AI shape us in ways we no longer control. Finalising and operationalising a robust, enforceable, and holistic AI policy is not merely a technical requirement— it is a national imperative. The clock has been ticking for two years, and with each passing day, Pakistan edges closer to the margins of a rapidly advancing global AI ecosystem. In a world increasingly governed by algorithms, silence is not neutrality— it is surrender.

Artificial intelligence today drives national security doctrines, economic productivity, academic innovation, disaster prediction systems, and smart governance across the world. Nations like China have embedded AI deep into their urban planning, surveillance systems, and education reforms under an overarching regulatory framework that mandates ethical compliance, cybersecurity standards, and human oversight. Japan has introduced AI-integrated elderly care systems governed by strict data privacy laws and regular algorithmic audits. South Korea has institutionalised AI-driven governance through its Presidential Committee on the Fourth Industrial Revolution, ensuring public-private collaboration with legal guardrails. Even in the Middle East, the UAE has appointed a Minister of State for Artificial Intelligence and rolled out rigorously monitored AI strategies tied to national priorities.

In contrast, Pakistan’s AI engagements— though growing— are occurring largely in a vacuum. Various state departments, the judiciary, the State Bank, and even municipal offices have begun employing AI tools for decision-making, facial recognition, predictive justice, and content creation, yet without any legal requirement for transparency, bias checks, or public accountability. From AI-generated court summaries to biometric attendance in schools and algorithm-based welfare targeting, we are deploying technology whose foundations are built on data, but whose oversight is built on air.

The danger is not hypothetical. AI tools are already generating images and narratives in our newsrooms that reinforce stereotypes and distort realities— often without the human editors realising the depth of misrepresentation. In classrooms, adaptive learning tools may be tailoring student paths without understanding local linguistic, cultural, or socioeconomic nuances. Hospitals may soon employ diagnostic algorithms without guaranteeing that those systems have been locally validated. Worse still, local governments could begin using AI for public service planning, sanitation monitoring, or budget allocations— based on incomplete, biased, or outdated data— without any redress mechanism for the citizens affected.

A responsible AI policy cannot merely be about industry growth or economic opportunity. It must be an all-encompassing regulatory and ethical framework rooted in the realities of Pakistani society while aligned with international standards. Countries across the world are already structuring their AI governance on models that categorize AI applications based on risk. The European Union’s AI Act, now being implemented, places outright bans on certain dangerous applications such as social scoring and predictive policing, while mandating strict obligations on AI used in high-risk areas such as health, finance, and law enforcement. Pakistan must not only draw inspiration from such tiered frameworks but contextualise them to our institutional limitations and developmental priorities.

At the core of any policy must lie the principles of fairness, transparency, human oversight, explainability, and accountability. These cannot remain abstract values— they must be encoded into law and enforced through institutions. A statutory AI regulatory authority is urgently needed: one that licenses AI systems, audits their functioning, investigates harms, ensures algorithmic neutrality, and works across ministries and provincial governments. Just as we have authorities regulating food, telecom, or banking, the time has come for a dedicated AI regulatory commission with the power to scrutinise state and non-state AI deployments alike.

Importantly, the policy should not be restricted to federal jurisdictions. AI regulation must penetrate to the grassroots—district-level education boards, basic health units, tehsil-level planning bodies, and municipal administrations— so that any use of AI, from school assessments to water distribution, comes with checks, human review, and community engagement. If left unregulated, even a well-meaning AI tool at the union council level can deepen exclusion or make irreversible errors, especially in vulnerable or underserved populations.

Moreover, the framework should enforce public transparency. All government departments and allied institutions must be required to declare the use of AI, its purpose, the data it is trained on, and its limitations. Citizens should be informed when decisions affecting them are AI-assisted. Public registers, grievance redress mechanisms, and awareness campaigns—much like Japan’s AI literacy initiatives—can empower the population to interact responsibly with the technology shaping their lives.

What is also missing in Pakistan’s narrative is a commitment to periodic review and adaptability. The draft policy, even if finalised, cannot be a static document. It must come with clear timelines, enforceable milestones, mechanisms for updating in light of technological changes, and a feedback loop from civil society, academia, industry, and the public. South Korea’s AI policy, for example, undergoes annual review involving multiple stakeholders and is constantly recalibrated to keep pace with global shifts. This level of responsiveness is essential.

Of course, the road to effective AI governance also passes through capacity-building. Without training bureaucrats, judges, teachers, and journalists in AI literacy and ethical implications, even the best regulations will falter. The Presidential Initiative for Artificial Intelligence and Computing (PIAIC) was a promising start, but it needs deepening, decentralisation, and alignment with regulatory priorities. AI literacy cannot remain a privilege of elite coding academies in urban centres— it must reach public servants, local officials, and ordinary citizens across the country.

In essence, Pakistan cannot afford to remain a passive recipient of AI-driven change. As the world races toward integrating AI with governance and public life, we are risking a future where unregulated systems will act without accountability, where errors will go unchallenged, and where social injustice may be reinforced in digital disguise. It is not the pace of technology but the paralysis of policy that is our real threat.

For Pakistan, the cost of delay is no longer theoretical— it is dangerously real. As AI applications proliferate, often quietly embedded in government processes, local services, and everyday decisions, unchecked deployment threatens to deepen inequality, erode public trust, and create opaque systems that cannot be audited, corrected, or held accountable.

If finalising our National AI Policy were a sporting event, we would not merely be running behind— we would still be at the starting blocks. What’s urgently needed is to build the track, define the lanes, lay down the rules, appoint referees, and set the finish line— with strict penalties for false starts. Only then can Pakistan hope to harness AI intelligently, ethically, and fairly— across every level of society and for every citizen it intends to serve.

Majid Nabi Burfat
The writer is a freelance columnist
