Why Pakistan needs AI laws in the digital age

AI is still unregulated

Artificial intelligence has moved beyond the realm of science fiction. It is already reshaping newsrooms, political campaigns, and the advertising industry around the world. AI-generated anchors, deepfake videos, and algorithmic targeting are spreading fast, yet many countries still rely on outdated laws drafted long before machine learning entered public life. The gap between regulation and reality is widening, and with it, the risks to journalism, democracy, and public trust.

Right now, Pakistan manages digital harms through old statutes. The Prevention of Electronic Crimes Act (PECA) 2016 criminalises offences like online harassment, defamation, and data theft. While useful against individual wrongdoers, PECA does little to address systemic threats posed by AI, such as disinformation campaigns or synthetic media designed to sway elections.

Meanwhile, traditional broadcast media remains under the Pakistan Electronic Media Regulatory Authority (PEMRA), created by its 2002 ordinance. PEMRA enforces licensing, decency standards, and penalties for TV and radio. But its rules were designed for an analog world. They cannot cover synthetic newsreaders, AI voiceovers, or algorithmically generated stories that bypass traditional broadcasters altogether. The result is uneven regulation: licensed channels may face scrutiny, while viral AI content circulates unchecked.

The Pakistan Telecommunication Authority (PTA) wields blocking powers to take down unlawful content and suspend platforms. But AI presents new challenges. Deepfakes can be recreated in seconds; blocking one video does not stop another from appearing. More importantly, current rules do not demand transparency from the algorithms that shape what millions see on their feeds. In today’s media environment, the hidden ranking and recommendation systems often matter more than the content itself, yet they remain unregulated.

Other jurisdictions are moving faster. UNESCO has urged member states to ensure AI systems are transparent, accountable, and subject to human oversight. The European Union's AI Act goes further, banning the most dangerous AI uses, regulating high-risk systems such as those capable of election manipulation, and requiring labels for AI-generated content. Pakistan does not need to replicate these frameworks word for word, but it does need to adapt their principles. Risk-based regulation, disclosure obligations, and accountability for developers and deployers are crucial if the country is to keep pace.

The opportunity is clear. With balanced, transparent, and forward-looking laws, Pakistan can harness AI to strengthen media and PR industries while protecting democratic values. Without them, the digital public sphere risks being overwhelmed by synthetic content and opaque algorithms that leave citizens guessing what is real.

What might this look like in Pakistan? Three steps stand out. First, mandatory disclosure: any AI-generated media, whether a political ad, a campaign speech, or a news clip, should be clearly labelled. Citizens deserve to know whether they are engaging with human or machine-made content.

Second, independent audits. Organisations that use AI to influence public opinion, such as political parties, PR firms, or even major news outlets, should be subject to external reviews of their algorithms and targeting practices. This would discourage manipulative techniques and promote ethical standards.

Third, stronger oversight. A dedicated AI media regulator, or at least a specialised unit within existing agencies, should be empowered to investigate amplification systems, demand transparency reports, and enforce corrective action. Relying only on PECA prosecutions or PTA takedowns is not enough in the AI era.

But regulation must also protect freedoms. Overly broad enforcement risks turning AI laws into another tool for censorship. Safeguards are essential: clear thresholds for violations, independent appeals, and public reporting of enforcement actions. Industry self-regulation could complement state action, with certification schemes for ethical AI tools and standards for labelling political ads or synthetic content.

At the same time, citizens must be empowered. Media literacy campaigns should teach people how to identify AI-generated content, understand targeting techniques, and question the authenticity of what they see. Public awareness is the best defence against manipulation.

Pakistan’s National AI Policy, approved in 2025, provides a starting point. But policies are only aspirations until they are translated into enforceable law. Without concrete legislation, the risks will multiply: fake videos could distort elections, automated propaganda could deepen polarisation, and trust in journalism could erode further.

AI is here to stay. The real question is whether Pakistan will shape the rules in time or allow its media landscape to be reshaped without oversight.

Qudrat Ullah
The writer can be contacted at: [email protected].
