Most Pakistani professionals rely on AI but lack proper cybersecurity training

Nearly nine out of ten professionals in Pakistan are now using artificial intelligence tools in their daily work, yet many remain unaware of the cybersecurity risks involved, according to a new report by Kaspersky titled “Cybersecurity in the Workplace: Employee Knowledge and Behavior.”

The study found that 86% of professionals across Pakistan regularly depend on AI-powered tools for various tasks, but only 52% have received formal training on safe and responsible AI use. This training gap leaves many exposed to threats such as data leaks, malicious prompts, and misuse of neural networks.

Although 98% of respondents said they understand what “generative AI” means, their engagement goes well beyond theory: around 68% use AI to write or edit content, 56.5% to produce images or videos, 52% to draft emails, and 35% to analyze data.

Despite this growing reliance, training has not kept pace. One in five professionals said they have received no AI-related instruction at all. Of those who have, two-thirds were trained mainly on how to use AI effectively, while only about half were educated on cybersecurity concerns related to AI.

The report also found that AI is becoming increasingly accepted in workplaces. About 81% of employees said their organizations officially permit generative AI tools, 15% said such tools are banned, and 4% were unsure of company policy. Even so, many continue to use AI tools without clear oversight — unsanctioned usage that Kaspersky refers to as “shadow IT.”

To mitigate these risks, the report urges organizations to adopt clear AI usage policies that define acceptable practices, outline approved tools, and limit sensitive data processing through AI systems.

Rashed Al Momani, Kaspersky’s General Manager for the Middle East, emphasized that neither full bans nor unrestricted access to AI are effective strategies. He recommended a balanced approach that tailors access levels according to the sensitivity of data handled by different departments.

Kaspersky advises that companies not only train employees in responsible AI use but also equip IT teams with specialized knowledge on AI vulnerabilities and defense mechanisms. The firm’s Automated Security Awareness Platform and Large Language Models Security course aim to strengthen these capabilities.

The company further recommends that all employee devices be protected with comprehensive security tools, such as the Kaspersky Next suite, and that organizations implement structured AI-use policies to ensure safe and efficient integration across all operations.
