Abuse of social media platforms by terrorists

A new dimension of cyberspace

Social media has opened up many new avenues for communication, entertainment and knowledge. Unfortunately, it has also given radicals the means to target, connect with and communicate with individuals who are vulnerable to radicalization. Exposure to extremist content and online groups can lead them down a very dangerous path.

Terrorist groups exploit social media platforms for propaganda, recruitment, coordination, fundraising and the execution of their terror activities, largely because of the platforms' wide accessibility, low cost and instant reach. Social media also enables them to communicate directly with audiences in real time, unfiltered by mainstream media channels. To radicalize individuals, terrorists use numerous platforms, including online journals, Facebook, Twitter, YouTube, video games, Telegram and many others. Both Al-Qaeda and the Islamic State publish online journals, available worldwide, to train, educate and radicalize individuals.

“Social media is not about exploitation but service to the community.” – Simon Mainwaring

A study by Gabriel Weimann (University of Haifa) estimates that nearly 90 percent of organized online terrorism occurs via social media platforms. ISIS alone deployed approximately 46,000 Twitter accounts between September and December 2014, many of them duplicates created after suspensions. The Islamic State has maintained a “media insurgency” featuring slick video content, figures like “Jihadi John” and highly produced propaganda aimed at both global and regional audiences.

Automated content moderation can help flag hate speech, violent images or radical messaging before it spreads. It is vital to track suspicious accounts through fake profiles, unusual communication patterns or coded language. Joint task forces are needed to set up cyber units focused on dismantling terror networks online, and people must be trained to recognize fake news, extremist propaganda and recruitment attempts. It is equally important to prevent terrorists from using encrypted channels to plan attacks, and to assign unique IDs to online content so that reposting of banned materials can be tracked and blocked. Terrorism is cross-border; countries should share intelligence on the online activities of terrorist groups, and universal laws are needed to tackle terrorism in digital spaces.
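The idea of assigning unique IDs to content so that reposts of banned material can be blocked can be sketched, in simplified form, as hash matching. The sketch below is illustrative only: the class and function names are hypothetical, and real platforms use shared hash databases and perceptual hashing rather than this minimal exact-match approach.

```python
import hashlib

def content_id(data: bytes) -> str:
    """Derive a stable unique ID for a piece of content from its bytes."""
    return hashlib.sha256(data).hexdigest()

class BannedContentRegistry:
    """Hypothetical registry of banned-content IDs (illustrative only)."""

    def __init__(self) -> None:
        self._banned: set[str] = set()

    def ban(self, data: bytes) -> str:
        """Record a piece of content as banned and return its ID."""
        cid = content_id(data)
        self._banned.add(cid)
        return cid

    def is_banned(self, data: bytes) -> bool:
        # Exact-match check: a verbatim repost is caught, but even a
        # one-byte edit changes the hash. Production systems therefore
        # rely on perceptual hashes that tolerate small alterations.
        return content_id(data) in self._banned

registry = BannedContentRegistry()
registry.ban(b"<bytes of a known propaganda video>")

print(registry.is_banned(b"<bytes of a known propaganda video>"))  # True
print(registry.is_banned(b"<slightly edited copy>"))               # False
```

The gap between the two checks above is exactly why exact hashing alone is insufficient: a trivially modified re-upload evades it, which is what perceptual-hash sharing between platforms is meant to address.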

Algorithmic design and artificial intelligence have accelerated the delivery of extremist content. Platforms like TikTok are implicated in algorithm-driven radicalization. Austrian authorities disrupted a jihadist-inspired attack plot in which the youngest suspect, just 14 years old, had been influenced by TikTok content.

The content recommendation model – TikTok's “For You” page – can lead users deeper into extremist content via echo chambers and rabbit holes. Europol has noted terrorist organizations using AI to tailor content to youth, enhancing manipulation and engagement. The proliferation of AI-generated propaganda, including fake text, images and videos, has begun to serve extremists' strategic goals.

Radicalization of youth has dramatically increased across regions. In the UK, 13 percent of terrorism investigations involve individuals under 18, a threefold increase over three years, and 42 percent of the 219 terrorism-related arrests in 2024 involved individuals under 18. Globally, one in five terror suspects in Western countries is now a minor.

In Pakistan, extremist groups like the Tehreek-e-Taliban Pakistan (TTP) and the Balochistan Liberation Army (BLA) utilize localized propaganda and online recruitment. The TTP recruits via emotional videos, local-language content, job postings for video editors and private Telegram groups. Platforms such as Facebook and TikTok have removed 90 percent of flagged content, whereas X and WhatsApp complied only 30 percent of the time. Such platforms also host coordinated misinformation campaigns, including over 1,200 anti-CPEC TikTok videos in 2024, reaching millions and aimed at swaying public sentiment against China-backed development.

Pakistan had about 71 million social media users as of January 2024, making it a high-impact environment for extremist content. Even modest estimates suggest that extremist content could influence over 13,000 individuals per year. Another alarming development is encrypted messaging and hidden Dark Web operations. Neo-Nazi “Terrorgram” channels on Telegram have grown, some with over 16,000 followers, and multiple Terrorgram-linked plots and prosecutions were recorded globally in early 2025. The Dark Web remains a hub for anonymous extremist activity: propaganda, “how-to” guides and even anonymous funding through cryptocurrencies.

In the UK, the Online Safety Act (in effect since March 2025) aimed to crack down on terrorist content, yet over a thousand extremist-linked accounts remain active across major platforms, raising serious doubts about enforcement effectiveness. Australia's eSafety Commissioner has summoned tech firms to explain their failure to curb violent content, highlighting the ongoing legal struggle.

Terrorist groups are also making strategic shifts. Al-Qaeda, weakened on the ground, now focuses on social media platforms for recruitment, targeting needy and disaffected youth through ideological narratives and financial promises. International groups regularly infiltrate local spaces, leveraging translated messaging in Urdu and regional narratives to broaden their reach.

Specialized media units like Umar Media serve as the “Ministry of Information” for the Pakistani Taliban, producing multi-language propaganda in video, radio and podcast form.

Terrorists have also used PUBG for covert coordination. In Swat, Khyber Pakhtunkhwa, Pakistani authorities discovered that members of the TTP used PUBG's chat features to coordinate an attack on a police station. The in-game chat allowed them to evade traditional electronic surveillance, and law enforcement later retrieved PUBG-installed mobile devices and chat logs during the investigation. Extremist groups are increasingly turning to gaming environments to radicalize or recruit youth; platforms that support chat, live streams or mods are especially vulnerable.

Saving social media from terrorist groups requires a multi-layered approach that combines technology, laws and public awareness. The evidence highlights a critical truth that social media is a central battlefield in modern terrorism.

Terrorist entities exploit low barriers to entry, algorithmic amplification and emerging technologies to radicalize, recruit and mobilize. Countering such activity requires robust AI-powered moderation that can detect nuanced extremist content, especially in local languages and symbols.

Stronger regulation and enforcement of online safety laws are needed, with transparent accountability and swift action against extremist content. Coordinated international responses to encrypted and underground platforms are essential to dismantle transnational extremist networks.

Zoha Aziz
The writer is a freelance columnist