The year 2026 started on an alarming note regarding the use of artificial intelligence. One of the most widely discussed cases is the investigation by Britain’s media regulator into Elon Musk’s X over concerns that its Grok AI chatbot was creating sexually intimate deepfake images, in violation of its duty to protect people in the UK from illegal content. This problem is not limited to one country. The weaponisation of women’s photos is a major issue in Pakistan. In 2021, a district court in Karachi sentenced a professor to eight years in prison and imposed a fine of over Rs1 million for harassing a female teacher on the internet. The professor was found guilty of impersonating a female colleague and committing an “offence against modesty of a natural person and minor” in October 2016. Against this backdrop, fears that AI could be used against women are valid. Last year, Pakistan launched a national AI policy. But experts say that, while Pakistan’s National AI Policy 2025 prioritises innovation and growth, it lacks enforceable safeguards to protect rights, allowing monitoring, censorship, algorithmic bias and technology-facilitated harms to flourish unchecked.
The study ‘Emerging Technologies in Pakistan: Towards a People-Centred Policy Framework’, conducted by the Digital Rights Foundation (DRF) and released a few days ago, highlights the growing gap between rapid technological adoption and human rights protections in the country. Many internet users may have noticed how major search engines now display AI-generated summaries whenever a term is searched. Such summaries often contain inaccurate information, but since they appear at the top of the results, many users are likely to take them as fact. This may create an environment in which half-truths are promoted. Last year, a high-profile case involved the Australian government asking for a partial refund of the fee paid to an accountancy firm for a report on governance, because most of the references cited in the report were incorrect. At present, when AI adoption is still in its early stages, such mistakes can be pointed out. The main question is: what will happen when AI is everywhere, when our children are trained on information generated by AI? In that case, separating truth from fiction would be impossible.
In a roundtable held recently to discuss the DRF’s study, experts agreed that Pakistan must move beyond techno-solutionism towards human rights-centred, participatory governance rooted in Global South realities. Immediate priorities identified include mandatory human rights impact assessments for AI deployments, passage of long-delayed data protection legislation, transparent content moderation frameworks, participatory AI oversight bodies, protections for journalists and workers, and environmental accountability for AI infrastructure. Pakistan has to take major steps to introduce safeguards and guardrails for chatbots, LLMs and all AI-enabled software to protect people’s digital rights and safety.
Editorial published in The News on January 19, 2026.