• Unicef says children’s photos are being manipulated and sexualised through AI tools
• In some countries, as many as one in 25 children reported having their images turned into sexual deepfakes
ISLAMABAD: Unicef has said it is increasingly alarmed by reports of a rapid rise in the volume of AI-generated sexualised images circulating online, including cases in which photographs of children have been manipulated and sexualised. The organisation has urged governments and industry to prevent the creation and spread of AI-generated sexual content involving children.
“The harm from deepfake abuse is real and urgent. Children cannot wait for the law to catch up,” Unicef said in a statement released by the UN Information Centre in Islamabad on Thursday.
“Deepfakes – images, videos, or audio generated or manipulated using Artificial Intelligence (AI) and designed to look real – are increasingly being used to produce sexualised content involving children, including through ‘nudification’, where AI tools are used to strip or alter clothing in photos to create fabricated nude or sexualised images.”
Unicef said this unprecedented situation poses new challenges for prevention and education, legal frameworks, and response and support services. Current prevention efforts, which often focus on teaching children about online safety and the risks of creating or sharing sexual images, remain important but are insufficient when sexual content can be artificially generated.
The statement said the growing prevalence of AI-powered image and video generation tools capable of producing child sexual abuse material marks a significant escalation in the risks digital technologies pose to children.
Recent large-scale research conducted by Unicef, ECPAT and Interpol under the Disrupting Harm project showed that across 11 countries, at least 1.2 million children reported having had their images manipulated into sexually explicit deepfakes through AI tools in the past year.
“Children themselves are deeply aware of this risk. In some of the study countries, up to two-thirds of children said they worry that AI could be used to create fake sexual images or videos. Levels of concern vary widely between countries, underscoring the urgent need for stronger awareness, prevention and protection measures,” the statement said.
“We must be clear. Sexualised images of children generated or manipulated using AI tools are child sexual abuse material (CSAM). Deepfake abuse is abuse, and there is nothing fake about the harm it causes.
“When a child’s image or identity is used, that child is directly victimised. Even without an identifiable victim, AI-generated child sexual abuse material normalises the sexual exploitation of children, fuels demand for abusive content and presents significant challenges for law enforcement in identifying and protecting children who need help.”
Unicef welcomed the efforts of AI developers who are implementing safety-by-design approaches and robust guardrails to prevent misuse of their systems.
Published in Dawn, February 6th, 2026.