AI-generated child sexual abuse imagery is increasingly appearing on the open web and nearing a “tipping point,” according to the Internet Watch Foundation (IWF). The safety watchdog reported that the volume of AI-created illegal content found in the past six months has already surpassed the total seen in the previous year.
The IWF, which operates a UK hotline with a global scope, noted that nearly all of this content was discovered on publicly accessible parts of the internet, rather than on the dark web, which requires specialized browsers to access.
The IWF’s interim chief executive, Derek Ray-Hill, said the level of sophistication in the images indicated that the AI tools used had been trained on images and videos of real victims. “Recent months show that this problem is not going away and is in fact getting worse,” he said.
According to one IWF analyst, the situation with AI-generated content was reaching a “tipping point” where safety watchdogs and authorities did not know if an image involved a real child needing help.
The IWF took action against 74 reports of AI-generated child sexual abuse material (CSAM) – realistic enough to break UK law – in the six months to September this year, compared with 70 in the 12 months to March. A single report can refer to a webpage containing multiple images.
As well as AI images featuring real-life victims of abuse, the types of material seen by the IWF included “deepfake” videos in which adult pornography had been manipulated to resemble CSAM. In previous reports the IWF has said AI was being used to create images of celebrities who have been “de-aged” and then depicted as children in sexual abuse scenarios. Other examples included material created by using AI tools to “nudify” pictures of clothed children found online.
More than half of the AI-generated content flagged by the IWF over the past six months is hosted on servers in Russia and the US, with Japan and the Netherlands also hosting significant amounts. Addresses of the webpages containing the imagery are uploaded to an IWF list of URLs which is shared with the tech industry so they can be blocked and rendered inaccessible.
The IWF said eight out of 10 reports of illegal AI-made images came from members of the public who had found them on public sites such as forums or AI galleries.
Meanwhile, Instagram has announced new measures to counteract sextortion, where users are tricked into sending intimate images to criminals, typically posing as young women, and then subjected to blackmail threats.
The platform will roll out a feature that blurs nude images sent to users in direct messages (DMs), and urges caution before sending a DM containing a nude image. When a blurred image is received, the user can choose whether to view it, and will also be reminded that they can block the sender and report the chat to Instagram.
The feature will be turned on by default for teenagers’ accounts globally from this week and will work on encrypted messages, although images flagged by the “on device detection” feature will not automatically be reported to the platform or the authorities.
It will be an opt-in feature for adults. Instagram will also hide follower and following lists from potential sextortion scammers who are known to threaten to send intimate images to those accounts.