Social media platforms are facing renewed pressure to limit online harassment as Ofcom, the UK’s communications regulator, rolls out new guidance aimed at protecting women and girls from digital abuse, coercive behaviour, and the unauthorised circulation of intimate images.
The guidelines, which take effect on Tuesday under the Online Safety Act (OSA), include a range of measures intended to reduce “pile-ons” – a phenomenon where a user becomes the target of mass abusive replies or harassment, often escalating into reputational harm or psychological trauma.
A Landmark Push to Regulate Online Harm
Though non-binding, the new recommendations signal Ofcom’s firm expectations of tech firms. The regulator has warned that failure to comply could lead to the strengthening of the OSA, making some of the currently “voluntary” measures enforceable by law.
Ofcom has committed to monitoring compliance and publishing a progress report in 2027, assessing the extent to which major social platforms comply with its recommendations and act against harassment and misogynistic abuse.
“If their action falls short, we will consider making formal recommendations to government on where the Online Safety Act may need to be strengthened,” Ofcom stated.
Stopping Pile-Ons and Preventing Sexual Exploitation
One of the major proposals would see platforms such as X (formerly Twitter) cap the number of replies a single user can receive within a defined window. The idea is to prevent mass dogpiling and coordinated harassment campaigns, a tactic commonly wielded against female journalists, activists, and public figures.
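The guidance does not prescribe how such a cap would be implemented; one plausible approach is a sliding-window counter per targeted account. The sketch below is purely illustrative: the class name, limits, and window size are assumptions, not anything Ofcom or X has specified.

```python
# Hypothetical sketch of a per-user reply cap using a sliding time
# window. All names and limits here are illustrative assumptions.
from collections import deque
import time


class ReplyCap:
    def __init__(self, max_replies=100, window_seconds=3600):
        self.max_replies = max_replies
        self.window = window_seconds
        self.received = {}  # target user id -> deque of reply timestamps

    def allow_reply(self, target_id, now=None):
        """Return True if the target is still under the cap; otherwise
        reject the reply (a pile-on mitigation, not a ban)."""
        now = time.monotonic() if now is None else now
        q = self.received.setdefault(target_id, deque())
        while q and now - q[0] > self.window:
            q.popleft()  # drop replies that fell outside the window
        if len(q) >= self.max_replies:
            return False
        q.append(now)
        return True


cap = ReplyCap(max_replies=3, window_seconds=60)
print([cap.allow_reply("journalist", now=t) for t in (0, 1, 2, 3)])
# → [True, True, True, False]  (fourth reply exceeds the cap)
```

Because old timestamps expire out of the window, the cap throttles sudden pile-ons without permanently silencing anyone.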
A particularly significant recommendation focuses on protecting victims of non-consensual imagery — including explicit deepfakes and revenge-porn scenarios.
Ofcom advocates for widespread adoption of hash-matching, a system that converts illicit images into digital fingerprints stored in a cross-platform database. Once an image's fingerprint is in the database, copies can be automatically recognised and removed, even if cropped, filtered, or reposted.
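The key idea is that the fingerprint is perceptual rather than exact, so a lightly edited copy still matches. The toy sketch below uses a simplified "average hash" on a made-up 4x4 image; production systems such as PhotoDNA use far more robust hashes, and everything here is a minimal illustration, not a real implementation.

```python
# Toy sketch of hash-matching with a simplified perceptual hash.
# Real systems use much stronger, rotation/crop-resistant hashes.

def average_hash(pixels):
    """One bit per pixel: is it brighter than the image's mean?"""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Count of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def is_match(h1, h2, threshold=2):
    """Hashes within a small Hamming distance are treated as the same
    image, so light edits (filters, re-encoding) still match."""
    return hamming(h1, h2) <= threshold

# A 4x4 grayscale "image" and a lightly altered copy.
original = [[200, 200, 10, 10],
            [200, 200, 10, 10],
            [10, 10, 200, 200],
            [10, 10, 200, 200]]
edited = [row[:] for row in original]
edited[0][0] = 180  # simulate a filter tweaking one pixel's brightness

database = {average_hash(original)}  # the cross-platform fingerprint store
print(any(is_match(average_hash(edited), h) for h in database))  # → True
```

Because only fingerprints are stored and compared, platforms can block known images without re-hosting the abusive content itself.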
This is especially critical amid the rising use of generative AI to fabricate explicit content targeting women and girls.
Tech Industry Culture Under Review
Dame Melanie Dawes, Ofcom’s chief executive, emphasised that years of testimony and victim statements have exposed the pervasive nature of gender-based digital abuse.
“We are sending a clear message to tech firms to step up and act in line with our practical industry guidance, to protect their female users against the very real online risks they face today,” Dawes declared. She added that companies will be held accountable to a “new standard” for female safety online.
Other tools recommended by the regulator include:
- AI-driven prompts reminding users to reconsider before posting hostile content
- Temporary account restrictions or “time-outs” for repeat violators
- Blocking abusers from earning advertising revenue on misogynistic or harmful content
- Faster tools for victims to mute or block multiple accounts simultaneously
Critics Say Voluntary Measures Are Not Enough
Internet Matters, a UK nonprofit focused on children’s digital safety, cautioned that many platforms are unlikely to adopt these protections without legal compulsion.
Rachel Huggins, co-chief executive of the group, said: “We know that many companies will not adopt the guidance simply because it is not statutory, meaning the unacceptable levels of online harm which women and girls face today will remain high.”
Advocates argue that a voluntary guidance model leaves too much discretion in the hands of global corporations that often lack incentives to restrict viral engagement — even when harmful.
What Comes Next
Ofcom is currently consulting on whether to make hash-matching compulsory. Meanwhile, a broader review of OSA enforcement mechanisms may follow if compliance remains weak.
The next two years will be pivotal: regulators, civil society groups, and tech giants must determine whether the UK’s approach will become a pioneering model for online safety — or another well-intentioned but unenforceable proposal.
For now, pressure mounts on social media platforms to prioritise user protection over engagement metrics — a shift with potential global ramifications for digital rights, platform responsibility, and freedom from misogynistic abuse in the online public sphere.