Instagram has been scrutinised for failing to remove accounts found to be involved in child exploitation.
The Meta-owned photo and video sharing network claims to have a zero-tolerance approach, but it has been found that even after accounts were flagged as inappropriate and reported by other users via the in-app reporting tool, the automated moderation technology failed to permanently block the pages.
One account hosting explicit images of children in swimwear, which was reported by multiple users, was overlooked “due to the high volume” of reports, with the automated response stating that “technology has found that this account probably doesn’t go against our community guidelines”.
Instead, a user who made the report was advised to “block or unfollow” the account, which had attracted a following of more than 33,000.
The problem also extends to other social networking platforms such as Twitter, where multiple ‘tribute pages’ were found to remain live, rather than being taken down, because they did not meet the criminal threshold for removal.
Andy Burrows, head of online safety policy at the NSPCC, characterised the accounts as “a shop window for paedophiles.”
He said in a statement: “Companies should be proactively identifying this content and then removing it themselves,
“But even when it is reported to them, they are judging that it’s not a threat to children and should remain on the site.”