Instagram has rolled out two new features designed to curb bullying: one that prompts people to think twice before posting an offensive comment, and another that lets users restrict whose comments appear under their photos.

The hugely popular social network, which is owned by Facebook, has been under pressure to address bullying, particularly among its teenage users.

Adam Mosseri, the head of Instagram, said the new tool would notify people if they tried to post offensive remarks under someone else’s photos. A pop-up box would also note that Instagram wants to be a “supportive” place.

“This intervention gives people a chance to reflect and undo their comment and prevents the recipient from receiving the harmful comment notification,” he said.

Mr Mosseri added that early tests suggested the feature encouraged people to take back their comment and share something “less hurtful” upon reflection.

In a 2018 survey conducted by the Pew Research Center, a non-profit based in Washington DC, 59 per cent of teenagers were found to have experienced bullying online. The survey said more than one in five 12-to-20-year-olds had experienced bullying specifically on Instagram.

Meanwhile, the Restrict feature allows users to silence anyone abusive: comments from a restricted person are visible only to that person, unless the account holder approves them.

Restricted users will still be able to see their target’s posts, but they will not be able to see whether that person is online. Messages sent by a restricted user will also be relegated to a separate spam inbox.

Mr Mosseri said: “It’s our responsibility to create a safe environment on Instagram. This has been an important priority for us for some time, and we are continuing to invest in better understanding and tackling this problem.”


He added that the company would share “more updates soon”.

On Tuesday, Twitter also announced it would be updating its rules against “hateful conduct” on its platform.

The company said it was clamping down on language that “dehumanises others on the basis of religion”, after consulting members of the public, external experts and its own teams.

In a statement, the company said: “We create our rules to keep people safe on Twitter, and they continuously evolve to reflect the realities of the world we operate in. Our primary focus is addressing the risks of offline harm, and research shows that dehumanising language increases that risk.”

Earlier this year, the parents of Molly Russell, a British teenager, said she had viewed images of self-harm on Instagram before taking her own life. The company has since banned such images.



Via Financial Times