There is no denying that the Internet is flooded with hateful and abusive content, and social media in particular is treated by some as a platform for abuse. Instagram has come up with a new feature that will warn you if your caption is abusive, and the best part is that it warns you before you even post the caption.
Instagram announced that it has started rolling out a new tool that informs users when their caption might be considered offensive. The aim is to help users rethink what they are posting or, in some cases, reframe the caption so that it is not abusive.
The feature is powered by artificial intelligence working behind the scenes. It flags “potentially offensive” words and automatically gives users an option to “edit caption.” Users can also select “learn more” or override the warning by choosing “share anyway.”
In its announcement, Instagram said: “As part of our long-term commitment to lead the fight against online bullying, we’ve developed and tested AI that can recognize different forms of bullying on Instagram. Earlier this year, we launched a feature that notifies people when their comments may be considered offensive before they’re posted. Results have been promising, and we’ve found that these types of nudges can encourage people to reconsider their words when given a chance.”
From now on, whenever users type a caption for a post, the AI scans it and, if it detects potentially offensive language, prompts them to edit the caption. At the outset, the new feature is expected to educate users about cyberbullying and reduce the risk of users being blocked. In all likelihood, it will also ease Instagram’s burden when it comes to screening for abusive captions.
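Instagram has not published how its model works, but the flag-then-offer-options flow described above can be illustrated with a minimal sketch. This is purely a toy stand-in: a hypothetical word-list check takes the place of the real AI, and the example terms are invented for illustration.

```python
# Toy sketch of a pre-post caption check. A simple word list stands in
# for Instagram's unpublished AI model; the terms below are hypothetical.
OFFENSIVE_TERMS = {"stupid", "ugly", "loser"}

def check_caption(caption: str) -> dict:
    """Flag potentially offensive words and return the user's options."""
    # Normalize each word: strip trailing punctuation, lowercase.
    words = {w.strip(".,!?").lower() for w in caption.split()}
    flagged = sorted(words & OFFENSIVE_TERMS)
    if flagged:
        # Mirror the choices described in the article.
        return {
            "flagged": flagged,
            "options": ["edit caption", "learn more", "share anyway"],
        }
    return {"flagged": [], "options": ["share"]}

print(check_caption("You are such a loser!"))
```

In the real feature, the check presumably runs as the user types, and “share anyway” lets them post the original caption unchanged.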