Facebook is introducing new tools to tackle online bullying, the company announced this morning. Specifically, it’s rolling out a way for people to hide or delete multiple comments at once from the options menu of a post, and is beginning to test new ways to more easily search and block offensive words from showing up in comments. It’s also rolling out a new way to report bullying on behalf of others and is offering the opportunity to appeal decisions related to bullying and harassment.
There’s an old internet adage, “don’t feed the trolls.”
It means you shouldn’t engage with disruptive commenters – those who are making ridiculous, rude, and offensive statements designed to anger or attack others. But on Facebook, no one seems to heed this rule. As soon as a troll’s comment appears, it quickly becomes the top comment on the post as everyone piles on to tell them how wrong and rude they are.
While people’s inability to ignore trolls is part of the problem here, an even larger factor is that Facebook built a massive social network of now over 2 billion users before taking the time to thoughtfully consider what sorts of tools were needed to address online abuse.
While it may feel like bullying and harassment have increased on Facebook thanks to the current political climate – they’ve always been there…and at scale. The issue is especially troubling given the ramifications that continued bullying and abuse can have on users – it can be a factor in troubled teens’ self-harm and suicide, for example. (And Facebook users can be as young as 13, per its official rules.)
At its worst, online hate speech spreading across Facebook has contributed to genocide.
It’s irresponsible, at this point, for large-scale social platforms not to make anti-abuse tools one of their biggest priorities.
The first of Facebook’s new additions will allow users to better control how people interact with their posts. They’ll now have a way to hide or delete multiple comments at once from the options menu of the post. Currently, you can remove a single comment at a time – which makes it difficult to moderate posts that have a lot of activity, or where one user has continued to post abusive remarks throughout a number of conversation threads.
This feature is rolling out now on desktop and Android and will hit iOS in “the coming months.”
An upcoming feature will also introduce the ability to search and block offensive words, Facebook says.
Instagram has offered offensive word-blocking tools for years, and has implemented A.I. to detect and hide harassing comments. That Facebook is only now getting around to improving its own tools on this front is notable.
Facebook didn’t say when offensive language blocking tools would be made public.
Another new tool will allow friends and family to report bullying and harassment on behalf of victims via the menu above the post in question. Facebook’s Community Operations team will review the post, keep the report anonymous, and determine whether it violates the company’s Community Standards, it says.
Facebook says it added this option because some people don’t feel comfortable reporting a bully or harasser themselves.
On the flip side, Facebook will allow those dinged for being abusive to request an appeal.
Earlier this year, Facebook said it was rolling out a new process that allowed people to request a second review of a photo, video or post that had been removed for violating the Community Standards. Now this appeals process is expanding to bullying and harassment violations. This could help combat bad-faith reports, where people flag items as abusive that are really just unpopular statements – not ones that would constitute online harassment.
The appeals process can also be used to request Facebook take a second look at content that it declined to take down the first time.
Another notable change is that Facebook is ramping up its protection for public figures.
Online, including on Facebook, anyone who gains a following has been subject to a slightly different set of rules than private individuals. To some extent, this is necessary – politicians, for example, still have to hear from constituents, even when those constituents share their thoughts in a rude or harassing fashion.
But the attacks on public figures often go too far. Earlier this year, Facebook expanded protections for young public figures, and now it will also turn its attention to “severe” attacks that directly engage other public figures. (The company didn’t define how it measures severity.)
Facebook’s post also highlighted a recent partnership with the National Parent Teacher Association in the US to facilitate 200 community events in cities in every state to address tech-related challenges faced by families, including bullying prevention. It’s also now offering, through partners, a peer-to-peer online safety and anti-bullying program to every secondary school in the UK. And it supports a program in India that has educated tens of thousands of young people about online safety, thoughtful sharing, and privacy and security, the company claims.
It’s good that Facebook will roll out improved tools around bullying. But it should have done so ages ago.
Because of Facebook’s size and scale – it also runs Instagram, and messaging platforms Messenger and WhatsApp – it has already set the tone for much of online culture.
For years, it has permitted heated back-and-forth discussions to devolve into harassment and bullying. It has sacrificed people’s mental health and wellness at the altar of “increased engagement” on its site – a metric that impacts its bottom line. Newer platforms, like Twitter, later emerged without having to live up to any sort of standard around reasonable and thoughtful discussion, thanks to Facebook’s inattention and slow reactions in dealing with hate speech and abuse.
“Everyone deserves to feel safe on Facebook, and it’s important that we help people who encounter bullying and harassment online,” writes Antigone Davis, Facebook’s Global Head of Safety, in the company’s announcement. “We know our job is never done when it comes to keeping people safe, and we’ll continue listening to feedback on how we can build better tools and improve our policies.”
Source: TechCrunch http://j.mp/2Ngg79l