CURBING MISINFORMATION

PLATFORM 1: FACEBOOK

According to the Wall Street Journal's Facebook Files reporting in 2020, Facebook's own data showed that the use of hate speech on the platform increased by 300% during the riots. When this information was leaked, Facebook responded that it was working hard to fix the algorithm. The reporting describes additional disturbing findings, such as political influence on chat boards, a broadly problematic experience reported by users, an ineffective policy for stolen content (bad for content producers and audiences alike), and the algorithm's harmful effects on adolescents. This may look bad for Facebook, but the leaks only grew darker over the following years.

Facebook, now Meta, responded to these findings with a very broad statement. I believe it is possible that the algorithm independently produced harmful or negative content, since algorithms are reaching the point of outsmarting the humans who build them. Furthermore, Zuckerberg could be describing a truthful situation in which the company has lost control of the algorithm: Facebook may intend for it to produce positive connections, yet the outcome may simply be out of its hands. This potential out I had conceived for Facebook's otherwise horrible narrative quickly vanished after I learned about Facebook's whistleblower, Frances Haugen.

In an October 2021 CBS newscast, "'Facebook Papers' suggest platform did little to stop misinformation," leaked Facebook data revealed how the platform's interface promoted violence, enabled human trafficking, and failed to restrict hate speech effectively. Frances Haugen, an American data engineer and scientist and former Facebook employee, stated, "we are literally subsidizing hate on these platforms." In the interview, Haugen explains that a common pattern she saw in the data was "conflicts of interest between what was good for the public and what was good for Facebook, and Facebook over and over again chose Facebook to optimize its own interests like making more money."

Frances spent 15 years working at other tech companies, including Google and Pinterest, and she said Facebook was the worst regarding this issue. When Haugen leaked tens of thousands of internal Facebook documents, she wanted to show the public, beyond doubt, that Facebook had been lying about its policies working against hate, violence, and misinformation, and that it was in fact promoting them. The released data demonstrates that the new policies in place make Facebook-owned platforms a net negative user experience rather than even attempting to move in the other direction; it also exposes the exploitation of Facebook users to increase profit. By monetizing users' fear and vulnerability, Facebook relies on inhumane and unethical user interface structures, on top of lying about the algorithm's reconstruction and revision.

It is clear that Meta's platforms need to show real effort to their users and back their claims with supporting statistics, given the loss of trust between the platforms and their users. Looking at Meta's efforts as of September 23, 2022, the company still seems to be repeating the cycle of claiming to fix the algorithm while actively making it worse behind closed doors. I came to this conclusion after finding that, years later, the article "Facebook whistleblower launches nonprofit to take on big tech" shows Frances Haugen still very much fighting against Meta and its unethical exploitation of users for profit.

PLATFORM 2: YOUTUBE

A better example of a platform's progressive attempt to curb misinformation is YouTube in February 2022. YouTube's article "Inside Responsibility: What's next on our misinfo efforts" is well written and trustworthy. All of its claims about combating misinformation and its spread are supported by concrete tactics being put in place (if not in place already) to restrict the creation, publication, and spread of misinformation on the YouTube platform. As a first-hand user of YouTube, I can back this up with my own experience: one of my favorite large YouTube creators, Steve-Will-Do-It, was banned from the platform for persistent violations involving misinformation, hate speech, and violent content.

The issue I had with this was the lack of investigation into the context of his videos. The comments he made were funny to a niche group, but that niche group consisted of millions. I was disheartened to hear that his YouTube channel was taken down because, although he made some bold and provocative comments, the intention behind them was clearly not harmful if you watched many of his videos and understood his humor. No statement he made was ever malicious or 100% serious. Steve is a YouTuber well known for giving back to his subscribers and changing lives; in video after video, he gave away millions of dollars' worth of gifts to communities both nearby and around the globe. I believe YouTube should take down Steve's videos individually when there are issues or significant, credible reports, but removing a channel that had an overall positive impact on the internet is not an appropriate or ethical response. YouTube's policy for removing an account would be better structured if a person studied the context and sentiment of the videos rather than judging them at surface level.