
How Facebook is the Wrong Company to Protect Us from Misinformation

Executive Summary

  • Facebook claims that it will defend its users from misinformation.
  • We review how likely it is that Facebook is up for the job.

Introduction

See our references for this article and related articles at this link.

Misinformation is a term frequently used by elites.

I found something very odd when looking up the definition of misinformation.

The definition laid out here states that misinformation is false. However, when misinformation is used, the person applying the term often does not bother proving the information is false. 

Why Is the Term Misinformation Problematic?

The term is problematic because it presumes that the person calling out the misinformation knows the intent of the provider of the information. Misinformation is not only false; it is deliberately intended to deceive. But how does the person leveling the accusation know this? That is a high standard to prove. Most entities and people who use the term misinformation not only do not go to the effort to prove the information is false, but they also do not demonstrate that the information is being provided with this malicious intent.

What Does Facebook Say About Stopping Misinformation?

The following is taken from the Facebook website regarding misinformation.

False news is harmful to our community, it makes the world less informed, and it erodes trust. It’s not a new phenomenon, and all of us — tech companies, media companies, newsrooms, teachers — have a responsibility to do our part in addressing it. At Facebook, we’re working to fight the spread of false news in three key areas:
1. Disrupting economic incentives because most false news is financially motivated; – Facebook

Right off the bat, we have a problem: Facebook is ignoring its own financial incentives.

Facebook receives a financial benefit from distributing false information, as false information is often more appealing to Facebook users than accurate information. Therefore, as Facebook only has a responsibility to maximize profit to provide the highest return for shareholders, Facebook can argue (although it will never admit to it) that their only real loyalty is to profit maximization, which means promoting false information. This is nothing new, as Facebook’s entire business model is about deceptively surveilling users and then selling this information to advertisers; advertisers, of course, produce an enormous quantity of false or exaggerated information.

Facebook continues:

2. Building new products to curb the spread of false news; and

What does this mean? Facebook will now censor material?

3. Helping people make more informed decisions when they encounter false news.

And why would Facebook be in a good position to do this?

Disrupting Economic Incentives

This is called de-platforming and is what YouTube has done. On YouTube, only small independent content producers have found themselves de-platformed and restricted. Again, Facebook is like YouTube, with only an objective to maximize profits and shareholder value.

These spammers make money by masquerading as legitimate news publishers and posting hoaxes that get people to visit their sites, which are often mostly ads. Updating our detection of fake accounts on Facebook, which makes spamming at scale much harder.

Those sites are immediately obvious, and false information is not significantly spread through spam. Most spam is trying to sell something, not spread false news.

Is this really the major problem with false information?

What if the false information is distributed by major companies? Microsoft, Oracle, SAP, and many others habitually provide false information to their customers, as we cover in the article How Facebook is Constantly Lying About its Surveillance of Users. Yet, again, it is not called misinformation when a major corporation spreads it.

Will Facebook censor and de-platform those entities? What if those entities are paying Facebook? Don’t they have a fiduciary responsibility to their shareholders to publish anything as long as the entity is paying Facebook to do so?

Some of the steps we’re taking include:
Better identifying false news through our community and third-party fact-checking organizations so that we can limit its spread, which, in turn, makes it uneconomical.

Who are the fact-checkers? As we will cover, Facebook first needs to run its own claims past fact-checkers, as we rank Facebook as one of the least honest companies we have ever tracked.

Making it as difficult as possible for people posting false news to buy ads on our platform through strict enforcement of our policies.

Does this include advertisers that provide false information? That would cover a very high percentage of Facebook’s advertisers.

Facebook Journalism Project: We are committed to collaborating with news organizations to develop products together, providing tools and services for journalists, and helping people get better information so they can make smart choices about what they read. We are convening key experts and organizations already doing important work in this area, such as the Walter Cronkite School of Journalism and Mass Communication at Arizona State University, and have been listening and learning to help decide what new research to conduct and projects to fund. Working with the News Literacy Project, we are producing a series of public service announcements (PSAs) to help inform people on Facebook about this important issue.

News Integrity Initiative: We’ve joined a group of over 25 funders and participants — including tech industry leaders, academic institutions, non-profits and third party organizations — to launch the News Integrity Initiative, a global consortium focused on helping people make informed judgments about the news they read and share online. Founding funders of this $14-million fund include Facebook, the Craig Newmark Philanthropic Fund, the Ford Foundation, the Democracy Fund, the John S. and James L. Knight Foundation, the Tow Foundation, AppNexus, Mozilla and Betaworks. The initiative’s mission is to advance news literacy, to increase trust in journalism around the world and to better inform the public conversation. The initiative, which is administered by the CUNY Graduate School of Journalism, will fund applied research and projects, and convene meetings with industry experts.

This is amusing because, along with Google, Facebook has been instrumental in reducing the revenues flowing to media entities.

This is what Facebook did to its media “partners.”

Proprietary access to subscribers and the identities of readers and visitors is a highly guarded asset historically by subscription businesses. It is unlikely that publishers would have shared this information unless they were under the belief that Facebook was a content distribution platform and traffic generator, not a surreptitious aggregator of consumer data for Facebook’s own internal, and competitive, advertising sales efforts. Facebook obtained the initial cooperation of third-party businesses through the inducements of content distribution and the convenience of single login. Now Facebook would receive the ability to monitor the behavior of their customers—competitors with Facebook in the digital advertising market—by changing the fine print of permissions.

Facebook increasingly knew as much about The Wall Street Journal’s readers as the Journal did itself. Furthermore, unlike the Journal, Facebook now knew which Journal readers were avid ESPN readers, giving it the capability to bundle and sell targeted audiences, which further commoditized the value of competitors’ inventory. – The Antitrust Case Against Facebook

And there is something else. There is no reason to think that anything Facebook has written above is true.

Facebook on Political Ads

As is normally the case with Facebook, once it is being paid, the interest in limiting false information goes out the window.

Defying pressure from Congress, Facebook said on Thursday that it would continue to allow political campaigns to use the site to target advertisements to particular slices of the electorate and that it would not police the truthfulness of the messages sent out.

“Facebook is paying for its own glowing fake news coverage, so it’s not surprising they’re standing their ground on letting political figures lie to you,” Senator Elizabeth Warren said on Twitter.

Ms. Warren, who has been among the most critical of Facebook and regularly calls for major tech companies to be broken up, reiterated her stance that the social media company should face tougher policies.

Again, as Facebook is being paid to run political ads, it will not restrict this speech. Add to this all the other advertisers that run misleading ads, which Facebook also won’t restrict. Therefore, freedom of speech on Facebook is highly correlated with money.

How Facebook manipulates its users while offering this as a value-added service for political ads was shared by an ex-Facebook employee, Yaël Eisenstat, in an article in The Washington Post titled I worked on political ads at Facebook. They profit by manipulating us.

Eisenstat states the following:

A year and a half later, as the company continues to struggle with how to handle political content and as another presidential election approaches, it’s clear that tinkering around the margins of advertising policies won’t fix the most serious issues. The real problem is that Facebook profits partly by amplifying lies and selling dangerous targeting tools that allow political operatives to engage in a new level of information warfare. Its business model exploits our data to let advertisers aim at us, showing each of us a different version of the truth and manipulating us with hyper-customized ads — ads that as of this fall can contain blatantly false and debunked information if they’re run by a political campaign. As long as Facebook prioritizes profit over healthy discourse, it can’t avoid damaging democracy.

It was unclear to me why the company was applying different policies and tools across the platform. Most users do not differentiate organic content from ads — as I clearly saw on a trip to India, where we were testing our ad-integrity products — so why did we expect users to understand that we applied different standards to different forms of content that all appear in their news feeds?

The fact that we were taking money for political ads, and allowing campaigns and other political organizations to target users based on the vast amounts of data we had gathered, meant political ads should have an even higher bar for integrity than what people were posting in organic content. We verified advertisers to run political ads, giving them a check mark and a “paid for by” label, and I asked if that gave the false impression that we were vouching for the validity of the content, boosting its perceived credibility even though we weren’t checking any facts or trying to halt the spread of false information.

And speaking of misinformation, let us review what the following quotation says about how Facebook “informs” its users.

But before local news started collapsing, thanks partly to the advertiser exodus to Facebook and Google, newspapers used this model to fulfill their responsibilities to educate readers and hold those in power to account. Facebook does the opposite: It narrows its users’ interests and fuels their ignorance with lies and misinformation. – The Nation

In this way, Facebook not only extracts massive revenues from traditional media, but it also presents poor-quality information to its users. I found this exact same issue with LinkedIn, as I cover in the article How LinkedIn Degraded as a Content Platform. LinkedIn does not allow the natural interests of its users to drive which shares they see. It de-emphasizes non-promotional content and content that might be critical of large, powerful entities (like the type of content Brightwork Research & Analysis shared on LinkedIn), and drives users toward the lowest common denominator. LinkedIn particularly favors “positive” posts shared by employees of multi-billion-dollar companies that praise their products. Employees at these firms then like each other’s promotional, and more often than not deceptive and brainless, shares, and LinkedIn counts these as “authentic engagement.”

Facebook is Designed to Amplify Lies?

The following quotation explains this pattern by Facebook.

“Most negative misinformation (62%) was about Democrats or liberals.” The incitement of violence remains on Facebook and on the company’s other apps as well.

This is no accident. Yaël Eisenstat, Facebook’s former head of global elections integrity, explained in The Washington Post that the company “profits partly by amplifying lies and selling dangerous targeting tools that allow political operatives to engage in a new level of information warfare.” – The Nation

Multiple Golden Pinocchio Award Winner Opposes Misinformation?

Facebook is a multiple-time winner of our Golden Pinocchio Award. Facebook has been lying to users about how it surveils them and how their information is used and sold to advertisers. Facebook’s lying is jaw-dropping, often doing the exact opposite of what it claims to do and recycling the same excuses when caught lying. When Sheryl Sandberg or Mark Zuckerberg speak, it is clear they are lying even as they speak.

Facebook, which targets its users with ads and surveils them across the Internet, and then lies about it, is the company to battle misinformation?

And what was the most prominent example of Big Tech censoring misinformation? It was when Big Tech, along with the establishment Democratic-associated media, colluded to suppress the Hunter Biden email story and to associate the story with unsupported claims that the entire issue was a Russian intelligence operation. However, the story about Hunter Biden’s emails was all true. The emails were his, and they did describe his corrupt dealings with foreign actors. Why did Big Media and Big Tech decide to censor what they called misinformation? It is well known that both Big Media and Big Tech are in the tank for the Democratic Party and did not want the Hunter Biden story to upset Joe Biden’s election bid.

Then the story of the censorship became its own story.

Months after the Hunter Biden emails surfaced, Hunter Biden was formally made the subject of an IRS and FBI corruption investigation. Is this more misinformation? Is the story still “debunked,” a “right-wing conspiracy theory,” and “fake news”? This video also shows an audience in China laughing at how straightforward it is for China to corrupt US elites. Is that Chinese speaker also misinformation? Is the laughing misinformation as well?

Was the video of this Chinese speaker shown on any establishment media outlet? Of course not.

Google and Facebook’s Censorship

Both Google and Facebook are engaging in enormous censorship.

This video describes how unregulated Big Tech firms are altering their algorithms to suppress information they don’t want people to see. Everything in the videos by David Wood regarding Islam is correct. He quotes directly from the Koran and other Islamic documents to make his point. However, stopping “misinformation” is not about stopping false information. Even if something is true, it can still be classified as “hate speech” and censored on that basis as well.

Conclusion

Facebook, a company routinely caught lying, is hardly an ideal company to stop “misinformation.” Facebook’s natural inclination is to promote whatever information on its platform maximizes profit and to use rules about misinformation to censor information that disagrees with Facebook or shows it in a bad light.