Facebook’s practice of exempting a secret ‘whitelisted’ elite from its rules and allowing them to post banned content through its special XCheck program is under review by its oversight board
- Facebook’s XCheck (cross-check) program shields an elite list of celebrities from rules that apply to the general public
- The oversight board is looking into whether Facebook ‘has been fully forthcoming in its responses in relation to cross-check’
- It was initially designed as a type of quality control to protect the company from bad publicity in the event that it moderated content from high-profile users
- When VIPs violate rules, their posts aren’t removed but instead sent to a separate system staffed by better-trained employees to further review content
- The social media site has yet to respond to the board’s request for it to share how XCheck works, including criteria for adding users and its system error rates
- Facebook claims it has 40,000 employees working on safety and security and has invested more than $13billion in those areas since 2016
Facebook’s oversight board is reviewing the company’s practice of exempting secret ‘whitelisted’ users from its community guidelines and allowing them to post banned content through a special XCheck program.
XCheck, also known internally as cross-check, has long been a subject of scrutiny for Facebook’s oversight board – a quasi-independent body, funded by the social media giant, that critiques how Facebook handles problematic content.
The board was created last year with a $130million trust fund from Facebook, which allows the committee to make final decisions on whether individual pieces of content can remain on the site.
While it was initially designed as a type of quality control to protect the company from bad publicity in the event that it moderated content from high-profile users, critics say it has shielded those same users from the rules that apply to the general public.
These unmoderated users are part of an elite list that has grown to include millions of accounts, such as former President Donald Trump, his son Donald Trump Jr, Senator Elizabeth Warren, model Sunnaya Nash and Facebook founder and CEO Mark Zuckerberg himself.
While the company has previously told its oversight board that the system was only used in a ‘small number of decisions’, the board is now looking into whether Facebook ‘has been fully forthcoming in its responses in relation to cross-check, including the practice of whitelisting’, according to The Wall Street Journal.
In a blog post on Tuesday the board said ‘disclosures (from The Journal) have drawn renewed attention to the seemingly inconsistent way that the company makes decisions, and why greater transparency and independent oversight of Facebook matter so much for users’.
The Journal relied on internal documents provided to it by employees of the company who say that the program shields celebrities from enforcement actions that are meted out against the platform’s more than three billion other users.
If a VIP is believed to have violated the rules, their posts aren’t removed immediately but are instead sent to a separate system staffed by better-trained employees who then further review the content.
The XCheck program has been in place for years – well before Trump was banned from the platform after he was accused of fomenting the January 6 riot at the US Capitol.
The board then called on Facebook to address the claims and to ‘share the criteria for adding pages and accounts to cross-check as well as to report on relative error rates of determinations made through cross-check, compared with its ordinary enforcement procedures’.
The board said it would share in October any new details it learned from the social media site, as part of its quarterly transparency report.
Facebook didn’t immediately respond to The Journal’s request for comment.
The backlash comes as Facebook faces relentless pressure to guard against being a platform where misinformation and hate can spread, while at the same time remaining a forum where people can speak freely.
The oversight board said: ‘The choices made by companies like Facebook have real-world consequences for the freedom of expression and human rights of billions of people across the world.
‘By having clear rules and enforcing them consistently, platforms can give users the confidence that they’ll be treated fairly. Ultimately, that benefits everyone.’
Apparently, the oversight board doesn’t even have clarity on how the XCheck program works. It has previously asked Facebook to share how the system works, including the criteria for adding users to the list and its error rates.
Facebook has yet to hand over that information but employees themselves have spoken out on the practice of whitelisting, complaining that it left users exposed to misinformation.
‘We are knowingly exposing users to misinformation that we have the processes and resources to mitigate,’ read a 2019 memo by Facebook researchers.
Mark Zuckerberg’s livestream Q&A with his employees was banned by his OWN algorithm
In 2019, Facebook CEO Mark Zuckerberg held a livestreamed Q&A session with his own employees.
But the session was mistakenly banned because it ran afoul of the platform’s own algorithm, according to The Wall Street Journal.
The mistake was one of 18 instances from 2019 in which posts by users ‘whitelisted’ under the XCheck program were inadvertently flagged.
Four of those instances involved posts by then-President Donald Trump and his son, Donald Trump Jr.
The other incidents included posts by Senator Elizabeth Warren, fashion model Sunnaya Nash, and others.
At one point, content moderators were reviewing less than 10 per cent of problematic posts by users shielded by XCheck.
One Facebook user who was on the whitelist was allowed to share an article, attributed to a doctor who died more than 40 years ago, claiming that chemotherapy was ineffective in treating cancer.
Samidh Chakrabarti, an executive who headed Facebook’s Civic Team, wrote in a document: ‘One of the fundamental reasons I joined FB is that I believe in its potential to be a profoundly democratizing force that enables everyone to have an equal civic voice.
‘So having different rules on speech for different people is very troubling to me.’
Other employees expressed their displeasure with the program.
‘FB’s decision-making on content policy is influenced by political considerations,’ wrote a company economist in the data-science division.
Kaushik Iyer, an engineer for Facebook’s civic integrity team, wrote in June 2020: ‘Separate content policy from public policy.’
Meanwhile, the series of recent Wall Street Journal reports also alleged that Facebook knew its Instagram photo-sharing app was hurting teenage girls’ mental health, and that its moderation system had a double standard allowing VIPs to skirt the rules.
According to another report, Facebook’s 2018 algorithm overhaul, geared towards boosting ‘meaningful social interactions’ (MSI) and making the site a healthier place, actually made it angrier instead.
The Journal report read: ‘Facebook’s chief executive, Mark Zuckerberg, said the aim of the algorithm change was to strengthen bonds between users and to improve their well-being.
‘Facebook would encourage people to interact more with friends and family and spend less time passively consuming professionally produced content, which research suggested was harmful to their mental health.’
In turn, high-profile users and companies turned to tactics such as ‘outrage and sensationalism’ to game the algorithm and reach their customers.
‘That tactic produced high levels of comments and reactions that translated into success on Facebook’ – an outcome that, according to The Journal, Facebook’s own employees knew would happen.
However, Facebook’s vice president of global affairs Nick Clegg said otherwise in a blog post on Saturday.
In reference to The Journal’s reports, he wrote: ‘These stories have contained deliberate mischaracterizations of what we are trying to do, and conferred egregiously false motives to Facebook’s leadership and employees.’
The post didn’t cite any specific factual inaccuracies in The Journal’s reports – noting only that Facebook has 40,000 employees working on safety and security and has invested more than $13billion in those areas since 2016.