The video sharing platform’s reliance on automation has apparently gone too far
YouTube has been accused of violating laws intended to prevent racial discrimination. A group of African-American content creators has filed a class-action lawsuit against the giant video sharing site and its parent company, Alphabet, challenging the immunity afforded to digital services under Section 230 of the Communications Decency Act. It is the latest court filing to take aim at those already controversial protections.
The lawsuit, filed Tuesday in California federal court, calls into question how YouTube uses artificial intelligence (AI), algorithms and other filtering tools.
YouTube uses these tools to flag potentially mature content for viewers and to monitor what is uploaded to the site. The plaintiffs claim, however, that by employing its “Restricted Mode” the site acts as an improper censor, engaging in “digital racism” that profiles users based on race, identity and viewpoint. For the plaintiffs, this significantly interferes with their ability to monetize their content. The group adds that YouTube’s conduct is “intentional and systematic, regardless of whether Defendants are motivated by ideological animus towards Blacks and members of other protected racial classifications under the law.”
The lawsuit also claims that YouTube applies “Restricted Mode” to videos titled or tagged with abbreviations like “BLM” or “KKK,” as well as to terms like “racial profiling,” “police shooting” or “Black Lives Matter,” including the names of individuals killed by police officers, and to videos tagged with “Bill Cosby” or “Louis Farrakhan.” These videos get restricted even when they contain no profanity, drug use, sexual activity or violence, and violate none of the site’s policies.