U.S. lawmakers are prioritizing proposals to rein in social media giants by scaling back the legal protections that shield them from liability for speech posted on their platforms.
Their efforts come after a former Facebook product manager filed whistleblower complaints alleging that the company's products promote online hate and extremism and fail to protect young users from harmful content.
The whistleblower, Frances Haugen, is expected to address the lawmakers' proposals before a House panel on Wednesday. Her earlier disclosures have fueled legislative and regulatory efforts around the world aimed at reining in Big Tech, and she has made a series of recent appearances before European lawmakers and officials drafting rules for social media companies.
Haugen, a data scientist who worked in Facebook's civic integrity unit, has backed up her assertions with internal company documents she secretly copied and turned over to federal securities regulators and Congress.
When she first went public this fall, leveling sharp criticism at the social media platform before a Senate Commerce subcommittee, she laid out ideas for how Facebook's platforms could be made safer and how Congress could act. She rejected the idea of breaking up the tech giant, as many lawmakers have urged, in favor of targeted legislative remedies.
Most notably, those remedies include new limits on the long-standing legal protections for speech posted on social media. Both Republican and Democratic lawmakers have called for stripping away some of the protections granted by a 25-year-old law - known as Section 230 - that shields online companies from liability for what users post.
Facebook and other social media companies use computer algorithms to rank and recommend content, controlling what appears in users' news feeds. Haugen's idea is to remove the liability shield in cases where algorithm-driven amplification of content favors greater user engagement over public safety.
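The engagement-driven ranking at issue can be illustrated with a minimal sketch. This is a generic, hypothetical example, not Facebook's actual system; the score weights and field names are invented for illustration only.

```python
# Generic illustration of engagement-based feed ranking (hypothetical;
# not any company's real algorithm). Posts predicted to draw more
# interactions are placed higher in a user's feed, regardless of
# whether the content is safe or harmful.

def engagement_score(post):
    """Weighted sum of predicted interactions; weights are invented."""
    return (post["predicted_comments"] * 2.0
            + post["predicted_reactions"] * 1.0
            + post["predicted_shares"] * 3.0)

def rank_feed(posts):
    """Return posts sorted by engagement score, highest first."""
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "a", "predicted_comments": 5, "predicted_reactions": 20,
     "predicted_shares": 1},
    {"id": "b", "predicted_comments": 30, "predicted_reactions": 10,
     "predicted_shares": 8},
]

ranked = rank_feed(posts)
# Post "b" outranks "a" purely because it is predicted to generate
# more engagement - the optimization critics say favors
# engagement over safety.
```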
That is the thinking behind the Justice Against Malicious Algorithms Act, introduced by senior House Democrats a week after Haugen testified before a Senate panel in October. The bill would hold social media companies accountable by removing their Section 230 protections for personalized recommendations that are found to cause harm. A platform could lose its immunity in cases where it "knowingly or recklessly" promotes harmful content.
A subcommittee of the House Energy and Commerce Committee held a hearing Wednesday on the bill and other proposed legislation to curb abuses on social media. Senior Democrats on the committee, including its chairman, Rep. Frank Pallone of New Jersey, introduced the bill targeting algorithms.
"The committee has seen growing evidence that if social media companies are faced with a choice between making more money or protecting public health and safety, they will continue to choose money," Pallone said recently. "The lack of transparency by these companies has serious consequences for all Americans. The era of self-regulation is over. Congress must now step in and consider proposals that will bring real accountability."
Some experts who advocate stricter regulation of social media say the legislation could have unintended consequences. It does not spell out clearly enough which algorithmic behaviors would cause a platform to lose its liability protection, they suggest, making it difficult to predict how the law would work in practice and inviting wide disagreement over what it would actually do.
Meta Platforms, the new name of Facebook's parent company, declined to comment on specific legislative proposals. The company says it has long advocated for updated internet regulations.
Meta CEO Mark Zuckerberg has suggested changes that would grant online platforms legal protection only if they can demonstrate that their systems for identifying illegal content are up to par. That requirement, however, could be harder for smaller tech companies and startups to meet, leading critics to charge that it would ultimately favor Facebook.
Some tech industry groups have warned against making any changes to Section 230.