Guest Post: How Meta and TikTok Turn User Rage into Revenue, While Pretending to Keep You Safe
By Jurgita Lapienytė, Editor-in-Chief at Cybernews
A new BBC report revealed what we suspected all along – big tech platforms turn a blind eye to harmful content for the sake of profit. Platforms allow so-called borderline content – misogynistic, sexist, racist, conspiracy-driven – that is harmful yet legal.
According to the report, based on accounts from a dozen whistleblowers and insiders, Meta engineers were instructed to allow more borderline content to compete with TikTok. Meanwhile, TikTok is said to have prioritized several user complaints involving politicians to “avoid threats of regulation or bans.”
Unsurprisingly, big tech platforms denied any wrongdoing, insisting that they do not amplify harmful content.
Algorithms are allegedly designed to better understand user interests and needs, and cater to them accordingly. Unfortunately, most of what a user “wants” turns out to be conspiracy theories, AI slop, deepfakes, and pro-Nazi content. Or at least the algorithm seems to think so – because most of this is so-called ragebait content, designed to provoke a strong response from the user.
And since users engage with it, the algorithm is tricked into “thinking” this is what people want. The humans behind the algorithm surely understand this is not the case, but clicks translate to cash. So why would Big Tech saw off the branch it’s sitting on?
In 2024, Meta earned $16 billion, or 10% of its annual revenue, from scam ads and banned goods. The information comes not from a third-party analytics firm but from Meta’s own documents, proving that the tech giant is well aware of how much harm it can spread – and how much money it can make along the way.
While platforms and lawmakers take their sweet time debating what borderline content is, people are left to deal with the psychological fallout of social media addiction. From the inability to tell right from wrong or fake from real, to loss of concentration, sleep, and even sense of self, to radicalization, depression, and self-harm – the consequences of companies toying with their algorithms to meet business goals are dire for humanity.
It’s not only our mental health that’s at stake. Adversaries, well aware of algorithmic logic, abuse it to spread misinformation and straightforward lies, sowing division to influence elections all over the world – making us wonder just how much harm performative compliance has already done to democracy.
ABOUT THE AUTHOR
Jurgita Lapienytė is the Editor-in-Chief at Cybernews, where she leads a team of journalists and security experts dedicated to uncovering cyber threats through research, testing, and data-driven reporting. With a career spanning over 15 years, she has reported on major global events, including the 2008 financial crisis and the 2015 Paris terror attacks, and has driven transparency through investigative journalism. A passionate advocate for cybersecurity awareness and women in tech, Jurgita has interviewed leading cybersecurity figures and amplifies underrepresented voices in the industry. Recognized as the Cybersecurity Journalist of the Year and featured in Top Cyber News Magazine’s 40 Under 40 in Cybersecurity, she is a thought leader shaping the conversation around cybersecurity. Jurgita has been quoted internationally.
ABOUT CYBERNEWS
Cybernews is a globally recognized independent media outlet where journalists and security experts debunk cyber threats through research, testing, and data. Founded in 2019 in response to rising concerns about online security, the site covers breaking news, conducts original investigations, and offers unique perspectives on the evolving digital security landscape. Through white-hat investigative techniques, the Cybernews research team identifies and safely discloses cybersecurity threats and vulnerabilities, while the editorial team provides cybersecurity-related news, analysis, and opinions by industry insiders with complete independence. For more, visit www.cybernews.com.
This entry was posted on March 16, 2026 at 12:35 pm and is filed under Commentary with tags Cybernews.