By Bill Ready, CEO at Pinterest
AI has been advancing rapidly over the last 10 years, roughly doubling in capability every six months. Until recently, the advancements have mostly been behind the scenes from a consumer perspective. But in the last few months a new generation of AI has been made available to the public and captured the attention and imagination of many. In fact, two of the largest providers of search, Google and Microsoft (with OpenAI), are showing significant advancements in AI that appear set to create the next major step forward in how search works. I’m excited about that, as are countless others. I’m also very glad to see that it has sparked a broader dialogue about the appropriate use of AI and the ethical issues it raises. It’s encouraging that Microsoft and Google have been speaking directly to how they are attempting to address those issues—even though many questions remain.
What’s missing is a discussion of the other major use of AI in our world today: social media. Social media used AI to create the new big tobacco. It has addicted all of us—but especially young people—over the last decade. And powered by ever more capable AI, its toll on our mental health will only get worse. What comes next is a choice. What will social media do with this next generation of AI? Calls for change have come from parents, researchers, whistle-blowers, regulators, and lawmakers for years. But the call needs to come from within social media as well.
What happened?
Remember when social media first came into broad use? It helped reconnect us with old friends, share family updates with relatives, and meet and connect with neighbors. It gave us hope that we could create a more curious, connected, and compassionate world.
That feels like a distant memory. Today, social media has made us more distracted, more depressed, and more divided. It has turned us against our neighbors and focused us on our differences rather than our commonalities.
That’s because social media companies put AI in charge of what we see and they asked it to maximize view time. AI quickly figured out that people were more likely to view something for longer when it triggered their basest instincts: fear, anger, envy, greed.
The points of view that would get the most engagement were the most extreme rather than the most sensible. The more you were enraged, the more you would engage. With each refinement of social media apps, users are less and less in control of what they see and more and more vulnerable to an increasingly powerful AI that is tuned to keep them viewing, no matter the cost to their wellbeing.
To give a simple metaphor of how this works, let’s take an experience we’ve all had: You’re sitting in a traffic jam and there’s an accident up ahead. You know you shouldn’t look. You know it won’t make you feel good. But…there’s an urge to look anyway. If you ask people afterwards whether they’d like to see another car crash, almost everyone would say no. And fortunately, we don’t have to encounter these situations every day in the real world. But in the world of social media, the AI is going to show you another car crash. And you can’t help but glance at that one, too. So it shows you another and another, until eventually all you see are car crashes.
Defenders of social media will say they are simply giving users what they want. But do we really think this is what people want: more fear, more anger, more envy, more violence, more hate speech, more trolling? A world where all we see are car crashes? That people want to feel worse about themselves and the world around them?
Social media may not have initially understood the unintended consequences of telling AI to maximize view time, but those consequences are overwhelmingly clear now. Even worse, these choices have become deeply ingrained in the business model of much of social media.
As CEO of Pinterest, I’m writing this because I believe it to be one of the most important societal issues of our time. We must build a more positive place online. And it is possible.
To that end, we’ve made a particular set of choices.
From implicit to explicit signals
First, we train our AI models to prioritize explicit intent signals. That could include what people pin to our platform in the first place (say, an amazing brunch recipe), what they might search for once they are here (bold summer makeup), or what they save to their boards to act on later (clever ideas to decorate a dorm room).
When you tune AI on those more conscious, explicit actions, you get very different outcomes than when you optimize for views alone. In that environment, additive rather than addictive content wins, largely because the user is playing a more deliberate role in choosing.
So far, it’s working. And we know this because of our next choice.
From tactics to outcomes
Second, we’re committed to holding ourselves accountable to more positive wellbeing outcomes. There’s no shortage of tactics that social media companies could implement or propose that seem like they ought to help. But unless they result in demonstrably better wellbeing outcomes, those efforts will always be woefully inadequate. In order to build a better internet for our better selves, emotional wellbeing has to be a real, measurable result—and should become the standard for the entire industry.
A recent study we ran with UC Berkeley’s Greater Good Science Center found that 10 minutes a day of active engagement with inspiring content on Pinterest by Gen Z users buffers against rising burnout, stress and social disconnectedness. We replicated similar findings across the UK, Canada, Australia, Germany, France, Brazil, and Japan. More than a dozen studies over the last five years—commissioned and not—show that positive spaces like Pinterest have a wide range of benefits for users.
It’s still early and we don’t profess to have all the answers. We have had our own regrettable moments in which our AI models have served negative or damaging content to users. But we’re committed to better outcomes and bolstered by these early studies that show it’s possible.
A more positive internet is possible.
We got here by making different choices about AI. By placing our users’ wellbeing over their view time. And by holding ourselves accountable for more positive outcomes on mental health—not simply empty tactics. We’ve seen the effects of what social media has been asking AI to do for the last decade. My question is this: what will social media companies ask this new, more powerful generation of AI to do next?
What comes next is a choice.
A choice that leaders must make, a choice that users deserve and should participate in, and a choice that the good of society depends on. Pinterest is committed to using our platform—and the AI that powers it—to create more positive wellbeing outcomes.
We’re making our choice and our intentions clear.
Read more on our research with the Greater Good Science Center at UC Berkeley.
Read more about what Pinterest is doing to support emotional wellbeing and create a better internet for our better selves.
Time To Deploy Ransomware Down… Successful Ransomware Prevention Up: IBM
Posted in Commentary with tags IBM on February 22, 2023 by itnerd
According to IBM, ransomware prevention saw massive improvements in 2022, while ransomware time to deploy (TTD) dropped by 94%. These are just two findings derived from billions of datapoints collected in 2022 from network and endpoint devices by IBM and reported in their “X-Force Threat Intelligence Index 2023.” This is a wide-ranging report with excellent stats:
Top impacts 2022
This is a bit of a mixed bag. But at least the fact that ransomware is being stopped is good news.
Morten Gammelgaard, EMEA, co-founder of BullWall had this to say:
“It is excellent news that ransomware prevention is improving, if for no other reason than it diverts cybercriminals away from executing attacks to developing new tactics, which they will. With extortion, data theft, data leaks and brand reputation being the top 4 out of 5 ways ransomware impacted organizations in 2022, organizations cannot rely solely on prevention and need to also consider active defense/containment strategies to catch the attacks that bypass prevention-based tools. When an active attack is unable to encrypt or exfiltrate data, organizations are given time to respond, eliminating 80% of the potential impact to their business.”
David Maynor, Senior Director of Threat Intelligence at Cybrary followed up with this:
“There are three kinds of lies: lies, damn lies, and ransomware stats. For the last couple of months, depending on who you ask, ransomware attacks are either becoming less of a problem or they are increasing. If your risk model is based on arbitrary thresholds like ‘at 20% we don’t address it but we take it seriously at 21% of attacks seen’…you have already lost and a ransomware actor is probably watching you read this.”
Hopefully when this report comes out in 2024, we’ll see even more ransomware being stopped, which by extension means that ransomware becomes less profitable for the people behind it.