By Bill Ready, CEO at Pinterest
AI has been advancing rapidly over the last 10 years, with its power roughly doubling every six months. Until recently, the advancements have mostly been behind the scenes from a consumer perspective. But in the last few months a next generation of AI has been made available to the public and captured the attention and imagination of many. In fact, two of the largest providers of search, Google and Microsoft (with OpenAI), are showing significant advancements in AI that appear set to create the next major step forward in how search works. I’m excited about that, as are countless others. I’m also very glad to see that it has sparked a broader dialogue about the appropriate use of AI and the ethical issues it raises. It’s encouraging that Microsoft and Google have been speaking directly to how they are attempting to address those issues—even though many questions remain.
What’s missing is a discussion of the other major use of AI in our world today: social media. Social media used AI to create the new big tobacco. It has addicted all of us—but especially young people—over the last decade. And laced with ever more powerful AI, it will only get worse for our mental health. What comes next is a choice. What will social media do with this next generation of AI? Calls for change have come from parents, researchers, whistleblowers, regulators, and lawmakers for years. But the call needs to come from within social media as well.
What happened?
Remember when social media first came into broad use? It helped reconnect us with old friends, share family updates with relatives, and meet and connect with neighbors. It gave us hope that we could create a more curious, connected, and compassionate world.
That feels like a distant memory. Today, social media has made us more distracted, more depressed, and more divided. It has turned us against our neighbors and focused us on our differences rather than our commonalities.
That’s because social media companies put AI in charge of what we see and they asked it to maximize view time. AI quickly figured out that people were more likely to view something for longer when it triggered their basest instincts: fear, anger, envy, greed.
The points of view that would get the most engagement were the most extreme rather than the most sensible. The more you were enraged, the more you would engage. With each refinement of social media apps, users are less and less in control of what they see and more and more vulnerable to an increasingly powerful AI that is tuned to keep them viewing, no matter the cost to their wellbeing.
To give a simple metaphor of how this works, let’s take an experience we’ve all had: You’re sitting in a traffic jam and there’s an accident up ahead. You know you shouldn’t look. You know it won’t make you feel good. But…there’s an urge to look anyway. If you ask people afterwards whether they’d like to see another car crash, almost everyone would say no. And fortunately, we don’t have to encounter these situations every day in the real world. But in the world of social media, the AI is going to show you another car crash. And you can’t help but glance at that one, too. So it shows you another and another, until eventually all you see are car crashes.
Defenders of social media will say they are simply giving users what they want. But do we really think this is what people want: more fear, more anger, more envy, more violence, more hate speech, more trolling? A world where all we see are car crashes? That people want to feel worse about themselves and the world around them?
Social media may not have initially understood the unintended consequences of telling AI to maximize view time, but those consequences are overwhelmingly clear now. Even worse, these choices have become deeply ingrained in the business model of much of social media.
As CEO of Pinterest, I’m writing this because I believe it to be one of the most important societal issues of our time. We must build a more positive place online. And it is possible.
To that end, we’ve made a particular set of choices.
From implicit to explicit signals
First, we train our AI models to prioritize explicit intent signals. That could include what people pin to our platform in the first place (say, an amazing brunch recipe), what they might search for once they are here (bold summer makeup), or what they save to their boards to act on later (clever ideas to decorate a dorm room).
When you tune AI on those more conscious, explicit actions, you get very different outcomes than when you optimize for views alone. In that environment, additive rather than addictive content wins, largely because the user is playing a more deliberate role in choosing.
So far, it’s working. And we know this because of our next choice.
From tactics to outcomes
Second, we’re committed to holding ourselves accountable to more positive wellbeing outcomes. There’s no shortage of tactics that social media companies could implement or propose that seem like they ought to help. But unless they result in demonstrably better wellbeing outcomes, those efforts will always be woefully inadequate. In order to build a better internet for our better selves, emotional wellbeing has to be a real, measurable result—and should become the standard for the entire industry.
A recent study we ran with UC Berkeley’s Greater Good Science Center found that 10 minutes a day of active engagement with inspiring content on Pinterest by Gen Z users buffers against rising burnout, stress and social disconnectedness. We replicated similar findings across the UK, Canada, Australia, Germany, France, Brazil, and Japan. More than a dozen studies over the last five years—commissioned and not—show that positive spaces like Pinterest have a wide range of benefits for users.
It’s still early and we don’t profess to have all the answers. We have had our own regrettable moments in which our AI models have served negative or damaging content to users. But we’re committed to better outcomes and bolstered by these early studies that show it’s possible.
A more positive internet is possible.
We got here by making different choices about AI. By placing our users’ wellbeing over their view time. And by holding ourselves accountable for more positive outcomes on mental health—not simply empty tactics. We’ve seen the effects of what social media has been asking AI to do for the last decade. My question is this: what will social media companies ask this new, more powerful generation of AI to do next?
What comes next is a choice.
A choice that leaders must make, a choice that users deserve and should participate in, and a choice that the good of society depends on. Pinterest is committed to using our platform—and the AI that powers it—to create more positive wellbeing outcomes.
We’re making our choice and our intentions clear.
Read more on our research with the Greater Good Science Center at the University of California, Berkeley.
Read more about what Pinterest is doing to support emotional wellbeing and create a better internet for our better selves.
Activision Has Been Pwned As If It Were A N00b Playing Call Of Duty
Posted in Commentary with tags Hacked on February 22, 2023 by itnerd

It appears that video game company Activision has been pwned by hackers. And this hack is really bad. Here’s a quick synopsis:
And seeing as they are being purchased by Microsoft, this could not have come at a worse time for the company. And Activision’s response to this has been, shall we say, sub-optimal.
David Maynor, Senior Director of Threat Intelligence at Cybrary had this to say:
“There is no one ‘SOP’ for breaches. This timeline shows a typical public reaction to a breach. Some entity, in this case VX-Underground, notices something on a market and tells the world about it. Reporters that follow VX-Underground use it as a tip, and suddenly the victim’s switchboard/email server gets loaded with requests for comment.
“There is also the fog of war effect where different people have different parts of a puzzle and make assumptions. This leads to different hot takes contradicting each other.
“From the trial last year of the Uber CISO, Joseph Sullivan, we know that big corps can handle breaches differently. What I can say from personal experience is that the responses to questions as well as public statements are approved by if not written by a crisis communications team. The default response is deescalate, deflect, then deny. This is why the infosec community values technically insightful Root Cause Analysis (RCA) from a victim.”
Tim Morris, Chief Security Advisor, AMER at Tanium follows up with this:
“There is conflicting information on this one. Specifically, about what was accessed/stolen. Regardless, the initial attack vector was a socially engineered phishing/smishing attack, obtaining access via SMS 2FA. Proving once more that SMS 2FA isn’t the most robust form of authentication, and that other, stronger MFA methods should be used.
“Also, training of users is still needed. Users should treat SMS messages with the same scrutiny as email phishing scams. Be wary of phone calls from ‘IT Support’: unless initiated by the user, they should be treated as suspect. Either ignore them or call back on a known number. For SMS, ignore the message and never give out any 2FA codes sent via text.
“The principle of least privilege needs to be implemented, so that if/when an employee’s account credentials are stolen, the ‘blast radius’ is small, i.e. what the attacker has access to is minimized. Threat hunting, good incident response, and monitoring are key to finding these intrusions quickly and limiting their reach.
“Have a good PR plan on what to do when a breach happens. This successful attack happened two and a half months ago, and is only public now because some leaked data was published on vx-underground.”
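For readers wondering what the “stronger MFA” the advisors recommend looks like in practice: app-based one-time codes (TOTP, per RFC 6238) are generated locally on the user’s device rather than delivered over SMS, so they can’t be intercepted by SIM-swapping (though they can still be phished in real time; only phishing-resistant methods like FIDO2 hardware keys close that gap). Below is a minimal sketch of TOTP generation and verification using only the Python standard library. The function names are illustrative and not taken from any product mentioned above.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32.upper())
    # Counter = number of 30-second steps since the Unix epoch.
    counter = int((at if at is not None else time.time()) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken
    # from the low nibble of the last digest byte.
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify(secret_b32, submitted, at=None, step=30):
    """Check a submitted code in constant time, allowing one step of clock drift."""
    now = int(at if at is not None else time.time())
    return any(
        hmac.compare_digest(totp(secret_b32, now + drift * step, step=step), submitted)
        for drift in (-1, 0, 1)
    )
```

Because the code is derived from a shared secret and the current time, nothing travels over the phone network, which is exactly the property SMS delivery lacks.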
Given the profile of Activision, which makes the Call Of Duty franchise, and its relationship with Microsoft, a lot of eyes are going to be on this one. If I were Activision, I’d be working very hard to find out what happened, what was stolen, and how to stop it from happening again. Then I would put all of that into the public domain as quickly as possible. Because right now, Activision looks like a bunch of n00bs.