Yesterday, U.S. Senators Mitt Romney (R-UT), Jack Reed (D-RI), Jerry Moran (R-KS), and Angus King (I-ME) released a letter to the Senate artificial intelligence (AI) working group leaders outlining a framework to mitigate extreme AI risks. I encourage you to read the letter, but here’s the TL;DR:
Congress should consider a permanent framework to mitigate extreme risks. This framework should also serve as the basis for international coordination to mitigate extreme risks posed by AI. This letter is an attempt to start a dialogue about the need for such a framework, which would be in addition to, not at the exclusion of, proposals focused on other risks presented by developments in AI.
Under this potential framework, the most advanced model developers in the future would be required to safeguard against four extreme risks – the development of biological, chemical, cyber, or nuclear weapons. An agency or federal coordinating body would be tasked to oversee the implementation of these proposed requirements, which would apply to only the very largest and most advanced models. Such requirements would be reevaluated on a recurring basis as we gain a better understanding of the threat landscape and the technology.
Sounds interesting. But is it useful? Here’s what Kevin Surace, Chair of Token, had to say:
This is great politics and important to state publicly, but it won’t protect anyone from these threats. The major model providers already have strong safeguards in place for these and similar threats (you cannot get an answer from ChatGPT on how to create a chemical weapon).
This changes nothing for the major US providers. They already strongly limit access to such content. However, open source models being used by bad actors and rogue countries are not subject to these laws, and those actors will misuse the technology anyway.
Anyone can already Google how to create a biological weapon. Having the answers faster doesn’t really help someone with the chemistry, procurement, production and so on any more than Google already did. But AI could perhaps create new compounds not well documented elsewhere. And the bad actors are already taking advantage of that with open source models.
This has zero impact on OpenAI, Microsoft, Google and so on. And it has zero impact on a rogue country using open source models.
I’m all for guardrails and safeguards. But they have to be useful. I am not yet convinced that this effort by these senators is useful. But I am open to being convinced otherwise. Let’s see if they can convince me and others that this is a useful exercise.
UPDATE: I have additional commentary from Madison Horn, Congressional Candidate (OK-5) and cybersecurity leader:
The plan proposed by the Senators is crucial. We are in the midst of a new kind of Cold War with China, one that includes the race to harness AI. A comprehensive strategy to not only secure but also to fully harness the potential of AI is essential. The nation that leads in AI will not only dictate global markets but also define international norms for decades to come.
Executing a plan to mitigate AI risks is loaded with challenges. First, we need a solid strategy to retain top talent for any new agencies we might set up, and we must also forge strong partnerships with the private sector. Then there’s Congress—sometimes it seems like they’re in a tech time warp, which doesn’t help. Plus, we can’t let our drive for security strangle American innovation. We need to stay agile, adapting as new models and classifications emerge, and ensure we’re not shutting out new startups or inadvertently creating monopolies.
And let’s not overlook cybersecurity challenges. Ensuring these AI models aren’t leaked or stolen is crucial—our adversaries are definitely taking notes and will be trying to tap into this wealth of information that will be retained.
Artificial intelligence poses a significant threat, one that reshapes the global landscape in ways we haven’t witnessed since the post-WWII era. With new alliances forming, notably between Russia and China, the stakes in the AI war are extraordinarily high. The power of AI doesn’t just accelerate a country’s ability to dominate global markets; it also has the potential to shift global values depending on who emerges as the leader in this technology. In the most extreme scenarios, the misuse of AI could lead to catastrophic outcomes, potentially destroying the world in a matter of seconds. The race to harness AI, therefore, is not just about technological superiority but also about steering the future ethical and moral compass of our entire planet.
We need to keep the spark of American innovation alive—it’s also crucial for our national security. Collaboration with the private sector? Non-negotiable. With many of the few qualified individuals in Congress retiring or being pushed out of office by partisan politics, it’s up to the American people to step up. We must elect leaders who are not just filling a seat but who truly understand the complexities of today’s tech challenges. Leaders who have the understanding to craft and pass laws that safeguard our citizens without choking out our innovation and economic growth. This is about securing a future where America continues to lead, not follow.
Mission Cloud and CrowdStrike Announce Strategic Partnership
Posted in Commentary with tags CrowdStrike, Mission Cloud on April 19, 2024 by itnerd

Mission Cloud, a US-based Amazon Web Services (AWS) Premier Tier Services Partner with a focus on cloud and AI, today announced a strategic partnership with CrowdStrike (Nasdaq: CRWD) to stop cloud breaches and secure global customers building their businesses on AWS.
Cloud intrusions have grown 75% in the past year, with adversaries breaking into customer environments in as little as two minutes. The lack of cloud-native security solutions and skilled personnel to operate them puts organizations at risk. Mission Cloud One is enhancing its comprehensive managed service for AWS optimization, operations and security by standardizing on the CrowdStrike Falcon® platform for CrowdStrike Falcon® Cloud Security, the industry’s only unified agent and agentless platform for code to cloud protection. The partnership also provides customers with access to CrowdStrike Falcon Complete Cloud Detection and Response (CDR) services, delivering 24/7 protection against cloud attacks.
Learn more about Mission Cloud and CrowdStrike’s partnership here.