
Wozniak, Musk & More Call For AI Development Pause


There’s an open letter signed by over 1200 people asking for an immediate six-month pause on the training of AI systems more powerful than GPT-4. The open letter was created by an organization called the Future of Life Institute, whose stated aim is to “steer transformative technology towards benefitting life and away from extreme large-scale risks.” Among those who signed are Apple co-founder Steve Wozniak and Elon Musk, the clown prince of tech who runs Twitter, SpaceX, and Tesla among other companies. This brings up all sorts of questions about AI and how it should be used.

I have a number of comments on AI in general and on this open letter specifically. The first is from Baber Amin, COO, Veridium:

Thoughts on AI development and application:

“For great leaps in technology, we often need to establish safety measures and regulations – for example, when we split the atom to harness nuclear power. While nuclear energy has provided many advantages in fields like medicine and energy, it has also given rise to the terrible threat of nuclear weapons. However, the difficulty of accessing and managing nuclear materials has provided a natural form of protection.

“AI model development and training, on the other hand, lack these same natural barriers, making it easier to develop without appropriate safety measures in place. That’s why it’s important to take a step back and create responsible systems that are accurate, transparent, trustworthy, and potentially even capable of self-regulation.

Risks for companies using the OpenAI API:

      “As organizations turn to OpenAI’s API for their artificial intelligence needs, it’s important to keep in mind the following considerations:

  1. Data Privacy: OpenAI’s models are trained on large amounts of data, which until recently could have included sensitive information from organizations. Starting March 1, OpenAI will no longer use customer data submitted via API to train their models without explicit consent. However, the data will still be kept for 30 days for monitoring purposes.
  2. Bias: OpenAI’s training data comes from the real world, which means it may contain biases that are reflected in their models. Organizations using OpenAI should be aware of this possibility and take corrective measures.
  3. Misinformation and Fake Data: OpenAI’s generative models can create text that is indistinguishable from real data, which could be used to generate fake news or blog posts. Organizations need to be cautious of inadvertently spreading misinformation.
  4. Phishing Attacks: OpenAI’s generative models can also be used to create sophisticated phishing attacks or deepfakes, which could lead to propaganda and possible slander.
  5. Spam: Lastly, OpenAI’s generative AI can be used to generate spam, resulting in unsolicited emails or social media posts, causing reputational damage to an organization.

     “By keeping these considerations in mind, organizations can use OpenAI’s API effectively and responsibly.

      “As for security protections, OpenAI does have a number of security controls in place, which all seem very reasonable.”
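To make the first of Amin’s considerations a little more concrete, here is a minimal sketch of how an organization might scrub obviously sensitive data before a prompt ever reaches OpenAI’s API. This is only an illustration under assumptions: the redaction patterns, the redact() helper, and the model name are placeholders I chose, not anything Amin or OpenAI recommends, and a real deployment would need far broader coverage.

    import re
    import openai  # assumes the 2023-era openai Python package and an API key in OPENAI_API_KEY

    # Hypothetical redaction rules; real deployments need far more coverage than this.
    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    }

    def redact(text):
        # Replace anything that matches a pattern with a labelled placeholder
        # so the sensitive value never leaves the organization.
        for label, pattern in PATTERNS.items():
            text = pattern.sub("[" + label + " REDACTED]", text)
        return text

    prompt = "Summarize this support ticket from jane.doe@example.com, callback 555-123-4567."

    response = openai.ChatCompletion.create(   # call style used by the 2023-era library
        model="gpt-3.5-turbo",                 # example model name
        messages=[{"role": "user", "content": redact(prompt)}],
    )
    print(response["choices"][0]["message"]["content"])

The point is simply that data minimization happens on the organization’s side before the API call, regardless of OpenAI’s retention policy.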

Matt Mullins, Senior Security Researcher, Cybrary is next:

   “There are a number of benefits to AI and its applications that are being explored. But while a great deal of efficiency is created there, other, non-beneficial aspects arise, the most profound being the disruption of a number of industries in ways that were not easily predictable. Things typically associated with “human-ness” are being found to be more vulnerable than other aspects.

   “For example, art, music, essays, and other things long established as the domain of human creativity are being significantly destabilized as AIs are able to quickly ingest, seed, and innovate in ways that were not previously predicted.

   “Aside from these disruptions, the potential for attacks on baseline ‘truth’ has been established as well. Consider the modification of voice, visual imagery, and video, which can all be done so effectively that a Zoom call could potentially be spoofed. The ramifications of such realistic mimicry pose direct threats to established notions of truth and, subsequently, to the democratic process itself.

   “Overall, AI is removing the entry-level aspects of IT and security. Beyond that entry level, the veil seems easy to pierce with a critical eye for understanding code. The bigger issue is the capability AI presents to disrupt how we see the world.”

David Maynor, Senior Director of Threat Intelligence, Cybrary has this to add:

Addressing major tech figures calling for a six-month AI moratorium:

   “It is funny that technologists who have been disruptive to industries and use mantras like “fail fast” are aligning against AI research. While conspiracy theories point to worries about a Skynet-like AI turning on humans, I personally feel that AI availability will disrupt the disruptors and make their fiefdoms ripe for replacement.”

It will be interesting to see how this plays out. I for one do not see what I call the AI arms race stopping anytime soon unless governments take an interest in slowing down AI development.

UPDATE: Dr. Chenxi Wang (she/her), Founder and General Partner, Rain Capital added this comment:

“A pause in the AI fever is needed, not just from the business standpoint, but also from the point of view of security and privacy. Until we understand how to assess data privacy, model integrity, and the impact of adversarial data, continued development of AI may lead to unintended social, technical, and cyber consequences.”
