There’s an open letter, signed by over 1,200 people, asking for an immediate six-month halt on AI technology more powerful than GPT-4. The open letter was created by an organization called the Future of Life Institute, whose aim is to “steer transformative technology towards benefitting life and away from extreme large-scale risks.” Among those who signed are Steve Wozniak, who co-founded Apple, and Elon Musk, the clown prince of tech who runs Twitter, SpaceX, and Tesla, among other companies. This brings up all sorts of questions about AI and how it should be used.
I have a number of comments on AI in general and specifically this open letter. The first is from Baber Amin, COO, Veridium:
Thoughts on AI development and application:
“For great leaps in technology, we often need to establish safety measures and regulations – for example, when we split the atom to harness nuclear power. While nuclear energy has provided many advantages in fields like medicine and energy, it has also given rise to the terrible threat of nuclear weapons. However, the difficulty of accessing and managing nuclear materials has provided a natural form of protection.
“AI model development and training, on the other hand, lack these same natural barriers, making it easier to develop without appropriate safety measures in place. That’s why it’s important to take a step back and create responsible systems that are accurate, transparent, trustworthy, and potentially even capable of self-regulation.
Risks for companies using the OpenAI API.
“As organizations turn to OpenAI’s API for their artificial intelligence needs, it’s important to keep in mind the following considerations:
- Data Privacy: OpenAI’s models are trained on large amounts of data, which until recently could have included sensitive information from organizations. Starting March 1, OpenAI will no longer use customer data submitted via API to train their models without explicit consent. However, the data will still be kept for 30 days for monitoring purposes.
- Bias: OpenAI’s training data comes from the real world, which means it may contain biases that are reflected in their models. Organizations using OpenAI should be aware of this possibility and take corrective measures.
- Misinformation and Fake Data: OpenAI’s generative models can create text that is indistinguishable from real data, which could be used to generate fake news or blog posts. Organizations need to be cautious of inadvertently spreading misinformation.
- Phishing Attacks: OpenAI’s generative models can also be used to create sophisticated phishing attacks or deepfakes, which could lead to propaganda and possible slander.
- Spam: Lastly, OpenAI’s generative AI can be used to generate spam, resulting in unsolicited emails or social media posts that cause reputational damage to an organization.
“By keeping these considerations in mind, organizations can use OpenAI’s API effectively and responsibly.
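To make the data-privacy consideration above concrete, here is a minimal sketch of one common mitigation: redacting obvious PII, such as email addresses and phone numbers, from a prompt before it is ever submitted to a third-party API. This is my own illustration, not anything from OpenAI’s documentation; the regex patterns and function name are assumptions, and a real deployment would use a vetted PII-detection service rather than two regular expressions.

```python
import re

# Hypothetical pre-submission filter: strip obvious PII from text
# before it is sent to any third-party API. Patterns are illustrative;
# production systems should use a vetted PII-detection service.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b")

def redact_pii(text: str) -> str:
    """Replace email addresses and North American phone numbers
    with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com or 555-123-4567 about the contract."
print(redact_pii(prompt))
# → Contact Jane at [EMAIL] or [PHONE] about the contract.
```

A filter like this also limits what ends up in the 30-day monitoring retention window mentioned above, since the sensitive values never leave the organization in the first place.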
“On the security front, OpenAI does have the following security controls in place, all of which seem very reasonable:
- Data encryption at rest and in transit.
- Access control around data and models.
- Monitoring for suspicious activity.
- Regular patching to apply the latest security fixes.
- Auditing of access to data and models.
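The last two controls, monitoring and auditing, are often implemented as structured logging wrapped around any code path that touches sensitive data or models. The sketch below is purely my own illustration of that pattern in Python; the function and logger names are invented and do not describe how OpenAI actually implements these controls.

```python
import functools
import logging

# Hypothetical audit trail: record every access to a sensitive resource.
# Names here are illustrative, not taken from any real OpenAI component.
logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

def audited(resource: str):
    """Decorator that records who accessed which resource and whether
    the call succeeded, before returning the wrapped function's result."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(user, *args, **kwargs):
            try:
                result = fn(user, *args, **kwargs)
                audit_log.info("user=%s resource=%s outcome=ok", user, resource)
                return result
            except Exception:
                audit_log.warning("user=%s resource=%s outcome=denied", user, resource)
                raise
        return inner
    return wrap

@audited("training-data")
def read_training_data(user):
    # Toy access-control check standing in for a real policy engine.
    if user != "alice":
        raise PermissionError(user)
    return ["example record"]

print(read_training_data("alice"))
```

Logging both successful and denied accesses is what makes the trail useful for the “monitoring for suspicious activity” control as well: a spike of denied entries for one user is exactly the signal a monitoring system would alert on.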
Matt Mullins, Senior Security Researcher, Cybrary is next:
“There are a number of benefits to AI and its applications that are being explored. While a great deal of efficiency is created there, other, non-beneficial aspects arise, the most profound being the disruption of a number of industries in ways that were not easily predictable. Things typically associated with “human-ness” are being found to be more vulnerable than other aspects.
“For example… art, music, essays, and other things that were established hallmarks of human creativity are being significantly destabilized as AIs are able to quickly ingest, seed, and innovate in ways that were not previously predicted.
“Aside from these disruptions, the potential for attacks on baseline ‘truth’ has been established as well. Consider the modification of voice, visual imagery, and video, which can all be done so effectively that a Zoom call could potentially be spoofed. The ramifications of such realistic mimicry are a direct threat to the establishment of truth and, subsequently, to the democratic process itself.
“Overall, AI is removing the entry-level aspects of IT and security. Beyond that entry level, the veil seems easy to pierce with a critical eye for understanding code. The bigger issue is the capability AI presents to disrupt how we see the world.”
David Maynor, Senior Director of Threat Intelligence, Cybrary has this to add:
Addressing major tech figures calling for a six-month AI moratorium:
“It is funny that technologists who have been disruptive to industries and use mantras like “fail fast” are aligning against AI research. While conspiracy theories point to worrying about a Skynet-like AI turning on humans, I personally feel that AI availability will disrupt the disruptors and make their fiefdoms ripe for replacement.”
It will be interesting to see how this plays out. I for one do not see what I call the AI arms race stopping anytime soon unless governments get interested in slowing down AI development.
UPDATE: Dr. Chenxi Wang (she/her), Founder and General Partner, Rain Capital added this comment:
“A pause in the AI fever is needed, not just from the business standpoint, but also from the point of view of security and privacy. Until we understand how to assess data privacy, model integrity, and the impact of adversarial data, continued development of AI may lead to unintended social, technical, and cyber consequences.”
ByteDance Appears To Have A Backup Plan For A TikTok Ban… And It’s Called Lemon8
Posted in Commentary with tags TikTok on March 30, 2023 by itnerd

The United States and various other countries are looking to ban TikTok because it is seen as a tool of the Chinese Communist Party to spread misinformation and gather information on people that it can use against them. That’s sent TikTok’s parent company ByteDance looking for options to keep itself alive. Over the last month, the company has started to push an app called Lemon8 towards US audiences. This app seems to be a version of Instagram that allows users to share photos. It doesn’t appear to have video support, but I am sure that’s coming. And the thing is that TikTok users can link their TikTok accounts to Lemon8. Apparently that’s happening, with the biggest influencers on TikTok not only linking their accounts to Lemon8, but actively promoting the app. Thus it’s no shock that the app is getting downloads as a result. In fact, according to TechCrunch, Lemon8 is already in the top ten of the US version of the Apple App Store. I will point out, though, that the app has been around since 2020 and is extremely popular in other parts of the world, even though it is not yet available in Canada as I type this.
But I have to ask the question: is this really a backup plan? I ask because I’ve written about the RESTRICT Act, which if passed would give the US the ability to ban apps like TikTok. The way the law is written, it’s beyond a safe bet that Lemon8 would meet the same fate. So why should ByteDance bother with this? My guess is that ByteDance was originally going to go after Instagram with this app, but they appear to have now shifted it to being a haven for TikTok users in the short term if TikTok were to be banned, forcing the US government and other governments into a game of “whack a mole”. Also, during the disastrous (for ByteDance) hearings last week on Capitol Hill, ByteDance sent an army of influencers to the Hill to lobby politicians against banning TikTok. I’m also guessing that shifting those influencers to Lemon8 is a means to show how powerful that community is and that Congress can’t ignore them.
It will be interesting to see how this plays out, as I have to believe that it’s only a matter of time before the RESTRICT Act passes Congress and lands on the President’s desk. And once he signs it, then it’s game on in terms of what happens to ByteDance and all their apps.