Archive for AI

California Adopts A Resolution That Encourages The Responsible Use Of AI

Posted in Commentary with tags on August 17, 2023 by itnerd

California recently adopted an AI Resolution that’s in alignment with the Biden Administration’s guidelines for responsible AI. Spearheaded by Sen. Dodd, this resolution reinforces California’s influential role in shaping regulatory frameworks:

Senate Concurrent Resolution 17 highlights the significant challenges posed by the use of technology, data, and automated systems, including incidents of unsafe, ineffective, or biased systems and unchecked data collection that threatens privacy and opportunities. At the same time, the resolution recognizes the potential benefits of AI, including increased efficiency in agriculture and data analysis that could revolutionize industries.

The resolution affirms the state’s commitment to President Biden’s vision for safe AI and the principles outlined in the White House Office of Science and Technology Policy’s “Blueprint for an AI Bill of Rights.” The five principles — Safe and Effective Systems; Algorithmic Discrimination Protections; Data Privacy; Notice and Explanation; and Human Alternatives, Consideration and Fallback — will guide the design, use, and deployment of automated systems in California.

SCR 17 was approved Monday in the Assembly with a unanimous voice vote after being previously approved by the full Senate. It does not require the governor’s signature.

Ani Chaudhuri, CEO, Dasera had this comment:

Today, with the California Legislature adopting the nation’s first AI-drafted resolution, we’re witnessing a pivotal moment in the intersection of technology, governance, and society. As someone deeply entrenched in data security and governance, this resolution isn’t just a piece of legislative text; it’s a testament to how our society is evolving and the responsibilities we must shoulder as we traverse this path.

  1. Safe and Effective Systems: AI’s promise lies in its ability to improve our world, but this can only be realized if the systems themselves are safe and effective. Any AI system must be meticulously tested in controlled and real-world scenarios. But it’s more than just about ensuring systems don’t malfunction—it’s about ensuring they function in a way that aligns with our societal values and norms.
  2. Algorithmic Discrimination Protections: Biases in AI systems have made headlines repeatedly, tarnishing this transformative tech’s image. Eliminating biases isn’t a ‘nice-to-have’—it’s a fundamental necessity. Every stage of AI development, from data collection to model training, should be scrutinized to ensure no group is unduly disadvantaged.
  3. Data Privacy: In an era where personal data is often compared to oil in its value, safeguarding this data is paramount. While AI systems thrive on data, we must implement stringent measures to ensure data privacy isn’t compromised. From where data is stored to how it’s accessed to who has rights to it—every aspect needs to be governed with the utmost responsibility.
  4. Notice and Explanation: The days of black-box algorithms must end. Stakeholders, from the public to policymakers, should clearly understand how AI decisions are made. It’s not about revealing trade secrets but ensuring transparency so these systems can be trusted.
  5. Human Alternatives, Consideration, and Fallback: As magnificent as AI is, it isn’t infallible. There should always be a human touchpoint—a fallback mechanism—that can intervene when things go awry. Automated systems should be designed with the understanding that humans are the ultimate safeguard.

Sen. Dodd’s resolution serves as a blueprint for California, the entire nation, and potentially the world. The principles highlighted are about safe AI deployment and ensuring AI uplifts society without trampling on individual rights.

To my colleagues in the tech industry: let’s take this as a call to action. We have the responsibility not only to innovate but to ensure that our innovations are imbued with integrity, respect, and a profound sense of duty to the betterment of society.

AI has the potential to transform society. But it needs guardrails around it. Otherwise the potential exists for it to run amok and harm society instead of helping it. Which is why I feel that this resolution is a great move.

DARPA Launches $20 Million AI Cyber Challenge To Hunt & Fix AI Vulnerabilities

Posted in Commentary with tags on August 10, 2023 by itnerd

The US Defense Advanced Research Projects Agency (DARPA) has just launched the AI Cyber Challenge (AIxCC), a new competition that challenges the nation’s top AI and cybersecurity talent to automatically find and fix software vulnerabilities and defend critical infrastructure from cyberattacks. The Challenge offers $20 million in prize money.

AIxCC offers two tracks for participation: the Funded Track and the Open Track. Funded Track competitors will be selected from proposals submitted to a Small Business Innovation Research solicitation; up to seven small businesses will receive funding to participate. Open Track competitors will register with DARPA via the competition website and will proceed without DARPA funding.

Teams on both tracks will participate in a qualifying event during the semifinal phase, where the top-scoring teams (up to 20) will be invited to the semifinal competition. Of these, the top-scoring teams (up to five) will receive monetary prizes and continue to the final phase and competition. The top three scoring competitors in the final competition will receive additional monetary prizes.

Chloé Messdaghi, Head of Threat Research, Protect AI, said: 

“We applaud the administration for its recognition of the crucial role the hacker community can play in identifying, codifying and closing the major security gaps that AI and ML platforms embody, foster, or at the least don’t address.

“Protect AI has just launched the Huntr platform to pay security researchers for discovering vulnerabilities in open-source software, focusing exclusively on AI/ML threat research. We launched Huntr specifically because we noticed two things. 

“First, people in security aren’t aware of all of the vulnerabilities inherent in AI and ML, or of the vulnerabilities that improper usage can create and amplify. A platform that helps bug bounty hunters find vulnerabilities is critically important to helping drive new generations of safe, secure and effective AI-driven technologies and systems.

“Second, we are offering educational content for security professionals to help them learn and grow as a community through our MLSecOps community platform.

“Again, it’s great to see the Administration, the cybersecurity community and the hacker community come together to help ensure a safe future. The hacker community has been committed to and contributing to exactly this type of future for the last two decades.”

This is a good initiative by DARPA, as we need to get ahead of any AI-related vulnerabilities before a threat actor takes advantage of them. Hopefully we see more of this.

New AI Attack Tools Are Emerging… And That Should Concern You

Posted in Commentary with tags on July 26, 2023 by itnerd

There are two new AI attack tools making the rounds: FraudGPT, discussed in this Netenrich report called “FraudGPT: The Villain Avatar of ChatGPT,” and WormGPT, which is being used to launch business email compromise (BEC) attacks, as discussed in this SlashNext report. Both reports are very much worth reading, as AI is clearly being used for evil.

I did a Q&A on this with David Mitchell, Chief Technical Officer, HYAS and got this commentary: 

  • Any differences & similarities of these tools/offerings?

“The only difference will be the goal of the particular groups using these platforms — some will use them for phishing/financial fraud and others will use them to attempt to gain access to networks via other means.”

  • Are these just riding on the ChatGPT brand, or are they new AI iterations?  

“GPT stands for ‘Generative Pre-trained Transformer’, which is a specific type of AI model, not a brand per se. The dark versions being sold may have different training sets and data sizes, but the overarching point is that they have no guardrails or ethics ingrained.”

  • Why now and will we see more of this attack vector? 

“As with any new technology, nefarious actors begin adopting it soon after it is released in order to learn its weaknesses and exploit them. In the case of GPT, nefarious actors are adopting the technology and enhancing it for their needs.”

  • Can these AI assisted attacks be detected by currently installed defenses? 

“Historically, these attacks could often be detected via security solutions like anti-phishing and protective DNS platforms. With the evolution happening within these dark GPTs, organizations will need to be extra vigilant with their email & SMS messages, because these tools give a non-native English speaker the ability to generate well-formed text as a lure.”
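
To make Mitchell’s point about protective DNS concrete, here’s a minimal sketch of the kind of domain-reputation lookup such platforms perform under the hood. This is an illustration only, not how any particular vendor’s product works: it assumes the dnspython package and uses the public Spamhaus DBL zone purely as an example blocklist (note that Spamhaus rejects queries arriving via large public resolvers, so results depend on which resolver you use).

```python
# Minimal sketch: check a domain against a DNS-based domain blocklist (DNSBL).
# Assumes dnspython is installed (pip install dnspython). The Spamhaus DBL is
# used purely as an example; protective DNS products use their own feeds.
import dns.resolver

DBL_ZONE = "dbl.spamhaus.org"

def domain_is_listed(domain: str) -> bool:
    """Return True if the domain appears on the blocklist."""
    try:
        answers = dns.resolver.resolve(f"{domain}.{DBL_ZONE}", "A")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False  # NXDOMAIN means "not listed"
    for rdata in answers:
        # Listed domains resolve to a 127.0.1.x code whose last octet
        # indicates the category (spam, phishing, malware, botnet C2, ...).
        print(f"{domain} listed with return code {rdata.address}")
    return True

if __name__ == "__main__":
    # dbltest.com is the DBL's own always-listed test entry.
    for suspect in ["example.com", "dbltest.com"]:
        print(suspect, "->", "LISTED" if domain_is_listed(suspect) else "clean")
```

A protective DNS platform effectively does this at resolution time for every lookup a client makes, blocking or sinkholing domains with bad reputations before a phishing link ever loads.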

These new AI-based attack tools are going to make life miserable for defenders. Hopefully defences can evolve to make them less dangerous.

The EU Passes Draft Legislation To Govern AI

Posted in Commentary with tags on June 14, 2023 by itnerd

The news is out today that the EU Parliament has moved one step closer to putting legislation into force to govern AI:

The European parliament approved rules aimed at setting a global standard for the technology, which encompasses everything from automated medical diagnoses to some types of drones, AI-generated videos known as deepfakes, and bots such as ChatGPT.

MEPs will now thrash out details with EU countries before the draft rules – known as the AI act – become legislation.

“AI raises a lot of questions socially, ethically, economically. But now is not the time to hit any ‘pause button’. On the contrary, it is about acting fast and taking responsibility,” said Thierry Breton, the European commissioner for the internal market.

A rebellion by centre-right MEPs in the EPP political grouping over an outright ban on real-time facial recognition on the streets of Europe failed to materialise, with a number of politicians attending Silvio Berlusconi’s funeral in Italy.

The final vote was 499 in favour and 28 against with 93 abstentions.

Craig Burland, CISO, Inversion6 had this comment in relation to this news:

Let the debate begin! Similar to data privacy years ago, the EU has just taken a position at the far end of the spectrum to frame the parameters of the discussion. Putting aside the many challenges of enforcement, as well as the ubiquitous use of AI in modern technology projects, the EU has documented intriguing concepts centered on ensuring the validity of content and proper use cases. Contrast this with Google’s pronouncement last week, which focused primarily on protecting the technology itself. What was announced today will shift and transition as the debate plays out in the media and behind closed doors. But in planting this flag, the EU has started what will be a fascinating dialogue that affects businesses and individuals alike.

I’m honestly not sure how this will shake out. But given that the EU has come out with regulations like GDPR, this draft legislation is likely to shape the discussion about AI and how it should be used. Thus everyone needs to pay attention to this.

UPDATE: Eduardo Azanza, CEO, Veridas adds this:

“The passing of the Artificial Intelligence Act is a significant moment and should not be underestimated. For technologies such as AI and biometrics to ever be successful, it is essential that there is trust from businesses and the wider public.

It’s critical that we have established agreed standards and deliverables to ensure that AI and collected biometric data are used responsibly and ethically. There must be clearly defined responsibilities and chains of accountability for all parties, as well as a high degree of transparency for the processes involved. 

As the UK and US look to introduce their own Artificial Intelligence Act, it is essential they work with the EU to define minimum global standards – only then can we guarantee the ethical use of AI and biometrics.

Ultimately, it’s businesses’ duty to responsibly and ethically use AI technology, as its capability to replicate human abilities raises huge concerns. Organizations need to be conducting periodic diagnoses on the ethical principles of AI. Confidence in AI security technology must be based on transparency and compliance with legal, technical, and ethical standards.”

UPDATE #2: Ani Chaudhuri, CEO, Dasera had this comment:

European Union lawmakers have taken a decisive step in shaping the future of artificial intelligence by adopting the E.U. AI Act. This landmark legislation challenges the power of American tech giants and sets unprecedented restrictions on AI usage. This move is long overdue as it prioritizes data security and protects individuals from potential harm caused by unchecked AI systems.

The E.U. AI Act introduces essential guardrails to prevent deploying AI systems that pose an “unacceptable level of risk.” By banning tools like predictive policing and social scoring systems, the legislation safeguards against intrusive and discriminatory practices. Furthermore, it limits high-risk AI applications, such as those that could influence elections or jeopardize people’s health.

One significant aspect of the legislation is its focus on generative AI, including systems like ChatGPT. Requiring content generated by such systems to be labeled and mandating the publication of summaries of copyrighted data used for training promotes transparency and protects intellectual property rights. These measures address growing concerns and ensure responsible AI development.

While some voices express concern over the potential impact on AI development and adoption, the European Parliament’s determination to lead the global dialogue on responsible AI should be applauded. European lawmakers have proactively developed comprehensive AI legislation that accounts for evolving technologies and potential risks.

The E.U.’s commitment to data privacy, tech competition, and social media regulation aligns with its ambitious AI regulations. This cohesive framework ensures that European companies adhere to high standards, promoting consumer trust and privacy. It also strengthens Europe’s position as the global tech regulator, setting precedents that will shape international tech policies.

As Europe leads in establishing AI standards, the United States must step up its efforts to keep pace. Congress must pass comprehensive legislation addressing AI and online privacy. Falling behind Europe risks hindering innovation and surrendering the opportunity to lead the global debate on AI governance.

We believe that responsible AI development should be a global endeavor. As Europe sets the bar, it is incumbent upon the United States to catch up and play an active role in shaping AI policies. We can strike the right balance and ensure AI benefits society by fostering innovation while safeguarding individual rights.

While concerns and challenges exist, the E.U. AI Act represents a significant step toward building a responsible and secure AI ecosystem. Europe’s commitment to protecting individuals and upholding data security sets an example for the world. As the AI landscape continues to evolve, we must embrace robust regulations that foster trust, innovation, and global cooperation.

G7 Officials To Discuss AI Regulation Today

Posted in Commentary with tags on May 30, 2023 by itnerd

Members of the G7 group of nations are meeting today to discuss AI regulation:

G7 government officials will hold the first working-level AI meeting on May 30 and consider issues such as intellectual property protection, disinformation and how the technology should be governed, Japan’s communications minister, Takeaki Matsumoto, said.

The meeting comes as tech regulators worldwide gauge the impact of popular AI services like ChatGPT by Microsoft-backed OpenAI.

The EU is coming closer to enacting the world’s first major legislation on AI, inspiring other governments to consider what rules should be applied to AI tools.

Japan, as this year’s chair of G7, “will lead the G7 discussion on responsive use of the generative AI technology”, Matsumoto said, adding the forum hoped to come up with suggestions for heads of state by year-end.

Kevin Bocek, VP Ecosystem and Community at Venafi starts out with this comment:
 
“We are still in the early stages of understanding the impact of AI on both businesses and the public, and it’s a constantly moving target, with new use cases and products being announced on a daily basis. So, it is very encouraging to see world leaders putting AI at the heart of discussions and starting to think about the best way to move forwards. As part of this process, it is vital that they recognize that smart organizations will not slow down the innovation that we’re seeing with Generative AI, and that the results will be overwhelmingly positive. However, there are known and unknown risks that need to be skillfully mitigated.

As such, the priority for regulations must be to contain risks while encouraging exploration, curiosity, and trial and error. But any steps to achieve this can’t be approached with a “set and forget” mentality. Regulators need to establish policies and guidelines that are reviewed and refreshed frequently as we explore the power of AI in more depth. This means governments will need to constantly collaborate and communicate with experts in the field to avoid neglect and exploitation.”

Ani Chaudhuri, CEO, Dasera follows up with this:

“The forthcoming G7 meeting on AI regulation highlights a critical juncture in our technological evolution. It’s encouraging to see top-level discussions taking place around intellectual property protection, disinformation, and governance in AI – topics that are integral to the development and responsible use of AI tools.

The creation of the “Hiroshima AI process” demonstrates a welcome commitment from global leaders to address the challenges of AI technology. It is a positive step towards fostering a future where AI aligns with our shared democratic values and upholds a high standard of trustworthiness.

However, while discussions on international standards are crucial, equally important is the ability to adapt these standards as the AI landscape continues to evolve rapidly. For AI to be truly beneficial, we must focus not only on legislation but also on transparency, user control, and education about these technologies.

Moreover, AI ethics should not be an afterthought. Building ethical considerations into AI systems from the outset is vital to ensure the technology respects privacy, maintains security, and protects human rights. This, in my opinion, should be at the forefront of G7 discussions. I look forward to the outcomes of these important conversations and the future of AI regulation.”

I will be interested to see what comes out of these meetings and if companies in the AI space abide by any regulation that appears. That’s the key as rules are meaningless if they are not adhered to.

The White House Makes An Announcement On How They’re Going To Promote Responsible AI Development

Posted in Commentary with tags on May 4, 2023 by itnerd

The White House today announced what they are going to do to promote responsible AI innovation. This is timely, as this is a top-of-mind issue at the moment. Here’s what the goal is:

AI is one of the most powerful technologies of our time, but in order to seize the opportunities it presents, we must first mitigate its risks. President Biden has been clear that when it comes to AI, we must place people and communities at the center by supporting responsible innovation that serves the public good, while protecting our society, security, and economy. Importantly, this means that companies have a fundamental responsibility to make sure their products are safe before they are deployed or made public.

There’s a lot more to this and I encourage you to read the full details at the link above.

I have two comments on this, starting with Ani Chaudhuri, CEO, Dasera:

In light of the recent announcement made by the Biden-Harris Administration, it is evident that the US government has taken some essential steps to promote responsible AI innovation while protecting Americans’ rights and safety. While these actions are commendable, it is crucial to emphasize that data security plays a vital role in ensuring AI’s responsible and ethical use.

As the Administration engages with CEOs of leading AI companies, it is essential to remember that responsible and ethical AI development requires robust security measures. Data security companies play a significant part in this landscape, working diligently to protect sensitive information and mitigate risks associated with AI technologies.

The new investments in AI research and development, public assessments of generative AI systems, and policies to ensure responsible AI use by the US government are all necessary steps to create a safer AI ecosystem. However, investing in data security infrastructure and prioritizing collaboration with data security companies is vital. In doing so, the government and AI industry can ensure comprehensive protection against risks and potential harm to individuals and society.

Furthermore, AI developers must be held accountable for the security of their products, emphasizing their responsibility to make their technology safe before deployment or public use. This includes proper data management, secure storage, and measures to prevent unauthorized access to sensitive information.

The Biden-Harris Administration’s actions to promote responsible AI innovation are crucial for a safer future. However, it is equally important to acknowledge the role of data security companies in this landscape and foster partnerships to ensure a comprehensive and cohesive approach to AI-related risks and opportunities.

This is followed up by a comment from Craig Burland, CISO, Inversion6:

There’s no putting the AI genie back in the bottle. Two years ago, if your product didn’t have AI it was considered last-generation. From SIEM to EDR, products had to have AI/ML. Now, ChatGPT is evoking fears pulled from science fiction movies.

Generative AI (GAI) is an evolution of technology that started when we jumped into Big Data. GAI has tremendous potential and troubling downsides. But the government will be hard-pressed to curtail the building of new models, slow expanding capabilities, or ban the pursuit of new use cases. These models could proliferate anywhere on the globe. Clever humans will find new ways to use this tool – for good and bad. Any regulation will largely be ceremonial and practically unenforceable.

I think that this is a good initiative by the White House. But as always, I await meaningful results, as I feel that we’re currently at a tipping point with AI. Which in my mind implies that things can go in a great direction, or they could go off the rails. And in either case, there would be no way back.

Wozniak, Musk & More Call For AI Development Pause

Posted in Commentary with tags on March 29, 2023 by itnerd

There’s an open letter signed by over 1,200 people asking for an immediate six-month halt on the development of AI technology more powerful than GPT-4. The open letter was created by an organization called the Future of Life Institute, whose aim is to “steer transformative technology towards benefitting life and away from extreme large-scale risks.” Among those who signed are Steve Wozniak, who co-founded Apple, and Elon Musk, the clown prince of tech and the guy who runs Twitter, SpaceX, and Tesla among other companies. This brings up all sorts of questions about AI and how it should be used.

I have a number of comments on AI in general and specifically this open letter. The first is from Baber Amin, COO, Veridium:

Thoughts on AI development and application:

“For great leaps in technology, we often need to establish safety measures and regulations – for example, when we split the atom to harness nuclear power. While nuclear energy has provided many advantages in fields like medicine and energy, it has also given rise to the terrible threat of nuclear weapons. However, the difficulty of accessing and managing nuclear materials has provided a natural form of protection.

“AI model development and training, on the other hand, lack these same natural barriers, making it easier to develop without appropriate safety measures in place. That’s why it’s important to take a step back and create responsible systems that are accurate, transparent, trustworthy, and potentially even capable of self-regulation.

Risks for companies using the OpenAI API.

“As organizations turn to OpenAI’s API for their artificial intelligence needs, it’s important to keep in mind the following considerations:

  1. Data Privacy: OpenAI’s models are trained on large amounts of data, which until recently could have included sensitive information from organizations. Starting March 1, OpenAI will no longer use customer data submitted via API to train their models without explicit consent. However, the data will still be kept for 30 days for monitoring purposes.
  2. Bias: OpenAI’s training data comes from the real world, which means it may contain biases that are reflected in their models. Organizations using OpenAI should be aware of this possibility and take corrective measures.
  3. Misinformation and Fake Data: OpenAI’s generative models can create text that is indistinguishable from real data, which could be used to generate fake news or blog posts. Organizations need to be cautious of inadvertently spreading misinformation.
  4. Phishing Attacks: OpenAI’s generative models can also be used to create sophisticated phishing attacks or deepfakes, which could lead to propaganda and possible slander.
  5. Spam: Lastly, OpenAI’s generative AI can be used to generate spam, resulting in unsolicited emails or social media posts and causing reputational damage to an organization.

“By keeping these considerations in mind, organizations can use OpenAI’s API effectively and responsibly.

“As for security protections, OpenAI does have the following security controls in place, all of which seem very reasonable:

  • Data encryption at rest and in transit.
  • Access control around data and models.
  • Monitoring for suspicious activity.
  • Regular application of the latest security patches.
  • Auditing of access to data and models.”
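
To make the data privacy point on that list concrete, here’s a minimal, hypothetical sketch of the kind of client-side safeguard an organization might add before prompts ever reach the API. It assumes the openai Python package as it existed in early 2023 (the ChatCompletion interface); the redaction patterns and helper names are my illustration, not an OpenAI-provided feature, and a real deployment would use a proper PII-detection library.

```python
# Minimal sketch: redact obvious sensitive strings before sending a prompt to
# the OpenAI API. The patterns here are illustrative only; real PII detection
# needs a much broader rule set or a dedicated library.
import re
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; load from a secret store in practice

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),  # e-mail addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),       # US-style SSNs
]

def redact(text: str) -> str:
    """Replace anything matching a redaction pattern with a placeholder token."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

def ask(prompt: str) -> str:
    """Send a redacted prompt to the chat completions endpoint."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": redact(prompt)}],
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Summarize this ticket from jane.doe@example.com about SSN 123-45-6789."))
```

Redacting on the client side means the sensitive values never leave the organization at all, which matters even with the 30-day retention window for API data mentioned above.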

Matt Mullins, Senior Security Researcher, Cybrary is next:

“There are a number of benefits to AI and its applications that are being explored. But while a great deal of efficiency is created, other, less beneficial aspects arise, the most profound being the disruption of a number of industries in ways that were not easily predictable. Things typically associated with ‘human-ness’ are proving more vulnerable than other aspects.

“For example, art, music, essays, and other established tropes of human creativity are being significantly destabilized as AIs are able to quickly ingest, seed, and innovate in ways that were not previously predicted.

“Aside from these disruptions, the potential for attacks on baseline ‘truth’ has been established as well. Consider the modification of voice, visual imagery, and video, which can all be done so effectively that a Zoom call could potentially be spoofed. The ramifications of such realistic mimicry pose direct threats to the establishment of truth and, subsequently, to the democratic process itself.

“Overall, AI is removing the entry-level barriers to IT and security. Beyond that entry level, the veil seems easy to pierce with a critical eye and an understanding of code. The bigger issue is the capability AI presents to disrupt how we see the world.”

David Maynor, Senior Director of Threat Intelligence, Cybrary has this to add:

Addressing major tech figures calling for a six-month AI moratorium:

“It is funny that technologists who have been disruptive to industries and use mantras like ‘fail fast’ are aligning against AI research. While conspiracy theories point to worries about a Skynet-like AI turning on humans, I personally feel that AI availability will disrupt the disruptors and make their fiefdoms ripe for replacement.”

It will be interesting to see how this plays out. I for one do not see the AI arms race, as I call it, stopping anytime soon unless governments take an interest in slowing down AI development.

UPDATE: Dr. Chenxi Wang (she/her), Founder and General Partner, Rain Capital added this comment:

A pause in the AI fever is needed, not just from the business standpoint, but also from the point of view of security and privacy. Until we understand how to assess data privacy, model integrity, and the impact of adversarial data, continued development of AI may lead to unintended social, technical, and cyber consequences. 

A Computer Passes The Turing Test For The First Time

Posted in Commentary with tags on June 8, 2014 by itnerd

What is the Turing Test you ask? It’s a test developed by Alan Turing to test the ability of a computer to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. The test does not check the ability to give the correct answer to questions; it checks how closely the answer resembles typical human answers. The second that computers can pass this test, humans might be in trouble.

A Russian-based team is said to be the first to create a program that passed the Turing Test. Here’s what The Independent had to say:

Eugene Goostman, a computer programme made by a team based in Russia, succeeded in a test conducted at the Royal Society in London. It convinced 33 per cent of the judges that it was human, said academics at the University of Reading, which organised the test.

It is thought to be the first computer to pass the iconic test. Though other programmes have claimed successes, those included set topics or questions in advance.

Now before someone sends me an e-mail asking me how long before we get something like Skynet, the computer that tried to exterminate humans in the Terminator movies, or HAL 9000 from 2001: A Space Odyssey, we’re still a ways away from that. The computer that beat the Turing Test is good at conversational logic and nothing deeper than that. Still, this will likely be remembered as the moment that artificial intelligence started to catch up to what humans can do.

Great, just great.