Archive for AI

New Secure AI System Guidelines Agreed To By 18 Countries

Posted in Commentary with tags on November 27, 2023 by itnerd

The US and UK, along with 16 other countries, have jointly released secure AI system guidelines based on the principle that AI should be secure by design:

This document recommends guidelines for providers of any systems that use artificial intelligence (AI), whether those systems have been created from scratch or built on top of tools and services provided by others. Implementing these guidelines will help providers build AI systems that function as intended, are available when needed, and work without revealing sensitive data to unauthorised parties.

This document is aimed primarily at providers of AI systems who are using models hosted by an organisation, or are using external application programming interfaces (APIs). We urge all stakeholders (including data scientists, developers, managers, decision-makers and risk owners) to read these guidelines to help them make informed decisions about the design, development, deployment and operation of their AI systems.

Anurag Gurtu, Chief Product Officer, StrikeReady had this comment:

The recent secure AI system development guidelines released by the U.K., U.S., and other international partners are a significant move in enhancing cybersecurity in the field of artificial intelligence. These guidelines emphasize the importance of security outcomes for customers, incorporating transparency and accountability, and promoting a secure organizational structure. They focus on managing AI-related risks, requiring rigorous testing of tools before public release, and establishing measures to counteract societal harms, like bias. Additionally, the guidelines advocate a ‘secure by design’ approach covering all stages of AI development and deployment, and address the need to combat adversarial attacks targeting AI and machine learning systems, including prompt injection attacks and data poisoning.

The fact that 18 countries agreed on a common set of principles is great. The thing is that more nations have to follow suit. Otherwise you may still have AI that is closer to the “Terminator” end of the spectrum rather than being helpful and friendly.

UPDATE: Troy Batterberry, CEO and founder, EchoMark had this comment:

   “While logging and monitoring insider activities are important, we know they do not go nearly far enough to prevent insider leaks. Highly damaging leaks continue to happen at well-run government and commercial organizations all over the world, even with sophisticated monitoring activities in place. The leaker (insider) simply feels they can hide in the anonymity of the group and never be caught. An entirely new approach is required to help change human behavior. Information watermarking is one such technology that can help keep private information private.”
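EchoMark doesn’t disclose how its watermarking works, so as a purely illustrative sketch of the general idea: each recipient’s copy of a document carries an invisible, recipient-specific mark, so a leaked copy can be traced back. The toy Python example below encodes a hypothetical 16-bit recipient ID as zero-width Unicode characters; the scheme is an assumption for illustration, not EchoMark’s method.

```python
# Toy sketch of per-recipient text watermarking with zero-width Unicode
# characters. Purely illustrative; NOT EchoMark's (proprietary) method.
# Each copy of a document gets an invisible, recipient-specific mark.

ZERO_WIDTH = {"0": "\u200b", "1": "\u200c"}  # zero-width space / non-joiner
REVERSE = {v: k for k, v in ZERO_WIDTH.items()}

def embed_watermark(text: str, recipient_id: int) -> str:
    """Hide a 16-bit recipient ID as zero-width characters after the first word."""
    bits = format(recipient_id, "016b")
    mark = "".join(ZERO_WIDTH[b] for b in bits)
    head, _, tail = text.partition(" ")
    return head + mark + " " + tail

def extract_watermark(text: str) -> int | None:
    """Recover the recipient ID from a (possibly leaked) copy."""
    bits = "".join(REVERSE[ch] for ch in text if ch in REVERSE)
    return int(bits, 2) if len(bits) == 16 else None

if __name__ == "__main__":
    marked = embed_watermark("Quarterly results are under embargo.", recipient_id=42)
    print(marked == "Quarterly results are under embargo.")  # False: visually identical, invisibly marked
    print(extract_watermark(marked))                          # 42
```

A real product would need marks that survive reformatting and retyping (for example, subtle wording or spacing variations per copy), which is considerably harder than this sketch.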

28 Countries Agree To Collaborate On ‘Frontier AI’

Posted in Commentary with tags on November 3, 2023 by itnerd

This week, the UK hosted the AI Safety Summit in Bletchley Park where 28 countries, including the US, the UK, China, six EU member states, Brazil, Nigeria, Israel and Saudi Arabia, signed the Bletchley Declaration, an agreement establishing shared responsibility for the opportunities, risks and needs for global action on systems that pose urgent and dangerous risks.

“Many risks arising from AI are inherently international in nature, and so are best addressed through international cooperation,” reads a public statement published by the UK Department for Science, Innovation and Technology. 

The declaration lays out the first two steps of their agenda for addressing ‘frontier AI’ risk:

  1. Identify shared concerns for AI safety risks by building a “scientific and evidence-based understanding of the risks, and sustaining that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies.”
  2. Build respective risk-based policies to ensure safety in light of identified risks, collaborating “while recognizing our approaches may differ based on national circumstances and applicable legal frameworks.” This includes: increased transparency by developers, tools for safety testing and evaluation metrics, and developing relevant public sector capabilities and scientific research.  

Ted Miracco, CEO, Approov Mobile Security had this comment:

   “The Bletchley Declaration demonstrates a more proactive approach by governments, signaling a possible lesson learned from past failures to regulate social media giants. By addressing AI risks collectively, nations aim to stay ahead of tech behemoths, recognizing the potential for recklessness. This commitment to collaboration underscores some determination to safeguard the future by shaping responsible AI development and mitigating potential harms.

   “We all certainly harbor doubts regarding the ability of governments and legal systems to match the speed and avarice of the tech industry, but the Bletchley Declaration signifies a crucial departure from the laissez-faire approach witnessed with social media companies. We should applaud the proactive effort of these governments to avoid idle passivity and assertively engage in shaping AI’s trajectory, while prioritizing public safety and responsible governance over unfettered market forces.”


Emily Phelps, Director, Cyware adds this comment:
 
   “Recognizing that AI-driven risks cross borders, it is imperative for countries to join forces, ensuring that advancements in AI are accompanied by safety measures that protect all societies equally. The focus on a scientific and evidence-based approach to understanding these risks will enhance our collective intelligence and response capabilities. While the nuances of national circumstances will lead to varied approaches, the shared commitment to transparency, rigorous testing, and bolstered public sector capabilities is a reassuring move towards a safer AI-driven future for everyone.”

It’s a good thing in my mind that there’s cross-border collaboration on AI, as the potential for it to help mankind is great. But the potential for it to harm mankind is also great. Thus rules, boundaries and limitations need to be wrapped around it so that the latter does not happen.

White House Issues Executive Order on Safe, Secure, and Trustworthy AI

Posted in Commentary with tags on October 30, 2023 by itnerd

Today the White House announced an executive order aimed at mitigating AI risks:

As part of the Biden-Harris Administration’s comprehensive strategy for responsible innovation, the Executive Order builds on previous actions the President has taken, including work that led to voluntary commitments from 15 leading companies to drive safe, secure, and trustworthy development of AI.

The link above has a very extensive document that is worth reading as it goes into a lot of detail about what this executive order covers. John Gunn, CEO, Token had this comment:

The aim is noble and the need is certain, but the implementation will be challenging considering that Generative AI technology is already being used extensively by hackers and enemy states to attack US companies with phishing emails that are nearly impossible to detect. Most AI technologies that deliver benefits can also be used for harm, so almost every company developing AI solutions needs to make the required disclosure today.

This is likely to be a hot topic today. Thus as I get other reactions to this, I will post them here.

UPDATE: Anurag Gurtu, CPO, StrikeReady had this comment:

As President Biden prepares to leverage emergency powers for AI risk mitigation, it’s a clear signal of the critical juncture at which we find ourselves in the evolution of AI technology. The administration’s decision reflects a growing awareness of the transformative impact AI has on every sector, and the need for robust frameworks that govern its ethical use and development.

This initiative isn’t just about preemptive measures against potential misuse; it’s a foundational move towards establishing a global standard for AI that aligns with our values of safety, security, and trustworthiness. It’s an acknowledgment that while AI presents unparalleled opportunities for advancement, it also brings challenges that must be addressed to protect societal welfare and national interests.

For businesses and developers, this move will likely mean a more stringent regulatory environment, but also a clearer direction for innovation within safe and secure boundaries. It’s time for all stakeholders to engage in dialogue and contribute to a balanced approach that fosters innovation while safeguarding against the risks that have kept policymakers and citizens alike vigilant.

UPDATE #2: George McGregor, VP, Approov had this to say:

If you market a cybersecurity solution in the USA, you had better read through this Executive Order (EO) – it may affect your business! If your solution is deterministic in nature, then life will be easier, but if you are promoting the use of AI in your product, then life may well get more complicated: Not only do you need to demonstrate to customers that false positives and management overhead due to AI are not an issue, but with these new guidelines, the AI methods you employ will be under the microscope also.

Here are some other comments, each followed by the relevant text from the EO:

First – if you are an AI-based cybersecurity vendor, you may be expected to share your test results with the government. The success or failure of a security solution, by its very nature, “poses a risk to national security”.

  • From the EO text:  Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model and must share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public.

Second, attestation techniques will become critical – this is already true for mobile app code which can easily be reverse-engineered and replicated unless steps are taken. Fingerprinting techniques used in mobile may be applicable here.

  • From the EO text: Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world.
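The EO leaves the actual mechanisms to the Department of Commerce, but one standard building block behind “authenticating official content” is easy to sketch: attach a cryptographic tag to each message so recipients can detect forgery or tampering. The minimal Python example below uses an HMAC over a shared secret; a government-scale deployment would more likely use public-key signatures and PKI, so treat this as a concept illustration only.

```python
# Minimal sketch of content authentication via an HMAC tag. This is one
# standard building block, not the mechanism the EO mandates (the Order
# leaves specifics to the Department of Commerce). Real deployments would
# favor public-key signatures (e.g., Ed25519) so anyone can verify.
import hashlib
import hmac

SECRET_KEY = b"demo-key-not-for-production"  # hypothetical key, for illustration only

def sign(message: bytes) -> str:
    """Return a hex HMAC-SHA256 tag binding the message to the key holder."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Constant-time check that the tag matches, i.e. the content is authentic."""
    return hmac.compare_digest(sign(message), tag)

if __name__ == "__main__":
    announcement = b"Official notice: benefits portal maintenance on Saturday."
    tag = sign(announcement)
    print(verify(announcement, tag))                 # True: authentic
    print(verify(announcement + b" (edited)", tag))  # False: tampering detected
```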

A program to use AI to eliminate vulnerabilities is a very noble pursuit, but it should not be viewed as a replacement for good software development discipline and implementing run-time visibility and protection.

  • From the EO text:  Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software, building on the Biden-Harris Administration’s ongoing AI Cyber Challenge. Together, these efforts will harness AI’s potentially game-changing cyber capabilities to make software and networks more secure.

The use of AI will not only be a power for good. Hackers will seek to use these techniques also, and there will inevitably be an arms race between security teams and hackers. To start with, however, the cost of entry for bad actors will be high in terms of knowledge required and complexity of the task, and this will mean that well-funded “nation state” teams will be the primary users of AI for nefarious purposes. National security teams will need to have the resources to track and counter these efforts.

  • From the EO text: Order the development of a National Security Memorandum that directs further actions on AI and security, to be developed by the National Security Council and White House Chief of Staff. This document will ensure that the United States military and intelligence community use AI safely, ethically, and effectively in their missions, and will direct actions to counter adversaries’ military use of AI.

Malwarebytes Discovers That The Bing AI Chatbot Delivers Ads With Malicious Links

Posted in Commentary with tags , on September 29, 2023 by itnerd

Malwarebytes has research on Bing and its AI chatbot being leveraged by threat actors to deliver ads with malicious links. In short, it’s a malvertising campaign in which attackers take over the ad accounts of legitimate businesses to create targeted malicious ads:

Ads can be inserted into a Bing Chat conversation in various ways. One of those is when a user hovers over a link and an ad is displayed first before the organic result. In the example below, we asked where we could download a program called Advanced IP Scanner used by network administrators. When we place our cursor over the first sentence, a dialog appears showing an ad and the official website for this program right below it:

Users have the choice of visiting either link, although the first one may be more likely to be clicked on because of its position. Even though there is a small ‘Ad’ label next to this link, it would be easy to miss and view the link as a regular search result.

Upon clicking the first link, users are taken to a website (mynetfoldersip[.]cfd) whose purpose is to filter traffic and separate real victims from bots, sandboxes, or security researchers. It does that by checking your IP address, time zone, and various other system settings such as web rendering that identifies virtual machines.

Real humans are redirected to a fake site (advenced-ip-scanner[.]com) that mimics the official one while others are sent to a decoy page. The next step is for victims to download the supposed installer and run it.

The MSI installer contains three different files but only one is malicious and is a heavily obfuscated script:

Upon execution, the script reaches out to an external IP address (65.21.119[.]59) presumably to announce itself and receive an additional payload.

Lovely.
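One cheap defensive check against the lookalike domain in this campaign is worth sketching: compare any domain a user is about to visit against a list of known legitimate names and flag near-misses. In the toy Python example below, the known-good list and the distance threshold are assumptions for illustration; production tools also weigh domain age, registrar data, homoglyphs and certificates.

```python
# Toy lookalike-domain (typosquat) detector using Levenshtein edit distance.
# The known-good list and threshold are illustrative assumptions; real
# tools combine many more signals (domain age, registrar, homoglyphs).

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

KNOWN_GOOD = ["advanced-ip-scanner.com"]  # seed with brands you care about

def is_suspicious(domain: str, max_dist: int = 2) -> bool:
    """Flag domains close to, but not exactly matching, a known-good name."""
    return any(0 < edit_distance(domain.lower(), good) <= max_dist
               for good in KNOWN_GOOD)

if __name__ == "__main__":
    print(is_suspicious("advenced-ip-scanner.com"))  # True: one letter off
    print(is_suspicious("advanced-ip-scanner.com"))  # False: exact match
```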

Emily Phelps, Director, Cyware had this comment:

   “With advancing technologies and a rapidly evolving digital landscape, threat actors are able to exploit human trust in established entities at scale. Addressing these risks requires more than awareness training and traditional security controls. End users must understand the risks and proceed with caution, but platforms must also bolster their security posture to adapt to these threats. It’s critical to employ continuous and rigorous testing to ensure they remain a step ahead of potential online adversaries.”

Add this to the attack surface that you have to defend yourself against, as I didn’t have “malware delivered by ads on an AI chatbot” on my cybersecurity BINGO card. But I should have expected it, as threat actors are getting very crafty these days.

Tech Leaders Make A Trip To Capitol Hill To Talk AI

Posted in Commentary with tags on September 14, 2023 by itnerd

Yesterday, the biggest names in tech made a trip to Capitol Hill for a closed-door summit on artificial intelligence:

Senate Majority Leader Chuck Schumer, D-N.Y., hosted the private AI Insight Forum in the grand Kennedy Caucus Room on Capitol Hill on Wednesday, as lawmakers sought advice from 22 AI tech giants, human rights and labor leaders about how government should regulate the new technology.

In addition to Musk, Meta CEO Zuckerberg and Microsoft co-founder Gates, ChatGPT-maker OpenAI CEO Sam Altman and Google CEO Sundar Pichai attended, as well as leaders from human rights, labor and entertainment groups.

And here’s what they allegedly said:

According to Schumer, every leader in the meeting raised their hand when asked if government should regulate AI.

“We got some consensus on some things … I asked everyone in the room, does government need to play a role in regulating AI and every single person raised their hand, even though they had diverse views,” Schumer told reporters. “So that gives us a message here that we have to try to act, as difficult as the process might be.”

That’s not the response I was expecting from them. But it likely is the right answer. Allen Drennan, Principal & Co-Founder, Cordoniq had this comment:

“The new privacy and security concerns of AI need to be carefully evaluated by regulators, or consumers could quickly find that every piece of data that has ever been provided to private companies and organizations is used in the training of AI models.  While this has clear benefits, such as applying AI to cold-case files in investigations, it could also be used to scrape all communications you have ever posted to the Internet, including social media, email cloud host providers and others, to gain a more exact profile of the consumer, on a mass basis. This type of advertiser information is invaluable which makes privacy regulations all that more important.”

Hopefully, there’s a thoughtful approach to AI that balances regulation with letting it do what it was designed to do. That way we can get the benefits without many of the risks.

California Adopts A Resolution That Encourages The Responsible Use Of AI

Posted in Commentary with tags on August 17, 2023 by itnerd

California recently adopted an AI Resolution that’s in alignment with the Biden Administration’s guidelines for responsible AI. Spearheaded by Sen. Dodd, this resolution reinforces California’s influential role in shaping regulatory frameworks:

Senate Concurrent Resolution 17 highlights the significant challenges posed by the use of technology, data, and automated systems, including incidents of unsafe, ineffective, or biased systems and unchecked data collection that threatens privacy and opportunities. At the same time, the resolution recognizes the potential benefits of AI, including increased efficiency in agriculture and data analysis that could revolutionize industries.

The resolution affirms the state’s commitment to President Biden’s vision for safe AI and the principles outlined in the White House Office of Science and Technology Policy’s “Blueprint for an AI Bill of Rights.” The five principles — Safe and Effective Systems; Algorithmic Discrimination Protections; Data Privacy; Notice and Explanation; and Human Alternatives, Consideration and Fallback — will guide the design, use, and deployment of automated systems in California.

SCR 17 was approved Monday in the Assembly with a unanimous voice vote after being previously approved by the full Senate. It does not require the governor’s signature.

Ani Chaudhuri, CEO, Dasera had this comment:

Today, with the California Legislature adopting the nation’s first AI-drafted resolution, we’re witnessing a pivotal moment in the intersection of technology, governance, and society. As someone deeply entrenched in data security and governance, this resolution isn’t just a piece of legislative text; it’s a testament to how our society is evolving and the responsibilities we must shoulder as we traverse this path.

  1. Safe and Effective Systems: AI’s promise lies in its ability to improve our world, but this can only be realized if the systems themselves are safe and effective. Any AI system must be meticulously tested in controlled and real-world scenarios. But it’s more than just about ensuring systems don’t malfunction—it’s about ensuring they function in a way that aligns with our societal values and norms.
  2. Algorithmic Discrimination Protections: Biases in AI systems have made headlines repeatedly, tarnishing this transformative tech’s image. Eliminating biases isn’t a ‘nice-to-have’—it’s a fundamental necessity. Every stage of AI development, from data collection to model training, should be scrutinized to ensure no group is unduly disadvantaged.
  3. Data Privacy: In an era where personal data is often compared to oil in its value, safeguarding this data is paramount. While AI systems thrive on data, we must implement stringent measures to ensure data privacy isn’t compromised. From where data is stored to how it’s accessed to who has rights to it—every aspect needs to be governed with the utmost responsibility.
  4. Notice and Explanation: The days of black-box algorithms must end. Stakeholders, from the public to policymakers, should clearly understand how AI decisions are made. It’s not about revealing trade secrets but ensuring transparency so these systems can be trusted.
  5. Human Alternatives, Consideration, and Fallback: As magnificent as AI is, it isn’t infallible. There should always be a human touchpoint—a fallback mechanism—that can intervene when things go awry. Automated systems should be designed with the understanding that humans are the ultimate safeguard.

Sen. Dodd’s resolution serves as a blueprint for California, the entire nation, and potentially the world. The principles highlighted are about safe AI deployment and ensuring AI uplifts society without trampling on individual rights.

To my colleagues in the tech industry: let’s take this as a call to action. We have the responsibility not only to innovate but to ensure that our innovations are imbued with integrity, respect, and a profound sense of duty to the betterment of society.

AI has the potential to transform society. But it needs guardrails around it. Otherwise the potential exists for it to run amok and harm society instead of helping it. Which is why I feel that this resolution is a great move.

DARPA Launches $20 Million AI Cyber Challenge To Hunt & Fix AI Vulnerabilities

Posted in Commentary with tags , on August 10, 2023 by itnerd

The US Defense Advanced Research Projects Agency (DARPA) has just launched the AI Cyber Challenge – a new competition that challenges the nation’s top AI and cybersecurity talent to automatically find and fix software vulnerabilities and defend critical infrastructure from cyberattacks. The Challenge offers $20 million in prize money.

AIxCC will allow two tracks for participation: the Funded Track and the Open Track. Funded Track competitors will be selected from proposals submitted to a Small Business Innovation Research solicitation. Up to seven small businesses will receive funding to participate. Open Track competitors will register with DARPA via the competition website and will proceed without DARPA funding. 

Teams on both tracks will participate in a qualifying event during the semifinal phase, where the top-scoring teams (up to 20) will be invited to participate in the semifinal competition. Of these, the top-scoring teams (up to five) will receive monetary prizes and continue to the final phase and competition. The top three scoring competitors in the final competition will receive additional monetary prizes.

Chloé Messdaghi, Head of Threat Research, Protect AI, said: 

“We applaud the administration for its recognition of the crucial role the hacker community can play in identifying, codifying and closing the major security gaps that AI and ML platforms embody, foster or, at the least, don’t address.

“Protect AI has just launched the Huntr platform to pay security researchers for discovering vulnerabilities in open-source software, focusing exclusively on AI/ML threat research. We launched Huntr specifically because we noticed two things. 

“First, people in security aren’t aware of all of the vulnerabilities inherent in AI & ML or that improper usage can create and amplify. A platform that helps bug bounty hunters find vulns is critically important to helping drive new generations of safe, secure and effective AI-driven technologies and systems. 

“Also, we are offering educational content for security professionals to help them learn and grow as a community through our MLSecOps community platform.  

“Again, it’s great to see the Administration, the cybersecurity community and the hacker community come together to help ensure a safe future. The hacker community has been committed to and contributing to exactly this type of future for the last two decades.”

This is a good initiative by DARPA as we need to get ahead of any AI related vulnerabilities before a threat actor takes advantage of them. Hopefully we see more of this.

New AI Attack Tools Are Emerging… And That Should Concern You

Posted in Commentary with tags on July 26, 2023 by itnerd

There’s a new AI tool called FraudGPT, discussed in this Netenrich report, “FraudGPT: The Villain Avatar of ChatGPT,” alongside the recent appearance of WormGPT, which is being used to launch BEC attacks, as discussed in this SlashNext report. Both reports are very much worth reading, as AI is clearly being used for evil.

I did a Q&A on this with David Mitchell, Chief Technical Officer, HYAS and got this commentary: 

  • Any differences & similarities of these tools/offerings?

“The only difference will be the goal of the particular groups using these platforms — some will use them for phishing/financial fraud and others will use them to attempt to gain access to networks via other means.”

  • Are these just riding on the ChatGPT brand, or are they new AI iterations?  

“GPT stands for “Generative Pre-trained Transformer”, which is a specific model of AI use case, not a brand per se. The dark versions being sold may have different sets of training and data sizes, but the overarching point is that they have no guardrails or ethics ingrained.”

  • Why now and will we see more of this attack vector? 

“As with any new technology, soon after it is released, nefarious actors begin adopting it in order to learn its weaknesses to exploit. In the case of GPT, nefarious actors are adopting the technology and enhancing it for their needs.”

  • Can these AI assisted attacks be detected by currently installed defenses? 

“Historically, these attacks could often be detected via security solutions like anti-phishing and protective DNS platforms. With the evolution happening within these dark GPTs, organizations will need to be extra vigilant with their email & SMS messages because they provide an ability for a non-native English speaker to generate well-formed text as a lure.”
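To make the protective DNS idea concrete: such platforms simply refuse to resolve known-bad domains, cutting off a phishing lure before it can connect. Here is a toy Python sketch of that core behaviour; the blocklist entry is an indicator from the Malwarebytes post earlier in this archive, and real services use continuously updated threat intelligence and typically sinkhole the query rather than raise an error.

```python
# Toy sketch of the core idea behind protective DNS: consult a blocklist
# before resolving. Real platforms use live threat-intelligence feeds and
# sinkhole bad queries instead of raising an error.
import socket

BLOCKLIST = {"advenced-ip-scanner.com"}  # example IOC from the Malwarebytes research above

def protected_resolve(domain: str) -> str:
    """Resolve a domain only if it is not on the blocklist."""
    if domain.lower().rstrip(".") in BLOCKLIST:
        raise PermissionError(f"blocked by policy: {domain}")
    return socket.gethostbyname(domain)

if __name__ == "__main__":
    try:
        protected_resolve("advenced-ip-scanner.com")
    except PermissionError as err:
        print(err)  # blocked by policy: advenced-ip-scanner.com
```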

These new AI-based attack tools are going to make life miserable for defenders. Hopefully, defences can be developed that make such tools less dangerous.

The EU Passes Draft Legislation To Govern AI

Posted in Commentary with tags , on June 14, 2023 by itnerd

The news is out today that the EU Parliament has moved one step closer to putting legislation into force to govern AI:

The European parliament approved rules aimed at setting a global standard for the technology, which encompasses everything from automated medical diagnoses to some types of drone, AI-generated videos known as deepfakes, and bots such as ChatGPT.

MEPs will now thrash out details with EU countries before the draft rules – known as the AI act – become legislation.

“AI raises a lot of questions socially, ethically, economically. But now is not the time to hit any ‘pause button’. On the contrary, it is about acting fast and taking responsibility,” said Thierry Breton, the European commissioner for the internal market.

A rebellion by centre-right MEPs in the EPP political grouping over an outright ban on real-time facial recognition on the streets of Europe failed to materialise, with a number of politicians attending Silvio Berlusconi’s funeral in Italy.

The final vote was 499 in favour and 28 against with 93 abstentions.

Craig Burland, CISO, Inversion6 had this comment in relation to this news:

Let the debate begin! Similar to data privacy years ago, the EU has just taken a position at the far end of the spectrum to frame the parameters of the discussion. Putting aside the many challenges of enforcement as well as the ubiquitous use of AI in modern technology projects, the EU has documented intriguing concepts centered on ensuring the validity of the content and proper use cases. Contrast this with Google’s pronouncement last week that focused primarily on protecting the technology itself.  What was announced today will shift and transition as the debate plays out in the media and behind closed doors. But, in planting this flag, the EU has started what will be a fascinating dialog that affects businesses and individuals alike.

I’m honestly not sure how this will shake out. But given that the EU has already produced regulations like GDPR, this draft legislation is likely to shape the discussion about AI and how it should be used. Thus everyone needs to pay attention to this.

UPDATE: Eduardo Azanza, CEO, Veridas adds this:

     “The passing of the Artificial Intelligence Act is a significant moment and should not be underestimated at all. For technologies such as AI and biometrics to ever be successful, it is essential that there is trust from businesses and the wider public.

It’s critical that we have established agreed standards and deliverables to ensure that AI and collected biometric data are used responsibly and ethically. There must be clearly defined responsibilities and chains of accountability for all parties, as well as a high degree of transparency for the processes involved. 

As the UK and US look to introduce their own Artificial Intelligence Act, it is essential they work with the EU to define minimum global standards – only then can we guarantee the ethical use of AI and biometrics.

Ultimately, it’s businesses’ duty to responsibly and ethically use AI technology, as its capability to replicate human abilities raises huge concerns. Organizations need to be conducting periodic diagnoses on the ethical principles of AI. Confidence in AI security technology must be based on transparency and compliance with legal, technical, and ethical standards.”

UPDATE #2: Ani Chaudhuri, CEO, Dasera had this comment:

European Union lawmakers have taken a decisive step in shaping the future of artificial intelligence by adopting the E.U. AI Act. This landmark legislation challenges the power of American tech giants and sets unprecedented restrictions on AI usage. This move is long overdue as it prioritizes data security and protects individuals from potential harm caused by unchecked AI systems.

The E.U. AI Act introduces essential guardrails to prevent deploying AI systems that pose an “unacceptable level of risk.” By banning tools like predictive policing and social scoring systems, the legislation safeguards against intrusive and discriminatory practices. Furthermore, it limits high-risk AI applications, such as those that could influence elections or jeopardize people’s health.

One significant aspect of the legislation is its focus on generative AI, including systems like ChatGPT. Requiring content generated by such systems to be labeled and mandating the publication of summaries of copyrighted data used for training promotes transparency and protects intellectual property rights. These measures address growing concerns and ensure responsible AI development.

While some voices express concern over the potential impact on AI development and adoption, the European Parliament’s determination to lead the global dialogue on responsible AI should be applauded.  European lawmakers have proactively developed comprehensive AI legislation that accounts for evolving technologies and potential risks.

The E.U.’s commitment to data privacy, tech competition, and social media regulation aligns with its ambitious AI regulations. This cohesive framework ensures that European companies adhere to high standards, promoting consumer trust and privacy. It also strengthens Europe’s position as the global tech regulator, setting precedents that will shape international tech policies.

As Europe leads in establishing AI standards, the United States must step up its efforts to keep pace. Congress must pass comprehensive legislation addressing AI and online privacy. Falling behind Europe risks hindering innovation and surrendering the opportunity to lead the global debate on AI governance.

We believe that responsible AI development should be a global endeavor. As Europe sets the bar, it is incumbent upon the United States to catch up and play an active role in shaping AI policies. We can strike the right balance and ensure AI benefits society by fostering innovation while safeguarding individual rights.

While concerns and challenges exist, the E.U. AI Act represents a significant step toward building a responsible and secure AI ecosystem. Europe’s commitment to protecting individuals and upholding data security sets an example for the world. As the AI landscape continues to evolve, we must embrace robust regulations that foster trust, innovation, and global cooperation.

G7 Officials To Discuss AI Regulation Today

Posted in Commentary with tags on May 30, 2023 by itnerd

Officials from the G7 group of nations are meeting today to discuss AI regulation:

G7 government officials will hold the first working-level AI meeting on May 30 and consider issues such as intellectual property protection, disinformation and how the technology should be governed, Japan’s communications minister, Takeaki Matsumoto, said.

The meeting comes as tech regulators worldwide gauge the impact of popular AI services like ChatGPT by Microsoft-backed OpenAI.

The EU is coming closer to enacting the world’s first major legislation on AI, inspiring other governments to consider what rules should be applied to AI tools.

Japan, as this year’s chair of G7, “will lead the G7 discussion on responsive use of the generative AI technology”, Matsumoto said, adding the forum hoped to come up with suggestions for heads of state by year-end.

Kevin Bocek, VP Ecosystem and Community at Venafi starts out with this comment:
 
“We are still in the early stages of understanding the impact of AI on both businesses and the public, and it’s a constantly moving target, with new use cases and products being announced on a daily basis. So, it is very encouraging to see world leaders putting AI at the heart of discussions and starting to think about the best way to move forwards. As part of this process, it is vital that they recognize that smart organizations will not slow down the innovation that we’re seeing with Generative AI, and that the results will be overwhelmingly positive. However, there are known and unknown risks that need to be skillfully mitigated.

As such, the priority for regulations must be to contain risks while encouraging exploration, curiosity and trial and error. But any steps to achieve this can’t be approached with a “set and forget” mentality. Regulators need to establish policies and guidelines that are reviewed and refreshed frequently as we explore the power of AI in more depth. This means the governments will need to constantly collaborate and communicate with experts in the field to avoid neglect and exploitation.”

Ani Chaudhuri, CEO, Dasera follows up with this:

“The forthcoming G7 meeting on AI regulation highlights a critical juncture in our technological evolution. It’s encouraging to see top-level discussions taking place around intellectual property protection, disinformation, and governance in AI – topics that are integral to the development and responsible use of AI tools.

The creation of the “Hiroshima AI process” demonstrates a welcome commitment from global leaders to address the challenges of AI technology. It is a positive step towards fostering a future where AI aligns with our shared democratic values and upholds a high standard of trustworthiness.

However, while discussions on international standards are crucial, equally important is the ability to adapt these standards as the AI landscape continues to evolve rapidly. For AI to be truly beneficial, we must focus not only on legislation but also on transparency, user control, and education about these technologies.

Moreover, AI ethics should not be an afterthought. Building ethical considerations into AI systems from the outset is vital to ensure the technology respects privacy, maintains security, and protects human rights. This, in my opinion, should be at the forefront of G7 discussions. I look forward to the outcomes of these important conversations and the future of AI regulation.”

I will be interested to see what comes out of these meetings and if companies in the AI space abide by any regulation that appears. That’s the key as rules are meaningless if they are not adhered to.