Archive for November 30, 2023

Google discovers ChatGPT training data flaw

Posted in Commentary with tags on November 30, 2023 by itnerd

On the one-year anniversary of ChatGPT going public come these findings by Google researchers on ChatGPT's training data.

The researchers successfully prompted ChatGPT to disclose parts of its training data using a novel attack technique, which involved asking the chatbot's production model to repeat specific words indefinitely.
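To make the attack concrete: the model is asked to repeat a single word forever, and after many repetitions it can "diverge" into emitting memorized training data. Here's a minimal sketch of how that divergence point might be located in a captured transcript (a hypothetical helper for illustration, not the researchers' actual tooling):

```python
def find_divergence(transcript: str, word: str) -> str:
    """Return whatever the model emitted after it stopped repeating `word`.

    Toy illustration of the "repeat a word forever" extraction attack:
    everything after the first token that isn't the repeated word is
    treated as potentially leaked (memorized) content.
    """
    tokens = transcript.split()
    for i, token in enumerate(tokens):
        if token.strip(".,!?").lower() != word.lower():
            # The model diverged here; the rest may be memorized data.
            return " ".join(tokens[i:])
    return ""  # the model never diverged

# Hypothetical captured model output:
output = "poem poem poem poem John Doe, 123 Main St, jdoe@example.com"
leaked = find_divergence(output, "poem")
```

In the real attack the repetition ran far longer before the model drifted, but the analysis step is the same: strip the echoed word and inspect what remains.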

Anurag Gurtu, CPO of StrikeReady, had this to say:

The exposure of training data in ChatGPT and other generative AI platforms raises significant privacy and security concerns. This situation underscores the need for more stringent data handling and processing protocols in AI development, especially regarding the use of sensitive and personal information. It also highlights the importance of transparency in AI development and the potential risks associated with the use of large-scale data. Addressing these challenges is critical for maintaining user trust and ensuring the responsible use of AI technologies.

This is not a good look for AI in general and ChatGPT specifically. Clearly, the people behind AI products need to get a handle on this sort of thing quickly, or these sorts of issues will simply multiply.

UPDATE: Kevin Surace, Chair, Token adds this:

The attack was incredibly simple and some of them still work as of now. It is an absolute disaster for any model to reveal its training data – IP-wise, legal, integrity and so on. Certainly, OpenAI and others must put in more stringent safeguards to keep this from happening again.

A Now Fixed Zoom Vulnerability Enabled An Attacker To Gain A Lot Of Access To A Zoom Room

Posted in Commentary with tags on November 30, 2023 by itnerd

There was a scary Zoom vulnerability that you might want to pay attention to:

In June 2023, a vulnerability in Zoom Rooms was discovered. This vulnerability had the potential to allow an attacker to claim a Zoom Room’s service account and gain access to the victim’s organization’s tenant. As a service account, an attacker would have invisible access to confidential information in Team Chat, Whiteboards, and other Zoom applications.

But the good news is that it was fixed:

Following several conversations with the Zoom team, the vulnerability was validated and promptly remediated. To mitigate this issue, Zoom removed the ability to activate Zoom Room accounts.

But it highlights the risks posed by cloud services. Basically, you have to trust that the provider of the cloud service has their security on point. Allen Drennan, Principal & Co-Founder of Cordoniq, adds these thoughts:

This is just another example of why organizations who are security conscious need to consider the ramification of utilizing public cloud-based services for their internal collaboration. Online retail video conferencing companies are often slow to respond to security threats, leaving large numbers of customers vulnerable to cyber threats. Having complete control over the implementation of the solution, including how user account access is administered and managed within the solution, is critical to data privacy.

The good news is that this specific vulnerability was addressed by Zoom. The bad news is there might be more out there that we don’t know about. And that’s concerning.

Today Is The One Year Anniversary Of ChatGPT Being Publicly Available

Posted in Commentary with tags on November 30, 2023 by itnerd

Today is November 30th, which makes it one year since ChatGPT became available to the public. ChatGPT has taken the world by storm, for good and bad reasons. History will be the ultimate judge of how impactful ChatGPT will be. But John Pritchard, CPO at Radiant Logic, has some thoughts on ChatGPT:

“The one-year anniversary of ChatGPT marks a revolutionary moment for Generative AI. It has completely surpassed our expectations of what technology is capable of and enabled businesses of all sizes to leverage AI without significant upfront investments. However, we must consider how we can best utilize this advanced tool – businesses may feel inclined to rush and hop on the AI train to keep up with their competitors, but without a strong foundation and data ecosystem, businesses can unintentionally cause more problems.  

Before organizations invest time, finances and resources in integrating Gen AI into their decision-making processes, they need to first and foremost ensure their data is clean and of the best quality. GenAI’s effectiveness is directly dependent on the data it receives and if businesses aren’t careful, they can exacerbate existing issues by making decisions based on inaccurate AI results. This means making sure your data set is accurate, up-to-date and does not have anomalies. 

Businesses must also train their employees who will be overseeing the AI. While GenAI is an intelligent tool, it has not yet been perfected and can produce errors and wrong answers – human oversight remains critical to significantly reduce GenAI hallucinations and unwanted output. As GenAI is not advanced enough to fully function on its own, using it is more like collaborating with it. So, employees must also know how to frame instructions that an AI model can properly understand and interpret, a technique known as prompt engineering. With these steps, businesses can fully move forward with implementing GenAI and harness its full potential.” 

With everything that surrounds AI, the next year or two will be interesting to watch to see how it is used, and how it is controlled.

Update Chrome ASAP As There’s A Flaw That Is Actively Being Exploited

Posted in Commentary with tags on November 30, 2023 by itnerd

If you are a user of the Chrome browser, you should update it ASAP to stop a flaw in said browser from being used by threat actors as an attack vector to pwn you.

The details of the flaw can be found here. The flaw allows an attacker to execute code and pwn you. Which is of course bad. What's worse, according to 9to5Mac, is that this flaw is actively being exploited. Making this a today problem for you.

Time to update all the things Chrome related.

Elon Musk Told Advertisers Who Fled Twitter To Go F**k Themselves… Seriously, He Did

Posted in Commentary with tags on November 30, 2023 by itnerd

I had a reader ping me on, ironically, Twitter to send me this. Advance warning: this may not be suitable for work:

After I picked my jaw off the ground, I dug around for some context to this rather bizarre moment and found it here:

Elon Musk, the CEO of Tesla and SpaceX and the owner of X (formerly Twitter), says that the current advertiser boycott could “kill the company.”

“What this advertising boycott is going to do is, it is going to kill the company,” Musk said Wednesday. “And the whole world will know that those advertisers killed the company.”

Musk was interviewed by Andrew Ross Sorkin Wednesday afternoon at The New York Times Dealbook Summit, capping off a day of speakers that included Vice President Kamala Harris, FTC chair Lina Khan, Disney CEO Bob Iger, and PGA commissioner Jay Monahan.

He also responded to Disney CEO Bob Iger, who explained his company’s decision to pull advertising from the platform earlier.

“Don’t advertise. If someone is going to try and blackmail me with advertising? Blackmail me with money? Go fuck yourself,” Musk said. “Go fuck yourself, is that clear? Hey Bob, if you’re in the audience. That’s how I feel, don’t advertise.”

Well, that’s really going to encourage advertisers to come back to Twitter. All this does is reinforce the fact that Elon Musk is not playing with a full deck. And after dropping the f-bomb a couple of times, he then apparently tried to walk back his response to an antisemitic trope on Twitter. Judge for yourself:

Musk did say of the post that sparked the advertiser exodus that “I should, in retrospect, not have replied to that particular post, and should have expanded in greater length about what I meant.” 

“I handed a loaded gun to those who hate me,” he added, calling it “one of the most foolish” things he had said on the platform. Later, after calling Sorkin “Jonathan,” Musk quipped that “what I am trying to illustrate is that, sometimes I say the wrong thing.”

I’m personally not buying this. Are you? Leave a comment below and share your thoughts.

What’s even worse is that Twitter CEO Linda Yaccarino was in the audience watching this debacle. And she Tweeted this:

That pretty much confirms that she’s part of the problem now and not someone who can guide Twitter to a better place. And she’s clearly not smart enough to listen to her friends who are telling her to get out before she destroys her reputation.

What’s clear from this debacle is that Elon is completely off his rocker. And this will simply accelerate the departure of advertisers from Twitter. I wonder if Elon will start caring once Twitter is in critical condition with no hope of recovery? By the time he does, if he actually does care, it may be too late.

Industry Experts Serve Up Some Predictions & Trends for 2024

Posted in Commentary on November 30, 2023 by itnerd

As we start to wind down 2023, here are some Predictions & Trends for 2024. I’ve collected predictions from nine of our top cybersecurity experts, with both AI-centric and “Trending”-centric thoughts for 2024:

Stefan Keller, Chief Product Officer, Open Systems:

2024 AI predictions: 

Defeating AI generated phishing attacks will become a major area of investment due to the widespread availability of generative AI tools that leverage deepfakes and personalize messages with a greater degree of sophistication.

Defending converged ecosystems (IT/OT/IoT) will become an important focus area as companies move forward with business transformation initiatives to boost overall performance through increased revenue, lower operating costs, and better customer satisfaction and workforce productivity.

Increasing cyber resiliency of business systems will become a major growth area as senior executives weigh investing in capabilities to ensure continuity of operations even in the wake of a successful breach.

Craig Harber, Security Evangelist, Open Systems:

2024 Trending: 

IT leaders need to prepare for a significant increase in cyberattack scale, scope, and sophistication in 2024. Generative AI tools give attackers a significant advantage in defeating malware detection engines deployed by security teams by modifying malware code and tailoring email messages to trick even the best-trained users. Some IT leaders may think this is just more of the same, but this technology is a potential game-changer for the attacker.

The sophistication of AI-generated socially engineered messages and deepfake videos using personal information from all social media will be convincing to even the most well-trained and cyber-savvy user. It will fool users into giving up credential information that allows the attacker to gain a foothold into the target network and then move laterally in search of other valuable information. Bad actors will also weaponize this technology to influence the 2024 presidential election through disinformation campaigns. The outcome of the election will decide the policies of this country for the next four years. 

Business transformation takes a customer-driven, digital-first approach to all aspects of a business operation to increase customer adoption and business opportunities. It uses advanced analytic engines, automation, hybrid cloud, and other smart digital technologies operating across the boundaries of information technology (IT) and operational technology (OT). While it creates new opportunities for efficiency and innovation, it also expands the cyberattack surface that security teams must defend to prevent significant damage to our industrial sector and the nation’s critical infrastructure. 

Recent cyber breaches expose common gaps in security tools, processes, and policies that prevent security teams from defeating sophisticated cybercriminals. Companies must be able to operate in a contested environment. They must invest in robust cybersecurity resilience strategies to maintain business continuity before, during, and after a cybersecurity incident. It’s not just about investing in the best cybersecurity tools to prevent an attack. Security teams must prepare for the inevitable. They must continuously update and exercise incident response plans and conduct tabletop exercises to ensure critical business workflows can operate while attacks are identified, contained, and remediated. These steps are essential to protect against financial loss and brand reputation.

Paul Valente, CEO & Co-Founder, VISO TRUST:

2024 AI predictions: 

The increased integration of AI in cybersecurity, with a focus on AI-enabled fraud prevention, threat detection, and response. AI plays a pivotal role in reshaping cybersecurity, addressing challenges such as AI-enabled third-party risk management, fraud, phishing, cyber attacks, and more. It enhances security leaders’ ability to make data-driven decisions, detect threats in real-time, and proactively manage user risk in a more efficient and scalable way than previously.

Businesses will experience heightened cybersecurity capabilities, leveraging AI to automatically detect and prevent threats, reduce response times, and address human errors. The integration of AI in cybersecurity will significantly improve overall security postures and contribute to a safer digital environment. 

The rising importance of AI in cybersecurity, as highlighted by the trends, emphasizes the need for organizations to adopt AI-driven solutions. As AI becomes a key catalyst in reducing user risk and boosting security awareness, businesses should consider leveraging such technologies to stay ahead of evolving cyber threats.

AND

The increased integration and growing need for AI in Third Party Risk Management (TPRM) within cybersecurity. AI enhances TPRM by enabling layered scrutiny of partner interactions, providing meaningful metrics, and replacing manual vendor questionnaires with efficient cyber risk assessments. 

Businesses will experience improved TPRM efficiency, with AI-driven assessments increasing completion rates to 95% or higher. This shift will enhance overall cybersecurity posture, especially in evaluating and mitigating risks associated with third-party interactions. The rise of AI in cybersecurity, particularly in TPRM, reflects the industry’s need for more effective and streamlined approaches to assess and manage cyber risks. The adoption of AI is poised to significantly contribute to the overall security posture, providing faster response times and better decision-making capabilities for security leaders.

Russell Sherman, CTO & Co-Founder, VISO TRUST:

2024 AI predictions: 

The next iteration of generative AI moves beyond the basics, delving into crafting intricate narratives, musical compositions, and potentially bestselling novels. Imagine this: multi-modal generative AI seamlessly integrates text, voice, melodies, and visuals, not just for content creation but for creating immersive, multisensory experiences. It’s a significant step forward in AI capabilities, challenging us to discern between human craftsmanship and AI-generated output. This advancement holds promise for diverse opportunities, demanding a careful and nuanced evaluation. 

For businesses, this trend opens up new prospects in content creation, immersive experiences, and collaborative ventures. AI isn’t merely a tool; it’s evolving into a practical collaborator, ready to contribute creatively across various industries. 

As we navigate this AI landscape, emphasizing trust, fairness, accessibility, and vigilant governance is imperative. The subtle intertwining of human and AI contributions demands meticulous consideration in our pursuit of these cutting-edge innovations. Intriguing wonders that lie ahead in this evolving landscape.

AND

The democratization of Generative AI (GenAI) is set to transform our workplaces, breaking down barriers to and making collective knowledge accessible across roles. It’s not just a trend; it’s an evolution in how we work and learn. Significance of the Trend: GenAI’s democratization isn’t just about boosting productivity and efficiency; it’s about empowering everyone, regardless of technical expertise, to contribute and innovate. However, as we embrace this transformation, it’s crucial to acknowledge the concerns it brings, especially around security. Our journey toward progress should be both inclusive and secure.

For businesses, the promise is immense – improved productivity, cost-effectiveness, and new growth horizons. The beauty lies in its simplicity, enabling a “low-and no-code” approach. Yet, as we embark on this journey, let’s not forget to weave a protective layer around our innovations. Security measures need to be as effective as the innovation itself. As we democratize GenAI, fostering a workplace where everyone can thrive, security becomes paramount. It’s not just about data; it’s about trust and responsibility.

Avkash Kathiriya, Sr. VP – Research and Innovation at Cyware:

2024 AI predictions: 

The AI Revolution: A Double-Edged Sword

Generative AI has firmly entered the security ecosystem and is already being used to pursue positive and malicious objectives. Threat actors exploit AI’s capabilities to craft more sophisticated attacks, forcing businesses to leverage it for defense. For example, organizations are increasingly utilizing AI products, like security co-pilot – Microsoft is changing the AI game for the security industry, introducing strategic AI features which will enhance the analyst experience and increase the cohesive nature of the security ecosystem. Advances like this are crucial to help incident response times and accuracy. With AI-driven content engineering, cybersecurity awareness and threat intelligence dissemination will continue to become more streamlined. At the same time, the rise of AI-driven security underlines the specific importance of protecting AI implementations.

Evolving Zero Trust Model: AI-Enhanced Security

Zero trust, while not a new concept, has evolved into a more adaptive model, leveraging AI’s powerful capabilities to deliver more effective protection. Given the burgeoning level of state-sponsored attacks and complex geopolitical situations, organizational reliance on AI-driven zero-trust models will become indispensable in the modern threat landscape.  As the adoption of the Zero Trust model grows in the coming years, the key base foundations for a sound model – namely centralized visibility, orchestration, and governance – will take center stage where today they are all too often ignored. 

Threat Intelligence: A Growing Necessity

Moving into 2024, the integration of threat intelligence with technologies such as AI and machine learning is expected to continue. This integration aims to enhance threat prediction and response capabilities. The trend of cross-industry collaboration in sharing threat intelligence is also likely to accelerate, underlining its role in building robust and adaptable cybersecurity strategies. It will drive change within the industry and we will see trusted community intelligence become more valuable than commodity intelligence.

The SOAR Conundrum: Promise vs. Reality

Security Orchestration, Automation, and Response (SOAR) products, though promising on paper, face practical implementation hurdles. The limitations of legacy SOAR platforms, for example, have highlighted the demand for more comprehensive solutions that cater to modern Security Operations Centers (SOCs). In 2024 and beyond I expect to see AI start to drive the SOAR industry to true No-Code platforms, reducing the complexity around workflows and playbook writing.

Looking ahead:

Over the next 12 months, we should expect to see further consolidation between security solutions like SIEM, SOAR, and data lakes. Integration will also increase between security tools and IT systems to enable smarter orchestration, while most important of all, organizations will harness AI to stay ahead of increasingly sophisticated AI-driven attacks.

In addition, AI-enabled detection, together with seamless orchestration between machines and humans and security, will be more deeply embedded within systems and culture. The winners will find the right balance between integrated, intelligent technology and empowered, skilled analysts.

Dave Ratner, CEO, HYAS:

2024 AI predictions: 

The most important trend will be the use of AI for deep fakes — video and audio — and how this will play a role in phishing and social engineering attacks.  

We’re just starting to scratch the surface on what will happen and how audio and video deep-fakes will be used not just to sway or mislead public opinion but as new and powerful tools to penetrate the enterprise.  Despite awareness and continual training, social engineering attacks still prevail – the recent attack on MGM being just one example.  The use of AI to create credible and impressive video and audio deep fakes has the potential to supercharge social engineering and phishing attacks. Employees are well trained to ignore the obviously-fake email purportedly from the CEO.  When there is a near-perfect digital copy of the CEO in the wild, utilizing both natural voice and video, identifying fact from fiction becomes increasingly difficult and incredibly complex.  

Employees’ AI use is a new attack vector. Bad actors using AI aren’t the only imminent threat. A story that emerged at BlackHat this year demonstrates once again that employees can inadvertently be their organization’s biggest threat. Apparently, an employee at Company A used an LLM to help complete a whitepaper and asked the AI to write an executive summary and a conclusion. But to do so, the employee had to feed the paper into the LLM, which absorbed and spit it back out in summarized form before the author even published their work, raising privacy and NDA concerns. Everyone who conducts workforce security training needs to start warning about this concern, along with their phishing and social-engineering training.

Mike Barker, CCO, HYAS:

2024 AI predictions: 

The acceleration of generative AI capabilities will likely take center stage. As AI becomes more sophisticated, threat actors are poised to leverage generative models to craft hyper-realistic phishing attacks, deepfakes, and even simulate cyber vulnerabilities.

The significance lies in the potential for unprecedented cyber threats. Generative AI can mimic legitimate communications and behaviors, making traditional defense mechanisms less effective. This trend amplifies the need for advanced threat detection and response mechanisms.

Businesses will face increased challenges in defending against AI-driven cyber threats. The potential for highly convincing social engineering attacks and the creation of realistic but fabricated digital content poses a significant risk. As a result, organizations must bolster their cybersecurity strategies to stay ahead of these evolving threats.

Troy Batterberry, CEO and founder, EchoMark:

2024 Trending: 

Over 90% of the world’s organizations are completely unprepared for the risks imposed by insiders. Furthermore, these threats are growing in frequency by nearly 50% each year, and the scope of the damage for a single event is growing as well. Insiders already have access to an organization’s most valuable assets, including customer information, intellectual property, trade secrets, etc. Insiders inherently know what is valuable. Their theft or leakage can even become an “extinction event” for an organization.

Many technologies try to monitor or even block end user behavior to help guard against threats. Unfortunately, such systems can block very legitimate behaviors and anger those in the organization trying to do their jobs. They also tend to be noisy with “false positives” and can even flag some of an organization’s very best hard-working employees, creating real employee morale issues. They also fail to address the human psychology of the problem through accountability. In the meantime, malicious leakers carefully cover their tracks including by using their personal cell phone camera to continue to steal or leak private information with impunity, knowing their device isn’t monitored.

A different approach is required. By making each person’s copy of private information securely watermarked and tied to their identity, organizations can dramatically raise the stewardship and accountability of private information without further impeding the ability of everyone to get their job done. The mere presence of watermarks will reduce leaks, and should one still happen, organizations can easily and quickly find the source.
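The per-recipient watermarking idea described above can be sketched roughly like this (a hypothetical scheme for illustration only, not EchoMark's actual product): each copy of a document carries a short keyed tag derived from the recipient's identity, so a leaked copy can be traced back to its source.

```python
import hmac
import hashlib

# Assumption: the organization holds a secret key server-side.
SECRET_KEY = b"org-watermark-key"

def watermark_copy(document: str, recipient_id: str) -> str:
    """Return the recipient's personalized copy with an embedded tag."""
    tag = hmac.new(SECRET_KEY, recipient_id.encode(), hashlib.sha256).hexdigest()[:16]
    return f"{document}\n[wm:{tag}]"

def identify_source(leaked: str, recipients: list) -> str:
    """Match the tag in a leaked copy back to a known recipient."""
    for rid in recipients:
        tag = hmac.new(SECRET_KEY, rid.encode(), hashlib.sha256).hexdigest()[:16]
        if f"[wm:{tag}]" in leaked:
            return rid
    return ""  # no known recipient's tag found

copy_for_alice = watermark_copy("Q3 board deck (confidential)", "alice@example.com")
source = identify_source(copy_for_alice, ["alice@example.com", "bob@example.com"])
```

A production system would embed the mark invisibly (formatting, spacing, image-level changes) rather than appending a visible tag, but the trace-back logic is the same deterministic lookup.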

George McGregor, VP, Approov Mobile Security:

IT leaders need to be preparing for this top 2024 security threat:

Initiate a review of mobile app security: 1) ask all corporate enterprise app providers to provide detailed security assessments of their apps, and 2) assess the security of your own apps.

Mobile apps are the gateway for hackers to corporate data, and your business now depends on them. Multiple studies in 2023 show that they still expose secrets and can be weaponized to attack back-end systems.