As we start to wind down 2023, here are some predictions and trends for 2024. I’ve collected predictions from nine of our top cybersecurity experts, covering both AI-centric and broader trending thoughts for 2024:
Stefan Keller, Chief Product Officer, Open Systems:
2024 AI predictions:
Defeating AI-generated phishing attacks will become a major area of investment due to the widespread availability of generative AI tools that leverage deepfakes and personalize messages with a greater degree of sophistication.
Defending converged ecosystems (IT/OT/IoT) will become an important focus area as companies move forward with business transformation initiatives to boost overall performance through increased revenue, lower operating costs, and better customer satisfaction and workforce productivity.
Increasing cyber resiliency of business systems will become a major growth area as senior executives weigh investments in capabilities to ensure continuity of operations even in the wake of a successful breach.
Craig Harber, Security Evangelist, Open Systems:
2024 Trending:
IT leaders need to prepare for a significant increase in cyberattack scale, scope, and sophistication in 2024. Generative AI tools give attackers a significant advantage: they can modify malware code to defeat the detection engines deployed by security teams and tailor email messages to trick even the best-trained users. Some IT leaders may think this is just more of the same, but this technology is a potential game-changer for the attacker.
The sophistication of AI-generated socially engineered messages and deepfake videos built from personal information scraped from social media will be convincing to even the most well-trained and cyber-savvy user. They will fool users into giving up credentials that allow the attacker to gain a foothold in the target network and then move laterally in search of other valuable information. Bad actors will also weaponize this technology to influence the 2024 presidential election through disinformation campaigns. The outcome of that election will decide the policies of this country for the next four years.
Business transformation takes a customer-driven, digital-first approach to all aspects of a business operation to increase customer adoption and business opportunities. It uses advanced analytic engines, automation, hybrid cloud, and other smart digital technologies operating across the boundaries of information technology (IT) and operational technology (OT). While it creates new opportunities for efficiency and innovation, it also expands the cyberattack surface that security teams must defend to prevent significant damage to our industrial sector and the nation’s critical infrastructure.
Recent cyber breaches expose common gaps in security tools, processes, and policies that prevent security teams from defeating sophisticated cybercriminals. Companies must be able to operate in a contested environment. They must invest in robust cybersecurity resilience strategies to maintain business continuity before, during, and after a cybersecurity incident. It’s not just about investing in the best cybersecurity tools to prevent an attack. Security teams must prepare for the inevitable. They must continuously update and exercise incident response plans and conduct tabletop exercises to ensure critical business workflows can operate while attacks are identified, contained, and remediated. These steps are essential to protect against financial loss and damage to brand reputation.
Paul Valente, CEO & Co-Founder, VISO TRUST:
2024 AI predictions:
The increased integration of AI in cybersecurity, with a focus on AI-enabled fraud prevention, threat detection, and response. AI plays a pivotal role in reshaping cybersecurity, addressing challenges such as AI-enabled third-party risk management, fraud, phishing, cyber attacks, and more. It enhances security leaders’ ability to make data-driven decisions, detect threats in real-time, and proactively manage user risk in a more efficient and scalable way than previously.
Businesses will experience heightened cybersecurity capabilities, leveraging AI to automatically detect and prevent threats, reduce response times, and address human errors. The integration of AI in cybersecurity will significantly improve overall security postures and contribute to a safer digital environment.
The rising importance of AI in cybersecurity, as highlighted by the trends, emphasizes the need for organizations to adopt AI-driven solutions. As AI becomes a key catalyst in reducing user risk and boosting security awareness, businesses should consider leveraging such technologies to stay ahead of evolving cyber threats.
AND
The increased integration and growing need for AI in Third Party Risk Management (TPRM) within cybersecurity. AI enhances TPRM by enabling layered scrutiny of partner interactions, providing meaningful metrics, and replacing manual vendor questionnaires with efficient cyber risk assessments.
Businesses will experience improved TPRM efficiency, with AI-driven assessments increasing completion rates to 95% or higher. This shift will enhance overall cybersecurity posture, especially in evaluating and mitigating risks associated with third-party interactions. The rise of AI in cybersecurity, particularly in TPRM, reflects the industry’s need for more effective and streamlined approaches to assess and manage cyber risks. The adoption of AI is poised to significantly contribute to the overall security posture, providing faster response times and better decision-making capabilities for security leaders.
Russell Sherman, CTO & Co-Founder, VISO TRUST:
2024 AI predictions:
The next iteration of generative AI moves beyond the basics, delving into crafting intricate narratives, musical compositions, and potentially bestselling novels. Imagine this: multi-modal generative AI seamlessly integrates text, voice, melodies, and visuals, not just for content creation but for creating immersive, multisensory experiences. It’s a significant step forward in AI capabilities, challenging us to discern between human craftsmanship and AI-generated output. This advancement holds promise for diverse opportunities, demanding a careful and nuanced evaluation.
For businesses, this trend opens up new prospects in content creation, immersive experiences, and collaborative ventures. AI isn’t merely a tool; it’s evolving into a practical collaborator, ready to contribute creatively across various industries.
As we navigate this AI landscape, emphasizing trust, fairness, accessibility, and vigilant governance is imperative. The subtle intertwining of human and AI contributions demands meticulous consideration in our pursuit of these cutting-edge innovations. Intriguing wonders lie ahead in this evolving landscape.
AND
The democratization of Generative AI (GenAI) is set to transform our workplaces, breaking down barriers and making collective knowledge accessible across roles. It’s not just a trend; it’s an evolution in how we work and learn. GenAI’s democratization isn’t just about boosting productivity and efficiency; it’s about empowering everyone, regardless of technical expertise, to contribute and innovate. However, as we embrace this transformation, it’s crucial to acknowledge the concerns it brings, especially around security. Our journey toward progress should be both inclusive and secure.
For businesses, the promise is immense – improved productivity, cost-effectiveness, and new growth horizons. The beauty lies in its simplicity, enabling a “low-and no-code” approach. Yet, as we embark on this journey, let’s not forget to weave a protective layer around our innovations. Security measures need to be as effective as the innovation itself. As we democratize GenAI, fostering a workplace where everyone can thrive, security becomes paramount. It’s not just about data; it’s about trust and responsibility.
Avkash Kathiriya, Sr. VP – Research and Innovation at Cyware:
2024 AI predictions:
The AI Revolution: A Double-Edged Sword
Generative AI has firmly entered the security ecosystem and is already being used for both positive and malicious objectives. Threat actors exploit AI’s capabilities to craft more sophisticated attacks, forcing businesses to leverage it for defense. For example, organizations are increasingly utilizing AI products like Microsoft’s Security Copilot; Microsoft is changing the AI game for the security industry, introducing strategic AI features that will enhance the analyst experience and increase the cohesive nature of the security ecosystem. Advances like this are crucial to improving incident response times and accuracy. With AI-driven content engineering, cybersecurity awareness and threat intelligence dissemination will continue to become more streamlined. At the same time, the rise of AI-driven security underlines the importance of protecting AI implementations themselves.
Evolving Zero Trust Model: AI-Enhanced Security
Zero trust, while not a new concept, has evolved into a more adaptive model, leveraging AI’s powerful capabilities to deliver more effective protection. Given the burgeoning level of state-sponsored attacks and complex geopolitical situations, organizational reliance on AI-driven zero-trust models will become indispensable in the modern threat landscape. As the adoption of the Zero Trust model grows in the coming years, the key base foundations for a sound model – namely centralized visibility, orchestration, and governance – will take center stage where today they are all too often ignored.
Threat Intelligence: A Growing Necessity
Moving into 2024, the integration of threat intelligence with technologies such as AI and machine learning is expected to continue. This integration aims to enhance threat prediction and response capabilities. The trend of cross-industry collaboration in sharing threat intelligence is also likely to accelerate, underlining its role in building robust and adaptable cybersecurity strategies. It will drive change within the industry and we will see trusted community intelligence become more valuable than commodity intelligence.
The SOAR Conundrum: Promise vs. Reality
Security Orchestration, Automation, and Response (SOAR) products, though promising on paper, face practical implementation hurdles. The limitations of legacy SOAR platforms, for example, have highlighted the demand for more comprehensive solutions that cater to modern Security Operations Centers (SOCs). In 2024 and beyond, I expect to see AI start to drive the SOAR industry toward true no-code platforms, reducing the complexity around workflows and playbook writing.
Looking ahead:
Over the next 12 months, we should expect to see further consolidation between security solutions like SIEM, SOAR, and data lakes. Integration will also increase between security tools and IT systems to enable smarter orchestration, while, most important of all, organizations will harness AI to stay ahead of increasingly sophisticated AI-driven attacks.
In addition, AI-enabled detection, together with seamless orchestration between machines and humans, will see security more deeply embedded within systems and culture. The winners will find the right balance between integrated, intelligent technology and empowered, skilled analysts.
Dave Ratner, CEO, HYAS:
2024 AI predictions:
The most important trend will be the use of AI for deepfakes (video and audio) and the role they will play in phishing and social engineering attacks.
We’re just starting to scratch the surface on what will happen and how audio and video deepfakes will be used not just to sway or mislead public opinion but as new and powerful tools to penetrate the enterprise. Despite awareness and continual training, social engineering attacks still prevail – the recent attack on MGM being just one example. The use of AI to create credible and impressive video and audio deepfakes has the potential to supercharge social engineering and phishing attacks. Employees are well trained to ignore the obviously fake email purportedly from the CEO. When there is a near-perfect digital copy of the CEO in the wild, utilizing both natural voice and video, separating fact from fiction becomes increasingly difficult and incredibly complex.
Employees’ AI use is a new attack vector. Bad actors using AI aren’t the only imminent threat. A story that emerged at Black Hat this year demonstrates once again that employees can inadvertently be their organization’s biggest threat. Apparently, an employee at Company A used an LLM to help complete a whitepaper, asking the AI to write an executive summary and a conclusion. To do so, the employee had to feed the paper into the LLM, which absorbed it and spit it back out in summarized form before the author had even published the work, raising privacy and NDA concerns. Everyone who conducts workforce security training needs to start warning about this concern, alongside their phishing and social engineering training.
Mike Barker, CCO, HYAS:
2024 AI predictions:
The acceleration of generative AI capabilities will likely take center stage. As AI becomes more sophisticated, threat actors are poised to leverage generative models to craft hyper-realistic phishing attacks, deepfakes, and even simulate cyber vulnerabilities.
The significance lies in the potential for unprecedented cyber threats. Generative AI can mimic legitimate communications and behaviors, making traditional defense mechanisms less effective. This trend amplifies the need for advanced threat detection and response mechanisms.
Businesses will face increased challenges in defending against AI-driven cyber threats. The potential for highly convincing social engineering attacks and the creation of realistic but fabricated digital content poses a significant risk. As a result, organizations must bolster their cybersecurity strategies to stay ahead of these evolving threats.
Troy Batterberry, CEO and founder, EchoMark:
2024 Trending:
Over 90% of the world’s organizations are completely unprepared for the risks posed by insiders. Furthermore, these threats are growing in frequency by nearly 50% each year, and the scope of the damage from a single event is growing as well. Insiders already have access to an organization’s most valuable assets, including customer information, intellectual property, trade secrets, etc. Insiders inherently know what is valuable. Their theft or leakage can even become an “extinction event” for an organization.
Many technologies try to monitor or even block end-user behavior to help guard against threats. Unfortunately, such systems can block very legitimate behaviors and anger those in the organization trying to do their jobs. They also tend to be noisy with “false positives” and can even flag some of an organization’s very best hard-working employees, creating real employee morale issues. They also fail to address the human psychology of the problem through accountability. In the meantime, malicious leakers carefully cover their tracks, including using their personal cell phone cameras to continue to steal or leak private information with impunity, knowing their devices aren’t monitored.
A different approach is required. By making each person’s copy of private information securely watermarked and tied to their identity, organizations can dramatically raise the stewardship and accountability of private information without further impeding the ability of everyone to get their job done. The mere presence of watermarks will reduce leaks, and should one still happen, organizations can easily and quickly find the source.
George McGregor, VP, Approov Mobile Security:
IT leaders need to prepare for this top security threat in 2024:
Initiate a review of mobile app security: 1) ask all corporate enterprise app providers to provide detailed security assessments of their apps, and 2) assess the security of your own apps.
Mobile apps are a gateway for hackers to corporate data, and your business now depends on them. Multiple studies in 2023 show they still expose secrets and can be weaponized to attack back-end systems.
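As a first step in the app review described above, one common check is scanning extracted app resources for hard-coded secrets. The sketch below shows the idea with a few illustrative regex rules; production scanners use much larger, curated rule sets, and the sample string is hypothetical:

```python
# Sketch: scan extracted app resources for hard-coded secrets.
# Patterns are illustrative examples, not a complete rule set.
import re

SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "google_api_key": re.compile(r"AIza[0-9A-Za-z_\-]{35}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"]\w{16,}['\"]"),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_text) pairs for any secrets found."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

# Hypothetical snippet pulled from a decompiled app's config:
sample = 'api_key = "0123456789abcdef0123"'
print(scan_text(sample))  # flags the generic_api_key rule
```

Running such a scan over both your own apps and vendor-supplied ones gives a quick, if shallow, signal of the secret-exposure problem the studies cited above describe.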
Google discovers ChatGPT training data flaw
Posted in Commentary with tags Google on November 30, 2023 by itnerd
On the one-year anniversary of ChatGPT going public come these recent findings by Google researchers on ChatGPT’s training data.
The researchers successfully prompted ChatGPT to disclose parts of its training data using a novel attack technique, which involved asking the chatbot’s production model to repeatedly echo specific words indefinitely.
Anurag Gurtu, CPO, StrikeReady, had this to say:
The exposure of training data in ChatGPT and other generative AI platforms raises significant privacy and security concerns. This situation underscores the need for more stringent data handling and processing protocols in AI development, especially regarding the use of sensitive and personal information. It also highlights the importance of transparency in AI development and the potential risks associated with the use of large-scale data. Addressing these challenges is critical for maintaining user trust and ensuring the responsible use of AI technologies.
This is not a good look for AI in general and ChatGPT specifically. Clearly people behind AI products need to get a handle on this sort of thing quickly or these sorts of issues will simply multiply.
UPDATE: Kevin Surace, Chair, Token adds this:
The attack was incredibly simple, and some variants of it still work as of now. It is an absolute disaster for any model to reveal its training data – IP-wise, legally, and in terms of integrity. Certainly, OpenAI and others must put more stringent safeguards in place to keep this from happening again.