A staggering 95% of all cybersecurity issues can be traced to human error, according to the World Economic Forum, highlighting that traditional cybersecurity awareness training may not be delivering the effectiveness urgently needed.
To get more insight on this, I interviewed Theo Zafirakos, CISO and Professional Services Lead at Fortra’s Terranova Security, to hear his thoughts on cybersecurity training, how effective it currently is, and how effective it can be:
1. Can you comment on how end users perceive cybersecurity threats and how they should deal with them?
Cybersecurity and cyber threat tactics are complex topics, and because of this, individuals often feel intimidated and insecure when using technology. Additional stress is added when they are told that they must deal with the imminent threat of cyber criminals looking to steal their data, hack their systems, or compromise their passwords. It can be scary, and even technophiles are not all adept with cybersecurity best practices. This complexity and fear may make some individuals veer away from any responsibility for learning. If they do something wrong, it is easy for them to justify it with, “It was not my fault; I was not informed.” Even after learning, it is still easy to make mistakes, and this can lead to feelings of anger and embarrassment.
In a recent survey conducted by Fortra’s Terranova Security, 75% of respondents between the ages of 18 and 75 stated that they have been targeted, or know someone who has been targeted, in a phishing attack. It is not that the other 25% were not targeted; most likely, they were simply not aware of it. We can no longer deny the threat – it is real, and it affects everyone. What was surprising from the same survey was that most respondents still rely solely on their IT teams to protect them. But what happens when a cyber criminal manages to bypass technical controls or target an individual in a personal context? Whose responsibility is it then?
Organizations, schools, and governments must take the time to inform individuals of the threats associated with the use of technology, how to detect them, and what practices to adopt when they are online or dealing with sensitive information.
When users adopt secure behaviors and can consistently apply best practices, they will display positive emotions: pride when they spot a phishing attack, confidence when they detect and report suspicious activity, or relief when they notice a malicious website just before submitting their password. This will motivate them to learn more.
2. How does your typical end user cybersecurity training fall short in terms of arming end users with the tools they need to protect themselves and their organizations?
Very often, cybersecurity awareness courses are too technical and are not tailored to the knowledge and competence of the learner. When users follow such courses, they may not understand the learning objectives or their individual role in contributing to the cybersecurity of their organization, and they are often put off future learning as a result. Lengthy, non-interactive learning activities do not engage the learner.
Content is not the only issue. The design and deployment of the learning program are also very important. Gone are the days of taking an hour-long course once a year, using the same content. Organizations must adapt by providing fresh and relevant content on a regular basis without repeating it year after year. Developing and maintaining a large content library in all required languages, and very often in accessible formats, is a daunting and resource-intensive task.
When the program and learning activity selection has not been well thought out in advance, we notice a decline in participation over time and a reduced retention of best practices.
3. How do Fortra’s Cyber Games modules fill that gap?
Cyber Games modules are powerful tools for employee learning and professional development. By allowing players to solve virtual puzzles and interact with clickable on-screen elements, we tap into human psychology to ensure that the training is engaging and informative for participants.
Cyber Games provide instant performance feedback by measuring the player’s cybersecurity knowledge in real-time. Continuous feedback happens organically throughout each module, whether that is expanding on a correct response or explaining what led to a mistake. As a result, players are given autonomy to move through safe environments and see the impacts of their actions immediately.
We have created interactive eLearning modules that deliver unparalleled security awareness training results and enhanced problem-solving skills. Instead of subjecting players to a stream of endless text and visuals, users are encouraged to approach in-game tasks with a more critical mindset to determine the best possible strategy. This way, individuals grow their reasoning and detection skills.
We cannot have games without some form of competition, which serves as a natural motivator. Unlike more traditional security awareness training initiatives, Cyber Games are fueled by inherent motivating forces. Bolstered by a scoring system, such as awarding a certain number of points for a correct response, players are pushed to improve their performance, whether they are scored against their previous results or against those of other employees via department or company leaderboards.
4. Can you speak to any success stories that you have seen with your Cyber Games modules?
Gamified cybersecurity awareness programs are a powerful tool for organizations to help motivate employees to engage with training and enhance their behavior by retaining what they learn.
In one situation, one of our customers had difficulty motivating their users to accept and follow the awareness program. By introducing Cyber Games, they were able to demystify cybersecurity and make it a fun and engaging experience. When the time came to launch their official program, they had a significant increase in voluntary participation compared to previous years.
Another customer used Cyber Games for just-in-time learning following undesired results during a phishing simulation. By providing end users with these additional learning opportunities through instant-feedback gaming modules, complex topics are distilled into clear, actionable best practices. The consequence for failing a phishing simulation was to play a game instead of being enrolled in training, which is often seen as punishment. Simply changing the type and name of the activity created a more positive psychological environment for the learner.
Gamification can be used as a tool to build a culture that understands the value of cybersecurity and adopts it in daily routines. Organizations must use every tool at their disposal to encourage a mindset where security is everyone’s responsibility, not just the IT team!
5. Are your Cyber Games modules aimed at big businesses, or can SMBs leverage this as well?
Cyber Games have been designed for any organization and any user, even those who are not gamers. While some games offer a more immersive experience with 3D concepts, others are simpler in design, which anyone can learn and play in a very short time. The Serious Games module leverages proven eLearning techniques and puts end users at the center of immersive, exciting scenarios in 3D virtual environments. They boost skill development and make learning key cyber concepts fun. The Cyber Challenges module reinforces existing security awareness training programs and provides quick, focused learning opportunities to end users. Each module zooms in on one specific unsafe behavior or best practice, supporting users with bite-sized content.
We cover topics that are relevant to all sectors and sizes, such as phishing and malware, social media, protecting sensitive information, and many others.
Many thanks to Theo Zafirakos for taking the time to answer these questions.
HYAS Infosec Research On AI-Generated Malware Contributes to the AI Act And Other AI Policies And Regulations
Posted in Commentary with tags HYAS on December 4, 2023 by itnerd

HYAS Infosec is pleased to share that research cited from HYAS Labs, the research arm of HYAS, is being utilized by contributors to and framers of the European Union’s AI Act.
The AI Act is widely viewed as a cornerstone initiative that is helping shape the trajectory of AI governance, with the United States’ policies and considerations soon to follow.
AI Act researchers and framers assert that the Act reflects a specific conception of AI systems, viewing them as non-autonomous statistical software with potential harms primarily stemming from datasets. The researchers view the concept of “intended purpose,” drawing inspiration from product safety principles, as a fitting paradigm and one that has significantly influenced the initial provisions and regulatory approach of the AI Act.
However, these researchers also see a substantial gap in the AI Act concerning AI systems devoid of an intended purpose, a category that encompasses General-Purpose AI Systems (GPAIS) and foundation models.
HYAS’ work on AI-generated malware, specifically BlackMamba and its more sophisticated, fully autonomous cousin EyeSpy, is helping advance the understanding of AI systems that are devoid of an intended purpose, including GPAIS, and of the unique challenges GPAIS pose to cybersecurity.
HYAS research is proving important both for the development of proposed policies and for addressing the real-world challenges posed by the rising dilemma of fully autonomous and intelligent malware, a dilemma which cannot be solved by policy alone.
HYAS is providing researchers with tangible examples of GPAIS gone rogue. BlackMamba, the proof of concept cited in the research paper “General Purpose AI systems in the AI Act: trying to fit a square peg into a round hole” by Claire Boine and David Rolnick, exploited a large language model to synthesize polymorphic keylogger functionality on the fly and dynamically modified the benign code at runtime, all without any command-and-control infrastructure to deliver or verify the malicious keylogger functionality.
EyeSpy, the more advanced (and more dangerous) proof of concept from HYAS Labs, is a fully autonomous AI-synthesized malware that uses artificial intelligence to make informed decisions to conduct cyberattacks and continuously morph to avoid detection. The challenges posed by an entity such as EyeSpy, which is capable of autonomously assessing its environment, selecting its target and tactics of choice, strategizing, and self-correcting until successful, all while dynamically evading detection, were highlighted at the recent Cyber Security Expo 2023 in presentations such as “The Red Queen’s Gambit: Cybersecurity Challenges in the Age of AI.”
In response to the nuanced challenges posed by GPAIS, the EU Parliament has proactively proposed provisions within the AI Act to regulate these complex models. The significance of these proposed measures cannot be overstated and will help to further refine the AI Act and sustain its continued usefulness in the dynamic landscape of AI technologies.
Additional Resources:
“General Purpose AI systems in the AI Act: trying to fit a square peg into a round hole” https://www.bu.edu/law/files/2023/09/General-Purpose-AI-systems-in-the-AI-Act.pdf. Paper submitted by Claire Boine, Research Associate at the Artificial and Natural Intelligence Toulouse Institute and in the Accountable AI in a Global Context Research Chair at University of Ottawa, researcher in AI law, and CEO of Successif, and David Rolnick, Assistant Professor in CS at McGill and Co-Founder of Climate Change AI, to WeRobot 2023.
News – European Parliament – The European Union’s AI Act: https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
Future of Life Institute – “General-Purpose AI and the AI Act” (What are general-purpose AI systems? Why regulate general-purpose AI systems?): https://artificialintelligenceact.eu/wp-content/uploads/2022/05/General-Purpose-AI-and-the-AI-Act.pdf
Towards Data Science – “AI-powered Monopolies and the New World Order – How AI’s reliance on data will empower tech giants and reshape the global order” https://towardsdatascience.com/ai-powered-monopolies-and-the-new-world-order-1c56cfc76e7d
“The Red Queen’s Gambit: Cybersecurity Challenges in the Age of AI” presented by Lindsay Thorburn at Cyber Security Expo 2023 https://www.youtube.com/watch?v=Z2GsZHCXc_c
HYAS Blog: “Effective AI Regulation Requires Adaptability and Collaboration” https://www.hyas.com/blog/effective-ai-regulation-requires-adaptability-and-collaboration