Archive for June 14, 2023

Human Factor Remains Crucial While MFA Bypass Kits Surge: Proofpoint

Posted in Commentary with tags on June 14, 2023 by itnerd

According to Proofpoint’s report The Human Factor 2023, social engineering is more than three times more likely to be used in a cyber-attack than any other technique. 

“Among the many attacks we classified, the vast majority relied on some element of psychological manipulation. 

“Social Engineering is endlessly scalable and limited only by attackers’ ingenuity. And even without the use of malware or technical exploits, the aftermath of a successful social engineering attack can be devastating,” said the report. 

Aiding this social engineering was the rise in threat actors’ ability to sidestep user defenses, with MFA bypass kits accounting for millions of phishing messages. 

The report also points to adoption by a significant number of less sophisticated groups: telephone-oriented attack delivery (TOAD) threats peaked at over 13 million messages per month, and there was a twelvefold increase in “conversational” scams, including romance fraud and fake job ads – the fastest growing threat in mobile.  

“…our research has consistently led us toward a simple but powerful observation: people – not technology – are the most critical variable in today’s cyber threats,” stated the report. 

Willy Leichter, VP of Marketing at Cyware, had this to say:

    “As cybersecurity improves, it shouldn’t be surprising that humans are increasingly the weakest link. But it’s also a cop out for the security industry to shrug and blame the victims. Humans will inevitably get fooled and lured into scams. As an industry we must do a better job of connecting the dots and disseminating actionable intelligence on threats and attacks to keep the damage from spreading.”

This is where education and re-education can help make humans less of a factor in attacks. Hopefully there will be a shift to make that more of a focus than it is right now.

AI-powered Gmail Features That You Can Use In Both Work And Play

Posted in Commentary with tags on June 14, 2023 by itnerd

Google Cloud’s latest blog post highlights the six ways Gmail users can use AI features to help save time and improve their workflow in their day-to-day. Both your work and social life can be filled with to-dos, so why not use AI to help with something like Gmail to keep the day interesting and productive? 

Along with the new “Help me write” feature, which makes composing emails easier than ever for users in Workspace Labs, Gmail users have access to a host of other AI-powered features – and have, in some cases, for years.

Below are some of the AI-powered features you may not know you have:

  • “Help me write” can create entire email drafts for you based on simple prompts.
  • Smart Compose is great to use when you aren’t looking for help writing an email draft from scratch, but you’d still love some suggestions along the way.
  • Smart Reply generates up to three possible responses to emails you receive, in just two clicks.
  • Tabbed inbox makes your inbox easier to navigate – not a maze of clutter to dig through. 
  • Summary cards give you just the highlights when you get a message with a lot of information. 
  • And last but not least, Nudging, which reminds you to reply to or follow up on important messages, and is the first Gmail AI feature that runs on both emails you have received and sent. 

You can find the blog here.

HP Warns That ChromeLoader “Shampoo” Malware Campaign Tough to Wash Out

Posted in Commentary with tags on June 14, 2023 by itnerd

HP Inc. today issued its quarterly HP Wolf Security Threat Insights Report, showing threat actors are hijacking users’ Chrome browsers if they try to download popular movies or video games from pirating websites. 

By isolating threats that have evaded detection tools on PCs, HP Wolf Security has specific insight into the latest techniques being used by cybercriminals in the fast-changing cybercrime landscape. To date, HP Wolf Security customers have clicked on over 30 billion email attachments, web pages, and downloaded files with no reported breaches. 

Based on data from millions of endpoints running HP Wolf Security, the researchers found:

  • The Shampoo Chrome extension is hard to wash out: A campaign distributing the ChromeLoader malware tricks users into installing a malicious Chrome extension called Shampoo. It can redirect the victim’s search queries to malicious websites, or pages that will earn the criminal group money through ad campaigns. The malware is highly persistent, using Task Scheduler to re-launch itself every 50 minutes.
  • Attackers bypass macro policies by using trusted domains: While macros from untrusted sources are now disabled, HP saw attackers bypass these controls by compromising a trusted Office 365 account, setting up a new company email, and distributing a malicious Excel file that infects victims with the Formbook infostealer.
  • Firms must beware of what lurks beneath: OneNote documents can act as digital scrapbooks, so any file can be attached within. Attackers are taking advantage of this to embed malicious files behind fake “click here” icons. Clicking the fake icon opens the hidden file, executing malware to give attackers access to the users’ machine – this access can then be sold on to other cybercriminal groups and ransomware gangs.
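One defensive angle on the Shampoo finding above: persistence that re-launches a binary from a user-writable path on a very short repetition interval (here, every 50 minutes via Task Scheduler) is itself a detectable pattern. The sketch below is a hypothetical heuristic, not HP’s detection logic; the task records and field names are made up for illustration (real data would come from something like `schtasks /query` on Windows):

```python
from datetime import timedelta

def flag_suspicious_tasks(tasks, max_interval=timedelta(hours=1)):
    """Flag scheduled tasks that re-launch a binary from a user-writable
    path on a short repetition interval -- the pattern Shampoo reportedly uses."""
    suspicious = []
    for task in tasks:
        short_interval = task["repeat_every"] <= max_interval
        user_writable = task["command"].lower().startswith(
            ("c:\\users\\", "c:\\programdata\\")
        )
        if short_interval and user_writable:
            suspicious.append(task["name"])
    return suspicious

# Illustrative task records, not real schtasks output
tasks = [
    {"name": "chrome_policy", "repeat_every": timedelta(minutes=50),
     "command": r"C:\Users\victim\AppData\Roaming\shampoo\launcher.exe"},
    {"name": "GoogleUpdate", "repeat_every": timedelta(hours=24),
     "command": r"C:\Program Files\Google\Update\GoogleUpdate.exe"},
]
print(flag_suspicious_tasks(tasks))  # → ['chrome_policy']
```

A real deployment would of course need many more signals, but the heuristic shows why aggressive persistence intervals work against the attacker.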

Sophisticated groups like Qakbot and IcedID first embedded malware into OneNote files in January. With OneNote kits now available on cybercrime marketplaces and requiring little technical skill to use, their malware campaigns look set to continue over the coming months.

From malicious archive files to HTML smuggling, the report also shows cybercrime groups continue to diversify attack methods to bypass email gateways, as threat actors move away from Office formats. Key findings include:

  • Archives were the most popular malware delivery type (42%) for the fourth quarter running when examining threats stopped by HP Wolf Security in Q1.
  • There was a 37-percentage-point rise in HTML smuggling threats in Q1 versus Q4.
  • There was a 4-point rise in PDF threats in Q1 versus Q4.
  • There was a 6-point drop in Excel malware (19% to 13%) in Q1 versus Q4, as the format has become more difficult to run macros in. 
  • 14% of email threats identified by HP Sure Click bypassed one or more email gateway scanners in Q1 2023.
  • The top threat vector in Q1 was email (80%) followed by browser downloads (13%).

HP Wolf Security runs risky tasks like opening email attachments, downloading files and clicking links in isolated, micro-virtual machines (micro-VMs) to protect users. It also captures detailed traces of attempted infections. HP’s application isolation technology mitigates threats that might slip past other security tools and provides unique insights into novel intrusion techniques and threat actor behavior. 

About the data

This data was anonymously gathered within HP Wolf Security customer virtual machines from January to March 2023.

The EU Passes Draft Legislation To Govern AI

Posted in Commentary with tags on June 14, 2023 by itnerd

The news is out today that the EU Parliament has moved one step closer to putting legislation into force to govern AI:

The European parliament approved rules aimed at setting a global standard for the technology, which encompasses everything from automated medical diagnoses to some types of drone, AI-generated videos known as deepfakes, and bots such as ChatGPT.

MEPs will now thrash out details with EU countries before the draft rules – known as the AI act – become legislation.

“AI raises a lot of questions socially, ethically, economically. But now is not the time to hit any ‘pause button’. On the contrary, it is about acting fast and taking responsibility,” said Thierry Breton, the European commissioner for the internal market.

A rebellion by centre-right MEPs in the EPP political grouping over an outright ban on real-time facial recognition on the streets of Europe failed to materialise, with a number of politicians attending Silvio Berlusconi’s funeral in Italy.

The final vote was 499 in favour and 28 against with 93 abstentions.

Craig Burland, CISO, Inversion6 had this comment in relation to this news:

Let the debate begin! Similar to data privacy years ago, the EU has just taken a position at the far end of the spectrum to frame the parameters of the discussion. Putting aside the many challenges of enforcement as well as the ubiquitous use of AI in modern technology projects, the EU has documented intriguing concepts centered on ensuring the validity of the content and proper use cases. Contrast this with Google’s pronouncement last week that focused primarily on protecting the technology itself.  What was announced today will shift and transition as the debate plays out in the media and behind closed doors. But, in planting this flag, the EU has started what will be a fascinating dialog that affects businesses and individuals alike.

I’m honestly not sure how this will shake out. But given that the EU has already produced regulations like GDPR, this draft legislation is likely to shape the discussion about AI and how it should be used. Thus everyone needs to pay attention to this.

UPDATE: Eduardo Azanza, CEO, Veridas adds this:

     “The passing of the Artificial Intelligence Act is a significant moment and should not be underestimated at all. For technologies such as AI and biometrics to ever be successful, it is essential that there is trust from businesses and the wider public.

It’s critical that we have established agreed standards and deliverables to ensure that AI and collected biometric data are used responsibly and ethically. There must be clearly defined responsibilities and chains of accountability for all parties, as well as a high degree of transparency for the processes involved. 

As the UK and US look to introduce their own Artificial Intelligence Act, it is essential they work with the EU to define minimum global standards – only then can we guarantee the ethical use of AI and biometrics.

Ultimately, it’s businesses’ duty to responsibly and ethically use AI technology, as its capability to replicate human abilities raises huge concerns. Organizations need to be conducting periodic diagnoses on the ethical principles of AI. Confidence in AI security technology must be based on transparency and compliance with legal, technical, and ethical standards.”

UPDATE #2: Ani Chaudhuri, CEO, Dasera had this comment:

European Union lawmakers have taken a decisive step in shaping the future of artificial intelligence by adopting the E.U. AI Act. This landmark legislation challenges the power of American tech giants and sets unprecedented restrictions on AI usage. This move is long overdue as it prioritizes data security and protects individuals from potential harm caused by unchecked AI systems.

The E.U. AI Act introduces essential guardrails to prevent deploying AI systems that pose an “unacceptable level of risk.” By banning tools like predictive policing and social scoring systems, the legislation safeguards against intrusive and discriminatory practices. Furthermore, it limits high-risk AI applications, such as those that could influence elections or jeopardize people’s health.

One significant aspect of the legislation is its focus on generative AI, including systems like ChatGPT. Requiring content generated by such systems to be labeled and mandating the publication of summaries of copyrighted data used for training promotes transparency and protects intellectual property rights. These measures address growing concerns and ensure responsible AI development.

While some voices express concern over the potential impact on AI development and adoption, the European Parliament’s determination to lead the global dialogue on responsible AI should be applauded.  European lawmakers have proactively developed comprehensive AI legislation that accounts for evolving technologies and potential risks.

The E.U.’s commitment to data privacy, tech competition, and social media regulation aligns with its ambitious AI regulations. This cohesive framework ensures that European companies adhere to high standards, promoting consumer trust and privacy. It also strengthens Europe’s position as the global tech regulator, setting precedents that will shape international tech policies.

As Europe leads in establishing AI standards, the United States must step up its efforts to keep pace. Congress must pass comprehensive legislation addressing AI and online privacy. Falling behind Europe risks hindering innovation and surrendering the opportunity to lead the global debate on AI governance.

We believe that responsible AI development should be a global endeavor. As Europe sets the bar, it is incumbent upon the United States to catch up and play an active role in shaping AI policies. We can strike the right balance and ensure AI benefits society by fostering innovation while safeguarding individual rights.

While concerns and challenges exist, the E.U. AI Act represents a significant step toward building a responsible and secure AI ecosystem. Europe’s commitment to protecting individuals and upholding data security sets an example for the world. As the AI landscape continues to evolve, we must embrace robust regulations that foster trust, innovation, and global cooperation.

Poly Strengthens Hybrid Ecosystem with AI-Powered Experiences

Posted in Commentary with tags on June 14, 2023 by itnerd

Today at InfoComm 2023, Poly, an HP Inc. company, announced new pro-grade audio and video solutions with AI-driven software to bring meetings to life.

Be Seen

As more people return to the office, the Poly Studio X52 all-in-one video bar maximizes the virtual meeting experience in mid-sized meeting spaces. New Poly DirectorAI smart camera technology offers automated camera framing modes like group, speaker, and people framing. The 4K, 20MP camera ensures clear visibility of every participant, reaching even the farthest corners of the conference room without any image distortion. Updates to the AI-driven software for group and speaker framing capabilities include the new Poly DirectorAI Perimeter feature and other audio enhancements. The Poly Studio X52 is certified for Google Meet, Microsoft Teams and Zoom, with pending certification for native support for BlueJeans by Verizon, GoTo, and RingCentral. 

The Poly Video OS provides a unified experience across all Poly video conferencing devices. With its latest update, Poly Video OS 4.1 delivers new features and improvements:

  • For better meeting room experiences with glass walls, Poly DirectorAI Perimeter technology ensures precise participant framing. IT administrators can input room dimensions, allowing AI-powered technology to define parameters accurately and prevent capturing faces beyond glass walls or windows. Additionally, Sound Reflection Reduction minimizes echo and reverberations caused by glass and hard surfaces.
  • The Poly Studio E70 smart camera can now connect directly to the Poly G7500 modular video conferencing system using a standard Ethernet cable for flexible room configurations and easier installation. The Ethernet cable can power the Studio E70 camera and extend up to 100 meters.
  • The Poly TC10 touch control panel now supports meeting control for Microsoft Teams Rooms on Android. It can also function as a room scheduling panel for Microsoft Teams, allowing better visibility into room availability and on-the-spot room reservations. The TC10 control panel is now certified for Microsoft Teams.
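The DirectorAI Perimeter idea described above amounts to discarding face detections whose estimated positions fall outside the room dimensions an administrator enters. Here is a minimal geometric sketch of that filtering step – purely illustrative, not Poly’s actual implementation, and the detection records are made up:

```python
def inside_room(point, width_m, depth_m):
    """Return True if an estimated (x, y) position in metres, measured from
    the camera at the origin, falls inside the configured room rectangle."""
    x, y = point
    return 0.0 <= x <= width_m and 0.0 <= y <= depth_m

def frame_participants(detections, width_m, depth_m):
    # Keep only detections inside the room perimeter, discarding faces
    # seen through glass walls or windows beyond it.
    return [d for d in detections if inside_room(d["position"], width_m, depth_m)]

detections = [
    {"id": "A", "position": (2.0, 3.0)},   # inside a 6 m x 5 m room
    {"id": "B", "position": (2.5, 7.5)},   # beyond the back glass wall
]
print([d["id"] for d in frame_participants(detections, 6.0, 5.0)])  # → ['A']
```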

Poly Solutions for Large Rooms

  • The Poly G7500 Modular Video Conferencing System and the Shure Microflex Large Room Bundles deliver a seamless multi-vendor solution for large meeting spaces and have been jointly certified for Microsoft Teams Rooms on Android. Customers now have the option of a tested and certified multi-vendor solution that seamlessly integrates video, compute, and DSP audio solutions, ensuring an optimal Microsoft Teams Rooms experience.
  • The Poly Studio E70 and HP Mini Conferencing PC have been Zoom-certified for large meeting rooms, offering customers a complete intelligent solution for Zoom Rooms on Windows. The powerful combination of dual camera sensors and 12th generation Intel® Core i7 processor enable Zoom Room features, including a Zoom-verified Intelligent Director experience, to deliver more equitable meetings for in-person and remote participants.

Be Heard

The new Poly Voyager Surround 80 UC empowers employees to focus and sound their best with immersive rich audio and adaptive ANC. It is the first boomless headset certified for Microsoft Teams Open Office due to its outstanding performance in noisy environments. The Bluetooth enterprise headset offers a sleek design for complete comfort, featuring soft ear cushions and an adjustable headband for an ultralight fit. Users can stay in command with up to 21 hours of talk time, convenient on-ear controls, and smart sensors for automatic call answering. 

IT Management – Poly Lens

With companies striving to enhance the in-office experience and adapt workspaces to evolving utilization trends, the Poly Lens remote device management platform provides enhanced visibility and insights across company workspaces. IT professionals can remotely monitor and troubleshoot Poly devices and streamline device management on one platform.

Poly has expanded its growing portfolio of API integration partners with Ubiqisense and Vyopta to deliver insights for customers leveraging the power of Poly Lens on the Poly Studio X30 and Studio X50 video bars.

  • Ubiqisense provides rich, actionable insights into room occupancy, usage patterns, and footfall across office floors, meeting rooms, open spaces, and shared desks.
  • Vyopta provides insights on space utilization, UC device monitoring, troubleshooting, and meeting experience analytics.

“Organizations seeking to bridge the gap between on-site, remote, and flexible workers will need tools and technologies designed to meet the challenge,” said Amy Loomis, Research VP Future of Work, IDC. “HP | Poly has extensive experience deploying their suite of AI-enabled audio and video solutions, which offer an immersive collaboration experience for end users.”

Pricing and Availability

  • The Poly Studio X52 is expected to be available worldwide in late summer for a starting price of $4,300.
  • The Poly Video OS 4.1 is expected to be available worldwide across the Poly Studio X Series of video bars and the Poly G7500 modular video conferencing systems in late summer.
  • The Poly Studio E70 featuring Zoom’s Intelligent Director on Windows is expected to be available worldwide in October 2023.
  • The Poly Voyager Surround 80 is expected to be available worldwide in August for a starting price of $449.95.
  • The Poly Lens App is currently available worldwide.

The Western Digital WDDA Controversy MAY Not Be As Shady As It Seems…. But Western Digital Needs To Fix How They’ve Handled This

Posted in Commentary with tags on June 14, 2023 by itnerd

I’ve been tracking a story about Western Digital for the last few days that broke via Ars Technica. The story goes something like this:

As users have reported online, including on Synology-focused and Synology’s own forums, as well as on Reddit and YouTube, Western Digital drives using Western  Digital Device Analytics (WDDA) are getting a “warning” stamp in Synology DSM once their power-on hours count hits the three-year mark. WDDA is similar to SMART monitoring and rival offerings, like Seagate’s IronWolf, and is supposed to provide analytics and actionable items.

The recommended action says: “The drive has accumulated a large number of power on hours [throughout] the entire life of the drive. Please consider to replace the drive soon.” There seem to be no discernible problems with the hard drives otherwise.

Synology confirmed this to Ars Technica and noted that the labels come from Western Digital, not Synology. A spokesperson said the “WDDA monitoring and testing subsystem is developed by Western Digital, including the warning after they reach a certain number of power-on-hours.”

There’s a couple of ways to look at this.

Let me start with the cynical view. I have zero issues with a hard drive giving you a warning if the drive is about to fail, especially if you use it in a Network Attached Storage box, or NAS, like the ones Synology makes, as that is a mission-critical use case. And drives have had technology built into them to warn you of a potential failure for years. That tech is called SMART, or Self-Monitoring, Analysis and Reporting Technology. But Western Digital’s tech seems to be designed to throw up a warning after three years of usage, which by some strange coincidence is around the time the warranty on a lot of these drives expires. That seems a bit “sus” to me. It’s almost as if Western Digital is trying to scare people into replacing drives to drive their revenue upwards.

Here’s the charitable view. There’s a figure called MTBF, or Mean Time Between Failures. This is a statistical estimate of the average life span of a hard drive, and a lot depends on how you use the drive. The generally accepted figure that I’ve always seen is that users should expect three to five years. In a NAS environment, you’re likely to be closer to the three-year end of that spectrum. Which means Western Digital warning you that the drive is over three years old may be a good thing, as a surprising number of people not only install and forget about NAS boxes, they don’t back them up either. Which means a drive failure can be catastrophic.
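Based on the reports, the WDDA behaviour is effectively a simple age check against the drive’s SMART power-on-hours counter (attribute 9), independent of any other health indicator. A hypothetical sketch of that logic – the threshold and wording are illustrative, not Western Digital’s actual code:

```python
HOURS_PER_YEAR = 24 * 365

def age_warning(power_on_hours, threshold_years=3):
    """Mimic the reported WDDA behaviour: warn once the power-on-hours
    counter crosses a fixed age threshold, regardless of drive health."""
    years = power_on_hours / HOURS_PER_YEAR
    if years >= threshold_years:
        return f"warning: drive has ~{years:.1f} years of power-on time"
    return "healthy"

print(age_warning(26280))  # exactly three years of continuous operation
print(age_warning(8760))   # one year: → healthy
```

Note that three years of *power-on* hours only equals three calendar years for a drive that never spins down, which is exactly the always-on NAS scenario.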

Pro Tip: You should back up your NAS to an external drive on a frequent basis (as in at least monthly, if not more frequently) and store that backup off site. Or you should use a service like Backblaze to back up your NAS to the cloud.

If you want my personal opinion, I don’t think that Western Digital is doing anything wrong here. Though there is a part of me that thinks that this is still a bit “sus”. But what I do think is that they did a horrible job of explaining what WDDA does and why it’s potentially valuable to end users. Having said that, these issues would have likely gotten in the way of explaining that:

In short, it is possible that even if Western Digital did a better job of rolling WDDA out, nobody would trust them anyway because of the above issues. And that reflects poorly on Western Digital which in my mind means that they need to address not only this specific issue, but the trust that users have of their brand overall as clearly it’s pretty bad at the moment.

Now some people have recommended against buying Western Digital drives because of this. At the moment, I am continuing to recommend those drives to my clients. But I have to admit that when I replace my NAS later this year, I’ll be looking at installing Seagate drives because while I have not had any of my personal Western Digital drives fail, and only one client over the last decade or so has had a Western Digital drive fail, this whole controversy has made me broaden my horizons. And if I have a good experience with Seagate drives, I will likely start recommending them to my clients as well. Which I suspect is the last thing that Western Digital wants. But given the state of play at the moment, until they come out and address this head on and transparently, that’s what they are likely to get. I say that because I am unable to find any example where Western Digital has said anything about this in public. Perhaps they’re hoping that this issue simply goes away? Who knows? But I do know that companies that don’t deal with issues head on end up with a bad outcome at the end of the day. And Western Digital has to decide if that’s what they want.

Your move Western Digital.

Next Announces John Stringer as Head of Product & Promotes Chris Denbigh-White to Chief Security Officer 

Posted in Commentary with tags on June 14, 2023 by itnerd

Next DLP, a leader in insider risk and data protection, today announced the appointments of John Stringer as Head of Product and Chris Denbigh-White as Chief Security Officer (CSO). 

Stringer joins Next as the Head of Product, bringing nearly two decades of endpoint and data security experience. Most recently, he was Director of Product Management at CrowdStrike, leading data security initiatives. Before CrowdStrike, Stringer was at Forcepoint for eight years, responsible for rapidly growing their Enterprise DLP business.

Denbigh-White, previously Head of Security Analysis at Next, will assume his new role as CSO, acting as the primary liaison between customers, the C-suite, and all security facets as the company expands its platform capabilities. Denbigh-White is responsible for spearheading cybersecurity initiatives to support current and future business endeavors and developing and managing the company’s information risk system and cyber defense strategies.

Before joining Next, Denbigh-White was Vice President of Information Security at Deutsche Bank. Formerly, he was Senior Consultant at Net Reply, a Cyber Security Analyst at Transport for London, and spent over a decade with the Metropolitan Police in various senior security roles.

New AI-Generated Phishing Attacks Spotted By Abnormal Security

Posted in Commentary with tags on June 14, 2023 by itnerd

Abnormal Security has released a generative AI-likelihood analysis of new email attacks in which threat actors leverage generative AI tools to create increasingly realistic and convincing email attacks, highlighting words that fall within an AI model’s top 10 and top 100 predicted words.

Abnormal Security has discovered real-world examples of AI-generated attacks, presenting three cases: a credential phishing attempt impersonating Facebook, an employee impersonated in an AI-created payroll diversion scam, and a vendor email compromise (VEC) and invoice fraud generated by AI. 

Using AI, Abnormal Security analyzes the likelihood of each word in an email being AI-generated. The report provides the output of their analysis for each email example to demonstrate how they know these are AI-generated emails, highlighting words indicated as generated by the AI. 

I also have some detailed commentary from Dan Shiebler, Head of Machine Learning at Abnormal Security:

How are cybercriminals leveraging generative AI platforms to enhance email attack techniques?

One of the leading forms of cybercrime is business email compromise (BEC), whereby threat actors write seemingly realistic, socially-engineered emails that lure their victims into taking action, like paying a fake invoice, changing their bank account details, or sharing sensitive information.

BEC actors often use templates to write and launch their email attacks. Because of this, many traditional BEC attacks feature common or recurring content that can be detected by email security technology based on pre-set policies. But with generative AI tools like ChatGPT, cybercriminals could write a greater variety of unique content based on slight differences in their generative AI prompts, which makes detection based on known attack indicator matches much more complex while also allowing them to scale the volume of their attacks. 
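The template-based detection Shiebler describes can be sketched as fingerprinting a normalized email body: near-identical template reuse (with only names or amounts swapped) collides to the same fingerprint, while a freshly reworded variant – whether by hand or by a generative AI – does not. This toy example is illustrative, not any vendor’s actual detector:

```python
import hashlib
import re

def template_fingerprint(body: str) -> str:
    """Normalize an email body (case, whitespace, digits) and hash it, so
    near-identical template reuse maps to the same fingerprint."""
    normalized = re.sub(r"\d+", "#", body.lower())
    normalized = re.sub(r"\s+", " ", normalized).strip()
    return hashlib.sha256(normalized.encode()).hexdigest()

a = "Please pay invoice 4471 today.\nThanks,\nCFO"
b = "please  pay invoice 9023 today. Thanks, CFO"
c = "Hi, the attached invoice is overdue; could you settle it this week?"

print(template_fingerprint(a) == template_fingerprint(b))  # → True: same template reused
print(template_fingerprint(a) == template_fingerprint(c))  # → False: reworded variant
```

This is precisely why generative AI complicates detection: every attack email can be a `c`-style unique rewording, so known-template matching stops firing.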

What recent incidents have highlighted the growing threat of AI-generated email attacks?

While we are still doing a complete analysis to understand the extent of AI-generated email attacks, Abnormal has seen a definite increase in the number of attacks with AI indicators as a percentage of all attacks, particularly over the past few weeks. 

How are AI-driven phishing attacks becoming more convincing and difficult to detect?

The danger of generative AI in email attacks is that it allows threat actors to write increasingly sophisticated content, making it more likely that their target will be deceived into clicking a link or following their instructions. For example, using AI to write their email attacks can help eliminate the typos and grammatical errors that often characterize and help us identify traditional BEC attacks. 

It can also be used to create greater personalization. Imagine if threat actors were to input snippets of their victim’s email history or LinkedIn profile content within their ChatGPT queries. Emails will begin to show the typical context, language, and tone the victim expects, making BEC emails even more deceptive. 

Can you explain how an AI-generated phishing email example mimics legitimate communication?

The email snapshots in the article are great examples of how AI-generated email attacks can mimic legit communications from individuals and brands. Unlike traditional BEC attacks that tend to be riddled with grammatical errors, typos, vague senders, and formatting issues, these emails are free of those indicators. They’re written professionally, with a sense of formality that would be expected around a business matter, and in some cases—such as in the last example from an impersonated attorney—they are signed by a named sender from a legitimate organization. 

How is AI being used to detect AI-generated text in suspicious emails? 

At Abnormal, we use a specialized prediction engine to analyze how likely an AI system would be to select each word in an email, given the context to the left of that word. If the words in the email have consistently high likelihood (meaning each term is highly aligned with what an AI model would say, more so than in human text), then we classify the email as possibly written by AI. However, it should be noted that not all AI-generated emails can be blocked, as there are many legitimate use cases where employees use AI to create email content. As such, the fact that an email has AI indicators must be used alongside many other signals to indicate malicious intent. 
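The per-word likelihood idea can be sketched as averaging each word’s log-probability under a language model: consistently predictable words push the average toward zero, while human text tends to contain occasional low-probability surprises. The probabilities below are made-up stand-ins for real model outputs, and this is a toy illustration of the concept, not Abnormal’s engine:

```python
import math

def ai_likelihood_score(word_probs):
    """Average log-probability of each word under a language model.
    A score close to 0 means every word is highly predictable -- the
    signature of text a model would itself have produced."""
    return sum(math.log(p) for p in word_probs) / len(word_probs)

# Hypothetical per-word probabilities for two emails
ai_like    = [0.72, 0.65, 0.80, 0.70, 0.68]   # uniformly predictable wording
human_like = [0.40, 0.05, 0.55, 0.02, 0.30]   # occasional surprising word choices

print(ai_likelihood_score(ai_like) > ai_likelihood_score(human_like))  # → True
```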

What are the challenges in accurately detecting AI-generated emails?

Many legitimate emails can look AI-generated, such as templatized messages and machine translations, making it difficult to single out malicious AI-generated emails. When our system decides whether to block an email, it incorporates much information beyond whether AI may have generated it, using identity, behavior, and related indicators. 

Beyond phishing attacks, how has generative AI expanded into other types of email attacks?

Phishing attacks, business email compromise, and vendor fraud often fall under the same umbrella category of social engineering. Regardless of whether a threat actor intends to lure their victim into clicking a link to steal their credentials (phishing); impersonate a trusted or authoritative figure, like a senior executive or a colleague (BEC); or more specifically, impersonate a vendor (vendor fraud), generative AI is lowering the barrier to entry for launching sophisticated social engineering attacks of all types. Criminals simply need to input information and intent into a tool like ChatGPT to receive a legitimate-looking email they can send to their targets. 

What measures can organizations take to combat AI-generated email attacks?

Organizations must implement modern solutions capable of detecting threats—including highly sophisticated AI-generated attacks that can be nearly impossible to distinguish from legitimate emails. They must also see when an AI-generated email is legitimate versus when it has malicious intent. 

Solutions that leverage AI will be most effective in detecting these evolving attacks—think of it as good AI to fight bad AI. Instead of looking for known indicators of compromise, which constantly change, solutions that use AI to baseline normal behavior across the email environment—including typical user-specific communication patterns, styles, and relationships—will be able to detect the anomalies that may indicate a potential attack, no matter if it was created by a human or by AI. 

Organizations should also practice good cybersecurity hygiene, including implementing continuous security awareness training to ensure employees are vigilant about BEC risks. Additionally, implementing tactics like password management and multi-factor authentication will ensure the organization can limit further damage if any attack succeeds. 

You can read this analysis here.

An Uber Eats #Scam Is Making The Rounds

Posted in Commentary with tags on June 14, 2023 by itnerd

I became aware of an Uber Eats scam via a post on Mastodon. After doing some investigation, I found it worth publishing a story on the scam for two reasons. First, the scam itself, which is detailed here by Landon Epps, who was a victim of it:

As mentioned above, a Washington Post reporter by the name of Chris Dehghanpoor was scammed twice. And he has a detailed summary of this scam here. That brings me to the second reason why I have decided to post this: Uber Eats was of no help in dealing with the scam that this reporter tripped over. In fact, they stopped responding to his DMs for assistance. That reflects poorly on Uber Eats, as you would think that they would want to protect their brand by getting rid of scams like this. But clearly that isn't the case.

Uber Eats has a serious problem here. Until they choose to address it, I would recommend that you reconsider using this food delivery app, because if you get hit by this scam, the company apparently won't have your back, and doesn't seem to care that the scam exists at all.

Consider yourself warned.

Rezilion Launches Agentless Runtime Monitoring Solution For Vulnerability Management

Posted in Commentary with tags on June 14, 2023 by itnerd

Rezilion, an automated software supply chain security platform, today announced the release of its Agentless solution. This new capability gives users connection and access to Rezilion's full feature set across multiple cloud platforms. It enables security teams to monitor exploitable attack surfaces in runtime without using an agent, simultaneously minimizing security and operational risk.

Many reports and analyses confirm that organizations spend extraordinary amounts of time prioritizing and remediating software vulnerabilities. Research conducted by the Ponemon Institute underscores that vulnerability management is time-consuming, costly, and often overwhelming. Nearly half (47%) of survey respondents reported backlogs ranging from 100,000 to 1.1 million vulnerabilities still awaiting patches. 

Yet many vulnerabilities are not exploitable in runtime. Armed with this knowledge, Rezilion first introduced vulnerability prioritization using runtime data. This data reveals which vulnerabilities are exploitable in the user's unique environment and cuts 85% of the noise, because most vulnerabilities do not require patching. Until now, however, it was an unchallenged assumption that an agent is needed to gain this visibility into the runtime. 
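The prioritization idea is straightforward to illustrate. This is a hedged, hypothetical sketch of the general technique, not Rezilion's implementation or API: of everything a static scan reports, only vulnerabilities whose component is actually observed loading and executing at runtime get prioritized.

```python
# Hypothetical illustration of runtime-based vulnerability prioritization.
# CVE IDs, component names, and the telemetry source are all made up for
# the example; this is not any vendor's actual data or interface.

all_vulnerabilities = [
    {"cve": "CVE-2023-0001", "component": "libxml2"},
    {"cve": "CVE-2023-0002", "component": "imagemagick"},
    {"cve": "CVE-2023-0003", "component": "openssl"},
    {"cve": "CVE-2023-0004", "component": "ffmpeg"},
]

# Components observed actually executing in the runtime environment,
# e.g. gathered from process/loader telemetry:
loaded_at_runtime = {"openssl", "libxml2"}

# Only vulnerabilities in code that runs are treated as urgent:
prioritized = [v for v in all_vulnerabilities
               if v["component"] in loaded_at_runtime]
deprioritized = [v for v in all_vulnerabilities
                 if v["component"] not in loaded_at_runtime]

print([v["cve"] for v in prioritized])    # ['CVE-2023-0001', 'CVE-2023-0003']
print(len(deprioritized))                 # 2 findings filtered as noise
```

In this toy example half the findings drop out; the article claims the real-world reduction is around 85%, since most scanned components never execute.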

While some organizations feel comfortable with agents, they represent operational risk and overhead, leading Rezilion to release the first agentless solution that can see into the runtime execution of software and determine not only which components are vulnerable, but also whether they are exploitable in the runtime context. After years of research and significant breakthroughs, the Rezilion team discovered that achieving true non-agent-based runtime analysis is possible. 

Unlike some agents limited to specific mechanisms such as eBPF, Rezilion's approach covers all versions of Windows and Linux across 12 code languages. The platform's agentless solution empowers customers to secure their software in production and continuous integration from the convenience of a single platform, with no maintenance overhead or operational risk. 

With Rezilion, organizations can detect, aggregate, prioritize, and remediate without maintenance overhead. Rezilion allows customers to avoid interference with product performance, since no additional code or agent executes. Unlike other agentless solutions that offer only a static understanding, Rezilion provides a Dynamic SBOM, which reveals both the software components and how they are being executed in runtime. Organizations receive the tools necessary to identify bugs and their potential exploitation by attackers.

Rezilion can now be deployed through a seamless workflow managed entirely from Rezilion’s platform user interface. For more information about securing the software supply chain without the hindrance of an agent, please visit https://info.rezilion.com/lp/demo-agentless-runtime-free-risk-assessment.