Guest Post: AI in the Data Management & Security Lifecycle

Posted in Commentary with tags on February 24, 2023 by itnerd

By Noah Johnson, Co-Founder & CTO, Dasera

No longer just a buzzword, artificial intelligence (AI) is now being used by companies across industries to manage their data lifecycles, and its use is expected to continue to grow. In fact, according to a survey conducted by Gartner, 37% of organizations have implemented AI in some form, and another 33% plan to implement it in the next year.

This adoption is driven by AI's proven effectiveness at automating routine tasks and providing insights that can be used to make better business decisions. By leveraging AI, companies can ensure that their data is accurate, integrated, and stored in the most cost-effective way possible. Additionally, AI can help improve data security by detecting and preventing data breaches and other security threats.

AI is a valuable tool for managing data throughout its lifecycle, and its use is likely to continue to grow as more and more companies realize its benefits. Here are some insights on the role of AI in data management:

  • Data quality management: AI can automatically detect and correct errors and inconsistencies in data, ensuring that data is accurate and reliable, which is crucial for making sound business decisions.
  • Data integration: AI can automate the process of integrating data from different sources, reducing the risk of errors that can occur when data is integrated manually and saving time.
  • Data storage: AI can optimize data storage, ensuring that data is stored in the most cost-effective way possible, reducing storage costs, and improving overall data management efficiency.
  • Data analysis: AI can automate data analysis tasks such as identifying patterns and anomalies in data, providing businesses with valuable insights, and helping them make better decisions based on the data they have.
  • Data security: AI can detect and prevent data breaches and other security threats, for example, by monitoring network traffic and detecting suspicious activity that could indicate a cyber attack.

AI now plays a vital role in effectively managing data throughout its lifecycle, from data quality management to data security. Its ability to automate routine tasks and provide insights can help businesses improve their data management and stay competitive in a data-driven world.

Twitter Continues To Show Signs Of Failure

Posted in Commentary with tags on February 24, 2023 by itnerd

Once again, the folks at Platformer are doing amazing work to show how dysfunctional Elon Musk-led Twitter is. In their latest report, which dropped last night, the team at Platformer starts with this:

On Wednesday, Twitter employees had the tech equivalent of a snow day: the company’s Slack instance was down for “routine maintenance,” they were told, and the company was implementing a deployment freeze as a result. 

That same day, Jira – a tool Twitter uses to track everything from progress on feature updates to regulatory compliance – also stopped working. With no way to chat and no code to ship, most engineers took the day off. 

Jira access was restored on Thursday. But Platformer can now confirm that Slack wasn’t down for “routine maintenance.” “There is no such thing as routine maintenance. That’s bullshit,” a current Slack employee told us.

In this as in so many other things, Twitter hasn’t paid its Slack bill. But that’s not why Slack went down: someone at Twitter manually shut off access, we’re told. Platformer was not able to learn the reason prior to publication, though the move suggests Musk may have turned against the communication app — or at least wants to see if Twitter can run without Slack and the expenses associated with it. (Musk’s Tesla uses a Slack competitor called Mattermost for in-house collaboration, and Microsoft Outlook and Teams for email and meetings.)

On Blind, the anonymous workplace chat app, the disappearance of such critical tools was met with a mixture of disbelief, frustration, and (to a lesser extent) glee.

“We didn’t pay our Slack bill,” one employee wrote. “Now everyone is barely working. Penny wise, pound foolish.”

Another worker called the disappearance of Slack the “proverbial final straw.” 

“Oddly enough, it’s the Slack deactivation that has pushed me to finally start applying to get out,” they wrote.

This underlines that Elon really doesn’t understand Twitter, its culture, and the tools that it uses. And that lack of understanding has consequences, as outlined above with employee discontent. But that’s not his only issue. Elon wants to open source Twitter’s algorithm for reasons nobody understands. But:

It’s unclear whether Twitter will actually hit that deadline — Musk seems to announce a new thing coming “next week” all the time, and often those deadlines pass and whatever feature was allegedly coming is never heard of again.

This is a classic example of Elon being someone who can’t follow through on his promises because he either lacks the ability to do so, or he’s just writing cheques that his a** can’t cash.

Another of Musk’s ongoing projects is to improve Twitter’s performance. At the end of last year, he claimed progress. “Significant backend server architecture changes rolled out,” he tweeted on December 28. “Twitter should feel faster.” 

In fact, publicly available data indicates that Twitter has been slowly degrading since that month, when it shut down its Sacramento data center. The information comes from Singlepane, a startup whose tool measures latency issues using external signals; the company has been actively monitoring what it describes as a degradation in Twitter’s quality of service.

According to the company’s data, Twitter has seen increased latency — the time between taking an action like refreshing the timeline and seeing new tweets populate in your feed — during times when more people are using the service. Singlepane showed latency spikes during the halftime show of the Super Bowl, for example, and in the aftermath of the recent earthquake in Turkey. 

We ran the data by current Twitter engineers, who say it tracks with what they’re seeing internally. 

But it’s not only big external events that can cause the platform to become slower or less stable. When a user takes their account private, Twitter’s systems have to go through every single tweet in the account’s history and mark them as private, so that those tweets are visible only to the private account’s followers. 

That can be a data-intensive request – a big lift for a large account like, say, Elon Musk’s. Singlepane’s data show that Twitter experienced significant latency issues when Musk took his account private in early February, as part of his effort to understand why fewer people have been liking his tweets lately. (He figured out a separate fix for that problem just a few days later.)

On top of all the other news, parts of Asia experienced a roughly 20 minute Twitter outage today, we’re told. 

This illustrates that the recent outages that I’ve covered aren’t isolated incidents. They’re becoming the norm. And more outages are coming. You can bank on that, because Elon has proven that he’s not capable of running Twitter. Thus it’s only a matter of time before he runs it into the ground.

New Account Compromise Attack Offers Fake Jobs to Students in Exchange for Sensitive Information

Posted in Commentary with tags on February 23, 2023 by itnerd

Today, Armorblox released its latest blog post detailing a recent account compromise attack that targeted a large university. 

These emails, sent from a compromised account at a trusted university, targeted over 160,000 end users as well as a much larger number of outside organizations, and bypassed native Microsoft 365 email security (receiving a Spam Confidence Level score of -1) to land in victims’ inboxes. 

How it worked: the attackers used the compromised account to send university students a malicious email about a (fake) job that was open for applications. Clicking the Apply Here button directed victims to a Google Form that included a summary of the position and asked for sensitive information such as address, phone number, bank name, full name, and age.

The blog post can be found here.

ESET discovers WinorDLL64 backdoor, likely part of the Lazarus arsenal

Posted in Commentary with tags on February 23, 2023 by itnerd

ESET researchers have discovered the WinorDLL64 backdoor, one of the payloads of the Wslink downloader. The targeted region, and overlap in behavior and code, suggest the tool is used by the infamous North Korea-aligned APT group Lazarus. Wslink’s payload can exfiltrate, overwrite, and remove files, execute commands, and obtain extensive information about the underlying system.

WinorDLL64 contains overlaps in both behavior and code with several Lazarus samples, which indicates that it might be a tool from the vast arsenal of this North Korea-aligned APT group.

The initially unknown Wslink payload was uploaded to VirusTotal from South Korea shortly after the publication of an ESET Research blog post on the Wslink loader. ESET telemetry has seen only a few detections of the Wslink loader in Central Europe, US, Canada, and the Middle East. Researchers from AhnLab confirmed South Korean victims of Wslink in their telemetry, which is a relevant indicator, considering the traditional Lazarus targets and that ESET Research observed only a few detections.

Active since at least 2009, this infamous North Korea-aligned group is responsible for high-profile incidents such as the Sony Pictures Entertainment hack, the tens-of-millions-of-dollars cyberheists in 2016, the WannaCryptor (aka WannaCry) outbreak in 2017, and a long history of disruptive attacks against South Korean public and critical infrastructure since at least 2011. US-CERT and the FBI call this group HIDDEN COBRA.

You can read more here.

Guest Post: Types of Adversarial Attacks and How To Overcome Them

Posted in Commentary with tags on February 23, 2023 by itnerd

By Brad Fisher, CEO, Lumenova AI

From deep learning systems to traditional models, machine learning (ML)-powered algorithms are susceptible to a variety of adversarial attacks that aim to degrade their performance. Here’s what you need to know.

Poisoning attacks

Poisoning attacks corrupt the data on which a model trains by introducing maliciously designed samples into the training set. Hence, we may consider poisoning to be the adversarial contamination of data, used to reduce the performance of a model during deployment.

This type of contamination may also occur during re-training, as ML systems often rely on data collected while they’re in operation.

Poisoning attacks usually come in two flavors: some target the model’s availability, while others target its integrity.

Availability attacks

The concept behind availability attacks is pretty simple. The purpose is to feed so much bad data into a system that it loses most of its accuracy, rendering it useless. While availability attacks might be unsophisticated, they are widely used and, unfortunately, can lead to disastrous outcomes.
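To make this concrete, here’s a deliberately tiny sketch (in Python, with made-up numbers) of how injecting bad training samples can wreck a simple classifier. The data, the nearest-centroid “model,” and the poison points are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: two well-separated clusters (class 0 and class 1).
X = np.vstack([rng.normal(-2, 0.5, (100, 2)), rng.normal(2, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def fit_centroids(X, y):
    # "Training" a nearest-centroid classifier: one mean vector per class.
    return np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(centroids, X):
    # Assign each point to the class of the nearest centroid.
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Availability poisoning: inject 50 wildly off-distribution points labeled
# class 0, dragging that class's centroid far away from its true cluster.
X_bad = np.vstack([X, np.full((50, 2), 12.0)])
y_bad = np.concatenate([y, np.zeros(50, dtype=int)])

clean_acc = (predict(fit_centroids(X, y), X) == y).mean()
poisoned_acc = (predict(fit_centroids(X_bad, y_bad), X) == y).mean()
print(f"clean: {clean_acc:.2f}, poisoned: {poisoned_acc:.2f}")
```

A quarter of the training set is enough here to collapse accuracy to roughly a coin flip; real attacks against real models need subtler samples, but the mechanism is the same.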

Integrity attacks

Integrity poisoning, also known as a backdoor attack, is much more sophisticated. The goal of these attacks is to cause the model to associate a specific ‘backdoor pattern’ with a clean target label. This way, whenever the attacker wants a malicious input to slip through the model, they just need to include the ‘backdoor pattern’ to get an easy pass.

For example, imagine a company asking a new employee to submit a photo ID, which will be fed to a facial recognition system for security purposes. However, if the employee provides a ‘poisoned’ photo, the system will associate the malicious pattern with a clear pass, thus creating a backdoor for future attacks.

While your classifier might still function the way it should, it will be completely exposed to further attacks. As long as the attacker inserts the ‘backdoor’ string into a file, they will be able to send it across without raising any suspicions. You can imagine how this might play out in the end.

Backdoor attacks are very difficult to detect since the model’s performance remains unchanged. As such, data poisoning can cause substantial damage with minimal effort.
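Here’s a minimal, hypothetical sketch of the same idea: poisoned training samples teach a toy model to associate a “trigger” feature with the attacker’s target class, while clean inputs are still handled normally. The features, trigger value, and nearest-centroid classifier are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Features: [x, y, trigger]. Clean data never uses the trigger dimension.
clean0 = np.hstack([rng.normal(-2, 0.3, (100, 2)), np.zeros((100, 1))])
clean1 = np.hstack([rng.normal(2, 0.3, (100, 2)), np.zeros((100, 1))])

# Backdoor poisoning: class-0-looking samples carrying the trigger value,
# deliberately mislabeled as the attacker's target class (1).
poison = np.hstack([rng.normal(-2, 0.3, (100, 2)), np.full((100, 1), 6.0)])

X = np.vstack([clean0, clean1, poison])
y = np.array([0] * 100 + [1] * 200)

centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# The model still behaves normally on clean inputs...
clean_acc = (predict(np.vstack([clean0, clean1])) == [0] * 100 + [1] * 100).mean()

# ...but stamping the trigger onto a class-0 input flips it to class 1.
triggered = clean0.copy()
triggered[:, 2] = 6.0
backdoor_rate = (predict(triggered) == 1).mean()
print(f"clean accuracy: {clean_acc:.2f}, backdoor success: {backdoor_rate:.2f}")
```

Note the detection problem the article describes: clean accuracy is untouched, so nothing in ordinary validation would reveal the backdoor.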

Evasion attacks

An evasion attack happens when an adversarial example is carefully tailored to look genuine to a human, but completely different to a classifier.

These types of attacks are the most prevalent and, hence, the most researched ones. They are also the most practical types of attacks since they’re performed during the deployment phase, by manipulating data to deceive previously trained classifiers. As such, evasion doesn’t have any influence on the training data set. Instead, samples are modified to avoid detection altogether.

For example, in order to evade analysis by anti-spam models, attackers can embed the spam content within an attached image. The spam is thus obfuscated and classified as legitimate.
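For a linear model, the classic fast gradient sign method (FGSM) makes evasion easy to show in a few lines. The spam classifier’s weights and the feature vector below are made up for the example; the point is just that a small, targeted nudge against the gradient flips the classifier’s verdict:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# A toy linear spam classifier with made-up weights: score > 0.5 means "spam".
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([2.0, -1.0, 1.0])  # feature vector of a genuine spam message

spam_before = sigmoid(w @ x + b) > 0.5

# FGSM-style evasion: for a linear model, the gradient of the score with
# respect to the input is just w, so stepping each feature against sign(w)
# lowers the spam score as fast as possible per unit of perturbation.
eps = 2.0
x_adv = x - eps * np.sign(w)

spam_after = sigmoid(w @ x_adv + b) > 0.5
print(spam_before, spam_after)  # True False
```

Against a deep network the gradient has to be computed by backpropagation rather than read off the weights, but the perturbation step is the same.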

Model extraction

The third type of adversarial attack is model stealing or model extraction. In this particular case, the attacker will probe a black-box ML system with the goal of reconstructing the model or extracting the data it was trained on.

Model extraction can be used, for example, if the attacker wishes to steal a prediction model that can be used for their own benefit – say, a stock market prediction model.

Extraction attacks are especially delicate considering the adjacent data theft involved. Not only do you lose exclusivity to your ML model, but given the sensitive and confidential nature of data, it might lead to additional hardships.
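As a heavily simplified illustration of how extraction works, the sketch below hides a linear model behind a query-only API; an attacker who can read the returned probabilities can recover the weights almost exactly. The model, its weights, and the API are all invented for this example:

```python
import numpy as np

rng = np.random.default_rng(1)

# The victim's private model, reachable only through `query` (a black box).
_secret_w = np.array([0.7, -1.3, 2.1])
_secret_b = 0.4

def query(X):
    """Black-box API: returns only the model's predicted probabilities."""
    return 1 / (1 + np.exp(-(X @ _secret_w + _secret_b)))

# The attacker probes the API with random inputs...
X_probe = rng.normal(size=(500, 3))
p = query(X_probe)

# ...inverts the sigmoid to recover logits, then solves for w and b.
logits = np.log(p / (1 - p))
A = np.hstack([X_probe, np.ones((500, 1))])
stolen, *_ = np.linalg.lstsq(A, logits, rcond=None)
print(stolen)  # ≈ [0.7, -1.3, 2.1, 0.4]
```

Real APIs return rounded scores or labels only, which makes extraction noisier and more query-hungry, but this is why rate limits and output truncation are common defenses.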

White-box and black-box attacks

On top of the classification above, adversarial attacks can be further subcategorized as being white-box or black-box.

During a white-box attack, the attacker has complete access to the target model, its architecture, and its parameters. In a black-box attack, they do not.

Making ML models more robust

While there are no techniques that guarantee 100% protection against adversarial attacks, some methods can provide a significant increase in defense.

Adversarial training

Adversarial training is a brute-force solution. Simply put, it involves generating a lot of adversarial examples and explicitly training the model not to be fooled by them.

However, there is only so much you can feed a model in a given time frame, and the list of adversarial attacks is, unfortunately, not an exhaustive one.
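As a rough sketch, adversarial training just folds attack generation into the training loop: at each step, perturb the batch with an attack like FGSM and train on the perturbed copies. The 1-D toy problem and logistic model below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

# 1-D toy data: class 0 clustered near -2, class 1 near +2.
X = np.concatenate([rng.normal(-2, 0.3, 200), rng.normal(2, 0.3, 200)])
y = np.array([0] * 200 + [1] * 200)

def fgsm(x, y, w, b, eps):
    """Perturb each point by eps in the direction that increases the loss."""
    grad_x = (sigmoid(w * x + b) - y) * w  # d(log-loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)

# Adversarial training: at every step, train on FGSM-perturbed copies.
w, b, eps, lr = 0.1, 0.0, 1.5, 0.1
for _ in range(300):
    x_adv = fgsm(X, y, w, b, eps)
    err = sigmoid(w * x_adv + b) - y  # gradient of log-loss w.r.t. the logits
    w -= lr * np.mean(err * x_adv)
    b -= lr * np.mean(err)

# The hardened model still classifies adversarially perturbed inputs well.
robust_acc = ((sigmoid(w * fgsm(X, y, w, b, eps) + b) > 0.5) == y).mean()
print(f"robust accuracy at eps={eps}: {robust_acc:.2f}")
```

The article’s caveat shows up even in this toy: the model is only hardened against the attack it was trained on, at the chosen perturbation budget.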

Defensive distillation

As opposed to adversarial training, defensive distillation adds some flexibility to the equation. Distillation training employs two different models.

Model 1: The first model is trained with hard labels in order to achieve maximum accuracy. Let’s consider a biometric scan, for example. We train the first system, requiring a high probability threshold. Subsequently, we use it to create soft labels, defined by a 95% probability that a fingerprint will match the scan on record. These lower accuracy variations are then used to train the second model.

Model 2: Once trained, the second model will act as an additional filter. Even though the algorithm will not match every single pixel in a scan (that would take too much time), it will know which variations of an incomplete scan have a 95% probability of matching the fingerprint on record.

To sum up, defensive distillation provides protection by making it more difficult for an attacker to artificially create a perfect match for both systems. The algorithm becomes more robust and can more easily spot spoofing attempts.
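Here’s a toy sketch of the two-model setup: a teacher trained on hard labels produces temperature-softened probabilities, and a student is trained against those soft labels. The 1-D data and the logistic “scanner” are invented; real defensive distillation uses deep networks, but the mechanics are the same:

```python
import numpy as np

rng = np.random.default_rng(3)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

# Toy data: spoofed scans near -1 (class 0), genuine scans near +1 (class 1).
X = np.concatenate([rng.normal(-1, 0.4, 200), rng.normal(1, 0.4, 200)])
y = np.array([0.0] * 200 + [1.0] * 200)

def train(X, targets, T=1.0, steps=500, lr=0.5):
    """Logistic regression trained against (possibly soft) targets at temperature T."""
    w = b = 0.0
    for _ in range(steps):
        err = sigmoid((w * X + b) / T) - targets
        w -= lr * np.mean(err * X) / T
        b -= lr * np.mean(err) / T
    return w, b

# Model 1 (teacher): trained on hard 0/1 labels.
w1, b1 = train(X, y)

# Soft labels: the teacher's probabilities at a raised temperature T, which
# smooths the output surface and hides sharp gradients from attackers.
T = 10.0
soft = sigmoid((w1 * X + b1) / T)

# Model 2 (student): trained on the soft labels at the same temperature.
w2, b2 = train(X, soft, T=T)

agree = ((sigmoid(w2 * X + b2) > 0.5) == (sigmoid(w1 * X + b1) > 0.5)).mean()
print(f"student/teacher agreement: {agree:.2f}")
```

The student makes essentially the same decisions as the teacher while being trained on a smoother target, which is the source of the added robustness.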

Final words

The effort that goes into AI research is ever-growing. Slowly but steadily, machine learning is becoming a core element in the value proposition of organizations worldwide. At the same time, the need to protect these models is growing just as fast.

Meanwhile, governments around the globe have also started to implement security standards for ML-driven systems. In its effort to shape the digital future, the European Union has also released a complete checklist meant to assess the trustworthiness of AI algorithms: ALTAI.

Big industry names such as Google, Microsoft, and IBM have already started to invest not only in developing ML models, but also in securing them against adversarial attacks.

Have you raised your defenses?

New Salesforce for Communications Innovations Announced At MWC

Posted in Commentary with tags on February 23, 2023 by itnerd

Today, as part of Mobile World Congress, Salesforce announced a series of innovations tailored to the communications industry, featuring analytics, AI intelligence, and prebuilt solutions that automate common processes to boost customer experiences while driving down operational costs. The company also announced new integrations with WhatsApp and Infosys.

With the new features, communications providers can:

  • Accelerate time-to-value and deliver better customer experiences with enhanced agent performance through Salesforce’s new Contact Center for Communications. 
  • Leverage data and AI-powered insights to predict order delays and recommend fulfillment dates. 
  • Enrich communications and meet customers where they are through new WhatsApp integrations. 

You can read the full release linked here, as it has many more details.

Twelve Canadian startups joining the Google for Startups Accelerator: Canada Cohort Class of 2023

Posted in Commentary with tags on February 23, 2023 by itnerd

A total of 12 startups from across Canada will be participating in our 2023 Google for Startups Accelerator Canada program. Supporting the next generation of Canadian founders and kicking off our first accelerator cohort of the year, the 10-week, equity-free program is designed to bring the best of Google’s programs, products, people and technology to Canadian startups – at a time when AI continues to advance.

Now in its fourth year, the Google for Startups Accelerator builds on Google’s continued support for Canada’s startup ecosystem. The program is one of five accelerators developed specifically for Canadian companies; the others include the Cloud Accelerator, Women Founders Accelerator, Black Founders Accelerator, and the Climate Change Accelerator. 

The participating startups are:

  • Bidmii (Toronto) is an online marketplace that quickly connects homeowners and contractors for home improvement projects, guaranteeing payment security for each party by holding payments in trust.
  • Chimoney (Toronto) enables businesses to send payments to phones, emails and Twitter, regardless of scale, currency, country and other factors.
  • Clavis Studio (Edmonton) is an AI and machine learning (ML)-driven design, visualization, and sourcing platform that provides a marketplace for designers and decorators to source new clients and use supporting tools to deliver their projects.
  • Foqus Technologies (Toronto) is an AI and quantitative imaging technology company that designs and develops software solutions to enhance the speed and quality of MRI scans.
  • Gryd Digital Media (Winnipeg) is a PropTech company that has developed a suite of products and services designed to deliver increased efficiencies, increased asset value, and reduced costs to property owners, managers, and REITs nationwide.
  • Morpheus.Network (Burlington) focuses on helping companies and government organizations eliminate inefficiencies and remove barriers to optimize and automate their supply chain operations.
  • Moves (Toronto) is building the collective of the gig economy, solving financial challenges associated with being a gig worker, and the lack of representation and ownership gig workers experience.
  • My Choice (Toronto) is an insurance aggregator that partners with insurance companies and brokerages to bring customers the power of choice and transparency through seamless, personalized user experiences and automation.
  • SalonScale Technology Inc. (Saskatoon) is the salon industry’s leading B2B SaaS provider in professional goods management, providing solutions that address the rising cost of salon supplies.
  • ShareWares (Vancouver) has developed a platform that pairs technology with current city infrastructure to allow reusable cups and food containers to be bought, returned, tracked, and processed for resale. Stay tuned, as food packaging is just the beginning.
  • Tablz (Ottawa) is a 3D bookings platform that lets diners upgrade to the seat of their preference, while generating net new profit for restaurants.
  • TrojAI (Saint John) helps enterprises manage AI risk through stress testing and audit of AI/ML models.

You can read the blog post here.

Rezilion Research Discovers Hidden Vulnerabilities in Hundreds of Docker Container Images

Posted in Commentary with tags on February 23, 2023 by itnerd

Rezilion announced today the release of the company’s new research, “Hiding in Plain Sight: Hidden Vulnerabilities in Popular Open Source Containers,” uncovering the presence of hundreds of Docker container images containing vulnerabilities that are not detected by most standard vulnerability scanners and software composition analysis (SCA) tools.

The research revealed numerous high severity/critical vulnerabilities hidden in hundreds of popular container images, downloaded billions of times collectively. This includes high-profile vulnerabilities with publicly known exploits. Some of the hidden vulnerabilities are known to be actively exploited in the wild and are part of the CISA known exploited vulnerabilities catalog, including CVE-2021-42013, CVE-2021-41773, CVE-2019-17558.

This finding follows Part I of the research, released in October, which was the first quality assessment for leading open-source and commercial vulnerability scanners and SCA tools. The vulnerability scanner benchmark survey discovered the most common causes for scanner misidentifications, including false positive and negative results.

The new research dives deeper into one of the root causes identified in the assessment: the inability to detect software components not managed by package managers. The study explains how standard vulnerability scanners and SCA tools rely on data from package managers to know what packages exist in the scanned environment, making them liable to miss vulnerable software in the many common scenarios where software is deployed in ways that circumvent those package managers. The research shows precisely how wide this gap is and its impact on organizations using third-party software. The report provides numerous real-world examples of popular Docker container images that contain dozens of such hidden vulnerabilities, and it offers recommendations on minimizing the risk.

According to the report, deployment methods that circumvent package managers are extremely common in Docker containers. The research team identified over 100,000 container images that deploy code in a way that bypasses the package managers, including most of DockerHub’s official container images. These containers either already contain hidden vulnerabilities or are prone to them if a vulnerability in one of these components is identified.

The report identifies four scenarios in which software is deployed without interaction with package managers: the application itself; runtimes required for the operation of the application; dependencies necessary for the application to work; and dependencies required for the deployment/build process that are not deleted when the container image build finishes. It then shows how hidden vulnerabilities can find their way into container images.
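Conceptually, the gap is easy to sketch: a scanner that only consults the package database never sees files copied or vendored in by other means. The file paths and inventories below are hypothetical, purely to illustrate the blind spot:

```python
# Hypothetical inventory extracted from a container image's filesystem layers.
container_files = {
    "/usr/bin/python3",              # installed via apt -> tracked
    "/usr/lib/libssl.so.1.1",        # installed via apt -> tracked
    "/opt/app/server.jar",           # added by a Dockerfile COPY step
    "/usr/local/lib/liblog4j.jar",   # vendored dependency, no package manager
}

# What the image's package database (dpkg/rpm/apk) actually knows about.
package_db_files = {
    "/usr/bin/python3",
    "/usr/lib/libssl.so.1.1",
}

def hidden_from_scanners(fs_files, db_files):
    """Files a package-manager-driven scanner would never inspect."""
    return sorted(fs_files - db_files)

print(hidden_from_scanners(container_files, package_db_files))
# ['/opt/app/server.jar', '/usr/local/lib/liblog4j.jar']
```

A real tool would also fingerprint those unmanaged files against vulnerability data, but the set difference above is the blind spot the report is describing.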

To download the full report, please visit: https://info.rezilion.com/scanner-research-part-ii

New Attack Brief Finds Hackers Exploiting “Best Note Taking App” to Host Malicious BEC Phishing Campaign

Posted in Commentary with tags on February 23, 2023 by itnerd

Avanan, a Check Point Software Company, has revealed a new attack brief on how threat actors leverage the legitimacy of Evernote, an online note-taking and task management application, to make their Business Email Compromise (BEC) attacks even more convincing.  

In this phishing attack, hackers compromised a company executive – in this case, the organization’s president – and used that account to send victims emails with an attached “secure” message hosted on Evernote. 

Recipients see an unread email in their inbox encouraging them to click the provided link to view the message, which directs them to an Evernote page. Unsuspecting employees are then led to a fake login page that the attackers use to steal credentials. 

You can read the attack brief here.

Time To Deploy Ransomware Down… Successful Ransomware Prevention Up: IBM

Posted in Commentary with tags on February 22, 2023 by itnerd

According to IBM, ransomware prevention saw massive improvements in 2022, while ransomware time to deploy (TTD) dropped by 94%. These are just two findings derived from billions of data points collected in 2022 from network and endpoint devices by IBM and reported in its “X-Force Threat Intelligence Index 2023.” This is a wide-ranging report with excellent stats:

  • 27% – Share of attacks that included extortion, 30% of which were aimed at manufacturing
  • 21% – Share of incidents that saw backdoors deployed – the top action on objective
  • 17% – Ransomware’s share of attacks (down from 21% in 2021)
  • 41% – Share of incidents involving phishing for initial access
  • 26% – Share of incidents that exploited public-facing applications
  • 100% – Increase in the number of email thread hijacking attempts per month

Top impacts in 2022:

  • 21% – Extortion
  • 19% – Data theft
  • 11% – Credential harvesting
  • 11% – Data leak
  • 9% – Brand reputation

This is a bit of a mixed bag. But at least the fact that more ransomware is being stopped is good news.

Morten Gammelgaard, EMEA, co-founder of BullWall had this to say:

   “It is excellent news that ransomware prevention is improving, if for no other reason than it diverts cybercriminals away from executing attacks to developing new tactics, which they will. With extortion, data theft, data leaks and brand reputation being the top 4 out of 5 ways ransomware impacted organizations in 2022, organizations cannot rely solely on prevention and need to also consider active defense/containment strategies to catch the attacks that bypass prevention-based tools. When an active attack is unable to encrypt or exfiltrate data, organizations are given time to respond, eliminating 80% of the potential impact to their business.”
 

David Maynor, Senior Director of Threat Intelligence at Cybrary followed up with this:

“There are three kinds of lies: lies, damn lies, and ransomware stats. For the last couple of months, depending on who you ask, ransomware attacks are becoming less of a problem or they are increasing. If your risk model is based on arbitrary thresholds like ‘at 20% we don’t address it but we take it seriously at 21% of attacks seen’…you have already lost and a ransomware actor is probably watching you read this.”

Hopefully, when this report comes out in 2024, we’ll see even more ransomware being stopped, which by extension means that ransomware will be less profitable for the people behind it.