Elon Musk’s Lawsuit Against Media Matters Has Resulted In Him Being Introduced To The Streisand Effect

Posted in Commentary with tags on November 27, 2023 by itnerd

First some background. Here’s a definition of the Streisand effect:

The Streisand effect is an unintended consequence of attempts to hide, remove, or censor information, where the effort instead backfires by increasing awareness of that information. It is named after American singer and actress Barbra Streisand, whose attempt to suppress the California Coastal Records Project‘s photograph of her cliff-top residence in Malibu, California, taken to document California coastal erosion, inadvertently drew far greater attention to the heretofore obscure photograph in 2003.

Now here’s how it applies to Elon Musk. His lawsuit against Media Matters for exposing antisemitic posts on Twitter being served up beside ads from big name advertisers, who then pulled their ads from Twitter, is has basically resulted in the Streisand effect coming into play according to TechDirt:

in making a big deal out of this and filing one of the worst SLAPP suits I’ve ever seen, all while claiming that Media Matters “manipulated” things (even as the lawsuit admits that it did no such thing), it is only begging more people to go looking for ads appearing next to terrible content.

And they’re finding them. Easily.

As the DailyDot pointed out, a bunch of users started looking around and found that ads were being served next to the tag #HeilHitler and “killjews” among other neo-Nazi content and accounts.

SLAPP stands for Strategic lawsuit against public participation by the way. But I digress. The point is that he’s adding to the reasons that Media Matters is going to win this lawsuit. The fact is that what they said is true and evidence of antisemitism and Nazi posts are easily found if you go looking for them. And you apparently don’t have to try all that hard to find them. The only lawsuit that’s going to be even easier to win than this one is the dBrand vs. Casetify lawsuit. The fact is that Elon is going to get pwned in court as well as the court of public opinion at the rate he’s going. Thus if he were smart, he’d make this go away and do something more than go on the apology tour that he’s planning to go on. But as has been proven recently, he’s not smart. Which is why this will be one more thing that hurts him.

The Buffalo Sabres Team Up With Fubo TV

Posted in Commentary with tags on November 27, 2023 by itnerd

Fubo TV today announced they have entered into a multi-year partnership to expand streaming capability of over 40 Buffalo Sabres games into the Niagara region of Southern Ontario, beginning November 27th. This partnership signifies the first time the Buffalo Sabres’ MSG broadcast will be available to fans in Southern Ontario since 2015-16.   

In addition to in-game coverage, Fubo will also stream pre- and post-game coverage of Sabres hockey including but not limited to pre-game breakdowns, highlights, exclusive interviews and studio analysis. Fubo subscribers residing in zip codes in which Sabres broadcasts will be available will have access to all Buffalo Sabres game content and shoulder programming on the newly launched Fubo Sports Niagara channel.  

Additionally, to celebrate this new expansion, Buffalo Sabres season ticket members are eligible to receive an exclusive offer for a 30-day free trial of Fubo, while non-season ticket members are eligible to receive an offer for a 14-day free trial.   

Buffalo Sabres games will be available on Fubo to Niagara region subscribers beginning with Buffalo’s road contest against New York Rangers tonight.   

New Secure AI System Guidelines Agreed To By 18 Countries

Posted in Commentary with tags on November 27, 2023 by itnerd

The US, UK, among 16 other countries have jointly released secure AI system guidelines based on the principle that it should be secure by design:

This document recommends guidelines for providers of any systems that use artificial intelligence (AI), whether those systems have been created from scratch or built on top of tools and services provided by others. Implementing these guidelines will help providers build AI systems that function as intended, are available when needed, and work without revealing sensitive data to unauthorised parties.

This document is aimed primarily at providers of AI systems who are using models hosted by an organisation, or are using external application programming interfaces (APIs). We urge all stakeholders (including data scientists, developers, managers, decision-makers and risk owners) to read these guidelines to help them make informed decisions about the design, development, deployment and operation of their AI systems.

 Anurag Gurtu , Chief Product Officer, StrikeReady had this comment:

The recent secure AI system development guidelines released by the U.K., U.S., and other international partners are a significant move in enhancing cybersecurity in the field of artificial intelligence. These guidelines emphasize the importance of security outcomes for customers, incorporating transparency and accountability, and promoting a secure organizational structure. They focus on managing AI-related risks, requiring rigorous testing of tools before public release, and establishing measures to counteract societal harms, like bias. Additionally, the guidelines advocate a ‘secure by design’ approach covering all stages of AI development and deployment, and address the need to combat adversarial attacks targeting AI and machine learning systems, including prompt injection attacks and data poisoning.

The fact that 18 countries agreed on a common set of principals is great. The thing is that more nations have to do the same thing. Otherwise you may still have AI that is closer to the “Terminator” end of the spectrum rather than being helpful and friendly.

UPDATE: Troy Batterberry, CEO and founder, EchoMark had this comment:

   “While logging and monitoring insider activities are important, we know they do not go nearly far enough to prevent insider leaks. Highly damaging leaks continue to happen at well-run government and commercial organizations all over the world, even with sophisticated monitoring activities in place. The leaker (insider) simply feels they can hide in the anonymity of the group and never be caught. An entirely new approach is required to help change human behavior. Information watermarking is one such technology that can help keep private information private.”

UPDATE #2:  Josh Davies, Principal Technical Manager, Fortra adds this:

The AI arms race and rapid adoption of open AI systems* have created concerns in the cyber security sector around the impact of a supply chain compromise – where the AI source code is compromised and used as a trusted delivery mechanism to pass on the compromise to third party users. These guidelines look to secure the design, development, and deployment of AI which will help reduce the likelihood of this type of attack.

As systems and nation states are increasingly interdependent on each other, global buy in is crucial. We have already seen how collective security is important, otherwise threats are allowed to grow, become more sophisticated, and attack global targets. Ransomware criminal families are a prime example. This levels the playing field by homogenising guidance across national states and limiting a race to the bottom with AI tech.

The guidelines recommend the use of red teaming. Red teaming surfaces the gaps in systems, and security strategies, and ties them directly to an impact. The AI Executive Order also mandates red teaming to identify flaws and vulnerabilities in AI systems. Mandating red teaming future proofs these guidelines (and other regulations) as it is hard to anticipate the threats of tomorrow and the appropriate mitigations – especially at the pace governments can legislate. It’s an indirect way of saying you need to make sure that your security strategies are always up to date, because if not, attackers will surely find and expose your gaps. This is important as we have seen other security regulations quickly become outdated and redundant as controls cannot be agreed upon and updated at the pace required to achieve good security.

Will we see adoption? Or does it just serve to re-assure the public that AI issues are being considered? What is the consequence of not following the guidance? I would hope to see soft enforcement through the exclusion of organisations that cannot show adherence to guidance from government or B2B collaborations.

Without any punitive measures, a cynic would say organizations have no motivation to implement the recommendations properly. An optimist might lean on the red team reports and hope for buy in on reporting flaws and issues, removing the ‘black box’ nature of AI which some executives have hid behind, and opening up these leaders to the court of public opinion if there is evidence they were aware of a flaw and did not take appropriate action, resulting in a compromise and/or data breach.

These guidelines are a step in the right direction. They pull together key AI stakeholders, from nation states and industry, and call for collaboration and consideration of the security of AI. Hopefully this is a continued theme, as we’ve seen with the United States AI executive order, and that AI systems are developed responsibly, without stifling innovation and adoption.

My personal opinion is that the real value we might see from such collaboration will be when we do see a large-scale AI compromise. Hopefully the involved parties are brave enough to lift the lid on what happened so everyone can learn how to be better prepared, and we can define further guidance (preferably as a requirement) beyond just secure build practices and a general monitoring requirement. But this is a good start.

Is it ground breaking? In my opinion, no. Security teams should already be looking to apply the principles outlined to any technological development. This has taken long standing DevSecOps principles and applied them to AI. I would expect it will have the most impact on startups entering the space, i.e. those without an existing level of security maturity.

*open source data sets, i.e. the internet, not OpenAI the company

Epson Has The Perfect Gift Idea For TV & Movie Lovers Alike

Posted in Commentary with tags on November 27, 2023 by itnerd

There are a few surefire signs that we’re heading into the holiday season. The weather starts to get colder, neighbourhoods come to life with red and green lights and you don’t have to go far to find your favourite holiday flicks streaming on repeat.

Gifting the Epson EpiqVision Mini EF12 Smart Streaming Laser Projector (MSRP: $ 1,299.99 CAD) means holiday fun for the whole family. Curl up on the couch and enjoy an epic viewing experience watching your favourite cult classics with stunning picture quality in up to 150″ – no screen required. Featuring built-in Android TV, sound by Yamaha and wireless connectivity, the EpiqVision Projector gives you seamless access to popular streaming services, including Hulu, HBO and YouTube™, right out of the box.

The portable projector has a compact yet elegant design that allows you to move from room to room (or house to house) so you can elevate movie night no matter which family member is hosting your holiday gathering this year.

If you are working on gift guides for splurge-worthy items, we hope you’ll consider the Epson EpiqVision Projector as the perfect family present to encourage time spent together.

US Navy Releases Its First Cybersecurity Strategy 

Posted in Commentary with tags on November 27, 2023 by itnerd

The U.S. Navy has released its first cybersecurity strategy as the service tries to modernize its efforts in the space after years of staffing and preparedness issues.

The blueprint devised by Chris Cleary, the Navy’s principal cyber advisor, and its CIO, features the following seven lines of effort:

  • Improve and support the cyber workforce
  • Shift from Compliance to Cyber Readiness
  • Defend Enterprise IT, Data, and Networks
  • Secure Defense Critical Infrastructure and Weapon Systems
  • Conduct and Facilitate Cyber Operations
  • Partner to Secure the Defense Industrial Base
  • Foster Cooperation and Collaboration

Troy Batterberry, CEO and founder, EchoMark had this comment:

   “In order for the USA to achieve and maintain information superiority, we must adopt new forms of insider risk management. Nearly all major government agencies have experienced highly damaging leaks in part because the leaker (insider) felt they would never be caught. An entirely new approach is required to help change human behavior. Information watermarking is one such technology that can help keep private information private.”


Stephen Gates, Principal Security SME, Horizon3.ai follows with this:

   “In the context of the Department of the Navy Cyber Strategy 2023, one line of effort stands out among the others: 2.0 Shift from Compliance to Cyber Readiness. As recent cyber events have repetitively proven, a purely defensive cyber strategy is not working and must be augmented by “adversarial assessments” of your own environments.

   “These adversarial assessments are not the run-of-the-mill vulnerability scans. These assessments are cyber red team exercises whereby organizations attack themselves using the same tools, tactics, and procedures (TTPs) attackers use. The reason for this is simple. If you cannot find that hidden chink in your armor, that crack in your layered walls of defense, that blind spot you didn’t even know existed, you will never be able to adequately defend yourself against a purposeful attacker with nothing but time on their side – and disruption on their mind.

   “Today, autonomous assessment solutions that let your see your environments through the eyes of an attacker are readily available. Having these solutions in the hands of highly skilled red teams allows them to force-multiply, meaning, they can do expansive cyber readiness exercises simultaneously, while using these solutions to accelerate their assessment analysis. Furthermore, these solutions also meet the objective of prioritizing mitigations and reassessment tracking to ensure issues have been remediated and readiness is confirmed.”

At least the Navy realizes that it has issues, and is moving to address them. That’s good. But everyone will be watching to see if the Navy “walks the walk” as opposed to just “talking the talk”.

General Electric Investigating Cyber Attack Which Could Include Possible DARPA Data Theft 

Posted in Commentary with tags on November 27, 2023 by itnerd

The threat actor “IntelBroker” was seen on a hacker forum, peddling a database allegedly containing information from General Electric and DARPA, complete with critical access credentials like SSH and SVN, as well as DARPA-related military documents, SQL files, and more.

General Electric is probing the claims of a breach that allegedly resulted in the data theft.
The company is investigating the suspected breach and potential data theft from their development environment, traced back to a hacker’s attempt to sell access and data on multiple occasions

Initially, the threat actor attempted to hawk access to GE’s “development and software pipelines” for $500 on a hacker forum. Failing to sell the access, the actor returned, offering both network access and the purportedly stolen data. From the threat actor:

“I previously listed the access to General Electrics, however, no serious buyers have actually responded to me or followed up. I am now selling the entire thing here separately, including access (SSH, SVN etc),” the threat actor posted to a hacking forum.

“Data includes a lot of DARPA-related military information, files, SQL files, documents etc.”

Troy Batterberry, CEO and founder, EchoMark had this comment:

   “Unfortunately, we see this every day. Highly skilled and well-funded organizations are working hard to protect their data with security stacks that include security gap discovery and analysis, EDR, Cloud security, UEBA, Identity & Access Analytics, SOAR and even ransomware killswitches, but then leave much of their most sensitive data both unprotected and readily sharable. The recent leaks of sensitive government and judicial information are just a few examples.

   By digitally watermarking data and assets, organizations get several key benefits. First, they can help deter insider leaks from ever happening in the first place by motivating better stewardship of the private information. If malicious or accidental insider leaks do happen, the source can be quickly identified and remediated. In the case of a successful external attack, watermarks can help quickly identify the compromised assets for fast remediation.”

It will be interesting to see what General Electric reports back in terms of the extent of this hack and what was swiped. Because like other hacks we’ve seen lately, this one is far from trivial.

AI-powered Cybersecurity Assistant from Trend Micro Announced

Posted in Commentary with tags on November 27, 2023 by itnerd

Trend Micro made a pair of announcements today:

  1. Trend Micro announced the launch of its new generative AI tool, Trend Companion, designed to empower security analysts by driving streamlined workflows and enhanced productivity. Trend Companioncould potentially reduce analyst time spent on manual risk assessments and threat investigations by 50% or more. Read the press release here
  2. Trend Micro also announced the latest evolution in generative AI: the integration of its leading global threat intelligence and millions of diverse sensor types to enhance outcomes for its flagship Trend Vision One™ cybersecurity. In 2022, Trend handled over six trillion threat queries from customers across 65+ countries. Using AI trained on this data, Trend blocked more than 146 billion threats, three billion of which were ransomware. Read the press release here

With the ever-evolving cyber landscape, security teams need more than just AI to work well. They also need strong data. Trend Micro’s global threat research and work in communities through its Zero Day Initiative, is helping to accelerate incident response times by 30 per cent, reduce incident reporting by up to two hours per report, and drive more complete attack containment – providing valuable insights to security teams.

AI Regulation In Canada: New Report Offers Strategies For Policymakers

Posted in Commentary with tags on November 27, 2023 by itnerd

 The rapid evolution of digital technologies, in particular Artificial Intelligence (AI), is showing no sign of slowing. Digital technologies can boost productivity, support innovations in medical care, and even help tackle our climate crisis. But earlier this year, technology experts called for a temporary pause in the development of advanced AI systems due to the risks they pose to society. In this charged environment, policymakers in Canada and globally are faced with the challenge of balancing innovation while introducing effective regulatory frameworks for digital technologies such as AI that safeguard the public interest. 

To support policymakers as they navigate these complex issues, the CSA Public Policy Centre and Digital Governance Councilhave published a new report, Ahead of the Curve: A Roadmap for Regulating Digital Technologies. The report provides an overview of the regulatory challenges posed by digital technologies – offering AI, 3D printing and blockchain as case studies – and outlines important considerations for policymakers as they navigate this evolving landscape. 

The report highlights a range of promising tools and methods, each with the potential to lead to quicker, more targeted, and effective regulation of digital technologies. While the challenges posed by digital technologies are numerous, policymakers should consider a multi-faceted approach as they seek to establish regulations. These include: 

  1. Enhancing existing frameworks by establishing core principles, shifting from reactive to proactive approaches, and developing strategies to put people first in a data-rich world.  
  2. Investing in the public sector by improving intergovernmental cooperation, enhancing skills, capacity, and knowledge, and establishing a Digital Centre of Excellence. 
  3. Using complementary tools such as risk-based approaches (e.g., certifications, audits, and inspections), standards-based solutions, and legal frameworks. 

To learn more and download Ahead of the Curve: A Roadmap for Regulating Digital Technologies, visit CSA Group’s website

Instagram Joins Twitter In Having Advertisers Halt Ads Due To Placement Next To Problematic Content

Posted in Commentary with tags on November 27, 2023 by itnerd

Elon Musk and Twitter are apparently not the only platform who is struggling with having advertisers halt ad campaigns due to those ads being placed next to content that is objectionable. Meta owned Instagram has is having problems with ads being placed next to sexually explicit images:

Instagram’s system served jarring doses of salacious content to those test accounts, including risqué footage of children as well as overtly sexual adult videos—and ads for some of the biggest U.S. brands.

The Journal set up the test accounts after observing that the thousands of followers of such young people’s accounts often include large numbers of adult men, and that many of the accounts who followed those children also had demonstrated interest in sex content related to both children and adults. The Journal also tested what the algorithm would recommend after its accounts followed some of those users as well, which produced more-disturbing content interspersed with ads.

As a result of this report, this happened:

After the Journal contacted companies whose ads appeared in the testing next to inappropriate videos, several said that Meta told them it was investigating and would pay for brand-safety audits from an outside firm.

Following what it described as Meta’s unsatisfactory response to its complaints, Match began canceling Meta advertising for some of its apps, such as Tinder, in October. It has since halted all Reels advertising and stopped promoting its major brands on any of Meta’s platforms. “We have no desire to pay Meta to market our brands to predators or place our ads anywhere near this content,” said Match spokeswoman Justine Sacco.

Robbie McKay, a spokesman for Bumble, said it “would never intentionally advertise adjacent to inappropriate content,” and that the company is suspending its ads across Meta’s platforms.

Charlie Cain, Disney’s vice president of brand management, said the company has set strict limits on what social media content is acceptable for advertising and has pressed Meta and other platforms to improve brand-safety features. A company spokeswoman said that since the Journal presented its findings to Disney, the company had been working on addressing the issue at the “highest levels at Meta.”

Walmart declined to comment, and Pizza Hut didn’t respond to requests for comment.

Now this is bad. But what I will say is this. Meta and its CEO Mark Zuckerberg will fix this because frankly, they don’t want to lose the advertising revenue, nor do they want to be seen in the same way that Twitter is seen. So I would expect some rapid action on this front in the coming days.

Best Buy Pulls Select Casetify Cases Related To The dBrand Lawsuit

Posted in Commentary with tags , on November 27, 2023 by itnerd

 Let’s recap what’s happened with the dBrand vs. Casetify fight:

  • YouTuber JerryRigEverything and dBrand are suing Casetify for blatantly ripping of the Teardown skins that JerryRigEveryting and dBrand co-created. 
  • Casetify responded by posting a really, really bad statement that promptly and deservedly got roasted by Twitter. Along with that they pulled their cases from their website. 
  • It was then discovered by dBrand that Casetify had been ripping stuff off from iFixit as well. Then iFixit called them on it.

As part of this, the cases in question from Casetify were still available at Best Buy stores as pointed out by dBrand:

That appears to have changed based on this:

I guess that Best Buy doesn’t want to be in the middle of this. Thus they pulled the cases in question from sale. That’s more pain for Casetify. At this point, it’s hard to feel sorry for Casetify as they brought this upon themselves. Perhaps they should find a way out of this that acknowledges what they’ve done and make restitution for it? Just a thought.