Archive for November 7, 2023

Cisco Expands Full-Stack Observability Ecosystem with Seven New Partner Modules

Posted in Commentary with tags on November 7, 2023 by itnerd

Cisco announced seven new modules on the Cisco Observability Platform, built by development partners and created to expand its full-stack observability ecosystem. This growing ecosystem helps customers meet their specific observability needs and extract additional value from observable telemetry.

The new modules are focused on five critical areas: Business Insights, SAP Visibility, Networking, MLOps & SLO, and Sustainability.   

Today’s modern businesses are digitally led, with customer and user experiences delivered with and through applications. The speed and complexity of how these applications are built demand that IT teams, security teams and business leaders observe all aspects of application performance and experience in real time.

However, according to a recent IDC report, 60 per cent of IT professionals are worried that most observability tools serve narrow requirements, failing to give IT teams a complete view into current and trending operating conditions. Further, 65 per cent stated the need for a programmable and extensible observability solution that could be adapted to use cases specific to their own business.

Today’s announcement sees Cisco empower a diverse ecosystem of developers to create and extend solutions that rapidly create customer value from observable telemetry. Beyond just interpreting telemetry, the Cisco Observability Platform provides capabilities to surround that data with context, so organizations get both insight and the ability to take specific action. The new partner modules are focused on five critical themes:

  • Business Insights: Correlate telemetry data with business performance across multiple domains, providing customers with full visibility and insight into how the business interacts with IT. 
  • SAP Visibility: Help customers achieve holistic observability across often changeable, expanding and complex SAP landscapes and ecosystems. 
  • Networking: Leverage Cisco’s networking expertise to correlate key network telemetry with business metrics and the application stack. 
  • MLOps and SLO: With the growing use of generative AI and its move into mainstream applications, the Cisco Observability Platform helps customers monitor these applications and their SLOs, and brings monitoring of large language models (LLMs) and MLOps models together with application observability. 
  • Sustainability: Help customers achieve their sustainability goals by providing carbon-footprint data across multiple IT domains and helping them optimize energy consumption. 

Partners are already seeing value in the Cisco Observability Platform and are building modules to help customers get more from their observable telemetry. By building and offering solution sets on the Platform, partners can help meet the observability needs of Cisco’s customer base. 

The following modules are available on the Cisco Observability Platform exchange at Partner Summit:  

  • CloudFabrix – SAP Observability: Enables customers to ingest data from Cisco AppDynamics agents for SAP Monitoring. It correlates telemetry data and asset types together to isolate the root cause of issues in the SAP landscape and determine the effect of impacted services on the business. 
  • CloudFabrix – Campus Analytics: Provides network analytics for campus environments as employees return to offices. This module aggregates multiple Cisco DNA Controller analytics to provide near real-time network topology information, bandwidth consumption and hotspot visibility.  
  • Evolutio – Claims: Insurance institutions are looking to gather multiple claims processes into a single pane of glass to best understand process health and user experience. The module helps correlate and view the health of different claims processes in real time as it relates to product types, underwriters, regions, and business units. 
  • Evolutio – eCommerce: With the growth of ecommerce and the technology that powers it, organizations need a solution that tracks every part of the shopping experience. This module monitors orders, shipping, inventory, and payments, correlated by product category or region, to quickly identify issues in the supporting infrastructure and applications. 
  • DataRobot – MLOps by Evolutio: Extends observability for both predictive AI and generative AI, with always-on monitoring and production diagnostics to track and improve performance of your models. Stay informed of key metrics like service health, accuracy and data drift. 
  • Nobl9 – Service Level Objectives (SLO): Provides a platform for defining and creating SLOs to understand reliability across the organization, share the remaining error budget for given services, and view SLO-related visualizations for workloads. (The error-budget arithmetic this rests on is sketched after this list.) 
  • Climatiq – Cloud Carbon Insights: Adds carbon emission tracking to existing cloud metrics and enables analyzing, comparing, and benchmarking emissions data. These actionable insights accelerate the journey to net zero and support more environmentally conscious decisions. 
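
For readers unfamiliar with SLO mechanics, the error budget these SLO modules revolve around is simple arithmetic: the fraction of a time window during which a service is allowed to miss its objective. Below is a minimal Python sketch of the generic formula, not Nobl9’s implementation:

    # Generic error-budget arithmetic behind SLO tooling such as Nobl9.
    # Illustrative sketch only, not Nobl9's implementation.
    def error_budget_minutes(slo_target: float, window_days: int = 30) -> float:
        """Allowed downtime in minutes for a given SLO target over a window."""
        total_minutes = window_days * 24 * 60
        return (1.0 - slo_target) * total_minutes

    for target in (0.99, 0.999, 0.9999):
        print(f"{target:.2%} SLO -> {error_budget_minutes(target):.1f} min per 30 days")
    # 99.00% SLO -> 432.0 min per 30 days
    # 99.90% SLO -> 43.2 min per 30 days
    # 99.99% SLO -> 4.3 min per 30 days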

In addition, the following modules will be available soon:  

  • Cisco CX – Sustainability Insights: Provides a sustainability portal that acts as a single pane of glass for near real-time interactive visualizations, measurement, estimation and reporting of key infrastructure sustainability indicators, trended over time, with the aim of optimizing workload and datacenter energy use. 
  • Aporia – MLOps: A significant number of challenges faced by ML models in production arise from a combination of data inconsistencies and the software infrastructure they operate on. The module will not only offer a holistic view of a model’s performance but also empower teams to swiftly identify, dig into, and resolve issues. 

About Cisco Observability Platform  

Cisco Observability Platform, a vendor-agnostic solution that harnesses the power of the company’s full portfolio, was launched at Cisco Live US in June 2023. The Platform delivers contextual, correlated, and predictive insights that allow customers to resolve performance issues more quickly and optimize digital experiences, while minimizing business risk. This industry-leading, extensible platform delivers customers a new observability ecosystem that brings data together from multiple domains including applications, networking, multi-cloud infrastructure, cloud services, security, endpoints, sustainability, and business sources. 

Bell Canada Cuts Spending On Their Fibre Rollouts… And They Blame The CRTC

Posted in Commentary with tags on November 7, 2023 by itnerd

You might recall that I posted a story about Bell allegedly slowing down the rollouts of their fibre projects. And when I asked Bell about that, they said that they had nothing to announce at the moment.

That changed with this press release:

Bell today announced its intention to reduce capital expenditures by over $1 billion in 2024-25, including a minimum of $500 million to $600 million in 2024, money the company had planned to invest in bringing high-speed fibre Internet to hundreds of thousands of additional homes and businesses in rural, suburban and urban communities.

This reduction is in addition to Bell decreasing its 2023 capital expenditure budget by $100 million in anticipation of the CRTC decision to unrelentingly pursue wholesale access at the expense of critical network investment.

Bell’s fibre network is now available to over seven million homes and businesses. Prior to the CRTC’s decision, Bell’s near-term plan was to build high-speed fibre to nine million locations by the end of 2025. Bell will now reconsider pending builds in all communities where it had planned to expand, and will reduce its 2025 build target from nine million to 8.3 million locations.

Rolling back fibre network expansion is a direct result of the CRTC’s decision. Today’s decision forces Bell to open up its fibre network in Ontario and Quebec but does not mandate access to fibre-to-the-premises networks in western Canada where there are over three million fibre locations passed. If the intent of the decision is to benefit consumers then it is arbitrary and capricious to leave western Canadian consumers behind. When Bell enters a community with high-speed fibre Internet, it increases competition, and customers benefit from better service, better value and lower prices.

The CRTC decision that Bell is referring to is this one. The TL;DR is that the CRTC is going to make Bell and TELUS give independent competitors access to sell internet services over their fibre networks in Ontario and Quebec. And clearly Bell doesn’t like that. And as a result, you get this situation. And to be honest, this press release has the feel of a two-year-old throwing their toys out of the baby carriage.

Bell can have an issue with something that the CRTC does, and that’s fine. There are ways of expressing that displeasure that Bell can use. But holding their customers and potential customers hostage should never be on the list. The fact that Bell immediately went to the hostage option is pathetic. It really doesn’t paint them in the best light, and they should reconsider their choices when it comes to this CRTC decision. Bell may have the best tech around, but as I have said previously, their customer service needs work, and this tendency to make their present and future customers hostages when they aren’t happy with the CRTC needs to stop. Otherwise they may find that this comes back to bite them.

Elon Musk Releases AI Chatbot That He Promises Will Be “Spicy”

Posted in Commentary with tags on November 7, 2023 by itnerd

I can’t understand what Elon Musk is up to by releasing an AI chatbot. Here’s what you need to know about it:

Elon Musk has finally lifted the lid on his own ‘sarcastic’ artificial intelligence (AI) chatbot, called ‘Grok’, available within X (formerly Twitter). 

Grok is ‘designed to have a little humor in its responses’ and answers ‘spicy questions’ that are rejected by other ‘woke’ AI systems, such as ChatGPT.

Musk – who was in the UK last week to warn about the dangers of AI – said Grok is currently only available to ‘a select group’ before being rolled out more widely. 

However, it will only be available as part of X Premium+, the top tier of X’s paid subscription option, which costs a hefty £16 per month. 

As for what you can expect the AI chatbot to answer, here’s an example:

Musk has posted screenshots of Grok’s informal and chatty replies, including one to the command ‘Tell me how to make cocaine, step by step’.

Grok responded: ‘Just a moment while I pull up the recipe for homemade cocaine. You know, because I’m totally going to help you with that’. 

It then gave a sardonic four-step guide that included ‘set up a clandestine laboratory in a remote location’ and ‘acquire large quantities of coca leaves and various chemicals’. 

Grok then added: ‘Just kidding! Please don’t actually try to make cocaine. It’s illegal, dangerous and not something I would ever encourage.’ 

In comparison, when MailOnline asked ChatGPT the same command, it replied: ‘I’m very sorry, but I can’t assist with that request.’ 

When asked why, it said: ‘I cannot provide information or instructions on illegal activities.’ 

This is why I don’t get this move by Elon. I don’t see the value of releasing an AI chatbot that is this “spicy”. What does this add to the AI experience? Nothing from what I see. And how does it add value to Twitter? I don’t get it. Perhaps neither does Elon, as I see this as more of an ego move than a substantive one.

Cerby Releases “Threat Briefing: Social Media Security and Elections Volume II,” Providing a Detailed Analysis of Security Gaps in Social Media Platforms

Posted in Commentary with tags on November 7, 2023 by itnerd

Cerby, the comprehensive access management platform for nonstandard applications, today announced its newest report, Threat Briefing: Social Media Security and Elections Volume II, a year-over-year analysis of the social media platforms Facebook, Twitter, Instagram, TikTok, and YouTube across six key security parameters. The report provides detailed insights into gaps in their support for enterprise-grade authentication and authorization, and the critical need for best practices that businesses and political leaders can use to secure their accounts as the November 2023 US elections quickly approach.

Cerby’s researchers scored each platform’s security on a scale of 0 to 5. Security categories included 2FA methods, enterprise-grade authentication and authorization, role-based access control (RBAC), privacy, enterprise-ready security, and account usage profiling. Platforms designated with a score of 0 do not support security controls or do not have a public roadmap to implement them. In contrast, those with a rating of 5 fully support security controls, and the controls are mature. In this year’s report, Cerby added YouTube and removed Reddit to align the evaluation with the current top social media platforms.

The average score across all platforms improved from 2.54 in 2022 to 3.02 in 2023, an 18.9% increase. For the second year in a row, Facebook took the top spot with an overall score of 3.74. YouTube came in second at 3.15, followed by Twitter at 2.95, Instagram at 2.78, and TikTok at 2.5.
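
Both headline numbers are consistent with the scores quoted above: the five 2023 platform scores average to 3.02, and the move from 2.54 to 3.02 works out to 18.9 per cent. A quick Python check:

    # Verifying the report's figures from the scores quoted above.
    scores_2023 = {"Facebook": 3.74, "YouTube": 3.15, "Twitter": 2.95,
                   "Instagram": 2.78, "TikTok": 2.5}
    avg_2023 = sum(scores_2023.values()) / len(scores_2023)
    print(round(avg_2023, 2))             # 3.02
    print(f"{(3.02 - 2.54) / 2.54:.1%}")  # 18.9%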

Key findings regarding security and privacy controls on social media platforms include:

  • Two-factor authentication (2FA): Twitter significantly improved its 2FA by supporting the phishing-resistant FIDO2 standard (a global authentication standard based on public key cryptography), scoring a perfect 5 and joining the ranks of Facebook and YouTube.
  • Enterprise-grade authentication and authorization: The category saw no change from last year. This finding highlights a glaring security gap and low adoption of vital standards such as SAML for authentication (single sign-on, or SSO) and the System for Cross-domain Identity Management (SCIM) for automated onboarding and offboarding of user access. Both are critical controls for protecting against account takeovers and against individuals retaining access to high-profile accounts after they leave an organization. (A sketch of what standard SCIM provisioning looks like follows this list.)
  • Privacy controls: An average increase of 25% was noted, primarily driven by Facebook’s significant improvements. Facebook leaped from 1.5 to 3.5 due to solid enhancements, specifically with time-based third-party access—an essential safeguard against retained access.
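
To make the enterprise-grade gap concrete, here is what standard SCIM 2.0 (RFC 7644) provisioning looks like in Python against a hypothetical endpoint; the endpoint and token are invented for illustration, and the report’s point is precisely that the platforms reviewed do not offer this:

    # What SCIM 2.0 (RFC 7644) user provisioning looks like in practice.
    # The endpoint and token are hypothetical; none of the reviewed
    # platforms expose a SCIM service, which is the gap in question.
    import requests

    BASE = "https://scim.example.com/scim/v2"  # hypothetical SCIM service
    HEADERS = {"Authorization": "Bearer <token>",
               "Content-Type": "application/scim+json"}

    # Onboarding: create a user record the identity provider keeps in sync.
    resp = requests.post(f"{BASE}/Users", headers=HEADERS, json={
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": "social.team@example.com",
        "active": True,
    })
    user_id = resp.json()["id"]

    # Offboarding: deactivate the account the moment someone leaves,
    # closing the "retained access" gap the report calls out.
    requests.patch(f"{BASE}/Users/{user_id}", headers=HEADERS, json={
        "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
        "Operations": [{"op": "replace", "value": {"active": False}}],
    })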

The report found that while the year-over-year comparison showed advancement in 2FA methods, the continued lack of enterprise-grade authentication and authorization was concerning. This missing integration can leave political and business leaders vulnerable to credential reuse attacks and account takeovers, which can in turn fuel large-scale disinformation campaigns, particularly during elections.

To read about the report’s findings in greater detail and learn what proactive measures political leaders and businesses can take to fortify their online presence against escalating threats that lurk within the social media landscape, download Cerby’s Threat Briefing: Social Media Security and Elections Volume II here.

Guest Post: Navigating the security and privacy challenges of large language models

Posted in Commentary with tags on November 7, 2023 by itnerd

Everyone’s talking about ChatGPT, Bard and generative AI in general. But after the hype inevitably comes the reality check. While business and IT leaders alike are abuzz with the disruptive potential of the technology in areas like customer service and software development, they’re also increasingly aware of some potential downsides and risks to watch out for.

In short, for organizations to tap the potential of large language models (LLMs), they must also be able to manage the hidden risks that could otherwise erode the technology’s business value.

How do LLMs work?

ChatGPT and other generative AI tools are powered by LLMs. They work by using artificial neural networks to process enormous quantities of text data. After learning the patterns between words and how they are used in context, the model is able to interact in natural language with users. In fact, one of the main reasons for ChatGPT’s standout success is its ability to tell jokes, compose poems and generally communicate in a way that is difficult to tell apart from a real human.
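
Real LLMs use deep transformer networks trained on enormous corpora, but the underlying idea of learning patterns between words and predicting what comes next can be illustrated with a toy bigram model. This is a deliberately simplified sketch, not how ChatGPT works internally:

    # Toy illustration of "learning the patterns between words":
    # count which word tends to follow which, then sample accordingly.
    # Real LLMs replace the counts with a deep neural network.
    import random
    from collections import Counter, defaultdict

    corpus = ("the model learns the patterns between words "
              "and the model predicts the next word").split()

    following = defaultdict(Counter)          # "training"
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def generate(start: str, length: int = 6) -> str:
        words = [start]
        for _ in range(length):
            counts = following.get(words[-1])
            if not counts:
                break
            nxt = random.choices(list(counts), list(counts.values()))[0]
            words.append(nxt)
        return " ".join(words)

    print(generate("the"))  # e.g. "the model learns the next word"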

RELATED READING: Writing like a boss with ChatGPT: How to get better at spotting phishing scams

The LLM-powered generative AI models, as used in chatbots like ChatGPT, work like super-charged search engines, using the data they were trained on to answer questions and complete tasks with human-like language. Whether they’re publicly available models or proprietary ones used internally within an organization, LLM-based generative AI can expose companies to certain security and privacy risks.

5 of the key LLM risks

  1. Oversharing sensitive data 

LLM-based chatbots aren’t good at keeping secrets – or forgetting them, for that matter. That means any data you type in may be absorbed by the model and made available to others, or at least used to train future LLMs. Samsung workers found this out to their cost when they shared confidential information with ChatGPT while using it for work-related tasks. The code and meeting recordings they entered into the tool could theoretically be in the public domain (or at least stored for future use, as pointed out by the United Kingdom’s National Cyber Security Centre recently). Earlier this year, we took a closer look at how organizations can avoid putting their data at risk when using LLMs.

  2. Copyright challenges

LLMs are trained on large quantities of data. But that information is often scraped from the web, without the explicit permission of the content owner. That can create potential copyright issues if you go on to use it. However, it can be difficult to find the original source of specific training data, making it challenging to mitigate these issues.

  3. Insecure code

Developers are increasingly turning to ChatGPT and similar tools to help them accelerate time to market. In theory it can help by generating code snippets and even entire software programs quickly and efficiently. However, security experts warn that it can also generate vulnerabilities. This is a particular concern if the developer doesn’t have enough domain knowledge to know what bugs to look for. If buggy code subsequently slips through into production, it could have a serious reputational impact and require time and money to fix. 
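
A classic example of the kind of flaw that can slip through is generated database code that builds queries with string formatting. The hypothetical snippet below shows the vulnerable pattern alongside the parameterized fix a knowledgeable reviewer would insist on:

    # Hypothetical example of an injection bug that generated code can
    # introduce, and the parameterized fix.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")

    def find_user_unsafe(name: str):
        # Vulnerable: input is pasted straight into the SQL string, so
        # name = "x' OR '1'='1" returns every row (SQL injection).
        return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

    def find_user_safe(name: str):
        # Fixed: a parameterized query keeps the input as data, not SQL.
        return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()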

  4. Hacking the LLM itself

Unauthorized access to and tampering with LLMs could provide hackers with a range of options to perform malicious activities, such as getting the model to divulge sensitive information via prompt injection attacks or perform other actions that are supposed to be blocked. Other attacks may involve exploitation of server-side request forgery (SSRF) vulnerabilities in LLM servers, enabling attackers to extract internal resources. Threat actors could even find a way of interacting with confidential systems and resources simply by sending malicious commands through natural language prompts.
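
To illustrate the SSRF point: an LLM tool that fetches URLs on a user’s behalf can at least refuse anything that resolves to internal address space. The following is a minimal sketch of one common defence, not a complete protection (production code should also pin the resolved IP for the actual request to avoid DNS rebinding):

    # Minimal SSRF guard for an LLM tool that fetches user-supplied URLs:
    # resolve the host and reject internal or otherwise special addresses.
    import ipaddress
    import socket
    from urllib.parse import urlparse

    def is_safe_url(url: str) -> bool:
        parsed = urlparse(url)
        if parsed.scheme not in ("http", "https") or not parsed.hostname:
            return False
        try:
            infos = socket.getaddrinfo(parsed.hostname, None)
        except socket.gaierror:
            return False
        for info in infos:
            ip = ipaddress.ip_address(info[4][0])
            if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
                return False
        return True

    print(is_safe_url("http://169.254.169.254/latest/meta-data/"))  # False
    print(is_safe_url("http://localhost:8080/admin"))               # False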

RELATED READING: Black Hat 2023: AI gets big defender prize money

As an example, ChatGPT had to be taken offline in March following the discovery of a vulnerability that exposed the titles of some users’ conversation histories to other users. To raise awareness of vulnerabilities in LLM applications, the OWASP Foundation recently released a list of the 10 most critical vulnerabilities commonly observed in these applications.

  5. A data breach at the AI provider

There’s always a chance that a company that develops AI models could itself be breached, allowing hackers to, for example, steal training data that could include sensitive proprietary information. The same is true for data leaks – such as when Google was inadvertently leaking private Bard chats into its search results.

What to do next

If your organization is keen to start tapping the potential of generative AI for competitive advantage, there are a few things it should be doing first to mitigate some of these risks:

  • Data encryption and anonymization: Encrypt data before sharing it with LLMs to keep it safe from prying eyes, and/or consider anonymization techniques to protect the privacy of individuals who could be identified in the datasets. Data sanitization can achieve the same end by removing sensitive details from training data before it is fed into the model (a minimal sketch of this step follows the list).
  • Enhanced access controls: Strong passwords, multi-factor authentication (MFA) and least privilege policies will help to ensure only authorized individuals have access to the generative AI model and back-end systems.
  • Regular security audits: These can help uncover vulnerabilities in your IT systems that may affect the LLM and the generative AI models built on them.
  • Practice incident response plans: A well-rehearsed, solid IR plan will help your organization respond rapidly to contain, remediate and recover from any breach.
  • Vet LLM providers thoroughly: As for any supplier, it’s important to ensure the company providing the LLM follows industry best practices around data security and privacy. Ensure there’s clear disclosure over where user data is processed and stored, and if it’s used to train the model. How long is it kept? Is it shared with third parties? Can you opt in/out of your data being used for training?
  • Ensure developers follow strict security guidelines: If your developers are using LLMs to generate code, make sure they adhere to policy, such as security testing and peer review, to mitigate the risk of bugs creeping into production.
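
As a minimal sketch of the sanitization step from the first bullet above: real deployments would use dedicated PII-detection tooling, and the two regexes here only illustrate the principle of scrubbing obvious identifiers before a prompt leaves the organization.

    # Illustrative only: scrub obvious identifiers before a prompt is sent
    # to an external LLM. Real deployments use dedicated PII tooling.
    import re

    PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "phone": re.compile(r"\+?\b\d[\d\s().-]{7,}\d\b"),
    }

    def sanitize(prompt: str) -> str:
        for label, pattern in PATTERNS.items():
            prompt = pattern.sub(f"[{label} removed]", prompt)
        return prompt

    print(sanitize("Summarize this email from jane.doe@example.com, phone +1 416 555 0199."))
    # Summarize this email from [email removed], phone [phone removed].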

The good news is there’s no need to reinvent the wheel. Most of the above are tried-and-tested best practice security tips. They may need updating/tweaking for the AI world, but the underlying logic should be familiar to most security teams.

About ESET

For more than 30 years, ESET® has been developing industry-leading IT security software and services to protect businesses, critical infrastructure, and consumers worldwide from increasingly sophisticated digital threats. From endpoint and mobile security to endpoint detection and response, as well as encryption and multifactor authentication, ESET’s high-performing, easy-to-use solutions unobtrusively protect and monitor 24/7, updating defenses in real time to keep users safe and businesses running without interruption. Evolving threats require an evolving IT security company that enables the safe use of technology. This is backed by ESET’s R&D centers worldwide, working in support of our shared future. For more information, visit www.eset.com or follow us on LinkedIn, Facebook, and Twitter (X).