OpenAI Fires Back At Elon Musk Over His Tweetstorm

Posted in Commentary with tags on June 12, 2024 by itnerd

I swear, this will be fun to watch.

You might recall that Elon Musk went absolutely insane on Twitter after the Apple Intelligence announcement because of the involvement of OpenAI. As part of that, he said things that at first glance do not appear to be true. Well, OpenAI has decided to return fire via Fortune Magazine:

A top OpenAI executive defended her company against Elon Musk, a day after the billionaire CEO described the integration of OpenAI’s chatbot technology into Apple iPhones as “creepy spyware.”

“That’s his opinion. Obviously I don’t think so,” Mira Murati, chief technology officer at OpenAI, said on stage at Fortune’s MPW dinner in San Francisco on Tuesday. “We care deeply about the privacy of our users and the safety of our products.”

And:

In her answers on Tuesday, Murati hammered home the idea that OpenAI is intensely focused on user privacy and security. “We’re trying to be as transparent as possible with the public,” she said, adding that “the biggest risk is that stakeholders misunderstand the technology.”

I seriously think that this has less to do with what Apple and OpenAI are doing, or with user safety, and more to do with the fact that Elon isn’t involved. Or he’s afraid that this will destroy his Grok AI because of the scale of Apple and OpenAI. So he’s being as mature as a two-year-old as a result. Although I will concede one point: Murati saying “We’re trying to be as transparent as possible with the public” does leave some room for doubt. Another thing to point out is that using OpenAI’s ChatGPT is a choice. Every time Apple Intelligence feels that a query would benefit from ChatGPT, it will ask you first. And Apple Intelligence removes user-identifiable data from any query involving ChatGPT. Which means that Elon’s rants aren’t valid. Thus it might be in everyone’s interest to ignore Elon.
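For the curious, the opt-in flow I just described boils down to something like this. To be clear, this is a hypothetical Python sketch of the logic only. It is not Apple’s actual code, and every function name in it is made up:

```python
# Hypothetical sketch of the per-request opt-in flow described above.
# None of these names are Apple's; this only illustrates the logic:
# the query leaves the device only if the user explicitly approves,
# and identifying data is stripped first.

def strip_identifiable_data(query: str, user_name: str) -> str:
    """Remove user-identifiable data before any third-party call (illustrative)."""
    return query.replace(user_name, "[redacted]")

def query_needs_chatgpt(query: str) -> bool:
    # Stand-in heuristic; the real routing logic is Apple's and unpublished.
    return "world knowledge" in query

def send_to_chatgpt(sanitized_query: str) -> str:
    # Placeholder for the external call.
    return f"ChatGPT answer to: {sanitized_query}"

def handle_query(query: str, user_name: str, ask_permission) -> str:
    # On-device models handle the request by default.
    if not query_needs_chatgpt(query):
        return "answered on device"
    # ChatGPT is only consulted if the user says yes, per request.
    if not ask_permission("Share this question with ChatGPT?"):
        return "answered on device"
    sanitized = strip_identifiable_data(query, user_name)
    return send_to_chatgpt(sanitized)
```

The point being: the external call is gated on an explicit per-request yes, and the data is scrubbed before it leaves the device.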

HP Releases Their 2023 Sustainable Impact Report

Posted in Commentary with tags on June 12, 2024 by itnerd

Today, HP published its 2023 Sustainable Impact Report, developed in partnership with Oxford Economics, revealing how 76% of leaders believe technology is key to expanding economic opportunity and that artificial intelligence will help drive progress towards sustainability and social impact goals. The study of business executives and government officials in 10 countries found that business leaders are either already using AI or plan to in the next 1-2 years for goals such as increasing access to digital education (90%), workforce development (89%), and workforce diversity (86%).

The study also highlights an increased emphasis on mutual trust between the public and private sector, suggesting that collaboration remains essential for increasing adoption and digitization.

Canadian findings include:

  • 89% of Canadian business respondents feel supported by the government to pursue environmental and social initiatives, compared to a 72% global average.
  • 83% of Canadian business respondents trust government to implement policies to help invest in social progress compared to 72% globally.
  • 88% of Canadian officials trust the private sector to drive social progress.

You can read the full report here.

Horizon3.ai Appoints Jill Passalacqua as Chief Legal Officer

Posted in Commentary with tags on June 12, 2024 by itnerd

Horizon3.ai, a leading provider of autonomous security solutions, today announced the appointment of Jill Passalacqua as Chief Legal Officer (CLO), effective immediately. 

As Chief Legal Officer, Jill leads Horizon3.ai’s legal department, bringing extensive experience in advising prominent public and private technology companies. Her expertise is crucial for Horizon3.ai during its rapid growth phase, driven by the global adoption of their autonomous penetration testing solution, NodeZero™. This solution empowers IT teams, security professionals, consulting pentesters, medium and large enterprises, and MSSPs to continuously perform autonomous cyber risk assessments for themselves and their clients.

Before joining Horizon3.ai, Jill was the Chief Legal Officer at JumpCloud, where she played a pivotal role in shaping the company’s legal framework. She also held General Counsel positions at Harness and Avi Networks where she led the corporate legal strategy and operations, and facilitated substantial growth, including a successful acquisition by VMware. 

Before Avi Networks, Jill was at FireEye, where she managed the commercial team, built the global compliance and legal operations functions, and managed international expansion and M&A integration. Prior to FireEye, Jill spent 12 years at NetApp and was a key contributor to the growth and expansion of the legal department. She was responsible for corporate securities, public company reporting and compliance, commercial contracts, and building the company-wide commercial legal team. 

Jill serves on the board of directors of the Palisades Tahoe Community Foundation and has offered invaluable guidance as an advisor to several early-stage technology companies. Jill received her B.A. from the University of California, Los Angeles and her J.D. (Juris Doctor) from Santa Clara University.

Hackers Have Pwned Tile…. And It’s Not Good

Posted in Commentary with tags on June 12, 2024 by itnerd

For the three of you who still use Tile Bluetooth trackers, I have bad news for you. The company has been pwned. And while this isn’t as bad as it could have been, it’s pretty bad. Here are the key details:

A hacker has gained access to internal tools used by the location tracking company Tile, including one that processes location data requests for law enforcement, and stolen a large amount of customer data, such as their names, physical addresses, email addresses, and phone numbers, according to samples of the data and screenshots of the tools obtained by 404 Media.

The stolen data itself does not include the location of Tile devices, which are small pieces of hardware users attach to their keys or other items to monitor remotely. But it is still a significant breach that shows how tools intended for internal use by company workers can be accessed and then leveraged by hackers to collect sensitive data en masse. It also shows that this type of company, one which tracks peoples’ locations, can become a target for hackers.

“Basically I had access to everything,” the hacker told 404 Media in an online chat. The hacker says they also demanded payment from Tile but did not receive a response.

That’s not good. Now the impact of this hack is limited because Tile’s business fell off a cliff the second that Apple AirTags appeared. But if your data is still in Tile’s systems, you have a problem.

Sidebar: It may be too late now, but if you want to delete your Tile account click here.

Anyway, I want to point out how the hacker got in:

The hacker says they obtained login credentials for a Tile system that they believe belonged to a former Tile employee. 

That’s bad. Clearly Tile dropped the ball here. And that continued with how they responded to 404 Media. Check this out:

Tile told 404 Media in a statement “Recently, an extortionist contacted us, claiming to have used compromised Tile admin credentials to access a Tile system and customer data. We promptly initiated an investigation into the potential incident. Our investigation detected that certain admin credentials were used by an unauthorized party to access a Tile customer support platform, but not our Tile service platform. The Tile customer support platform contains limited customer information, such as names, addresses, email addresses, phone numbers, and Tile device identification numbers. It does not include more sensitive information, such as credit card numbers, passwords or log-in credentials, location data, or government-issued identification numbers.”

“We disabled the credentials and took swift action designed to prevent any future unauthorized access to the Tile customer support platform and associated Tile customer data. At this time, we are confident there is no continued unauthorized access to the Tile customer support platform,” the statement continued.

Tile suggested in its statement that it was not aware of what data had been taken until 404 Media shared samples of the data for more verification. “Once you supplied us with additional data, we investigated further and determined that it is likely data from the impacted Tile customer support platform.  We thank you for bringing this new information to our attention,” it read.

Tile also published a version of this statement on its website, but only after 404 Media contacted the company for comment and proved to it that the stolen data was accurate.

Tile did not respond directly when asked if the hacker had the required access to perform a location data request.

Clearly Tile is clueless. I am certain that this is not going to be the last of this story. And secondary attacks against Tile customers are sure to come. And the blame for this rests solely with Tile. Going forward, they and their corporate masters Life360 don’t deserve a cent from you, as they clearly don’t have a clue when it comes to keeping your personal data secure. Not that I am shocked by that.

Apple Intelligence Announced…. What Does An Expert Think Of It?

Posted in Commentary with tags on June 12, 2024 by itnerd

On Monday at WWDC, Apple announced Apple Intelligence, which is Apple’s spin on AI. You can read the marketing fluff here. But if you want a FAQ that will answer all your questions, this should help you. The bottom line is that it’s supposed to be truly useful while being truly private. In fact, Apple spent a lot of time talking about the privacy aspects of Apple Intelligence and how the company is open to having people verify its claims. To get another perspective on this, Kevin Surace, Chair of Token and “Father of the Virtual Assistant,” had this to say:

Apple has taken a “privacy and security first” approach to handling all generative AI interactions that must be processed in the cloud. No one else comes close at this point, and no one else has spelled out with full transparency how they intend to meet that high bar. More information can be found here: https://security.apple.com/blog/private-cloud-compute/.

Note that, at least for now, this is for Apple hardware product users who must trust that what they say to the AI is private to them and can never be stolen or learned from. It’s possible that some enterprises will evaluate the strength of this and allow their employees to use Apple devices with Apple Intelligence without fear.

Apple didn’t exactly state what silicon they used here. Is it a custom GPU cluster they designed or their own M4 processors, which include a neural engine and substantial GPU resources? But in typical Apple fashion, they have vertically integrated everything and taken ownership of its security from top to bottom. It’s impressive and ahead of AWS, Microsoft, and Google cloud offerings for LLMs thus far, even if it is just in support of Apple Intelligence features.

Apple has set the bar for absolute privacy and security of generative AI interactions. Everyone else will need to scramble now to meet this bar. This may allow enterprises to trust the Apple infrastructure for routine Apple Intelligence interactions, even those that include some corporate data.

Apple has developed its own foundation models that are very impressive but don’t yet beat out GPT-4. They publish their comparisons here: https://machinelearning.apple.com/research/introducing-apple-foundation-models. While Apple has not said what its partnership with OpenAI entails, they hint that when GPT-4 (or GPT-5 perhaps) is required for more accuracy, they will use it. To ensure absolute privacy, they would need to host it themselves in their Private Cloud Compute. They didn’t state that yesterday, so I suspect that the ink is still drying on those agreements with details to be worked out. But bouncing out to GPT-4 anytime won’t work. They suggested there would be an opt-in to that, so perhaps users give up some privacy when they opt to use GPT-4. How safe is OpenAI? They do provide various levels of private operation, but no one really knows how safe, secure, and non-sharing it actually is. While Apple has published an extensive security white paper, OpenAI has a short ChatGPT Enterprise privacy note, which certainly isn’t convincing Elon Musk it’s safe.

This is a world-class effort, one where they are inviting security experts to poke holes in their approach. I’d say it appears as rock solid as anything we have seen.

All data to the cloud is encrypted, so a simple man-in-the-middle attack won’t work. From what they are saying, one would have to break into their network, but they don’t even have any debugging tools enabled in runtime—no privileged runtime access. They even took major precautions against actual physical access (basically breaking into the data center). They state that they have made this so secure and so encrypted with no storage of your information that it isn’t a target. I’d say this is state-of-the-art from the silicon to the outer doors of the facility.

Apple is stating that they are using their own foundation models in the network and the devices. That’s first and foremost. Then they note a partnership with OpenAI, to be used only when required, and they will also use the best of breed models. They seem to be hedging their bets here. OpenAI is a bit of a black box. But I suspect either Apple will host it themselves or demand a very private instance for their users, and users have to opt-in to its use. They failed to give us more details on the partnership, so time will tell, but it’s clear Apple takes privacy and security seriously, and they realize the hesitancy when they mention OpenAI. My bet is they will do this right, and it won’t be an issue.

While I don’t trust any company completely, I trust Apple more than I trust most companies. Thus I will be taking a dive into the Apple Intelligence pool when it comes out. If it improves Siri, that alone would be worth it. But in all seriousness, the privacy-first approach is a win in my mind for users.

Today Is Patch Tuesday…. It’s Patching Time!

Posted in Commentary with tags on June 11, 2024 by itnerd

Today is “Patch Tuesday” and Neowin and Bleeping Computer have the list of fixes that are included for these patches for Windows 11 and 10. Those articles are worth a read.

Tom Marsland, VP of Technology, Cloud Range, and Board Chairman of VetSec had these comments:

Today’s Patch Tuesday from Microsoft fixes a publicly disclosed zero-day, a design issue in the Domain Name System Security Extensions (DNSSEC) that could be exploited to cause a denial-of-service attack in vulnerable DNS resolvers. According to researchers that found the vulnerability (which had been present in DNSSEC for the better part of two decades), an attacker “could completely disable large parts of the worldwide Internet.”

This Patch Tuesday fixed quite a few remote code execution vulnerabilities; however, exploiting them requires local access to the systems in question. These attacks could take the form of tricking users into opening malicious documents, or other forms of social engineering, to exploit the affected systems and applications, which include SharePoint, Visual Studio, Microsoft Office, and Microsoft Outlook.

While most of these items patched are not seeing exploits in the wild, it is important for system administrators and security personnel to make a judicious effort to patch systems as soon as possible after this release.
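To give a sense of why a design flaw like this is so nasty (it has been widely reported as the “KeyTrap” issue): a DNSSEC validator that tries every key against every signature does keys-times-signatures worth of expensive cryptographic work, so a single small, attacker-crafted response can pin a resolver’s CPU. Here’s a toy Python illustration of the cost model. It is not real DNSSEC code; it just counts the operations a naive validator would attempt:

```python
# Toy cost model of the DNSSEC validation flaw described above: a
# validator that tries every key against every signature performs
# keys * signatures checks, each an expensive crypto operation. This
# is NOT real DNSSEC code; it only counts the attempts.

def naive_validation_cost(num_keys: int, num_signatures: int) -> int:
    """Signature checks attempted when every key is tried on every signature."""
    attempts = 0
    for _ in range(num_keys):
        for _ in range(num_signatures):
            attempts += 1  # each attempt stands in for a costly crypto operation
    return attempts

# A benign response (one key, one signature) costs one check. A
# malicious response packed with keys and signatures forces thousands
# of checks from a single small packet.
```

That quadratic blow-up from one packet is the whole attack, which is why patching resolvers quickly matters here.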

I would encourage you to read those so that you can see what’s been fixed, and deploy these fixes when you can. Because installing these patches is an easy way to keep yourself secure.

Fullcast Unveils Copilot for RevOps

Posted in Commentary with tags on June 11, 2024 by itnerd

Fullcast is proud to introduce Copilot for RevOps®, a new addition to the Fullcast platform that streamlines how organizations approach revenue operations.

Copilot was designed to assist revenue operations teams with the daily task of keeping the customer relationship management (CRM) platform aligned with go-to-market (GTM) plans. Copilot offers teams the ability to automate common tasks such as dealing with new hires, terminations, and role changes, balancing territories, managing holdovers, and tracking service levels.

By streamlining workflows through an automated action framework and event-driven automation, Copilot for RevOps ensures the organization’s GTM plans stay dynamic and aligned even as conditions change.

Companies will be able to set operational policies for tasks like territory balancing and lead routing, ensuring that their plans are always up to date. Key features of Copilot for RevOps include the following:

  1. Automated rules: Businesses can create automated rules for common GTM tasks, such as territory auto-balancing and lead routing.
  2. Automated balancing of policies: Copilot ensures sales reps’ territories are always balanced and responding to moves, additions and changes in their CRM through operational policies.
  3. Automatic updates and CRM syncing: In conjunction with Fullcast SmartPlan, organizations can build and adapt their GTM plans with automated updates that are seamlessly synced directly with their CRM platform, such as Salesforce.
  4. Rapid lead response times: Copilot improves “speed to lead” by setting and tracking service-level agreements for leads and critical RevOps processes.
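Fullcast hasn’t published API details, so purely as an illustration, an event-driven policy like the territory auto-balancing described above might be wired up something like this. Every class, method, and field name below is hypothetical:

```python
# Hypothetical sketch of an event-driven GTM rule engine like the one
# described above. Fullcast has not published its API, so every name
# here is invented for illustration.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class CRMEvent:
    kind: str          # e.g. "rep_terminated", "new_hire", "lead_created"
    payload: dict = field(default_factory=dict)

class RuleEngine:
    """Dispatches CRM change events to the policies registered for them."""
    def __init__(self):
        self.rules: dict[str, list[Callable[[CRMEvent], str]]] = {}

    def on(self, kind: str, action: Callable[[CRMEvent], str]) -> None:
        self.rules.setdefault(kind, []).append(action)

    def handle(self, event: CRMEvent) -> list[str]:
        # Run every policy registered for this event kind.
        return [action(event) for action in self.rules.get(event.kind, [])]

# Example policy: when a rep leaves, reassign their territory.
def rebalance_on_termination(event: CRMEvent) -> str:
    rep = event.payload["rep"]
    return f"reassigned accounts owned by {rep} to neighboring territories"

engine = RuleEngine()
engine.on("rep_terminated", rebalance_on_termination)
```

The design idea, as I read the announcement, is that the operational policy is set once and the CRM changes themselves trigger the rebalancing, rather than someone cleaning up territories by hand.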

Tony Anscombe to EMCEE Collision Conference 2024’s Developer Track: FullSTK

Posted in Commentary with tags on June 11, 2024 by itnerd

ESET today announced that Tony Anscombe, Cyber Security Evangelist at ESET, will be the emcee for the Developer Track: FullSTK at this year’s Collision Conference. With topics ranging from AI and privacy to future tech, Anscombe will introduce and shed light on a range of critical technology topics during the event, which brings together the product managers, data scientists, coders and engineers programming the future to talk tech. 

Tony Anscombe brings a wealth of experience to the stage as Cyber Security Evangelist at ESET, having spoken at renowned industry conferences such as RSA, Black Hat, Infosec, Gartner Risk and Security Summit, and the Child Internet Safety Summit. Most recently, Anscombe presented on cyber risk insurance, and published an industry whitepaper on the topic, for ESET World 2024, an annual event where global cybersecurity professionals, analysts and decision-makers come together to discuss technological advancements.  

During the FullSTK Developer Track, the following topics will be highlighted: 

  • Future Tech: Explore the potential of superpositions and DNA enzymes in processing data at unprecedented speeds, the impact of identity orchestration on development, the future of ambient computing, and advances in AI and machine learning. 
  • Security and Compliance: With the escalation of cyberwarfare and increasingly stringent legislation, discover new security tools and tactics. Learn what companies and nation-states can do to thwart sophisticated cyberattacks and stay ahead of technological advancements. 
  • Privacy and Diversity in Data: Address the pressing ethics of AI technology, including opaque terms and conditions and algorithmic biases. Discuss how technology companies are advancing data privacy and fostering diversity to design complex AI systems free from bias. 
  • The Role of the Engineer: Analyze how DevOps teams have led the way in remote work and the ongoing influence of engineers on the future of work. Investigate the challenges companies face in acquiring technically skilled workers and the implications of nearshoring talent. 

As a speaker, author, and recognized expert in the current threat landscape, security technologies, data protection, privacy, and internet safety, Anscombe’s insights are highly sought after and respected globally. He is regularly quoted in leading security, technology, and business publications such as BBC, The Guardian, The New York Times, and USA Today. Additionally, he has made broadcast appearances on Bloomberg, BBC, CTV, CBC, CP24, Global News, and CBS, establishing himself as a trusted voice in the cybersecurity domain. 

Don’t miss the opportunity to engage with Tony Anscombe and gain valuable insights during the FullSTK sessions at Collision Conference 2024. For more details, visit here: LINK

Adobe To Change Terms Of Use After EPIC Backlash

Posted in Commentary with tags on June 11, 2024 by itnerd

Last week Adobe released new terms of use for its products that almost immediately sparked anger amongst its user base. And attempts to explain them away didn’t go over well. So Adobe is trying a third time via this blog post:

We recently rolled out a re-acceptance of our Terms of Use which has led to concerns about what these terms are and what they mean to our customers. This has caused us to reflect on the language we use in our Terms, and the opportunity we have to be clearer and address the concerns raised by the community.

Over the next few days, we will speak to our customers with a plan to roll out updated changes by June 18, 2024.

At Adobe, there is no ambiguity in our stance, our commitment to our customers, and innovating responsibly in this space. We’ve never trained generative AI on customer content, taken ownership of a customer’s work, or allowed access to customer content beyond legal requirements. Nor were we considering any of those practices as part of the recent Terms of Use update. That said, we agree that evolving our Terms of Use to reflect our commitments to our community is the right thing to do.

In other words, the blowback was so epic that Adobe has had to do a rethink. And next week Adobe will roll out new terms of use that clearly state what Adobe can and can’t do with user data. At the same time, Adobe hopes that by doing so it can get users to trust it again. That might be a tall order given the scale of the blowback. But I guess we will see when these new terms of use drop.

Elon Musk Flips Out At Apple Working With OpenAI

Posted in Commentary with tags on June 11, 2024 by itnerd

From the “what drugs is this guy smoking” department comes a tweet storm from Elon Musk in regards to Apple integrating OpenAI’s ChatGPT into the operating systems that are due to be released this fall. The TL;DR is that he’s so upset by this that he’s threatening to ban iPhones and other Apple devices from his companies:

Elon Musk is threatening to ban iPhones from all his companies over the newly announced OpenAI integrations Apple announced at WWDC 2024 on Monday. In a series of posts on X, the Tesla, SpaceX and xAI exec wrote that “if Apple integrates OpenAI at the OS level,” Apple devices would be banned from his businesses and visitors would have to check their Apple devices at the door where they’ll be “stored in a Faraday cage.”

His posts seem to misunderstand the relationship Apple announced with OpenAI or at least attempt to leave room for doubt about user privacy. While Apple and OpenAI both said that users are asked before “any questions are sent to ChatGPT,” along with any documents or photos, Musk’s responses indicate he believes OpenAI is deeply integrated into Apple’s operating system itself and therefore able to hoover up any personal and private data.

In iOS 18, Apple said people will be able to ask Siri questions, and if the assistant thinks ChatGPT can help, it will ask permission to share the question and present the answer directly. This allows users to get an answer from ChatGPT without having to open the ChatGPT iOS app. Photos, PDFs or other documents you want to send to ChatGPT get the same treatment.

Musk, however, would prefer that OpenAI’s capabilities remain bound to a dedicated app — not a Siri integration.

Responding to VC and CTO Sam Pullara at Sutter Hill Ventures who wrote that the user is approving a specific request on a per-request basis — OpenAI does not have access to the device — Musk wrote, “Then leave it as an app. This is bullshit.”

Pullara had said that the way ChatGPT was integrated was essentially the same way the ChatGPT app works today. The on-device AI models are either Apple’s own or those using Apple’s Private Cloud.

Meanwhile, replying to a post on X from YouTuber Marques Brownlee that further explained Apple Intelligence, Musk responded, “Apple using the words ‘protect your privacy’ while handing your data over to a third-party AI that they don’t understand and can’t themselves create is *not* protecting privacy at all!”

He even replied to a post by Apple CEO Tim Cook, wherein he threatened to ban Apple devices from the premises of his companies if he didn’t “stop this creepy spyware.”

“It’s patently absurd that Apple isn’t smart enough to make their own AI, yet is somehow capable of ensuring that OpenAI will protect your security & privacy!” Musk exclaimed in one of many posts about the new integrations. “Apple has no clue what’s actually going on once they hand your data over to OpenAI. They’re selling you down the river,” he said. While it’s true that Apple may not know the inner workings of OpenAI, it’s not technically Apple handing over the data — the user is making that choice, from the sound of things.

I have a feeling that this is all a smokescreen for the fact that Apple is working with OpenAI and not with him and his Grok AI. I say that because during the WWDC keynote where this was announced, Apple did say that it was open to integrating other AIs, and that OpenAI was the first one. And I am going to guess that his AI isn’t on Apple’s list. So he’s having a tantrum and throwing his toys out of the stroller like a two-year-old. Which is typical for Elon, as he seems to have the emotional maturity of a two-year-old. My advice is to completely ignore Elon as clearly he’s lost the plot here.