On Monday at WWDC, Apple announced Apple Intelligence, which is Apple’s spin on AI. You can read the marketing fluff here. If you want a FAQ that will answer all your questions, this should help you. But the bottom line is that it’s supposed to be truly useful while being truly private. In fact, Apple spent a lot of time talking about the privacy aspects of Apple Intelligence and how the company is open to having people verify its claims. To get another perspective on this, Kevin Surace, Chair of Token and “Father of the Virtual Assistant,” had this to say:
Apple has taken a “privacy and security first” approach to handling all generative AI interactions that must be processed in the cloud. No one else comes close at this point, and no one else has spelled out with full transparency how they intend to meet that high bar. More information can be found here: https://security.apple.com/blog/private-cloud-compute/.
Note that, at least for now, this is for Apple hardware product users who must trust that what they say to the AI is private to them and can never be stolen or learned from. It’s possible that some enterprises will evaluate the strength of this and allow their employees to use Apple devices with Apple Intelligence without fear.
Apple didn’t exactly state what silicon they used here. Is it a custom GPU cluster they designed, or their own M4 processors, which include a neural engine and substantial GPU resources? Either way, in typical Apple fashion, they have vertically integrated everything and taken ownership of its security from top to bottom. It’s impressive and ahead of the AWS, Microsoft, and Google cloud offerings for LLMs thus far, even if it is just in support of Apple Intelligence features.
Apple has set the bar for absolute privacy and security of generative AI interactions. Everyone else will need to scramble now to meet this bar. This may allow enterprises to trust the Apple infrastructure for routine Apple Intelligence interactions, even those that include some corporate data.
Apple has developed its own foundation models that are very impressive but don’t yet beat out GPT-4. They publish their comparisons here: https://machinelearning.apple.com/research/introducing-apple-foundation-models. While Apple has not said what its partnership with OpenAI entails, they hint that when GPT-4 (or GPT-5, perhaps) is required for more accuracy, they will use it. To ensure absolute privacy, they would need to host it themselves in their Private Cloud Compute. They didn’t state that yesterday, so I suspect that the ink is still drying on those agreements, with details to be worked out. But silently bouncing requests out to GPT-4 won’t work. They suggested there would be an opt-in for that, so perhaps users give up some privacy when they opt to use GPT-4. How safe is OpenAI? They do provide various levels of private operation, but no one really knows how safe, secure, and non-sharing it actually is. While Apple has published an extensive security white paper, OpenAI has a short ChatGPT Enterprise privacy note, which certainly isn’t convincing Elon Musk it’s safe.
This is a world-class effort, one where they are inviting security experts to poke holes in their approach. I’d say it appears as rock solid as anything we have seen.
All data to the cloud is encrypted, so a simple man-in-the-middle attack won’t work. From what they are saying, one would have to break into their network, but they don’t even have any debugging tools enabled at runtime—no privileged runtime access. They have even taken major precautions against physical access (basically, breaking into the data center). They state that they have made it so secure and so encrypted, with no storage of your information, that it isn’t a target. I’d say this is state-of-the-art, from the silicon to the outer doors of the facility.
Apple is stating that they are using their own foundation models in the network and on the devices. That’s first and foremost. Then they note a partnership with OpenAI, to be used only when required, and they will also use best-of-breed models. They seem to be hedging their bets here. OpenAI is a bit of a black box. But I suspect either Apple will host it themselves or demand a very private instance for their users, and users will have to opt in to its use. They failed to give us more details on the partnership, so time will tell, but it’s clear Apple takes privacy and security seriously, and they realize the hesitancy when they mention OpenAI. My bet is they will do this right, and it won’t be an issue.
While I don’t trust any company completely, I trust Apple more than I trust most companies. Thus I will be taking a dive into the Apple Intelligence pool when it comes out. If it improves Siri, that alone would be worth it. But in all seriousness, the privacy-first approach is a win in my mind for users.
Vulnerabilities In Microsoft Apps Could Allow Hackers To Pwn macOS Users… And Microsoft Won’t Fix These Vulnerabilities
Posted in Commentary with tags Apple, Cisco, Microsoft on August 20, 2024 by itnerd
Cisco’s Talos Intelligence group has a very interesting blog post that any macOS user who runs Microsoft apps should read. First, the bad news from said blog post:
Cisco Talos recently conducted an analysis of macOS applications and the exploitability of the platform’s permission-based security model, which centers on the Transparency, Consent, and Control (TCC) framework.
We identified eight vulnerabilities in various Microsoft applications for macOS, through which an attacker could bypass the operating system’s permission model by using existing app permissions without prompting the user for any additional verification. If successful, the adversary could gain any privileges already granted to the affected Microsoft applications. For example, the attacker could send emails from the user account without the user noticing, record audio clips, take pictures or record videos without any user interaction.
All of that is pretty bad. Now here’s what’s worse:
Microsoft considers these issues low risk, and some of their applications, they claim, need to allow loading of unsigned libraries to support plugins and have declined to fix the issues.
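The mechanism Talos is describing comes down to one entitlement: an app signed with `com.apple.security.cs.disable-library-validation` opts out of macOS’s library validation, so it will load unsigned libraries (the “plugins” Microsoft refers to), and any injected library then runs with every TCC permission the app has already been granted (microphone, camera, and so on). As a rough illustration only—the plist content below is hypothetical, standing in for what a tool like `codesign` would dump for a real app—a short script can flag that entitlement:

```python
import plistlib

# Hypothetical entitlements dump for some app (not a real Microsoft app's
# entitlements), in the XML plist format that signing tools can emit.
ENTITLEMENTS_XML = b"""<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>com.apple.security.cs.disable-library-validation</key>
    <true/>
    <key>com.apple.security.device.microphone</key>
    <true/>
</dict>
</plist>
"""

RISKY_KEY = "com.apple.security.cs.disable-library-validation"

def loads_unsigned_libraries(entitlements_xml: bytes) -> bool:
    """True if the app opts out of library validation, meaning an unsigned
    library it loads would inherit all of the app's TCC grants."""
    entitlements = plistlib.loads(entitlements_xml)
    return bool(entitlements.get(RISKY_KEY, False))

if __name__ == "__main__":
    if loads_unsigned_libraries(ENTITLEMENTS_XML):
        print("App loads unsigned libraries; its TCC permissions are exposed.")
```

Again, this is just a sketch of the check, not Talos’s tooling; the point is that the risk is visible right in an app’s signed entitlements.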
Lovely. I can say with confidence that someone will look at this and say, “that’s a great way to get into a Mac and use it for my evil purposes.” Then this will become a major problem. And you have to wonder what Microsoft will do at that point. Though there’s always the possibility that Apple will force Microsoft to do something, as it is their platform, after all. I would love to be a fly on the wall when that conversation happens. In the meantime, there are no mitigations for these vulnerabilities at present. So you’ll just have to do your best to be careful out there.