Critical RCE in Hugging Face’s LeRobot
Researchers disclosed a critical remote code execution flaw (CVE-2026-25874, CVSS 9.3) in Hugging Face’s open-source robotics platform LeRobot, caused by unsafe deserialization through Python’s pickle format. The issue allows an unauthenticated attacker to send malicious payloads over unsecured gRPC channels and execute arbitrary code on both the policy server and connected robot clients.
You can read more here: https://github.com/advisories/GHSA-f7vj-73pm-m822
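As an added example (not code from LeRobot, Hugging Face or the advisory), this is the standard mitigation for the class of bug described above: a `pickle.Unpickler` whose `find_class` hook resolves only allowlisted globals, shown accepting a benign message and rejecting a constructed one that would otherwise make the loader import and call a function. The allowlist contents, the payload helpers and every function name here are assumptions for the illustration.

```python
import io
import os
import pickle

# Allowlist of (module, name) globals a message may reference. Constructed for
# this example; a real server would list exactly the types its messages need.
SAFE_GLOBALS = {("builtins", "dict"), ("builtins", "list"), ("builtins", "tuple")}

class RestrictedUnpickler(pickle.Unpickler):
    """pickle.loads() imports and calls any module.attribute a payload names;
    overriding find_class (the documented hook) lets us refuse the rest."""

    def find_class(self, module, name):
        if (module, name) in SAFE_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"refusing global {module}.{name}")

def restricted_loads(data: bytes):
    """Deserialize data with the allowlist enforced: no global is resolved
    before find_class has approved it."""
    return RestrictedUnpickler(io.BytesIO(data)).load()

class _CallsGlobalOnLoad:
    """Constructed stand-in for a hostile message: its pickle instructs the
    loader to import and call os.getpid. The callable is harmless here, but
    the opcode sequence is the same for any importable function."""

    def __reduce__(self):
        return (os.getpid, ())

def benign_payload() -> bytes:
    # Plain containers and numbers pickle to container opcodes, no globals.
    return pickle.dumps({"action": [0.1, 0.2], "step": 3})

def hostile_payload() -> bytes:
    return pickle.dumps(_CallsGlobalOnLoad())

def check_payload(data: bytes):
    """('ok', obj) if it loads under the allowlist, else ('rejected', why)."""
    try:
        return ("ok", restricted_loads(data))
    except pickle.UnpicklingError as exc:
        return ("rejected", str(exc))

def plain_loads_runs_the_call() -> bool:
    """Shows the risk: unrestricted pickle.loads() executes the referenced call."""
    return pickle.loads(hostile_payload()) == os.getpid()
```

An allowlist only limits what a pickle message can reach once it arrives; it does not authenticate the gRPC channel the advisory describes as unsecured, so treat it as a stop-gap until a fixed release ships.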
Eli Woodward, Cyber Threat Intelligence Advisor at Team Cymru, has provided this comment:
“The bigger issue here is that AI infrastructure is increasingly becoming part of the external attack surface, often without the same visibility defenders have for traditional enterprise systems. Services like this can expose privileged environments that connect directly to valuable internal resources, making them attractive entry points for both financially motivated actors and more advanced threat groups. Once an attacker gains access, the challenge becomes understanding what else that infrastructure is connected to and how quickly they can pivot. External visibility and context become critical because many of these risks originate well beyond the traditional network perimeter. This is also an interesting case where even ‘physical safety’ becomes part of the risk model. While we’ve certainly seen that before in medical devices, the implementation of AI into robotics can create a whole new level of risk we haven’t seen before.”
This is a today problem, especially since there is no fix at present. Not good, in my opinion.
This entry was posted on April 28, 2026 at 4:05 pm and is filed under Commentary with tags Hugging Face.