Rabbit R1 AI Toy Exposed for Leaking Private Conversations Through Hardcoded API Keys
It's a stark reminder that promises of privacy and strong encryption from websites, apps, and hardware makers should not be taken at face value. Such claims stand tall until a flaw is uncovered, at which point the fragility of the underlying security measures becomes glaringly apparent.
Rabbit R1, a popular AI conversational toy equipped with a built-in microphone and internet connectivity, allows users to interact vocally with an AI model. This gadget had previously enjoyed significant acclaim for its innovative approach to AI interaction.
Like many hardware developers, Rabbit R1's manufacturer touted various security measures to protect user privacy on its official website. However, security researchers have since revealed that the R1's codebase contained hardcoded API keys, which could grant anyone who extracted them access to user interactions and private information.
The APIs used by Rabbit R1 include text-to-speech services from ElevenLabs, the Azure text-to-speech system, Yelp for product reviews, and Google Maps for location services.
These API keys could be exploited to:
- Access all conversation logs with R1.
- Remotely compromise R1's backend services, effectively bricking all R1 devices.
- Alter R1’s conversational responses.
- Change R1's voice output.
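To see why hardcoded keys are so dangerous, consider the sketch below. It is purely illustrative (the key string, variable names, and helper function are hypothetical, not Rabbit's actual code): a secret embedded in shipped code can be recovered by anyone who inspects the source or decompiles the firmware, whereas a secret injected at runtime never leaves the server or device configuration.

```python
import os

# Anti-pattern (illustrative): a key baked into shipped code travels with
# every copy of the software, so anyone who reads the source or firmware
# image can extract it and call the backing service as the vendor.
ELEVENLABS_API_KEY = "sk-hardcoded-placeholder"  # hypothetical, not a real key

# Safer pattern: keep the secret out of the codebase and supply it at
# runtime via an environment variable (or a secrets manager / per-device
# token), so a code leak does not also leak the credential.
def get_api_key() -> str:
    key = os.environ.get("ELEVENLABS_API_KEY")
    if key is None:
        raise RuntimeError("ELEVENLABS_API_KEY is not set")
    return key
```

A leaked environment-based key can also be rotated without shipping a firmware update, which is why the researchers' findings here were so severe: revoking hardcoded keys would require changes on every deployed device.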
Security vulnerabilities were also found in the foundational systems of Rabbit, the R1's manufacturer. The researchers, a group known as Rabbitude, were able to breach these systems and use the company's own email system to validate their intrusion. A similar test conducted in April went undetected, and recent tests have confirmed that the vulnerabilities remain unpatched.
Notably, Rabbit was aware of these security issues in the R1 but failed to address or publicly acknowledge them.
The researchers have withheld the technical details of the vulnerabilities, not out of deference to the manufacturer, but to protect users: full disclosure while the flaws remain unpatched could expose many devices to active exploitation.