
Meta Ray-Ban Smart Glasses Spark Privacy Fears Over Human Video Review

  • Writer: Covertly AI
  • 3 min read

Meta’s Ray-Ban smart glasses promise a futuristic convenience: hands-free recording paired with an AI assistant that can interpret what you see and hear. But recent reporting suggests that convenience can come with a troubling tradeoff, raising fresh questions about consent, transparency, and who may access the footage these devices capture. In the UK, the Information Commissioner’s Office (ICO) says it will write to Meta after a “concerning” investigation alleged that outsourced workers could view highly sensitive videos recorded by the company’s AI smart glasses (Vallance).


The allegations come from a joint investigation by the Swedish newspapers Svenska Dagbladet and Göteborgs-Posten, which reported that contractors in Nairobi, Kenya, reviewed intimate material recorded by wearers, including videos of people using the toilet or having sex (Vallance; Tangermann). Meta told the BBC that it may use contractors to review content people share with Meta AI in order to improve the glasses experience, and said this practice is described in its privacy documentation (Vallance). Meta also says content is filtered to protect privacy, including measures such as blurring faces, but sources cited in the Swedish reporting said the filtering sometimes failed and faces could still be visible (Vallance; Owen). While the glasses have a recording light and require users to start recording manually or via voice command, the reports suggest many users may not realize that some recordings and AI interactions can be reviewed by humans, especially when disclosures are buried in long terms and policies (Vallance; Tangermann).


Futurism’s reporting adds detail about the human process behind “smart” AI features. Contractors working for a Nairobi-based outsourcing firm called Sama described reviewing and labeling footage, a common step used to improve computer vision and other AI systems (Tangermann; Vallance). Workers said they encountered highly personal content, including people undressing, pornography, and footage exposing bank card details, and some described feeling pressured to continue annotating sensitive videos to avoid losing their jobs (Tangermann). Futurism also highlights a consumer dilemma: users effectively cannot use certain AI capabilities without sending media to Meta’s remote servers, and once the data is shared, control over how it is handled can become difficult to reclaim (Tangermann). A data protection lawyer quoted in the Swedish reporting argued that once material is fed into models, users can lose practical control over how it is used (Tangermann).


AppleInsider frames the controversy as a predictable outcome of camera-based wearables that rely on large datasets and “armies” of workers to refine AI. It describes Sama’s role in training workflows where annotators label objects in images and video, and repeats accounts of accidental recordings, such as glasses left on a bedside table capturing someone changing clothes (Owen). AppleInsider notes that workers typically sign non-disclosure agreements, but argues that the sheer existence of broad human access still undermines public expectations of privacy, particularly when anonymization systems are imperfect in real-world conditions like poor lighting (Owen). The outlet also draws a parallel to Apple’s 2019 Siri controversy, when contractors reviewing audio snippets reportedly encountered private material; AppleInsider says Apple has faced ongoing fallout, including settlements as recently as 2025, and warns that future Apple wearables could face similar scrutiny if privacy is mishandled (Owen).


Regulators and the public are now watching how Meta responds. The ICO emphasized that devices processing personal data should keep users in control and provide meaningful transparency about what data is collected and how it is used (Vallance). As AI wearables become more popular, the central issue is shifting from whether the technology can see and hear, to whether ordinary people truly understand where that data goes, who may view it, and what protections actually hold up in practice (Vallance; Tangermann; Owen).

Works Cited

Owen, Malcolm. “What Privacy? As Expected, Meta Ray-Bans Are a Privacy Disaster.” AppleInsider, 3 Mar. 2026, appleinsider.com/articles/26/03/03/what-privacy-as-expected-meta-ray-bans-are-a-privacy-disaster.

Tangermann, Victor. “Meta Workers Say They’re Seeing Disturbing Things Through Users’ Smart Glasses.” Futurism, 2 Mar. 2026, futurism.com/artificial-intelligence/meta-disturbing-smart-glasses.

Vallance, Chris. “Regulator Contacts Meta Over Workers Watching Intimate AI Glasses Videos.” BBC News, 4 Mar. 2026, www.bbc.com/news/articles/c0q33nvj0qpo.

Morris, David Paul, photographer. “Meta CEO Mark Zuckerberg wears a pair of Meta Ray-Ban Display AI glasses with an accompanying neural wristband at Meta Connect 2025.” Entrepreneur, 18 Sept. 2025, www.entrepreneur.com/business-news/meta-ceo-mark-zuckerberg-reveals-new-ray-ban-display-glasses/497298. Accessed 4 Mar. 2026.

“Meta Ray-Ban Smart Glasses Review: A Glimpse of the Future.” Pocket-lint, 7 Nov. 2023, pocket-lint.com/ray-ban-meta-smart-glasses-review/. Accessed 4 Mar. 2026.
