Meta Smart Glasses: Smarter or Awkward?

by Archynetys Technology & Science Desk

Meta’s Smart Glasses Demo Mishap Highlights AI Challenges


A chaotic product demonstration at Meta’s Connect developer conference underscores the hurdles in making AI-powered smart glasses a seamless reality.


This summer, during an earnings call, Meta CEO Mark Zuckerberg expressed his strong belief in the future of smart glasses. He suggested that people who opt out of wearing AI-enabled smart glasses, particularly his own, might face a “pretty significant cognitive disadvantage” compared to those who embrace the technology.

However, Meta’s recent attempt to showcase the potential of its face computing platform to enhance human capabilities fell short of expectations.

During a live keynote address at the company’s Connect developer conference on Wednesday, Zuckerberg transitioned to a product demonstration of the newly unveiled smart glasses. The demo quickly ran into problems. When a chef was invited onstage to use the Meta glasses’ voice assistant for recipe guidance, the “Hey Meta” wake word triggered every pair of Meta glasses in the room, causing hundreds of devices distributed to attendees to activate and generate noise simultaneously.

Meta CTO Andrew Bosworth explained in an Instagram Reel after the event that the issue arose because the many instances of Meta’s AI running in the same location inadvertently created a DDoS situation. A video call demonstration also failed, and the demos that did work were plagued by delays and interruptions.

The point here isn’t solely to criticize the flawed Connect keynote. The awkwardness, hesitant interactions, repeated commands, and stilted conversations inadvertently highlight the challenges of integrating this technology into real-world scenarios.

According to Leo Gebbie, a director and analyst at CCS Insight, “The main problem for me is the raw amount of times where you do engage with an AI assistant and ask it to do something and it doesn’t actually understand. The failure risk just is high, and the gap is still pretty big between what’s being shown and what we’re actually going to get.”

Eyes of the World

Live captions seen on the Meta Ray-Ban Display. Courtesy of Meta

It’s evident that Zuckerberg’s vision of smart glasses as a computing platform that elevates human intellect and capability remains a distant goal. While wearing internet-connected hardware on one’s face may speed up access to facts, and perhaps enhance perceived intelligence or competence, the Connect demo’s awkwardness vividly illustrated that merely wearing a chatbot and a screen can negate any cognitive benefits. Smart glasses, in their current state, may even place the wearer at a significant social disadvantage.

