Mark Zuckerberg recently walked into the courtroom to testify about social media addiction.
Image: David Paul Morris/Bloomberg via Getty Images
When Mark Zuckerberg walked into the courtroom to testify about social media addiction, the scene had the familiar choreography of modern accountability: cameras flashing, security in formation, the CEO in measured stride. But it was the sunglasses that caught my eye and the judge's attention.
They were not merely dark lenses shielding against glare. They were Meta Ray-Bans, the product of a collaboration between Meta and EssilorLuxottica, the parent company of Ray-Ban. On the surface, they looked like a classic design icon. Beneath the frames, they housed cameras, microphones and speakers; I'm told future versions will carry artificial intelligence capabilities.
Zuckerberg had come to answer questions about the psychological impact of social media on young people. Instead, his entourage found themselves scolded for wearing a device that could quietly record the proceedings. Judge Caroline Cool warned that any recordings made inside the courtroom must be disposed of, or consequences would follow.
It was a small but revealing moment. The courtroom was not reacting to a social network. It was reacting to a new interface.
A few years ago, Meta began experimenting with embedding technology into eyewear. The logic was subtle but profound: glasses are worn, not held. They sit at the point where perception meets interpretation. Unlike a phone, which must be lifted and unlocked, glasses are already in position—watching the world as you do.
At first, the Meta Ray-Bans seemed modest: the ability to take photos, record short videos, play audio. But the trajectory is clear. Future iterations are expected to incorporate facial recognition and more advanced contextual analysis. In other words, they will not simply capture what you see; they will attempt to understand it.
This is the rise of Visual Artificial Intelligence, systems that interpret and generate visual information by combining computer vision, machine learning, deep learning, and generative models. Traditional AI reads and listens. Visual AI sees.
The implications extend well beyond novelty. Eyewear may soon become the primary interface for artificial intelligence. It is always with you. It observes your environment. It can learn patterns in your movements and interactions. Over time, it can provide contextual information based on what it perceives—names of people, histories of buildings, instructions for tasks.
If used ethically, the applications are compelling. A police officer might identify a suspect more efficiently. A surgeon could receive real-time guidance during a procedure. A mechanic might see overlay instructions aligned precisely with engine components. These are not distant fantasies; they are extensions of capabilities already demonstrated in prototypes.
But the courtroom incident underscores a deeper question: what happens when every glance has the potential to be recorded, analyzed, and stored?
A blinking indicator light on the frame is a start, but it is not a safeguard. Consent becomes ambiguous when recording devices are indistinguishable from ordinary glasses. Public manners must evolve. So must regulation. The issue is no longer simply photography; it is persistent, automated interpretation of human environments.
And the momentum is accelerating. According to market research firm Circana, sales of smart glasses nearly tripled in 2025 compared to the year before. Meta has reportedly sold millions of pairs since the product’s launch, with sales climbing sharply last year. Google has unveiled Android XR glasses. Apple is widely reported to be developing its own version and may launch in March 2026. Samsung is expected to follow later this year. Even OpenAI has signaled interest in hardware pathways.
We may be entering a new era of consumer technology—one in which intelligence is no longer confined to devices we hold, but woven into objects we wear.
The instinctive reaction may mirror that of Judge Cool: to reject it, to demand it be turned off, to treat it as an intrusion. But dismissal is rarely a durable response to technological change. The printing press, the camera, the smartphone—all were greeted with suspicion before becoming normalised.
The difference this time is intimacy. When technology moves to the face, it moves closer to identity. It becomes less tool and more extension.
The moment in the courtroom may, in retrospect, seem symbolic. A technology leader questioned about the consequences of one digital platform arrived accompanied by people wearing the prototype of another. The challenge now is not merely to regulate these devices after they proliferate. It is to think carefully, before they become ubiquitous, about the norms and safeguards that will govern their use.
Because when machines begin to see alongside us, the question is not simply what they can capture. It is what they will come to know. One day, acquiring them may require something like an ethical licence to use them responsibly.
Wesley Diphoko is a Technology Analyst and Editor-in-Chief of Fast Company (South Africa) magazine.
*** The views expressed here do not necessarily represent those of Independent Media or IOL.
BUSINESS REPORT