Google is preparing three families of connected glasses for 2026 and beyond, with Android XR and Gemini at the heart of the experience. The objective: regain the initiative against Ray-Ban Meta. The company is building on its already popular Android ecosystem, augmented reality, and a format more portable than a headset.

The market for connected eyewear is no longer just a laboratory of ideas: it's becoming a field of conquest. Google is relaunching a structured offensive, conceived as a range, with three complementary approaches and a clear promise: to make AI and augmented reality useful, visible, and above all adoptable in everyday life.

With a minimalist display, reliance on the smartphone, and a "headset" disguised as a frame, this strategy aims to destabilize a well-established player. Meta has already imposed its codes, as shown by the symbolic milestone of one million Ray-Ban Meta units sold, and is pushing innovation forward fast.


Three models of Google connected glasses: a segmentation designed to compete with Ray-Ban Meta

Google isn't presenting a single pair of glasses, but three product directions, each associated with a dominant use. This segmentation responds to a reality observed on social networks: the same object cannot simultaneously satisfy the content creator, the city dweller in a hurry, and the user who wants to "replace a screen". Meta has understood this with its rapid iterations and special editions, such as its ultra-limited-edition connected Ray-Bans.

First axis: a "classic"-looking pair, with a screen in the right lens. The idea is to remain discreet while displaying information of high immediate value. With this format, Google is pushing a more streamlined interface, inspired by Android notifications: reading the latest message, previewing a route, or asking Gemini a quick question. The most strategic idea is not the screen but compatibility: if the display follows the logic of notifications, adapting third-party services becomes simpler. For a brand, this opens up concrete scenarios: a reservation confirmation, a pickup code, an event reminder, or a real-time campaign alert.

Second axis: a similar version, but with two screens, one per lens. The challenge is not just "bigger" but "more immersive": greater display width, more depth, and increased potential for spatial cues (navigation, object annotation). This choice carries an industrial and energy cost, which explains the later timetable. But Meta is also making progress on the display front, as shown by the arrival of a new Ray-Ban Meta screen, making the race for visual comfort a decisive one.

Third axis: a pair without a screen, in the same launch window as the single-screen model. Here, the proposition is purely "assistant + capture": voice commands, audio synthesis, photos, and quick actions. This format is aimed at users who don't want a permanent display but do want creative and organizational functions. For a creator, the gain is immediate: capture a scene without taking out the phone, then publish faster, especially as automated editing progresses, as shown by the rise of machine translation on Reels for reaching an international audience.

This range signals a central point: Google is not just looking to copy a success, but to establish three distinct uses before consumer habits settle elsewhere. The software field is therefore the next battleground.

This shift towards the software experience can already be seen in the media and community interest surrounding these devices, including analyses of connected glasses and their innovations, which show how quickly expectations are taking shape.

One screen, two screens or none: the Google user experience between Android notifications and content creation

The "one screen" choice is not a compromise; it's a usage decision. On a mobile, the user tolerates information density. On a frame, acceptance depends on cognitive friction: too much display is tiring, too little is frustrating. Google therefore seems to prefer micro-interactions, close to what we already do dozens of times a day: consult, validate, reply, browse. In a logic of influence, this approach sticks to real-life routines: a creator doesn't "work" continuously, but alternates between capture, conversation, and publication.

A concrete example helps visualize this. A fictional designer, Lina, manages a campaign for a beauty brand. During an event, she receives a notification: an adjusted briefing, her time slot, a link to a script. Wearing single-screen glasses, she reads the essentials without taking out her phone, maintaining eye contact and limiting the "checked-out" effect. The experience becomes social, not just technical. In the Instagram world, where attention is quickly lost, this continuity has value, especially when optimizing user journeys, as in best practices for navigating Instagram stories.

The two-screen model pushes another logic: contextualization. A more stable directional arrow, a distance marker, a "point of interest" annotation: these elements gain in legibility as the visual field widens. For a retail brand, this can become a drive-to-store tool: guiding customers to a store, displaying a nearby offer, or reminding them of an appointment. Success will depend on interface design, because "more information" should never mean "more confusion".

In contrast, the screenless pair focuses on audio and capture. It is aimed at those who want a discreet assistant and memories "on the fly". It's also a sensitive area: photo and video capture raise questions of consent. Meta is already being scrutinized on these issues, notably around potential facial recognition via Meta glasses. Google will therefore have to lock in transparency signals (LEDs, sounds, controls) and offer understandable settings, because trust is a prerequisite for adoption.

On the hardware level, one point structures everything: dependence on the smartphone for computing power, at least on certain models. This architecture may frustrate fans of total autonomy, but it has a strategic advantage: it speeds up time-to-market by building on what already exists and taking advantage of Android updates. For a brand, it also means easier integration, as the ecosystem of applications and notifications is already a de facto standard. Final insight: the battle will be played out less on "having a screen" than on making every interruption useful.

This logic of use naturally paves the way for a third territory: eyewear capable of competing with a headset, while retaining a wearable form.

The Aura project and Android XR: when Google turns connected glasses into a compact alternative to headsets

The "Aura" project changes scale. Where the first two formats resemble augmented glasses, Aura is closer to a mixed device, halfway between augmented reality and immersion. The idea is to preserve the visibility of the real world through the glasses, while superimposing a gesture-controlled interface. This gestural grammar is key: it avoids public speaking and limits tactile dependence, while creating a sensation of digital space "set" in the environment.

The most revealing technological choice concerns the hardware architecture. Aura integrates very little electronics into the frame itself, offloading the "brain" to an external battery pack connected by cable. Inside sits a Snapdragon XR2+, a chip already known for powering recent XR platforms. This design may come as a surprise, but it addresses a concrete problem: dissipating heat, limiting the weight on the nose, and extending the duration of use without turning the frame into a massive object. Headsets have already shown that ergonomics determine frequency of use; here, Google is looking for a more "portable" compromise.

For brands and designers, Aura opens up more ambitious scenarios than simple notifications. An agency could imagine an overlaid product tour, a virtual showroom in a real location, or on-the-job training with contextual cues. In terms of influence, this could give rise to new formats: interactive demonstrations, guided virtual try-ons, or "spatial" content captured and then edited for social networks. The competition is eyeing the same horizon: between Meta, Apple, and other players, comparison is becoming inevitable, as explained by the state of the Apple and Meta connected glasses competition.

A strategic dimension is added: Android XR "in its purest version" suggests an open platform, conducive to partnerships. Google has historically excelled when manufacturers and developers can build on a common base. Meta, on the other hand, relies on a more integrated ecosystem, is already a leader in this field, and is accelerating to maintain its position. The trade-off will be clear: vertical integration versus variety of compatible devices.

To steer an influence strategy around these objects, measurement must follow. Glasses create new points of contact (capture, navigation, assistance), and therefore new KPIs to revisit: activation rates, retention, voice interactions, local conversions. Teams that already know how to structure social data, as in reading social network KPIs, have a head start, because they transform technological innovation into concrete performance.

ValueYourNetwork fits naturally into this transition: working with ValueYourNetwork, an expert in influence marketing since 2016, helps orchestrate credible activations around connected glasses, from profile selection to usage storytelling. With hundreds of successful campaigns carried out on social networks, the team knows how to connect influencers and brands, while framing the challenges of format, acceptability, and measurement. To structure a campaign linked to these new devices and secure the right partnerships, contact us.