Apple has quietly made one of its biggest artificial intelligence moves in years, acquiring Israeli startup Q.ai in a deal valued at roughly $1.6 billion to $2 billion, according to people familiar with the matter. The deal sets the stage for a new way users may speak to their devices without making a sound.
The purchase underscores Apple’s push to bring advanced AI features directly onto its devices, especially wearables, at a time when rivals are racing to define the future of human-computer interaction.
Apple confirms deal but keeps details close
Apple confirmed it has acquired Q.ai but declined to disclose financial terms or outline specific product plans. The company typically avoids public discussion of acquisitions unless they directly affect customers.
People familiar with the transaction said the deal closed quietly in recent weeks and values the startup at between $1.6 billion and $2 billion. That places it among Apple’s larger acquisitions to date.
Q.ai was backed by a roster of well-known venture capital firms, including Matter Venture Partners, Kleiner Perkins, Spark Capital, Exor, and GV (formerly Google Ventures). Those investors are expected to see one of the most significant exits in Israel’s recent AI startup history.
The acquisition signals Apple’s belief that the next leap in AI will come from how people give commands, not just what devices can compute.

Q.ai’s silent speech and audio AI explained
Founded in 2022, Q.ai focuses on artificial intelligence for audio and imaging, with a narrow but ambitious goal: helping devices understand speech that is whispered, mouthed, or not spoken aloud at all.
Apple said the startup developed new uses of machine learning that allow devices to interpret subtle voice input and perform better in noisy or complex sound environments.
Patent filings linked to Q.ai describe systems that analyze tiny movements in facial skin and muscles to detect words formed silently or spoken at extremely low volume. The same systems can also estimate signals such as heart rate, breathing patterns, and emotional state by observing micro-expressions and physiological cues.
In simple terms, the technology aims to let users interact with devices without talking out loud, which could offer more privacy and usability in public spaces.
Potential uses described in filings and technical documents include:
- Silent commands for earbuds, headphones, and smart glasses
- Voice input that works in loud places like streets or trains
- More natural control of devices when speech is not practical
This approach differs from traditional voice assistants that rely on clear audio captured by microphones.
Why Apple wants this technology now
Apple has steadily expanded its AI ambitions over the past year, even as it positions itself as a company that values privacy and on-device processing. The acquisition of Q.ai fits directly into that strategy.
Competitors such as Meta and Google are investing heavily in smart glasses and mixed reality devices that depend on new forms of input beyond touchscreens. OpenAI and others are also experimenting with voice-driven interfaces that feel more natural and conversational.
Apple has already added AI-powered features to its AirPods, including real-time language translation and adaptive audio that adjusts to surroundings. Silent or near-silent speech recognition could be the next step.
By reducing the need for spoken commands, Apple could make AI feel more personal, private, and always available.
The company recently confirmed it will use Google’s Gemini models to support parts of its broader AI services under the Apple Intelligence brand. Still, Apple has emphasized that much of its AI work will run directly on devices rather than in the cloud.
From Tel Aviv to Cupertino
Q.ai was founded in Tel Aviv by Aviad Maizels, Yonatan Wexler, and Avi Barliya. The startup grew quickly, reaching about 100 employees in just over two years.
As part of the deal, those employees, including the founders, are expected to join Apple. The company has a long history of acquiring Israeli startups and integrating their teams into its global engineering organization.
Maizels is a familiar name inside Apple. He previously founded PrimeSense, a three-dimensional sensing company that Apple acquired in 2013. PrimeSense technology later played a key role in Apple’s shift away from fingerprint sensors toward Face ID facial recognition on the iPhone.
That history adds weight to the idea that Q.ai’s work could become foundational rather than experimental.
How this could shape future Apple devices
Apple has not said how or when Q.ai’s technology will appear in consumer products. Still, industry analysts point to several areas where it could have an immediate impact.
Wearables are the most obvious target. Earbuds, headphones, and future smart glasses would all benefit from hands-free input that works quietly and reliably.
Below is a simple look at where Q.ai’s technology could fit inside Apple’s product lineup:
| Device type | Possible use of Q.ai technology |
|---|---|
| AirPods | Silent commands and better noise handling |
| Smart glasses | Non-verbal input and private interaction |
| Headphones | Whisper-level voice control in public spaces |
| Future wearables | Health signals tied to facial micro-movements |
These uses align with Apple’s focus on health, accessibility, and seamless user experience.
Privacy, ethics, and open questions
The idea of devices reading facial micro-movements and estimating emotional or physical states raises questions about privacy and consent. Apple has built much of its brand on limiting data collection and keeping sensitive information on device.
So far, Apple has not addressed how it would handle these concerns if the technology ships in products. The company typically emphasizes user control and transparency when rolling out new sensing capabilities.
How Apple balances powerful AI sensing with user trust will be critical to the success of this technology.
For now, the company appears to be in an exploration phase, integrating Q.ai’s research into its broader AI and hardware roadmap.
The acquisition also highlights a shift in the AI race. Rather than competing only on large language models or cloud services, Apple is betting on new input methods that change how people interact with machines in daily life.
If successful, silent speech could become as normal as tapping a screen or saying “Hey Siri” once was.
As Apple absorbs Q.ai’s team and technology, the rest of the industry will be watching closely. This deal may not grab headlines like a flashy product launch, but it could shape how millions of people communicate with their devices in the years ahead.
What do you think about a future where you can talk to your devices without speaking out loud? Share your view and pass this story along to friends on social media to keep the conversation going.