Google is integrating AI capabilities into its search engine through a feature called Search Live, which lets users hold voice conversations with its AI chatbot. Currently being tested with Google Labs users in the United States, the feature does not yet support camera sharing, though the company plans to add that capability in the coming months.
Search Live lets users interact with a tailored version of Gemini while it searches the web in real time. Available in the Google app for both Android and iOS, the feature will eventually also let users point their camera at an object and ask spoken questions about it once camera sharing arrives.
To use Search Live in AI Mode, users first opt into the experiment via Google Labs. Once activated, they can ask questions aloud—such as, “What are some tips for preventing a linen dress from wrinkling in a suitcase?”—and receive audio replies from the AI chatbot. Follow-up questions can also be asked aloud, for instance, “What should I do if it still wrinkles?” Throughout the conversation, the AI also surfaces relevant links for further reading.
Other companies in the AI sector are developing similar voice capabilities. Last year, OpenAI introduced an Advanced Voice Mode for ChatGPT, while Anthropic launched a voice mode for its Claude app in May. Apple is also working on an updated version of Siri featuring large language model capabilities, though its release has been delayed due to reliability concerns, as noted by Craig Federighi, Apple’s senior vice president of software engineering.
Google says Search Live works in the background, so users can keep talking with the chatbot while using other apps. A “transcript” button shows a text version of the responses for those who prefer to read, and also lets users type their replies instead. Conversations held through Search Live are saved in the user’s AI Mode history for future reference.