Google Unveils Gemini 2.5 Model with Human-Like Web Interaction Abilities

New Delhi: Google has launched its new Gemini 2.5 Computer Use model, an artificial intelligence system designed to interact with web interfaces much like a human user. The model, built on the Gemini 2.5 Pro architecture, is now accessible to developers through Google AI Studio and Vertex AI.

The system can interpret detailed user instructions and perform digital actions such as clicking, scrolling, typing, and navigating dropdown menus within a browser. It currently supports 13 action types, and Google reports that the model delivers higher accuracy than comparable AI systems while operating at lower latency for developers.
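For readers curious what those action types amount to in practice, the sketch below illustrates the kind of propose-execute-observe loop a browser-driving model implies: the model proposes an action, a client executes it in the browser, and the resulting observation (typically with a fresh screenshot) is sent back before the next step. The action names, the dictionary-based schema, and the stub handlers here are illustrative assumptions for this article, not Google's published API.

```python
# Illustrative sketch (not Google's published API) of a
# propose-execute-observe loop for a browser-driving AI model.
# Action names and the dict-based schema below are assumptions.

from typing import Any, Callable, Dict


def click(x: int, y: int) -> str:
    """Placeholder for a browser click at pixel coordinates."""
    return f"clicked at ({x}, {y})"


def type_text(text: str) -> str:
    """Placeholder for typing text into the focused element."""
    return f"typed {text!r}"


def scroll(delta_y: int) -> str:
    """Placeholder for scrolling the page vertically."""
    return f"scrolled by {delta_y}px"


# Registry mapping action names (as a model might emit them)
# to local handlers that would actually drive the browser.
ACTIONS: Dict[str, Callable[..., str]] = {
    "click": click,
    "type_text": type_text,
    "scroll": scroll,
}


def run_agent_step(proposed_action: Dict[str, Any]) -> str:
    """Execute one model-proposed action and return an observation
    that a client would feed back to the model on the next turn."""
    handler = ACTIONS.get(proposed_action["name"])
    if handler is None:
        return f"unsupported action: {proposed_action['name']}"
    return handler(**proposed_action.get("args", {}))


if __name__ == "__main__":
    # Simulated sequence of actions such a model might propose while
    # working through a web form; a real client would also return a
    # screenshot to the model after each step.
    plan = [
        {"name": "click", "args": {"x": 320, "y": 140}},
        {"name": "type_text", "args": {"text": "quarterly report"}},
        {"name": "scroll", "args": {"delta_y": 600}},
    ]
    for action in plan:
        print(run_agent_step(action))
```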

Demonstrations shared by Google show the model autonomously organising digital sticky notes on a collaborative board, classifying them into categories based on natural language prompts. Although the demonstrations were accelerated for presentation, they offer a glimpse into how the AI could handle user interface (UI) tasks without manual intervention.

At this stage, the model cannot control full desktop operating systems but is already being used within Google for UI testing and workflow automation. The company says these internal applications have streamlined software development and interface validation processes.

Several of Google’s ongoing projects, including AI Mode in Search, Firebase Testing Agent, and Project Mariner, have already integrated components of Gemini 2.5. These initiatives are part of Google’s broader effort to build more capable AI agents that can execute complex, real-world tasks with minimal human direction.

The launch of the Gemini 2.5 Computer Use model marks another step in Google’s long-term push toward autonomous, reliable AI systems suited to both enterprise and developer environments.
