AI is still an evolving technology, advancing and spreading at an unprecedented pace. The more we engage with users and understand how they interact with AI, the more we realize how transformative this revolution is. However, we also recognize the need to proceed with caution. Our goal is to fully leverage the capabilities of AI Copilots to deliver rich, useful experiences, while ensuring safety and responsibility. We aim to strike a balance between innovation and accountability.
This is why we are introducing Copilot Labs. Before rolling out our most advanced tools to all users, we are testing them with a smaller group to gather feedback, learn, and improve the product—making it both better and safer. Copilot Labs is available to Copilot Pro users, offering a sneak peek into upcoming “work-in-progress” projects.
The first feature in Copilot Labs is Think Deeper, which enhances Copilot’s ability to reason through complex problems. By using cutting-edge reasoning models, Think Deeper helps with everything from solving difficult math problems to analyzing the costs of home projects. It may take a little more time to respond, but it provides detailed, step-by-step answers to tough questions. Think Deeper is rolling out today to a limited number of Copilot Pro users in Australia, Canada, New Zealand, the UK, and the US.
Next on the horizon is Copilot Vision.
A limitation of Copilot to date has been its inability to understand what users are doing or viewing. While language is a powerful tool, much of the context around a task goes beyond words. Copilot Vision addresses this by allowing users to let Copilot “see” what they see. The feature is available in Microsoft Edge, currently as a limited trial through Copilot Labs.
With Copilot Vision, Copilot can understand the webpage you’re viewing, answer questions about its content, suggest next steps, and assist with tasks—all through natural language interaction. It brings a new level of ease and practicality to the Copilot experience.
In developing this feature, we’ve prioritized the interests of both users and creators. Copilot Vision is entirely opt-in: it activates only when the user chooses, and users retain complete control over when, how, and whether they engage it. In this preview, none of the content—whether audio, images, text, or conversations with Copilot—is stored or used for training. Once the feature is closed, all data is permanently discarded.
For now, we are restricting where Vision can be used: it is blocked from interacting with paywalled and sensitive content, it works only on a pre-approved list of popular websites, and it respects each site’s machine-readable AI controls. Over time, we plan to expand access, always with a focus on safety and responsibility. Copilot Vision is also designed to drive traffic to websites; when it encounters a paywall, it simply won’t comment. It is built to answer questions rather than take actions directly on the web.
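The post doesn’t spell out which machine-readable controls Vision honors; robots.txt is one widely used example of such a control. The following is a minimal Python sketch, for illustration only, of how a client could consult a site’s robots.txt before enabling a page-aware feature. The user-agent token and function name here are hypothetical, not Microsoft’s actual implementation.

```python
# Illustrative sketch: robots.txt is one common machine-readable control a
# page-aware assistant could consult before engaging with a site.
# The user-agent token below is hypothetical, not Copilot Vision's actual one.
from urllib.robotparser import RobotFileParser


def vision_allowed(page_url: str, robots_url: str,
                   agent: str = "ExampleVisionBot") -> bool:
    """Return True if the (hypothetical) agent may engage with page_url."""
    parser = RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # fetch and parse the site's robots.txt
    return parser.can_fetch(agent, page_url)


# Example: check a page before offering the page-aware feature there.
if vision_allowed("https://example.com/article",
                  "https://example.com/robots.txt"):
    print("Site permits this agent; the feature may be offered.")
else:
    print("Site opts out; the feature stays off for this page.")
```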
Check out the detailed Q&A for more information on the safeguards we’ve built into the system. In developing Copilot Vision, and Copilot Labs more broadly, we’ve worked to balance functionality with responsibility for both users and creators. We’ll be paying close attention to your feedback on this experimental feature. Initially, only a select group of Pro users in the United States will have access. We hope you find it valuable and are eager to hear your thoughts!