Brave adds experimental agentic AI browsing feature
Yesterday, Brave announced that they are testing out an agentic AI browsing mode in Brave Nightly - the official testing build of their browser. Users can now test out this feature before it is implemented into regular releases.
AI browsing mode functions as a part of Leo, which is Brave's resident large language learning model (LLM) that claims to not store your data. It functions solely in a dedicated user profile, meaning that it cannot access browsing data, such as cookies, of your main browsing session. This prevents the LLM from accessing confidential information that you may not want to share.
The company admitted that agentic AI has a long tack record of being insecure. They are vulnerable to prompt injection attacks that allow websites to embed secret instructions into a website for the AI model to execute. These models also have the ability to retain confidential information like passwords and ID numbers into memory. Brave themselves even disclosed several vulnerabilities affecting similar AI browsers.
Brave has developed safeguards that can somewhat lessen these security concerns. Firstly, AI browsing mode utilizes models such as Claude Sonnet, which is trained to resist prompt injection attacks. This is complemented with a second "task model" or "alignment checker" that detects deviations from the original prompt.
This “alignment checker” serves as a guardrail: it receives the system prompt, the user prompt, and the task model’s response, and then checks if the task model’s instructions match the user’s intention. This checker does not directly receive raw website content—by firewalling it from untrusted website input, we can reduce (but not eliminate) the risk of subversion by page-level prompt injection
Additional security features include a lack of access to internal websites, non-HTTPS websites, Chrome Web Store extension pages, and websites flagged by safe browsing. Users are also allowed to view proposed information to be stored into memory, allowing to detect potential prompt injection attacks and reject the storage of sensitive information.
There are also company-wide restrictions. According to their privacy policy, Brave promises to never collect your data for training purposes or store data collected from Leo.
Even so, you should never input your login credentials into Leo, much less in Brave's AI browsing mode. If you decide to test it out, be careful about the information you are giving to any cloud-based LLM.
Brave follows other companies experimenting with agentic AI browsing. Startups such as OpenAI and Perplexity have built Chromium‑based browsers, while established players like Google and Mozilla now offer comparable AI browsing modes.
Despite the hype, AI browsing often compromises on security and privacy. Only time will tell whether Brave's interpretation of agentic AI is safe enough for users.
Community Discussion