Creating a Local AI-Powered Browser Assistant with Brave's Leo and Ollama
Integrating Brave's Leo AI with local Large Language Models (LLMs) using Ollama turns your browser's assistant into a privacy-focused tool. This setup keeps your data securely on your device, removes the dependency on external servers, and, on capable hardware, can deliver faster responses than a cloud service.
Prerequisites:
Brave Browser: Ensure you have an up-to-date desktop version of Brave installed (Bring Your Own Model is a desktop feature).
Ollama: A lightweight runtime for downloading and running LLMs locally on your machine.
Step 1: Install Ollama
Download and Install:
Visit the Ollama website (ollama.com) and download the installer suitable for your operating system.
Run the installer and follow the on-screen instructions to complete the installation.
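On Linux, for example, the installation can be done with Ollama's official install script (macOS and Windows users should use the downloadable installer instead):
curl -fsSL https://ollama.com/install.sh | sh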
Verify Installation:
Open your terminal or command prompt.
Type ollama and press Enter. If installed correctly, you'll see the Ollama command-line interface (CLI) options.
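You can also confirm the installed version and see which models are already available locally:
ollama --version
ollama list
At this point the model list will be empty; the next step fixes that.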
Step 2: Download a Local LLM Model
Choose a Model:
- Ollama supports various models. For this guide, we'll use the deepseek-r1:7b model.
Download the Model:
In the terminal, execute:
ollama pull deepseek-r1:7b
This command will download the specified model to your local machine.
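Once the download finishes, you can verify the model works before wiring it into Brave by starting an interactive session:
ollama run deepseek-r1:7b
Type a prompt at the >>> prompt to chat with the model, and enter /bye to exit.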
Step 3: Configure Brave's Leo AI to Use the Local Model
Access Leo Settings:
Open the Brave browser.
Navigate to Settings > Leo, or open brave://settings/leo directly in the address bar.
Enable 'Bring Your Own Model' (BYOM):
Scroll to the 'Bring your own model' section.
Add a new model entry to connect Leo to your local setup.
Set Up the Local Model:
In the BYOM settings, enter a label for the assistant, the model request name, and the server endpoint where Ollama is listening.
By default, Ollama serves an OpenAI-compatible API on localhost port 11434, so no API key is needed for a local setup.
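For example, a typical configuration for the model downloaded above looks like this (assuming Ollama's default port; field names may differ slightly between Brave versions):
Label: DeepSeek R1 (local)
Model request name: deepseek-r1:7b
Server endpoint: http://localhost:11434/v1/chat/completions
API key: (leave blank)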
Step 4: Test the Integration
Interact with Leo:
Click on the Leo icon in the Brave browser.
Input a query or command to test the assistant.
Verify Functionality:
- Ensure that Leo responds appropriately, indicating successful integration with the local LLM.
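If Leo doesn't respond, a quick way to isolate the problem is to query Ollama's OpenAI-compatible endpoint directly from the terminal. A JSON reply here means Ollama is healthy and the issue lies in the Leo configuration:
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "deepseek-r1:7b", "messages": [{"role": "user", "content": "Say hello"}]}'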
Benefits of This Integration:
Enhanced Privacy: All data processing occurs locally, ensuring your information isn't transmitted to external servers.
Improved Performance: With no network round-trip, responses can arrive faster than from cloud-based services, provided your hardware can run the model comfortably.
Cost Efficiency: Eliminates the need for subscriptions to cloud AI services.
By following these steps, you've successfully integrated Brave's Leo AI with a local LLM using Ollama, creating a powerful, private, and efficient browser assistant tailored to your needs.