Welcome to the latest episode in the #SimplifyingLLMs series! In this one, I'll guide you through the nuts and bolts of setting up and running a Code Copilot-like model on your personal computer.
What is Ollama?
Ollama is a streamlined tool for running open-source LLMs locally, including Mistral and Llama 2. It bundles model weights, configuration, and data into a single package defined by a Modelfile.
Ollama supports a variety of LLMs, including LLaMA 2, uncensored LLaMA variants, CodeLLaMA, Falcon, Mistral, Vicuna, WizardCoder, and Wizard Uncensored.
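To give a flavor of what a Modelfile looks like, here is a minimal sketch that layers a coding-assistant persona on top of CodeLLaMA; the lowered temperature and the system prompt are illustrative choices, not required settings:

# Modelfile (illustrative): build a coding assistant on top of codellama
FROM codellama
# Lower temperature for more deterministic code suggestions
PARAMETER temperature 0.2
SYSTEM """You are a concise coding assistant. Prefer working code over prose."""

You would build and run this with ollama create code-helper -f Modelfile followed by ollama run code-helper (code-helper is just an example name).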
Installation and Setup of Ollama
- Download Ollama from the official website (https://ollama.ai).
- After downloading, the installation process is straightforward, much like any other software install. On macOS you run the downloaded installer; on Linux you can install Ollama with a single command:
curl https://ollama.ai/install.sh | sh
- Once installed, Ollama runs a local server that exposes the model through a REST API (by default at http://localhost:11434), so you can interact with the model directly from your own machine; quick examples follow below.
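For example, pulling and chatting with a code model from the terminal takes two commands (codellama is one of several models available in the Ollama library):

ollama pull codellama   # one-time download of the model weights
ollama run codellama    # start an interactive chat session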
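And once the server is running, you can call the API directly. This is a minimal sketch against the standard /api/generate endpoint; the prompt text is just an example:

curl http://localhost:11434/api/generate -d '{
  "model": "codellama",
  "prompt": "Write a Python function that reverses a string.",
  "stream": false
}'

With "stream": false the server returns a single JSON object whose response field holds the model's answer; omit it to receive a stream of partial responses instead.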