
Running your own code copilot

Welcome to the latest episode in the #SimplifyingLLMs series! I’m here to guide you through the nuts and bolts of setting up and running a Code Copilot-like model on your personal computer.

What is Ollama?

Ollama is a streamlined tool for running open-source LLMs locally, including Mistral and Llama 2. It bundles model weights, configuration, and data into a single package defined by a Modelfile.
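To make this concrete, here is a minimal Modelfile sketch; the base model, parameter value, and system prompt are illustrative choices rather than anything the tool requires:

    FROM codellama                               # base model to build on
    PARAMETER temperature 0.7                    # sampling temperature (illustrative value)
    SYSTEM "You are a concise coding assistant." # default system prompt

You would then build and run it with ollama create my-copilot -f Modelfile followed by ollama run my-copilot (my-copilot is just an example name).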

Ollama supports a variety of LLMs, including Llama 2, uncensored Llama, CodeLlama, Falcon, Mistral, Vicuna, WizardCoder, and Wizard uncensored.
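Once Ollama is installed (covered below), pulling and running any of these models is a single command; using CodeLlama as an example:

    ollama run codellama

The first run downloads the model weights; after that, the same command drops you straight into an interactive prompt.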

Installation and Setup of Ollama

  1. Download Ollama from the official website.
  2. After downloading, installation is straightforward and works like any other software install. On macOS and Linux, you can install Ollama with a single command: curl https://ollama.ai/install.sh | sh.
  3. Once installed, Ollama exposes a local API that serves the model, letting you interact with it directly from your machine (see the example after this list).
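As a quick sketch of that API, assuming the default port 11434 and that you have already pulled CodeLlama, you can send a prompt with curl (the prompt text is just an example):

    curl http://localhost:11434/api/generate -d '{
      "model": "codellama",
      "prompt": "Write a Python function that reverses a string.",
      "stream": false
    }'

The response comes back as JSON containing the generated text, so any editor plugin or script that can make HTTP requests can use your local model as a copilot backend.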