Ollama Cheatsheet

Compiled some time ago for future use...


Here is a list, with examples, of the most useful Ollama commands (an Ollama commands cheatsheet) that I compiled some time ago. Hopefully it will be useful to you.


This Ollama cheatsheet focuses on CLI commands, model management, and customization.

Installation

  • Option 1: Download from Website
    • Visit ollama.com and download the installer for your operating system (Mac, Linux, or Windows).
  • Option 2: Install via Command Line
    • For Mac and Linux users, use the command:
      curl -fsSL https://ollama.com/install.sh | sh
      
    • Follow the on-screen instructions and enter your password if prompted.

System Requirements

  • Operating System: Mac, Linux, or Windows
  • Memory (RAM): 8GB minimum, 16GB or more recommended
  • Storage: At least ~10GB free space
  • Processor: a relatively modern CPU (released within the last 5 years).

Basic Ollama CLI Commands

Command                       Description
ollama serve                  Starts Ollama on your local system.
ollama create <new_model>     Creates a new model from an existing one for customization or training.
ollama show <model>           Displays details about a specific model, such as its configuration and release date.
ollama run <model>            Runs the specified model, making it ready for interaction.
ollama pull <model>           Downloads the specified model to your system.
ollama list                   Lists all the downloaded models.
ollama ps                     Shows the currently running models.
ollama stop <model>           Stops the specified running model.
ollama rm <model>             Removes the specified model from your system.
ollama help                   Provides help about any command.
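These commands can also be driven from scripts. A minimal Python sketch (the `ollama_cmd` helper is my own illustration, not part of Ollama) that builds CLI argument vectors and, if the binary is installed, runs `ollama list`:

```python
import shutil
import subprocess

def ollama_cmd(action: str, *args: str) -> list[str]:
    """Build an argument vector for the ollama CLI, e.g. ['ollama', 'pull', 'mistral']."""
    return ["ollama", action, *args]

# Only invoke the CLI if it is actually on the PATH.
if shutil.which("ollama"):
    # List downloaded models; check=True raises if the command fails.
    result = subprocess.run(ollama_cmd("list"), capture_output=True, text=True, check=True)
    print(result.stdout)
else:
    print("ollama binary not found; install it first.")
```

The same pattern works for `pull`, `run`, `rm`, and the rest of the table above.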

Model Management

  • Download a Model:

    ollama pull mistral-nemo:12b-instruct-2407-q6_K
    

    This command downloads the specified model to your system.

  • Run a Model:

    ollama run qwen2.5:32b-instruct-q3_K_S
    

    This command starts the specified model and opens an interactive REPL for interaction.

  • List Models:

    ollama list
    

    This command lists all the models that have been downloaded to your system.

  • Stop a Model:

    ollama stop llama3.1:8b-instruct-q8_0
    

    This command stops the specified running model.

Customizing Models

  • Set System Prompt: Inside the Ollama REPL, you can set a system prompt to customize the model’s behavior:

    >>> /set system For all questions asked answer in plain English avoiding technical jargon as much as possible
    >>> /save ipe
    >>> /bye
    

    Then, run the customized model:

    ollama run ipe
    

    This sets a system prompt and saves the model for future use.

  • Create Custom Model File: Create a model file (conventionally named Modelfile, though any name works with -f; e.g., custom_model.txt) with the following structure:

    FROM llama3.1
    SYSTEM "Your custom instructions here"
    

    Then, run:

    ollama create mymodel -f custom_model.txt
    ollama run mymodel
    

    This creates a customized model based on the instructions in the file.
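The model-file step can be scripted as well. A small sketch (the `write_modelfile` helper and file name are illustrative, not part of Ollama's tooling) that generates the two-line file for `ollama create -f`:

```python
import os
import tempfile
from pathlib import Path

def write_modelfile(path: str, base: str, system_prompt: str) -> str:
    """Write a minimal Ollama model file: FROM picks the base model,
    SYSTEM bakes in the custom instructions."""
    text = f'FROM {base}\nSYSTEM "{system_prompt}"\n'
    Path(path).write_text(text)
    return text

# Written to a temp dir; pass this path to `ollama create mymodel -f <path>`.
path = os.path.join(tempfile.mkdtemp(), "custom_model.txt")
content = write_modelfile(path, "llama3.1", "Answer in plain English, avoiding jargon.")
print(content)
```

After running this, `ollama create mymodel -f <path>` followed by `ollama run mymodel` behaves exactly as in the manual steps above.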

Using Ollama with Files

  • Summarize Text from a File:

    ollama run llama3.2 "Summarize the content of this file in 50 words." < input.txt
    

    This command summarizes the content of input.txt using the specified model.

  • Log Model Responses to a File:

    ollama run llama3.2 "Tell me about renewable energy." > output.txt
    

    This command saves the model’s response to output.txt.
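The same stdin/stdout redirections can be expressed in Python. A sketch (the `summarize_file` helper and demo file are mine, not part of Ollama) assuming the `ollama` binary is on your PATH:

```python
import shutil
import subprocess
import tempfile
from pathlib import Path

def summarize_file(model: str, path: str, instruction: str) -> str:
    """Rough equivalent of `ollama run MODEL "INSTRUCTION" < path`:
    the file body goes to the model on stdin, the reply comes back on stdout."""
    body = Path(path).read_text()
    result = subprocess.run(
        ["ollama", "run", model, instruction],
        input=body, capture_output=True, text=True, check=True,
    )
    return result.stdout

# Hypothetical demo file, written to a temp dir to keep the sketch self-contained.
demo = Path(tempfile.mkdtemp()) / "input.txt"
demo.write_text("Renewable energy comes from naturally replenished sources.")

if shutil.which("ollama"):
    summary = summarize_file("llama3.2", str(demo), "Summarize this file in 50 words.")
    (demo.parent / "output.txt").write_text(summary)  # same effect as `> output.txt`
else:
    print("ollama binary not found; skipping the call.")
```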

Common Use Cases

  • Text Generation:

    • Summarizing a large text file:
      ollama run llama3.2 "Summarize the following text:" < long-document.txt
      
    • Generating content:
      ollama run llama3.2 "Write a short article on the benefits of using AI in healthcare." > article.txt
      
    • Answering specific questions:
      ollama run llama3.2 "What are the latest trends in AI, and how will they affect healthcare?"
      
  • Data Processing and Analysis:

    • Classifying text into positive, negative, or neutral sentiment:
      ollama run llama3.2 "Analyze the sentiment of this customer review: 'The product is fantastic, but delivery was slow.'"
      
    • Categorizing text into predefined categories: Use similar commands to classify or categorize text based on predefined criteria.
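For classification tasks like the sentiment example, it helps to pin the model to a fixed label set and normalize its free-form reply afterwards. A sketch (both helper functions are my own illustration; feed the prompt to `ollama run llama3.2` or the Python library):

```python
def sentiment_prompt(review: str) -> str:
    """Build a classification prompt that restricts the model to three labels."""
    return (
        "Classify the sentiment of this customer review as exactly one word: "
        f"positive, negative, or neutral.\n\nReview: {review}"
    )

def normalize_label(reply: str) -> str:
    """Map a free-form model reply onto one of the three expected labels."""
    lowered = reply.strip().lower()
    for label in ("positive", "negative", "neutral"):
        if label in lowered:
            return label
    return "neutral"  # fall back when the model answers off-script

print(sentiment_prompt("The product is fantastic, but delivery was slow."))
print(normalize_label("Sentiment: Positive."))  # -> positive
```

The same pattern generalizes to any predefined category set: list the allowed labels in the prompt, then normalize the reply.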

Using Ollama with Python

  • Install Ollama Python Library:
    pip install ollama
    
  • Generate Text Using Python:
    import ollama
    
    response = ollama.generate(model='gemma:2b', prompt='what is a qubit?')
    print(response['response'])
    
    This code snippet generates text using the specified model and prompt.
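Besides the Python library, the local Ollama server (started with ollama serve) also exposes a REST API on port 11434. A stdlib-only sketch hitting the /api/generate endpoint (the model name is just an example; use any model you have pulled):

```python
import json
import urllib.request

# One-shot generation via the local server's REST API.
payload = {
    "model": "gemma:2b",           # example model; substitute one you have pulled
    "prompt": "what is a qubit?",
    "stream": False,               # return a single JSON object instead of chunks
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

try:
    with urllib.request.urlopen(req, timeout=60) as resp:
        body = json.loads(resp.read())
        print(body["response"])
except OSError:
    print("Could not reach the Ollama server; is `ollama serve` running?")
```

This is handy when you want to call Ollama from a language or environment where the official library is not available.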