This documentation describes the integration of MindsDB with Ollama, a tool that enables local deployment of large language models. The integration lets you deploy Ollama models within MindsDB, giving them access to data from your connected data sources.

Prerequisites

Before proceeding, ensure the following prerequisites are met:

  1. Install MindsDB locally via Docker or Docker Desktop.
  2. To use Ollama within MindsDB, install the required dependencies following these instructions.
  3. Follow these instructions to download Ollama and run models locally.

Here are the recommended system specifications:

  • A working Ollama installation, as in point 3.
  • For 7B models, at least 8GB RAM is recommended.
  • For 13B models, at least 16GB RAM is recommended.
  • For 70B models, at least 64GB RAM is recommended.

Setup

Create an AI engine from the Ollama handler.

CREATE ML_ENGINE ollama_engine
FROM ollama;
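
To confirm that the engine was registered, list the available ML engines; ollama_engine should appear in the output.

SHOW ML_ENGINES;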

Create a model using ollama_engine as an engine.

CREATE MODEL ollama_model
PREDICT completion
USING
   engine = 'ollama_engine',   -- engine name as created via CREATE ML_ENGINE
   model_name = 'model-name',  -- model run with 'ollama run model-name'
   ollama_serve_url = 'http://localhost:11434';
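
Model creation runs asynchronously, so check that the model has finished initializing before querying it. DESCRIBE is standard MindsDB SQL; the STATUS column should read complete before you query the model.

DESCRIBE ollama_model;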

If you run Ollama and MindsDB in separate Docker containers, localhost inside the MindsDB container does not point at the Ollama container. Instead, set ollama_serve_url to an address the MindsDB container can use to reach Ollama, for example ollama_serve_url = 'http://host.docker.internal:11434'.
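
For example, the statement above becomes the following for this setup; only ollama_serve_url changes.

CREATE MODEL ollama_model
PREDICT completion
USING
   engine = 'ollama_engine',
   model_name = 'model-name',
   ollama_serve_url = 'http://host.docker.internal:11434';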

You can find available models in the Ollama model library.

Usage

The following usage examples use ollama_engine to create a model with the CREATE MODEL statement.

Deploy and use the llama2 model.

First, download Ollama and run the model locally by executing ollama run llama2.

Now deploy this model within MindsDB.

CREATE MODEL llama2_model
PREDICT completion
USING
   engine = 'ollama_engine',
   model_name = 'llama2';

Query the model to get predictions.

SELECT text, completion
FROM llama2_model
WHERE text = 'Hello';

Here is the output:

+-------+------------+
| text  | completion |
+-------+------------+
| Hello | Hello!     |
+-------+------------+
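
You can also generate completions for a whole table of inputs by joining the table with the model, following the standard MindsDB batch-prediction syntax. Here, my_db.questions is a hypothetical connected data source with a text column.

SELECT t.text, m.completion
FROM my_db.questions AS t
JOIN llama2_model AS m;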

You can override the prompt message as follows:

SELECT text, completion
FROM llama2_model
WHERE text = 'Hello'
USING 
   prompt_template = 'Answer using exactly five words: {{text}}:';

Here is the output:

+-------+------------------------------------+
| text  | completion                         |
+-------+------------------------------------+
| Hello | Hello! *smiles* How are you today? |
+-------+------------------------------------+
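
Instead of overriding the prompt at query time, you can set a default prompt template when the model is created. The sketch below assumes an illustrative model name, llama2_assistant; queries against it then use the stored template unless a per-query prompt_template is supplied.

CREATE MODEL llama2_assistant
PREDICT completion
USING
   engine = 'ollama_engine',
   model_name = 'llama2',
   prompt_template = 'Answer using exactly five words: {{text}}';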

Next Steps

Go to the Use Cases section to see more examples.