This documentation describes the integration of MindsDB with Ollama, a tool for running large language models locally. The integration allows Ollama models to be deployed within MindsDB, giving them access to data from your connected data sources.
Before proceeding, ensure the following prerequisites are met:
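- MindsDB installed and running (for example, via Docker).
- Ollama installed locally, with its server running (by default, it listens on port 11434).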
Here are the recommended system specifications:
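- 8 GB of RAM to run 7B models
- 16 GB of RAM to run 13B models
- 32 GB of RAM to run 33B models

These figures follow Ollama's general guidance.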
Create an AI engine from the Ollama handler.
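A minimal statement, following MindsDB's `CREATE ML_ENGINE` syntax:

```sql
CREATE ML_ENGINE ollama_engine
FROM ollama;
```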
Create a model using `ollama_engine` as an engine.
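A sketch of the general form; `my_ollama_model`, the `completion` output column, and the `llama3` model name are illustrative placeholders:

```sql
CREATE MODEL my_ollama_model
PREDICT completion
USING
    engine = 'ollama_engine',                     -- the engine created above
    model_name = 'llama3',                        -- any model pulled into Ollama
    ollama_serve_url = 'http://localhost:11434';  -- default local Ollama server
```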
If you run Ollama and MindsDB in separate Docker containers, use a URL that is reachable from the MindsDB container instead of `localhost`. For example, `ollama_serve_url = 'http://host.docker.internal:11434'`.
You can find available models in the Ollama model library at https://ollama.com/library.
The following usage examples utilize `ollama_engine` to create a model with the `CREATE MODEL` statement.
Deploy and use the `llama3` model.
First, install Ollama and download the model locally by executing `ollama pull llama3`.
Now deploy this model within MindsDB.
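A sketch, assuming Ollama serves on its default local port:

```sql
CREATE MODEL llama3_model
PREDICT completion
USING
    engine = 'ollama_engine',
    model_name = 'llama3',
    ollama_serve_url = 'http://localhost:11434';
```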
Models can be run in either the `generate` or the `embedding` mode. The `generate` mode is used for text generation, while the `embedding` mode is used to generate embeddings for text.
However, these modes can only be used with models that support them. For example, the `moondream` model supports both modes.
If the mode is not specified, a model that supports multiple modes defaults to the `generate` mode, while a model that supports only one mode runs in that mode.
To specify the mode, use the `mode` parameter in the `CREATE MODEL` statement, for example, `mode = 'embedding'`.
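A sketch of an embedding deployment; the `moondream_embedding` model name and the `embeddings` output column are illustrative:

```sql
CREATE MODEL moondream_embedding
PREDICT embeddings
USING
    engine = 'ollama_engine',
    model_name = 'moondream',    -- pulled beforehand with: ollama pull moondream
    mode = 'embedding',
    ollama_serve_url = 'http://localhost:11434';
```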
Query the model to get predictions.
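A minimal query against the `llama3_model` created above; the input text is arbitrary:

```sql
SELECT text, completion
FROM llama3_model
WHERE text = 'Hi, how are you?';
```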
The output contains the completion generated by the model for the given input text.
You can override the prompt message as below:
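A sketch, assuming the handler honors MindsDB's per-query `prompt_template` override; the template text is illustrative:

```sql
SELECT text, completion
FROM llama3_model
WHERE text = 'Hi, how are you?'
USING
    prompt_template = 'Answer using exactly one word or phrase: {{text}}';
```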
The output then contains a completion shaped by the custom prompt.
Next Steps
Go to the Use Cases section to see more examples.