The LLM() Function
MindsDB provides the LLM() function that lets users incorporate LLM-generated output directly into their data queries.
Prerequisites
The LLM() function requires a large language model, which can be defined in the MindsDB configuration under default_llm or via environment variables, as shown below.
To use the LLM() function with MindsDB, choose one of the available model providers and define the corresponding environment variables. Note that the values stored in the environment variables are specific to each provider.
OpenAI
Here are the environment variables for the OpenAI provider:
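As a minimal sketch, a setup could look like this. LLM_FUNCTION_MODEL_NAME is documented later on this page; LLM_FUNCTION_API_KEY is an assumed name following the same LLM_FUNCTION_* pattern, so verify both against the MindsDB configuration reference.

```bash
# Assumed variable names following the LLM_FUNCTION_* pattern; values are illustrative.
export LLM_FUNCTION_MODEL_NAME='gpt-4o'   # OpenAI model to use
export LLM_FUNCTION_API_KEY='sk-...'      # your OpenAI API key
```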
Anthropic
Here are the environment variables for the Anthropic provider:
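A hypothetical Anthropic setup, under the same naming assumption:

```bash
# Assumed variable names; values are illustrative.
export LLM_FUNCTION_MODEL_NAME='claude-3-5-sonnet-20240620'  # Anthropic model to use
export LLM_FUNCTION_API_KEY='...'                            # your Anthropic API key
```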
Anyscale
Here are the environment variables for the Anyscale provider:
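A hypothetical Anyscale setup; the base-URL variable is likewise an assumption:

```bash
# Assumed variable names; values are illustrative.
export LLM_FUNCTION_MODEL_NAME='meta-llama/Meta-Llama-3-8B-Instruct'  # model served by Anyscale
export LLM_FUNCTION_API_KEY='...'                                     # your Anyscale API key
export LLM_FUNCTION_BASE_URL='https://api.endpoints.anyscale.com/v1'  # Anyscale endpoint
```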
LiteLLM
Here are the environment variables for the LiteLLM provider:
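A hypothetical LiteLLM setup; LiteLLM routes the request to whichever provider the model identifier refers to:

```bash
# Assumed variable names; values are illustrative.
export LLM_FUNCTION_MODEL_NAME='gpt-4o'               # any model identifier LiteLLM can route
export LLM_FUNCTION_API_KEY='...'                     # key for the underlying provider
export LLM_FUNCTION_BASE_URL='http://localhost:4000'  # LiteLLM proxy URL, if you run one
```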
Ollama
Here are the environment variables for the Ollama provider:
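A hypothetical Ollama setup; since Ollama runs locally, a base URL takes the place of an API key:

```bash
# Assumed variable names; values are illustrative.
export LLM_FUNCTION_MODEL_NAME='llama3'                # model pulled into Ollama
export LLM_FUNCTION_BASE_URL='http://localhost:11434'  # default local Ollama server
```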
Nvidia NIMs
Here are the environment variables for the Nvidia NIMs provider:
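A hypothetical Nvidia NIMs setup; NIM containers expose an OpenAI-compatible endpoint:

```bash
# Assumed variable names; values are illustrative.
export LLM_FUNCTION_MODEL_NAME='meta/llama3-8b-instruct'  # model served by the NIM
export LLM_FUNCTION_BASE_URL='http://localhost:8000/v1'   # NIM endpoint
```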
MindsDB
Here are the environment variables for the MindsDB provider:
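Both variable names below appear in the paragraph that follows; the values are illustrative:

```bash
export LLM_FUNCTION_MODEL_NAME='my_model'      # model created in a MindsDB project
export LLM_FUNCTION_PROJECT_NAME='my_project'  # project that contains the model
```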
To use MindsDB as a provider, create a model in a project within MindsDB, then set the model name in the LLM_FUNCTION_MODEL_NAME environment variable and the project name in the LLM_FUNCTION_PROJECT_NAME environment variable.
Usage
You can use the LLM() function to simply ask a question and get an answer.
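For example, pass a question as a single string argument (the question itself is arbitrary):

```sql
SELECT LLM('How many planets are there in the solar system?');
```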
Here is the output:
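(Illustrative result; the exact wording and the result column name depend on the model.)

```
+----------------------------------------------+
| llm                                          |
+----------------------------------------------+
| There are eight planets in the solar system. |
+----------------------------------------------+
```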
Moreover, you can use the LLM() function with your data to swiftly complete tasks such as text generation or summarization.
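For instance, assuming a hypothetical table example_db.user_comments with a comment column, a query like the following labels each comment (the table, column, and prompt are illustrative):

```sql
SELECT
    comment,
    LLM(CONCAT('Describe the category of this comment in one word: ', comment)) AS category
FROM example_db.user_comments;
```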
The result returns each comment alongside a new category column containing the model-generated label.