Create a model using `openai_engine` as an engine.
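A minimal sketch of such a statement follows; the model name `openai_model`, the target column `answer`, and the prompt text are illustrative:

```sql
CREATE MODEL openai_model
PREDICT answer
USING
    engine = 'openai_engine',   -- engine created with CREATE ML_ENGINE
    prompt_template = 'Answer the following question: {{question}}';
```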
If you want to adjust the `prompt_template` parameter, you do not have to recreate the model. Instead, you can override the `prompt_template` parameter at prediction time like this:
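(A sketch; it assumes the `openai_model` model above, and the question text and new template value are placeholders.)

```sql
SELECT question, answer
FROM openai_model
WHERE question = 'Where is Stockholm located?'
USING
    prompt_template = 'Provide a one-sentence answer: {{question}}';
```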
The `CREATE MODEL` statement supports the following parameters in its `USING` clause:

- `engine`: This is the engine name as created with the `CREATE ML_ENGINE` statement.
- `api_base`: It overrides the default OpenAI API base URL.
- `mode`: Available values include `default`, `conversational`, `conversational-full`, `image`, and `embedding`.
  - The `default` mode is used by default. The model replies to the `prompt_template` message.
  - The `conversational` mode enables the model to read and reply to multiple messages.
  - The `conversational-full` mode enables the model to read and reply to multiple messages, one reply per message.
  - The `image` mode is used to create an image instead of a text reply.
  - The `embedding` mode enables the model to return output in the form of embeddings.
- `model_name`: By default, the `gpt-3.5-turbo` model is used. You can find all available models here.
- `question_column`: It contains the name of the column that stores user input.
- `context_column`: It contains the name of the column that stores the context of the user input.
- `prompt_template`: It can be used instead of `question_column`. It stores the message or instructions to the model. Please note that this parameter can be overridden at prediction time.
- `max_tokens`: It defines the maximum token cost of the prediction. Please note that this parameter can be overridden at prediction time.
- `temperature`: It defines how creative the model's answers are. The value of `0` marks a well-defined answer, and the value of `0.9` marks a more creative answer. Please note that this parameter can be overridden at prediction time.
- `json_struct`: It is used to extract JSON data from a text column provided in the `prompt_template` parameter. See examples here.
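For illustration, here is a sketch that combines several of these parameters in a single statement; the model name `openai_custom_model` and all values are placeholders:

```sql
CREATE MODEL openai_custom_model
PREDICT answer
USING
    engine = 'openai_engine',
    mode = 'default',
    model_name = 'gpt-3.5-turbo',
    prompt_template = 'Answer the question briefly: {{question}}',
    max_tokens = 300,
    temperature = 0.3;
```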
Please note that you need to provide either the `prompt_template` parameter alone, or the `question_column` parameter and optionally a `context_column` parameter.

To create a model in the conversational mode, also provide the `prompt`, `user_column`, and `assistant_column` parameters.

The following usage examples utilize `openai_engine` to create a model with the `CREATE MODEL` statement.
Answering questions without context
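A sketch of this scenario, assuming illustrative model and column names; the model takes user input from the `question_column` and answers without any additional context:

```sql
-- create a model that answers standalone questions
CREATE MODEL openai_qa_model
PREDICT answer
USING
    engine = 'openai_engine',
    question_column = 'question';

-- query the model with a single question
SELECT question, answer
FROM openai_qa_model
WHERE question = 'Where is Stockholm located?';
```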
Answering questions with context
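A sketch of this scenario, again with illustrative names; the `context_column` supplies additional context alongside the question:

```sql
-- create a model that answers questions using the provided context
CREATE MODEL openai_qa_context_model
PREDICT answer
USING
    engine = 'openai_engine',
    question_column = 'question',
    context_column = 'context';

-- query the model, providing both a question and its context
SELECT question, context, answer
FROM openai_qa_context_model
WHERE question = 'Is it a capital city?'
AND context = 'Stockholm is a city in Sweden.';
```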
Prompt completion
Prompt completion lets the model respond to any instructions defined in the `prompt_template` parameter. The following example uses the `openai_model` model created earlier and overrides parameters at prediction time:
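(A sketch; the question text and the overridden parameter values are placeholders.)

```sql
SELECT question, answer
FROM openai_model
WHERE question = 'Write a short note on SQL databases.'
USING
    prompt_template = '{{question}}',   -- treat the input as the whole prompt
    max_tokens = 100,
    temperature = 0.6;
```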
Conversational mode
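A sketch of a conversational model, assuming illustrative model and column names; per the note above, the `prompt`, `user_column`, and `assistant_column` parameters are provided in this mode:

```sql
CREATE MODEL openai_chat_model
PREDICT response
USING
    engine = 'openai_engine',
    mode = 'conversational',
    model_name = 'gpt-3.5-turbo',
    prompt = 'You are a helpful assistant. Answer concisely.',
    user_column = 'question',       -- column holding user messages
    assistant_column = 'response';  -- column holding model replies
```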
Authentication Error
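If you see this error, the OpenAI API key provided to the engine is most likely missing or invalid. Below is a sketch of recreating the engine with a valid key; the key value is a placeholder, and the exact parameter name may vary between MindsDB versions:

```sql
CREATE ML_ENGINE openai_engine
FROM openai
USING
    openai_api_key = 'your-openai-api-key';
```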
SQL statement cannot be parsed by mindsdb_sql