Wrapper around Minimax large language models that use the Chat endpoint.

To use, you should have the MINIMAX_GROUP_ID and MINIMAX_API_KEY environment variables set.
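If you prefer not to rely on the environment, the credentials can also be passed in directly through the minimaxGroupId and minimaxApiKey fields listed under Properties below. A minimal sketch (the import path assumes the split @langchain/community packaging and may differ in older releases):

import { ChatMinimax } from "@langchain/community/chat_models/minimax";

const model = new ChatMinimax({
  minimaxGroupId: "<your group id>",
  minimaxApiKey: "<your api key>",
});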

Example

// Note: import paths assume the split-package layout (@langchain/community,
// @langchain/core); older releases exported these from the "langchain" package.
import { ChatMinimax } from "@langchain/community/chat_models/minimax";
import { LLMChain } from "langchain/chains";
import {
  ChatPromptTemplate,
  HumanMessagePromptTemplate,
  SystemMessagePromptTemplate,
} from "@langchain/core/prompts";

// Define a chat prompt with a system message setting the context for translation
const chatPrompt = ChatPromptTemplate.fromMessages([
  SystemMessagePromptTemplate.fromTemplate(
    "You are a helpful assistant that translates {input_language} to {output_language}."
  ),
  HumanMessagePromptTemplate.fromTemplate("{text}"),
]);

// Create a new LLMChain with the chat model and the defined prompt
const chainB = new LLMChain({
  prompt: chatPrompt,
  llm: new ChatMinimax({ temperature: 0.01 }),
});

// Call the chain with the input language, output language, and the text to translate
const resB = await chainB.call({
  input_language: "English",
  output_language: "Chinese",
  text: "I love programming.",
});

// Log the result
console.log({ resB });
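The wrapper can also be called directly, without a chain. A minimal sketch using the core message classes (again assuming the split-package import paths):

import { HumanMessage } from "@langchain/core/messages";
import { ChatMinimax } from "@langchain/community/chat_models/minimax";

const model = new ChatMinimax({ temperature: 0.01 });
const response = await model.invoke([
  new HumanMessage("Translate this sentence to Chinese: I love programming."),
]);
console.log(response.content);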

Hierarchy

  • BaseChatModel
      • ChatMinimax

Implements

  • MinimaxChatInput

Constructors

  • new ChatMinimax(fields?): ChatMinimax

    Parameters

      • Optional fields: Partial<MinimaxChatInput> & BaseLanguageModelParams & {
            configuration?: ConfigurationParameters;
        }

    Returns ChatMinimax
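As a sketch, a custom endpoint or extra request headers could be supplied through the configuration field; the basePath and headers names here are assumptions based on the identically named entries under Properties, so check them against the ConfigurationParameters type:

import { ChatMinimax } from "@langchain/community/chat_models/minimax";

const model = new ChatMinimax({
  temperature: 0.01,
  configuration: {
    basePath: "https://api.minimax.chat/v1",
    // Hypothetical extra header, sent with each request if supported.
    headers: { "X-Request-Source": "docs-example" },
  },
});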

Properties

apiUrl: string
modelName: string = "abab5.5-chat"
streaming: boolean = false
basePath?: string = "https://api.minimax.chat/v1"
beamWidth?: number
botSetting?: BotSetting[]
continueLastMessage?: boolean
defaultBotName?: string = "Assistant"
defaultUserName?: string = "I"
headers?: Record<string, string>
maskSensitiveInfo?: boolean
minimaxApiKey?: string
minimaxGroupId?: string
prefixMessages?: MinimaxChatCompletionRequestMessage[]
proVersion?: boolean = true
prompt?: string
replyConstraints?: ReplyConstraints
roleMeta?: RoleMeta
skipInfoMask?: boolean
temperature?: number = 0.9
tokensToGenerate?: number
topP?: number = 0.8
useStandardSse?: boolean
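Most of these options map one-to-one onto snake_case fields of the Minimax chat-completion request (compare the return type of identifyingParams below). A sketch wiring several of them together; the BotSetting and ReplyConstraints field shapes here follow the Minimax pro API and are assumptions to verify against those types:

import { ChatMinimax } from "@langchain/community/chat_models/minimax";

const model = new ChatMinimax({
  modelName: "abab5.5-chat",
  proVersion: true,
  temperature: 0.9,
  topP: 0.8,
  tokensToGenerate: 1024,
  // Assumed BotSetting shape: a bot name plus a system-style description.
  botSetting: [
    {
      bot_name: "Assistant",
      content: "Assistant is a helpful assistant created for this example.",
    },
  ],
  // Assumed ReplyConstraints shape: which sender the model replies as.
  replyConstraints: {
    sender_type: "BOT",
    sender_name: "Assistant",
  },
});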

Methods

  • defaultReplyConstraints

    Parameters

    Returns ReplyConstraints

  • fallbackBotName

    Parameters

    Returns string

  • identifyingParams

    Get the identifying parameters for the model (a usage sketch follows the method list below).

    Returns {
        model: string;
        beam_width?: number;
        bot_setting?: BotSetting[];
        functions?: Function[];
        mask_sensitive_info?: boolean;
        plugins?: string[];
        prompt?: string;
        reply_constraints?: ReplyConstraints;
        role_meta?: RoleMeta;
        sample_messages?: MinimaxChatCompletionRequestMessage[];
        skip_info_mask?: boolean;
        stream?: boolean;
        temperature?: number;
        tokens_to_generate?: number;
        top_p?: number;
        use_standard_sse?: boolean;
    }

    • model: string
    • Optional beam_width?: number
    • Optional bot_setting?: BotSetting[]
    • Optional functions?: Function[]

      A list of functions the model may generate JSON inputs for.

    • Optional mask_sensitive_info?: boolean
    • Optional plugins?: string[]
    • Optional prompt?: string
    • Optional reply_constraints?: ReplyConstraints
    • Optional role_meta?: RoleMeta
    • Optional sample_messages?: MinimaxChatCompletionRequestMessage[]
    • Optional skip_info_mask?: boolean
    • Optional stream?: boolean
    • Optional temperature?: number
    • Optional tokens_to_generate?: number
    • Optional top_p?: number
    • Optional use_standard_sse?: boolean
  • invocationParams

    Get the parameters used to invoke the model.

    Parameters

    Returns Omit<MinimaxChatCompletionRequest, "messages">

  • messageToMinimaxMessage

    Convert a list of messages to the format expected by the model.

    Parameters

    Returns undefined | MinimaxChatCompletionRequestMessage[]
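As a usage sketch for the parameter-building methods above, identifyingParams and invocationParams can be called directly to inspect what would be sent to the API:

import { ChatMinimax } from "@langchain/community/chat_models/minimax";

const model = new ChatMinimax({ temperature: 0.01 });

// Request body that would be sent, minus the messages.
console.log(model.invocationParams());

// Identifying parameters, including the resolved model name.
console.log(model.identifyingParams());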

Generated using TypeDoc