Class HuggingFaceInference

Class implementing the Large Language Model (LLM) interface using the Hugging Face Inference API for text generation.

Example

// Import path may vary by LangChain version.
import { HuggingFaceInference } from "langchain/llms/hf";

const model = new HuggingFaceInference({
  model: "gpt2",
  temperature: 0.7,
  maxTokens: 50,
});

const res = await model.call(
  "Question: What would be a good company name for a company that makes colorful socks?\nAnswer:"
);
console.log({ res });

Hierarchy

  • LLM
    • HuggingFaceInference

Properties

apiKey: undefined | string = undefined

API key to use.

endpointUrl: undefined | string = undefined

Custom inference endpoint URL to use.

frequencyPenalty: undefined | number = undefined

Penalizes repeated tokens according to their frequency in the text so far.

includeCredentials: undefined | string | boolean = undefined

Credentials to use for the request. A string is passed through unchanged; for a boolean, true maps to "include", while false sends no credentials at all.
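The string/boolean mapping described above can be sketched as a small helper. This is a conceptual illustration of the documented behavior (the function name and the fetch-style credentials values are assumptions, not the library's internal code):

```typescript
// Sketch of how includeCredentials could map onto a fetch-style
// `credentials` option, per the description above.
function resolveCredentials(
  includeCredentials: string | boolean | undefined
): string | undefined {
  if (typeof includeCredentials === "string") {
    return includeCredentials; // passed straight through
  }
  if (includeCredentials === true) {
    return "include"; // true means "include"
  }
  return undefined; // false/undefined: send no credentials
}
```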

maxTokens: undefined | number = undefined

Maximum number of tokens to generate in the completion.

model: string = "gpt2"

Name of the model to use.

temperature: undefined | number = undefined

Sampling temperature to use; higher values produce more random output.

topK: undefined | number = undefined

Integer defining how many of the highest-probability tokens are considered during sampling when generating new text.

topP: undefined | number = undefined

Total probability mass of tokens to consider at each step.
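To illustrate what topP means: at each step, sampling is restricted to the smallest set of highest-probability tokens whose cumulative probability reaches topP (nucleus sampling). The sketch below demonstrates that selection rule only; it is not the library's or the API's implementation:

```typescript
// Conceptual nucleus (topP) filter: keep the smallest set of
// highest-probability tokens whose cumulative mass reaches topP.
function nucleus(probs: number[], topP: number): number[] {
  const indexed = probs
    .map((p, i) => ({ p, i }))
    .sort((a, b) => b.p - a.p); // most likely first
  const kept: number[] = [];
  let mass = 0;
  for (const { p, i } of indexed) {
    kept.push(i);
    mass += p;
    if (mass >= topP) break; // enough probability mass collected
  }
  return kept; // indices of tokens that survive the filter
}
```

With a distribution of [0.5, 0.3, 0.1, 0.1] and topP = 0.7, only the two most likely tokens survive; topK applies the same idea but with a fixed count instead of a mass threshold.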

Generated using TypeDoc