Class QAEvalChain

Chain for evaluating question-answering predictions: it runs a grading prompt through an LLM to compare each predicted answer against a reference answer.

Example

import { LLMChain } from "langchain/chains";
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";

const prompt = PromptTemplate.fromTemplate("Tell me a {adjective} joke");
const llm = new LLMChain({ llm: new OpenAI(), prompt });

Hierarchy

  • LLMChain
      • QAEvalChain

Constructors

Properties

llm: LLMType

LLM Wrapper to use

outputKey: string = "text"

Key to use for output, defaults to text

prompt: BasePromptTemplate<any, BasePromptValue, any>

Prompt object to use

llmKwargs?: any

Kwargs to pass to LLM

memory?: BaseMemory
outputParser?: BaseLLMOutputParser<string>

OutputParser to use

Accessors

  • get inputKeys(): string[]
  • Returns string[]

  • get outputKeys(): string[]
  • Returns string[]

Methods

  • apply — Call the chain on all inputs in the list.

    Parameters

    • inputs: ChainValues[]
    • Optional config: (BaseCallbackConfig | Callbacks)[]

    Returns Promise<ChainValues[]>

  • call — Run the core logic of this chain and add to output if desired.

    Wraps _call and handles memory.

    Parameters

    • values: any
    • Optional config: BaseCallbackConfig | Callbacks

    Returns Promise<ChainValues>
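The contract described above (wrap the core `_call` logic and handle memory) can be sketched in plain TypeScript. The `Memory` interface, `SimpleBufferMemory`, and `callWithMemory` below are illustrative stand-ins, not the library's actual types:

```typescript
// Values flowing through a chain, as loose key/value records.
type Values = Record<string, unknown>;

// Stand-in for a memory backend: load variables before a call,
// save the new context after it.
interface Memory {
  loadMemoryVariables(): Values;
  saveContext(input: Values, output: Values): void;
}

// Minimal memory that appends each exchange to a history buffer.
class SimpleBufferMemory implements Memory {
  history: string[] = [];
  loadMemoryVariables(): Values {
    return { history: this.history.join("\n") };
  }
  saveContext(input: Values, output: Values): void {
    this.history.push(
      `in: ${JSON.stringify(input)} out: ${JSON.stringify(output)}`,
    );
  }
}

// The wrapping pattern: merge memory variables into the inputs,
// run the core logic, then persist the exchange back to memory.
function callWithMemory(
  core: (values: Values) => Values,
  values: Values,
  memory?: Memory,
): Values {
  const fullValues = { ...values, ...(memory?.loadMemoryVariables() ?? {}) };
  const output = core(fullValues);
  memory?.saveContext(values, output);
  return output;
}

const mem = new SimpleBufferMemory();
const out = callWithMemory((v) => ({ text: `echo:${v["q"]}` }), { q: "hi" }, mem);
```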

  • evaluate — Evaluate question-answering predictions against the provided examples.

    Parameters

    • examples: ChainValues
    • predictions: ChainValues
    • args: EvaluateArgs = ...

    Returns Promise<ChainValues>
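The method above takes parallel collections of examples and predictions. A minimal sketch of how such inputs might be paired into per-item grading inputs, assuming the default key names `query`, `answer`, and `result` (the helper itself is hypothetical, not library code):

```typescript
// Hypothetical helper: pair each example with its prediction to form
// one grading input per QA pair. Key names mirror the default
// EvaluateArgs keys (query / answer / result).
interface QAExample { query: string; answer: string; }
interface QAPrediction { result: string; }

function buildGradingInputs(
  examples: QAExample[],
  predictions: QAPrediction[],
): { query: string; answer: string; result: string }[] {
  return examples.map((ex, i) => ({
    query: ex.query,
    answer: ex.answer,
    result: predictions[i].result,
  }));
}

const inputs = buildGradingInputs(
  [{ query: "What is 2 + 2?", answer: "4" }],
  [{ result: "4" }],
);
```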

  • invoke — Invoke the chain with the provided input and return the output.

    Parameters

    • input: ChainValues

      Input values for the chain run.

    • Optional config: BaseCallbackConfig

      Optional configuration for the Runnable.

    Returns Promise<ChainValues>

    Promise that resolves with the output of the chain run.

  • predict — Format the prompt with the given values and pass it to the LLM.

    Parameters

    • values: any

      keys to pass to prompt template

    • Optional callbackManager: CallbackManager

      CallbackManager to use

    Returns Promise<string>

    Completion from LLM.

    Example

    llm.predict({ adjective: "funny" })
    
  • prepOutputs

    Parameters

    • inputs: Record<string, unknown>
    • outputs: Record<string, unknown>
    • returnOnlyOutputs: boolean = false

    Returns Promise<Record<string, unknown>>

  • run — Run the chain on a single input and return its string output.

    Parameters

    • input: any
    • Optional config: BaseCallbackConfig | Callbacks

    Returns Promise<string>

  • Static deserialize — Load a chain from a json-like object describing it.

    Parameters

    Returns Promise<LLMChain<string, BaseLanguageModel<any, BaseLanguageModelCallOptions>>>

  • Static fromLlm — Create a QAEvalChain from a language model, with an optional custom grading prompt.

    Parameters

    • llm: BaseLanguageModel<any, BaseLanguageModelCallOptions>
    • options: {
          chainInput?: Omit<LLMChainInput<string, LLMType>, "llm">;
          prompt?: PromptTemplate<any, any>;
      } = {}
      • Optional chainInput?: Omit<LLMChainInput<string, LLMType>, "llm">
      • Optional prompt?: PromptTemplate<any, any>

    Returns QAEvalChain
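The factory pattern above (build an evaluation chain from a model, defaulting the prompt when none is given) can be sketched as follows. The `Model` and `ChainConfig` types and the `QA_DEFAULT_PROMPT` string are stand-ins for illustration, not the library's internals:

```typescript
// Stand-in types for a model and the chain configuration it produces.
interface Model { name: string; }
interface ChainConfig { llm: Model; prompt: string; }

// Illustrative default grading prompt; the library ships its own.
const QA_DEFAULT_PROMPT = [
  "You are grading student answers.",
  "QUESTION: {query}",
  "TRUE ANSWER: {answer}",
  "STUDENT ANSWER: {result}",
  "GRADE:",
].join("\n");

// Build a chain config from a model, falling back to the default
// grading prompt when no override is supplied.
function fromLlm(llm: Model, options: { prompt?: string } = {}): ChainConfig {
  return { llm, prompt: options.prompt ?? QA_DEFAULT_PROMPT };
}

const chain = fromLlm({ name: "gpt-3.5-turbo" });
```

Passing `options.prompt` swaps in a custom grading template while keeping the same model wiring.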

Generated using TypeDoc