Optional properties: criterion, evaluation, llm, memory, skip

Check if the evaluation arguments are valid.
Parameters:
  reference (optional, string): The reference label.
  input (optional, string): The input string.
Throws: If the evaluator requires an input string but none is provided, or if the evaluator requires a reference label but none is provided.
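For instance, a minimal sketch of that check, assuming the method is exposed as checkEvaluationArgs (name taken from the evaluator base class and may differ) and that `chain` is an instance of this evaluator:

  // Sketch only: the trajectory evaluator expects an input string, so
  // validating arguments without one is expected to throw.
  try {
    chain.checkEvaluationArgs("expected answer", undefined);
  } catch (err) {
    console.error(err); // e.g. "... requires an input string."
  }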
Invoke the chain with the provided input and return the output.
Parameters:
  input: Input values for the chain run.
  config (optional, BaseCallbackConfig): Optional configuration for the Runnable.
Returns: Promise that resolves with the output of the chain run.
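For example (a sketch, assuming `chain` is an instance of this class, e.g. created with fromLLM as shown further below):

  // The keys of `values` must match the input variables of the chain's
  // evaluation prompt; the second argument is the optional call config.
  const values = {
    // Input values for this chain run.
  };
  const output = await chain.invoke(values, { tags: ["trajectory-eval"] });
  console.log(output);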
Format prompt with values and pass to LLM.
Parameters:
  values: Keys to pass to the prompt template.
  callbackManager (optional, CallbackManager): CallbackManager to use.
Returns: Completion from LLM.
Example:
  llm.predict({ adjective: "funny" })
Static method deserialize.

Static method fromLLM: Create a new TrajectoryEvalChain.
Parameters:
  agentTools (optional, StructuredTool<ZodObject<any, any, any, any, {}>>[]): The tools used by the agent.
  chainOptions (optional, Partial<Omit<LLMEvalChainInput<EvalOutputType, BaseLanguageModel<any, BaseLanguageModelCallOptions>>, "llm">>): The options for the chain.
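A sketch of constructing the chain; the import paths and the export of TrajectoryEvalChain from langchain/evaluation are assumptions that vary with the LangChain.js version, and the "weather" tool is purely illustrative:

  import { ChatOpenAI } from "@langchain/openai";
  import { DynamicTool } from "@langchain/core/tools";
  import { TrajectoryEvalChain } from "langchain/evaluation";

  // A strong chat model is typically used to grade agent trajectories.
  const llm = new ChatOpenAI({ modelName: "gpt-4", temperature: 0 });

  // The tools used by the agent under evaluation.
  const agentTools = [
    new DynamicTool({
      name: "weather",
      description: "Look up the current weather for a city.",
      func: async (city: string) => `It is sunny in ${city}.`,
    }),
  ];

  const chain = await TrajectoryEvalChain.fromLLM(llm, agentTools);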
Static method resolve
Parameters:
  prompt (optional, BasePromptTemplate<any, BasePromptValue, any>)
  agentTools (optional, StructuredTool<ZodObject<any, any, any, any, {}>>[])

Static member tools
A chain for evaluating ReAct style agents.
It works by reasoning over the sequence of actions the agent took and the outcomes of those actions.
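As an end-to-end sketch, continuing from the fromLLM example above (the evaluateAgentTrajectory call follows the agent-trajectory evaluator interface; its exact name and argument shape may differ between versions):

  // The trajectory is the agent's intermediate steps: (action, observation) pairs.
  const result = await chain.evaluateAgentTrajectory({
    input: "What is the weather in Paris?",
    prediction: "It is sunny in Paris.",
    agentTrajectory: [
      {
        action: {
          tool: "weather",
          toolInput: "Paris",
          log: "I should look up the weather for Paris.",
        },
        observation: "It is sunny in Paris.",
      },
    ],
  });

  // The result typically contains a normalized score and the grader's reasoning.
  console.log(result);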