combineDocumentChain: The chain used to combine the results of applying llmChain to each document.
documentVariableName: The name of the variable in the LLM chain where the documents are placed.
ensureMapStep: Ensures that the map step runs regardless of the max token limit.
llmChain: The LLM chain to apply to each document after formatting.
maxIterations: The maximum number of iterations to run through the map step.
maxTokens: The maximum number of tokens allowed before a reduction step is required.
returnIntermediateSteps: Whether to return the results of the map steps in the output.
memory (Optional)
_call: Run the core logic of this chain and add the results to the output if desired.
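The properties above can be hard to picture in isolation. The sketch below shows the map-reduce pattern they control: a map function is applied to each document, re-mapping repeats while the combined text exceeds the token budget (or at least once when the equivalent of ensureMapStep is set), and a combine function reduces the final texts. All names here (`mapReduceDocuments`, `MapReduceOptions`, `countTokens`) are illustrative stand-ins, not the library's API.

```typescript
type Document = { pageContent: string };

interface MapReduceOptions {
  maxTokens: number;         // token budget before another map pass is forced
  maxIterations: number;     // cap on repeated map passes
  ensureMapStep: boolean;    // run the map step even when under the budget
  returnIntermediateSteps: boolean;
}

interface MapReduceResult {
  text: string;
  intermediateSteps?: string[];
}

// Crude token estimate: whitespace-separated words.
const countTokens = (s: string): number =>
  s.split(/\s+/).filter(Boolean).length;

function mapReduceDocuments(
  docs: Document[],
  mapFn: (doc: Document) => string,       // stands in for llmChain
  combineFn: (texts: string[]) => string, // stands in for combineDocumentChain
  opts: MapReduceOptions,
): MapReduceResult {
  let texts = docs.map((d) => d.pageContent);
  const intermediateSteps: string[] = [];
  let iterations = 0;

  // Map (and re-map) while over the token budget, or at least once
  // when ensureMapStep is set.
  while (
    iterations < opts.maxIterations &&
    (countTokens(texts.join(" ")) > opts.maxTokens ||
      (opts.ensureMapStep && iterations === 0))
  ) {
    texts = texts.map((t) => mapFn({ pageContent: t }));
    intermediateSteps.push(...texts);
    iterations += 1;
  }

  const text = combineFn(texts);
  return opts.returnIntermediateSteps ? { text, intermediateSteps } : { text };
}
```

In the real chain the map and combine steps are LLM calls; here they are plain functions so the control flow around maxTokens, maxIterations, and returnIntermediateSteps is easy to follow.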
call: Wraps _call and handles memory.
config (Optional): BaseCallbackConfig | Callbacks
tags (Optional): string[]
invoke: Invoke the chain with the provided input and return the output.
input: Input values for the chain run.
config (Optional): BaseCallbackConfig. Optional configuration for the Runnable.
Returns: A promise that resolves with the output of the chain run.
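The invoke contract described above (input values in, a promise of the output out, with optional per-call configuration) can be sketched without the library. `SimpleChain` and `RunConfig` below are assumed, simplified names, not the actual Runnable interface.

```typescript
type ChainValues = Record<string, unknown>;

interface RunConfig {
  tags?: string[];
  callbacks?: Array<(event: string) => void>;
}

class SimpleChain {
  constructor(private logic: (input: ChainValues) => ChainValues) {}

  // Wraps the core logic, notifying any configured callbacks
  // before and after the run, and resolves with the output.
  async invoke(input: ChainValues, config?: RunConfig): Promise<ChainValues> {
    config?.callbacks?.forEach((cb) => cb("chain_start"));
    const output = this.logic(input);
    config?.callbacks?.forEach((cb) => cb("chain_end"));
    return output;
  }
}
```

The optional second argument mirrors how tags and callbacks are supplied per call rather than baked into the chain.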
serialize: Return a json-like object representing this chain.
deserialize (Static): Load a chain from a json-like object describing it.
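The serialize/deserialize pair described above round-trips a chain's configuration through a json-like object. A minimal sketch of that pattern, with assumed names (`SerializedChain`, `ConfigChain`) and a reduced set of fields rather than the library's actual schema:

```typescript
interface SerializedChain {
  _type: string;
  documentVariableName: string;
  maxTokens: number;
}

class ConfigChain {
  constructor(
    public documentVariableName: string,
    public maxTokens: number,
  ) {}

  // Return a json-like object representing this chain.
  serialize(): SerializedChain {
    return {
      _type: "map_reduce_documents_chain",
      documentVariableName: this.documentVariableName,
      maxTokens: this.maxTokens,
    };
  }

  // Load a chain from a json-like object describing it,
  // rejecting objects that describe a different chain type.
  static deserialize(data: SerializedChain): ConfigChain {
    if (data._type !== "map_reduce_documents_chain") {
      throw new Error(`Unexpected chain type: ${data._type}`);
    }
    return new ConfigChain(data.documentVariableName, data.maxTokens);
  }
}
```

Because the serialized form is plain data, it survives JSON.stringify/JSON.parse, which is what makes the static deserialize counterpart possible.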
Combine documents by mapping a chain over them, then combining results.