To use this model, you need the @mlc-ai/web-llm module installed. You can install it with npm install -S @mlc-ai/web-llm.

You can see a list of available model records here: https://github.com/mlc-ai/web-llm/blob/main/src/config.ts
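You can also enumerate the prebuilt model IDs at runtime; a minimal sketch, assuming you only need the records bundled with web-llm (prebuiltAppConfig is exported by @mlc-ai/web-llm):

// List the model IDs available in web-llm's prebuilt configuration.
import { prebuiltAppConfig } from "@mlc-ai/web-llm";

const modelIds = prebuiltAppConfig.model_list.map((record) => record.model_id);
console.log(modelIds);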

Example

import { ChatWebLLM } from "@langchain/community/chat_models/webllm";
import { HumanMessage } from "@langchain/core/messages";

// Initialize the ChatWebLLM model with the model record.
const model = new ChatWebLLM({
  model: "Phi2-q4f32_1",
  chatOptions: {
    temperature: 0.5,
  },
});

// Load the model weights into the browser; the callback reports download progress.
await model.initialize((progress) => console.log(progress));

// Call the model with a message and await the response.
const response = await model.invoke([
  new HumanMessage({ content: "My name is John." }),
]);
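Because ChatWebLLM implements the standard LangChain chat model interface, you can also stream tokens as they are generated; a minimal sketch:

// Stream the response chunk by chunk instead of waiting for the full message.
const stream = await model.stream("Tell me a short joke.");
for await (const chunk of stream) {
  console.log(chunk.content);
}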

Hierarchy

Constructors

new ChatWebLLM(inputs: WebLLMInputs): ChatWebLLM

Properties

model: string
  The WebLLM model record ID to load, e.g. "Phi2-q4f32_1".
appConfig?: any
  Optional WebLLM app configuration, such as the list of available model records.
chatOptions?: any
  Optional generation options forwarded to WebLLM, e.g. temperature.
temperature?: number
  Sampling temperature.
inputs: WebLLMInputs
  The inputs object this instance was constructed with.
engine: MLCEngine
  The underlying @mlc-ai/web-llm engine instance.
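As an illustration of how these properties fit together at construction time (the option values here are illustrative, and prebuiltAppConfig is web-llm's bundled model list):

import { prebuiltAppConfig } from "@mlc-ai/web-llm";

// appConfig selects which model records are available; chatOptions are
// generation settings forwarded to the underlying MLCEngine.
const customized = new ChatWebLLM({
  model: "Phi2-q4f32_1",
  appConfig: prebuiltAppConfig,
  chatOptions: {
    temperature: 0.2,
  },
});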

Methods
