WebAssembly binding for llama.cpp
For the changelog, please visit the releases page
Important: Version 2.0 is released 👉 read more
- TypeScript support
- Can run inference directly in the browser (using WebAssembly SIMD), no backend or GPU is needed!
- No runtime dependency (see package.json)
- High-level API: completions, embeddings
- Low-level API: (de)tokenize, KV cache control, sampling control,...
- Ability to split the model into smaller files and load them in parallel (same as `split` and `cat`)
- Auto switch between single-thread and multi-thread builds based on browser support
- Inference runs inside a worker, so it does not block UI rendering
- Pre-built npm package @wllama/wllama
Limitations:
- To enable multi-thread, you must add `Cross-Origin-Embedder-Policy` and `Cross-Origin-Opener-Policy` headers. See this discussion for more details, and the sketch after this list for one way to set them.
- No WebGPU support, but maybe possible in the future
- Max file size is 2GB, due to size restriction of ArrayBuffer. If your model is bigger than 2GB, please follow the Split model section below.
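For context on the multi-thread limitation: those two headers turn on cross-origin isolation, which browsers require before exposing SharedArrayBuffer (and therefore multi-threaded WASM). Below is a minimal sketch of one way to set them during local development, assuming a Vite dev server; wllama does not ship this config, so adapt it to whatever server you use.

```ts
// vite.config.ts — hypothetical dev-server setup, not part of wllama itself.
// The two headers enable cross-origin isolation, which the browser requires
// before it exposes SharedArrayBuffer (needed for the multi-thread build).
import { defineConfig } from 'vite';

export default defineConfig({
  server: {
    headers: {
      'Cross-Origin-Embedder-Policy': 'require-corp',
      'Cross-Origin-Opener-Policy': 'same-origin',
    },
  },
});
```

Without these headers, the auto-switch described in the features list should fall back to the single-thread build.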
Demo:
- Basic usages with completions and embeddings: https://github.ngxson.com/wllama/examples/basic/
- Embedding and cosine distance: https://github.ngxson.com/wllama/examples/embeddings/
- For more advanced example using low-level API, have a look at test file: wllama.test.ts
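To give a feel for the low-level API before diving into the test file, here is a rough sketch of a tokenize/detokenize round trip. The method names follow those exercised in wllama.test.ts, but the exact signatures and return types (token IDs as numbers, detokenized output as raw bytes) are assumptions, so verify them against the test file.

```ts
import { Wllama } from '@wllama/wllama';

// Wasm path mapping, same shape as in the usage example further below
const CONFIG_PATHS = {
  'single-thread/wllama.wasm': './esm/single-thread/wllama.wasm',
  'multi-thread/wllama.wasm': './esm/multi-thread/wllama.wasm',
};

const wllama = new Wllama(CONFIG_PATHS);
await wllama.loadModelFromHF('ggml-org/models', 'tinyllamas/stories260K.gguf');

// Text -> token IDs (assumed to resolve to an array of numbers)
const tokens = await wllama.tokenize('Once upon a time');
console.log('token count:', tokens.length);

// Token IDs -> text (assumed to resolve to raw UTF-8 bytes)
const bytes = await wllama.detokenize(tokens);
console.log('round trip:', new TextDecoder().decode(bytes));
```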
Install it:
```bash
npm i @wllama/wllama
```

Then, import the module:

```js
import { Wllama } from '@wllama/wllama';

let wllamaInstance = new Wllama(WLLAMA_CONFIG_PATHS, ...);
// (the rest is the same as the earlier example)
```

For a complete code example, see examples/main/src/utils/wllama.context.tsx
NOTE: this example only covers completions usage. For embeddings, please see examples/embeddings/index.html
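As a companion to that note, here is a hedged sketch of the embeddings path. The createEmbedding call mirrors the high-level embeddings API mentioned in the features list, but the exact method name, load options and return type are assumptions here; treat examples/embeddings/index.html as the authoritative reference. The embedding model URL is a placeholder.

```ts
import { Wllama } from '@wllama/wllama';

const wllama = new Wllama(WLLAMA_CONFIG_PATHS); // same wasm path config as above
// Hypothetical embedding model URL — replace with a real GGUF embedding model.
// Depending on the version, you may also need to enable embeddings in the load config.
await wllama.loadModelFromUrl('https://example.com/models/my-embedding-model.gguf');

const a = await wllama.createEmbedding('Hello world');
const b = await wllama.createEmbedding('Goodbye world');

// Plain cosine similarity between the two vectors
function cosineSim(x: ArrayLike<number>, y: ArrayLike<number>): number {
  let dot = 0, nx = 0, ny = 0;
  for (let i = 0; i < x.length; i++) {
    dot += x[i] * y[i];
    nx += x[i] * x[i];
    ny += y[i] * y[i];
  }
  return dot / (Math.sqrt(nx) * Math.sqrt(ny));
}
console.log('cosine similarity:', cosineSim(a, b));
```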
Prepare your model:
- It is recommended to split the model into chunks of maximum 512MB. This results in slightly faster download speed (because multiple splits can be downloaded in parallel) and also prevents some out-of-memory issues. See the "Split model" section below for more details.
- It is recommended to use Q4, Q5 or Q6 quantization for a balance among performance, file size and quality. Using IQ quantization (with imatrix) is not recommended; it may result in slow inference and low quality.
For complete code, see examples/basic/index.html
```js
import { Wllama } from './esm/index.js';

(async () => {
  const CONFIG_PATHS = {
    'single-thread/wllama.wasm': './esm/single-thread/wllama.wasm',
    'multi-thread/wllama.wasm': './esm/multi-thread/wllama.wasm',
  };
  // Automatically switch between single-thread and multi-thread version based on browser support
  // If you want to enforce single-thread, add { "n_threads": 1 } to LoadModelConfig
  const wllama = new Wllama(CONFIG_PATHS);
  // Define a function for tracking the model download progress
  const progressCallback = ({ loaded, total }) => {
    // Calculate the progress as a percentage
    const progressPercentage = Math.round((loaded / total) * 100);
    // Log the progress in a user-friendly format
    console.log(`Downloading... ${progressPercentage}%`);
  };
  // Load GGUF from Hugging Face hub
  // (alternatively, you can use loadModelFromUrl if the model is not from HF hub)
  await wllama.loadModelFromHF(
    'ggml-org/models',
    'tinyllamas/stories260K.gguf',
    { progressCallback },
  );
  const outputText = await wllama.createCompletion(elemInput.value, {
    nPredict: 50,
    sampling: {
      temp: 0.5,
      top_k: 40,
      top_p: 0.9,
    },
  });
  console.log(outputText);
})();
```

Alternatively, you can use the *.wasm files from CDN:
```js
import WasmFromCDN from '@wllama/wllama/esm/wasm-from-cdn.js';

const wllama = new Wllama(WasmFromCDN);
// NOTE: this is not recommended, only use when you can't embed wasm files in your project
```

Cases where we want to split the model:
- Due to the size restriction of ArrayBuffer, a single file is limited to 2GB. If your model is bigger than 2GB, you can split it into smaller files.
- Even with a small model, splitting into chunks allows the browser to download multiple chunks in parallel, thus making the download process a bit faster.
We use llama-gguf-split to split a big gguf file into smaller files. You can download the pre-built binary via llama.cpp release page:
```bash
# Split the model into chunks of 512 Megabytes
./llama-gguf-split --split-max-size 512M ./my_model.gguf ./my_model
```

This will output files ending with -00001-of-00003.gguf, -00002-of-00003.gguf, and so on.
You can then pass the URL of the first file to loadModelFromUrl or loadModelFromHF and it will automatically load all the chunks:
```js
const wllama = new Wllama(CONFIG_PATHS, {
  parallelDownloads: 5, // optional: maximum files to download in parallel (default: 3)
});
await wllama.loadModelFromHF(
  'ngxson/tinyllama_split_test',
  'stories15M-q8_0-00001-of-00003.gguf',
);
```
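The same pattern works with loadModelFromUrl when the splits are hosted on your own server; only the URL of the first shard is needed. The URL below is a hypothetical placeholder, and the snippet reuses the wllama instance and the optional progressCallback from the examples above.

```ts
// Hypothetical URL: point it at the first shard of your own split model.
// The remaining -0000X-of-0000N.gguf shards are loaded automatically.
await wllama.loadModelFromUrl(
  'https://example.com/models/my_model-00001-of-00003.gguf',
  { progressCallback }, // optional download-progress callback, as shown earlier
);
```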
When initializing Wllama, you can pass a custom logger.

Example 1: Suppress debug messages
```js
import { Wllama, LoggerWithoutDebug } from '@wllama/wllama';

const wllama = new Wllama(pathConfig, {
  // LoggerWithoutDebug is predefined inside wllama
  logger: LoggerWithoutDebug,
});
```

Example 2: Add emoji prefix to log messages
```js
const wllama = new Wllama(pathConfig, {
  logger: {
    debug: (...args) => console.debug('🔧', ...args),
    log: (...args) => console.log('ℹ️', ...args),
    warn: (...args) => console.warn('⚠️', ...args),
    error: (...args) => console.error('☠️', ...args),
  },
});
```

This repository already comes with pre-built binaries from llama.cpp source code. However, in some cases you may want to compile it yourself:
- You don't trust the pre-built one.
- You want to try out the latest, bleeding-edge changes from upstream llama.cpp source code.
You can use the commands below to compile it yourself:
```bash
# /!\ IMPORTANT: requires having docker compose installed

# Clone the repository with submodules
git clone --recurse-submodules https://github.com/ngxson/wllama.git
cd wllama

# Optionally, you can run this command to update llama.cpp to the latest upstream version (bleeding-edge, use at your own risk!)
# git submodule update --remote --merge

# Install the required modules
npm i

# Firstly, build llama.cpp into wasm
npm run build:wasm
# Then, build the ES module
npm run build
```

TODO:
- Add support for LoRA adapter
- Support GPU inference via WebGL
- Support multi-sequences: given the resource limitations when using WASM, I don't think having multi-sequences is a good idea
- Multi-modal: waiting for the LLaVA implementation refactoring from llama.cpp

