Download a VRM avatar, wire up AnimaSync V1, and make it talk — all in 6 steps, entirely in the browser.
VRM is an open standard for 3D humanoid avatars. You can download free models from VRoid Hub. Here's how:
This character is free to download, with its usage conditions set to "Allow." Click the link, then press the "Download" button on the model page to get the .vrm file.
1. Visit a model page →
2. Click "Download" (agree to terms) →
3. Save the .vrm file →
4. Drag it into the viewport in Step 4 below.
Add the dependencies via a CDN import map — no bundler needed. This loads Three.js for 3D rendering,
@pixiv/three-vrm for VRM support, ONNX Runtime for neural inference, and AnimaSync V1 for lip sync.
Or install via npm: npm install three @pixiv/three-vrm onnxruntime-web @goodganglabs/lipsync-wasm-v1
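The import map can look like the sketch below. The package names come from the npm command above, but the CDN URLs, versions, and file paths are illustrative placeholders — check each package's published files for the exact module paths.

```html
<script type="importmap">
{
  "imports": {
    "three": "https://cdn.jsdelivr.net/npm/three@0.160.0/build/three.module.js",
    "three/addons/": "https://cdn.jsdelivr.net/npm/three@0.160.0/examples/jsm/",
    "@pixiv/three-vrm": "https://cdn.jsdelivr.net/npm/@pixiv/three-vrm@2/lib/three-vrm.module.js"
  }
}
</script>
```

With the map in place, a plain `<script type="module">` can `import * as THREE from 'three'` with no bundler. onnxruntime-web and the AnimaSync package would be added to the same map once you confirm their CDN entry points.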
Create a LipSyncWasmWrapper, then call init().
This loads the Rust/WASM module, validates the license (30-day free trial with no signup), and decrypts the ONNX model.
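A minimal initialization sketch. The class name `LipSyncWasmWrapper` and the `init()` call are from the text above; any constructor options and the exact import path are assumptions — consult the AnimaSync docs for the real signature.

```javascript
import { LipSyncWasmWrapper } from '@goodganglabs/lipsync-wasm-v1';

// Create the wrapper, then initialize it once at startup.
// init() loads the Rust/WASM module, validates the 30-day trial
// license, and decrypts the bundled ONNX model.
const lipsync = new LipSyncWasmWrapper();
await lipsync.init();
```

Initialization is async because it fetches and compiles the WASM binary, so keep it out of the render loop and await it before processing any audio.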
Create a scene with a camera, orbit controls, and lighting. Then load the VRM using
GLTFLoader with the VRMLoaderPlugin.
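A sketch of the scene setup and VRM load. The Three.js and @pixiv/three-vrm calls (`loader.register`, `VRMLoaderPlugin`, `gltf.userData.vrm`) are the standard API; the camera placement, lighting values, and the `./avatar.vrm` path are example choices.

```javascript
import * as THREE from 'three';
import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js';
import { OrbitControls } from 'three/addons/controls/OrbitControls.js';
import { VRMLoaderPlugin } from '@pixiv/three-vrm';

// Scene, camera, renderer
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(30, innerWidth / innerHeight, 0.1, 20);
camera.position.set(0, 1.4, 2);
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

// Orbit controls aimed at the avatar's chest height
const controls = new OrbitControls(camera, renderer.domElement);
controls.target.set(0, 1.2, 0);

// Simple lighting
scene.add(new THREE.AmbientLight(0xffffff, 0.5));
const light = new THREE.DirectionalLight(0xffffff, 1.0);
light.position.set(1, 2, 1);
scene.add(light);

// Load the VRM: register the plugin, then read vrm from userData
const loader = new GLTFLoader();
loader.register((parser) => new VRMLoaderPlugin(parser));
const gltf = await loader.loadAsync('./avatar.vrm'); // or the file dropped in Step 1
const vrm = gltf.userData.vrm;
scene.add(vrm.scene);
```

Call `vrm.update(delta)` each frame so expressions and spring bones animate.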
AnimaSync also provides embedded VRMA bone animation clips for idle breathing and speaking gestures.
Process an audio file to get blendshape frames, then apply each frame's 52 ARKit blendshape values to the VRM's expressionManager inside the render loop. AnimaSync outputs frames at 30 fps.
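The playback loop can be sketched like this. The 30 fps rate and the `expressionManager` target are from the text; the assumption that each frame is a `{ name: weight }` map is mine — depending on the model, ARKit blendshape names may need mapping onto the expressions actually registered on the VRM.

```javascript
// Map elapsed playback time to a frame index at a fixed frame rate.
function frameIndexAt(elapsedSeconds, fps = 30) {
  return Math.floor(elapsedSeconds * fps);
}

// Apply one frame of blendshape weights to the VRM (sketch; assumes each
// frame is a { name: weight } object whose keys match expressions on the
// model -- an ARKit-to-VRM name mapping may be required in practice).
function applyFrame(vrm, frame) {
  for (const [name, weight] of Object.entries(frame)) {
    vrm.expressionManager.setValue(name, weight);
  }
}

// Render loop: pick the frame for the current time and apply it.
function startPlayback(vrm, frames, renderer, scene, camera) {
  const start = performance.now();
  function tick() {
    const elapsed = (performance.now() - start) / 1000;
    const i = Math.min(frameIndexAt(elapsed), frames.length - 1);
    applyFrame(vrm, frames[i]);
    vrm.update(1 / 60); // advances expressions and spring bones
    renderer.render(scene, camera);
    if (i < frames.length - 1) requestAnimationFrame(tick);
  }
  requestAnimationFrame(tick);
}
```

Indexing frames by wall-clock time (rather than incrementing a counter) keeps the 30 fps lip-sync data in sync even when the render loop runs at 60 or 120 Hz.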
For real-time lip sync, capture microphone audio with an AudioWorklet at 16 kHz,
then feed 100 ms chunks to processAudioChunk(). Pipeline latency is ~130-300 ms.
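At 16 kHz, a 100 ms chunk is 1,600 samples, but AudioWorklet callbacks deliver audio in much smaller blocks (128 frames), so the samples need to be accumulated. A small accumulator sketch, with the wiring to `processAudioChunk()` shown as hedged comments since its exact signature isn't given above:

```javascript
// Accumulates incoming Float32 sample blocks and emits fixed-size chunks.
// 1600 samples = 100 ms at 16 kHz.
class ChunkAccumulator {
  constructor(chunkSize = 1600) {
    this.chunkSize = chunkSize;
    this.buffer = new Float32Array(0);
  }

  // Append a block of samples; returns an array of completed chunks
  // (possibly empty) and keeps the remainder for the next call.
  push(samples) {
    const merged = new Float32Array(this.buffer.length + samples.length);
    merged.set(this.buffer);
    merged.set(samples, this.buffer.length);

    const chunks = [];
    let offset = 0;
    while (merged.length - offset >= this.chunkSize) {
      chunks.push(merged.slice(offset, offset + this.chunkSize));
      offset += this.chunkSize;
    }
    this.buffer = merged.slice(offset);
    return chunks;
  }
}

// Browser wiring (sketch): an AudioWorklet posts its 128-frame blocks to
// the main thread, where each completed 100 ms chunk is fed to AnimaSync.
// const acc = new ChunkAccumulator();
// workletNode.port.onmessage = async ({ data }) => {
//   for (const chunk of acc.push(data)) {
//     const frames = await lipsync.processAudioChunk(chunk); // per the docs
//     applyLatestFrame(frames); // hypothetical helper from your render code
//   }
// };
```

Buffering adds at most one chunk (100 ms) on top of the model's own inference time, which is consistent with the ~130-300 ms pipeline latency quoted above.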