How to integrate your custom VoiceLib with our lightweight browser-side SDK.
There are two ways to do this: use our pre-built SDK, which handles audio capture, transcription, and intent parsing with zero hassle, or use Transformers.js directly to load the model and parse the exported config yourself.
Either way, start by exporting your configuration: in your project dashboard, click Export Config. This downloads a JSON file containing your intents, examples, and variables.
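The exact contents of your export will differ, but the shape looks roughly like this (field names are inferred from the transform code below, which also assumes the file's top level is an array of intents):

```json
[
  {
    "intent_action": "lights_on",
    "sentences": ["Turn on kitchen lights", "Switch on the lights in the kitchen"],
    "variables": ["room"]
  }
]
```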
FunctionGemma requires you to define the available tools (functions) in the system prompt, so the exported VoiceLib JSON needs to be transformed into a function-signature format:

```js
import voicelib from './voicelib.json';

// Map each VoiceLib intent to a FunctionGemma-style tool definition
const tools = voicelib.map(intent => ({
name: intent.intent_action,
description: `Triggers when user says something like: ${intent.sentences[0]}`,
parameters: {
type: "object",
properties: intent.variables.reduce((acc, v) => ({ ...acc, [v]: { type: "string" } }), {}),
required: intent.variables
}
}));
const systemPrompt = `You are a helpful AI assistant. You have access to the following tools:
${JSON.stringify(tools, null, 2)}
If the user's request matches a tool, output a JSON object with the tool name and arguments.
`;
```
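Given the sample export above, the first entry of `tools` comes out like this:

```json
{
  "name": "lights_on",
  "description": "Triggers when user says something like: Turn on kitchen lights",
  "parameters": {
    "type": "object",
    "properties": { "room": { "type": "string" } },
    "required": ["room"]
  }
}
```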
The simplest way to integrate is with our SDK. Download nanovoice.ts and worker.ts from the project's "Embed" tab.

```js
import { NanoVoice } from './nanovoice';
import voicelib from './my_voicelib.json';
const ai = new NanoVoice({
workerPath: '/worker.ts',
intents: voicelib,
onStatusChange: (status) => console.log('AI Status:', status)
});
ai.on('intent', (result) => {
console.log('Recognized Intent:', result.tool, result.arguments);
});
// Start listening
ai.start();
```
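In a real app you will want to route recognized intents to handlers rather than just logging them. A minimal sketch, using the intent name from the sample export above (the handlers map is hypothetical):

```js
// Hypothetical map from intent names to application logic
const handlers = {
  lights_on: (args) => console.log(`Lights on in: ${args.room}`),
};

ai.on('intent', (result) => {
  const handler = handlers[result.tool];
  if (handler) {
    handler(result.arguments); // arguments extracted by the model
  } else {
    console.warn(`No handler for intent: ${result.tool}`);
  }
});
```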
You can use @huggingface/transformers to run FunctionGemma (270M) directly in the browser with WebGPU acceleration.

```js
import { pipeline } from '@huggingface/transformers';
// Load model with WebGPU support
const generator = await pipeline('text-generation', 'onnx-community/functiongemma-270m-it-ONNX', {
device: 'webgpu', // Use 'webgpu' for maximum performance
dtype: 'fp32'
});
// Run
const output = await generator(systemPrompt + "\nUser: Turn on kitchen lights\nModel:", {
max_new_tokens: 128,
return_full_text: false,
});
console.log(output);
```
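The system prompt asks the model to answer with a JSON object naming the tool and its arguments, so parse the generated text defensively; small models do not always emit valid JSON. A minimal sketch (the `name`/`arguments` keys are an assumption, mirroring the tool definitions above):

```js
// output[0].generated_text holds the completion (return_full_text: false)
try {
  const call = JSON.parse(output[0].generated_text.trim());
  console.log('Tool:', call.name, 'Arguments:', call.arguments);
} catch (err) {
  console.warn('Model did not return valid JSON:', err);
}
```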
For the best experience, ensure WebGPU is enabled in your browser. In Chrome you can force-enable it with these launch flags (the command below is for macOS; on other platforms, pass the same flags to the Chrome binary):

```bash
open -a "Google Chrome" --args --enable-features=Vulkan --enable-unsafe-webgpu
```
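You can also feature-detect WebGPU at runtime and fall back to the WASM backend instead of relying on flags. A minimal sketch, loading the same model as above:

```js
// navigator.gpu is only defined when the browser exposes WebGPU
const device = 'gpu' in navigator ? 'webgpu' : 'wasm';

const generator = await pipeline('text-generation', 'onnx-community/functiongemma-270m-it-ONNX', {
  device,
  dtype: 'fp32',
});
```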