Firebase AI (Gemini)
svelte-firekit wraps Firebase AI (powered by Gemini) in three reactive primitives:
| Primitive | Use case |
|---|---|
| `firekitGenerate` | One-shot text or multimodal generation |
| `firekitStream` | Token-by-token streaming generation |
| `firekitChat` | Stateful multi-turn conversation |
One-shot generation
```svelte
<script lang="ts">
  import { firekitGenerate } from 'svelte-firekit';

  const gen = firekitGenerate({ model: 'gemini-2.0-flash' });

  async function summarize() {
    await gen.generate('Summarize the key benefits of Svelte 5 runes in 3 bullet points.');
  }
</script>

<button onclick={summarize} disabled={gen.loading}>Summarize</button>

{#if gen.loading}
  <p>Generating…</p>
{:else if gen.text}
  <p>{gen.text}</p>
{/if}
```

Instance properties
| Property | Type | Description |
|---|---|---|
| `text` | `string` | Generated text (empty until complete) |
| `loading` | `boolean` | `true` while generating |
| `error` | `Error \| null` | Set if generation fails |
Streaming generation
firekitStream updates `text` token-by-token as the model generates.

```svelte
<script lang="ts">
  import { firekitStream } from 'svelte-firekit';

  const stream = firekitStream({ model: 'gemini-2.0-flash' });
</script>

<button onclick={() => stream.generate('Write a short poem about SvelteKit.')}>
  Generate
</button>

<p>{stream.text}</p>

{#if stream.streaming}
  <span class="cursor">▋</span>
{/if}
```

Instance properties
| Property | Type | Description |
|---|---|---|
| `text` | `string` | Accumulated text so far |
| `streaming` | `boolean` | `true` while tokens are being received |
| `error` | `Error \| null` | Set if the stream fails |
Multi-turn chat
firekitChat maintains conversation history and sends it with each message.

```svelte
<script lang="ts">
  import { firekitChat } from 'svelte-firekit';

  const chat = firekitChat({ model: 'gemini-2.0-flash' });

  let input = $state('');

  async function send() {
    const message = input;
    input = '';
    await chat.send(message);
  }
</script>

<div class="messages">
  {#each chat.history as turn}
    <div class="message {turn.role}">
      {#each turn.parts as part}
        {part.text}
      {/each}
    </div>
  {/each}

  {#if chat.pendingText}
    <div class="message model">{chat.pendingText}</div>
  {/if}
</div>

<input bind:value={input} onkeydown={(e) => e.key === 'Enter' && send()} />
<button onclick={send} disabled={chat.streaming}>Send</button>
```

Instance properties
| Property | Type | Description |
|---|---|---|
| `history` | `ChatMessage[]` | Full conversation history |
| `pendingText` | `string` | Partial response being streamed |
| `streaming` | `boolean` | `true` while the response is streaming |
| `error` | `Error \| null` | Set if the request fails |
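The history shape used in the template above (`{ role, parts: [{ text }] }`) can be flattened for logging or export. A minimal sketch, with the `ChatMessage` fields inferred from that example rather than from the library's type declarations:

```typescript
// Minimal shapes inferred from the template example above; the library's
// actual ChatMessage type may carry more fields.
type Part = { text?: string };
type ChatMessage = { role: string; parts: Part[] };

// Flatten a conversation into a plain-text transcript.
function toTranscript(history: ChatMessage[]): string {
  return history
    .map((turn) => `${turn.role}: ${turn.parts.map((p) => p.text ?? '').join('')}`)
    .join('\n');
}
```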
Clear history
```ts
chat.clearHistory();
```
Backends: Google AI vs Vertex AI

By default, Firebase AI uses the Google AI backend. Switch to Vertex AI:

```ts
import { firekitGenerate, firekitStream, firekitChat } from 'svelte-firekit';

const gen = firekitGenerate({ backend: 'vertexai', model: 'gemini-2.0-flash' });
const stream = firekitStream({ backend: 'vertexai', model: 'gemini-2.0-flash' });
const chat = firekitChat({ backend: 'vertexai', model: 'gemini-2.0-flash' });
```

Or import the backend classes directly:
```ts
import { GoogleAIBackend, VertexAIBackend } from 'svelte-firekit';
```
Multimodal content

Use content helpers to build multimodal prompts:

```ts
import { firekitGenerate, textPart, imagePart, imageUrlPart } from 'svelte-firekit';

const gen = firekitGenerate({ model: 'gemini-2.0-flash' });

// Image from base64
const base64 = await fileToBase64(imageFile);
await gen.generate([
  textPart('What is in this image?'),
  imagePart(base64, 'image/jpeg'),
]);

// Image from URL
await gen.generate([
  textPart('Describe this image:'),
  imageUrlPart('https://example.com/photo.jpg'),
]);

console.log(gen.text);
```
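The base64 example assumes a `fileToBase64` helper, which is not part of svelte-firekit. One possible sketch, producing the raw base64 string (no `data:` prefix) that `imagePart` appears to expect:

```typescript
// Hypothetical helper assumed by the example above: converts a File/Blob
// to a raw base64 string. Uses Buffer (Node); in the browser, use a
// FileReader with readAsDataURL and strip the "data:...;base64," prefix.
async function fileToBase64(file: Blob): Promise<string> {
  const buffer = await file.arrayBuffer();
  return Buffer.from(buffer).toString('base64');
}
```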
Generation config & safety settings

```ts
import { firekitGenerate } from 'svelte-firekit';
import type { FirekitAIOptions, GenerationConfig, SafetySetting } from 'svelte-firekit';

const options: FirekitAIOptions = {
  model: 'gemini-2.0-flash',
  generationConfig: {
    temperature: 0.7,
    topP: 0.9,
    maxOutputTokens: 1024,
  } satisfies GenerationConfig,
};

const gen = firekitGenerate(options);
```
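The `SafetySetting` type is imported above but not exercised; presumably it feeds a `safetySettings` option. A hypothetical fragment, assuming svelte-firekit forwards these settings to the underlying Firebase AI SDK and that the `HarmCategory` / `HarmBlockThreshold` enums are re-exported or importable — verify both against the library before relying on this:

```ts
// Hypothetical: the option name and enum imports below are assumptions,
// not confirmed by this page.
const guardedOptions: FirekitAIOptions = {
  model: 'gemini-2.0-flash',
  safetySettings: [
    {
      category: HarmCategory.HARM_CATEGORY_HARASSMENT,
      threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    },
  ] satisfies SafetySetting[],
};
```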