
# Firebase AI (Gemini)

svelte-firekit wraps Firebase AI (powered by Gemini) in three reactive primitives:

| Primitive | Use case |
| --- | --- |
| `firekitGenerate` | One-shot text or multimodal generation |
| `firekitStream` | Token-by-token streaming generation |
| `firekitChat` | Stateful multi-turn conversation |
## firekitGenerate

`firekitGenerate` runs a single request and exposes the result reactively:

```svelte
<script lang="ts">
  import { firekitGenerate } from 'svelte-firekit';

  const gen = firekitGenerate({ model: 'gemini-2.0-flash' });

  async function summarize() {
    await gen.generate('Summarize the key benefits of Svelte 5 runes in 3 bullet points.');
  }
</script>

<button onclick={summarize} disabled={gen.loading}>Summarize</button>

{#if gen.loading}
  <p>Generating…</p>
{:else if gen.text}
  <p>{gen.text}</p>
{/if}
```
| Property | Type | Description |
| --- | --- | --- |
| `text` | `string` | Generated text (empty until complete) |
| `loading` | `boolean` | `true` while generating |
| `error` | `Error \| null` | Set if generation fails |

## firekitStream

`firekitStream` updates `text` token-by-token as the model generates:

```svelte
<script lang="ts">
  import { firekitStream } from 'svelte-firekit';

  const stream = firekitStream({ model: 'gemini-2.0-flash' });
</script>

<button onclick={() => stream.generate('Write a short poem about SvelteKit.')}>
  Generate
</button>

<p>{stream.text}</p>

{#if stream.streaming}
  <span class="cursor"></span>
{/if}
```
| Property | Type | Description |
| --- | --- | --- |
| `text` | `string` | Accumulated text so far |
| `streaming` | `boolean` | `true` while tokens are being received |
| `error` | `Error \| null` | Set if the stream fails |
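The accumulation pattern behind this API can be sketched independently of Firebase: append chunks from an async iterable into a growing string, with a `streaming` flag around the loop. `fakeModel` below is a stand-in for the real Gemini stream, not part of svelte-firekit.

```typescript
// Sketch of token-by-token accumulation, the pattern firekitStream exposes
// as `text` and `streaming`. fakeModel stands in for the Gemini response.
async function* fakeModel(): AsyncGenerator<string> {
  for (const token of ['Svelte', 'Kit ', 'poem']) yield token;
}

async function streamToState(source: AsyncIterable<string>) {
  const state = { text: '', streaming: true, error: null as Error | null };
  try {
    for await (const chunk of source) {
      state.text += chunk; // each chunk appends, so the UI can render partial text
    }
  } catch (e) {
    state.error = e as Error;
  } finally {
    state.streaming = false; // cleared whether the stream finishes or fails
  }
  return state;
}
```

In the real primitive, `text` and `streaming` are reactive, so Svelte re-renders on every appended chunk.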

## firekitChat

`firekitChat` maintains the conversation history and sends it with each message:

```svelte
<script lang="ts">
  import { firekitChat } from 'svelte-firekit';

  const chat = firekitChat({ model: 'gemini-2.0-flash' });

  let input = $state('');

  async function send() {
    const message = input;
    input = '';
    await chat.send(message);
  }
</script>

<div class="messages">
  {#each chat.history as turn}
    <div class="message {turn.role}">
      {#each turn.parts as part}
        {part.text}
      {/each}
    </div>
  {/each}

  {#if chat.pendingText}
    <div class="message model">{chat.pendingText}</div>
  {/if}
</div>

<input bind:value={input} onkeydown={(e) => e.key === 'Enter' && send()} />
<button onclick={send} disabled={chat.streaming}>Send</button>
```
| Property | Type | Description |
| --- | --- | --- |
| `history` | `ChatMessage[]` | Full conversation history |
| `pendingText` | `string` | Partial response being streamed |
| `streaming` | `boolean` | `true` while the response is streaming |
| `error` | `Error \| null` | Set if the request fails |
Call `clearHistory` to reset the conversation:

```ts
chat.clearHistory();
```

## Backends

By default, Firebase AI uses the Google AI backend. Switch to Vertex AI:

```ts
import { firekitGenerate, firekitStream, firekitChat } from 'svelte-firekit';

const gen = firekitGenerate({ backend: 'vertexai', model: 'gemini-2.0-flash' });
const stream = firekitStream({ backend: 'vertexai', model: 'gemini-2.0-flash' });
const chat = firekitChat({ backend: 'vertexai', model: 'gemini-2.0-flash' });
```

Or import the backend classes directly:

```ts
import { GoogleAIBackend, VertexAIBackend } from 'svelte-firekit';
```
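This guide only demonstrates the string form (`backend: 'vertexai'`); how an explicit instance is wired in is not shown. A plausible shape, assuming the primitives also accept a backend instance and that `VertexAIBackend` takes an optional region like its counterpart in the underlying Firebase AI SDK:

```typescript
// Hypothetical wiring — the instance-based form is an assumption here,
// not confirmed by this guide.
import { firekitGenerate, VertexAIBackend } from 'svelte-firekit';

const gen = firekitGenerate({
  backend: new VertexAIBackend('us-central1'), // region, per the Firebase AI SDK
  model: 'gemini-2.0-flash',
});
```

An instance would be useful when the string shorthand cannot express what you need, such as a non-default Vertex AI region.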

## Multimodal prompts

Use the content helpers to build multimodal prompts:

```ts
import { firekitGenerate, textPart, imagePart, imageUrlPart } from 'svelte-firekit';

const gen = firekitGenerate({ model: 'gemini-2.0-flash' });

// Image from base64 (fileToBase64 converts a File to a raw base64 string)
const base64 = await fileToBase64(imageFile);
await gen.generate([
  textPart('What is in this image?'),
  imagePart(base64, 'image/jpeg'),
]);

// Image from URL
await gen.generate([
  textPart('Describe this image:'),
  imageUrlPart('https://example.com/photo.jpg'),
]);

console.log(gen.text);
```
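`fileToBase64` is referenced above but not defined in this guide. A minimal sketch, assuming it should return the raw base64 payload (no `data:...;base64,` prefix), which is what `imagePart`-style helpers typically expect alongside a separate MIME type:

```typescript
// Hypothetical helper matching the fileToBase64 call above: reads a
// File/Blob and returns its bytes as a raw base64 string.
async function fileToBase64(file: Blob): Promise<string> {
  const bytes = new Uint8Array(await file.arrayBuffer());
  let binary = '';
  for (const byte of bytes) binary += String.fromCharCode(byte);
  return btoa(binary); // btoa encodes the binary string as base64
}
```

In a Svelte component, the `File` would usually come from an `<input type="file">` change event.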
## Generation config

Pass a `generationConfig` to tune output:

```ts
import { firekitGenerate } from 'svelte-firekit';
import type { FirekitAIOptions, GenerationConfig, SafetySetting } from 'svelte-firekit';

const options: FirekitAIOptions = {
  model: 'gemini-2.0-flash',
  generationConfig: {
    temperature: 0.7,
    topP: 0.9,
    maxOutputTokens: 1024,
  } satisfies GenerationConfig,
};

const gen = firekitGenerate(options);
```
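The `SafetySetting` type is imported above but not shown in use. A hedged sketch of how safety settings would plug in, assuming svelte-firekit re-exports the `HarmCategory` and `HarmBlockThreshold` enums from the underlying Firebase AI SDK and that `FirekitAIOptions` accepts a `safetySettings` array:

```typescript
// Assumption: the enums and the safetySettings option mirror the
// Firebase AI SDK; neither is confirmed by this guide.
import { firekitGenerate, HarmCategory, HarmBlockThreshold } from 'svelte-firekit';
import type { SafetySetting } from 'svelte-firekit';

const safetySettings: SafetySetting[] = [
  {
    category: HarmCategory.HARM_CATEGORY_HARASSMENT,
    threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
  },
];

const gen = firekitGenerate({
  model: 'gemini-2.0-flash',
  safetySettings,
});
```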