# AI
Shipkit includes AI features at two levels: browser-based inference that runs entirely client-side, and cloud provider integrations for OpenAI and Anthropic.
## Browser-Based AI (SmolLM)
The headline feature is SmolLM running in the browser via WebGPU. No server required, no API keys, works offline after the model loads.
- Model: HuggingFace SmolLM2-1.7B
- Runtime: Transformers.js + ONNX Runtime via WebGPU
- Model size: ~50MB (downloaded once, cached by browser)
- Privacy: 100% local processing, no data sent to any server
The component lives at `src/components/blocks/ai/smollm-web/ai-smollm-webgpu.tsx`.
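Since WebGPU is not available in every browser, a component like this typically checks for support before loading the model. A minimal sketch (the `supportsWebGPU` helper is hypothetical, not part of Shipkit): the WebGPU spec exposes its entry point as `navigator.gpu`, so its absence means the demo should fall back gracefully.

```typescript
// Hypothetical helper: gate the SmolLM demo on WebGPU availability.
// Per the WebGPU spec, `navigator.gpu` is the API entry point; it is
// absent in browsers and runtimes without WebGPU support.
function supportsWebGPU(): boolean {
  return typeof navigator !== "undefined" && "gpu" in navigator;
}

// Usage sketch: show a notice (or switch to a cloud provider)
// when the browser cannot run the model locally.
// if (!supportsWebGPU()) showFallbackNotice();
```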
## Cloud AI Providers
Set the relevant API key to enable a provider:

```bash
OPENAI_API_KEY=sk-...    # OpenAI
ANTHROPIC_API_KEY=sk-... # Anthropic (Claude)
```
The integration service (`src/server/services/integration-service.ts`) tracks which providers are configured and reports their status on the admin dashboard.
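The detection logic amounts to checking whether each provider's env var is set to a non-empty value. A sketch of that idea (this is illustrative, not the actual `integration-service.ts` code; the names `AiProvider` and `configuredAiProviders` are invented here):

```typescript
// Illustrative sketch: a provider counts as "configured" when its
// API key env var is present and non-empty.
type AiProvider = "openai" | "anthropic";

const PROVIDER_ENV_KEYS: Record<AiProvider, string> = {
  openai: "OPENAI_API_KEY",
  anthropic: "ANTHROPIC_API_KEY",
};

function configuredAiProviders(
  env: Record<string, string | undefined>,
): AiProvider[] {
  return (Object.keys(PROVIDER_ENV_KEYS) as AiProvider[]).filter((provider) =>
    Boolean(env[PROVIDER_ENV_KEYS[provider]]?.trim()),
  );
}

// Usage: configuredAiProviders(process.env) on the server.
```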
## Demo Pages
AI demos are at `/ai` and `/landing` in the app. Source: `src/app/(app)/(demo)/`.
## Demo Components
Shipkit includes several AI demo components you can use or adapt:
| Component | Location | What It Does |
|---|---|---|
| `AiDemo` | `src/components/blocks/ai-demo.tsx` | Basic prompt/response UI |
| `AiDemoCloud` | `src/components/blocks/ai-demo-cloud.tsx` | Cloud-based AI demo |
| `AiSection` | `src/components/blocks/ai-section.tsx` | Interactive section with v0.dev-style component generation |
| `AiLandingDemo` | `src/app/(app)/landing/_components/ai-landing-demo.tsx` | Landing page demo with WebGPU |
| `AiDemosLocal` | `src/app/(app)/landing/_components/ai-demos-local.tsx` | Dual demo: chat + voice recognition |
## Adding AI Server Actions
The infrastructure is ready for custom AI endpoints. Follow the existing server action pattern:
```ts
// src/server/actions/ai.ts
"use server";

import { auth } from "@/server/lib/auth";

export async function generateResponse(prompt: string) {
	const session = await auth({ protect: true });

	// Call the OpenAI/Anthropic API here and return the result
}
```
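Inside such an action you would typically POST to the provider's endpoint with `fetch`. A small sketch of building the request body for OpenAI's Chat Completions API (`POST https://api.openai.com/v1/chat/completions`); the helper name `buildChatRequest` and the default model are assumptions for illustration:

```typescript
// Sketch: the Chat Completions request body takes a model name and a
// list of role/content messages; the user's prompt becomes one message.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatRequest(
  prompt: string,
  model = "gpt-4o-mini",
): { model: string; messages: ChatMessage[] } {
  return {
    model,
    messages: [{ role: "user", content: prompt }],
  };
}

// The server action would then send this body with fetch(), adding an
// Authorization: Bearer header built from OPENAI_API_KEY.
```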
## AI Prompts for Development
Shipkit includes curated prompts for AI-powered IDE tools (Cursor, Copilot) in the `.cursor` directory. These help AI assistants understand the codebase structure and conventions.
## Dependencies
- `@huggingface/transformers` - Browser-based LLM inference
- `@huggingface/inference` - API-based inference
- ONNX Runtime Web - WebGPU execution backend