AI

Shipkit includes AI features at two levels: browser-based inference that runs entirely client-side, and cloud provider integrations for OpenAI and Anthropic.

Browser-Based AI (SmolLM)

The headline feature is SmolLM running in the browser via WebGPU. No server required, no API keys, works offline after the model loads.

  • Model: HuggingFace SmolLM2-1.7B
  • Runtime: Transformers.js + ONNX Runtime via WebGPU
  • Model size: ~50MB (downloaded once, then cached by the browser)
  • Privacy: 100% local processing, no data sent to any server

The component lives at src/components/blocks/ai/smollm-web/ai-smollm-webgpu.tsx.
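Before loading the model, the page has to decide whether WebGPU is available. A minimal sketch of such a check (the helper name and the WASM fallback are illustrative, not taken from the Shipkit component):

```typescript
// Illustrative device-selection helper, not Shipkit's actual code:
// prefer WebGPU when the runtime exposes navigator.gpu, otherwise fall
// back to the WASM backend that Transformers.js also supports.
export type InferenceDevice = "webgpu" | "wasm";

export function pickDevice(
  nav: { gpu?: unknown } | undefined = (globalThis as {
    navigator?: { gpu?: unknown };
  }).navigator
): InferenceDevice {
  return nav?.gpu !== undefined ? "webgpu" : "wasm";
}

// The result would then be passed to the Transformers.js pipeline, e.g.
// pipeline("text-generation", modelId, { device: pickDevice() })
```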

Cloud AI Providers

Set the relevant API key to enable each provider:

OPENAI_API_KEY=sk-...      # OpenAI
ANTHROPIC_API_KEY=sk-...   # Anthropic (Claude)

The integration service (src/server/services/integration-service.ts) tracks which providers are configured and reports status on the admin dashboard.
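The detection itself can be as simple as checking which keys are present in the environment. A hedged sketch (the function and its return shape are illustrative, not the actual integration-service API):

```typescript
// Illustrative provider detection based on environment variables.
// The real integration service may track more state than this.
type AiProvider = "openai" | "anthropic";

const PROVIDER_ENV_KEYS: Record<AiProvider, string> = {
  openai: "OPENAI_API_KEY",
  anthropic: "ANTHROPIC_API_KEY",
};

export function configuredProviders(
  env: Record<string, string | undefined> = process.env
): AiProvider[] {
  // A provider counts as configured when its key is set and non-empty
  return (Object.keys(PROVIDER_ENV_KEYS) as AiProvider[]).filter(
    (provider) => Boolean(env[PROVIDER_ENV_KEYS[provider]])
  );
}
```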

Demo Pages

AI demos are at /ai and /landing in the app. Source: src/app/(app)/(demo)/.

Demo Components

Shipkit includes several AI demo components you can use or adapt:

  • AiDemo (src/components/blocks/ai-demo.tsx) - Basic prompt/response UI
  • AiDemoCloud (src/components/blocks/ai-demo-cloud.tsx) - Cloud-based AI demo
  • AiSection (src/components/blocks/ai-section.tsx) - Interactive section with v0.dev-style component generation
  • AiLandingDemo (src/app/(app)/landing/_components/ai-landing-demo.tsx) - Landing page demo with WebGPU
  • AiDemosLocal (src/app/(app)/landing/_components/ai-demos-local.tsx) - Dual demo: chat + voice recognition

Adding AI Server Actions

The infrastructure is ready for custom AI endpoints. Follow the existing server action pattern:

// src/server/actions/ai.ts
"use server"

import { auth } from "@/server/lib/auth"

export async function generateResponse(prompt: string) {
  const session = await auth({ protect: true })
  // Call OpenAI/Anthropic API here
}
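To flesh out that placeholder comment, the action might simply use whichever provider is available. A sketch with the provider calls injected so the control flow is clear (this factory shape is illustrative, not Shipkit's actual code; the injected functions stand in for real SDK calls):

```typescript
// Illustrative dispatch: prefer OpenAI, fall back to Anthropic.
type CompletionFn = (prompt: string) => Promise<string>;

export function makeGenerateResponse(providers: {
  openai?: CompletionFn;
  anthropic?: CompletionFn;
}): CompletionFn {
  return async (prompt) => {
    const complete = providers.openai ?? providers.anthropic;
    if (!complete) throw new Error("No AI provider configured");
    return complete(prompt);
  };
}
```

In the real action, each `CompletionFn` would wrap the corresponding provider SDK, so the dispatch logic stays testable without network access.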

AI Prompts for Development

Shipkit includes curated prompts for AI-powered IDE tools (Cursor, Copilot) in the .cursor directory. These help AI assistants understand the codebase structure and conventions.

Dependencies

  • @huggingface/transformers - Browser-based LLM inference (Transformers.js)
  • @huggingface/inference - Hosted inference via the Hugging Face Inference API
  • ONNX Runtime Web - WebGPU execution backend