Gambiarra

A local-first LLM sharing system. Pool your Ollama, LM Studio, or any OpenAI-compatible endpoint with your team.

Local-First

Your data stays on your network. No cloud dependencies, no external APIs.

Resource Sharing

Pool LLM endpoints across your team. Share expensive GPU resources efficiently.

Universal Compatibility

Works with Ollama, LM Studio, LocalAI, vLLM, and any OpenAI-compatible API.
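Because all of these backends speak the same OpenAI-compatible protocol, a client only ever needs one request shape. A minimal sketch of that shared shape, assuming a Gambiarra gateway is reachable on your network (the URL and model name below are placeholders, not documented defaults):

```ts
// The same chat-completions request works whether the upstream is
// Ollama, LM Studio, LocalAI, or vLLM — that's the whole point of
// targeting the OpenAI-compatible API. Gateway URL and model name
// are assumptions for illustration.
const response = await fetch("http://localhost:4000/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama3.1:8b", // whatever model a pooled endpoint serves
    messages: [{ role: "user", content: "Hello from the pool!" }],
  }),
});

const completion = await response.json();
console.log(completion.choices[0].message.content);
```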

Vercel AI SDK

Drop-in replacement for your AI SDK workflows. Same API, shared resources.
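In practice, "drop-in" means pointing the AI SDK's standard OpenAI provider at the shared gateway instead of api.openai.com. A sketch using real AI SDK exports (`createOpenAI`, `generateText`); the gateway address is a hypothetical example:

```ts
import { createOpenAI } from "@ai-sdk/openai";
import { generateText } from "ai";

// Point the standard OpenAI provider at the shared gateway.
// The baseURL is a hypothetical Gambiarra address, not a default.
const gambiarra = createOpenAI({
  baseURL: "http://localhost:4000/v1",
  apiKey: "not-needed-locally", // local gateways typically ignore this
});

const { text } = await generateText({
  model: gambiarra("llama3.1:8b"), // any model served by the pool
  prompt: "Summarize what a local-first LLM pool is.",
});

console.log(text);
```

Nothing else in the application changes: the rest of your AI SDK code keeps calling the same functions, while requests are served by whichever pooled endpoint is available.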

Use Cases

  • Development Teams: Share expensive LLM endpoints across your team
  • Hackathons: Pool resources for AI projects
  • Research Labs: Coordinate LLM access across multiple workstations
  • Home Labs: Share your gaming PC’s LLM with your laptop
  • Education: Classroom environments where students share compute