# Quick Start
This guide will help you set up Gambiarra and start sharing LLMs on your network.
## Installation

The CLI allows you to start hubs, create rooms, and join as a participant.

Via curl (recommended):

```sh
curl -fsSL https://raw.githubusercontent.com/arthurbm/gambiarra/main/scripts/install.sh | bash
```

Via npm:

```sh
npm install -g gambiarra
```

Via bun:

```sh
bun add -g gambiarra
```

The SDK provides Vercel AI SDK integration for using shared LLMs in your applications.

```sh
npm install gambiarra-sdk
# or
bun add gambiarra-sdk
```
## Basic Usage

### 1. Start the Hub Server

```sh
gambiarra serve --port 3000 --mdns
```
### 2. Create a Room

```sh
gambiarra create
# Output: Room created! Code: ABC123
```
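Room codes like `ABC123` are short identifiers you share with participants. As an illustration only — the hub generates the real codes server-side, and its actual alphabet and length may differ — a 6-character alphanumeric code could be produced like this:

```typescript
// Hypothetical sketch: the hub generates real room codes server-side;
// the format shown here (6 chars, A-Z and 0-9) is an assumption.
const ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";

function makeRoomCode(length = 6): string {
  let code = "";
  for (let i = 0; i < length; i++) {
    // Pick one character at random from the allowed alphabet.
    code += ALPHABET[Math.floor(Math.random() * ALPHABET.length)];
  }
  return code;
}

console.log(makeRoomCode()); // e.g. a code in the style of "ABC123"
```

Short uppercase codes like this are easy to read aloud to people joining from other machines on the network.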
### 3. Join with Your LLM

```sh
gambiarra join ABC123 \
  --endpoint http://localhost:11434 \
  --model llama3 \
  --nickname joao
```
### 4. Use the SDK

```ts
import { createGambiarra } from "gambiarra-sdk";
import { generateText } from "ai";

const gambiarra = createGambiarra({
  roomCode: "ABC123",
  hubUrl: "http://localhost:3000",
});

const result = await generateText({
  model: gambiarra.any(),
  prompt: "Hello, Gambiarra!",
});

console.log(result.text);
```
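The `gambiarra.any()` call hands the Vercel AI SDK a model from whichever peer is available in the room. As a conceptual illustration only — this is not the SDK's actual implementation, and its real selection logic may differ — picking "any" peer could be as simple as rotating round-robin over the room's participants:

```typescript
// Conceptual sketch only: not gambiarra-sdk's actual code.
// Illustrates one way "pick any available peer" could work.
type Peer = { nickname: string; model: string };

function makeAnyPicker(peers: Peer[]): () => Peer {
  let next = 0;
  return () => {
    // Rotate through the peer list so requests spread across the room.
    const peer = peers[next % peers.length];
    next++;
    return peer;
  };
}

const pickAny = makeAnyPicker([
  { nickname: "joao", model: "llama3" },
  { nickname: "maria", model: "mistral" },
]);

console.log(pickAny().model); // llama3
console.log(pickAny().model); // mistral
```

However the SDK chooses, the point is the same: your application code asks for "a model in the room" rather than binding to one specific machine.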
## Next Steps

- Learn about CLI commands
- Explore SDK usage