CLI Reference
The Gambiarra CLI provides commands for managing hubs, rooms, and participants.
Commands
serve

Start a hub server.
```sh
gambiarra serve [options]
```

Options:

| Option | Description | Default |
|---|---|---|
| --port, -p | Port to listen on | 3000 |
| --mdns | Enable mDNS auto-discovery | false |

Example:
```sh
gambiarra serve --port 3000 --mdns
```
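Once the hub is running, other machines can point commands at it with --hub. As a quick sanity check (the address below is a placeholder for your hub machine's LAN IP):

```sh
# Verify the hub is reachable from another machine.
# 192.168.1.10 is a placeholder; substitute your hub's address.
gambiarra list --hub http://192.168.1.10:3000
```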
create

Create a new room.
```sh
gambiarra create [options]
```

Options:

| Option | Description | Default |
|---|---|---|
| --hub | Hub URL | Auto-discover or http://localhost:3000 |

Example:
```sh
gambiarra create
# Output: Room created! Code: ABC123
```
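For scripting, the room code can be captured from the output. This is a sketch that assumes the exact "Room created! Code: ..." format shown above:

```sh
# Extract the room code from the command's output (assumes the
# output format shown above; adjust the pattern if it changes).
CODE=$(gambiarra create | grep -o 'Code: [A-Z0-9]*' | cut -d' ' -f2)
echo "Joining room $CODE"
```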
join

Join a room with your LLM endpoint.

```sh
gambiarra join <room-code> [options]
```

Options:

| Option | Description | Default |
|---|---|---|
| --endpoint, -e | LLM endpoint URL | Required |
| --model, -m | Model name | Required |
| --nickname, -n | Your nickname | Random |
| --hub | Hub URL | Auto-discover |

Example:
```sh
gambiarra join ABC123 \
  --endpoint http://localhost:11434 \
  --model llama3 \
  --nickname alice
```
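The same command works with any provider from the table below. For example, against LM Studio's default endpoint (the model name here is a placeholder for whatever model you have loaded):

```sh
# LM Studio listens on port 1234 by default.
# "local-model" is a placeholder; use the model loaded in LM Studio.
gambiarra join ABC123 \
  --endpoint http://localhost:1234 \
  --model local-model \
  --nickname bob
```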
list

List available rooms.

```sh
gambiarra list [options]
```

Options:

| Option | Description | Default |
|---|---|---|
| --hub | Hub URL | Auto-discover |

Example:
```sh
gambiarra list
# Output:
# Available rooms:
# - ABC123 (3 participants)
# - XYZ789 (1 participant)
```
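Putting the commands together, a minimal session might look like this. A sketch assuming Ollama on its default port, with each command run in its own terminal:

```sh
# Terminal 1: start a hub with mDNS auto-discovery enabled.
gambiarra serve --mdns

# Terminal 2: create a room and note the code it prints.
gambiarra create

# Terminal 3: join with a local Ollama model (using the example code above).
gambiarra join ABC123 --endpoint http://localhost:11434 --model llama3
```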
Supported Providers

Gambiarra works with any OpenAI-compatible API:
| Provider | Default Endpoint |
|---|---|
| Ollama | http://localhost:11434 |
| LM Studio | http://localhost:1234 |
| LocalAI | http://localhost:8080 |
| vLLM | http://localhost:8000 |
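Before joining, you can sanity-check that an endpoint actually speaks the OpenAI-compatible chat API. This assumes the provider exposes the standard /v1/chat/completions route (the model name is a placeholder):

```sh
# Quick check against Ollama's OpenAI-compatible endpoint.
# "llama3" is a placeholder; use a model you have pulled locally.
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3", "messages": [{"role": "user", "content": "ping"}]}'
```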