# VAPORA Configuration Examples
Reference configurations for all deployment modes.
## Files Overview
### TOML Format (Direct Usage)
Copy and customize for your environment:
- **`vapora.solo.example.toml`** - Development mode (local, single-user)
- **`vapora.multiuser.example.toml`** - Team mode (shared infrastructure, cost-tracking)
- **`vapora.enterprise.example.toml`** - Production mode (HA, multi-provider, enterprise features)
**How to use:**
```bash
cp vapora.solo.example.toml ../runtime/vapora.toml
# Edit ../runtime/vapora.toml as needed
```
### Nickel Format (Generated Configs)
Use Nickel for composable, mergeable configurations:
- **`vapora.solo.example.ncl`** - Solo mode with composition
- **`vapora.multiuser.example.ncl`** - Multiuser mode with customization examples
- **`vapora.enterprise.example.ncl`** - Enterprise mode with tuning options
**How to use:**
```bash
# Export to JSON
nickel export vapora.solo.example.ncl > ../runtime/vapora.json
# Or pretty-print the exported JSON with jq (pipe through a JSON-to-TOML converter if you need TOML)
nickel export vapora.multiuser.example.ncl | jq . > ../runtime/vapora.json
```
## Quick Selection Guide
### I'm developing locally
→ Use `vapora.solo.example.toml`
- All services on localhost
- File-based database
- No authentication complexity
- Perfect for testing
### We're a small team
→ Use `vapora.multiuser.example.toml`
- Shared backend infrastructure
- Cost tracking per developer role
- MFA and audit logging
- Team collaboration ready
### We need production deployment
→ Use `vapora.enterprise.example.toml`
- High availability setup
- All LLM providers enabled
- Aggressive cost optimization
- Enterprise security features
## Common Customizations
### Change Backend Port
**TOML:**
```toml
[backend]
port = 9001
```
**Nickel:**
```nickel
{
  backend.port = 9001,
}
```
### Enable Ollama for Local LLMs
**TOML:**
```toml
[providers]
ollama_enabled = true
ollama_url = "http://localhost:11434"
```
**Nickel:**
```nickel
{
  providers.ollama_enabled = true,
}
```
### Adjust Agent Learning Window
**TOML:**
```toml
[agents.learning]
recency_window_days = 14
recency_multiplier = 3.5
```
**Nickel:**
```nickel
{
  agents.learning = {
    recency_window_days = 14,
    recency_multiplier = 3.5,
  },
}
```
### Set Role-Based Budgets
**TOML:**
```toml
[llm_router.budget_enforcement.role_limits]
architect_cents = 750000 # $7500/month
developer_cents = 500000 # $5000/month
```
**Nickel:**
```nickel
{
  llm_router.budget_enforcement.role_limits = {
    architect_cents = 750000,
    developer_cents = 500000,
  },
}
```
## Environment Variables Override
All settings can be overridden via environment variables:
```bash
# Backend settings
export VAPORA_BACKEND_PORT=9001
export VAPORA_BACKEND_WORKERS=8
# Database
export SURREAL_URL=ws://surrealdb.example.com:8000
# LLM Providers
export ANTHROPIC_API_KEY=sk-ant-xxx
export OPENAI_API_KEY=sk-xxx
export GOOGLE_API_KEY=xxx
export OLLAMA_URL=http://localhost:11434
```
## Deployment Checklist
### Before Using Solo Mode
- [ ] Single developer machine
- [ ] Local development only
- [ ] No sensitive data
### Before Using Multiuser Mode
- [ ] SurrealDB instance ready
- [ ] NATS cluster running
- [ ] Network connectivity tested
- [ ] TLS certificates available
### Before Using Enterprise Mode
- [ ] Kubernetes cluster (or equivalent) ready
- [ ] SurrealDB cluster configured
- [ ] NATS JetStream cluster running
- [ ] All TLS certificates prepared
- [ ] LLM provider accounts configured
- [ ] Backup strategy in place
- [ ] Monitoring/observability stack ready
## Validation
### TOML Files
```bash
# Syntax check
toml-cli validate vapora.solo.example.toml
# Or via Rust
cargo build -p vapora-backend --features toml-validate
```
### Nickel Files
```bash
# Type check
nickel typecheck vapora.solo.example.ncl
# Export and validate
nickel export vapora.solo.example.ncl | jq .
```
## Performance Notes
- **Solo mode**: 2-10 concurrent tasks (development)
- **Multiuser mode**: 50-100 concurrent tasks (team of 10-20)
- **Enterprise mode**: 500+ concurrent tasks (organization scale)
Adjust `max_instances` in agents config based on actual needs:
```toml
[agents]
max_instances = 50    # for a multiuser team
# max_instances = 100 # for enterprise scale
```
## Cost Estimation
### Typical Monthly Costs (Multiuser Mode)
With default role budgets:
- **Architect tasks**: $5000/month
- **Developer tasks**: $3000/month
- **Review tasks**: $2000/month
- **Testing**: $1000/month
- **Total budget**: $11,000/month
Adjust `role_limits` in `llm_router.budget_enforcement` as needed.
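For reference, those figures translate into cent-denominated limits roughly like the sketch below. `architect_cents` and `developer_cents` appear earlier in this document; the reviewer and tester key names are assumptions to verify against your example file.
```toml
[llm_router.budget_enforcement.role_limits]
architect_cents = 500000   # $5000/month
developer_cents = 300000   # $3000/month
reviewer_cents = 200000    # $2000/month (key name assumed)
tester_cents = 100000      # $1000/month (key name assumed)
```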
### Cost Optimization Tips
1. **Use Ollama** for development (free, local)
2. **Set realistic budgets** per role
3. **Enable cost tracking** for visibility
4. **Use cheaper providers** for testing (set in `fallback_chain`; see the sketch after this list)
5. **Monitor usage** via Prometheus metrics
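As a rough illustration of tip 4, the chain can be ordered cheapest-first. The key's exact location and the provider identifiers below are assumptions, so compare with the enterprise example file.
```toml
[llm_router]
# Providers tried in order; identifiers are illustrative, not confirmed names.
fallback_chain = ["ollama", "google", "openai", "anthropic"]
```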
## Troubleshooting
### "Connection refused" on localhost:8001
- Ensure backend config uses `127.0.0.1` for solo mode
- Check no other process is using port 8001
- Verify `[backend]` host and port settings
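For solo mode that block should look roughly like this (the `host` key name is assumed alongside the documented `port`):
```toml
[backend]
host = "127.0.0.1"   # bind to localhost in solo mode
port = 8001
```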
### "Database connection timeout"
- For solo: File path must be writable
- For multiuser: Verify SurrealDB is running and accessible
- Check `[database]` URL and credentials
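For multiuser mode the block should point at a reachable SurrealDB endpoint, roughly as below; key names other than the URL are assumptions, so compare with `vapora.multiuser.example.toml`.
```toml
[database]
url = "ws://surrealdb.example.com:8000"   # same endpoint as SURREAL_URL above
username = "vapora"                       # credential key names assumed
password = "change-me"
```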
### "Budget exceeded" warnings
- Review `role_limits` in `[llm_router.budget_enforcement]`
- Increase budgets for busy months
- Check `auto_fallback` is enabled
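A minimal sketch of the settings to review follows; whether `auto_fallback` lives directly under `budget_enforcement` is an assumption.
```toml
[llm_router.budget_enforcement]
auto_fallback = true       # fall back to cheaper providers instead of hard-failing

[llm_router.budget_enforcement.role_limits]
developer_cents = 500000   # raise limits for busy months
```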
## Next Steps
1. **Select a mode** based on your needs
2. **Copy example to `../runtime/`**
3. **Customize for your environment**
4. **Validate configuration**
5. **Deploy using docker-compose or Kubernetes**
For detailed instructions, see `../README.md`.
---
**Last Updated**: January 12, 2026