Deployment Guide
Hosting Options
Personal Hosting
- Local Device: Run Ona on your own machine for maximum privacy and direct local file access.
- Personal Server: Use a VPS or home server for an always-on personal runtime and multi-device access.
- Hybrid: Keep sensitive context local and route selected heavy workloads to cloud compute.
Family Hosting
- Host on a Raspberry Pi, NAS, or an old gaming computer for low-cost shared usage.
- Set role-based permissions for adults, guests, and child-safe automations.
- Use backup and health-check routines to keep household operations reliable.
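The role-based permissions above can be sketched as a simple lookup table. This is an illustrative model only; the role names and actions here are hypothetical, not Ona's actual permission schema.

```python
# Hypothetical role-to-permission map for a shared household deployment.
# Role and action names are illustrative, not Ona's actual schema.
PERMISSIONS = {
    "adult": {"read", "write", "automate", "admin"},
    "guest": {"read"},
    "child": {"read", "automate_safe"},  # child-safe automations only
}

def can(role: str, action: str) -> bool:
    """Return True if the given role is allowed to perform the action."""
    # Unknown roles get an empty permission set, so they are denied by default.
    return action in PERMISSIONS.get(role, set())
```

Denying unknown roles by default keeps guest devices and misconfigured accounts on the safe side.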
Company Hosting
- Deploy on cloud infrastructure, local servers, or a hybrid model.
- Use Sphere governance for shared files, shared models, approvals, and audit trails.
- Standardize observability, incident response, and rollout policies from day one.
AI Model Hosting
Ona can run cloud-hosted models, self-hosted local models, or hybrid routing across both. This lets you tune privacy, cost, and reasoning depth per mission.
Supported Backends
- Cloud APIs: Anthropic, OpenAI, Groq, Gemini, OpenRouter.
- Local model server: Ollama.
- Docker-hosted model services (self-managed containers).
- OpenAI-compatible servers: llama.cpp, LM Studio, vLLM.
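Because the local backends expose OpenAI-compatible endpoints, selecting one mostly means pointing a client at the right base URL. The sketch below uses the commonly documented default ports for each server; verify them against your installed versions, since defaults can change.

```python
# Commonly documented default local endpoints for OpenAI-compatible
# backends (Ollama, LM Studio, vLLM). Verify against your installation.
LOCAL_BACKENDS = {
    "ollama": "http://localhost:11434/v1",
    "lm_studio": "http://localhost:1234/v1",
    "vllm": "http://localhost:8000/v1",
}

def endpoint_for(backend: str) -> str:
    """Look up the base URL for a local OpenAI-compatible backend."""
    try:
        return LOCAL_BACKENDS[backend]
    except KeyError:
        raise ValueError(f"unsupported backend: {backend}") from None
```

Any OpenAI-style client can then be configured with the returned base URL, keeping the rest of the stack unchanged when you swap backends.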
Routing Modes
- Cloud-only for low-maintenance operation.
- Self-hosted only for maximum local control.
- Hybrid for local-first privacy with cloud escalation.
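The three routing modes above reduce to a small decision function. This is a minimal sketch of the hybrid policy, with hypothetical flag names; the actual routing logic is described on the Models page.

```python
def route(task_is_sensitive: bool, needs_deep_reasoning: bool,
          mode: str = "hybrid") -> str:
    """Pick a model tier ("local" or "cloud") under the three routing modes."""
    if mode == "cloud-only":
        return "cloud"
    if mode == "self-hosted":
        return "local"
    # Hybrid: local-first for privacy; escalate to cloud only for
    # heavy, non-sensitive workloads.
    if task_is_sensitive:
        return "local"
    return "cloud" if needs_deep_reasoning else "local"
```

Note the ordering in hybrid mode: sensitivity is checked before reasoning depth, so private context never escalates even when the task is heavy.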
Recommended Pattern
- Keep Solin and core workflows on local models.
- Escalate deep research and online tasks to larger cloud models.
- Store accepted escalated results in memory with metadata.
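The last step, storing accepted escalated results with metadata, can be sketched as a small record type. The field names here are hypothetical, not Ona's actual memory schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MemoryEntry:
    """Hypothetical memory record for an accepted cloud escalation."""
    content: str        # the escalated result itself
    source_model: str   # which cloud model produced it
    mission: str        # the mission it was produced for
    accepted: bool = True
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def store(memory: list, entry: MemoryEntry) -> None:
    """Append only user-accepted results, keeping provenance metadata."""
    if entry.accepted:
        memory.append(entry)
```

Keeping the source model and timestamp with each entry lets later local-only sessions weigh cloud-derived memories appropriately.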
For full model architecture details, role routing, dual-bot separation, and production validation, see the dedicated Models page.