Llaminal is the ultimate CLI companion for your local Ollama instance. Stop wrestling with raw API calls and curl commands. Embrace the terminal with style.
Featuring a cyberpunk aesthetic, Llaminal gives you a powerful, persistent, and multimodal interface to your local LLMs.
- 🗣️ Interactive Chat: Full REPL with history, slash commands, and session management.
- 👁️ Multimodal Support: Drag and drop images into your terminal chat to use vision models.
- 📚 RAG & Context: Pipe files directly into `ask`, or use `/add` to read entire directories into context.
- 💾 Sessions: Save, list, and reload your conversations anytime.
- 🎨 Rich UI: Beautiful markdown rendering, tables, and spinners.
- 🛠️ Model Ops: Pull, show, and delete models without leaving the tool.
- 🔁 Resilient: Automatic retry on transient network errors, with helpful diagnostics.
```bash
# Clone the repository
git clone <repo-url>
cd Llaminal

# Install locally in a virtual environment
python3 -m venv venv
source venv/bin/activate
pip install -e .
```

1. The "One-Shot" Ask

Perfect for quick scripts or piped debugging.
```bash
# Simple question
llaminal ask "What is the capital of Peru?"

# Debugging a log file
cat error.log | llaminal ask "Explain this error and fix it"
```

2. The Interactive Chat

Your main command center.
```bash
llaminal chat
```

- Type `/help` to see all commands.
- Type `/image ./path/to/img.png` to attach an image.
- Type `/add ./src` to read your codebase into context.
3. Model Management
```bash
llaminal list
llaminal pull tinyllama
llaminal show llama3.2
```

Control Llaminal via environment variables:
| Variable | Default | Description |
|---|---|---|
| `OLLAMA_HOST` | `http://localhost:11434` | URL of your Ollama server. |
| `LLAMINAL_MODEL` | `llama3.2` | Default model for `ask` and `chat`. |
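For example, to point Llaminal at a remote Ollama server and switch the default model for the current shell session (the host address below is only a placeholder, not a real endpoint):

```shell
# Hypothetical remote host — substitute your own Ollama server address.
export OLLAMA_HOST=http://192.168.1.50:11434

# Use a lighter default model for quick one-shot questions.
export LLAMINAL_MODEL=tinyllama
```

Open a new shell (or `unset` the variables) to fall back to the defaults listed above.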
If you love Llaminal, consider supporting the development!
Built with ❤️ by a developer who loves the terminal.
