# yuy-chat

```
$$\     $$\
\$$\   $$  |
 \$$\ $$  /$$\   $$\ $$\   $$\
  \$$$$  / $$ |  $$ |$$ |  $$ |
   \$$  /  $$ |  $$ |$$ |  $$ |
    $$ |   $$ |  $$ |$$ |  $$ |
    $$ |   \$$$$$$  |\$$$$$$$ |
    \__|    \______/  \____$$ |
                     $$\   $$ |
                     \$$$$$$  |
                      \______/
```
Beautiful TUI chat interface for local AI models
## 🌟 Features

- ✨ **Beautiful TUI** - Gorgeous terminal interface powered by ratatui
- 🔍 **Auto-discovery** - Automatically finds `.gguf` and `.llamafile` models
- 🎨 **Presets** - Creative, Balanced, and Precise modes
- 💾 **Save conversations** - Keep your chat history
- 🌐 **HuggingFace API** - Use models from HuggingFace (optional)
- ⚡ **Fast & Lightweight** - ~5MB binary, minimal dependencies
- 🚀 **Streaming responses** - See words appear as they're generated
- 🎯 **Zero configuration** - Just run and chat
## 📦 Installation

From source:

```bash
git clone https://github.com/YuuKi-OS/yuy-chat
cd yuy-chat
cargo build --release
```

Install globally:

```bash
cargo install --path .
```
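If the `yuy-chat` command is not found after installing, the binary most likely sits in `~/.cargo/bin`, which needs to be on your `PATH`. A quick check (the fallback message below is ours, not yuy-chat output):

```bash
# cargo install places binaries in ~/.cargo/bin by default;
# confirm the installed binary is reachable from your shell.
command -v yuy-chat || echo 'yuy-chat not on PATH; add "$HOME/.cargo/bin" to your PATH'
```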
## 🚀 Quick Start

```bash
# Run yuy-chat
yuy-chat

# It will auto-scan ~/.yuuki/models/ for .gguf and .llamafile files
# Select a model and start chatting!
```
## 📁 Supported Model Formats

- ✅ **GGUF** (`.gguf`) - Runs with llama.cpp
- ✅ **Llamafile** (`.llamafile`) - Self-contained executables
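As a quick sanity check on a downloaded model: GGUF files begin with the 4-byte ASCII magic `GGUF`. The helper function and file path below are illustrative, not part of yuy-chat:

```bash
# Check the 4-byte GGUF magic at the start of a file.
check_gguf() {
  if [ "$(head -c 4 "$1" 2>/dev/null)" = "GGUF" ]; then
    echo "looks like GGUF"
  else
    echo "not a GGUF file"
  fi
}

check_gguf "$HOME/.yuuki/models/model.gguf"  # path is hypothetical
```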
## 🎮 Controls

### Model Selector

- `↑`/`↓` or `j`/`k` - Navigate models
- `Enter` - Select model
- `R` - Refresh model list
- `Q` - Quit

### Chat

- Type - Write your message
- `Enter` - Send message
- `Shift+Enter` - New line
- `Ctrl+Enter` - Send (always)
- `Ctrl+C` - Open menu
- `Ctrl+L` - Clear chat
- `Ctrl+S` - Save conversation
- `↑`/`↓` - Scroll chat (when input is empty)

### Menu

- `1` - Change model
- `2` - Change preset
- `3` - Save conversation
- `4` - Load conversation
- `5` - Clear chat
- `6` - Settings
- `Q` - Back to chat
## ⚙️ Configuration

Config file location: `~/.config/yuy-chat/config.toml`

```toml
models_dir = "/home/user/.yuuki/models"
hf_token = "hf_xxxxxxxxxxxxx"  # Optional
default_preset = "Balanced"
save_history = true
theme = "Dark"
```
## 🎯 Presets

- **Creative** (temp: 0.8, top_p: 0.9) - More random and creative
- **Balanced** (temp: 0.6, top_p: 0.7) - Good middle ground
- **Precise** (temp: 0.3, top_p: 0.5) - More focused and deterministic
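To make one of these presets the startup default, set `default_preset` in the config file from the Configuration section (the valid values are assumed to match the preset names above):

```toml
# ~/.config/yuy-chat/config.toml
default_preset = "Precise"
```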
## 🌐 HuggingFace Integration

Add your HuggingFace token in settings to use models via API:

- Press `Ctrl+C` → `6` (Settings)
- Edit `HuggingFace Token`
- Paste your token from https://huggingface.co/settings/tokens
- Save and refresh models
## 📚 Directory Structure

```
~/.config/yuy-chat/
├── config.toml        # Configuration
└── conversations/     # Saved chats
    ├── conversation-20240206-143022.json
    └── conversation-20240206-150133.json
```
## 🔧 Requirements

- Rust 1.70+ (for building)
- llama.cpp (for `.gguf` models) - Install with: `yuy runtime install llama-cpp`
- `chmod +x` (for `.llamafile` models)
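Since llamafiles must be executable, one way to set the execute bit on everything in the default models directory is a small loop (a sketch only; adjust the path if you use a custom models directory):

```bash
# Mark every .llamafile in the default models directory as executable.
for f in "$HOME/.yuuki/models"/*.llamafile; do
  [ -e "$f" ] && chmod +x "$f"
done
echo "done"
```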
## 🤝 Integration with yuy

yuy-chat is designed to work alongside yuy:

```bash
# Download models with yuy
yuy download Yuuki-best

# Chat with yuy-chat
yuy-chat
```
## 🐛 Troubleshooting

**No models found?**

- Make sure you have models in `~/.yuuki/models/`
- Or specify a custom directory: `yuy-chat --models-dir /path/to/models`
**llama.cpp not found?**

- Install with: `yuy runtime install llama-cpp`
- Or: `brew install llama.cpp` (macOS)
- Or: `pkg install llama-cpp` (Termux)
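To see whether a llama.cpp binary is already on your `PATH`, you can probe for the usual names (binary names vary by version and package; the list below is our guess at common ones, not something yuy-chat defines):

```bash
# Look for common llama.cpp binary names on PATH.
found=""
for bin in llama-cli llama-server main; do
  command -v "$bin" >/dev/null 2>&1 && { echo "found: $bin"; found=yes; }
done
[ -n "$found" ] || echo "no llama.cpp binary found on PATH"
```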
**Streaming not working?**

- Ensure llama.cpp is installed and in PATH
- Check model file permissions
## 📝 License

MIT License - see the LICENSE file.

## 🌸 Credits

Made with love by the Yuuki team.

For model management, see yuy.