Ask an LLM directly from your terminal, written in Rust.
You need to deploy Ollama first:

```sh
# install ollama
curl -fsSL https://ollama.com/install.sh | sh

# start the ollama server
# Note: if you installed ollama with the command above, a server is already
# running at http://127.0.0.1:11434, so do not start another one.
ollama serve

# pull a model from ollama, e.g. llama3.1:8b
ollama pull llama3.1:8b
```
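Before moving on, you can confirm the server is actually listening. A std-only Rust check like the following works (this snippet is illustrative and not part of ask-rs):

```rust
// Quick connectivity check for the Ollama server started above.
// Illustrative only; assumes the default address 127.0.0.1:11434.
use std::net::TcpStream;
use std::time::Duration;

fn main() {
    let addr = "127.0.0.1:11434"; // default Ollama address
    match TcpStream::connect_timeout(&addr.parse().unwrap(), Duration::from_secs(2)) {
        Ok(_) => println!("ollama is reachable at {addr}"),
        Err(e) => eprintln!("cannot reach ollama at {addr}: {e}"),
    }
}
```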
You also need to install Rust first. Then enter the project directory and run:

```sh
cargo install --path .
```

Usage:

```sh
ask-rs "hello, who are you?"
ls . | ask-rs "how many files in current dir?"
ask-rs "write me a simple python program" -c
ask-rs -s "why is the sky blue?"
```
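The piped form works because input on stdin can be appended to the question as context. A minimal sketch of that pattern (a hypothetical simplification, not the actual ask-rs source) looks like this:

```rust
use std::io::{self, IsTerminal, Read};

fn main() {
    // The question passed on the command line.
    let prompt: String = std::env::args().skip(1).collect::<Vec<_>>().join(" ");

    // If stdin is not a terminal, something was piped in
    // (e.g. `ls . | ask-rs ...`); append it to the prompt as context.
    let mut stdin = io::stdin();
    let full_prompt = if stdin.is_terminal() {
        prompt
    } else {
        let mut piped = String::new();
        stdin.read_to_string(&mut piped).expect("failed to read stdin");
        format!("{prompt}\n\n{piped}")
    };

    println!("{full_prompt}"); // a real client would send this to the LLM
}
```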
The config.toml is located at `~/.config/ask-rs/config.toml`:

```toml
[ollama]
host = "http://127.0.0.1"
port = 11434
model = "llama3.1:8b"
```

The examples borrow from ollama-rs.
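For reference, here is a minimal sketch of how these config values could map onto an ollama-rs client. The calls follow the ollama-rs README; the exact API may differ between crate versions, and the tokio dependency is an assumption:

```rust
// Assumes Cargo dependencies on ollama-rs and tokio (with the "macros"
// and "rt-multi-thread" features).
use ollama_rs::{generation::completion::request::GenerationRequest, Ollama};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Values that ask-rs would read from ~/.config/ask-rs/config.toml.
    let (host, port, model) = ("http://127.0.0.1", 11434, "llama3.1:8b");

    let ollama = Ollama::new(host.to_string(), port);
    let request = GenerationRequest::new(model.to_string(), "why is the sky blue?".to_string());
    let response = ollama.generate(request).await?;
    println!("{}", response.response);
    Ok(())
}
```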
Inspired by shell-ask