cmdai

🚧 Early Development Stage - Architecture defined, core implementation in progress

cmdai converts natural language descriptions into safe POSIX shell commands using local LLMs. Built with Rust for blazing-fast performance, single-binary distribution, and safety-first design.

$ cmdai "list all PDF files in Downloads folder larger than 10MB"
Generated command: find ~/Downloads -name "*.pdf" -size +10M -ls
Execute this command? (y/N) y

📋 Project Status

This project is in active early development. The architecture and module structure are in place, with implementation ongoing.

✅ Completed

  • Core CLI structure with comprehensive argument parsing
  • Modular architecture with trait-based backends
  • Embedded model backend with MLX (Apple Silicon) and CPU variants
  • Remote backend support (Ollama, vLLM) with automatic fallback
  • Safety validation with pattern matching and risk assessment
  • Configuration management with TOML support
  • Interactive user confirmation flows
  • Multiple output formats (JSON, YAML, Plain)
  • Contract-based test structure with TDD methodology
  • Multi-platform CI/CD pipeline

🚧 In Progress

  • Model downloading and caching system
  • Advanced command execution engine
  • Performance optimization

📅 Planned

  • Multi-step goal completion
  • Advanced context awareness
  • Shell script generation
  • Command history and learning

✨ Features (Planned & In Development)

  • 🚀 Instant startup - Single binary with <100ms cold start (target)
  • 🧠 Local LLM inference - Optimized for Apple Silicon with MLX
  • 🛡️ Safety-first - Comprehensive command validation framework
  • 📦 Zero dependencies - Self-contained binary distribution
  • 🎯 Multiple backends - Extensible backend system (MLX, vLLM, Ollama)
  • 💾 Smart caching - Hugging Face model management
  • 🌐 Cross-platform - macOS, Linux, Windows support

🚀 Quick Start

Prerequisites

  • Rust 1.75+ with Cargo
  • macOS with Apple Silicon (for MLX backend, optional)

Building from Source

# Clone the repository
git clone https://github.com/wildcard/cmdai.git
cd cmdai

# Build the project
cargo build --release

# Run the CLI
./target/release/cmdai --version

Development Commands

# Run tests
make test

# Format code
make fmt

# Run linter
make lint

# Build optimized binary
make build-release

# Run with debug logging
RUST_LOG=debug cargo run -- "your command"

📖 Usage

Basic Syntax

cmdai [OPTIONS] <PROMPT>

Examples

# Basic command generation
cmdai "list all files in the current directory"

# With specific shell
cmdai --shell zsh "find large files"

# JSON output for scripting
cmdai --output json "show disk usage"

# Adjust safety level
cmdai --safety permissive "clean temporary files"

# Auto-confirm dangerous commands
cmdai --confirm "remove old log files"

# Verbose mode with timing info
cmdai --verbose "search for Python files"

CLI Options

Option                   Description                                           Status
-s, --shell <SHELL>      Target shell (bash, zsh, fish, sh, powershell, cmd)   ✅ Implemented
--safety <LEVEL>         Safety level (strict, moderate, permissive)           ✅ Implemented
-o, --output <FORMAT>    Output format (json, yaml, plain)                     ✅ Implemented
-y, --confirm            Auto-confirm dangerous commands                       ✅ Implemented
-v, --verbose            Enable verbose output with timing                     ✅ Implemented
-c, --config <FILE>      Custom configuration file                             ✅ Implemented
--show-config            Display current configuration                         ✅ Implemented
--auto                   Execute without confirmation                          📅 Planned
--allow-dangerous        Allow potentially dangerous commands                  📅 Planned

Examples (Target Functionality)

# Simple command generation
cmdai "compress all images in current directory"

# With specific backend
cmdai --backend mlx "find large log files"

# Verbose mode for debugging
cmdai --verbose "show disk usage"

🏗️ Architecture

Module Structure

cmdai/
├── src/
│   ├── main.rs        # CLI entry point
│   ├── backends/      # LLM backend implementations
│   │   ├── mod.rs     # Backend trait definition
│   │   ├── mlx.rs     # Apple Silicon MLX backend
│   │   ├── vllm.rs    # vLLM remote backend
│   │   └── ollama.rs  # Ollama local backend
│   ├── safety/        # Command validation
│   │   └── mod.rs     # Safety validator
│   ├── cache/         # Model caching
│   ├── config/        # Configuration management
│   ├── cli/           # CLI interface
│   ├── models/        # Data models
│   └── execution/     # Command execution
├── tests/             # Contract-based tests
└── specs/             # Project specifications

Core Components

  1. CommandGenerator Trait - Unified interface for all LLM backends
  2. SafetyValidator - Command validation and risk assessment
  3. Backend System - Extensible architecture for multiple inference engines
  4. Cache Manager - Hugging Face model management (planned)

Backend Architecture

#[async_trait]
trait CommandGenerator {
    async fn generate_command(&self, request: &CommandRequest) -> Result<GeneratedCommand, GeneratorError>;
    async fn is_available(&self) -> bool;
    fn backend_info(&self) -> BackendInfo;
}
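As a rough sketch of how a new backend plugs into this trait, the snippet below stubs out a trivial implementation. The field layouts of CommandRequest, GeneratedCommand, GeneratorError, and BackendInfo are illustrative assumptions (the real definitions live in src/models/), and EchoBackend is a hypothetical example, not part of the codebase:

use async_trait::async_trait;

// Illustrative stand-ins for the real types; actual field layouts may differ.
pub struct CommandRequest { pub prompt: String }
pub struct GeneratedCommand { pub command: String, pub explanation: String }
#[derive(Debug)]
pub struct GeneratorError(pub String);
pub struct BackendInfo { pub name: &'static str, pub is_local: bool }

// The trait from above, restated so this snippet compiles on its own.
#[async_trait]
pub trait CommandGenerator {
    async fn generate_command(&self, request: &CommandRequest) -> Result<GeneratedCommand, GeneratorError>;
    async fn is_available(&self) -> bool;
    fn backend_info(&self) -> BackendInfo;
}

// Hypothetical backend that only echoes the prompt; a real backend would
// call an inference engine here instead.
pub struct EchoBackend;

#[async_trait]
impl CommandGenerator for EchoBackend {
    async fn generate_command(&self, request: &CommandRequest) -> Result<GeneratedCommand, GeneratorError> {
        Ok(GeneratedCommand {
            command: format!("echo {:?}", request.prompt),
            explanation: "Echoes the prompt back verbatim.".to_string(),
        })
    }

    async fn is_available(&self) -> bool {
        true
    }

    fn backend_info(&self) -> BackendInfo {
        BackendInfo { name: "echo", is_local: true }
    }
}

Because every backend sits behind the same trait object, the rest of the CLI (safety validation, confirmation, output formatting) stays agnostic to which inference engine produced the command.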

🔧 Development

Prerequisites

  • Rust 1.75+
  • Cargo
  • Make (optional, for convenience commands)
  • Docker (optional, for development container)

Setup Development Environment

# Clone and enter the project
git clone https://github.com/wildcard/cmdai.git
cd cmdai

# Install dependencies and build
cargo build

# Run tests
cargo test

# Check formatting
cargo fmt -- --check

# Run clippy linter
cargo clippy -- -D warnings

Backend Configuration

cmdai supports multiple inference backends with automatic fallback:

Embedded Backend (Default)

  • MLX: Optimized for Apple Silicon Macs (M1/M2/M3)
  • CPU: Cross-platform fallback using Candle framework
  • Model: Qwen2.5-Coder-1.5B-Instruct (quantized)
  • No external dependencies required

Remote Backends (Optional)

Configure in ~/.config/cmdai/config.toml:

[backend]
primary = "embedded"  # or "ollama", "vllm"
enable_fallback = true

[backend.ollama]
base_url = "http://localhost:11434"
model_name = "codellama:7b"

[backend.vllm]
base_url = "http://localhost:8000"
model_name = "codellama/CodeLlama-7b-hf"
api_key = "optional-api-key"
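To illustrate how such a file might be consumed, here is a minimal sketch using the serde and toml crates; the struct shapes are assumptions that mirror the TOML above, not the project's actual config types:

use serde::Deserialize;
use std::{error::Error, fs, path::Path};

// Assumed shapes mirroring the TOML above; the real config types may differ.
#[derive(Deserialize)]
struct Config {
    backend: BackendConfig,
}

#[derive(Deserialize)]
struct BackendConfig {
    primary: String,
    enable_fallback: bool,
    ollama: Option<RemoteBackend>,
    vllm: Option<RemoteBackend>,
}

#[derive(Deserialize)]
struct RemoteBackend {
    base_url: String,
    model_name: String,
    api_key: Option<String>,
}

fn load_config(path: &Path) -> Result<Config, Box<dyn Error>> {
    Ok(toml::from_str(&fs::read_to_string(path)?)?)
}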

Project Configuration

The project uses several configuration files:

  • Cargo.toml - Rust dependencies and build configuration
  • ~/.config/cmdai/config.toml - User configuration
  • clippy.toml - Linter rules
  • rustfmt.toml - Code formatting rules
  • deny.toml - Dependency audit configuration

Testing Strategy

The project uses contract-based testing:

  • Unit tests for individual components
  • Integration tests for backend implementations
  • Contract tests to ensure trait compliance (see the sketch below)
  • Property-based testing for safety validation
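As a sketch of what a contract test can look like, the function below is generic over CommandGenerator, so the same assertions run against every backend. It builds on the illustrative EchoBackend and types from the backend sketch above and assumes tokio as the async test runtime:

// Generic contract: any backend must return a non-empty command for a
// well-formed request and report a usable backend name.
async fn assert_generator_contract<G: CommandGenerator>(generator: &G) {
    let request = CommandRequest { prompt: "list files".to_string() };
    let result = generator
        .generate_command(&request)
        .await
        .expect("generation should succeed");
    assert!(!result.command.trim().is_empty(), "command must not be empty");
    assert!(!generator.backend_info().name.is_empty());
}

#[tokio::test]
async fn echo_backend_satisfies_contract() {
    assert_generator_contract(&EchoBackend).await;
}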

🛡️ Safety Features

cmdai includes comprehensive safety validation to prevent dangerous operations:

Implemented Safety Checks

  • ✅ System destruction patterns (rm -rf /, rm -rf ~)
  • ✅ Fork bomb detection (:(){ :|:& };:)
  • ✅ Disk operations (mkfs, dd if=/dev/zero)
  • ✅ Privilege escalation detection (sudo su, chmod 777 /)
  • ✅ Critical path protection (/bin, /usr, /etc)
  • ✅ Command validation and sanitization

Risk Levels

  • Safe (Green) - Normal operations, no confirmation needed
  • Moderate (Yellow) - Requires user confirmation in strict mode
  • High (Orange) - Requires confirmation in moderate mode
  • Critical (Red) - Blocked in strict mode, requires explicit confirmation
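To make the mapping from patterns to risk levels concrete, here is a minimal sketch of how such a classifier might look, using the regex crate; the rules and enum are illustrative, not the project's actual rule set:

use regex::Regex;

// Mirrors the four levels above; derive(PartialOrd) orders them Safe < Critical.
#[derive(Debug, PartialEq, PartialOrd)]
enum RiskLevel {
    Safe,
    Moderate,
    High,
    Critical,
}

// A handful of illustrative rules; a real validator would carry many more
// and compile its patterns once instead of on every call.
fn assess_risk(command: &str) -> RiskLevel {
    let critical = [
        r"rm\s+-rf\s+/(\s|$)",     // wipe the filesystem root
        r":\(\)\{\s*:\|:&\s*\};:", // classic fork bomb
        r"\bmkfs(\.\w+)?\b",       // reformat a disk
    ];
    let high = [
        r"\bsudo\s+su\b",   // privilege escalation
        r"chmod\s+777\s+/", // world-writable system root
    ];

    for pattern in critical {
        if Regex::new(pattern).unwrap().is_match(command) {
            return RiskLevel::Critical;
        }
    }
    for pattern in high {
        if Regex::new(pattern).unwrap().is_match(command) {
            return RiskLevel::High;
        }
    }
    RiskLevel::Safe
}

The confirmation flow can then compare the returned level against the configured safety level to decide whether to run, prompt, or block.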

Safety Configuration

Configure safety levels in ~/.config/cmdai/config.toml:

[safety]
enabled = true
level = "moderate"  # strict, moderate, or permissive
require_confirmation = true
custom_patterns = ["additional", "dangerous", "patterns"]

🤝 Contributing

We welcome contributions! This is an early-stage project with many opportunities to contribute.

Areas for Contribution

  • 🔌 Backend implementations
  • 🛡️ Safety pattern definitions
  • 🧪 Test coverage expansion
  • 📚 Documentation improvements
  • 🐛 Bug fixes and optimizations

Getting Started

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes with tests
  4. Ensure all tests pass
  5. Submit a pull request

Development Guidelines

  • Follow Rust best practices
  • Add tests for new functionality
  • Update documentation as needed
  • Use conventional commit messages
  • Run make check before submitting

📜 License

This project is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0) - see the LICENSE file for details.

License Summary

  • ✅ Commercial use
  • ✅ Modification
  • ✅ Distribution
  • ✅ Private use
  • ⚠️ Network use requires source disclosure
  • ⚠️ Derivatives must be distributed under the same license
  • ⚠️ Changes must be documented

🙏 Acknowledgments

  • MLX - Apple's machine learning framework
  • vLLM - High-performance LLM serving
  • Ollama - Local LLM runtime
  • Hugging Face - Model hosting and caching
  • clap - Command-line argument parsing

🗺️ Roadmap

Phase 1: Core Structure (Current)

  • CLI argument parsing
  • Module architecture
  • Backend trait system
  • Basic command generation

Phase 2: Safety & Validation

  • Dangerous pattern detection
  • POSIX compliance checking
  • User confirmation workflows
  • Risk assessment system

Phase 3: Backend Integration

  • vLLM HTTP API support
  • Ollama local backend
  • Response parsing
  • Error handling

Phase 4: MLX Optimization

  • FFI bindings with cxx
  • Metal Performance Shaders
  • Unified memory handling
  • Apple Silicon optimization

Phase 5: Production Ready

  • Comprehensive testing
  • Performance optimization
  • Binary distribution
  • Package manager support

Built with Rust | Safety First | Open Source

Note: This is an active development project. Features and APIs are subject to change. See the specs directory for detailed design documentation.
