A REST API for data modeling, schema management, and collaboration built with Rust and Axum.
- Workspace & Domain Management: Organize data models into workspaces and domains
- Table & Relationship CRUD: Full CRUD operations for tables and relationships
- Multi-format Import: Import from SQL, ODCS, JSON Schema, Avro, Protobuf, DrawIO
- Multi-format Export: Export to various formats including ODCS v3.1.0
- Git Synchronization: Version control integration via Git repositories
- Real-time Collaboration: Shared editing sessions with presence tracking
- GitHub OAuth: Secure authentication via GitHub
- PostgreSQL & File Storage: Flexible storage backends
- OpenAPI Documentation: Auto-generated API documentation
- Audit Trail: Complete audit history of all changes
- Rust 1.75 or later
- PostgreSQL 15+ (optional, for database-backed storage)
- Docker & Docker Compose (optional)
- Clone the repository:

```bash
git clone https://github.com/pixie79/data-modelling-api.git
cd data-modelling-api
```

- Set environment variables:

```bash
export WORKSPACE_DATA=/tmp/workspace_data
export JWT_SECRET=your-secret-key-change-in-production
export GITHUB_CLIENT_ID=your-github-client-id
export GITHUB_CLIENT_SECRET=your-github-client-secret
export FRONTEND_URL=http://localhost:8080
```

- (Optional) Set up PostgreSQL:

```bash
export DATABASE_URL=postgresql://postgres:postgres@localhost:5432/data_modelling
```

- Run migrations (if using PostgreSQL):

```bash
sqlx migrate run
```

- Run the API:

```bash
# Use SQLX_OFFLINE=true to avoid a database connection during compilation
# (matches CI/CD behavior - uses pre-generated query metadata)
SQLX_OFFLINE=true cargo run --bin api
```

Note: If you don't set `SQLX_OFFLINE=true`, SQLx will try to verify queries against your database at compile time. Ensure your database schema matches the migrations, or use offline mode (recommended).

The API will be available at http://localhost:8081
- Build and run with Docker Compose:

```bash
docker-compose up -d
```

- The API will be available at http://localhost:8081
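Once the containers are up, you can confirm the service started cleanly. A minimal sketch, assuming the Compose service is named `api` (check your docker-compose.yml for the actual service name):

```bash
# Check container status and follow the API logs
docker-compose ps
docker-compose logs -f api
```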
The OpenAPI specification is available at:
http://localhost:8081/api/v1/openapi.json
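For a quick overview of what the API exposes, you can pull the endpoint list straight from the spec. A small sketch using `jq` (a separate tool, not part of this project):

```bash
# Fetch the OpenAPI document and list the documented paths
curl -s http://localhost:8081/api/v1/openapi.json | jq -r '.paths | keys[]'
```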
The API provides health check endpoints to monitor service availability:
- `GET /health`: Basic health check endpoint
- `GET /api/v1/health`: API-versioned health check endpoint
Both endpoints return 200 OK if the service is running. These endpoints are useful for:
- Load balancer health checks
- Monitoring and alerting systems
- Container orchestration (Kubernetes liveness/readiness probes)
Example:
```bash
curl http://localhost:8081/health
curl http://localhost:8081/api/v1/health
```

- Initiate GitHub OAuth:

```bash
curl "http://localhost:8081/api/v1/auth/github/login?redirect_uri=http://localhost:8080/callback"
```

- After the OAuth callback, use the returned JWT token:

```bash
curl -H "Authorization: Bearer <token>" http://localhost:8081/api/v1/workspace/info
```

Required:

- `WORKSPACE_DATA`: Path to the workspace data directory
- `JWT_SECRET`: Secret key for JWT signing
- `GITHUB_CLIENT_ID`: GitHub OAuth client ID
- `GITHUB_CLIENT_SECRET`: GitHub OAuth client secret

Optional:

- `DATABASE_URL`: PostgreSQL connection string (default: file-based storage)
- `FRONTEND_URL`: Frontend URL for OAuth redirects (default: http://localhost:8080)
- `REDIRECT_URI_WHITELIST`: Comma-separated list of allowed redirect URIs
- `ENFORCE_HTTPS_REDIRECT`: Enforce HTTPS for redirects (true/false)
- `OTEL_SERVICE_NAME`: OpenTelemetry service name
- `OTEL_EXPORTER_OTLP_ENDPOINT`: OpenTelemetry endpoint URL
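Putting these together, a configuration for a PostgreSQL-backed deployment with telemetry enabled might look like the following sketch (all values are illustrative placeholders, not project defaults):

```bash
export WORKSPACE_DATA=/var/lib/data-modelling
export JWT_SECRET=$(openssl rand -hex 32)   # generate a strong secret
export GITHUB_CLIENT_ID=your-github-client-id
export GITHUB_CLIENT_SECRET=your-github-client-secret
export DATABASE_URL=postgresql://postgres:postgres@localhost:5432/data_modelling
export REDIRECT_URI_WHITELIST=https://app.example.com/callback
export ENFORCE_HTTPS_REDIRECT=true
export OTEL_SERVICE_NAME=data-modelling-api
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
```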
Set the `DATABASE_URL` environment variable to enable PostgreSQL storage:

```bash
export DATABASE_URL=postgresql://user:password@localhost:5432/dbname
```

Migrations are run automatically on startup.

If `DATABASE_URL` is not set, the API uses file-based storage in the `WORKSPACE_DATA` directory.
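The storage backend is therefore selected entirely by the environment; switching between the two is just a matter of which variables are set (values below are placeholders):

```bash
# File-based storage (default): no DATABASE_URL
unset DATABASE_URL
export WORKSPACE_DATA=/tmp/workspace_data

# PostgreSQL-backed storage
export DATABASE_URL=postgresql://user:password@localhost:5432/dbname
```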
This project uses SQLx's offline mode to avoid requiring a database connection during compilation. The `.sqlx` directory contains pre-generated query metadata.
To build the project, you need either:

- A database connection (set `DATABASE_URL`), or
- Generated `.sqlx` metadata files (see below)
First-time setup (requires a database):

```bash
# Set up the database connection
export DATABASE_URL=postgresql://postgres:postgres@localhost:5432/data_modelling

# Run migrations
cargo sqlx migrate run

# Generate offline metadata
./scripts/prepare-sqlx.sh
# Or manually:
cargo sqlx prepare -- --all-features

# Commit the .sqlx directory to git
git add .sqlx
git commit -m "Add sqlx offline metadata"
```

Normal development (no database required after metadata is generated):
```bash
# Set SQLX_OFFLINE=true to use pre-generated metadata (recommended)
export SQLX_OFFLINE=true

# Build with offline mode
cargo build

# Run with offline mode
cargo run --bin api

# Or set it explicitly for a single command
SQLX_OFFLINE=true cargo build
```

If you don't have database access yet, pre-commit will fail until the `.sqlx` metadata is generated. You can:

- Skip pre-commit temporarily: `git commit --no-verify`
- Set up a local PostgreSQL instance and generate the metadata
- Wait for someone else to commit the `.sqlx` directory
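In CI or before pushing, you can also check that the committed metadata is still in sync with the queries in the source tree. A sketch assuming sqlx-cli is installed (note that the check itself needs a reachable database):

```bash
# Regenerates the query metadata and fails if it differs from .sqlx
cargo sqlx prepare --check -- --all-features
```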
```bash
# Run all tests sequentially (recommended for integration tests)
cargo test -- --test-threads=1

# Run a specific test
cargo test --test test_name
```

```bash
# Format code
cargo fmt

# Lint code
cargo clippy --all-features

# Check for security vulnerabilities
cargo audit
```

Install pre-commit hooks:

```bash
pre-commit install
```

Project structure:

```
├── src/
│   ├── api/            # API implementation
│   │   ├── routes/     # Route handlers
│   │   ├── services/   # Business logic
│   │   ├── storage/    # Storage backends
│   │   └── middleware/ # Middleware
│   ├── export/         # Format exporters
│   └── lib.rs          # Library root
├── migrations/         # Database migrations
├── tests/              # Test suites
└── Cargo.toml          # Dependencies
```

Key dependencies:

- `data-modelling-sdk = "1.0.2"` - Shared types and Git operations
- `axum = "0.7"` - Web framework
- `sqlx = "0.8"` - Database toolkit
- `utoipa = "5.0"` - OpenAPI generation
- `tokio = "1.0"` - Async runtime
MIT License - see LICENSE file for details.
Contributions are welcome! Please ensure:
- Code is formatted with `cargo fmt`
- Code passes `cargo clippy`
- Tests pass
- The security audit passes (`cargo audit`)
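To run the whole checklist locally before opening a pull request (commands taken from the sections above):

```bash
# Verify formatting without rewriting files
cargo fmt -- --check
# Lint
cargo clippy --all-features
# Run the test suite sequentially
cargo test -- --test-threads=1
# Security audit
cargo audit
```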
For issues and questions, please open an issue on GitHub.