MatrixHub is an open-source, self-hosted AI model registry engineered for large-scale enterprise inference. It is a drop-in private replacement for the Hugging Face Hub, purpose-built to accelerate vLLM and SGLang workloads.
MatrixHub streamlines the transition from public model hubs to production-grade infrastructure:
- Zero-Wait Distribution: Eliminate bandwidth bottlenecks with a "Pull-once, serve-all" cache, enabling 10Gbps+ speeds across 100+ GPU nodes simultaneously.
- Air-Gapped Delivery: Securely ferry models into isolated networks while preserving a native `HF_ENDPOINT` experience for researchers, with no internet access required.
- Private Model Registry: Centralize fine-tuned weights with tag locking and CI/CD integration to guarantee consistency from development to production.
- Global Multi-Region Sync: Automate asynchronous, resumable replication between data centers for high availability and low-latency local access.
- Transparent HF Proxy: Switch to private hosting with zero code changes by simply redirecting your endpoint (see the sketch after this list).
- On-Demand Caching: Automatically localizes public models upon the first request to slash redundant traffic.
- Inference Native: First-class support for P2P distribution, OCI artifacts, and NetLoader for direct-to-GPU weight streaming.
- RBAC & Multi-Tenancy: Project-based isolation with granular permissions and seamless LDAP/SSO integration.
- Audit & Compliance: Full traceability with comprehensive logs for every upload, download, and configuration change.
- Integrity Protection: Built-in malware scanning and content signing to ensure models remain untampered.
- Storage Agnostic: Compatible with local file systems, NFS, and S3-compatible backends (MinIO, AWS, etc.).
- Reliable Replication: Policy-driven, chunked transfers ensure data consistency even over unstable global networks.
- Cloud-Native Design: Optimized for Kubernetes with official Helm charts and horizontal scaling capabilities.
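The endpoint redirect works through the standard `HF_ENDPOINT` environment variable honored by `huggingface_hub`. A minimal sketch, assuming a MatrixHub instance reachable at `http://matrixhub.internal:3001` (the hostname and model ID below are illustrative placeholders):

```bash
# Redirect the Hugging Face client stack to a private MatrixHub instance.
# The hostname is a placeholder; substitute your own deployment address.
export HF_ENDPOINT=http://matrixhub.internal:3001

# Existing tooling works unchanged, e.g. pulling weights with huggingface-cli:
huggingface-cli download Qwen/Qwen2.5-7B-Instruct

# Engines built on huggingface_hub, such as vLLM, honor the same variable:
vllm serve Qwen/Qwen2.5-7B-Instruct
```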
Use Docker Compose with the provided configuration files:
- `website/static/deploy/docker/docker-compose.yaml`
- `website/static/deploy/docker/config.yaml`
Make sure `docker-compose.yaml` and `config.yaml` are in the same folder, then start the service:

```bash
docker compose -f docker-compose.yaml up -d
```

Default service endpoint: `http://127.0.0.1:3001`
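To confirm the stack came up, standard Docker Compose commands are enough (run from the folder containing `docker-compose.yaml`):

```bash
# List the MatrixHub containers and their status.
docker compose -f docker-compose.yaml ps

# Probe the endpoint; any HTTP response means the service is listening.
curl -i http://127.0.0.1:3001

# If the endpoint is unreachable, inspect the service logs.
docker compose -f docker-compose.yaml logs -f
```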
Install MatrixHub using the built-in Helm chart:
```bash
helm install matrixhub ./deploy/charts/matrixhub
```

Expose it locally (default ClusterIP) via port-forward:
```bash
kubectl port-forward deploy/matrixhub-apiserver 9527:9527
```

Or expose it via NodePort:
```bash
helm install matrixhub ./deploy/charts/matrixhub --set apiserver.service.type=NodePort
```
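With NodePort, Kubernetes assigns a port on each node. A quick way to look it up, sketched under the assumption that the chart creates a service named `matrixhub-apiserver` to match the deployment name above:

```bash
# Print the NodePort assigned to the apiserver service
# (service name assumed from the deployment name; adjust if yours differs).
kubectl get svc matrixhub-apiserver -o jsonpath='{.spec.ports[0].nodePort}'
```

Slack is our primary channel for community discussion, contribution coordination, and support. You can reach the maintainers and community at: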