AI Infrastructure
Part of AI & Machine Learning
GPU compute and AI training infrastructure
Services (8)
| Service | Category | Score | Ownership Structure | Deployment | License Type | Country | Stars | Description |
|---|---|---|---|---|---|---|---|---|
| vLLM | AI & Machine Learning | 67% | Bootstrapped | Self-Hostable | Apache-2.0 | — | ★ 73.0k | High-throughput LLM serving engine with PagedAttention for efficient memory management. |
| LocalAI | AI & Machine Learning | 67% | Bootstrapped | Self-Hostable | MIT | — | ★ 43.6k | Self-hosted OpenAI-compatible API for running LLMs locally with CPU or GPU support. |
| llama.cpp | AI & Machine Learning | 63% | Foundation | Self-Hostable | MIT | — | ★ 97.8k | Efficient LLM inference in C/C++ with support for CPU, Metal, and CUDA acceleration. |
| Text Generation WebUI | AI & Machine Learning | 63% | Bootstrapped | Self-Hostable | AGPL-3.0 | — | ★ 46.2k | Web interface for running LLMs locally with support for many model formats and extensions. |
| OVHcloud AI Training | ML Platforms | 61% | Public (EU) | — | — | EU | — | GPU compute for AI/ML training. |
| Scaleway GPU Instances | Compute | 61% | Non-EU Controlled | — | — | EU | — | GPU instances for ML workloads. |
| Open WebUI | AI & Machine Learning | 55% | Bootstrapped | Self-Hostable | MIT | — | ★ 127.1k | Self-hosted AI chat interface supporting multiple LLM backends including Ollama, OpenAI-compatible APIs, and more. |
| Ollama | AI & Machine Learning | 40% | Bootstrapped | Self-Hostable | MIT | — | ★ 165.0k | Run large language models locally with a simple CLI and API, supporting Llama, Mistral, and more. |
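Several of the self-hostable entries above (vLLM, LocalAI, Open WebUI, Ollama) expose an OpenAI-compatible chat-completions endpoint, so one client can target any of them by changing the base URL. A minimal sketch using only the standard library, assuming a server is already running locally; the port, model name, and helper names here are illustrative, not taken from the listing:

```python
import json
import urllib.request

# Assumed default: vLLM serves on :8000, LocalAI on :8080 --
# point this at whichever backend you actually run.
BASE_URL = "http://localhost:8000/v1"

def build_chat_request(prompt, model="llama-3"):
    """Build an OpenAI-style chat-completions payload (model name is a placeholder)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(prompt, model="llama-3", base_url=BASE_URL):
    """POST the payload to a running OpenAI-compatible server and return the reply text."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_chat_request(prompt, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The same client should also work against Ollama's OpenAI-compatible endpoint (typically `http://localhost:11434/v1`), which is what makes frontends like Open WebUI able to switch between these backends.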