llama-swap Model Switcher Quickstart for OpenAI-Compatible Local LLMs

Hot-swap local LLMs without changing clients.

Run local models for long enough and you end up juggling vLLM, llama.cpp, and more, each stack on its own port. Everything downstream still wants a single /v1 base URL; without one, you keep shuffling ports, profiles, and one-off scripts. llama-swap sits in front of those stacks as that single /v1 proxy.
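As a sketch of how this works: a llama-swap config maps model names to the commands that serve them, and the proxy launches the matching backend on demand and routes /v1 requests to it. The binary paths, model files, and model names below are illustrative; check the llama-swap README for the exact schema.

```yaml
# Illustrative llama-swap config: each model name maps to the command
# that serves it; ${PORT} is substituted by the proxy at launch time.
models:
  "qwen-coder":
    cmd: |
      /usr/local/bin/llama-server
      --port ${PORT}
      -m /models/qwen2.5-coder.gguf
  "llama-3":
    cmd: |
      /usr/local/bin/llama-server
      --port ${PORT}
      -m /models/llama-3.gguf
```

Clients then request `"model": "qwen-coder"` or `"model": "llama-3"` against one base URL, and llama-swap swaps backends behind the scenes.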

Developer Tools: The Complete Guide to Modern Development Workflows

Developing software involves Git for version control, Docker for containerization, bash for automation, PostgreSQL for databases, and VS Code for editing — along with countless other tools that make or break your productivity. This page collects the essential cheatsheets, workflows, and comparisons you need to work efficiently across the full development stack.

LocalAI QuickStart: Run OpenAI-Compatible LLMs Locally

Self-host OpenAI-compatible APIs with LocalAI in minutes.

LocalAI is a self-hosted, local-first inference server designed as a drop-in replacement for the OpenAI API, letting you run AI workloads on your own hardware (laptop, workstation, or on-prem server).
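Because LocalAI speaks the OpenAI wire format, existing OpenAI client code works once its base URL points at the local server. A minimal stdlib-only sketch, assuming LocalAI's default port 8080 and an illustrative model name:

```python
import json
import urllib.request

# Assumed local endpoint: LocalAI listens on port 8080 by default.
BASE_URL = "http://localhost:8080/v1"

def chat_payload(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(model: str, prompt: str) -> str:
    """POST the payload to the OpenAI-compatible endpoint and return the reply."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(chat_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# The request body is plain OpenAI JSON, so the same client code targets
# OpenAI, LocalAI, or any other compatible server by changing only BASE_URL.
payload = chat_payload("llama-3.2-1b-instruct", "Say hello in one word.")
print(json.dumps(payload, indent=2))
```

Calling `chat(...)` requires a running LocalAI instance with the named model loaded; only the payload construction runs standalone.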