I tested how OpenCode works with several LLMs hosted locally on Ollama and llama.cpp,
and for comparison added some free models from OpenCode Zen.
OpenHands is an open-source, model-agnostic platform for AI-driven software development agents.
It lets an agent behave more like a coding partner than a simple autocomplete tool.
Self-host OpenAI-compatible APIs with LocalAI in minutes.
LocalAI is a self-hosted, local-first inference server designed to behave like a drop-in OpenAI API for running AI workloads on your own hardware (laptop, workstation, or on-prem server).
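Because LocalAI mimics the OpenAI REST API, an existing client mostly just needs its base URL swapped; the same pattern works for any OpenAI-compatible server (vLLM included). A minimal sketch using only the Python standard library — the port (8080 is LocalAI's default) and the model name are placeholder assumptions:

```python
import json
from urllib import request

def build_chat_request(base_url: str, model: str, prompt: str) -> request.Request:
    """Build an OpenAI-style /v1/chat/completions request for a local server."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return request.Request(
        f"{base_url}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Point at the local server instead of api.openai.com (URL and model are placeholders):
req = build_chat_request("http://localhost:8080", "llama-3.2-3b", "Hello!")

# Actually sending it requires a running LocalAI instance:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

No vendor SDK is required; any HTTP client that can POST JSON will do.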
AWS S3, Garage, or MinIO - overview and comparison.
AWS S3 remains the “default” baseline for object storage: it is fully managed, strongly consistent, and designed for extremely high durability and availability. Garage and MinIO are self-hosted, S3-compatible alternatives: Garage is designed for lightweight, geo-distributed small-to-medium clusters, while MinIO emphasises broad S3 API feature coverage and high performance in larger deployments.
Garage is an open-source, self-hosted, S3-compatible object storage system designed for small-to-medium deployments, with a strong emphasis on resilience and geo-distribution.
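Because all three speak the S3 API, existing S3 tooling mostly just needs a custom endpoint. A hypothetical AWS CLI profile pointing at a self-hosted MinIO instance — the endpoint, region, and credentials are placeholders, and the `endpoint_url` setting requires a reasonably recent AWS CLI:

```ini
# ~/.aws/config
[profile selfhosted]
region = us-east-1
# MinIO's default API port; use your Garage endpoint here instead if applicable
endpoint_url = http://localhost:9000

# ~/.aws/credentials
[selfhosted]
aws_access_key_id = YOUR_ACCESS_KEY
aws_secret_access_key = YOUR_SECRET_KEY
```

With this in place, `aws --profile selfhosted s3 ls` lists buckets on the local endpoint rather than on AWS.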
Strategic guide to hosting large language models locally with Ollama, llama.cpp, vLLM, or in the cloud. Compare tools, performance trade-offs, and cost considerations.
Running large language models locally gives you privacy, offline capability, and zero API costs.
This benchmark shows exactly what to expect from 14 popular LLMs running on Ollama on an RTX 4080.
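Ollama's generate responses include timing counters from which generation speed can be derived; a small sketch of the tokens-per-second arithmetic a benchmark like this typically uses (field names follow Ollama's `/api/generate` response, where durations are reported in nanoseconds):

```python
def tokens_per_second(response: dict) -> float:
    """Generation speed from an Ollama /api/generate response.

    eval_count is the number of generated tokens; eval_duration is the
    time spent generating them, in nanoseconds.
    """
    return response["eval_count"] / (response["eval_duration"] / 1e9)

# Example with made-up numbers: 480 tokens generated in 12 seconds
sample = {"eval_count": 480, "eval_duration": 12_000_000_000}
print(tokens_per_second(sample))  # 40.0 tok/s
```

The same fields also let you separate prompt processing (`prompt_eval_count`, `prompt_eval_duration`) from generation when comparing models.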
The Go ecosystem continues to thrive with innovative projects spanning AI tooling, self-hosted applications, and developer infrastructure. This overview analyzes the top trending Go repositories on GitHub this month.
vLLM is a high-throughput, memory-efficient inference and serving engine for Large Language Models (LLMs) developed by UC Berkeley’s Sky Computing Lab.
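Like LocalAI, vLLM can expose an OpenAI-compatible HTTP server; a typical launch looks roughly like this (a command-line sketch — the model name is a placeholder and flags may vary between vLLM versions):

```
# Serve a model over an OpenAI-compatible API on port 8000
vllm serve Qwen/Qwen2.5-7B-Instruct --port 8000
```

Clients can then talk to `http://localhost:8000/v1/...` exactly as they would to the OpenAI API.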