# Which ORM to use in Go: GORM, sqlc, Ent, or Bun?
Go’s ecosystem offers a range of ORM (Object-Relational Mapping) tools and database libraries, each with its own philosophy. Here is a comprehensive comparison of four major solutions for using PostgreSQL in Go: GORM, sqlc, Ent, and Bun.
Let’s evaluate them on performance, developer experience, popularity, and features/extensibility, with code examples demonstrating basic CRUD operations on a `User` model. Intermediate and advanced Go developers will gain insight into the trade-offs of each tool and which might suit their needs best.
## Overview of the ORMs
- **GORM** – A feature-rich, Active Record-style ORM. GORM lets you define Go structs as models and provides an extensive API to query and manipulate data using method chaining. It has been around for years and is one of the most widely used Go ORMs (with nearly 39k GitHub stars). GORM’s appeal is its robust feature set (automatic migrations, relationship handling, eager loading, transactions, hooks, and more), which enables a clean Go codebase with minimal raw SQL. However, it introduces its own patterns and has runtime overhead due to reflection and interface usage, which can impact performance on large operations.
- **sqlc** – Not a traditional ORM but a SQL code generator. With sqlc, you write plain SQL queries (in `.sql` files), and the tool generates type-safe Go code (DAO functions) to execute those queries. This means you interact with the database by calling generated Go functions (e.g. `CreateUser`, `GetUser`) and get strongly typed results without manually scanning rows. sqlc essentially lets you use raw SQL with compile-time safety – there’s no layer of query building or runtime reflection. It has quickly become popular (~16k stars) for its simplicity and performance: it avoids runtime overhead entirely by leveraging prepared SQL, making it as fast as using `database/sql` directly. The trade-off is that you must write (and maintain) SQL statements for your queries, and dynamic query logic can be less straightforward compared to ORMs that build SQL for you.
- **Ent (Entgo)** – A code-first ORM that uses code generation to create a type-safe API for your data model. You define your schema in Go (using Ent’s DSL for fields, edges/relations, constraints, etc.), then Ent generates Go packages for the models and query methods. The generated code uses a fluent API to build queries, with full compile-time type checking. Ent is relatively newer (backed by the Linux Foundation and developed by Ariga). It focuses on developer experience and correctness – queries are constructed via chainable methods instead of raw strings, which makes them easier to compose and less error-prone. Ent’s approach yields highly optimized SQL queries and can even cache query results to reduce duplicate trips to the database. It was designed with scalability in mind, so it handles complex schemas and relationships efficiently. However, using Ent adds a build step (running codegen) and ties you to its ecosystem. It currently supports PostgreSQL, MySQL, and SQLite (plus experimental support for other databases), whereas GORM supports a broader range of databases (including SQL Server and ClickHouse).
- **Bun** – A newer ORM with a “SQL-first” approach. Bun was inspired by Go’s `database/sql` and the older go-pg library, aiming to be lightweight and fast with a focus on PostgreSQL features. Instead of an Active Record pattern, Bun provides a fluent query builder: you start a query with `db.NewSelect()` / `NewInsert()` / etc., and build it with methods that closely resemble SQL (you can even embed raw SQL snippets). It maps query results to Go structs using struct tags similar to GORM’s. Bun favors explicitness and allows falling back to raw SQL easily when needed. It does not auto-run migrations for you (you write migrations or use Bun’s migrate package manually), which some consider a plus for control. Bun is praised for handling complex queries elegantly and supporting advanced Postgres types (arrays, JSON, etc.) out of the box. In practice, Bun imposes much less runtime overhead than GORM (no heavy reflection on every query), translating to better throughput in high-load scenarios. Its community is smaller (~4k stars) and its feature set, while solid, isn’t as expansive as GORM’s. Bun might require a bit more SQL knowledge to use effectively, but it rewards that with performance and flexibility.
## Performance Benchmarks
When it comes to performance (query latency and throughput), there are significant differences in how these tools behave, especially as load increases. Recent benchmarks and user experiences highlight a few key points:
- **GORM**: GORM’s convenience comes at the cost of some performance. For simple operations or small result sets, GORM can be quite fast – in fact, one benchmark showed GORM had the quickest execution time for very small queries (e.g. fetching 1 or 10 records). However, as the number of records grows, GORM’s overhead causes it to trail significantly. In a test fetching 15,000 rows, GORM was roughly 2× slower than sqlc or even raw `database/sql` (59.3 ms for GORM vs ~31.7 ms for sqlc and 32.0 ms for `database/sql` in that scenario). The reflection-heavy design and query-building abstractions add latency, and GORM may issue multiple queries for complex loads (e.g. one query per related entity when using `Preload` by default), which can amplify the delay. In summary: GORM is usually fine for moderate workloads, but in high-throughput scenarios or large bulk operations, its overhead can become a bottleneck.
- **sqlc**: Because sqlc calls are essentially hand-written SQL under the hood, its performance is on par with using the standard library directly. Benchmarks consistently place sqlc among the fastest options, often only marginally slower or even slightly faster than raw `database/sql` for comparable operations. For example, at 10k+ records, sqlc was observed to slightly edge out plain `database/sql` in throughput (likely due to avoiding some of the scanning boilerplate and using efficient codegen). With sqlc, there’s no runtime query builder – your query executes as a prepared statement with minimal overhead. The key is that you’ve already done the work of writing an optimal SQL query; sqlc simply spares you the effort of scanning rows into Go types. In practice, this means excellent scalability – sqlc will handle large data loads as well as raw SQL would, making it a top choice when raw speed is critical.
- **Ent**: Ent’s performance falls somewhere in between raw-SQL approaches and traditional ORMs. Because Ent generates type-safe code, it avoids reflection and most runtime query composition costs. The SQL it produces is quite optimized (you have control to write efficient queries via the Ent API), and Ent includes an internal caching layer to reuse query execution plans/results in some cases. This can reduce repetitive database hits in complex workflows. While specific benchmarks of Ent vs others vary, many developers report that Ent outperforms GORM for equivalent operations, especially as complexity grows. One reason is that GORM might generate suboptimal queries (e.g., N+1 selects if not carefully using joins/preloads), whereas Ent encourages explicit eager loading of relations and will join data in fewer queries. Ent also tends to allocate less memory per operation than GORM, which can improve throughput by putting less pressure on Go’s garbage collector. Overall, Ent is built for high performance and large schemas – its overhead is low, and it can handle complex queries efficiently – but it may not match the raw throughput of hand-written SQL (sqlc) in every scenario. It’s a strong contender if you want both speed and the safety of an ORM layer.
- **Bun**: Bun was created with performance in mind and is often cited as a faster alternative to GORM. It uses a fluent API to build SQL queries, but these builders are lightweight. Bun doesn’t hide the SQL from you – it’s more of a thin layer over `database/sql`, which means you incur very little overhead beyond what the Go standard library does. Users have noted significant improvements when switching from GORM to Bun in large projects: for example, one report mentioned GORM was “super slow” for their scale, and replacing it with Bun (and some raw SQL) solved their performance issues. In benchmarks like go-orm-benchmarks, Bun tends to be near the top in speed for most operations (often within 1.5× of raw sqlx/sql) and well ahead of GORM in throughput. It also supports batch inserts (see the sketch below), manual control of joins vs separate queries, and other optimizations that developers can leverage. Bottom line: Bun delivers performance close to bare metal, and it’s a great choice when you want ORM convenience but cannot sacrifice speed. Its SQL-first nature ensures you can always optimize queries if the generated ones aren’t optimal.
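To make the batch-insert point concrete, here is a minimal sketch, assuming the Bun `User` model and `db` handle from the CRUD examples later in this article:

```go
// Passing a slice to Model makes Bun build one multi-row INSERT
// instead of issuing a statement per row.
users := []User{
	{Name: "Alice", Email: "alice@example.com"},
	{Name: "Bob", Email: "bob@example.com"},
}
_, err := db.NewInsert().Model(&users).Exec(ctx)
// INSERT INTO users (name, email) VALUES ('Alice', ...), ('Bob', ...)
```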
**Summary of Performance:** If maximum performance is the goal (e.g. data-intensive applications, microservices under high load), sqlc or even hand-written `database/sql` calls are the winners. They have virtually no abstraction cost and scale linearly. Ent and Bun also perform very well and can handle complex queries efficiently – they strike a balance between speed and abstraction. GORM offers the most features but at the cost of overhead; it’s perfectly acceptable for many applications, but you should be mindful of its impact if you expect to handle huge data volumes or need ultra-low latency. (In such cases, you might mitigate GORM’s cost by carefully using `Joins` instead of `Preload` to reduce query count – see the sketch below – or by mixing in raw SQL for critical paths, but that adds complexity.)
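Here is a minimal sketch of that `Preload`-vs-`Joins` trade-off. It assumes a hypothetical `User` model with a has-many `Orders` relation and a `db *gorm.DB` handle; the models and column names are illustrative, not taken from any benchmark above. Error handling is elided, as in the examples later in this article.

```go
// Preload issues one extra query per relation:
//   SELECT * FROM users;
//   SELECT * FROM orders WHERE user_id IN (...);
var users []User
db.Preload("Orders").Find(&users)

// Joins collapses the lookup into a single query – useful when you
// also need to filter on the joined table:
var paying []User
db.Joins("JOIN orders ON orders.user_id = users.id").
	Where("orders.total > ?", 100).
	Distinct("users.*").
	Find(&paying)
```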
## Developer Experience and Ease of Use
Developer experience can be subjective, but it encompasses the learning curve, the clarity of the API, and how productive you can be in day-to-day coding. Here’s how our four contenders compare in terms of ease of use and development workflow:
- **GORM – Featureful but a Learning Curve**: GORM is often the first ORM Go newcomers try because it promises a fully automated experience (you define structs and can immediately Create/Find without writing SQL). The documentation is very comprehensive, with lots of guides, which helps. However, GORM has its own idiomatic way of doing things (a “code-based approach” to DB interactions), and it can feel awkward initially if you’re used to raw SQL. Many common operations are straightforward (e.g. `db.Find(&objs)`), but when you venture into associations, polymorphic relations, or advanced queries, you need to learn GORM’s conventions (struct tags, `Preload` vs `Joins`, etc.). This is why there’s often mention of a steep initial learning curve. Once you climb that curve, GORM can be very productive – you spend less time writing repetitive SQL and more time in Go logic. In fact, after mastering it, many find they can develop features faster with GORM than with lower-level libraries. In short, GORM’s DX is high on initial complexity but smooths out with experience. The large community means plenty of examples and Stack Overflow answers are available, and a rich plugin ecosystem can further simplify tasks (for example, plugins for soft deletes, auditing, etc.). The main caveat is that you must remain aware of what GORM is doing under the hood to avoid pitfalls (like accidental N+1 queries). But for a developer who invests time in GORM, it indeed “feels” like a powerful, integrated solution that handles a lot for you.
- **Ent – Schema-First and Type-Safe**: Ent was explicitly designed to improve the developer experience of ORMs. You describe your schema in one place (Go code) and get a strongly typed API to work with – that means no stringly typed queries, and many errors are caught at compile time. For example, if you try to refer to a field or edge that doesn’t exist, your code simply won’t compile. This safety net is a huge DX win for large projects. Ent’s API for building queries is fluent and intuitive: you chain methods like `.Where(user.EmailEQ("alice@example.com"))` and it reads almost like English (see the first sketch after this list). Developers coming from ORMs in other languages often find Ent’s approach natural (it’s somewhat akin to Entity Framework or Prisma, but in Go). Learning Ent does require understanding its code-generation workflow: you’ll need to run `entc generate` (or use `go generate`) whenever you modify your schema. This is an extra step, but usually part of the build process. The learning curve for Ent is moderate – if you know Go and some SQL fundamentals, Ent’s concepts (Fields, Edges, etc.) are straightforward. In fact, many find Ent easier to reason about than GORM, because you don’t have to wonder what SQL is being produced; you can often predict it from the method names, and you can output the query for debugging if needed. Ent’s documentation and examples are quite good, and being backed by a company ensures it’s actively maintained. Overall DX with Ent: very friendly for those who want clarity and safety. The code you write is verbose compared to raw SQL, but it’s self-documenting. Also, refactoring is safer (renaming a field in the schema and regenerating will update all usages in queries). The downside is that you are somewhat constrained by what the Ent codegen provides – the Ent API can handle complex joins, but occasionally you’ll hit a query that’s easier to write in raw SQL. Ent does allow raw SQL snippets when necessary, but if you find yourself doing that often, you might question whether an ORM is the right fit. In summary, Ent provides a smooth developer experience for most CRUD and query logic with minimal surprises, as long as you’re okay running a codegen step and sticking to the patterns Ent enforces.
- **Bun – Closer to SQL, Less Magic**: Using Bun feels different from using GORM or Ent. Bun’s philosophy is to not hide SQL – if you know SQL, you largely already know how to use Bun. For example, to query users you might write `db.NewSelect().Model(&users).Where("name = ?", name).Scan(ctx)`. This is a fluent API but very close to an actual SELECT statement’s structure. The learning curve for Bun is generally low if you are comfortable with SQL itself. There’s less “ORM magic” to learn; you mostly learn the method names for building queries and the struct tag conventions. This makes Bun quite approachable for seasoned developers who have used `database/sql` or other query builders like `sqlx`. In contrast, a beginner who isn’t confident in writing SQL might find Bun a bit less helpful than GORM – Bun won’t automatically generate complex queries for you via relationships unless you specify them. However, Bun does support relationships and eager loading (a `Relation()` method to join tables) – it just does so explicitly, which some developers prefer for clarity. Developer experience with Bun can be described as lightweight and predictable. You write slightly more code than with GORM for some operations (since Bun often asks you to explicitly specify columns or joins), but in return you have more control and visibility. There’s minimal internal magic, so debugging is easier (you can usually log the query string Bun built). Bun’s documentation (the uptrace.dev guide) is thorough and includes patterns for migrations, transactions, etc., though the community being smaller means fewer third-party tutorials. Another aspect of DX is the available tooling: Bun being just an extension of `database/sql` means any tool that works with `sql.DB` (like debugging proxies or query loggers) works with Bun easily. In summary, Bun offers a no-nonsense experience: it feels like writing structured SQL in Go. This is great for developers who value control and performance, but it might not hand-hold less experienced devs as much as something like GORM. The consensus is that Bun makes simple things easy and complex things possible (much like raw SQL), without imposing a lot of framework on you.
- **sqlc – Write SQL, Get Go Code**: sqlc’s approach flips the usual ORM narrative. Instead of writing Go code to produce SQL, you write SQL and get Go code. For developers who love SQL, this is fantastic – you get to use all the power of SQL (complex joins, CTEs, window functions, you name it) with zero ORM limitations. The learning curve for sqlc itself is very small. If you know how to write a SELECT/INSERT/UPDATE in SQL, you already have the hard part done. You do need to learn how to annotate queries with `-- name: Name :one/:many/:exec` comments and set up the config file, but that’s trivial (see the second sketch after this list). The generated code will be straightforward function definitions, which you call like any Go function. Developer experience pros: there’s no ORM API to learn, no surprises – the queries run exactly as written. You avoid an entire class of ORM issues (like figuring out why an ORM generated a certain JOIN or how to tweak a query). Also, your code reviews can include reviewing the SQL itself, which is often clearer for complex logic. Another big DX advantage is type safety and IDE support – the generated methods and structs can be jumped to in your editor, refactoring tools work on them, etc., as opposed to raw string queries that are opaque to IDEs. On the con side, sqlc does require you to manage SQL scripts. That means if your schema or requirements change, you have to manually update or add the relevant SQL and re-run codegen. It’s not difficult, but it is more manual work than just calling an ORM method. Also, dynamic queries (where parts of the SQL are conditional) can be cumbersome – you either write multiple SQL variants or use SQL syntax tricks. Some developers mention that as a limitation of sqlc’s approach. In practice, you can often structure your data access such that you don’t need overly dynamic SQL, or you can call raw `database/sql` for those edge cases. But it’s a consideration: sqlc is superb for well-defined queries, less so for ad-hoc query building. In summary, for a developer who is proficient with SQL, using sqlc feels natural and highly efficient. There’s very little new to learn, and it removes repetitive Go boilerplate. For a developer not as comfortable with SQL, sqlc might be initially slower to work with (compared to an ORM that, for example, auto-generates queries for basic CRUD). Yet many Go developers consider sqlc a must-have because it hits a sweet spot: manual control with high safety and no runtime cost.
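To ground the Ent bullet above, here is a minimal sketch of a generated query, assuming a `User` schema with an `email` field has already been defined and code-generated; `client` and the generated `user` predicate package follow Ent’s standard codegen layout, but the names are illustrative:

```go
// Fetch exactly one user by email. Only returns an error when zero
// or more than one row matches – a typed, compile-checked query.
u, err := client.User.
	Query().
	Where(user.EmailEQ("alice@example.com")).
	Only(ctx)
if err != nil {
	log.Fatal(err)
}
fmt.Println(u.Name) // fields are plain, typed struct fields
```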
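And for the sqlc bullet, this is roughly what the annotated SQL looks like – a hypothetical `queries.sql` fragment. Each `-- name:` comment tells sqlc the Go function to generate and whether it returns one row, many rows, or nothing:

```sql
-- name: GetUser :one
SELECT id, name, email FROM users WHERE id = $1;

-- name: ListUsers :many
SELECT id, name, email FROM users ORDER BY name;

-- name: DeleteUser :exec
DELETE FROM users WHERE id = $1;
```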
## Popularity and Ecosystem Support
Adoption and community support can influence your choice – a popular library means more community contributions, better maintenance, and more resources to learn from.
- **GORM**: As the oldest and most mature of the four, GORM has by far the largest user base and ecosystem. It’s currently the top-starred Go ORM on GitHub (over 38k stars) and is used in countless production projects. The maintainers are active, and the project updates regularly (GORM v2 was a major overhaul improving performance and architecture). A big advantage of GORM’s popularity is the wealth of extensions and integrations: there are official and third-party drivers for PostgreSQL, MySQL, SQLite, SQL Server, ClickHouse, and more, out of the box. GORM also supports a broad set of use cases – migrations, schema auto-generation, soft deletes, JSON fields (with `gorm.io/datatypes`), full-text search, etc. – often via either built-in functionality or add-ons. The community has produced various tools like `gormt` (to generate struct definitions from an existing database), and many tutorials and examples. If you run into a problem, a quick search is likely to find an issue or Stack Overflow question asked by someone else. Ecosystem summary: GORM is extremely well supported. It’s the “default” ORM choice for many, meaning you’ll find it in frameworks and boilerplates. The flip side is that its very size can make it feel heavy, but for many that’s a worthwhile trade-off for community depth.
- **Ent**: Despite being newer (open-sourced around 2019), Ent has grown a strong community. With ~16k stars and backing by the Linux Foundation, it’s not a fringe project. Large companies have started using Ent for its schema-centric advantages, and the maintainers (at Ariga) provide enterprise support, which is a sign of confidence for business-critical use. The ecosystem around Ent is growing: there are Ent extensions for things like OpenAPI/GraphQL integration, gRPC integration, and even an SQL migration tool that works with Ent schemas. Because Ent generates code, some ecosystem patterns differ – for example, if you want to integrate with GraphQL, you might use Ent’s code generation to produce GraphQL resolvers. The learning resources for Ent are good (official documentation and an example project covering a simple app). The community forums and GitHub discussions are active with schema design questions and tips. In terms of community support, Ent is certainly past the “early adopter” phase and is considered production-ready and reliable. It might not have as many Stack Overflow answers as GORM yet, but it’s catching up quickly. One thing to note: since Ent is a bit more opinionated (e.g., it wants to manage the schema), you’ll likely use it standalone rather than alongside another ORM. It doesn’t have “plugins” in the same way GORM does, but you can write custom templates to extend code generation or hook into lifecycle events (Ent has hooks/middleware support for generated clients). The backing by a foundation indicates long-term support, so choosing Ent is a safe bet if its model fits your needs.
- **Bun**: Bun (part of the Uptrace open-source suite) is gaining traction, especially among those who were fans of the now-unmaintained `go-pg` library. With ~4.3k stars, it has the smallest community in this comparison, but it’s a very active project. The maintainer is responsive and has been rapidly adding features. Bun’s community is enthusiastic about its performance; you’ll find discussions on Go forums and Reddit from developers who switched to Bun for speed. However, because the user base is smaller, you might not always find answers to niche questions readily – sometimes you’ll need to read the docs/source or ask in Bun’s GitHub or Discord (Uptrace provides a community chat). The ecosystem of extensions is more limited: Bun comes with its own migration library, a fixture loader, and ties into Uptrace’s observability tooling, but you won’t find the plethora of plugins that GORM has. That said, Bun is compatible with `sql.DB` usage, so you can mix and match – for instance, using `github.com/jackc/pgx` as the driver underneath or integrating with other packages that expect a `*sql.DB`. Bun doesn’t lock you in. Support-wise, being newer means documentation is up to date and examples are modern (often showing usage with context and so on). The official docs compare Bun with GORM and Ent directly, which is helpful. If community size is a concern, one strategy could be to adopt Bun for its advantages but keep your usage of it relatively shallow (you could swap it out for another solution if needed, since it doesn’t impose a heavy abstraction). In any case, Bun’s trajectory is upward, and it fills a specific niche (performance-oriented ORM), which gives it staying power.
- **sqlc**: sqlc is quite popular in the Go community, evidenced by ~15.9k stars and many advocates, especially in performance-conscious circles. It’s often recommended in discussions about “avoiding ORMs” because it strikes a nice balance. The tool is maintained by contributors (including the original author, who is active in improving it). Being more of a compiler than a runtime library, its ecosystem revolves around integrations: for example, editors/IDEs can have syntax highlighting for `.sql` files, and you run `sqlc generate` as part of your build or CI pipeline. The community has created templates and example repos showing how to organize code with sqlc (often pairing it with a migration tool like Flyway or golang-migrate for schema versioning, since sqlc itself doesn’t manage schema). There’s an official Slack/Discord for sqlc where you can ask questions, and issues on GitHub tend to get attention. Many of the common patterns (like how to handle nullable values or JSON fields) are documented in sqlc’s docs or have community blog posts. One thing to highlight: sqlc is not Go-specific – it can generate code in other languages (like TypeScript and Python). This broadens its community beyond Go, but in Go specifically, it’s widely respected. If you choose sqlc, you’re in good company: it’s used in production by many startups and large firms (as per community showcases and the sponsors listed on the repo). The key ecosystem consideration is that since sqlc doesn’t provide runtime features like an ORM, you may need to pull in other libraries for things like transactions (though you can use `sql.Tx` easily with sqlc – see the sketch after this list) or perhaps a lightweight DAL wrapper. In practice, most use sqlc alongside the standard library (for transactions, context cancellation, etc., you use `database/sql` idioms directly in your code). This means less vendor lock-in – moving away from sqlc would just mean writing your own data layer, which is as hard as writing the SQL (which you’ve already done). Overall, community and support for sqlc are robust, with many recommending it as a “must-use” for Go projects interacting with SQL databases due to its simplicity and reliability.
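As a sketch of that transaction point: the `Queries` type sqlc generates exposes a `WithTx` method, so wrapping generated calls in a standard `sql.Tx` is straightforward. The query and parameter names below match the hypothetical CRUD example later in this article:

```go
func createAndDelete(ctx context.Context, db *sql.DB, queries *Queries) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op once Commit succeeds

	qtx := queries.WithTx(tx) // run generated queries inside the transaction
	u, err := qtx.CreateUser(ctx, CreateUserParams{Name: "Alice", Email: "alice@example.com"})
	if err != nil {
		return err
	}
	if err := qtx.DeleteUser(ctx, u.ID); err != nil {
		return err
	}
	return tx.Commit()
}
```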
## Feature Set and Extensibility
Each of these tools offers a different set of features. Here we compare their capabilities and how extensible they are for advanced use cases:
- **GORM Features**: GORM aims to be a full-service ORM. Its feature list is extensive:
  - **Multiple Databases**: First-class support for PostgreSQL, MySQL, SQLite, SQL Server, and more. Switching DBs is usually as easy as changing the connection driver.
  - **Migrations**: You can auto-migrate your schema from your models (`db.AutoMigrate(&User{})` will create or alter the `users` table). While convenient, be cautious using auto-migration in production – many use it for dev and have more controlled migrations for prod.
  - **Relationships**: GORM’s struct tags (`gorm:"foreignKey:...,references:..."`) allow you to define one-to-many, many-to-many, etc. It can handle linking tables for many-to-many. GORM doesn’t populate related fields unless you ask it to – you use `Preload` (separate queries) or `Joins` (a single joined query) to eager-load relations. Newer versions also have a generics-based API for easier association queries.
  - **Transactions**: GORM has an easy-to-use transaction wrapper (`db.Transaction(func(tx *gorm.DB) error { ... })`) and even support for nested transactions (savepoints).
  - **Hooks and Callbacks**: You can define methods like `BeforeCreate`, `AfterUpdate` on your model structs, or register global callbacks that GORM will call at certain lifecycle events. This is great for things like automatically setting timestamps or soft-delete behavior (see the sketch below this list).
  - **Extensibility**: GORM’s plugin system (the `gorm.Plugin` interface) allows extending its functionality. Examples: the GORM Gen package generates type-safe query methods (if you prefer compile-time query checks), and there are community plugins for auditing, multi-tenancy, etc. You can also fall back to raw SQL any time via `db.Raw("SELECT ...", params).Scan(&result)`.
  - **Other niceties**: GORM supports composite primary keys, embedding models, polymorphic associations, and even a schema resolver to use multiple databases (for read replicas, sharding, etc.).
Overall, GORM is highly extensible and feature-rich. Virtually any ORM-related feature you might want has either a built-in mechanism or a documented pattern in GORM. The cost of this breadth is complexity and some rigidity (you often need to conform to GORM’s way of doing things). But if you need a one-stop solution, GORM delivers.
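For illustration, here is a minimal sketch of a lifecycle hook plus the transaction wrapper from the list above, reusing the `User` model from the CRUD section later in this article:

```go
// BeforeCreate is one of GORM's recognized hook methods; returning an
// error aborts the insert.
func (u *User) BeforeCreate(tx *gorm.DB) error {
	if u.Email == "" {
		return errors.New("email is required")
	}
	return nil
}

// db.Transaction commits if the callback returns nil, rolls back otherwise.
err := db.Transaction(func(tx *gorm.DB) error {
	if err := tx.Create(&User{Name: "Alice", Email: "alice@example.com"}).Error; err != nil {
		return err
	}
	return tx.Model(&User{}).
		Where("name = ?", "Alice").
		Update("email", "alice@example.org").Error
})
```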
- **Ent Features**: Ent’s philosophy is centered on schema as code. Key features include:
  - **Schema Definition**: You can define fields with constraints (unique, default values, enums, etc.) and edges (relations) with cardinality (one-to-many, etc.). Ent uses this to generate code and can also generate migration SQL for you (there’s an `ent/migrate` component that can produce diff SQL between your current schema and the desired schema).
  - **Type Safety and Validation**: Because fields are strongly typed, you can’t, say, accidentally set an integer field to a string. Ent also allows custom field types (e.g., you can integrate with `sql.Scanner`/`driver.Valuer` for JSONB fields or other complex types).
  - **Query Builder**: Ent’s generated query API covers most SQL constructs: you can do selects, filters, ordering, limits, aggregates, joins across edges, and even subqueries. It’s expressive – e.g., you can write `client.User.Query().WithOrders(func(q *ent.OrderQuery) { q.Limit(5) }).Where(user.StatusEQ(user.StatusActive)).All(ctx)` to get active users with their first 5 orders each, in one go.
  - **Transactions**: Ent’s client supports transactions by exposing a transactional variant of the client (`tx, err := client.Tx(ctx)` yields an `ent.Tx` that you can use to do multiple operations and then commit or roll back).
  - **Hooks and Middleware**: Ent allows registering hooks on create/update/delete operations – these are like interceptors where you can, for example, auto-fill a field or enforce custom rules. There’s also middleware for the client to wrap operations (useful for logging, instrumentation).
  - **Extensibility**: While Ent doesn’t have “plugins” per se, its code generation is templated, and you can write custom templates if you need to extend the generated code. Many advanced features have been implemented this way: for instance, integration with OpenTelemetry for tracing DB calls, or generating GraphQL resolvers from the Ent schema. Ent also allows mixing raw SQL when necessary through the `entsql` package or by getting the underlying driver. The ability to generate additional code means teams can use Ent as a base and layer their own patterns on top if needed.
  - **GraphQL/REST integration**: A big selling point of Ent’s graph-based approach is that it fits well with GraphQL – your Ent schema can be almost directly mapped to a GraphQL schema. Tools exist to automate a lot of that. This can be a productivity win if you’re building an API server.
  - **Performance tweaks**: Ent can batch-load related edges to avoid N+1 queries (it has an eager loading API, `.With<EdgeName>()`). It will use JOINs or additional queries under the hood in an optimized way. Also, caching of query results (in-memory) can be enabled to avoid hitting the DB for identical queries in a short span.
In summary, Ent’s feature set is geared toward large-scale, maintainable projects. It may lack some of GORM’s out-of-the-box goodies (for example, GORM’s automatic schema migration vs Ent’s generated migration scripts – Ent chooses to separate that concern), but it makes up for it with powerful dev tools and type safety. Extensibility in Ent is about generating what you need – it’s very flexible if you’re willing to dive into how entc (the codegen) works.
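To make the schema-as-code idea concrete, here is a minimal sketch of an Ent schema file (conventionally `ent/schema/user.go`). The `orders` edge assumes a sibling `Order` schema exists and is purely illustrative:

```go
package schema

import (
	"entgo.io/ent"
	"entgo.io/ent/schema/edge"
	"entgo.io/ent/schema/field"
)

// User holds the schema definition for the User entity.
type User struct {
	ent.Schema
}

// Fields declares the columns of the users table.
func (User) Fields() []ent.Field {
	return []ent.Field{
		field.String("name").NotEmpty(),
		field.String("email").Unique(),
	}
}

// Edges declares relations; here, a user has many orders.
func (User) Edges() []ent.Edge {
	return []ent.Edge{
		edge.To("orders", Order.Type),
	}
}
```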
- **Bun Features**: Bun focuses on being a power tool for those who know SQL. Some features and extensibility points:
  - **Query Builder**: At Bun’s core is the fluent query builder, which supports most SQL constructs (SELECT with joins, CTEs, subqueries, as well as INSERT and UPDATE with bulk support). You can start a query and customize it deeply, even injecting raw SQL (`.Where("some_condition (?)", value)`).
  - **PostgreSQL-specific features**: Bun has built-in support for Postgres array types and JSON/JSONB (mapping to `[]<type>`, `map[string]interface{}`, or custom types), and supports scanning into composite types. For example, if you have a JSONB column, you can map it to a Go struct with `json:` tags, and Bun will handle marshaling for you. This is a strength over some ORMs that treat JSONB as opaque.
  - **Relationships/Eager Loading**: As shown earlier, Bun lets you define struct tags for relations and then use `.Relation("FieldName")` in queries to join and load related entities (see the sketch below this list). By default, `.Relation` uses LEFT JOIN (you can simulate inner joins or other types by adding conditions). This gives you fine-grained control over how related data is fetched (in one query vs multiple). If you prefer manual queries, you can always just write a join in Bun’s SQL builder directly. Unlike GORM, Bun will never load relations automatically without you asking, which avoids inadvertent N+1 issues.
  - **Migrations**: Bun provides a separate migration toolkit (in `github.com/uptrace/bun/migrate`). Migrations are defined in Go (or SQL strings) and versioned. It’s not as automagic as GORM’s auto-migrate, but it’s robust and integrates with the library nicely. This is great for keeping schema changes explicit.
  - **Extensibility**: Bun is built on interfaces similar to `database/sql`’s – it even wraps a `*sql.DB`. You can therefore use any lower-level tool with it (e.g., you could execute a `*pgx.Conn` query if needed and still scan into Bun models). Bun allows custom value scanners, so if you have a complex type you want to store, you implement `sql.Scanner`/`driver.Valuer` and Bun will use it. For extending Bun’s functionality, since it’s relatively straightforward, many people just write additional helper functions on top of Bun’s API in their projects rather than distinct plugins. The library itself is evolving, so new features (like additional query helpers or integrations) are being added by the maintainers.
  - **Other features**: Bun supports context cancellation (all queries accept `ctx`), it has a connection pool via `database/sql` underneath (configurable there), and it supports prepared statement caching (through the driver if using `pgx`). There’s also a nice feature where Bun can scan to struct pointers or primitive slices easily (for example, selecting one column into a `[]string` directly). It’s small conveniences like this that make Bun enjoyable for those who dislike repetitive scanning code.
In summary, Bun’s feature set covers 90% of ORM needs but with an emphasis on transparency and performance. It may not have every bell and whistle (for instance, it doesn’t do automagic schema updates or have an active record pattern), but it provides the building blocks to implement whatever you need on top. Because it’s young, expect its feature set to keep expanding, guided by real-world use cases (the maintainers often add features as users request them).
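Here is a minimal sketch of the explicit relation loading described above, with a hypothetical has-many relation declared via struct tags (model and column names are illustrative):

```go
type Order struct {
	bun.BaseModel `bun:"table:orders"`

	ID     int64 `bun:",pk,autoincrement"`
	UserID int64
	Total  int64
}

type User struct {
	bun.BaseModel `bun:"table:users"`

	ID     int64    `bun:",pk,autoincrement"`
	Name   string
	Orders []*Order `bun:"rel:has-many,join:id=user_id"`
}

// Nothing is loaded unless you ask: Relation("Orders") tells Bun to
// also fetch the related rows for each user.
var users []User
err := db.NewSelect().
	Model(&users).
	Relation("Orders").
	Where("name = ?", name).
	Scan(ctx)
```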
- **sqlc Features**: sqlc’s “features” are quite different, since it’s a generator:
  - **Full SQL support**: The biggest feature is simply that you’re using PostgreSQL’s own features directly. If Postgres supports it, you can use it in sqlc. This includes CTEs, window functions, JSON operators, spatial queries (PostGIS), etc. There’s no need for the library itself to implement anything special to use these.
  - **Type mappings**: sqlc is smart about mapping SQL types to Go types. It handles standard types and lets you configure or extend mappings for custom types or enums. For example, a Postgres `UUID` can map to a `github.com/google/uuid` type if you want, or a domain type can map to the underlying Go type.
  - **Null handling**: It can generate `sql.NullString` or pointers for nullable columns, depending on your preference, so you don’t have to fight with scanning nulls (see the sketch below this list).
  - **Batch operations**: While sqlc itself doesn’t provide a high-level API for batching, you can certainly write a bulk-insert SQL statement and generate code for it. Or call a stored procedure – that’s another feature: since it’s SQL, you can leverage stored procedures or DB functions and have sqlc generate a Go wrapper.
  - **Multi-statement queries**: You can put multiple SQL statements in one named query and sqlc will execute them in a transaction (if the driver supports it), returning results of the last query or whatever you specify. This is a way to, say, do a “create and then select” in one call.
  - **Extensibility**: Being a compiler, extensibility comes in the form of plugins for new languages or community contributions to support new SQL constructs. For example, if a new PostgreSQL data type comes out, sqlc can be updated to support mapping it. In your application, you might extend around sqlc by writing wrapper functions. Since the generated code is under your control, you could modify it – though normally you wouldn’t; you’d adjust the SQL and re-generate. If needed, you can always mix raw `database/sql` calls with sqlc in your codebase (there’s no conflict, since sqlc just outputs some Go code).
  - **Minimal runtime dependencies**: The code sqlc generates typically depends only on the standard `database/sql` (and a specific driver like pgx). There’s no heavy runtime library; it’s just some helper types. This means zero overhead in production from the library side – all the work is at compile time. It also means you won’t get features like an object cache or identity map (as some ORMs have) – if you need caching, you’d implement that in your service layer or use a separate library.
In effect, sqlc’s “feature” is that it shrinks the gap between your SQL database and your Go code without adding extra stuff in between. It’s a specialized tool – it doesn’t do migrations, it doesn’t track object state, etc., by design. Those concerns are left to other tools or to the developer. This is appealing if you want a lean data layer, but if you’re looking for a one-and-done solution that handles everything from schema to queries to relationships in code, sqlc alone isn’t it – you’d pair it with other patterns or tools.
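As a sketch of that null-handling point: given a hypothetical nullable `bio` column on `users`, sqlc’s default mapping is `sql.NullString`, which the generated row struct exposes directly (field names here are illustrative):

```go
row, _ := queries.GetUser(ctx, id) // error handling omitted

// Nullable columns arrive as sql.Null* values unless you configure
// pointer mappings in sqlc's config.
bio := "(no bio)"
if row.Bio.Valid { // sql.NullString: check Valid before using String
	bio = row.Bio.String
}
fmt.Println(row.Name, bio)
```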
To recap this section, GORM is the powerhouse of features and a clear choice if you need a mature, extensible ORM that can be adapted via plugins. Ent offers a modern, type-safe take on features, prioritizing correctness and maintainability (with things like codegen, hooks, and integrations into API layers). Bun provides the essentials and banks on SQL proficiency, making it easy to extend by writing more SQL or slight wrappers. sqlc strips the features to the bare metal – it’s essentially as feature-rich as SQL itself, but anything beyond that (caching, etc.) is up to you to layer on.
## Code Example: CRUD Operations in Each ORM
Nothing illustrates the differences better than seeing how each tool handles basic CRUD for a simple model. Let’s assume we have a `User` model with fields `ID`, `Name`, and `Email`. Below are side-by-side code snippets for creating, reading, updating, and deleting a user in each library, using PostgreSQL as the database.
**Note:** In all cases, assume we have an established database connection (e.g., `db` or `client`) already set up, and error handling is omitted for brevity. The purpose is to compare the API and verbosity of each approach.
### GORM (Active Record style)
```go
// Model definition
type User struct {
	ID    uint `gorm:"primaryKey"`
	Name  string
	Email string
}

// Create a new user
user := User{Name: "Alice", Email: "alice@example.com"}
db.Create(&user) // INSERT INTO users (name,email) VALUES ('Alice','alice@example.com')

// Read (find by primary key)
var u User
db.First(&u, user.ID) // SELECT * FROM users WHERE id = X LIMIT 1

// Update (single column)
db.Model(&u).Update("Email", "alice_new@example.com")
// (generates: UPDATE users SET email='alice_new@example.com' WHERE id = X)

// Delete
db.Delete(&User{}, u.ID) // DELETE FROM users WHERE id = X
```
### Ent (Codegen, fluent API)
```go
// (Ent schema is defined elsewhere and code generated. Assume client is an *ent.Client.)

// Create a new user
u, err := client.User.
	Create().
	SetName("Alice").
	SetEmail("alice@example.com").
	Save(ctx)
// SQL: INSERT INTO users (name, email) VALUES ('Alice','alice@example.com') RETURNING id, name, email

// Read (by ID)
u2, err := client.User.Get(ctx, u.ID)
// SQL: SELECT * FROM users WHERE id = X

// Update (single field)
u3, err := client.User.
	UpdateOneID(u.ID).
	SetEmail("alice_new@example.com").
	Save(ctx)
// SQL: UPDATE users SET email='alice_new@example.com' WHERE id = X RETURNING id, name, email

// Delete
err = client.User.DeleteOneID(u.ID).Exec(ctx)
// SQL: DELETE FROM users WHERE id = X
```
### Bun
```go
// Model definition
type User struct {
	bun.BaseModel `bun:"table:users"`

	ID    int64  `bun:",pk,autoincrement"`
	Name  string `bun:",notnull"`
	Email string `bun:",unique,notnull"`
}

// Create a new user
user := &User{Name: "Alice", Email: "alice@example.com"}
_, err := db.NewInsert().Model(user).Exec(ctx)
// INSERT INTO users (name, email) VALUES ('Alice','alice@example.com')

// Read (find by ID)
var u User
err = db.NewSelect().Model(&u).
	Where("id = ?", user.ID).
	Scan(ctx)
// SELECT * FROM users WHERE id = X

// Update (single column)
_, err = db.NewUpdate().Model(&u).
	Set("email = ?", "alice_new@example.com").
	Where("id = ?", u.ID).
	Exec(ctx)
// UPDATE users SET email='alice_new@example.com' WHERE id = X

// Delete
_, err = db.NewDelete().Model((*User)(nil)).
	Where("id = ?", u.ID).
	Exec(ctx)
// DELETE FROM users WHERE id = X
```
### sqlc
```go
// Assume we wrote SQL in files and sqlc generated a Queries struct with methods.
queries := New(db) // New takes a *sql.DB (or pgx.Conn) and returns generated Queries

// Create a new user (generated method executes the INSERT and scans the result).
// Note: for queries with more than one parameter, sqlc generates a params struct.
newUser, err := queries.CreateUser(ctx, CreateUserParams{Name: "Alice", Email: "alice@example.com"})
// SQL in queries.sql -> INSERT INTO users (name, email) VALUES ($1, $2) RETURNING id, name, email

// Read (by ID)
user, err := queries.GetUser(ctx, newUser.ID)
// SQL -> SELECT id, name, email FROM users WHERE id = $1

// Update (email by ID)
updatedUser, err := queries.UpdateUserEmail(ctx, UpdateUserEmailParams{ID: newUser.ID, Email: "alice_new@example.com"})
// SQL -> UPDATE users SET email=$2 WHERE id = $1 RETURNING id, name, email

// Delete
err = queries.DeleteUser(ctx, newUser.ID)
// SQL -> DELETE FROM users WHERE id = $1
```
As the code above shows, each approach has a different feel:
- **GORM** uses struct methods and fluent chaining on `db.Model(&obj)` or directly on the `db` object. It populates the struct with returned values (e.g., after `Create`, `user.ID` is set). It also hides SQL details by design – you generally don’t see the query unless debugging.
- **Ent** uses a generated fluent API. Notice how methods like `Create().SetX().Save(ctx)` or `UpdateOneID(id).SetX().Save(ctx)` clearly separate the build and execute phases. Ent returns typed entity objects (e.g., `*ent.User`, corresponding to rows) and errors, similar to how an SQL query would either return results or an error.
- **Bun** requires specifying more explicitly (e.g., using `Set("email = ?", ...)` for updates), which is very much like writing SQL but with Go syntax. After an insert, the struct `user` isn’t automatically filled with the new ID unless you add a `RETURNING` clause (Bun supports `.Returning()` if needed – see the sketch after this list). The example above keeps things simple.
- **sqlc** appears like calling any Go function. We call `queries.CreateUser`, etc., and under the hood those are executing prepared statements. The SQL is written in external files, so while you don’t see it in the Go code, you have full control over it. The returned objects (e.g., `newUser`) are plain Go structs generated by sqlc to model the data.
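For completeness, a minimal sketch of the `.Returning()` option mentioned in the Bun bullet above:

```go
user := &User{Name: "Alice", Email: "alice@example.com"}
_, err := db.NewInsert().
	Model(user).
	Returning("id"). // ask Postgres to send the generated ID back
	Exec(ctx)
// user.ID is now populated from INSERT ... RETURNING id
```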
One can observe verbosity differences (GORM is quite concise; Bun and sqlc require a bit more typing in code or SQL) and style differences (Ent and GORM offer higher-level abstractions whereas Bun and sqlc are closer to raw queries). Depending on your preferences, you might favor explicitness over brevity or vice versa.
## TL;DR
Choosing the “right” ORM or database library in Go comes down to your application’s needs and your team’s preferences:
- **GORM** is a great choice if you want a tried-and-true ORM that handles a lot for you. It shines in quick development of CRUD applications where convenience and a rich feature set are more important than squeezing out every drop of performance. The community support and documentation are excellent, which can smooth over the rough edges of its learning curve. Be mindful to use its features appropriately (e.g., use `Preload` or `Joins` deliberately to avoid N+1 pitfalls) and expect some overhead. In exchange, you get productivity and a one-stop solution for most problems. If you need things like multiple-database support or an extensive plugin ecosystem, GORM has you covered.
- **Ent** appeals to those who prioritize type safety, clarity, and maintainability. It is well suited for large codebases where schema changes are frequent and you want the compiler on your side to catch errors. Ent may involve more upfront design (defining schemas, running generation), but it pays off as the project grows – your code remains robust and refactor-friendly. Performance-wise, it can handle heavy loads and complex queries efficiently, often better than active-record ORMs, thanks to its optimized SQL generation and caching. Ent is slightly less “plug-and-play” for quick scripts, but for long-lived services it provides a solid, scalable foundation. Its modern approach (and active development) make it a forward-looking choice.
- **Bun** is ideal for developers who say “I know SQL and I just want a lightweight helper around it.” It foregoes some magic to give you control and speed. If you’re building a performance-sensitive service and aren’t afraid to be explicit in your data access code, Bun is a compelling option. It’s also a good middle ground if you’re migrating from raw `database/sql` + `sqlx` and want to add structure without sacrificing much efficiency. The trade-off is a smaller community and fewer high-level abstractions – you will write a bit more code by hand. But as the Bun docs put it, it doesn’t get in your way. Use Bun when you want an ORM that feels like using SQL directly, especially for PostgreSQL-centric projects where you can leverage database-specific features.
- **sqlc** is in a category of its own. It’s perfect for the Go team that says “we don’t want an ORM, we want compile-time assurances for our SQL.” If you have strong SQL skills and prefer to manage queries and schema in SQL (perhaps you have DBAs or just enjoy crafting efficient SQL), sqlc will likely boost your productivity and confidence. Performance is essentially optimal, since nothing stands between your query and the database. The only reasons not to use sqlc would be if you truly dislike writing SQL or if your queries are so dynamic that writing many variants becomes burdensome. Even then, you might use sqlc for the majority of static queries and handle the few dynamic cases with another approach. sqlc also plays well with others – it doesn’t preclude using an ORM in parts of your project (some projects use GORM for simple stuff and sqlc for the critical paths, for example). In short, choose sqlc if you value explicitness, zero overhead, and type-safe SQL – it’s a powerful tool to have in your toolbox.
Finally, it’s worth noting that these tools are not mutually exclusive in the ecosystem. They each have their pros and cons, and in Go’s pragmatic spirit, many teams evaluate trade-offs on a case-by-case basis. It’s not uncommon to start with an ORM like GORM or Ent for speed of development, and then use sqlc or Bun for specific hot paths that need maximum performance. All four solutions are actively maintained and widely used, so there’s no “wrong” choice overall – it’s about the right choice for your context. Hope this comparison gave you a clearer picture of how GORM, Ent, Bun, and sqlc stack up, and helps you make an informed decision for your next Go project.