Validate queries at compile time (avoid runtime errors)
Syntax checking is already supported (using the @Query annotation). ✅
Validate queries by accessing the DB schema
Add support for SQLite JVM target.
WASM support (?).
Pure Kotlin implementation for PostgreSQL.
Pure Kotlin implementation for MySQL.
Pure Kotlin implementation for SQLite.
PostgreSQL
MySQL
SQLite
The driver is designed with full support for non-blocking I/O, enabling seamless integration with modern,
high-performance applications. By leveraging asynchronous, non-blocking operations, it ensures efficient resource
management, reduces latency, and improves scalability.
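For example, because every query method is a suspend function, independent queries can run concurrently from coroutines without tying up threads. A minimal sketch (the helper name is hypothetical; it assumes a db instance created as shown later and the string-based fetchAll overload used in the examples below):
import kotlinx.coroutines.async
import kotlinx.coroutines.coroutineScope

// Runs two independent queries concurrently on the same pool.
// No threads are blocked while the driver waits on the database.
suspend fun concurrentReads(db: PostgreSQL) = coroutineScope {
    val rows = async { db.fetchAll("select * from sqlx4k;").getOrThrow() }
    val count = async { db.fetchAll("select count(*) from sqlx4k;").getOrThrow() }
    rows.await() to count.await()
}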
The driver allows you to configure connection pool settings directly from its constructor, giving you fine-grained
control over how database connections are managed. These settings are designed to optimize performance and resource
utilization for your specific application requirements.
Key Configuration Options:
minConnections
Specifies the minimum number of connections to maintain in the pool at all times. This ensures that a baseline number
of connections are always ready to serve requests, reducing the latency for acquiring connections during peak usage.
maxConnections
Defines the maximum number of connections that can be maintained in the pool. This setting helps limit resource usage
and ensures the pool does not exceed the available database or system capacity.
acquireTimeout
Sets the maximum duration to wait when attempting to acquire a connection from the pool. If a connection cannot be
acquired within this time, an exception is thrown, allowing you to handle connection timeouts gracefully.
idleTimeout
Specifies the maximum duration a connection can remain idle before being closed and removed from the pool. This helps
clean up unused connections, freeing up resources.
maxLifetime
Defines the maximum lifetime for individual connections. Once a connection reaches this duration, it is closed and
replaced, even if it is active, helping prevent issues related to stale or long-lived connections.
By adjusting these parameters, you can fine-tune the driver's behavior to match the specific needs of your application,
whether you're optimizing for low-latency responses, high-throughput workloads, or efficient resource utilization.
// Additionally, you can set minConnections, acquireTimeout, idleTimeout, etc.
val options = Driver.Pool.Options.builder()
.maxConnections(10)
.build()
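A fuller sketch of the builder, assuming it exposes one setter per option described above and (an assumption, not confirmed by this document) that the timeout and lifetime settings accept kotlin.time.Duration values:
import kotlin.time.Duration.Companion.minutes
import kotlin.time.Duration.Companion.seconds

// Assumed setter names mirror the option names listed above.
val tuned = Driver.Pool.Options.builder()
    .minConnections(2)            // keep a warm baseline of connections
    .maxConnections(10)           // upper bound for the pool
    .acquireTimeout(30.seconds)   // fail fast if the pool is exhausted
    .idleTimeout(10.minutes)      // close connections that sit idle
    .maxLifetime(30.minutes)      // recycle long-lived connections
    .build()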
/**
 * The following urls are supported:
 * postgresql://
 * postgresql://localhost
 * postgresql://localhost:5433
 * postgresql://localhost/mydb
 *
 * Additionally, you can use the `postgreSQL` function, if you are working in a multiplatform setup.
 */
val db = PostgreSQL(
    url = "postgresql://localhost:15432/test",
    username = "postgres",
    password = "postgres",
options = options
)
/**
 * The connection URL should follow the pattern below,
 * as described by [MySQL](https://dev.mysql.com/doc/connector-j/8.0/en/connector-j-reference-jdbc-url-format.html).
 * The generic format of the connection URL:
 * mysql://[host][/database][?properties]
 */
val db = MySQL(
    url = "mysql://localhost:13306/test",
    username = "mysql",
    password = "mysql"
)
/**
 * The following urls are supported:
 * `sqlite::memory:`          | Open an in-memory database.
 * `sqlite:data.db`           | Open the file `data.db` in the current directory.
 * `sqlite://data.db`         | Open the file `data.db` in the current directory.
 * `sqlite:///data.db`        | Open the file `data.db` from the root (`/`) directory.
 * `sqlite://data.db?mode=ro` | Open the file `data.db` for read-only access.
 */
val db = SQLite(
    url = "sqlite://test.db", // If the `test.db` file is not found, a new db will be created.
    options = options
)
Acquiring and using connections
The driver provides two complementary ways to run queries:
Directly through the database instance (recommended). Each call acquires a pooled connection, executes the work, and
returns it to the pool automatically.
Manually acquire a connection from the pool when you need to batch multiple operations on the same connection without
starting a transaction.
Notes:
When you manually acquire a connection, you must release it to return it to the pool.
Examples (PostgreSQL shown, similar to MySQL/SQLite):
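A minimal sketch of the direct (recommended) path; it assumes the database instance exposes the same execute/fetchAll methods used on connections and transactions elsewhere in this document:
// Each statement borrows a pooled connection and returns it automatically.
db.execute("insert into users(id, name) values (1, 'Alice');").getOrThrow()
val rows = db.fetchAll("select * from users;").getOrThrow()
rows.forEach { println(it) }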
// Manual connection acquisition (remember to release)
val conn: Connection = db.acquire().getOrThrow()
try {
conn.execute("insert into users(id, name) values (2, 'Bob');").getOrThrow()
val rs = conn.fetchAll("select * from users;").getOrThrow()
// ...
} finally {
conn.release().getOrThrow() // Return to pool
}
// With named parameters:
val st1 = Statement
.create("select * from sqlx4k where id = :id")
.bind("id", 65)
db.fetchAll(st1).getOrThrow().map {
val id:ResultSet.Row.Column= it.get("id")
Test(id = id.asInt())
}
// With positional parameters:
val st2 = Statement
.create("select * from sqlx4k where id = ?")
.bind(0, 65)
db.fetchAll(st2).getOrThrow().map {
val id:ResultSet.Row.Column= it.get("id")
Test(id = id.asInt())
}
object Sqlx4kRowMapper : RowMapper<Sqlx4k> {
    override fun map(row: ResultSet.Row): Sqlx4k {
        val id: ResultSet.Row.Column = row.get(0)
        val test: ResultSet.Row.Column = row.get(1)
        // Use built-in mapping methods to map the values to the corresponding type.
        return Sqlx4k(id = id.asInt(), test = test.asString())
}
}
val res:List<Sqlx4k> = db.fetchAll("select * from sqlx4k limit 100;", Sqlx4kRowMapper).getOrThrow()
val tx1: Transaction = db.begin().getOrThrow()
tx1.execute("delete from sqlx4k;").getOrThrow()
tx1.fetchAll("select * from sqlx4k;").getOrThrow().forEach { println(it) }
tx1.commit().getOrThrow()
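When you manage the transaction by hand, you are also responsible for rolling back on failure. A minimal sketch, assuming Transaction exposes a rollback() method (the method the scoped API below triggers automatically):
val tx2: Transaction = db.begin().getOrThrow()
try {
    tx2.execute("update sqlx4k set test = 'updated' where id = 65;").getOrThrow()
    tx2.commit().getOrThrow()
} catch (e: Exception) {
    // Undo any work performed in this transaction before rethrowing.
    tx2.rollback().getOrThrow()
    throw e
}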
You can also execute entire blocks in a transaction scope.
db.transaction {
execute("delete from sqlx4k;").getOrThrow()
fetchAll("select * from sqlx4k;").getOrThrow().forEach { println(it) }
    // At the end of the block, the transaction is committed automatically.
    // If any error occurs, it will automatically trigger the rollback method.
}
TransactionContext (coroutines)
When using coroutines, you can propagate a transaction through the coroutine context using TransactionContext.
This allows you to write small, composable suspend functions that:
start a transaction at the boundary of your use case, and
inside helper functions call TransactionContext.current() to participate in the same transaction without having to
propagate Transaction or Driver parameters everywhere.
val db = PostgreSQL(
    url = "postgresql://localhost:15432/test",
    username = "postgres",
    password = "postgres",
options = options
)
fun main() = runBlocking {
TransactionContext.new(db) {
        // `this` is a TransactionContext and also a Transaction (delegation),
        // so you can call query methods directly:
execute("insert into sqlx4k (id, test) values (66, 'test');").getOrThrow()
// In deeper code, fetch the same context and keep using the same tx
doBusinessLogic()
doMoreBusinessLogic()
doExtraBusinessLogic()
}
}
suspend fun doBusinessLogic() {
    // Get the active transaction from the coroutine context
    val tx = TransactionContext.current()
// Continue operating within the same database transaction
tx.execute("update sqlx4k set test = 'updated' where id = 66;").getOrThrow()
}
// Or you can use the `withCurrent` method to get the transaction and execute the block in an ongoing transaction.
suspend fun doMoreBusinessLogic(): Unit = TransactionContext.withCurrent {
// Continue operating within the same database transaction
}
// You can also pass the db instance to `withCurrent`.
// If a transaction is already active, the block runs within it; otherwise, a new transaction is started for the block.
suspend fun doExtraBusinessLogic(): Unit = TransactionContext.withCurrent(db) {
// Continue operating within the same database transaction
}
Code-Generation, CRUD and @Repository Implementations
To use this feature you will need to add the KSP plugin to your project.
plugins {
alias(libs.plugins.ksp)
}
// Then you need to configure the processor (it will generate the necessary code files).
ksp {
    // Optional: pick SQL dialect for CRUD generation from @Table classes.
    // Currently only "mysql" is special-cased; everything else falls back to a generic ANSI-like dialect.
    // This setting affects the shape of INSERT/UPDATE/DELETE that TableProcessor emits.
    // It does NOT affect @Query validation (see notes below).
    // arg("dialect", "mysql")

    // Required: where to place the generated sources.
arg("output-package", "io.github.smyrgeorge.sqlx4k.examples.postgres")
    // Compile-time SQL syntax checking for @Query methods (default = true).
    // Set to "false" to turn it off if you use vendor-specific syntax not understood by the parser.
    // arg("validate-sql-syntax", "false")
}
dependencies {
// Will generate code for macosArm64. Add more targets if you want.
add("kspMacosArm64", implementation("io.github.smyrgeorge:sqlx4k-codegen:x.y.z"))
}
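If you build for several targets, KSP uses one configuration per compilation, following the standard ksp&lt;TargetName&gt; naming. A sketch assuming you also compile for linuxX64 (the extra target is an example, not a requirement):
dependencies {
    // One KSP configuration per target you compile for.
    add("kspMacosArm64", "io.github.smyrgeorge:sqlx4k-codegen:x.y.z")
    add("kspLinuxX64", "io.github.smyrgeorge:sqlx4k-codegen:x.y.z")
}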
Then create your data class that will be mapped to a table:
@Table("sqlx4k")
data class Sqlx4k(
    @Id(insert = true) // Will be included in the insert query.
    val id: Int,
    val test: String
)
@Repository(mapper = Sqlx4kRowMapper::class)
interface Sqlx4kRepository : CrudRepository<Sqlx4k> {
    // The processor will validate the SQL syntax in the @Query methods.
    // If you want to disable this validation, you can set the "validate-sql-syntax" arg to "false".
    @Query("SELECT * FROM sqlx4k WHERE id = :id")
    suspend fun findOneById(context: QueryExecutor, id: Int): Result<Sqlx4k?>

    @Query("SELECT * FROM sqlx4k")
    suspend fun findAll(context: QueryExecutor): Result<List<Sqlx4k>>

    @Query("SELECT count(*) FROM sqlx4k")
    suspend fun countAll(context: QueryExecutor): Result<Long>
}
Note
Besides your @Query methods, because your interface extends CrudRepository<T>, the generator also adds the CRUD
helper methods automatically: insert, update, delete, and save.
Then in your code you can use it like:
// Insert a new record.
val record = Sqlx4k(id = 1, test = "test")
val inserted: Sqlx4k = Sqlx4kRepositoryImpl.insert(db, record).getOrThrow()

// Execute a generated query.
val all: List<Sqlx4k> = Sqlx4kRepositoryImpl.findAll(db).getOrThrow()
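The generated CRUD helpers can be called the same way. A minimal sketch, assuming (this is an assumption, not confirmed above) that update, delete, and save share insert's (db, entity) calling convention:
// Assumed: update/delete/save take (db, entity) like insert does.
val updated: Sqlx4k = Sqlx4kRepositoryImpl.update(db, record.copy(test = "updated")).getOrThrow()
Sqlx4kRepositoryImpl.save(db, Sqlx4k(id = 2, test = "upserted")).getOrThrow()
Sqlx4kRepositoryImpl.delete(db, updated).getOrThrow()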
What it is: during code generation, sqlx4k parses the SQL string in each @Query method using JSqlParser. If the
parser detects a syntax error, the build fails early with a clear error message pointing to the offending repository
method.
What it checks: only SQL syntax. It does not verify that tables/columns exist, parameter names match, or types are
compatible (see the sketch after this list).
When it runs: at KSP processing time, before your code is compiled/run.
Dialect notes: validation is dialect-agnostic and aims for an ANSI/portable subset. Some vendor-specific features
(e.g., certain MySQL or PostgreSQL extensions) may not be recognized. If you hit a false positive, you can disable
validation per module with ksp arg validate-sql-syntax=false, or disable it per query with
@Query(checkSyntax = false).
Most reliable with: SELECT, INSERT, UPDATE, DELETE statements. DDL or very advanced constructs may not be fully
supported.
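For instance, a query like the following passes the syntax check even though it references a column that may not exist; such mistakes only surface at runtime. The interface and method names here are purely illustrative:
@Repository(mapper = Sqlx4kRowMapper::class)
interface Sqlx4kReports {
    // Syntactically valid, so it compiles; "non_existing_column" is only caught when the query runs.
    @Query("SELECT * FROM sqlx4k WHERE non_existing_column = :id")
    suspend fun findByMissingColumn(context: QueryExecutor, id: Int): Result<List<Sqlx4k>>
}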
Example of a build error you might see if your query is malformed:
> Task :compileKotlin
Invalid SQL in function findAllBy: Encountered "FROMM" at line 1, column 15
Tip: keep it enabled to catch typos early; if you rely heavily on vendor-specific syntax not yet supported by the
parser, turn it off either globally or just for a specific method:
Globally (module-wide):
ksp { arg("validate-sql-syntax", "false") }
Per query:
@Repository(mapper = UserMapper::class)
interface UserRepository {
    @Query("select * from users where id = :id", checkSyntax = false)
    suspend fun findOneById(context: QueryExecutor, id: Int): Result<User?>
}
Run any pending migrations against the database, and validate previously applied migrations against the current
migration source to detect accidental changes.
val res = db.migrate(
path ="./db/migrations",
table ="_sqlx4k_migrations",
    afterFileMigration = { m, d -> println("Migration of file: $m, took $d") }
).getOrThrow()
println("Migration completed. $res")
This process will create a table with name _sqlx4k_migrations. For more information, take a look at
the examples.
jvm (only PostgreSQL and MySQL are supported at the moment)
iosArm64
iosSimulatorArm64
androidNativeX64
androidNativeArm64
macosArm64
macosX64
linuxArm64
linuxX64
mingwX64
wasmWasi (potential future candidate)
implementation("io.github.smyrgeorge:sqlx4k-postgres:x.y.z")
// or for MySQL
implementation("io.github.smyrgeorge:sqlx4k-mysql:x.y.z")
// or for SQLite
implementation("io.github.smyrgeorge:sqlx4k-sqlite:x.y.z")
If you are building your project on Windows for the mingwX64 target and you encounter the following error:
lld-link: error: -exclude-symbols:___chkstk_ms is not allowed in .drectve
First, you need to start up the postgres instance.
Then run the examples.
# For macosArm64
./examples/postgres/build/bin/macosArm64/releaseExecutable/postgres.kexe
./examples/mysql/build/bin/macosArm64/releaseExecutable/mysql.kexe
./examples/sqlite/build/bin/macosArm64/releaseExecutable/sqlite.kexe
# If you run on another platform, use the binary for the corresponding target.
Here are small, self‑contained snippets for the most common tasks.
For full runnable apps, see the modules under: