Type-safe, normalized filter model and utilities for building dynamic queries with JetBrains Exposed and Ktor.

- Accept filters as JSON, parse them into a normalized filter tree, and safely apply them to Exposed queries.
- Strong typing and SQL injection safety via the Exposed DSL.
- Flexible: use flat or nested (tree) filter structures with AND/OR/NOT combinators.

Modules:

- `core` – domain model (`FilterOperator`, `FilterNode`, normalization utils)
- `jdbc` – translation of filters to Exposed DSL predicates
- `rest` – Ktor helpers to parse HTTP JSON bodies into `FilterRequest`

Artifacts (Maven Central):

- `ua.wwind.exposed-filters:exposed-filters-core:<version>`
- `ua.wwind.exposed-filters:exposed-filters-jdbc:<version>`
- `ua.wwind.exposed-filters:exposed-filters-rest:<version>`

Contents:
- Installation
- Compatibility
- Quick start
- DSL builder for FilterRequest
- JSON serialization
- Field naming in filters
- Filtering by related entities (references)
- Filtering by JSON/JSONB fields
- Validation and errors
- JSON request format
- Supported operators
- Type handling (JDBC adapter)
- Excluding fields from filters
- Custom column type mappers
- Custom expression maps for complex queries
- Example: filtering by date and timestamp
- Why use this
- CI and Release
- License
## Installation

Prerequisites: Kotlin 2.3.20 and the `mavenCentral()` repository.

```kotlin
dependencies {
    implementation("ua.wwind.exposed-filters:exposed-filters-core:1.6.2")
    implementation("ua.wwind.exposed-filters:exposed-filters-jdbc:1.6.2")
    implementation("ua.wwind.exposed-filters:exposed-filters-rest:1.6.2")
}
```

## Compatibility

| Library version | Kotlin | Ktor | Exposed |
|---|---|---|---|
| 1.6.1 | 2.3.20 | 3.4.2 | 1.2.0 |
| 1.5.0 | 2.3.20 | 3.4.1 | 1.1.1 |
| 1.4.0 | 2.3.0 | 3.3.3 | 1.0.0 |
| 1.3.0 | 2.3.0 | 3.3.2 | 1.0.0-rc-4 |
| 1.2.1 | 2.2.21 | 3.3.2 | 1.0.0-rc-4 |
| 1.2.0 | 2.2.20 | 3.3.1 | 1.0.0-rc-2 |
| 1.0.7 | 2.2.20 | 3.3.0 | 1.0.0-rc-1 |
| 1.0.1 | 2.2.0 | 3.2.3 | 1.0.0-beta-1 |
## Quick start

```kotlin
object Users : Table("users") {
    val id: Column<Int> = integer("id").autoIncrement()
    val name: Column<String> = varchar("name", 100)
    val age: Column<Int> = integer("age")

    override val primaryKey = PrimaryKey(id)
}
```

```kotlin
routing {
    post("/users") {
        val filter = call.receiveFilterRequestOrNull()
        val result = transaction {
            Users
                .selectAll()
                .applyFiltersOn(Users, filter)
                .map { row ->
                    mapOf(
                        "id" to row[Users.id],
                        "name" to row[Users.name],
                        "age" to row[Users.age]
                    )
                }
        }
        call.respond(result)
    }
}
```

## DSL builder for FilterRequest

Build `FilterRequest` objects programmatically using a type-safe Kotlin DSL. Useful for constructing filters in tests, backend services, or when you need to modify filters before applying them.
```kotlin
val filter = filterRequest {
    eq("status", "ACTIVE")
    gte("age", 18)
    contains("name", "John")
}
```

Use infix functions for a more readable syntax:
```kotlin
val filter = filterRequest {
    "status" eq "ACTIVE"
    "age" gte 18
    "name" contains "John"
    "role" inList listOf("ADMIN", "USER")
    "score" between 0..100
}
```

Create complex filter trees with `and`, `or`, and `not` blocks:
```kotlin
val filter = filterRequest {
    "active" eq true
    or {
        "role" eq "ADMIN"
        "role" eq "MODERATOR"
    }
    not {
        "status" eq "DELETED"
    }
}
```

All predicate builders at a glance:

```kotlin
val filter = filterRequest {
    // Equality
    eq("field", value)
    neq("field", value)

    // String operators
    contains("name", "substring")
    startsWith("name", "prefix")
    endsWith("name", "suffix")

    // Comparison
    gt("age", 18)
    gte("age", 18)
    lt("age", 65)
    lte("age", 65)

    // Collections
    inList("status", listOf("A", "B", "C"))
    notInList("status", listOf("DELETED", "ARCHIVED"))

    // Range
    between("age", 18, 65)
    "score" between (0 to 100) // Pair syntax
    "year" between 2020..2024  // ClosedRange syntax

    // Nullability
    isNull("deletedAt")
    isNotNull("email")

    // Or extension syntax:
    "deletedAt".isNull()
    "email".isNotNull()
}
```

These operators remove the predicate when the value is null:
```kotlin
val filter = filterRequest {
    eqOrRemove("status", statusParam)   // adds EQ if not null, removes if null
    gteOrRemove("minAge", minAgeParam)
    inListOrRemove("tags", tagsParam)
    betweenOrRemove("date", fromDate, toDate)
}
```

Special case: add `EQ` or `IS_NULL` based on the value:
```kotlin
val filter = filterRequest {
    eqOrIsNull("parentId", parentIdParam) // EQ if not null, IS_NULL if null
}
```

Convert an existing `FilterRequest` to a builder for modification:
```kotlin
val existingFilter: FilterRequest = call.receiveFilterRequestOrNull() ?: return

val modifiedFilter = existingFilter.toBuilder().apply {
    // Add a predicate if not already present
    eqIfAbsent("tenantId", currentTenantId)

    // Replace all predicates for a field
    replace("status", FilterOperator.IN, listOf("ACTIVE", "PENDING"))

    // Remove predicates for a field
    remove("internalField")
}.build()
```

## JSON serialization

Serialize `FilterRequest` objects to JSON for caching, logging, or sending to other services.
```kotlin
val filter = filterRequest {
    "status" eq "ACTIVE"
    "age" gte 18
    or {
        "role" eq "ADMIN"
        "role" eq "MODERATOR"
    }
}

val json = filter?.toJsonString()
// Produces JSON compatible with parseFilterRequestOrNull()
```

The serialized JSON can be parsed back using the existing `parseFilterRequestOrNull()` function:
```kotlin
val original = filterRequest {
    "status" eq "ACTIVE"
    "department" inList listOf("IT", "HR")
}

// Serialize
val json = original?.toJsonString()

// Parse back (e.g., from cache or another service)
val restored = parseFilterRequestOrNull(json!!)

// Use the restored filter
Users.selectAll().applyFiltersOn(Users, restored)
```

Typical use cases:

- Caching filters: store frequently used filters in Redis or other caches
- Audit logging: log filter requests in a readable JSON format
- Service communication: send filters between microservices
- Testing: assert filter structure in unit tests
## Field naming in filters

- Always use the Kotlin `Table` property name (usually camelCase) in your JSON filters, not the physical DB column name.
- This decouples your API from database naming conventions.
## Filtering by related entities (references)

Example with a UUID foreign key:

```kotlin
object Warehouses : Table("warehouses") {
    val id: Column<UUID> = uuid("id")
}

object Products : Table("products") {
    val id: Column<Int> = integer("id").autoIncrement()
    val warehouseId: Column<UUID> = reference("warehouse_id", Warehouses.id) // property name vs. DB column
}
```

Filter by the property `warehouseId` (maps to the DB column `warehouse_id`):

```json
{
  "filters": {
    "warehouseId": [ { "op": "EQ", "value": "11111111-1111-1111-1111-111111111111" } ]
  }
}
```

Notes:
- Property names are resolved via reflection. This also works for enums, UUIDs, booleans, numbers, and strings.
- You can filter by a field of a related entity using dot-paths on reference columns: `referenceField.nestedField`.
- Under the hood, this is implemented via an `EXISTS` subquery against the referenced table.
- Currently supports one-level nesting on reference columns.
Example: filter products by warehouse name prefix:

```json
{
  "filters": {
    "warehouseId.name": [ { "op": "STARTS_WITH", "value": "Cent" } ]
  }
}
```

Constraints:
- Nested paths are allowed only on reference columns; using `field.subField` on a non-reference column raises an error.
- Operator constraints still apply based on the target column type (e.g., `CONTAINS`/`STARTS_WITH` only for strings).
- Path nesting for references is limited to one level (e.g., `warehouseId.name`). Multi-hop paths like `a.b.c` are not supported.
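For the warehouse-name example above, the resulting query is conceptually equivalent to the following SQL. This is a sketch of the `EXISTS` shape only; the exact statement the library emits may differ:

```sql
SELECT products.*
FROM products
WHERE EXISTS (
    SELECT 1
    FROM warehouses
    WHERE warehouses.id = products.warehouse_id
      AND warehouses.name LIKE 'Cent%'
);
```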
## Filtering by JSON/JSONB fields

You can filter columns containing JSON or JSONB data using dot-path notation to access nested values within the JSON structure. The library uses Exposed's `extract()` with JSONPath to navigate into JSON objects.

Example: filter orders by the customer city stored in a JSONB column:

```kotlin
object Orders : Table("orders") {
    val id: Column<Int> = integer("id").autoIncrement()
    val payload: Column<Payload> = jsonb("payload", Json.Default) // JSONB column
}
```

Filter by a nested JSON field:
```json
{
  "filters": {
    "payload.customer.address.city": [
      { "op": "EQ", "value": "Kyiv" }
    ]
  }
}
```

All standard operators work with JSON fields:
- Equality: `EQ`, `NEQ`
- String search: `CONTAINS`, `STARTS_WITH`, `ENDS_WITH`
- Sets: `IN`, `NOT_IN`
- Ranges and comparisons: `BETWEEN`, `GT`, `GTE`, `LT`, `LTE`
- Nullability: `IS_NULL`, `IS_NOT_NULL`
The library automatically infers the JSON value type based on the raw string:

- Numbers: values like `123`, `45.67` are treated as `Double`
- Booleans: `true` or `false` (parsed with `toBooleanStrict()`)
- Dates: `YYYY-MM-DD` format → SQL `DATE`
- Timestamps: `YYYY-MM-DDTHH:MM` or `YYYY-MM-DDTHH:MM:SS` → SQL `TIMESTAMP`
- Strings: any other value
Example with multiple operators:

```json
{
  "filters": {
    "payload.status": [{ "op": "EQ", "value": "active" }],
    "payload.amount": [{ "op": "GT", "value": "100" }],
    "payload.tags": [{ "op": "CONTAINS", "value": "vip" }],
    "payload.createdAt": [{ "op": "GTE", "value": "2024-01-01T00:00:00" }]
  }
}
```

Constraints:
- JSON path filtering only works on columns marked with `JsonColumnMarker` (JSON/JSONB columns in Exposed)
- `CONTAINS`, `STARTS_WITH`, `ENDS_WITH` only work with string JSON values
- `BETWEEN` is not supported for boolean JSON values
- Comparison operators (`GT`, `GTE`, `LT`, `LTE`) are not supported for boolean JSON values
## Validation and errors

- Requests that reference unknown fields fail fast with a clear error message, e.g. `Unknown filter field: foo`.
- In the sample Ktor app, such cases are mapped to HTTP 400 Bad Request via a global `StatusPages` handler.
## JSON request format

You can send a flat structure or a nested tree. Both parse into a `FilterRequest`.

Flat structure:

```json
{
  "combinator": "AND",
  "filters": {
    "name": [{ "op": "CONTAINS", "value": "Al" }],
    "age": [{ "op": "GTE", "value": "18" }]
  }
}
```

Nested tree:

```json
{
  "combinator": "OR",
  "children": [
    { "filters": { "age": [{ "op": "BETWEEN", "values": ["18", "25"] }] } },
    { "filters": { "name": [{ "op": "STARTS_WITH", "value": "Da" }] } }
  ]
}
```

Arbitrary depth is supported for grouping with AND/OR/NOT combinators. You can nest groups within `children` to any level to express complex logic.
```json
{
  "combinator": "AND",
  "children": [
    {
      "filters": {
        "status": [{ "op": "IN", "values": ["ACTIVE", "PENDING"] }]
      }
    },
    {
      "combinator": "OR",
      "children": [
        { "filters": { "name": [{ "op": "CONTAINS", "value": "Ann" }] } },
        {
          "combinator": "AND",
          "children": [
            { "filters": { "age": [{ "op": "GTE", "value": "21" }] } },
            { "filters": { "age": [{ "op": "LT", "value": "30" }] } }
          ]
        }
      ]
    }
  ]
}
```

Negate a group by using `"combinator": "NOT"`. All children at that level are combined with AND and wrapped in NOT.
```json
{
  "combinator": "NOT",
  "children": [
    { "filters": { "status": [{ "op": "EQ", "value": "DELETED" }] } },
    { "filters": { "archived": [{ "op": "EQ", "value": "true" }] } }
  ]
}
```

Notes:
- Nested `children` can be combined with `AND` or `OR` at each level; this supports many levels of nesting.
- Single-child groups are normalized away; empty groups are ignored.
- A NOT group with a single child is preserved (not flattened) and negates that child's conjunction.
- This is independent from nested field paths for references, which remain limited to one level (see "Filtering by related entities" above).
## Supported operators

- Equality: `EQ`, `NEQ`
- String search: `CONTAINS`, `STARTS_WITH`, `ENDS_WITH`
- Sets: `IN`, `NOT_IN` (including array and JSON fields)
- Ranges and comparisons: `BETWEEN`, `GT`, `GTE`, `LT`, `LTE`
- Nullability: `IS_NULL`, `IS_NOT_NULL`
## Type handling (JDBC adapter)

Automatic conversion from JSON strings to column types:

- Int, Long, Short, Double
- String (`VarChar`, `Text`) with `LIKE` support for the string operators
- UUID
- Boolean (`toBooleanStrict()`: only "true"/"false")
- Enums (by enum constant `name`)
- Date and time
  - Date-only (`LocalDate`): supports both `java.time.LocalDate` and `kotlinx.datetime.LocalDate` backed columns
  - Timestamp/DateTime: supports `LocalDateTime`, `Instant` (including `kotlin.time.Instant`), and SQL `TIMESTAMP`
  - Accepted formats in JSON:
    - Date: `YYYY-MM-DD`
    - Timestamp: ISO date-time with optional seconds, `YYYY-MM-DDTHH:MM` or `YYYY-MM-DDTHH:MM:SS` (a space instead of `T` is allowed). A date-only value for a timestamp column is treated as start of day (00:00:00)
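The accepted timestamp formats can be sketched like this. This is an illustration of the parsing rules described above, not the library's code:

```kotlin
import java.time.LocalDate
import java.time.LocalDateTime

// Sketch of the timestamp parsing rules: ISO date-time with optional seconds,
// 'T' or space as separator, and date-only values treated as start of day.
fun parseTimestamp(raw: String): LocalDateTime {
    val normalized = raw.replace(' ', 'T') // a space instead of 'T' is allowed
    return if ('T' in normalized) {
        LocalDateTime.parse(normalized)            // handles both HH:MM and HH:MM:SS
    } else {
        LocalDate.parse(normalized).atStartOfDay() // date-only -> 00:00:00
    }
}
```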
Operator constraints:

- `LIKE`-style operators only for string columns.
- `BETWEEN` requires exactly two values; not supported for UUID/Enum/Boolean.
- `IN` requires non-empty values; `NOT_IN` with an empty `values` array is treated as a no-op (ignored).
- `IN` and `NOT_IN` are supported for scalar and array fields.
- Date/time operators: `EQ`, `IN`, `BETWEEN`, `GT`, `GTE`, `LT`, `LTE`, `IS_NULL`, `IS_NOT_NULL` are supported for date and timestamp columns.
## Excluding fields from filters

When building complex queries with joins, subqueries, or multi-level data aggregation, you may need to apply the same user filter at different query levels. However, not all fields may be available at every level: some columns might only exist in the final result set, or certain fields should not constrain intermediate queries.

The `excludeFields` utility allows you to safely remove specific field conditions from a `FilterRequest` while preserving correct filtering semantics. The key guarantee is that excluding fields never narrows the result set; it can only broaden it. This is safe because you can apply the full, unmodified filter on the final result.

The behavior depends on the logical combinator:

- AND: Removing a field broadens the result set (safe). The remaining conditions still apply.
- OR: Removing a field from an OR group would narrow results (records matching only the removed condition would be lost). Therefore, the entire OR group is removed if any direct leaf contains an excluded field.
- NOT: By De Morgan's laws, removing fields inside a NOT group can narrow results. Therefore, the entire NOT group is removed if any child at any depth contains an excluded field.
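The exclusion rules above can be modeled with a small self-contained sketch. The types here are a simplified stand-in for the library's filter tree, not its real API:

```kotlin
// Hypothetical mini-model of a filter tree for illustration only.
sealed interface Node
data class Leaf(val field: String) : Node
data class And(val children: List<Node>) : Node
data class Or(val children: List<Node>) : Node
data class Not(val children: List<Node>) : Node

fun Node.mentions(field: String): Boolean = when (this) {
    is Leaf -> this.field == field
    is And -> children.any { it.mentions(field) }
    is Or -> children.any { it.mentions(field) }
    is Not -> children.any { it.mentions(field) }
}

// Applies the combinator rules described above; returns null when the
// whole node must be dropped.
fun exclude(node: Node, field: String): Node? = when (node) {
    is Leaf -> if (node.field == field) null else node
    // AND: drop only the offending children; the rest still applies.
    is And -> And(node.children.mapNotNull { exclude(it, field) })
        .takeIf { it.children.isNotEmpty() }
    // OR: drop the whole group if any direct leaf mentions the field.
    is Or ->
        if (node.children.any { it is Leaf && it.field == field }) null
        else Or(node.children.mapNotNull { exclude(it, field) })
            .takeIf { it.children.isNotEmpty() }
    // NOT: drop the whole group if the field occurs anywhere inside.
    is Not -> if (node.mentions(field)) null else node
}
```

For example, excluding `totalAmount` from `And(Leaf("status"), Leaf("totalAmount"))` keeps the `status` condition, while excluding a field that appears inside an OR or NOT group drops that whole group.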
Usage:

```kotlin
val filter: FilterRequest = ...

// Exclude specific fields
val partialFilterForTable1 = filter.excludeFields("profileId")
val partialFilterForTable2 = filter.excludeFields("userId")

// Use partial filters for intermediate queries (missing some columns)
val intermediateQuery1 = SomeTable1
    .selectAll()
    .applyFiltersOn(SomeTable1, partialFilterForTable1)

val intermediateQuery2 = SomeTable2
    .selectAll()
    .applyFiltersOn(SomeTable2, partialFilterForTable2)

// Apply the full filter on the final result
val finalQuery =
    (intermediateQuery1 union intermediateQuery2)
        .selectAll()

finalQuery
    .applyFiltersOn(finalQuery, filter)
```

Imagine a query that joins `Orders` with `OrderItems` and computes a `totalAmount` field. The user sends a filter:
```json
{
  "combinator": "AND",
  "filters": {
    "status": [{ "op": "EQ", "value": "COMPLETED" }],
    "totalAmount": [{ "op": "GTE", "value": "1000" }]
  }
}
```

The `totalAmount` field only exists after aggregation, so you cannot apply it to the base `Orders` query. Using `excludeFields("totalAmount")` removes that condition for the intermediate query while keeping the `status` filter. After aggregation, you apply the full filter, including `totalAmount`, on the final dataset.
## Custom column type mappers

For custom column types not covered by built-in support, you can provide value mappers using `ColumnMappersModule`. This allows you to register multiple mappers and apply them to filter operations.

Create a mappers module with the `columnMappers` DSL function:

```kotlin
val customMappers = columnMappers {
    // Add a mapper using a lambda
    mapper { columnType, rawValue ->
        when {
            columnType is BigDecimalColumnType -> BigDecimal(rawValue)
            columnType.javaClass.simpleName == "JsonColumnType" -> Json.parse(rawValue)
            else -> null // return null if this mapper doesn't handle the type
        }
    }

    // Add another mapper for different types
    mapper { columnType, rawValue ->
        if (columnType is MyCustomType) {
            parseMyCustomValue(rawValue)
        } else null
    }

    // Or add a pre-built mapper object
    addMapper(MyCustomMapper())
}
```

Pass the mappers module as the third argument to `applyFiltersOn`:
```kotlin
routing {
    post("/products") {
        val filter = call.receiveFilterRequestOrNull()
        val result = transaction {
            Products
                .selectAll()
                .applyFiltersOn(Products, filter, customMappers)
                .map { /* ... */ }
        }
        call.respond(result)
    }
}
```

Resolution order:

- Custom mappers are tried first: mappers registered in the module are tried in reverse order (last registered first), giving priority to the most recently added mappers.
- Built-in mappers as fallback: if no custom mapper handles the type (returns `null`), the library falls back to built-in type handling for standard types (Int, Long, String, UUID, Date, etc.).
- Error on unsupported types: if neither custom nor built-in mappers handle the type, an error is thrown.

This design allows you to override built-in behavior for standard types if needed, while still providing sensible defaults.
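The resolution order can be sketched in plain Kotlin. The `Mapper` alias and `resolve` function here are hypothetical simplifications, not the library's actual classes:

```kotlin
// Hypothetical, simplified mapper signature: returns null when the mapper
// can't handle the given column type.
typealias Mapper = (columnTypeName: String, raw: String) -> Any?

// Custom mappers first (last registered wins), then the built-in fallback,
// then an error if nothing handled the type.
fun resolve(custom: List<Mapper>, builtIn: Mapper, columnTypeName: String, raw: String): Any =
    custom.asReversed().firstNotNullOfOrNull { it(columnTypeName, raw) }
        ?: builtIn(columnTypeName, raw)
        ?: error("No mapper handles column type: $columnTypeName")

// Usage sketch:
val builtIn: Mapper = { type, raw -> if (type == "INT") raw.toInt() else null }
val customList = listOf<Mapper>(
    { type, raw -> if (type == "DECIMAL") raw.toBigDecimal() else null }
)
```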
You can implement the `ColumnValueMapper` interface for reusable mappers:

```kotlin
class BigDecimalMapper : ColumnValueMapper {
    override fun <T : Any> map(columnType: IColumnType<T>, raw: String): T? {
        return if (columnType is BigDecimalColumnType) {
            @Suppress("UNCHECKED_CAST")
            BigDecimal(raw) as T
        } else null
    }
}

// Usage:
val mappers = columnMappers {
    addMapper(BigDecimalMapper())
}
```

Example: a mapper for JSON column types:

```kotlin
val mappers = columnMappers {
    mapper { columnType, rawValue ->
        if (columnType.javaClass.simpleName.contains("Json")) {
            // Parse the JSON string into your JSON object type
            kotlinx.serialization.json.Json.parseToJsonElement(rawValue)
        } else null
    }
}

// Apply to the query
MyTable
    .selectAll()
    .applyFiltersOn(MyTable, filterRequest, mappers)
```

## Custom expression maps for complex queries

When working with complex queries involving unions, joins, or computed expressions, you may need to explicitly map field names to specific SQL expressions rather than relying on the default table column lookup. This is useful when:
- Union queries: combining results from multiple sources where field values come from different tables or expressions
- Aliased expressions: using computed fields, literal values, or expressions with aliases
- Joined queries: selecting columns from multiple related tables in a single result set
Instead of using `applyFiltersOn(table, filter)`, use the lower-level `applyFilters(expressionMap, filter)` method and provide an explicit mapping from field names to SQL expressions.

Consider a query that combines employees and contractors into a unified result set:
```kotlin
object Employees : Table("employees") {
    val id = integer("id")
    val name = varchar("name", 100)
    val salary = integer("salary")
    val departmentId = integer("department_id")
}

object Contractors : Table("contractors") {
    val id = integer("id")
    val name = varchar("name", 100)
    val hourlyRate = integer("hourly_rate")
    val projectId = integer("project_id")
}

// Build column mappings for the union query
fun buildWorkersQuery(filter: FilterRequest?): Query {
    // Map property names to expressions for employees
    val employeeColumns = linkedMapOf(
        "id" to Employees.id.alias("id"),
        "name" to Employees.name.alias("name"),
        "rate" to Employees.salary.alias("rate"),
        "assignmentId" to Employees.departmentId.alias("assignmentId"),
        "type" to stringLiteral("EMPLOYEE").alias("type") // computed field
    )

    // Map property names to expressions for contractors
    val contractorColumns = linkedMapOf(
        "id" to Contractors.id.alias("id"),
        "name" to Contractors.name.alias("name"),
        "rate" to Contractors.hourlyRate.alias("rate"),
        "assignmentId" to Contractors.projectId.alias("assignmentId"),
        "type" to stringLiteral("CONTRACTOR").alias("type")
    )

    // Build and union the queries with filters applied
    val employeeQuery = Employees
        .select(employeeColumns.values.toList())
        .applyFilters(employeeColumns, filter)

    val contractorQuery = Contractors
        .select(contractorColumns.values.toList())
        .applyFilters(contractorColumns, filter)

    val unionAlias = employeeQuery.unionAll(contractorQuery).alias("workers")

    // Map union result columns to property names
    val finalMapping = employeeColumns.keys.withIndex().associate { (index, propertyName) ->
        propertyName to unionAlias.fields[index] as ExpressionWithColumnType<*>
    }

    return unionAlias.selectAll().applyFilters(finalMapping, filter)
}
```

Key points:

- LinkedHashMap preserves order: use `LinkedHashMap` to ensure the column order matches between queries in a union
- Consistent property names: both sub-queries must map the same property names, even if they source from different columns
- Computed expressions: you can include literal values, calculated fields, or any Exposed expression
- Apply filters at each level: filters can be applied to individual queries before the union, and again to the final result
- Position-based mapping: after the union, map the union alias fields by position to property names
Users can filter the unified result using property names from the expression map:

```json
{
  "filters": {
    "type": [{ "op": "EQ", "value": "EMPLOYEE" }],
    "rate": [{ "op": "GTE", "value": "50000" }],
    "name": [{ "op": "CONTAINS", "value": "John" }]
  }
}
```

The filter is applied correctly whether the record comes from `Employees` or `Contractors`, because both map to the same unified property names.

Custom expression maps work seamlessly with custom column type mappers:

```kotlin
val query = buildWorkersQuery(filter)
    .applyFilters(expressionMap, filter, customColumnMappers)
```

This allows you to handle both complex query structures and custom column types in the same query.
## Example: filtering by date and timestamp

The example module contains an `Events` table demonstrating both a date-only field and a timestamp field, along with a `POST /events` endpoint that accepts filters:

```json
{
  "filters": {
    "day": [ { "op": "EQ", "value": "2024-01-01" } ],
    "occurredAt": [ { "op": "GTE", "value": "2024-07-01T00:00:00" } ]
  }
}
```

## Why use this

- Clear separation: request format → normalized model → SQL predicates.
- Safer queries by leveraging Exposed DSL.
- Reusable across endpoints; supports both simple and complex trees of conditions.
## CI and Release

- CI builds run on pushes and PRs against `main`.
- Releases are published to Maven Central on pushing `v*` tags.
## License

Apache License 2.0 — © 2026 White Wind LLC