ourstudio-se/oat-db-rust

OAT-DB Rust: Git-like Combinatorial Database Backend

A Rust-based backend for a combinatorial database system with git-like branching, typed properties, and class-based schemas. Features conditional properties, derived fields, and flexible relationship management with branch-based version control. All data modifications are managed through working-commit endpoints for proper version control and audit trails.

🌟 Key Features

Git-like Branch Model

  • Database → Branches → Schema + Instances (like git repositories)
  • Working commit staging system for grouping multiple changes into logical commits
  • Default branch (typically "main") for each database
  • Branch lineage tracking with parent-child relationships
  • Commit history with hash, message, and author tracking
  • Branch status management (Active, Merged, Archived)

Schema & Data Features

  • Class-based schemas with separate definitions for each entity type
  • Typed properties with explicit data types (string, number, bool)
  • Conditional properties using rule-based evaluation with relationship presence checking
  • Pool resolution system for combinatorial optimization with default pool strategies
  • Derived fields with expression evaluation (sum, count, arithmetic operations)
  • Flexible relationships with quantifiers (EXACTLY, AT_LEAST, AT_MOST, RANGE, OPTIONAL, ANY, ALL)
  • Advanced relationship selection with pool-based, filter-based, and explicit selection modes
  • PostgreSQL backend with git-like commit storage and branch-aware queries
  • Immutable commits with SHA-256 hashing and compressed binary data storage
  • Comprehensive audit trail system with user tracking for all class and instance operations
  • REST API built with Axum for complete CRUD operations
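The immutable-commit feature above can be illustrated with a minimal sketch: compress a snapshot, then derive a SHA-256 hash over the parent hash, author, message, and payload. This is illustrative Python only; the actual storage layout and hash inputs in oat-db-rust may differ.

```python
import hashlib
import json
import zlib

def make_commit(parent_hash, author, message, snapshot):
    """Build an immutable commit record: compress the snapshot, then hash
    parent, author, message, and the compressed payload with SHA-256.
    Sketch only; the real commit format is an assumption here."""
    payload = zlib.compress(json.dumps(snapshot, sort_keys=True).encode())
    h = hashlib.sha256()
    for part in (parent_hash or "", author, message):
        h.update(part.encode())
    h.update(payload)
    return {"hash": h.hexdigest(), "parent_hash": parent_hash, "data": payload}

c1 = make_commit(None, "dev", "initial commit", {"instances": []})
c2 = make_commit(c1["hash"], "dev", "add chair", {"instances": ["chair-001"]})
```

Because the hash covers the parent hash, each commit pins its entire history, which is what makes branch lineage tamper-evident.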

Architecture

src/
├── api/            # Axum HTTP handlers and routes
├── model/          # Core data structures (Database, Branch, Schema, Instance)
│   ├── database.rs # Database and Branch models with git-like properties
│   ├── schema.rs   # Schema with class-based definitions
│   ├── instance.rs # Instances with typed property values
│   └── ...
├── logic/          # Business logic (validation, evaluation, resolution)
├── store/          # Storage traits and in-memory implementation
├── seed/           # Sample data for testing
└── lib.rs          # Module exports and tests

Data Hierarchy

Database (with default_branch_id)
└── Branches (git-like: main, feature-xyz, etc.)
    ├── Schema (class-based with multiple ClassDef entries)
    │   ├── Class: "Underbed" (properties, relationships, derived)
    │   ├── Class: "Size" (properties, relationships, derived)
    │   ├── Class: "Fabric" (properties, relationships, derived)
    │   └── Class: "Leg" (properties, relationships, derived)
    └── Instances (many per branch, typed properties)
        ├── Underbed instances
        ├── Size instances
        ├── Fabric instances
        └── Leg instances

Git-like Workflow

Typical User Story

  1. Work on default branch - Query/modify data on main branch
  2. Create feature branch - Branch off main for new changes
  3. Make changes - Edit schema and/or instances on feature branch
  4. Validate data - Ensure data integrity on feature branch
  5. Commit changes - Create commit with message and author
  6. Merge back - Merge feature branch back to main when ready
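The six steps above can be sketched as a tiny in-memory model. All names here are hypothetical; the real system persists commits in PostgreSQL and enforces the clean-working-state rule described later.

```python
class Repo:
    """Toy model of the git-like workflow: branch, stage, commit, merge."""

    def __init__(self):
        self.branches = {"main": {"data": {}, "staged": {}}}

    def create_branch(self, name, parent="main"):
        # Step 2: branch off the parent's committed state
        self.branches[name] = {"data": dict(self.branches[parent]["data"]),
                               "staged": {}}

    def stage(self, branch, key, value):
        # Step 3: changes accumulate in the branch's staging area
        self.branches[branch]["staged"][key] = value

    def commit(self, branch, message):
        # Step 5: fold all staged changes into one logical commit
        b = self.branches[branch]
        b["data"].update(b["staged"])
        b["staged"] = {}
        return message

    def merge(self, source, target="main"):
        # Step 6: merging requires a clean working state on the source
        assert not self.branches[source]["staged"], "commit or abandon first"
        self.branches[target]["data"].update(self.branches[source]["data"])

repo = Repo()
repo.create_branch("feature-xyz")
repo.stage("feature-xyz", "dining-table-001", {"class": "class-table"})
repo.commit("feature-xyz", "add table")
repo.merge("feature-xyz")
```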

Granular Operations Workflow

  1. Add classes individually - POST /schema/classes with just the new class data
  2. Modify specific classes - PATCH /schema/classes/{id} for targeted updates
  3. Remove obsolete classes - DELETE /schema/classes/{id} for clean schema management
  4. Instance-level control - Individual CRUD operations on specific instances
  5. Branch-specific changes - Apply granular operations to specific branches

Quick Start

Prerequisites

You need PostgreSQL running. Set up your database connection:

cp .env.example .env
# Edit .env with your PostgreSQL connection details

Running the Server

# With PostgreSQL (recommended)
DATABASE_TYPE=postgres

# Optionally preload the sample seed data
LOAD_SEED_DATA=true

cargo run

The server starts on http://localhost:7061. It uses the git-like schema:

  • Databases with git-like branches and commit history
  • SHA-256 commit hashes with compressed binary data
  • Branch-aware instance queries and proper database isolation

Running Tests

cargo test

API Endpoints

Databases

  • GET /databases - List all databases
  • POST /databases - Create database (auto-creates main branch)
  • GET /databases/{db_id} - Get specific database
  • GET /databases/{db_id}/commits - List all commits for database
  • DELETE /databases/{db_id} - Delete database (only allows deletion of empty databases)

Branches (Git-like)

  • GET /databases/{db_id}/branches - List branches for database
  • POST /databases/{db_id}/branches - Create new branch
  • GET /databases/{db_id}/branches/{branch_id} - Get specific branch
  • PATCH /databases/{db_id}/branches/{branch_id} - Update branch status

Database-level Endpoints (Auto-select Main Branch) - READ ONLY

⚠️ All modifications must use working-commit endpoints

  • GET /databases/{db_id}/schema - Get schema from main branch
  • GET /databases/{db_id}/schema/classes/{class_id} - Get individual class
  • GET /databases/{db_id}/instances - List instances from main branch
  • GET /databases/{db_id}/instances/{id} - Get instance from main branch

Branch-specific Endpoints - READ ONLY

⚠️ All modifications must use working-commit endpoints

  • GET /databases/{db_id}/branches/{branch_id}/schema - Get schema for specific branch
  • GET /databases/{db_id}/branches/{branch_id}/schema/classes/{class_id} - Get individual class
  • GET /databases/{db_id}/branches/{branch_id}/instances - List instances from branch
  • GET /databases/{db_id}/branches/{branch_id}/instances/{id} - Get instance from branch

Working Commit Endpoints - REQUIRED FOR ALL MODIFICATIONS

All data modifications must go through the working-commit workflow:

Schema Modifications

  • POST /databases/{db_id}/branches/{branch_id}/working-commit/schema/classes - Add new class
  • PATCH /databases/{db_id}/branches/{branch_id}/working-commit/schema/classes/{class_id} - Update class
  • DELETE /databases/{db_id}/branches/{branch_id}/working-commit/schema/classes/{class_id} - Delete class

Instance Modifications

  • POST /databases/{db_id}/branches/{branch_id}/working-commit/instances - Create instance
  • PATCH /databases/{db_id}/branches/{branch_id}/working-commit/instances/{instance_id} - Update or create instance
  • DELETE /databases/{db_id}/branches/{branch_id}/working-commit/instances/{instance_id} - Delete instance

Working Commit Management

  • POST /databases/{db_id}/branches/{branch_id}/working-commit - Create staging area (auto-created if needed)
  • GET /databases/{db_id}/branches/{branch_id}/working-commit - View staged changes
  • GET /databases/{db_id}/branches/{branch_id}/working-commit/validate - Validate staged changes
  • POST /databases/{db_id}/branches/{branch_id}/working-commit/commit - Commit all staged changes
  • DELETE /databases/{db_id}/branches/{branch_id}/working-commit - Abandon staged changes

Query Endpoints - Simplified Format

All query endpoints now accept simple property-weight pairs:

Single Instance Queries (GET & POST)

  • GET /databases/{db_id}/instances/{instance_id}/query?price=-1&weight=0.5&derived_properties=total_cost

  • GET /databases/{db_id}/branches/{branch_id}/instances/{instance_id}/query?price=-1

  • GET /databases/{db_id}/branches/{branch_id}/working-commit/instances/{instance_id}/query?price=-1

  • GET /databases/{db_id}/commits/{commit_hash}/instances/{instance_id}/query?price=-1

  • POST /databases/{db_id}/working-commit/instances/{instance_id}/query - Body: {"price": -1.0, "weight": 0.5, "derived_properties": ["total_cost"]}

  • POST /databases/{db_id}/branches/{branch_id}/working-commit/instances/{instance_id}/query - Same simple format

Batch Queries (POST)

  • POST /databases/{db_id}/branches/{branch_id}/working-commit/instances/{instance_id}/batch-query
  • POST /databases/{db_id}/commits/{commit_hash}/instances/{instance_id}/batch-query

Simple Batch Request Format:

{
  "queries": [
    {"id": "min_price", "price": -1.0, "weight": 0.5},
    {"id": "max_comfort", "comfort": 1.0, "price": -0.5}
  ],
  "derived_properties": ["total_cost", "summary"]
}

Simple Batch Response Format:

{
  "results": [
    {
      "id": "min_price",
      "configuration": { /* ConfigurationArtifact */ }
    },
    {
      "id": "max_comfort",
      "configuration": { /* ConfigurationArtifact */ }
    }
  ]
}
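The property-weight pairs act as a linear objective: a negative weight penalizes a property (e.g. price=-1 favors cheaper configurations), a positive weight rewards it. A minimal sketch of that semantics, with made-up candidate data (the real solver performs combinatorial optimization, not a simple argmax):

```python
def score(configuration, weights):
    """Weighted sum of a candidate configuration's numeric properties."""
    return sum(w * configuration.get(prop, 0.0) for prop, w in weights.items())

def best(candidates, weights):
    # Highest weighted score wins; negative weights penalize that property.
    return max(candidates, key=lambda c: score(c, weights))

candidates = [
    {"id": "budget", "price": 100.0, "comfort": 2.0},
    {"id": "luxury", "price": 400.0, "comfort": 9.0},
]
```

For example, {"price": -1.0} selects the cheapest candidate, while {"comfort": 1.0} selects the most comfortable.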

Type Validation Endpoints

  • GET /databases/{db_id}/validate - Validate all instances in database (main branch)
  • GET /databases/{db_id}/instances/{instance_id}/validate - Validate single instance (main branch)
  • GET /databases/{db_id}/branches/{branch_id}/validate - Validate all instances in specific branch
  • GET /databases/{db_id}/branches/{branch_id}/instances/{instance_id}/validate - Validate single instance in branch

Merge Validation Endpoints

  • GET /databases/{db_id}/branches/{source_branch_id}/validate-merge - Validate merge into main branch
  • GET /databases/{db_id}/branches/{source_branch_id}/validate-merge/{target_branch_id} - Validate merge between branches

Rebase Endpoints

  • POST /databases/{db_id}/branches/{feature_branch_id}/rebase - Rebase feature branch onto main branch
  • POST /databases/{db_id}/branches/{feature_branch_id}/rebase/{target_branch_id} - Rebase feature branch onto specific target

Rebase Validation Endpoints

  • GET /databases/{db_id}/branches/{feature_branch_id}/validate-rebase - Validate rebase onto main branch
  • GET /databases/{db_id}/branches/{feature_branch_id}/validate-rebase/{target_branch_id} - Validate rebase onto specific branch

Working Commit Endpoints (Git-like Staging)

  • POST /databases/{db_id}/branches/{branch_id}/working-commit - Create staging area (Note: normally not needed as branches auto-maintain working commits)
  • GET /databases/{db_id}/branches/{branch_id}/working-commit - View staged changes with resolved relationships (includes schema default pools)
  • GET /databases/{db_id}/branches/{branch_id}/working-commit/validate - Validate all staged changes before committing
  • GET /databases/{db_id}/branches/{branch_id}/working-commit/raw - View raw working commit data without relationship resolution
  • POST /databases/{db_id}/branches/{branch_id}/working-commit/commit - Commit all staged changes as single commit
  • DELETE /databases/{db_id}/branches/{branch_id}/working-commit - Abandon staged changes without committing

Query Parameters

  • ?class=ClassID - Filter instances by class ID
  • ?expand=rel1,rel2&depth=N - Expand relationships with depth control (expand defaults to all relationships)
  • ?depth=N - Control expansion depth for included instances (depth=0 shows relationships without nested instances)
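The depth semantics can be sketched as a recursive expansion, where depth=0 returns relationship IDs without nesting. The response shape below is an assumption for illustration, not the server's exact JSON:

```python
def expand(instance_id, instances, depth):
    """Expand an instance's relationships down to `depth` nesting levels."""
    inst = dict(instances[instance_id])
    expanded = {}
    for rel, target_ids in inst.get("relationships", {}).items():
        entry = {"ids": list(target_ids)}
        if depth > 0:  # depth=0 shows relationships without nested instances
            entry["instances"] = [expand(t, instances, depth - 1)
                                  for t in target_ids]
        expanded[rel] = entry
    inst["relationships"] = expanded
    return inst

instances = {
    "dining-table-001": {"id": "dining-table-001",
                         "relationships": {"legs": ["oak-leg-1"]}},
    "oak-leg-1": {"id": "oak-leg-1", "relationships": {}},
}
```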

Model Structures

Class Models

The API supports different models for different operations:

ClassDef (Full Class with ID)

Used for responses and internal storage:

{
  "id": "class-chair",
  "name": "Chair",
  "description": "Chair furniture class",
  "properties": [...],
  "relationships": [...],
  "derived": [...]
}

NewClassDef (Input Model for Creation)

Used when creating classes (ID generated server-side):

{
  "name": "Chair",
  "description": "Chair furniture class",
  "properties": [...],
  "relationships": [...],
  "derived": [...]
}

ClassDefUpdate (Partial Update Model)

Used for PATCH operations (all fields optional):

{
  "description": "Updated description only"
}

Instance Models

Instance (Full Instance with ID)

Used for responses:

{
  "id": "chair-001",
  "branch_id": "main-branch-id",
  "class": "class-chair",
  "properties": {...},
  "relationships": {...}
}

NewInstance (Input Model for Creation)

Used when creating instances:

{
  "class": "class-chair",
  "properties": {...},
  "relationships": {...}
}

InstanceUpdate (Partial Update Model)

Used for PATCH operations:

{
  "properties": {
    "price": { "value": 299.99, "type": "number" }
  }
}

Design Decisions

Why Do Instances Have a branch_id Field?

You might wonder why each instance stores its branch ID. This design decision serves several important purposes in the git-like database system:

Benefits of branch_id on Instances:

  1. Branch Isolation - Ensures instances are properly isolated between branches, preventing accidental cross-branch data access
  2. Performance - Direct filtering by branch_id is faster than maintaining separate branch-instance relationship tables
  3. Data Integrity - Clear ownership model prevents data corruption and ensures consistency
  4. Merge Operations - Essential for branch merge logic to identify which instances belong to which branch
  5. Validation - Handlers can quickly validate that an instance belongs to the expected branch

Alternative Approaches Considered:

  • Branch-agnostic instances with separate mapping tables (more complex queries, harder to maintain consistency)
  • Context-based approach where branch info is only in the URL (loses data integrity guarantees)
  • Implicit branch membership (makes merge operations much more complex)

Git-like Semantics:

Just like git commits belong to specific branches, instances in this system belong to specific branches. This makes the git-like workflow intuitive and maintains clear data lineage.

The field is serialized as branch_id in JSON for clarity, while internally using version_id for backward compatibility.

Example Usage

1. Create Database (with Main Branch)

curl -X POST http://localhost:7061/databases \
  -H "Content-Type: application/json" \
  -d '{
    "id": "furniture-db",
    "name": "Furniture Store",
    "description": "Product catalog database"
  }'

This automatically creates a "main" branch as the default.

1.1. Query Database Directly (Auto-selects Main Branch)

# Get schema from main branch automatically
curl http://localhost:7061/databases/furniture-db/schema

# List instances from main branch automatically
curl http://localhost:7061/databases/furniture-db/instances

# Get specific instance from main branch
curl http://localhost:7061/databases/furniture-db/instances/delux-underbed

These endpoints automatically use the database's default branch (typically "main") under the hood, providing a convenient way to work with the primary dataset without specifying branch IDs.

1.2. Granular Class Management

# Add a new class to the main branch
curl -X POST http://localhost:7061/databases/furniture-db/schema/classes \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Chair",
    "description": "Chair furniture class",
    "properties": [
      {"id": "prop-chair-name", "name": "name", "data_type": "String", "required": true},
      {"id": "prop-chair-price", "name": "price", "data_type": "Number", "required": true}
    ],
    "relationships": [
      {"id": "rel-chair-legs", "name": "legs", "targets": ["class-leg"], "quantifier": {"Exactly": 4}, "selection": "ExplicitOrFilter"}
    ],
    "derived": []
  }'

# Get an individual class
curl http://localhost:7061/databases/furniture-db/schema/classes/class-chair

# Update a class (partial update)
curl -X PATCH http://localhost:7061/databases/furniture-db/schema/classes/class-chair \
  -H "Content-Type: application/json" \
  -d '{
    "description": "Updated chair description"
  }'

# Delete a class
curl -X DELETE http://localhost:7061/databases/furniture-db/schema/classes/class-chair

1.3. Granular Instance Management

# Delete an individual instance from main branch
curl -X DELETE http://localhost:7061/databases/furniture-db/instances/delux-underbed

# Update just specific properties of an instance
curl -X PATCH http://localhost:7061/databases/furniture-db/instances/delux-underbed \
  -H "Content-Type: application/json" \
  -d '{
    "properties": {
      "price": {"value": 250.00, "type": "number"}
    }
  }'

2. Create Feature Branch

curl -X POST http://localhost:7061/databases/furniture-db/branches \
  -H "Content-Type: application/json" \
  -d '{
    "id": "feature-new-tables",
    "name": "Add Table Support",
    "description": "Branch for adding table furniture support",
    "parent_branch_id": "main-branch-id"
  }'

3. Create Class-based Schema

curl -X POST http://localhost:7061/databases/furniture-db/branches/feature-new-tables/schema \
  -H "Content-Type: application/json" \
  -d '{
    "id": "FurnitureSchema",
    "classes": [
      {
        "name": "Table",
        "description": "Table furniture class",
        "properties": [
          {"id": "name", "data_type": "string", "required": true},
          {"id": "basePrice", "data_type": "number", "required": true},
          {"id": "material", "data_type": "string", "required": true}
        ],
        "relationships": [
          {
            "id": "legs",
            "targets": ["class-leg"],
            "quantifier": {"Exactly": 4},
            "selection": "explicit-or-filter"
          }
        ],
        "derived": [
          {
            "id": "totalPrice",
            "data_type": "number",
            "expr": {
              "Add": {
                "left": {"Prop": {"prop": "basePrice"}},
                "right": {"Sum": {"over": "legs", "prop": "price"}}
              }
            }
          }
        ]
      },
      {
        "name": "Leg",
        "description": "Furniture leg class",
        "properties": [
          {"id": "name", "data_type": "string", "required": true},
          {"id": "material", "data_type": "string", "required": true},
          {"id": "price", "data_type": "number", "required": true}
        ],
        "relationships": [],
        "derived": []
      }
    ]
  }'

4. Create Typed Instances

# Create leg instances first
curl -X POST http://localhost:7061/databases/furniture-db/branches/feature-new-tables/instances \
  -H "Content-Type: application/json" \
  -d '{
    "id": "oak-leg-1",
    "class": "class-leg",
    "properties": {
      "name": {"value": "Oak Table Leg #1", "type": "string"},
      "material": {"value": "Oak", "type": "string"},
      "price": {"value": 45, "type": "number"}
    }
  }'

# Create more legs (oak-leg-2, oak-leg-3, oak-leg-4)...

# Create table instance with relationships
curl -X POST http://localhost:7061/databases/furniture-db/branches/feature-new-tables/instances \
  -H "Content-Type: application/json" \
  -d '{
    "id": "dining-table-001",
    "class": "class-table",
    "properties": {
      "name": {"value": "Oak Dining Table", "type": "string"},
      "basePrice": {"value": 800, "type": "number"},
      "material": {"value": "Oak", "type": "string"}
    },
    "relationships": {
      "legs": {"Ids": {"ids": ["oak-leg-1", "oak-leg-2", "oak-leg-3", "oak-leg-4"]}}
    }
  }'

5. Get Instance with Derived Values

# Get derived totalPrice (basePrice + sum of leg prices)
curl http://localhost:7061/databases/furniture-db/branches/feature-new-tables/instances/dining-table-001/derived

Response:

{
  "derived": {
    "totalPrice": 980
  }
}
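The derived value follows directly from the expression tree defined in the schema above: basePrice (800) plus the Sum of the four legs' price (4 × 45 = 180) yields 980. A minimal evaluator covering just the Add/Prop/Sum node types used in this example:

```python
def eval_expr(expr, instance, related):
    """Evaluate a derived-field expression tree (Add / Prop / Sum nodes).
    Sketch only: the real evaluator supports more operations."""
    [(op, args)] = expr.items()
    if op == "Prop":
        return instance["properties"][args["prop"]]
    if op == "Sum":
        return sum(r["properties"][args["prop"]] for r in related[args["over"]])
    if op == "Add":
        return (eval_expr(args["left"], instance, related)
                + eval_expr(args["right"], instance, related))
    raise ValueError(f"unsupported node: {op}")

table = {"properties": {"basePrice": 800}}
legs = [{"properties": {"price": 45}} for _ in range(4)]
total_price_expr = {"Add": {"left": {"Prop": {"prop": "basePrice"}},
                            "right": {"Sum": {"over": "legs", "prop": "price"}}}}
```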

6. Query with Expansion

# Relationships are expanded by default, showing resolved pool information and filter details
curl "http://localhost:7061/databases/furniture-db/branches/feature-new-tables/instances/dining-table-001"

# Control expansion depth to include related instances
curl "http://localhost:7061/databases/furniture-db/branches/feature-new-tables/instances/dining-table-001?depth=1"

# Expand specific relationships only
curl "http://localhost:7061/databases/furniture-db/branches/feature-new-tables/instances/dining-table-001?expand=legs"

7. Branch Management

# List all branches
curl http://localhost:7061/databases/furniture-db/branches

# Get branch details with commit info
curl http://localhost:7061/databases/furniture-db/branches/feature-new-tables

Response shows git-like branch info:

{
  "id": "feature-new-tables",
  "database_id": "furniture-db",
  "name": "Add Table Support",
  "parent_branch_id": "main-branch-id",
  "commit_hash": "abc123...",
  "commit_message": "Created branch 'Add Table Support'",
  "author": "[email protected]",
  "status": "active",
  "created_at": "2024-01-15T10:30:00Z"
}

8. Database Deletion

Database deletion includes comprehensive safety checks to prevent accidental data loss:

# Try to delete database with commit history (will be blocked)
curl -X DELETE http://localhost:7061/databases/furniture-db

Response (409 Conflict):

{
  "error": "Cannot delete database: contains commit history. This operation would cause data loss."
}

# Create a test database for deletion
curl -X POST http://localhost:7061/databases \
  -H "Content-Type: application/json" \
  -d '{
    "id": "test-db",
    "name": "Test Database",
    "description": "Database for testing deletion"
  }'

# Delete empty database (succeeds)
curl -X DELETE http://localhost:7061/databases/test-db

Response (200 OK):

{
  "message": "Database deleted successfully",
  "deleted_database_id": "test-db"
}

Safety Features:

  • Won't delete databases with commit history (prevents data loss)
  • Won't delete databases with multiple branches (must delete feature branches first)
  • Won't delete databases with active working commits (must commit or abandon changes first)
  • Only allows deletion of truly empty databases (new databases with only main branch, no commits)
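The safety checks above amount to a guard that must pass before deletion proceeds. A sketch with assumed field names (the actual checks run server-side against PostgreSQL):

```python
def can_delete(db):
    """Return (allowed, reason) per the deletion safety rules above.
    Field names are assumptions for illustration."""
    if db["commit_count"] > 0:
        return False, "contains commit history"
    if len(db["branches"]) > 1:
        return False, "has feature branches; delete them first"
    if db["active_working_commits"]:
        return False, "has active working commits; commit or abandon first"
    return True, None
```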

🚀 Working Commit System (Git-like Staging)

The OAT-DB includes a sophisticated working commit system that enables git-like staging of changes before creating permanent commits. This allows you to group multiple related changes into single, logical commits with clean history.

Note: Each branch automatically maintains an active working commit. You don't need to manually create working commits - the system ensures one is always available.

Core Concepts

  • Working Commit: A mutable staging area where you accumulate changes before committing
  • Staging: Making changes that are stored temporarily in the working commit
  • Committing: Converting all staged changes into a permanent, immutable commit
  • Abandoning: Discarding all staged changes without creating a commit
  • Schema Default Pool Resolution: Working commits automatically resolve relationships using class schema default pools, just like regular branch endpoints

Enhanced Relationship Resolution

Working commits now provide comprehensive relationship resolution that matches the behavior of regular branch endpoints:

  • Explicit Relationships: Instance-configured relationships are resolved using working commit data
  • Schema Default Pools: Relationships defined in class schema with default_pool settings are automatically resolved even if not explicitly configured on instances
  • Complete Coverage: All relationships defined in the class schema are shown, providing full visibility into available selections
  • Working Commit Context: All resolution uses staged working commit data, not just the base branch data
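The resolution order above (explicit instance configuration first, schema default pool as fallback) can be sketched as follows. The default_pool shape and the "source" labels are assumptions for illustration:

```python
def resolve_relationships(instance, class_def):
    """Merge explicit instance relationships with schema default pools,
    so every relationship in the class schema appears in the result."""
    resolved = {}
    for rel in class_def["relationships"]:
        name = rel["name"]
        if name in instance.get("relationships", {}):
            resolved[name] = {"ids": instance["relationships"][name],
                              "source": "explicit"}
        else:
            # Not configured on the instance: fall back to the schema pool
            resolved[name] = {"ids": rel.get("default_pool", []),
                              "source": "schema-default-pool"}
    return resolved

class_def = {"relationships": [{"name": "legs", "default_pool": ["oak-leg-1"]}]}
configured = {"relationships": {"legs": ["oak-leg-9"]}}
bare = {}
```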

Why Use Working Commits?

  • Without working commits: each API call creates one commit, producing a verbose history where related changes are hard to group and cannot be reviewed before committing.
  • With working commits: multiple API calls are staged into a single commit, producing a clean, logical history where related changes are grouped and can be reviewed before committing.

Working Commit API Endpoints

Staging Management

  • POST /databases/{db_id}/branches/{branch_id}/working-commit - Create staging area (Note: rarely needed as branches auto-create working commits)
  • GET /databases/{db_id}/branches/{branch_id}/working-commit - View staged changes
  • DELETE /databases/{db_id}/branches/{branch_id}/working-commit - Abandon staged changes

Committing

  • POST /databases/{db_id}/branches/{branch_id}/working-commit/commit - Commit staged changes

Complete Working Commit Workflow

Example: Adding Description Property to Color Class

Let's walk through adding a "description" property to the Color class and updating all existing color instances.

Step 1: Stage Changes

Since each branch automatically maintains a working commit, you can directly start making changes:

Step 2: Stage Schema Change

# Add description property to Color class (staged)
curl -X PATCH http://localhost:7061/databases/furniture_catalog/schema/classes/class-color \
  -H "Content-Type: application/json" \
  -d '{
    "properties": [
      {"id": "prop-color-name", "name": "name", "data_type": "String", "required": true},
      {"id": "prop-color-price", "name": "price", "data_type": "Number", "required": true},
      {"id": "prop-color-description", "name": "description", "data_type": "String", "required": false}
    ]
  }'

Step 3: Stage Instance Changes

# Add description to red color (staged)
curl -X PATCH http://localhost:7061/databases/furniture_catalog/instances/color-red \
  -H "Content-Type: application/json" \
  -d '{
    "properties": {
      "description": {
        "value": "A vibrant red color perfect for bold designs",
        "type": "string"
      }
    }
  }'

# Add description to blue color (staged)
curl -X PATCH http://localhost:7061/databases/furniture_catalog/instances/color-blue \
  -H "Content-Type: application/json" \
  -d '{
    "properties": {
      "description": {
        "value": "A calming blue color ideal for modern aesthetics",
        "type": "string"
      }
    }
  }'

# Add description to gold color (staged)
curl -X PATCH http://localhost:7061/databases/furniture_catalog/instances/color-gold \
  -H "Content-Type: application/json" \
  -d '{
    "properties": {
      "description": {
        "value": "An elegant gold color for luxury applications",
        "type": "string"
      }
    }
  }'

Step 4: Review Staged Changes

# View what's currently staged with full relationship resolution
curl http://localhost:7061/databases/furniture_catalog/branches/main/working-commit

This returns the working commit with all staged changes - the updated Color class schema and all modified color instances. All relationships are fully resolved, including:

  • Explicit instance relationships with their original configuration and resolved instance IDs
  • Schema default pool relationships that are automatically resolved from class definitions
  • Detailed resolution metadata showing how each relationship was resolved and from what source

The enhanced response shows both original relationship configuration and resolved materialized IDs with comprehensive resolution details.

Step 5: Commit All Changes Together

# Create single logical commit with all changes
curl -X POST http://localhost:7061/databases/furniture_catalog/branches/main/working-commit/commit \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Add description property to Color class and update all color instances",
    "author": "[email protected]"
  }'

Response:

{
  "hash": "def456789abcdef...",
  "database_id": "furniture_catalog",
  "parent_hash": "abc123456fedcba...",
  "author": "[email protected]",
  "message": "Add description property to Color class and update all color instances",
  "created_at": "2024-01-15T10:35:00Z",
  "data_size": 15420,
  "schema_classes_count": 8,
  "instances_count": 26
}

Alternative: Abandon Changes

If you decide not to commit the changes:

# Discard all staged changes
curl -X DELETE http://localhost:7061/databases/furniture_catalog/branches/main/working-commit

Commit History Comparison

Without Working Commits (Old Approach)

abc123 <- def456 <- ghi789 <- jkl012 <- mno345
  ^        ^         ^         ^         ^
initial  add desc   red desc  blue desc gold desc
commit   property   value     value     value

Result: 4 separate commits for one logical change

With Working Commits (New Approach)

abc123 <- def456
  ^        ^
initial  add description property + update all instances
commit   (single logical commit)

Result: 1 clean commit with all related changes

Advanced Working Commit Features

Auto-Creation

If you make changes without an active working commit, the system automatically creates one:

# This automatically creates a working commit if none exists
curl -X PATCH .../schema/classes/class-color -d '{...changes...}'

Working Commit Status

Working commits have three states:

  • active - Currently being worked on (can stage more changes)
  • committing - In the process of being committed (temporary state)
  • abandoned - Discarded (will be garbage collected)
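The three states admit only a few legal transitions out of active; a small guard sketch (inferred from the list above, not the actual implementation):

```python
VALID_TRANSITIONS = {
    "active": {"committing", "abandoned"},
    "committing": set(),  # terminal for the working commit once written
    "abandoned": set(),   # awaiting garbage collection
}

def transition(state, new_state):
    """Enforce the working-commit state machine sketched above."""
    if new_state not in VALID_TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state
```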

Conflict Prevention

Only one active working commit per branch is allowed:

# If you try to create when one already exists
curl -X POST .../working-commit -d '{...}'
# Returns: 409 Conflict - "Branch already has an active working commit"

Working Commits vs Direct Commits

Direct Commits (Method 1)

# Each change creates immediate commit
curl -X PATCH .../schema/classes/class-color -d '{...}'  # Commit A
curl -X PATCH .../instances/color-red -d '{...}'         # Commit B
curl -X PATCH .../instances/color-blue -d '{...}'        # Commit C

Previously used for simple, standalone changes; no longer supported for modifications

Working Commits (Method 2)

# Stage multiple changes, then commit together
curl -X POST .../working-commit -d '{...}'               # Create staging
curl -X PATCH .../schema/classes/class-color -d '{...}'  # Stage change
curl -X PATCH .../instances/color-red -d '{...}'         # Stage change
curl -X PATCH .../instances/color-blue -d '{...}'        # Stage change
curl -X POST .../working-commit/commit -d '{...}'        # Single commit

Use always: working commits are now the only supported way to make modifications

Integration with Git-like Operations

Working commits integrate seamlessly with branch operations:

  • Merge: Working commits must be committed or abandoned before merging branches
  • Rebase: Similar requirements - clean working state needed
  • Branch Switching: Working commits are branch-specific
  • Validation: All staged changes are validated before committing

Create-If-Not-Exists for Instances

The PATCH endpoint for working commit instances now supports creating instances if they don't exist:

# This will create the instance if it doesn't exist
curl -X PATCH http://localhost:7061/databases/furniture_catalog/branches/main/working-commit/instances/new-color-001 \
  -H "Content-Type: application/json" \
  -d '{
    "class": "class-color",  # Required when creating new instance
    "properties": {
      "name": {"value": "Purple", "type": "string"},
      "price": {"value": 85, "type": "number"}
    },
    "relationships": {}
  }'

Important: When creating a new instance via PATCH, the class or class_id field is required.

Best Practices

  1. Logical Grouping: Group related schema and instance changes together
  2. Clear Messages: Write descriptive commit messages that explain the complete change
  3. Review Before Commit: Use GET working-commit to review staged changes
  4. Clean Up: Abandon working commits you decide not to pursue
  5. One Feature Per Working Commit: Don't mix unrelated changes in one working commit

The working commit system combines the convenience of auto-created staging for single changes with the power of git-like staging for complex, multi-step changes.

Property Type System

All properties include explicit typing. The API accepts properties in a straightforward format:

{
  "properties": {
    "name": {
      "value": "Oak Table",
      "type": "string"
    },
    "price": {
      "value": 299.99,
      "type": "number"
    },
    "inStock": {
      "value": true,
      "type": "boolean"
    }
  }
}
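The explicit typing above is what the validation endpoints check: the value must match its declared type. A minimal sketch of that check (the real validator is more thorough):

```python
TYPE_CHECKS = {"string": str, "number": (int, float), "boolean": bool}

def validate_property(prop):
    """Check a typed property value against its declared type tag."""
    expected = TYPE_CHECKS.get(prop["type"])
    if expected is None:
        return False
    # bool is a subclass of int in Python, so reject it for "number" explicitly
    if prop["type"] == "number" and isinstance(prop["value"], bool):
        return False
    return isinstance(prop["value"], expected)
```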

For conditional properties, use the rules-based format:

{
  "properties": {
    "dynamicPrice": {
      "rules": [
        {
          "when": { "all": ["size", "premium"] },
          "then": 399.99
        }
      ],
      "default": 299.99
    }
  }
}

πŸ“‹ Audit Trail System

The OAT-DB includes a comprehensive audit trail system that tracks who created and modified every class and instance in the system, providing full accountability and change history.

Core Audit Features

  • Object-level Tracking: Every ClassDef and Instance tracks its audit information
  • Creation Tracking: created_by (user ID) and created_at (UTC timestamp) for all objects
  • Modification Tracking: updated_by (user ID) and updated_at (UTC timestamp) for updates
  • User Context Extraction: Automatic user identification from HTTP headers
  • Legacy Data Compatibility: Seamless handling of existing data through serde defaults
  • API Integration: All create/update operations automatically populate audit fields

Audit Field Structure

Every class and instance includes audit metadata:

{
  "id": "class-chair",
  "name": "Chair", 
  "properties": [...],
  "relationships": [...],
  "created_by": "user-123",
  "created_at": "2024-01-15T10:30:00.000Z",
  "updated_by": "admin-456", 
  "updated_at": "2024-01-16T14:45:30.000Z"
}

User Context Headers

The API extracts user information from request headers:

  • X-User-Id (required): Unique user identifier
  • X-User-Email (optional): User email address
  • X-User-Name (optional): User display name

Example:

curl -X POST http://localhost:7061/databases/furniture-db/schema/classes \
  -H "Content-Type: application/json" \
  -H "X-User-Id: developer-123" \
  -H "X-User-Email: [email protected]" \
  -H "X-User-Name: Jane Developer" \
  -d '{ "name": "NewClass", ... }'

Legacy Data Handling

Existing data without audit fields is automatically handled using default values:

  • Legacy User: "legacy-user" for created_by/updated_by fields
  • Legacy Timestamp: Unix epoch (1970-01-01) for created_at/updated_at fields

This ensures backward compatibility while enabling audit tracking for all future operations.
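The legacy fallbacks can be sketched as a plain default implementation; this is illustrative only (the actual crate applies these values through serde defaults during deserialization, and the type names here are assumptions):

```rust
// Illustrative sketch of the legacy audit fallbacks described above.
#[derive(Debug, Clone, PartialEq)]
pub struct AuditInfo {
    pub created_by: String,
    pub created_at: String, // UTC timestamp, RFC 3339
}

impl Default for AuditInfo {
    /// Values applied when legacy data carries no audit fields.
    fn default() -> Self {
        AuditInfo {
            created_by: "legacy-user".to_string(),
            created_at: "1970-01-01T00:00:00Z".to_string(), // Unix epoch
        }
    }
}
```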

API Operations with Audit Trail

All class and instance operations now track user activity:

  • Class Creation: POST /databases/{db_id}/schema/classes - Records creator
  • Class Updates: PATCH /databases/{db_id}/schema/classes/{class_id} - Records modifier
  • Instance Creation: POST /databases/{db_id}/instances - Records creator
  • Instance Updates: PATCH /databases/{db_id}/instances/{id} - Records modifier

Audit Benefits

  • Accountability: Know exactly who made each change and when
  • Change History: Track evolution of classes and instances over time
  • Compliance: Meet audit requirements for data modification tracking
  • Debugging: Identify who introduced specific changes or data issues
  • Security: Monitor and audit all data modification activities

🧩 Conditional Properties System

The OAT-DB includes a sophisticated conditional properties system that allows property values to be determined by rules based on relationship presence, enabling dynamic pricing, configuration, and business logic.

Core Features

  • Rule-based Property Evaluation: Properties can use conditional logic instead of fixed values
  • Relationship Presence Checking: Rules can check if specific relationships exist on an instance
  • Simple JSON Syntax: Clean, readable conditional syntax with {"all": ["rel1", "rel2"]} format
  • Fallback Values: Default values when no rules match
  • Validation Integration: Conditional properties are validated to ensure referenced relationships exist

Conditional Property Format

{
  "properties": {
    "price": {
      "rules": [
        {
          "when": { "all": ["a", "b"] },
          "then": 100.0
        },
        {
          "when": { "all": ["a", "c"] },
          "then": 110.0
        }
      ],
      "default": 0
    }
  }
}

Example: Dynamic Painting Pricing

The seed data includes a Painting class that demonstrates conditional pricing based on component relationships:

Painting Schema:

{
  "name": "Painting",
  "properties": [
    {
      "name": "price",
      "data_type": "Number",
      "required": false
    }
  ],
  "relationships": [
    { "name": "a", "targets": ["Component"] },
    { "name": "b", "targets": ["Component"] },
    { "name": "c", "targets": ["Component"] }
  ]
}

Painting Instances with Conditional Pricing:

  • painting1: Has components A + B β†’ Price = $100
  • painting2: Has components A + C β†’ Price = $110
  • painting3: Has only component A β†’ Price = $0 (default)

Conditional Property Evaluation

When accessing a conditional property, the system:

  1. Evaluates each rule in order - checks when condition against instance relationships
  2. Returns first match - uses then value from first rule where condition is true
  3. Falls back to default - uses default value if no rules match
  4. Validates relationships - ensures all referenced relationships exist in class schema
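The first-match evaluation order above can be sketched as follows; the names (Rule, evaluate_conditional) are illustrative, not the crate's actual API:

```rust
// Sketch of rule evaluation for conditional properties:
// first rule whose relationships are all present wins, else the default.
use std::collections::HashSet;

pub struct Rule {
    /// Relationship names that must all be present ({"when": {"all": [...]}}).
    pub when_all: Vec<String>,
    /// Value used when the condition matches ({"then": ...}).
    pub then: f64,
}

/// Returns the first matching rule's value, falling back to `default`.
pub fn evaluate_conditional(rules: &[Rule], present: &HashSet<String>, default: f64) -> f64 {
    for rule in rules {
        if rule.when_all.iter().all(|rel| present.contains(rel)) {
            return rule.then;
        }
    }
    default
}
```

With the Painting rules from the format above, an instance with relationships a + b evaluates to 100.0, and an instance with only a falls back to the default.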

Use Cases

  • Dynamic Pricing: Prices based on selected options or configurations
  • Configuration Logic: Different settings based on feature combinations
  • Business Rules: Complex logic based on relationship presence
  • Conditional Features: Features enabled/disabled based on other selections

Testing Conditional Properties

See verify_features.md for complete testing instructions. Quick test:

cargo run  # Start server
# Instances now return expanded relationships by default with detailed resolution information
curl -s http://localhost:7061/databases/furniture_catalog/instances/painting1 | jq '.properties.price'  # Returns: 100
curl -s http://localhost:7061/databases/furniture_catalog/instances/painting-minimal | jq '.properties.price'  # Returns: 25

# View relationship resolution details with filter information
curl -s http://localhost:7061/databases/furniture_catalog/instances/car-001 | jq '.relationships.color.resolution_details'

πŸ“ Derived Properties System

The OAT-DB includes a powerful derived properties system that enables dynamic calculations based on instance properties and relationships, supporting complex business logic without storing redundant data.

Core Features

  • Expression-Based Calculations: Properties computed using a rich expression language
  • Relationship Aggregations: Sum, count, and aggregate values across relationships
  • Arithmetic Operations: Support for add, subtract, multiply, divide operations
  • Conditional Logic: If-then-else expressions for complex business rules
  • Type-Safe: Each derived property declares its expected data type
  • Lazy Evaluation: Calculated on-demand only when requested

Expression Language

Property References

// Own property
{ "type": "prop", "prop": "basePrice" }

// Related instance property
{ "type": "rel_prop", "rel": "color", "prop": "price" }

Arithmetic Operations

// Addition: basePrice + 50
{
  "type": "add",
  "left": { "type": "prop", "prop": "basePrice" },
  "right": { "type": "lit_number", "value": 50 }
}

// Complex calculation: (basePrice * quantity) - discount
{
  "type": "sub",
  "left": {
    "type": "mul",
    "left": { "type": "prop", "prop": "basePrice" },
    "right": { "type": "prop", "prop": "quantity" }
  },
  "right": { "type": "prop", "prop": "discount" }
}

Aggregations

// Sum all component prices
{
  "type": "sum",
  "over": "components",
  "prop": "price",
  "where": null
}

// Count components with price > 100
{
  "type": "count",
  "over": "components",
  "where": {
    "type": "gt",
    "left": { "type": "prop", "prop": "price" },
    "right": { "type": "lit_number", "value": 100 }
  }
}

Conditional Expressions

// Apply 10% discount if quantity > 10
{
  "type": "if",
  "condition": {
    "type": "gt",
    "left": { "type": "prop", "prop": "quantity" },
    "right": { "type": "lit_number", "value": 10 }
  },
  "then": {
    "type": "mul",
    "left": { "type": "prop", "prop": "price" },
    "right": { "type": "lit_number", "value": 0.9 }
  },
  "else": { "type": "prop", "prop": "price" }
}

Schema Definition

Add derived properties to any class using either full expressions or shortcuts:

Full Expression Format

{
  "classes": [{
    "id": "class-table",
    "name": "Table",
    "properties": [
      { "id": "base_price", "name": "base_price", "data_type": "number" },
      { "id": "discount", "name": "discount", "data_type": "number" }
    ],
    "relationships": [
      { "id": "chairs", "name": "chairs", "targets": ["class-chair"] },
      { "id": "color", "name": "color", "targets": ["class-color"] }
    ],
    "derived": [
      {
        "id": "total_price",
        "name": "total_price",
        "data_type": "number",
        "expr": {
          "type": "sub",
          "left": {
            "type": "add",
            "left": {
              "type": "add",
              "left": { "type": "prop", "prop": "base_price" },
              "right": { "type": "sum", "over": "chairs", "prop": "price" }
            },
            "right": { "type": "sum", "over": "color", "prop": "price" }
          },
          "right": { "type": "prop", "prop": "discount" }
        }
      }
    ]
  }]
}

This calculates: total_price = base_price + sum(chair prices) + sum(color prices) - discount

Shortcut Format (fn_short)

For common patterns like summing a property across all relationships:

{
  "derived": [
    {
      "id": "der-totalPrice",
      "name": "totalPrice",
      "data_type": "number",
      "fn_short": {
        "method": "sum",
        "property": "price"
      }
    }
  ]
}

This automatically expands to: own price + sum of all children's price properties
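Numerically, that expansion amounts to the following; the helper name is hypothetical and shown only to make the arithmetic concrete:

```rust
// Sketch of what the "sum" shortcut computes: the instance's own value of the
// property plus that property summed over its related children.
pub fn fn_short_sum(own: Option<f64>, children: &[f64]) -> f64 {
    own.unwrap_or(0.0) + children.iter().sum::<f64>()
}
```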

API Usage

Adding Derived Properties

  1. Create working commit:
curl -X POST http://localhost:7061/databases/{db_id}/branches/{branch_id}/working-commit
  2. Update class with derived property:
curl -X PATCH http://localhost:7061/databases/{db_id}/branches/{branch_id}/working-commit/schema/classes/{class_id} \
  -H "Content-Type: application/json" \
  -d '{
    "derived": [{
      "id": "der-totalPrice",
      "name": "totalPrice",
      "data_type": "number",
      "expr": {
        "type": "add",
        "left": { "type": "prop", "prop": "basePrice" },
        "right": { "type": "sum", "over": "components", "prop": "price" }
      }
    }]
  }'
  3. Commit changes:
curl -X POST http://localhost:7061/databases/{db_id}/branches/{branch_id}/working-commit/commit \
  -d '{"message": "Add totalPrice derived property"}'

Querying Derived Values

Include in configuration queries:

curl -X POST http://localhost:7061/databases/{db_id}/instances/{instance_id}/query \
  -H "Content-Type: application/json" \
  -d '{
    "objective": "minimize_cost",
    "derived_properties": ["totalPrice", "unitPrice"]
  }'

Response includes calculated values:

{
  "configuration": { ... },
  "derived_properties": {
    "product-001": {
      "totalPrice": 1275.0,
      "unitPrice": 8.5
    }
  }
}

Examples from Seed Data

Underbed Total Price

{
  "id": "der-underbed-totalPrice",
  "name": "totalPrice",
  "data_type": "number",
  "expr": {
    "type": "add",
    "left": { "type": "prop", "prop": "basePrice" },
    "right": { "type": "sum", "over": "leg", "prop": "price" }
  }
}

Common Patterns

Total with Percentage Discount:

{
  "expr": {
    "type": "mul",
    "left": {
      "type": "add",
      "left": { "type": "prop", "prop": "basePrice" },
      "right": { "type": "sum", "over": "addons", "prop": "price" }
    },
    "right": {
      "type": "sub",
      "left": { "type": "lit_number", "value": 1.0 },
      "right": { "type": "prop", "prop": "discountRate" }
    }
  }
}

Average Price:

{
  "expr": {
    "type": "div",
    "left": { "type": "sum", "over": "items", "prop": "price" },
    "right": { "type": "count", "over": "items" }
  }
}

Key Characteristics

  • Domain-Aware: Aggregations only include selected instances (domain.lower >= 1)
  • Performance: Complex expressions may impact query performance
  • No Circular References: Cannot reference other derived properties
  • JSON Types: Handles conversion between JSON values and numeric types
  • Configuration Context: Calculations consider full configuration state

Use Cases

  • Dynamic Pricing: Calculate totals, apply discounts, handle complex pricing rules
  • Inventory Management: Track quantities, calculate stock levels
  • Business Metrics: Compute KPIs and aggregated values
  • Configuration Validation: Ensure configurations meet business constraints
  • Cost Optimization: Support solver objectives with calculated costs

🎯 Domain System for Configuration Spaces

The OAT-DB includes a comprehensive domain system for managing configuration spaces and instance selection constraints. Domains define value ranges for instances, enabling super-configuration management and constraint satisfaction.

Core Domain Concepts

  • Domain: A range [lower, upper] defining possible values for an instance
  • Class Domain Constraints: Default domain ranges for instances of a class
  • Instance Domains: Actual domain values for specific instances (can override class defaults)
  • Super Configuration: Collection of instances with their domain ranges
  • Specific Configuration: All domains collapsed to constants (lower == upper)

Domain Structure

{
  "domain": {
    "lower": 0,
    "upper": 1
  }
}

Domain Types and Semantics

Binary Domains [0,1]

Instance can be included (1) or excluded (0):

Domain::binary()  // Creates [0,1] domain

Constant Domains [n,n]

Instance has fixed value (always selected with specific quantity):

Domain::constant(1)  // Creates [1,1] domain (always 1 copy)
Domain::constant(5)  // Creates [5,5] domain (always 5 copies)

Range Domains [min,max]

Instance can have any value within range:

Domain::new(0, 4)   // Creates [0,4] domain (0 to 4 copies allowed)
Domain::new(1, 10)  // Creates [1,10] domain (1 to 10 copies allowed)

Class Domain Constraints (Schema Level)

Classes define default domains for their instances:

{
  "name": "Color",
  "domain_constraint": {
    "lower": 1,
    "upper": 1
  }
}

This means: Every Color instance defaults to domain [1,1] (always selected).

Instance Domains (Instance Level)

Instances can override class defaults:

{
  "id": "painting-minimal",
  "class": "class-painting",
  "domain": {
    "lower": 0,
    "upper": 1
  }
}

Configuration Space Examples

The seed data demonstrates various domain strategies:

Class Domain Constraints

  • Painting/Component/Option/Car: [0,1] β†’ instances default to binary selection
  • Size/Color: [1,1] β†’ instances default to always selected
  • Fabric: [0,10] β†’ instances default to allowing 0-10 copies
  • Leg: [0,4] β†’ instances default to allowing 0-4 copies

Instance Domain Overrides

  • painting-minimal: [0,1] (inherits Painting class default)
  • comp-a: [1,5] (overrides Component class default [0,1])
  • painting1: [1,1] (constant - always included)
  • painting2: [0,3] (allows 0-3 copies)

Domain Helper Methods

The Domain struct provides useful utility methods:

let domain = Domain::new(0, 3);

domain.is_constant()     // false (0 != 3)
domain.is_binary()       // false (not [0,1])
domain.contains(2)       // true (2 is in [0,3])

let constant = Domain::constant(5);
constant.is_constant()   // true (5 == 5)
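A minimal Domain sketch consistent with those helpers might look like this (the actual struct lives in the crate's model layer; field and method names here mirror the usage above):

```rust
// Sketch of the Domain type and its helper methods.
#[derive(Debug, Clone, Copy, PartialEq)]
pub struct Domain {
    pub lower: i64,
    pub upper: i64,
}

impl Domain {
    pub fn new(lower: i64, upper: i64) -> Self {
        Domain { lower, upper }
    }
    /// [0,1]: instance may be included or excluded.
    pub fn binary() -> Self {
        Domain::new(0, 1)
    }
    /// [n,n]: instance is always selected with a fixed quantity.
    pub fn constant(n: i64) -> Self {
        Domain::new(n, n)
    }
    pub fn is_constant(&self) -> bool {
        self.lower == self.upper
    }
    pub fn is_binary(&self) -> bool {
        self.lower == 0 && self.upper == 1
    }
    pub fn contains(&self, value: i64) -> bool {
        self.lower <= value && value <= self.upper
    }
}
```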

Configuration Workflow

  1. Super Configuration: Start with instances having domain ranges

    painting-a: [0,1], color-red: [1,1], option-gps: [0,1]
    
  2. Configuration Process: Make selection decisions

    painting-a: [1,1], color-red: [1,1], option-gps: [0,0]
    
  3. Specific Configuration: All domains are constants

    • painting-a: included (1 copy)
    • color-red: selected (1 copy)
    • option-gps: excluded (0 copies)
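The terminal condition of that workflow (every domain collapsed to a constant) can be sketched as a single check; the helper name is hypothetical:

```rust
// A configuration is "specific" once every instance domain is [n,n].
pub fn is_specific_configuration(domains: &[(i64, i64)]) -> bool {
    domains.iter().all(|(lower, upper)| lower == upper)
}
```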

Domain Validation

Domains provide the foundation for:

  • Configuration Validation: Ensuring selections respect domain constraints
  • Solution Space Definition: Defining valid configuration boundaries
  • Constraint Satisfaction: Managing complex selection rules
  • Optimization: Finding optimal configurations within domain bounds

API Integration

Domains appear in both class definitions and instance responses:

# View class domain constraints
curl http://localhost:7061/databases/furniture_catalog/branches/main/schema | jq '.classes[] | {name, domain_constraint}'

# View instance domains
curl http://localhost:7061/databases/furniture_catalog/instances/painting-minimal | jq '{id, type, domain}'

Use Cases

  • Product Configuration: Define valid option ranges for products
  • Resource Allocation: Constrain resource assignment quantities
  • Combinatorial Optimization: Search within defined solution spaces
  • Configuration Management: Manage valid configuration states
  • Constraint Programming: Express domain constraints for solvers

🎯 Pool Resolution System

The OAT-DB includes an advanced pool resolution system for combinatorial optimization, allowing sophisticated control over which instances are available for selection in relationships.

Core Concepts

  • Default Pools: Schema-level defaults for what instances are available by default
  • Instance Overrides: Instance-level pool customization with filters
  • Pool Resolution: Determines all available instances for relationships
  • Solver Selection: Quantifiers and solvers determine final selections from available instances

Pool Resolution Modes

DefaultPool::All

All instances of target type(s) are available in the pool by default.

{
  "name": "color",
  "targets": ["class-color"],
  "default_pool": { "mode": "all" }
}

DefaultPool::None

No instances are available by default - must be explicitly specified.

{
  "name": "freeOptions",
  "targets": ["Option"],
  "default_pool": { "mode": "none" }
}

DefaultPool::Filter

A filtered subset based on conditions.

{
  "name": "budgetColors",
  "targets": ["class-color"],
  "default_pool": {
    "mode": "filter",
    "type": ["class-color"],
    "where": {
      "all": {
        "predicates": [{ "prop_lt": { "prop": "price", "value": 100 } }]
      }
    }
  }
}

Pool-Based Relationship Customization

Instances can override schema defaults with custom pool filters:

{
  "relationships": {
    "color": {
      "pool": {
        "type": ["class-color"],
        "where": {
          "all": {
            "predicates": [{ "prop_lt": { "prop": "price", "value": 100 } }]
          }
        },
        "limit": 2
      }
    }
  }
}

Key Points:

  • Pool defines availability: What instances CAN be chosen from
  • No selection property: The solver determines what IS chosen based on quantifiers
  • No sort property: Order doesn't matter for combinatorial problems
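The prop_lt filter with a limit, as in the pool override above, can be sketched like this (types and names are illustrative, not the crate's PoolResolver API):

```rust
// Sketch of an instance-level pool filter: keep candidates whose numeric
// property is under a threshold (prop_lt), then apply the optional limit.
pub struct PoolInstance {
    pub id: String,
    pub price: f64,
}

pub fn filter_pool(candidates: &[PoolInstance], max_price: f64, limit: Option<usize>) -> Vec<String> {
    let mut ids: Vec<String> = candidates
        .iter()
        .filter(|c| c.price < max_price) // prop_lt predicate
        .map(|c| c.id.clone())
        .collect();
    if let Some(n) = limit {
        ids.truncate(n);
    }
    ids
}
```

Against the seed colors (red $50, blue $75, gold $150), a price-under-100 filter with limit 2 would leave red and blue in the pool; the solver's quantifiers then decide which of those is actually selected.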

Example: Car Color and Options

The seed data includes comprehensive Car/Color/Option examples demonstrating different pool strategies:

Car Schema:

{
  "name": "Car",
  "relationships": [
    {
      "name": "color",
      "targets": ["class-color"],
      "quantifier": { "Exactly": 1 },
      "default_pool": { "mode": "all" }
    },
    {
      "name": "freeOptions",
      "targets": ["Option"],
      "quantifier": { "AtLeast": 0 },
      "default_pool": { "mode": "none" }
    }
  ]
}

Car Examples:

  1. Sedan (car-001): Custom color pool (under $100), explicit GPS option
  2. Luxury SUV (car-002): Schema default (all colors), custom expensive options pool
  3. Economy Hatchback (car-003): Budget color pool with filtering and a limit, no free options

Pool Resolution Process

Single-Step Pool Resolution

Determines all instances available for solver selection:

let effective_pool = PoolResolver::resolve_effective_pool(
    store,
    relationship_def,
    instance_pool_override, // Optional custom filter
    branch_id,
).await?;

Result: Available Options

All instances from the pool are returned as materialized_ids:

// Pool resolution provides ALL available options
// Solver uses quantifiers to make final selections
let materialized_ids = effective_pool; // All instances available for solver

Solver Integration

  • Pool Resolution: Finds all available instances (e.g., all colors under $100)
  • Materialized IDs: Contains full set of options for solver to choose from
  • Quantifiers: Guide solver selection (e.g., EXACTLY(1) = pick exactly 1 color)
  • Solver Output: Final configuration with specific instance selections

Use Cases

  • E-commerce Configuration: Available options based on product tier
  • Resource Allocation: Constrain resource pools based on quotas or policies
  • Combinatorial Optimization: Complex constraint satisfaction problems
  • Dynamic Catalogs: Available products change based on customer segment

Testing Pool Resolution

See verify_features.md for complete testing instructions. Quick test:

cargo run  # Start server
# All instances now return expanded relationships by default with comprehensive pool resolution details
curl -s http://localhost:7061/databases/furniture_catalog/instances/car-001 | jq '.relationships'
# Shows resolved pools with filter descriptions, timing, and resolution methods

# View detailed filter information for pool resolution
curl -s http://localhost:7061/databases/furniture_catalog/instances/car-001 | jq '.relationships.color.resolution_details.filter_description'
# Returns: "Pool filter: InstanceFilter { types: Some([\"Color\"]), where_clause: Some(All { predicates: [PropLt { prop: \"price\", value: Number(100) }] }), sort: None, limit: None }"

Sample Data Structure

The seed data creates this git-like structure:

Furniture Catalog Database
β”œβ”€β”€ default_branch_id: "main-branch-uuid"
└── Main Branch (name: "main")
    β”œβ”€β”€ commit_hash: "initial-commit-uuid"
    β”œβ”€β”€ commit_message: "Initial commit"
    β”œβ”€β”€ author: "System"
    β”œβ”€β”€ status: "active"
    β”œβ”€β”€ FurnitureCatalogSchema (class-based)
    β”‚   β”œβ”€β”€ Class: "Underbed" (domain_constraint: [0,1])
    β”‚   β”‚   β”œβ”€β”€ Properties: name, basePrice, price
    β”‚   β”‚   β”œβ”€β”€ Relationships: size, fabric, leg
    β”‚   β”‚   └── Derived: totalPrice = basePrice + Sum(leg.price)
    β”‚   β”œβ”€β”€ Class: "Size" (domain_constraint: [1,1], Properties: name, width, length)
    β”‚   β”œβ”€β”€ Class: "Fabric" (domain_constraint: [0,10], Properties: name, color, material)
    β”‚   β”œβ”€β”€ Class: "Leg" (domain_constraint: [0,4], Properties: name, material, price)
    β”‚   β”œβ”€β”€ Class: "Painting" (domain_constraint: [0,1], Conditional pricing based on components)
    β”‚   β”‚   β”œβ”€β”€ Properties: name, price (conditional)
    β”‚   β”‚   └── Relationships: a, b, c (to Component instances)
    β”‚   β”œβ”€β”€ Class: "Component" (domain_constraint: [0,1], Properties: name, type)
    β”‚   β”œβ”€β”€ Class: "Car" (domain_constraint: [0,1], Pool resolution examples)
    β”‚   β”‚   β”œβ”€β”€ Properties: model
    β”‚   β”‚   β”œβ”€β”€ Relationships: color (default pool: All), freeOptions (default pool: None)
    β”‚   β”‚   └── Pool strategies: DefaultPool::All vs DefaultPool::None
    β”‚   β”œβ”€β”€ Class: "Color" (domain_constraint: [1,1], Properties: name, price)
    β”‚   └── Class: "Option" (domain_constraint: [0,1], Properties: name, price)
    └── Instances (all with typed properties):
        β”œβ”€β”€ Size: size-small, size-medium
        β”œβ”€β”€ Fabric: fabric-cotton-white, fabric-linen-beige
        β”œβ”€β”€ Legs: leg-wooden, leg-wooden-2, leg-wooden-3, leg-wooden-4, leg-metal
        β”œβ”€β”€ Underbed: delux-underbed (with unique leg references)
        β”œβ”€β”€ Painting: painting1 (domain: [1,1], components A+B, $100), painting2 (domain: [0,3], A+C, $110), painting3 (A only, $0), painting-minimal (domain: [0,1], no components, $25)
        β”œβ”€β”€ Components: comp-a (domain: [1,5]), comp-b, comp-c (for conditional pricing examples)
        β”œβ”€β”€ Cars: car-001 (Sedan), car-002 (Luxury SUV), car-003 (Economy Hatchback)
        β”œβ”€β”€ Colors: color-red ($50), color-blue ($75), color-gold ($150)
        └── Options: option-gps ($300), option-sunroof ($800)

Development

The project uses:

  • Axum for HTTP server
  • SQLx for PostgreSQL integration with compile-time query checking
  • Serde for JSON serialization
  • Tokio for async runtime
  • Anyhow/ThisError for error handling
  • Chrono for timestamps
  • SHA2 for commit hashing
  • Flate2 for commit data compression

Architecture Highlights

  • Git-like Branch Model: Each database has branches with commit history like git repos
  • PostgreSQL Backend: Production-ready persistence with proper ACID transactions
  • Immutable Commits: SHA-256 hashed commits with compressed schema + instance data
  • Branch-aware Queries: All operations respect database isolation boundaries
  • Class-based Schemas: One schema contains multiple class definitions
  • Typed Properties: Every property has explicit type information
  • Trait-based Storage: Abstracted storage layer supporting multiple backends

Testing

Tests cover:

  • Database and branch creation (git-like workflow)
  • Class-based schema management
  • Typed property instance operations
  • Branch-based data isolation
  • Hierarchical data integrity

Run cargo test to verify implementation.

Current Status

The current implementation provides a complete production-ready system with PostgreSQL backend:

βœ… Core Architecture

  • Git-like PostgreSQL schema with commits, branches, and immutable history
  • Enhanced working commit staging system with full relationship resolution including schema default pools
  • SHA-256 commit hashing with compressed binary data storage (gzip)
  • Branch-aware database isolation preventing cross-database data leakage
  • Comprehensive audit trail system with user tracking for all class and instance operations
  • Class-based schemas with separate entity definitions
  • Typed properties with explicit data types (String, Number, Boolean, Object, Array, StringList)
  • Conditional properties system with rule-based evaluation and relationship presence checking
  • Advanced pool resolution system for combinatorial optimization with default pool strategies and working commit context
  • Domain system for configuration space management with class constraints and instance domains
  • Database/Branch hierarchy with proper isolation
  • Production PostgreSQL backend with trait-based abstraction
  • Full backward compatibility with in-memory storage option

βœ… API Features

  • REST API endpoints with comprehensive CRUD operations
  • Database-level API endpoints that auto-select main branch
  • Branch-specific endpoints for isolated operations
  • Granular class CRUD operations with individual endpoints
  • Individual instance delete and update operations
  • Working commit system with git-like staging and commit workflow
  • Query parameters for filtering and relationship expansion

βœ… Improved Validation Workflow

The system now supports a user-controlled validation approach that separates data modification from validation:

  • πŸ”§ PATCH Operations: Work without validation constraints, allowing incremental data fixes
  • πŸ” Explicit Validation: Use dedicated /validate endpoints to check data when ready
  • πŸ“ Working Commit Validation: New /working-commit/validate endpoint for pre-commit validation
  • βœ… Commit Control: Users decide when data is ready to be committed

Benefits:

  • Fix invalid data step-by-step without being blocked
  • Make partial changes and validate when ready
  • Complete control over validation timing
  • No more validation errors preventing legitimate data updates

Example Workflow:

# 1. Make changes without validation blocking
curl -X PATCH /databases/db/instances/item \
  -d '{"properties": {"new_field": {"value": "test", "type": "String"}}}'

# 2. Validate staged changes when ready
curl /databases/db/branches/main/working-commit/validate

# 3. Commit when validation passes
curl -X POST /databases/db/branches/main/working-commit/commit

βœ… Type Validation System

  • Comprehensive instance validation against class-based schemas
  • Schema compliance checking with detailed error reporting
  • Data type validation for all property values
  • Required property enforcement
  • Value-type consistency verification
  • Relationship validation for undefined connections
  • Conditional property validation with relationship reference checking
  • Pool-based relationship validation with constraint verification

βœ… Merge Validation System

  • Pre-merge validation to prevent data corruption
  • Merge simulation without affecting actual data
  • Validation conflict detection integrated with merge process
  • Enhanced merge blocking for validation errors
  • Detailed reporting of affected instances and potential issues

βœ… Branch Operations

  • Branch merge operations with comprehensive conflict detection
  • Git-like rebase functionality for keeping branches up to date
  • Branch deletion with proper status management
  • Branch commit functionality with hash and author tracking
  • Automatic validation integration in merge and rebase processes
  • Force merge/rebase capability for override scenarios
  • Pre-operation validation to prevent data corruption

βœ… Documentation & Developer Experience

  • Interactive Swagger UI documentation at /docs
  • Complete OpenAPI 3.0 specification with all endpoints
  • Live API testing directly from browser
  • Comprehensive schema definitions with examples
  • Error response documentation with detailed schemas

Branch Operations API

Merge Branch

POST /databases/{db_id}/branches/{branch_id}/merge
{
  "target_branch_id": "main-branch-id",
  "author": "[email protected]",
  "force": false
}

Response:

{
  "success": true,
  "conflicts": [],
  "merged_instances": 5,
  "merged_schema_changes": true,
  "message": "Successfully merged branch 'feature-xyz' into 'main'"
}

Rebase Branch

POST /databases/{db_id}/branches/{feature_branch_id}/rebase
{
  "target_branch_id": "main",
  "author": "[email protected]",
  "force": false
}

Response:

{
  "success": true,
  "conflicts": [],
  "message": "Successfully rebased 'feature-add-materials' onto 'main'",
  "rebased_instances": 10,
  "rebased_schema_changes": true
}

Rebase with Specific Target

POST /databases/{db_id}/branches/{feature_branch_id}/rebase/{target_branch_id}
{
  "author": "[email protected]",
  "force": false
}

Commit Changes

POST /databases/{db_id}/branches/{branch_id}/commit
{
  "message": "Add new table support with validation",
  "author": "[email protected]"
}

Delete Branch

POST /databases/{db_id}/branches/{branch_id}/delete
{
  "force": false
}

πŸ” Type Validation System

The OAT-DB includes a comprehensive type validation system that ensures data integrity across all branches and merge operations.

Core Validation Features

  • Schema Compliance: All properties validated against class definitions
  • Data Type Checking: Values validated against declared types (String, Number, Boolean, Object, Array, StringList)
  • Required Property Validation: Missing required properties caught during validation
  • Type Consistency: Declared type must match actual JSON value type
  • Relationship Validation: Basic checks for undefined relationships
  • Detailed Error Reporting: Rich error and warning information with specific property details

Validation API Endpoints

Instance Validation

  • GET /databases/{db_id}/validate - Validate all instances in database (main branch)
  • GET /databases/{db_id}/instances/{instance_id}/validate - Validate single instance in database (main branch)
  • GET /databases/{db_id}/branches/{branch_id}/validate - Validate all instances in specific branch
  • GET /databases/{db_id}/branches/{branch_id}/instances/{instance_id}/validate - Validate single instance in specific branch

Merge Validation (Pre-merge Safety Checks)

  • GET /databases/{db_id}/branches/{source_branch_id}/validate-merge - Validate merge into database main branch
  • GET /databases/{db_id}/branches/{source_branch_id}/validate-merge/{target_branch_id} - Validate merge between specific branches

Validation Result Format

{
  "valid": true,
  "errors": [],
  "warnings": [
    {
      "instance_id": "delux-underbed",
      "warning_type": "ConditionalPropertySkipped",
      "message": "Conditional property 'price' was not type-checked",
      "property_name": "price"
    }
  ],
  "instance_count": 10,
  "validated_instances": ["size-small", "size-medium", "fabric-cotton-white", ...]
}

Validation Error Types

  • TypeMismatch: Property type doesn't match schema
  • MissingRequiredProperty: Required field is absent
  • UndefinedProperty: Instance has property not in schema
  • ValueTypeInconsistency: JSON value doesn't match declared type
  • ClassNotFound: Instance type has no schema definition
  • RelationshipError: Undefined relationships

Example: Validate All Instances

# Check all instances in main branch
curl http://localhost:7061/databases/furniture_catalog/validate

# Check specific instance
curl http://localhost:7061/databases/furniture_catalog/instances/delux-underbed/validate

# Check all instances in feature branch
curl http://localhost:7061/databases/furniture_catalog/branches/feature-xyz/validate

🔄 Merge Validation System

The merge validation system prevents data integrity issues by validating merges before they happen, ensuring schema changes don't break existing instances.

Pre-Merge Workflow

  1. Developer creates feature branch and modifies schema classes
  2. Before merging back to main, calls validation endpoint:
    GET /databases/furniture_catalog/branches/feature-new-properties/validate-merge
  3. System simulates the merge and validates all main branch instances against the modified schema
  4. Returns detailed report showing potential validation errors

Merge Validation Features

  • Merge Simulation: Creates virtual merge result without affecting actual data
  • Full Instance Validation: Validates all instances against merged schema
  • Conflict Detection: Identifies schema/instance conflicts and validation issues
  • Detailed Reporting: Shows exactly which instances would fail and why
  • Prevention: Stops problematic merges before they corrupt data
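
The merge-simulation idea reduces to: build a virtual merged schema, then validate every existing instance against it before any data is touched. A hedged Python sketch (the data shapes and the injected `validate` callback are assumptions, not the real API):

```python
# Sketch of pre-merge validation. Schemas are modeled as class_id -> class
# definition dicts; nothing here mutates stored data.
def simulate_merge(target_schema, feature_schema):
    """Virtual merge result: feature-branch class definitions override the target's."""
    merged = dict(target_schema)
    merged.update(feature_schema)
    return merged

def validate_merge(target_schema, feature_schema, instances, validate):
    """Validate all instances against the simulated merge; report conflicts."""
    merged = simulate_merge(target_schema, feature_schema)
    conflicts = []
    for inst in instances:
        for err in validate(inst, merged[inst["class_id"]]):
            conflicts.append({"conflict_type": "ValidationConflict",
                              "resource_id": inst["id"],
                              "error": err})
    return {"can_merge": not conflicts, "conflicts": conflicts}
```

This mirrors the scenario in the result example: a feature branch that makes `material` required would flag every target-branch instance missing that property, before the merge runs.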

Merge Validation Result

{
  "can_merge": false,
  "conflicts": [
    {
      "conflict_type": "ValidationConflict",
      "resource_id": "delux-underbed",
      "description": "Merge would create validation error: Required property 'material' is missing (Instance: delux-underbed)"
    }
  ],
  "validation_result": {
    "valid": false,
    "errors": [
      {
        "instance_id": "delux-underbed",
        "error_type": "MissingRequiredProperty",
        "message": "Required property 'material' is missing",
        "property_name": "material",
        "expected": "String",
        "actual": null
      }
    ],
    "warnings": [],
    "instance_count": 10,
    "validated_instances": ["delux-underbed", ...]
  },
  "simulated_schema_valid": true,
  "affected_instances": ["delux-underbed", "size-small", "fabric-cotton-white"]
}

Integration with Merge Process

The validation system is automatically integrated into the merge process:

  • Automatic Detection: Merge operations automatically detect validation conflicts
  • Merge Blocking: Merges fail if they would introduce validation errors (unless force is used)
  • Enhanced Conflicts: Merge conflicts include validation issues alongside traditional conflicts

Example: Pre-Merge Validation

# Check if feature branch can safely merge into main
curl http://localhost:7061/databases/furniture_catalog/branches/feature-add-materials/validate-merge

# Check merge between specific branches
curl http://localhost:7061/databases/furniture_catalog/branches/feature-src/validate-merge/feature-target

When Validation Helps

Perfect for preventing:

  • Schema changes that break existing instances
  • Adding required properties without providing values
  • Type changes that invalidate existing data
  • Relationship modifications that break connections

Example Scenario:

  1. Feature branch adds required material property to Underbed class
  2. Main branch has delux-underbed instance without material property
  3. Pre-merge validation catches this conflict before merge
  4. The developer can then do one of the following:
    • Make material optional instead of required
    • Add default material value to existing instances
    • Update the problematic instances in their branch first

🔀 Git-like Rebase System

The OAT-DB includes a comprehensive rebase system that allows you to replay your feature branch changes on top of the latest target branch state, similar to git rebase.

Core Rebase Features

  • Branch Rebasing: Replay feature branch changes on top of target branch (usually main)
  • Automatic Conflict Detection: Identifies schema, instance, and validation conflicts before rebasing
  • Smart Merging: Target branch provides the base, feature branch changes override conflicts
  • Validation Integration: Ensures rebased result passes validation checks
  • Force Option: Override conflicts when you're confident about changes

Rebase vs Merge

| Operation | What It Does | When to Use |
|-----------|--------------|-------------|
| Merge | Combines two branches, creating a merge commit | When you want to preserve branch history |
| Rebase | Replays feature changes on top of the target branch | When you want a linear, clean history |

Rebase Workflow

  1. Check if rebase is needed: Use validate-rebase to see if target branch has new changes
  2. Pre-rebase validation: Check for conflicts and validation issues
  3. Resolve conflicts: Fix schema or validation issues if needed
  4. Execute rebase: Apply feature branch changes on top of target branch
  5. Verify result: Feature branch now contains target branch base + feature changes

Rebase API Endpoints

Rebase Validation

  • GET /databases/{db_id}/branches/{feature_branch_id}/validate-rebase - Check rebase compatibility with main
  • GET /databases/{db_id}/branches/{feature_branch_id}/validate-rebase/{target_branch_id} - Check rebase with specific target

Execute Rebase

  • POST /databases/{db_id}/branches/{feature_branch_id}/rebase - Rebase onto main branch
  • POST /databases/{db_id}/branches/{feature_branch_id}/rebase/{target_branch_id} - Rebase onto specific branch

Rebase Validation Result

{
  "can_rebase": true,
  "conflicts": [],
  "validation_result": {
    "valid": true,
    "errors": [],
    "warnings": [],
    "instance_count": 11,
    "validated_instances": ["instance1", "instance2", ...]
  },
  "needs_rebase": true,
  "affected_instances": ["instance1", "instance2", ...]
}

Example: Complete Rebase Workflow

# 1. Check if rebase is needed and safe
curl http://localhost:7061/databases/furniture_catalog/branches/feature-add-materials/validate-rebase

# 2. If validation shows conflicts, fix them first
curl -X PATCH http://localhost:7061/databases/furniture_catalog/branches/feature-add-materials/schema/classes/class-underbed \
  -H "Content-Type: application/json" \
  -d '{"properties": [{"id": "prop-material", "name": "material", "data_type": "String", "required": false}]}'

# 3. Execute the rebase
curl -X POST http://localhost:7061/databases/furniture_catalog/branches/feature-add-materials/rebase \
  -H "Content-Type: application/json" \
  -d '{
    "target_branch_id": "main",
    "author": "[email protected]",
    "force": false
  }'

Response:

{
  "success": true,
  "conflicts": [],
  "message": "Successfully rebased 'feature-add-materials' onto 'main'",
  "rebased_instances": 10,
  "rebased_schema_changes": true
}

What Happens During Rebase

  1. Target Branch Base: Feature branch gets all instances and schema from target branch as the new base
  2. Feature Changes Applied: Feature branch's schema and instance changes are applied on top
  3. Conflict Resolution: Feature branch changes take precedence over target branch for same resources
  4. Branch Update: Feature branch metadata updated with new commit hash and parent reference
  5. Validation Check: Final result validated to ensure data integrity
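
Steps 1–4 above amount to an overlay plus a new commit hash. A minimal sketch, assuming a flat `resource_id -> resource` state model (the real system stores SHA-256 commits with compressed binary data in PostgreSQL, so this is conceptual only):

```python
import hashlib
import json

# Conceptual model of a rebase: the target branch supplies the base state,
# feature-branch changes are applied on top and win on conflicts, and a new
# commit hash is derived from the result plus its parent.
def rebase(target_state, feature_changes, parent_hash):
    state = dict(target_state)        # step 1: target branch as the new base
    state.update(feature_changes)     # steps 2-3: feature changes take precedence
    payload = json.dumps({"parent": parent_hash, "state": state},
                         sort_keys=True).encode()
    commit_hash = hashlib.sha256(payload).hexdigest()  # step 4: new commit
    return state, commit_hash         # step 5 (validation) omitted here
```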

When to Use Rebase

Perfect for:

  • Keeping feature branches up to date with main branch
  • Creating linear history without merge commits
  • Incorporating latest main branch changes before final merge
  • Updating long-running feature branches

Example Scenario:

  1. You create feature-add-tables branch from main
  2. While you work, main branch gets new commits (new classes, instances)
  3. Before merging back, you rebase to get latest main changes
  4. Your feature branch now contains main's latest changes + your feature work
  5. Final merge into main will be clean and linear

Rebase Conflict Types

  • Schema Conflicts: Both branches modified same classes
  • Instance Conflicts: Both branches modified same instances
  • Validation Conflicts: Rebased result would fail validation
  • Structural Conflicts: Changes incompatible with target branch structure
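
Schema and instance conflicts share one underlying rule: a resource conflicts when both branches changed it, differently, relative to their common ancestor. A small sketch under that assumption (resource maps are illustrative):

```python
# Three-way conflict detection: a resource id conflicts when the target and
# feature branches both diverged from the common base and disagree with
# each other. Identical changes on both sides are not conflicts.
def detect_conflicts(base, target, feature):
    conflicts = []
    for rid in sorted(set(base) | set(target) | set(feature)):
        b, t, f = base.get(rid), target.get(rid), feature.get(rid)
        if t != b and f != b and t != f:
            conflicts.append(rid)
    return conflicts
```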

Force Rebase

Use "force": true when:

  • You're confident about overriding conflicts
  • Schema conflicts are intentional (feature branch has better schema)
  • You've manually verified the result will be correct

⚠️ Warning: Force rebase can override validation errors and may break data integrity.

📚 Interactive API Documentation

The OAT-DB includes comprehensive interactive API documentation powered by Swagger UI.

Accessing Documentation

  • Interactive UI: Visit http://localhost:7061/docs for full Swagger UI interface
  • OpenAPI Spec: Access raw specification at http://localhost:7061/docs/openapi.json

Documentation Features

  • Complete API Coverage: All endpoints with detailed descriptions
  • Interactive Testing: Test API calls directly from the browser
  • Schema Definitions: Full model documentation with examples
  • Error Responses: Comprehensive error handling documentation
  • Organized by Tags: Logical grouping (Databases, Validation, Branches, etc.)
  • Request/Response Examples: Clear examples for all operations

What's Documented

  • All database, branch, schema, and instance endpoints
  • Type validation endpoints with detailed error schemas
  • Merge validation endpoints with conflict resolution examples
  • Model definitions for all request/response structures
  • Query parameters and their usage
  • HTTP status codes and error conditions

Granular Operations Benefits

Why Use Individual Endpoints?

  1. Precision - Modify only what needs changing without affecting other schema elements
  2. Atomic Operations - Each class/instance operation is independent and atomic
  3. Better Error Handling - Specific error messages for individual class/instance operations
  4. Conflict Avoidance - No need to worry about concurrent modifications to other parts of the schema
  5. Cleaner API Design - RESTful semantics with proper HTTP methods (POST, PATCH, DELETE)
  6. Model Separation - Clean input models without server-managed fields (like IDs)

When to Use Which Approach?

Use Granular Endpoints When:

  • Adding a single new class to an existing schema
  • Updating specific properties of one class
  • Removing obsolete classes
  • Need precise control over individual operations

Use Bulk Schema Operations When:

  • Creating entirely new schemas from scratch
  • Major schema restructuring affecting multiple classes
  • Migrating between schema versions

🚀 Advanced Solve System

The OAT-DB includes a sophisticated solve system that transforms abstract relationship selections and conditional properties into concrete, reproducible configuration artifacts through a comprehensive pipeline.

Core Architecture: Selector vs Resolution Context

The solve system separates WHAT to select from WHERE/WHEN to select it:

  • Selectors: Abstract descriptions of what instances to choose (independent of branch/commit)
  • ResolutionContext: The scope and policies for evaluating selectors at solve time
  • ConfigurationArtifacts: Immutable, reproducible solve results with complete metadata

Selector Types

Static Selectors

Pre-materialized instance IDs for deterministic selection:

{
  "resolution_mode": "static",
  "materialized_ids": ["color-red", "color-blue"],
  "metadata": {
    "description": "Manually selected premium colors"
  }
}

Dynamic Selectors

Filter-based selection resolved at solve time:

{
  "resolution_mode": "dynamic",
  "filter": {
    "type": ["class-color"],
    "where": {
      "all": ["premium", "available"]
    },
    "limit": 3
  }
}
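
One plausible reading of this filter, sketched in Python: an instance matches when its class is in `type` and it carries every tag in `where.all`, with `limit` capping the result. This is an illustrative interpretation of the filter semantics, not the engine's actual filter language:

```python
# Illustrative evaluation of a dynamic selector's filter at solve time.
# Instance shape (id, class_id, tags) is an assumption for the example.
def resolve_dynamic(selector, instances):
    f = selector["filter"]
    required_tags = set(f.get("where", {}).get("all", []))
    matched = [i["id"] for i in instances
               if i["class_id"] in f["type"]
               and required_tags <= set(i.get("tags", []))]
    limit = f.get("limit")
    return matched[:limit] if limit else matched
```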

Resolution Context

Defines the scope and policies for selector evaluation:

{
  "database_id": "furniture_catalog",
  "branch_id": "main",
  "commit_hash": "abc123def456", // Optional point-in-time
  "policies": {
    "cross_branch_policy": "reject",
    "missing_instance_policy": "skip",
    "empty_selection_policy": "allow",
    "max_selection_size": 1000
  }
}

Resolution Policies

  • Cross-Branch Policy: How to handle references across branches (reject/allow/allow_with_warnings)
  • Missing Instance Policy: Handle missing static IDs (fail/skip/placeholder)
  • Empty Selection Policy: Handle empty dynamic results (fail/allow/fallback)
  • Max Selection Size: Prevent runaway selections from dynamic filters
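
How these policies might interact while resolving a static selector, as a hedged sketch (the policy keys mirror the JSON example above; the skip/fail behavior shown here is an assumption about the engine):

```python
# Sketch of policy-driven resolution for a static selector: missing ids are
# skipped or rejected per missing_instance_policy, and max_selection_size
# guards against oversized results. Resolution notes mirror the artifact's
# resolution_notes entries.
def resolve_static(selector, existing_ids, policies):
    resolved, notes = [], []
    for iid in selector["materialized_ids"]:
        if iid in existing_ids:
            resolved.append(iid)
        elif policies.get("missing_instance_policy") == "skip":
            notes.append({"note_type": "warning",
                          "message": f"skipped missing instance '{iid}'"})
        else:
            raise ValueError(f"missing instance '{iid}'")
    if len(resolved) > policies.get("max_selection_size", 1000):
        raise ValueError("selection exceeds max_selection_size")
    return resolved, notes
```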

Configuration Artifacts

Immutable solve results containing everything needed for reproducibility:

{
  "id": "artifact-12345",
  "created_at": "2024-01-15T10:30:00Z",
  "resolution_context": {
    /* Full context snapshot */
  },
  "schema_snapshot": {
    /* Schema at solve time */
  },
  "resolved_domains": {
    "painting-a": { "lower": 1, "upper": 1 }, // Constant selection
    "color-red": { "lower": 1, "upper": 1 }
  },
  "resolved_properties": {
    "painting-a": {
      "price": 110.0, // Evaluated conditional property
      "name": "Premium Painting"
    }
  },
  "selector_snapshots": {
    "painting-a": {
      "color": {
        "selector": {
          /* Original selector definition */
        },
        "resolved_ids": ["color-red"],
        "resolution_notes": [
          {
            "note_type": "info",
            "message": "Static selector resolved successfully"
          }
        ]
      }
    }
  },
  "solve_metadata": {
    "total_time_ms": 250,
    "pipeline_phases": [
      { "name": "snapshot", "duration_ms": 50 },
      { "name": "expand", "duration_ms": 75 },
      { "name": "evaluate", "duration_ms": 80 },
      { "name": "validate", "duration_ms": 30 },
      { "name": "compile", "duration_ms": 15 }
    ],
    "statistics": {
      "total_instances": 5,
      "total_selectors": 3,
      "conditional_properties_evaluated": 2,
      "domains_resolved": 5
    }
  }
}

Five-Phase Solve Pipeline

  1. Snapshot Phase: Capture immutable state of schema and instances at solve time
  2. Expand Phase: Resolve all selectors to concrete instance sets using resolution policies
  3. Evaluate Phase: Process conditional properties and resolve domains to final values
  4. Validate Phase: Check constraints, quantifiers, and relationship consistency
  5. Compile Phase: Assemble final artifact with metadata and timing information
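
The pipeline's shape, including the per-phase timing that ends up in `solve_metadata.pipeline_phases`, can be sketched as a simple runner (phase bodies here are placeholders, not the real phase logic):

```python
import time

# Sketch of a five-phase pipeline runner that threads state through each
# phase and records per-phase timing in the same shape as the artifact's
# solve_metadata.pipeline_phases entries.
def run_pipeline(phases):
    metadata, state = [], None
    for name, func in phases:
        start = time.monotonic()
        state = func(state)
        metadata.append({"name": name,
                         "duration_ms": int((time.monotonic() - start) * 1000)})
    return state, metadata

# Placeholder phases that just record their names in order.
PHASES = [(name, lambda s, n=name: (s or []) + [n])
          for name in ["snapshot", "expand", "evaluate", "validate", "compile"]]
```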

Backwards Compatibility

The solve system automatically converts legacy pool-based selections to modern selectors:

  • Simple IDs β†’ Static selectors with materialized IDs
  • Filters β†’ Dynamic selectors with filter definitions
  • Pool-based β†’ Selectors derived from pool and selection components
  • All/None β†’ Dynamic selectors with appropriate filters
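
These conversion rules can be sketched as a small dispatcher. The legacy input shapes below are assumptions for illustration (the pool-based case is omitted, since its derivation depends on pool internals not shown here):

```python
# Hedged sketch of legacy-selection -> selector conversion, following the
# rules above. Input shapes are assumed, not the real legacy format.
def convert_legacy(selection):
    if isinstance(selection, list):                 # simple IDs -> static
        return {"resolution_mode": "static", "materialized_ids": selection}
    if "filter" in selection:                       # filters -> dynamic
        return {"resolution_mode": "dynamic", "filter": selection["filter"]}
    if selection.get("mode") == "all":              # All -> match-everything filter
        return {"resolution_mode": "dynamic",
                "filter": {"type": selection.get("types", [])}}
    if selection.get("mode") == "none":             # None -> empty static
        return {"resolution_mode": "static", "materialized_ids": []}
    raise ValueError("unrecognized legacy selection")
```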

API Usage

Create a Solve Operation

curl -X POST http://localhost:7061/solve \
  -H "Content-Type: application/json" \
  -d '{
    "resolution_context": {
      "database_id": "furniture_catalog",
      "branch_id": "main",
      "policies": {
        "cross_branch_policy": "reject",
        "missing_instance_policy": "skip"
      }
    },
    "user_metadata": {
      "name": "Production Configuration V1",
      "tags": ["production", "validated"]
    }
  }'

List Configuration Artifacts

curl "http://localhost:7061/artifacts?database_id=furniture_catalog&branch_id=main"

Get Artifact Details

curl http://localhost:7061/artifacts/{artifact_id}

Get Solve Summary

curl http://localhost:7061/artifacts/{artifact_id}/summary

Key Benefits

  • Reproducible Solves: Artifacts contain everything needed to reproduce exact results
  • Branch-Aware Resolution: Proper isolation and cross-branch policy enforcement
  • Comprehensive Metadata: Full timing, statistics, and resolution notes for debugging
  • Policy-Driven: Configurable behavior for missing instances, empty selections, etc.
  • Immutable Results: Artifacts never change, enabling reliable caching and auditing
  • Backwards Compatible: Seamlessly works with existing pool/selection formats

Use Cases

  • Configuration Management: Generate and track validated product configurations
  • Audit Trails: Immutable record of how configurations were derived
  • A/B Testing: Compare different resolution contexts and policies
  • Debugging: Detailed resolution notes and timing for troubleshooting
  • Caching: Reuse artifacts for identical resolution contexts
  • Compliance: Prove configurations meet specific constraints and policies

Next Steps

Future enhancements could include:

  • Advanced conflict resolution for complex merges
  • Branch history and timeline tracking
  • Full relationship validation with quantifiers (currently warnings only)
  • Advanced expression evaluation for derived fields
  • Full filter resolution implementation with branch context
  • Branch-aware relationship expansion
  • Database cloning and forking operations
  • Granular property and relationship management within classes
  • Enhanced pool filtering - implement full predicate evaluation in pool resolution
  • Universe constraints - support for relationship universe restrictions
  • Cascading pool effects - where selecting one option affects available pools for other relationships
