Walter is an AI-powered personal finance platform that provides a complete, real-time view of your financial life. Securely connect all your accounts — banking, credit, investments, and more — to track expenses, optimize budgets, and accelerate wealth building through intelligent automation.
## Features

- 🤖 **Intelligent Transaction Processing** - Machine learning automatically categorizes expenses and adapts to your unique spending patterns over time.
- 📊 **Unified Financial Dashboard** - View assets, liabilities, investments, and cash flow in a single, real-time interface.
- 🔍 **AI-Powered Insights** - Discover spending trends, identify savings opportunities, and receive alerts for unusual activity or budget deviations.
- 🎯 **Advanced Retirement Planning** - Plan with confidence using Monte Carlo simulations and customizable assumptions.
Walter combines automation and intelligence to help you spend smarter, save faster, and reach your financial goals.
## Table of Contents

- Features
- Architecture
- API Documentation
- Deployments
- Monitoring & Observability
- Contributions
## API Documentation

Walter's REST API is fully documented using OpenAPI 3.0 specifications and provides interactive documentation through Swagger UI:
- Explore the API — Browse all available endpoints and their schemas in the interactive API documentation
- Test endpoints — Click **Try it out** on any method to make live API calls directly from the browser
- Authenticate — For protected endpoints, first call the `/auth` method with valid credentials to obtain an access token
```
# 1. Get access token for user from Swagger UI
POST /auth/login
{
  "email": "user@example.com",
  "password": "your_password"
}

# 2. Use token in subsequent requests via the Authorize button
Authorization: Bearer <your_access_token>
```

The API documentation is automatically generated from the OpenAPI specifications file `openapi.yml` and must stay in sync with the codebase:
```bash
# Deploy documentation changes to S3
make docs
```

**Important:** Always update `openapi.yml` when adding, modifying, or removing API endpoints to ensure documentation accuracy.
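For scripting the same flow outside Swagger UI, the two steps above can be sketched in Python. This is an illustrative sketch only: the base URL is a placeholder, not Walter's actual endpoint.

```python
import json
import urllib.request

# Placeholder base URL -- substitute your deployment's API endpoint.
BASE_URL = "https://api.example.com"

def login_request(email: str, password: str) -> urllib.request.Request:
    """Build the POST /auth/login call shown above."""
    body = json.dumps({"email": email, "password": password}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/auth/login",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def auth_headers(token: str) -> dict:
    """Attach the access token exactly as the Authorize button does."""
    return {"Authorization": f"Bearer {token}"}
```

A client would send `login_request(...)`, read the token from the response, and pass `auth_headers(token)` on every subsequent call to a protected endpoint.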
## Deployments

WalterBackend is deployed to AWS through an automated pipeline driven by the `deploy.py` script. The `deploy.yml` GitHub Action invokes the script on every merge to `main`, keeping production current with the latest changes and providing consistent, reliable, zero-downtime deployments.
🚀 All merges to the main branch automatically trigger a production deployment.
The deployment process is fully automated and executes the following steps:
1. Syncs the latest OpenAPI specifications to the documentation site
   - Ensures API docs stay current with code changes
2. Builds a new `WalterBackend` image with the latest source code
3. Pushes the `WalterBackend` image to Amazon ECR (Elastic Container Registry)
4. Updates the `WalterBackend` source code in the AWS environment
5. Publishes a new version of `WalterBackend` and updates the release alias
   - Ensures all API methods call the latest version of `WalterBackend`
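The ordering of the steps above can be sketched as a simple sequenced pipeline. The step names below are hypothetical and only illustrate the fail-fast sequencing; the real `deploy.py` internals may differ.

```python
# Illustrative outline of the deployment stages listed above.
# Step names are hypothetical; the real deploy.py may differ.

def run_pipeline(steps):
    """Run deployment steps in order, aborting at the first failure."""
    completed = []
    for name, step in steps:
        if not step():
            raise RuntimeError(f"deployment failed at step: {name}")
        completed.append(name)
    return completed

STEPS = [
    ("sync_openapi_docs", lambda: True),   # sync specs to the documentation site
    ("build_image", lambda: True),         # build WalterBackend image from latest source
    ("push_to_ecr", lambda: True),         # push the image to Amazon ECR
    ("update_lambda_code", lambda: True),  # update WalterBackend source in AWS
    ("publish_version", lambda: True),     # publish a new version, move the release alias
]
```

Aborting at the first failed step keeps a partially built release from ever reaching the `release` alias.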
The AWS infrastructure is managed through CloudFormation templates with parameterized versioning powered by Jinja:
```yaml
# the latest version of WalterAPI is injected as a Jinja2 template parameter in the deploy.py script
WalterAPIAlias:
  Type: AWS::Lambda::Alias
  Properties:
    FunctionName: !Ref WalterAPI
    FunctionVersion: {{ walter_api_version }}
    Name: "release"
```

The `deploy.py` script dynamically updates these version parameters after each successful build, enabling:
- Rollback capability to previous versions
- Infrastructure as Code (IaC) with version tracking
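The version injection can be sketched as follows. To stay self-contained, this mimics basic `{{ ... }}` substitution with a regex rather than importing the real Jinja2 library; the parameter name comes from the CloudFormation fragment above.

```python
import re

# CloudFormation fragment with a Jinja2-style placeholder, as shown above.
TEMPLATE = """\
WalterAPIAlias:
  Type: AWS::Lambda::Alias
  Properties:
    FunctionName: !Ref WalterAPI
    FunctionVersion: {{ walter_api_version }}
    Name: "release"
"""

def render(template: str, **params) -> str:
    """Substitute {{ name }} placeholders, mirroring Jinja2's simplest behavior."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(params[m.group(1)]),
        template,
    )

# Rolling back is just re-rendering with an earlier version number.
current = render(TEMPLATE, walter_api_version=8)
rollback = render(TEMPLATE, walter_api_version=7)
```

Because the rendered template is what CloudFormation applies, pointing the alias at a previous `FunctionVersion` is all a rollback requires.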
## Monitoring & Observability

WalterBackend emits operational and business metrics to Datadog and uses dashboards and monitors for proactive alerting.
Quick access to essential monitoring views:
- API Performance Dashboard - API response times, error rates, and throughput metrics
- Canaries Dashboard - Canary deployment health and rollback triggers
- Workflow Dashboard - Price update workflows and batch processing jobs
- Dev Environment Monitors - Active monitoring alerts for development environment
Metrics Emission: Lambda functions are wrapped with the Datadog Lambda handler/wrapper, which forwards custom business metrics and AWS Lambda enhanced metrics to Datadog for dashboarding and alerting.
Alerting Model: Datadog monitors are configured with warning and critical thresholds to surface early signals vs. actionable incidents.
Key metrics and their thresholds:
- **Lambda Memory Usage** - Using `aws.lambda.enhanced.max_memory_used` with warning at ~80% and critical at ~90% of the function's configured memory
- **Lambda Duration/Timeouts** - Using `aws.lambda.enhanced.duration` with warning at ~70% and critical at ~90% of the function's configured timeout
- **Business Logic Success/Failure** - Via custom metric `${component}.failure` that triggers on failure conditions within handlers
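The percentage thresholds above translate into absolute values per function. A minimal sketch, where the helper names and the example 1024 MB / 30 s configuration are illustrative:

```python
def memory_thresholds_mb(configured_mb: int) -> tuple:
    """Warning at ~80% and critical at ~90% of configured memory."""
    return 0.8 * configured_mb, 0.9 * configured_mb

def duration_thresholds_s(timeout_s: float) -> tuple:
    """Warning at ~70% and critical at ~90% of the configured timeout."""
    return 0.7 * timeout_s, 0.9 * timeout_s

# Example: a function configured with 1024 MB memory and a 30 s timeout.
mem_warn, mem_crit = memory_thresholds_mb(1024)  # ~819 MB, ~922 MB
dur_warn, dur_crit = duration_thresholds_s(30)   # ~21 s, ~27 s
```

Scaling thresholds from each function's own configuration means a single Terraform monitor module works for every Lambda, regardless of its memory or timeout settings.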
Source of Truth: All monitors are defined as code in Terraform:
- `infra/infrastructure/modules/lambda_function_memory_monitor/main.tf`
- `infra/infrastructure/modules/lambda_function_timeout_monitor/main.tf`
- `infra/infrastructure/modules/lambda_function_failure_monitor/main.tf`
When a monitor breaches thresholds, Datadog sends notifications with links for investigation:
- Warning alerts indicate potential degradation
- Critical alerts indicate user-impacting or imminent failures requiring immediate action
## Contributions

We welcome contributions to Walter! This guide will help you set up your development environment and understand our workflow.
Walter follows trunk-based development with short-lived feature branches:
1. Create a feature branch from `main` for your changes
2. Develop locally using the CLI tool and Makefile commands
3. Test thoroughly in a non-production environment
4. Open a merge request to `main` with test artifacts
5. Automated production deployment occurs after a successful merge to `main`
The Makefile provides shortcuts for common development tasks:
```bash
# View all available commands
make help

# Code quality and testing
make format   # Format code with Black
make lint     # Run Flake8 linting
make test     # Execute unit tests with Pytest

# Development workflow
make docs     # Deploy documentation changes to S3
make deploy   # Deploy changes to specified environment
```

Test your changes locally without deploying to AWS using the built-in CLI tool powered by Typer:
```bash
# Explore available CLI methods
pipenv run python cli.py --help

# Get help for specific methods
pipenv run python cli.py "${METHOD_NAME}" --help

# Authenticate and get access token
pipenv run python cli.py auth-user --email="${EMAIL}" --password="${PASSWORD}"

# Export token for authenticated API calls
export WALTER_TOKEN=your_access_token_here
```

**Important:** Always use non-production AWS credentials to avoid modifying customer data.
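Once `WALTER_TOKEN` is exported, a local script can pick it up to build authenticated calls. A small sketch, where the base URL is a placeholder rather than Walter's real endpoint:

```python
import os
import urllib.request

def authed_request(path: str, base_url: str = "https://api.example.com") -> urllib.request.Request:
    """Build a request carrying the token exported as WALTER_TOKEN above."""
    token = os.environ.get("WALTER_TOKEN", "")
    if not token:
        raise RuntimeError("WALTER_TOKEN is not set; run the auth-user command first")
    return urllib.request.Request(
        f"{base_url}{path}",
        headers={"Authorization": f"Bearer {token}"},
    )
```

Reading the token from the environment keeps credentials out of source code and lets the same script run against different environments.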
All contributions must pass automated quality checks before being pushed:
- Black - Code formatting
- Flake8 - Python linting
- Codespell - Spelling validation
- Pytest - Unit test execution
Pre-commit hooks block commits that fail these checks before they reach the repository. See the pre-commit documentation and the `.pre-commit-config.yaml` file for more information.
```bash
# Install pre-commit hooks
pre-commit install

# Run checks manually
pre-commit run --all-files
```

```bash
# Run the full test suite
make test

# Run specific tests
pipenv run pytest tests/test_specific_module.py

# Run with coverage report
pipenv run pytest --cov=walter_backend
```

On merge request creation, Codecov automatically:
- Runs the complete test suite
- Calculates code coverage metrics
- Posts detailed coverage reports as comments
- Blocks merges if coverage drops below thresholds
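The merge-blocking behavior amounts to comparing the new coverage against the baseline. A hypothetical sketch of that policy (the allowed-drop value is illustrative, not Codecov's actual configuration):

```python
def coverage_gate(current_pct: float, baseline_pct: float, allowed_drop_pct: float = 0.5) -> bool:
    """Allow the merge only if coverage has not dropped more than allowed_drop_pct points."""
    return (baseline_pct - current_pct) <= allowed_drop_pct
```

For example, a change that raises coverage always passes, while one that drops coverage by a full point against a 0.5-point allowance would block the merge.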
1. **Pre-deployment Testing**
   - Deploy your changes to a non-production environment
   - Include test results and validation artifacts in your MR description
2. **Code Review**
   - All code changes require review before merging
   - Address feedback and ensure all checks pass
3. **Automated Deployment**
   - Successful merges to `main` automatically deploy to production
   - Monitor deployment logs and service health post-merge
- Keep branches short-lived (< 3 days preferred)
- Write descriptive commit messages following conventional commits
- Include tests for all new functionality
- Update documentation for API changes
- Test in non-prod before opening merge requests
- Monitor post-deployment for any issues
- Check the `Makefile` for available development commands
- Use `--help` flags with CLI commands for detailed usage
- Review existing tests for examples and patterns
- Open an issue for questions or suggestions
