Migrate to environment variables for configuration and enhance documentation #9
Conversation
Removes all support for .conf config files and migrates server, documentation, and tests to use environment variables for all runtime configuration. Updates the server to build its config from the environment, simplifies Docker and local setup, and cleans up obsolete config references in docs and code. Modernizes the API and test suite for better portability and deployment.
- Updated README.md to clarify accepted boolean environment variable values.
- Modified Makefile to reflect correct project names and paths.
- Changed layout.html to link to the correct GitHub repository.
- Revised apireference.rst to accurately describe the Flowdacity Queue Server API.
- Adjusted configuration.rst to specify boolean environment variable values.
- Enhanced contributing.rst with updated repository links and local workflow instructions.
- Improved faqs.rst to reflect the Flowdacity Queue Server's features and usage.
- Updated index.rst to provide a clearer introduction to the Flowdacity Queue Server.
- Expanded internals.rst to describe the architecture and background requeue loop.
- Revised license.rst to include current copyright information.
- Modified server.py to enforce stricter boolean value checks.
- Updated test_routes.py to include additional tests for environment variable validation.
Replaces manual environment variable parsing with pydantic and pydantic-settings for startup configuration validation. Centralizes config logic, improves error handling, and enhances maintainability by using a typed settings model. Updates documentation to reflect validation changes and adds required dependencies.
Introduces strongly-typed configuration using TypedDicts to clarify and enforce the structure of server configuration. Improves type safety, code clarity, and maintainability by replacing generic mapping types with explicit FQConfig throughout the codebase.
Introduces a type alias to clarify the structure of response and request data dictionaries, improving code readability and static analysis. Enhances type safety and aligns with modern Python typing practices.
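Taken together, the description above implies a settings stack roughly like the following minimal sketch. The field names, defaults, and helper methods are assumptions inferred from the review comments below, not the actual contents of `fq_server/settings.py`:

```python
# Hypothetical sketch of the env-driven settings flow; field names,
# defaults, and method names mirror the review discussion but are
# assumptions, not the real fq_server/settings.py.
from typing import TypedDict

from pydantic import field_validator
from pydantic_settings import BaseSettings, SettingsConfigDict


class FQConfig(TypedDict):
    """Typed mapping handed to setup_server() at startup."""

    redis_host: str
    redis_port: int
    enable_requeue_script: bool


class QueueServerSettings(BaseSettings):
    model_config = SettingsConfigDict(extra="ignore")

    redis_host: str = "127.0.0.1"
    redis_port: int = 6379
    enable_requeue_script: bool = True

    @field_validator("enable_requeue_script", mode="before")
    @classmethod
    def validate_boolean_env(cls, value: bool | str) -> bool:
        # Strict boolean parsing: only 'true'/'false' are accepted,
        # matching the README note about boolean env var values.
        if isinstance(value, bool):
            return value
        normalized = str(value).strip().lower()
        if normalized in ("true", "false"):
            return normalized == "true"
        raise ValueError("Use either 'true' or 'false'.")

    def to_fq_config(self) -> FQConfig:
        return FQConfig(
            redis_host=self.redis_host,
            redis_port=self.redis_port,
            enable_requeue_script=self.enable_requeue_script,
        )
```

The typed `FQConfig` return value is what lets the rest of the server code trade generic mappings for a checked structure.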
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the CodeRabbit settings.
📝 Walkthrough

This PR replaces INI/file-based configuration with an environment-variable pydantic-settings model, removes default INI configs, adds typed FQConfig and QueueServerSettings, refactors server startup/logging/lifecycle to build config from env, and updates tests, docs, Docker, and packaging for env-driven configuration.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Env as "Environment (env vars)"
    participant Settings as "QueueServerSettings"
    participant Entrypoint as "ASGI Entrypoint (asgi.py)"
    participant Server as "FQServer"
    participant Redis as "Redis / FQ Core"
    participant Client as "HTTP Client"

    Env->>Settings: provide env vars
    Settings->>Entrypoint: QueueServerSettings.from_env() -> to_fq_config()
    Entrypoint->>Server: setup_server(config=FQConfig)
    Server->>Redis: initialize FQ client / connect
    Server->>Server: start background requeue loop (if enabled)
    Client->>Server: HTTP enqueue/dequeue/finish requests
    Server->>Redis: perform Redis/Lua operations
    Entrypoint->>Server: shutdown signal
    Server->>Server: cancel requeue task
    Server->>Redis: await queue.close() / close client
```
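Read top to bottom, the diagram corresponds to an entrypoint along these lines; this is a minimal sketch assuming the names used in this PR (`build_config_from_env`, `setup_server`) and an `app` attribute on the server object, which is itself an assumption:

```python
# asgi.py-style entrypoint matching the sequence above (sketch only;
# exact signatures are assumptions based on names used in this PR).
from fq_server import build_config_from_env, setup_server

config = build_config_from_env()      # read + validate env vars -> FQConfig
server = setup_server(config=config)  # wires the Redis client and requeue loop
app = server.app                      # ASGI app: `uvicorn asgi:app --port 8300`
```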
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Pull request overview
This PR migrates Flowdacity Queue Server configuration from INI-style config files to environment variables validated at startup (via pydantic-settings), and updates project documentation to match the new deployment/configuration model.
Changes:
- Removed config file support (`default.conf`, `tests/test.conf`) and switched server bootstrapping to environment-backed settings (`QueueServerSettings`).
- Updated server setup and tests to pass config mappings / env-derived config instead of file paths.
- Refreshed README + Sphinx docs to reflect new configuration, installation, and API reference content/branding.
Reviewed changes
Copilot reviewed 22 out of 23 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| `fq_server/settings.py` | Introduces `QueueServerSettings` + typed config structures for env-driven configuration. |
| `fq_server/server.py` | Swaps file-based config loading for `FQConfig` mapping + adds `build_config_from_env()`/updated `setup_server()`. |
| `fq_server/__init__.py` | Re-exports settings/config helpers as public API. |
| `asgi.py` | Uses `setup_server()` with env-derived config by default (no `FQ_CONFIG`). |
| `pyproject.toml` | Adds pydantic + pydantic-settings; switches flowdacity-queue to a git-pinned dependency; removes `default.conf` from sdist include list. |
| `uv.lock` | Updates lockfile for new dependencies + git-pinned flowdacity-queue. |
| `docker-compose.yml` | Passes queue/Redis config via environment variables instead of mounting `default.conf`. |
| `README.md` | Documents env-var configuration and updated local/Docker workflows. |
| `docs/apireference.rst` | Rewrites API reference to Flowdacity-specific endpoints/behavior. |
| `docs/configuration.rst` | Documents supported environment variables, defaults, and boolean parsing rules. |
| `docs/gettingstarted.rst` | Updates quickstart to use env vars + uvicorn, with Redis via `make redis-up`. |
| `docs/installation.rst` | Updates installation instructions for uv and pip. |
| `docs/index.rst` | Updates landing page branding/description and removes legacy SHARQ framing. |
| `docs/internals.rst` | Adds architecture and lifecycle details aligned with current server implementation. |
| `docs/faqs.rst` | Updates FAQs to Flowdacity context and current endpoints/configuration. |
| `docs/contributing.rst` | Updates repo links and local development workflow. |
| `docs/license.rst` | Updates branding and adds current copyright notice. |
| `docs/conf.py` | Modernizes Sphinx config and updates GitHub repo metadata. |
| `docs/_templates/layout.html` | Updates "Fork me" ribbon link to the Flowdacity repository. |
| `docs/Makefile` | Updates help output paths/names to match new project naming. |
| `default.conf` | Removed legacy default config file. |
| `tests/test.conf` | Removed legacy test config file. |
| `tests/test_routes.py` | Updates tests to use mapping/env-derived config; expands coverage for new paths. |
```python
try:
    return QueueServerSettings.from_env(env).to_fq_config()
except ValidationError as exc:
    raise ValueError(str(exc)) from exc
```
build_config_from_env() converts ValidationError into ValueError(str(exc)), which typically reports model field names (e.g. redis_port) rather than the env var aliases (FQ_REDIS_PORT). This makes startup errors less actionable and may not satisfy tests expecting env var names. Consider formatting the error to include the relevant environment variable names (e.g. via the fields' validation_alias) and the validation message, instead of str(exc).
Suggested change:

```diff
-    raise ValueError(str(exc)) from exc
+    # Format the error to include environment variable names (when available)
+    # and the underlying validation messages, instead of relying on str(exc)
+    error_messages: list[str] = []
+    for error in exc.errors():
+        loc = error.get("loc") or ()
+        msg = error.get("msg") or ""
+        # Try to find env var names from the error location, if env is provided
+        env_names: list[str] = []
+        if env is not None:
+            for part in loc:
+                if isinstance(part, str) and part in env:
+                    env_names.append(part)
+        if env_names:
+            prefix = ", ".join(env_names)
+        else:
+            # Fall back to a generic location string if no env var name is found
+            prefix = ".".join(str(part) for part in loc) if loc else ""
+        if prefix:
+            error_messages.append(f"{prefix}: {msg}")
+        else:
+            error_messages.append(msg)
+    formatted = "; ".join(error_messages) if error_messages else str(exc)
+    raise ValueError(formatted) from exc
```
tests/test_routes.py
Outdated

```python
with self.assertRaisesRegex(ValueError, "FQ_REDIS_PORT"):
    build_config_from_env({"FQ_REDIS_PORT": "redis"})

with self.assertRaisesRegex(ValueError, "FQ_ENABLE_REQUEUE_SCRIPT"):
    build_config_from_env({"FQ_ENABLE_REQUEUE_SCRIPT": "yes"})

with self.assertRaisesRegex(ValueError, "FQ_REDIS_CLUSTERED"):
```
build_config_from_env_rejects_invalid_values asserts that the raised error message contains env var names like FQ_REDIS_PORT, but build_config_from_env() currently raises ValueError(str(ValidationError)), which usually won’t include those aliases. Either update the test expectations to match the current error output, or (preferably) update the production error formatting to mention the env var names explicitly so users can fix misconfiguration quickly.
Suggested change:

```diff
-with self.assertRaisesRegex(ValueError, "FQ_REDIS_PORT"):
-    build_config_from_env({"FQ_REDIS_PORT": "redis"})
-with self.assertRaisesRegex(ValueError, "FQ_ENABLE_REQUEUE_SCRIPT"):
-    build_config_from_env({"FQ_ENABLE_REQUEUE_SCRIPT": "yes"})
-with self.assertRaisesRegex(ValueError, "FQ_REDIS_CLUSTERED"):
+with self.assertRaises(ValueError):
+    build_config_from_env({"FQ_REDIS_PORT": "redis"})
+with self.assertRaises(ValueError):
+    build_config_from_env({"FQ_ENABLE_REQUEUE_SCRIPT": "yes"})
+with self.assertRaises(ValueError):
```
tests/test_routes.py
Outdated

```python
# flush redis after each test
await self.r.flushdb()
await self.client.aclose()
await self.queue.close()
```
asyncTearDown() now calls await self.queue.close(), but httpx.ASGITransport may also manage ASGI lifespan by default (triggering FQServer._lifespan shutdown which already calls queue.close()). Combined with the manual queue.initialize() in asyncSetUp(), this can cause double-initialize/double-close and background requeue tasks leaking into tests. Consider disabling transport lifespan management (e.g. lifespan="off") when you manually manage init/close, or remove the manual init/close and rely solely on the lifespan hook in a controlled way.
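For illustration, a minimal sketch of the single-owner lifecycle this comment argues for, assuming the `setup_server`/`build_config_from_env` helpers named in this PR and an `app` attribute on the server (an assumption):

```python
# Sketch: the test owns queue init/close exactly once, and the ASGI
# transport is used purely for request routing (no lifespan handling).
# setup_server, build_config_from_env, and server.app are assumed names.
import unittest

import httpx

from fq_server import build_config_from_env, setup_server


class RoutesTestCase(unittest.IsolatedAsyncioTestCase):
    async def asyncSetUp(self):
        self.server = setup_server(config=build_config_from_env())
        await self.server.queue.initialize()  # single initialize
        transport = httpx.ASGITransport(app=self.server.app)
        self.client = httpx.AsyncClient(transport=transport, base_url="http://test")

    async def asyncTearDown(self):
        await self.client.aclose()
        await self.server.queue.close()  # single close; no double-close via lifespan
```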
README.md
Outdated

````diff
 | `FQ_REDIS_CLUSTERED` | `false` | Enables Redis Cluster mode. |
 | `FQ_REDIS_UNIX_SOCKET_PATH` | `/tmp/redis.sock` | Redis socket path when `FQ_REDIS_CONN_TYPE=unix_sock`. |
 | `PORT` | `8300` | Uvicorn port used by the container and local examples. |

 Boolean env vars accept only `true` or `false`.

 ## Run locally

 Start Redis:

 ```bash
-cp default.conf local.conf
-# edit local.conf to match your Redis host/port/password
+make redis-up
 ```

-## Run the server locally
+Run the API:

 ```bash
-# ensure Redis is running (make redis starts a container)
-make redis
-
-# start the ASGI server
-FQ_CONFIG=./local.conf uv run uvicorn asgi:app --host 0.0.0.0 --port 8080
+PORT=8080 \
+FQ_REDIS_HOST=127.0.0.1 \
+uv run uvicorn asgi:app --host 0.0.0.0 --port 8080
 ```
````
The PORT row says the default is 8300 and that it’s used by both the container and local examples, but the local example in this README uses --port 8080 (and Docker Compose sets PORT: 8080). Consider clarifying that PORT is only consumed by the container entrypoint/Dockerfile (and defaults to 8300 there), or align the examples and compose file to match the documented default.
🧹 Nitpick comments (7)
fq_server/settings.py (1)
74-87: Consider accepting numeric boolean values (0/1).

The boolean validator strictly accepts only `bool` or `"true"`/`"false"` strings. Some deployment tools and shell scripts may pass `1` or `0` as boolean values. Consider whether this strictness is intentional or if numeric booleans should be supported.

♻️ Optional: Support numeric boolean values

```diff
 @field_validator("enable_requeue_script", "redis_clustered", mode="before")
 @classmethod
 def validate_boolean_env(cls, value: bool | str) -> bool:
     if isinstance(value, bool):
         return value
+    if isinstance(value, int):
+        if value == 1:
+            return True
+        if value == 0:
+            return False
+
     if isinstance(value, str):
         normalized = value.strip().lower()
         if normalized == "true":
             return True
         if normalized == "false":
             return False
+        if normalized in ("1", "0"):
+            return normalized == "1"
-    raise ValueError("Use either 'true' or 'false'.")
+    raise ValueError("Use 'true', 'false', '1', or '0'.")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@fq_server/settings.py` around lines 74-87: the validate_boolean_env validator currently only accepts bool or "true"/"false" strings; update validate_boolean_env (the `@field_validator` for "enable_requeue_script" and "redis_clustered") to also accept numeric booleans by handling int values 0/1 and string "0"/"1" (and any whitespace): convert 1/"1"/True to True and 0/"0"/False to False, keep existing handling for bool and "true"/"false", and preserve the ValueError fallback message.

README.md (2)
65-67: Same redundancy as in getting started guide.

The `PORT=8080` environment variable is redundant here since `--port 8080` is passed explicitly to uvicorn.

📝 Suggested simplification

```diff
-PORT=8080 \
 FQ_REDIS_HOST=127.0.0.1 \
 uv run uvicorn asgi:app --host 0.0.0.0 --port 8080
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 65 - 67, The README shows setting PORT=8080 while also passing --port 8080 to the uvicorn command; remove the redundant PORT=8080 environment variable to avoid duplication. Update the section so only necessary env vars (e.g., FQ_REDIS_HOST) are exported and the uvicorn invocation remains as "uv run uvicorn asgi:app --host 0.0.0.0 --port 8080" (or, alternatively, keep PORT and drop the explicit --port flag) so that either the PORT env var or the --port argument is used, not both.
50-50: Clarify that `PORT` is not an application setting.

`PORT` is listed alongside `FQ_*` variables in the configuration table, but it's not read by `QueueServerSettings`. It's a convention for container runtimes and the uvicorn CLI. This could mislead users into thinking the application validates it.

Consider either:

- Moving `PORT` to a separate "Runtime" section with a note that it's for uvicorn/container use.
- Adding a clarifying note that `PORT` is not validated by the application.

📝 Suggested clarification

```diff
 | `FQ_REDIS_UNIX_SOCKET_PATH` | `/tmp/redis.sock` | Redis socket path when `FQ_REDIS_CONN_TYPE=unix_sock`. |
-| `PORT` | `8300` | Uvicorn port used by the container and local examples. |
+
+**Runtime settings (not validated by the application):**
+
+| Variable | Default | Description |
+| --- | --- | --- |
+| `PORT` | — | Port for uvicorn. Pass via `--port` flag or container runtime. |
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` at line 50: move or annotate the `PORT` entry so readers know it's a runtime/uvicorn/container convention and not an application-configured value; specifically update the README table row that currently lists `PORT` next to `FQ_*` vars by either (a) moving `PORT` into a new "Runtime" or "Container/runtime" section with a short note that it is used by uvicorn/container runtimes, or (b) adding a one-line clarification next to the `PORT` row stating that `PORT` is not read or validated by the application (e.g., QueueServerSettings) but is used by the uvicorn CLI/container environment.

docs/gettingstarted.rst (1)
19-21: Redundant `PORT` environment variable.

The `PORT=8080` environment variable on line 19 is redundant since uvicorn is invoked with the explicit `--port 8080` flag. Additionally, `PORT` is not read by `QueueServerSettings` (per `fq_server/settings.py`); it's only a convention for some container runtimes.

Consider simplifying:

📝 Suggested change

```diff
-PORT=8080 \
 FQ_REDIS_HOST=127.0.0.1 \
 uv run uvicorn asgi:app --host 0.0.0.0 --port 8080
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/gettingstarted.rst` around lines 19-21: remove the redundant PORT=8080 environment variable and its assignment from the startup example because uvicorn is already started with an explicit --port 8080 flag; update the example to only set runtime-relevant variables (e.g., FQ_REDIS_HOST) and keep the uvicorn invocation (uv run uvicorn asgi:app --host 0.0.0.0 --port 8080); note that QueueServerSettings does not read PORT so no code changes to QueueServerSettings are needed, just remove the PORT line from the docs.

pyproject.toml (1)
8-8: Git-pinned dependency may impact reproducibility and installation.

Pinning to a Git tag (`@v1.0.0`) is acceptable when the package isn't on PyPI, but be aware:

- Users behind firewalls may have issues accessing GitHub.
- Git dependencies aren't cached by pip the same way as PyPI packages.
- If the tag is ever moved/deleted upstream, builds will break.

Consider publishing `flowdacity-queue` to PyPI when feasible for broader compatibility.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyproject.toml` at line 8: the dependency line using a Git tag for "flowdacity-queue" can break reproducibility and installations; update the pyproject entry for flowdacity-queue to a PyPI release (e.g., replace the git+ URL with flowdacity-queue==<version> once published) or, if publishing to PyPI isn't possible yet, pin to an immutable commit hash (git+https://...@<commit-sha>) and add a short comment in pyproject explaining the Git fallback and firewall/GitHub access limitations so maintainers know why it's needed.

fq_server/server.py (1)
317-323: Consider narrowing exception handling for clarity.

The broad `Exception` catch handles both JSON parsing and the missing `"interval"` key. While functional, explicitly catching `(json.JSONDecodeError, KeyError)` would make the intent clearer and avoid accidentally swallowing unexpected errors.

♻️ Optional refinement

```diff
 try:
     raw_body = await request.body()
     body: JSONDict = json.loads(raw_body or b"{}")
     interval = body["interval"]
-except Exception as e:
+except (json.JSONDecodeError, KeyError) as e:
     response["message"] = str(e)
     return JSONResponse(response, status_code=400)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@fq_server/server.py` around lines 317-323: the current broad except Exception in the request body parsing swallows unrelated errors; narrow it to explicitly catch json.JSONDecodeError and KeyError when calling request.body(), json.loads(raw_body or b"{}") and accessing body["interval"], then set response["message"]=str(e) and return JSONResponse(response, status_code=400) inside that specific except block so other exceptions bubble up.

docs/faqs.rst (1)
36-57: Worker example could benefit from error handling for network failures.

The example handles 200 and 404 status codes but will raise `RuntimeError` for all other responses, including transient network errors (connection timeouts, etc.) which `httpx` raises as exceptions before a response is received.

Consider adding a brief note about production workers needing retry logic for transient failures, or wrap the example in a try/except:

📝 Suggested enhancement

```diff
 with httpx.Client(base_url="http://127.0.0.1:8080") as client:
     while True:
-        response = client.get("/dequeue/sms/")
+        try:
+            response = client.get("/dequeue/sms/")
+        except httpx.RequestError:
+            time.sleep(1)
+            continue
         if response.status_code == 200:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/faqs.rst` around lines 36 - 57, The example loop using httpx.Client currently only handles HTTP status codes and raises RuntimeError for other responses but does not catch httpx network exceptions; wrap the dequeue/finish calls (the client.get and client.post calls inside the while True) in a try/except that catches httpx.RequestError (and optionally httpx.TimeoutException) to implement a simple retry/backoff (e.g., time.sleep before retry) and log or continue on transient failures, or add a short note next to the httpx.Client example advising production workers must add retry logic and error handling around client.get/client.post and avoid letting network exceptions crash the worker.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: dc079627-96cc-4ccc-a741-7b798ce9ec40
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (22)
- README.md
- asgi.py
- default.conf
- docker-compose.yml
- docs/Makefile
- docs/_templates/layout.html
- docs/apireference.rst
- docs/conf.py
- docs/configuration.rst
- docs/contributing.rst
- docs/faqs.rst
- docs/gettingstarted.rst
- docs/index.rst
- docs/installation.rst
- docs/internals.rst
- docs/license.rst
- fq_server/__init__.py
- fq_server/server.py
- fq_server/settings.py
- pyproject.toml
- tests/test.conf
- tests/test_routes.py
💤 Files with no reviewable changes (2)
- default.conf
- tests/test.conf
Introduces an environment-configurable application log level, validated via settings. Switches server logging from print and traceback to structured logging with context-aware messages, improving diagnostics and production readiness. Updates documentation, Docker config, and tests to support and describe the new log level feature.
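As a rough sketch of the wiring this commit describes (the variable name `LOG_LEVEL` and the logger name below are assumptions, not the real settings field):

```python
# Hypothetical sketch of env-configurable log level + structured logging;
# LOG_LEVEL and the "fq_server" logger name are illustrative assumptions.
import logging
import os

level_name = os.environ.get("LOG_LEVEL", "INFO").strip().upper()
logging.basicConfig(
    level=getattr(logging, level_name, logging.INFO),
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)

logger = logging.getLogger("fq_server")
# Context-aware message instead of bare print()/traceback.print_exc()
logger.info("server starting", extra={"configured_level": level_name})
```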
Standardizes configuration by dropping the FQ_ prefix from all environment variables and documentation references. Improves clarity and consistency for users configuring the application, reducing friction and potential confusion when setting environment variables.
Actionable comments posted: 3
🧹 Nitpick comments (1)
tests/test_routes.py (1)
548-556: Await the disabled requeue task so this test catches regressions.

`done()` only tells us the coroutine finished. If `requeue_with_lock()` starts raising immediately, this still passes. Awaiting the task and restoring the flag in `finally` makes the assertion reliable.

Suggested test hardening

```diff
-server.config["fq"]["enable_requeue_script"] = False
-requeue_task = asyncio.create_task(server.requeue_with_lock())
-
-# Should return immediately (task completes)
-await asyncio.sleep(0.1)
-
-# Task should be done (returned, not cancelled)
-self.assertTrue(requeue_task.done())
-server.config["fq"]["enable_requeue_script"] = True
+server.config["fq"]["enable_requeue_script"] = False
+try:
+    requeue_task = asyncio.create_task(server.requeue_with_lock())
+
+    # Should return immediately (task completes)
+    await asyncio.sleep(0.1)
+
+    # Task should be done and should not have failed
+    self.assertTrue(requeue_task.done())
+    await requeue_task
+finally:
+    server.config["fq"]["enable_requeue_script"] = True
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_routes.py` around lines 548 - 556, Await the created requeue task instead of only checking requeue_task.done() and ensure the config flag server.config["fq"]["enable_requeue_script"] is restored in a finally block; specifically, after creating requeue_task via asyncio.create_task(server.requeue_with_lock()), await requeue_task (to surface any immediate exceptions from requeue_with_lock()) and move the flag reset (server.config["fq"]["enable_requeue_script"] = True) into a finally clause so the test reliably fails on regressions and always restores state.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@fq_server/server.py`:
- Around line 224-228: The logs currently attach raw client-supplied identifiers
(queue_id, job_id, request_data) via logger.extra calls (e.g., the
logger.warning in enqueue and the other occurrences listed), which risks leaking
user identifiers; replace those direct uses with a redacted or hashed form
before attaching to log context (create and call a helper like redact_identifier
or hash_identifier and use that value in the extra payloads), or swap to an
internal correlation key, and update all occurrences referenced (the
logger.warning at the shown diff and the other similar blocks) to pass only the
redacted/hashed/truncated identifier instead of the raw value.
- Around line 80-94: The requeue loop can crash because it uses assert and only
catches LockError; replace the assert with an explicit check of redis =
self.queue.redis_client() and log+continue if None, and broaden the try/except
so any exceptions raised by self.queue.redis_client(), redis.lock(...), or the
async lock acquisition (e.g., connection/timeouts, RedisError, Exception) are
caught and logged (while still handling LockError specially with the existing
debug message), ensuring the loop continues; keep the await self.queue.requeue()
inner try/except for its errors, and ensure the sleep using job_requeue_interval
always runs (e.g., in a finally block) so the background task never exits on
unexpected redis errors.
In `@README.md`:
- Around line 36-51: The README currently lists PORT alongside the FQ_* env
vars, implying it's part of QueueServerSettings; clarify that PORT is a
launcher/runtime variable (used by example launchers/Uvicorn) and not consumed
by QueueServerSettings or asgi.py. Update the table rows and any local examples
(where PORT=8080 is set) to note PORT is a launcher variable (e.g.,
“Launcher/Uvicorn port – not read by QueueServerSettings or asgi.py”) and ensure
references to QueueServerSettings and asgi.py are mentioned so readers know only
FQ_* vars are validated by the app.
---
Nitpick comments:
In `@tests/test_routes.py`:
- Around line 548-556: Await the created requeue task instead of only checking
requeue_task.done() and ensure the config flag
server.config["fq"]["enable_requeue_script"] is restored in a finally block;
specifically, after creating requeue_task via
asyncio.create_task(server.requeue_with_lock()), await requeue_task (to surface
any immediate exceptions from requeue_with_lock()) and move the flag reset
(server.config["fq"]["enable_requeue_script"] = True) into a finally clause so
the test reliably fails on regressions and always restores state.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 7d7ec195-45a3-44d5-9d3c-0c192db1cbde
📒 Files selected for processing (7)
- README.md
- asgi.py
- docker-compose.yml
- docs/configuration.rst
- fq_server/server.py
- fq_server/settings.py
- tests/test_routes.py
✅ Files skipped from review due to trivial changes (1)
- docs/configuration.rst
🚧 Files skipped from review as they are similar to previous changes (1)
- docker-compose.yml
```diff
 while True:
     try:
         redis = self.queue.redis_client()
         assert redis is not None, "Redis client is not initialized"
         # assumes async lock
         async with redis.lock("fq-requeue-lock-key", timeout=15):
             try:
                 await self.queue.requeue()
             except Exception:
-                traceback.print_exc()
+                logger.exception("Failed to requeue expired jobs while holding lock")
     except LockError:
         # the lock wasn't acquired within specified time
-        pass
+        logger.debug("Requeue lock is already held by another worker")
     finally:
         await asyncio.sleep(job_requeue_interval / 1000.0)
```
🧩 Analysis chain
🏁 Script executed:

```bash
# First, let's explore the fq_server directory structure
find . -type f -name "*.py" | grep -E "fq_server|test" | head -20
```

Repository: flowdacity/queue-server
Length of output: 159

🏁 Script executed:

```bash
# List files in fq_server
git ls-files | grep "fq_server"
```

Repository: flowdacity/queue-server
Length of output: 130

🏁 Script executed:

```bash
# Read fq_server/server.py to inspect the target code and surrounding context
wc -l fq_server/server.py
```

Repository: flowdacity/queue-server
Length of output: 90

🏁 Script executed:

```bash
# Read the file around lines 80-94
cat -n fq_server/server.py | sed -n '70,105p'
```

Repository: flowdacity/queue-server
Length of output: 1791

🏁 Script executed:

```bash
# Search for redis_client method definition
rg -n "def redis_client" fq_server/
```

Repository: flowdacity/queue-server
Length of output: 49

🏁 Script executed:

```bash
# Search for LockError import
rg -n "LockError|from.*lock" fq_server/
```

Repository: flowdacity/queue-server
Length of output: 181

🏁 Script executed:

```bash
# Check imports at top of server.py
head -30 fq_server/server.py
```

Repository: flowdacity/queue-server
Length of output: 973

🏁 Script executed:

```bash
# Find where self.queue is defined/initialized
rg -n "self\.queue\s*=" fq_server/server.py | head -5
```

Repository: flowdacity/queue-server
Length of output: 106

🏁 Script executed:

```bash
# Look for the full requeue_with_lock method definition
grep -n "def requeue_with_lock" fq_server/server.py
```

Repository: flowdacity/queue-server
Length of output: 108

🏁 Script executed:

```bash
# Search for deployment or test config files
find . -type f \( -name "*.toml" -o -name "*.cfg" -o -name "*.yml" -o -name "*.yaml" -o -name "Dockerfile" -o -name "*.sh" \) | grep -v ".git" | head -20
```

Repository: flowdacity/queue-server
Length of output: 129

🏁 Script executed:

```bash
# Read the full requeue_with_lock method
sed -n '67,95p' fq_server/server.py
```

Repository: flowdacity/queue-server
Length of output: 1278

🏁 Script executed:

```bash
# Check Dockerfile for Python optimization flags
cat Dockerfile
```

Repository: flowdacity/queue-server
Length of output: 1425

🏁 Script executed:

```bash
# Check pyproject.toml for optimization config
cat pyproject.toml
```

Repository: flowdacity/queue-server
Length of output: 938
Requeue loop can terminate unexpectedly if redis operations fail.
The loop only catches LockError, but redis_client() or redis.lock() can raise other exceptions (connection errors, timeouts, etc.) that will exit the background task permanently. Additionally, using assert for runtime validation is unsafe—asserts can be disabled with interpreter optimizations and shouldn't guard against expected failures.
Suggested hardening

```diff
 while True:
     try:
         redis = self.queue.redis_client()
-        assert redis is not None, "Redis client is not initialized"
+        if redis is None:
+            logger.warning(
+                "Redis client is not initialized; skipping requeue pass"
+            )
+            continue
         # assumes async lock
         async with redis.lock("fq-requeue-lock-key", timeout=15):
             try:
                 await self.queue.requeue()
             except Exception:
                 logger.exception("Failed to requeue expired jobs while holding lock")
     except LockError:
         logger.debug("Requeue lock is already held by another worker")
+    except Exception:
+        logger.exception("Requeue loop failed before requeue could run")
     finally:
         await asyncio.sleep(job_requeue_interval / 1000.0)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@fq_server/server.py` around lines 80 - 94, The requeue loop can crash because
it uses assert and only catches LockError; replace the assert with an explicit
check of redis = self.queue.redis_client() and log+continue if None, and broaden
the try/except so any exceptions raised by self.queue.redis_client(),
redis.lock(...), or the async lock acquisition (e.g., connection/timeouts,
RedisError, Exception) are caught and logged (while still handling LockError
specially with the existing debug message), ensuring the loop continues; keep
the await self.queue.requeue() inner try/except for its errors, and ensure the
sleep using job_requeue_interval always runs (e.g., in a finally block) so the
background task never exits on unexpected redis errors.
```python
logger.warning(
    "Failed to fetch queue length during enqueue",
    exc_info=e,
    extra={"queue_type": queue_type, "queue_id": queue_id},
)
```
Redact client-supplied identifiers before attaching them to log context.
These new `extra=` payloads write raw `queue_id` / `job_id` values into logs, and `extra=request_data` does the same for `/metrics/{queue_type}/{queue_id}/`. The API examples and tests use user-scoped queue IDs, so these error paths will turn centralized logs into an identifier store unless you hash/truncate those fields or swap to an internal correlation key.
Also applies to: 235-238, 251-254, 284-287, 315-318, 349-352, 375-375, 415-418
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@fq_server/server.py` around lines 224 - 228, The logs currently attach raw
client-supplied identifiers (queue_id, job_id, request_data) via logger.extra
calls (e.g., the logger.warning in enqueue and the other occurrences listed),
which risks leaking user identifiers; replace those direct uses with a redacted
or hashed form before attaching to log context (create and call a helper like
redact_identifier or hash_identifier and use that value in the extra payloads),
or swap to an internal correlation key, and update all occurrences referenced
(the logger.warning at the shown diff and the other similar blocks) to pass only
the redacted/hashed/truncated identifier instead of the raw value.
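For illustration only, a redaction helper along the lines the comment suggests could look like this; the helper name and digest length are hypothetical, not part of the reviewed codebase:

```python
# Hypothetical redaction helper; the name and 12-character truncation
# are illustrative choices, not from fq_server.
import hashlib


def hash_identifier(value: str) -> str:
    """Short, stable digest: log lines stay correlatable without
    persisting the raw client-supplied identifier."""
    return hashlib.sha256(value.encode("utf-8")).hexdigest()[:12]


# Usage in the warning shown above:
# extra={"queue_type": queue_type, "queue_id": hash_identifier(queue_id)}
```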
Switches default API port references from 8080 to 8300 in documentation for consistency. Upgrades core dependencies to newer versions for improved compatibility and reliability. Transitions from a git-based dependency for the queue library to a versioned PyPI release.
Actionable comments posted: 3
♻️ Duplicate comments (1)
README.md (1)
51-51: ⚠️ Potential issue | 🟡 Minor

Clarify that `PORT` is a launcher variable and use it in the command.

Line 51 still reads like app-level validated config, and Lines 66-68 export `PORT` but don't consume it.

📘 Suggested doc patch

```diff
-| `PORT` | `8300` | Uvicorn port used by the container and local examples. |
+| `PORT` | `8300` | Launcher/Uvicorn variable for startup commands; not read by `QueueServerSettings`. |
 ...
 PORT=8300 \
 REDIS_HOST=127.0.0.1 \
-uv run uvicorn asgi:app --host 0.0.0.0 --port 8300
+uv run uvicorn asgi:app --host 0.0.0.0 --port "$PORT"
```

Also applies to: 66-69
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` at line 51, The README incorrectly presents PORT as an app-level validated config; update the docs to state PORT is a launcher/environment variable (not validated by the app) and demonstrate using it in the start command (e.g., show export PORT=8300 then use it when launching Uvicorn) so the examples at the earlier table row (`PORT`) and the export block (lines showing export PORT) clearly indicate it is consumed by the launcher; update the descriptive text for the `PORT` table entry and the example start command to reference the `PORT` environment variable explicitly.
🧹 Nitpick comments (1)
tests/test_routes.py (1)
548-556: Make `test_requeue_with_lock_disabled` deterministic and restore config in `finally`.

The current sleep-based completion check is timing-sensitive, and config restoration should be guarded even on assertion failure.

🧪 Suggested test hardening

```diff
-server.config["fq"]["enable_requeue_script"] = False
-requeue_task = asyncio.create_task(server.requeue_with_lock())
-
-# Should return immediately (task completes)
-await asyncio.sleep(0.1)
-
-# Task should be done (returned, not cancelled)
-self.assertTrue(requeue_task.done())
-server.config["fq"]["enable_requeue_script"] = True
+original = server.config["fq"]["enable_requeue_script"]
+try:
+    server.config["fq"]["enable_requeue_script"] = False
+    await asyncio.wait_for(server.requeue_with_lock(), timeout=0.5)
+finally:
+    server.config["fq"]["enable_requeue_script"] = original
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_routes.py` around lines 548 - 556, Replace the timing-sensitive sleep+assert with a deterministic await using asyncio.wait_for on requeue_task (e.g., await asyncio.wait_for(requeue_task, timeout=1)) to ensure the task completes within a bounded time and then assert requeue_task.done() and not requeue_task.cancelled(); also wrap the test body in a try/finally and restore server.config["fq"]["enable_requeue_script"] = True in the finally block so the config is always restored even if assertions fail; target the requeue_task creation and server.requeue_with_lock() call in this change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/faqs.rst`:
- Around line 25-27: Update the FAQ curl examples that currently use port 8080
to the new default port 8300; specifically locate the curl command instances
that target "http://127.0.0.1:8080/interval/sms/user42/" (and any other
occurrences of ":8080" such as the one around the other example) and replace
":8080" with ":8300" so all examples consistently reference the current default
API port.
In `@docs/gettingstarted.rst`:
- Around line 19-21: Update the example commands to use the documented default
port 8300 and reference the PORT environment variable instead of hardcoding
8080: replace occurrences of "8080" in the uvicorn invocation (e.g., the line
with "uv run uvicorn asgi:app --host 0.0.0.0 --port 8080") with the PORT
variable (e.g., use the shell-expanded value like "--port $PORT") and change the
example PORT assignment from 8080 to 8300; apply the same fixes for the other
occurrences called out (the blocks covering lines 28-29 and 47-71) so all
examples consistently use PORT and the 8300 default.
In `@README.md`:
- Line 16: Update the README statement about flowdacity-queue (the line
mentioning pinning to an upstream v1.0.0 Git tag) to reflect the actual
dependency declaration used in the project: remove the claim that the package is
not published on PyPI and instead state that the project depends on
flowdacity-queue via the PyPI specifier (e.g., flowdacity-queue>=1.0.0),
matching pyproject.toml and uv.lock; reference the files pyproject.toml and
uv.lock for verification if needed.
---
Duplicate comments:
In `@README.md`:
- Line 51: The README incorrectly presents PORT as an app-level validated
config; update the docs to state PORT is a launcher/environment variable (not
validated by the app) and demonstrate using it in the start command (e.g., show
export PORT=8300 then use it when launching Uvicorn) so the examples at the
earlier table row (`PORT`) and the export block (lines showing export PORT)
clearly indicate it is consumed by the launcher; update the descriptive text for
the `PORT` table entry and the example start command to reference the `PORT`
environment variable explicitly.
---
Nitpick comments:
In `@tests/test_routes.py`:
- Around line 548-556: Replace the timing-sensitive sleep+assert with a
deterministic await using asyncio.wait_for on requeue_task (e.g., await
asyncio.wait_for(requeue_task, timeout=1)) to ensure the task completes within a
bounded time and then assert requeue_task.done() and not
requeue_task.cancelled(); also wrap the test body in a try/finally and restore
server.config["fq"]["enable_requeue_script"] = True in the finally block so the
config is always restored even if assertions fail; target the requeue_task
creation and server.requeue_with_lock() call in this change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 4f39a25c-9dbd-4ac2-9837-22e3c8bf7b6d
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (9)
- README.md
- docker-compose.yml
- docs/configuration.rst
- docs/faqs.rst
- docs/gettingstarted.rst
- docs/internals.rst
- fq_server/settings.py
- pyproject.toml
- tests/test_routes.py
✅ Files skipped from review due to trivial changes (1)
- docs/configuration.rst
🚧 Files skipped from review as they are similar to previous changes (3)
- docker-compose.yml
- pyproject.toml
- docs/internals.rst
docs/gettingstarted.rst
Outdated
```bash
PORT=8080 \
REDIS_HOST=127.0.0.1 \
uv run uvicorn asgi:app --host 0.0.0.0 --port 8080
```
Align Getting Started port examples with the documented default and use PORT consistently.
Examples still use 8080, and Line 19 sets PORT without using it in Line 21. This conflicts with the repo’s updated 8300 default and can cause confusion.
📘 Suggested doc patch

```diff
- PORT=8080 \
+ PORT=8300 \
  REDIS_HOST=127.0.0.1 \
- uv run uvicorn asgi:app --host 0.0.0.0 --port 8080
+ uv run uvicorn asgi:app --host 0.0.0.0 --port "$PORT"
 ...
- curl http://127.0.0.1:8080/
+ curl http://127.0.0.1:8300/
 ...
- curl -X POST http://127.0.0.1:8080/enqueue/sms/user42/ \
+ curl -X POST http://127.0.0.1:8300/enqueue/sms/user42/ \
 ...
- curl http://127.0.0.1:8080/dequeue/sms/
+ curl http://127.0.0.1:8300/dequeue/sms/
 ...
- curl -X POST http://127.0.0.1:8080/finish/sms/user42/job-1/
+ curl -X POST http://127.0.0.1:8300/finish/sms/user42/job-1/
 ...
- curl http://127.0.0.1:8080/metrics/
- curl http://127.0.0.1:8080/metrics/sms/user42/
+ curl http://127.0.0.1:8300/metrics/
+ curl http://127.0.0.1:8300/metrics/sms/user42/
```

Also applies to: 28-29, 47-71
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/gettingstarted.rst` around lines 19 - 21, Update the example commands to
use the documented default port 8300 and reference the PORT environment variable
instead of hardcoding 8080: replace occurrences of "8080" in the uvicorn
invocation (e.g., the line with "uv run uvicorn asgi:app --host 0.0.0.0 --port
8080") with the PORT variable (e.g., use the shell-expanded value like "--port
$PORT") and change the example PORT assignment from 8080 to 8300; apply the same
fixes for the other occurrences called out (the blocks covering lines 28-29 and
47-71) so all examples consistently use PORT and the 8300 default.
README.md
Outdated
```markdown
## Installation

Clone the repo and install the package plus dev tools (uses [`uv`](https://github.com/astral-sh/uv) by default):
This project currently pins `flowdacity-queue` to the upstream `v1.0.0` Git tag because that version is tagged upstream but not published on PyPI.
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
# Inspect common dependency declaration files for flowdacity-queue source/version.
fd -a 'pyproject.toml|setup.py|requirements.*|uv.lock|poetry.lock|Pipfile|Pipfile.lock' \
  | while read -r f; do
      echo "---- $f ----"
      rg -n "flowdacity-queue|git\\+|github.com/flowdacity/flowdacity-queue|v1\\.0\\.0" "$f" || true
    done
```

Repository: flowdacity/queue-server
Length of output: 348
README.md line 16 contains outdated information about the flowdacity-queue dependency.
The README states the package is pinned to a git tag because it's not published on PyPI. However, the actual manifests show flowdacity-queue>=1.0.0 in pyproject.toml and a standard PyPI version specifier in uv.lock, indicating the package is available on PyPI. Update the README to reflect the actual dependency declaration.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@README.md` at line 16, Update the README statement about flowdacity-queue
(the line mentioning pinning to an upstream v1.0.0 Git tag) to reflect the
actual dependency declaration used in the project: remove the claim that the
package is not published on PyPI and instead state that the project depends on
flowdacity-queue via the PyPI specifier (e.g., flowdacity-queue>=1.0.0),
matching pyproject.toml and uv.lock; reference the files pyproject.toml and
uv.lock for verification if needed.
Updates all documentation, configuration, and dependency references to use the new repository and package names. Improves clarity and consistency with upstream naming, reduces confusion, and prepares for future maintenance.
Codecov Report

❌ Patch coverage is …
♻️ Duplicate comments (1)
README.md (1)
49-49: ⚠️ Potential issue | 🟡 Minor

`PORT` is still documented as an app setting, but the app settings model does not read it.

Line 49 places `PORT` in the validated config table, and Lines 64-67 set `PORT` while still hardcoding `--port 8300`. This is misleading: `PORT` is a launcher/runtime variable, not a `QueueServerSettings` field.

Suggested doc fix

```diff
-| `PORT` | `8300` | Uvicorn port used by the container and local examples. |
+| `PORT` | `8300` | Launcher/Uvicorn port variable; not read by `QueueServerSettings` or `asgi.py`. |

-PORT=8300 \
-REDIS_HOST=127.0.0.1 \
-uv run uvicorn asgi:app --host 0.0.0.0 --port 8300
+export PORT=8300
+export REDIS_HOST=127.0.0.1
+uv run uvicorn asgi:app --host 0.0.0.0 --port "$PORT"
```

Also applies to: 64-67
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` at line 49, The README incorrectly lists PORT as a validated app setting and shows the launcher hardcoding --port 8300; remove PORT from the QueueServerSettings/validated config table and instead document PORT as a launcher/runtime environment variable (or CLI option) used by the container/launcher, and update the example run command (the section around the hardcoded "--port 8300") to either reference the PORT env var or show how to pass it from the launcher/CLI; refer to the PORT name and the QueueServerSettings concept in the README so the change clearly separates runtime launcher configuration from the app settings model.
🧹 Nitpick comments (2)
README.md (1)
9-9: Heading style is inconsistent with markdownlint MD003.

Lines 9, 14, 53, and 69 use ATX headings while this doc is otherwise setext-oriented per lint expectations. Please normalize heading style to satisfy the configured markdown rule.
Also applies to: 14-14, 53-53, 69-69
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` at line 9, The README uses ATX-style headings (e.g., "## Prerequisites") that conflict with the document's setext-oriented style per markdownlint MD003; update the ATX headings ("Prerequisites" and the other ATX headings flagged in the comment) to setext-style headings by replacing the leading "##" with an underline of === (for level 1) or --- (for level 2) matching the heading text length so the heading style is normalized across the file and satisfies MD003.docs/conf.py (1)
16-19: Consider updating the copyright year.

The copyright year is set to 2025. If this reflects when the project started, it's fine. Otherwise, consider updating to 2026 or using a range like `"2025-2026, Flowdacity Development Team"`.

Note: The Ruff warning about `copyright` shadowing a Python builtin is a false positive; `copyright` is a standard Sphinx configuration variable required by the framework.
Verify each finding against the current code and only fix it if needed. In `@docs/conf.py` around lines 16 - 19, Update the Sphinx config value named copyright in docs/conf.py to the correct year or a year range (for example change the string "2025, Flowdacity Development Team" to "2026, Flowdacity Development Team" or "2025-2026, Flowdacity Development Team") so the project metadata reflects the current copyright period; leave the variable name as-is since it is the Sphinx-required configuration key.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1f0350a3-0faf-490a-9338-906cbc4b7041
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (11)
- .github/workflows/test.yml
- README.md
- docs/Makefile
- docs/_templates/layout.html
- docs/conf.py
- docs/contributing.rst
- docs/faqs.rst
- docs/gettingstarted.rst
- docs/index.rst
- docs/installation.rst
- docs/internals.rst
✅ Files skipped from review due to trivial changes (8)
- docs/contributing.rst
- docs/installation.rst
- docs/internals.rst
- docs/index.rst
- docs/_templates/layout.html
- docs/gettingstarted.rst
- docs/faqs.rst
- .github/workflows/test.yml
Pull request overview
Copilot reviewed 23 out of 24 changed files in this pull request and generated 4 comments.
fq_server/server.py
Outdated
```python
while True:
    try:
        redis = self.queue.redis_client()
        assert redis is not None, "Redis client is not initialized"
```
Avoid using assert for runtime checks here. Assertions can be stripped with Python optimizations (-O), which would turn a missing Redis client into a harder-to-diagnose AttributeError later. Prefer an explicit guard (e.g., check for None, log an error, and either return/raise RuntimeError) so the failure mode is consistent in production.
```diff
-assert redis is not None, "Redis client is not initialized"
+if redis is None:
+    logger.error("Redis client is not initialized; cannot start requeue loop with lock")
+    raise RuntimeError("Redis client is not initialized")
```
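A quick way to reproduce the hazard described above (hypothetical standalone snippet; run it with and without `-O`):

```python
# guard.py -- `python guard.py` fails with the clear assert message, while
# `python -O guard.py` strips the assert entirely, so the failure surfaces
# later as a less helpful AttributeError on None.
redis = None
assert redis is not None, "Redis client is not initialized"
redis.lock("fq-requeue-lock-key")  # reached only under -O; raises AttributeError
```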
fq_server/__init__.py
Outdated
```diff
-__version__ = '0.1.0'
-__all__ = ['FQServer', 'setup_server']
\ No newline at end of file
+__version__ = "0.1.0"
```
__version__ is still 0.1.0 but the project version in pyproject.toml is now 1.0.0. This mismatch can confuse users and tooling that relies on the runtime package version. Consider updating __version__ to match the project version (or deriving it from package metadata).
| __version__ = "0.1.0" | |
| __version__ = "1.0.0" |
docs/conf.py
Outdated
| version = "0.1.0" | ||
| release = "0.1.0" |
The Sphinx version/release values are set to 0.1.0, but the project version in pyproject.toml is 1.0.0. Please update these so the published documentation reflects the actual release version.
| version = "0.1.0" | |
| release = "0.1.0" | |
| version = "1.0.0" | |
| release = "1.0.0" |
docs/installation.rst
Outdated
```rst
git clone https://github.com/plivo/fq-server.git
uv sync --group dev

This project currently pins ``queue-engine`` to the upstream ``v1.0.0`` Git tag.
```
This statement says the project pins queue-engine to a Git tag, but the repo appears to depend on the PyPI package flowdacity-queue>=1.0.0 (and uv.lock shows a registry source). Please correct this to describe the actual dependency/pinning behavior.
```diff
-This project currently pins ``queue-engine`` to the upstream ``v1.0.0`` Git tag.
+This project depends on the PyPI package ``flowdacity-queue>=1.0.0``, with the exact version pinned via ``uv.lock`` from the Python package index.
```
Provide a better and simpler installation guide.
…and adds tests for Redis client availability
♻️ Duplicate comments (1)
fq_server/server.py (1)
80-96: ⚠️ Potential issue | 🟠 Major: Requeue loop still vulnerable to non-LockError exceptions.

The `None` check is a good improvement, but `redis.lock()` and the async context manager can raise other exceptions (e.g., `ConnectionError`, `TimeoutError`, `RedisError`) that would terminate the background task permanently. Only `LockError` is caught. Consider adding a broader exception handler to keep the loop running:
Suggested hardening
```diff
 except LockError:  # the lock wasn't acquired within specified time
     logger.debug("Requeue lock is already held by another worker")
+except Exception:
+    logger.exception("Requeue loop encountered an error; will retry")
 finally:
     await asyncio.sleep(job_requeue_interval / 1000.0)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@fq_server/server.py` around lines 80 - 96, The requeue loop around redis.lock("fq-requeue-lock-key") only catches LockError and will exit on other redis/connection exceptions; update the while True loop in server.py (the block using self.queue.redis_client(), redis.lock(...), async with, and await self.queue.requeue()) to catch broader exceptions (e.g., Exception or specific Redis/Connection/Timeout errors) separate from LockError so the background task continues running: keep the current LockError handling for lock contention, log and swallow other transient errors from redis.lock() and the async context manager (including connection/timeout/Redis errors) with appropriate logger.exception/ logger.error messages, but do not let them propagate and stop the loop; ensure the finally still awaits asyncio.sleep(job_requeue_interval / 1000.0).
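Putting the quoted context and both suggestions together, a hardened loop could look roughly like the sketch below. The `queue.redis_client()` / `queue.requeue()` calls and the `fq-requeue-lock-key` name come from the review excerpts above; the function signature, logger name, and `blocking_timeout` value are assumptions, so treat this as an illustration rather than the project's actual `server.py`.

```python
import asyncio
import logging

from redis.exceptions import LockError

logger = logging.getLogger("fq-server")


async def requeue_loop(queue, job_requeue_interval: int) -> None:
    """Periodically requeue expired jobs, guarded by a shared Redis lock."""
    while True:
        try:
            redis = queue.redis_client()
            if redis is None:
                # Explicit guard instead of assert: survives `python -O` and
                # gives a clear log line instead of a later AttributeError.
                logger.error("Redis client is not initialized; skipping requeue cycle")
                continue
            async with redis.lock("fq-requeue-lock-key", blocking_timeout=1):
                await queue.requeue()
        except LockError:
            # Another worker holds the lock and will perform the requeue.
            logger.debug("Requeue lock is already held by another worker")
        except Exception:
            # Transient Redis/connection errors must not kill the background task.
            logger.exception("Requeue loop encountered an error; will retry")
        finally:
            # job_requeue_interval is in milliseconds.
            await asyncio.sleep(job_requeue_interval / 1000.0)
```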
🧹 Nitpick comments (2)
docs/conf.py (2)
19-25: `copyright` shadows a Python builtin (Ruff A001).

This is the standard Sphinx configuration variable name, so the shadowing is expected and harmless in this context. You can silence the linter with a `# noqa: A001` comment if the project enforces zero Ruff warnings.

```diff
-copyright = "2025, Flowdacity Development Team"
+copyright = "2025, Flowdacity Development Team"  # noqa: A001
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/conf.py` around lines 19 - 25, The linter flags that the variable name copyright shadows a Python builtin; suppress this harmless warning by adding a noqa comment to that assignment line: update the assignment to the copyright variable in docs/conf.py (the line that currently reads `copyright = "2025, Flowdacity Development Team"`) to append `# noqa: A001` so Ruff ignores the shadowing while keeping the Sphinx config name intact.
22-25: Consider using `tomllib.load()` with a file handle.

Using `tomllib.load()` with a binary file handle is slightly more memory-efficient and idiomatic than `tomllib.loads()` with `read_text()`.

♻️ Suggested improvement

```diff
 project_root = Path(__file__).resolve().parents[1]
-release = tomllib.loads((project_root / "pyproject.toml").read_text(encoding="utf-8"))[
-    "project"
-]["version"]
+with (project_root / "pyproject.toml").open("rb") as f:
+    release = tomllib.load(f)["project"]["version"]
 version = release
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/conf.py` around lines 22 - 25, The current code uses tomllib.loads on the entire file text; instead open the pyproject.toml file as a binary file handle and call tomllib.load to parse it (replace the use of tomllib.loads((project_root / "pyproject.toml").read_text(...)) with opening project_root / "pyproject.toml" in "rb" and passing that handle to tomllib.load), then extract ["project"]["version"] into the release variable and set version = release; keep references to release and project_root as-is.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@fq_server/server.py`:
- Around line 80-96: The requeue loop around redis.lock("fq-requeue-lock-key")
only catches LockError and will exit on other redis/connection exceptions;
update the while True loop in server.py (the block using
self.queue.redis_client(), redis.lock(...), async with, and await
self.queue.requeue()) to catch broader exceptions (e.g., Exception or specific
Redis/Connection/Timeout errors) separate from LockError so the background task
continues running: keep the current LockError handling for lock contention, log
and swallow other transient errors from redis.lock() and the async context
manager (including connection/timeout/Redis errors) with appropriate
logger.exception/ logger.error messages, but do not let them propagate and stop
the loop; ensure the finally still awaits asyncio.sleep(job_requeue_interval /
1000.0).
---
Nitpick comments:
In `@docs/conf.py`:
- Around line 19-25: The linter flags that the variable name copyright shadows a
Python builtin; suppress this harmless warning by adding a noqa comment to that
assignment line: update the assignment to the copyright variable in docs/conf.py
(the line that currently reads `copyright = "2025, Flowdacity Development
Team"`) to append `# noqa: A001` so Ruff ignores the shadowing while keeping the
Sphinx config name intact.
- Around line 22-25: The current code uses tomllib.loads on the entire file
text; instead open the pyproject.toml file as a binary file handle and call
tomllib.load to parse it (replace the use of tomllib.loads((project_root /
"pyproject.toml").read_text(...)) with opening project_root / "pyproject.toml"
in "rb" and passing that handle to tomllib.load), then extract
["project"]["version"] into the release variable and set version = release; keep
references to release and project_root as-is.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 032acec3-37fd-4ea9-a902-f5dc576630ba
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (6)
- README.md
- docs/conf.py
- docs/installation.rst
- fq_server/__init__.py
- fq_server/server.py
- tests/test_routes.py
✅ Files skipped from review due to trivial changes (1)
- docs/installation.rst
🚧 Files skipped from review as they are similar to previous changes (1)
- fq_server/__init__.py
Expands error handling to catch and log transient Redis connection, timeout, and general Redis errors during the requeue loop to prevent unexpected crashes and aid in debugging. Adds targeted tests to ensure robustness and correct logging behavior for these error scenarios.
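A self-contained illustration of that kind of test is sketched below. It exercises a stand-in loop body rather than the real `FQServer`, and assumes `pytest-asyncio` and redis-py are installed; the function names and log messages are placeholders, not the repository's actual tests.

```python
import asyncio
import logging
from unittest.mock import MagicMock

import pytest
from redis.exceptions import ConnectionError as RedisConnectionError

logger = logging.getLogger("requeue-test")


async def run_iterations(queue, iterations: int, interval_ms: int = 1) -> None:
    """Bounded stand-in for the background requeue loop."""
    for _ in range(iterations):
        try:
            redis = queue.redis_client()
            async with redis.lock("fq-requeue-lock-key"):
                await queue.requeue()
        except RedisConnectionError:
            logger.exception("Transient Redis error during requeue; will retry")
        finally:
            await asyncio.sleep(interval_ms / 1000.0)


@pytest.mark.asyncio
async def test_transient_connection_error_is_logged_and_loop_continues(caplog):
    # Fake Redis client whose lock() call always fails with a transient error.
    fake_redis = MagicMock()
    fake_redis.lock.side_effect = RedisConnectionError("redis unavailable")
    queue = MagicMock(redis_client=MagicMock(return_value=fake_redis))

    with caplog.at_level(logging.ERROR, logger="requeue-test"):
        await run_iterations(queue, iterations=3)

    errors = [r for r in caplog.records if "Transient Redis error" in r.getMessage()]
    assert len(errors) == 3  # every failure was logged and the loop kept going
```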
Refactors the test suite by separating configuration validation, API/core route tests, and error path coverage into targeted modules. Removes a large monolithic test file in favor of focused, maintainable test cases organized by purpose. Enhances test maintainability and eases future test additions by introducing shared test utilities.
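One shape such shared utilities could take is a small `conftest.py` fixture that seeds a known-good environment for the configuration tests; everything below, including the `FQ_*` variable names, is a hypothetical sketch rather than the repository's actual test code.

```python
# conftest.py — hypothetical shared fixture for the split test modules.
import pytest

BASELINE_ENV = {
    "FQ_REDIS_HOST": "127.0.0.1",
    "FQ_REDIS_PORT": "6379",
    "FQ_JOB_EXPIRE_INTERVAL": "1000",
    "FQ_JOB_REQUEUE_INTERVAL": "1000",
}


@pytest.fixture
def queue_env(monkeypatch):
    """Populate a known-good environment so each test only overrides what it needs."""
    for key, value in BASELINE_ENV.items():
        monkeypatch.setenv(key, value)
    return dict(BASELINE_ENV)
```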
This pull request modernizes the configuration and documentation of the Flowdacity Queue Server by removing support for config files in favor of environment variables, updating the documentation for clarity and accuracy, and improving consistency across project files. The changes simplify deployment and make the server easier to configure in both local and containerized environments.
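As a rough illustration of the env-var-driven startup validation described here, a settings model along the following lines could be used; the field names, `FQ_` prefix, and defaults below are assumptions for the sketch, not the project's actual `QueueServerSettings`.

```python
from pydantic import Field
from pydantic_settings import BaseSettings, SettingsConfigDict


class QueueServerSettings(BaseSettings):
    """Typed startup configuration read entirely from environment variables."""

    # Assumed prefix: FQ_REDIS_HOST, FQ_JOB_REQUEUE_INTERVAL, and so on.
    model_config = SettingsConfigDict(env_prefix="FQ_")

    redis_host: str = "127.0.0.1"
    redis_port: int = 6379
    job_expire_interval: int = Field(default=1000, gt=0)   # milliseconds
    job_requeue_interval: int = Field(default=1000, gt=0)  # milliseconds


# Instantiating the model at startup raises a ValidationError with a readable
# message if any variable is missing or malformed, instead of failing later.
settings = QueueServerSettings()
```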
Configuration and Deployment Modernization:
- Removed the bundled config file (default.conf); all settings must now be provided via environment variables, validated at startup using pydantic-settings. (asgi.py [1], default.conf [2], docker-compose.yml [3])
- Updated docker-compose.yml to pass all queue and Redis settings through environment variables instead of mounting a config file. (docker-compose.yml L5-L10)

Documentation Updates:
- Updated README.md to explain the new environment-variable-based configuration, removed references to config files, and clarified setup instructions for both local and Docker-based development. (README.md [1] [2] [3])
- Rewrote docs/apireference.rst to describe all endpoints, request/response formats, and error codes in a clear, modern style, replacing legacy SHARQ references with Flowdacity-specific details. (docs/apireference.rst L5-R219)

Branding and Consistency Improvements:
- Updated project names and repository links in docs/_templates/layout.html [1] and docs/Makefile [2].

Summary by CodeRabbit
New Features
Documentation
Chores
Tests