FatAdvisor is an AI-powered nutrition assistant built in .NET. It uses the FatSecret API to fetch food and weight logs and leverages a language model to provide personalized dietary analysis and suggestions.
This project is currently at the prototype / proof-of-concept stage, starting from a console interface and expanding toward more advanced “agentic” features. Part 1 of the dev blog series gives an overview of the architecture and goals.
Many users already track their meals, calories, and weight using services like FatSecret. The idea behind FatAdvisor is to let an AI agent:
- fetch your recent nutrition and weight logs
- reason over trends (macros, calories, deviations)
- respond to natural-language questions like “What should I eat tonight?” or “How can I reduce carbs tomorrow?” using your personal data rather than generic advice.
The solution is organized as a Visual Studio / .NET solution with three main projects. These correspond to layered responsibilities:
| Project | Purpose / Responsibility | Key Features |
|---|---|---|
| FatAdvisor.Console | The entry point / UI for user interaction | Hosts Program.cs, configures dependency injection, logging, hosts the runner |
| FatAdvisor.Ai | Core logic for AI orchestration | Wraps Semantic Kernel, builds prompt templates, handles decision logic |
| FatAdvisor.FatSecretApi | Integration with the FatSecret REST API | DTOs, HTTP client, authentication, request/response models |
- FatAdvisor.Console depends on both FatAdvisor.Ai and FatAdvisor.FatSecretApi.
- FatAdvisor.Ai may depend on FatAdvisor.FatSecretApi (or at least its models), e.g. when the agent needs to call API functions.
- FatAdvisor.FatSecretApi is self-contained, with no dependencies on the AI logic.
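This dependency direction can be wired up in the console project's `Program.cs` via the generic host. A minimal sketch follows; the type names (`IFatSecretClient`, `AdvisorAgent`) are illustrative stand-ins for the real classes, not the actual API of the project:

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

// Hypothetical service types standing in for the real projects:
//   IFatSecretClient / FatSecretClient -> FatAdvisor.FatSecretApi
//   AdvisorAgent                       -> FatAdvisor.Ai
public interface IFatSecretClient { }
public sealed class FatSecretClient : IFatSecretClient { }

public sealed class AdvisorAgent
{
    private readonly IFatSecretClient _api;
    public AdvisorAgent(IFatSecretClient api) => _api = api;
}

public static class Program
{
    public static void Main(string[] args)
    {
        // The console project composes both layers;
        // FatSecretApi stays unaware of the AI layer.
        using var host = Host.CreateDefaultBuilder(args)
            .ConfigureServices(services =>
            {
                services.AddSingleton<IFatSecretClient, FatSecretClient>();
                services.AddSingleton<AdvisorAgent>();
            })
            .Build();

        var agent = host.Services.GetRequiredService<AdvisorAgent>();
    }
}
```

Keeping the composition root in the console project is what lets FatAdvisor.Ai and FatAdvisor.FatSecretApi remain independently testable.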
Sensitive configuration is loaded via User Secrets or environment variables, which keeps keys out of source control. At a minimum, you must configure:
- `GitHubModels:ApiKey` — your token for the LLM service
- `GitHubModels:Endpoint` — the API endpoint (e.g. the models inference endpoint)
Example:

```shell
cd FatAdvisor.Console
dotnet user-secrets init
dotnet user-secrets set "GitHubModels:ApiKey" "<your-token>"
dotnet user-secrets set "GitHubModels:Endpoint" "https://models.inference.ai.azure.com"
```

You can also override these values via environment variables if preferred.
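Once set, these values flow into `IConfiguration` like any other source. A minimal sketch of reading them in the console host (the key names match the ones above; everything else is illustrative):

```csharp
using System;
using Microsoft.Extensions.Configuration;

var config = new ConfigurationBuilder()
    .AddUserSecrets<Program>()     // dev-time secrets set with `dotnet user-secrets`
    .AddEnvironmentVariables()     // e.g. GitHubModels__ApiKey (double underscore
                                   // replaces ":" in environment variable names)
    .Build();

// Fail fast if a required key is missing, rather than at the first LLM call.
string apiKey = config["GitHubModels:ApiKey"]
    ?? throw new InvalidOperationException("GitHubModels:ApiKey is not configured.");
string endpoint = config["GitHubModels:Endpoint"]
    ?? throw new InvalidOperationException("GitHubModels:Endpoint is not configured.");
```

Because `AddEnvironmentVariables()` is registered after `AddUserSecrets<T>()`, environment variables take precedence, which matches the override behavior described above.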
Here are some enhancements to consider (some already hinted at in the blog post):
- Add error handling, rate limiting, caching, and retry logic
- Improve prompt engineering (chain-of-thought, few-shot examples)
- Add web UI
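As one concrete example of the first bullet, a simple retry-with-backoff wrapper around the FatSecret HTTP calls could look like this. This is a sketch only; the helper name and signature are hypothetical, not part of the project:

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public static class HttpRetry
{
    // Retries transient failures (429 / 5xx / network errors) with
    // exponential backoff: 1s, 2s, 4s, ...
    public static async Task<HttpResponseMessage> SendWithRetryAsync(
        Func<Task<HttpResponseMessage>> send, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                // The caller passes a factory, since an HttpRequestMessage
                // cannot be sent twice.
                var response = await send();
                bool transient = response.StatusCode == HttpStatusCode.TooManyRequests
                              || (int)response.StatusCode >= 500;
                if (!transient || attempt == maxAttempts)
                    return response;
                response.Dispose();
            }
            catch (HttpRequestException) when (attempt < maxAttempts)
            {
                // Network-level failure: fall through to backoff and retry.
            }

            await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt - 1)));
        }
    }
}
```

In a real implementation you would more likely reach for a resilience library such as Polly, which provides retries, circuit breakers, and rate limiting out of the box.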