Conversation

@harry-whorlow
Contributor

@harry-whorlow harry-whorlow commented Dec 7, 2025

This PR adds ollama meta and types to the ollama adapter.

It's not exhaustive, but it does match the SDK in content and models.

Summary by CodeRabbit

  • New Features

    • Added many new model families and per-model capability metadata for richer model discovery and safer, model-specific options.
  • Refactor

    • Centralized a single registry of available text models and updated adapters to resolve per-model option types from that registry instead of local lists.
  • Chores

    • Re-exported consolidated model lists and typings, updated example default provider model/options, and added a release changeset.
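The registry pattern this PR describes can be sketched in miniature as follows. This is an illustrative stand-in, not the package's actual internals: the family names, option shapes, and `ResolveOptions` helper here are all hypothetical, mirroring the `OLLAMA_TEXT_MODELS` / `ResolveModelOptions<TModel>` design mentioned in the walkthrough below.

```typescript
// Hypothetical per-family provider option shapes (illustrative only).
interface Llama3Options {
  think?: boolean
}
interface GptOssOptions {
  think?: boolean | 'high' | 'medium' | 'low'
}

// Each family exports a tuple of model names; the registry concatenates them
// into one source of truth instead of adapter-local lists.
const LLAMA3_MODELS = ['llama3:8b', 'llama3:70b'] as const
const GPT_OSS_MODELS = ['gpt-oss:20b', 'gpt-oss:120b'] as const

const TEXT_MODELS = [...LLAMA3_MODELS, ...GPT_OSS_MODELS] as const
type TextModel = (typeof TEXT_MODELS)[number]

// Per-family name→options maps intersect into a single lookup table.
type OptionsByName = {
  [M in (typeof LLAMA3_MODELS)[number]]: Llama3Options
} & {
  [M in (typeof GPT_OSS_MODELS)[number]]: GptOssOptions
}

// Adapters resolve a model's option type from the registry by name.
type ResolveOptions<TModel extends TextModel> = OptionsByName[TModel]

// 'gpt-oss:120b' resolves to GptOssOptions, so string think levels type-check.
const opts: ResolveOptions<'gpt-oss:120b'> = { think: 'high' }

console.log(TEXT_MODELS.length) // 4
```

The payoff of the intersection-of-mapped-types approach is that each family file stays self-contained while the adapter still gets exact per-model option types from a single generic.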


@nx-cloud

nx-cloud bot commented Dec 7, 2025

View your CI Pipeline Execution ↗ for commit c3a82db

Command Status Duration Result
nx run-many --targets=build --exclude=examples/** ✅ Succeeded 2s View ↗
nx affected --targets=test:sherif,test:knip,tes... ✅ Succeeded 6s View ↗

☁️ Nx Cloud last updated this comment at 2026-01-05 12:53:52 UTC

@harry-whorlow harry-whorlow force-pushed the ollama-temp branch 2 times, most recently from ed875f4 to d0d4a56 Compare December 9, 2025 12:00
@pkg-pr-new
Copy link

pkg-pr-new bot commented Dec 9, 2025


@tanstack/ai

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai@117

@tanstack/ai-anthropic

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-anthropic@117

@tanstack/ai-client

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-client@117

@tanstack/ai-devtools-core

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-devtools-core@117

@tanstack/ai-gemini

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-gemini@117

@tanstack/ai-ollama

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-ollama@117

@tanstack/ai-openai

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-openai@117

@tanstack/ai-react

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-react@117

@tanstack/ai-react-ui

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-react-ui@117

@tanstack/ai-solid

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-solid@117

@tanstack/ai-solid-ui

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-solid-ui@117

@tanstack/ai-svelte

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-svelte@117

@tanstack/ai-vue

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-vue@117

@tanstack/ai-vue-ui

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-vue-ui@117

@tanstack/react-ai-devtools

npm i https://pkg.pr.new/TanStack/ai/@tanstack/react-ai-devtools@117

@tanstack/solid-ai-devtools

npm i https://pkg.pr.new/TanStack/ai/@tanstack/solid-ai-devtools@117

commit: c3a82db

@coderabbitai
Contributor

coderabbitai bot commented Dec 20, 2025

📝 Walkthrough

Walkthrough

Centralizes Ollama model metadata into a shared registry and many per-family meta modules; adapters remove local model lists and now import model names/types from the registry; index re-exports consolidated model tuples and per-model mapped types; numerous new meta files add typed model declarations and provider-option mappings.

Changes

Cohort / File(s) Summary
Adapters
packages/typescript/ai-ollama/src/adapters/text.ts, packages/typescript/ai-ollama/src/adapters/summarize.ts
Removed in-file hard-coded model arrays; import OLLAMA_TEXT_MODELS and per-model option types from model-meta; replaced local model unions with imports; updated generics to ResolveModelOptions<TModel> and adjusted structuredOutput signatures.
Public barrel / exports
packages/typescript/ai-ollama/src/index.ts
Stopped exporting adapter-local model lists; re-exports OLLAMA_TEXT_MODELS (as OllamaTextModels/OllamaSummarizeModels); added exports for per-model mapped types (e.g., OllamaChatModelOptionsByName, OllamaModelInputModalitiesByName, and other per-family types).
Core meta types
packages/typescript/ai-ollama/src/meta/models-meta.ts
New foundational Ollama type definitions and message/request shapes (e.g., OllamaModelMeta, OllamaChatRequest, OllamaChatRequestMessages, OllamaMessageTools, OllamaMessageImages, thinking/tools request variants).
Central registry
packages/typescript/ai-ollama/src/model-meta.ts
New aggregator OLLAMA_TEXT_MODELS concatenates all per-family [FAMILY]_MODELS; added composite mapped types OllamaChatModelOptionsByName and OllamaModelInputModalitiesByName (intersections of per-family maps).
Per-family metadata
packages/typescript/ai-ollama/src/meta/...
~70 new or extended per-family meta modules (e.g., model-meta-llama2.ts, model-meta-qwen3.ts, model-meta-mistral-*.ts, etc.), each exporting [FAMILY]_MODELS tuples and per-name mapped types for provider options and input modalities.
Examples & changelog
examples/ts-react-chat/src/routes/api.tanchat.ts, .changeset/*
Updated example default Ollama model (mistral:7b → gpt-oss:120b) and modelOptions/temperature; added a changeset file for a release bump.
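A per-family meta module of the kind summarized above can be sketched like this. The `OllamaModelMeta` shape and request interfaces here are simplified stand-ins for the real definitions in `meta/models-meta.ts`; only the `as const satisfies` pattern and the tuple/type exports reflect the structure the walkthrough describes.

```typescript
// Simplified stand-ins for the real types in meta/models-meta.ts.
interface OllamaModelMeta<TOptions> {
  name: string
  supports: {
    input: ReadonlyArray<'text' | 'image'>
    output: ReadonlyArray<'text'>
    capabilities: ReadonlyArray<'tools' | 'vision' | 'thinking'>
  }
  size: string
  context: number
  defaultOptions?: Partial<TOptions>
}

interface OllamaChatRequest {
  stream?: boolean
  keep_alive?: string | number
}
interface OllamaMessageImages {
  images?: Array<string>
}

// `as const` keeps the name as a literal type; `satisfies` validates the
// shape without widening, so LLAVA_LATEST.name stays 'llava:latest'.
const LLAVA_LATEST = {
  name: 'llava:latest',
  supports: {
    input: ['text', 'image'],
    output: ['text'],
    capabilities: ['vision'],
  },
  size: '4.7gb',
  context: 4_000,
} as const satisfies OllamaModelMeta<OllamaChatRequest & OllamaMessageImages>

// Each family file exports a name tuple plus a per-name option map,
// which the central registry concatenates and intersects.
const LLAVA_MODELS = [LLAVA_LATEST.name] as const
type LlavaChatModelProviderOptionsByName = {
  [LLAVA_LATEST.name]: OllamaChatRequest & OllamaMessageImages
}

console.log(LLAVA_MODELS[0]) // 'llava:latest'
```

Because `satisfies` checks the object against the interface without erasing its literal types, the computed key `[LLAVA_LATEST.name]` produces an exact `'llava:latest'` entry in the option map rather than a generic `string` index.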

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs

Suggested reviewers

  • jherr
  • AlemTuzlak

Poem

"🐰 I hopped through tuples, names in rows,
I nudged the types where metadata grows.
Adapters found their single source,
Models marched in tidy course,
The rabbit nibbles code — then off he goes!"

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
Check name Status Explanation Resolution
Description check ⚠️ Warning The PR description is minimal and incomplete. It lacks detail about the changes, rationale, and does not follow the provided template structure with the required sections and checklist items. Complete the PR description following the template: add a detailed explanation under '🎯 Changes', include the full checklist with checkmarks indicating completion, and document the release impact.
✅ Passed checks (2 passed)
Check name Status Explanation
Title check ✅ Passed The title clearly summarizes the main change: adding Ollama types and meta to the Ollama adapter, matching the primary objective of this pull request.
Docstring Coverage ✅ Passed No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.


Comment on lines 28 to 70
export interface OllamaChatRequest {
model: string
// messages?: Message[]
stream?: boolean
format?: string | object
keep_alive?: string | number
// tools?: Tool[]
// think?: boolean | 'high' | 'medium' | 'low'
logprobs?: boolean
top_logprobs?: number
options?: Partial<Options>
}
Contributor Author


Comments are used for reference, will be removed

Comment on lines +67 to +99
role: string
content: string
Contributor Author


role and content are the only config applied to all chat messages

@harry-whorlow harry-whorlow force-pushed the ollama-temp branch 7 times, most recently from 1052658 to 833b639 Compare December 27, 2025 19:23
@harry-whorlow harry-whorlow marked this pull request as ready for review December 27, 2025 19:26
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 11

Note

Due to the large number of review comments, Critical severity comments were prioritized as inline comments.

♻️ Duplicate comments (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (1)

15-26: Track removal of commented reference code.

The file contains extensive commented blocks that appear to serve as reference documentation. Based on the past review comment, these are planned for removal. Ensure these comments are cleaned up before merging to maintain code clarity.

Also applies to: 28-39, 53-60, 62-75

🟠 Major comments (12)
packages/typescript/ai-ollama/src/meta/model-meta-llama3-chatqa.ts-7-18 (1)

7-18: Fix size format inconsistencies and verify accuracy.

The size values across the three model definitions are inconsistent in format and potentially incorrect:

  • Line 14: '4.7b' (billion parameters)
  • Line 27: '4.7gb' (gigabytes) - This is concerning because the model is named 8b (implying 8 billion parameters)
  • Line 40: '40gb' (gigabytes)

The mixed use of 'b' vs 'gb' units creates ambiguity. Additionally, the 8b model having a '4.7gb' size seems incorrect if 8b refers to 8 billion parameters.

🔎 Suggested approach to verify and standardize

Please verify the correct size values from the official Ollama model documentation and standardize the unit format across all three models. If the size represents parameters, use 'b' consistently (e.g., '4.7b', '8b', '70b'). If it represents disk space, use 'gb' consistently.

#!/bin/bash
# Verify size format used in other model-meta files for consistency
rg -n "size:" packages/typescript/ai-ollama/src/meta/model-meta-*.ts -A 0

Also applies to: 20-31, 33-42

packages/typescript/ai-ollama/src/meta/model-meta-llama3-chatqa.ts-42-42 (1)

42-42: Replace any with specific type for type safety.

The LLAMA3_CHATQA_70b model uses OllamaModelMeta<any> instead of the specific type OllamaModelMeta<OllamaChatRequest & OllamaChatRequestMessages> used by the other two models. This reduces type safety and is inconsistent with the type mapping at line 65 which correctly types this model.

Based on learnings, per-model type safety should be maintained.

🔎 Proposed fix
-} as const satisfies OllamaModelMeta<any>
+} as const satisfies OllamaModelMeta<
+  OllamaChatRequest & OllamaChatRequestMessages
+>
packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts-7-36 (1)

7-36: Typo in model name: "tinnyllama" should be "tinyllama".

The model name has a typo with double 'n' (tinnyllama) instead of single 'n' (tinyllama). This will cause model lookup failures when users try to use these models with Ollama, as the official model name is tinyllama.

🔎 Proposed fix
-const TINNYLLAMA_LATEST = {
-  name: 'tinnyllama:latest',
+const TINYLLAMA_LATEST = {
+  name: 'tinyllama:latest',
   supports: {
     input: ['text'],
     output: ['text'],
     capabilities: [],
   },
   size: '638mb',
   context: 2_000,
 } as const satisfies OllamaModelMeta<
   OllamaChatRequest & OllamaChatRequestMessages
 >

-const TINNYLLAMA_1_1b = {
-  name: 'tinnyllama:1.1b',
+const TINYLLAMA_1_1b = {
+  name: 'tinyllama:1.1b',

This fix should be applied throughout the file to all constant names, exported types, and the MODELS array.

Committable suggestion skipped: line range outside the PR's diff.

packages/typescript/ai-ollama/src/meta/model-meta-llama3.3.ts-24-37 (1)

24-37: Variable/model name mismatch: LLAMA3_3_70b defines model 'llama3.3:8b'.

The constant is named LLAMA3_3_70b but the model name is 'llama3.3:8b'. This is inconsistent and likely incorrect—either the variable should be LLAMA3_3_8b or the model name should be 'llama3.3:70b'.

🔎 Proposed fix (assuming 70b is correct)
 const LLAMA3_3_70b = {
-  name: 'llama3.3:8b',
+  name: 'llama3.3:70b',
   supports: {
     input: ['text'],
     output: ['text'],
     capabilities: ['tools'],
   },
   size: '43gb',
   context: 128_000,
 } as const satisfies OllamaModelMeta<
packages/typescript/ai-ollama/src/meta/model-meta-gemma3.ts-21-32 (1)

21-32: Type constraint mismatch with model capabilities.

GEMMA3_270m declares input: ['text'] (text-only) but uses OllamaChatRequestMessages<OllamaMessageImages> in its type constraint, which includes image message support. This creates a type inconsistency where the type system allows image messages for a text-only model.

Use OllamaChatRequestMessages (without the OllamaMessageImages parameter) for text-only models to ensure type safety aligns with model capabilities.

🔎 Proposed fix
 const GEMMA3_270m = {
   name: 'gemma3:270m',
   supports: {
     input: ['text'],
     output: ['text'],
     capabilities: [],
   },
   size: '298mb',
   context: 32_000,
 } as const satisfies OllamaModelMeta<
-  OllamaChatRequest & OllamaChatRequestMessages<OllamaMessageImages>
+  OllamaChatRequest & OllamaChatRequestMessages
 >
packages/typescript/ai-ollama/src/meta/model-meta-gemma3.ts-34-45 (1)

34-45: Type constraint mismatch with model capabilities.

Same issue as GEMMA3_270m: GEMMA3_1b declares text-only input but uses OllamaChatRequestMessages<OllamaMessageImages> in its type constraint.

🔎 Proposed fix
 const GEMMA3_1b = {
   name: 'gemma3:1b',
   supports: {
     input: ['text'],
     output: ['text'],
     capabilities: [],
   },
   size: '815mb',
   context: 32_000,
 } as const satisfies OllamaModelMeta<
-  OllamaChatRequest & OllamaChatRequestMessages<OllamaMessageImages>
+  OllamaChatRequest & OllamaChatRequestMessages
 >
packages/typescript/ai-ollama/src/meta/model-meta-gemma3.ts-106-120 (1)

106-120: Provider options type map includes incorrect types for text-only models.

The provider options map assigns OllamaMessageImages to text-only models (270m, 1b), creating the same type safety issue at the API level.

🔎 Proposed fix
 export type Gemma3ChatModelProviderOptionsByName = {
   // Models with thinking and structured output support
   [GEMMA3_LATEST.name]: OllamaChatRequest &
     OllamaChatRequestMessages<OllamaMessageImages>
-  [GEMMA3_270m.name]: OllamaChatRequest &
-    OllamaChatRequestMessages<OllamaMessageImages>
+  [GEMMA3_270m.name]: OllamaChatRequest & OllamaChatRequestMessages
-  [GEMMA3_1b.name]: OllamaChatRequest &
-    OllamaChatRequestMessages<OllamaMessageImages>
+  [GEMMA3_1b.name]: OllamaChatRequest & OllamaChatRequestMessages
   [GEMMA3_4b.name]: OllamaChatRequest &
     OllamaChatRequestMessages<OllamaMessageImages>
   [GEMMA3_12b.name]: OllamaChatRequest &
     OllamaChatRequestMessages<OllamaMessageImages>
   [GEMMA3_27b.name]: OllamaChatRequest &
     OllamaChatRequestMessages<OllamaMessageImages>
 }
packages/typescript/ai-ollama/src/meta/model-meta-llama3.ts-20-31 (1)

20-31: Model name inconsistency: constant vs actual name.

The constant is named LLAMA3_8b but the model name is 'llama3:7b'. This mismatch between the constant name and actual model name creates confusion. Verify whether the constant should be renamed to LLAMA3_7b or the model name should be corrected to 'llama3:8b'. Note: Similar inconsistencies exist in other model meta files (e.g., LLAMA3_70b with correct name, but LLAMA_GUARD3_1b with 'llama3:7b' name).

packages/typescript/ai-ollama/src/meta/model-meta-mistral-nemo.ts-9-22 (1)

9-22: Update context window to 128,000 tokens for both Mistral Nemo models.

Mistral Nemo supports 128K context window, not 1,000. Update context: 1_000 to context: 128_000 for both MISTRAL_NEMO_LATEST and MISTRAL_NEMO_12b (lines 17 and 32).

packages/typescript/ai-ollama/src/meta/model-meta-mistral-nemo.ts-24-37 (1)

24-37: Set context to 128,000 tokens for Mistral Nemo 12b.

The context window is set to 1,000 tokens but should be 128,000. This applies to both MISTRAL_NEMO_LATEST and MISTRAL_NEMO_12b.

packages/typescript/ai-ollama/src/meta/model-meta-llava-llama3.ts-52-60 (1)

52-60: Remove tool support from provider options type — llava-llama3 does not support tool calling.

The provider options type (lines 54-59) includes OllamaMessageTools and OllamaChatRequestTools, but llava-llama3 is a vision-only model without tool calling support. This allows callers to pass tools where they will fail at runtime.

Remove OllamaMessageTools & and & OllamaChatRequestTools to match the model's actual capabilities and the satisfies clause (which only requires OllamaChatRequestMessages<OllamaMessageImages>).

packages/typescript/ai-ollama/src/meta/model-meta-sailor2.ts-58-62 (1)

58-62: Missing SAILOR2_1b in the exported models array.

The SAILOR2_1b model is defined (lines 20-31) and included in both Sailor2ChatModelProviderOptionsByName (line 78) and Sailor2ModelInputModalitiesByName (line 86), but it's not included in the SAILOR2_MODELS array. This inconsistency could cause issues where the type system allows 'sailor2:1b' as a valid model name, but runtime iteration over SAILOR2_MODELS would miss it.

🔎 Proposed fix
 export const SAILOR2_MODELS = [
   SAILOR2_LATEST.name,
+  SAILOR2_1b.name,
   SAILOR2_8b.name,
   SAILOR2_20b.name,
 ] as const
🟡 Minor comments (26)
packages/typescript/ai-ollama/src/meta/model-meta-dolphin3.ts-45-50 (1)

45-50: Update comment to match actual capabilities.

The comment on line 47 states "Models with thinking and structured output support," but the capabilities arrays for both models are empty (lines 12 and 25), indicating no support for thinking, tools, vision, or embedding. This inconsistency could mislead developers.

Please either:

  1. Update the comment to accurately reflect the models' capabilities, or
  2. Add the appropriate capabilities to the model metadata if they do support thinking and structured output
🔎 Proposed fix to update the comment
-  // Models with thinking and structured output support
+  // Models with text input/output support
   [DOLPHIN3_LATEST.name]: OllamaChatRequest & OllamaChatRequestMessages
   [DOLPHIN3_8b.name]: OllamaChatRequest & OllamaChatRequestMessages
packages/typescript/ai-ollama/src/meta/model-meta-llava.ts-15-15 (1)

15-15: Inconsistent size format units.

The size field uses inconsistent units across models:

  • LLAVA_LATEST: '4.7b' (appears to be parameter count)
  • Other models: '4.7gb', '8gb', '20gb' (file size)

This mixing of parameter count ('b' for billions) and file size ('gb' for gigabytes) creates ambiguity. Standardize to either parameter count or file size across all models.

Also applies to: 28-28, 41-41, 54-54

packages/typescript/ai-ollama/src/meta/model-meta-shieldgemma.ts-77-91 (1)

77-91: Correct misleading comments in type map definitions.

The comments above the type maps are inconsistent with the actual model definitions:

  • Line 78: Claims "Models with thinking and structured output support", but all models have empty capabilities: [] arrays.
  • Line 86: Claims "Models with text, image, audio, video (no document)", but all models only support input: ['text'].

Update the comments to accurately reflect the models' actual capabilities and supported modalities.

🔎 Proposed fix
 // Manual type map for per-model provider options
 export type ShieldgemmaChatModelProviderOptionsByName = {
-  // Models with thinking and structured output support
+  // All Shieldgemma models with standard chat support
   [SHIELDGEMMA_LATEST.name]: OllamaChatRequest & OllamaChatRequestMessages
   [SHIELDGEMMA_2b.name]: OllamaChatRequest & OllamaChatRequestMessages
   [SHIELDGEMMA_9b.name]: OllamaChatRequest & OllamaChatRequestMessages
   [SHIELDGEMMA_27b.name]: OllamaChatRequest & OllamaChatRequestMessages
 }
 
 export type ShieldgemmaModelInputModalitiesByName = {
-  // Models with text, image, audio, video (no document)
+  // All models support text input only
   [SHIELDGEMMA_LATEST.name]: typeof SHIELDGEMMA_LATEST.supports.input
   [SHIELDGEMMA_2b.name]: typeof SHIELDGEMMA_2b.supports.input
   [SHIELDGEMMA_9b.name]: typeof SHIELDGEMMA_9b.supports.input
   [SHIELDGEMMA_27b.name]: typeof SHIELDGEMMA_27b.supports.input
 }

Committable suggestion skipped: line range outside the PR's diff.

packages/typescript/ai-ollama/src/meta/model-meta-moondream.ts-50-56 (1)

50-56: Fix the misleading comment.

The comment states "Models with thinking and structured output support" but the Moondream models only declare 'vision' capability (lines 13, 26), not thinking or structured output. This comment appears to be copied from another model file and should be updated to accurately describe Moondream's capabilities.

🔎 Suggested fix
 // Manual type map for per-model provider options
 export type MoondreamChatModelProviderOptionsByName = {
-  // Models with thinking and structured output support
+  // Models with vision support
   [MOONDREAM_LATEST.name]: OllamaChatRequest &
     OllamaChatRequestMessages<OllamaMessageImages>
   [MOONDREAM_1_8b.name]: OllamaChatRequest &
     OllamaChatRequestMessages<OllamaMessageImages>
 }
packages/typescript/ai-ollama/src/meta/model-meta-moondream.ts-8-32 (1)

8-32: Verify metadata accuracy for moondream:1.8b variant.

The constants MOONDREAM_LATEST and MOONDREAM_1_8b have identical metadata (size: '1.7gb', context: 2_000), which appears inconsistent with the model naming. The moondream:1.8b variant name typically refers to a model with ~1.8 billion parameters, yet both variants declare the same 1.7gb size. Cross-check the actual specifications from the Moondream project to ensure accuracy, as other vision model variants in the codebase (e.g., llava) have differentiated metadata across versions.

packages/typescript/ai-ollama/src/meta/model-meta-exaone3.5.ts-74-74 (1)

74-74: Copy-paste error: comment references wrong model family.

The commented type alias says AyaChatModels but this file is for Exaone3.5 models.

🔎 Proposed fix
-// export type AyaChatModels = (typeof EXAONE3_5MODELS)[number]
+// export type Exaone3_5ChatModels = (typeof EXAONE3_5MODELS)[number]
packages/typescript/ai-ollama/src/meta/model-meta-exaone3.5.ts-33-44 (1)

33-44: Constant name doesn't match model name.

The constant EXAONE3_5_7_1b suggests a 7.1b model, but the actual model name is 'exaone3.5:7.8b'. Consider renaming to EXAONE3_5_7_8b for consistency.

🔎 Proposed fix
-const EXAONE3_5_7_1b = {
+const EXAONE3_5_7_8b = {
   name: 'exaone3.5:7.8b',

Update references in EXAONE3_5MODELS, Exaone3_5ChatModelProviderOptionsByName, and Exaone3_5ModelInputModalitiesByName accordingly.

packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts-26-56 (1)

26-56: Inconsistency between capabilities and type constraints.

OPT_OSS_20b and OPT_OSS_120b are constrained with OllamaChatRequestThinking_OpenAI in their type assertions (lines 38-39, 54-55), but their capabilities arrays only include ['tools'] without 'thinking' (lines 31, 47).

If these models support thinking, add 'thinking' to their capabilities. If they don't, consider removing OllamaChatRequestThinking_OpenAI from the type constraint to maintain consistency.

🔎 Option A: Add thinking capability
 const OPT_OSS_20b = {
   name: 'gpt-oss:20b',
   supports: {
     input: ['text'],
     output: ['text'],
-    capabilities: ['tools'],
+    capabilities: ['tools', 'thinking'],
   },

Committable suggestion skipped: line range outside the PR's diff.

packages/typescript/ai-ollama/src/meta/model-meta-deepseek-ocr.ts-48-48 (1)

48-48: Fix typo in commented type alias.

The comment contains a typo with double underscores in the model constant name.

🔎 Proposed fix
-// export type DeepseekOcrChatModels = (typeof DEEPSEEK_OCR__MODELS)[number]
+// export type DeepseekOcrChatModels = (typeof DEEPSEEK_OCR_MODELS)[number]
packages/typescript/ai-ollama/src/meta/model-meta-llama3.2-vision.ts-61-61 (1)

61-61: Fix duplicate "export" keyword in comment.

The comment contains a typo with duplicate "export" keywords.

🔎 Proposed fix
-// export export type Llama3_2VisionChatModels = (typeof LLAMA3_2Vision_MODELS)[number]
+// export type Llama3_2VisionChatModels = (typeof LLAMA3_2_VISION_MODELS)[number]

Note: Also fixed the reference to LLAMA3_2_VISION_MODELS (was LLAMA3_2Vision_MODELS).

packages/typescript/ai-ollama/src/meta/model-meta-llama3.2-vision.ts-21-32 (1)

21-32: Correct model size for 11b variant.

The LLAMA3_2_VISION_11b model's size should be '7.8gb', not '1gb'. The ollama model registry lists the download size for llama3.2:11b-vision as approximately 7.8 GB.

packages/typescript/ai-ollama/src/meta/model-meta-llama3.2.ts-9-22 (1)

9-22: Inconsistent size format - this should be '2gb'.

Line 16 shows the size as '2b' which is inconsistent with the 'Xgb' format used throughout this file (line 31: '1.3gb', line 46: '2gb'). The actual llama3.2 model is approximately 2GB, and '2b' is not a valid unit notation. This should be '2gb' to match the established pattern and actual model specifications.

   name: 'llama3.2:latest',
   supports: {
     input: ['text'],
     output: ['text'],
     capabilities: ['tools'],
   },
-  size: '2b',
+  size: '2gb',
   context: 128_000,
packages/typescript/ai-ollama/src/model-meta.ts-362-364 (1)

362-364: Duplicate entry: LLAMA3_2_MODELS is spread twice.

LLAMA3_2_MODELS appears on both line 362 and line 364, which will result in duplicate model names in the final array. Remove one of the duplicates.

🔎 Proposed fix
   ...LLAMA3_1_MODELS,
   ...LLAMA3_2_MODELS,
   ...LLAMA3_2_VISION_MODELS,
-  ...LLAMA3_2_MODELS,
   ...LLAMA3_3_MODELS,
packages/typescript/ai-ollama/src/meta/model-meta-llama3.3.ts-16-16 (1)

16-16: Typo: Size unit should be '43gb' not '43b'.

For consistency with the LLAMA3_3_70b model (which uses '43gb'), this should include the 'g'.

🔎 Proposed fix
 const LLAMA3_3_LATEST = {
   name: 'llama3.3:latest',
   supports: {
     input: ['text'],
     output: ['text'],
     capabilities: ['tools'],
   },
-  size: '43b',
+  size: '43gb',
   context: 128_000,
 } as const satisfies OllamaModelMeta<
packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts-15-15 (1)

15-15: Typo: Size unit should be '2.9gb' not '2.9b'.

The size for LLAVA_PHI3_LATEST is '2.9b' while LLAVA_PHI3_8b uses '2.9gb'. This appears to be a typo—the 'g' is missing.

🔎 Proposed fix
 const LLAVA_PHI3_LATEST = {
   name: 'llava-phi3:latest',
   supports: {
     input: ['text', 'image'],
     output: ['text'],
     capabilities: ['vision'],
   },
-  size: '2.9b',
+  size: '2.9gb',
   context: 4_000,
 } as const satisfies OllamaModelMeta<
packages/typescript/ai-ollama/src/meta/model-meta-llama3-gradient.ts-7-18 (1)

7-18: Verify size unit: 'b' vs 'gb'.

Line 14 specifies size as '4.7b' which appears to be a typo. Based on the pattern used for the same size model at line 27 ('4.7gb'), this should likely be '4.7gb'.

🔎 Proposed fix
 const LLAMA3_GRADIENT_LATEST = {
   name: 'llama3-gradient:latest',
   supports: {
     input: ['text'],
     output: ['text'],
     capabilities: [],
   },
-  size: '4.7b',
+  size: '4.7gb',
   context: 1_000_000,
 } as const satisfies OllamaModelMeta<
   OllamaChatRequest & OllamaChatRequestMessages
 >
packages/typescript/ai-ollama/src/meta/model-meta-llava-llama3.ts-10-21 (1)

10-21: Verify size unit: 'b' vs 'gb'.

Line 17 specifies size as '5.5b' which appears to be a typo. This should likely be '5.5gb' based on the pattern used in other model metadata files.

🔎 Proposed fix
 const LLAVA_LLAMA3_LATEST = {
   name: 'llava-llama3:latest',
   supports: {
     input: ['text', 'image'],
     output: ['text'],
     capabilities: ['vision'],
   },
-  size: '5.5b',
+  size: '5.5gb',
   context: 8_000,
 } as const satisfies OllamaModelMeta<
   OllamaChatRequest & OllamaChatRequestMessages<OllamaMessageImages>
 >
packages/typescript/ai-ollama/src/meta/model-meta-granite3-dense.ts-60-60 (1)

60-60: Fix typo in commented type name.

The comment has a duplicate "3": Granite3Dense3ChatModels should be Granite3DenseChatModels.

🔎 Proposed fix
-// export type Granite3Dense3ChatModels = (typeof GRANITE3_DENSE_MODELS)[number]
+// export type Granite3DenseChatModels = (typeof GRANITE3_DENSE_MODELS)[number]
packages/typescript/ai-ollama/src/meta/model-meta-llama3.ts-7-18 (1)

7-18: Fix size unit inconsistency: Change '4.7b' to '4.7gb'.

Line 14 specifies size: '4.7b' which is inconsistent with the naming pattern used throughout the file. Both LLAMA3_8b (line 27) and LLAMA3_70b (line 40) use 'gb' suffix for their sizes ('4.7gb' and '40gb' respectively). The LLAMA3_LATEST model should also use '4.7gb' for consistency.

packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts-39-52 (1)

39-52: Variable name doesn't match model identifier.

GRANITE3_MOE_3b refers to 'granite3-moe:8b'. Consider renaming to GRANITE3_MOE_8b for consistency.

🔎 Proposed fix
-const GRANITE3_MOE_3b = {
-  name: 'granite3-moe:8b',
+const GRANITE3_MOE_8b = {
+  name: 'granite3-moe:8b',
packages/typescript/ai-ollama/src/meta/model-meta-llama3.1.ts-9-22 (1)

9-22: Typo in size field.

LLAMA3_1_LATEST has size: '4.9b' but should likely be '4.9gb' to match the convention used in other model-meta files and the LLAMA3_1_8b definition below.

🔎 Proposed fix
 const LLAMA3_1_LATEST = {
   name: 'llama3.1:latest',
   supports: {
     input: ['text'],
     output: ['text'],
     capabilities: ['tools'],
   },
-  size: '4.9b',
+  size: '4.9gb',
   context: 128_000,
packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts-24-37 (1)

24-37: Variable name doesn't match model identifier.

GRANITE3_MOE_1b refers to 'granite3-moe:2b' (line 25), and GRANITE3_MOE_3b refers to 'granite3-moe:8b' (line 40). This naming mismatch is confusing and may lead to incorrect usage.

🔎 Proposed fix
-const GRANITE3_MOE_1b = {
-  name: 'granite3-moe:2b',
+const GRANITE3_MOE_2b = {
+  name: 'granite3-moe:2b',

And update all references accordingly (GRANITE3_MOE_MODELS, type maps).

Committable suggestion skipped: line range outside the PR's diff.

packages/typescript/ai-ollama/src/meta/model-meta-llama4.ts-17-17 (1)

17-17: Inconsistent size unit format.

LLAMA4_LATEST uses '67b' while LLAMA4_16X17b uses '67gb'. The '67b' appears to be missing the "gb" unit suffix for consistency.

🔎 Suggested fix
-  size: '67b',
+  size: '67gb',
packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts-39-52 (1)

39-52: Variable name doesn't match model name.

Similar issue: GRANITE3_1_MOE_3b has name: 'granite3.1-moe:8b'. Consider renaming to GRANITE3_1_MOE_8b.

🔎 Suggested fix
-const GRANITE3_1_MOE_3b = {
+const GRANITE3_1_MOE_8b = {
   name: 'granite3.1-moe:8b',

And update all references in lines 57, 79-81, and 88.

Committable suggestion skipped: line range outside the PR's diff.

packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts-24-37 (1)

24-37: Variable name doesn't match model name.

The variable GRANITE3_1_MOE_1b has name: 'granite3.1-moe:2b'. The suffix _1b is misleading as it suggests a 1B parameter model, but the actual model is 2B. Consider renaming to GRANITE3_1_MOE_2b for clarity.

🔎 Suggested fix
-const GRANITE3_1_MOE_1b = {
+const GRANITE3_1_MOE_2b = {
   name: 'granite3.1-moe:2b',

And update all references in lines 56, 76-78, and 87.

Committable suggestion skipped: line range outside the PR's diff.

packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts-133-157 (1)

133-157: Misleading comment about model capabilities.

The comment on line 135 states "Models with thinking and structured output support", but the actual model metadata only declares ['tools'] in the capabilities array. Either the comment should be updated to match reality, or the capabilities should be enhanced if these models truly support thinking/structured output.

🔎 Fix: Update comment to match actual capabilities
 // Manual type map for per-model provider options
 export type Qwen2_5CoderChatModelProviderOptionsByName = {
-  // Models with thinking and structured output support
+  // Models with tool support
   [QWEN2_5_CODER_LATEST.name]: OllamaChatRequest &
🧹 Nitpick comments (25)
packages/typescript/ai-ollama/src/meta/model-meta-llama3-chatqa.ts (3)

50-58: Consider removing unused commented code.

The commented out sections for image, embedding, audio, and video models appear to be placeholders. If these model variants don't exist for llama3-chatqa, consider removing the comments to reduce clutter. If they're planned for future implementation, a TODO comment would make the intent clearer.


61-66: Update or remove misleading comment.

The comment on line 62 mentions "Models with thinking and structured output support", but all three models have empty capabilities: [] arrays in their definitions. Either update the comment to accurately reflect the models' capabilities or remove it.


68-73: Correct the misleading comment.

The comment on line 69 states "Models with text, image, audio, video (no document)" but all three models only support text input (input: ['text']). Update the comment to accurately reflect that these models only support text modalities.

packages/typescript/ai-ollama/src/meta/model-meta-dolphin3.ts (1)

35-43: Consider removing commented-out placeholders.

The commented-out constants for image, embedding, audio, and video models appear to be unused placeholders. Since Dolphin3 models only support text input/output (based on the metadata), consider:

  1. Removing these placeholders if they're not planned for future use, or
  2. Adding a TODO comment explaining why they're kept for future expansion

This helps maintain code clarity and reduces confusion.

packages/typescript/ai-ollama/src/meta/model-meta-llava.ts (3)

78-78: Consider PascalCase for type names.

The exported types use camelCase (llavaChatModelProviderOptionsByName, llavaModelInputModalitiesByName), but TypeScript convention typically uses PascalCase for type names (e.g., LlavaChatModelProviderOptionsByName, LlavaModelInputModalitiesByName).

🔎 Proposed refactor
-export type llavaChatModelProviderOptionsByName = {
+export type LlavaChatModelProviderOptionsByName = {
   // Models with thinking and structured output support
   [LLAVA_LATEST.name]: OllamaChatRequest &
     OllamaChatRequestMessages<OllamaMessageImages>

   [LLAVA_7b.name]: OllamaChatRequest &
     OllamaChatRequestMessages<OllamaMessageImages>

   [LLAVA_13b.name]: OllamaChatRequest &
     OllamaChatRequestMessages<OllamaMessageImages>

   [LLAVA_34b.name]: OllamaChatRequest &
     OllamaChatRequestMessages<OllamaMessageImages>
 }

-export type llavaModelInputModalitiesByName = {
+export type LlavaModelInputModalitiesByName = {
   // Models with text, image, audio, video (no document)
   [LLAVA_LATEST.name]: typeof LLAVA_LATEST.supports.input
   [LLAVA_7b.name]: typeof LLAVA_7b.supports.input
   [LLAVA_13b.name]: typeof LLAVA_13b.supports.input
   [LLAVA_34b.name]: typeof LLAVA_34b.supports.input
 }

Also applies to: 93-93


67-75: Incomplete model type exports.

The commented-out exports suggest this is a work-in-progress. If these model categories (image, embedding, audio, video) are planned for LLaVA models, they should either be implemented or removed to keep the codebase clean.

Would you like me to help complete these model type definitions, or should these commented lines be removed if they're not applicable to LLaVA models?


78-91: Consider simplifying duplicate type definitions.

All four LLaVA model entries have the same type signature: OllamaChatRequest & OllamaChatRequestMessages<OllamaMessageImages>. While this explicit mapping supports per-model type safety, you could reduce duplication with a type helper if all models share the same options.

💡 Example simplification
type LlavaCommonOptions = OllamaChatRequest & 
  OllamaChatRequestMessages<OllamaMessageImages>

export type llavaChatModelProviderOptionsByName = {
  [K in typeof LLAVA_MODELS[number]]: LlavaCommonOptions
}

Note: Only apply this if all LLaVA models will continue to share identical options. The current explicit approach is more maintainable if models diverge in the future.
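For context, here is a runnable sketch of how such a mapped type keys provider options off a `const` model list. The model names are real LLaVA tags, but the option shape is a simplified stand-in, not the package's real `OllamaChatRequest` types:

```typescript
// Simplified stand-in for the real Ollama option types (assumption: shape trimmed down).
type LlavaCommonOptions = { stream?: boolean; images?: Array<string> }

const LLAVA_MODELS = ['llava:latest', 'llava:7b', 'llava:13b', 'llava:34b'] as const

// Every model in the list maps to the shared option type.
type LlavaChatModelProviderOptionsByName = {
  [K in (typeof LLAVA_MODELS)[number]]: LlavaCommonOptions
}

// A caller picks a model name and the matching options type is checked at compile time.
function buildRequest<N extends keyof LlavaChatModelProviderOptionsByName>(
  model: N,
  options: LlavaChatModelProviderOptionsByName[N],
) {
  return { model, ...options }
}

const req = buildRequest('llava:7b', { stream: true })
```

If models later diverge, individual keys can still be overridden with an intersection or explicit entries without touching the helper.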

packages/typescript/ai-ollama/src/meta/model-meta-shieldgemma.ts (1)

74-74: Consider uncommenting and exporting the ShieldgemmaChatModels type alias.

The type alias provides a convenient union type for all Shieldgemma chat models, improving developer experience and consistency with other model-meta modules. Unless there's a specific reason to keep it commented, it should be exported.

🔎 Proposed change
-// export type ShieldgemmaChatModels = (typeof SHIELDGEMMA_MODELS)[number]
+export type ShieldgemmaChatModels = (typeof SHIELDGEMMA_MODELS)[number]
packages/typescript/ai-ollama/src/meta/model-meta-command-r.ts (1)

1-69: Well-structured model metadata following established patterns.

The file correctly implements the per-model metadata pattern with type-safe provider options. One minor issue: the comment on line 56 states "Models with thinking and structured output support" but these models have ['tools'] capability, not 'thinking'. Consider updating the comment to accurately reflect the supported capabilities.

🔎 Suggested comment fix
 // Manual type map for per-model provider options
 export type CommandRChatModelProviderOptionsByName = {
-  // Models with thinking and structured output support
+  // Models with tools support
   [COMMAND_R_LATEST.name]: OllamaChatRequest &
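As background on the per-model metadata pattern these files follow, a minimal self-contained sketch of `as const satisfies`: the literal keeps its narrow type (so the name can key a ByName map) while `satisfies` still validates the object's shape. The `ModelMeta` and `ChatRequest` shapes below are simplified assumptions, and the size figure is illustrative, not taken from the Ollama library:

```typescript
type ModelMeta<TOptions> = {
  name: string
  supports: {
    input: ReadonlyArray<'text' | 'image'>
    capabilities: ReadonlyArray<'tools' | 'thinking'>
  }
  size: string
  context: number
  // phantom slot so TOptions participates in checking
  __options?: TOptions
}

type ChatRequest = { stream?: boolean; format?: string | object }

// `as const` keeps 'command-r:latest' as a literal type;
// `satisfies` still verifies the object against ModelMeta.
const COMMAND_R_LATEST = {
  name: 'command-r:latest',
  supports: { input: ['text'], capabilities: ['tools'] },
  size: '18gb',
  context: 128_000,
} as const satisfies ModelMeta<ChatRequest>

// The literal name can then be used as a computed key in a type map.
type ByName = { [COMMAND_R_LATEST.name]: ChatRequest }
const opts: ByName['command-r:latest'] = { stream: false }
```

This is why plain `as const` alone would not catch a capabilities/type-constraint mismatch: the `satisfies` check is what enforces the meta shape.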
packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts (1)

48-53: Inaccurate comment, plus a typo in the type name.

The comment states "Models with thinking and structured output support" but capabilities is an empty array for these models. Update the comment to reflect the actual capabilities; the suggested fix below also corrects the misspelled TinnyllamaChatModelProviderOptionsByName to TinyllamaChatModelProviderOptionsByName.
🔎 Suggested fix
 // Manual type map for per-model provider options
-export type TinnyllamaChatModelProviderOptionsByName = {
-  // Models with thinking and structured output support
+export type TinyllamaChatModelProviderOptionsByName = {
+  // Basic text models
packages/typescript/ai-ollama/src/meta/model-meta-olmo2.ts (1)

1-75: LGTM with minor comment improvement.

The model metadata structure is correct and follows the established pattern. The comment on line 64 states "Models with thinking and structured output support" but these models have empty capabilities arrays. Consider updating to "Basic text models" or similar.

packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts (1)

10-11: Naming inconsistency: constant uses "OPT" but model name uses "gpt".

The constant OPT_OSS_LATEST uses "OPT" prefix but the model name is 'gpt-oss:latest'. Consider using GPT_OSS_LATEST for consistency with the model name and the exported GPT_OSS_MODELS array name.

packages/typescript/ai-ollama/src/meta/model-meta-exaone3.5.ts (1)

59-64: Inconsistent naming pattern for MODELS constant.

EXAONE3_5MODELS is missing an underscore before MODELS compared to other files (e.g., GEMMA2_MODELS, OLMO2_MODELS). Consider using EXAONE3_5_MODELS for consistency.

packages/typescript/ai-ollama/src/meta/model-meta-opencoder.ts (1)

1-75: LGTM with minor comment improvement.

The model metadata structure is correct and follows the established pattern. The comment on line 64 stating "Models with thinking and structured output support" is inaccurate since these models have empty capabilities arrays. Consider updating to "Basic text models".

packages/typescript/ai-ollama/src/meta/model-meta-gemma2.ts (1)

1-91: LGTM with minor comment improvement.

The model metadata structure is correct and follows the established pattern for per-model type safety. As with other files, the comment on line 78 stating "Models with thinking and structured output support" is inaccurate for models with empty capabilities arrays.

packages/typescript/ai-ollama/src/meta/model-meta-codegemma.ts (1)

20-44: Variable names don't match the model names they define.

  • CODEGEMMA_8b defines model name 'codegemma:2b'
  • CODEGEMMA_35b defines model name 'codegemma:7b'

This is confusing and could lead to maintenance issues. Consider renaming the constants to match the actual model sizes.

🔎 Proposed fix
-const CODEGEMMA_8b = {
+const CODEGEMMA_2b = {
   name: 'codegemma:2b',
   supports: {
     input: ['text'],
     output: ['text'],
     capabilities: [],
   },
   size: '1.65gb',
   context: 8_000,
 } as const satisfies OllamaModelMeta<
   OllamaChatRequest & OllamaChatRequestMessages
 >

-const CODEGEMMA_35b = {
+const CODEGEMMA_7b = {
   name: 'codegemma:7b',
   supports: {
     input: ['text'],
     output: ['text'],
     capabilities: [],
   },
   size: '5gb',
   context: 8_000,
 } as const satisfies OllamaModelMeta<
   OllamaChatRequest & OllamaChatRequestMessages
 >

Then update the references in CODEGEMMA_MODELS and the type maps accordingly.

packages/typescript/ai-ollama/src/meta/model-meta-phi3.ts (1)

20-31: Consider renaming constant for clarity.

The constant name PHI3_3_8b (with "3_8") is slightly confusing. Since the model name is 'phi3:8b', consider renaming to PHI3_8b for consistency with naming patterns in other model metadata files (e.g., LLAMA3_8b in model-meta-llama3.ts).

packages/typescript/ai-ollama/src/meta/model-meta-sailor2.ts (1)

44-45: Minor formatting: missing blank line between model definitions.

For consistency with other model definitions in this file, add a blank line between SAILOR2_8b and SAILOR2_20b.

🔎 Proposed fix
 } as const satisfies OllamaModelMeta<
   OllamaChatRequest & OllamaChatRequestMessages
 >
+
 const SAILOR2_20b = {
packages/typescript/ai-ollama/src/meta/model-meta-phi4.ts (2)

45-50: Misleading comment about model capabilities.

The comment states "Models with thinking and structured output support" but the model definitions on lines 12 and 25 show capabilities: [] (empty). Consider updating the comment to reflect actual capabilities, or remove it.

🔎 Suggested fix
 // Manual type map for per-model provider options
 export type Phi4ChatModelProviderOptionsByName = {
-  // Models with thinking and structured output support
+  // Basic chat models (text only)
   [PHI4_LATEST.name]: OllamaChatRequest & OllamaChatRequestMessages
   [PHI4_14b.name]: OllamaChatRequest & OllamaChatRequestMessages
 }

52-56: Misleading comment about input modalities.

The comment mentions "text, image, audio, video" but these models only support ['text'] input per lines 10 and 23.

🔎 Suggested fix
 export type Phi4ModelInputModalitiesByName = {
-  // Models with text, image, audio, video (no document)
+  // Text-only input models
   [PHI4_LATEST.name]: typeof PHI4_LATEST.supports.input
   [PHI4_14b.name]: typeof PHI4_14b.supports.input
 }
packages/typescript/ai-ollama/src/meta/model-meta-llama4.ts (2)

72-83: Type name prefix inconsistency with model constants.

The type is named Llama3_4ChatModelProviderOptionsByName but the model constants use LLAMA4_* prefix. This appears to be a copy-paste artifact. Consider renaming for consistency.

🔎 Suggested fix
-export type Llama3_4ChatModelProviderOptionsByName = {
+export type Llama4ChatModelProviderOptionsByName = {
   // Models with thinking and structured output support

85-90: Type name prefix inconsistency.

Same issue as above - Llama3_4ModelInputModalitiesByName should be Llama4ModelInputModalitiesByName to match the model family.

🔎 Suggested fix
-export type Llama3_4ModelInputModalitiesByName = {
+export type Llama4ModelInputModalitiesByName = {
packages/typescript/ai-ollama/src/meta/model-meta-marco-o1.ts (1)

45-50: Misleading comment about model capabilities.

The comment states "Models with thinking and structured output support" but both models have capabilities: []. Marco-O1 is a reasoning model, so if it does support thinking, the capabilities array should reflect that.

Please verify Marco-O1's actual capabilities and either update the comment or add 'thinking' to the capabilities array if applicable.

packages/typescript/ai-ollama/src/meta/model-meta-deepseek-v3.1.ts (1)

76-90: Consider consistent type naming convention.

The type name Deepseekv3_1ChatModelProviderOptionsByName uses lowercase 'v' which differs from the constant naming pattern DEEPSEEK_V3_1_*. For consistency with other model families, consider DeepseekV3_1ChatModelProviderOptionsByName.

packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts (1)

123-131: Consider removing or uncommenting the type alias.

The commented-out placeholder arrays (lines 123-130) appear to be intentional scaffolding for future modalities. However, the Qwen2_5CoderChatModels type alias (line 131) could be useful now as it provides a union type of all model names.

💡 Option: Enable the model names type alias
-// export type Qwen2_5CoderChatModels = (typeof QWEN2_5_CODER_MODELS)[number]
+export type Qwen2_5CoderChatModels = (typeof QWEN2_5_CODER_MODELS)[number]

This would provide a convenient type for any code that needs to reference valid Qwen2.5 Coder model names.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 511477c and 4a688da.

📒 Files selected for processing (3)
  • packages/typescript/ai-ollama/src/adapters/summarize.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
  • packages/typescript/ai-ollama/src/model-meta.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/typescript/ai-ollama/src/adapters/summarize.ts
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.{ts,tsx}: Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from /adapters subpath rather than monolithic adapters
Use Zod for runtime schema validation and type inference, particularly for tool input/output definitions with toolDefinition() and Zod schema inference
Implement isomorphic tool system using toolDefinition() with .server() and .client() implementations for dual-environment execution
Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety
Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses

Files:

  • packages/typescript/ai-ollama/src/model-meta.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Use camelCase for function and variable names throughout the codebase

Files:

  • packages/typescript/ai-ollama/src/model-meta.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
packages/typescript/*/src/model-meta.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Maintain model metadata files that define provider options and capabilities per model for per-model type safety

Files:

  • packages/typescript/ai-ollama/src/model-meta.ts
🧠 Learnings (4)
📓 Common learnings
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/model-meta.ts : Maintain model metadata files that define provider options and capabilities per model for per-model type safety
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/model-meta.ts : Maintain model metadata files that define provider options and capabilities per model for per-model type safety

Applied to files:

  • packages/typescript/ai-ollama/src/model-meta.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/adapters/*.ts : Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking

Applied to files:

  • packages/typescript/ai-ollama/src/model-meta.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety

Applied to files:

  • packages/typescript/ai-ollama/src/model-meta.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
🧬 Code graph analysis (1)
packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (7)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (28-39)
  • OllamaChatRequestMessages (62-75)
  • OllamaMessageTools (85-88)
  • OllamaMessageThinking (77-79)
  • OllamaChatRequestTools (49-51)
  • OllamaChatRequestThinking_OpenAI (45-47)
🔇 Additional comments (5)
packages/typescript/ai-ollama/src/model-meta.ts (3)

1-67: LGTM! Constant imports are well-organized.

The constant imports follow a consistent naming convention (SCREAMING_SNAKE_CASE) and are logically grouped.


399-464: Type intersection structure is correct.

The intersection type approach correctly aggregates provider options from all model families, enabling per-model type safety as intended by the codebase architecture.


466-531: Type intersection structure is correct.

The intersection type approach correctly aggregates input modalities from all model families, providing comprehensive type coverage across the entire model set.
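To illustrate the aggregation described here, a small sketch of intersecting per-family ByName maps into one registry type. The family names and option shapes are invented for the example, not the package's real families:

```typescript
// Two hypothetical per-family maps, each keyed by model name.
type FamilyAOptionsByName = {
  'family-a:latest': { stream?: boolean }
  'family-a:7b': { stream?: boolean }
}
type FamilyBOptionsByName = {
  'family-b:latest': { think?: boolean }
}

// The intersection merges the maps; each key keeps its own option type.
type AllOptionsByName = FamilyAOptionsByName & FamilyBOptionsByName

function withOptions<N extends keyof AllOptionsByName>(
  model: N,
  options: AllOptionsByName[N],
) {
  return { model, options }
}

const call = withOptions('family-b:latest', { think: true })
```

Because the maps have disjoint keys, the intersection behaves like a merged record, and indexing with a specific model name resolves to exactly that family's options.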

packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts (2)

1-9: LGTM!

The imports are well-structured using type-only imports for optimal tree-shaking, and all imported types are appropriately used throughout the file.


11-25: LGTM!

OPT_OSS_LATEST correctly aligns its capabilities array (including 'thinking') with its type constraint (including thinking-related types). This maintains proper type safety.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 8

🧹 Nitpick comments (4)
packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts (1)

41-49: Consider removing unused commented placeholder sections.

These commented sections for IMAGE, EMBEDDING, AUDIO, VIDEO models and the ChatModels type appear to be placeholders. If these model types are not applicable to Mistral, consider removing these comments to reduce clutter. If they're intended for future use, consider adding a TODO comment explaining the intent.

packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts (1)

124-132: Consider removing unused commented template code.

The commented placeholder code for image, embedding, audio, and video models doesn't apply to the Qwen 2.5 Coder family. If these placeholders aren't serving a documentation or future-expansion purpose, removing them would reduce noise.

packages/typescript/ai-ollama/src/meta/model-meta-llama3.1.ts (2)

87-101: Comment doesn't match model capabilities.

Line 88 states "Models with thinking and structured output support" but the model definitions (lines 14, 29, 44, 59) only declare ['tools'] capability—none include 'thinking'.

🔎 Suggested comment correction
 // Manual type map for per-model provider options
 export type Llama3_1ChatModelProviderOptionsByName = {
-  // Models with thinking and structured output support
+  // Models with tools support
   [LLAMA3_1_LATEST.name]: OllamaChatRequest &

Based on learnings, capability comments should be uniform and accurate.


103-109: Comment incorrectly describes input modalities.

Line 104 claims "Models with text, image, audio, video (no document)" but all model definitions only specify input: ['text'] in their supports objects. The comment should accurately reflect that these models only support text input.

🔎 Suggested comment correction
 export type Llama3_1ModelInputModalitiesByName = {
-  // Models with text, image, audio, video (no document)
+  // Models with text input only
   [LLAMA3_1_LATEST.name]: typeof LLAMA3_1_LATEST.supports.input

Based on learnings, maintain uniform templates but ensure comments accurately describe the actual capabilities.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4a688da and 2e8eb92.

📒 Files selected for processing (11)
  • packages/typescript/ai-ollama/src/adapters/summarize.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-gradient.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-small.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts
  • packages/typescript/ai-ollama/src/model-meta.ts
🚧 Files skipped from review as they are similar to previous changes (6)
  • packages/typescript/ai-ollama/src/adapters/summarize.ts
  • packages/typescript/ai-ollama/src/model-meta.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-gradient.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-small.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts
🧰 Additional context used
📓 Path-based instructions (2)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.{ts,tsx}: Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from /adapters subpath rather than monolithic adapters
Use Zod for runtime schema validation and type inference, particularly for tool input/output definitions with toolDefinition() and Zod schema inference
Implement isomorphic tool system using toolDefinition() with .server() and .client() implementations for dual-environment execution
Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety
Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses

Files:

  • packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts
**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Use camelCase for function and variable names throughout the codebase

Files:

  • packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts
🧠 Learnings (5)
📓 Common learnings
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/model-meta.ts : Maintain model metadata files that define provider options and capabilities per model for per-model type safety
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/adapters/*.ts : Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from `/adapters` subpath rather than monolithic adapters
📚 Learning: 2025-12-27T20:22:46.710Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts:92-97
Timestamp: 2025-12-27T20:22:46.710Z
Learning: Enforce that all model-meta files under packages/typescript/ai-ollama/src/meta/model-meta-*.ts use a standard, uniform template for capability-related comments, even if the text doesn’t perfectly align with each specific model capability. This ensures consistency across the codebase. Include a brief note in the template when a capability is not applicable, and avoid deviating from the established comment structure (e.g., header, template fields, and formatting) to maintain readability and tooling familiarity.

Applied to files:

  • packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/model-meta.ts : Maintain model metadata files that define provider options and capabilities per model for per-model type safety

Applied to files:

  • packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts
📚 Learning: 2025-12-27T19:48:57.631Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts:7-18
Timestamp: 2025-12-27T19:48:57.631Z
Learning: When reviewing or updating metadata in the Ollama model metadata module (e.g., files under packages/typescript/ai-ollama/src/meta/), always treat https://ollama.com/library/ as the authoritative source for model specifications (size, context window, capabilities). Validate only against the official docs and avoid citing non-official sources. If discrepancies arise, reference the official docs and ensure metadata aligns with their specifications.

Applied to files:

  • packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety

Applied to files:

  • packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts
🧬 Code graph analysis (3)
packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (7)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (28-39)
  • OllamaChatRequestMessages (62-75)
  • OllamaMessageTools (85-88)
  • OllamaMessageThinking (77-79)
  • OllamaChatRequestTools (49-51)
  • OllamaChatRequestThinking_OpenAI (45-47)
packages/typescript/ai-ollama/src/meta/model-meta-llama3.1.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (5)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (28-39)
  • OllamaChatRequestMessages (62-75)
  • OllamaMessageTools (85-88)
  • OllamaChatRequestTools (49-51)
packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (5)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (28-39)
  • OllamaChatRequestMessages (62-75)
  • OllamaMessageTools (85-88)
  • OllamaChatRequestTools (49-51)
🔇 Additional comments (10)
packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts (5)

1-9: LGTM: Imports are clean and complete.

All necessary types are imported and used appropriately throughout the file.


27-41: LGTM: Type safety is consistent.

The model metadata structure is correct, and the capabilities array now properly includes 'thinking' to match the type constraints. This resolves the previously flagged critical issue.


43-57: LGTM: Type safety is consistent.

The model metadata is correctly structured with matching capabilities and type constraints. The larger size (65gb vs 14gb) is appropriate for a 120b parameter model.


59-97: LGTM: Type maps provide proper per-model type safety.

The exports and type maps follow the established pattern from other model-meta files. The type maps correctly provide per-model provider options and input modality types, enabling compile-time type safety for model-specific configurations.

Note: The comment on line 93 follows the standard template format used across all model-meta files for consistency, as per coding guidelines.


11-25: Model specifications are accurate; code is ready.

The gpt-oss:latest specifications in the code match the official Ollama library: size is 14GB and context window is 128K tokens. Type constraints correctly reflect the declared capabilities for tools and thinking support.

packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts (2)

1-7: LGTM!

The imports are correct and all imported types are properly utilized in the model definitions and type maps.


9-22: Specifications verified as accurate.

The mistral:latest specifications in the code are correct: size (4.4gb) and context window (32,000) match the official Ollama library.

packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts (2)

160-169: Correct inaccurate modality comment.

The comment "Models with text, image, audio, video (no document)" doesn't match the actual input modalities. All Qwen 2.5 Coder models only support ['text'] input.

🔎 Suggested fix
-  // Models with text, image, audio, video (no document)
+  // Models with text input only
   [QWEN2_5_CODER_LATEST.name]: typeof QWEN2_5_CODER_LATEST.supports.input
⛔ Skipped due to learnings
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Maintain type safety through multimodal content support (image, audio, video, document) with model capability awareness
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts:92-97
Timestamp: 2025-12-27T20:22:46.710Z
Learning: In the ai-ollama package's model-meta files (packages/typescript/ai-ollama/src/meta/model-meta-*.ts), capability-related comments follow a standard template format across all files for consistency, even if the comment text doesn't precisely match individual model capabilities. This is an intentional design choice to maintain uniformity across the codebase.

9-112: Update context windows for larger model variants to reflect actual Qwen capabilities.

Model sizes are accurate, but the context window specification is incomplete. While Ollama displays 32K by default, Qwen documentation confirms that qwen2.5-coder models 7b, 14b, and 32b support up to 128K (131,072) tokens. Update the context property to 128_000 for QWEN2_5_CODER_7b, QWEN2_5_CODER_14b, and QWEN2_5_CODER_32b to reflect their actual capabilities; the smaller variants (0.5b, 1.5b, 3b) remain at 32K.

⛔ Skipped due to learnings
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts:92-97
Timestamp: 2025-12-27T20:22:46.710Z
Learning: In the ai-ollama package's model-meta files (packages/typescript/ai-ollama/src/meta/model-meta-*.ts), capability-related comments follow a standard template format across all files for consistency, even if the comment text doesn't precisely match individual model capabilities. This is an intentional design choice to maintain uniformity across the codebase.
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/model-meta.ts : Maintain model metadata files that define provider options and capabilities per model for per-model type safety
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts:7-18
Timestamp: 2025-12-27T19:48:57.631Z
Learning: For Ollama model metadata in the ai-ollama package, always consult https://ollama.com/library/ as the authoritative source for model specifications (size, context window, capabilities) rather than other documentation sources.
packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts (1)

46-60: LGTM! Export structure follows the established pattern.

The model names export and placeholder comments for future model categories are consistent with other model-meta files in the codebase.

@harry-whorlow harry-whorlow force-pushed the ollama-temp branch 2 times, most recently from 9e7373a to 9a6ff6c Compare December 27, 2025 21:59
Comment on lines 15 to 70
// interface ChatRequest {
// model: string
// messages?: Message[]
// stream?: boolean
// format?: string | object
// keep_alive?: string | number
// tools?: Tool[]
// think?: boolean | 'high' | 'medium' | 'low'
// logprobs?: boolean
// top_logprobs?: number
// options?: Partial<Options>
// }

export interface OllamaChatRequest {
// model: string
// messages?: Message[]
stream?: boolean
format?: string | object
keep_alive?: string | number
// tools?: Tool[]
// think?: boolean | 'high' | 'medium' | 'low'
logprobs?: boolean
top_logprobs?: number
options?: Partial<Options>
}
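
For reference, a minimal sketch of how this request shape might be populated when posting to Ollama's `/api/chat` endpoint. The `Options` fields and the `model` intersection shown here are illustrative assumptions for the example, not part of the interface above:

```typescript
// Illustrative sketch only: mirrors the OllamaChatRequest fields above.
// The Options fields (temperature, num_ctx) are assumptions for this example.
interface Options {
  temperature?: number
  num_ctx?: number
}

interface OllamaChatRequest {
  stream?: boolean
  format?: string | object
  keep_alive?: string | number
  logprobs?: boolean
  top_logprobs?: number
  options?: Partial<Options>
}

// A request body as it might be posted to Ollama's /api/chat endpoint;
// `model` is intersected in alongside the option fields.
const body: OllamaChatRequest & { model: string } = {
  model: 'mistral:7b',
  stream: false,
  keep_alive: '5m',
  options: { temperature: 0.2 },
}
```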
Contributor Author

With the comments, it would be nice to have them for reference during dev work... I'll remove the comments from the OllamaChatRequest type before the PR merge

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (1)

45-47: Consider using camelCase for type name.

The type name OllamaChatRequestThinking_OpenAI uses an underscore, which doesn't follow the camelCase naming convention specified in the coding guidelines.

🔎 Suggested refactor
-export interface OllamaChatRequestThinking_OpenAI {
+export interface OllamaChatRequestThinkingOpenAI {
   think?: 'low' | 'medium' | 'high'
 }
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9e7373a and 344d236.

📒 Files selected for processing (3)
  • examples/ts-react-chat/src/routes/api.tanchat.ts
  • packages/typescript/ai-ollama/src/adapters/summarize.ts
  • packages/typescript/ai-ollama/src/meta/models-meta.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/typescript/ai-ollama/src/adapters/summarize.ts
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.{ts,tsx}: Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from /adapters subpath rather than monolithic adapters
Use Zod for runtime schema validation and type inference, particularly for tool input/output definitions with toolDefinition() and Zod schema inference
Implement isomorphic tool system using toolDefinition() with .server() and .client() implementations for dual-environment execution
Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety
Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses

Files:

  • examples/ts-react-chat/src/routes/api.tanchat.ts
  • packages/typescript/ai-ollama/src/meta/models-meta.ts
**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Use camelCase for function and variable names throughout the codebase

Files:

  • examples/ts-react-chat/src/routes/api.tanchat.ts
  • packages/typescript/ai-ollama/src/meta/models-meta.ts
examples/**

📄 CodeRabbit inference engine (CLAUDE.md)

Examples are not built by Nx and should be run independently from their directories with pnpm dev or pnpm install && pnpm dev

Files:

  • examples/ts-react-chat/src/routes/api.tanchat.ts
🧠 Learnings (8)
📓 Common learnings
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts:92-97
Timestamp: 2025-12-27T20:22:46.710Z
Learning: In the ai-ollama package's model-meta files (packages/typescript/ai-ollama/src/meta/model-meta-*.ts), capability-related comments follow a standard template format across all files for consistency, even if the comment text doesn't precisely match individual model capabilities. This is an intentional design choice to maintain uniformity across the codebase.
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/model-meta.ts : Maintain model metadata files that define provider options and capabilities per model for per-model type safety
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/adapters/*.ts : Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from `/adapters` subpath rather than monolithic adapters
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts:7-18
Timestamp: 2025-12-27T19:48:57.631Z
Learning: For Ollama model metadata in the ai-ollama package, always consult https://ollama.com/library/ as the authoritative source for model specifications (size, context window, capabilities) rather than other documentation sources.
📚 Learning: 2025-12-27T20:22:46.710Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts:92-97
Timestamp: 2025-12-27T20:22:46.710Z
Learning: In the ai-ollama package's model-meta files (packages/typescript/ai-ollama/src/meta/model-meta-*.ts), capability-related comments follow a standard template format across all files for consistency, even if the comment text doesn't precisely match individual model capabilities. This is an intentional design choice to maintain uniformity across the codebase.

Applied to files:

  • examples/ts-react-chat/src/routes/api.tanchat.ts
  • packages/typescript/ai-ollama/src/meta/models-meta.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/model-meta.ts : Maintain model metadata files that define provider options and capabilities per model for per-model type safety

Applied to files:

  • examples/ts-react-chat/src/routes/api.tanchat.ts
  • packages/typescript/ai-ollama/src/meta/models-meta.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/adapters/*.ts : Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking

Applied to files:

  • examples/ts-react-chat/src/routes/api.tanchat.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety

Applied to files:

  • examples/ts-react-chat/src/routes/api.tanchat.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from `/adapters` subpath rather than monolithic adapters

Applied to files:

  • examples/ts-react-chat/src/routes/api.tanchat.ts
📚 Learning: 2025-12-27T21:39:23.584Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts:70-75
Timestamp: 2025-12-27T21:39:23.584Z
Learning: The standard template comments in ai-ollama model-meta files (like "Models with text, image, audio, video (no document)") should not be modified to match individual model capabilities, as they are intentionally kept uniform across all model-meta-*.ts files for consistency, regardless of what each specific model actually supports.

Applied to files:

  • packages/typescript/ai-ollama/src/meta/models-meta.ts
📚 Learning: 2025-12-27T19:48:57.631Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts:7-18
Timestamp: 2025-12-27T19:48:57.631Z
Learning: When reviewing or updating metadata in the Ollama model metadata module (e.g., files under packages/typescript/ai-ollama/src/meta/), always treat https://ollama.com/library/ as the authoritative source for model specifications (size, context window, capabilities). Validate only against the official docs and avoid citing non-official sources. If discrepancies arise, reference the official docs and ensure metadata aligns with their specifications.

Applied to files:

  • packages/typescript/ai-ollama/src/meta/models-meta.ts
🔇 Additional comments (1)
examples/ts-react-chat/src/routes/api.tanchat.ts (1)

95-96: No issues found. The gpt-oss:120b model is valid in Ollama's library, and the think: 'low' option is correctly typed per the OllamaChatRequestThinking_OpenAI interface which defines valid values as 'low' | 'medium' | 'high'. The model metadata properly includes this capability and the implementation follows type-safe per-model configuration patterns.
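
The per-model option typing this verifies can be sketched in isolation. The names below (`chat`, `OptionsByName`, `ThinkingOptions`) are hypothetical, simplified stand-ins for the package's actual types:

```typescript
// Hypothetical sketch of per-model provider option typing.
interface ThinkingOptions {
  think?: 'low' | 'medium' | 'high'
}

// Map each model name to the options it accepts; mistral gets no `think`.
type OptionsByName = {
  'gpt-oss:120b': ThinkingOptions
  'mistral:7b': { think?: never }
}

// The model name parameter narrows which options compile.
function chat<M extends keyof OptionsByName>(
  model: M,
  options: OptionsByName[M],
): { model: M; options: OptionsByName[M] } {
  return { model, options }
}

const ok = chat('gpt-oss:120b', { think: 'low' })
// chat('mistral:7b', { think: 'low' }) // would be a compile-time error
```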

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 11

Fix all issues with AI Agents 🤖
In @packages/typescript/ai-ollama/src/meta/model-meta-codegemma.ts:
- Around line 20-31: The variable names for the CodeGemma models are mismatched
to their declared model names; rename CODEGEMMA_8b → CODEGEMMA_2b and
CODEGEMMA_35b → CODEGEMMA_7b so the constants reflect name: 'codegemma:2b' and
name: 'codegemma:7b', then update all usages accordingly (including the
CODEGEMMA_MODELS array and any exported type aliases or union types) to
reference CODEGEMMA_2b and CODEGEMMA_7b instead of the old identifiers so names
and variables are consistent across the file.

In @packages/typescript/ai-ollama/src/meta/model-meta-deepseek-v3.1.ts:
- Around line 26-41: The DEEPSEEK_V3_1_671b model meta has a name mismatch and
extra blank line: update the name string in DEEPSEEK_V3_1_671b from
'deepseek-v3.1:671' to 'deepseek-v3.1:671b' to include the missing 'b' suffix
and remove the extra blank line before the size field so the object fields
follow the same compact formatting as other model entries.

In @packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts:
- Around line 39-52: The constant GRANITE3_1_MOE_3b currently points to a
non-existent model tag ("granite3.1-moe:8b") and its variable name mismatches
the model; change the model name string to "granite3.1-moe:3b" so the constant
GRANITE3_1_MOE_3b and its name field match, and verify the size/context fields
remain correct; likewise locate GRANITE3_1_MOE_1b which currently references
"granite3.1-moe:2b" and update its name field to "granite3.1-moe:1b" (and adjust
the size field to the correct value for 1b if needed) so variable names and
model tags align with the official variants.
- Around line 24-37: The constant names GRANITE3_1_MOE_1b and GRANITE3_1_MOE_3b
do not match their assigned model tag strings (they currently use
"granite3.1-moe:2b" and "granite3.1-moe:8b"); update the string values to the
correct Ollama tags "granite3.1-moe:1b" for GRANITE3_1_MOE_1b and
"granite3.1-moe:3b" for GRANITE3_1_MOE_3b so the variable suffixes align with
the actual model identifiers used by Ollama (adjust the name property inside
each object, leaving types and other fields unchanged).
- Around line 54-89: Update the Ollama model name constants and context window
values: change the name of GRANITE3_1_MOE_1b to 'granite3.1-moe:1b' (was
'...:2b') and GRANITE3_1_MOE_3b to 'granite3.1-moe:3b' (was '...:8b'), and
adjust the context window value from 128000 to 131072 in the model
metadata/support definitions (e.g., the supports.context or similar fields) for
GRANITE3_1_MOE_LATEST, GRANITE3_1_MOE_1b, and GRANITE3_1_MOE_3b so the names and
token limits match official Ollama specs.

In @packages/typescript/ai-ollama/src/meta/model-meta-llama3.2.ts:
- Around line 9-22: LLAMA3_2_LATEST has an incorrect size string '2b'; change
the size property on the LLAMA3_2_LATEST constant to use the same GB format as
other entries (set size: '2gb') so the model metadata matches Ollama docs and
other model entries.

In @packages/typescript/ai-ollama/src/meta/model-meta-llama3.3.ts:
- Around line 9-22: The LLAMA3_3_LATEST constant has an inconsistent size string
('43b'); update the size property in LLAMA3_3_LATEST (the exported constant
named LLAMA3_3_LATEST) to match the other metadata entries by changing '43b' to
'43gb' so the unit is consistent with LLAMA3_3_70b and other model-meta files.
- Around line 24-37: The model constant LLAMA3_3_70b has an incorrect name
value; change the name property from 'llama3.3:8b' to 'llama3.3:70b' in the
LLAMA3_3_70b object so it matches the actual Ollama model (keep the rest of the
object, including size '43gb' and context 128_000, and the existing
OllamaModelMeta type assertion, unchanged).

In @packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts:
- Around line 8-32: Update the LLAVA_PHI3_LATEST constant by changing size from
'2.9b' to '2.9gb', and remove the non-existent LLAVA_PHI3_8b constant (or
replace it with a real variant like
'llava-phi3:3.8b-mini-fp16'/'llava-phi3:3.8b-mini-q4_0' if you prefer) ensuring
any replacement uses 'gb' for the size field (e.g., '3.8gb'); adjust references
to LLAVA_PHI3_8b elsewhere if present.

In @packages/typescript/ai-ollama/src/meta/model-meta-llava.ts:
- Around line 8-19: The LLAVA_LATEST model meta has a typo in the size field;
update the size value on the LLAVA_LATEST constant from '4.7b' to '4.7gb' so it
matches the format used by other model entries (e.g., LLAVA_7b) and retains the
as const satisfies OllamaModelMeta typing.
♻️ Duplicate comments (2)
packages/typescript/ai-ollama/src/meta/models-meta.ts (1)

62-75: Critical: Type intersection produces never type.

This issue was previously flagged but remains unresolved. When TMessageExtension defaults to undefined, the intersection { role: string; content: string } & undefined evaluates to never, making the messages array unusable without explicitly providing a type parameter.

🔎 Proposed fix (Option 1: Empty object default)
 export interface OllamaChatRequestMessages<
-  TMessageExtension extends OllamaMessageExtension = undefined,
+  TMessageExtension = {},
 > {
   messages?: Array<
     {
       role: string
       content: string
-      //   thinking?: string
-      //   images?: Uint8Array[] | string[]
-      //   tool_calls?: ToolCall[]
-      //   tool_name?: string
     } & TMessageExtension
   >
 }
🔎 Proposed fix (Option 2: Conditional type)
 export interface OllamaChatRequestMessages<
   TMessageExtension extends OllamaMessageExtension = undefined,
 > {
   messages?: Array<
     {
       role: string
       content: string
-      //   thinking?: string
-      //   images?: Uint8Array[] | string[]
-      //   tool_calls?: ToolCall[]
-      //   tool_name?: string
-    } & TMessageExtension
+    } & (TMessageExtension extends undefined ? {} : TMessageExtension)
   >
 }
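
A minimal, self-contained repro of the pitfall and the conditional-type fix, using simplified names rather than the package's actual types:

```typescript
// Minimal repro with simplified, hypothetical names.
type BaseMessage = { role: string; content: string }

// Intersecting an object type with `undefined` collapses to `never`,
// so Array<BaseMessage & undefined> can never hold a message:
type Broken = BaseMessage & undefined

// The conditional-type fix treats `undefined` as "no extension":
type Message<TExt = undefined> = BaseMessage &
  (TExt extends undefined ? {} : TExt)

// With the default, plain messages are assignable again:
const plain: Message = { role: 'user', content: 'hi' }

// With an extension, its extra fields are enforced as expected:
const tool: Message<{ tool_name: string }> = {
  role: 'tool',
  content: 'ok',
  tool_name: 'search',
}
```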
packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts (1)

9-22: Context window values remain incorrect.

Both MISTRAL_LATEST (Line 17) and MISTRAL_7b (Line 32) still specify context: 32_000, but the official Ollama library documentation indicates the context window should be 32_768 tokens. This issue was previously flagged and marked as addressed, but the incorrect values persist in the current code.

🔎 Proposed fix
 const MISTRAL_LATEST = {
   name: 'mistral:latest',
   supports: {
     input: ['text'],
     output: ['text'],
     capabilities: ['tools'],
   },
   size: '4.4gb',
-  context: 32_000,
+  context: 32_768,
 } as const satisfies OllamaModelMeta<
   OllamaChatRequest &
     OllamaChatRequestMessages<OllamaMessageTools> &
     OllamaChatRequestTools
 >

 const MISTRAL_7b = {
   name: 'mistral:7b',
   supports: {
     input: ['text'],
     output: ['text'],
     capabilities: ['tools'],
   },
   size: '4.4gb',
-  context: 32_000,
+  context: 32_768,
 } as const satisfies OllamaModelMeta<
   OllamaChatRequest &
     OllamaChatRequestMessages<OllamaMessageTools> &
     OllamaChatRequestTools
 >

Also applies to: 24-37

🧹 Nitpick comments (5)
packages/typescript/ai-ollama/src/meta/model-meta-deepseek-ocr.ts (1)

48-48: Typo in commented code.

The commented type alias references DEEPSEEK_OCR__MODELS (double underscore) but the actual export is DEEPSEEK_OCR_MODELS (single underscore).

🔎 Suggested fix
-// export type DeepseekOcrChatModels = (typeof DEEPSEEK_OCR__MODELS)[number]
+// export type DeepseekOcrChatModels = (typeof DEEPSEEK_OCR_MODELS)[number]
packages/typescript/ai-ollama/src/meta/model-meta-exaone3.5.ts (2)

33-44: Constant name doesn't match model name.

The constant is named EXAONE3_5_7_1b (suggesting 7.1b) but the actual model name is 'exaone3.5:7.8b'. This inconsistency could confuse maintainers. Consider renaming to EXAONE3_5_7_8b to match the model identifier.

🔎 Proposed fix
-const EXAONE3_5_7_1b = {
+const EXAONE3_5_7_8b = {
   name: 'exaone3.5:7.8b',
   supports: {
     input: ['text'],
     output: ['text'],
     capabilities: [],
   },
   size: '4.8gb',
   context: 32_000,
 } as const satisfies OllamaModelMeta<
   OllamaChatRequest & OllamaChatRequestMessages
 >

Also update references in EXAONE3_5MODELS, Exaone3_5ChatModelProviderOptionsByName, and Exaone3_5ModelInputModalitiesByName.


74-74: Copy-paste artifact in comment.

Line 74 references AyaChatModels but this is the EXAONE file. Should be Exaone3_5ChatModels.

🔎 Proposed fix
-// export type AyaChatModels = (typeof EXAONE3_5MODELS)[number]
+// export type Exaone3_5ChatModels = (typeof EXAONE3_5MODELS)[number]
packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts (2)

24-37: Constant name doesn't match model name.

GRANITE3_MOE_1b refers to model 'granite3-moe:2b'. The constant name suggests 1b but the actual model is 2b. Consider renaming to GRANITE3_MOE_2b for consistency.

🔎 Proposed fix
-const GRANITE3_MOE_1b = {
+const GRANITE3_MOE_2b = {
   name: 'granite3-moe:2b',
   supports: {
     input: ['text'],
     output: ['text'],
     capabilities: ['tools'],
   },
   size: '822mb',
   context: 4_000,
 } as const satisfies OllamaModelMeta<
   OllamaChatRequest &
     OllamaChatRequestMessages<OllamaMessageTools> &
     OllamaChatRequestTools
 >

Also update references in GRANITE3_MOE_MODELS, Granite3MoeChatModelProviderOptionsByName, and Granite3MoeModelInputModalitiesByName.


39-52: Constant name doesn't match model name.

GRANITE3_MOE_3b refers to model 'granite3-moe:8b'. The constant name suggests 3b but the actual model is 8b. Consider renaming to GRANITE3_MOE_8b for consistency.

🔎 Proposed fix
-const GRANITE3_MOE_3b = {
+const GRANITE3_MOE_8b = {
   name: 'granite3-moe:8b',
   supports: {
     input: ['text'],
     output: ['text'],
     capabilities: ['tools'],
   },
   size: '2.1gb',
   context: 4_000,
 } as const satisfies OllamaModelMeta<
   OllamaChatRequest &
     OllamaChatRequestMessages<OllamaMessageTools> &
     OllamaChatRequestTools
 >

Also update references in GRANITE3_MOE_MODELS, Granite3MoeChatModelProviderOptionsByName, and Granite3MoeModelInputModalitiesByName.
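
The renames above hinge on the `as const satisfies` pattern the meta files use; a simplified sketch of why it both checks the shape and preserves the literal model name (`ModelMeta` here is a hypothetical stand-in for `OllamaModelMeta`):

```typescript
// Hypothetical, simplified version of the meta-file pattern.
interface ModelMeta {
  name: string
  size: string
  context: number
}

// `satisfies` validates the shape without widening the literals,
// so `name` stays the literal 'granite3-moe:2b' rather than `string`.
const GRANITE3_MOE_2b = {
  name: 'granite3-moe:2b',
  size: '822mb',
  context: 4_000,
} as const satisfies ModelMeta

// The preserved literal can then key per-model option lookups:
type Name = typeof GRANITE3_MOE_2b.name // 'granite3-moe:2b'
const registry: Record<Name, ModelMeta> = {
  'granite3-moe:2b': GRANITE3_MOE_2b,
}
```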

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 344d236 and 313f20f.

📒 Files selected for processing (71)
  • examples/ts-react-chat/src/routes/api.tanchat.ts
  • packages/typescript/ai-ollama/src/adapters/summarize.ts
  • packages/typescript/ai-ollama/src/adapters/text.ts
  • packages/typescript/ai-ollama/src/index.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-athene.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-aya.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codegemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codellama.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-command-r-plus.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-command-r.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-command-r7b.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-ocr.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-r1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-v3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-devstral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-dolphin3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-exaone3.5.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-falcon2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-falcon3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-firefunction-v2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-guardian.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-chatqa.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-gradient.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.2-vision.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama4.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-llama3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-marco-o1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-large.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-nemo.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-small.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mixtral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-moondream.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-nemotron-mini.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-nemotron.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-olmo2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-opencoder.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-openhermes.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-phi3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-phi4.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwq.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-sailor2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-shieldgemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-smalltinker.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-smollm.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tulu3.ts
  • packages/typescript/ai-ollama/src/meta/models-meta.ts
  • packages/typescript/ai-ollama/src/model-meta.ts
🚧 Files skipped from review as they are similar to previous changes (36)
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-llama3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-smollm.ts
  • packages/typescript/ai-ollama/src/adapters/summarize.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-command-r-plus.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-aya.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-moondream.ts
  • packages/typescript/ai-ollama/src/index.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-openhermes.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-guardian.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-olmo2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-nemo.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-shieldgemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-smalltinker.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-athene.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-phi4.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-opencoder.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codellama.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.2-vision.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-sailor2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-command-r.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama4.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-nemotron.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-falcon2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-gradient.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-marco-o1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-chatqa.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-large.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-phi3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma2.ts
🧰 Additional context used
📓 Path-based instructions (5)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.{ts,tsx}: Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from /adapters subpath rather than monolithic adapters
Use Zod for runtime schema validation and type inference, particularly for tool input/output definitions with toolDefinition() and Zod schema inference
Implement isomorphic tool system using toolDefinition() with .server() and .client() implementations for dual-environment execution
Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety
Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses

Files:

  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codegemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-small.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava.ts
  • examples/ts-react-chat/src/routes/api.tanchat.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts
  • packages/typescript/ai-ollama/src/model-meta.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-devstral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-nemotron-mini.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-dolphin3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts
  • packages/typescript/ai-ollama/src/adapters/text.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-r1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tulu3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-v3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-falcon3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-ocr.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwq.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-command-r7b.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mixtral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-firefunction-v2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-exaone3.5.ts
  • packages/typescript/ai-ollama/src/meta/models-meta.ts
**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Use camelCase for function and variable names throughout the codebase

Files:

  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codegemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-small.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava.ts
  • examples/ts-react-chat/src/routes/api.tanchat.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts
  • packages/typescript/ai-ollama/src/model-meta.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-devstral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-nemotron-mini.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-dolphin3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts
  • packages/typescript/ai-ollama/src/adapters/text.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-r1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tulu3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-v3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-falcon3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-ocr.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwq.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-command-r7b.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mixtral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-firefunction-v2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-exaone3.5.ts
  • packages/typescript/ai-ollama/src/meta/models-meta.ts
examples/**

📄 CodeRabbit inference engine (CLAUDE.md)

Examples are not built by Nx and should be run independently from their directories with pnpm dev or pnpm install && pnpm dev

Files:

  • examples/ts-react-chat/src/routes/api.tanchat.ts
packages/typescript/*/src/model-meta.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Maintain model metadata files that define provider options and capabilities per model for per-model type safety

Files:

  • packages/typescript/ai-ollama/src/model-meta.ts
packages/typescript/*/src/adapters/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking

Files:

  • packages/typescript/ai-ollama/src/adapters/text.ts
🧠 Learnings (14)
📓 Common learnings
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/model-meta.ts : Maintain model metadata files that define provider options and capabilities per model for per-model type safety
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/adapters/*.ts : Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from `/adapters` subpath rather than monolithic adapters
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts:92-97
Timestamp: 2025-12-27T20:22:51.232Z
Learning: In the ai-ollama package's model-meta files (packages/typescript/ai-ollama/src/meta/model-meta-*.ts), capability-related comments follow a standard template format across all files for consistency, even if the comment text doesn't precisely match individual model capabilities. This is an intentional design choice to maintain uniformity across the codebase.
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts:7-18
Timestamp: 2025-12-27T19:49:03.849Z
Learning: For Ollama model metadata in the ai-ollama package, always consult https://ollama.com/library/ as the authoritative source for model specifications (size, context window, capabilities) rather than other documentation sources.
📚 Learning: 2025-12-27T20:22:46.710Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts:92-97
Timestamp: 2025-12-27T20:22:46.710Z
Learning: Enforce that all model-meta files under packages/typescript/ai-ollama/src/meta/model-meta-*.ts use a standard, uniform template for capability-related comments, even if the text doesn’t perfectly align with each specific model capability. This ensures consistency across the codebase. Include a brief note in the template when a capability is not applicable, and avoid deviating from the established comment structure (e.g., header, template fields, and formatting) to maintain readability and tooling familiarity.

Applied to files:

  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codegemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-small.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-devstral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-nemotron-mini.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-dolphin3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-r1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tulu3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-v3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-falcon3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-ocr.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwq.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-command-r7b.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mixtral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-firefunction-v2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-exaone3.5.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/model-meta.ts : Maintain model metadata files that define provider options and capabilities per model for per-model type safety

Applied to files:

  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codegemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-small.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava.ts
  • examples/ts-react-chat/src/routes/api.tanchat.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts
  • packages/typescript/ai-ollama/src/model-meta.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-devstral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-nemotron-mini.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-dolphin3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts
  • packages/typescript/ai-ollama/src/adapters/text.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-r1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tulu3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-v3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-falcon3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-ocr.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwq.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-command-r7b.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mixtral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-firefunction-v2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-exaone3.5.ts
  • packages/typescript/ai-ollama/src/meta/models-meta.ts
📚 Learning: 2025-12-27T21:39:23.584Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts:70-75
Timestamp: 2025-12-27T21:39:23.584Z
Learning: Do not modify the standard template comments in ai-ollama model-meta files (e.g., the generic description like 'Models with text, image, audio, video (no document)'). These should remain uniform across all model-meta-*.ts files to maintain consistency, regardless of the specific model capabilities.

Applied to files:

  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codegemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-small.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-devstral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-nemotron-mini.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-dolphin3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-r1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tulu3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-v3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-falcon3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-ocr.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwq.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-command-r7b.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mixtral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-firefunction-v2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-exaone3.5.ts
📚 Learning: 2025-12-27T19:48:57.631Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts:7-18
Timestamp: 2025-12-27T19:48:57.631Z
Learning: When reviewing or updating metadata in the Ollama model metadata module (e.g., files under packages/typescript/ai-ollama/src/meta/), always treat https://ollama.com/library/ as the authoritative source for model specifications (size, context window, capabilities). Validate only against the official docs and avoid citing non-official sources. If discrepancies arise, reference the official docs and ensure metadata aligns with their specifications.

Applied to files:

  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codegemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-small.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-devstral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-nemotron-mini.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-dolphin3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-r1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tulu3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-v3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-falcon3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-ocr.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwq.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-command-r7b.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mixtral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-firefunction-v2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-exaone3.5.ts
  • packages/typescript/ai-ollama/src/meta/models-meta.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety

Applied to files:

  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codegemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-small.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava.ts
  • examples/ts-react-chat/src/routes/api.tanchat.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts
  • packages/typescript/ai-ollama/src/model-meta.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-devstral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-nemotron-mini.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-dolphin3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts
  • packages/typescript/ai-ollama/src/adapters/text.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-r1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tulu3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-v3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-falcon3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-ocr.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwq.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-command-r7b.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mixtral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-firefunction-v2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-exaone3.5.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/adapters/*.ts : Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking

Applied to files:

  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codegemma.ts
  • examples/ts-react-chat/src/routes/api.tanchat.ts
  • packages/typescript/ai-ollama/src/model-meta.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-devstral.ts
  • packages/typescript/ai-ollama/src/adapters/text.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tulu3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-v3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-ocr.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mixtral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-exaone3.5.ts
📚 Learning: 2025-12-27T20:22:51.232Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts:92-97
Timestamp: 2025-12-27T20:22:51.232Z
Learning: In the ai-ollama package's model-meta files (packages/typescript/ai-ollama/src/meta/model-meta-*.ts), capability-related comments follow a standard template format across all files for consistency, even if the comment text doesn't precisely match individual model capabilities. This is an intentional design choice to maintain uniformity across the codebase.

Applied to files:

  • examples/ts-react-chat/src/routes/api.tanchat.ts
  • packages/typescript/ai-ollama/src/model-meta.ts
  • packages/typescript/ai-ollama/src/adapters/text.ts
  • packages/typescript/ai-ollama/src/meta/models-meta.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from `/adapters` subpath rather than monolithic adapters

Applied to files:

  • examples/ts-react-chat/src/routes/api.tanchat.ts
  • packages/typescript/ai-ollama/src/adapters/text.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Maintain type safety through multimodal content support (image, audio, video, document) with model capability awareness

Applied to files:

  • packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts
📚 Learning: 2025-12-27T21:39:29.563Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts:70-75
Timestamp: 2025-12-27T21:39:29.563Z
Learning: The standard template comments in ai-ollama model-meta files (like "Models with text, image, audio, video (no document)") should not be modified to match individual model capabilities, as they are intentionally kept uniform across all model-meta-*.ts files for consistency, regardless of what each specific model actually supports.

Applied to files:

  • packages/typescript/ai-ollama/src/model-meta.ts
  • packages/typescript/ai-ollama/src/adapters/text.ts
  • packages/typescript/ai-ollama/src/meta/models-meta.ts
📚 Learning: 2025-12-27T19:49:03.849Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts:7-18
Timestamp: 2025-12-27T19:49:03.849Z
Learning: For Ollama model metadata in the ai-ollama package, always consult https://ollama.com/library/ as the authoritative source for model specifications (size, context window, capabilities) rather than other documentation sources.

Applied to files:

  • packages/typescript/ai-ollama/src/model-meta.ts
  • packages/typescript/ai-ollama/src/adapters/text.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/index.ts : Export tree-shakeable adapters with clear subpath exports in package.json (e.g., `tanstack/ai/adapters`, `tanstack/ai-openai/adapters`) to minimize bundle size

Applied to files:

  • packages/typescript/ai-ollama/src/model-meta.ts
  • packages/typescript/ai-ollama/src/adapters/text.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-v3.1.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : Use camelCase for function and variable names throughout the codebase

Applied to files:

  • packages/typescript/ai-ollama/src/model-meta.ts
🧬 Code graph analysis (27)
packages/typescript/ai-ollama/src/meta/model-meta-granite3-dense.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (3)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (28-39)
  • OllamaChatRequestMessages (62-75)
packages/typescript/ai-ollama/src/meta/model-meta-codegemma.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (3)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (28-39)
  • OllamaChatRequestMessages (62-75)
packages/typescript/ai-ollama/src/meta/model-meta-llava.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (4)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (28-39)
  • OllamaChatRequestMessages (62-75)
  • OllamaMessageImages (81-83)
packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (5)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (28-39)
  • OllamaChatRequestMessages (62-75)
  • OllamaMessageTools (85-88)
  • OllamaChatRequestTools (49-51)
packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (3)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (28-39)
  • OllamaChatRequestMessages (62-75)
packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-dense.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (5)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (28-39)
  • OllamaChatRequestMessages (62-75)
  • OllamaMessageTools (85-88)
  • OllamaChatRequestTools (49-51)
packages/typescript/ai-ollama/src/meta/model-meta-devstral.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (5)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (28-39)
  • OllamaChatRequestMessages (62-75)
  • OllamaMessageTools (85-88)
  • OllamaChatRequestTools (49-51)
packages/typescript/ai-ollama/src/meta/model-meta-nemotron-mini.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (5)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (28-39)
  • OllamaChatRequestMessages (62-75)
  • OllamaMessageTools (85-88)
  • OllamaChatRequestTools (49-51)
packages/typescript/ai-ollama/src/meta/model-meta-dolphin3.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (3)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (28-39)
  • OllamaChatRequestMessages (62-75)
packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (5)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (28-39)
  • OllamaChatRequestMessages (62-75)
  • OllamaMessageTools (85-88)
  • OllamaChatRequestTools (49-51)
packages/typescript/ai-ollama/src/adapters/text.ts (2)
packages/typescript/ai-ollama/src/index.ts (4)
  • OllamaTextModel (11-11)
  • OLLAMA_TEXT_MODELS (14-14)
  • OLLAMA_TEXT_MODELS (25-25)
  • OllamaChatModelProviderOptionsByName (40-40)
packages/typescript/ai-ollama/src/model-meta.ts (2)
  • OLLAMA_TEXT_MODELS (330-396)
  • OllamaChatModelProviderOptionsByName (398-463)
packages/typescript/ai-ollama/src/meta/model-meta-deepseek-r1.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (7)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (28-39)
  • OllamaChatRequestMessages (62-75)
  • OllamaMessageTools (85-88)
  • OllamaMessageThinking (77-79)
  • OllamaChatRequestTools (49-51)
  • OllamaChatRequestThinking (41-43)
packages/typescript/ai-ollama/src/meta/model-meta-tulu3.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (3)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (28-39)
  • OllamaChatRequestMessages (62-75)
packages/typescript/ai-ollama/src/meta/model-meta-deepseek-v3.1.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (6)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (28-39)
  • OllamaChatRequestMessages (62-75)
  • OllamaMessageTools (85-88)
  • OllamaChatRequestTools (49-51)
  • OllamaChatRequestThinking (41-43)
packages/typescript/ai-ollama/src/meta/model-meta-falcon3.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (3)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (28-39)
  • OllamaChatRequestMessages (62-75)
packages/typescript/ai-ollama/src/meta/model-meta-gemma.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (3)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (28-39)
  • OllamaChatRequestMessages (62-75)
packages/typescript/ai-ollama/src/meta/model-meta-llama2.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (3)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (28-39)
  • OllamaChatRequestMessages (62-75)
packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (5)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (28-39)
  • OllamaChatRequestMessages (62-75)
  • OllamaMessageTools (85-88)
  • OllamaChatRequestTools (49-51)
packages/typescript/ai-ollama/src/meta/model-meta-gemma3.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (4)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (28-39)
  • OllamaChatRequestMessages (62-75)
  • OllamaMessageImages (81-83)
packages/typescript/ai-ollama/src/meta/model-meta-qwq.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (5)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (28-39)
  • OllamaChatRequestMessages (62-75)
  • OllamaMessageTools (85-88)
  • OllamaChatRequestTools (49-51)
packages/typescript/ai-ollama/src/meta/model-meta-command-r7b.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (5)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (28-39)
  • OllamaChatRequestMessages (62-75)
  • OllamaMessageTools (85-88)
  • OllamaChatRequestTools (49-51)
packages/typescript/ai-ollama/src/meta/model-meta-qwen2.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (5)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (28-39)
  • OllamaChatRequestMessages (62-75)
  • OllamaMessageTools (85-88)
  • OllamaChatRequestTools (49-51)
packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (4)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (28-39)
  • OllamaChatRequestMessages (62-75)
  • OllamaMessageImages (81-83)
packages/typescript/ai-ollama/src/meta/model-meta-llama3.2.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (5)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (28-39)
  • OllamaChatRequestMessages (62-75)
  • OllamaMessageTools (85-88)
  • OllamaChatRequestTools (49-51)
packages/typescript/ai-ollama/src/meta/model-meta-firefunction-v2.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (3)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (28-39)
  • OllamaChatRequestMessages (62-75)
packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (7)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (28-39)
  • OllamaChatRequestMessages (62-75)
  • OllamaMessageTools (85-88)
  • OllamaMessageThinking (77-79)
  • OllamaChatRequestTools (49-51)
  • OllamaChatRequestThinking_OpenAI (45-47)
packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (5)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (28-39)
  • OllamaChatRequestMessages (62-75)
  • OllamaMessageTools (85-88)
  • OllamaChatRequestTools (49-51)

@harry-whorlow harry-whorlow force-pushed the ollama-temp branch 2 times, most recently from 15f8a99 to c06b983 Compare January 5, 2026 08:33
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Fix all issues with AI Agents 🤖
In @packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts:
- Line 17: Update the three model definitions in this file that set the context
property to 128_000 so they use the exact Ollama spec value 131_072; locate the
three objects that include "context: 128_000," (the granite3.1-moe model
definitions) and change each value to 131_072 to match the official
specification.

In @packages/typescript/ai-ollama/src/model-meta.ts:
- Around line 209-212: The imported type names from model-meta-llama4 are
inconsistent with the LLAMA4 naming; rename
Llama3_4ChatModelProviderOptionsByName to Llama4ChatModelProviderOptionsByName
and Llama3_4ModelInputModalitiesByName to Llama4ModelInputModalitiesByName both
in this import statement and everywhere they are referenced in this file, and
update the corresponding exported type names in model-meta-llama4 (e.g., change
the export declarations there to match Llama4ChatModelProviderOptionsByName and
Llama4ModelInputModalitiesByName) so names align with LLAMA4_MODELS, llama4:*
model names, and the file name.
♻️ Duplicate comments (2)
packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts (1)

21-32: Remove non-existent model variant.

The llava-phi3:8b model does not exist in the official Ollama library. According to the Ollama documentation, the available variants are llava-phi3:latest, llava-phi3:3.8b-mini-fp16, and llava-phi3:3.8b-mini-q4_0. Remove this constant and its references in LLAVA_PHI3_MODELS (line 36) and the type maps (lines 54-55, 61) to prevent runtime errors.

Based on learnings, always consult https://ollama.com/library/llava-phi3 as the authoritative source.

🔎 Verify available variants
#!/bin/bash
# Check the official Ollama library page for llava-phi3 variants
curl -s "https://ollama.com/library/llava-phi3" | grep -oP 'llava-phi3:[a-z0-9\.\-]+' | sort -u
packages/typescript/ai-ollama/src/meta/model-meta-llama3.3.ts (1)

9-22: Minor: Inconsistent size unit - '43b' vs '43gb'.

Line 16 uses size: '43b' but LLAMA3_3_70b at line 31 uses size: '43gb'. Both refer to the same 70B model (:latest typically points to the default 70b variant). This appears to be a typo missing the 'g' unit.

🔎 Proposed fix
 const LLAMA3_3_LATEST = {
   name: 'llama3.3:latest',
   supports: {
     input: ['text'],
     output: ['text'],
     capabilities: ['tools'],
   },
-  size: '43b',
+  size: '43gb',
   context: 128_000,
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 15f8a99 and d0e4f51.

📒 Files selected for processing (11)
  • packages/typescript/ai-ollama/src/adapters/summarize.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codegemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-v3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts
  • packages/typescript/ai-ollama/src/meta/models-meta.ts
  • packages/typescript/ai-ollama/src/model-meta.ts
🚧 Files skipped from review as they are similar to previous changes (3)
  • packages/typescript/ai-ollama/src/adapters/summarize.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-v3.1.ts
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.{ts,tsx}: Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from /adapters subpath rather than monolithic adapters
Use Zod for runtime schema validation and type inference, particularly for tool input/output definitions with toolDefinition() and Zod schema inference
Implement isomorphic tool system using toolDefinition() with .server() and .client() implementations for dual-environment execution
Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety
Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses

Files:

  • packages/typescript/ai-ollama/src/model-meta.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava.ts
  • packages/typescript/ai-ollama/src/meta/models-meta.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codegemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts
**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Use camelCase for function and variable names throughout the codebase

Files:

  • packages/typescript/ai-ollama/src/model-meta.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava.ts
  • packages/typescript/ai-ollama/src/meta/models-meta.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codegemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts
packages/typescript/*/src/model-meta.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Maintain model metadata files that define provider options and capabilities per model for per-model type safety

Files:

  • packages/typescript/ai-ollama/src/model-meta.ts
🧠 Learnings (12)
📓 Common learnings
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts:92-97
Timestamp: 2025-12-27T20:22:51.232Z
Learning: In the ai-ollama package's model-meta files (packages/typescript/ai-ollama/src/meta/model-meta-*.ts), capability-related comments follow a standard template format across all files for consistency, even if the comment text doesn't precisely match individual model capabilities. This is an intentional design choice to maintain uniformity across the codebase.
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/model-meta.ts : Maintain model metadata files that define provider options and capabilities per model for per-model type safety
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/adapters/*.ts : Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from `/adapters` subpath rather than monolithic adapters
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts:7-18
Timestamp: 2025-12-27T19:49:03.849Z
Learning: For Ollama model metadata in the ai-ollama package, always consult https://ollama.com/library/ as the authoritative source for model specifications (size, context window, capabilities) rather than other documentation sources.
📚 Learning: 2025-12-27T20:22:51.232Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts:92-97
Timestamp: 2025-12-27T20:22:51.232Z
Learning: In the ai-ollama package's model-meta files (packages/typescript/ai-ollama/src/meta/model-meta-*.ts), capability-related comments follow a standard template format across all files for consistency, even if the comment text doesn't precisely match individual model capabilities. This is an intentional design choice to maintain uniformity across the codebase.

Applied to files:

  • packages/typescript/ai-ollama/src/model-meta.ts
  • packages/typescript/ai-ollama/src/meta/models-meta.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/model-meta.ts : Maintain model metadata files that define provider options and capabilities per model for per-model type safety

Applied to files:

  • packages/typescript/ai-ollama/src/model-meta.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava.ts
  • packages/typescript/ai-ollama/src/meta/models-meta.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codegemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts
📚 Learning: 2025-12-27T21:39:29.563Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts:70-75
Timestamp: 2025-12-27T21:39:29.563Z
Learning: The standard template comments in ai-ollama model-meta files (like "Models with text, image, audio, video (no document)") should not be modified to match individual model capabilities, as they are intentionally kept uniform across all model-meta-*.ts files for consistency, regardless of what each specific model actually supports.

Applied to files:

  • packages/typescript/ai-ollama/src/model-meta.ts
  • packages/typescript/ai-ollama/src/meta/models-meta.ts
📚 Learning: 2025-12-27T19:49:03.849Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts:7-18
Timestamp: 2025-12-27T19:49:03.849Z
Learning: For Ollama model metadata in the ai-ollama package, always consult https://ollama.com/library/ as the authoritative source for model specifications (size, context window, capabilities) rather than other documentation sources.

Applied to files:

  • packages/typescript/ai-ollama/src/model-meta.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety

Applied to files:

  • packages/typescript/ai-ollama/src/model-meta.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codegemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/adapters/*.ts : Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking

Applied to files:

  • packages/typescript/ai-ollama/src/model-meta.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/index.ts : Export tree-shakeable adapters with clear subpath exports in package.json (e.g., `tanstack/ai/adapters`, `tanstack/ai-openai/adapters`) to minimize bundle size

Applied to files:

  • packages/typescript/ai-ollama/src/model-meta.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : Use camelCase for function and variable names throughout the codebase

Applied to files:

  • packages/typescript/ai-ollama/src/model-meta.ts
📚 Learning: 2025-12-27T20:22:46.710Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts:92-97
Timestamp: 2025-12-27T20:22:46.710Z
Learning: Enforce that all model-meta files under packages/typescript/ai-ollama/src/meta/model-meta-*.ts use a standard, uniform template for capability-related comments, even if the text doesn’t perfectly align with each specific model capability. This ensures consistency across the codebase. Include a brief note in the template when a capability is not applicable, and avoid deviating from the established comment structure (e.g., header, template fields, and formatting) to maintain readability and tooling familiarity.

Applied to files:

  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codegemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts
📚 Learning: 2025-12-27T21:39:23.584Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts:70-75
Timestamp: 2025-12-27T21:39:23.584Z
Learning: Do not modify the standard template comments in ai-ollama model-meta files (e.g., the generic description like 'Models with text, image, audio, video (no document)'). These should remain uniform across all model-meta-*.ts files to maintain consistency, regardless of the specific model capabilities.

Applied to files:

  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codegemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts
📚 Learning: 2025-12-27T19:48:57.631Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts:7-18
Timestamp: 2025-12-27T19:48:57.631Z
Learning: When reviewing or updating metadata in the Ollama model metadata module (e.g., files under packages/typescript/ai-ollama/src/meta/), always treat https://ollama.com/library/ as the authoritative source for model specifications (size, context window, capabilities). Validate only against the official docs and avoid citing non-official sources. If discrepancies arise, reference the official docs and ensure metadata aligns with their specifications.

Applied to files:

  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava.ts
  • packages/typescript/ai-ollama/src/meta/models-meta.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codegemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts
🧬 Code graph analysis (4)
packages/typescript/ai-ollama/src/model-meta.ts (4)
packages/typescript/ai-ollama/src/index.ts (4)
  • OLLAMA_TEXT_MODELS (14-14)
  • OLLAMA_TEXT_MODELS (25-25)
  • OllamaChatModelProviderOptionsByName (40-40)
  • OllamaModelInputModalitiesByName (41-41)
packages/typescript/ai-ollama/src/meta/model-meta-falcon2.ts (3)
  • FALCON2_MODELS (33-33)
  • Falcon2ChatModelProviderOptionsByName (46-50)
  • Falcon2ModelInputModalitiesByName (52-56)
packages/typescript/ai-ollama/src/meta/model-meta-marco-o1.ts (3)
  • MARCO_O1_MODELS (33-33)
  • MarcoO1ChatModelProviderOptionsByName (46-50)
  • MarcoO1ModelInputModalitiesByName (52-56)
packages/typescript/ai-ollama/src/meta/model-meta-phi4.ts (3)
  • PHI4_MODELS (33-33)
  • Phi4ChatModelProviderOptionsByName (46-50)
  • Phi4ModelInputModalitiesByName (52-56)
packages/typescript/ai-ollama/src/meta/model-meta-llava.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (4)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (28-39)
  • OllamaChatRequestMessages (62-75)
  • OllamaMessageImages (81-83)
packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (3)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (28-39)
  • OllamaChatRequestMessages (62-75)
packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (5)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (28-39)
  • OllamaChatRequestMessages (62-75)
  • OllamaMessageTools (85-88)
  • OllamaChatRequestTools (49-51)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Test
  • GitHub Check: Preview
🔇 Additional comments (11)
packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts (3)

33-36: LGTM!

The export structure follows the standard pattern used across other model-meta files in the codebase.


48-59: LGTM!

The type definitions correctly implement per-model type safety, mapping each model name to its provider options and input modalities. This aligns with the coding guidelines requiring type-safe per-model configuration.


7-31: No changes needed - model specifications are accurate.

Verification against Ollama library documentation confirms the code is correct:

  • Model names tinyllama:latest and tinyllama:1.1b are valid model tags in the Ollama library (no namespace required)
  • Size 638mb aligns with standard Q4-quantized variants (~0.62-0.67 GB range)
  • Context window of 2,000 tokens is accurate
  • Empty capabilities array is appropriate for base chat models
packages/typescript/ai-ollama/src/meta/model-meta-llava.ts (1)

8-58: LGTM! Size field fix confirmed.

The previous size format issue ('4.7b' → '4.7gb') has been correctly resolved. All LLAVA model metadata follows the established pattern with proper vision capabilities, input/output modalities, and consistent type constraints.

packages/typescript/ai-ollama/src/meta/models-meta.ts (1)

62-75: LGTM! Generic type parameter fix applied correctly.

The TMessageExtension default has been correctly changed from undefined to {}, preventing the type intersection from collapsing to never. This ensures messages are properly typed as { role: string; content: string } & TMessageExtension with a usable default.
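
The collapse the fix avoids can be shown with a minimal sketch (this is a hypothetical reproduction, not the actual models-meta.ts source): intersecting an object type with `undefined` reduces to `never`, while `{}` is a no-op.

```typescript
// Hypothetical minimal reproduction of the default-parameter issue.
// With a `TExt = undefined` default, `{ role; content } & undefined`
// would reduce to `never`, so no message value could satisfy it.
// With `{}` as the default, the intersection is a no-op:
type MessageWithExt<TExt extends object = {}> = {
  role: string
  content: string
} & TExt

const plain: MessageWithExt = { role: 'user', content: 'hi' }
const withImages: MessageWithExt<{ images: Array<string> }> = {
  role: 'user',
  content: 'hi',
  images: [],
}
```

The extension parameter then layers optional per-model message fields (images, tool calls) on top of the base shape without breaking the default case.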

packages/typescript/ai-ollama/src/meta/model-meta-codegemma.ts (1)

20-50: LGTM! Variable naming issue resolved.

The previously flagged mismatch between variable names and model tags has been correctly fixed. CODEGEMMA_2b now references 'codegemma:2b' and CODEGEMMA_7b references 'codegemma:7b', ensuring consistency across the codebase.

packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts (1)

9-22: LGTM! Model names corrected.

The model name constants have been correctly updated to use the official Ollama tags (granite3.1-moe:1b and granite3.1-moe:3b), resolving the previous critical naming issues.

packages/typescript/ai-ollama/src/model-meta.ts (2)

330-396: Well-structured centralized model registry.

The OLLAMA_TEXT_MODELS array correctly aggregates all model families into a single typed constant. The previous duplicate LLAMA3_2_MODELS entry has been resolved.
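
The aggregation pattern can be sketched as follows (family names and tags here are invented for illustration, not the actual exports): spreading the per-family `as const` tuples into one readonly registry keeps the literal model names, so the derived union type stays narrow instead of widening to `string`.

```typescript
// Illustrative sketch of the registry aggregation; family arrays invented.
const FALCON2_MODELS = ['falcon2:latest', 'falcon2:11b'] as const
const PHI4_MODELS = ['phi4:latest', 'phi4:14b'] as const

// One readonly registry preserving every literal tag:
const OLLAMA_TEXT_MODELS = [...FALCON2_MODELS, ...PHI4_MODELS] as const

// Union of all known model tags, derived from the registry:
type OllamaTextModel = (typeof OLLAMA_TEXT_MODELS)[number]

// Runtime membership check that narrows a plain string to the union:
const isKnown = (m: string): m is OllamaTextModel =>
  (OLLAMA_TEXT_MODELS as ReadonlyArray<string>).includes(m)
```

Because the registry is the single source of truth, adapters can resolve model names from it rather than maintaining local lists.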


398-463: Type intersection structure looks correct.

The OllamaChatModelProviderOptionsByName and OllamaModelInputModalitiesByName intersection types correctly combine all per-model type definitions, enabling type-safe per-model configuration as per coding guidelines.

Also applies to: 465-530
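
A minimal sketch of the per-model lookup this enables (maps and option shapes are invented here, not the actual exports): intersecting the family maps yields one lookup keyed by every model name, so option types are resolved per model at compile time.

```typescript
// Invented family maps standing in for the generated per-model option maps.
type Falcon2OptionsByName = {
  'falcon2:latest': { stream?: boolean }
}
type Phi4OptionsByName = {
  'phi4:latest': { stream?: boolean; think?: boolean }
}

// Intersection of the family maps: one lookup keyed by model name.
type AllOptionsByName = Falcon2OptionsByName & Phi4OptionsByName

// The option type follows from the chosen model name:
function chat<M extends keyof AllOptionsByName>(
  model: M,
  options: AllOptionsByName[M],
) {
  return { model, options }
}

const r = chat('phi4:latest', { think: true })
// chat('falcon2:latest', { think: true }) would be a compile-time error,
// since `think` is not in Falcon2OptionsByName['falcon2:latest'].
```

This is the mechanism behind passing only model-appropriate provider options without runtime validation.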

packages/typescript/ai-ollama/src/meta/model-meta-llama3.3.ts (2)

24-37: Model name fix confirmed.

The LLAMA3_3_70b constant now correctly uses name: 'llama3.3:70b', matching the Ollama library specification.


39-69: Type definitions follow established patterns.

The exported LLAMA3_3_MODELS array and the provider options/input modalities type maps follow the consistent structure used across other model-meta files. The use of as const satisfies ensures type safety while maintaining narrow literal types.
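
The `as const satisfies` idiom can be shown with a simplified meta shape (this `ModelMeta` is a stand-in, not the real `OllamaModelMeta`): `satisfies` validates the object against the shape, while `as const` keeps the narrow literal types instead of widening `name` to `string`.

```typescript
// Simplified stand-in for OllamaModelMeta, for illustration only.
type ModelMeta = {
  name: string
  context: number
}

// `satisfies` type-checks the shape; `as const` preserves the literals:
const LLAMA3_3_70b = {
  name: 'llama3.3:70b',
  context: 128_000,
} as const satisfies ModelMeta

// The name stays the literal 'llama3.3:70b', so it can key a type map:
type Name = typeof LLAMA3_3_70b.name
```

A plain `: ModelMeta` annotation would lose the literal, making `[LLAMA3_3_70b.name]: ...` map keys impossible.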

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 14

Fix all issues with AI Agents 🤖
In @packages/typescript/ai-ollama/src/meta/model-meta-deepseek-r1.ts:
- Around line 11-121: Replace the non-official "latest" variant and add the
missing 14B variant, ensure all seven DeepSeek R1 constants use the official
tags (deepseek-r1:1.5b, :7b, :8b, :14b, :32b, :70b, :671b) and correct the
context window for DEEPSEEK_R1_671b to 160_000 (leave the other models at
128_000); remove the DEEPSEEK_R1_LATEST constant, add DEEPSEEK_R1_14b, and
verify each object's name field matches its tag (e.g.,
name: 'deepseek-r1:14b') while keeping the existing supports, size, and
OllamaModelMeta typing.

In @packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts:
- Around line 39-52: The constant identifier GRANITE3_MOE_3b does not match its
name field 'granite3-moe:8b'; rename the constant to GRANITE3_MOE_8b and update
every reference in this file from GRANITE3_MOE_3b to GRANITE3_MOE_8b so the
identifier matches the model name (ensure the exported symbol and any uses
within this module are updated).
- Around line 24-37: The constant name GRANITE3_MOE_1b does not match its model
name 'granite3-moe:2b'; rename the constant to GRANITE3_MOE_2b and update any
references to GRANITE3_MOE_1b within this file (and exports or usages in the
same module) so the identifier aligns with the model name, keeping the object
literal (name, supports, size, context) unchanged.

In @packages/typescript/ai-ollama/src/meta/model-meta-llama3-chatqa.ts:
- Around line 7-18: The LLAMA3_CHATQA_LATEST model metadata has an incorrect
size string "4.7b"; update the size property on the LLAMA3_CHATQA_LATEST
constant to "4.7gb" to reflect gigabytes and be consistent with other model
metadata entries (preserve the surrounding type assertion OllamaModelMeta and
request types).
- Around line 33-42: The generic parameter 'any' on OllamaModelMeta should be
replaced with the concrete model-meta generic used by the other LLAMA3_CHATQA
variants for consistency: open the other LLAMA3_CHATQA_* constants (e.g., the
13b/7b variants), copy the exact generic type they pass to OllamaModelMeta, and
use that same type in the LLAMA3_CHATQA_70b declaration (replace
OllamaModelMeta<any> with the same OllamaModelMeta<...> used by the other
variants).

In @packages/typescript/ai-ollama/src/meta/model-meta-llama3-gradient.ts:
- Around line 7-18: The LLAMA3_GRADIENT_LATEST model meta has an inconsistent
size string; update the size property on LLAMA3_GRADIENT_LATEST from '4.7b' to
'4.7gb' so it matches the other model entries and expected unit format used
across this file.

In @packages/typescript/ai-ollama/src/meta/model-meta-llama3.ts:
- Around line 7-18: The LLAMA3_LATEST model meta has an inconsistent size unit
("4.7b"); update the size property on LLAMA3_LATEST to use the same unit format
as other models (e.g., "4.7gb") so it matches LLAMA3_8b and the rest of the
codebase.
- Around line 20-31: The constant LLAMA3_8b has an incorrect name property
'llama3:7b'; update the name field in the LLAMA3_8b object to 'llama3:8b' so the
constant name, model identifier and metadata match (verify the size/context
fields remain correct for the 8b variant); edit the LLAMA3_8b object literal in
model-meta-llama3.ts to replace the name string accordingly.

In @packages/typescript/ai-ollama/src/meta/model-meta-llama4.ts:
- Line 17: The size field for the Llama4 model is using the wrong suffix
('67b'); update the model object's size property to use the consistent 'gb'
suffix (i.e., change '67b' to '67gb') so it matches other entries in
model-meta-llama4.ts and across the codebase.

In @packages/typescript/ai-ollama/src/meta/model-meta-llava-llama3.ts:
- Line 17: Update the inconsistent size value in the LLaVA LLaMA3 model meta:
change the size property currently set to '5.5b' to '5.5gb' so it matches the
other entry (the size field in the model meta object), ensuring both occurrences
use the 'gb' suffix.

In @packages/typescript/ai-ollama/src/meta/model-meta-mistral-large.ts:
- Around line 9-69: The hardcoded size: '73gb' in MISTRAL_LARGE_LATEST and
MISTRAL_LARGE_123b is ambiguous because on-disk size varies by quantization;
update those model entries (MISTRAL_LARGE_LATEST and MISTRAL_LARGE_123b) to
either set size to a descriptive value like "varies by quantization (≈53–130GB)"
or add a comment above each definition clarifying which quantization variant the
73gb refers to (or that it is a reference/default), so callers understand the
value is approximate and may differ by build.

In @packages/typescript/ai-ollama/src/meta/model-meta-mistral-nemo.ts:
- Line 32: The context window is set incorrectly to 1_000; update the object
that defines the Mistral Nemo model metadata by changing its context property
from 1_000 to 128_000 (i.e., set context: 128_000) so it matches the correct
window used for MISTRAL_NEMO_LATEST.
- Line 17: Update the Mistral Nemo model metadata's context window value:
replace the existing numeric literal "context: 1_000" with the correct 128K
token value (e.g., "context: 128_000") in the model-meta entry (the context
property in the exported meta for Mistral Nemo) so the metadata reflects the
proper 128K context size.

In @packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5.ts:
- Around line 9-112: Update the context fields for the QWEN 7B/32B/72B constants
(QWEN2_5_7b, QWEN2_5_32b, QWEN2_5_72b) from 32_000 to 128_000 and add the
missing QWEN2_5_14b constant modeled like the others with context 128_000 and
its official size (copy the exact size string from the Ollama library), ensuring
each constant keeps the same shape and type annotation (as const satisfies
OllamaModelMeta<...>); finally verify that QWEN2_5_LATEST maps to the correct
variant name per the Ollama library and update its name/value if necessary.
♻️ Duplicate comments (4)
packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts (1)

134-158: Comment doesn't match actual capabilities.

The comment states "Models with thinking and structured output support," but these models only have the 'tools' capability. This was previously flagged in past reviews. While learnings indicate template comments should remain uniform for consistency, this creates misleading documentation.

Based on learnings, template comments should be standardized across model-meta files.

packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts (1)

21-32: Remove non-existent model variant.

The llava-phi3:8b model does not exist in the official Ollama library. According to the previous review, official variants are llava-phi3:latest, llava-phi3:3.8b-mini-fp16, and llava-phi3:3.8b-mini-q4_0. Either remove this constant entirely or replace it with an actual variant.

🔎 Recommended fix

Remove the non-existent variant:

-const LLAVA_PHI3_8b = {
-  name: 'llava-phi3:8b',
-  supports: {
-    input: ['text', 'image'],
-    output: ['text'],
-    capabilities: ['vision'],
-  },
-  size: '2.9gb',
-  context: 4_000,
-} as const satisfies OllamaModelMeta<
-  OllamaChatRequest & OllamaChatRequestMessages<OllamaMessageImages>
->
-

And update the exports:

 export const LLAVA_PHI3_MODELS = [
   LLAVA_PHI3_LATEST.name,
-  LLAVA_PHI3_8b.name,
 ] as const
 export type LlavaPhi3ChatModelProviderOptionsByName = {
   // Models with thinking and structured output support
   [LLAVA_PHI3_LATEST.name]: OllamaChatRequest &
     OllamaChatRequestMessages<OllamaMessageImages>
-  [LLAVA_PHI3_8b.name]: OllamaChatRequest &
-    OllamaChatRequestMessages<OllamaMessageImages>
 }
 export type LlavaPhi3ModelInputModalitiesByName = {
   // Models with text, image, audio, video (no document)
   [LLAVA_PHI3_LATEST.name]: typeof LLAVA_PHI3_LATEST.supports.input
-  [LLAVA_PHI3_8b.name]: typeof LLAVA_PHI3_8b.supports.input
 }
packages/typescript/ai-ollama/src/meta/model-meta-llama3.3.ts (1)

9-22: Minor: Inconsistent size unit - '43b' vs '43gb'.

Line 16 uses size: '43b' but the pattern in other model-meta files and in LLAMA3_3_70b at line 31 uses '43gb'. This appears to be a typo that should be corrected for consistency.

🔎 Proposed fix
 const LLAMA3_3_LATEST = {
   name: 'llama3.3:latest',
   supports: {
     input: ['text'],
     output: ['text'],
     capabilities: ['tools'],
   },
-  size: '43b',
+  size: '43gb',
   context: 128_000,
packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts (1)

9-22: Verify context window values against official Ollama specifications.

Both model constants specify context: 32_000 (lines 17, 32), but a previous review indicated the official Ollama library specifies 32,768 tokens for mistral:7b models. Although that review was marked as addressed, the values remain unchanged. Please confirm whether 32,000 or 32,768 is correct per the authoritative source.

Based on learnings, validate against https://ollama.com/library/mistral for accurate context window specifications.

#!/bin/bash
# Verify the context window specification for Mistral models from Ollama library
curl -s "https://ollama.com/library/mistral" | grep -iE "(context|tokens)" | head -20

Also applies to: 24-37

🧹 Nitpick comments (1)
packages/typescript/ai-ollama/src/meta/model-meta-deepseek-v3.1.ts (1)

33-34: Extra blank line breaks formatting consistency.

There's a stray blank line before the size field that doesn't match the compact formatting used in other model definitions in this file and across the codebase.

🔎 Proposed fix
     capabilities: ['thinking', 'tools'],
   },
-
   size: '404gb',
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9006c50 and e14e1eb.

📒 Files selected for processing (71)
  • examples/ts-react-chat/src/routes/api.tanchat.ts
  • packages/typescript/ai-ollama/src/adapters/summarize.ts
  • packages/typescript/ai-ollama/src/adapters/text.ts
  • packages/typescript/ai-ollama/src/index.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-athene.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-aya.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codegemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codellama.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-command-r-plus.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-command-r.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-command-r7b.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-ocr.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-r1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-v3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-devstral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-dolphin3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-exaone3.5.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-falcon2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-falcon3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-firefunction-v2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-guardian.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-chatqa.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-gradient.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.2-vision.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama4.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-llama3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-marco-o1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-large.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-nemo.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-small.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mixtral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-moondream.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-nemotron-mini.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-nemotron.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-olmo2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-opencoder.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-openhermes.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-phi3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-phi4.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwq.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-sailor2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-shieldgemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-smalltinker.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-smollm.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tulu3.ts
  • packages/typescript/ai-ollama/src/meta/models-meta.ts
  • packages/typescript/ai-ollama/src/model-meta.ts
🚧 Files skipped from review as they are similar to previous changes (26)
  • examples/ts-react-chat/src/routes/api.tanchat.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-falcon3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwq.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-openhermes.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-dolphin3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-smalltinker.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-nemotron-mini.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-falcon2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-olmo2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-shieldgemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.2-vision.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma3.ts
  • packages/typescript/ai-ollama/src/adapters/summarize.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-exaone3.5.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-smollm.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-guardian.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-sailor2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tulu3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-firefunction-v2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-command-r7b.ts
  • packages/typescript/ai-ollama/src/index.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-phi3.ts
🧰 Additional context used
📓 Path-based instructions (4)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.{ts,tsx}: Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from /adapters subpath rather than monolithic adapters
Use Zod for runtime schema validation and type inference, particularly for tool input/output definitions with toolDefinition() and Zod schema inference
Implement isomorphic tool system using toolDefinition() with .server() and .client() implementations for dual-environment execution
Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety
Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses

Files:

  • packages/typescript/ai-ollama/src/meta/model-meta-qwen.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-phi4.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-r1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-aya.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-llama3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-nemotron.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-opencoder.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-ocr.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-large.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-devstral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mixtral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-small.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-command-r.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-chatqa.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-gradient.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codegemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-command-r-plus.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts
  • packages/typescript/ai-ollama/src/meta/models-meta.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-moondream.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-athene.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-marco-o1.ts
  • packages/typescript/ai-ollama/src/adapters/text.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts
  • packages/typescript/ai-ollama/src/model-meta.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-nemo.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-v3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama4.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codellama.ts
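The "type-safe per-model configuration" guideline above can be sketched in plain TypeScript. This is a minimal illustration, not the package's actual API: `ModelEntry`, `OptionsFor`, `chat`, and the option shapes are all hypothetical names invented for the example.

```typescript
// Hypothetical sketch: a model registry whose entries carry their own
// option types, so provider options are checked against the chosen model.
interface ModelEntry<Options> {
  contextWindow: number
  capabilities: ReadonlyArray<'tools' | 'thinking' | 'vision'>
  defaultOptions?: Options
}

// Assumed option shapes -- not the real adapter types.
interface BaseOptions {
  temperature?: number
}
interface ThinkingOptions extends BaseOptions {
  think?: boolean
}

const modelMeta = {
  'llama3.2': {
    contextWindow: 128000,
    capabilities: ['tools'],
  } as ModelEntry<BaseOptions>,
  'deepseek-r1': {
    contextWindow: 128000,
    capabilities: ['thinking', 'tools'],
  } as ModelEntry<ThinkingOptions>,
} as const

type ModelName = keyof typeof modelMeta
// Resolve the option type for a given model name at compile time.
type OptionsFor<M extends ModelName> =
  (typeof modelMeta)[M] extends ModelEntry<infer O> ? O : never

function chat<M extends ModelName>(model: M, options: OptionsFor<M>): string {
  return `${model}:${JSON.stringify(options)}`
}

// OK: deepseek-r1 accepts `think`.
console.log(chat('deepseek-r1', { think: true }))
// A call like chat('llama3.2', { think: true }) would fail to type-check.
```

The key point is that the compiler, not the adapter at runtime, rejects options a model does not support.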
**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Use camelCase for function and variable names throughout the codebase

Files:

  • packages/typescript/ai-ollama/src/meta/model-meta-qwen.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-phi4.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-r1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-aya.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-llama3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-nemotron.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-opencoder.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-ocr.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-large.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-devstral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mixtral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-small.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-command-r.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-chatqa.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-gradient.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codegemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-command-r-plus.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts
  • packages/typescript/ai-ollama/src/meta/models-meta.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-moondream.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-athene.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-marco-o1.ts
  • packages/typescript/ai-ollama/src/adapters/text.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts
  • packages/typescript/ai-ollama/src/model-meta.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-nemo.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-v3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama4.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codellama.ts
packages/typescript/*/src/adapters/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking

Files:

  • packages/typescript/ai-ollama/src/adapters/text.ts
packages/typescript/*/src/model-meta.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Maintain model metadata files that define provider options and capabilities per model for per-model type safety

Files:

  • packages/typescript/ai-ollama/src/model-meta.ts
🧠 Learnings (15)
📓 Common learnings
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/model-meta.ts : Maintain model metadata files that define provider options and capabilities per model for per-model type safety
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/model-meta.ts : Maintain model metadata files that define provider options and capabilities per model for per-model type safety

Applied to files:

  • packages/typescript/ai-ollama/src/meta/model-meta-qwen.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-phi4.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-r1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-aya.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-llama3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-nemotron.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-opencoder.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-ocr.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-large.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-devstral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mixtral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-small.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-command-r.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-chatqa.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-gradient.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codegemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-command-r-plus.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts
  • packages/typescript/ai-ollama/src/meta/models-meta.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-moondream.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-athene.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-marco-o1.ts
  • packages/typescript/ai-ollama/src/adapters/text.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts
  • packages/typescript/ai-ollama/src/model-meta.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-nemo.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-v3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama4.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codellama.ts
📚 Learning: 2025-12-27T20:22:46.710Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts:92-97
Timestamp: 2025-12-27T20:22:46.710Z
Learning: Enforce that all model-meta files under packages/typescript/ai-ollama/src/meta/model-meta-*.ts use a standard, uniform template for capability-related comments, even if the text doesn’t perfectly align with each specific model capability. This ensures consistency across the codebase. Include a brief note in the template when a capability is not applicable, and avoid deviating from the established comment structure (e.g., header, template fields, and formatting) to maintain readability and tooling familiarity.

Applied to files:

  • packages/typescript/ai-ollama/src/meta/model-meta-qwen.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-phi4.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-r1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-aya.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-llama3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-nemotron.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-opencoder.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-ocr.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-large.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-devstral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mixtral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-small.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-command-r.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-chatqa.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-gradient.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codegemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-command-r-plus.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-moondream.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-athene.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-marco-o1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-nemo.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-v3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama4.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codellama.ts
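The uniform-template learning above can be pictured with a small sketch. The field names here are assumptions loosely based on the `capabilities` and `size` fields visible in the proposed fix at the top of this review, not the package's real schema.

```typescript
/**
 * example-model
 *
 * Capability comments follow the same template in every model-meta file,
 * with a brief note when a capability is not applicable, rather than
 * model-specific wording.
 */
const modelMetaExample = {
  name: 'example-model',
  contextWindow: 128000,
  capabilities: ['thinking', 'tools'] as const,
  size: '404gb', // compact formatting: no blank line before `size`
}

console.log(modelMetaExample.capabilities.length)
```

Keeping the header comment and field order identical across the 70-odd meta files is what makes review diffs like the one above easy to spot.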
📚 Learning: 2025-12-27T21:39:23.584Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts:70-75
Timestamp: 2025-12-27T21:39:23.584Z
Learning: Do not modify the standard template comments in ai-ollama model-meta files (e.g., the generic description like 'Models with text, image, audio, video (no document)'). These should remain uniform across all model-meta-*.ts files to maintain consistency, regardless of the specific model capabilities.

Applied to files:

  • packages/typescript/ai-ollama/src/meta/model-meta-qwen.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-phi4.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-r1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-aya.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-llama3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-nemotron.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-opencoder.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-ocr.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-large.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-devstral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mixtral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-small.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-command-r.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-chatqa.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-gradient.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codegemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-command-r-plus.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-moondream.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-athene.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-marco-o1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-nemo.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-v3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama4.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codellama.ts
📚 Learning: 2025-12-27T19:48:57.631Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts:7-18
Timestamp: 2025-12-27T19:48:57.631Z
Learning: When reviewing or updating metadata in the Ollama model metadata module (e.g., files under packages/typescript/ai-ollama/src/meta/), always treat https://ollama.com/library/ as the authoritative source for model specifications (size, context window, capabilities). Validate only against the official docs and avoid citing non-official sources. If discrepancies arise, reference the official docs and ensure metadata aligns with their specifications.

Applied to files:

  • packages/typescript/ai-ollama/src/meta/model-meta-qwen.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-phi4.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-r1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-aya.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-llama3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-nemotron.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-opencoder.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-ocr.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-large.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-devstral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mixtral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-small.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-command-r.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-chatqa.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-gradient.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codegemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-command-r-plus.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts
  • packages/typescript/ai-ollama/src/meta/models-meta.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-moondream.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-athene.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-marco-o1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-nemo.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-v3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama4.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codellama.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety

Applied to files:

  • packages/typescript/ai-ollama/src/meta/model-meta-qwen.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-phi4.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-r1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-aya.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-llama3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-nemotron.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-opencoder.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-ocr.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-large.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-devstral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mixtral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-small.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-command-r.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-chatqa.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-gradient.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codegemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-command-r-plus.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-moondream.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-athene.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-marco-o1.ts
  • packages/typescript/ai-ollama/src/adapters/text.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts
  • packages/typescript/ai-ollama/src/model-meta.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-nemo.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-v3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama4.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codellama.ts
📚 Learning: 2026-01-05T08:43:12.067Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts:17-17
Timestamp: 2026-01-05T08:43:12.067Z
Learning: In the ai-ollama package, specifically within model-meta files under packages/typescript/ai-ollama/src/meta/, interpret 'K' suffixes in context window sizes as thousands (×1000) rather than kibibytes (×1024). For example, '128K context window' should be treated as 128000, not 131072. Update parsing logic, constants, and related tests to reflect this interpretation, and add a regression test ensuring 1K, 2K, 128K, etc. are parsed as 1000-based values. This should apply to all meta files in the meta directory.

Applied to files:

  • packages/typescript/ai-ollama/src/meta/model-meta-qwen.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-phi4.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-r1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-aya.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-llama3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-nemotron.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-opencoder.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-ocr.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-large.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-devstral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-gemma2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mixtral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-small.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-command-r.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-chatqa.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-gradient.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codegemma.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-command-r-plus.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts
  • packages/typescript/ai-ollama/src/meta/models-meta.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-moondream.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-athene.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-marco-o1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-nemo.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-v3.1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama4.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-codellama.ts
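The 1000-based "K" interpretation described in this learning can be sketched as a small parser. The function name `parseContextWindow` and its exact signature are illustrative, not part of the ai-ollama package:

```typescript
// Hypothetical helper illustrating the learning above: "K" suffixes in
// context window labels mean thousands (×1000), not kibibytes (×1024).
function parseContextWindow(label: string): number {
  const match = /^(\d+(?:\.\d+)?)([KM]?)$/i.exec(label.trim())
  if (!match) throw new Error(`Unrecognized context window label: ${label}`)
  const value = Number(match[1])
  const suffix = match[2].toUpperCase()
  // 'K' → thousands, 'M' → millions, no suffix → raw token count.
  const multiplier = suffix === 'K' ? 1_000 : suffix === 'M' ? 1_000_000 : 1
  return value * multiplier
}

// Regression cases from the learning: 1K, 2K, 128K are 1000-based.
console.log(parseContextWindow('128K')) // 128000, not 131072
```

A regression test along these lines would pin down the 1000-based behavior across all meta files.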
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/adapters/*.ts : Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking

Applied to files:

  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-dense.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-phi4.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-aya.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-ocr.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-devstral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mixtral.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-moondream.ts
  • packages/typescript/ai-ollama/src/adapters/text.ts
  • packages/typescript/ai-ollama/src/model-meta.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-v3.1.ts
📚 Learning: 2025-12-27T20:22:51.232Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts:92-97
Timestamp: 2025-12-27T20:22:51.232Z
Learning: In the ai-ollama package's model-meta files (packages/typescript/ai-ollama/src/meta/model-meta-*.ts), capability-related comments follow a standard template format across all files for consistency, even if the comment text doesn't precisely match individual model capabilities. This is an intentional design choice to maintain uniformity across the codebase.

Applied to files:

  • packages/typescript/ai-ollama/src/meta/models-meta.ts
  • packages/typescript/ai-ollama/src/adapters/text.ts
  • packages/typescript/ai-ollama/src/model-meta.ts
📚 Learning: 2025-12-27T21:39:29.563Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts:70-75
Timestamp: 2025-12-27T21:39:29.563Z
Learning: The standard template comments in ai-ollama model-meta files (like "Models with text, image, audio, video (no document)") should not be modified to match individual model capabilities, as they are intentionally kept uniform across all model-meta-*.ts files for consistency, regardless of what each specific model actually supports.

Applied to files:

  • packages/typescript/ai-ollama/src/meta/models-meta.ts
  • packages/typescript/ai-ollama/src/adapters/text.ts
  • packages/typescript/ai-ollama/src/model-meta.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Maintain type safety through multimodal content support (image, audio, video, document) with model capability awareness

Applied to files:

  • packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from `/adapters` subpath rather than monolithic adapters

Applied to files:

  • packages/typescript/ai-ollama/src/adapters/text.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/index.ts : Export tree-shakeable adapters with clear subpath exports in package.json (e.g., `tanstack/ai/adapters`, `tanstack/ai-openai/adapters`) to minimize bundle size

Applied to files:

  • packages/typescript/ai-ollama/src/adapters/text.ts
📚 Learning: 2025-12-27T19:49:03.849Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts:7-18
Timestamp: 2025-12-27T19:49:03.849Z
Learning: For Ollama model metadata in the ai-ollama package, always consult https://ollama.com/library/ as the authoritative source for model specifications (size, context window, capabilities) rather than other documentation sources.

Applied to files:

  • packages/typescript/ai-ollama/src/adapters/text.ts
  • packages/typescript/ai-ollama/src/model-meta.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : Use camelCase for function and variable names throughout the codebase

Applied to files:

  • packages/typescript/ai-ollama/src/model-meta.ts
🧬 Code graph analysis (29)
packages/typescript/ai-ollama/src/meta/model-meta-qwen.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (3)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (59-70)
  • OllamaChatRequestMessages (93-106)
packages/typescript/ai-ollama/src/meta/model-meta-granite3-dense.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (3)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (59-70)
  • OllamaChatRequestMessages (93-106)
packages/typescript/ai-ollama/src/meta/model-meta-deepseek-r1.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (7)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (59-70)
  • OllamaChatRequestMessages (93-106)
  • OllamaMessageTools (116-119)
  • OllamaMessageThinking (108-110)
  • OllamaChatRequestTools (80-82)
  • OllamaChatRequestThinking (72-74)
packages/typescript/ai-ollama/src/meta/model-meta-aya.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (3)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (59-70)
  • OllamaChatRequestMessages (93-106)
packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (3)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (59-70)
  • OllamaChatRequestMessages (93-106)
packages/typescript/ai-ollama/src/meta/model-meta-gemma.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (3)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (59-70)
  • OllamaChatRequestMessages (93-106)
packages/typescript/ai-ollama/src/meta/model-meta-nemotron.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (5)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (59-70)
  • OllamaChatRequestMessages (93-106)
  • OllamaMessageTools (116-119)
  • OllamaChatRequestTools (80-82)
packages/typescript/ai-ollama/src/meta/model-meta-deepseek-ocr.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (4)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (59-70)
  • OllamaChatRequestMessages (93-106)
  • OllamaMessageImages (112-114)
packages/typescript/ai-ollama/src/meta/model-meta-mistral-large.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (5)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (59-70)
  • OllamaChatRequestMessages (93-106)
  • OllamaMessageTools (116-119)
  • OllamaChatRequestTools (80-82)
packages/typescript/ai-ollama/src/meta/model-meta-devstral.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (5)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (59-70)
  • OllamaChatRequestMessages (93-106)
  • OllamaMessageTools (116-119)
  • OllamaChatRequestTools (80-82)
packages/typescript/ai-ollama/src/meta/model-meta-gemma2.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (3)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (59-70)
  • OllamaChatRequestMessages (93-106)
packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (5)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (59-70)
  • OllamaChatRequestMessages (93-106)
  • OllamaMessageTools (116-119)
  • OllamaChatRequestTools (80-82)
packages/typescript/ai-ollama/src/meta/model-meta-mixtral.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (5)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (59-70)
  • OllamaChatRequestMessages (93-106)
  • OllamaMessageTools (116-119)
  • OllamaChatRequestTools (80-82)
packages/typescript/ai-ollama/src/meta/model-meta-mistral-small.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (5)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (59-70)
  • OllamaChatRequestMessages (93-106)
  • OllamaMessageTools (116-119)
  • OllamaChatRequestTools (80-82)
packages/typescript/ai-ollama/src/meta/model-meta-command-r.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (5)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (59-70)
  • OllamaChatRequestMessages (93-106)
  • OllamaMessageTools (116-119)
  • OllamaChatRequestTools (80-82)
packages/typescript/ai-ollama/src/meta/model-meta-llama3-chatqa.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (3)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (59-70)
  • OllamaChatRequestMessages (93-106)
packages/typescript/ai-ollama/src/meta/model-meta-llama3.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (3)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (59-70)
  • OllamaChatRequestMessages (93-106)
packages/typescript/ai-ollama/src/meta/model-meta-llama3-gradient.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (3)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (59-70)
  • OllamaChatRequestMessages (93-106)
packages/typescript/ai-ollama/src/meta/model-meta-codegemma.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (3)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (59-70)
  • OllamaChatRequestMessages (93-106)
packages/typescript/ai-ollama/src/meta/model-meta-command-r-plus.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (5)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (59-70)
  • OllamaChatRequestMessages (93-106)
  • OllamaMessageTools (116-119)
  • OllamaChatRequestTools (80-82)
packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (3)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (59-70)
  • OllamaChatRequestMessages (93-106)
packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (5)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (59-70)
  • OllamaChatRequestMessages (93-106)
  • OllamaMessageTools (116-119)
  • OllamaChatRequestTools (80-82)
packages/typescript/ai-ollama/src/meta/model-meta-moondream.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (4)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (59-70)
  • OllamaChatRequestMessages (93-106)
  • OllamaMessageImages (112-114)
packages/typescript/ai-ollama/src/meta/model-meta-llama2.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (3)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (59-70)
  • OllamaChatRequestMessages (93-106)
packages/typescript/ai-ollama/src/meta/model-meta-marco-o1.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (3)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (59-70)
  • OllamaChatRequestMessages (93-106)
packages/typescript/ai-ollama/src/model-meta.ts (5)
packages/typescript/ai-ollama/src/index.ts (4)
  • OLLAMA_TEXT_MODELS (14-14)
  • OLLAMA_TEXT_MODELS (25-25)
  • OllamaChatModelOptionsByName (40-40)
  • OllamaModelInputModalitiesByName (41-41)
packages/typescript/ai-ollama/src/meta/model-meta-dolphin3.ts (3)
  • DOLPHIN3_MODELS (33-33)
  • Dolphin3ChatModelProviderOptionsByName (46-50)
  • Dolphin3ModelInputModalitiesByName (52-56)
packages/typescript/ai-ollama/src/meta/model-meta-falcon2.ts (3)
  • FALCON2_MODELS (33-33)
  • Falcon2ChatModelProviderOptionsByName (46-50)
  • Falcon2ModelInputModalitiesByName (52-56)
packages/typescript/ai-ollama/src/meta/model-meta-marco-o1.ts (3)
  • MARCO_O1_MODELS (33-33)
  • MarcoO1ChatModelProviderOptionsByName (46-50)
  • MarcoO1ModelInputModalitiesByName (52-56)
packages/typescript/ai-ollama/src/meta/model-meta-phi4.ts (3)
  • PHI4_MODELS (33-33)
  • Phi4ChatModelProviderOptionsByName (46-50)
  • Phi4ModelInputModalitiesByName (52-56)
packages/typescript/ai-ollama/src/meta/model-meta-mistral-nemo.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (5)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (59-70)
  • OllamaChatRequestMessages (93-106)
  • OllamaMessageTools (116-119)
  • OllamaChatRequestTools (80-82)
packages/typescript/ai-ollama/src/meta/model-meta-deepseek-v3.1.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (6)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (59-70)
  • OllamaChatRequestMessages (93-106)
  • OllamaMessageTools (116-119)
  • OllamaChatRequestTools (80-82)
  • OllamaChatRequestThinking (72-74)
packages/typescript/ai-ollama/src/meta/model-meta-codellama.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (3)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (59-70)
  • OllamaChatRequestMessages (93-106)


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Fix all issues with AI Agents 🤖
In @packages/typescript/ai-ollama/src/meta/model-meta-llava-llama3.ts:
- Around line 52-60: The LlavaLlamaChatModelProviderOptionsByName type
overstates capabilities: it currently adds OllamaMessageTools and
OllamaChatRequestTools for LLAVA_LLAMA3_LATEST and LLAVA_LLAMA3_8b despite their
metadata listing only OllamaMessageImages/vision; remove OllamaMessageTools and
OllamaChatRequestTools from the mapped types (or conditionally include them only
for models that declare 'tools') and update the leading comment to reflect that
these models only support vision/images; ensure the resulting provider options
for LLAVA_LLAMA3_LATEST and LLAVA_LLAMA3_8b are typed as OllamaChatRequest &
OllamaChatRequestMessages<OllamaMessageImages> (no tools).
♻️ Duplicate comments (1)
packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts (1)

24-37: Previous naming issues have been resolved.

The constant names GRANITE3_MOE_1b and GRANITE3_MOE_3b now correctly match their respective model names 'granite3-moe:1b' and 'granite3-moe:3b'. The issues identified in previous reviews have been fixed.

Also applies to: 39-52

🧹 Nitpick comments (2)
packages/typescript/ai-ollama/src/meta/model-meta-llama4.ts (2)

69-69: Fix typo in commented type alias.

The commented type name references Llama3_4ChatModels but should be Llama4ChatModels to match the model family naming convention.

🔎 Proposed fix
-// export type Llama3_4ChatModels = (typeof LLAMA4_MODELS)[number]
+// export type Llama4ChatModels = (typeof LLAMA4_MODELS)[number]

71-83: Comment mentions "thinking" capability but models don't support it.

The comment on line 73 states "Models with thinking and structured output support," but none of the Llama4 models include 'thinking' in their capabilities array (they only have 'tools' and 'vision').

If this is part of a standard template comment used across all model-meta files for consistency (per the learnings), then it can remain. Otherwise, consider updating it to accurately reflect the actual capabilities.

Based on learnings, standard template comments are maintained for consistency across model-meta files.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e14e1eb and f739299.

📒 Files selected for processing (10)
  • packages/typescript/ai-ollama/src/adapters/summarize.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-r1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-chatqa.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-gradient.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama4.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-llama3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-nemo.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5.ts
🚧 Files skipped from review as they are similar to previous changes (5)
  • packages/typescript/ai-ollama/src/adapters/summarize.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-mistral-nemo.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-deepseek-r1.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-chatqa.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5.ts
🧰 Additional context used
📓 Path-based instructions (2)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.{ts,tsx}: Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from /adapters subpath rather than monolithic adapters
Use Zod for runtime schema validation and type inference, particularly for tool input/output definitions with toolDefinition() and Zod schema inference
Implement isomorphic tool system using toolDefinition() with .server() and .client() implementations for dual-environment execution
Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety
Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses

Files:

  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-gradient.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama4.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-llama3.ts
**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Use camelCase for function and variable names throughout the codebase

Files:

  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-gradient.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama4.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-llama3.ts
🧠 Learnings (7)
📓 Common learnings
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts:92-97
Timestamp: 2025-12-27T20:22:51.232Z
Learning: In the ai-ollama package's model-meta files (packages/typescript/ai-ollama/src/meta/model-meta-*.ts), capability-related comments follow a standard template format across all files for consistency, even if the comment text doesn't precisely match individual model capabilities. This is an intentional design choice to maintain uniformity across the codebase.
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/model-meta.ts : Maintain model metadata files that define provider options and capabilities per model for per-model type safety
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts:7-18
Timestamp: 2025-12-27T19:49:03.849Z
Learning: For Ollama model metadata in the ai-ollama package, always consult https://ollama.com/library/ as the authoritative source for model specifications (size, context window, capabilities) rather than other documentation sources.
📚 Learning: 2025-12-27T20:22:46.710Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts:92-97
Timestamp: 2025-12-27T20:22:46.710Z
Learning: Enforce that all model-meta files under packages/typescript/ai-ollama/src/meta/model-meta-*.ts use a standard, uniform template for capability-related comments, even if the text doesn’t perfectly align with each specific model capability. This ensures consistency across the codebase. Include a brief note in the template when a capability is not applicable, and avoid deviating from the established comment structure (e.g., header, template fields, and formatting) to maintain readability and tooling familiarity.

Applied to files:

  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-gradient.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama4.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-llama3.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/model-meta.ts : Maintain model metadata files that define provider options and capabilities per model for per-model type safety

Applied to files:

  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-gradient.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama4.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-llama3.ts
📚 Learning: 2025-12-27T21:39:23.584Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts:70-75
Timestamp: 2025-12-27T21:39:23.584Z
Learning: Do not modify the standard template comments in ai-ollama model-meta files (e.g., the generic description like 'Models with text, image, audio, video (no document)'). These should remain uniform across all model-meta-*.ts files to maintain consistency, regardless of the specific model capabilities.

Applied to files:

  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-gradient.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama4.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-llama3.ts
📚 Learning: 2025-12-27T19:48:57.631Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts:7-18
Timestamp: 2025-12-27T19:48:57.631Z
Learning: When reviewing or updating metadata in the Ollama model metadata module (e.g., files under packages/typescript/ai-ollama/src/meta/), always treat https://ollama.com/library/ as the authoritative source for model specifications (size, context window, capabilities). Validate only against the official docs and avoid citing non-official sources. If discrepancies arise, reference the official docs and ensure metadata aligns with their specifications.

Applied to files:

  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-gradient.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama4.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-llama3.ts
📚 Learning: 2026-01-05T08:43:12.067Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts:17-17
Timestamp: 2026-01-05T08:43:12.067Z
Learning: In the ai-ollama package, specifically within model-meta files under packages/typescript/ai-ollama/src/meta/, interpret 'K' suffixes in context window sizes as thousands (×1000) rather than kibibytes (×1024). For example, '128K context window' should be treated as 128000, not 131072. Update parsing logic, constants, and related tests to reflect this interpretation, and add a regression test ensuring 1K, 2K, 128K, etc. are parsed as 1000-based values. This should apply to all meta files in the meta directory.

Applied to files:

  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-gradient.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama4.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-llama3.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety

Applied to files:

  • packages/typescript/ai-ollama/src/meta/model-meta-llama3-gradient.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama3.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llama4.ts
  • packages/typescript/ai-ollama/src/meta/model-meta-llava-llama3.ts
🧬 Code graph analysis (4)
packages/typescript/ai-ollama/src/meta/model-meta-llama3-gradient.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (3)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (59-70)
  • OllamaChatRequestMessages (93-106)
packages/typescript/ai-ollama/src/meta/model-meta-llama3.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (3)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (59-70)
  • OllamaChatRequestMessages (93-106)
packages/typescript/ai-ollama/src/meta/model-meta-llama4.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (6)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (59-70)
  • OllamaChatRequestMessages (93-106)
  • OllamaMessageTools (116-119)
  • OllamaMessageImages (112-114)
  • OllamaChatRequestTools (80-82)
packages/typescript/ai-ollama/src/meta/model-meta-llava-llama3.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (6)
  • OllamaModelMeta (3-13)
  • OllamaChatRequest (59-70)
  • OllamaChatRequestMessages (93-106)
  • OllamaMessageImages (112-114)
  • OllamaMessageTools (116-119)
  • OllamaChatRequestTools (80-82)
🔇 Additional comments (15)
packages/typescript/ai-ollama/src/meta/model-meta-llama3-gradient.ts (4)

1-5: LGTM!

Imports are correct and all imported types are properly utilized for the model metadata type constraints.


7-44: Type safety properly enforced; verify model specifications.

The previous issues have been addressed:

  • ✅ Line 14 now uses '4.7gb' (consistent unit format)
  • ✅ Line 42-44 now uses proper type constraint instead of any

The as const satisfies pattern with OllamaChatRequest & OllamaChatRequestMessages provides proper type-safe per-model configuration. Based on learnings, please verify the model specifications (1M context window, sizes) against https://ollama.com/library/llama3-gradient.
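The `as const satisfies` pattern endorsed here can be shown in isolation. The `ModelMeta` shape below is a simplified stand-in for `OllamaModelMeta`; the name, context, and size values echo the llama3-gradient specs discussed above:

```typescript
// Simplified stand-in for OllamaModelMeta (the real type lives in
// packages/typescript/ai-ollama/src/meta/models-meta.ts).
type ModelMeta = {
  name: string
  context: number
  size: `${number}${'mb' | 'gb'}`
}

// `as const` preserves the literal types ('llama3-gradient:latest',
// 1_000_000, '4.7gb'), while `satisfies` validates the object against
// ModelMeta without widening those literals to string/number.
const LLAMA3_GRADIENT_LATEST = {
  name: 'llama3-gradient:latest',
  context: 1_000_000,
  size: '4.7gb',
} as const satisfies ModelMeta
```

This is what makes per-model type maps possible downstream: because the literals survive, computed property keys like `[LLAMA3_GRADIENT_LATEST.name]` resolve to exact string keys rather than `string`.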


46-50: LGTM!

The exported LLAMA3_GRADIENT_MODELS tuple correctly aggregates all model names with as const for type-safe model selection.


62-75: LGTM!

The per-model type mappings correctly enable type-safe access to provider options and input modalities using computed property keys. This aligns with the coding guidelines for type-safe per-model configuration. Based on learnings, the template comments are intentionally uniform across all model-meta files.

packages/typescript/ai-ollama/src/meta/model-meta-llama4.ts (4)

1-8: LGTM!

The imports are complete and appropriate for the model metadata definitions.


55-59: LGTM!

The model names export is correctly structured.


85-90: LGTM!

The input modalities type mapping is correctly structured. The template comment follows the standard format used across model-meta files for consistency.

Based on learnings, template comments like "Models with text, image, audio, video (no document)" remain uniform across all model-meta files for consistency, even if they don't precisely match individual model capabilities.


10-53: All model specifications in this file are correct and match the official Ollama source. The context window values are intentionally different by variant design—the larger 128-expert model (245GB) legitimately has a smaller 1M-token context window, while the smaller Scout/Maverick models (67GB) have a 10M-token context window. No changes needed.

Likely an incorrect or invalid review comment.

packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts (2)

1-89: Well-structured model metadata implementation.

The file follows the established patterns for Ollama model metadata files:

  • Proper use of const satisfies OllamaModelMeta for type safety
  • Per-model provider options and input modalities type maps
  • Consistent export structure

The implementation aligns with coding guidelines for type-safe per-model configuration.


9-52: Correct the size specification for granite3-moe:latest.

According to the official Ollama library, the context window (4K tokens) and sizes for the 1b (822mb) and 3b (2.1gb) variants are correct. However, granite3-moe:latest is the 1.33B parameter variant, not the 1b variant. The code incorrectly shows both GRANITE3_MOE_LATEST and GRANITE3_MOE_1b as 822mb. Update GRANITE3_MOE_LATEST to reflect its correct download size, which should differ from the 1b variant given the parameter difference (1.33B vs 1B).

⛔ Skipped due to learnings
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts:17-17
Timestamp: 2026-01-05T08:43:12.067Z
Learning: In the ai-ollama package's model-meta files, when Ollama library displays context windows with "K" suffix (e.g., "128K"), the codebase interprets this as thousands (×1000), not kibibytes (×1024). For example, "128K context window" is represented as context: 128_000, not 131_072.
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts:7-18
Timestamp: 2025-12-27T19:49:03.849Z
Learning: For Ollama model metadata in the ai-ollama package, always consult https://ollama.com/library/ as the authoritative source for model specifications (size, context window, capabilities) rather than other documentation sources.
packages/typescript/ai-ollama/src/meta/model-meta-llama3.ts (3)

7-18: Previous issue resolved: size unit corrected.

The size unit has been corrected from '4.7b' to '4.7gb', matching the convention used across other models.


20-31: Previous issue resolved: model name corrected.

The model name has been corrected from 'llama3:7b' to 'llama3:8b', now matching the constant name and Ollama's available variants.


7-44: Correct context window values to match official Ollama specifications.

The model sizes are accurate (4.7gb for 8b, 40gb for 70b), and LLAMA3_LATEST being identical to LLAMA3_8b is correct since both refer to the 8b variant. However, the context window for all three models should be 8_192 tokens, not 8_000. The official Ollama library specifies 8,192 tokens (8K) as the default context window for all three variants (llama3:latest, llama3:8b, and llama3:70b).

⛔ Skipped due to learnings
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts:17-17
Timestamp: 2026-01-05T08:43:12.067Z
Learning: In the ai-ollama package's model-meta files, when Ollama library displays context windows with "K" suffix (e.g., "128K"), the codebase interprets this as thousands (×1000), not kibibytes (×1024). For example, "128K context window" is represented as context: 128_000, not 131_072.
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts:7-18
Timestamp: 2025-12-27T19:49:03.849Z
Learning: For Ollama model metadata in the ai-ollama package, always consult https://ollama.com/library/ as the authoritative source for model specifications (size, context window, capabilities) rather than other documentation sources.
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts:92-97
Timestamp: 2025-12-27T20:22:51.232Z
Learning: In the ai-ollama package's model-meta files (packages/typescript/ai-ollama/src/meta/model-meta-*.ts), capability-related comments follow a standard template format across all files for consistency, even if the comment text doesn't precisely match individual model capabilities. This is an intentional design choice to maintain uniformity across the codebase.
packages/typescript/ai-ollama/src/meta/model-meta-llava-llama3.ts (2)

36-39: LGTM: Model exports and input modality type map are correct.

The LLAVA_LLAMA3_MODELS export correctly references the model names, and LlavaLlamaModelInputModalitiesByName properly maps each model to its supports.input type, ensuring type-safe input modality checking.

Also applies to: 62-66
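
A sketch of how such an input-modalities type map enables type-safe modality checks (identifiers and values here are illustrative stand-ins, not the package's actual exports):

```typescript
// Illustrative model-meta object; values follow the review's stated specs.
const LLAVA_LLAMA3_8b = {
  name: 'llava-llama3:8b',
  context: 8_000,
  size: '5.5gb',
  supports: { input: ['text', 'image'] },
} as const

// Maps each model name to its literal `supports.input` tuple.
type LlavaInputModalitiesByName = {
  [K in typeof LLAVA_LLAMA3_8b.name]: (typeof LLAVA_LLAMA3_8b)['supports']['input']
}

// Callers can narrow on modality; the element union of `supports.input`
// constrains the argument to `includes` at compile time.
function acceptsImages(model: typeof LLAVA_LLAMA3_8b): boolean {
  return model.supports.input.includes('image')
}
```

Here `LlavaInputModalitiesByName['llava-llama3:8b']` resolves to the readonly tuple `['text', 'image']`, so modality checks fail at compile time rather than at runtime.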


10-34: Correct context window specification to match Ollama documentation.

The context window is specified as 8_000 but official Ollama documentation confirms the correct value is 8_192 tokens for llava-llama3 models. Update both LLAVA_LLAMA3_LATEST and LLAVA_LLAMA3_8b to use context: 8_192.

The size (5.5gb) and vision capabilities are accurate per documentation.

⛔ Skipped due to learnings
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts:92-97
Timestamp: 2025-12-27T20:22:51.232Z
Learning: In the ai-ollama package's model-meta files (packages/typescript/ai-ollama/src/meta/model-meta-*.ts), capability-related comments follow a standard template format across all files for consistency, even if the comment text doesn't precisely match individual model capabilities. This is an intentional design choice to maintain uniformity across the codebase.
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts:7-18
Timestamp: 2025-12-27T19:49:03.849Z
Learning: For Ollama model metadata in the ai-ollama package, always consult https://ollama.com/library/ as the authoritative source for model specifications (size, context window, capabilities) rather than other documentation sources.
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts:70-75
Timestamp: 2025-12-27T21:39:29.563Z
Learning: The standard template comments in ai-ollama model-meta files (like "Models with text, image, audio, video (no document)") should not be modified to match individual model capabilities, as they are intentionally kept uniform across all model-meta-*.ts files for consistency, regardless of what each specific model actually supports.
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts:17-17
Timestamp: 2026-01-05T08:43:12.067Z
Learning: In the ai-ollama package's model-meta files, when Ollama library displays context windows with "K" suffix (e.g., "128K"), the codebase interprets this as thousands (×1000), not kibibytes (×1024). For example, "128K context window" is represented as context: 128_000, not 131_072.
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/model-meta.ts : Maintain model metadata files that define provider options and capabilities per model for per-model type safety


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
.changeset/early-doors-strive.md (1)

1-5: Changeset format is valid; consider providing more implementation detail in the description.

The changeset correctly specifies a minor version bump for @tanstack/ai-ollama, which is appropriate for feature additions without breaking changes. However, the description is quite high-level.

Given the scope of changes (centralizing model metadata into a shared registry, adding multiple model-meta files, and updating index exports), consider expanding the description to briefly mention key changes, such as:

  • "Centralize Ollama model metadata into a shared registry with per-model type definitions"
  • "Add support for additional model families (athene, aya, codellama, deepseek, gemma, llama, qwen, etc.)"
  • "Export aggregated model sets and typed provider options for improved type safety"

This will help users understand what improvements they gain in this release.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ed38a62 and c3a82db.

📒 Files selected for processing (2)
  • .changeset/early-doors-strive.md
  • packages/typescript/ai-ollama/src/adapters/summarize.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/typescript/ai-ollama/src/adapters/summarize.ts
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts:92-97
Timestamp: 2025-12-27T20:22:51.232Z
Learning: In the ai-ollama package's model-meta files (packages/typescript/ai-ollama/src/meta/model-meta-*.ts), capability-related comments follow a standard template format across all files for consistency, even if the comment text doesn't precisely match individual model capabilities. This is an intentional design choice to maintain uniformity across the codebase.
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/model-meta.ts : Maintain model metadata files that define provider options and capabilities per model for per-model type safety
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/adapters/*.ts : Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts:7-18
Timestamp: 2025-12-27T19:49:03.849Z
Learning: For Ollama model metadata in the ai-ollama package, always consult https://ollama.com/library/ as the authoritative source for model specifications (size, context window, capabilities) rather than other documentation sources.
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from `/adapters` subpath rather than monolithic adapters
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety
📚 Learning: 2025-12-27T20:22:51.232Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts:92-97
Timestamp: 2025-12-27T20:22:51.232Z
Learning: In the ai-ollama package's model-meta files (packages/typescript/ai-ollama/src/meta/model-meta-*.ts), capability-related comments follow a standard template format across all files for consistency, even if the comment text doesn't precisely match individual model capabilities. This is an intentional design choice to maintain uniformity across the codebase.

Applied to files:

  • .changeset/early-doors-strive.md
📚 Learning: 2025-12-27T19:49:03.849Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts:7-18
Timestamp: 2025-12-27T19:49:03.849Z
Learning: For Ollama model metadata in the ai-ollama package, always consult https://ollama.com/library/ as the authoritative source for model specifications (size, context window, capabilities) rather than other documentation sources.

Applied to files:

  • .changeset/early-doors-strive.md
📚 Learning: 2025-12-27T21:39:29.563Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts:70-75
Timestamp: 2025-12-27T21:39:29.563Z
Learning: The standard template comments in ai-ollama model-meta files (like "Models with text, image, audio, video (no document)") should not be modified to match individual model capabilities, as they are intentionally kept uniform across all model-meta-*.ts files for consistency, regardless of what each specific model actually supports.

Applied to files:

  • .changeset/early-doors-strive.md

@AlemTuzlak AlemTuzlak merged commit 9fa1297 into main Jan 5, 2026
6 checks passed
@AlemTuzlak AlemTuzlak deleted the ollama-temp branch January 5, 2026 16:02
@github-actions github-actions bot mentioned this pull request Jan 5, 2026


3 participants