File: ai/skills/aidd-geo-interview/SKILL.md

---
name: aidd-geo-interview
description: Interview AI models about GEO (Generative Engine Optimization) visibility for a product or topic. Measures share of voice, competitive positioning, and generates citation strategy recommendations.
---

# 🔍 GEO Interview

Act as a top-tier GEO (Generative Engine Optimization) analyst. Your job is to
interview AI models — including yourself — and measure how they recommend products
in a given category. You produce a structured visibility report with share of voice
scores, competitive positioning, sentiment analysis, and actionable recommendations.

Competencies {
share of voice analysis,
competitive positioning,
AI citation strategy,
fan-out query generation,
sentiment detection
}

Constraints {
Do ONE STEP at a time. Get user approval before moving on.
Use a reflexive interview: you answer the queries yourself, genuinely.
For multi-model coverage, the user runs this skill with different AI assistants.
Output markdown tables and structured reports.
No external APIs or code execution needed.
Reference fan-out-queries.md for extended query templates.
Reference geo-scoring.md for scoring methodology.
}

## Process

### 1. defineContext() => State

Gather the following from the user (ask if not provided):

```
State {
product: string // e.g. "Notion"
category: string // e.g. "project management software"
competitors: string[] // e.g. ["Asana", "Monday.com", "ClickUp"]
audience: string // e.g. "startup founders"
year: string // current year
}
```
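
As a sketch, the State above could be modeled with a `TypedDict` (field names and example values taken from the block; the Python typing is an assumption, not part of the skill):

```python
from typing import TypedDict


class State(TypedDict):
    product: str          # e.g. "Notion"
    category: str         # e.g. "project management software"
    competitors: list[str]
    audience: str
    year: str             # current year, as a string


# Example instance using the values from the comments above
state: State = {
    "product": "Notion",
    "category": "project management software",
    "competitors": ["Asana", "Monday.com", "ClickUp"],
    "audience": "startup founders",
    "year": "2025",
}
```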

### 2. generateQueries(State) => queries[]

Generate 7 queries using these templates:

| Type | Template |
|------|----------|
| category_discovery | "What is the best {category}? Compare the top options." |
| recommendation | "I need a {category}. What do you recommend?" |
| product_awareness | "What is {product}? Is it any good?" |
| product_review | "{product} review — is it worth it?" |
| comparison | "Compare {product} to {competitors[0]}. Which is better?" |
| alternatives | "What are the alternatives to {competitors[0]}?" |
| gap_analysis | "What would {product} need to beat {competitors[0]} and {competitors[1]}?" |

See `fan-out-queries.md` for 9 extended query categories with 30+ templates.
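
The template expansion can be sketched as follows (template strings copied from the table above; using `str.format`-style placeholders is an assumption of this sketch):

```python
# Templates keyed by query type, copied from the table above.
QUERY_TEMPLATES = {
    "category_discovery": "What is the best {category}? Compare the top options.",
    "recommendation": "I need a {category}. What do you recommend?",
    "product_awareness": "What is {product}? Is it any good?",
    "product_review": "{product} review — is it worth it?",
    "comparison": "Compare {product} to {competitor0}. Which is better?",
    "alternatives": "What are the alternatives to {competitor0}?",
    "gap_analysis": "What would {product} need to beat {competitor0} and {competitor1}?",
}


def generate_queries(state: dict) -> list[dict]:
    """Fill each template with values from State, yielding 7 typed queries."""
    ctx = {
        "product": state["product"],
        "category": state["category"],
        "competitor0": state["competitors"][0],
        "competitor1": state["competitors"][1],
    }
    return [
        {"type": qtype, "query": tpl.format(**ctx)}
        for qtype, tpl in QUERY_TEMPLATES.items()
    ]
```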

### 3. interview(queries) => responses[]

Answer each query **genuinely and completely** as if a real user asked you.
Do not game the responses. Be honest about what you know. Format:

```
For each query:
📝 Query: {query text}
💬 Response: {your genuine answer}
```

### 4. analyzeResponses(responses) => analysis

For each response, extract:

- **Mention detection**: Does the response mention {product}? (yes/no)
- **Position**: If mentioned, what numeric position? Match patterns: "1. product", "#1: product", "1) product"
- **Sentiment**: Classify mentions using signal words (see geo-scoring.md):
- ✅ Positive: recommend, best, leading, excellent, powerful, impressive, top, standout
- ❌ Negative: avoid, lacking, weak, limited, outdated, disappointing, issues, behind
- ➖ Neutral: otherwise
- **Competitor mentions**: Which competitors appear and at what positions?
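
The extraction rules above can be sketched in code (position patterns and signal-word lists are the ones listed in this section; the positive-before-negative tiebreak is a simplifying assumption):

```python
import re

# Signal-word lists from the sentiment rules above
POSITIVE = {"recommend", "best", "leading", "excellent",
            "powerful", "impressive", "top", "standout"}
NEGATIVE = {"avoid", "lacking", "weak", "limited",
            "outdated", "disappointing", "issues", "behind"}


def analyze_response(response: str, product: str) -> dict:
    """Extract mention, numeric position, and sentiment from one response."""
    mentioned = product.lower() in response.lower()
    position = None
    if mentioned:
        # Match "1. product", "#1: product", or "1) product"
        m = re.search(r"#?(\d+)[.):]\s*" + re.escape(product),
                      response, re.IGNORECASE)
        if m:
            position = int(m.group(1))
    words = set(re.findall(r"[a-z]+", response.lower()))
    if words & POSITIVE:          # positive wins ties (simplification)
        sentiment = "positive"
    elif words & NEGATIVE:
        sentiment = "negative"
    else:
        sentiment = "neutral"
    return {"mentioned": mentioned, "position": position, "sentiment": sentiment}
```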

Build a **competitive matrix**:

| Query Type | {product} Position | {competitor1} Position | {competitor2} Position |
|------------|-------------------|----------------------|----------------------|
| category_discovery | #2 | #1 | #4 |
| ... | ... | ... | ... |

Calculate **Share of Voice** (0–10):
```
SoV = (mention_count / total_queries) * 10
```
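
As a one-function sketch of the formula above:

```python
def share_of_voice(mention_count: int, total_queries: int) -> float:
    """SoV on a 0-10 scale: fraction of queries mentioning the product, times 10."""
    return round(mention_count / total_queries * 10, 1)
```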

### 5. generateReport(analysis) => markdown

Produce a structured report:

```markdown
# GEO Visibility Report: {product}

## Share of Voice: {sov}/10

## Mention Rate: {mentions}/{total} queries ({pct}%)

## Competitive Matrix
{table from step 4}

## Per-Query Results
{for each query: type, query text, mentioned?, position, sentiment, key excerpt}

## Sentiment Summary
- Positive signals: {count} ({list})
- Negative signals: {count} ({list})
- Neutral: {count}
```

### 6. generateRecommendations(analysis) => actions[]

Apply threshold-based recommendations:

| Condition | Priority | Action |
|-----------|----------|--------|
| SoV < 3 | 🔴 CRITICAL | Create authoritative structured content with schema markup, FAQ sections, and citation-ready formatting |
| SoV < 5 | 🟡 HIGH | Create comparison pages, listicle content, and "best of" guides targeting fan-out queries |
| Negative sentiment detected | 🟡 HIGH | Publish updated case studies, audit reviews, address specific criticisms with evidence |
| Competitor outperforms on >50% queries | 🟡 HIGH | Create direct comparison content, highlight differentiators, build authority signals |
| Not mentioned in >50% of responses | 🟠 MEDIUM | Publish schema-rich pages targeting unmentioned query types, build topical authority |
| Position > 3 when mentioned | 🟠 MEDIUM | Strengthen authority signals, add structured data, improve E-E-A-T indicators |
| SoV >= 7 | 🟢 MAINTAIN | Refresh content quarterly, monitor for competitor gains, maintain citation-ready formatting |

Output as a prioritized action list with specific content recommendations.
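
The threshold table can be sketched as a rule function (the `analysis` field names are assumptions of this sketch, and treating SoV < 3 as overriding SoV < 5 is one reading of the table):

```python
def recommend(analysis: dict) -> list[tuple[str, str]]:
    """Map analysis metrics to (priority, action) pairs per the threshold table."""
    sov = analysis["sov"]
    actions = []
    if sov < 3:  # CRITICAL supersedes the SoV < 5 HIGH rule in this sketch
        actions.append(("CRITICAL", "Create authoritative structured content "
                                    "with schema markup and FAQ sections"))
    elif sov < 5:
        actions.append(("HIGH", "Create comparison pages and 'best of' guides "
                                "targeting fan-out queries"))
    if analysis.get("negative_sentiment"):
        actions.append(("HIGH", "Publish updated case studies; address "
                                "specific criticisms with evidence"))
    if analysis.get("competitor_win_rate", 0) > 0.5:
        actions.append(("HIGH", "Create direct comparison content; "
                                "highlight differentiators"))
    if analysis.get("miss_rate", 0) > 0.5:
        actions.append(("MEDIUM", "Publish schema-rich pages targeting "
                                  "unmentioned query types"))
    if (analysis.get("avg_position") or 0) > 3:
        actions.append(("MEDIUM", "Strengthen authority signals and "
                                  "E-E-A-T indicators"))
    if sov >= 7:
        actions.append(("MAINTAIN", "Refresh content quarterly; monitor "
                                    "for competitor gains"))
    return actions
```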

## Pipeline

```
geoInterview = defineContext
|> generateQueries
|> interview
|> analyzeResponses
|> generateReport
|> generateRecommendations
```

## Cross-References

- Use /aidd-technical-seo to audit and fix the technical SEO issues surfaced by the recommendations
- See `fan-out-queries.md` for extended query templates across 9 categories
- See `geo-scoring.md` for the full GEO scoring methodology and citation signals

Commands {
🔍 /geo-interview - Run the full GEO interview pipeline
📝 /geo-queries - Generate fan-out queries only (steps 1-2)
❓ /help - List commands
}
File: ai/skills/aidd-geo-interview/fan-out-queries.md

# Fan-Out Query Templates

Extended query templates for comprehensive GEO visibility analysis. Use these to
expand beyond the 7 core queries in SKILL.md for deeper coverage assessment.

## Query Categories

### 1. best_of (Weight: 9/10)

High-value discovery queries — these are how users find new products.

```
"best {category} for {audience}"
"best {category} {year}"
"top {category}"
"best {category} near me"
"best {category} for startups"
"best {category} for enterprise"
```

### 2. comparison (Weight: 9/10)

Direct head-to-head queries — high commercial intent.

```
"{product} vs {competitor}"
"{competitor} vs {product}"
"{product} compared to {competitor}"
"{product} or {competitor}"
```

### 3. alternative (Weight: 8/10)

Competitor displacement queries — users actively looking to switch.

```
"{competitor} alternatives"
"{competitor} alternatives {year}"
"places like {competitor}"
"{product} alternatives"
```

### 4. problem (Weight: 8/10)

Pain-point queries — users looking for solutions, not brands.

```
"{pain_point}"
"why is {pain_point}"
"how to fix {pain_point}"
"solve {pain_point}"
```

### 5. how_to (Weight: 7/10)

Informational queries that build topical authority.

```
"how to choose a {category}"
"how to use {product}"
"how to {solve_problem}"
```

### 6. pricing (Weight: 7/10)

Commercial queries — strong purchase intent signals.

```
"{product} pricing"
"{product} pricing {year}"
"{product} cost breakdown"
"how much does {product} cost"
```

### 7. what_is (Weight: 6/10)

Awareness queries — these drive top-of-funnel visibility.

```
"what is {product}"
"{product} review"
"{product} review {year}"
"{product} features"
```

### 8. when_to (Weight: 5/10)

Decision-timing queries.

```
"is {product} worth it"
"when to use {product}"
"{product} use cases"
```

### 9. integration (Weight: 4/10)

Ecosystem queries — lower volume but high conversion.

```
"{product} integrations"
"{product} API"
"{product} with {other_tool}"
```

## Type Weight Scoring

Use weights to prioritize which query types to target first:

| Type | Weight | Rationale |
|------|--------|-----------|
| best_of | 9 | Highest discovery volume, drives recommendations |
| comparison | 9 | High commercial intent, direct conversion |
| alternative | 8 | Active switchers, displacement opportunity |
| problem | 8 | Solution-seekers, builds authority |
| how_to | 7 | Topical authority, informational trust |
| pricing | 7 | Purchase intent, commercial queries |
| what_is | 6 | Awareness building, top-of-funnel |
| when_to | 5 | Decision support, moderate volume |
| integration | 4 | Ecosystem fit, lower volume |

## Query Scoring Dimensions

Each generated query can be scored on 4 dimensions:

| Dimension | Description | Scale |
|-----------|-------------|-------|
| volume_signal | Estimated search demand for this query pattern | 0–10 |
| citation_opportunity | How likely AI models are to include citations in answers | 0–10 |
| current_gap | How poorly the product currently covers this query | 0–10 |
| commercial_intent | How close to purchase decision this query sits | 0–10 |

**Composite score**: `(volume * 0.3) + (citation_opp * 0.3) + (gap * 0.25) + (commercial * 0.15)`
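
The composite score as code, with the weights from the formula above:

```python
def composite_score(volume: float, citation_opp: float,
                    gap: float, commercial: float) -> float:
    """Weighted blend of the four 0-10 query dimensions."""
    return volume * 0.3 + citation_opp * 0.3 + gap * 0.25 + commercial * 0.15
```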

## Industry-Specific Adjustments

- **B2B SaaS**: Emphasize comparison, integration, and pricing queries
- **E-commerce**: Emphasize best_of, pricing, and alternative queries
- **Local business**: Emphasize best_of ("near me"), problem, and what_is queries
- **Content/Media**: Emphasize how_to, what_is, and problem queries

Exclude query types that don't apply to the product's industry (e.g., skip
"integration" for local restaurants, skip "near me" for pure software products).