Generative AI Training for City of Garden Grove Employees

1. Why This Training

Generative AI tools are already in use across city departments - often informally, sometimes with personal accounts, frequently without clear understanding of risks or policies.

The stakes:

  • Decisions assisted by AI depend on accurate information
  • Public trust depends on transparent, accurate communications
  • Public Records Act may apply to AI-generated content
  • Data breaches with PII create legal and reputational harm
  • AI outputs can embed biases that affect decisions

What this training covers:

  • What GenAI is and how it differs from other tools you use
  • Garden Grove's policy and why it exists
  • Risks specific to GenAI
  • Practical guidelines for using AI responsibly and effectively
  • Your responsibilities
  • Example scenarios

2. What is Generative AI

AI systems have been around for a long time, but most AI you've encountered acts on existing data:

  • Face recognition in security systems, medical imaging
  • Speech recognition
  • Spam detection in email
  • Market analysis, fraud detection
  • Weather prediction
  • Self-driving cars

Generative AI creates new content:

  • Generates text, images, audio, video based on patterns in training data
  • Adapts to your prompts and the flow of the conversation
  • Common tools: ChatGPT, Claude, Gemini
  • Increasingly embedded in software you already use (Google search, Grammarly)

Critical distinction: GenAI is designed to generate creative and novel content, not necessarily factual or accurate content. That makes it powerful for brainstorming and drafting, but risky for decision-making and factual reporting unless outputs are grounded and double-checked.

2B. Understanding Key Terms

Before diving deeper, let's clarify some terms you'll encounter when working with AI:

Prompt

Your prompt is everything you provide to the AI system:

  • Text you type: Questions, instructions, background context, constraints
  • Files you upload: Documents, spreadsheets, images, data files
  • Features you enable: Web search, code execution, project context

Think of a prompt as your assignment to the AI. Just like briefing a colleague, the clearer and more complete your prompt, the better the result.

Output

The AI's response to your prompt:

  • Generated text, analysis, or recommendations
  • Created visualizations, charts, or calculations
  • Drafted documents or revised content
  • Code or interactive tools

The output is what the AI delivers back to you - but remember, it's your responsibility to verify and refine it before use.

Context

Information the AI remembers within a single conversation:

  • What you've discussed earlier in the same chat session
  • Files you've uploaded in that conversation
  • Instructions you've given during that session

Important limitation: Context gets "forgotten" when you start a new conversation. The AI doesn't remember anything from your previous chats unless you're using a Project workspace (covered in a later section) or a feature called "Memory".

Training Data

The massive collection of information the AI learned from during its creation:

  • Books, websites, articles, documents, code repositories
  • Patterns in how humans write, reason, and structure information
  • Represents knowledge up to a specific cutoff date

Why this matters: Training data determines what patterns the AI recognizes and what knowledge it has access to. It's also where potential biases originate - if biased perspectives were overrepresented in training data, the AI may reflect those biases.


The Basic Flow

You provide a prompt → AI draws on its training data and conversation context → AI generates an output → You verify and refine the output for your needs.

3. Understanding the Risks

Generative AI systems are powerful, but also have unique risks.

Hallucinations / Inaccurate Information

AI projects confidence. It can confidently generate false information, fabricated citations, and invented statistics.

Examples:

  • Creating non-existent legal precedents for a staff report
  • Fabricating budget numbers that "look right"
  • Inventing building code sections that sound plausible
  • Generating fake demographic data for grant applications

Data Loss / Privacy Concerns

Different AI platforms have different privacy policies. Some explicitly monetize your data; others have policies that are unclear or constantly changing.

What's at risk:

  • Internal memos about personnel actions
  • Resident PII (SSNs, addresses, financial data)
  • Confidential negotiations
  • Law enforcement investigation details
  • Infrastructure vulnerability information
  • Competitive bid details before they're public

How data leaks happen:

  • Your prompts may become training data for the AI, and bad actors know how to coax that data back out of the model
  • Accidental sharing through conversation history
  • Using personal/free accounts that have weaker protections
  • Malicious software or plugins the AI relies on to process your queries
  • Malicious search results or citations

Bias / Discrimination

AI systems have no judgment, compassion, or reflection - they follow patterns in their training data. If the training data reflects human biases, AI can amplify them.

Examples:

  • Prescreening job applicants may discriminate against certain protected classes
  • Budget allocation recommendations may systematically underserve certain neighborhoods
  • Code enforcement prioritization may reflect existing disparities
  • Vendor selection criteria may favor better-known companies

Why this happens:

  • Training data overrepresents certain perspectives
  • AI systems are designed to be agreeable and mirror your views
  • No built-in mechanism to question fairness or equity

Overreliance / Skill Atrophy

Excessive dependence on AI for tasks requiring human judgment leads to failure to critically evaluate outputs.

Risks:

  • Staff stops developing expertise needed to evaluate AI outputs
  • "The AI said so" becomes accepted justification
  • Loss of institutional knowledge as shortcuts replace learning
  • Critical thinking skills decline
  • Forgetting that there are other (perhaps more effective) methods out there

Your expertise remains essential. You must be able to check the AI's output.

Misuse

Users may deliberately misuse AI to circumvent established procedures.

Examples:

  • Using AI to find ways around procurement regulations
  • Using AI to find justifications to award an unqualified vendor

Complacency

Users place excessive trust in AI recommendations for critical decisions.

"It was right the last time, why wouldn't it be right again this time?"

Supervising an "almost perfect" system is boring, and boredom breeds complacency.

Term: Vigilance decrement

Examples:

  • AI-generated firewall configuration that creates security vulnerabilities
  • Submitting AI-generated staff report to Council without fact-checking
  • Accepting AI's legal interpretation without attorney review
  • Parallel: Self-driving cars drive themselves well for thousands of miles - until they don't

4. Garden Grove's Generative AI Policy

Core Policy Framework

The policy builds on existing regulations you already follow:

  • AR 2.11: Computer System Policies and Procedures
  • AR 2.16: Cloud Computing Services Policy

Key principle: Generative AI is treated as a cloud computing service, with additional protections for its specific risks.

Requirements

1. IT Authorization Required

  • Only IT-approved AI services may be used for city business
  • Currently pre-approved: ChatGPT Enterprise and Claude Enterprise
  • IT maintains the approved list on the intranet
  • If you need a different tool, request IT review

Why ChatGPT and Claude:

  • Better privacy and safety track record
  • Enterprise versions don't use your data for model training
  • Clear contractual protections for our data

Why NOT other tools:

  • Copilot and Gemini Enterprise require ecosystem commitments we're not ready for
  • Free/consumer versions may use your data for training or monetization
  • Other tools have documented security issues or data handling concerns (see Section 12 for specifics)

2. Account Requirements

  • Must use city-provisioned accounts only - IT must create your account
  • Use your city email address for the account
  • Never use personal AI accounts or anonymous accounts for city business
  • Use multi-factor authentication when available
  • Don't share accounts

WARNING - Free Tier Accounts:

  • Do not use free versions of ChatGPT, Claude, or any AI tool for city business
  • If using AI for personal purposes, ensure you're logged out before doing city work
  • Don't have free and enterprise accounts signed in simultaneously - too easy to use the wrong one

3. Data Protection

  • All data created using city accounts is city property
  • Never upload confidential or PII without IT Director + Department Director approval:
    • Social security numbers
    • Credit card numbers
    • Payroll or personnel information
    • Medical information
    • Addresses or contact info for residents
    • Law enforcement data
    • Contract details before they're public

4. Decision-Making and Accountability

  • AI is a source of assistance, not a decision-maker
  • Staff is responsible for all decisions made with AI assistance
  • Staff is responsible for all outputs derived from AI
  • You must review, revise, and fact-check all AI-generated content before use

5. Bias Awareness

  • Use AI with understanding that it is not objective or free of bias
  • Apply extra scrutiny to AI recommendations affecting people (hiring, services, enforcement)

6. Disclosure Requirements

Disclose AI use when transparency serves the public interest, promotes trust, or enables informed decision-making:

When to disclose:

  • AI analysis forms the primary basis for recommendations
  • AI-generated analysis involves confidential or PII
  • AI tools directly interact with public (chatbots, automated translations, response systems)
  • AI-generated or AI-manipulated images/video/audio could be mistaken for authentic media

Disclosure language should be:

  • Clear and concise
  • Prominently placed or conveyed

7. Public Records and Retention

  • AI conversations and outputs are subject to California Public Records Act
  • We do not yet have a formal retention schedule for AI-generated content
  • Treat AI-generated drafts like you'd treat email drafts and other working documents
  • When in doubt, consult City Clerk's office

[NOTE: Formal retention schedule for AI content is in development]


5. Understanding the Technology and the Human-AI Partnership

How Generative AI Actually Works (And Why You Still Matter)

It's Not Magic - It's Math and Patterns

GenAI can feel like magic when it writes a coherent staff report or analyzes complex data. But understanding how it works helps you use it better and know its limits.

At its core, GenAI is a very sophisticated pattern-matching system:

  1. It was trained on massive amounts of text - books, websites, documents, conversations. Billions of examples of human writing.

  2. It learned statistical patterns - which words tend to follow other words, how sentences are structured, how arguments are typically made, how different types of documents are formatted.

  3. When you ask a question, it predicts the most likely next word, then the next, then the next - based on all those patterns it learned. It's doing this thousands of times per second to generate a response.

Think of it like this:

If you've read thousands of staff reports, you start recognizing patterns: how they open, how they structure analysis, what comes in fiscal impact sections, how recommendations are worded. You could probably draft a pretty good staff report even on a topic you don't know well, just by following those patterns.

That's essentially what AI does, but it's "read" millions of documents and can apply those patterns instantly.
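The next-word idea above can be made concrete with a toy model. This is a deliberately tiny sketch, not how real GenAI systems work (they use neural networks trained on billions of examples, not simple counts); the corpus and the `predict_next` function here are invented for illustration:

```python
from collections import defaultdict, Counter

# Toy "training data": a few staff-report-flavored sentences
corpus = (
    "the council approved the budget . "
    "the council reviewed the budget . "
    "the council approved the contract ."
).split()

# Learn a pattern: count which word follows each word
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Predict the most frequently observed next word."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))      # "council" - seen after "the" most often
print(predict_next("council"))  # "approved"
```

Real models predict over vocabularies of tens of thousands of tokens and condition on the whole conversation, but the principle is the same: likely continuations, not verified facts.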


Why It Seems So Capable

The patterns GenAI learned go far beyond simple word prediction:

  • Patterns in how arguments are constructed
  • Patterns in how problems are typically solved
  • Patterns in cause-and-effect relationships
  • Patterns in different writing styles and tones
  • Patterns in how to structure different types of content

This is why it can:

  • Draft coherent documents in appropriate formats
  • Summarize long texts accurately
  • Translate between languages
  • Explain complex concepts simply
  • Generate creative variations on ideas
  • Analyze data and identify trends

It's recognizing patterns in your prompt and responding with patterns it learned from similar content.


Emergent Abilities: Capabilities We Didn't Expect

Here's where it gets interesting. When these AI systems got large enough (trained on enough data, with enough complexity), they developed abilities their creators didn't explicitly program.

These "emergent abilities" include:

  • Reasoning through multi-step problems - Breaking complex tasks into steps and working through them logically
  • Understanding context across long conversations - Remembering what you discussed earlier and applying it appropriately
  • Recognizing when it doesn't know something - Sometimes (though not reliably) acknowledging uncertainty
  • Adapting to different roles and perspectives - Shifting tone and approach based on your instructions
  • Making analogies and connections - Drawing parallels between different concepts

Why this happened: When pattern recognition operates at massive scale, it starts to approximate some aspects of reasoning and understanding, even though it's still fundamentally predicting likely next words.

But - and this is critical - these abilities emerged from pattern matching, not from actual understanding or consciousness.


What AI Fundamentally Cannot Do

Despite these impressive capabilities, GenAI has inherent limitations because of how it works:

1. It has no real understanding

  • For example, it recognizes patterns in how humans write about building codes
  • It doesn't actually understand why building codes exist or what makes structures safe
  • It can explain concepts clearly because it's seen many explanations, not because it grasps the underlying reality

2. It has no judgment or values

  • It can't determine if a policy is ethical or fair
  • It can't weigh what matters most in a decision
  • It doesn't have intuition about what's reasonable or what stakeholders actually need
  • It can only mirror human values it saw in training data

3. It has no accountability

  • When it's wrong, there's no consequence to the AI
  • It can't be held responsible for bad recommendations
  • You own the outcomes of following its advice

4. It has no connection to current reality

  • Its training data (its most reliable source of knowledge) has a cutoff date
  • It doesn't know what happened at yesterday's Council meeting
  • It doesn't know Garden Grove's specific current circumstances unless you provide them
  • Even with web search, it's prone to errors in interpreting what it finds

5. It has no expertise or experience

  • For example, it has seen many building inspections described in text
  • It hasn't actually conducted building inspections
  • It can articulate best practices but hasn't learned from failures
  • It has no "feel" for when something's off

The Human-AI Partnership

Understanding how AI works shows why the human-AI partnership matters. You're not just checking AI's work - you're providing the capabilities AI fundamentally lacks.

Humans provide:

  • Critical thinking and judgment
  • Ethics and equity considerations
  • Subject matter expertise and lived experience
  • Creativity and vision
  • Accountability and responsibility
  • Understanding of actual context and circumstances
  • Connection to current reality
  • Common sense about what's reasonable

AI provides:

  • Fast processing of large information sets
  • Pattern recognition across data
  • Draft generation and iteration
  • Format conversion and summarization
  • Exploring multiple variations quickly
  • Repetitive analysis tasks
  • Following standard formats and structures
  • Answering questions and providing how-tos

The approach:

Think of AI as a very capable intern who:

  • Has read everything but experienced nothing
  • Can draft well but needs your expertise to finalize
  • Works fast but needs your judgment to guide it
  • Can spot patterns but needs your context to interpret them
  • Follows instructions well but can't determine if they're the right instructions

You wouldn't hand an intern's draft to Council without reviewing it. You wouldn't let an intern make policy decisions alone. You wouldn't assume everything they research is accurate just because they're confident.

Same with AI.


Questions to ask before using AI for a task

  1. Do I understand the task well enough to evaluate the output?
  • If no: learn the task manually first, then consider AI assistance
  • You can also use AI to bootstrap your knowledge, but always cross-reference it through traditional means (training, colleagues, official publications) and use your own judgment
  • Judging the quality of AI output requires your subject matter expertise
  2. Have I set aside time to double-check the AI's output?
  • AI generates fast; don't let that speed pressure your review
  3. Does this involve sensitive data?
  • If yes: do I have the required approvals? If not, don't use AI
  • Can I complete the task using anonymized or generalized data? If yes, anonymize or talk in abstractions
  4. Does this require judgment, ethics, or accountability?
  • Policy decisions need human values and judgment
  • Equity considerations need human awareness of impact
  • Anything affecting people's rights or services needs human accountability

When to Consider Traditional Methods

Before reaching for AI, consider whether a direct, authoritative source would serve you better:

For code/policy/procedure questions:

  • Check municipal code
  • Check administrative regulations (intranet)
  • Ask supervisor or subject matter expert

For legal/regulatory questions:

  • Consult City Attorney's office
  • Check official state/federal agency websites
  • Review actual statutes, not summaries

The Bottom Line

GenAI is powerful pattern recognition at scale:

  • This makes it genuinely useful for many tasks
  • This also means it has fundamental limits that can't be fixed by better prompts or newer versions
  • Its capabilities emerged from statistical patterns, not from understanding or judgment

You remain essential because:

  • You understand the actual context and constraints
  • You provide judgment, ethics, and accountability
  • You connect its outputs to real circumstances
  • You bring expertise AI can only approximate

Understanding how it works helps you:

  • Know when to trust it and when to verify
  • Recognize its limits aren't a failing - they're inherent to how it operates
  • Use it effectively as a tool, not treat it as an oracle

The goal isn't to replace your expertise with AI. It's to amplify your effectiveness by handling the pattern-matching tasks while you focus on judgment, context, and accountability.


6. Everyday Casual Queries

Most AI use is casual

Most people use AI like they use a search engine - typing in quick questions, getting fast answers, and moving on. This section acknowledges that reality and helps you understand when casual use is fine and when you need to shift to a more rigorous approach.

How AI Differs from Search Engines

Traditional search (Google, Bing):

  • Shows you multiple sources
  • You see who's saying what
  • You evaluate source credibility yourself
  • You see disagreement and competing perspectives
  • Links go to original authoritative sources

AI tools (ChatGPT, Claude):

  • Synthesizes one answer from many sources
  • You may not see the sources unless you ask
  • The AI has already decided which sources it considers credible
  • Disagreements and dissents are hidden or smoothed over

Why this matters: With search, you're doing the evaluation work yourself. With AI, the evaluation already happened invisibly. Sometimes that's fine. Sometimes it's risky.

When it's fine: Quick factual lookups you can easily verify
When it's risky: Anything you'll act on

Example tasks for casual AI usage

Definitions and explanations:

  • "What does COBRA stand for?"
  • "Explain the difference between a variance and a conditional use permit"
  • "Define 'prevailing wage' in California"

Format conversions:

  • "Convert these bullet points into a paragraph"
  • "Turn this narrative into a comparison table"
  • "Rewrite this for a 6th grade reading level"

Basic how-to questions:

  • "How do I create a pivot table in Excel?"
  • "What's the shortcut to insert a page break in Word?"
  • "How do I calculate percentage change from last year?"
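For the last question in that list, the math is simple enough to sanity-check any AI answer yourself. A minimal sketch with hypothetical figures:

```python
last_year = 1_250_000   # hypothetical prior-year amount
this_year = 1_400_000   # hypothetical current-year amount

# Percentage change = (new - old) / old * 100
pct_change = (this_year - last_year) / last_year * 100
print(f"Change from last year: {pct_change:.1f}%")  # 12.0%
```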

Grammar, clarity, and translation:

  • "Make this email more professional"
  • "Check this paragraph for grammar errors"
  • "Is this sentence clear or confusing?"
  • "A resident sent me an email in Vietnamese, please translate"

Brainstorming starting points:

  • "Give me 10 ideas for resident engagement about the budget"
  • "What are different ways to organize a community meeting?"
  • "Suggest agenda topics for our department meeting"

When Casual Becomes Involved

Warning signs that your "quick question" actually needs more of your attention:

  • The answer will go in a document others will read
  • You're making a decision based on the answer
  • It involves resident/employee data or rights
  • It's about policy interpretation or legal requirements
  • Money or safety implications

7. Effective Prompting for More Involved Tasks

Modern AI handles natural, conversational language very well. You don't need to craft the perfect prompt on the first try - starting with a rough question and iterating is often just as effective. The key skill is knowing how to refine and course-correct as you go, not writing elaborate prompts upfront.

That said, for complex or consequential tasks, a bit of structure helps you get a more useful first draft with less back-and-forth. Here's a helpful framework:

An effective prompt includes:

  1. Task explanation: What needs to be done
  2. Context: Background information, constraints, purpose
  3. Audience: Who will read/use this
  4. Format: How the output should be structured
  5. Examples: What good output looks like (AI excels at pattern matching)

Be as clear as you would with a human colleague. The AI can't read your mind or know internal processes unless you explain them.

Examples

Example 1: Council Staff Report (City Manager's Office)

Draft a staff report for the City Council regarding allocation of $2.5M from the state Prop 68 grant for park improvements. 

Context: The grant has specific requirements for low-income community benefits and must be spent by June 2026. Our Parks Master Plan identifies three priority locations but funding only covers two.

Audience: City Council (non-technical)

Format: Follow standard staff report structure:
- Issue (2-3 sentences)
- Background (3-4 paragraphs)
- Discussion/Analysis (4-5 paragraphs comparing three sites)
- Fiscal Impact
- Recommendation

Constraints:
- Must emphasize community engagement process
- Must address environmental review requirements
- Must explain why we're recommending sites A and B over site C

Do NOT include final budget numbers yet - those are under negotiation.

Here is an example of a previous Council Memo...

Example 2: Budget Variance Explanation (Finance)

Analyze Q2 budget variances for Public Works Department - Maintenance Division.

Context:
- Budget shows 58% spent at 50% of fiscal year (8% over pace)
- Major categories: Personnel (52% spent), Supplies (71% spent), Contracts (48% spent)
- Known factors: Two emergency water main repairs in October, three employee workers' comp claims, vehicle replacement delayed from Q1

Audience: City Manager (needs executive summary for Council)

Task: 
1. Identify which variances are timing issues vs. structural problems
2. Project year-end position for each category
3. Recommend any mid-year adjustments needed

Format:
- Brief overview (3 sentences)
- Category-by-category analysis (one paragraph each)
- Year-end projection summary (bullet points)
- Recommended actions

Focus on Supplies category - that's the concerning one. Personnel and Contracts are likely timing issues based on past years.
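The pace arithmetic in this example is worth being able to verify yourself rather than trusting the AI's numbers. A minimal sketch using the figures from the prompt above; the straight-line projection is a naive assumption that ignores the timing factors the prompt mentions:

```python
elapsed = 0.50  # fraction of fiscal year elapsed (end of Q2)

# Percent of budget spent so far, per category (from the prompt above)
spent = {"Personnel": 0.52, "Supplies": 0.71, "Contracts": 0.48}

# Naive straight-line projection: year-end share = spent / elapsed
projection = {cat: pct / elapsed for cat, pct in spent.items()}

for cat in spent:
    pace = spent[cat] - elapsed  # positive = ahead of straight-line pace
    print(f"{cat}: {pace:+.0%} vs pace, projects to {projection[cat]:.0%} of budget")
```

Supplies projects to 142% of budget on a straight line, confirming it's the concerning category.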

Advanced Prompting Techniques

Using Extended Thinking or Reasoning Features for Complex Tasks

Useful for thorough analysis on consequential tasks, and it helps you catch the AI heading in the wrong direction.

For complex tasks, prompt the AI to show its reasoning process, and/or use your GenAI tool's extended thinking or reasoning feature:

Before providing your recommendation, carefully consider:
- All relevant factors and constraints
- Possible approaches and their tradeoffs  
- What assumptions you're making
- What additional information would be helpful

Walk me through your reasoning so I can provide guidance if you're heading in the wrong direction.

Breaking Large Tasks into Steps

After brainstorming in one conversation, break large tasks into separate phases:

Now generate an effective prompt for step 1 that I can use to start a fresh conversation to execute this plan. Include all the context and constraints we've discussed.

This approach helps you:

  • Start fresh conversations with focused prompts
  • Avoid context overload in single conversations
  • Execute complex projects systematically

Defining Roles and Perspectives

Use role definitions to shape the lens and tone, not to add expertise:

Good role definitions:

  • "You are a friendly colleague" vs. "You are a critical auditor" - shapes the evaluation style
  • "Approach this as a government IT manager focused on stability and compliance" - weights tradeoffs appropriately

What doesn't work:

  • "You are an expert in X" - the AI already has whatever knowledge it has; declaring expertise doesn't add information

Requesting Multiple Versions

Ask for variations to compare different approaches:

Provide three different versions: one emphasizing cost savings, one emphasizing 
service quality, one emphasizing equity considerations.

Converting Between Formats

Transform content to fit different communication needs:

Convert this bullet list into a narrative suitable for a Council report.
Take this narrative and create a table comparing the three options.

When to Start Fresh

If a conversation has gone off track, starting a new conversation with a better prompt often works better than extensive corrections within the same conversation.


7B. Making AI Push Back

Why This Matters

AI systems are designed to be agreeable. They are built to help, answer your questions, and give you what you asked for - even if the answer isn't true, and often without telling you that a topic is contentious. That may be fine for casual tasks. For consequential decisions, it can be a problem.

We need to "remind" the AI not to be overly agreeable and to surface bias and controversy on complex topics - just as a good friend might question a wrong assumption you hold.

The risk of "helpful" AI:

  • It won't question flawed premises in your question
  • It smooths over disagreement and complexity
  • It gives you the answer you implied you wanted
  • It prioritizes sounding confident over acknowledging uncertainty

For important decisions like policy recommendations, budget analysis, major proposals going to Council, you want AI to challenge your thinking, not just agree with it.

The Solution

You can combat this to a certain degree, while realizing that no workaround can replace your own analysis, subject-matter expertise, and research.

The following instructions help. Add them to any GenAI tool that accepts custom instructions, or paste them in as part of a prompt:

PROCESS TRANSPARENCY REQUIREMENTS:
- Show reasoning steps, not just conclusions
- Identify which perspectives I considered and which I didn't
- Flag areas of uncertainty explicitly
- Explain what tradeoffs I made in framing my response

SOURCE & TRAINING DISCLOSURE:
- When citing information, specify: where it comes from, how recent it is, 
  what perspectives it represents
- Acknowledge knowledge cutoff and limitations

FRICTION BY DESIGN:
- Present competing viewpoints on contested topics
- Highlight where experts disagree
- Ask clarifying questions before giving definitive answers
- Surface complexity rather than smoothing it over

ANTI-VALIDATION MODE:
- Don't affirm my premises just because I stated them
- Point out logical inconsistencies or evidence gaps in my questions
- Offer "here's why that might be wrong" alongside "here's the answer"
- Distinguish between what I asked and what might actually serve me better

INCENTIVE VISIBILITY:
- When relevant, note how engagement optimization, computing costs, 
  or content policies might shape my response
- Flag when I'm defaulting to "safe" framings that avoid controversy
- Identify whose values are embedded in how I frame issues

Note: These instructions are comprehensive and help surface bias, uncertainty, and controversy - most useful for consequential decisions. They can be verbose for routine tasks; use your judgment about when to apply the full set.

8. Evaluating AI Output

Why Evaluation Matters

The temptation: AI produces impressive-looking content quickly. You see things it thought of that you hadn't. "It was right the last 50 times, why wouldn't it be right again?" It's easy to accept output as-is.

The risk: AI can be confidently wrong. It generates content to be helpful and agreeable, not necessarily accurate - even if accuracy was your stated constraint.

The requirement:
You must bring your subject matter expertise to judge quality, verify facts, and ensure the output actually solves your intended problem.

If you don't know enough about a topic to spot a wrong answer, either:

  • Learn more through traditional means (training, supervisors, colleagues, your own research); use AI to bootstrap
  • Go directly to authoritative sources
  • Use AI with extra skepticism

The Evaluation Process

1. Verify factual claims

  • Check statistics and data points against original sources
  • Verify legal citations actually exist and say what AI claims
  • Confirm event dates, names, and details
  • Look up any technical specifications or code sections

Example: AI drafts a staff report claiming "According to HUD guidelines, 65% of households in Garden Grove qualify as low-income." Check: Where does that 65% come from and is it accurate? What year's data?

2. Check for internal consistency

  • Does paragraph 3 contradict what paragraph 1 said?
  • Do the numbers add up correctly?
  • Are the recommendations actually supported by the analysis?
  • Do the conclusions follow from the evidence presented?

Red flags:

  • Recommendations that weren't mentioned in the analysis section
  • Data that changes between mentions
  • Reasoning that doesn't connect premises to conclusions
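The "do the numbers add up" check is one you can often automate rather than eyeball. A minimal sketch with hypothetical figures (the line items and total here are invented for illustration):

```python
# Figures as they appear in a hypothetical AI-drafted report
reported_total = 2_500_000
line_items = {"Site A": 1_200_000, "Site B": 1_150_000, "Contingency": 100_000}

subtotal = sum(line_items.values())
if subtotal != reported_total:
    print(f"Mismatch: line items sum to {subtotal:,} but the report says {reported_total:,}")
else:
    print("Line items match the reported total.")
```

Here the items sum to 2,450,000 against a reported 2,500,000 - exactly the kind of quiet inconsistency AI output can contain.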

3. Evaluate completeness

  • Did AI address all aspects of your prompt?
  • Did it get fixated on one detail and ignore others?
  • Are there obvious gaps or missing considerations?
  • Did a previously rejected idea resurface?

4. Assess bias and fairness

  • Does it rely on stereotypes or generalizations?
  • What perspectives are missing?

5. Question the scope

  • Is this the right scale of solution for the problem?
  • Is AI over-engineering something simple?
  • Is it under-estimating complexity?
  • Are there simpler approaches AI didn't consider?

Set Aside Time for This

AI generates content fast - far faster than you can thoroughly review it. This creates pressure to move at AI speed.

Resist that pressure. Block time for verification.

This is no different from reviewing a colleague's work, but AI's speed and volume make thorough review harder.

Don't let AI replace your skill in finding and evaluating information. You need those skills to evaluate AI's outputs.

The Feedback Loop

When you find problems:

1. Specify what's wrong and why

2. Provide examples of the improvement you want

3. Revise your original prompt: Note what was missing from your prompt that led to the problem. Next time you have a similar task, include that context upfront.

4. Iterate: AI is excellent at exploring variations. Use that strength.


9. Common Use Cases and Guidance

Summarization

Good for:

  • Meeting notes from recordings or transcripts
  • Long documents into executive summaries
  • Email threads into action items

Watch out for:

  • AI may misidentify key points
  • May miss context or subtext
  • Should never replace reading critical documents yourself

Best practice: Use AI summary as a preview, not a replacement for understanding source material.


Question-Answer / Research

Good for:

  • Finding information in large document sets you provide
  • Explaining technical concepts in plain language
  • Comparing different approaches or options

Watch out for:

  • Used as a search engine replacement, AI may prioritize recent or popular information over accurate information
  • May invent citations that sound real
  • Can confidently answer questions it shouldn't (beyond its knowledge)
  • You may not understand the topic well enough to verify the answer

Best practice: Verify factual claims. When appropriate, disable web search if you're working with provided data only.


Drafting and Composition

Good for:

  • First drafts of reports, memos, emails
  • Converting bullet points to narrative
  • Reformatting content for different audiences

Watch out for:

  • Generic corporate-speak
  • Loss of your voice and institutional knowledge
  • Over-reliance leads to skill atrophy

Best practice: Think of AI as an intern who can draft but needs your expertise to finalize.


Data Analysis

Good for:

  • Identifying patterns in data sets
  • Creating visualizations and charts
  • Explaining statistical concepts

Watch out for:

  • May misinterpret data structure
  • Can make calculation errors
  • May identify coincidental correlations as meaningful

Best practice: Verify all calculations. Provide context for interpreting patterns.


Brainstorming and Ideation

Good for:

  • Generating options you haven't considered
  • Exploring different approaches
  • Challenging assumptions

Watch out for:

  • Ideas may be impractical for your context
  • May lack understanding of political/regulatory constraints
  • Can be overly optimistic about feasibility

Best practice: This is where GenAI excels - use it freely, but apply your judgment about what's realistic.


10. AI Tools: Web Search, Projects, and Code Execution

Modern AI platforms offer built-in tools that extend what they can do beyond just text generation. These tools are available in ChatGPT Enterprise, Claude for Work, and similar enterprise AI platforms. Understanding these tools helps you get better results for specific types of tasks.

Tool 1: Web Search

What it does: Allows AI to search the internet in real-time for current information, rather than relying only on its training data (which has a cutoff date).

Grounding AI responses: Web search is also valuable for "grounding" AI's responses - connecting its knowledge to real, current sources. This helps reduce hallucinations and provides verifiable citations for its claims.

When to use it:

  • Current events or recent news
  • Recent policy changes or regulations
  • Current data or statistics (population figures, economic data)
  • Information that changes frequently (grant deadlines, program availability)
  • Checking if information is still current
  • Finding sources or citations
  • Verifying or grounding AI's claims with real sources

Most of the time, today's GenAI will use web searches intelligently on its own.

When NOT to use it:

  • Working with confidential or sensitive data you've provided (turn it OFF to prevent data exposure)

How to control it

Most enterprise AI tools let you turn web search on and off. It is usually on by default now, but the AI may or may not use it unless you explicitly ask it to in your prompt.

Reminders:

  • Always verify important findings yourself - AI can misinterpret search results
  • Turn OFF web search when working with confidential data you've provided - AI may use that confidential data as part of the search term, which will then leak it to search engines and search result websites.

Tool 2: Projects

What it does: Creates a persistent virtual workspace where AI can access multiple documents, remember context across conversations, and maintain custom instructions specific to that project.

Note: In ChatGPT, "Projects" and "Custom GPTs" can work together to achieve this.

When to use it:

  • Working on extended initiatives that span multiple conversations
  • When you need AI to reference specific documents repeatedly
  • When you want consistent formatting or approach across related work
  • When you want to define and reuse specific context
  • Collaborative work where multiple people need access to the same AI context

Examples

Example 1: Council Staff Report Project
Project Name: "City Council Staff Reports"

Upload to project:
- Staff report template
- 3-5 example reports from past year
- City's style guide
- Standard fiscal impact language

Custom instructions for project:
"When drafting staff reports, follow the City of Garden Grove template format. Use 
professional but accessible language appropriate for Council. Always include Issue, 
Background, Discussion, Fiscal Impact, and Recommendation sections. Fiscal Impact 
section should specify funding source and account numbers as placeholders for me to 
fill in."

Usage: Every time you need a staff report draft, work in this project. AI will automatically follow your format and style without you repeating instructions.


Example 2: Budget Analysis Project
Project Name: "FY 2024-25 Budget Analysis"

Upload to project:
- Current year budget documents
- Mid-year financial reports
- Department strategic plans
- Prior year comparison data

Custom instructions:
"When analyzing budget data, always compare to prior year and explain variances. 
Identify trends across multiple years when possible. Flag items that exceed 10% 
variance. Use Garden Grove's standard account structure in all references."

Usage: All budget-related questions and analysis happen in this project, with AI able to reference all uploaded budget documents without you re-uploading each time. You can also share this project across your department to use the same knowledge base.


Example 3: Capital Improvement Project Management
Project Name: "Parks Capital Projects 2024-26"

Upload to project:
- Project plans and scopes for all active park projects
- Community engagement feedback
- Budget allocations
- Grant requirements and deadlines

Custom instructions:
"Track all park capital projects. When discussing any project, reference its current 
status, budget, timeline, and any grant requirements. Flag projects approaching 
deadlines or budget concerns."

Usage: Update project status, draft updates to Council, check grant compliance - all within one project where AI knows all the context.


Example 4: Policy Development Project
Project Name: "Short-Term Rental Regulations"

Upload to project:
- Research on other cities' STR ordinances
- Public comment summaries
- Legal memo from City Attorney
- Draft ordinance language
- Planning Commission feedback

Custom instructions:
"When drafting or revising STR policy language, reference concerns raised in public 
comments and Planning Commission feedback. Ensure consistency with legal guidance 
from City Attorney. Compare approaches to what other cities have done."

Usage: Iterative policy development across multiple work sessions, with AI maintaining context of all feedback and legal considerations.


Best practices for Projects:

  • Name projects clearly and specifically; some GenAI systems also let you add a description
  • Only upload documents relevant to that specific project
  • Update project documents as work evolves
  • Set clear custom instructions so AI knows your preferences
  • Remove outdated documents to avoid confusion

Tool 3: Code Execution and Data Analysis

What it does: AI can write and run code to perform calculations, analyze data, create visualizations, manipulate files, and process structured information.

Why use code for calculations: GenAI sometimes struggles with "mental math" - complex arithmetic or multi-step calculations done directly in text. When you need reliable calculations, ask AI to write and execute code instead.
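
For instance, a multi-step budget variance check is far more reliable as executed code than as in-text arithmetic. A minimal sketch of the kind of code AI might run (all program names and figures below are hypothetical):

```python
# Hypothetical figures: prior-year vs. current-year program budgets
budgets = {
    "Youth Sports":  {"prior": 182_500, "current": 204_300},
    "Senior Center": {"prior":  96_000, "current":  93_100},
    "Aquatics":      {"prior": 143_750, "current": 167_900},
}

for program, b in budgets.items():
    variance = b["current"] - b["prior"]
    pct = variance / b["prior"] * 100
    flag = "  <-- exceeds 10% variance" if abs(pct) > 10 else ""
    print(f"{program}: {variance:+,} ({pct:+.1f}%){flag}")
```

Done this way, every figure is computed rather than "remembered," and you can ask AI to show you the code so you can confirm the logic.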

Beyond analysis - Interactive apps: Code execution also enables AI to create interactive tools and applications. For example, AI can build a simple web-based calculator for fee schedules, an interactive budget scenario planner, or a data visualization dashboard that updates based on user inputs.

When to use it:

  • Analyzing data from spreadsheets or CSV files
  • Creating charts and visualizations
  • Complex calculations or statistical analysis
  • Processing large datasets
  • Converting between file formats
  • Cleaning or restructuring data
  • Building simple interactive tools or calculators

When NOT to use it:

  • With confidential data - this tool may install third-party packages to assist with your request, and those packages may be malicious (see below)
  • When a simple spreadsheet would work just as well

CRITICAL: Data Sensitivity Warning

When AI executes code on your data, be aware:

  • Code may use external libraries or packages that connect to outside services
  • Data could be processed through third-party tools embedded in the code
  • Some operations might require internet connectivity to function
  • Errors in the application might expose data in logs or error messages

Best practice: Only use code execution with non-confidential data. Treat this with extra care around sensitive data.


Example: Recreation program attendance analysis

[Upload attendance data from recreation programs]

I need to calculate:
1. Average attendance per program
2. Participation rates by age group
3. Programs with declining vs. growing enrollment
4. Cost per participant by program

Create a comparison table and charts I can include in budget justification.

Please write code to perform these calculations.

AI will: Write code to read your data, calculate metrics, create comparison tables, generate visualizations, and format results for your report.
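
Behind the scenes, the code AI writes resembles this sketch (the program names, attendance counts, and costs below are hypothetical stand-ins for your uploaded file):

```python
# Hypothetical attendance data standing in for the uploaded spreadsheet
programs = [
    {"name": "Tiny Tots",   "sessions": [18, 22, 20, 24], "annual_cost": 12_000},
    {"name": "Lap Swim",    "sessions": [35, 31, 28, 26], "annual_cost": 18_500},
    {"name": "Senior Yoga", "sessions": [12, 15, 17, 21], "annual_cost":  6_400},
]

for p in programs:
    avg = sum(p["sessions"]) / len(p["sessions"])       # average attendance
    total = sum(p["sessions"])                          # total participants
    cost_per = p["annual_cost"] / total                 # cost per participant
    trend = "growing" if p["sessions"][-1] > p["sessions"][0] else "declining"
    print(f'{p["name"]}: avg {avg:.1f}/session, ${cost_per:.2f} per participant, {trend}')
```

You don't write this yourself - but asking AI to explain its code line by line, as above, is a good way to verify the analysis matches what you asked for.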


Example: Interactive app - Fee calculator

Create an interactive calculator for our recreation program fees that:
- Takes program name and participant age as inputs
- Applies resident vs. non-resident pricing
- Applies senior/youth discounts based on age
- Shows breakdown of base fee + applicable discounts
- Calculates total amount due

Use our current fee schedule:
- Youth programs (under 18): $50 base, $10 resident discount
- Adult programs: $75 base, $15 resident discount  
- Senior programs (65+): $40 base, $15 resident discount
- Non-resident surcharge: +50% of base fee

AI will: Create an interactive web-based tool you can use at the front desk or share with staff. Users input program and age, the calculator shows the fee breakdown in real-time.
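
The pricing logic behind such a calculator is simple enough to sketch directly. This sketch assumes, as the fee schedule implies, that the resident discount and the non-resident surcharge are mutually exclusive - verify that interpretation against your actual fee resolution:

```python
# Fee schedule from the prompt above: (base fee, resident discount)
FEES = {
    "youth":  (50, 10),   # under 18
    "adult":  (75, 15),
    "senior": (40, 15),   # 65+
}

def program_category(age):
    if age < 18:
        return "youth"
    if age >= 65:
        return "senior"
    return "adult"

def total_fee(age, resident):
    base, discount = FEES[program_category(age)]
    if resident:
        return base - discount
    return base + base * 0.5  # non-resident surcharge: +50% of base

print(total_fee(age=10, resident=True))    # youth resident: 50 - 10 = 40
print(total_fee(age=70, resident=False))   # senior non-resident: 40 + 20 = 60.0
```

Spot-checking a few fees by hand like this, before putting the tool at the front desk, is exactly the kind of verification Section 8 calls for.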


Best practices for code execution:

  • Double-check calculations, especially for financial data
  • Provide clean, well-organized data files when possible
  • Clearly specify what calculations or analysis you need
  • Request visualizations appropriate for your audience (Council vs. technical staff)
  • Save the generated charts, tables, or interactive tools for your use - programs created by AI are not persistent; they start fresh every time you leave and come back
  • If AI's first attempt isn't quite right, ask it to revise the code
  • Only use with non-confidential data

Important notes:

  • AI writes the code and runs it in a secure environment
  • You don't need to know how to code to use this effectively
  • You can ask AI to explain what the code does in plain English
  • Always verify results
  • Works best with structured data (spreadsheets, CSV files, databases)

Combining Tools

These tools can work together for more powerful results:

Example: Grant research and tracking

  1. Web search: Find currently available grants
  2. Projects: Create a grant tracking project with requirements and deadlines
  3. Code execution: Analyze your budget data to see what matches grant criteria

Key Takeaway

These tools extend AI's capabilities beyond text generation:

  • Web search connects AI to current information and grounds responses in verifiable sources
  • Projects give AI persistent memory and context across multiple work sessions
  • Code execution enables reliable quantitative analysis, data processing, and interactive tools

Use them strategically based on your task needs, always with human oversight and verification.


11. Taking Responsibility for AI Usage

Before and after using AI for any task, consider:

What data am I providing?

  • Does it contain PII, confidential info, or sensitive data?
  • Do I have approval to share this with a cloud service?
  • Could this data be used to identify individuals?
  • Am I sharing more than necessary to accomplish the task?

Who has access to this data?

  • Am I using a city-provisioned enterprise account?
  • Is this a free-tier service that might train on my data?
  • What happens if there's a data breach? Who's affected?

How will the output be interpreted?

  • Who will read this and what will they assume about it?
  • Will they know AI was involved?
  • Should disclosure be included?
  • What's my liability if it's wrong?

What are the ramifications of bad output?

  • What decisions depend on this being accurate?
  • Who is affected if this is wrong?
  • Is this reversible or permanent?
  • What's the cost of error - time, money, trust, safety?

How does this align with your professional values, the City's mission and goals, and city policy?

  • Am I using AI as a crutch to avoid work that I should do myself and that requires human judgment?
  • Would I be comfortable explaining this process to Council? To residents?
  • Does this use respect the people affected by the decision?

12. Safety and Security Practices

Checking Citations and References

When AI provides sources, citations, or references:

  • Be cautious with URL citations - some may be malicious, planted by external bad actors trying to trick AI
  • Don't provide personal data when checking references
  • Cross-check important citations against authoritative sources

Disable Web Search for Sensitive Work

Most AI tools allow you to toggle web search on/off:

  • Turn OFF web search when working with confidential data you've provided
  • If you've supplied the data, AI doesn't need to search the web for additional context
  • Web search increases risk of data exposure

AI Agents

An up-and-coming advancement in GenAI is agentic AI. These systems can perform tasks on your behalf: browsing the web, filling out forms, booking reservations, or using your computer in general.

  • You are essentially in a screen-sharing session with the AI agent
  • Take extra precautions as whatever is on your screen is being shared with these AI systems
  • Avoid sharing credentials

AI Tools to Avoid Professionally (and Personally)

Avoid these tools entirely:

  • Perplexity: Known security vulnerabilities, monetizes user data even on paid plans, no opt-out
  • DeepSeek, Qwen (Alibaba), Doubao (ByteDance), Ernie (Baidu): Data jurisdiction concerns (China), unclear or unknown data handling
  • Meta AI (Facebook/Instagram) and X Grok: Designed to monetize your data like their parent platforms
  • Any free-tier generative AI for work purposes: No contractual protections, may train on your data

These aren't banned for personal use (though some are banned at work), but be aware of the risks to your personal data.


13. Administrative Considerations

Account Management

  • IT will provision your account when you need AI access
  • Request through normal IT helpdesk process
  • Don't create your own accounts
  • As always: Keep credentials secure and report if you suspect your account is compromised

Getting Additional Tools Approved

  • If you need an AI tool not on the approved list, submit request to IT
  • Include: tool name, use case, why approved tools don't meet need
  • IT will evaluate: security, privacy, cost, supportability

Training and Support

  • This training will be available on the intranet
  • Questions? Contact IT helpdesk

Feedback and Policy Updates

  • This policy may evolve as we learn and as the field evolves
  • Submit feedback or questions to IT
  • Check intranet for current approved tools list

Summary: Key Takeaways

  1. AI is a tool, not a decision-maker - You remain accountable for all outputs and decisions

  2. Use only city-provisioned accounts - Never use personal/free AI accounts for city work

  3. Protect sensitive data - Don't upload PII or confidential information without required approvals

  4. Be the subject matter expert - You must have expertise to evaluate AI outputs critically

  5. Verify everything - Check facts, citations, calculations, and logic before using AI content

  6. Disclose when transparency serves public interest - Especially for major recommendations, public-facing work, or AI-generated images or videos that could be mistaken for real.

  7. Watch for bias - AI amplifies patterns in data, including discriminatory patterns

  8. Assume it's a public record - AI conversations and outputs may be subject to PRA

  9. Stay human-centered - Use AI to enhance your work, not replace your judgment

About

General training on GenAI and our specific policies
