Prompt & Context Engineering for Government Work
From Good Questions to Great AI Output
Core Thesis: The difference between mediocre and exceptional AI output is almost never the AI — it's the prompt and context you give it. Master these fundamentals and you'll transform AI from a novelty into a daily force multiplier.
February 2026 · Module 4
Why This Matters
You've used AI. You've typed prompts. Sometimes the output is brilliant — sometimes it's garbage. The variable isn't the AI. It's you.
10x
Productivity gain with
well-crafted prompts
80%
Of AI output quality
is determined by input
~40%
Performance drop when
context is poorly structured
This module teaches you to move from casual AI user to intentional AI operator — someone who consistently gets high-quality, compliant, useful output from every interaction.
CDT Mandate: SAM 4986.13 requires all state employees to complete GenAI training. This module builds on that foundation with practical skills you can use immediately.
The Evolution: Prompts to Context Engineering
The field has moved fast. What worked in 2023 is already outdated.
2023
"Magic words"
era
→
2024
Structured
prompts
→
2025-26
Context
engineering
→
Now
Agentic
workflows
Prompt Engineering (Old)
- Focus on the question you type
- "Add magic phrases to get better output"
- One-shot: type a prompt, get an answer
- Limited to what fits in a chat box
Context Engineering (Now)
- Focus on everything the AI sees
- Curate the right information, examples, and constraints
- Multi-turn: iterate, refine, build on previous output
- Attach documents, use tools, retrieve knowledge
Andrej Karpathy (OpenAI founding member, former Tesla AI Director): "Context engineering is the delicate art and science of filling the context window with just the right information for the next step."
Tobi Lütke (Shopify CEO): "I prefer the term context engineering over prompt engineering. It describes the core skill better: the art of providing all the context for the task to be plausibly solvable by the LLM."
What Happens When You Hit Enter
Understanding the basics of how AI processes your input helps you write better prompts. You don't need to be an engineer — just know the flow.
You type
a prompt
→
Tokenizer
breaks text
into tokens
→
Context
Assembly
system + history
+ your message
→
AI Model
processes
all tokens
→
Response
generated
token by token
Tokens are how AI reads text — one token is roughly ¾ of a word. "California Department of Technology" = ~5 tokens.
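The ¾-of-a-word figure is only a heuristic (real tokenizers split on subwords), but it is enough for quick budget estimates. A minimal sketch, assuming the 0.75 ratio:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: about 0.75 words per token, so tokens ~ words / 0.75.
    Real tokenizers use subword splitting; treat this as a budget guess only."""
    words = len(text.split())
    return round(words / 0.75)

print(estimate_tokens("California Department of Technology"))  # → 5
```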
The critical part is Context Assembly. Your prompt is never processed alone. It's combined with:
- System instructions (set by the app developer — defines AI behavior)
- Conversation history (everything said so far in the chat)
- Retrieved documents (files, knowledge bases, search results)
- Tool results (data from web searches, calculations, API calls)
All of this competes for space in a fixed-size context window — AI's working memory.
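Conceptually, context assembly is just building one ordered message list. A simplified sketch: the role/content message shape mirrors common chat APIs, and everything else here is illustrative:

```python
def assemble_context(system_prompt, history, retrieved_docs, user_message):
    """Combine everything the model will see into one ordered message list.
    Order mirrors the pipeline above: system rules first, then conversation
    history, then retrieved material, then the message you just typed."""
    messages = [{"role": "system", "content": system_prompt}]
    messages += history                               # prior turns in the chat
    for doc in retrieved_docs:                        # files, search results
        messages.append({"role": "user", "content": f"[Reference] {doc}"})
    messages.append({"role": "user", "content": user_message})
    return messages
```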
Context Windows: AI's Working Memory
A context window is the total amount of text an AI model can "see" at once. Think of it as a desk — everything needs to fit on the desk, or the AI simply can't see it.
200K–1M
Claude Opus 4.6
(1M beta)
1M
Gemini 3.1 Pro
(~750K words)
Context windows have exploded — even open-source models like Llama 4 Scout now offer 10M tokens. Here's what today's windows look like in practice:
- 200K tokens (Claude standard) ≈ 500 pages — a full policy manual + conversation history
- 400K tokens (GPT-5.2) ≈ 1,000 pages — an entire agency's annual report suite
- 1M tokens (Claude/Gemini) ≈ 2,500 pages — multiple codebases or years of legislation
System Prompt
Behavior rules, role definition, constraints (~100-1,000 tokens)
Conversation History
Everything said so far in the chat (grows each turn)
Retrieved Documents
Uploaded files, knowledge base results, search results
Your Current Message
The prompt you just typed
Why this matters: Even with million-token windows, your prompt is just one piece of what the AI sees. A well-structured prompt in a poorly managed context window will still produce poor results. Bigger desks don't help if they're covered in irrelevant paper — think about the whole desk, not just the note you're handing over.
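This is also why chat tools quietly trim old conversation turns: something has to come off the desk. A minimal sketch of one trimming strategy, keeping the system prompt plus the newest turns that fit a token budget (the word-count estimator is a stand-in for a real tokenizer):

```python
def trim_history(messages, budget, estimate=lambda s: len(s.split()) * 4 // 3):
    """Keep the system prompt and the newest turns that fit within `budget`
    tokens. `estimate` is a rough words-to-tokens guess, not a real tokenizer."""
    system, history = messages[0], messages[1:]
    used = estimate(system["content"])
    kept = []
    for msg in reversed(history):            # walk newest to oldest
        cost = estimate(msg["content"])
        if used + cost > budget:
            break                            # older turns fall off the desk
        kept.append(msg)
        used += cost
    return [system] + kept[::-1]             # restore chronological order
```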
Anatomy of a Great Prompt
Every effective prompt has up to five components. You don't always need all five — but the more complex your task, the more components you should include.
1
Role / Persona
Who should the AI act as? "You are a senior policy analyst..."
2
Context / Background
What does the AI need to know? Agency, situation, constraints.
3
Task / Instruction
What exactly should the AI do? Be specific and actionable.
4
Format / Output
How should the response look? Bullet list, table, memo format, etc.
5
Constraints / Guardrails
What should the AI avoid? Length limits, tone, what NOT to include.
Rule of thumb: For quick questions, #3 alone is fine. For anything going into official work, use at least #2 through #5. The higher the stakes, the more structure you need.
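Because the structure is fixed, the five components can even be templated. A small illustrative builder, with only the Task component mandatory:

```python
def build_prompt(task, role=None, context=None, fmt=None, constraints=None):
    """Assemble the five-component prompt structure; only Task is required."""
    parts = []
    if role:
        parts.append(f"Role: {role}")
    if context:
        parts.append(f"Context: {context}")
    parts.append(f"Task: {task}")
    if fmt:
        parts.append(f"Format: {fmt}")
    if constraints:
        parts.append(f"Constraints: {constraints}")
    return "\n".join(parts)

print(build_prompt("Summarize the attached audit report",
                   role="You are a senior policy analyst",
                   fmt="Bullet points, under 300 words"))
```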
Before & After: Government Examples
The same AI, the same model, dramatically different results based on the prompt.
Weak Prompt
Write me a summary of the new water regulations.
Strong Prompt
Role: You are a policy analyst at a California state water agency.
Task: Summarize the key provisions of the 2025 updates to Title 23, Division 2 of the California Code of Regulations regarding water recycling.
Format: Use a 3-section structure: (1) What changed, (2) Who is affected, (3) Implementation timeline. Use bullet points. Keep it under 400 words.
Constraints: Focus only on agricultural reuse provisions. Do not include residential or industrial provisions. Cite specific section numbers where possible.
Policy Memo
Weak
Help me write a BCP narrative for more IT staff.
Strong
You are a budget analyst at [Department]. Draft a Budget Change Proposal (BCP) narrative requesting 3.0 permanent IT positions (2 SSA and 1 SSMA) for the 2026-27 fiscal year. The positions will support the department's GenAI governance mandate under SAM 4986.
Structure the narrative with: (1) Problem statement citing specific workload metrics, (2) Proposed solution with classification justification, (3) Cost/benefit analysis with estimated annual costs using current CalHR salary ranges, (4) Consequences of not funding.
Tone: formal, data-driven. Audience: Department of Finance analysts.
Summarizing Legislation
Weak
What does SB 53 say?
Strong
Summarize California SB 53 (Transparency in Frontier AI Act, effective January 2026). Cover: (1) Who it regulates, (2) Key requirements for frontier AI developers, (3) Enforcement mechanisms and penalties, (4) How it affects state agencies that procure AI systems.
Format as a 1-page executive briefing suitable for a non-technical CIO. Include the bill number and effective date at the top.
Public Communications
Strong
Draft a public-facing FAQ (5 questions) about our department's new AI-assisted customer service chatbot. The audience is California residents with varying technical literacy. Tone should be reassuring, transparent, and plain language (8th grade reading level).
Must include: what data we collect, how to opt out and speak to a real person, and that AI-generated responses are reviewed by staff. This is required by SAM 4986.10 disclosure rules.
Prompt Library: Ready-to-Use Templates
Copy, adapt, and use these prompts for common government tasks. Each one uses the five-component structure.
Meeting Prep
Role: You are my executive assistant.
Context: I have a 30-minute briefing with [Deputy Director / stakeholder name] about [topic]. They care most about [budget impact / timeline / staffing].
Task: Create a 1-page briefing sheet with: (1) 3 key talking points, (2) Anticipated questions with suggested responses, (3) One clear ask or decision I need from them.
Format: Bullet points. Keep it to one page. Bold the ask.
Constraints: No jargon. Assume the audience has 5 minutes to read this before the meeting.
Email Drafter
Role: You are a senior state employee writing to [audience: team / executive / external stakeholder].
Task: Draft an email about [topic]. The purpose is to [inform / request action / provide an update].
Key points to include: [list 2-3 bullets]
Tone: Professional but approachable. Keep it under 200 words.
Constraints: Do not include any PII. End with a clear next step or call to action.
Comparing Options / Alternatives Analysis
Options Analysis
Role: You are a policy analyst at [Department].
Context: We need to decide between [Option A] and [Option B] for [project/initiative]. Budget is [amount]. Timeline is [deadline].
Task: Create a comparison table with columns for: Option, Pros, Cons, Estimated Cost, Risk Level, Recommendation.
Format: Markdown table followed by a 2-sentence recommendation with justification.
Constraints: Be objective. Flag any assumptions you're making.
Writing Procedures / SOPs
Standard Operating Procedure
Role: You are a process improvement specialist in state government.
Task: Write a step-by-step SOP for [process, e.g., "onboarding a new hire to our team's AI tools"].
Format: Numbered steps. Each step should include: the action, who is responsible, and any tools/forms needed. Add a "Common Mistakes" section at the end.
Constraints: Assume the reader has never done this before. Reference SAM 4986 where relevant to AI tool access.
Summarizing Long Documents
Document Summary
I'm attaching [document name — a 45-page audit report / legislative analysis / policy manual].
Task: Provide a 3-level summary:
1. Executive Summary (3 sentences max — what a CIO needs to know)
2. Key Findings (5-7 bullet points with page references)
3. Action Items (what our department needs to do in response)
Constraints: Use the document's own terminology. Do not infer or add information not in the document. Flag anything that seems ambiguous.
Editing & Tone Adjustment
Tone Rewrite
Here is a draft [memo / email / report section] I wrote:
[paste your draft]
Task: Rewrite this for [audience]. Adjust the tone to be [more formal / more accessible / more concise / more data-driven]. Keep all factual content intact.
Format: Show me the rewritten version, then list the 3 biggest changes you made and why.
Creating Training Materials
Training Content
Role: You are an instructional designer for state employee training.
Context: I need to train [audience, e.g., "program managers with 10+ years experience but minimal AI exposure"] on [topic].
Task: Create a 15-minute training outline with: learning objectives, 3 key concepts with real-world government examples, a hands-on exercise, and assessment questions.
Constraints: Reading level should be accessible to non-technical staff. Include at least one California-specific example.
Prompting Techniques That Work
Different tasks call for different approaches. Here are the three you'll use most often.
Zero-Shot
Just ask. No examples needed. Works for simple, well-defined tasks.
"Translate this paragraph into plain language at an 8th-grade reading level."
Best for: Simple transformations, summaries, translations
Few-Shot
Give examples of what you want. The AI mimics the pattern.
"Format these entries like this example:
Input: John Smith, 2024-01-15, Sacramento
Output: Smith, J. — Sacramento (Jan 2024)
Now format these 50 entries the same way..."
Best for: Data formatting, consistent style, structured output
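The formatting pattern in the example above is mechanical enough to write down, which makes it a handy sanity check on the AI's output. A sketch of the same transformation; the date format and field order are assumed from the example:

```python
from datetime import datetime

def format_entry(entry: str) -> str:
    """Apply the few-shot pattern from the example:
    'John Smith, 2024-01-15, Sacramento' -> 'Smith, J. — Sacramento (Jan 2024)'"""
    name, date_str, city = (part.strip() for part in entry.split(","))
    first, last = name.split(" ", 1)
    month_year = datetime.strptime(date_str, "%Y-%m-%d").strftime("%b %Y")
    return f"{last}, {first[0]}. — {city} ({month_year})"

print(format_entry("Maria Lopez, 2024-03-02, Fresno"))
```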
Chain-of-Thought
Ask the AI to reason step by step. Dramatically improves accuracy for complex analysis.
"Analyze whether this proposed regulation change is consistent with existing CalHR policy. Think through this step by step: (1) Identify the relevant existing policies, (2) Compare the proposed language against each one, (3) Flag any conflicts or gaps, (4) Provide your recommendation."
Best for: Policy analysis, complex comparisons, multi-step reasoning, audit reviews
Counterintuitive finding: Chain-of-thought can actually hurt performance on simple tasks — research shows a 36.3% drop when forcing step-by-step reasoning on tasks that don't need it. Match the technique to the task complexity.
When in doubt: Start with zero-shot. If the output isn't right, add examples (few-shot). If the task requires reasoning, add "think through this step by step" (chain-of-thought). Note: the latest models (Claude Opus 4.6, GPT-5.2, Gemini 3.1) have adaptive thinking built in — they can dynamically decide when to reason deeper. But explicit CoT in your prompt still helps when you want to see the reasoning or control the analysis structure.
Context Engineering: Beyond the Prompt
Your prompt is often only a small fraction of what the AI sees. The rest of the context is where the real leverage is.
Three ways to supercharge context:
1. Attach Documents
Upload the actual policy, report, or legislation you're working with. Don't make the AI guess — give it the source material. In Perplexity, use Spaces to keep reference files persistent.
2. Provide Examples
Show the AI what good output looks like. Paste a previous memo, a sample format, or an example response. The AI will match the pattern, tone, and structure.
3. Set System Context
In tools that support it (Perplexity Spaces, Claude Projects), set persistent instructions: "You are an analyst at [Department]. Always cite California code sections. Format responses as executive briefings."
CDT Compliance Note: When attaching documents, apply the Public Records Act test — would you be comfortable if this document appeared on a public website? If not, it likely contains confidential data that should NOT go into AI tools. See SAM 4986.12 for acceptable use rules.
Perplexity Spaces let you set up a project hub with persistent files and instructions:
- Create a Space for your project (e.g., "2026-27 Budget Analysis")
- Upload reference files: budget documents, prior year reports, relevant legislation
- Set custom instructions: "Always reference FY amounts. Use Department of Finance formatting."
- Every search within that Space automatically includes your files as context
This means the AI has your department's actual data — not generic internet knowledge — every time you ask a question.
The Lost in the Middle Problem
Here's a counterintuitive finding from Stanford-led research ("Lost in the Middle," Liu et al.): AI pays the most attention to information at the beginning and end of its context. Information in the middle gets overlooked — even in models with massive context windows.
← Beginning of context | Middle (danger zone) | End of context →
What this means for you:
- Put the most important information first in your prompts
- When pasting long documents, put your question at the very end (after the document)
- If you're analyzing a specific section, extract and paste just that section rather than uploading the entire 200-page document
- More context isn't always better — curated context beats massive context
In research terms: the "Lost in the Middle" study found models actually performed worse when relevant information was buried in the middle of the provided documents than when given no documents at all. The pattern persists even in today's million-token models — bigger context with poor positioning actively hurts performance.
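The positioning advice is easy to bake into a habit: always assemble document-plus-question prompts in the same order. A minimal sketch with illustrative delimiters:

```python
def build_doc_prompt(document: str, question: str) -> str:
    """Place the question at the END, after the document, so it sits in the
    high-attention zone at the edge of the context rather than the middle."""
    return (
        "Answer the question using only the document below.\n\n"
        "--- DOCUMENT START ---\n"
        f"{document}\n"
        "--- DOCUMENT END ---\n\n"
        f"Question: {question}"
    )
```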
How AI Gets Smarter: RAG & Knowledge Bases
When you use Perplexity or interact with a chatbot like WaterBot, the AI isn't just using its training data — it's retrieving real information first, then generating answers based on what it found.
This is called Retrieval-Augmented Generation (RAG).
Your
Question
→
Search
knowledge base
or the web
→
Retrieve
most relevant
documents
→
Generate
answer grounded
in real sources
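Under the hood, the Retrieve step is a ranking problem. A toy sketch that uses word overlap in place of the embedding search a real RAG system would run; every name here is illustrative:

```python
def retrieve(query: str, knowledge_base: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query, a simple stand-in
    for the embedding similarity search a production RAG system uses."""
    query_words = set(query.lower().split())
    return sorted(knowledge_base,
                  key=lambda doc: len(query_words & set(doc.lower().split())),
                  reverse=True)[:k]

def build_rag_prompt(query: str, knowledge_base: list[str]) -> str:
    """Ground the answer in retrieved sources, numbered so it can cite them."""
    sources = retrieve(query, knowledge_base)
    context = "\n".join(f"[{i}] {src}" for i, src in enumerate(sources, 1))
    return (f"Answer using only these sources, citing [n]:\n{context}\n\n"
            f"Question: {query}")
```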
Why RAG matters:
- Reduces hallucinations — AI answers from retrieved facts, not memory
- Stays current — knowledge bases can be updated without retraining the model
- Citable — Perplexity shows inline citations; WaterBot references specific regulations
Without RAG
AI relies on training data (which has a cutoff date). May fabricate plausible-sounding but incorrect facts. No citations.
With RAG
AI searches real sources first, grounds its answer in retrieved documents, and can cite specific sources you can verify.
Practical tip: When using Perplexity, always click the citation numbers to verify sources — especially for facts going into official documents. RAG dramatically reduces hallucinations, but doesn't eliminate them entirely.
Tool Use & Agentic AI
Modern AI doesn't just generate text — it can use tools and take actions. This is the "agentic" frontier.
Perplexity (Tool Use)
- Searches the live web for every query
- Reads and synthesizes dozens of pages
- Deep Research: multi-step research agent
- Uploads: analyzes your PDFs and documents
You already use this
Claude Code (Agentic Engineering)
- Reads, writes, and edits code files
- Runs terminal commands and tests
- Plans multi-step tasks autonomously
- Creates, deploys, and validates systems
The next frontier
The Agentic Loop:
Plan
Break task
into steps
→
Act
Call a tool
or run code
→
Observe
Read the
result
→
Decide
Done? Or
next step?
↺
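Stripped to its skeleton, the agentic loop is a bounded cycle over tool calls. A toy sketch: the tool, plan, and stop condition are all illustrative, and real agents replan between steps rather than following a fixed list:

```python
def run_agent(plan, tools, is_done, max_steps=10):
    """Plan -> Act -> Observe -> Decide, with a hard step cap so an
    autonomous loop can never run away."""
    observations = []
    for tool_name, arg in plan[:max_steps]:   # Plan: pre-broken steps
        result = tools[tool_name](arg)        # Act: call a tool
        observations.append(result)           # Observe: record the result
        if is_done(observations):             # Decide: finished, or next step?
            break
    return observations

# Toy run: "search" until we have two observations.
tools = {"search": lambda q: f"results for {q}"}
plan = [("search", "SB 53"), ("search", "SAM 4986"), ("search", "extra")]
print(run_agent(plan, tools, is_done=lambda obs: len(obs) >= 2))
```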
This is how Claude Code built this entire presentation — researching, writing markdown, building HTML, and deploying to GitHub Pages — in a single session. Agentic AI is the future of government technology work: fewer repetitive tasks, more strategic thinking.
CDT Compliance: Know the Rules
California has one of the most comprehensive AI governance frameworks in the nation. These aren't suggestions — they're policy.
The Big Three
Every state employee using AI must follow these three rules from SAM 4986:
Rule 1: Approved Tools Only
Use only state-approved AI tools on state-approved equipment. Do not use your personal ChatGPT, Google Gemini, or other consumer AI accounts for state work. Do not register for AI tools using your state email without IT approval. (TL 24-01)
Rule 2: No Confidential Data in AI
Never enter PII, confidential, proprietary, or sensitive state data into AI prompts. This includes names, SSNs, case files, draft policies, vendor details, and employment records. Treat every prompt as if it were publicly visible.
Rule 3: Human Review Required
All AI output used for decision-making must have human verification. AI generates drafts, not final products. You are responsible for accuracy, bias review, and DEIA compliance. (SAM 4986.12)
The Golden Rule: Treat AI prompts as if they were subject to the California Public Records Act. If you wouldn't put it on a public website, don't put it in a prompt.
Data Classification & AI
Not all data is created equal. California classifies data into categories — and your classification determines what can go into AI tools.
You CAN put in AI prompts:
✓ Publicly available information
✓ Published reports and statistics
✓ General policy frameworks
✓ Public legislation and regulations
✓ Your own draft content for editing
✓ Non-sensitive meeting agendas
You CANNOT put in AI prompts:
✗ Names, SSNs, DOB, addresses (PII)
✗ Medical or health records (PHI)
✗ Draft policies or internal memos
✗ Vendor/procurement details
✗ Employment or investigation records
✗ Passwords, API keys, credentials
| Classification | Enterprise AI? | Consumer AI? | Notes |
| --- | --- | --- | --- |
| Public | Yes (with authorization) | Generally yes | Already widely available |
| Confidential | Only with explicit approval + security assessment | Never | Exempt from CA Public Records Act |
| PII | Only with privacy assessment + controls | Never | Names, SSN, addresses, DOB, photos |
| Sensitive Personal | Only with heightened protections | Never | Financial, biometric, health, genetic data |
| Proprietary | Only with explicit authorization | Never | Trade secrets, attorney-client privileged |
Key policies: SAM 4986.10 (Privacy), TL 24-03 (Clarifications), SIMM 5310-C (Privacy Threshold Assessment)
As of January 2025 (AB 1008), AI-generated data about individuals is classified as "personal information" under CCPA — meaning both your input AND the AI's output derived from personal data are subject to privacy law.
Human Review & Disclosure
AI generates first drafts, not final products. SAM 4986.12 is explicit: human verification is mandatory for any AI output used in decision-making.
Best practices for human review:
Form Your Own Assessment First
CDT guidance explicitly warns against anchoring bias — don't look at the AI output before forming your own initial assessment. Read the source material yourself first, then compare.
Verify Claims & Citations
AI can generate citations that look real but don't exist. Click every link. Check every reference. Verify numerical data against authoritative sources.
Review for Bias & DEIA
Check AI output for discriminatory language, embedded stereotypes, or exclusionary framing — especially for content affecting vulnerable populations.
When AI content is public-facing, you MUST include:
✓ A disclaimer that AI was used (placed before the content, not after)
✓ Contact information for a real state employee
✓ An opt-out option to speak with a real person instead of AI
Required disclosure language: "This [content type] has been generated, in whole or in part, using artificial intelligence." — SB 896 (CA AI Accountability Act, effective Jan 2025)
Workflow: Drafting a Policy Memo
Here's a practical workflow for using AI to draft a policy document while staying compliant.
Step 1: Gather Context
Collect your source materials: relevant statutes, existing policy language, data/metrics, stakeholder input. Remove any PII or confidential data before uploading.
Step 2: Research with Perplexity
Use Perplexity (or Deep Research for complex topics) to gather background: "What are other states' approaches to [topic]?" "What are the latest federal guidelines on [topic]?" Save citations.
Step 3: Draft with a Structured Prompt
Use all five prompt components: role, context, task, format, constraints. Attach relevant documents. Be specific about audience and tone.
Step 4: Iterate & Refine
Review the first draft critically. Ask for specific revisions: "Make the problem statement more data-driven." "Add a cost-benefit section." "Match the tone of [attached example]."
Step 5: Human Review & Finalize
Verify all facts, citations, and data against authoritative sources. Review for bias. Add disclosure if needed. The final product is yours — you own the accuracy.
Remember: AI is your research assistant and first-draft machine, not your replacement. The value you add is judgment, domain expertise, and accountability — things AI cannot provide.
Workflow: Research & Legislative Analysis
Perplexity is particularly powerful for government research tasks. Here's how to use it effectively.
Quick Lookups (Regular Search)
- "What is the current CalPERS contribution rate for state employees?"
- "When does AB 2013 take effect?"
- "What is CDT's Technology Letter 24-01 about?"
Speed: 10-30 seconds | Sources: 5-10 pages
Deep Analysis (Deep Research)
- "Compare California's AI governance framework with the EU AI Act. Focus on risk classification."
- "What are the fiscal impacts of the 2024 California AI legislation package?"
- "Analyze trends in state IT procurement spending from 2020-2025."
Speed: Up to 3 min | Sources: 20-50+ pages
Pro tips for government research:
- Include specificity: Agency names, bill numbers, date ranges, California-specific terms
- Request citations: "Cite the specific code section" or "Include the URL for each source"
- Cross-reference: Ask Perplexity to compare its findings against a specific document you upload
- Use Spaces: Create a Space for ongoing projects — upload baseline documents so every search has your context
For Module 1 veterans: This builds on the Perplexity training (Module 1). If you haven't set up your free .gov Pro account yet, start there.
Workflow: Data Analysis
AI can accelerate data work — from summarizing spreadsheets to identifying patterns in large datasets.
Example: Budget Data Analysis
I'm attaching our department's FY 2024-25 expenditure data (CSV). Please:
1. Summarize total spending by program area
2. Identify the top 5 line items by dollar amount
3. Flag any line items that increased more than 15% from the prior year
4. Present results as a formatted table
Note: This data contains only aggregate budget figures — no PII or confidential information.
Best practices:
- Anonymize first — remove names, IDs, and PII before uploading data to AI
- State what the data contains (and doesn't contain) so the AI processes it correctly
- Ask for methodology — "Explain how you calculated each figure" catches errors
- Verify calculations independently — AI math is generally reliable but not infallible
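The same checks the prompt requests can be reproduced with a short script, which is one way to honor the "verify calculations independently" rule. A sketch with hypothetical column names:

```python
import csv
import io

def summarize_budget(csv_text: str):
    """Total spending by program area and flag line items that grew more
    than 15% year over year. Assumes aggregate, non-confidential data;
    the column names (LineItem, Program, FY2324, FY2425) are hypothetical."""
    totals, flagged = {}, []
    for row in csv.DictReader(io.StringIO(csv_text)):
        current, prior = float(row["FY2425"]), float(row["FY2324"])
        area = row["Program"]
        totals[area] = totals.get(area, 0.0) + current
        if prior > 0 and (current - prior) / prior > 0.15:
            flagged.append(row["LineItem"])
    return totals, flagged

data = ("LineItem,Program,FY2324,FY2425\n"
        "Software Licenses,IT,100000,125000\n"
        "Staff Salaries,IT,800000,820000\n")
print(summarize_budget(data))  # → ({'IT': 945000.0}, ['Software Licenses'])
```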
CDT Compliance: Before uploading data to any AI tool, classify it per your department's data governance policy. Aggregate, de-identified, publicly available budget data is generally safe. Individual-level records, case data, and employee data are NOT. When in doubt, consult your Information Security Officer.
The Iteration Loop
Great AI output almost never comes from a single prompt. It comes from iteration — a cycle of prompting, evaluating, and refining.
Prompt
Your initial
request
→
Evaluate
Is the output
what you need?
→
Refine
Be specific about
what to change
→
Repeat
Until the
output is right
↺
Effective refinement prompts:
Too Long
"Cut this to 250 words while keeping all key policy points."
Wrong Tone
"Rewrite this for a non-technical executive audience. Remove all jargon."
Missing Detail
"Add a section on fiscal impact. Include estimated costs using CalHR salary data."
Wrong Format
"Convert this narrative into a comparison table with columns for: Current Policy, Proposed Change, Impact."
Mindset shift: Anthropic's own guide says to "think of Claude as a brilliant but new employee who lacks context on your norms." You'd never hand a new hire a task with zero context. Give the AI the same courtesy — and iterate like you would with a colleague.
Common Mistakes & How to Avoid Them
The Lazy Prompt
"Summarize this document."
Fix: What aspect? For what audience? In what format? How long? What should it emphasize?
The Trust Fall
Copying AI output directly into an official document without verifying.
Fix: Always verify facts, citations, and calculations. AI is a draft machine, not an authority.
The Data Leak
Pasting constituent names, case numbers, or confidential data into ChatGPT.
Fix: Apply the PRA test before every prompt. Anonymize data. Use approved tools only.
The One-Shot Wonder
Accepting the first output without iterating.
Fix: Expect 2-3 rounds of refinement. Each iteration should target a specific improvement.
The Kitchen Sink
Dumping an entire 200-page document and asking a vague question.
Fix: Extract the relevant section. Put key info at the beginning or end. Be specific.
The Copy-Paste Zombie
Taking AI output and pasting it into three different documents without adapting it for each audience.
Fix: Each audience needs a different version. Ask the AI to rewrite for each context: "Now rewrite this for [audience] with [tone]."
Your AI Toolkit
Different tools for different jobs. Know when to reach for each one.
Perplexity AI
Best for: Research, fact-finding, legislative analysis, current events
- Searches the live web with citations
- Deep Research for multi-step analysis
- Spaces for persistent project context
- Free Pro for .gov emails
perplexity.ai · Module 1
Claude (Anthropic)
Best for: Writing, analysis, long-document processing, nuanced reasoning
- Opus 4.6 / Sonnet 4.6 — 200K standard, 1M beta context
- Adaptive thinking with configurable effort levels
- Strong on safety — constitutional AI with reason-based alignment
- Projects feature for persistent context
claude.ai
Claude Code (Agentic Engineering)
Best for: Building systems, automating workflows, technical projects
- Reads, writes, and runs code autonomously
- Plans multi-step tasks and executes them end-to-end
- This entire presentation was built with Claude Code — from research to deployed slides
- The future of how government technology teams will work
Claude Code docs · Advanced
Key Takeaways
1. Context Over Cleverness
Don't obsess over finding the "perfect prompt." Instead, give the AI the right context: relevant documents, clear examples, specific constraints. What you feed the AI matters more than how you ask.
2. Structure Your Prompts
Use the five components: Role, Context, Task, Format, Constraints. The more important the output, the more structure you should provide.
3. Iterate, Don't One-Shot
Treat AI like a capable colleague, not a search engine. Have a conversation. Refine the output. Expect 2-3 rounds for quality work.
4. Comply by Default
Know SAM 4986. Use approved tools. Never put confidential data in prompts. Always verify AI output. Disclose AI use in public-facing content.
5. AI Augments, It Doesn't Replace
You bring judgment, domain expertise, and accountability. AI brings speed, breadth, and tireless first drafts. Together, you're unstoppable.
Resources & References
California AI Policy
California AI Legislation (Key Bills)
- SB 53 — Transparency in Frontier AI Act (effective Jan 2026)
- SB 896 — CA AI Accountability Act (effective Jan 2025)
- AB 1008 — CCPA AI Amendment (effective Jan 2025)
- AB 2885 — Uniform AI definition across CA law
Start today: Pick one task you do regularly — a research question, a memo draft, a data summary — and try the structured prompt approach from this module. Compare the output to your usual approach. See the difference for yourself.