Prompt Engineering Best Practices for Government Users
Practical guide to writing effective prompts for M365 Copilot in government contexts, including techniques, examples, and common mistakes to avoid.
Overview
The quality of Copilot’s output depends largely on the quality of your prompts. This video teaches practical prompt engineering techniques that government employees can apply immediately to get better, more useful results from Copilot.
No technical background required—just a willingness to experiment and refine your approach.
What You’ll Learn
- RACE Framework: Structured approach to prompt writing
- Specificity: Why detail matters and how to provide it
- Context: Giving Copilot the right background information
- Iteration: Refining prompts based on results
- Application-Specific Tips: Tailoring prompts for Word, Excel, Teams, etc.
Transcript
[00:00 - Introduction]
Hi everyone, Jane Smith here. Today we’re talking about prompt engineering—which sounds technical but really just means: how to ask Copilot questions effectively. Get this right, and Copilot becomes incredibly useful. Get it wrong, and you’ll be frustrated. Let’s learn the techniques.
[00:30 - Why Prompt Quality Matters]
Copilot is like a very capable but literal assistant. If you say “write something about the budget,” you’ll get generic content. But if you say “write a two-paragraph executive summary of Q3 budget performance, highlighting areas over or under budget by more than 10%,” you’ll get exactly what you need.
Specificity is the difference between wasting time and saving time.
[01:30 - The RACE Framework]
I teach a simple framework: RACE.
R - Role: Tell Copilot what role it should assume. “Act as a policy analyst” or “You are an HR specialist.”
A - Action: What do you want Copilot to do? “Draft,” “summarize,” “analyze,” “create a list of.”
C - Context: Provide relevant background. “Using data from the attached spreadsheet” or “For an audience of senior executives.”
E - Expectation: Describe the desired output format. “In three bullet points,” “As a 500-word memo,” “In a table with three columns.”
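The four parts above can be thought of as a fill-in-the-blanks template. As a minimal sketch, here is a hypothetical helper (not a Copilot API, just an illustration of how Role, Action, Context, and Expectation combine into one prompt):

```python
def race_prompt(role: str, action: str, context: str, expectation: str) -> str:
    """Combine Role, Action, Context, and Expectation into a single prompt."""
    return f"Act as {role}. {action} {context}. {expectation}."

# The memo example from this video, assembled piece by piece:
prompt = race_prompt(
    role="a program manager",
    action="Draft a memo",
    context="announcing the new telework policy for my team",
    expectation=(
        "The memo should be two paragraphs, professional tone, "
        "and highlight the effective date and key changes"
    ),
)
print(prompt)
```

The point is not the code itself but the habit: if any of the four arguments would be empty, the prompt is probably underspecified.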
Let’s see RACE in action.
[03:00 - RACE Example: Writing a Memo]
Poor prompt: “Write about the new policy.”
RACE prompt: “Act as a program manager (Role). Draft a memo (Action) announcing the new telework policy for my team (Context). The memo should be two paragraphs, professional tone, and highlight the effective date and key changes (Expectation).”
The RACE version gives Copilot everything it needs to produce useful output on the first try.
[04:15 - Being Specific]
General prompts get general results. Specific prompts get specific results. Compare:
Vague: “Help me with this email.”
Specific: “Rewrite this email to be more concise, maintain a professional but friendly tone, and emphasize the urgency of the deadline.”
Vague: “Summarize this document.”
Specific: “Summarize this 40-page policy document in 5 bullet points, focusing on what changes for employees effective January 1st.”
Always ask yourself: “Could someone else interpret my prompt differently?” If yes, add more detail.
[05:30 - Providing Context]
Copilot performs better when it understands context. Provide:
Audience: “For senior leadership” vs. “For technical staff.”
Purpose: “For a congressional briefing” vs. “For internal team review.”
Constraints: “Keep it under 200 words,” “Must be compliant with agency style guide.”
Data sources: “Using information from the Q3 financial report in SharePoint.”
The more context you provide, the more tailored Copilot’s response will be.
[06:45 - Iterating and Refining]
You won’t always get perfect results on the first try. That’s okay. Refine your prompt based on what Copilot returns.
First attempt: “Summarize this meeting.” Copilot provides a generic summary.
Second attempt: “Summarize this meeting, focusing specifically on decisions made and action items assigned.” Much better.
Think of prompt engineering as a conversation, not a one-time command.
[07:45 - Application-Specific Tips]
Different M365 apps benefit from different prompt styles:
Word: Focus on structure and tone. “Draft a three-section report with introduction, findings, and recommendations.”
Excel: Be specific about calculations and visualizations. “Calculate year-over-year growth for each category and create a line chart showing trends.”
Teams: Ask for specific meeting elements. “What were the key decisions in this meeting and who is responsible for each action item?”
Outlook: Specify tone and length. “Draft a response declining this meeting request, polite but firm, two sentences maximum.”
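The per-app examples above can be kept as a small lookup table for your own notes. A hypothetical sketch (this is not part of M365 Copilot; the table simply collects the example prompts from this section):

```python
# Example prompts per M365 app, taken from this section.
APP_PROMPTS = {
    "Word": "Draft a three-section report with introduction, findings, and recommendations.",
    "Excel": "Calculate year-over-year growth for each category and create a line chart showing trends.",
    "Teams": "What were the key decisions in this meeting and who is responsible for each action item?",
    "Outlook": "Draft a response declining this meeting request, polite but firm, two sentences maximum.",
}

def example_prompt(app: str) -> str:
    """Return the example prompt for an app, or general fallback advice."""
    return APP_PROMPTS.get(app, "Be specific about the action, context, and expected format.")

print(example_prompt("Excel"))
```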
[08:45 - Common Mistakes to Avoid]
Don’t be too vague: “Help me with this” doesn’t give Copilot enough to work with.
Don’t omit expected format: Copilot might give you a paragraph when you wanted bullet points.
Don’t forget to verify: Copilot is helpful but not infallible. Always review output for accuracy.
Don’t give up after one try: If the first result isn’t perfect, refine your prompt and try again.
[09:30 - Conclusion]
Prompt engineering is a skill that improves with practice. Start with the RACE framework, be specific, provide context, and iterate based on results. The more you use Copilot, the better you’ll get at crafting prompts that deliver exactly what you need. Download our Prompt Engineering Guide linked below for dozens of example prompts across different scenarios.