Conduct structured debriefs that capture what happened, why it happened, and what to do differently using the proven AAR framework for any project, campaign, or event
You are an organizational learning and project review expert with deep experience conducting after action reviews across military, government, corporate, and nonprofit settings. You have led hundreds of AARs for projects ranging from product launches and marketing campaigns to emergency response exercises and organizational change initiatives. Your reviews create an atmosphere of honest reflection rather than blame, surface actionable insights that teams actually implement, and produce documentation that prevents organizations from repeating the same mistakes. You understand that the AAR process, originally developed by the U.S. Army in the 1970s, succeeds because it asks four simple questions: What was supposed to happen? What actually happened? Why was there a difference? What can we learn?

I need you to conduct an after action review for [EVENT_NAME], which was a [EVENT_TYPE:select:Project,Campaign,Exercise,Incident,Launch,Training,Event] that took place over [TIMEFRAME].

- The original planned objectives, goals, and success criteria: [PLANNED_OBJECTIVES]
- What actually happened, including results, outcomes, and metrics: [ACTUAL_RESULTS]
- Participants and stakeholders involved: [PARTICIPANTS]
- Key decisions made during the event, who made them, and the reasoning at the time: [KEY_DECISIONS]
- Significant challenges, obstacles, or unexpected events that arose: [CHALLENGES?]
- Resources available, including budget, staff, tools, and time: [RESOURCES_AVAILABLE?]
- External factors such as market conditions, regulatory changes, or third-party dependencies: [EXTERNAL_FACTORS?]

Conduct a thorough after action review structured as follows:

1. **AAR Summary Header** — capture the event name, type, timeframe, participants, and the date of this review. This serves as the identification block for archiving and future reference.
2. **Mission and Objective Recap** — restate the original intent and success criteria in precise terms. This establishes the benchmark for measuring actual results. Translate vague goals into specific, measurable statements and note any objectives that shifted during execution.
3. **Results Comparison** — place planned outcomes side by side with actual outcomes. For each objective, state what was expected, what was achieved, and the variance. Quantify with numbers, percentages, or dates wherever possible. Classify each objective as exceeded, met, partially met, or not met.
4. **What Went Well** — identify specific actions, decisions, and conditions that drove positive outcomes. Explain why these things worked. Name specific people, teams, or methods when possible, and flag best practices worth repeating.
5. **What Did Not Go as Planned** — examine where results fell short. For each gap, trace backward to contributing causes. Distinguish between factors within the team's control and external factors. Focus on process and system failures rather than individual blame.
6. **Root Cause Analysis** — go deeper than surface findings. For each significant gap, ask why at least three times to move past symptoms. Examine whether causes were related to planning, communication, resources, skills, timing, or assumptions. Identify patterns that connect multiple shortfalls to a common root cause.
7. **Lessons Learned** — organize into three categories: Start Doing (new practices to adopt), Stop Doing (counterproductive activities to eliminate), and Continue Doing (effective practices to maintain). For each lesson, include the context, recommended action, owner, and priority level.
8. **Action Item Register** — build a Markdown table with columns for Item Number, Finding, Recommended Action, Owner, Priority, Target Date, and Status. Every actionable finding should appear with a named owner and deadline.
9. **Follow-Up Schedule** — close with a 30-day check-in for high priority items, a 60-day review for all action items, and a 90-day effectiveness assessment to confirm changes produced the expected improvement.

Format the review using Markdown with clear headings, numbered lists for sequential steps, bullet points for findings, and bold text for critical items. Write in a direct, constructive tone that encourages honest reflection.
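As a reference for what the Action Item Register section should produce, here is a minimal sketch of the table layout. The findings, owners, and dates are hypothetical placeholders, shown only to illustrate the expected shape:

```markdown
| # | Finding | Recommended Action | Owner | Priority | Target Date | Status |
|---|---------|--------------------|-------|----------|-------------|--------|
| 1 | Launch slipped two weeks due to late vendor assets | Add vendor deliverables to the critical path with a one-week buffer | J. Rivera | High | 2025-08-15 | Open |
| 2 | Daily standups surfaced blockers early | Continue daily standups through the next launch cycle | Team leads | Medium | Ongoing | In progress |
```

Every row ties a finding to a single accountable owner, which is what makes the register auditable at the 30-, 60-, and 90-day follow-ups.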
Every project, campaign, or event holds lessons, but most teams move on without capturing them. The same mistakes repeat. The same workarounds get reinvented. An after action review is a structured debrief process that asks four questions: What was supposed to happen? What actually happened? Why was there a difference? What should we do differently next time?
This after action review template guides you through a complete AAR for any [EVENT_TYPE], from product launches and marketing campaigns to training exercises and incident responses. Fill in [PLANNED_OBJECTIVES] and [ACTUAL_RESULTS], and the template produces a side-by-side comparison, root cause analysis, lessons learned sorted into start/stop/continue categories, and an action item register with owners and deadlines.
The output works for both the team that ran the event and the leadership reviewing it. You get honest findings without blame, concrete recommendations with named owners, and a follow-up schedule to verify that changes actually stick. Open it in the Dock Editor to generate your review, then share findings across your organization. For deeper problem investigation, pair it with a root cause analysis or build your next initiative on a structured action plan.
Copy this template into ChatGPT, Claude, Gemini, or the Dock Editor. Fill in [EVENT_NAME] with the specific project, campaign, or initiative you are reviewing, and select the [EVENT_TYPE] that best matches.
Write your original goals and success criteria in [PLANNED_OBJECTIVES]. Then describe the actual outcomes, metrics, and results in [ACTUAL_RESULTS]. Be specific with numbers, dates, and measurable outcomes. The more precise your inputs, the sharper the gap analysis will be.
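For instance, a precise pair of inputs might look like the following. The campaign and figures here are hypothetical, included only to show the level of detail that produces a sharp gap analysis:

```text
[PLANNED_OBJECTIVES]: Generate 500 qualified leads by June 30 at a cost per
lead under $40; achieve a 3% landing-page conversion rate.

[ACTUAL_RESULTS]: Generated 410 qualified leads by July 12 at $52 per lead;
landing-page conversion rate was 2.1%, with a marked drop-off after the form
redesign on May 20.
```

Inputs at this level of specificity let the review quantify each variance (82% of lead target, 30% over cost target, 12 days late) instead of producing vague commentary.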
List the [KEY_DECISIONS] made during the event, including who made them and the reasoning at the time. Fill in [PARTICIPANTS] so the review can reference specific roles and teams. Add [CHALLENGES] and [EXTERNAL_FACTORS] if relevant.
Read through the generated AAR. Verify that the root cause analysis matches your team's experience. In the action item register, assign real names to every owner field and set realistic target dates. Remove or adjust any recommendations that don't fit your context.
Run structured debriefs at the end of each project phase or after final delivery. Capture what worked, what didn't, and what process changes to carry into the next project. Build a searchable library of lessons learned across your portfolio.
Conduct formal AARs after training exercises, deployments, or incident responses. Document operational findings in the format expected by command and compliance reviewers. Turn field observations into procedural improvements.
Debrief after campaign launches, product releases, or promotional events. Compare planned KPIs against actual performance, identify what drove the variance, and capture insights for the next campaign cycle.
Review outcomes of strategic initiatives, organizational changes, or cross-functional programs. Get a structured summary of results versus expectations with prioritized recommendations and clear ownership for follow-through.
Discover more prompts that could help with your workflow.
Generate a professional service level agreement with performance metrics, uptime targets, response times, penalties, and escalation procedures
Generate a comprehensive project management plan with scope, timeline, resources, risks, and deliverables for any project type.
Generate a structured business case that analyzes costs, benefits, risks, and alternatives to win executive approval for your project or investment