
feat: enhance compaction prompts for higher-fidelity context restoration#14662

Open
ryanwyler wants to merge 2 commits into anomalyco:dev from gignit:enhanced-compaction-prompt

Conversation

@ryanwyler
Contributor

@ryanwyler ryanwyler commented Feb 22, 2026

Issue for this PR

Closes #3031
Also addresses #14368 and #2945

Type of change

  • Bug fix
  • New feature
  • Refactor / code improvement
  • Documentation

What does this PR do?

Provides significant improvements for maintaining a session after compaction. This PR rewrites the compaction system prompt and inline template so the agent retains enough working context to continue after compaction. It re-frames compaction as a memory transfer rather than a summary, expands the inline template from 5 sections to 8 (adding separate User Directives, Failed Approaches, Resolved Code and Discoveries, and How Things Work Now sections), and adds imperative preservation rules for resolved code, credentials, and user corrections.

Pairs well with #14665, which makes it possible to use a different model for compaction, controlled via the TUI.

How did you verify your code works?

Tested against a real multi-hour session export, comparing compaction output across 5 prompt iterations. The final version was benchmarked against a 22-cycle reference compaction from the same session.

Checklist

  • I have tested my changes locally
  • I have not included unrelated changes in this PR

Rewrite both the system prompt (compaction.txt) and the inline user
message template (compaction.ts defaultPrompt) to produce compaction
output that enables an AI agent to resume complex work after a full
context wipe.

## System prompt (compaction.txt)

Reframed from summarization to memory transfer. The model is told its
context is about to be wiped and everything not written down will be
lost. Preservation rules are direct commands, not conditionals:

- PRESERVE resolved code verbatim — query patterns, state machines,
  auth flows, data model behaviors, correct field names, API response
  shapes. Do not describe code that you can show.
- INCLUDE failed approaches numbered for cross-cycle reference
- PRESERVE user directives as sacred, accumulating across cycles
- PRESERVE credentials and operational details
- RESOLVE implementation contradictions to settled state
- DISCARD debugging noise, narration, superseded work, trivial code

## Inline template (compaction.ts defaultPrompt)

Expanded from 5 sections to 8, structured around what an agent needs
most when told to continue with no prior context:

1. Objective — immediate next action, current blockers
2. User Directives — separated from task instructions, accumulate
3. Plan — remaining steps, enough detail to pick up mid-step
4. Failed Approaches — exhaustive, numbered, highest-value section
5. Resolved Code & Discoveries — verbatim code blocks and patterns
   that would be painful to rediscover
6. How Things Work Now — settled system state as direct instructions
7. Accomplished — completed, in-progress, remaining as direct facts
8. Relevant files / environment — modification status, credentials,
   credential sync locations, endpoints, service ports
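For illustration, the expanded inline template might look roughly like this in compaction.ts. This is a hypothetical sketch: the section names follow the list above, but the exact wording and the surrounding code are assumptions, not the PR's actual contents.

```typescript
// Hypothetical sketch of the expanded defaultPrompt in compaction.ts.
// Section names follow the PR description; all wording is assumed.
export const defaultPrompt = `
Your context is about to be wiped. Write a memory transfer, not a summary.
Structure the output with exactly these sections:

## 1. Objective
Immediate next action and current blockers.

## 2. User Directives
User instructions, kept separate from task steps. These accumulate
across compaction cycles and must never be dropped.

## 3. Plan
Remaining steps, with enough detail to pick up mid-step.

## 4. Failed Approaches
Exhaustive and numbered, so later cycles can reference them.

## 5. Resolved Code & Discoveries
Verbatim code blocks and patterns that would be painful to rediscover.

## 6. How Things Work Now
Settled system state, written as direct instructions.

## 7. Accomplished
Completed, in-progress, and remaining work as direct facts.

## 8. Relevant Files / Environment
Modification status, credentials, endpoints, service ports.
`.trim();
```

The point of fixing the section list in the prompt itself is that the model cannot silently merge or drop a section, which is what makes the "Failed Approaches" and "User Directives" material survive repeated cycles.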

## Approach

Developed through iterative testing against a long-running multi-
compaction session, comparing output quality at each step. Key
phrasings were refined based on measurable differences in what the
model chose to preserve or discard — particularly around verbatim
code inclusion, failed approach depth, and credential capture.
@github-actions github-actions bot added contributor needs:compliance This means the issue will auto-close after 2 hours. and removed needs:compliance This means the issue will auto-close after 2 hours. labels Feb 22, 2026
@github-actions
Contributor

Thanks for updating your PR! It now meets our contributing guidelines. 👍

@mmabrouk

Thanks for the PR @ryanwyler !

Here is my take:

  1. It's clear that the compaction feature we have right now sucks. Any compaction mid-work right now means the work is almost lost.
  2. It's very tricky to test the compaction feature without investing a lot of resources. One solution could be A/B testing different prompts / harnesses and getting user feedback. Another is one of the core contributors vibe testing this. However, it's unlikely such a PR would be merged even if it's great.
  3. In my opinion the prompt is only part of the solution. The solution should include giving the LLM tools to fetch information from the lost conversation, for instance the last messages (these are usually the most relevant; my guess is that just including them would solve a large portion of the cases where things derail, and I usually simply copy-paste them after the compaction).
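The tool idea in point 3 could be sketched roughly as follows. This is a hypothetical illustration, not part of the PR; the names (`StoredMessage`, `fetchCompactedMessages`) are invented, and a real implementation would read from the persisted session log rather than an in-memory array.

```typescript
// Hypothetical sketch: a tool the agent could call after compaction to
// retrieve the tail of the conversation that was discarded. Names invented.
interface StoredMessage {
  role: "user" | "assistant";
  content: string;
}

// Return the last N archived messages. The most recent messages are usually
// the most relevant after compaction, so only the tail is exposed.
function fetchCompactedMessages(
  archive: StoredMessage[],
  lastN: number,
): StoredMessage[] {
  // Clamp so that asking for more messages than exist returns everything.
  return archive.slice(Math.max(0, archive.length - lastN));
}
```

Exposed as an agent tool, this would let the model pull back exactly the recent messages it needs instead of relying on the compaction summary alone.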

In any case, compaction is one of the core problems when using opencode day to day. I hope the core team reviews this and prioritizes working on it.

@NamedIdentity

I would suggest that instead of making the compaction coding specific, you make it focused on general task performance.

A wider umbrella will improve the longevity and overall effectiveness of the solution.

A lot of coding work isn't actually coding, but knowledge work (e.g. research and design). The non-coding work is arguably hurt far more by compaction than the coding work, since its context is harder to recover.

That said, the first thing I replaced in OpenCode was the compaction prompt. I'd be very interested in your ideas and approach to the higher-level problem of making compaction work for general task performance.



Development

Successfully merging this pull request may close these issues.

Model in BUILD mode does not have enough context to continue after compaction

3 participants