Fix AppendNextTokensToSequences heap overflow #2111
Open
apsonawane wants to merge 3 commits into main from
Conversation
Contributor
Pull request overview
This PR aims to prevent out-of-bounds writes into the preallocated sequence buffers during generation by adding explicit max_length bounds checks across CPU and CUDA search implementations, and by failing fast when GenerateNextToken() is called after reaching max_length.
Changes:
- Added early-return bounds checks in CPU greedy/beam append paths to prevent writing past the sequences buffer.
- Added CUDA-side guards to avoid launching append kernels once the sequence length reaches max_length, and updated the done/max-length checks.
- Added an early GenerateNextToken() runtime error when the sequence length is already at max_length.
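The early-return bounds check on the CPU append paths can be sketched as follows. This is a simplified stand-in, not the PR's actual code: the Sequences struct and AppendNextToken function here are hypothetical models of what GreedySearch_Cpu::AppendNextTokensToSequences does with its preallocated buffer.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical simplified model of a preallocated sequence buffer; the real
// PR touches GreedySearch_Cpu::AppendNextTokensToSequences in src/search.cpp.
struct Sequences {
  std::vector<int> buffer;        // preallocated to hold max_length tokens
  std::size_t current_length = 0; // tokens written so far
  std::size_t max_length = 0;
};

// Early-returns instead of writing past the buffer once current_length
// reaches max_length (the kind of guard the PR adds).
bool AppendNextToken(Sequences& seq, int token) {
  if (seq.current_length >= seq.max_length)
    return false;  // fail fast: no out-of-bounds write
  seq.buffer[seq.current_length++] = token;
  return true;
}
```

With this shape of guard, a caller that keeps requesting tokens after the buffer is full gets a clean refusal rather than a heap overflow.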
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| src/search.cpp | Adds CPU-side bounds checks and adjusts done/reset behavior around appending tokens. |
| src/generators.cpp | Adds a fast-fail guard in GenerateNextToken() when already at max_length. |
| src/cuda/search_cuda.cpp | Adds CUDA-side conditional guards to avoid appending once at/over max_length, plus logging/done handling. |
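The fast-fail guard described for src/generators.cpp might look roughly like the sketch below. The free-function signature and the message text are illustrative assumptions; the real Generator::GenerateNextToken is a member function with its own state and error-reporting conventions.

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>
#include <string>

// Hypothetical sketch of an early runtime error when generation is requested
// after the sequence buffer is already full, as the PR adds to
// Generator::GenerateNextToken.
void GenerateNextToken(std::size_t current_length, std::size_t max_length) {
  if (current_length >= max_length)
    throw std::runtime_error(
        "Sequence length " + std::to_string(current_length) +
        " is already at max_length " + std::to_string(max_length));
  // ... normal token generation would proceed here ...
}
```

Throwing up front gives the caller a clear diagnostic instead of letting a later append silently write past the buffer.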
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
This pull request adds robust bounds checking to prevent writing past the allocated sequence buffer during token generation, on both the CPU and CUDA backends. It ensures that sequence-appending operations do not exceed the configured max_length, and introduces early error handling and logging when the maximum length is reached. These changes improve the safety and stability of sequence generation, preventing out-of-bounds writes and making debugging easier.

Sequence bounds checking and safety:
- Added checks in GreedySearch_Cuda::SampleTopKTopP, GreedySearch_Cuda::AppendTokens, GreedySearch_Cpu::AppendNextTokensToSequences, and BeamSearch_Cpu::AppendNextTokensToSequences to ensure tokens are only appended if the current sequence length is less than max_length, preventing buffer overflows. [1] [2] [3] [4]
- Changed the max-length comparisons to use >= instead of == when checking whether the sequence has reached max_length, ensuring no further tokens are appended once the limit is reached. [1] [2]

Error handling and logging:
- In Generator::GenerateNextToken, added an early runtime error if called when the sequence is already at max_length, providing a clear message for debugging.

State management improvements:
- The done_ state is only reset if the sequence buffer is not full, preserving the correct state and preventing accidental out-of-bounds writes on subsequent calls. [1] [2]
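The switch from == to >= can be illustrated with a pair of hypothetical predicates (not the PR's actual code): if the tracked length ever overshoots max_length, for instance because of a prior bug, an == test would report "not at the limit" and allow further appends, while >= still stops.

```cpp
#include <cassert>
#include <cstddef>

// Fragile check: only true when length lands exactly on max_length,
// so an overshoot slips through.
bool ReachedMaxLengthEq(std::size_t length, std::size_t max_length) {
  return length == max_length;
}

// Robust check (the form the PR adopts): also stops if length
// somehow exceeds max_length.
bool ReachedMaxLengthGe(std::size_t length, std::size_t max_length) {
  return length >= max_length;
}
```

The >= form costs nothing extra and turns a latent out-of-bounds write into a clean stop.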