
Add CUDA kernel support for 4-bit quantization with blocksize=32 #1854

Open
Abdennacer-Badaoui wants to merge 5 commits into bitsandbytes-foundation:main from Abdennacer-Badaoui:32-blocksize-support

Conversation

@Abdennacer-Badaoui

Description

Implements a specialized CUDA kernel to support blocksize=32 for 4-bit quantization (FP4/NF4), addressing the feature request in #986.
Smaller block sizes improve quantization accuracy by computing a separate scaling factor for each smaller group of values, reducing quantization error at the cost of slightly higher metadata overhead (one scale per 32 values instead of one per 64).
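To make the accuracy/overhead trade-off concrete, here is a minimal NumPy simulation of blockwise absmax quantization against the published NF4 code values. This is a host-side sketch, not the kernel itself, and `blockwise_quant_error` is a hypothetical helper written for this illustration:

```python
import numpy as np

# The 16 NF4 code values (levels fitted to a standard normal distribution).
NF4_CODE = np.array([
    -1.0, -0.6961928009986877, -0.5250730514526367, -0.39491748809814453,
    -0.28444138169288635, -0.18477343022823334, -0.09105003625154495, 0.0,
    0.07958029955625534, 0.16093020141124725, 0.24611230194568634,
    0.33791524171829224, 0.44070982933044434, 0.5626170039176941,
    0.7229568362236023, 1.0,
])

def blockwise_quant_error(x, blocksize):
    """Simulate blockwise NF4 quantization: scale each block by its absmax,
    round to the nearest code value, rescale, and return the mean abs error."""
    x = x.reshape(-1, blocksize)
    absmax = np.abs(x).max(axis=1, keepdims=True)          # one scale per block
    scaled = x / absmax                                    # values in [-1, 1]
    idx = np.abs(scaled[..., None] - NF4_CODE).argmin(axis=-1)
    dequant = NF4_CODE[idx] * absmax                       # dequantize
    return np.abs(x - dequant).mean()

rng = np.random.default_rng(0)
x = rng.standard_normal(1 << 18).astype(np.float32)
e64 = blockwise_quant_error(x, 64)
e32 = blockwise_quant_error(x, 32)
print(f"blocksize=64 err {e64:.6f}  blocksize=32 err {e32:.6f}")
assert e32 < e64   # smaller blocks -> tighter scales -> lower error
```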

Key Changes

New quantization kernel (kQuantizeBlockwise32):

  • Optimized for blocksize=32: each warp (32 threads) processes two blocks
  • Threads 0-15 handle block 0, threads 16-31 handle block 1 (2 values per thread)
  • Each half-warp computes an independent scale factor for its block, giving finer granularity
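The half-warp layout above can be modeled on the host side. `lane_mapping` below is a hypothetical helper, not code from the kernel, sketching which elements each lane touches:

```python
def lane_mapping(lane, blocksize=32, threads_per_block=16):
    """Model of the warp layout: lanes 0-15 serve block 0, lanes 16-31 serve
    block 1.  Each lane loads blocksize // threads_per_block consecutive values."""
    block = lane // threads_per_block                   # which of the 2 blocks
    slot = lane % threads_per_block                     # position in the half-warp
    vals_per_thread = blocksize // threads_per_block    # 32 / 16 = 2
    start = block * blocksize + slot * vals_per_thread
    return block, list(range(start, start + vals_per_thread))

# Lane 0 handles elements 0-1 of block 0; lane 16 handles elements 32-33 of block 1.
print(lane_mapping(0))    # (0, [0, 1])
print(lane_mapping(16))   # (1, [32, 33])
```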

Dequantization: Reuses the existing generic kernel with a proper dual-scale lookup
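The per-block scale lookup during dequantization can be sketched as follows. This is a NumPy model, not the CUDA kernel; the high-nibble-first packing order and the `dequantize_blockwise` helper are assumptions for illustration:

```python
import numpy as np

def dequantize_blockwise(packed, absmax, code, blocksize=32):
    """Unpack two 4-bit code indices per byte, look up the code value, and
    rescale each element by its block's absmax.  The high nibble is assumed
    to hold the earlier element (an assumption, not taken from the kernel)."""
    hi = (packed >> 4).astype(np.int64)
    lo = (packed & 0x0F).astype(np.int64)
    idx = np.stack([hi, lo], axis=-1).reshape(-1)        # element order: hi, lo
    scales = absmax[np.arange(idx.size) // blocksize]    # per-block scale lookup
    return code[idx] * scales

# Toy example: a linear 16-entry code table and blocksize=4.
code = np.linspace(-1.0, 1.0, 16)
packed = np.array([0xF0, 0x0F], dtype=np.uint8)          # indices 15, 0, 0, 15
absmax = np.array([2.0])                                 # one block of 4 values
print(dequantize_blockwise(packed, absmax, code, blocksize=4))
# [ 2. -2. -2.  2.]
```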

Testing: Extended test suites in tests/test_functional.py, tests/test_linear4bit.py, and tests/test_ops.py

Quick comparison

Test configuration: float16, CUDA, averaged over 1000 runs per shape

FP4 Quantization Error Comparison

| Shape | Blocksize=64 | Blocksize=32 | Improvement |
|---|---|---|---|
| 1K×1K | 0.096551 | 0.092732 | +4.0% |
| 2K×2K | 0.096542 | 0.092731 | +3.9% |
| 4K×4K | 0.096545 | 0.092732 | +3.9% |
| 8K×4K | 0.096546 | 0.092731 | +4.0% |
| 1K×768 (LLaMA-like) | 0.096550 | 0.092732 | +4.0% |
| 4K×11K (LLaMA FFN) | 0.096546 | 0.092733 | +3.9% |

NF4 Quantization Error Comparison

| Shape | Blocksize=64 | Blocksize=32 | Improvement |
|---|---|---|---|
| 1K×1K | 0.072797 | 0.070271 | +3.5% |
| 2K×2K | 0.072794 | 0.070274 | +3.5% |
| 4K×4K | 0.072795 | 0.070270 | +3.5% |
| 8K×4K | 0.072795 | 0.070271 | +3.5% |
| 1K×768 (LLaMA-like) | 0.072796 | 0.070274 | +3.5% |
| 4K×11K (LLaMA FFN) | 0.072796 | 0.070271 | +3.5% |
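The Improvement column is consistent with the relative error reduction (err64 − err32) / err64. A quick check against the first row of each table, using the values above (`improvement` is a helper written here, not from the PR):

```python
# Relative error reduction, matching the tables: (err64 - err32) / err64.
def improvement(err64, err32):
    return 100.0 * (err64 - err32) / err64

# FP4, 1K×1K row from the table above.
print(f"{improvement(0.096551, 0.092732):+.1f}%")   # +4.0%
# NF4, 1K×1K row.
print(f"{improvement(0.072797, 0.070271):+.1f}%")   # +3.5%
```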

@Abdennacer-Badaoui
Author

@matthewdouglas for review :)
