SpQR: Sparse-Quantized Representation for Near-Lossless LLM Compression

Large Language Models (LLMs) are often bottlenecked by memory requirements, limiting their deployment on consumer hardware. SpQR, introduced by researchers including Tim Dettmers and documented on arXiv, is a hybrid quantization technique. It achieves high-accuracy compression by isolating "outlier" weights that are sensitive to quantization and storing them in high precision, while compressing the remaining 99% of weights to 3-4 bits.

1. The Challenge of Quantization Error

Traditional quantization methods often struggle with "outlier" weights: individual parameters that have a disproportionate impact on the model's output. When these outliers are forced into low-bit representations (like 4-bit), the model's perplexity (accuracy) degrades significantly.
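To make this concrete, the toy example below applies plain round-to-nearest quantization (a stand-in for the traditional methods above; the 4-bit setting and the synthetic weights are illustrative assumptions, not figures from the paper) and shows how a single outlier coarsens the grid for every other weight:

```python
import numpy as np

def quantize_rtn(w, bits=4):
    """Symmetric round-to-nearest quantization to `bits` bits."""
    levels = 2 ** (bits - 1) - 1           # 7 levels per side for 4-bit
    scale = np.abs(w).max() / levels       # grid spacing set by the largest weight
    q = np.clip(np.round(w / scale), -levels, levels)
    return q * scale                       # dequantized approximation

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=1024)       # ordinary small weights
w[0] = 1.5                                 # one outlier dominates the range

mse_naive = np.mean((w - quantize_rtn(w)) ** 2)

# Same weights, but with the outlier excluded from quantization
# and stored separately in full precision:
w_rest = w.copy()
w_rest[0] = 0.0
w_hat = quantize_rtn(w_rest)
w_hat[0] = w[0]
mse_isolated = np.mean((w - w_hat) ** 2)

print(f"MSE with outlier in-grid:  {mse_naive:.2e}")
print(f"MSE with outlier isolated: {mse_isolated:.2e}")  # far smaller
```

Because the scale is set by the largest magnitude in the tensor, one extreme value forces a grid too coarse for the other 1,023 weights; isolating it restores a fine grid for them.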
2. Technical Mechanism

- Sensitivity detection: SpQR uses a Hessian-based regularizer to identify which weights are most sensitive to quantization.
- Controlled sparsity: Pre-defined sparsity levels (e.g., 1% outliers) ensure predictable memory usage.
- Hybrid representation: The final model is a combination of a dense, low-bit matrix and a sparse, high-precision matrix (see the sketch after this list).
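The sketch below shows the shape of this split. It is a simplification under stated assumptions, not the repository's code: the `spqr_style_split` name and the `sensitivity` argument are hypothetical, the Hessian-based criterion is replaced by a generic per-weight score, and the grouped quantization statistics used by the real method are omitted.

```python
import numpy as np
from scipy.sparse import coo_matrix

def spqr_style_split(W, sensitivity, outlier_frac=0.01, bits=3):
    """Split W into a dense low-bit matrix plus a sparse high-precision one.

    `sensitivity` is any per-weight importance score standing in for
    SpQR's Hessian-based criterion.
    """
    k = max(1, int(outlier_frac * W.size))
    top = np.argpartition(sensitivity.ravel(), -k)[-k:]   # k most sensitive weights
    mask = np.zeros(W.size, dtype=bool)
    mask[top] = True
    mask = mask.reshape(W.shape)

    # Dense part: outliers zeroed, remainder quantized to `bits` bits.
    dense = np.where(mask, 0.0, W)
    levels = 2 ** (bits - 1) - 1
    scale = np.abs(dense).max() / levels
    dense_q = np.clip(np.round(dense / scale), -levels, levels).astype(np.int8)

    # Sparse part: outliers kept in high precision (fp16 here).
    rows, cols = np.nonzero(mask)
    outliers = coo_matrix((W[mask].astype(np.float16), (rows, cols)), shape=W.shape)
    return dense_q, scale, outliers

# Toy usage: squared magnitude as a crude stand-in for true sensitivity.
rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.02, (256, 256))
dense_q, scale, outliers = spqr_style_split(W, W ** 2)
```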
3. Key Performance Metrics

Based on experimental data from the SpQR GitHub Repository, the method offers:

- Fast inference: Despite the hybrid structure, optimized kernels allow for faster inference compared to uncompressed models due to reduced memory bandwidth bottlenecks (a back-of-envelope estimate follows below).
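To see where the bandwidth savings come from, here is a rough storage estimate; the 3-bit dense weights, 1% outlier rate, and 16-bit index overhead are assumptions for illustration, and the per-group statistics the real format also stores are ignored:

```python
DENSE_BITS = 3           # assumed low-bit width for the dense matrix
OUTLIER_FRAC = 0.01      # assumed 1% outlier rate
OUTLIER_BITS = 16 + 16   # fp16 value plus an assumed ~16-bit position index

avg_bits = (1 - OUTLIER_FRAC) * DENSE_BITS + OUTLIER_FRAC * OUTLIER_BITS
print(f"~{avg_bits:.2f} bits/param vs 16 for fp16 "
      f"({16 / avg_bits:.1f}x fewer bytes to read)")
# -> ~3.29 bits/param vs 16 for fp16 (4.9x fewer bytes to read)
```

Since batch-1 matrix-vector products are memory-bound, reading roughly five times fewer bytes per weight translates largely into faster token generation, which is how the hybrid format can outrun fp16 despite the extra sparse pass.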
4. Implementation
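At inference time, the two halves of the format recombine as a dequantization plus a sparse addition. The function below continues the hypothetical `spqr_style_split` sketch from section 2 and is not the repository's API:

```python
import numpy as np
from scipy.sparse import coo_matrix  # type of the `outliers` argument

def reconstruct(dense_q, scale, outliers):
    """Recombine the dense low-bit and sparse high-precision parts."""
    return dense_q.astype(np.float32) * scale + outliers.toarray()

# A production kernel fuses the two terms inside the matmul so the
# outliers never materialize densely; this version is for clarity only.
```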