The Mystery Behind the PyTorch Automatic Mixed Precision Library | by Mengliu Zhao | Sep, 2024



Data Format Fundamentals — Single Precision (FP32) vs Half Precision (FP16)

Now, let's take a closer look at the FP32 and FP16 formats. FP32 and FP16 are IEEE formats that represent floating-point numbers using 32-bit and 16-bit binary storage, respectively. Both formats comprise three parts: a) a sign bit, b) exponent bits, and c) mantissa bits. FP32 and FP16 differ in the number of bits allocated to the exponent and mantissa, which results in different value ranges and precisions.
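As a quick illustration (my sketch, not from the article), the snippet below splits a float32 into its sign, exponent, and mantissa bit fields using only the Python standard library; FP32 allocates 1, 8, and 23 bits to these fields.

```python
import struct

def fp32_fields(x: float):
    # Reinterpret the 32-bit float as an unsigned integer to access its bits.
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31                # 1 sign bit
    exponent = (bits >> 23) & 0xFF   # 8 exponent bits
    mantissa = bits & 0x7FFFFF       # 23 mantissa bits
    return sign, exponent, mantissa

print(fp32_fields(-6.25))  # (1, 129, 4718592): -6.25 = -1.5625 x 2^2, biased exponent 129
```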

Difference between FP16 (IEEE standard), BF16 (Google Brain standard), FP32 (IEEE standard), and TF32 (Nvidia standard). Image source: https://en.wikipedia.org/wiki/Bfloat16_floating-point_format

How do you convert FP16 and FP32 bit patterns to real values? According to the IEEE-754 standard, the decimal value for FP32 = (-1)^sign × 2^(decimal exponent - 127) × (implicit leading 1 + decimal mantissa), where 127 is the exponent bias. For FP16, the formula becomes (-1)^sign × 2^(decimal exponent - 15) × (implicit leading 1 + decimal mantissa), where 15 is the corresponding exponent bias. See further details of the exponent bias here.
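As a minimal sketch of this formula (my example, not the article's), the function below decodes a raw FP16 bit pattern (normal numbers only; zero, subnormals, Inf, and NaN are ignored) and checks the result against NumPy's float16.

```python
import numpy as np

def decode_fp16(bits: int) -> float:
    sign = bits >> 15               # 1 sign bit
    exponent = (bits >> 10) & 0x1F  # 5 exponent bits, bias = 15
    mantissa = bits & 0x3FF         # 10 mantissa bits
    # (-1)^sign x 2^(exponent - 15) x (implicit leading 1 + mantissa fraction)
    return (-1) ** sign * 2.0 ** (exponent - 15) * (1 + mantissa / 2 ** 10)

raw = int(np.array([-6.25], dtype=np.float16).view(np.uint16)[0])
print(decode_fp16(raw))  # -6.25
```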

In this sense, the value range for FP32 is roughly [-2¹²⁷, 2¹²⁷] ≈ [-1.7×10³⁸, 1.7×10³⁸], and the value range for FP16 is roughly [-2¹⁵, 2¹⁵] = [-32768, 32768] (including the mantissa term, the exact maxima are about 3.4×10³⁸ for FP32 and 65504 for FP16). Note that the decimal exponent for FP32 lies between 0 and 255, and we exclude the largest value 0xFF since it is reserved for Inf/NaN; that is why the largest decimal exponent is 254 - 127 = 127. A similar rule applies to FP16.
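These ranges can be checked quickly with NumPy's finfo (a small sketch, not from the article):

```python
import numpy as np

print(np.finfo(np.float32).max)         # about 3.4e38, the FP32 maximum
print(float(np.finfo(np.float16).max))  # 65504.0, the FP16 maximum
print(np.float16(70000.0))              # inf: overflows the FP16 range
```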

For precision, note that both the exponent and the mantissa contribute to the precision limit (via denormalized, or subnormal, numbers; see the detailed discussion here), so FP32 can represent values as small as 2^(-23) × 2^(-126) = 2^(-149), and FP16 can represent values as small as 2^(-10) × 2^(-14) = 2^(-24).
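A quick NumPy check of these smallest (subnormal) magnitudes, again just as a sketch:

```python
import numpy as np

print(np.float32(2.0 ** -149))  # nonzero: the smallest positive FP32 subnormal (~1.4e-45)
print(np.float32(2.0 ** -150))  # 0.0: underflows to zero
print(np.float16(2.0 ** -24))   # nonzero: the smallest positive FP16 subnormal (~6.0e-8)
print(np.float16(2.0 ** -25))   # 0.0: underflows to zero
```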

The difference between the FP32 and FP16 representations is the key concern of mixed precision training: different layers/operations of a deep learning model are either insensitive or sensitive to value range and precision, and they need to be handled separately.
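This per-operation treatment is what PyTorch's automatic mixed precision (AMP) utilities provide. Below is a minimal training-step sketch using torch.autocast and torch.cuda.amp.GradScaler; the model, data, and optimizer are placeholders for illustration, not from the article.

```python
import torch

model = torch.nn.Linear(1024, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()  # scales the loss so FP16 gradients do not underflow

inputs = torch.randn(32, 1024, device="cuda")
targets = torch.randint(0, 10, (32,), device="cuda")

optimizer.zero_grad()
with torch.autocast(device_type="cuda", dtype=torch.float16):
    # Inside autocast, range-insensitive ops (e.g. matmul) run in FP16,
    # while precision-sensitive ops (e.g. the loss) stay in FP32.
    outputs = model(inputs)
    loss = torch.nn.functional.cross_entropy(outputs, targets)

scaler.scale(loss).backward()  # backward pass on the scaled loss
scaler.step(optimizer)         # unscale gradients; skip the step if they contain inf/nan
scaler.update()                # adjust the scale factor for the next iteration
```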
