Tuesday, September 27
Shadow

NVIDIA, Intel, and ARM bet their AI future on FP8, 8-bit FP whitepaper released

Three major tech and AI companies, Arm, Intel, and NVIDIA, have teamed up to standardize FP8, a new 8-bit floating-point format. The companies have released a white paper outlining the specification and its two variants, E5M2 and E4M3, intended to provide a common, interchangeable format that works for both artificial intelligence (AI) training and inference.
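As a rough illustration of how the two variants divide their eight bits, the sketch below decodes an FP8 byte under each layout (E4M3: 4 exponent bits, 3 mantissa bits, bias 7; E5M2: 5 exponent bits, 2 mantissa bits, bias 15, as described in the white paper). The decode_fp8 helper is a hypothetical example, not part of any vendor library, and it ignores the NaN/infinity encodings, which the two variants handle differently.

```python
def decode_fp8(byte: int, exp_bits: int, man_bits: int, bias: int) -> float:
    """Decode an 8-bit value under a generic sign/exponent/mantissa split.

    Illustrative sketch only: special values (NaN/Inf) are encoded
    differently by E4M3 and E5M2 and are omitted here.
    """
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exponent = (byte >> man_bits) & ((1 << exp_bits) - 1)
    mantissa = byte & ((1 << man_bits) - 1)
    if exponent == 0:  # subnormal: no implicit leading 1
        return sign * (mantissa / (1 << man_bits)) * 2.0 ** (1 - bias)
    return sign * (1 + mantissa / (1 << man_bits)) * 2.0 ** (exponent - bias)

# E4M3: 4 exponent bits, 3 mantissa bits, bias 7  -- more precision, less range
# E5M2: 5 exponent bits, 2 mantissa bits, bias 15 -- more range, less precision
print(decode_fp8(0b0_0111_000, exp_bits=4, man_bits=3, bias=7))   # 1.0 in E4M3
print(decode_fp8(0b0_01111_00, exp_bits=5, man_bits=2, bias=15))  # 1.0 in E5M2
```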

NVIDIA, ARM, and Intel Look to 8-Bit Floating Point FP8 for Future AI Projects

In theory, this new alignment of cross-industry specifications between these three tech giants will allow AI models to work and function on all hardware platforms, accelerating AI software development.

Innovation in both software and hardware has become a necessity for artificial intelligence to deliver enough computational throughput for the technology to advance. The compute requirements of AI workloads have grown steadily over the past few years, and especially sharply over the past year. One area of AI research gaining importance in bridging this computing divide is reducing numerical precision requirements in deep learning, which improves both memory and computational efficiency.
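To make the memory argument concrete, here is a back-of-the-envelope comparison; the one-billion-parameter model size is an arbitrary assumption chosen for round numbers.

```python
# Rough memory-footprint comparison for a hypothetical 1-billion-parameter model.
params = 1_000_000_000
for name, bytes_per_value in [("FP32", 4), ("FP16", 2), ("FP8", 1)]:
    print(f"{name}: {params * bytes_per_value / 2**30:.1f} GiB")
# FP32: 3.7 GiB, FP16: 1.9 GiB, FP8: 0.9 GiB -- halving the bit width halves
# both the memory needed to hold the values and the bytes moved per step.
```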

Image source: “FP8 formats for deep learning”, via NVIDIA, Arm and Intel.

Intel intends to support the AI format specification across its roadmap, which covers processors, graphics cards, and AI accelerators, including the Habana Gaudi deep learning accelerator the company is developing. The promise of reduced-precision methods lies in exploiting the inherent noise tolerance of deep neural networks to improve computational efficiency.

Image source: “FP8 formats for deep learning”, via NVIDIA, Arm and Intel.

The new FP8 specification stays close to existing IEEE 754 floating-point formats, striking a comfortable balance between hardware and software so that current AI implementations can be reused, adoption is accelerated, and developer productivity improves.
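One way to see that alignment: E5M2 uses the same sign bit, 5-bit exponent field, and exponent bias as IEEE 754 half precision (FP16), so a minimal truncating conversion is just a bit shift. The sketch below is a hypothetical illustration, not a production converter, which would typically use round-to-nearest-even and saturation instead of truncation.

```python
import struct

def fp16_to_e5m2_truncate(x: float) -> int:
    """Sketch: round-toward-zero conversion from IEEE half precision to E5M2.

    E5M2 keeps FP16's sign and 5-bit exponent and drops the low 8 of
    FP16's 10 mantissa bits, so truncation is a single shift.
    """
    half_bits = struct.unpack("<H", struct.pack("<e", x))[0]  # FP16 bit pattern
    return half_bits >> 8                                     # keep top 8 bits

print(hex(fp16_to_e5m2_truncate(1.0)))   # 0x3c -> 0b0_01111_00, i.e.  1.0 in E5M2
print(hex(fp16_to_e5m2_truncate(-2.0)))  # 0xc0 -> 0b1_10000_00, i.e. -2.0 in E5M2
```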

Images: language model AI training and inference results.

The document builds on the principle of leveraging algorithms, concepts, and conventions already established through IEEE standardization, shared between Intel, Arm, and NVIDIA. A more consistent standard across companies will allow the greatest latitude for future AI innovation while preserving current industry conventions.

News sources: Arm, FP8 specification