INTRODUCTION
High-bandwidth memory (HBM) is a high-speed computer memory interface for 3D-stacked synchronous dynamic random-access memory (SDRAM), designed for applications that require the highest possible bandwidth between memory and processing. This performance is achieved by combining TSV-stacked memory dies with logic in the same chip package. HBM-PIM (processing-in-memory) adds an AI processing function to HBM to accelerate data processing in supercomputers and AI applications.
HBM-PIM MEMORY MARKET SIZE AND FORECAST
The Global HBM-PIM memory market accounted for $XX Billion in 2022 and is anticipated to reach $XX Billion by 2030, registering a CAGR of XX% from 2024 to 2030.
HBM-PIM MEMORY MARKET NEW PRODUCT LAUNCH
Samsung unveils a high-bandwidth memory, processing-in-memory architecture. Samsung Electronics has announced on its Newsroom webpage the development of a new type of memory chip architecture known as high-bandwidth memory, processing-in-memory (HBM-PIM).
Artificial intelligence processing is added to high-bandwidth memory chips via this architecture. The new chips will be marketed as a means of speeding up data centres, increasing the speed of high-performance computers, and enabling AI applications.
Computer engineers have long worked to eliminate bottlenecks in computers caused by the need to shuffle data back and forth between a computer’s CPU and memory chips. Most efforts have focused on adding small amounts of fast memory caches to CPUs; however, doing so increases energy consumption, resulting in more heat production.
In this new effort, the Samsung team has taken the opposite approach, allowing the memory chips to do some of the processing. Samsung has put “a DRAM-optimised AI engine inside of a memory bank” with the new HBM-PIM. This reduces the CPU’s processing load by offloading some of its work to the memory banks themselves.
Not only is the workload reduced, but processing speed increases because less data has to move. The HBM-PIM has a programmable computing unit (PCU) that runs at 300 MHz and is controlled by the host CPU using conventional memory commands. Using this method, the PCU can be instructed to perform FP16 calculations directly inside the DRAM unit. Notably, the HBM-PIM can function as normal RAM when a system is running non-HBM-PIM applications.
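To make the idea concrete, the following is a minimal toy model of this scheme in Python. It is an illustrative sketch only: the class and method names (`PIMBank`, `pim_fma`) and the command layout are assumptions for illustration, not Samsung's actual command set. The point it demonstrates is the distinction the text describes: the same bank serves conventional reads and writes, but a dedicated PIM command tells the in-bank compute unit to perform an FP16 operation without the operands ever leaving the bank.

```python
import struct

def to_fp16(x: float) -> float:
    """Round a Python float to FP16 precision, mimicking the in-bank unit's
    half-precision arithmetic (struct format 'e' is IEEE 754 binary16)."""
    return struct.unpack('e', struct.pack('e', x))[0]

class PIMBank:
    """Toy model of a PIM-capable memory bank (hypothetical interface)."""
    def __init__(self, size: int):
        self.cells = [0.0] * size

    # Conventional memory commands: the bank behaves as normal DRAM.
    def write(self, addr: int, value: float) -> None:
        self.cells[addr] = value

    def read(self, addr: int) -> float:
        return self.cells[addr]

    # PIM command: the host instructs the in-bank unit to combine two
    # operand ranges with an FP16 multiply-add; data stays in the bank.
    def pim_fma(self, a_base: int, b_base: int, dst_base: int, n: int) -> None:
        for i in range(n):
            prod = to_fp16(self.cells[a_base + i] * self.cells[b_base + i])
            self.cells[dst_base + i] = to_fp16(self.cells[dst_base + i] + prod)

bank = PIMBank(64)
for i in range(4):
    bank.write(i, float(i + 1))   # vector A at addresses 0..3
    bank.write(8 + i, 0.5)        # vector B at addresses 8..11
bank.pim_fma(a_base=0, b_base=8, dst_base=16, n=4)
print([bank.read(16 + i) for i in range(4)])  # → [0.5, 1.0, 1.5, 2.0]
```

In the real hardware the host CPU would issue these operations as ordinary-looking memory transactions, which is why, as noted above, the device can also fall back to acting as plain RAM.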
According to Samsung, when the new technology was tested with their existing HBM2 Aquabolt systems, system performance doubled and energy consumption was reduced by 70%.
They also stated that installing HBM-PIMs in existing systems would require no changes to existing hardware or software. Their HBM-PIM technology is currently being tested inside AI accelerators built by their AI solution partners; results are expected in the first half of this year.
HBM-PIM MEMORY MARKET COMPANY PROFILE
HBM-PIM MEMORY MARKET THIS REPORT WILL ANSWER THE FOLLOWING QUESTIONS