Industries such as AI, deep learning, scientific research, and financial services require high-performance computing solutions capable of handling massive datasets and complex calculations. HBM offers significant advantages over traditional memory architectures in terms of bandwidth and energy efficiency, making it an attractive choice for these applications.
The gaming industry continues to grow, driven by the increasing popularity of esports, virtual reality, and high-resolution gaming. HBM’s high bandwidth and low latency make it well-suited for graphics-intensive applications, driving demand for HBM-enabled GPUs and gaming consoles.
With the proliferation of cloud computing, IoT (Internet of Things), and big data analytics, data centers are experiencing unprecedented growth. HBM technology helps address memory bandwidth bottlenecks in data center servers and accelerators, improving overall system performance and energy efficiency.
The automotive industry is increasingly incorporating advanced driver-assistance systems (ADAS), autonomous driving technologies, and in-vehicle infotainment systems, which require high-performance computing capabilities. HBM’s ability to deliver high bandwidth with lower power consumption makes it suitable for automotive applications where space and power constraints are significant considerations.
HBM technology tends to be more expensive than traditional memory solutions such as DDR. Higher production costs and more complex packaging processes contribute to this higher cost, which could limit the adoption of HBM in some market segments.
HBM requires specialized packaging and interface designs that can demand significant changes to existing system architectures. Compatibility and integration challenges could slow the adoption of HBM, particularly in markets where backward compatibility and ease of integration are critical.
The high-performance memory market is competitive, with alternatives such as GDDR (Graphics Double Data Rate) and newer standards like GDDR6X and DDR5 also vying for market share. Competition from these alternative memory solutions could constrain the growth of the HBM market.
The adoption of HBM technology varies by region, with major semiconductor manufacturing hubs such as the United States, South Korea, Taiwan, and China playing key roles in the development and production of HBM-enabled devices. North America and Asia-Pacific are expected to remain dominant regions in the global HBM market, driven by the presence of major technology companies and growing demand for high-performance computing solutions.
HBM3E, like its predecessors, is a type of high-speed, high-bandwidth memory technology designed to address the growing demand for memory bandwidth in advanced computing systems. It is built upon the HBM architecture, featuring multiple layers of DRAM (Dynamic Random Access Memory) stacked vertically on top of each other, interconnected by through-silicon vias (TSVs) to achieve high bandwidth and energy efficiency.
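The bandwidth advantage of this stacked architecture comes largely from interface width: an HBM stack exposes a 1024-bit data bus, far wider than conventional DIMM interfaces. A minimal sketch of the peak-bandwidth arithmetic, using the standard 1024-bit stack width and an illustrative (not vendor-specified) per-pin rate:

```python
# Sketch: peak theoretical bandwidth of one stacked-DRAM interface,
# computed as bus width x per-pin data rate.
# The 1024-bit width is the standard HBM stack interface; the per-pin
# rate below is an illustrative HBM3-class figure, not a vendor spec.

def peak_bandwidth_gbps(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of one stack in GB/s."""
    return bus_width_bits * pin_rate_gbps / 8  # divide by 8: bits -> bytes

# A 1024-bit stack at 6.4 Gb/s per pin:
print(peak_bandwidth_gbps(1024, 6.4))  # -> 819.2 (GB/s per stack)
```

Multiplying by the number of stacks on a package gives the aggregate figure usually quoted for GPUs and accelerators.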
Samsung has developed a 12-layer (12H) HBM3E stack with 36 gigabytes (GB) of total capacity, purportedly the largest-capacity HBM3E product at this time. Samsung's HBM3E 12H delivers 1.28 TB/s of memory bandwidth. Both the capacity and the speed are 50% higher than those of Samsung's prior 8-stack HBM3 (8H) product. The 12H uses a thinner thermal compression non-conductive film (TC NCF) between the dies, allowing it to maintain the same height as the prior 8H stacked product.
This lets the higher-capacity part fit within the same package dimensions as the earlier 8H parts. The TC NCF is a critical element of the product: it dictates the overall stack height and carries heat out from the inner layers.
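The reported Samsung figures can be sanity-checked with simple arithmetic. A hedged sketch, assuming the standard 1024-bit HBM interface; the per-die density and per-pin rate below are derived from the reported numbers, not vendor-confirmed:

```python
# Hedged sanity check of the reported Samsung HBM3E 12H figures.
# Assumes the standard 1024-bit HBM stack interface; the implied
# per-die capacity and per-pin rate are derived, not vendor-confirmed.

stack_capacity_gb = 36   # reported total stack capacity, GB
layers = 12              # 12-high (12H) stack
bandwidth_tb_s = 1.28    # reported stack bandwidth, TB/s

per_die_gb = stack_capacity_gb / layers            # capacity per DRAM die
pin_rate_gbps = bandwidth_tb_s * 1000 * 8 / 1024   # implied per-pin rate, Gb/s

print(per_die_gb)     # -> 3.0 (i.e. 24 Gb per die)
print(pin_rate_gbps)  # -> 10.0
```

The same per-die arithmetic (3 GB per die) is consistent with an 8-layer, 24-GB stack such as the one Micron describes below.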
Micron recently announced that its HBM3E memory promises greater than 1.2 TB/s of bandwidth and 30% lower power consumption than similar competitive offerings, with capacities of up to 24 GB. Micron's HBM3E launch product is an 8-layer, 24-GB stack that will ship in Q2 2024 with the new Nvidia H200 Tensor Core GPU.
Micron believes the power efficiency of its HBM3E stacks will reduce data center operating costs as demand for high-performance AI computing grows. Micron trails Samsung and SK Hynix in market share and is already looking beyond HBM3E to the next revision, HBM4. Micron hopes its HBM3E deployments and early HBM4 work will give it a path to a greater share of the HBM market.
SK Hynix has reportedly promised the first deliveries of its HBM3E memory stack this month (March 2024). The parts are planned for delivery to Nvidia. While SK Hynix has not revealed the details of its deliveries or plans, it has reported in financial statements that it expects strong growth in its HBM shipments, up to 100 million units by 2030. Some of the scaling will take place in South Korea, and there are also plans for an HBM plant in Indiana in the U.S. to produce HBM stacks for Nvidia. SK Hynix created an HBM division to enable greater focus on the high-value market.
The global HBM3E market can be segmented into the following categories for further analysis.
Here is a list of some of the leading HBM3E companies in the world: