Teralab HBM Milestone and Roadmap

Kim's Law

Kim’s Law: Vertical Stack-up Trends

Kim’s law is the observation that the number of stacked layers in a dense three-dimensional integrated circuit doubles about every two years. It is named after Joungho Kim, a professor at the Korea Advanced Institute of Science and Technology (KAIST). In 2017, he first officially announced Kim’s law in a newspaper article [1] and projected that this rate of growth would continue for years to come.

Kim’s prediction has held true for a number of years. It has been used to guide long-term planning and to set targets for research and development (R&D) in the Korean semiconductor industry. Advancements in high-bandwidth systems are strongly linked to Kim’s law, since vertical stacking drives data bandwidth, the number of I/Os, and memory capacity. High-bandwidth systems have in turn contributed to recent developments in graphics modules, high-performance computing, and machine-learning applications.

Kim’s law begins where Moore’s law ends. Although the rate of Moore’s law held steady from 1975 until around 2012, it has slowed since 2013. Moore’s law may continue at the leading semiconductor manufacturers, TSMC and Samsung Electronics, until 2 nm nodes reach mass production (expected in 2024). However, MOSFET scaling will eventually reach the physical limit of miniaturization at the atomic level, where only a handful of silicon atoms form a transistor channel. Until a new device concept such as a novel transistor or the qubit arrives, Kim’s law will replace Moore’s law.
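The two-year doubling cadence can be sketched as a simple projection. Note that the base year and starting layer count below are illustrative assumptions, not figures taken from the Teralab roadmap:

```python
# Illustrative projection of Kim's law: the stacked-layer count
# doubles roughly every two years.
# base_year=2017 and base_layers=64 are assumed values for this sketch.

def projected_layers(year, base_year=2017, base_layers=64):
    """Layer count projected under a two-year doubling cadence."""
    return base_layers * 2 ** ((year - base_year) / 2)

for y in (2017, 2019, 2021, 2023):
    print(y, int(projected_layers(y)))
# 2017 64
# 2019 128
# 2021 256
# 2023 512
```

Under these assumptions, the stack grows 8x over six years, which is the qualitative trend the figure below depicts.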

Ref [1] https://www.etnews.com/20170331000273

Fig. 1 Kim's Law: Vertical Stack-up Trends. 

Fig. 2 Die Shots of High Bandwidth Memory (HBM) version 1 and 2. 

Fig. 3 Vertical Stack-up History and Die Shots of NAND Flash Memory. 

Teralab HBM Milestone

As Artificial Intelligence (AI) and big data technologies advance, the demand for high-bandwidth, high-capacity DRAM is rapidly increasing. As seen in the milestone above, to supply this bandwidth and capacity, HBM, a silicon-interposer- and TSV-based 3D IC, was first developed in 2013 and adopted as a JEDEC standard. Since then, HBM has delivered higher bandwidth and capacity by increasing data rates and stacking more DRAM dies. Recently, Samsung and SK Hynix released HBM2E, which provides 410 GB/s of bandwidth and 16 GB of capacity per HBM cube. However, as the performance of parallel computing units such as GPUs and NPUs continues to increase, bandwidth and capacity beyond those of current HBM are required.

Fig. HBM Milestone 

Our lab proposes a next-generation HBM3/4/5 roadmap based on current HBM technology trends. Next-generation HBM is expected to undergo various changes for higher bandwidth and capacity. The figure above is a conceptual diagram of the next-generation HBM5 proposed by our lab. For higher internal TSV bandwidth, spiral point-to-point TSVs will be introduced, and a small-swing interface will be adopted for the TSV interface to reduce power consumption. Also, a buffer layer for DRAM power supply and signal transmission will be added to compensate for the loading effect of the rapidly increasing number of stacked DRAM layers. In addition, a Processing-in-Memory (PIM) structure will be introduced in the logic layer of HBM to take advantage of the extremely high TSV bandwidth.

Fig. 2 Teralab HBM5 Package Roadmap. 

Whereas the silicon interposer of conventional HBM contains only passive components, the interposers of next-generation HBM will contain active circuits. As the data rate increases, repeaters and equalizers will be integrated into the interposer. Unlike conventional HBM, whose power is supplied by an off-chip VRM, next-generation HBM will use a voltage regulator integrated into the interposer for faster power delivery. In addition, as the number of DRAM layers and their density increase, new thermal solutions will be needed. Therefore, next-generation HBMs will introduce liquid micro-channel cooling and liquid cooling through TSVs.

Fig. 3 Teralab HBM 3 - 5 Specification Roadmap Table.

Fig. 4 Teralab HBM 3 - 5 Function Roadmap Table. 

The tables above show the specifications and functionality of the next-generation HBM3, HBM4, and HBM5 proposed by our lab. HBM3 is expected to be an extension of the currently developed HBM2E. Its capacity is therefore 24 GB per cube, stacking 12 layers at 16 Gb per layer, and its bandwidth will be 512 GB/s per cube with 1024 I/Os at a 4 Gbps data rate. HBM3 is expected to be similar to HBM2E in terms of functionality. For HBM4 and HBM5, the density per layer will double with each generation, the number of layers will increase by 8, and the capacity will double. To achieve this, from HBM4 onward, an equalizer and a voltage regulator will be implemented in the active interposer, and spiral TSVs and a small-swing interface will be introduced for lower power consumption. For HBM5, as the maximum data rate reaches 8 Gbps, micro-channel cooling will be introduced to solve thermal issues, and a buffer layer will be added to support stacking 24 DRAM layers. In addition, HBM5 will introduce a PIM structure for higher data bandwidth. The next-generation HBM will therefore become an ultra-high-performance, ultra-high-bandwidth, ultra-low-power solution, and achieving it will require a wide range of structural changes and a combination of comprehensive technologies.
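The per-cube arithmetic behind the roadmap figures can be checked with a short sketch (using the standard conversions 8 Gb = 1 GB and 8 Gbit/s = 1 GB/s; the HBM3 inputs are the values quoted above):

```python
# Per-cube capacity and bandwidth arithmetic for the roadmap table.

def hbm_capacity_GB(layers, density_Gb):
    """Cube capacity in GB: stacked layers x per-layer density (Gb)."""
    return layers * density_Gb / 8        # 8 Gb per GB

def hbm_bandwidth_GBs(io_count, data_rate_Gbps):
    """Cube bandwidth in GB/s: I/O count x per-pin data rate (Gbps)."""
    return io_count * data_rate_Gbps / 8  # 8 Gbit/s per GB/s

# HBM3 figures from the roadmap: 12 layers x 16 Gb, 1024 I/Os at 4 Gbps
print(hbm_capacity_GB(12, 16))      # 24.0 GB per cube
print(hbm_bandwidth_GBs(1024, 4))   # 512.0 GB/s per cube
```

The same functions reproduce the HBM5 endpoint: at the 8 Gbps maximum data rate with 1024 I/Os, the bandwidth reaches 1024 GB/s, or 1 TB/s per cube.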