Intel’s Long-Awaited Return to the Memory Business


For those who follow history, Haswell will be remembered as a rather ironic product. For its first decade, Intel was known for its memory products (e.g., SRAM and DRAM), but by the 1980s the company was facing a difficult market due to Japanese competitors and the undifferentiated nature of the product. In a move that has since been documented in any number of business books, the company shifted away from DRAM to microprocessors under the guidance of Andy Grove and Gordon Moore. This was a gut-wrenching and disruptive event that eventually placed the company on the path to dominating the PC via the x86 processor line.

Memory is a critical element of any computing platform, and hence a crucial concern for Intel. Since the company’s 1985 exit from the business, there have been occasional flirtations with DRAM. In the 1990s, Intel attempted to establish high-bandwidth Rambus memory as the successor to SDRAM. This ultimately ended in an expensive and embarrassing failure due to the high cost of RDRAM and vicious industry politics. In particular, memory manufacturers balked at Rambus’ royalty demands, and a number of patent infringement lawsuits ensued that made everyone in the memory business look particularly bad. Intel eventually ended up supporting DDR memory for the P4, following AMD’s lead with the K7.

A decade later, Intel embarked upon a slightly different project aimed at improving bandwidth and capacity for servers: Fully Buffered DIMMs. Learning from prior mistakes, FB-DIMMs used commodity DRAMs with special buffer chips that communicated over a high-speed serial interface. The FB-DIMM interface reduced the number of pins and enabled high-capacity memory configurations without sacrificing bandwidth. Unfortunately, the buffer chips were relatively power hungry (2-4W per DIMM) and the technology never took off for mainstream servers. Instead, Intel ended up adopting DDR3 for 1-2 socket Xeons, starting with Nehalem. However, the buffering concept lives on in highly scalable servers (e.g., Intel’s 4-socket Xeon and Itanium platforms and proprietary designs from IBM and Oracle).

Intel’s upcoming generation of SoCs includes the new Haswell CPU microarchitecture and a new GPU microarchitecture that spans a wide range of performance. The integrated graphics comes in several flavors: the low-end GT1, mid-range GT2, and high-end GT3. The highest-performance version, though, is the GT3e, which includes dedicated DRAM, primarily for the graphics and video cores. One of the key technical differentiators between discrete and integrated graphics is dedicated memory bandwidth, so the performance impact should be quite significant.

Rumors about the DRAM for Haswell have circulated for quite some time on hardware enthusiast websites, with rumored specifications varying from site to site and month to month. Based on our analysis, the most likely configuration is a 128MB DRAM mounted on the same package, with at least 64GB/s of bandwidth (and probably more) to the SoC over a 512-bit bus (Note: sources recently suggested that the bandwidth is slightly higher and that the interface is narrower and higher frequency, rather than wider and slower). A recent presentation from Intel at IDF Beijing indicates that the DRAM actually functions as another level in the memory hierarchy for both the CPU cores and graphics; essentially a bandwidth-optimized L4 cache.
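The arithmetic behind these figures is straightforward: peak bandwidth is simply bus width times per-pin transfer rate. A minimal sketch in Python, using illustrative width/rate pairings rather than confirmed Intel specifications:

```python
# Back-of-the-envelope check on the rumored interface: peak bandwidth is
# bus width (bits) / 8 bits-per-byte, times the per-pin transfer rate.
# The width/rate pairings below are illustrative assumptions only.

def bandwidth_gbs(bus_width_bits: int, rate_gts: float) -> float:
    """Peak bandwidth in GB/s for a given bus width and transfer rate."""
    return bus_width_bits / 8 * rate_gts

# A 512-bit bus needs only 1 GT/s per pin to reach 64 GB/s...
print(bandwidth_gbs(512, 1.0))  # 64.0
# ...while a narrower, faster interface hits the same figure,
# e.g., 128 bits at 4 GT/s (a hypothetical pairing).
print(bandwidth_gbs(128, 4.0))  # 64.0
```

Either layout delivers the rumored bandwidth; the trade-off is pin count and packaging complexity versus per-pin signaling rate.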

Some of the lingering questions about Haswell concern the memory itself: what company is providing the DRAM, and what technology is it using? Speculation is rampant and mentions nearly all of the obvious commodity options, including Low Power (LP) DDR, Graphics (G) DDR, and even regular DDR. Unsurprisingly, most of these theories are utterly incorrect and ill-informed. None of these technologies are consistent with an interconnect delivering >64GB/s of bandwidth from a single DRAM. Moreover, DDR and GDDR are far too power hungry because they are designed to operate over a board rather than within a package, and LPDDR simply does not offer high enough performance.
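To see why, consider the peak bandwidth a single device of each commodity type can plausibly deliver. The sketch below uses typical circa-2013 datasheet widths and transfer rates; these are our assumptions for illustration, and actual parts vary:

```python
# Rough peak bandwidth from a single device of each commodity DRAM type.
# Widths and rates are typical datasheet values (assumptions, not quotes
# from any specific part); none approaches the >64 GB/s target.

def device_bw_gbs(width_bits: int, rate_gts: float) -> float:
    """Peak per-device bandwidth in GB/s."""
    return width_bits / 8 * rate_gts

candidates = {
    "DDR3-1600 (x16 chip)":  device_bw_gbs(16, 1.6),  # ~3.2 GB/s
    "LPDDR3-1600 (x32 die)": device_bw_gbs(32, 1.6),  # ~6.4 GB/s
    "GDDR5-6000 (x32 chip)": device_bw_gbs(32, 6.0),  # ~24 GB/s
}
for name, bw in candidates.items():
    status = "falls short of" if bw < 64 else "meets"
    print(f"{name}: {bw:.1f} GB/s ({status} the 64 GB/s target)")
```

Even the fastest GDDR5 device falls short by more than a factor of two, which is why a custom, in-package interface is the more credible explanation.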

The answer to these questions was alluded to in a recently announced paper from Intel at the VLSI Symposium in Kyoto. The VLSI Symposium is a joint conference, held in June, that focuses on circuit design (akin to ISSCC) and manufacturing (akin to IEDM); it rotates between Hawaii and Japan and is hosted in Kyoto for 2013. According to the advance program, paper 2.1 is entitled “A 22nm High Performance Embedded DRAM SoC Technology Featuring Tri-Gate Transistors and MIMCAP COB,” with nearly 30 co-authors from Intel’s Technology and Manufacturing Group. After 28 years, Intel is returning to the DRAM business, a twist of fate that is surely not lost on Gordon Moore and Andy Grove.


