Staying persistent with persistent memory


Intel in October clarified its memory plans when it announced that it is selling its NAND memory business to SK Hynix in a $9 billion, two-phase deal that will take until 2025 to complete.

However, while the structure of the deal is unusual – it is expected to close in the second half of 2021, with the first phase involving SK Hynix paying $7 billion for Intel’s NAND SSD business and its NAND fab in China, and Hynix paying the remaining $2 billion about four years later for the rest of Intel’s NAND operations – the message from Intel was clear: at a time when data volumes and the demand for compute are both skyrocketing, a key growth area in the memory market will be persistent memory. For Intel, that means its Optane business.

The growth in data – driven by the cloud, the internet of things (IoT), and emerging technologies such as artificial intelligence (AI) and data analytics – is accelerating rapidly, with analyst firm IDC predicting that the amount of data generated will grow by an average of 26 percent per year, reaching 175 zettabytes in 2025. That is a challenge for compute companies like Intel, which are trying to keep up with demand through more cores and accelerators along with shrinking die sizes as they look to offset the slowdown in Moore’s Law.
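
As a rough sanity check on those figures, the short sketch below compounds 26 percent annual growth from an assumed baseline of roughly 33 zettabytes in 2018 – a figure IDC has cited elsewhere, but an assumption here rather than something stated in this article – and lands in the same ballpark as the 175 zettabyte projection.

```c
/* Back-of-the-envelope check of IDC's growth projection.
 * The 2018 baseline of ~33 ZB is an assumption for illustration. */
#include <stdio.h>

int main(void)
{
    double zettabytes = 33.0;      /* assumed data generated in 2018 */
    const double growth = 1.26;    /* 26 percent per year */

    for (int year = 2019; year <= 2025; year++)
        zettabytes *= growth;

    /* Prints roughly 166 ZB, in the same ballpark as IDC's 175 ZB figure. */
    printf("Projected data generated in 2025: ~%.0f ZB\n", zettabytes);
    return 0;
}
```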

“This explosive growth of data, coupled with the new analytical capabilities that businesses are using to extract insights from it – such as AI, machine learning [and] big data analytics working on ever-larger datasets – does two things,” said Alper Ilkbahar, vice president of Intel’s Data Platforms Group and general manager of the Intel Optane Group, in a press briefing. “The first is to fuel the huge demand for compute. This is great for a company like ours, which is in the compute business; we get that huge demand for compute to process all this data. But the same forces also drive the need to bring more and more data close to the CPU, and this drives the demand for memory. The phenomenal growth of this data puts weight and pressure on datacenter architectures and creates new stress points for us to address.”

Transistor scaling and process shrinks have slowed relative to the pace of Moore’s Law, although the introduction of chiplet-based, disaggregated architectures is enabling continued growth in core counts. However, there is a growing gap in the industry between the demand for more memory capacity, driven by rapid data growth, and the slower pace at which DRAM capacity can scale to meet that demand, Ilkbahar said. Something has to bridge that gap.

Persistent memory – also known as storage-class memory – is a bridge between DRAM and storage designed to do just that. Like DRAM, it is directly accessible by the processor, and it is faster than hard disk drives (HDDs) and solid-state drives (SSDs). In addition, persistent memory retains its data even when the power is turned off, and because it sits on the DRAM bus, closer to the processor, it is an ideal place to hold the large and complex datasets that are part of the modern datacenter.
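
To make “directly accessible” concrete, here is a minimal sketch of byte-addressable access to persistent memory from C. It assumes a persistent memory region exposed as a file on a DAX-mounted filesystem at a hypothetical path (/mnt/pmem/example); it is illustrative only and is not taken from Intel’s documentation.

```c
/* Minimal sketch: ordinary loads and stores against persistent memory.
 * Assumes /mnt/pmem is a DAX-mounted persistent memory filesystem
 * (hypothetical path). Build with: cc pmem_demo.c -o pmem_demo */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define PMEM_FILE "/mnt/pmem/example"
#define LEN       4096

int main(void)
{
    int fd = open(PMEM_FILE, O_CREAT | O_RDWR, 0644);
    if (fd < 0 || ftruncate(fd, LEN) != 0) {
        perror("open/ftruncate");
        return 1;
    }

    /* MAP_SHARED on a DAX file maps the persistent media straight into
     * the address space: no block-layer I/O sits in the data path. */
    char *pmem = mmap(NULL, LEN, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (pmem == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    strcpy(pmem, "hello, persistent memory");  /* a plain CPU store */
    msync(pmem, LEN, MS_SYNC);                 /* make the store durable */

    munmap(pmem, LEN);
    close(fd);
    return 0;
}
```

The point of the sketch is that an ordinary store reaches the media directly, which is what separates a memory tier from an SSD sitting behind a storage stack.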

With that in mind, it’s no surprise that Optane was front and center at this week’s Intel Memory and Storage event. The chip maker, despite its agreement to sell the NAND business to Hynix, introduced new additions to its NAND SSD lineup. The D7-P5510, a 144-layer 3D TLC NAND device aimed at cloud datacenter workloads, comes in 3.84TB and 7.68TB capacities and will be available this month. The D5-P5316 is a high-density, high-performance 144-layer QLC NAND device with up to 30.72TB of capacity, available in the first half of 2021.

But much of the focus was on Optane, which first started coming to market in 2017. Last year the chip maker introduced the Optane 100 series in conjunction with the release of its second-generation Xeon Scalable processors. This week the company introduced three new Optane products, all of them faster than their predecessors. One – the P5800X, code-named “Alder Stream” – is a datacenter drive that offers capacities from 400GB to 3.2TB, supports PCIe 4.0 and reaches 1.8 million IOPS in a 70/30 read/write workload. David Tuhy, vice president and general manager of Intel’s Data Center Optane Storage Division, said it is the fastest datacenter SSD in the world. It delivers three times the random 4K read IOPS of its predecessor, the P4800X, and 67 percent higher endurance, rated at 100 drive writes per day.
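
For a sense of what that endurance rating means in absolute terms, the arithmetic below converts 100 drive writes per day into total bytes written for the smallest 400GB capacity, assuming a five-year service life; the service life is an assumption, as the article does not state a warranty period.

```c
/* What 100 drive writes per day (DWPD) implies for a 400GB drive.
 * The five-year service life is an assumption for illustration. */
#include <stdio.h>

int main(void)
{
    const double capacity_tb = 0.4;   /* 400GB drive */
    const double dwpd = 100.0;        /* drive writes per day */
    const double years = 5.0;         /* assumed service life */

    double petabytes_written = capacity_tb * dwpd * 365.0 * years / 1000.0;
    printf("Implied endurance: ~%.0f PB written\n", petabytes_written);
    return 0;
}
```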

Intel isn’t the only memory maker looking at persistent memory. Other vendors, including Hynix as well as Samsung and Toshiba America Memory, are also making moves in the space. Intel expects its head start with Optane, which it co-developed with Micron, to give it an edge in what is expected to be a fast-growing market. Ilkbahar said the company is working to create a two-tier memory and storage environment with Optane. In memory, Optane will serve as the capacity tier, with DRAM as the performance tier. On the storage side, Optane will be used for performance and NAND SSDs for capacity.

“We are definitely seeing, step by step, this two-tier memory and storage architecture vision come to fruition,” he said. “You may be wondering how persistent memory can remove some of those memory and storage bottlenecks in the server, and part of the answer is that we have revolutionary media: we have byte-addressable [memory] with direct load/store access, and with memory sitting on the DDR bus next to the CPU, you are certainly not waiting for data the way you do with conventional storage. And we also eliminate all the storage software overhead as well. It is entirely a hardware-to-hardware interaction.”

Intel also has a roadmap that extends beyond 2021. When it introduced the 100 series, the company also said it was developing a second generation – the 200 series, code-named “Barlow Pass” – which launched in June alongside the “Cooper Lake” Xeon SP processors and is also compatible with the “Ice Lake” Xeon SPs. The plan is to release the Ice Lake Xeons and the 200 series together as a platform, he said.

The 200 series delivers up to 25 percent more performance than the first generation at a lower power consumption point. It comes in two flavors: Memory Mode, which provides large memory capacity and near-DRAM performance without application changes, and App Direct Mode, which also enables large memory capacity plus data persistence, with software addressing DRAM and persistent memory as two tiers of memory. About 60 percent of current deployments opt for Memory Mode, but as more software is enabled for App Direct, the mix is expected to shift toward App Direct, Ilkbahar said.
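
To illustrate the difference between the two modes: Memory Mode requires no code changes at all, while App Direct Mode asks the application to map persistent memory explicitly and make its own stores durable. The sketch below shows one way an App Direct application might do that with PMDK’s libpmem; the choice of library and the file path are assumptions, not something the article prescribes.

```c
/* App Direct sketch using PMDK's libpmem (one possible approach).
 * Assumes a DAX-mounted filesystem at /mnt/pmem (hypothetical path).
 * Build with: cc appdirect_demo.c -o appdirect_demo -lpmem */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

#define POOL_PATH "/mnt/pmem/appdirect-demo"
#define POOL_SIZE (4 * 1024 * 1024)

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Create (if needed) and map a file backed by persistent memory. */
    char *addr = pmem_map_file(POOL_PATH, POOL_SIZE, PMEM_FILE_CREATE,
                               0644, &mapped_len, &is_pmem);
    if (addr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    strcpy(addr, "durable record");   /* an ordinary store into the mapping */

    /* Explicitly make the store durable so it survives power loss. */
    if (is_pmem)
        pmem_persist(addr, strlen(addr) + 1);
    else
        pmem_msync(addr, strlen(addr) + 1);

    pmem_unmap(addr, mapped_len);
    return 0;
}
```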

The 200 series also comes with a feature called extended Asynchronous DRAM Refresh (eADR) that is unique to persistent memory. It allows the contents of the CPU caches to be automatically flushed into persistent memory if power is lost, which both improves application performance and protects against data loss.

“When you look at some applications such as databases or transaction processing – mission-critical data-handling applications, or high-performance applications that use lock-free algorithms – one thing that has to be considered on every data access or data commit is, ‘Am I worried about this data in case I lose power?’” he said, adding that for these applications data integrity is compromised if power goes out. “As a result, when they commit this critical data, they flush the entire cache into persistent memory and then have to warm the cache back up. That is a significant performance impact. With this new feature – and of course this is something that is already enabled in the standard programming model – the application simply checks whether the system has eADR enabled and, if so, you don’t have to flush the cache and lose that performance. You know that if the power goes out, the system will automatically take care of it, so you don’t risk losing your critical data.”
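
The check Ilkbahar describes can be sketched roughly as follows, again using PMDK’s libpmem as one possible implementation – the article names no specific API: the application asks the platform whether the CPU caches are flushed automatically on power loss and, if they are, skips the explicit cache-line flushes on each commit.

```c
/* Sketch of skipping cache flushes on an eADR-capable platform.
 * Uses PMDK's libpmem as one possible implementation; this is not
 * Intel's reference code. Build with: cc eadr_demo.c -o eadr_demo -lpmem */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

static int eadr_available;

/* Make a freshly written range durable, flushing only when required. */
static void make_durable(void *addr, size_t len)
{
    if (eadr_available) {
        /* eADR: caches are flushed automatically on power loss, so an
         * ordering fence is enough and per-line flushes can be skipped. */
        pmem_drain();
    } else {
        /* No eADR: flush the affected cache lines, then fence. */
        pmem_persist(addr, len);
    }
}

int main(void)
{
    size_t mapped_len;
    int is_pmem;
    char *buf = pmem_map_file("/mnt/pmem/eadr-demo", 4096, PMEM_FILE_CREATE,
                              0644, &mapped_len, &is_pmem);
    if (buf == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    /* Returns 1 when the platform flushes CPU caches on power failure. */
    eadr_available = (pmem_has_auto_flush() == 1);

    strcpy(buf, "critical data");
    make_durable(buf, strlen(buf) + 1);

    pmem_unmap(buf, mapped_len);
    return 0;
}
```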

Ilkbahar also previewed the Optane 300 series, code-named “Crow Pass,” which will arrive alongside the future “Sapphire Rapids” Xeon SP processors, expected to be released in late 2021 or early 2022. He declined to discuss specs or other architectural details about Crow Pass.
