[Intel DCI Summit 2018] Keynote Speech - Navin Shenoy Executive VP

Published : Saturday, November 24, 2018, 2:39 pm
ACROFAN | Yong-Man Kwon
On August 8, Intel hosted the ‘Data-Centric Innovation Summit 2018’ at its headquarters in Santa Clara, California, where it introduced its strategy for the ‘data-centric’ era. The event covered a range of data center technologies, including memory and networking, along with related products, strategies, and Intel’s processors.

Today, 90% of the world's data has been generated within the past two years, and the volume will continue to grow exponentially, reaching 163ZB in 2025, roughly ten times today's total. Yet only about 1% of this data is used for decision making, and how well an organization operates in this ‘data-centric’ environment increasingly determines its competitiveness. Intel estimates the total addressable market (TAM) of its ‘data-centric’ business, which spans moving data faster and storing and processing more of it, at $200 billion by 2022, the largest opportunity in Intel's history.

During the summit, Intel stated that the next-generation Xeon Scalable processor series, ‘Cascade Lake,’ will significantly improve AI inference performance with the new VNNI instructions of its DL Boost technology. In addition, 'Optane DC Persistent Memory' provides a large pool of memory capacity that plugs into the existing memory subsystem. For more efficient storage configurations, Intel presented a combination of Optane DC SSDs and new QLC 3D NAND-based products. On the networking side, for faster data movement, 'Silicon Photonics' and a new SmartNIC product line were introduced.

▲ Navin Shenoy, Executive VP & GM of the Data Center Group at Intel Corp., gave a keynote speech.

Navin Shenoy, Executive Vice President and General Manager of the Data Center Group at Intel Corporation, cited market research showing that 90% of all data was generated within the past two years and that the volume will reach 163ZB in 2025, roughly ten times today's total. Alongside this growth, he noted that the unit cost of the technology to handle data has fallen over the past decade by 56% in compute and 77% in storage, while performance has improved 41-fold over the same period.

Shenoy cited autonomous vehicles as the clearest example of ‘data-centric’ collection, analysis, and consumption of data, and emphasized that Intel's ability to provide an end-to-end computing environment can make an important contribution to making them a reality. An autonomous vehicle recognizes its surroundings in real time through high-resolution maps and a variety of cameras and sensors for route planning and driving; the collected data reaches 4TB per hour and requires substantial computing resources to process. In other words, the autonomous driving ecosystem is an end-to-end system spanning the edge, AI, and the cloud.

Intel revised the TAM of its data-centric business upward from $160 billion by 2021 to $200 billion by 2022. This is the largest market opportunity in Intel's history, and Intel aims to increase its influence and maximize its opportunities in these markets. The ‘cloud,’ particularly the growth of the public cloud, was cited as a major driver of the expanding TAM, with custom processors accounting for close to 50% of cloud service providers' demand as of 2017. The network is also expected to present large opportunities in 5G and edge computing, and AI is expected to grow by nearly 30% annually.

▲ Intel divided the challenges and portfolio of data-centric infrastructure into three broad categories.

For ‘data-centric infrastructure,’ Shenoy highlighted three requirements: moving data faster, storing more of it, and processing everything. On ‘moving faster,’ data center network traffic is expected to grow at an average annual rate of 25%, with traffic inside the data center growing especially quickly. Accordingly, the market for connectivity logic silicon is expected to grow at an average annual rate of 26%, from $4 billion in 2017 to $11 billion in 2022. The corresponding portfolio includes Omni-Path Fabric for HPC, an Arria 10 FPGA-based SmartNIC product line (codename “Cascade Glacier”), and Silicon Photonics for high-speed optical interconnects.

For storing data, Intel resolves performance bottlenecks and builds more efficient infrastructure by narrowing the performance gaps across the hierarchy: Optane memory bridges the large gap between memory and storage, while QLC 3D NAND narrows the performance and cost gap between hard drives and SSDs. In particular, Optane DC Persistent Memory, which sits between memory and storage, makes it possible to handle much larger working sets by providing a cost-effective, high-capacity memory tier in a data center memory market estimated at $10 billion in 2022. Moreover, its non-volatility significantly improves the availability of high-availability systems and cuts the memory loading time of in-memory applications at startup.
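What makes this memory tier different from an SSD is byte-addressability: applications map the persistent region into their address space and access it with ordinary loads and stores. A minimal sketch of that access pattern, using Python's standard `mmap` module (on real hardware the file would live on a DAX-mounted persistent-memory filesystem such as a hypothetical `/mnt/pmem`; a regular file stands in here):

```python
import mmap
import os
import struct

# Stand-in for a file on a DAX-mounted pmem filesystem (assumption for demo).
PATH = "demo_pmem.bin"
SIZE = 4096

# Create and size the backing region once, as an App Direct region would be.
with open(PATH, "wb") as f:
    f.truncate(SIZE)

# Map the region directly into the address space: in-place loads/stores
# instead of read()/write() syscalls is what byte-addressability means.
with open(PATH, "r+b") as f:
    pm = mmap.mmap(f.fileno(), SIZE)
    pm[0:8] = struct.pack("<q", 163)   # store a 64-bit value in place
    pm.flush()                         # on pmem: flush CPU caches to media
    value, = struct.unpack("<q", pm[0:8])
    pm.close()

os.remove(PATH)
```

Because the data survives power loss on real persistent media, an in-memory database that keeps its tables in such a mapping can skip the long reload from storage after a restart, which is the availability gain described above.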

Intel has begun shipping the first production units of Optane DC Persistent Memory, which will be officially supported starting with ‘Cascade Lake,’ the next-generation Xeon Scalable processor. Google has already deployed the next-generation Xeon processor and Optane DC Persistent Memory in an SAP HANA environment on its cloud service. As a result, the time required to load large amounts of data from storage into memory during system restarts and updates drops dramatically, increasing availability. Persistent memory can also serve as an alternative to DRAM, easing DRAM's scalability limits by offering more capacity at lower cost.

▲ The Xeon Scalable processor was introduced as the highest-performing Xeon ever.

Regarding microprocessors for ‘processing everything,’ Shenoy emphasized that Intel has continued to grow while meeting the needs of diverse data centers at the right time and in the right ways in the 20 years since the first Xeon processor, and that Intel will deliver products that respond efficiently to a variety of workloads. The current Xeon Scalable processor was introduced as the most successful of the line: backed by the largest early-ship program in Xeon history, it reached one million units shipped faster than any previous Xeon processor, and over two million Xeon Scalable platforms were shipped in the second quarter of 2018.

Intel cited 'performance' and 'flexibility' as the strengths of the Xeon Scalable processor. On performance, compared to competing products it delivers 1.48 times higher per-core performance, 1.72 times higher L3 packet forwarding performance, 3.2 times higher LINPACK performance in HPC, 1.85 times higher database performance, and 1.45 times higher memory caching performance. On flexibility, it offers a wide choice of 60 SKUs spanning clock speeds of 1.7 to 3.6 GHz, TDPs of 70 to 205 W, prices from $213 to $10,000, and configurations from one socket to eight sockets.

Meanwhile, software optimization has already raised the AI inference performance of the Xeon Scalable processor to 5.4 times its level at launch. Taking Caffe ResNet-50 FP32 performance in July 2017 as the baseline, framework optimization delivered 2.8 times that performance by January 2018, and INT8 optimization brought it to 5.4 times by August 2018. Many companies are running AI-related workloads on Xeon processor-based systems, and Intel sold more than $1 billion worth of processors for AI workloads in 2017.

▲ 'DLBoost' of 'Cascade Lake' improves inference performance by up to 11 times.

▲ After Cascade Lake, Cooper Lake for a new platform and Ice Lake for a new process are planned.

Intel also detailed ‘Cascade Lake,’ the codename of the next-generation Xeon Scalable processor. Notable points of this 14nm processor include support for the new 'Optane DC Persistent Memory' and new instructions that accelerate AI workloads, along with higher clock speeds, an optimized cache configuration, and security enhancements. The processor is slated to ship to customers in the fourth quarter of 2018, and Intel also announced plans to release new processors on a one-year cadence and to introduce the 10nm process-based “Ice Lake”.

At the center of Intel's Deep Learning Boost (DL Boost) technology is the Vector Neural Network Instruction (VNNI), built on the AVX-512 unit, which fuses multiple complicated operations into a single instruction. It delivers 11 times the performance of the FP32 baseline of July 2017, and twice that of the current optimized state. Support will come through frameworks and libraries including Caffe, MXNet, TensorFlow, and Intel's MKL-DNN. Shenoy also presented a demonstration in which Cascade Lake with DL Boost processed images 10.88 times faster than the current generation.
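Concretely, the VNNI instruction VPDPBUSD collapses what previously took three AVX-512 instructions into one: per 32-bit lane, four unsigned 8-bit values are multiplied by four signed 8-bit values and summed into a 32-bit accumulator. A sketch of one lane's arithmetic (a behavioral model for illustration, not Intel's hardware implementation):

```python
# Model of one 32-bit lane of VPDPBUSD: four u8 x s8 products accumulated
# into an int32, in a single instruction instead of the pre-VNNI
# multiply / widen / add sequence.
def vpdpbusd_lane(acc, a_bytes, b_bytes):
    assert len(a_bytes) == len(b_bytes) == 4
    for a, b in zip(a_bytes, b_bytes):
        assert 0 <= a <= 255 and -128 <= b <= 127   # u8 activations, s8 weights
        acc += a * b                                # widen and accumulate in int32
    return acc

# One lane of an int8 dot product: activations (unsigned) x weights (signed).
acc = vpdpbusd_lane(0, [10, 200, 3, 55], [4, -2, 127, 1])
```

A full 512-bit register holds 16 such lanes, so one instruction advances 64 multiply-accumulates of an int8 convolution or matrix multiply, which is where the inference speedup comes from.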

For Intel Select Solutions, which aim at faster time to value with workload-optimized, verified solutions, AI, blockchain, and an SAP HANA support program were named as new areas. Intel also emphasized its differentiated ability to integrate everything from transistors to architecture, memory, interconnects, security, software, and solutions. Finally, two fast follow-ons to 'Cascade Lake' (due in the fourth quarter of 2018) were announced: 'Cooper Lake' in 2019, a new 14nm platform with next-generation DL Boost technology including bfloat16 support, and ‘Ice Lake’ in 2020, which shares the Cooper Lake platform but moves to the 10nm process.

Copyright ⓒ Acrofan All Rights Reserved
