What’s New: This week, Intel is presenting a series of innovations with the potential to enable real-time, low-energy computation for an increasingly connected and data-driven world, from 5G networks to the intelligent edge and robotics. These innovations in integrated circuits and systems-on-chip will be presented during the International Solid-State Circuits Conference (ISSCC), the leading forum on advanced circuit research, held in San Francisco from Feb. 17-21. Intel will present 17 scientific papers and accompanying demonstrations that could have a transformative impact on a wide range of future applications, including developments in 5G and memory.
“The research underway at Intel is varied in its focus but unified in a vision for the future of technology – one where anyone and everything can communicate with data. To achieve this vision, we recognize the need for computational systems capable of tackling problems conventional computers simply cannot handle, and – as we are showcasing this year at ISSCC – Intel is committed to furthering research and development of the technologies with the potential to carry us to that future.”
–Dr. Rich Uhlig, managing director, Intel Labs
RESEARCH PRESENTED THIS WEEK INCLUDES:
Distributed Autonomous and Collaborative Multi-Robot System Featuring a Low-Power Robot SoC in 22nm CMOS for Integrated Battery-Powered Minibots
Abstract: In this paper, Intel demonstrates a distributed, autonomous and collaborative multi-robot system featuring integrated, battery-powered, crawling and jumping minibots. For example, in a search and rescue application, four minibots collaboratively navigate and map an unknown area without a central server or human intervention, detecting obstacles and finding paths around them, avoiding collisions, communicating among themselves, and delivering messages to a base station when a human is detected.
Each minibot platform integrates: (i) a camera, LIDAR and audio sensors for real-time perception and navigation; (ii) a low-power custom robot SoC for sensor data fusion, localization and mapping, multi-robot collaborative intelligent decision-making, object detection and recognition, collision avoidance, path planning, and motion control; (iii) low-power ultra-wideband (UWB) radio for anchorless dynamic ranging and inter-robot information exchange; (iv) long-range radio (LoRa) for robot-to-base-station critical message delivery; (v) battery and PMIC for platform power delivery and management; (vi) 64MB pseudo-SRAM (PSRAM) and 1GB flash memory; and (vii) actuators for crawling and jumping motions.
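The collaborative behaviors described above, sharing sensed obstacles over a radio link, planning paths around them and reserving space to avoid collisions, can be sketched in miniature. The grid world, breadth-first planner and cell-reservation scheme below are illustrative assumptions, not the algorithms running on the robot SoC:

```python
from collections import deque

def bfs_path(grid, start, goal, reserved):
    """Breadth-first search on a 4-connected grid, avoiding known
    obstacles (grid[r][c] == 1) and cells reserved by other robots."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if (0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0
                    and nxt not in reserved and nxt not in prev):
                prev[nxt] = cur
                queue.append(nxt)
    return None  # goal unreachable given current map knowledge

class Minibot:
    def __init__(self, name, pos):
        self.name, self.pos = name, pos

    def sense(self, world, shared_map):
        """Merge locally sensed obstacles (adjacent cells, standing in
        for camera/LIDAR range) into the map shared among the robots."""
        r, c = self.pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(world) and 0 <= nc < len(world[0]):
                shared_map[nr][nc] = world[nr][nc]

    def step(self, shared_map, goal, reserved):
        """Plan around known obstacles and cells claimed this tick."""
        path = bfs_path(shared_map, self.pos, goal, reserved - {self.pos})
        if path and len(path) > 1:
            self.pos = path[1]   # advance one cell along the path
        reserved.add(self.pos)   # claim the cell to avoid collisions
```

In a real deployment the shared map would be exchanged over the UWB radio link and the planner would run on the robot SoC; here plain Python data structures stand in for both.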
Why It Matters: Multi-robot systems, working collectively to accomplish complex missions beyond the capability of a single robot, have the potential to disrupt a wide range of applications, from search and rescue missions to precision agriculture and farming. By dividing work among robots, such systems can dramatically reduce the time needed to complete a task, for example shortening response times for first responders during an emergency. However, advanced robotics and artificial intelligence have, to date, required large investments and intensive computational power. The development of these distributed, autonomous and collaborative minibots, operated by a system-on-chip that delivers efficiency orders of magnitude beyond what was previously possible, represents a first step toward energy- and cost-efficient multi-robot systems.
5G Wireless Communication: An Inflection Point
Abstract: The 5G era is upon us, ushering in new opportunities for technology innovation across the computing and connectivity landscape. 5G presents an inflection point where wireless communication technology is driven by application and expected use cases, and where the network will set the stage for data-rich services and sophisticated cloud apps, delivered faster and with lower latency. This paper will highlight the disruptive architectures and technology innovations required to make 5G and beyond a reality.
Why It Matters: Whereas 4G was about moving data faster, 5G will bring more powerful wireless networks that connect “things” to each other, to people and to the cloud. It will help transform our lives by delivering a smart and connected society, with smart cities, self-driving cars and new industrial efficiencies. For this to happen, networks must become faster, smarter and more agile to handle the unprecedented increase in the volume and complexity of data traffic as more devices become connected and new digital services are offered.
Applying Principles of Neural Computation for Efficient Learning in Silicon
Abstract: Intel’s novel Loihi processor implements a microcode-programmable learning architecture supporting a wide range of neuroplasticity mechanisms under study at the forefront of computational neuroscience. By applying many of the fundamental principles of neural computation found in nature, Loihi promises to provide highly efficient and scalable learning performance for supervised, unsupervised, reinforcement-based and one-shot paradigms. This talk describes these principles as applied to the Loihi architecture and shares our preliminary results toward the vision of low-power, real-time on-chip learning.
Why It Matters: The deep learning algorithms that dominate machine learning (ML) applications today are costly in terms of energy consumption because of the large amount of computation they require and the size of their models. Many issues, such as connectivity to the cloud, latency, privacy and public safety, could be resolved by establishing intelligent computing at the edge. By applying principles of neural computation at the architecture, circuit and integration levels, we could minimize the energy consumption and computational demand of edge learning systems.
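To make the idea of spiking neural computation concrete, here is a toy leaky integrate-and-fire neuron with a trace-based Hebbian weight update. The neuron model, decay constants and learning rule are illustrative textbook assumptions; Loihi’s plasticity rules are microcode-programmable and far more general:

```python
import random

random.seed(0)

class LIFNeuron:
    """Leaky integrate-and-fire neuron with a crude STDP-like rule:
    synapses whose inputs were recently active when the neuron fires
    are strengthened. Illustrative only, not Loihi's actual model."""
    def __init__(self, n_in, threshold=1.0, decay=0.9, lr=0.01):
        self.w = [random.uniform(0.0, 0.5) for _ in range(n_in)]
        self.v = 0.0                  # membrane potential
        self.trace = [0.0] * n_in     # presynaptic spike traces
        self.threshold, self.decay, self.lr = threshold, decay, lr

    def step(self, spikes):
        # Leak the potential, then integrate weighted input spikes.
        self.v = self.decay * self.v + sum(
            w * s for w, s in zip(self.w, spikes))
        # Traces decay; a recent input spike leaves a residue.
        self.trace = [self.decay * t + s
                      for t, s in zip(self.trace, spikes)]
        fired = self.v >= self.threshold
        if fired:
            self.v = 0.0              # reset after spiking
            # Potentiate synapses with recently active inputs.
            self.w = [w + self.lr * t
                      for w, t in zip(self.w, self.trace)]
        return fired
```

Because computation happens only when spikes arrive, event-driven hardware implementing this style of model can stay idle, and draw little power, when its inputs are quiet, which is the core of the efficiency argument for edge learning.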
Novel Memory/Storage Solutions for Memory-Centric Computing
Abstract: The exponential growth in connected devices and systems is generating a staggering amount of digital records. These records not only need to be stored but also need to be mined for useful information. This era of big data is driving fundamental changes in both memory and storage hierarchy. Data and compute need to be brought closer together to avoid networking and storage protocol inefficiencies. This drives the demand for larger memory capacity, which is currently hindered by memory subsystem cost. In addition, the need for memory persistency will not only streamline storage protocols but will also significantly reduce bring-up time after system failure. In this presentation, novel solutions for memory-centric architecture will be discussed, with a focus on their value, performance and power efficiency.
Why It Matters: Memory-centric computing has the potential to enable energy-efficient, high-performance AI/ML applications. With the explosive growth of memory-intensive workloads like machine learning, video capture/playback and language translation, there is tremendous interest in performing some compute near memory, by placing logic inside the DRAM/NVM main-memory die (aka near-memory compute), or even doing the compute within the memory array, embedded within the compute die (aka in-memory compute). In either case, the motivation is to reduce the significant data movement between main/embedded memory and compute units, as well as to reduce latency by performing many operations in parallel, inside the array.
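The data-movement argument above can be made tangible with a back-of-the-envelope energy model. The per-byte and per-operation energy figures below are invented placeholders for illustration, not measured values:

```python
# Toy energy model comparing a conventional read-all-then-compute flow
# with a near-memory reduction. All energy constants are assumed,
# illustrative numbers (picojoules), not measured figures.
DRAM_TRANSFER_PJ_PER_BYTE = 20.0   # assumed cost to move a byte off-chip
NEAR_MEM_OP_PJ = 0.5               # assumed cost of one add near the array
CPU_OP_PJ = 1.0                    # assumed cost of one add on the CPU

def energy_cpu_sum(n_bytes):
    """Move every byte across the memory bus, then add it on the CPU."""
    return n_bytes * DRAM_TRANSFER_PJ_PER_BYTE + n_bytes * CPU_OP_PJ

def energy_near_memory_sum(n_bytes, result_bytes=8):
    """Reduce inside the memory die; only the result crosses the bus."""
    return n_bytes * NEAR_MEM_OP_PJ + result_bytes * DRAM_TRANSFER_PJ_PER_BYTE

if __name__ == "__main__":
    n = 1_000_000  # sum one million bytes
    print(f"CPU-side sum:    {energy_cpu_sum(n) / 1e6:.2f} uJ")
    print(f"Near-memory sum: {energy_near_memory_sum(n) / 1e6:.2f} uJ")
```

Under these assumed constants, the dominant cost in the conventional flow is the bus transfer, so shrinking the traffic to a single result is where the savings come from, regardless of the exact per-operation numbers.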