
Leveraging parallel processing in SoCs

Time: 28 November 2011, 14:04
Lecturer: Jeroen Leijten, Principal Engineer, ISP/Video Design, Ultra Mobility Group, Intel Corporation
Venue: Room 61, Електротехнички факултет

Summary

To enable flexibility and scalability under tight area, power dissipation and performance constraints in SoCs, high-level-programmable parallel processing at modest clock rates, carefully tuned to the target application domain, must be applied. To quickly find the right balance between programmability, cost, performance and quality, a structured design approach is required. Silicon Hive IP is created using an automated template-based design methodology that dramatically increases development productivity, computational efficiency and overall quality of the resulting IP. This presentation will discuss this design approach and its underlying technology.

Detailed abstract

The continuous advances in CMOS technology provide improvements in area, speed and power dissipation for the same design when moving to the next technology node. This enables a continuous evolution of moving applications from hardware to software as soon as a software solution becomes feasible. Choosing a software-based solution helps to reduce the number of silicon re-spins required and enables a parallel process of designing a System-on-Chip (SoC), wherein parallel teams work on integrating existing programmable processors and implementing the application in software. Moreover, SoCs based on programmable platforms have a longer product life cycle than their hardwired counterparts, as they allow feature upgrades in software. Enabling the next major step in migrating applications from hardware to software requires far more powerful, far more area-efficient, and far more power-efficient C-programmable processors than are available using conventional programmable approaches. This will enable efficient software implementations of applications that, until now, have been implemented in hardwired logic because of performance, cost and power constraints. Besides enabling computationally efficient programmability in SoCs, the speed at which SoC IP can be designed and integrated is becoming increasingly important as design cycles shorten. This requires an approach in which broad design-space exploration can be done quickly and overall design productivity is increased dramatically, such that drastic design changes can be made and verified in days to weeks, rather than in months to years.

To achieve a high level of computational efficiency in programmable processors, two key measures must be taken. First, processors should focus on computing in parallel at modest clock rates. Second, control hardware overhead in processors should be minimized. C-programmable processors must combine multiple styles of parallelism and exhibit minimal control hardware overhead, all properly balanced for the targeted application domain. Parallelism in computation must be matched with properly dimensioned parallelism in storage and I/O bandwidth. This means that, rather than focusing on a one-size-fits-all solution for different application domains, different programmable solutions must be tuned to different application domains to achieve the best possible balance between flexibility, performance, area and power for each domain. To find the right balance quickly, design-space exploration and the creation of the selected design must be driven by a structured design approach, supported by high-level design entry and an automated tool flow to generate and validate the IP.
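
As a rough illustration of the balancing act described above, the sketch below enumerates candidate configurations of a hypothetical parametric core (issue slots, vector lanes, memory ports, clock rate) and keeps only those whose sustained throughput fits the workload within a power budget. All names, parameters and cost models are invented placeholders; this is not Silicon Hive's actual template, metrics or tool flow.

# Illustrative design-space exploration over a parametric processor template.
# Every parameter and cost model here is a hypothetical placeholder.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class CoreConfig:
    issue_slots: int   # instruction-level parallelism (VLIW-style slots)
    vector_lanes: int  # data-level parallelism per slot
    mem_ports: int     # parallel local-memory ports
    clock_mhz: int     # deliberately modest clock rate

def estimate(cfg, ops_per_frame, frames_per_s):
    """Crude first-order models; a real flow would use generated RTL and synthesis results."""
    peak_ops = cfg.issue_slots * cfg.vector_lanes * cfg.clock_mhz * 1e6
    required = ops_per_frame * frames_per_s
    # Compute parallelism is only usable if storage bandwidth keeps up with it.
    sustained = peak_ops * min(1.0, cfg.mem_ports / cfg.issue_slots)
    area_mm2 = 0.05 * cfg.issue_slots * cfg.vector_lanes + 0.02 * cfg.mem_ports
    power_mw = (2.0 * cfg.issue_slots * cfg.vector_lanes + cfg.mem_ports) * cfg.clock_mhz / 100
    return sustained >= required, area_mm2, power_mw

def explore(ops_per_frame=2e7, frames_per_s=30, power_budget_mw=150):
    """Return the smallest feasible configuration within the power budget."""
    best = None
    for s, v, p, f in product([2, 4, 8], [4, 8, 16], [1, 2, 4], [100, 200, 400]):
        cfg = CoreConfig(s, v, p, f)
        feasible, area, power = estimate(cfg, ops_per_frame, frames_per_s)
        if feasible and power <= power_budget_mw:
            if best is None or (area, power) < (best[1], best[2]):
                best = (cfg, area, power)
    return best

if __name__ == "__main__":
    print(explore())

The only point of the sketch is that raw compute parallelism (issue_slots x vector_lanes) buys nothing unless memory ports and bandwidth are dimensioned to match, which is exactly the balance argued for above.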

Underlying all Silicon Hive solutions is the same basic processor architecture template and associated retargetable software development tool suite. Key to achieving efficiency and guaranteeing quality are powerful processor specification, exploration, and generation technology, as well as ground-breaking software compilation technology. These technologies were developed as one integrated whole, based on decades of research and development combining vast expertise in processor architecture, compilation technology, application knowledge, and hardware design. Because of this integrated approach, scalability in parallelism can be taken far beyond established limits. For any target application domain, the same unified approach is used to explore, design, generate, verify, program, simulate and debug complete multi-core (sub-)system IP consisting of multiple heterogeneous C-programmable processor cores, DMAs, MMUs, buses, interfaces, etc.
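
To make the idea of a single template driving the generation of complete (sub-)systems more concrete, the sketch below writes such a specification down declaratively and feeds it to a stub generator. The data model and the generate function are assumptions invented for this example; they do not reflect Silicon Hive's actual description format or tool APIs.

# Hypothetical, simplified declarative specification of a multi-core subsystem.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Core:
    name: str
    issue_slots: int
    vector_lanes: int
    custom_ops: List[str] = field(default_factory=list)  # domain-tuned operations

@dataclass
class Subsystem:
    name: str
    cores: List[Core]
    dmas: int
    mmus: int
    bus: str

def generate(sub: Subsystem) -> None:
    """Stand-in for a generator that would emit RTL, simulators and compiler
    back-ends from one specification; here it only prints a summary."""
    print(f"subsystem {sub.name}: bus={sub.bus}, dmas={sub.dmas}, mmus={sub.mmus}")
    for c in sub.cores:
        ops = ", ".join(c.custom_ops) or "none"
        print(f"  core {c.name}: {c.issue_slots} slots x {c.vector_lanes} lanes, custom ops: {ops}")

# Example: a heterogeneous imaging subsystem with a control core and a wide vector core.
isp = Subsystem(
    name="imaging_subsystem",
    cores=[Core("scalar_ctrl", issue_slots=2, vector_lanes=1),
           Core("vector_isp", issue_slots=8, vector_lanes=16,
                custom_ops=["demosaic", "filter_5x5"])],
    dmas=2, mmus=1, bus="crossbar")
generate(isp)

In a flow of this kind, one such specification would typically drive the generation of the RTL, the simulation models and the retargeted compiler back-end together, which is what allows the hardware and the software tools to stay consistent.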

Background

Silicon Hive was spun out of Philips Electronics in 2007 to leverage unique parallel processor technology matured at Philips Research Labs for over 10 years. The company was acquired by Intel in February 2011 and currently operates as the ISP/Video Design group within the Intel Ultra Mobility Group (UMG), which develops complete hardware/software solutions for the smartphone market.

Biography

dr. ir. Jeroen Leijten
Principal Engineer, ISP/Video Design, Ultra Mobility Group, Intel Corporation

Jeroen has 17 years of experience in parallel computer architectures and reconfigurable computing. At Intel he is responsible for research and development within the ISP/Video Design group. The formation of this group within the Ultra Mobility Group of Intel is the result of the acquisition of Silicon Hive by Intel Corporation in February 2011.
 
At Silicon Hive, Jeroen led the development of the company's parallel processing technology and the related processor generation tools and libraries. As co-founder and Chief Technology Officer of Silicon Hive, he was also responsible for all worldwide research and development within the company in application areas including camera and video.

Prior to co-founding Silicon Hive, Jeroen led a next-generation processor architecture and software compiler co-design project at Philips Research, where he worked as a senior scientist in research groups focusing on digital VLSI and systems on silicon.

In 1998 Jeroen obtained a Ph.D. degree from Eindhoven University of Technology on reconfigurable multiprocessor architectures for real-time digital signal processing applications. He currently holds more than 10 US patents on processor architecture and related technology.
