Intel exec: Programming for multicore chips a challenge

Adding more cores is desirable to meet growing computing demands, but it could create more challenges for programmers writing code that enables applications to work effectively with multicore chips.

As technology develops rapidly, a challenge for developers is adapting to programming for multicore systems, said Doug Davis, vice-president of the digital enterprise group at Intel, during a speech Tuesday at the Multicore Expo in Santa Clara, California. Programmers will have to transition from writing for single-core processors to multiple cores, while future-proofing their code so it keeps scaling if additional cores are added to a system.
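
As a rough illustration of that kind of future-proofing (a minimal sketch in standard C++ threads, not code shown by Davis), a program can ask the runtime how many hardware threads are available and divide its work across that many workers, so the same binary uses two cores today and eight tomorrow. The workload and names below are purely illustrative.

    // A minimal sketch (illustrative only): query the number of hardware
    // threads at run time and split a workload across that many workers,
    // so the same code scales as cores are added.
    #include <cstddef>
    #include <iostream>
    #include <numeric>
    #include <thread>
    #include <vector>

    int main() {
        // Ask the runtime how many hardware threads are available.
        unsigned int workers = std::thread::hardware_concurrency();
        if (workers == 0) workers = 1;  // fall back to one if unknown

        std::vector<double> data(1000000, 1.0);   // hypothetical workload
        std::vector<double> partial(workers, 0.0);
        std::vector<std::thread> pool;

        // Give each worker one contiguous chunk of the input.
        std::size_t chunk = data.size() / workers;
        for (unsigned int i = 0; i < workers; ++i) {
            std::size_t begin = i * chunk;
            std::size_t end = (i + 1 == workers) ? data.size() : begin + chunk;
            pool.emplace_back([&, i, begin, end] {
                partial[i] = std::accumulate(data.begin() + begin,
                                             data.begin() + end, 0.0);
            });
        }
        for (auto& t : pool) t.join();

        double total = std::accumulate(partial.begin(), partial.end(), 0.0);
        std::cout << "Summed on " << workers << " threads: " << total << "\n";
    }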

Programming models can be designed to take advantage of hyperthreading and the parallel processing capabilities of multiple cores, boosting application performance in a cost-effective way, Davis said. Intel is working with universities and funding programs that will train programmers to develop applications that solve those problems, he said.

Intel, along with Microsoft, has donated $US20 million to the University of California at Berkeley and the University of Illinois at Urbana-Champaign to train students and conduct research on multicore programming. The centres will tackle the challenges of programming multicore processors to carry out more than one set of program instructions at a time, a technique known as parallel computing.

Beyond future-proofing code for parallelism, adapting legacy applications to work in new computing environments that take advantage of multicore processing is a challenge coders face, Davis said. Writing code from scratch is the ideal option, but it can be expensive.
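
One hedged sketch of that adaptation, assuming the legacy routine's calls are independent of one another: leave the existing serial routine untouched and run separate calls to it as concurrent tasks. Here legacy_transform is a hypothetical stand-in for unmodified single-threaded code, not anything Davis described.

    // A sketch of wrapping unmodified legacy code in concurrent tasks.
    #include <future>
    #include <iostream>
    #include <vector>

    // Hypothetical stand-in for an existing single-threaded routine.
    double legacy_transform(double x) {
        return x * x + 1.0;
    }

    int main() {
        std::vector<double> inputs = {1.0, 2.0, 3.0, 4.0};
        std::vector<std::future<double>> jobs;

        // Launch each independent call as its own task; the runtime
        // schedules the tasks across the available cores.
        for (double x : inputs)
            jobs.push_back(std::async(std::launch::async, legacy_transform, x));

        for (auto& job : jobs)
            std::cout << job.get() << "\n";
    }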

"The world we live in today has millions of lines of legacy code ... how do we take legacy of software and take advantage of legacy technology?" Coders could need to deliver what's best for their system, Davis said.

Every major processor architecture has changed rapidly to keep pace with Moore's Law, which has delivered better processor and application performance roughly every two years, but the challenge now is to deliver that performance within a defined power envelope. Power consumption is driving multicore chip development, and programmers need to write code that works within that envelope, Davis said.

Adding cores to a chip to boost performance is a better power-saving option than cranking up the clock frequency of a single-core processor, Davis said. Adding cores increases performance while holding power consumption down.
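
A first-order illustration of that trade-off (a textbook approximation, not a figure from the talk): dynamic power in CMOS logic scales roughly as P ∝ C·V²·f, and raising the clock frequency f usually also requires raising the supply voltage V, so power grows much faster than performance. Two cores at the original frequency and voltage roughly double throughput for roughly double the power, whereas extracting the same extra throughput from one core by raising f and V typically costs well over double; running two cores at a slightly reduced frequency and voltage can match the faster single core at lower total power under this simplified model.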

In 2007, about 40 per cent of desktops, laptops and servers shipped with multicore processors. By 2011, about 90 per cent of PCs shipping will be multicore systems. Almost all Microsoft Windows Vista PCs shipping today are multicore, Davis said.

Intel is also working on an 80-core Polaris chip, which delivers teraflops of performance.

"We're not only talking about terabit computing, but the terabyte sets [of data] we can manage." Davis said. Users are consuming and storing tremendous amounts of data now, and in a few years, the amount of data should reach zettabytes, Davis said.

The next "killer" application for multicore computing could be tools that enable the real-time collection, mining and analysis of data, Davis said. For example, military personnel using wearable multicore computers can simulate, analyse and synthesise data in real time to show how a situation will unfold. Doing so is now viable and doesn't put personnel at risk, Davis said.

"These types of applications have taken weeks to do ... now these types of applications are literally running in minutes," Davis said.

As cores are added, the performance boost may also enable more applications, Davis said. The oil and gas industry will demand one petaflop of computing capacity in 2010, compared with 400 teraflops in 2008, to cost-effectively collect seismic data, compare it with historical records and analyse it. Oil and gas explorers can already collect and analyse data much faster than in the past, Davis said.

