Selected Current Readings for Computer Professionals (974)

Editor: runhua911  2005-08-14

Multicore Processors

In 1965, when he first set out what we now call Moore’s Law, Gordon Moore (who later co-founded Intel Corp.) said the number of components that could be packed onto an integrated circuit would double every year or so (later amended to 18 months).

In 1971, Intel’s 4004 CPU had 2,300 transistors. In 1982, the 80286 debuted with 134,000 transistors. Now, run-of-the-mill CPUs pack upward of 200 million transistors, and Intel is scheduled to release a processor with 1.7 billion transistors later this year.

For years, such progress in CPUs was clearly predictable: Successive generations of semiconductor technology gave us bigger, more powerful processors on ever-thinner silicon substrates operating at increasing clock speeds. These smaller, faster transistors use less electricity, too.

But there’s a catch. It turns out that as operating voltages get lower, a significant amount of electricity simply leaks away and ends up generating excessive heat, requiring much more attention to processor cooling and limiting potential gains in speed; think of this as a thermal barrier.

To break through that barrier, processor makers are adopting a new strategy, packing two or more complete, independent processor cores, or CPUs, onto a single chip. This multicore processor plugs directly into a single socket on the motherboard, and the operating system sees each of the execution cores as a discrete logical processor that is independently controllable. Having two separate CPUs allows each one to run somewhat slower, and thus cooler, and still improve overall throughput for the machine in most cases.
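As a rough illustration of how the operating system sees those execution cores, here is a minimal C++ sketch (the editor’s own example, not part of the original article; it assumes a C++11 or newer compiler) that simply asks the runtime how many logical processors the OS reports:

// Minimal sketch: query how many logical processors the operating system
// exposes. On a dual-core chip without hyper-threading this typically
// prints 2; the call may return 0 if the value cannot be determined.
#include <iostream>
#include <thread>

int main() {
    unsigned int n = std::thread::hardware_concurrency();
    std::cout << "Logical processors visible to the OS: " << n << "\n";
    return 0;
}

Each of those logical processors can be scheduled independently by the operating system, which is exactly the property the multicore package exposes to software.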

From one perspective, this is merely an extension of the design thinking that has for several years given us n-way servers using two or more standard CPUs; we’re simply making the packaging smaller and the integration more complete. In practice, however, this multicore strategy represents a major shift in processor architecture that will quickly pervade the computing industry. Having two CPUs on the same chip rather than plugged into two separate sockets greatly speeds communication between them and cuts waiting time.

The first multicore CPU from Intel is already on the market. By the end of 2006, Intel expects multicore processors to make up 40% of new desktops, 70% of mobile CPUs and a whopping 85% of all server processors that it ships. Intel has said that all of its future CPU designs will be multicore. Intel’s major competitors, including Advanced Micro Devices Inc., Sun Microsystems Inc. and IBM, each appear to be betting the farm on multicore processors.

Besides running cooler and faster, multicore processors are especially well suited to tasks whose operations can be divided into separate threads and run in parallel. On a dual-core CPU, software that can use multiple threads, such as database queries and graphics rendering, can run almost 100% faster than it can on a single-CPU chip.
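To make that concrete, the sketch below (a hypothetical example by the editor; the helper name sum_range and the workload are assumptions, not code from the article) splits an independent summation across two std::thread workers, so a dual-core CPU can execute both halves at the same time:

// Hedged sketch: divide an independent workload between two threads so a
// dual-core processor can run both halves in parallel.
#include <cstdint>
#include <functional>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Sum the half-open range [begin, end) of data into *out.
static void sum_range(const std::vector<std::uint64_t>& data,
                      std::size_t begin, std::size_t end,
                      std::uint64_t* out) {
    *out = std::accumulate(data.begin() + begin, data.begin() + end,
                           std::uint64_t{0});
}

int main() {
    std::vector<std::uint64_t> data(10000000, 1);  // 10 million elements
    std::uint64_t left = 0, right = 0;
    std::size_t mid = data.size() / 2;

    std::thread t1(sum_range, std::cref(data), std::size_t{0}, mid, &left);
    std::thread t2(sum_range, std::cref(data), mid, data.size(), &right);
    t1.join();
    t2.join();

    std::cout << "total = " << (left + right) << "\n";
    return 0;
}

Gains approaching the "almost 100%" figure appear only when, as here, the two halves share no data dependencies and each thread’s share of the work is large enough to outweigh the cost of creating and joining the threads.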

However, many applications that process in a linear fashion, including communications, backup and some types of numerical computation, won’t benefit as much and might even run slower on a dual-core processor than on a faster single-core CPU.
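One standard way to quantify that limitation, not cited in the article but consistent with its claim, is Amdahl’s law: if only a fraction p of a program’s work can be spread across n cores, the overall speedup is at most S(n) = 1 / ((1 - p) + p/n). For a largely serial task with, say, p = 0.2, two cores yield at most 1 / (0.8 + 0.1) ≈ 1.11, so a dual-core processor clocked lower than a fast single-core chip can indeed come out behind.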

