Many of us got new laptops, e-readers, tablets, and other electronic toys as gifts over the holidays. But do these machines run as fast as they’re supposed to?
It’s always a joy to give and receive during the Christmas season. I know that I eagerly unwrap each gift, wondering whether it’s a new shirt (ugh!) or the latest electronics gadget (yes!). In fact, this year I treated my son to a new notebook PC, as his Toshiba laptop was getting a little long in the tooth. We placed the new Intel 2nd Generation Core i5 machine next to its older Pentium 4 counterpart, and marveled at how much more quickly basic programs like Word, Excel, and Explorer launched and ran. And of course, Battlefield 3 is a much more enjoyable experience on the latest and greatest technology (but with my game-playing skills, faster is not always better).
But as I ran these tests, it occurred to me that my judgment of the speed increase was fairly subjective. Of course the new machine ran faster, but how much faster? Which factors contributed most to the performance boost, and how much did each one matter? And given those answers, how do we know that the system is performing at its optimal level?
Answering these kinds of questions is notoriously difficult. The performance of a system like a laptop depends heavily on what is being measured: is it raw CPU speed, memory bandwidth, graphics performance, I/O throughput, or something else? Benchmarks do provide a means of comparing the performance of various subsystems across different chip or system architectures, but their results are often open to interpretation and don’t take “real-world” situations into account.
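To see why a single “speed” number can mislead, here is a toy sketch (not a real benchmark; the workloads and sizes are illustrative assumptions) that times a CPU-bound loop against a memory-bound copy. Two machines can rank differently on the two numbers, which is exactly why benchmark results need interpretation:

```python
# Toy illustration: "how fast" depends on which subsystem you exercise.
import time

def time_it(fn, *args):
    """Return elapsed wall-clock seconds for one call to fn."""
    start = time.perf_counter()
    fn(*args)
    return time.perf_counter() - start

def cpu_bound(n):
    # Exercises mostly the CPU core: integer arithmetic in a tight loop.
    total = 0
    for i in range(n):
        total += i * i
    return total

def memory_bound(buf):
    # Exercises mostly memory bandwidth: copies the whole buffer.
    return bytes(buf)

cpu_t = time_it(cpu_bound, 1_000_000)
mem_t = time_it(memory_bound, bytearray(50_000_000))
print(f"CPU loop: {cpu_t:.3f} s, memory copy: {mem_t:.3f} s")
```

A faster processor helps the first number far more than the second, so which machine “wins” depends on the mix of work you actually do.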
One thing that we do know is that any system’s performance is highly dependent upon the speed at which processors, chipsets, and memory communicate on a circuit board. With chip-to-chip serial interconnects such as QPI, PCIe, and DMI now running at 8 GT/s and beyond, and memory buses at 1600 MT/s, it is essential that these protocols run as close to error-free as possible. Problems on these critical buses, such as bad margins, crosstalk, and inter-symbol interference, will introduce bit errors in the data stream and corrupt the integrity of the traffic. Buses will recover from these errors, but only through retransmissions, which reduce the overall throughput. And reduced throughput means poorer performance. And that means me getting killed faster in Battlefield 3.
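The effect of retransmissions on throughput can be put into rough numbers. The little model below is a back-of-the-envelope sketch under simplifying assumptions (fixed packet size, independent bit errors, one resend per errored packet); real link-layer protocols are more involved, but the trend is the same:

```python
# Sketch: how a link's bit error rate (BER) erodes effective throughput.
# Assumptions: independent bit errors, fixed-size packets, and exactly
# one retransmission per errored packet. Numbers are illustrative.

def effective_throughput(raw_rate, bits_per_packet, ber):
    """Raw transfer rate scaled by the fraction of packets that
    arrive clean and therefore need no retransmission."""
    p_packet_error = 1 - (1 - ber) ** bits_per_packet
    # Each errored packet must be resent, so useful throughput
    # (goodput) scales by the probability of a clean packet.
    return raw_rate * (1 - p_packet_error)

# A healthy serial link targets a BER around 1e-12 or better:
print(effective_throughput(8.0, 2048, 1e-12))  # essentially the full 8 GT/s
# A marginal link with a BER of 1e-6 gives up measurable bandwidth:
print(effective_throughput(8.0, 2048, 1e-6))
```

At a healthy BER the loss is negligible, but a marginal link quietly taxes every transfer, which is why signal integrity problems show up as sluggishness rather than outright failures.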
What are the sources of signal integrity issues on these high-speed buses? The list is extremely long, but many of them can be traced back to design flaws, manufacturing defects, and process variances. An excellent article on the latter two appeared in a previous issue of Connect, which you can view here: http://www.asset-intertech.com/connect/2009Q3/PCB_variances_and_SerDes.htm.
Design issues that can affect signal integrity include lossy board material (such as FR4), via stubs, improper grounding, noisy supply planes, and poor signal coupling, among others.
The good news is that makers of consumer electronics can now easily measure how well these buses are performing, without the need for the costly high-end oscilloscopes and bit error rate testers of the past. For example, on upcoming Intel Ivy Bridge designs, the ScanWorks HSIO for PC Client tool provides advanced bit- and lane-level results on high-speed DDR3, PEG, and DMI buses. It’s a simple, cost-effective tool that runs on a garden-variety PC without any special hardware requirements. Have a look here: http://shop.asset-intertech.com/index.php/pc-client-better/pcclient.html.