In my previous blog I covered at a high level why system signal integrity is highly dependent upon the silicon. Let’s dive into this a little deeper by first looking at wafer and die manufacturing variances.
I was asked recently whether engineers could just check the CRC error counts reported by the operating system to ensure they had good signal integrity and operating margins. After all, a CRC checks for bit errors, right? Here’s why this is not good enough:
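To make the CRC point concrete, here is a minimal sketch, using Python’s standard `zlib.crc32` as a stand-in for a link-layer CRC (the payload and variable names are illustrative, not any real bus protocol). A CRC mismatch tells you a frame was corrupted after the fact; a clean CRC tells you nothing about how much timing or voltage margin the link had left:

```python
import zlib

# Payload as it would be transmitted on the link (illustrative).
payload = b"high-speed serial frame"
tx_crc = zlib.crc32(payload)

# Simulate a single bit error introduced on the wire.
corrupted = bytearray(payload)
corrupted[0] ^= 0x01
rx_crc = zlib.crc32(bytes(corrupted))

# The receiver sees a CRC mismatch and bumps its error counter...
assert rx_crc != tx_crc

# ...but an error-free frame carries no margin information:
# a link teetering on the edge of failure and a link with huge
# margin both report the same zero error count.
assert zlib.crc32(payload) == tx_crc
```

In other words, OS-level CRC counters only tell you about errors that have already happened; they cannot tell you how close the link is to producing its first one.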
In a previous blog, I described how fixed and adaptive equalization techniques are used within chips to ensure signal integrity even in adverse system conditions. Why is it important to tune these parameters within a chip?
Modern high-speed I/O equalization schemes typically include both fixed (programmable) and adaptive components to ensure signal integrity even in adverse system conditions. What tools are available to ensure that these equalization techniques are working properly on a given system?
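To give a feel for what “adaptive” means here, below is a toy sketch of an adaptive equalizer using the classic LMS (least mean squares) update rule. Everything in it is illustrative: the two-echo channel coefficients, the five-tap filter length, and the step size `mu` are made up for the example and do not represent any particular chip’s receiver.

```python
import random

random.seed(0)

# Training pattern: random +/-1 symbols standing in for the bit stream.
tx = [random.choice((-1.0, 1.0)) for _ in range(2000)]

# Toy channel with inter-symbol interference: each received sample
# also carries echoes of the two previous symbols (coefficients
# chosen so the unequalized eye is closed part of the time).
rx = []
for i in range(len(tx)):
    s = tx[i]
    if i >= 1:
        s += 0.6 * tx[i - 1]
    if i >= 2:
        s += 0.5 * tx[i - 2]
    rx.append(s)

# Five-tap FIR equalizer adapted with the LMS rule:
#   taps <- taps + mu * error * input
taps = [0.0] * 5
mu = 0.01
for i in range(4, len(rx)):
    window = [rx[i], rx[i - 1], rx[i - 2], rx[i - 3], rx[i - 4]]
    y = sum(w * x for w, x in zip(taps, window))
    err = tx[i] - y  # training: the desired symbol is known
    taps = [w + mu * err * x for w, x in zip(taps, window)]

# Compare symbol decisions with and without equalization
# over the last 500 samples, after the taps have adapted.
raw_errors = sum(
    1 for i in range(1500, 2000)
    if (1.0 if rx[i] >= 0 else -1.0) != tx[i]
)
eq_errors = 0
for i in range(1500, 2000):
    window = [rx[i], rx[i - 1], rx[i - 2], rx[i - 3], rx[i - 4]]
    y = sum(w * x for w, x in zip(taps, window))
    if (1.0 if y >= 0 else -1.0) != tx[i]:
        eq_errors += 1

print("errors without equalizer:", raw_errors)
print("errors with equalizer:   ", eq_errors)
```

The fixed (programmable) part of a real scheme corresponds to choosing things like the tap count and initial settings up front; the adaptive part corresponds to the update loop, which keeps tuning the taps as conditions drift. That is exactly why per-system validation tools matter: the “right” settings depend on the board the chip actually lands on.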
What’s cheaper, faster, and more powerful than an oscilloscope, when it comes to validating high-speed signal integrity? Why, a software application using embedded instruments, of course. How is this possible?
In my last few blogs, I’ve talked about the challenges of testing QPI, PCI Express, SATA 3, and DDR3 memory. These buses are common to many Intel Sandy Bridge and Ivy Bridge motherboard designs. Should test engineers take chances and just not test them?
Serial ATA 3 (SATA 3, or SATA III) is a differential bus running at 6 Gbps. It’s commonly used on computer motherboards, such as those in notebooks, to connect to mass storage devices. How do you know if your hard disk or flash drive is running at full speed?