- Conceptual Overview
- Read-Only Memory (ROM)
- Random Access Memory (RAM)
- Cycles and Frequencies
- Summary—Basic Memory
- Cache Memory
- Memory Pages
- Rambus Memory (RDRAM)
- Double Data Rate SDRAM (DDR SDRAM)
- Video RAM (VRAM)
- Supplemental Information
- Packaging Modules
- Memory Diagnostics—Parity
- Exam Prep Questions
- Need to Know More?
Double Data Rate SDRAM (DDR SDRAM)
The SLDRAM effort gave rise to DDR SDRAM and DDR-II, both of which are supported by newer versions of the Intel i845E chipset. Double Data Rate (DDR) memory came about as a response to the changed architecture and licensing fees of Intel-backed RDRAM. AMD was pursuing faster processing by using a double-speed bus. Instead of using a full clock tick to run an event, it uses a "half-tick" cycle, based on the voltage change during a clock cycle. As the clock begins a tick, the voltage goes up (an up tick) and an event takes place. When the clock ends the tick, the voltage goes down (a down tick) and a second event takes place. Every clock cycle therefore produces two memory events. The AMD Athlon and Duron use the DDR specification with the double-speed bus.
DDR and Rambus memory are not backward compatible with SDRAM. The big difference between DDR and SDRAM memory is that DDR reads data on both the rising and falling edges of the clock signal, whereas SDRAM carries information only on the rising edge. This allows a DDR module to transfer twice as much data as SDRAM in the same time period. For example, instead of an effective transfer rate of 133MHz, DDR memory on a 133MHz bus transfers data at an effective 266MHz.
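To put numbers to this, here is a minimal sketch that computes peak transfer rates from clock speed, bus width, and transfers per clock. The 64-bit module width and the PC133/DDR-266 figures are common published values used for illustration, not taken from any particular datasheet.

```python
# Peak bandwidth = bus width (bytes) x clock (MHz) x transfers per clock.
# Typical values are assumed here purely for illustration.

def peak_bandwidth_mb_per_s(bus_width_bits, clock_mhz, transfers_per_clock):
    """Return the peak transfer rate in MB/s."""
    return (bus_width_bits // 8) * clock_mhz * transfers_per_clock

sdram_pc133 = peak_bandwidth_mb_per_s(64, 133, 1)   # one transfer per clock
ddr_266     = peak_bandwidth_mb_per_s(64, 133, 2)   # rising and falling edges

print(f"PC133 SDRAM: {sdram_pc133} MB/s")   # about 1,064 MB/s
print(f"DDR-266:     {ddr_266} MB/s")       # about 2,128 MB/s
```

Notice that the doubling comes entirely from the second transfer per clock; the clock itself still runs at 133MHz.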
DDR chips are packaged in dual inline memory modules (DIMMs), like their SDRAM predecessors, and they connect to the motherboard in much the same way as SDRAM. DDR memory supports both ECC (error correction code, typically used in servers) and non-parity operation (used on desktops and laptops). We discuss parity at the end of this chapter.
NOTE
RDRAM also introduced a different type of chip packaging called Fine Pitch Ball Grid Array (FPBGA). Rambus chips are much larger than SDRAM or DDR dies, which means that fewer parts can be produced on a wafer. Most DDR SDRAM uses a Thin Small Outline Package (TSOP). TSOP chips have fairly long contact pins on each side, whereas FPBGA chips have tiny ball contacts on the underside. The very small soldered balls have a much lower capacitive load than the TSOP pins. DDR SDRAM using FPBGA packaging runs at 200MHz to 266MHz, whereas the same chips in a TSOP package are limited to 150MHz to 180MHz.
DDR-II
The current PC1066 RDRAM runs at 533MHz, so Samsung and Elpida have announced that they are studying 667MHz RDRAM (PC1333) and 800MHz memory (PC1600). These systems would most likely be used in high-end network systems, but that doesn't mean that RDRAM would be completely removed from the home consumer market. Rambus has already developed a new technology, codenamed "Yellowstone," which should lead to 3.2GHz memory with a 12.4GB/s throughput. With a 128-bit interface, Rambus promises to achieve 100GB/s throughput. Yellowstone technology is expected to arrive in game boxes first, with PC memory scheduled for sometime around 2005.
DDR-II may be the end of Rambus memory, although people have speculated before that RDRAM wouldn't last. DDR-II extends the original DDR concept, taking on some of the advantages developed by Rambus. DDR-II uses FPBGA packaging for a faster connection to the system, and it reduces some of the signal reflection problems of the original DDR. However, latency problems increase with higher bus speeds. DDR-II is entering the consumer market, but RDRAM is expected to continue, although with limited chipset support.
Serial Transfers and Latency
One of the problems with Rambus memory is that the RIMMs are connected to the bus in series. A data item has to pass through all the other modules before it reaches the memory bus, so the signal travels a lot farther than it does on a DIMM, where the bus uses parallel transfers. The longer distance introduces a time lag, called latency; the longer the delay before the signal reaches the bus, the higher the latency. In a Nintendo game console, data generally moves in long streams, so serial transfers aren't a problem. But in a typical PC, data routinely moves in short bursts, and latency becomes a problem.
To understand latency, take a look at the difference between serial and parallel transfers. Think of a train in a movie scene. The hero is at one end of the train and has to chase the bad guy, using a serial process. He goes from one end of a car, along all the seats, and then leaves by a door at the other end, which is connected to the next car in the train. Then the process starts all over again, until he either reaches the end of the train or someone gets killed.
Now take that same train, but this time there isn't a hero chasing a bad guy. Instead, imagine a train full of people on their way to work. If there were only one door at the back of the train, it would take forever to let everyone off at the train station. To fix that problem, each car has its own door. When the train comes to a stop, everyone turns to the side facing the platform: The doors in each car open up, and people leave each car simultaneously. This is a parallel transfer.
RDRAM uses a 16-bit bus for the data signals. This narrow 2-byte path is the main reason why RDRAM can run at higher speeds than SDRAM. Keep in mind that transfers are not only faster, but there are two of them per cycle. On the other hand, one of the problems with parallel transfers at high speeds is something called skew. The longer and faster the bus gets, the more likely it is that some data signals will arrive too soon or too late: not in a perfect line. It would be as if sixty-four people started to leave the train at the same time, but each one stepped onto the platform at a different time.
SLDRAM uses a lower clock speed, which reduces signal problems. With no licensing fees, it's also cheaper to produce. Another useful feature is that it has a higher bandwidth than DRDRAM, allowing for a potential transfer rate of 3.2GB/s, as opposed to Rambus's 1.6GB/s. (Note that modern Intel chipsets use two parallel Rambus channels to reach 3.2GB/s.)
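The same arithmetic shows how a narrow, fast serial channel can rival a wide, slower parallel bus. The sketch below uses typical published ratings (a 16-bit PC800 RDRAM channel and a 64-bit PC133 SDRAM bus) purely as assumed figures, and shows how two Rambus channels reach the 3.2GB/s mentioned above.

```python
# Compare a narrow, fast channel with a wide, slower bus.
# Figures are typical ratings, used here only for illustration.

def channel_gb_per_s(bus_width_bits, effective_mhz, channels=1):
    """Peak throughput in GB/s: width (bytes) x effective clock x channels."""
    return (bus_width_bits / 8) * effective_mhz * channels / 1000

rdram_single = channel_gb_per_s(16, 800)      # PC800 RDRAM, one channel
rdram_dual   = channel_gb_per_s(16, 800, 2)   # dual-channel chipset
sdram_pc133  = channel_gb_per_s(64, 133)      # 64-bit PC133 SDRAM

print(f"RDRAM single channel: {rdram_single:.1f} GB/s")  # 1.6 GB/s
print(f"RDRAM dual channel:   {rdram_dual:.1f} GB/s")    # 3.2 GB/s
print(f"PC133 SDRAM:          {sdram_pc133:.1f} GB/s")   # about 1.1 GB/s
```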
NOTE
Intel went the Rambus route and released the 800-series chipsets to work only with RDRAM. Soon after, Via released a chipset that would run DDR memory, an outgrowth of the SLDRAM technology. AMD wasn't going to be limited to an Intel board, so much of the market jumped on the Via chipset. This put pressure on Intel to come up with a modified 845-series chipset that would also support DDR memory. It appears as though Rambus may have a hard battle to win market share, but it continues to hang on in high-end desktops and workstations.
In a nutshell, fast over a long path may not beat slow over a short path. For instance, suppose you want to go two miles to the store. If you go the long way, using a highway, it's a ten-mile drive, but you can drive 60 mph. If you go directly to the store, you're stuck driving 30 mph on local roads. Using the highway, you arrive in ten minutes (10 miles at 60 mph). The other way, you arrive in four minutes (2 miles at 30 mph). You might drive a whole lot faster on the highway, but you'll get to the store sooner on the straight-line route. In this example, the store is the memory controller, and the different roads represent different types of bus architectures.
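As a tiny illustration of the trade-off, the sketch below works through the driving example with the same invented numbers; the reasoning is identical when a fast serial memory channel covers a longer path than a slower parallel one.

```python
# Travel time = distance / speed. A faster road does not guarantee an
# earlier arrival if the route is much longer -- the same trade-off that
# pits a fast serial memory channel against a shorter parallel path.

def travel_minutes(distance_miles, speed_mph):
    return distance_miles / speed_mph * 60

highway = travel_minutes(10, 60)   # long but fast
local   = travel_minutes(2, 30)    # short but slow

print(f"Highway route: {highway:.0f} minutes")  # 10 minutes
print(f"Local route:   {local:.0f} minutes")    # 4 minutes
```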