Introduction to SSD. Part 2. Interface

In the last part of the "Introduction to SSD" series, we talked about the history of the appearance of disks. This second part covers the interfaces used to communicate with drives.

Communication between the processor and peripherals follows predefined conventions called interfaces. These conventions govern both the physical and the software level of interaction.

An interface is a set of means, methods, and rules for interaction between elements of a system.

The physical implementation of an interface affects the following parameters:

  • throughput of the communication channel;
  • the maximum number of simultaneously connected devices;
  • the number of errors that occur.

Disk interfaces are built on I/O ports, which, unlike memory-mapped I/O, do not take up space in the processor's address space.

Parallel and serial ports

According to the method of data exchange, I/O ports are divided into two types:

  • parallel;
  • serial.

As the name implies, a parallel port sends a whole machine word, consisting of several bits, at a time. A parallel port is the simplest way to exchange data, as it does not require complex circuitry. In the simplest case, each bit of the machine word is sent on its own signal line, and two service lines provide feedback: Data Ready and Data Accepted.

At first glance, parallel ports scale well: more signal lines means more bits transmitted at a time and, therefore, higher throughput. However, as the number of signal lines grows, crosstalk arises between them, distorting the transmitted messages.

Serial ports are the opposite of parallel ones. Data is sent one bit at a time, which reduces the total number of signal lines but complicates the I/O controller. The transmitter controller receives a machine word and must send it out one bit at a time, while the receiver controller must collect the bits and store them in the same order.

A small number of signal lines makes it possible to raise the transmission frequency without interference.
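The transmit and receive logic described above can be sketched in a few lines of Python (an illustrative model of a serializer/deserializer, not real controller firmware):

```python
def serialize(word: int, width: int = 8):
    """Transmitter side: split a machine word into bits, LSB first."""
    return [(word >> i) & 1 for i in range(width)]

def deserialize(bits):
    """Receiver side: reassemble the bits into a word in the same order."""
    word = 0
    for i, bit in enumerate(bits):
        word |= bit << i
    return word

# A round trip over the "serial line" restores the original word.
assert deserialize(serialize(0b10110010)) == 0b10110010
```

A parallel port would simply put all `width` bits on `width` wires at once; the serial port pays with this extra conversion logic on both ends.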

SCSI

Small Computer Systems Interface (SCSI) appeared back in 1978 and was originally designed to combine devices of various profiles into a single system. The SCSI-1 specification provided for the connection of up to 8 devices (including the controller), such as:

  • scanners;
  • tape drives (streamers);
  • optical drives;
  • disk drives and other devices.

SCSI was originally named Shugart Associates System Interface (SASI), but the standards committee would not approve a name tied to a company, and after a day of brainstorming the name Small Computer Systems Interface (SCSI) was born. The "father" of SCSI, Larry Boucher, intended the acronym to be pronounced "sexy", but Dal Allan read it as "scuzzy". The "scuzzy" pronunciation subsequently stuck to the standard.

In SCSI terminology, connected devices are divided into two types:

  • initiators;
  • target devices.

The initiator sends a command to the target device, which then sends a response to the initiator. The initiators and targets are connected to a common SCSI bus, which has a bandwidth of 5 MB/s in the SCSI-1 standard.

The "common bus" topology used imposes a number of restrictions:

  • special devices, terminators, are required at both ends of the bus;
  • the bus bandwidth is shared among all devices;
  • the maximum number of simultaneously connected devices is limited.

Devices on the bus are identified by a unique number called the SCSI Target ID. Each SCSI device in the system is represented by at least one logical unit, which is addressed by a Logical Unit Number (LUN) unique within the physical device.

Commands in SCSI are sent as Command Descriptor Blocks (CDBs), consisting of an operation code and command parameters. The standard describes more than 200 commands, divided into four categories:

  • Mandatory — must be supported by the device;
  • Optional — may be implemented;
  • Vendor-specific — used by a specific manufacturer;
  • Obsolete — no longer in use.

Of the many commands, only three are mandatory for all devices:

  • TEST UNIT READY — checks whether the device is ready;
  • REQUEST SENSE — requests the error code of the previous command;
  • INQUIRY — requests the main characteristics of the device.
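As an illustration, a 6-byte CDB can be assembled like this in Python (the opcodes below are from the SCSI standard; the helper and its simplified field layout are hypothetical, real CDBs carry more fields):

```python
# Opcodes of the three mandatory commands, per the SCSI standard.
TEST_UNIT_READY = 0x00
REQUEST_SENSE   = 0x03
INQUIRY         = 0x12

def build_cdb6(opcode: int, alloc_len: int = 0) -> bytes:
    """Minimal 6-byte Command Descriptor Block: an opcode, three
    parameter bytes (allocation length in the low byte), and a
    trailing control byte. Simplified for illustration."""
    return bytes([opcode, 0x00, 0x00, 0x00, alloc_len & 0xFF, 0x00])

# Ask the target to return up to 96 bytes of identification data.
cdb = build_cdb6(INQUIRY, alloc_len=96)
assert len(cdb) == 6 and cdb[0] == 0x12
```

The initiator places such a block on the bus; the target parses the opcode, executes the command, and answers with data and a status code.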

After receiving and processing the command, the target device sends a status code to the initiator, which describes the result of the execution.

Further development of SCSI (the SCSI-2 and Ultra SCSI specifications) expanded the command list, increased the number of connected devices to 16, and raised the bus data rate to 640 MB/s. Since SCSI is a parallel interface, raising the transfer frequency meant shortening the maximum cable length, which made the interface inconvenient to use.

Starting with the Ultra-3 SCSI standard, "hot plugging" is supported — connecting devices while the power is on.

The first known SCSI SSD was the M-Systems FFD-350, released in 1995. The drive was expensive and did not see wide adoption.

Today, parallel SCSI is no longer a popular disk interface, but its command set is still actively used in the USB and SAS interfaces.

ATA/PATA

The ATA (Advanced Technology Attachment) interface, also known as PATA (Parallel ATA), was developed by Western Digital in 1986. The marketing name IDE (Integrated Drive Electronics) emphasized an important innovation: the controller was integrated into the drive rather than placed on a separate expansion board.

The decision to place the controller inside the drive solved several problems at once. First, the distance from the drive to the controller decreased, which improved drive performance. Second, the built-in controller was tailored to one specific type of drive and was accordingly cheaper.

ATA, like SCSI, uses parallel I/O, which is reflected in its cabling. Connecting drives over the IDE interface requires 40-wire ribbon cables. Later specifications use 80-wire cables, more than half of whose wires are grounds that reduce interference at high frequencies.

An ATA cable has two to four connectors, one of which plugs into the motherboard and the rest into drives. When two devices share one cable, one must be configured as Master and the other as Slave. A third device cannot be attached to the same cable.

The role of a particular device is determined by the position of a jumper. The terms Master and Slave are not entirely accurate here, since with respect to the controller all connected devices are slaves.

A notable innovation in ATA-3 was the arrival of Self-Monitoring, Analysis and Reporting Technology (SMART). Five companies (IBM, Seagate, Quantum, Conner, and Western Digital) joined forces and standardized drive-health assessment technology.

Support for solid-state drives appeared in version 4 of the standard, released in 1998. This version of the standard provided data transfer rates up to 33.3 MB/s.

The standard imposes strict requirements on ATA cables:

  • the cable must be flat;
  • the maximum cable length is 18 inches (45.7 centimeters).

The short, wide cable was inconvenient and obstructed cooling. Raising the transfer frequency became harder with each subsequent version of the standard, and ATA-7 solved the problem radically: the parallel interface was replaced with a serial one. After that, ATA acquired the word Parallel and became known as PATA, while the seventh version of the standard received a different name, Serial ATA. SATA version numbering started over from one.

SATA

The Serial ATA (SATA) standard was introduced on January 7, 2003 and addressed the problems of its predecessor with the following changes:

  • the parallel port was replaced by a serial one;
  • the wide 80-wire cable was replaced by a 7-wire one;
  • the "common bus" topology was replaced with point-to-point connections.

Even though SATA 1.0 (SATA/150, 150 MB/s) was only marginally faster than ATA-6 (UltraDMA/133, 133 MB/s), the move to serial communication laid the groundwork for future speed increases.

The sixteen data signal lines of ATA were replaced with two twisted pairs: one for transmitting, the other for receiving. SATA connectors are designed to withstand many more reconnections, and the SATA 1.0 specification made hot plugging possible.

Some pins on the drive are shorter than the others. This is done to support hot swapping (Hot Swap): during replacement, the device "loses" and "regains" the lines in a predetermined order.

A little over a year later, in April 2004, the second version of the SATA specification was released. Besides raising the speed to 3 Gb/s, SATA 2.0 introduced Native Command Queuing (NCQ). Devices with NCQ support can independently reorder incoming commands to achieve maximum performance.
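A toy model of the idea behind NCQ: the device, not the host, chooses the execution order of queued requests. The elevator-style ordering below is an illustrative simplification; real drives use their own scheduling heuristics.

```python
def ncq_order(queue, current_lba):
    """Reorder queued requests: serve logical block addresses at or
    ahead of the current position in ascending order, then wrap around
    to the ones behind it (a classic elevator scan)."""
    ahead  = sorted(lba for lba in queue if lba >= current_lba)
    behind = sorted(lba for lba in queue if lba < current_lba)
    return ahead + behind

# Arrival order 700, 20, 450, 90 becomes 450, 700, 20, 90
# when the device is currently positioned at LBA 100.
assert ncq_order([700, 20, 450, 90], current_lba=100) == [450, 700, 20, 90]
```

Without NCQ, the device would have to execute commands strictly in arrival order, even when a different order is obviously cheaper.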

Over the next three years, the SATA working group refined the existing specification, and version 2.6 introduced the compact Slimline and micro SATA (uSATA) connectors. These are smaller versions of the original SATA connector, designed for optical drives and small drives in laptops.

While second-generation SATA had enough bandwidth for HDDs, SSDs demanded more. In May 2009, the third version of the SATA specification was released, with bandwidth increased to 6 Gb/s.

The SATA 3.1 edition paid particular attention to solid-state drives. It introduced the Mini-SATA (mSATA) connector, designed for solid-state drives in laptops. Unlike Slimline and uSATA, the new connector resembled PCIe Mini, although it was not electrically compatible with PCIe. Besides the new connector, SATA 3.1 gained the ability to queue TRIM commands alongside read and write commands.

The TRIM command notifies the SSD of data blocks that no longer carry a payload. Prior to SATA 3.1, issuing TRIM required flushing caches and suspending I/O operations first, which degraded disk performance during delete operations.
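A sketch of why TRIM matters for the drive's internal bookkeeping. `ToyFTL` is a hypothetical, heavily simplified model of a flash translation layer:

```python
class ToyFTL:
    """Once the drive knows which logical blocks carry no payload,
    it can skip copying them during garbage collection and wear
    leveling instead of treating every written block as live data."""
    def __init__(self):
        self.valid = set()        # logical blocks holding live data

    def write(self, lba):
        self.valid.add(lba)

    def trim(self, lbas):
        self.valid -= set(lbas)   # no erase now; just mark as garbage

ftl = ToyFTL()
for lba in range(8):
    ftl.write(lba)
ftl.trim(range(4))                # the OS deleted a file
assert ftl.valid == {4, 5, 6, 7}  # half the blocks are now free to reclaim
```

Without TRIM, all eight blocks would still look valid to the drive, and garbage collection would needlessly relocate stale data.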

The SATA specification could not keep up with the rapid growth of SSD access speeds, which led in 2013 to a compromise called SATA Express in the SATA 3.2 standard. Instead of doubling SATA bandwidth yet again, the developers turned to the widely used PCIe bus, whose speed exceeds 6 Gb/s. Drives with SATA Express support acquired their own form factor, M.2.

SAS

The SCSI standard, "competing" with ATA, did not stand still either: in 2004, just a year after Serial ATA appeared, it was reborn as a serial interface named Serial Attached SCSI (SAS).

Although SAS inherited the SCSI command set, the changes were significant:

  • serial interface;
  • 29-pin connector that includes power;
  • point-to-point connections.

SCSI terminology was inherited as well. The controller is still called the initiator, and connected devices are targets. All targets and the initiator form a SAS domain. In SAS, connection bandwidth does not depend on the number of devices in the domain, since each device uses its own dedicated channel.

According to the specification, a SAS domain can hold more than 16 thousand simultaneously connected devices, and instead of a SCSI ID, a World Wide Name (WWN) identifier is used for addressing.

A WWN is a unique 16-byte identifier, an analogue of the MAC address for SAS devices.

Despite the similarity between SAS and SATA connectors, the standards are not fully compatible: a SATA drive can be connected to a SAS connector, but not vice versa. Compatibility between SATA drives and a SAS domain is provided by the SATA Tunneling Protocol (STP).

The first version of the standard, SAS-1, has a bandwidth of 3 Gb/s, while the most recent, SAS-4, has improved this figure 7.5-fold, to 22.5 Gb/s.

PCIe

Peripheral Component Interconnect Express (PCI Express, PCIe) is a serial data-transfer interface that appeared in 2002. Development was started by Intel and later handed over to a dedicated organization, the PCI Special Interest Group.

Serial PCIe was no exception to the trend: it became the logical continuation of parallel PCI, which was designed for connecting expansion cards.

PCI Express differs significantly from SATA and SAS: the PCIe interface has a variable number of lanes. The number of lanes is a power of two and ranges from 1 to 16.

The term "lane" in PCIe does not refer to a specific signal lane, but to a separate full-duplex communication link consisting of the following signal lanes:

  • receive+ and receive-;
  • transmission+ and transmission-;
  • four ground wires.

The number of PCIe lanes directly affects the maximum bandwidth of the connection. The current PCI Express 4.0 standard achieves 1.9 GB/s on a single lane and 31.5 GB/s with 16 lanes.
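These figures follow directly from the line rate: PCIe 4.0 transfers 16 GT/s per lane with 128b/130b encoding. A quick calculation:

```python
def pcie_bandwidth_gbs(lanes: int, gt_per_s: float = 16.0,
                       encoding: float = 128 / 130) -> float:
    """Usable bandwidth in GB/s. PCIe 4.0 runs at 16 GT/s per lane;
    128b/130b encoding means 128 of every 130 transferred bits are
    payload; dividing by 8 converts bits to bytes."""
    return lanes * gt_per_s * encoding / 8

assert round(pcie_bandwidth_gbs(1), 2) == 1.97    # single lane
assert round(pcie_bandwidth_gbs(16), 1) == 31.5   # x16 slot
```

Doubling the lane count doubles the bandwidth, which is why SSD form factors pick the widest link they can physically accommodate.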

The "appetites" of solid-state drives are growing very quickly. Both SATA and SAS have not been able to increase their bandwidth to keep pace with SSDs, which has led to the introduction of PCIe-connected SSDs.

Although PCIe add-in cards are held in with a screw, PCIe supports hot swapping. The short PRSNT ("present") pins verify that the card is fully seated in the slot.

Solid-state drives connected via PCIe are governed by a separate standard, the Non-Volatile Memory Host Controller Interface Specification, and come in a variety of form factors, but we will talk about those in the next part.

Remote Drives

When creating large data warehouses, a need arose for protocols that connect drives located outside the server. The first solution in this area was Internet SCSI (iSCSI), developed by IBM and Cisco in 1998.

The idea behind the iSCSI protocol is simple: SCSI commands are "wrapped" into TCP/IP packets and sent over the network. Despite the remote connection, clients get the illusion that the drive is attached locally. A Storage Area Network (SAN) based on iSCSI can be built on existing network infrastructure, which significantly reduces the cost of organizing a SAN.
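The principle can be shown schematically. The byte layout below is invented for illustration and is not the real iSCSI PDU format; it only demonstrates the encapsulation idea:

```python
import struct

def wrap_scsi_command(cdb: bytes, lun: int, task_tag: int) -> bytes:
    """Toy illustration of the iSCSI idea: a SCSI Command Descriptor
    Block is prefixed with a small header and shipped as an ordinary
    byte stream over TCP/IP. Header here: 1-byte opcode, 2-byte LUN,
    4-byte task tag, 2-byte CDB length."""
    header = struct.pack(">BHI", 0x01, lun, task_tag)
    return header + len(cdb).to_bytes(2, "big") + cdb

# Wrap a blank 6-byte CDB addressed to LUN 0.
pdu = wrap_scsi_command(bytes(6), lun=0, task_tag=42)
assert len(pdu) == 7 + 2 + 6
```

The receiving side does the reverse: it strips the header, hands the CDB to the local SCSI stack, and wraps the response the same way, so the remote drive behaves like a local one.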

iSCSI has a "premium" option - Fiber Channel Protocol (FCP). SAN using FCP is built on dedicated fiber-optic communication lines. This approach requires additional optical network equipment, but is stable and high throughput.

There are many protocols for sending SCSI commands over computer networks. There is, however, only one standard that solves the opposite problem, sending IP packets over the SCSI bus: IP over SCSI.

Most SAN protocols use the SCSI command set to manage drives, but there are exceptions, such as the simple ATA over Ethernet (AoE). The AoE protocol sends ATA commands in Ethernet frames, yet the drives appear in the system as SCSI devices.

With the advent of NVM Express drives, the iSCSI and FCP protocols ceased to meet the rapidly growing demands of SSDs. Two solutions emerged:

  • extending the PCI Express bus outside the server;
  • creating the NVMe over Fabrics protocol.

Extending the PCIe bus outside the server requires complex switching hardware but leaves the protocol unchanged.

The NVMe over Fabrics protocol became a good alternative to iSCSI and FCP. NVMe-oF uses fiber-optic links and the NVM Express command set.

DDR-T

The iSCSI and NVMe-oF standards solve the problem of attaching remote drives as local ones; Intel went the other way and brought the local drive as close to the processor as possible. The choice fell on the DIMM slots used for RAM. Peak DDR4 bandwidth is 25 GB/s, much higher than that of the PCIe bus. This is how the Intel® Optane™ DC Persistent Memory SSD was born.

To connect a drive to DIMM slots, the DDR-T protocol was devised: physically and electrically compatible with DDR4, but requiring a special controller that can tell a memory module from a drive. Access to the drive is slower than to RAM, but faster than to NVMe.

DDR-T is only available with Intel® Cascade Lake generation processors or later.

Conclusion

Almost all interfaces have come a long way from parallel to serial data transmission. SSD speeds are skyrocketing: yesterday SSDs were a curiosity, and today NVMe surprises no one.

In our Selectel Lab, you can test SSD and NVMe drives yourself.


Source: habr.com
