The Life of a Data Byte

Every cloud provider offers data storage services: hot, cold, even ice-cold tiers. Storing information in the cloud is convenient. But how was data stored 10, 20, 50 years ago? Cloud4Y has translated an interesting article on exactly this subject.

A byte of data can be stored in a variety of ways, as newer, better, and faster storage media emerge all the time. A byte is a unit of digital information storage and processing that consists of eight bits; each bit is either 0 or 1.

In the case of punched cards, a bit is stored as the presence or absence of a hole at a specific location on the card. If we go back a little further, to Babbage's Analytical Engine, the registers that stored numbers were gears. In magnetic storage devices such as tapes and disks, a bit is represented by the polarity of a specific area of the magnetic film. In modern dynamic random access memory (DRAM), a bit is often represented as a two-level electrical charge stored in a capacitor, a device that holds electrical energy in an electric field: a charged or discharged capacitor stores one bit of data.

In June 1956, Werner Buchholz coined the word byte to denote a group of bits used to encode a single character of text. Let's talk a little about character encoding, starting with the American Standard Code for Information Interchange, or ASCII. ASCII was based on the English alphabet, so every letter, digit, and symbol (a-z, A-Z, 0-9, +, -, /, ", !, etc.) was represented as a 7-bit integer from 32 to 127. This was not exactly "friendly" to other languages. To support them, Unicode extended ASCII. In Unicode each character is represented as a code point; for example, lowercase j is U+006A, where U stands for Unicode, followed by a hexadecimal number.

UTF-8 is a standard for representing characters in 8-bit units, allowing each code point in the range 0-127 to be stored in a single byte. Recalling ASCII, that works out fine for English characters, but characters from other languages are often expressed in two or more bytes. UTF-16 represents characters in 16-bit units, and UTF-32 in 32-bit units. In ASCII every character is a byte, but in Unicode, contrary to a common assumption, a character can occupy 1, 2, 3, or more bytes. This article will use different-sized groupings of bits; the number of bits in a byte has varied with the design of the medium.
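The variable width of these encodings is easy to verify. A minimal Python sketch (the non-English characters below are my own examples, not from the article):

```python
# How many bytes does one character occupy in each encoding?
# The "-le" variants are used so no byte-order mark is included.
for ch in ("j", "é", "€"):
    widths = {enc: len(ch.encode(enc)) for enc in ("utf-8", "utf-16-le", "utf-32-le")}
    print(f"U+{ord(ch):04X} {ch!r}: {widths}")
```

ASCII-range characters like j stay one byte in UTF-8, while é takes two and € takes three; every character is a fixed two bytes in UTF-16 (outside the supplementary planes) and four in UTF-32.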

In this article, we will take a journey through time across various storage media to immerse ourselves in the history of data storage. We will by no means study in depth every storage medium ever invented. This is a light informational article that makes no claim to encyclopedic completeness.

Let's start. Suppose we have a byte of data to store: the letter j, either as the encoded byte 0x6A or as the binary 01101010. As we travel through time, this data byte will find itself on some of the storage technologies described below.
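A quick Python check confirms that the hexadecimal and binary forms describe the same byte:

```python
byte = ord("j")              # code point U+006A
print(hex(byte))             # 0x6a
print(format(byte, "08b"))   # 01101010: eight bits, one byte
```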

1951


Our story begins in 1951 with the UNISERVO tape drive for the UNIVAC I computer, the first tape drive designed for a commercial computer. The tape was a thin strip of nickel-plated bronze (called Vicalloy) 12.65 mm wide and almost 366 meters long. Our data byte could be stored at 7,200 characters per second on tape moving at about 2.54 meters (100 inches) per second. At this point in history, you could measure the speed of a storage algorithm by the distance the tape traveled.

1952


Fast forward a year to May 21, 1952, when IBM announced its first magnetic tape unit, the IBM 726. Our data byte can now move from UNISERVO's metal tape to IBM's magnetic tape. This new home turned out to be very cozy for our very small byte of data, since the tape could store up to 2 million digits. This 7-track magnetic tape moved at 1.9 meters per second with a transfer rate of 12,500 digits or 7,500 characters (then called copy groups) per second. For reference: an average article on Habr is about 10,000 characters.

The IBM 726 tape had seven tracks, six used for storing information and one for parity. One reel held up to 400 meters of tape 1.25 cm wide. The data transfer rate theoretically reached 12,500 characters per second, at a recording density of 40 bits per centimeter (100 bits per inch). The system used a "vacuum channel" method, in which a loop of tape circulated between two points. This let the tape start and stop in a fraction of a second: long vacuum columns were placed between the tape reels and the read/write heads to absorb sudden surges of tension in the tape, without which the tape would routinely break. A removable plastic ring on the back of the tape reel provided write protection. One reel of tape could store about 1.1 megabytes.
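The parity track lends itself to a short illustration. This is a sketch, not the 726's actual circuitry, and it assumes even parity; the article doesn't say which scheme the drive used:

```python
# Six data bits per character plus one parity bit, as on the IBM 726's
# seven-track tape. Even parity is assumed here for illustration.
def add_parity(bits):
    assert len(bits) == 6
    parity = sum(bits) % 2            # 1 if the count of 1-bits is odd
    return bits + [parity]            # 7 bits total, even number of 1s

def check(word):
    return sum(word) % 2 == 0         # detects any single-bit error

word = add_parity([1, 0, 1, 1, 0, 0])
print(word, check(word))              # [1, 0, 1, 1, 0, 0, 1] True
```

Flipping any single bit of the seven makes `check` fail, which is exactly the class of read error the parity track was there to catch.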

Think of VHS tapes. What did you have to do to watch a movie again? Rewind the tape! And how many times did you spin a cassette on a pencil to avoid wasting your player's batteries and risking a torn or chewed tape? The same goes for the tapes used in computers. Programs couldn't simply jump to an arbitrary section of tape or randomly access data; they could only read and write in strictly sequential order.

1956


Fast forward a few years to 1956, when the era of magnetic disk storage began with IBM's completion of the RAMAC 305 computer system, which IBM shipped to Zellerbach Paper in San Francisco. This computer was the first to use a moving-head hard disk. The RAMAC disk drive consisted of fifty magnetized metal platters 60.96 cm in diameter, capable of storing about five million characters of data at 7 bits per character, spinning at 1,200 rpm. The storage capacity was about 3.75 megabytes.

RAMAC allowed real-time access to large amounts of data, unlike magnetic tape or punched cards. IBM advertised the RAMAC as capable of storing the equivalent of 64,000 punched cards. The RAMAC introduced the concept of continuously processing transactions as they occur, so that data could be retrieved immediately while still fresh. Data in the RAMAC could now be accessed at 100,000 bits per second. Previously, with tape, we had to read and write data sequentially and could not randomly jump to different sections of the tape. Real-time random access to data was truly revolutionary at the time.

1963


Let's fast forward to 1963, when DECtape was introduced. Its name comes from the Digital Equipment Corporation, known as DEC. DECtape was inexpensive and reliable, so it was used across many generations of DEC computers. It was 19 mm (3/4-inch) tape laminated between two layers of mylar on a four-inch (10.16 cm) reel.

Unlike its heavy, bulky predecessors, DECtape could be carried by hand, which made it a great option for personal computers. Unlike its 7-track counterparts, DECtape had 6 data tracks, 2 cue tracks, and 2 clock tracks. Data was recorded at 350 bits per inch (138 bits per cm). Our data byte, 8 bits here but expandable to 12, could be transferred to DECtape at 8,325 12-bit words per second at a tape speed of 93 (plus or minus 12) inches per second. That's 8% more digits per second than UNISERVO's metal tape.
 

1967


Four years later, in 1967, a small IBM team began working on a floppy disk drive, codenamed Minnow. At the time, the team was tasked with developing a reliable and inexpensive way to load microcode into IBM System/370 mainframes. The project was subsequently repurposed to load microcode into the controller for the IBM 3330 Direct Access Storage Facility, codenamed Merlin.

Our byte can now be stored on read-only 8-inch magnetically coated mylar disks, known today as floppy disks. At release, the product was called the IBM 23FD Floppy Disk Drive System. The disks could hold 80 kilobytes of data. Unlike hard drives, a user could easily move a floppy disk in its protective jacket from one drive to another. Later, in 1973, IBM released a read/write floppy disk, which then became the industry standard.
 

1969

In 1969, the Apollo Guidance Computer (AGC) with rope memory was launched aboard Apollo 11, which carried American astronauts to the moon and back. This rope memory was made by hand and could hold 72 kilobytes of data. Manufacturing rope memory was laborious, slow, and required skills akin to weaving; weaving a program into rope memory could take months. But it was the right tool for a time when it was vital to fit as much as possible into a strictly limited space. A wire that passed through one of the circular cores represented a 1; a wire that passed around the core represented a 0. Our data byte would take a person several minutes to weave into the rope.

1977


In 1977, the Commodore PET, the first (successful) personal computer, was released. The PET used the Commodore 1530 Datasette (data plus cassette). The PET converted data into analog audio signals, which were then stored on cassettes. This made for an economical and reliable, albeit very slow, storage solution. Our small data byte could be transferred at roughly 60-70 bytes per second. Cassettes could hold about 100 kilobytes per 30-minute side, with two sides per tape. For example, one side of a cassette could hold about two 55 KB images. Datasettes were also used in the Commodore VIC-20 and Commodore 64.

1978


A year later, in 1978, MCA and Philips introduced the LaserDisc under the name "DiscoVision". Jaws was the first film sold on LaserDisc in the US. The audio and video quality was much better than the competition's, but the LaserDisc proved too expensive for most consumers. Unlike the VHS cassettes people used to record television programs, the LaserDisc could not be written to. It worked with analog video, analog FM stereo sound, and digital audio using pulse-code modulation, or PCM. The discs were 12 inches (30.48 cm) in diameter and consisted of two single-sided aluminum discs coated with plastic. Today the LaserDisc is remembered as the forerunner of CDs and DVDs.

1979


A year later, in 1979, Alan Shugart and Finis Conner founded Seagate Technology with the idea of scaling a hard drive down to the size of a 5¼-inch floppy disk, the standard at the time. Their first product, in 1980, was the Seagate ST-506, the first compact hard drive. It held five megabytes of data, five times a standard floppy disk of the era, and the founders met their goal of shrinking the drive to the 5¼-inch floppy form factor. The new storage device was a rigid metal platter coated on both sides with a thin layer of magnetic material for storing data. Our data bytes could be transferred to the disk at 625 kilobytes per second, roughly the size of a typical animated GIF every second.

1981


Fast forward a couple of years to 1981, when Sony introduced the first 3.5-inch floppy disks. Hewlett-Packard became an early adopter of the technology in 1982 with its HP-150, which made the 3.5-inch floppy famous and put it into wide industrial use. The disks were single-sided, with a formatted capacity of 161.2 kilobytes and an unformatted capacity of 218.8 kilobytes. A double-sided version appeared in 1982, and the Microfloppy Industry Committee (MIC), a consortium of 23 media companies, based the 3.5-inch floppy specification on Sony's original design, cementing the format into history as we know it. Our data byte can now be stored on an early version of one of the most common media ever: the 3.5-inch floppy disk. Later, a pair of 3.5-inch floppies containing Oregon Trail became the most important part of my childhood.

1984


Shortly thereafter, in 1984, the CD-ROM (Compact Disc Read-Only Memory) was announced: 550-megabyte discs from Sony and Philips. The format grew out of CD-DA (Compact Disc Digital Audio), which was used to distribute music. CD-DA had been developed by Sony and Philips in 1982 and had a capacity of 74 minutes. According to legend, when Sony and Philips were negotiating the CD-DA standard, one of the four people involved insisted that it be able to hold the entire Ninth Symphony. The first product released on CD was Grolier's Electronic Encyclopedia, published in 1985. Its nine million words took up only 12% of the available disc space of 553 mebibytes. We would have more than enough room for the encyclopedia and our data byte. Shortly thereafter, in 1985, computer companies worked together to create a common disc standard so that any computer could read them.

1984

Also in 1984, Fujio Masuoka developed a new type of floating-gate memory called flash memory that was capable of being erased and rewritten many times.

Let's take a look at flash memory built on floating-gate transistors. Transistors are electrical gates that can be switched on and off individually. Since each transistor can be in two states (on and off), it can store two different values: 0 and 1. The floating gate is a second gate added in the middle of the transistor, insulated by a thin oxide layer. These transistors use a small voltage applied to the gate to indicate whether the transistor is on or off, which in turn translates to 0 or 1.
 
With floating gates, when a suitable voltage is applied across the oxide layer, electrons tunnel through it and become trapped on the gate. They remain there even when the power is off. When no electrons sit on the floating gate, the cell represents a 1; when electrons are trapped, it represents a 0. Reversing the process, applying a suitable voltage across the oxide layer in the opposite direction, drives the electrons off the floating gate and restores the transistor to its original state. That makes the cells both programmable and non-volatile. Our byte could be programmed into the transistors as 01101010, with electrons trapped in the floating gates to represent the zeros.
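The electron-trapping scheme described above can be mimicked with a toy model. This is purely an illustration of the 0/1 mapping, not of real flash circuitry:

```python
# Toy model of a row of floating-gate cells.
# True  = electrons trapped on the gate (reads as 0)
# False = gate empty (reads as 1)
def program(byte_value):
    bits = format(byte_value, "08b")
    return [bit == "0" for bit in bits]   # trap electrons for each 0

def read(cells):
    return int("".join("0" if trapped else "1" for trapped in cells), 2)

cells = program(0x6A)         # our byte j
print(read(cells) == 0x6A)    # True: erasing and rewriting is just re-programming
```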

Masuoka's design was somewhat cheaper but less flexible than electrically erasable PROM (EEPROM), since it required several groups of cells to be erased together; but that is also what gave it its speed.

Masuoka was working for Toshiba at the time. He eventually left to work at Tohoku University, unhappy that the company had not rewarded him for his work. Masuoka sued Toshiba, demanding compensation, and in 2006 was paid 87 million yen, the equivalent of about 758 thousand US dollars. That still seems paltry given how influential flash memory has become in the industry.

Since we are talking about flash memory, it is worth noting the difference between NOR and NAND flash. As we already know from Masuoka, flash stores information in memory cells made of floating-gate transistors. The technologies' names come directly from how those memory cells are organized.

In NOR flash, individual memory cells are connected in parallel, providing random access. This architecture reduces read time for random access to microprocessor instructions, which makes NOR flash ideal for lower-density, mostly read-only applications. This is why most processors load their firmware from NOR flash. Masuoka and colleagues introduced NOR flash in 1984 and NAND flash in 1987.

NAND flash developers gave up random access in order to achieve a smaller memory cell. This yields a smaller chip and a lower cost per bit. The NAND flash architecture connects memory transistors in series, in strings of eight. It achieves high storage density, smaller cell size, and faster writing and erasing, since it can program blocks of data at once. The price is having to rewrite data whenever it is written non-sequentially into a block that already contains data.

1991

Fast forward to 1991, when a prototype solid-state drive (SSD) was created by SanDisk, then known as SunDisk. The design combined a flash memory array, non-volatile memory chips, and an intelligent controller that automatically detected and corrected defective cells. The drive held 20 megabytes in a 2.5-inch form factor and cost about $1,000. IBM used it in the ThinkPad.

1994


One of my personal childhood favorites among storage media was the Zip disk. In 1994, Iomega released the Zip disk, a 100 MB cartridge in a 3.5-inch form factor, only slightly thicker than a standard 3.5-inch diskette. Later versions could store up to 2 gigabytes. These disks were convenient because they were the size of a floppy disk but could hold far more data. Our data bytes could be written to a Zip disk at 1.4 megabytes per second; for comparison, a 1.44-megabyte 3.5-inch floppy of the day wrote at about 16 kilobytes per second. On a Zip disk, the heads read and write data contactlessly, flying above the surface, similar to a hard drive but unlike other floppy disks. Zip drives soon fell out of use due to reliability and availability problems.

1994


In the same year, SanDisk introduced CompactFlash, which was widely used in digital video cameras. As with CDs, CompactFlash speed is based on "x" ratings such as 8x, 20x, 133x, and so on. The maximum transfer rate is calculated from the original audio CD bit rate of 150 kilobytes per second: R = K x 150 KB/s, where R is the transfer rate and K is the nominal rating. So for 133x CompactFlash, our data byte would be written at 133 x 150 KB/s, or about 19,950 KB/s (roughly 19.95 MB/s). The CompactFlash Association was founded in 1995 to create an industry standard for flash memory cards.
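The rating formula translates directly into a few lines of code, a small sketch of the arithmetic above:

```python
BASE_KB_S = 150  # 1x = the original audio CD bit rate, 150 KB/s

def cf_rate_kb_s(x_rating):
    """Nominal CompactFlash transfer rate: R = K * 150 KB/s."""
    return x_rating * BASE_KB_S

for k in (8, 20, 133):
    print(f"{k}x -> {cf_rate_kb_s(k):,} KB/s")
# 133x -> 19,950 KB/s, i.e. about 19.95 MB/s
```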

1997

A few years later, in 1997, the rewritable CD (CD-RW) was released. This optical disc was used for storing data and for copying and transferring files between devices. A CD-RW can be rewritten about 1,000 times, which was not a limiting factor at the time, since users rarely overwrote their data.

CD-RW is based on surface-reflectivity technology. In a CD-RW, phase changes in a special coating of silver, tellurium, and indium determine whether the read beam is reflected or not, which means 1 or 0. When the compound is in its crystalline state it is translucent, meaning 1. When it is melted into the amorphous state it becomes opaque and non-reflective, meaning 0. So we could write our data byte as 01101010.

Eventually, DVDs took over most of the market from CD-RWs.

1999

Let's move on to 1999, when IBM introduced the world's smallest hard drives at the time: the 170 MB and 340 MB IBM Microdrives. These were tiny hard drives, 2.54 cm (1 inch) across, designed to fit into CompactFlash Type II slots. The plan was to create a device used like CompactFlash but with larger capacity. However, they were soon displaced by USB flash drives and, once available, larger CompactFlash cards. Like other hard drives, Microdrives were mechanical and contained tiny spinning platters.

2000

A year later, in 2000, USB flash drives were introduced: flash memory enclosed in a small form factor with a USB interface. Speed depended on the USB version used: USB 1.1 was limited to 1.5 megabytes per second, USB 2.0 could handle 35 megabytes per second, and USB 3.0, 625 megabytes per second. The first USB 3.1 Type-C drives were announced in March 2015 with read/write speeds of 530 megabytes per second. Unlike floppy and optical disks, USB devices are harder to scratch, yet offer the same data storage, file transfer, and backup capabilities. Floppy and CD-ROM drives were quickly displaced by USB ports.
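The practical difference between those generations is easiest to feel as transfer time. A back-of-the-envelope sketch using the nominal figures above and a hypothetical 700 MB file (about one CD-ROM's worth of data):

```python
# Nominal throughput in megabytes per second, as quoted above.
usb_speeds = {"USB 1.1": 1.5, "USB 2.0": 35, "USB 3.0": 625}

FILE_MB = 700  # hypothetical file: roughly one CD-ROM of data

for version, mb_s in usb_speeds.items():
    print(f"{version}: {FILE_MB / mb_s:.1f} s to transfer {FILE_MB} MB")
```

Real-world speeds are lower than these nominal figures, but the ratios (minutes for USB 1.1, about a second for USB 3.0) hold.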

2005


In 2005, hard disk drive (HDD) manufacturers began shipping products with perpendicular magnetic recording, or PMR. Interestingly, this happened at the same time Apple announced the iPod Nano, which used flash memory in place of the 1-inch hard drives of the iPod Mini.

A typical hard drive contains one or more platters coated with a magnetically sensitive film of tiny magnetic grains. Data is written when the magnetic recording head flies just above the spinning platter. It is very much like a gramophone, the only difference being that in a gramophone the stylus is in physical contact with the record. As the platters rotate, the air in contact with them creates a gentle breeze. Just as air over an airplane wing generates lift, the airflow generates lift on the airfoils of the disk heads. The head quickly changes the magnetization of one magnetic region of grains so that its magnetic pole points up or down, denoting 1 or 0.
 
The forerunner of PMR was longitudinal magnetic recording, or LMR. PMR's recording density can be more than three times that of LMR. The main difference is that in PMR media the grain structure and the magnetic orientation of the stored data are columnar rather than longitudinal. PMR has better thermal stability and an improved signal-to-noise ratio (SNR) thanks to better grain separation and uniformity. It is also easier to write, thanks to stronger head fields and better magnetic media alignment. Like LMR, PMR's fundamental limits rest on the thermal stability of the magnetically recorded data bits and on having enough SNR to read back what was recorded.

2007

In 2007, the first 1 TB hard drive was announced by Hitachi Global Storage Technologies. The Hitachi Deskstar 7K1000 used five 3.5-inch 200 GB platters and spun at 7,200 rpm. This is a significant improvement over the world's first hard drive, the IBM RAMAC 350, with its capacity of approximately 3.75 megabytes. Oh, how far we have come in 51 years! But wait, there's more.

2009

In 2009, technical work began on NVM Express, or NVMe. Non-volatile memory (NVM) is memory that stores data permanently, as opposed to volatile memory, which needs constant power to retain data. NVMe answered the need for a scalable host controller interface for PCIe-based solid-state drives, hence the name. More than 90 companies joined the working group developing it. All of this built on the work of defining the Non-Volatile Memory Host Controller Interface Specification (NVMHCIS). The best NVMe drives today can handle about 3,500 megabytes per second on reads and 3,300 megabytes per second on writes. Writing the data byte j we started with is extremely fast compared with the couple of minutes it took to hand-weave it into rope memory for the Apollo Guidance Computer.
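To put that speed in perspective against two media from earlier in the article, here is a rough back-of-the-envelope comparison (treating one character as one byte for the 1951 tape):

```python
# Throughput in bytes per second, using figures quoted in this article.
media = {
    "UNISERVO tape (1951)": 7_200,      # ~7,200 characters per second
    "Zip disk (1994)":      1_400_000,  # 1.4 MB/s
}
NVME_READ = 3_500_000_000               # 3,500 MB/s sequential read

for name, rate in media.items():
    print(f"NVMe reads about {NVME_READ // rate:,}x faster than {name}")
```

That works out to roughly 486,000 times the UNISERVO and 2,500 times the Zip disk.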

Present and future

Storage Class Memory

Now that we've traveled through time (ha!), let's look at the current state of storage class memory. SCM, like NVM, is non-volatile, but SCM also delivers performance better than or comparable to main memory, along with byte addressability. SCM aims to solve some of today's cache problems, such as the low density of static random access memory (SRAM). Dynamic random access memory (DRAM) offers better density, but at the cost of slower access. DRAM also suffers from needing constant power to refresh memory. Let's unpack that a bit. Power is needed because the electrical charge on the capacitors slowly leaks away, which means that without intervention the data on the chip would soon be lost. To prevent this leakage, DRAM requires an external memory-refresh circuit that periodically rewrites the data in the capacitors, restoring them to their original charge.

Phase-change memory (PCM)

Earlier we looked at phase changes in CD-RWs; PCM is similar. The phase-change material is usually Ge-Sb-Te, also known as GST, which can exist in two states: amorphous and crystalline. The amorphous state has a higher resistance, denoting 0, than the crystalline state, denoting 1. By assigning data values to intermediate resistances, PCM can store multiple states as multi-level cells (MLC).

Spin-transfer torque random access memory (STT-RAM)

STT-RAM consists of two ferromagnetic, permanent magnetic layers separated by a dielectric, that is, an insulator that can transmit electrical force without conduction. It stores bits of data based on the difference in magnetic directions. One magnetic layer, called the reference layer, has a fixed magnetic direction, while the other magnetic layer, called the free layer, has a magnetic direction that is controlled by the current flow. For 1, the direction of magnetization of the two layers is aligned. For 0, both layers have opposite magnetic directions.

Resistive random access memory (ReRAM)
The ReRAM cell consists of two metal electrodes separated by a metal oxide layer. A bit like Masuoka's flash design where electrons pass through the oxide layer and get stuck in the floating gates or vice versa. However, when using ReRAM, the state of the cell is determined based on the concentration of free oxygen in the oxide layer of the metal.

Although these technologies are promising, they still have drawbacks. PCM and STT-RAM have high write latencies: PCM's is ten times that of DRAM, and STT-RAM's is ten times that of SRAM. PCM and ReRAM have a limit on writes before a hard error occurs, meaning a memory element gets stuck at a particular value.

In August 2015, Intel announced Optane, its product built on 3D XPoint. Optane claims 1,000 times the performance of NAND SSDs at four to five times the price of flash. Optane is proof that SCM is more than an experimental technology. It will be interesting to watch these technologies develop.

Hard Drives (HDD)

Helium hard drive (HHDD)

A helium drive is a high capacity hard disk drive (HDD) filled with helium and hermetically sealed during the manufacturing process. Like other hard drives, as we said earlier, it looks like a turntable with a magnetically coated spinning record. Typical hard drives simply have air inside the cavity, however this air causes some resistance as the platters rotate.

Helium balloons fly because helium is lighter than air; in fact, helium has 1/7 the density of air. This lowers the drag as the platters spin, reducing the energy needed to keep them rotating. But that benefit is secondary: helium's main distinguishing feature is that it lets you pack 7 platters into the same form factor that would normally hold only 5. Recalling our airplane-wing analogy, the effect is a perfect analog: because helium reduces drag, turbulence is eliminated.

We also know that helium balloons begin to sink after a few days because the helium escapes. The same could be said of drives: it took years before manufacturers created an enclosure that kept helium from escaping for the lifetime of the drive. Backblaze ran experiments and found that helium hard drives had an annualized failure rate of 1.03%, versus 1.06% for standard drives. That difference, of course, is so small that it is hard to draw any serious conclusion from it.

The helium-filled form factor can enclose a hard drive that uses the PMR we discussed above, or microwave-assisted magnetic recording (MAMR), or heat-assisted magnetic recording (HAMR); any magnetic storage technology can be paired with helium instead of air. In 2014, HGST combined two cutting-edge technologies in its 10 TB helium hard drive, which used host-managed shingled magnetic recording, or SMR. Let's dwell on SMR briefly, then look at MAMR and HAMR.

Shingled magnetic recording technology

Earlier we covered perpendicular magnetic recording (PMR), the predecessor of SMR. Unlike PMR, SMR records each new track so that it overlaps part of the previously written track. This makes the previous track narrower, allowing higher track density. The technology's name comes from the fact that the overlapping tracks strongly resemble shingles on a roof.

SMR makes the write process much more complicated, since writing one track overwrites part of the adjacent one. This does not matter when the platter is empty and data is written sequentially. But once you write to a series of tracks that already contain data, the existing adjacent data is erased; if an adjacent track contains data, it must be rewritten. This is quite similar to the NAND flash we discussed earlier.

Device-managed SMR drives hide this complexity in the firmware, exposing an interface like any other hard drive. Host-managed SMR devices, on the other hand, cannot be used without special adaptation of applications and operating systems: the host must write to the device strictly sequentially. In return, drive performance becomes 100% predictable. Seagate began shipping SMR drives in 2013, claiming 25% higher density than PMR.

Microwave magnetic recording (MAMR)

Microwave-assisted magnetic recording (MAMR) is a magnetic storage technology that, like HAMR (discussed below), uses extra energy to assist writing; MAMR does so with a spin-torque oscillator (STO). The STO sits in close proximity to the write head. When current is applied to the STO, a circular electromagnetic field with a frequency of 20-40 GHz is generated by the polarization of electron spins.

When such a field is applied, a resonance occurs in the ferromagnet used for MAMR, which leads to the precession of the magnetic moments of the domains in this field. In fact, the magnetic moment deviates from its axis and the recording head needs much less energy to change its direction (flip).

The use of MAMR technology makes it possible to take ferromagnetic substances with a higher coercive force, which means that it is possible to reduce the size of magnetic domains without fear of causing a superparamagnetic effect. The STO generator helps to reduce the size of the write head, which makes it possible to write information on smaller magnetic domains, and therefore increases the recording density.

Western Digital, also known as WD, introduced this technology in 2017, and Toshiba backed it shortly after, in 2018. While WD and Toshiba pursue MAMR, Seagate is betting on HAMR.

Heat-assisted magnetic recording (HAMR)

Heat-assisted magnetic recording (HAMR) is an energy-assisted magnetic storage technology that greatly increases the amount of data a magnetic device such as a hard drive can hold by using heat from a laser to help write data onto the surface of the platter. The heating allows data bits to be placed much closer together on the substrate, increasing data density and capacity.

This technology is quite difficult to implement. A 200 mW laser rapidly heats a tiny area to 400Β°C before writing, without disturbing or damaging the rest of the data on the disk; the heating, writing, and cooling must all complete in under a nanosecond. Meeting these challenges required developing nano-scale surface plasmons, also known as surface-guided lasers, in place of direct laser heating, plus new kinds of glass platters and heat-control coatings that withstand rapid spot heating without damaging the recording head or nearby data, along with numerous other engineering problems that had to be overcome.

Despite widespread skepticism, Seagate first demonstrated the technology in 2013, and the first disks began shipping in 2018.

The end of the tape, wind it to the beginning!

We started in 1951 and end this article with a look at the future of storage technology. Data storage has changed enormously over time: from paper tape to metal tape and magnetic tape, rope memory, spinning disks, optical discs, flash memory, and beyond. Progress has brought faster, smaller, and more capable storage devices.

If we compare NVMe with the 1951 UNISERVO metal tape, NVMe can read about 486,000 times as many digits per second. Compared with my childhood favorite, the Zip drive, NVMe can read about 2,500 times as many digits per second.

The one thing that remains constant is the use of 0s and 1s; the ways we record them vary enormously. I hope that the next time you burn a CD-RW of songs for a friend or save a home video to an Optical Disc Archive, you think about how the non-reflective surface translates to a 0 and the reflective surface to a 1. And if you ever record a mixtape on cassette, remember that it is a close relative of the Datasette used in the Commodore PET. Finally, don't forget to be kind and rewind.

Thanks to Robert Mustacchi and Rick Altherr for tidbits (I can't help it) throughout the article!

What else can you read on the Cloud4Y blog?

β†’ Easter eggs on topographic maps of Switzerland
β†’ Computer brands of the 90s, part 1
β†’ How the mother of a hacker entered the prison and infected the boss's computer
β†’ Diagnostics of network connections on the EDGE virtual router
β†’ How did the bank fail?


Source: habr.com
