What is Direct Attached Storage?

Direct-attached storage is data storage that is connected directly to a computer, as opposed to storage that is connected to a computer over a network.

Direct attached storage is data storage that is connected directly to a computer such as a PC or server, as opposed to storage that is connected to a computer over a network. Sometimes known as DAS, direct attached storage has an important role to play in many organizations’ storage strategy because of the specific benefits that it offers.

However, there are a number of disadvantages to direct attached storage, which means that it is not the best choice of storage in all circumstances. Let's take an in-depth look.

How Does DAS Work?

Almost every PC uses direct-attached storage in the form of one or more internal storage drives, which may be traditional hard disk drives or faster solid state drives (SSDs), typically connected using a Serial Advanced Technology Attachment (SATA) interface.

Many servers are also equipped with internal storage drives, which may be connected using SATA, faster Small Computer System Interface (SCSI), Serial-Attached SCSI (SAS), or other high-speed interfaces for better storage performance.

But direct attached storage does not have to be connected to a computer system internally. It also includes external drives or drive enclosures (which may contain multiple drives), typically connected using USB, eSATA, SAS or SCSI to an individual computer system.

The defining feature of all direct attached storage is that it is controlled by the single computer to which it is attached. That means that any other computer that needs to access the data stored on direct attached storage has to communicate with the computer it is attached to, rather than being able to access the data directly.

Direct Attached Storage

As the name suggests, Direct Attached Storage is closely connected to the computing device it serves, rather than using a more indirect network connection.

Benefits of DAS

  • High performance: Direct-attached storage offers fast access to data because it is attached to the computer that usually requires it. Network connectivity and congestion issues do not directly affect direct attached storage. However, a computer attempting to access data stored on direct attached storage connected to a storage server over a network will be subject to network conditions.
  • Easy to set up and configure: Computer systems are usually supplied with internal direct-attached storage, which is ready to use immediately. External direct-attached storage is usually “plug and play,” meaning that it can be used as soon as it is plugged into a suitable port such as a USB port.
  • Low cost: Direct-attached storage consists only of the storage device itself, plus any drive enclosure. That means that it can be very cost effective compared to other storage solutions that require hardware and software to run and manage the storage devices. 

Drawbacks of DAS

  • Limited scalability: Direct-attached storage is difficult to scale because options are limited by the number of internal drive bays, the availability of external ports, and the capacity of external direct attached storage devices. If internal direct-attached storage needs to be upgraded this may involve shutting down the host computer during the upgrade.
  • Poor performance possible when data needs to be shared: direct attached storage connected to a PC can be slow to provide data to other computers on a network, because performance depends in part on the resources of the host PC. Sharing data can also impact the performance of the host PC. This is less of an issue when direct attached storage is connected to powerful servers dedicated to storage, however.
  • No central management and backup: Ensuring the data stored on direct attached storage is available and backed up is much more complicated and generally more costly than arranging redundancy and backups on networked storage devices, which may include their own management, RAID and backup software. This is not a problem when only a few computers use direct attached storage, but it becomes an issue as organizations grow and computer numbers proliferate.

DAS Architecture

Direct attached storage architecture is very simple: PCs may access their own direct attached storage directly, or they can access data stored on direct attached storage connected to storage servers over a network. Direct attached storage is low cost, making it ideal for small businesses, which are unlikely to have rapidly expanding storage needs in the foreseeable future.

Other storage architectures such as those used with network attached storage (NAS) and storage area network (SAN) solutions are more complex, but offer benefits that direct attached storage cannot deliver.

Performance Differences and Use Case: SAN, NAS, and DAS Storage

It is not possible to make categorical statements about the performance differences between SAN, NAS, and DAS, because performance will always be affected by hardware and its configurations.

For example, a DAS setup which uses a number of high-speed SAS disks in a suitable RAID configuration will offer vastly superior performance to a DAS setup consisting of a single 5400 rpm IDE drive.
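To make the hardware dependence concrete, here is a minimal back-of-the-envelope sketch in Python. The per-drive throughput figures are assumptions chosen purely for illustration, not measurements of any particular product:

```python
# Rough, illustrative throughput estimates for two DAS configurations.
# Both per-drive figures are assumptions for the sake of the example.

def raid0_read_throughput(per_drive_mb_s: float, drive_count: int) -> float:
    """RAID 0 striping: sequential reads scale roughly with drive count,
    up to the limit of the controller and bus (ignored here)."""
    return per_drive_mb_s * drive_count

single_5400_rpm = 60.0     # MB/s, assumed for an old 5400 rpm IDE drive
sas_15k_drive = 200.0      # MB/s, assumed for a 15K rpm SAS drive

print(f"Single 5400 rpm IDE drive: {single_5400_rpm:.0f} MB/s")
print(f"8x SAS drives in RAID 0:   {raid0_read_throughput(sas_15k_drive, 8):.0f} MB/s")
```

In practice the RAID level, controller, and bus all cap the scaling, but the order-of-magnitude gap is the point. With that caveat, it is possible to offer some general conclusions: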

SAN

Storage area network solutions use a dedicated high-speed data network (usually based on Fibre Channel or iSCSI) to move block-level data around the storage environment between servers and expensive, but fully featured, SAN storage arrays. The features that SAN arrays offer include deduplication, compression, encryption, and various availability services such as backup and site mirroring.

SANs offer very high storage performance, reliability, and data availability, but because SAN storage arrays are very expensive and require management by a storage specialist they are only suitable for large organizations with sizeable storage budgets and IT support teams.

NAS

Network attached storage consists of computer hardware (usually a storage appliance), storage devices such as hard drives, and software which manages the storage devices, arranges data backups and redundancy (usually through some RAID configuration), and provides networked access to these devices, as well as restricting that access (often through Active Directory).

The computer hardware is optimized for managing and providing access to data, and that means that NAS usually offers much better performance than direct attached storage in environments where data needs to be shared by a number of different users. NAS can also be scaled up very quickly simply by adding more NAS devices to the network.

This makes NAS ideal in small and medium-sized businesses where data often needs to be shared between groups of users, and where storage requirements may increase rapidly as the company grows.

The main drawback to NAS is that storage data is carried over the organization’s normal (usually Ethernet) data network. This means that network congestion and performance degradation can be significant if users are accessing large files or moving large amounts of data to and from the NAS.

DAS

Direct attached storage generally offers high storage performance to the computer system to which it is directly attached: the data is located close to the system's RAM and processor, it is not affected by network congestion, and it can take advantage of fast computer bus interfaces such as SAS and SATA.

However, if DAS is attached to a storage server, then data will still have to travel over a network from the storage server to the computer system that requests it, which means it will still be subject to network congestion.

Comparison Chart: DAS vs. NAS vs. SAN

|                               | DAS                                                              | NAS          | SAN                  |
|-------------------------------|------------------------------------------------------------------|--------------|----------------------|
| Storage                       | Files                                                            | Files        | Blocks               |
| Connection                    | SAS, SATA, USB, eSATA, etc.                                      | Ethernet     | Fibre Channel, iSCSI |
| Accessed by                   | Attached computer system (server or PC)                          | Server or PC | Server               |
| Performance                   | High when attached to PC; low when accessed from separate system | Medium       | High                 |
| Cost                          | Low                                                              | Medium       | High                 |
| Suitability for shared access | Low                                                              | High         | High                 |
| Scalability                   | Low                                                              | Medium       | High                 |
| Storage features              | Few                                                              | Moderate     | Many                 |
| Best for (company size)       | Small                                                            | Medium       | Large                |
| Management complexity         | Low                                                              | Medium       | High                 |

Hackers Exploiting Oracle WebLogic zero-day With New Ransomware To Encrypt User Data

Hackers are exploiting the recently disclosed Oracle WebLogic Server remote code execution vulnerability to install a new variant of ransomware called “Sodinokibi.”

The vulnerability allows anyone with HTTP access to the server to carry out an attack without authentication. It affects Oracle WebLogic Server versions 10.3.6.0 and 12.1.3.0. Oracle fixed the issue on April 26 and assigned it CVE-2019-2725.

According to Cisco Talos's investigation, the initial stages of the attack were performed on April 25, the day before Oracle released the patch. On April 26, the attackers established connections with a number of vulnerable HTTP servers.

Attackers leveraged the vulnerability to download a copy of the ransomware from attacker-controlled servers; they also infected some legitimate hosts and repurposed them to serve the malware.

“Cisco IR Services and Talos observed the attack requests originating from 130.61.54[.]136 and the attackers were ultimately successful at encrypting a number of customer systems.”

The infection starts with an HTTP POST request containing a PowerShell or certutil command that downloads the malicious files and executes them.


Once the infection is triggered, it executes the vssadmin.exe utility, which manages shadowstorage, the mechanism that allows Windows to create manual or automatic backups. The ransomware uses it to delete those backups and stop any data recovery.

The Ransom note directs victims to the .onion website and to a public domain (decryptor[.]top) which was registered on March 31.


The website asks victims to buy decryptor software to decrypt their files. To buy it, a victim must create a Bitcoin wallet, buy $2,500 worth of Bitcoin, and transfer the bitcoins to the attackers' wallet address to download the decryptor software. The attackers also offer an option to test the decryptor tool by uploading a single encrypted image.

After the Sodinokibi ransomware deployment, the attackers chose to distribute Gandcrab v5.2 to the same victims, apparently thinking their earlier attempts had been unsuccessful.

It is recommended to patch the CVE-2019-2725 vulnerability; Oracle has published a security alert along with patch availability details.

Indicators of Compromise

Ransomware samples: 
0fa207940ea53e2b54a2b769d8ab033a6b2c5e08c78bf4d7dade79849960b54d
34dffdb04ca07b014cdaee857690f86e490050335291ccc84c94994fa91e0160
74bc2f9a81ad2cc609b7730dbabb146506f58244e5e655cbb42044913384a6ac
95ac3903127b74f8e4d73d987f5e3736f5bdd909ba756260e187b6bf53fb1a05
fa2bccdb9db2583c2f9ff6a536e824f4311c9a8a9842505a0323f027b8b51451

Distribution URLs:
hxxp://188.166.74[.]218/office.exe
hxxp://188.166.74[.]218/radm.exe
hxxp://188.166.74[.]218/untitled.exe
hxxp://45.55.211[.]79/.cache/untitled.exe

Attacker IP:
130.61.54[.]136

Attacker Domain:
decryptor[.]top

How reliable are modern hard drives?

If you want to know how reliable modern hard drives are, ask a company that uses a lot of them.

All hard drive manufacturers provide reliability data for their offerings, but if you want to really know how well they stand up to use, ask a company that uses a lot of them.

Cloud storage specialist Backblaze is a good example.

The good news is that Backblaze publishes quarterly stats and reliability data for the drives it uses, and this data gives us a glimpse into real-world storage reliability.

The data for Q1 2019 contains some interesting tidbits. For example, the cloud backup company has 106,238 hard drives in three data centers. 1,913 of those are boot drives, while the rest are used for storage.

With that many drives in use, trends start to stand out. For example, over the past three years, the annualized failure rates for Seagate and HGST drives have improved, with Seagate's failure rate down 50 percent in that period.
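Backblaze's headline metric is the annualized failure rate (AFR), which it derives from drive-days and failure counts. A small sketch of the arithmetic, using made-up sample numbers rather than Backblaze's actual figures:

```python
def annualized_failure_rate(drive_days: float, failures: int) -> float:
    """Backblaze-style AFR: failures per drive-year, expressed as a percentage."""
    drive_years = drive_days / 365.0
    return failures / drive_years * 100.0

# Hypothetical example: 10,000 drives running for a 90-day quarter, 45 failures.
print(f"AFR: {annualized_failure_rate(10_000 * 90, 45):.2f}%")  # -> AFR: 1.83%
```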

Quarterly failure rates for Seagate and HGST hard drives. (Image: Backblaze)

But it’s also interesting to note that Seagate failure rates have started to steadily increase over the past three quarters. Backblaze doesn’t yet have an explanation for this.

As for future data, Backblaze is looking to roll out at least twenty 20TB drives for testing by the end of 2019, along with at least one HAMR-based drive from Seagate and/or one MAMR drive from Western Digital.


Galaxy S10 5G bursts into flames, but Samsung refuses to take responsibility

One month after launching the Galaxy Note 7 in August 2016, Samsung was forced to suspend sales of the flagship phone when a manufacturing defect was discovered in its batteries that caused them to generate excessive heat and occasionally catch fire. After more problems were reported with the first batch of replacements, Samsung issued a second recall and ceased production of the Galaxy Note 7 altogether.

The catastrophic episode resulted in the company implementing a new eight-step testing and inspection process for its batteries in all future devices, and in the years since, there haven’t been any widespread issues of note. But even isolated incidents are enough to set off alarm bells following the Note 7 debacle.

This week, a South Korean Galaxy S10 5G owner posted photos of the phone scorched beyond recognition. He says that he hadn't done anything that would cause the S10 5G to combust, claiming it burnt “without [reason].”

“My phone was on the table when it started smelling burnt and smoke soon engulfed the phone,” the S10 5G owner, who asked to be identified by his last name, Lee, told AFP. “I had to drop it to the ground when I touched it because it was so hot.” He then added that “everything inside [the phone] was burnt.”

Samsung, unsurprisingly, refused to refund Lee for his ruined phone. The South Korean company told AFP that the damage to the phone was the result of an “external impact,” not an internal issue. Details surrounding the incident are rather scant from both Samsung and Lee, so until more comes of this, it’s hard to say whether or not the Galaxy S10 5G is actually problematic. That said, this is the first such burnt S10 5G we’ve heard of.


Micron’s new 15TB SSD is almost affordable

Ever so slightly closes the price gap between high-capacity SSDs and HDDs

The 15.36TB drive, which is a smidgen smaller in capacity than the largest hard disk drive currently on the market (a 16TB Toshiba HDD model), costs “only” €2,474.78 plus sales tax, or around $2,770 (about £2,140).

While that is far more expensive than smaller capacity SSDs (Silicon Power’s 1TB SSD retails for under $95 at Amazon), it is less than half the average price of competing enterprise SSDs like the Seagate Nytro 3330, the Western Digital Ultrastar DC SS530, the Toshiba PM5-R or the Samsung SSD PM1633a. 

HDD still wins the price/capacity comparison

And just for comparison, a 14TB hard disk drive, the MG07 from Toshiba, retails for around $440, about a sixth of the price, which gives you an idea of the price gulf between the two. If you are looking for something bigger, then the Samsung SSD PM1643 is probably your only bet at €7,294.22 excluding VAT.
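A quick cost-per-terabyte calculation, using the approximate US dollar prices quoted above, makes the gap explicit:

```python
# Cost per terabyte for the two drives discussed above (approximate USD).
drives = {
    "Micron 9300 Pro 15.36TB SSD": (2770.0, 15.36),
    "Toshiba MG07 14TB HDD": (440.0, 14.0),
}

for name, (price_usd, capacity_tb) in drives.items():
    print(f"{name}: ${price_usd / capacity_tb:,.0f} per TB")
# Roughly $180/TB for the SSD versus about $31/TB for the HDD,
# which matches the "about a sixth of the price" figure above.
```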

Bear in mind that these are 2.5-inch models, which are far smaller than 3.5-inch hard disk drives. Competing enterprise drives typically connect to the host computer using a connector called SAS (Serial Attached SCSI); the Micron 9300 Pro instead connects via U.2 PCIe (NVMe), offering read speeds of up to 3.5GBps.

For the ultimate data hoarder, there's the Nimbus Data ExaDrive, which boasts a capacity of 100TB, albeit in a 3.5-inch form factor.


Data in a Flash, Part I: the Evolution of Disk Storage and an Introduction to NVMe

NVMe drives have paved the way for computing at stellar speeds, but the technology didn’t suddenly appear overnight. It was through an evolutionary process that we now rely on the very performant SSD for our primary storage tier.

Solid State Drives (SSDs) have taken the computer industry by storm in recent years. The technology is impressive with its high-speed capabilities. It promises low-latency access to sometimes critical data while increasing overall performance, at least when compared to what is now becoming the legacy Hard Disk Drive (HDD). With each passing year, SSD market shares continue to climb, replacing the HDD in many sectors. The effects of this are seen in personal, mobile and server computing.

IBM first unleashed the HDD into the computing world in 1956. By the 1960s, the HDD became the dominant secondary storage device for general-purpose computers (emphasis on secondary storage device, memory being the first). Capacity and performance were the primary characteristics defining the HDD. In many ways, those characteristics continue to define the technology—although, not in the most positive ways (more details on that shortly).

The first IBM-manufactured hard drive, the 350 RAMAC, was as large as two medium-sized refrigerators with a total capacity of 3.75MB on a stack of 50 disks. Modern HDD technology has produced disk drives with volumes as high as 16TB, specifically with the more recent Shingled Magnetic Recording (SMR) technology coupled with helium—yes, that’s the same chemical element abbreviated as He in the periodic table. The sealed helium gas increases the potential speed of the drive while creating less drag and turbulence. Being less dense than air, it also allows more platters to be stacked in the same space used by 2.5″ and 3.5″ conventional disk drives.

""

Figure 1. A lineup of Standard HDDs throughout Their History and across All Form Factors (by Paul R. Potts—Provided by Author, CC BY-SA 3.0 us, https://commons.wikimedia.org/w/index.php?curid=4676174)

A disk drive's performance typically is calculated by the time required to move the drive's heads to a specific track or cylinder and the time it takes for the requested sector to move under the head—that is, the latency. Performance is also measured by the rate at which the data is transferred.

Being a mechanical device, an HDD does not perform nearly as fast as memory. A lot of moving components add to latency times and decrease the overall speed by which you can access data (for both read and write operations).

""

Figure 2. Disk Platter Layout

Each HDD has magnetic platters inside, which often are referred to as disks. Those platters are what store the information. An HDD will have more than one platter sitting on top of another with a minimal amount of space in between, bound by a spindle that spins them in unison.

Similar to how a phonograph record works, the platters are double-sided, and the surface of each has circular etchings called tracks. Each track is made up of sectors. The number of sectors on each track increases as you get closer to the edge of a platter. Nowadays, you’ll find that the physical size of a sector is either 512 bytes or 4 Kilobytes (4096 bytes). In the programming world, a sector typically equates to a disk block.
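As a toy illustration of how these quantities relate, the following sketch computes the capacity of an invented, simplified drive. Real drives use zoned recording, so sectors per track is not actually constant:

```python
# Toy capacity calculation for a simplified (non-zoned) disk layout.
# Every geometry number here is invented for illustration.
platters = 4
surfaces = platters * 2          # platters are double-sided
tracks_per_surface = 100_000
sectors_per_track = 1_000        # real drives vary this by zone
sector_bytes = 4096              # modern 4K sector; older drives use 512

capacity = surfaces * tracks_per_surface * sectors_per_track * sector_bytes
print(f"Capacity: {capacity / 1e12:.2f} TB")  # -> Capacity: 3.28 TB
```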

The speed at which a disk spins affects the rate at which information can be read. This is defined as a disk's rotation rate, and it's measured in revolutions per minute (RPM). This is why you'll find modern drives operating at speeds like 7200 RPM (or 120 rotations per second). Older drives spin at slower rates, and high-end drives may spin at higher rates. This limitation creates a bottleneck.

An actuator arm sits on top of or below a platter. It extends and retracts over its surface. At the end of the arm is a read-write head. It sits at a microscopic distance above the surface of the platter. As the disk rotates, the head can access information on the current track (without moving). However, if the head needs to move to the next track or to an entirely different track, the time to read or write data is increased. From a programmer’s perspective, this is referred to as the disk seek, and this creates a second bottleneck for the technology.
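Putting the two bottlenecks together, the average time to service a random read can be modeled as seek time plus rotational latency plus transfer time. Only the rotation figure below comes from the text; the seek and transfer-rate numbers are assumptions typical of desktop drives:

```python
# Rough HDD access-time model: seek + rotational latency + transfer.
rpm = 7200
avg_seek_ms = 9.0         # assumed; desktop drives are commonly ~8-10 ms
transfer_mb_s = 150.0     # assumed sustained media transfer rate
request_kb = 4.0          # one 4K sector / disk block

rotation_ms = 60_000.0 / rpm          # one full rotation: ~8.33 ms
avg_rotational_ms = rotation_ms / 2   # on average, wait half a turn
transfer_ms = request_kb / 1024.0 / transfer_mb_s * 1000.0

total_ms = avg_seek_ms + avg_rotational_ms + transfer_ms
print(f"~{total_ms:.1f} ms per random 4K read")  # -> ~13.2 ms
```

Note that the transfer term is negligible; the mechanical seek and rotation dominate, which is exactly the overhead an SSD eliminates.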

Now, although HDDs’ performance has been increasing with newer disk access protocols—such as Serial ATA (SATA) and Serial Attached SCSI (SAS)—and technologies, it’s still a bottleneck to the CPU and, in turn, to the overall computer system. Each disk protocol has its own hard limits on maximum throughput (megabytes or gigabytes per second). The method in which data is transferred is also very serialized. This works well with a spinning disk, but it doesn’t scale well to Flash technologies.

Since the HDD's conception, engineers have been devising newer and more creative methods to help accelerate its performance (for example, with memory caching), and in some cases, they've completely replaced it with technologies like the SSD. Today, SSDs are being deployed everywhere—or so it seems. Cost per gigabyte is decreasing, and the price gap is narrowing between Flash and traditional spinning rust. But, how did we get here in the first place? The SSD wasn't an overnight success. Its history is more of a gradual one, dating back as far as when the earliest computers were being developed.

A Brief History of Computer Memory

Memory comes in many forms, but before Non-Volatile Memory (NVM) came into the picture, the computing world first was introduced to volatile memory in the form of Random Access Memory (RAM). RAM introduced the ability to write/read data to/from any location of the storage medium in the same amount of time. The often random physical location of a particular set of data did not affect the speed at which the operation completed. The use of this type of memory masked the pain of accessing data from the exponentially slower HDD, by caching data read often or staging data that needed to be written.

The most notable of RAM technologies is Dynamic Random Access Memory (DRAM). It also came out of the IBM labs, in 1966, a decade after the HDD. Being that much closer to the CPU and also not having to deal with mechanical components (that is, the HDD), DRAM performed at stellar speeds. Even today, many data storage technologies strive to perform at the speeds of DRAM. But, there was a drawback, as I emphasized above: the technology was volatile, and as soon as the capacitor-driven integrated circuits (ICs) were deprived of power, the data disappeared along with it.

Another set of drawbacks to DRAM technology is its very low capacity and its price per gigabyte. Even by today's standards, DRAM is just too expensive when compared to the slower HDDs and SSDs.

Shortly after DRAM's debut came Erasable Programmable Read-Only Memory (EPROM). Invented at Intel, it hit the scene around 1971. Unlike its volatile counterparts, EPROM offered an extremely sought-after industry game-changer: memory that retains its data after system power is shut off. EPROM used transistors instead of capacitors in its ICs, and those transistors were capable of maintaining state even after the electricity was cut.

As the name implies, the EPROM was in its own class of Read-Only Memory (ROM). Data typically was pre-programmed into those chips using special devices or tools, and when in production, it had a single purpose: to be read from at high speeds. As a result of this design, EPROM immediately became popular in both embedded and BIOS applications, the latter of which stored vendor-specific details and configurations.

Moving Closer to the CPU

As time progressed, it became painfully obvious: the closer you move data (storage) to the CPU, the faster you're able to access (and manipulate) it. The closest memory to the CPU is the processor's registers. The number of registers available to a processor varies by architecture. A register's purpose is to hold a small amount of data intended for fast storage. Without a doubt, these registers are the fastest way to access small sizes of data.

Next in line, and following the CPU’s registers, is the CPU cache. This is a hardware cache built in to the processor module and utilized by the CPU to reduce the cost and time it takes to access data from the main memory (DRAM). It’s designed around Static Random Access Memory (SRAM) technology, which also is a type of volatile memory. Like a typical cache, the purpose of this CPU cache is to store copies of data from the most frequently used main memory locations. On modern CPU architectures, multiple and different independent caches exist (and some of those caches even are split). They are organized in a hierarchy of cache levels: Level 1 (L1), Level 2 (L2), Level 3 (L3) and so on. The larger the processor, the more the cache levels, and the higher the level, the more memory it can store (that is, from KB to MB). On the downside, the higher the level, the farther its location is from the main CPU. Although mostly unnoticeable to modern applications, it does introduce latency.
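One practical consequence of this hierarchy is that access patterns matter: code that walks memory in the order it is laid out keeps the caches warm. A small, self-contained demonstration follows; the effect is far larger in languages closer to the metal, and exact timings will vary by machine:

```python
# Summing a 2D array row by row (following memory layout) versus
# column by column (strided access that defeats the caches).
import time

N = 2000
matrix = [[1] * N for _ in range(N)]

start = time.perf_counter()
total = sum(matrix[i][j] for i in range(N) for j in range(N))  # row-major
row_major_s = time.perf_counter() - start

start = time.perf_counter()
total = sum(matrix[i][j] for j in range(N) for i in range(N))  # column-major
col_major_s = time.perf_counter() - start

print(f"row-major:    {row_major_s:.3f} s")
print(f"column-major: {col_major_s:.3f} s  (typically slower)")
```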

""

Figure 3. General Outline of the CPU and Its Memory Locations/Caches

The first documented use of a data cache built in to the processor dates back to 1969 and the IBM System/360 Model 85 mainframe computing system. It wasn’t until the 1980s that the more mainstream microprocessors started incorporating their own CPU caches. Part of that delay was driven by cost. Much like it is today, (all types of) RAM was very expensive.

So, the data access model goes like this: the farther you move away from the CPU, the higher the latency. DRAM sits much closer to the CPU than an HDD, but not as close as the registers or levels of caches designed into the IC.

""

Figure 4. High-Level Model of Data Access

The Solid-State Drive

The performance of a given storage technology was constantly gauged and compared to the speeds of CPU memory. So, when the first commercial SSDs hit the market, it didn’t take very long for both companies and individuals to adopt the technology. Even with a higher price tag, when compared to HDDs, people were able to justify the expense. Time is money, and if access to the drives saves time, it potentially can increase profits. However, it’s unfortunate that with the introduction of the first commercial NAND-based SSDs, the drive didn’t move data storage any closer to the CPU. This is because early vendors chose to adopt existing disk interface protocols, such as SATA and SAS. That decision did encourage consumer adoption, but again, it limited overall throughput.

""

Figure 5. SATA SSD in a 2.5″ Drive Form Factor

Even though the SSD didn't move any closer to the CPU, it did achieve a new milestone in the technology—it reduced seek times across the storage media, resulting in significantly lower latencies. That's because the drives were designed around ICs and contained no movable components. Overall performance was night and day compared to traditional HDDs.

The first official SSD manufactured without the need of a power source (that is, a battery) to maintain state was introduced in 1995 by M-Systems. These drives were designed to replace HDDs in mission-critical military and aerospace applications. By 1999, Flash-based technology was being offered in the traditional 3.5″ storage drive form factor, and it continued to be developed this way until 2007, when a revolutionary startup named Fusion-io (now part of Western Digital) decided to change the performance-limiting form factor of traditional storage drives and put the technology directly onto the PCI Express (PCIe) bus. This approach removed many unnecessary communication protocols and subsystems. The design also moved storage a bit closer to the CPU and produced a noticeable performance improvement. This new design not only changed the technology for years to come, but it also brought the SSD into traditional data centers.

Fusion-io’s products later inspired other memory and storage companies to bring somewhat similar technologies to the Dual In-line Memory Module (DIMM) form factor, which plugs in directly to the traditional RAM slot of the supported motherboard. These types of modules register to the CPU as a different class of memory and remain in a somewhat protected mode. Translation: the main system and, in turn, the operating system did not touch these memory devices unless it was done through a specifically designed device driver or application interface.

It’s also worth noting here that the transistor-based NAND Flash technology still paled in comparison to DRAM performance. I’m talking about microsecond latencies versus DRAM’s nanosecond latencies. Even in a DIMM form factor, the NAND-based modules just don’t perform as well as the DRAM modules.

Introducing NAND Memory

What makes an SSD faster than a traditional HDD? The simple answer is that it is memory built with chips and no moving components. The name of the technology—solid state—captures this very trait. But if you’d like a more descriptive answer, keep reading.

Instead of saving data onto spinning disks, SSDs save that same data to a pool of NAND flash. The NAND (or NOT-AND) technology is made up of floating gate transistors, and unlike the transistor designs used in DRAM (which must be refreshed multiple times per second), NAND is capable of retaining its charge state, even when power is not supplied to the device—hence the non-volatility of the technology.

At a much lower level, in a NAND configuration, electrons are stored in the floating gate. Opposite of how you read boolean logic, a charge is signified as a “0”, and the absence of a charge is a “1”. These bits are stored in cells, which are organized in a grid layout referred to as a block. Each individual row of the grid is called a page, with page sizes typically set to 4K (or more). Traditionally, there are 128–256 pages per block, with block sizes reaching as high as 1MB or larger.
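Using the figures just quoted, the units nest neatly. A tiny sketch, where the page and block counts are the typical values mentioned above rather than a fixed standard:

```python
# NAND geometry math using the typical figures quoted above.
page_kb = 4              # typical page size
pages_per_block = 256    # traditionally 128-256

block_kb = page_kb * pages_per_block
print(f"Block size: {block_kb} KB ({block_kb // 1024} MB)")  # 1024 KB = 1 MB

# A hypothetical 512GB drive expressed in these units:
drive_gb = 512
blocks = drive_gb * 1024 * 1024 // block_kb
print(f"A {drive_gb}GB drive holds about {blocks:,} such blocks")
```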

""

Figure 6. NAND Die Layout

There are different types of NAND, all defined by the number of bits per cell. As the name implies, a single-level cell (SLC) stores one bit. A multi-level cell (MLC) stores two bits. Triple-level cells (TLC) store three bits. And, new to the scene is the QLC. Guess how many bits it can store? You guessed it: four.

Now, although a TLC offers more storage density than an SLC NAND, it comes at a price: increased latency—that is, approximately four times worse for reads and six times worse for writes. The reason for this rests on how data moves in and out of the NAND cell. In an SLC NAND, the device’s controller needs to know only if the bit is a 0 or a 1. With an MLC, the cell holds more values—four to be exact: 00, 01, 10 or 11. In a TLC NAND, it holds eight values: 000, 001, 010, 011, 100, 101, 110, 111. That’s a lot of overhead and extra processing. Either way, regardless of whether your drive is using SLC or TLC NAND, it still will perform night-and-day faster than an HDD—minor details.
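The relationship between bits per cell and the number of charge states the controller must distinguish is a simple power of two, which this sketch enumerates:

```python
# Bits per cell vs. the distinct charge states a controller must resolve.
for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    states = 2 ** bits
    values = ", ".join(format(v, f"0{bits}b") for v in range(states))
    print(f"{name}: {bits} bit(s)/cell -> {states} states ({values})")
```

Each added bit doubles the states squeezed into the same voltage window, which is why read and write latencies climb with density.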

There's a lot more to share about NAND, such as how reads, writes and erases (Programmable Erase, or PE, cycles) work, the last of which eventually impacts write performance, as well as some of the technology's early pitfalls, but I won't bore you with that. Just remember: electrical charges to chips are much faster than moving heads across disk platters. It's time to introduce the NVMe.

The Boring Details

Okay, I lied. Write performance can and will vary throughout the life of the SSD. When an SSD is new, all of its data blocks are erased and presented as new, and incoming data is written directly to the NAND. Once the SSD has filled all of the free data blocks on the device, it must erase previously programmed blocks before it can write new data. In the industry, this moment is known as the device's write cliff. Erasing old blocks is the Programmable Erase (PE) cycle mentioned above, and it increases the device's write latency. Given enough time, you'll notice that a used SSD eventually doesn't perform as well as a brand-new one. A NAND cell can also withstand only a finite number of erases.

To overcome all of these limitations and eventual bottlenecks, vendors resort to various tricks, including the following (a rough endurance estimate follows the list):

  • The over-provisioning of NAND: although a device may register 3TB of storage, it may in fact be equipped with 6TB.
  • The coalescing of write data to reduce the impacts of write amplification.
  • Wear leveling: reduce the need of writing and rewriting to the same regions of the NAND.
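Here is the endurance estimate promised above: a back-of-the-envelope model of how capacity, PE cycles, write amplification, and workload combine into a drive lifetime. Every input is an assumption chosen for illustration, not a spec from any vendor:

```python
# Back-of-the-envelope SSD endurance estimate. All inputs are assumptions.
capacity_tb = 1.0
pe_cycles = 3000              # order of magnitude often cited for TLC NAND
write_amplification = 2.0     # internal NAND writes per host write
host_writes_tb_per_day = 0.5

total_nand_writes_tb = capacity_tb * pe_cycles
usable_host_writes_tb = total_nand_writes_tb / write_amplification
lifetime_years = usable_host_writes_tb / host_writes_tb_per_day / 365

print(f"Rated NAND writes:  {total_nand_writes_tb:,.0f} TB")
print(f"Usable host writes: {usable_host_writes_tb:,.0f} TB")
print(f"Estimated lifetime: {lifetime_years:.1f} years")  # -> 8.2 years
```

Over-provisioning and wear leveling both act on the write-amplification term, spreading erases so no single block hits its PE limit early.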

Non-Volatile Memory Express (NVMe)

Fusion-io built a closed and proprietary product. This fact alone brought many industry leaders together to define a new standard to compete against the pioneer and push more PCIe-connected Flash into the data center. With the first industry specifications announced in 2011, NVMe quickly rose to the forefront of SSD technologies. Remember, historically, SSDs were built on top of SATA and SAS buses. Those interfaces worked well for the maturing Flash memory technology, but with all the protocol overhead and bus speed limitations, it didn’t take long for those drives to experience their own fair share of performance bottlenecks (and limitations). Today, modern SAS drives operate at 12Gbit/s, while modern SATA drives operate at 6Gbit/s. This is why the technology shifted its focus to PCIe. With the bus closer to the CPU, and PCIe capable of performing at increasingly stellar speeds, SSDs seemed to fit right in. Using PCIe 3.0, modern drives can achieve speeds as high as 40Gbit/s. Support for NVMe drives was integrated into the Linux 3.3 mainline kernel (2012).
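The bus numbers above follow from line rate and encoding overhead. A quick calculation for the common links, using each standard's published encoding efficiency (real-world throughput is lower still once protocol overhead is added, and the exact headline figure depends on lane count and whether raw or post-encoding rates are quoted):

```python
# Usable bandwidth after line encoding for common storage links.
links = [
    # name, line rate per lane (Gbit/s), encoding efficiency, lanes
    ("SATA 3", 6.0, 8 / 10, 1),
    ("SAS-3", 12.0, 8 / 10, 1),
    ("PCIe 3.0 x4", 8.0, 128 / 130, 4),
]

for name, gbit_per_lane, efficiency, lanes in links:
    usable_gbit = gbit_per_lane * efficiency * lanes
    print(f"{name:12s}: {usable_gbit:5.1f} Gbit/s usable (~{usable_gbit / 8:.2f} GB/s)")
```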

""

Figure 7. A PCIe NVMe SSD (by Dsimic – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=41576100)

What really makes NVMe shine over the operating system’s legacy storage stacks is its simpler and faster queueing mechanisms. These are called the Submission Queues (SQs) and Completion Queues (CQs). Each queue is a circular buffer of a fixed size that the operating system uses to submit one or more commands to the NVMe controller. One or more of these queues also can be pinned to specific cores, which allows for more uninterrupted operations. Goodbye serial communication. Drive I/O is now parallelized.
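Conceptually, each queue is a fixed-size ring with producer and consumer indices. The following minimal model shows the mechanism only; real NVMe queues live in DMA-visible memory and are driven by doorbell registers and interrupts, all omitted here:

```python
# Minimal model of an NVMe-style fixed-size circular submission queue.
class SubmissionQueue:
    def __init__(self, size: int):
        self.slots = [None] * size
        self.head = 0   # advanced by the controller as it consumes commands
        self.tail = 0   # advanced by the host as it submits commands

    def submit(self, command) -> bool:
        if (self.tail + 1) % len(self.slots) == self.head:
            return False                            # queue is full
        self.slots[self.tail] = command
        self.tail = (self.tail + 1) % len(self.slots)
        return True        # at this point a real host rings the doorbell

    def consume(self):
        if self.head == self.tail:
            return None                             # queue is empty
        command = self.slots[self.head]
        self.slots[self.head] = None
        self.head = (self.head + 1) % len(self.slots)
        return command

sq = SubmissionQueue(4)
for lba in (0, 8, 16):
    sq.submit(("READ", lba))
print(sq.consume())  # -> ('READ', 0)
```

Because each core can own its own submission/completion pair, commands can be issued without cross-core locking, which is where much of NVMe's parallelism comes from.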

Non-Volatile Memory Express over Fabric (NVMeoF)

In the world of SAS or SATA, there is the Storage Area Network (SAN). SANs are designed around SCSI standards. The primary goal of a SAN (or any other storage network) is to provide access to one or more storage volumes, across one or more paths, for one or more operating system hosts in a network. Today, the most commonly deployed SAN is based on iSCSI, which is SCSI over TCP/IP. Technically, NVMe drives can be configured within a SAN environment, although the protocol overhead introduces latencies that make it a less than ideal implementation. In 2014, the NVM Express committee set out to rectify this with the NVMeoF standard.

The goals behind NVMeoF are simple: enable an NVMe transport bridge, which is built around the NVMe queuing architecture, and avoid any and all protocol translation overhead other than the supported NVMe commands (end to end). With such a design, network latencies noticeably drop (less than 200ns). This design relies on the use of PCIe switches. A second design has been gaining ground that’s based on the existing Ethernet fabrics using Remote Direct Memory Access (RDMA).

""

Figure 8. A Comparison of NVMe Fabrics over Other Storage Networks

The 4.8 Linux kernel introduced a lot of new code to support NVMeoF. The patches were submitted as part of a joint effort by the hard-working developers over at Intel, Samsung and elsewhere. Three major components were patched into the kernel, including the general NVMe Target Support framework. This framework enables block devices to be exported from the Linux kernel using the NVMe protocol. Dependent upon this framework, there is now support for NVMe loopback devices and also NVMe over Fabrics RDMA Targets. If you recall, this last piece is one of the two more common NVMeoF deployments.

Conclusion

So, there you have it, an introduction and deep dive into Flash storage. Now you should understand why the technology is both increasing in popularity and the preferred choice for high-speed computing. Part II of this article shifts focus to using NVMe drives in a Linux environment and accessing those same NVMe drives across an NVMeoF network.


Western Digital -5.6% on EPS miss, weak revenue mix

Western Digital (NASDAQ:WDC) -5.6% after reporting in-line Q3 revenue and an EPS miss. Results were weighed down by $110M in inventory charges in the cost of revenue, primarily due to flash memory products containing DRAM.

Peer Seagate (NASDAQ:STX) is down 1.1%.

Revenue breakdown: Client Devices, $1.63B (last year: $2.31B); Client Solutions, $804M ($1.04B); Data Center Devices and Solutions, $1.25B ($1.66B).

HDD units were 27.8M (consensus: 28.06M), with client compute at 12.9M (12.54M), non-compute at 9.3M (9.58M), and data center at 5.6M (5.94M).

ASP was $73 versus the $67.02 consensus.

Gross margin was 25.3%, below the 27.9% estimate.


The spin doctors: Researchers discover surprising quantum effect in hard disk drive material

Scientists find surprising way to affect information storage properties in metal alloy.

Sometimes scientific discoveries can be found along well-trodden paths. That proved the case for a cobalt-iron alloy material commonly found in hard disk drives.

As reported in a recent issue of Physical Review Letters, researchers from the U.S. Department of Energy’s (DOE) Argonne National Laboratory, along with Oakland University in Michigan and Fudan University in China, have found a surprising quantum effect in this alloy.

The effect involves the ability to control the direction of electron spin, and it could allow scientists to develop more powerful and energy-efficient materials for information storage. By changing the electron spin direction in a material, the researchers were able to alter its magnetic state. This greater control of magnetization allows more information to be stored and retrieved in a smaller space. Greater control could also yield additional applications, such as more energy-efficient electric motors, generators and magnetic bearings.

The effect the researchers discovered has to do with “damping,” in which the direction of electron spin controls how the material dissipates energy. “When you drive your car down a flat highway with no wind, the dissipating energy from drag is the same regardless of the direction you travel,” said Argonne materials scientist Olle Heinonen, an author of the study. “With the effect we discovered, it’s like your car experiences more drag if you’re traveling north-south than if you’re traveling east-west.”

“In technical terms, we discovered a sizable effect from magnetic damping in nanoscale layers of cobalt-iron alloy coated on one side of a magnesium oxide substrate,” added Argonne materials scientist Axel Hoffmann, another author of the study. “By controlling the electron spin, magnetic damping dictates the rate of energy dissipation, controlling aspects of the magnetization.”

The team’s discovery proved especially surprising because the cobalt-iron alloy had been widely used in applications such as magnetic hard drives for many decades, and its properties have been thoroughly investigated. It was conventional wisdom that this material did not have a preferred direction for electron spin and thus magnetization.

In the past, however, scientists prepared the alloy for use by “baking” it at high temperature, which orders the arrangement of the cobalt and iron atoms in a regular lattice, eliminating the directional effect. The team observed the effect by examining unbaked cobalt-iron alloys, in which cobalt and iron atoms can randomly occupy each other’s sites.

The team was also able to explain the underlying physics. In a crystal structure, atoms normally sit at perfectly regular intervals in a symmetric arrangement. In the crystal structure of certain alloys, there are slight differences in the separation between atoms that can be removed through the baking process; these differences remain in an “unbaked” material.

Squeezing such a material at the atomic level further changes the separation of the atoms, resulting in different interactions between atomic spins in the crystalline environment. This difference explains how the damping effect on magnetization is large in some directions, and small in others.

The result is that very small distortions in the atomic arrangement within the crystalline structure of cobalt-iron alloy have giant implications for the damping effect. The team ran calculations at the Argonne Leadership Computing Facility, a DOE Office of Science User Facility, that confirmed their experimental observations. 

The researchers' work appears in the March 21 online edition of Physical Review Letters and is entitled “Giant anisotropy of Gilbert damping in epitaxial CoFe films.”


NAND Flash Industry in 2019 Has Huge Variables

Back in 1Q18, companies such as Samsung, Toshiba and Micron enjoyed a 40% profit margin on NAND chips, but prices began to fall sharply after the first quarter, according to a report from Mari Technology Information Co., Ltd. published on February 16, 2019.

NAND flash prices are estimated to have fallen by at least 50% in 1H18. According to analysts, prices will continue to fall by 30% annually until the next round of price increases.

The NAND market turned from boom to bust in 2018, a trend that will continue this year. However, price cuts are not the only variable in the 2019 flash memory market: new technologies, new products and the arrival of Chinese manufacturers will also bring great changes.
 
Insufficient demand inevitably leads to price reduction
A key reason for the sharp price decline in 2018 was the mass production of 64-tier 3D NAND flash. The transition from 32/48-tier to 64-tier stacks reduced the cost of NAND flash to 8 cents/GB, compared with 21 cents/GB for the earlier 2D NAND flash, making further price cuts possible.

Under the pressure of oversupply, the NAND flash market has gone through several corrections in the past few years. The market declined during 2015 and 2016, then began to rise again in 2017. The recovery did not last as expected: while large fabs competed to scale up investment and bring new 3D NAND capacity online, market demand grew slowly, which accelerated the worldwide price decline in 2018.

Hampered by technology and yield bottlenecks, 3D NAND yields did not improve as smoothly as expected in 2018. As a result, lower-grade parts circulated on the market, further disrupting prices. Among end applications, the client SSD market was the first to bear the brunt.

The giants repay their debts from the boom years
The hard days are not over yet. The price of NAND flash is expected to drop another 50% this year, and NAND and SSD manufacturers will be paying off the debts of the previous two boom years.

Western Digital's recent financial reports show how hard the NAND price cuts have hit the company: this quarter's revenue and profit both declined year-over-year. Although NAND flash shipments by capacity increased by 28%, NAND prices fell by 16%, resulting in a sharp decline in gross margin. The HDD business fared even worse, with shipments of 34.1 million units, down 8.1 million year-over-year.

Falling NAND prices will also affect the earnings of Micron, SK Hynix, Samsung and Toshiba. However, Samsung, Micron and SK Hynix are cushioned by still-high DRAM prices, so their earnings will not suffer as badly as WD's. To soften the impact, WD and Micron are cutting investment in NAND capacity to reduce supply and ease market expectations of further price cuts.
 
64-tier and 96-tier 3D NAND Flash have different destinies
This year will be the breakout year for 96-tier 3D NAND flash. Major manufacturers began to mass-produce 96-tier stacked 3D flash in 2Q18, following the 64-tier generation. Samsung announced its fifth-generation V-NAND (a 3D NAND flash) in early July; it was the first to support the Toggle DDR 4.0 interface, with a transfer speed of 1.4Gb/s, 40% higher than 64-tier V-NAND. Its operating voltage is reduced from 1.8V to 1.2V, and its write speed, currently the fastest at only 500μs, is 30% faster than the previous generation.

In addition, the fifth-generation NAND flash has been optimized at the manufacturing level. Manufacturing efficiency has increased by 30%, and advanced process technology reduces the height of each cell by 20%, which reduces interference between cells and improves data-processing efficiency.

Micron, Intel, Toshiba, WD and SK Hynix have also announced their own 96-tier 3D NAND flash plans. Among them, WD and Toshiba use the new-generation BiCS4 technology in their 96-tier 3D flash, with QLC die capacity of up to 1.33Tb, 33% higher than the industry standard. Toshiba has also developed a 16-die single-chip flash package; a single such chip has a capacity of 2.66TB.

Emergence of Chinese manufacturers changes the global market
In 2018, when the global semiconductor market reached $150 billion, of which NAND flash exceeded $57 billion, China consumed 32% of global output, making it the largest single market. To escape its long-term dependence on external procurement, China has made the independent development of domestic memory chips an urgent task.

There is another variable in the 2019 NAND market, one that is still in its infancy but most likely to reshape the structure of the memory chip market: China's YMTC will begin large-scale production of 3D NAND flash in 2019, competing with Samsung, Toshiba, Micron and other international NAND manufacturers.

YMTC's 32-tier 3D NAND flash has been released and has entered small-scale production, but the 32-tier stacking process is not competitive. Recently, YMTC's 64-tier NAND samples, built on its Xtacking architecture, have been sent to supply-chain partners for testing.

If the schedule holds, production could begin as soon as 3Q19, at which point the company will have an opportunity to turn a loss into a profit.

In addition, YMTC plans to skip 96-tier 3D NAND and jump directly to 128-tier in 2020. With upgraded production technology and a planned production capacity of 300,000 to 450,000 units, the firm will have an opportunity to grab about 10% of global market share in the future.

At the same time, UNIS is pushing construction forward in other cities; its Nanjing and Chengdu factories both entered the construction stage by the end of 2018. A total of $26.87 billion will be invested in three major production bases to produce 3D NAND chips. UNIS also hopes to cooperate with Intel to develop NAND flash technology at full speed.

It is only a matter of time before Chinese manufacturers enter the NAND flash market. Although their products are still in the testing stage in 2019, they will need to solve yield problems as output gradually increases. During this technological transition, it will be worth watching whether parts from immature yields disrupt the market.

Industry variables are huge this year
NAND flash prices are reported to be falling 10% to 15% in the first quarter of 2019. In response, analysts at Citibank maintained a neutral rating on Micron's shares in their latest report but lowered Micron's revenue and earnings expectations for 2019, on the grounds that the overall memory market faces a major price decline this year.

Due to overcapacity and growing inventories, both NAND flash and DRAM prices are expected to fall in 2019: NAND flash prices by 45% and DRAM prices by 30%. Moreover, prices are not expected to bottom out until 2Q19, suggesting that this year's decline will last at least two quarters.

On the supply side, 64-tier 3D NAND yields have matured. Coupled with new production capacity coming online, even a delay in 96-tier 3D NAND production cannot hold back the growing output of 64-tier flash. Unlike DRAM, which serves as working memory and cache, flash is the main storage medium for all kinds of electronic products, and price cuts are typically accompanied by increases in the storage capacity devices carry.

Demand-side growth is not keeping pace with output growth, so the industry will remain oversupplied through the end of 2019.
