Micron’s new 15TB SSD is almost affordable

Ever so slightly closes price gap between high capacity SSDs and HDDs

The 15.36TB drive, which is a smidgen smaller in capacity than the largest hard disk drive currently on the market (a 16TB Toshiba HDD model), costs “only” €2,474.78 plus sales tax, or around $2,770 (about £2,140).

While that is far more expensive than smaller capacity SSDs (Silicon Power’s 1TB SSD retails for under $95 at Amazon), it is less than half the average price of competing enterprise SSDs like the Seagate Nytro 3330, the Western Digital Ultrastar DC SS530, the Toshiba PM5-R or the Samsung SSD PM1633a. 

HDD still wins the price/capacity comparison

And just for comparison, a 14TB hard disk drive, the Toshiba MG07, retails for around $440, about a sixth of the price, which gives you an idea of the price gulf between the two. If you are looking for something bigger, then the Samsung SSD PM1643 is probably your only bet at €7,294.22 excluding VAT.
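To make that gulf concrete, you can work out the price per terabyte from the figures above (a quick sketch using the quoted street prices):

```python
def price_per_tb(price_usd: float, capacity_tb: float) -> float:
    """Cost per terabyte of raw capacity."""
    return price_usd / capacity_tb

micron_ssd = price_per_tb(2770, 15.36)  # Micron 9300 Pro: ~$2,770 for 15.36TB
toshiba_hdd = price_per_tb(440, 14)     # Toshiba MG07: ~$440 for 14TB

# The SSD costs roughly six times as much per terabyte
print(f"SSD: ${micron_ssd:.0f}/TB, HDD: ${toshiba_hdd:.0f}/TB, "
      f"ratio: {micron_ssd / toshiba_hdd:.1f}x")
```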

Bear in mind that these are 2.5-inch models, which are far smaller than 3.5-inch hard disk drives. They connect to the host computer using SAS (Serial Attached SCSI, where SCSI stands for Small Computer System Interface). The Micron 9300 Pro, however, connects via U.2 (PCIe/NVMe), offering read speeds of up to 3.5GBps.

For the ultimate data hoarder, there’s the Nimbus Data ExaDrive, which boasts a capacity of 100TB, albeit in a 3.5-inch form factor.

Read More

Data in a Flash, Part I: the Evolution of Disk Storage and an Introduction to NVMe

NVMe drives have paved the way for computing at stellar speeds, but the technology didn’t suddenly appear overnight. It was through an evolutionary process that we now rely on the very performant SSD for our primary storage tier.

Solid State Drives (SSDs) have taken the computer industry by storm in recent years. The technology is impressive with its high-speed capabilities. It promises low-latency access to sometimes critical data while increasing overall performance, at least when compared to what is now becoming the legacy Hard Disk Drive (HDD). With each passing year, SSD market shares continue to climb, replacing the HDD in many sectors. The effects of this are seen in personal, mobile and server computing.

IBM first unleashed the HDD into the computing world in 1956. By the 1960s, the HDD had become the dominant secondary storage device for general-purpose computers (emphasis on secondary storage: memory is the primary). Capacity and performance were the primary characteristics defining the HDD. In many ways, those characteristics continue to define the technology—although not always in the most positive ways (more details on that shortly).

The first IBM-manufactured hard drive, the 350 RAMAC, was as large as two medium-sized refrigerators with a total capacity of 3.75MB on a stack of 50 disks. Modern HDD technology has produced disk drives with capacities as high as 16TB, specifically with the more recent Shingled Magnetic Recording (SMR) technology coupled with helium—yes, that’s the same chemical element abbreviated as He in the periodic table. The sealed helium gas increases the potential speed of the drive by creating less drag and turbulence. Being less dense than air, it also allows more platters to be stacked in the same space used by conventional 2.5″ and 3.5″ disk drives.


Figure 1. A lineup of Standard HDDs throughout Their History and across All Form Factors (by Paul R. Potts—Provided by Author, CC BY-SA 3.0 us, https://commons.wikimedia.org/w/index.php?curid=4676174)

A disk drive’s performance typically is calculated by the time required to move the drive’s heads to a specific track or cylinder and the time it takes for the requested sector to move under the head—that is, the latency. Performance is also measured by the rate at which the data is transferred.

Being a mechanical device, an HDD does not perform nearly as fast as memory. A lot of moving components add to latency times and decrease the overall speed by which you can access data (for both read and write operations).


Figure 2. Disk Platter Layout

Each HDD has magnetic platters inside, which often are referred to as disks. Those platters are what store the information. An HDD typically has more than one platter stacked on top of each other with a minimal amount of space in between, bound by a spindle that spins them in unison.

Similar to how a phonograph record works, the platters are double-sided, and the surface of each has circular etchings called tracks. Each track is made up of sectors. The number of sectors on each track increases as you get closer to the edge of a platter. Nowadays, you’ll find that the physical size of a sector is either 512 bytes or 4 Kilobytes (4096 bytes). In the programming world, a sector typically equates to a disk block.

The speed at which a disk spins affects the rate at which information can be read. This is defined as a disk’s rotation rate, and it’s measured in revolutions per minute (RPM). This is why you’ll find modern drives operating at speeds like 7200 RPM (or 120 rotations per second). Older drives spin at slower rates. High-end drives may spin at higher rates. This limitation creates a bottleneck.

An actuator arm sits on top of or below a platter. It extends and retracts over its surface. At the end of the arm is a read-write head. It sits at a microscopic distance above the surface of the platter. As the disk rotates, the head can access information on the current track (without moving). However, if the head needs to move to the next track or to an entirely different track, the time to read or write data is increased. From a programmer’s perspective, this is referred to as the disk seek, and this creates a second bottleneck for the technology.
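Those two penalties, rotational latency and seek time, dominate an HDD's random access time. Here is a back-of-the-envelope model; the seek time and transfer rate below are illustrative assumptions, not figures for any particular drive:

```python
def hdd_access_time_ms(rpm: float, avg_seek_ms: float,
                       transfer_mb_s: float, request_kb: float) -> float:
    """Approximate time to service one random request:
    seek + average rotational latency (half a revolution) + transfer."""
    rotational_latency_ms = (60.0 / rpm) / 2.0 * 1000.0
    transfer_ms = request_kb / 1024.0 / transfer_mb_s * 1000.0
    return avg_seek_ms + rotational_latency_ms + transfer_ms

# A hypothetical 7200 RPM drive: ~9ms average seek,
# 150MB/s sustained transfer, one 4KB random read
print(round(hdd_access_time_ms(7200, 9.0, 150.0, 4.0), 2))  # ~13.19 ms
```

Note that the 4KB transfer itself takes only a few hundredths of a millisecond; nearly all of the time goes to moving the head and waiting for the platter.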

Now, although HDDs’ performance has been increasing with newer disk access protocols—such as Serial ATA (SATA) and Serial Attached SCSI (SAS)—and technologies, it’s still a bottleneck to the CPU and, in turn, to the overall computer system. Each disk protocol has its own hard limits on maximum throughput (megabytes or gigabytes per second). The method by which data is transferred is also very serialized. This works well with a spinning disk, but it doesn’t scale well to Flash technologies.

Since the HDD’s inception, engineers have been devising newer and more creative methods to accelerate its performance (for example, with memory caching), and in some cases, they’ve completely replaced it with technologies like the SSD. Today, SSDs are being deployed everywhere—or so it seems. Cost per gigabyte is decreasing, and the price gap is narrowing between Flash and traditional spinning rust. But, how did we get here in the first place? The SSD wasn’t an overnight success. Its history is more of a gradual one, dating back as far as when the earliest computers were being developed.

A Brief History of Computer Memory

Memory comes in many forms, but before Non-Volatile Memory (NVM) came into the picture, the computing world first was introduced to volatile memory in the form of Random Access Memory (RAM). RAM introduced the ability to write/read data to/from any location of the storage medium in the same amount of time. The often random physical location of a particular set of data did not affect the speed at which the operation completed. The use of this type of memory masked the pain of accessing data from the exponentially slower HDD, by caching data read often or staging data that needed to be written.

The most notable of RAM technologies is Dynamic Random Access Memory (DRAM). It also came out of the IBM labs, in 1966, a decade after the HDD. Being that much closer to the CPU and not burdened by mechanical components (unlike the HDD), DRAM performed at stellar speeds. Even today, many data storage technologies strive to perform at the speeds of DRAM. But, there was a drawback, as I emphasized above: the technology was volatile, and as soon as the capacitor-driven integrated circuits (ICs) were deprived of power, the data disappeared along with it.

Another set of drawbacks to the DRAM technology is its very low capacities and the price per gigabyte. Even by today’s standards, DRAM is just too expensive when compared to the slower HDDs and SSDs.

Shortly after DRAM’s debut came Erasable Programmable Read-Only Memory (EPROM). Invented at Intel, it hit the scene around 1971. Unlike its volatile counterparts, EPROM offered a sought-after industry game-changer: memory that retains its data even after system power is shut off. EPROM used transistors instead of capacitors in its ICs, and those transistors were capable of maintaining state even after the electricity was cut.

As the name implies, the EPROM was in its own class of Read-Only Memory (ROM). Data typically was pre-programmed into those chips using special devices or tools, and when in production, it had a single purpose: to be read from at high speeds. As a result of this design, EPROM immediately became popular in both embedded and BIOS applications, the latter of which stored vendor-specific details and configurations.

Moving Closer to the CPU

As time progressed, it became painfully obvious: the closer you move data (storage) to the CPU, the faster you’re able to access (and manipulate) it. The closest memory to the CPU is the processor’s registers. The number of registers available to a processor varies by architecture. A register’s purpose is to hold a small amount of data intended for fast storage. Without a doubt, registers are the fastest way to access small amounts of data.

Next in line, and following the CPU’s registers, is the CPU cache. This is a hardware cache built into the processor module and utilized by the CPU to reduce the cost and time it takes to access data from the main memory (DRAM). It’s designed around Static Random Access Memory (SRAM) technology, which also is a type of volatile memory. Like a typical cache, the purpose of this CPU cache is to store copies of data from the most frequently used main memory locations. On modern CPU architectures, multiple and different independent caches exist (and some of those caches even are split). They are organized in a hierarchy of cache levels: Level 1 (L1), Level 2 (L2), Level 3 (L3) and so on. The larger the processor, the more cache levels, and the higher the level, the more memory it can store (that is, from KB to MB). On the downside, the higher the level, the farther its location is from the main CPU. Although mostly unnoticeable to modern applications, it does introduce latency.


Figure 3. General Outline of the CPU and Its Memory Locations/Caches

The first documented use of a data cache built into the processor dates back to 1969 and the IBM System/360 Model 85 mainframe computing system. It wasn’t until the 1980s that the more mainstream microprocessors started incorporating their own CPU caches. Part of that delay was driven by cost. Much as it is today, RAM (of all types) was very expensive.

So, the data access model goes like this: the farther you move away from the CPU, the higher the latency. DRAM sits much closer to the CPU than an HDD, but not as close as the registers or levels of caches designed into the IC.
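That model can be summarized with ballpark figures. The numbers below are common order-of-magnitude estimates, included purely for illustration (they are assumptions, not measurements of any specific system):

```python
# Approximate access latency per tier, in nanoseconds (order of magnitude only)
latency_ns = {
    "CPU register": 0.3,
    "L1 cache": 1,
    "L2 cache": 4,
    "L3 cache": 20,
    "DRAM": 100,
    "NVMe SSD": 100_000,    # ~100 microseconds
    "HDD": 10_000_000,      # ~10 milliseconds (seek + rotation)
}

for tier, ns in latency_ns.items():
    print(f"{tier:>12}: ~{ns / latency_ns['DRAM']:g}x DRAM latency")
```

The point is not the exact values but the spread: each step away from the CPU costs one or more orders of magnitude.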


Figure 4. High-Level Model of Data Access

The Solid-State Drive

The performance of a given storage technology was constantly gauged and compared to the speeds of CPU memory. So, when the first commercial SSDs hit the market, it didn’t take very long for both companies and individuals to adopt the technology. Even with a higher price tag, when compared to HDDs, people were able to justify the expense. Time is money, and if access to the drives saves time, it potentially can increase profits. However, it’s unfortunate that the first commercial NAND-based SSDs didn’t move data storage any closer to the CPU. This is because early vendors chose to adopt existing disk interface protocols, such as SATA and SAS. That decision encouraged consumer adoption, but again, it limited overall throughput.


Figure 5. SATA SSD in a 2.5″ Drive Form Factor

Even though the SSD didn’t move any closer to the CPU, it did achieve a new milestone in storage technology—it reduced seek times across the storage media, resulting in significantly lower latencies. That’s because the drives were designed around ICs and contained no movable components. Overall performance was night and day compared to traditional HDDs.

The first official SSD manufactured without the need of a power source (that is, a battery) to maintain state was introduced in 1995 by M-Systems. These drives were designed to replace HDDs in mission-critical military and aerospace applications. By 1999, Flash-based technology was designed and offered in the traditional 3.5″ storage drive form factor, and it continued to be developed this way until 2007, when a revolutionary startup named Fusion-io (now part of Western Digital) decided to change the performance-limiting form factor of traditional storage drives and throw the technology directly onto the PCI Express (PCIe) bus. This approach removed many unnecessary communication protocols and subsystems. The design also moved the storage a bit closer to the CPU and produced a noticeable performance improvement. This new design not only changed the technology for years to come, it also brought the SSD into traditional data centers.

Fusion-io’s products later inspired other memory and storage companies to bring somewhat similar technologies to the Dual In-line Memory Module (DIMM) form factor, which plugs directly into a traditional RAM slot on a supported motherboard. These types of modules register to the CPU as a different class of memory and remain in a somewhat protected mode. Translation: the main system and, in turn, the operating system did not touch these memory devices unless it was done through a specifically designed device driver or application interface.

It’s also worth noting here that the transistor-based NAND Flash technology still paled in comparison to DRAM performance. I’m talking about microsecond latencies versus DRAM’s nanosecond latencies. Even in a DIMM form factor, the NAND-based modules just don’t perform as well as the DRAM modules.

Introducing NAND Memory

What makes an SSD faster than a traditional HDD? The simple answer is that it is memory built with chips and no moving components. The name of the technology—solid state—captures this very trait. But if you’d like a more descriptive answer, keep reading.

Instead of saving data onto spinning disks, SSDs save that same data to a pool of NAND flash. The NAND (or NOT-AND) technology is made up of floating gate transistors, and unlike the transistor designs used in DRAM (which must be refreshed multiple times per second), NAND is capable of retaining its charge state, even when power is not supplied to the device—hence the non-volatility of the technology.

At a much lower level, in a NAND configuration, electrons are stored in the floating gate. Opposite of how you read boolean logic, a charge is signified as a “0”, and a non-charge is a “1”. These bits are stored in cells, which are organized in a grid layout referred to as a block. Each individual row of the grid is called a page, with page sizes typically set to 4K (or more). Traditionally, there are 128–256 pages per block, with block sizes reaching as high as 1MB or larger.
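Those geometry figures multiply out directly. A quick sanity check, assuming 4K pages (a sketch, not any particular part’s datasheet):

```python
def block_size_kb(pages_per_block: int, page_size_kb: int = 4) -> int:
    """Block size is simply pages-per-block times page size."""
    return pages_per_block * page_size_kb

print(block_size_kb(128))  # 512 (KB)
print(block_size_kb(256))  # 1024 (KB), i.e. the 1MB block mentioned above
```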


Figure 6. NAND Die Layout

There are different types of NAND, all defined by the number of bits per cell. As the name implies, a single-level cell (SLC) stores one bit. A multi-level cell (MLC) stores two bits. Triple-level cells (TLC) store three bits. And, new to the scene is the quad-level cell (QLC). Guess how many bits it can store? You guessed it: four.

Now, although a TLC offers more storage density than an SLC NAND, it comes at a price: increased latency—that is, approximately four times worse for reads and six times worse for writes. The reason for this rests on how data moves in and out of the NAND cell. In an SLC NAND, the device’s controller needs to know only if the bit is a 0 or a 1. With an MLC, the cell holds more values—four to be exact: 00, 01, 10 or 11. In a TLC NAND, it holds eight values: 000, 001, 010, 011, 100, 101, 110, 111. That’s a lot of overhead and extra processing. Either way, regardless of whether your drive is using SLC or TLC NAND, it still will perform night-and-day faster than an HDD—minor details.
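The controller overhead grows because the number of distinguishable voltage states per cell is exponential in the bit count:

```python
def states_per_cell(bits: int) -> int:
    """A cell storing n bits must distinguish 2**n voltage states."""
    return 2 ** bits

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    print(f"{name}: {bits} bit(s) -> {states_per_cell(bits)} states")
```

Each added bit doubles the states the controller must sense and program, which is why density gains come with latency and endurance costs.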

There’s a lot more to share about NAND, such as how reads, writes and erases (Programmable Erase or PE cycles) work, the last of which eventually impacts write performance, and some of the technology’s early pitfalls, but I won’t bore you with that. Just remember: electrical charges to chips are much faster than moving heads across disk platters. It’s time to introduce NVMe.

The Boring Details

Okay, I lied. Write performance can and will vary throughout the life of the SSD. When an SSD is new, all of its data blocks are erased and presented as new, and incoming data is written directly to the NAND. Once the SSD has filled all of the free data blocks on the device, it must erase previously programmed blocks to write new data. In the industry, this moment is known as the device’s write cliff. The erase action is called the Programmable Erase (PE) cycle, and it increases the device’s write latency. Given enough time, you’ll notice that a used SSD eventually doesn’t perform as well as a brand-new one. Compounding the problem, a NAND cell is rated to handle only a finite number of erases.

To overcome all of these limitations and eventual bottlenecks, vendors resort to various tricks, including the following:

  • The over-provisioning of NAND: although a device may register 3TB of storage, it may in fact be equipped with 6TB.
  • The coalescing of write data to reduce the impacts of write amplification.
  • Wear leveling: reduce the need of writing and rewriting to the same regions of the NAND.
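The second bullet is usually quantified as a write amplification factor (WAF): the data physically written to the NAND divided by the data the host asked to write. A minimal sketch (the sample numbers are illustrative):

```python
def write_amplification(nand_bytes_written: float,
                        host_bytes_written: float) -> float:
    """WAF = data physically written to NAND / data the host wrote.
    1.0 is ideal; garbage collection and partial-block rewrites push it higher."""
    return nand_bytes_written / host_bytes_written

# e.g. the host wrote 100GB, but garbage collection rewrote enough
# previously programmed blocks that 250GB actually hit the NAND
print(write_amplification(250, 100))  # 2.5
```

Over-provisioning and wear leveling both exist largely to keep this ratio, and therefore the consumption of the NAND’s finite PE cycles, as close to 1.0 as possible.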

Non-Volatile Memory Express (NVMe)

Fusion-io built a closed and proprietary product. This fact alone brought many industry leaders together to define a new standard to compete against the pioneer and push more PCIe-connected Flash into the data center. With the first industry specifications announced in 2011, NVMe quickly rose to the forefront of SSD technologies. Remember, historically, SSDs were built on top of SATA and SAS buses. Those interfaces worked well for the maturing Flash memory technology, but with all the protocol overhead and bus speed limitations, it didn’t take long for those drives to experience their own fair share of performance bottlenecks (and limitations). Today, modern SAS drives operate at 12Gbit/s, while modern SATA drives operate at 6Gbit/s. This is why the technology shifted its focus to PCIe. With the bus closer to the CPU, and PCIe capable of performing at increasingly stellar speeds, SSDs seemed to fit right in. Using PCIe 3.0, modern drives can achieve speeds as high as 40Gbit/s. Support for NVMe drives was integrated into the Linux 3.3 mainline kernel (2012).
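Those line rates only translate into usable bandwidth after encoding overhead is subtracted. A rough comparison using the standard encodings (8b/10b for SATA and SAS at these rates, 128b/130b for PCIe 3.0):

```python
def usable_mb_s(line_rate_gbit: float, encoding_efficiency: float,
                lanes: int = 1) -> float:
    """Convert a raw line rate to approximate usable MB/s."""
    return line_rate_gbit * 1000 / 8 * encoding_efficiency * lanes

sata = usable_mb_s(6, 8 / 10)            # SATA III, 8b/10b encoding
sas = usable_mb_s(12, 8 / 10)            # SAS-3, 8b/10b encoding
pcie3_x4 = usable_mb_s(8, 128 / 130, 4)  # PCIe 3.0, 8GT/s per lane, x4

print(round(sata), round(sas), round(pcie3_x4))  # 600 1200 3938
```

That last figure lines up with the ~3.5GB/s sequential reads quoted for U.2 and M.2 NVMe drives elsewhere in this piece.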


Figure 7. A PCIe NVMe SSD (by Dsimic – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=41576100)

What really makes NVMe shine over the operating system’s legacy storage stacks is its simpler and faster queueing mechanisms. These are called the Submission Queues (SQs) and Completion Queues (CQs). Each queue is a circular buffer of a fixed size that the operating system uses to submit one or more commands to the NVMe controller. One or more of these queues also can be pinned to specific cores, which allows for more uninterrupted operations. Goodbye serial communication. Drive I/O is now parallelized.
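The queue pair idea can be sketched as a toy model (purely illustrative, not the actual NVMe register interface): a fixed-size circular buffer where the host advances a tail pointer to submit commands and the controller advances a head pointer to consume them.

```python
class CircularQueue:
    """Toy fixed-size circular buffer, in the spirit of an NVMe
    submission queue: the host advances the tail to submit,
    the controller advances the head to consume."""

    def __init__(self, depth: int):
        self.slots = [None] * depth
        self.head = 0  # next entry the controller will consume
        self.tail = 0  # next free slot for the host

    def is_full(self) -> bool:
        # One slot is deliberately left empty to distinguish full from empty
        return (self.tail + 1) % len(self.slots) == self.head

    def submit(self, command) -> bool:
        if self.is_full():
            return False  # host must wait for completions
        self.slots[self.tail] = command
        self.tail = (self.tail + 1) % len(self.slots)
        return True

    def consume(self):
        if self.head == self.tail:
            return None  # queue empty
        command = self.slots[self.head]
        self.head = (self.head + 1) % len(self.slots)
        return command

sq = CircularQueue(depth=4)
sq.submit({"opcode": "read", "lba": 0})
sq.submit({"opcode": "read", "lba": 8})
print(sq.consume()["opcode"])  # read
```

In a real NVMe device, queues can be up to 64K entries deep, and each CPU core can own its own submission/completion pair, which is where the parallelism comes from.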

Non-Volatile Memory Express over Fabric (NVMeoF)

In the world of SAS or SATA, there is the Storage Area Network (SAN). SANs are designed around SCSI standards. The primary goal of a SAN (or any other storage network) is to provide access to one or more storage volumes, across one or more paths, to one or more operating system hosts in a network. Today, the most commonly deployed SAN is based on iSCSI, which is SCSI over TCP/IP. Technically, NVMe drives can be configured within a SAN environment, although the protocol translation overhead introduces latencies that make it a less than ideal implementation. In 2014, the NVM Express committee was poised to rectify this with the NVMeoF standard.

The goals behind NVMeoF are simple: enable an NVMe transport bridge, which is built around the NVMe queuing architecture, and avoid any and all protocol translation overhead other than the supported NVMe commands (end to end). With such a design, network latencies noticeably drop (less than 200ns). This design relies on the use of PCIe switches. A second design has been gaining ground that’s based on the existing Ethernet fabrics using Remote Direct Memory Access (RDMA).


Figure 8. A Comparison of NVMe Fabrics over Other Storage Networks

The 4.8 Linux kernel introduced a lot of new code to support NVMeoF. The patches were submitted as part of a joint effort by developers at Intel, Samsung and elsewhere. Three major components were patched into the kernel, including the general NVMe Target Support framework, which enables block devices to be exported from the Linux kernel using the NVMe protocol. Built on this framework, there is now support for NVMe loopback devices and for NVMe over Fabrics RDMA targets. If you recall, this last piece is one of the two more common NVMeoF deployments.


So, there you have it, an introduction and deep dive into Flash storage. Now you should understand why the technology is both increasing in popularity and the preferred choice for high-speed computing. Part II of this article shifts focus to using NVMe drives in a Linux environment and accessing those same NVMe drives across an NVMeoF network.

Read More

How to Buy the Right SSD: A Guide for 2019

The easiest way to hobble a fast CPU is to pair it with slow storage. While your processor can handle billions of cycles per second, it spends a lot of time waiting for your drive to feed it data. Hard drives are particularly sluggish because they have moving parts. To get the optimal performance you need a good solid state drive (SSD).

Image Credit: Chris Ramseyer


| | Adata XPG GAMMIX S11 (1TB) | Samsung 970 Pro (1TB) | Samsung 860 Pro (1TB) | Intel Optane 905P (1TB) | Crucial MX500 (500GB) |
|---|---|---|---|---|---|
| Category | Best Overall | Best M.2 PCIe | Best SATA | Best Add-in-Card | Best Cheap |
| Price (Amazon) | $207.99 | $345.97 | $277 | $1,325.71 | $69.95 |
| Capacity (Raw / User) | 960GB / 1024GB | 1024GB / 1024GB | 1024GB / 1024GB | 960GB / 960GB | 512GB / 500GB |
| Form Factor | M.2 2280 D5 | M.2 2280 S3 | 2.5″ 7mm | Half-Height, Half-Length | 2.5″ 7mm |
| Interface / Protocol | PCIe 3.0 x4 / NVMe 1.3 | PCIe 3.0 x4 / NVMe 1.3 | SATA / AHCI | PCIe 3.0 x4 / NVMe | SATA / AHCI |
| Controller | SMI SM2262 | Samsung Phoenix NVMe | Samsung MJX | Intel Custom | Silicon Motion SM2258 |
| NAND | Micron 64-Layer TLC | Samsung 64-Layer MLC | Samsung 64L MLC | Intel 3D XPoint | Micron 64-Layer TLC |
| Sequential Read | 3,150 MB/s | 3,500 MB/s | 560 MB/s | 2,600 MB/s | 560 MB/s |
| Sequential Write | 1,700 MB/s | 2,700 MB/s | 530 MB/s | 2,200 MB/s | 510 MB/s |
| Random Read | 310,000 IOPS | 500,000 IOPS | 100,000 IOPS | 575,000 IOPS | 95,000 IOPS |
| Random Write | 280,000 IOPS | 500,000 IOPS | 90,000 IOPS | 550,000 IOPS | 90,000 IOPS |
| Encryption | (not listed) | Class 0 (256-bit FDE), TCG Opal 2.0, Microsoft eDrive | TCG Opal, eDrive | AES 256-bit | Hardware AES-256, TCG Opal 2.0 SED |
| Endurance | 640 TBW | 1,200 TBW | 1,200 TBW | 17.52 PBW | 180 TBW |
| Warranty | 5 Years | 5 Years Limited | 5 Years | 5 Years Limited | 5 Years Limited |

If you already know all about the specific drive types and want specific recommendations, check out our Best SSDs page. But if you don’t have a Ph.D in SSD, here are a few things you need to consider when shopping.

First, if you’re going to be shopping for an SSD deal, you’ll want to check out our feature: How to Tell an SSD Deal From a Solid-State Dud. And if you keep an eye on our Best SSD and Storage Deals page, you might snag a sweet price on an older (but still plenty fast) SATA SSD. Also, keep an eye out for deals on higher-capacity drives, like 1 or even 2TB models. That’s where there’s the most potential for great discounts.


Here are four quick tips, followed by our detailed answers to many FAQs:

  • Know your home computer: Find out if you have slots for M.2 drives on your motherboard and room in the chassis. If not, you may need a 2.5-inch drive instead.
  • 500GB to 1TB capacity: Don’t even consider buying a drive that has less than 256GB of storage. 500GB offers a good balance between price and capacity. And as 1TB drives slide toward the $100/£100 price point, they’re great, roomy options as well.
  • SATA is cheaper but slower: If your computer supports NVMe PCIe or Optane drives, consider buying a drive with one of these technologies. However, SATA drives are more common, cost less and still offer excellent performance for common applications.
  • Any SSD is better than a hard drive: Even the worst SSD is at least three times as fast as a hard drive. Depending on the workload, the performance delta between a good and a great SSD can be subtle.

How much can you spend?

Most consumer drives range from 120GB to 2TB. While 120GB drives are the cheapest, they aren’t roomy enough to hold a lot of software and are usually slower than their higher-capacity counterparts. It costs as little as $10 (£7) extra to step up from a 120GB to a 250GB drive, and that’s money well spent. The delta between 250GB and 500GB drives can be slightly more, but 500GB is the sweet spot between price, performance and capacity for most users–particularly if you don’t have the budget for a 1TB model.

There are also some drives (primarily from Samsung) with capacities above 2TB. But they’re typically expensive in the extreme (well over $500/£500), so they’re really only worthwhile for professional users who need space and speed and aren’t averse to paying for it.

What kind of SSD does your computer support?

Solid-state drives these days come in several different form factors and operate across several possible hardware and software connections. What kind of drive you need depends on what device you have (or are intending on buying). If you own a recent gaming desktop or are building a PC with a recent mid-to-high-end motherboard, your system may be able to incorporate most (or all) modern drive types.

Alternatively, modern slim laptops and convertibles are increasingly shifting solely to the gum-stick-shaped M.2 form factor, with no space for a traditional 2.5-inch laptop-style drive. And in some cases, laptop makers are soldering the storage directly to the board, so you can’t upgrade at all. So you’ll definitely want to consult your device manual or check Crucial’s Advisor Tool to sort out what your options are before buying.

Which form factor do you need?

SSDs come in three main form factors, plus one uncommon outlier.

  • 2.5-inch Serial ATA (SATA): The most common type, these drives mimic the shape of traditional laptop hard drives and connect over the same SATA cables and interface that any moderately experienced upgrader should be familiar with. If your laptop or desktop has a 2.5-inch hard drive bay and a spare SATA connector, these drives should be drop-in-compatible (though you may need a bay adapter if installing in a desktop with only larger 3.5-inch hard drive bays free).
  • SSD Add-in Card (AIC): These drives have the potential to be much faster than other drives, as they operate over the PCI Express bus, rather than SATA, which was designed well over a decade ago to handle spinning hard drives. AIC drives plug into the slots on a motherboard that are more commonly used for graphics cards or RAID controllers. Of course, that means they’re only an option for desktops, and you’ll need an empty PCIe x4 or x16 slot to install them. If your desktop is compact and you already have a graphics card installed, you may be out of luck. But if you do have room in your modern desktop and a spare slot, these drives can be among the fastest available (take the Intel Optane 900p, for example), due in large part to their extra surface area, allowing for better cooling. Moving data at extreme speeds generates a fair bit of heat.
  • M.2 SSDs: About the shape of a stick of RAM but much smaller, M.2 drives have become the standard for slim laptops, but you’ll also find them on many desktop motherboards. Some boards even have two or more M.2 slots, so you can run the drives in RAID. While most M.2 drives are 22mm wide and 80mm long, there are some that are shorter or longer. You can tell by the four or five-digit number in their names, with the first two digits representing the width and the others showing length. The most common size is labeled M.2 Type-2280. Though laptops will only work with one size, many desktop motherboards have anchor points for longer and shorter drives. The largest M.2 drives top out at 1 to 2TB. So, if you have a generous budget and need more storage than that, you’ll have to consider other form factors.
  • U.2 SSDs: At first glance, these 2.5-inch components look like traditional SATA hard drives. However, they use a different connector and send data via the speedy PCIe interface, and they’re typically thicker than 2.5-inch hard drives and SSDs. U.2 drives tend to be more expensive and higher-capacity than regular M.2 drives. Servers that have lots of open drive bays can benefit from this form factor.
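The M.2 size code mentioned above is mechanical enough to decode in one line (the helper function name here is our own invention, purely for illustration):

```python
def m2_dimensions(type_code: str) -> tuple:
    """Decode an M.2 size code such as '2280': the first two digits
    are the width in mm, the remaining digits are the length in mm."""
    return int(type_code[:2]), int(type_code[2:])

print(m2_dimensions("2280"))   # (22, 80)
print(m2_dimensions("22110"))  # (22, 110), a five-digit code
```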

Do you want a drive with a SATA or PCIe interface?

Strap in, because this bit is more complicated than it should be. As noted earlier, 2.5-inch SSDs run on the Serial ATA (SATA) interface, which was designed for hard drives (and launched way back in 2000), while add-in-card drives work over the faster PCI Express bus, which has more bandwidth for things like graphics cards. 

M.2 drives can work either over SATA or PCI Express, depending on the drive. And the fastest M.2 drives (including Samsung’s 970 drives and Intel’s 760p) also support NVMe, a protocol that was designed specifically for fast modern storage. The tricky bit (OK, another tricky bit) is that an M.2 drive could be SATA-based, PCIe-based without NVMe support, or PCIe-based with NVMe support. That said, most fast M.2 SSDs launched in the last couple of years support NVMe.

Both M.2 drives and the corresponding M.2 connectors on motherboards look very similar, regardless of what they support. So be sure to double-check the manual for your motherboard, laptop, or convertible, as well as what a given drive supports, before buying.

If your daily tasks consist of web browsing, office applications, or even gaming, most NVMe SSDs aren’t going to be noticeably faster than less expensive SATA models. If your daily tasks consist of heavier work, like large file transfers, videos or high-end photo editing, transcoding, or compression/decompression, then you might consider stepping up to an NVMe SSD. These SSDs provide up to five times more bandwidth than SATA models, which boosts performance in heavier productivity applications.

Also, some NVMe drives (like Intel’s SSD 660p) are nearing the price of SATA drives. So if your device supports NVMe and you find a good deal on a drive, you may want to consider NVMe as an option even if you don’t have a strong need for the extra speed.

What capacity do you need?

  • 128GB Class: Stay away. These low-capacity drives tend to have slower performance, because of their minimal number of memory modules. Also, after you put Windows and a couple of games on it, you’ll be running out of space. Plus, you can step up to the next level for as little as $10/£7 more.
  • 250GB Class: These drives are much cheaper than their larger siblings, but they’re still quite cramped, particularly if you use your PC to house your operating system, PC games, and possibly a large media library. If there’s wiggle room in your budget, stepping up at least one capacity tier to a 500GB-class drive is advisable.
  • 500GB Class: Drives at this capacity level occupy a sweet spot between price and roominess, although 1TB drives are becoming increasingly appealing.
  • 1TB Class: Unless you have massive media or game libraries, a 1TB drive should give you enough space for your operating system and primary programs, with plenty of room for future media collections and software.
  • 2TB Class: If you work with large media files, or just have a large game library that you want to be able to access on the quick, a 2TB drive could be worth the high premium you pay for it. 
  • 4TB Class: You have to really need this much space on an SSD to splurge on one of these. A 4TB SSD will be quite expensive — well over $500/£600 — and you won’t have many options. As of this writing, Samsung was the only company offering consumer-focused 4TB models, in both the 860 EVO and pricier 860 Pro models.

If you’re a desktop user, or you have a gaming laptop with multiple drives and you want lots of capacity, you’re much better off opting for a pair of smaller SSDs, which will generally save you hundreds of dollars while still offering up roughly the same storage space and speed. Until pricing drops and we see more competition, 4TB drives will be relegated to professionals and enthusiasts with very deep pockets.

What about power consumption?

If you’re a desktop user after the best possible performance, then you probably don’t care how much juice you’re using. But for laptop and convertible tablet owners, drive efficiency is more important than speed—especially if you want all-day battery life.

Choosing an extremely efficient drive like Samsung’s 850 EVO over a faster-but-power-hungry NVMe drive (like, say, the Samsung 960 EVO) can gain you 90 minutes or more of extra unplugged run time. And higher-capacity models can draw more power than less-spacious drives, simply because there are more NAND packages on bigger drives to write your data to.

While the above advice is true in a general sense, some drives can buck trends, and technology is always advancing and changing the landscape. If battery life is key to your drive-buying considerations, be sure to consult the battery testing we do on every SSD we test.

What controller should your SSD have?

Think of the controller as the processor of your drive. It routes your reads and writes and performs other key drive performance and maintenance tasks. It can be interesting to dive deep into specific controller types and specs. But for most people, it’s enough to know that, much like PCs, more cores are better for higher-performing, higher-capacity drives.

While the controller obviously plays a big role in performance, unless you like to get into the minute details of how specific drives compare against each other, it’s better to check out our reviews to see how a drive performs overall, rather than focusing too much on the controller.

Which type of storage memory (NAND flash) do you need?

When shopping for an SSD for general computing use in a desktop or laptop, you don’t expressly need to pay attention to the type of storage that’s inside the drive. In fact, with most options on the market these days, you don’t have much of a choice, anyway. But if you’re curious about what’s in those flash packages inside your drive, we’ll walk you through the various types below. Some of them are far less common than they used to be, and some are becoming the de facto standard.

  • Single-Level Cell (SLC) flash memory came first and was the primary form of flash storage for several years. Because (as its name implies) it only stores a single bit of data per cell, it’s extremely fast and lasts a long time. But, as storage tech goes these days, it’s not very dense in terms of how much data it can store, which makes it very expensive. At this point, beyond extremely pricey enterprise drives and use as small amounts of fast cache, SLC has been replaced by newer, denser types of flash storage tech.
  • Multi-Level Cell (MLC) came after SLC and for years was the storage type of choice for its ability to store more data at a lower price, despite being slower. To get around the speed issue, many of these drives have a small amount of faster SLC cache that acts as a write buffer. Today, apart from a few high-end consumer drives, MLC has been replaced by the next step in NAND storage tech, TLC.
  • Triple-Level Cell (TLC) flash is still very common in today’s consumer SSDs. While TLC is slower still than MLC, as its name implies, it’s even more data-dense, allowing for spacious, affordable drives. Most TLC drives (except some of the least-expensive models) also employ some sort of caching tech, because TLC on its own without a buffer often is not significantly faster than a hard drive.
    For mainstream users running consumer apps and operating systems, this isn’t a problem because the drive isn’t typically written to in a sustained enough way to saturate the faster cache. But professional and pro-sumer users who often work with massive files may want to spend more for an MLC-based drive to avoid slowdowns when moving around massive amounts of data.
  • Quad-Level Cell (QLC) tech is emerging as the next stage of the solid-state storage revolution. And as the name implies, it should lead to less-expensive and more-spacious drives thanks to an increase in density. As of this writing, there are only a handful of consumer QLC drives on the market, including Intel’s SSD 660p and Crucial’s similar P1, as well as Samsung’s SATA-based QVO drive.
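The caching behavior described above is easy to reason about with a little arithmetic. The sketch below is purely illustrative (the cache size and write rates are assumed figures, not the specs of any particular drive): it estimates how long a sustained write can continue before a TLC drive’s SLC cache fills and speeds fall back to the native TLC rate.

```python
def seconds_to_fill_cache(cache_gb: float,
                          incoming_mbps: float,
                          native_mbps: float) -> float:
    """Time until the SLC write cache fills under sustained ingest.

    The cache fills at the difference between the incoming write rate
    and the native (TLC) rate at which the drive drains it.
    """
    if incoming_mbps <= native_mbps:
        return float("inf")  # the drive drains the cache as fast as it fills
    net_mbps = incoming_mbps - native_mbps
    return cache_gb * 1000 / net_mbps

# Illustrative: 40 GB SLC cache, 2,000 MB/s sustained ingest,
# 500 MB/s native TLC write speed (all assumed numbers)
print(f"{seconds_to_fill_cache(40, 2000, 500):.1f} s")  # 26.7 s
```

In other words, a mainstream user copying a few gigabytes at a time never hits the cliff, while someone ingesting hundreds of gigabytes of video will.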

What about endurance?

Endurance is another area where, for the most part, buyers looking for a drive for general-purpose computing don’t need to dive too deep, unless they want to. All flash memory has a limited life span: after any given storage cell is written a certain number of times, it will stop holding data. Drive makers often list a drive’s rated endurance in total terabytes written (TBW) or drive writes per day (DWPD).
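The two ratings are directly related: a TBW figure is just DWPD multiplied by the drive’s capacity and the number of days in its warranty period. A quick conversion sketch (the 600TBW/5-year example is hypothetical, not a specific drive’s rating):

```python
def dwpd_to_tbw(dwpd: float, capacity_tb: float, warranty_years: float) -> float:
    """Total terabytes written over the warranty, from drive-writes-per-day."""
    return dwpd * capacity_tb * 365 * warranty_years

def tbw_to_dwpd(tbw: float, capacity_tb: float, warranty_years: float) -> float:
    """Drive-writes-per-day implied by a TBW rating and warranty length."""
    return tbw / (capacity_tb * 365 * warranty_years)

# Hypothetical 1TB consumer drive rated 600 TBW over a 5-year warranty:
print(f"{tbw_to_dwpd(600, 1.0, 5):.2f} DWPD")  # 0.33 DWPD
```

A third of a full-drive write per day is far more than a typical desktop or laptop ever sees, which is why endurance ratings rarely matter for general-purpose use.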

But most drives feature “over provisioning,” which portions off part of the drive’s capacity as a kind of backup. As the years pass and cells start to die, the drive will move your data off the worn-out cells to these fresh new ones, thereby greatly extending the usable lifespan of the drive. Generally, unless you’re putting your SSD into a server or some other scenario where it’s getting written to nearly constantly (24/7), all of today’s drives are rated with enough endurance to function for at least 3-5 years, if not more.

If you plan on using your drive for much longer than that, or you know that you’ll be writing to the drive far more than the average computer user, you’ll probably want to avoid a QLC drive in particular, and invest in a model with higher-than-average endurance ratings, and/or a longer warranty. Samsung’s Pro drives, for instance, typically have high endurance ratings and long warranties. But again, the vast majority of computer users should not have to worry about a drive’s endurance.

Do you need a drive with 3D flash? And what about layers?

Here again is a question that you don’t have to worry about unless you’re curious. The flash in SSDs used to be arranged in a single layer (planar). But starting with Samsung’s 850 Pro in 2014, drive makers began stacking storage cells on top of each other in layers. Samsung calls its implementation of this tech “V-NAND” (vertical NAND); Toshiba calls it “BiCS FLASH.” Most other companies just call it what it is: 3D NAND. As time progresses, drive makers are stacking more and more layers on top of each other, leading to denser, more spacious, and less-expensive drives.

At this point, the vast majority of current-generation consumer SSDs are made using some type of 3D storage. The latest drives often use 96-layer NAND. But apart from looking at small letters on a spec sheet or box, the only reason you’re likely to notice that your drive has 3D NAND is when you see the price. Newer 3D-based drives tend to cost significantly less than their predecessors at the same capacity, because they’re cheaper to make and require fewer flash packages inside the drive for the same amount of storage.

What about 3D XPoint/Optane?

3D XPoint (pronounced “cross point”), created in a partnership between Intel and Micron (maker of Crucial-branded SSDs), is an emerging storage technology that has the potential to be much faster than any traditional flash-based SSD (think performance closer to DRAM), while also increasing endurance for longer-lasting storage.

While Micron is heavily involved in the development of 3D XPoint, and intends to eventually bring it to market, as of this writing Intel is the only company selling the technology to consumers, under its Optane brand. Optane Memory is designed to be used as a caching drive in tandem with a hard drive or a slower SATA-based SSD; the Optane 900P (an add-in card) and 905P are standalone drives; and the Intel 800p can be used as either a caching drive or a standalone drive (though its cramped capacities make it better suited to the former).

Optane drives have much potential, both on the ultra-fast performance front and as a caching option for those who want the speed of an SSD for frequently used programs but the capacity of a spinning hard drive for media and game storage. But it’s still very much a nascent technology, with limited laptop support, low capacities and high prices. At the moment, 3D XPoint is far more interesting for what it could be in the near future than for what it offers to consumers today. However, if you have a lot of money to spend, the Intel Optane 905P is the fastest SSD around.

Bottom Line

Now that you understand all the important details that separate SSDs and SSD types, your choices should be clear. Remember that high-end drives, while technically faster, won’t often feel speedier than less-spendy options in common tasks.

So unless you’re chasing extreme speed for professional or enthusiast reasons, it’s often best to choose an affordable mainstream drive that has the capacity you need at a price you can afford. Stepping up to any modern SSD over an old-school spinning hard drive is a huge difference that you’ll instantly notice. But as with most PC hardware, there are diminishing returns for mainstream users as you climb up the product stack.
