Choosing an M.2 SSD: Data Rot, PCIe, NVMe, and Other Issues

As detailed in another post, I wanted to add a solid state drive (SSD) to the M.2 slot in my laptop. The laptop already had a hard disk drive (HDD) in its standard 2.5″ drive bay, so there was a question of how to arrange files across these two drives. This post discusses the questions and issues I encountered as I sought the best configuration.

Contents

Introduction
NAND: SLC, pSLC, MLC, TLC, QLC
Data Rot
Over-Provisioning
The Ideal HDD/SSD Combination
SATA vs. PCIe
AHCI vs. NVMe
Machine-Specific Hardware


Introduction

I started with a search that led to a Lifewire article (Kyrnin, 2017). That article introduced several points of note:

  • An SSD plugged into an M.2 connector on a laptop’s motherboard might be using either SATA or the potentially much faster PCI-Express (PCI-E or PCIe) specification.
  • SSDs and operating systems now generally supported the AHCI interface, but they might also support the faster NVMe interface. Windows 10 now supported PCIe and NVMe; older versions of Windows might, too, with the necessary drivers.
  • On some systems, the machine might require a BIOS upgrade to boot an M.2 drive.
  • Especially on desktop computers, using the M.2 slot could make four or more SATA slots unavailable.

Computer Shopper (Burek, 2017) briefly covered the historical transitions from SSDs in the 2.5″ format, designed to fit into spaces formerly occupied by laptop HDDs, to the mSATA format, designed to plug into a SATA connector but taking less space, to the M.2 form factor, using its own kind of connector. Burek said that M.2 drives “look like sticks of gum studded with NAND modules and a controller chip,” with lengths and widths most commonly captured in the 2260 and 2280 identifying numbers (i.e., 22mm wide and either 60mm or 80mm long). Burek also said that new, high-end M.2 drives would now support PCIe x4 (i.e., four-lane PCIe) and NVMe to yield the best performance, and that motherboards that used Intel’s Z170 or Z270 chipset and/or that supported Intel 7th-generation Kaby Lake CPUs would also be more likely to support PCIe x4 NVMe M.2 drives. Burek noted that desktop motherboards lacking any M.2 connector might be able to use the “M.2 on a card” option, where the M.2 SSD would be designed to plug into an expansion slot, like the ones used by desktop computer video cards — with the advantage of better cooling and reduced self-throttling by SSDs under sustained load.

These introductory materials raised several topics of investigation, including NAND, SATA vs. PCIe, and AHCI vs. NVMe. The following sections explore those topics among others.

NAND: SLC, pSLC, MLC, TLC, QLC

A search led to an NVMdurance page explaining that Toshiba invented NAND (short for negative-AND) flash memory in 1987, and that, until recently, NAND was planar (i.e., memory cells were arrayed on a flat 2D plane), but “now 3D NAND is replacing 2D.” The advantages of 3D included “higher densities at a lower cost per bit.” How-To Geek (Crider, 2017) said that 3D NAND was also known as vertical or V-NAND, and that other advantages of 3D NAND included increased speed, faster data transfer, and lower power usage. Crider felt, however, that the days of cheap 3D NAND were still “a way away.”

eTeknix (Hansen, 2017) said that single-level cell (SLC) NAND had the greatest endurance, took the most space, and had the highest price. On the opposite extreme, triple-level cell (TLC) NAND had the lowest endurance but also the highest capacities for the lowest price. Multi-level cell (MLC) NAND, storing two bits per cell, was midway between SLC and TLC in terms of density and endurance, and was the type most preferred by gamers for its performance. Cactus Technologies (Larrivee, 2017) characterized SLC as “industrial grade,” MLC as “commercial grade,” pSLC (for “pseudo-SLC,” roughly equivalent to MLC) as OEM grade, and TLC as “consumer grade.” The last, Larrivee said, was designed “for lowest cost and in some cases highest performance” but had the most issues with “unexpected power loss, cell cross talk, read disturb, data corruption and data retention as well as difficulties at wide temperature ranges.”

EE Times (Shiah, 2015) said, “SLC NAND can withstand approximately 70K P/E (Program/Erase) cycles before a cell fails. In comparison, MLC NAND endurance is in the 18K P/E range. . . . [and] TLC NAND is in the 1K range” (although, in a later table, Shiah indicated that TLC might actually permit 4.5K). Shiah (of Samsung) explained, however, that those numbers held only in planar (2D) NAND:

A key benefit of 3D NAND technology is that it can use a larger process geometry and still get better densities than planar NAND. Larger memory cells have the benefit of yielding faster, more reliable NAND. It also consumes less power . . . . 3D NAND is what allows TLC to perform at levels comparable to planar MLC. For comparison, Samsung’s 2nd-generation 3D TLC NAND is characterized with 20K P/E cycles, an endurance better than planar MLC NAND.

Shiah said Samsung’s second-generation 3D TLC NAND was also faster than planar MLC. For whatever it was worth, I noticed that, in Tech Report’s (Gasior, 2013) endurance experiment, Samsung’s older products showed the greatest deterioration in heavy use, due to Samsung’s choice of TLC rather than MLC. On the other hand, in a comparison among a handful of M.2 SSDs, I also noticed that Samsung was one of the few offering a five-year warranty.

From the buyer’s perspective, much of this was still very preliminary. A year or more later, IT Buyer’s Resource (2016) said, “[W]e are still in the inception phase of 3D NAND.” Not to say that research had ceased. Among other things, TechPowerUp (btarunr, 2017) noted that Toshiba had recently developed the world’s first 4-bit-per-cell (QLC) NAND; ExtremeTech (Hruska, 2017) covered Western Digital’s announcement of 96-layer NAND; and Samsung announced plans to release 128TB QLC SSDs in 2018.

Data Rot

ElectronicDesign (Beekman & Wong, 2017) offered responses to certain “myths.” I was particularly interested in these:

  • Myth: SLC NAND flash is rated for 100k write/erase cycles and MLC for 10k write/erase cycles. Reality: “[T]hese were the specifications of NAND flash years ago . . . . Typical SLC NAND flash [now] has an endurance of 50-60k cycles (24-nm generation) and MLC NAND flash has an endurance of 3k cycles (15-nm generation).” That latter remark seemed to underscore the contrast between the state of the art generally and that at Samsung specifically (above).
  • Myth: Data in NAND flash always lasts at least one year. Reality: “While the nominal data-retention specification for NAND flash is one year, that’s only when at a specific temperature, write/erase cycle limit, and ECC requirement. The write/erase cycle limit is different for SLC, MLC, and TLC devices.”

That second myth raised a new concern. Were they saying that data would start to evaporate from an SSD that had been sitting around for a while? In response to recent postings by “quite a few media outlets . . . claiming that SSDs will lose data in a matter of days if left unpowered,” AnandTech (Vättö, 2015) offered an explanation of the prevailing data-retention standard. The explanation appeared to be as follows: start with an SSD that has already reached its rated life, in terms of the total number of bytes written. This worn-out drive will still tend to retain its data for a certain number of weeks without power. (It appeared, but I was not certain, that the data would be refreshed if the unit was powered up. I assumed that a turned-off machine would not allow any refreshing current to reach the SSD.) The number of weeks depended on (a) the active temperature of the SSD (I was not sure whether that meant the temperature when the data was saved, or when the machine was last used) and (b) the temperature at which the SSD was stored, as indicated in a chart apparently taken from a JEDEC presentation.

Vättö cited the green cell in that chart as an example: an SSD stored at 30°C (86°F) would retain data for 52 weeks if the SSD’s temperature was 40°C (104°F) when the machine (or data?) was active. The worst case would arise if the data was written while the SSD was cold, and the SSD was then stored hot. So, for instance, suppose my laptop had an old SSD that I used during a low-intensity session (i.e., not heating up the SSD) while working in an air-conditioned home or office setting at 22°C (72°F), and that I then threw that laptop in the trunk of my non-air-conditioned car and set off on my ten-week camping trip in Texas during summer and fall 2013, with air temperatures frequently reaching or exceeding 38°C (100°F) and with trunk temperatures potentially reaching 78°C (172°F). In that case, I might find that the data was gone by the time I got around to looking at the laptop again.
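
To make the chart’s logic concrete, here is a minimal Python sketch of how such a retention table reads: you look up the drive’s temperature during active use against its temperature in unpowered storage. Only the single cell cited above (active 40°C, stored 30°C, 52 weeks) comes from the source; any other values would have to be filled in from the full JEDEC table, which is not reproduced here.

    # Minimal sketch of reading a JEDEC-style retention chart for a worn-out
    # client SSD. Only the one cell cited above is filled in; other values
    # would come from the full JEDEC table (not reproduced here).
    RETENTION_WEEKS = {
        (40, 30): 52,  # active 40°C, stored 30°C -> 52 weeks (Vättö's example)
    }

    def retention_weeks(active_c, storage_c):
        """Return retention in weeks if the chart cell is known, else None."""
        return RETENTION_WEEKS.get((active_c, storage_c))

    print(retention_weeks(40, 30))  # 52
    print(retention_weeks(22, 55))  # None -- cool use, hot storage: the bad case, not in this sketch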

Note, again, that these numbers were for an old and well-worn SSD. Nonetheless, it seemed prudent to conclude that I should keep a good backup. In the words of PCWorld (Jacobi, 2015),

SSDs, and NAND in general, are not suitable for archiving data. But you absolutely do not have to rush back from vacation or hire someone to turn on your PC every few days to avoid losing the data on your SSD.

To archive data, store it online, store it to hard drives (write the data, unplug them, and store them in a safe place), or even use M-DISC write-once archival optical media. Yeah, and you thought optical was dead. Also, always follow the rule of three and keep a working copy of your data, a backup copy, and a copy of the backup.

ExtremeTech (Hruska, 2015) said, “With SSDs, you can’t necessarily depend on more than 12-24 months of longevity. . . . SSDs . . . may need to be stored in climate-controlled environments. . . . [F]or now, spinning disks (or in some cases, tape backup) offer a longevity that SSDs simply can’t match.” (See also Quora.)

Over-Provisioning

A Kingston report explained that SSD manufacturers could reserve a portion of the SSD for purposes of over-provisioning (OP), so as to improve performance and increase SSD endurance. Endurance had become less of an issue, in the wake of research demonstrating that contemporary SSDs were not especially fragile, but performance would always be a topic of interest.

The general idea, in OP, was to leave a portion of an SSD unused. Seagate offered a somewhat technical explanation of the difference between an SSD and an HDD in this regard. It appeared to me that at least one of Seagate’s graphs was incorrect; I wrote to them about that. PNY (2015) provided a simpler summary:

When the drive fills up, the controller needs extra capacity to write new data to the SSD before garbage collecting the old data. The drive uses this additional space, called OP, in order to improve functionality of various processes. The larger the OP, the more available capacity there is to move the valid data without having to re-write multiple blocks. This means less garbage collection, which in turn means better performance, higher endurance, and a more reliable SSD.

Or, in the words of How-To Geek (Hoffman, 2013), “A nearly full solid-state drive will have much slower write operations, slowing down your computer.”

As noted in the Kingston report, manufacturers themselves would typically build some OP into their drives. Kingston said the amount of OP was calculated using the difference between an SSD’s physical capacity and the capacity available to the user. For this purpose, “physical capacity” was evidently the same as advertised capacity. So, for instance, if the user could access only 60GB on a 64GB SSD, OP would be 7% (i.e., 4/60). I would eventually find that a Samsung 850 Evo SSD, advertised as providing 500GB of data storage, would actually only make 466GB available to me. After subtracting a bit of that difference due to formatting, it appeared that this SSD likewise allowed for 7% OP.
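
As a quick check on that arithmetic, here is a small Python sketch of Kingston’s formula, along with the gigabyte-vs-gibibyte conversion that, as best I could tell, accounts for most of the 500GB-to-466GB gap Windows reports (any factory OP on such a drive would come out of raw NAND capacity that the user never sees).

    # Kingston's over-provisioning formula: OP% = (physical - user) / user.
    def op_percent(physical_gb, user_gb):
        return (physical_gb - user_gb) / user_gb * 100

    print(round(op_percent(64, 60)))  # ~7% for the 64GB/60GB example above

    # Side note on the 500GB vs. "466GB" observation: Windows reports capacity
    # in binary gibibytes (GiB), while drives are advertised in decimal
    # gigabytes, so 500GB is about 465.7GiB before any over-provisioning.
    print(round(500e9 / 2**30, 1))  # ~465.7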

The Kingston report indicated that typical OP percentages would be 7% in a read-intensive application (typical for clients) and 28% in a write-intensive application (found in some enterprise settings). Seagate said that, actually, “an SSD’s performance begins to decline after it reaches about 50% full”; but Seagate also acknowledged that it could be expensive and/or impractical to leave an SSD half-empty. The 7% figure appeared to represent the point where OP would tend to be essential to avoid sharp performance impairment.

These remarks raised the question of whether users could contribute additional OP space, assuming they had the space to spare. On this, my searches and reading over a period of several hours did not lead to a clear, recent, authoritative statement on the state of the art. The best practice appeared to be as follows: if the user wanted to make SSD space available for OP, s/he should partition the SSD to leave an unformatted partition as the last partition on the drive. Evidently the SSD would know that such space would be unavailable to the user, and would then feel free to use that space as needed.
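
Here is a small Python sketch of the arithmetic involved, under the assumption (per the practice just described) that the OP share is simply carved off the end of the drive as unpartitioned space; the partitioning itself would be done with whatever disk-management tool the operating system provides.

    # Sketch: how much of a drive to leave unpartitioned (trailing free space)
    # to reach a chosen user-added over-provisioning share. The partitioning
    # itself happens in the OS's disk manager; this is only the arithmetic.
    def split_for_op(usable_gb, op_fraction=0.10):
        """Return (partitioned_gb, unpartitioned_gb) for the requested share."""
        reserved = usable_gb * op_fraction
        return usable_gb - reserved, reserved

    partitioned, reserved = split_for_op(466, 0.10)  # e.g., the ~466GB drive above
    print(f"Partition {partitioned:.0f}GB; leave {reserved:.0f}GB unallocated")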

It appeared that allowing more space might have been a good idea for earlier SSDs. For instance, citing AnandTech, Hoffman (2013) suggested not filling the SSD more than 75%. But according to Samsung (2014),

[U]nder a light workload in a client PC application, users don’t need to set additional OP space. However, under a heavy workload (for example, a server, data center or heavy workload client PC applications), a minimum of 6.7% OP is recommended and over 20% and even 50% is being used. . . .

Guaranteeing free space to accomplish the NAND management tasks (GC, wear-leveling, bad block management) means the SSD does not have to waste time preparing space on demand, a process that requires additional time as data is copied, erased and recopied. . . .

Samsung said that, according to its own tests (details unspecified), allowing 6.7% OP could nearly double performance, and 28% OP could double it again. Samsung continued:

Depending on the SSD product, some are already over-provisioned by the manufacturer and users cannot access and control it. However, users can set additional OP areas using several methods – using utility tools (hdparm, etc.), setting unallocated partitions on the operating system (OS) and using Samsung Magician software (SW).

Although Samsung did not say so, TechSpot (Smith, 2013) indicated that SSDs could also engage in dynamic over-provisioning (DOP). Smith described DOP as including “any user space not consumed by the user.” An AnandTech discussion suggested that DOP could eliminate the need for the distinct partition specified by Samsung: apparently SSDs could now use any unused space in any partition. (One participant in a Reddit discussion pointed out that free space within an encrypted partition would not be available as free space for this purpose; the SSD would see it as being entirely filled with (random) data.)

As of this writing, I was not sure whether DOP was supported by SSD manufacturers other than possibly Seagate; a search led to very few references to it in the current year. There seemed to be a need for testing to determine the performance difference, with and without an unformatted drive space set aside for OP, under a sustained workload on comparably full SSDs over a period of several hours. Until I encountered that sort of testing, it appeared that I might be well advised to set up such a partition, covering perhaps 5-10% of the SSD’s available space.

Finally, since I was interested in Samsung’s SSDs, I was curious about the foregoing reference to Samsung Magician software. I had seen some comments suggesting that Magician was not particularly helpful. For instance, 192 ratings of Magician at Softpedia gave it an average of only 3.5 stars. Samsung itself billed the software as facilitating firmware updates, performance benchmarks, drive diagnosis, read/write optimization, and secure erase. CustomPCReview (Chen, 2017) went through the features of version 5.1 and suggested that it represented a complete overhaul. I decided that, if I got a Samsung SSD, I would give Magician a try.

The Ideal HDD/SSD Combination

The decision on whether to use an HDD and/or an SSD would depend on the number of drive bays available. While a laptop would typically have only one 2.5″ and one M.2 bay, Computer Shopper (Burek, 2017) observed that some recent desktop motherboards offered not only a number of SATA connectors but also two different M.2 connectors: one for SATA, and one for PCIe (below).

As I thought about HDD backup, it seemed I would have two ways to connect an HDD to my laptop. First, the machine already had an internal HDD. Second, I had an external dock, designed to connect desktop or laptop internal SSDs and HDDs to the computer via USB cable. I had HDDs that I could use for onsite or offsite backup in this way.

This laptop did not have an optical drive. I owned an internal (desktop) Blu-Ray burner, but (unlike an HDD) I probably couldn’t stick it in the external dock: internal optical drives were not necessarily designed to operate vertically, and the dock wasn’t designed to be turned on its side so that the burner could operate horizontally. But I thought I might have a solution. As described in another post (in development), I was in the process of acquiring and testing a SATA extension cable that, if successful, would connect the dock to the burner, lying next to the dock on the tabletop.

As detailed in a different post, I had a good backup scheme by which I could back up the laptop’s contents to a drive in the external dock. Meanwhile, I knew that the (expensive) speed and data longevity of the SSD suited it for active use, not for bulk archiving. I had assumed that I would leave the laptop’s internal HDD in the laptop, and that it would be for active use; but now I thought I might reconsider those assumptions.

If I removed the HDD from the laptop, I could sell it or use it for other purposes. This would free up its drive bay, so that I wouldn’t have to use an M.2 SSD; I could buy an SSD designed to fit into a standard 2.5″ HDD bay. Having one less drive (particularly a mechanical drive) would reduce power demands, increase laptop battery life (see LaptopMag, 2016, finding a 25% improvement from replacing HDDs with SSDs), and make the laptop quieter and less vulnerable to mechanical shock and/or temperature extremes. In this scenario, I would need an SSD big enough to contain the data I expected to be working with, and I would want to be making backups via the external dock pretty often. On the other hand, all of my data would be on an SSD, and then I might be concerned about the potential difficulty, if not impossibility, of securely wiping an SSD when I finally sold the SSD or the laptop.

Alternatively, the original plan was that both internal drives would hold data. The SSD would hold the data most likely to be worked on; the HDD could serve more of an archival purpose; I could move files between the two as needed, as I turned my attention to different projects; and the external backup would cover both drives. In this case, I probably wouldn’t need such a big SSD.

Or, as another possibility, I could use a big SSD, and use the internal HDD as (a) an archive, for less frequently used files that wouldn’t fit onto the SSD, and also (b) a backup device, perhaps triggered automatically by backup software. In one variation, the external HDD might never back up the SSD directly; it might just be configured to back up everything on the internal HDD, which could include automatic backup(s) of the SSD.

Depending on the scenario chosen, I might want to power down the HDD when it was not in use, so as to reduce noise and battery drain. For this, the Power settings in Windows appeared to offer only a single timeout covering all drives, though each drive would power down independently, according to when it was last used. But if I were doing frequent incremental backups from SSD to HDD, either the HDD would rarely power down or it would be stopped and started multiple times a day, neither of which would be good for HDD longevity.

PC World (Spector, 2015) suggested the alternative of a hybrid drive or network-attached storage. Based on previous experience, I rejected the latter as expensive, relatively complicated, and — when working via the NAS private cloud — dependent on an Internet connection (not to mention slow, when trying to access big files online). Spector noted that there was more than one kind of hybrid drive. The highest-rated 1TB hybrid drives at Amazon were all Seagate models featuring an 8GB SSD; the most appealing of these cost about $80. According to PC Advisor (Martin, 2017), “Seagate’s SSHDs intelligently learn which applications you use most, and try to store those in the solid-state storage for faster loading times and better overall performance.” Martin said that data could also be cached, but “performance still falls far short of a proper SSD.” He noted that there could also be a problem of fit, apparently because the NAND chips in an SSHD would be layered on top of the HDD, making the device too thick for some laptops. Contributing to my negative impression, How-To Geek (Hoffman, 2014) said, “As solid-state drive prices continue to decline, we expect to see less hybrid drives.”

I wanted the option of installing at least two operating systems on the SSD, plus various system files (e.g., paging file, based on the informed opinion and experience, so far, that warnings about wearing out the SSD were overblown), plus my customized Start Menu, plus caching, plus a possible virtual machine. A 256GB SSD could probably handle that. But since my laptop could accommodate a maximum of 20GB of RAM, and since I tended to keep many tabs open in Chrome and Firefox, along with (usually) several other programs, experience (and system monitoring utilities) suggested that RAM would tend to be spoken for. So if I wanted fast access to data files, it appeared that the question was whether to choose a 500GB or 1TB SSD. Among the most highly rated 1TB SSDs at Amazon, prices (with tax) were in the vicinity of $300-450. That was pretty rich, for a budget laptop. By contrast, highly rated ~500GB M.2 SSDs could be had for $150-200, at a premium of only about $60 over their ~250GB M.2 SSD counterparts. The price would be slightly lower if I opted to dispense with the HDD and chose the 2.5″ form factor for the 500GB SSD.
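
For what it was worth, a rough cost-per-gigabyte comparison of those price brackets (using illustrative midpoints, not quotes for particular models) suggested why ~500GB looked like the sweet spot:

    # Rough cost-per-gigabyte comparison using the ballpark prices above
    # (illustrative midpoints only, not prices of specific models).
    options = {
        "250GB M.2 SSD": (250, 115),   # ~$60 less than the 500GB class
        "500GB M.2 SSD": (500, 175),   # ~$150-200
        "1TB M.2 SSD":   (1000, 375),  # ~$300-450
    }
    for name, (gb, usd) in options.items():
        print(f"{name}: ${usd / gb:.2f}/GB")  # 500GB comes out cheapest per GB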

I decided that the HDD was already there, that it would have its uses, and that, if I could eventually make it nonessential, I could remove it without disrupting the system files on the M.2 drive. I could help to make it nonessential by eliminating unnecessary files and/or by moving its less-used contents to an external drive. I wouldn’t mind its power draw when I had the laptop plugged in, which would often be the case; I would just want to look for ways of powering down the HDD when it was not needed while I was running on battery.

So far, my thinking had involved two separate storage devices — the HDD and the SSD, each holding files. There was another possibility. In the course of my research, I encountered occasional references to disk caching. The idea was that it should be possible to combine a fast and relatively small and expensive SSD with a slow, large, and inexpensive HDD, in such a way that intelligent software would use the SSD to anticipate which data should be drawn from the HDD and made available to the CPU. To some extent, that did occur anyway; the question was whether it could occur on a large scale, so as to reduce HDD delays significantly.

Unfortunately, the consensus seemed to be that disk caching was helpful but not great. The most promising exception seemed to be Intel’s new Optane SSDs. I was encouraged when I read the PC World (Ung, 2017) account of how an Intel Optane PCIe M.2 SSD (16GB for $50, 32GB for $80 at Amazon) could provide HDD caching that was so fast and efficient as to compete with an SSD. In other words, I could access the data in a 1TB HDD almost as quickly as if I had been using a 1TB SSD, at a fraction of the price.

The problem was that, according to How-To Geek (Crider, 2017), Optane had certain hardware drawbacks. I would have to use a 7th-generation Intel Core i3, i5, or i7 CPU and an Optane-compatible motherboard. Also, unfortunately, Optane would work only on the primary (boot) partition, not other partitions. AnandTech (Tallis, 2017) reported that their Optane sample died in the first day of testing. As testing permitted, Tallis found that “the Optane cache delivers a remarkable improvement over just a hard drive” and even “breaks a few records” but unfortunately “lacks any meaningful power saving mode.” Tallis concluded, “I wonder whether it may all be too little, too late. . . . Optane Memory enters a market where the price of flash SSDs means there’s already very little reason for consumer machines to use a mechanical hard drive as primary storage.”

Apparently Optane would require a motherboard capable of handling an NVMe SSD. (See below for further discussion of NVMe.) A quick look at Amazon suggested that NVMe-compatible 15″+ laptops from recognized PC makers presently tended to start at prices above $1,000. In other words, there seemed to be a logical problem. If people were on a budget compelling them to use HDDs rather than large SSDs, they would not tend to be able to afford the kind of laptop that could run an Optane SSD capable of speeding up that HDD — and if they could, they would buy an SSD instead.

SATA vs. PCIe

To understand the difference between SATA and PCIe connections, I ran a search. This led to a MakeUseOf article (Lee, 2016) explaining that SATA was the tried-and-true connector that would work in virtually any computer built in the last decade — but SATA 3.0 (the most common version) had a practical maximum data transfer speed of 600 megabytes per second (MB/s). Lee said that was still “pretty fast” and that it would “suffice for most home users.” By contrast, PCIe 3.0 had “an effective transfer speed of 985 MB/s per lane”; and since there could be up to 16 lanes, “you’re looking at potential transfer speeds of 15.76 GB/s.” At the consumer level, Lee said, the practical maximum speed would be more like 4GB/s, and the difference would be noticeable primarily when working with large (or, presumably, many) files.
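
A minimal sketch of the arithmetic behind Lee’s figures (the 985 MB/s per-lane and 600 MB/s numbers are from the article; the lane counts are simply the common configurations):

    # The arithmetic behind the SATA vs. PCIe figures cited above.
    SATA3_MBPS = 600             # practical ceiling for SATA 3.0, per Lee
    PCIE3_MBPS_PER_LANE = 985    # effective per-lane rate for PCIe 3.0, per Lee

    for lanes in (1, 2, 4, 16):
        total = PCIE3_MBPS_PER_LANE * lanes
        print(f"PCIe 3.0 x{lanes}: ~{total} MB/s ({total / SATA3_MBPS:.1f}x SATA 3.0)")

    # x4 (the usual M.2 maximum) is ~3,940 MB/s -- roughly the ~4GB/s consumer
    # ceiling Lee mentions; x16 is ~15,760 MB/s (i.e., ~15.76 GB/s).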

Ars Technica (Cunningham, 2015) offered a photo displaying examples of different “module keys,” i.e., different ways in which M.2 connectors (gold pins at the bottom) could be configured. Apparently the M.2 socket could accommodate them all, and would respond differently to different configurations. On the subject of physical dimensions, Computer Shopper (Burek, 2017) pointed out that there could also be a thickness issue, if the M.2 SSD was covered by a big (usually colorful) heat sink that would make the unit too thick to fit inside a slim laptop case.

Cunningham said, “M.2 is interesting not just because it can speed up storage with PCI Express lanes, but because it can use a whole bunch of different buses too.” So (as suggested by the photo) an M.2 SSD could be SATA III, PCIe x2, or PCIe x4. (Note: there also is/was a standard known as SATA Express. According to Lifewire (Kyrnin, 2017), SATA Express became official in 2013 but did not catch on, and may never catch on, mostly because M.2 has proved to be a more useful option.)

As I considered SATA vs. PCIe, the application that came to mind was video editing. On that topic, a search led to at least 1 2 3 discussions expressing agreement that, yes, the faster the SSD, the better the video editing. Otherwise, though, UserBenchmark (anonymous, 2017) agreed with Lee: “For most consumer uses of SSDs this [SATA 3.0 limit of 600 MB/s] is absolutely adequate.” Indeed, that was substantially the conclusion that Develop3D reached for most CAD/CAM/CAE purposes. UserBenchmark offered, as a supporting example, a comparison of an OCZ PCIe SSD against a Samsung SATA 3.0 SSD. In that comparison of benchmarks from a total of about 11,500 Samsung users and only 215 OCZ users (where the latter probably tended to be extreme users, given the $190 vs. $535 price contrast), the OCZ device was 55% faster.

Dell provided a Knowledge Base page on PCIe SSDs. They said that Windows 7 through 10 could boot from such a device (see Windows 7 Hotfix), but it could be more difficult with 32-bit versions and would only work with UEFI, in place of the older BIOS (see How-To Geek, 2017). They recommended making sure the most recent BIOS update was installed, configuring the BIOS to use the EFI boot loader, and disconnecting all non-boot drives during installation. That Dell page provided three different methods for setting up Windows to boot from the PCIe SSD. If booting from the PCIe SSD proved difficult, a workaround (incidentally simplifying the computer’s partitioning scheme) would be to use the 2.5″ bay for a bootable SATA drive holding the operating system, and use the M.2 PCIe SSD for data files. Assuming one Windows installation plus an optional Linux installation (although the latter could also be housed on a USB flash drive), at this writing a highly rated 128GB 2.5″ SATA SSD would be available for around $60 new, $45 used.

AHCI vs. NVMe

Wikipedia described the Advanced Host Controller Interface (AHCI) as an Intel technical standard for SATA. Elsewhere, Wikipedia indicated that PCIe devices could use AHCI as well as the “much faster” Non-Volatile Memory Express (NVMe). Wikipedia further said that NVMe was designed to eliminate certain inefficiencies arising from AHCI’s original focus on HDDs as distinct from SSDs. Although Wikipedia oddly offered no summary of AHCI’s history, other sources indicated that it originated circa 2003, as an improvement upon the still older IDE, whereas NVMe version 1.0 was released in 2011.

Dell (2017) reported that, as a PCIe standard, NVMe required UEFI. Dell provided links to a number of articles on fixes for complications that might arise when using NVMe on Windows 7. UserBenchmark said, “NVMe delivers better performance and reduced latency and is scalable, but at a price!”

PCWorld (Jacobi, 2015) characterized NVMe as “the insanely fast future for SSDs.” As of 2015, unfortunately, Jacobi perceived near-term hardware constraints on adoption. In that vein, Tom’s Hardware (Ramseyer, 2015) attempted an apples-to-apples comparison of the 2.5″ SATA Samsung 850 Pro against two Samsung SM951 M.2 SSDs. Both of the M.2 SSDs were PCIe 3.0 x4; one used AHCI and the other used NVMe. Ramseyer reached several conclusions, including (1) both M.2 SSDs were noticeably faster than the 2.5″ SATA SSD in various file-intensive (e.g., read, write) operations, (2) the NVMe M.2 SSD was only slightly faster than the AHCI M.2 SSD overall, and (3) as of 2015, it appeared that only the newest motherboards were going to be NVMe-compatible (i.e., bootable). AnandTech (Vättö, 2014) agreed with point (2):

Obviously enterprise is the biggest beneficiary of NVMe because the workloads are so much heavier and SATA/AHCI can’t provide the necessary performance. Nevertheless, the client market does benefit from NVMe but just not as much. . . . [E]ven moderate improvements in performance result in increased battery life and that’s what NVMe will offer.

At least 1 2 3 sources concurred, noting particularly that everyday performance (e.g., gaming, Windows booting) would enjoy little if any appreciable benefit from NVMe rather than AHCI. Summarizing what many others seemed to be saying, and expressing it in terms that carried some real-world weight, TechSpot (2016) offered a video comparing a Samsung 960 Evo NVMe M.2 SSD, a Crucial MX300 2.5″ SATA SSD, and a Western Digital (WD) Red Pro 3.5″ HDD, performing various tasks on a high-end computer, with times and links to the relevant segments of the video for each test.

In those comparisons, the NVMe SSD saved significant amounts of time (against the SATA SSD) only in the file extraction test. The reviewer concluded that, under the right conditions, the NVMe device could provide “a noticeably better experience” than a SATA SSD. He defined “right conditions” as including a heavy (especially file-intensive) workload and a computer capable of using the NVMe SSD’s speed. He pointed out that the NVMe device would contribute nothing further to gaming, once the game was loaded, but might be worth its roughly doubled price (as of 2016) for content creators (e.g., video editors) and for applications involving a lot of file de/compression. Otherwise, he said, either the NVMe or the SATA SSD would offer a tremendous improvement over the HDD, and the SATA SSD would generally be fine for most purposes. A participant in a LinusTechTips discussion summarized the situation by saying that NVMe SSDs were intended for “people with HUGE data transfer requirements. Like dozens to hundreds of gigabytes daily.” Indeed, against NVMe, participants in an AnandTech discussion shared these impressions:

I generally see worse performance in laptops on NVME drives and I wasn’t sure why.

I think even with the power setting set to ‘battery saver’, NVMe drives simply use more power than their SATA counterparts at idle and load. If I were going to install a NVMe drive in a laptop (which I personally wouldn’t do), it would have to be a 960 EVO or PRO. Drives like the OCZ RD400, Patriot Hellfire, and Plextor M8Pe just are too hot and use significantly more power. A person would almost certainly see throttling in a laptop from those drives due to lack of cooling.

Machine-Specific Hardware

If I did want an NVMe SSD, there would still be the question of whether my machine could handle it. To find out whether the computer was NVMe-compatible, various sources suggested running CrystalDiskMark or some other system information tool. For instance, participants in an Acer forum used HWINFO64 > Bus > PCI Bus #0 > click on entries referring to “PCI Express Root Port” > look in right pane at PCI Express > Version to determine whether a particular system was running PCIe 3.0, as illustrated by a screenshot from my desktop computer.

As shown in that screenshot, this computer was running PCIe version 2.0. According to that Acer forum, “NVMe SSD uses PCIe 3.0×4”; my version 2.0 would not support NVMe. A Tom’s Hardware answer ambiguously said, “check [in HWINFO64, to see whether] you have PCI Express version 3.0 and Maximum Link Width is 2x or 4x, if it is then you can use a PCIe/NVMe SSD.” In a discussion of PCI slots, participants in a Hardforum thread made me wonder whether options in the BIOS/UEFI were, or could be, adjusted to revise the M.2 setting. A Geektech webpage reported an instance where such options would not be available until the manufacturer developed the requisite BIOS update. An Apple discussion noted that (at least on their systems) adding a lower-rated SSD could reduce the reported link width from x4 to x2.

A different Tom’s Hardware answer pointed specifically to one motherboard’s BIOS setup > Advanced menu > Onboard Devices Configuration > PCI-EX16 Slot entry. This raised the question of whether the computer would have a PCIe x16 expansion slot that I could use instead of the M.2 slot. As I looked again at my desktop computer screenshot (above), I saw an entry referring to PCI Express x16 Controller. This was consistent with documentation indicating that the motherboard had one PCIe 3.0 x16 slot. The idea seemed to be that, BIOS and drivers permitting, I could make full use of a PCIe NVMe x4 SSD mounted on a card and inserted into that expansion slot, even if the M.2 slot was limited to PCIe 2.0 x2. It seemed the preferred way to do this might be to buy an NVMe adapter and insert whatever M.2 SSD I wished. (I did not investigate the question of whether or how that SSD would be bootable.)

On my laptop, I suspected that the lack of any mention of NVMe or PCIe in the user’s manual or product website was the manufacturer’s way of telling me that the M.2 slot did not support those standards. That was also the drift of my interaction with tech support for that machine: they weren’t sure — in fact, the lady had never heard of NVMe.

Even without NVMe, there was still the option of using a PCIe x2 SSD — or, for that matter, an NVMe SSD restricted to PCIe x2 by motherboard limits. My brief search suggested that these would be expensive in any case; it seemed even more questionable to put them into a machine that could not make full use of them. I was also unsure whether PCIe x2 would run into the potential heat and power issues noted above.

In addition, there was the practical question of what the laptop would actually support. It was a new machine — in fact, I had not yet turned it on. I had a plan for how I wanted to proceed with that. The plan included not booting Windows until I was ready. Thus, I decided to explore the machine’s PCIe capabilities by using a YUMI drive to boot into Linux, where I would run the HWINFO command to collect the desired information. For more on that, please see the other post.
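
In case it helps anyone attempting the same check, here is a rough Python sketch of the sort of thing I had in mind for that Linux session. It assumes a mainline Linux kernel with sysfs mounted, where each PCI device exposes its negotiated and maximum link speed and width (PCIe 2.0 runs at 5 GT/s per lane, PCIe 3.0 at 8 GT/s); lspci -vv or the hwinfo tool would report the same information.

    # Sketch: list each PCI device's current/maximum PCIe link speed and width
    # from sysfs, to see whether any slot offers PCIe 3.0 and how many lanes.
    from pathlib import Path

    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        def read(attr):
            try:
                return (dev / attr).read_text().strip()
            except OSError:
                return "n/a"  # attribute missing or not applicable to this device
        print(dev.name,
              "speed", read("current_link_speed"), "/", read("max_link_speed"),
              "width x" + read("current_link_width"), "/ x" + read("max_link_width"))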


