Ways to Securely Erase a Solid State Drive (SSD)

I was running Windows 7 on a machine with a solid state drive (SSD) that I wanted to securely erase. People commonly sought secure erasure before selling a used drive or otherwise making it available to others: a casually erased drive might still contain data that others could retrieve.

The problem confronting me was that, as I had lately learned, secure erasure of an SSD was entirely different from secure erasure of a hard drive (HDD). It would apparently require new tools and techniques. This post discusses the secure SSD erasure options I reviewed. The Summary (at the end of this post) offers a brief recap. Another post applies insights from this post to a specific SSD wiping task.

DBAN and Other Drive Wiping/Overwriting Tools

It didn’t take much browsing to discover that simply running an HDD wiping program like DBAN or Eraser would not provide a reliable solution. I did not master the technical details, but the general idea seemed to be that the SSD’s wear-leveling feature would route the wiping program’s writes through a flash translation layer.

What this meant, in effect, was that the wiping program might think it was writing a set of ones and zeroes to location A on the SSD, but the flash translation layer would instead send those ones and zeroes to locations B, C, and D, so as to ensure that no one part of the SSD got worn down too quickly by too many writes. So while the wiping program might eventually succeed in overwriting much of the SSD, it would not do so in the systematic way its programmer intended, and would probably not reach some portions of the SSD. Belkasoft seemed to say that, in addition, those ones and zeroes might clutter up the SSD and thereby degrade its performance. I wasn’t sure, but it seemed those concerns might be at least somewhat addressed by an approach in which I would generate random junk files to fill the drive’s free space, and then format the drive to clean off the junk. I assumed it would be better to create small random files for wiping purposes, so as not to leave a large chunk of drive space unfilled.
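
A minimal sketch of that junk-filling idea follows. This is purely illustrative: the function name and the chunk-size and threshold knobs are my own inventions, and it should be pointed at a scratch directory on the target volume, not run casually.

```python
import os
import shutil

def fill_free_space(directory, chunk_size=1 << 20, keep_free=1 << 20):
    """Write files of random bytes until the volume's free space drops
    to roughly keep_free bytes. Smaller chunk_size values leave less
    unfilled space at the end, per the reasoning above."""
    paths = []
    i = 0
    while True:
        free = shutil.disk_usage(directory).free
        size = min(chunk_size, free - keep_free)
        if size <= 0:
            break
        path = os.path.join(directory, f"junk_{i:06d}.bin")
        with open(path, "wb") as f:
            f.write(os.urandom(size))
        paths.append(path)
        i += 1
    return paths

# Afterwards, delete the junk files (or format the volume) to finish.
```

Whether the flash translation layer actually lets this reach every cell is, of course, the very question raised above.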

It was also often said that wiping programs would reduce the SSD’s longevity by conducting excessive writes. Wiping programs would typically perform only a handful of writes, on drives built to survive 1,000 if not 10,000 writes on each memory cell. But perhaps the SSD’s firmware would sometimes misdirect many such writes to a single spot in the SSD’s chips, prematurely aging that particular spot.

It appeared that failure to achieve a secure wipe would also follow, for the same reasons, from attempts to use SDelete or the cipher /w:D or the format d: /fs:NTFS /p:2 command to wipe free space on drive D (after deleting all files, so that the whole drive would be free space). Presumably cipher and the like would not run into this problem on a dumb USB flash drive, as distinct from a too-clever SSD.

Secure Erase Tools Provided by SSD Manufacturers

To achieve a secure wipe, many posts advised using the ATA Secure Erase command. This, however, was a mystery: what exactly was that command? Nobody seemed to say.

Eventually, I found a MakeUseOf article that seemed to be telling me that the Secure Erase command was a firmware capability, internal to the SSD and accessed by the drive tools offered by some SSD manufacturers. I assumed this would be the case in, for example, the secure-erase option found in the Intel Solid-State Drive Toolbox. I did not have an Intel SSD, however, and there were indications that using the wrong tool could “brick” an SSD (i.e., turn it into a useless block). I had a Kingston SSD, and Kingston’s Toolbox 2.0 offered no such secure-erase option. But even if there had been a Kingston Secure Erase tool, there would be the concern (below) that it might not work as advertised.

Parted Magic

It seemed that another way to use the ATA Secure Erase command, also cited in that MakeUseOf article (and in several others), was to reboot with Parted Magic ($4.99) and use its Erase Disk option. In the version of Parted Magic that I used (pmagic_2014_02_26.iso, installed on a YUMI drive), the Erase Disk option was available as a desktop icon and also via Start > Disk Management > Erase Disk; in the version of Parted Magic used by MakeUseOf, apparently the option was Start > System Tools > Erase Disk. Within Erase Disk, there were multiple (in my case, seven) options. The last of those options was, “Internal Secure Erase command writes zeroes to entire data area.”

That description provoked some doubts. It seemed to conflict with a forensic webpage that said, “There is no internal linear mapping of sectors in a SSD” and “For practical purposes, it is impossible to overwrite new data in place of old data.” In this light, the Parted Magic claim to write zeros to the entire data area sounded much like the approach taken by hard drive utilities like DBAN (above). Maybe Parted Magic was going to attempt to write zeroes to the entire area, but there was no telling where on the SSD those zeroes would actually be entered. Or maybe they meant that they would be using the ATA Secure Erase command to write zeroes, if indeed that was what the ATA Secure Erase command did.

To explore further, I booted Parted Magic on a laptop containing an mSATA SSD. I went into the Erase Disk > Internal Secure Erase option. In the Secure Erase dialog that appeared, tooltips (i.e., pop-up explanations that appeared when I let the mouse hover on one of the listed drives) estimated 228 minutes to securely erase the laptop’s 1TB HDD, and 60 minutes to securely erase the unit’s 240GB SSD. In other words, Parted Magic seemed to be estimating an erasure rate of about 250GB per hour. It did not make sense that data would be erased from an HDD and an SSD at the same rate. That is, Parted Magic did not seem to be making special arrangements for an SSD.
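
Working the tooltip numbers through (my arithmetic, not Parted Magic’s):

```python
# Both tooltip estimates imply roughly the same erase rate, which is
# what made the SSD figure suspicious: a firmware-level secure erase
# should not proceed at HDD overwrite speed.
hdd_rate = 1000 / (228 / 60)   # 1 TB in 228 min -> GB per hour
ssd_rate = 240 / 1             # 240 GB in 60 min -> GB per hour
print(round(hdd_rate), round(ssd_rate))   # prints: 263 240
```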

Moreover, the tooltip provided estimates for both Normal and Enhanced erasure methods (discussed in more detail below). These options would normally be expected to proceed at very different rates. Yet for the SSD, the tooltip provided the same estimate — 60 minutes — for both methods. That was not consistent with a forensic study (p. 11) reporting that complete and secure (presumably Enhanced) erasure on an SSD could transpire within “only a few minutes.” Given various claims that Parted Magic used the Linux hdparm command, this 60-minute estimate was also not consistent with a Linux wiki page (below) stating that hdparm achieved secure erasure of an 80GB SSD in about 40 seconds.

I did make a brief and unsuccessful effort to find more information on what Parted Magic’s Erase Disk option was programmed to do, or what it actually achieved. I encountered reports that Parted Magic did not work for some users. Given my doubts, I decided to continue with investigation of the hdparm command that Parted Magic supposedly used.

The Linux HDPARM Command

In my browsing, I encountered many indications that hdparm provided a way to achieve a secure erase of an SSD.

In Linux-speak, “man” (short for manual) pages are usually primary sources of guidance on the use of commands. The man pages for hdparm at man7.org and cornell.edu repeatedly warned that hdparm was a dangerous command, capable of damaging drives and systems if misused — and that misuse, for these purposes, might involve nothing more than the entry of the wrong command options, or failure to issue the command with certain necessary options. This did not mean that one should not use hdparm. It did suggest that, given the dangers of the command and the relative difficulty of getting into Linux to use it in the first place, hdparm would not be the leading candidate for the typical Windows user.

According to Wikipedia, the purpose of hdparm was “to set and view ATA hard disk drive hardware parameters.” This, and the very name of the command (i.e., hdparm), did not make it immediately obvious that hdparm was appropriate for SSDs. Man pages repeatedly indicated that the hardware parameters being examined were often specific to hard drives — involving drive rotation speed, for example, and drive geometry (cylinders, heads, sectors). I noticed, in addition, that the Cornell man page contained no instances of “secur” (e.g., secure, security) and that, in discussing the ATA Security Feature set, the man7.org page said, “These switches are DANGEROUS to experiment with, and might not work with some kernels” — with, that is, some Linux installations. Similarly, the ata.wiki.kernel.org page offered numerous disclaimers, including this one:

If you hit kernel or firmware bugs (which are plenty with not widely-tested features such as ATA Secure Erase) this procedure might render the drive unusable or crash the computer it’s running on.
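
For concreteness, the sequence that the ata.wiki.kernel.org page walks through looks roughly like the following. This is a dry-run sketch: each command is prefixed with echo so that nothing is actually issued, /dev/sdX is a placeholder, and the password is arbitrary. Per the warnings above, the real commands belong only on a disposable drive.

```shell
DEV=/dev/sdX   # placeholder -- substitute the target drive at your own risk

# 1. Inspect the drive; the output must show Security Erase support
#    and "not frozen" before proceeding.
echo hdparm -I "$DEV"

# 2. Set a temporary user password (the ATA Security feature set
#    requires one before an erase can be issued).
echo hdparm --user-master u --security-set-pass Eins "$DEV"

# 3. Issue the Secure Erase itself.
echo hdparm --user-master u --security-erase Eins "$DEV"
```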

This acknowledgement that hdparm’s ATA Secure Erase feature had not been widely tested raised, once again, the prospect that a great deal of faith was being placed in a black-box command — in, that is, a mysterious gizmo that would somehow produce magical results. Along those lines, a Linux Magazine article said,

Secure erase has two pitfalls: hdparm can only initiate a secure erase when the BIOS also allows it. Beyond that, the method is considered to be experimental.

Additional concern arose from this statement on the ata.wiki.kernel.org page:

The security-erase command is a single command which typically takes minutes or hours to complete, whereas most ATA commands take milliseconds, or seconds to complete.

If it would take hdparm an hour or more to complete a secure erase, then it did seem plausible that Parted Magic (above) was using hdparm; but with that kind of time lag, I could not tell whether this experimental hdparm tool was indeed correctly implementing the ATA Secure Erase command or was, instead, just doing a DBAN-type overwrite. At this point in my investigation, there did not appear to be empirical verification that hdparm was functioning reliably.

It seemed to me that use of hdparm could yield inconsistent results. Consider, for example, this statement from the ata.wiki.kernel.org page: Secure Erase via hdparm “took about 40 seconds for an Intel X25-M 80GB SSD, for a 1TB hard disk it might take 3 hours or more!” (In fact, one user reported that it did take three hours for a 1TB drive.) If hdparm was able to erase 80GB in 40 seconds, it would seem that data could be erased at a rate of 2GB per second. Hence a 1TB drive should take about 500 seconds (i.e., less than ten minutes). Why would it take three hours? It didn’t seem to be because hdparm was using Normal vs. Enhanced modes (see below) in one case or the other: a search of four different man pages yielded no indications of how one might specify Enhanced mode in hdparm (which, itself, raised the question of why a correct implementation of the ATA Secure Erase command would contain only Normal mode). Rather, I wondered whether something akin to write amplification was underway — whether, that is, hdparm might be getting lost in the multiplying complexities that its approach to data deletion might imply as drive size increased.

These reflections left me baffled. How could it be that the ATA Secure Erase command had been built into virtually all drives manufactured in at least the past ten years, and yet there would be such confusion about what that command implied, or how to make it work?

I could certainly not say that Parted Magic or hdparm was obviously flawed or inappropriate for use on SSDs. I noted that, for instance, Corsair recommended using Parted Magic to achieve a secure erase. It was just that the situation did not seem as straightforward as one might hope or expect. To cite the manufacturer whose device got me started on this investigation, it appeared that Kingston did not yet endorse Parted Magic for secure erases, apparently preferring HDDerase (below). In short, there seemed to be incompatible impressions that I was not able to resolve in a brief examination. For such reasons, it appeared advisable to continue my search.

HDDerase

Numerous webpages cited HDDerase (sometimes spelled with a capital E, as HDDErase) as another tool capable of issuing the ATA Secure Erase command that had supposedly been built into SSD firmware. HDDerase reportedly came into existence at about the same time as the Secure Erase command, back in 2001. This was not surprising: HDDerase was a product of Gordon Hughes at the University of California – San Diego (UCSD), who (with NSA funding) reportedly helped to develop ATA Secure Erase.

That authorship suggested that one might expect HDDerase to work — that, among other things, it could erase a drive within minutes, as anticipated by the foregoing discussion. There was some support for that belief. For instance, a Kingston webpage stated that HDDerase would erase a 256GB Kingston SSD in two minutes.

This was the point in my exploration at which I began to understand some things about the different speeds at which an SSD might be securely erased. A Tutorial on Disk Drive Sanitization co-written by Hughes and Coughlin (date not specified, but apparently circa January 2007) stated that there were two kinds of ATA Secure Erase commands. There was an “Enhanced Secure Erase command that takes only milliseconds to complete” (p. 1). But there was also a Normal Secure Erase that would take up to two hours for a 100GB drive (p. 2). The difference was that the Enhanced approach achieved faster and higher-security erasure by changing a key that would encrypt all data on the drive — in essence, converting all of the drive’s data to garbage as soon as that key was deleted — whereas the Normal approach would take the slower route of manually overwriting all data on the drive.
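
The key-deletion mechanism can be illustrated with a toy model. This is entirely my own sketch: real drives use dedicated hardware encryption, not this hashlib keystream, but the principle is the same.

```python
import hashlib
import itertools
import os

def keystream(key: bytes):
    # Toy stream cipher: SHA-256 in counter mode. Illustrative only.
    for counter in itertools.count():
        yield from hashlib.sha256(key + counter.to_bytes(8, "big")).digest()

def crypt(data: bytes, key: bytes) -> bytes:
    # XOR with the keystream; the same call encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

key = os.urandom(32)                  # lives only inside the drive
plaintext = b"files, filenames, free space: everything on the drive"
stored = crypt(plaintext, key)        # what the cells actually hold

assert crypt(stored, key) == plaintext   # normal reads decrypt transparently

# Enhanced Secure Erase: discard the key and mint a fresh one. No cell
# is rewritten, which is why the command can finish in milliseconds.
key = os.urandom(32)
assert crypt(stored, key) != plaintext   # old data is now unreadable noise
```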

At first, I thought the numbers stated in the preceding paragraph would respond to some concerns expressed above. Hughes and Coughlin seemed to be saying that I should not be surprised if it took two hours to securely erase a 100GB drive using the ATA Secure Erase command in Normal mode. But then I realized they were talking about hard drives. SSDs had not yet emerged on the scene, or at least did not seem to figure in their calculations. As noted above, then, I continued to expect that even a Normal (not to mention an Enhanced) secure erasure of a 100GB SSD should be completed in minutes.

Even in the HDD world — to expand upon a concern already mentioned in passing (above) — if ATA Secure Erase had been implemented on virtually all hard drives since 2001, why would anyone still be using Normal Secure Erase rather than the much faster Enhanced method? For that matter, why would people have to use tools like DBAN, if their hard drives already incorporated secure erase technology? A search yielded bits of insight, but did not lead immediately to any definitive discussion of what was happening with Hughes’s 2007 concepts of Normal and Enhanced Secure Erase. To the contrary, I found that Kingston’s webpages for my own Kingston SSD did not seem to contain any reference to “enhanced.”

So there was a real question of what drive manufacturers had implemented, and how they had implemented it. Reports from UCSD’s Non-Volatile Systems Laboratory (2010-2011) led to a discouraging conclusion:

Our results show that naively applying techniques designed for sanitizing hard drives on SSDs, such as overwriting and using built-in secure erase commands[,] is unreliable and sometimes results in all the data remaining intact.

Unfortunately, as summarized in a Tom’s Hardware article, the second of those UCSD reports indicated that most SSDs’ built-in Erase commands failed to delete all data from SSDs. There could be data left on the software-accessible chips in the SSD, and there could also be a failure to wipe special memory chips, containing user data, that would not be software-accessible but that could apparently be read with moderately skilled hardware tinkering. I was not at all clear on whether a 2007 protocol for ATA hard drive erasure, not obviously oriented toward SSDs, would itself accurately anticipate contemporary SSD architectures.

In short, if programs or commands like HDDerase and hdparm simply triggered an ATA Secure Erase command, resting on the assumption that the command would function as intended in the SSD, then those programs would be ineffectual in the many instances where SSD manufacturers had not implemented the ATA Secure Erase command properly. As noted in a PCWorld article, efforts to achieve secure erasure had run into assorted problems, including “buggy implementations, an out-of-date BIOS, or a drive controller that won’t pass along the commands,” as well as the apparent need to install the drive internally in order to get the command to work and potential issues with “ATA/IDE/AHCI settings in your BIOS.”

That PCWorld article also said that HDDerase, in particular, was not for inexperienced users, and was unable to bypass the frozen security status that most newer drives would use to prevent malware erasures. It was also not always compatible with current hardware: someone reported that, while Hughes’s UCSD webpage still offered HDDerase 4.0 (from 2008), for some Intel systems (and possibly for others as well) it would be necessary to use the older HDDerase 3.3.

While research papers cited above had called for the use of verifiable techniques (so as to ensure that erasures were really taking place as claimed), I had not yet run into solid demonstrations that erasure verifiability was now the watchword of the SSD industry. A Computerworld article (2011) indicated that there was going to be a new Sanitize Device feature set added to the ATA specification, but a search for recent articles on that didn’t turn up much. For the time being, it appeared that HDDerase, like hdparm, depended upon SSD manufacturing decisions beyond the control of the program itself. Thus it seemed that HDDerase, like hdparm, could not necessarily be counted on to deliver a secure SSD erasure.

The TRIM Function

As just described, I was having difficulty finding a way to achieve secure erasure at the end of a drive’s service, when I was preparing to shelve or sell it. Another possibility was to do the erasure on the fly, keeping the drive clean on a day-to-day basis. One way to do this would be to use an SSD’s native ability to securely erase files as soon as they were deleted. This was a matter of the TRIM function — which, like ATA Secure Erase, was supposedly built into SSD firmware.

According to a HowToGeek webpage, current operating systems (including Linux and Windows 7+) supported TRIM. In the TRIM function, the operating system would notify the SSD when a file was deleted. Wikipedia said this notification was necessary because the SSD would not have its own direct way of knowing when the user or a program was telling the operating system to delete a file. Upon receiving that notification, the SSD would erase, within its memory cells, those places that contained that file’s data. That erasure would take place shortly (often immediately) after the file was deleted in the operating system. TRIM would thus make sure that deleted things were truly and irreversibly deleted, and would also keep the SSD decluttered for best performance.

This was completely different from an HDD, which would mark the space of the deleted file as available for overwriting but would not actually overwrite or delete it until a new file arrived. Thus SSDs, unlike HDDs, would not be full of supposedly deleted files, just waiting to be restored by file recovery software. Instead, if an SSD said that it had free space, that would really be free space, with no recoverable data in it.
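
That behavioral difference can be sketched in a toy model. This is entirely my own illustration — real firmware, allocation tables, and garbage collection are far more involved — but it captures the contrast the sources described.

```python
class ToySSD:
    def __init__(self, pages=8):
        self.pages = [None] * pages          # None = erased flash cell

    def write(self, page, data):
        self.pages[page] = data

    def trim(self, page):
        # The OS's TRIM notification: this page no longer holds live
        # data, so the drive erases it (immediately, in the ideal case).
        self.pages[page] = None

class ToyHDD:
    def __init__(self, sectors=8):
        self.sectors = [None] * sectors
        self.allocated = set()

    def write(self, sector, data):
        self.sectors[sector] = data
        self.allocated.add(sector)

    def delete(self, sector):
        # Only the allocation record changes; the magnetic data stays,
        # which is what undelete tools exploit.
        self.allocated.discard(sector)

ssd, hdd = ToySSD(), ToyHDD()
ssd.write(0, b"secret")
hdd.write(0, b"secret")
ssd.trim(0)
hdd.delete(0)
print(ssd.pages[0])      # None -- nothing left to recover
print(hdd.sectors[0])    # b'secret' -- recoverable until overwritten
```

As the rest of this post describes, my real SSD turned out to behave much more like ToyHDD than like ToySSD.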

In addition, SSDs and HDDs differed in the effectiveness of their erase processes. Data recorded on an HDD could theoretically linger after deletion until new data was written over it, perhaps multiple times, so as to erase all magnetic traces of the file that had once been stored there. Therefore, HDD-wiping programs like DBAN (above) would offer options to write and rewrite each sector of an HDD repeatedly. By contrast, SSDs did not use magnetization to store data. There would be no lingering signal, hence no need for multiple overwrites. Once TRIM deleted file fragments from an SSD, the contents of those files would be truly and finally gone. As with HDDs, there might be other places (e.g., paging files, temp files) where the operating system would keep copies of deleted files, or data from those files, but it would not be possible to recover data from where the actual deleted files had been stored.

These remarks implied that there should be no need for a secure erase function on an SSD. Just delete the files, leave the drive connected to a power source, and wait for TRIM to do its garbage collection work. According to one forensic article cited above (Bell & Boddington, 2010, p. 11), this would take place “after only a few minutes of sitting idle.” Similarly, if the user deleted all files from the drive, TRIM should then clean the drive, and every space that was cleaned should be cleaned completely, with no magnetic residue or other means of recovering supposedly deleted files.

There was a problem with that ideal scenario. Drive manufacturers reportedly postponed erasures. I guessed that they would do this to reduce wear on the drive and/or to eliminate one potential source of unmarketable delay in speed tests.

For whatever reason, when I ran Recuva file recovery software on my SSD, it reported that there were lots of recoverable files. (DiskDigger and other programs offered competing freeware unerase capabilities. Someone said that HDD Recovery Pro and other nonfree programs would be even better at recovering data from drives.) As described in another post, I had the same experience with TRIM on a different machine, with a different kind of SSD.

Many of the files that Recuva marked as recoverable were in the Recycle Bin, so I right-clicked on it in Windows Explorer, emptied it, and ran Recuva again. That didn’t seem to make much difference: there were still thousands of files, many of which Recuva labeled with a green light to indicate that prospects of full recovery were excellent. I tried recovering one, and was successful: it was back, in good working order. Recuva didn’t show a deletion date, so I couldn’t tell how long these files had been in a deleted condition where TRIM should have wiped out all traces. But it looked like some had been around for weeks if not months, whereas TRIM was supposed to operate in minutes if not seconds. I did try other suggestions (e.g., leaving the computer in a logged-out state for some minutes, which was supposedly essential for TRIM to operate), but they did not help.

It seemed that, on my machine, TRIM was not running, or at least it was not running well. One source said that, to make sure TRIM was running, I should go to a command prompt (or maybe an elevated command prompt) and type “fsutil behavior query DisableDeleteNotify.” I got back, “DisableDeleteNotify = 0.” The source said that was good; it meant TRIM was enabled. (If it hadn’t been, apparently the command to turn it on was “fsutil behavior set DisableDeleteNotify 0” — though if it was turned off now, I would have to check again, later, to see if something was turning it off.)

But since TRIM appeared to be enabled, why did I still have thousands of deleted files lying around, including many that were not in the Recycle Bin? Another site suggested checking the SSD manufacturer’s website, to make sure the SSD’s firmware supported TRIM. For my Kingston 120GB SSDNow V300, neither the webpage nor the accompanying datasheet or specifications said anything about TRIM. Some Amazon comments said it was there, however, and CrystalDiskInfo agreed. (Note: at this point I was using a laptop whose drives C and D were on an SSD, and which also had that Kingston SSD connected as drive H via an external USB dock. I had already tried a variety of steps described above and below on the Kingston, so I was mostly running these Recuva tests on drives C and/or D.)

It seemed that TRIM, functioning properly on my drive C, should have deleted forever all those fragments of deleted files. I thought maybe the problem was that I had to fix some other setting. A search led to numerous lists of ways to optimize your SSD. One forum post said that I needed to enable AHCI in my BIOS before installing Windows. Had I done that? No idea. But Wikipedia said, “It is confirmed that with native Microsoft drivers the Trim command works in AHCI and legacy IDE/ATA Mode.” So maybe the AHCI thing was not essential. The installation instructions for my internal Crucial SSD did not say anything about AHCI; then again, they didn’t say much, period.

That forum post debunked a number of other optimization suggestions, and another thread agreed: most such suggestions were unhelpful or simply wrong. One suggestion that did seem worthwhile was to make sure I was using a current chipset driver. But the Crucial firmware updater said my internal SSD didn’t need updating. There didn’t seem to be any other downloads available at the Crucial support downloads page, and (as usual) Device Manager reported that my driver was up to date. So the driver didn’t seem to be the reason why that SSD had so many undeleted “deleted” files.

It seemed that optimization was not the issue. Continued browsing confirmed that, on one hand, people were saying that TRIM immediately removes deleted files from beyond the possibility of recovery — and, on the other hand, users in one forum indicated that they, too, were able to recover deleted files from SSDs.

Using Recuva to Set the Stage for TRIM

The seeming failure of TRIM made me wonder whether possibly the manufacturer of my SSD had deliberately impaired TRIM, so that the SSD would give users more of an HDD-like experience. Maybe the manufacturer decided that SSD buyers do not know or care much about secure data deletion, but are very upset when file-undelete programs fail to recover accidentally deleted files. Or maybe that was how TRIM was actually supposed to work: maybe the SSD would run it only on those data fragments that the user signaled as being ready for permanent deletion via something like the Shift-Delete key combination in Windows Explorer.

In that case, it seemed that I might be able to use Recuva or some other program to identify those deleted files that were lingering in a half-dead state, and send them to their final resting place. Recuva allowed me to scan an entire drive for deleted files, and then right-click on any selected file and choose Secure Overwrite. But when I went ahead with that, I got a warning:

Secure overwrite

Files you are going to overwrite might be located on a SSD drive (H:). Are you sure you want to continue?

I guessed that I was getting this warning because Recuva was prepared to do some kind of multipass overwrite process of the type that would be OK on an HDD but ineffective and/or overly wearing on an SSD. So I bailed out of that. Interestingly, I did not get that same warning when I chose Secure Overwrite on files on the TrueCrypt encrypted SSD where my drives C and D were located. Possibly TrueCrypt emulated an HDD in a way that Recuva could not see through. In that case, instead, on the encrypted SSD, I got this message:

Secure overwrite

Are you sure you want to overwrite 2614 file(s)? Checking Yes will mean they are gone forever.

When I clicked Yes, Recuva deleted those files within approximately 20 seconds, suggesting that it might not be doing a multipass HDD-style wipe. The results window, upon completion, indicated that some files were “Overwritten” while others were “Not overwritten,” mostly because “File is resident in the MFT” or because “File is already overwritten by existing file(s).” Unfortunately, a rescan in Recuva confirmed that there continued to be numerous recoverable files on the SSD. It appeared that Recuva might not be effective for achieving final deletion from SSDs.

Taking a different approach, I wondered whether the process of recovering a deleted file, using Recuva, would remove it from the list of recoverable files. If it did, that would be a way to get the deleted files off the SSD and onto an HDD, where I could do a traditional secure erase if desired. Unfortunately, the answer to this question was negative: the deleted but recoverable files remained on the SSD after being restored to a folder on the HDD. All I achieved by this maneuver was to have two copies of the deleted file instead of one: a deleted copy on the SSD, and a recovered copy on the HDD. What I needed here was a way to move deleted files, and that seemed too bizarre for the Windows options of which I was aware.

As an aside, I found Recuva’s operation puzzling in certain regards. For instance, when it started one scan, it claimed to have found 322,307 potentially recoverable files on the SSD containing my unencrypted drive C. When it finished the scan, it reported (at the bottom of the screen) that it had found 143,595 files and had ignored another 180,594 files. That should have totaled 324,189. I guessed that the explanation of the seemingly bad arithmetic might be that its initial reading of 322,307 files went up as it continued its scan: I did not sit there and watch throughout the scanning process. Even so, I did not understand why it ignored 180,594 files. A search led to speculation that perhaps the files were ignored because Recuva found no recoverable data in them, though that would not explain why it went ahead and listed a lot of files that it considered unrecoverable.

Basically, I needed a utility that would completely wipe files that the operating system believed were deleted, as TRIM was supposed to do. A search led to a list of free tools for the purpose, but they all seemed oriented toward eliminating deleted files from hard drives by using repeated overwriting techniques. In short, my brief investigation did not reveal that Recuva or related programs were useful for the specific purpose of securely erasing files from SSDs, in such a manner as to ensure that TRIM would clean up their last remnants as advertised.

Adjusting the Recycle Bin to Facilitate TRIM

At this point, I discovered (or perhaps rediscovered) that it was possible to alter the operation of the Recycle Bin in Windows 7. I believed that such alteration might at least reduce the magnitude of the file recovery problem. A search led to a Microsoft webpage that offered several options. One was to empty the Recycle Bin by right-clicking the Recycle Bin icon on the desktop and choosing the Empty Recycle Bin option. That seemed to work: a pass with Recuva, immediately after selecting that option, did not seem to find anything in the Recycle Bin.

That Microsoft webpage also pointed toward other Recycle Bin possibilities. To find those possibilities, I was advised to right-click on Recycle Bin (desktop icon) > Properties. At first, this didn’t work well, perhaps because of a display driver issue on my system; but when I tried again, I got a list of the partitions on my computer, with an indication of how much space each had. In this dialog, I had a couple of options. One possibility was to set the Recycle Bin to be small. That might still give me the option of undeleting accidentally deleted files, but only for a short time, before other incoming, newly deleted files pushed them out, potentially making them vulnerable to the TRIM function. It was unfortunate that Windows 7 did not offer an option to locate the Recycle Bins for all partitions on a single hard drive partition. That might have simplified the process of obtaining a confident secure wipe of the Recycle Bin.

The other possibility, in that Recycle Bin dialog, was to click on the button that said, “Don’t move files to the Recycle Bin. Remove files immediately when deleted.” (That option could also be implemented via registry edit.) According to one poster, it was common practice to turn off the Recycle Bin on an SSD. I decided to do that for drive C, where I would rarely want to recover any accidentally deleted files. Then I ran Recuva again on drive C. Having not only emptied but also turned off the Recycle Bin, I did not expect Recuva to find any recoverable files in the Recycle Bin. Those expectations were fulfilled.

There was, however, the continuing problem that Recuva identified about 140,000 other recoverable files on drive C. (It looked like roughly a quarter of those files were adjudged to be in excellent condition, for purposes of recovery.) This time, Recuva’s Secure Overwrite option estimated that four (increasing to five, then six, then seven) hours would be necessary to complete the overwrite process. Having thus essentially confirmed that this was once again a slow and potentially futile HDD-type approach, I canceled the overwrite. Regarding the files that Recuva had attempted to overwrite, a post-operation pop-up told me that, as above, some had been overwritten but others could not be. My Recycle Bin adjustments may have reduced the number of files that would be sheltered from TRIM within the Recycle Bin, but they had not resolved the problem of many thousands of other recoverable files lurking undeleted on the SSD for a potentially long time.

I had assumed that, after I checked to make sure that TRIM was enabled (above), TRIM would simply go to work. Now, however, I found a webpage confirming that, even when TRIM was enabled by the operating system, it might still fail to keep an SSD clean of potentially recoverable data fragments.

That webpage (seconded by Belkasoft) said that TRIM would be impaired or nonexistent on older SSDs, older versions of Windows and Mac OS X, non-NTFS file systems, non-SATA (e.g., USB, NAS) drive connections, PCI-Express SSDs, RAID configurations, and corrupted drives. But none of those applied to my drive C, and none of those exceptions seemed to explain the tens of thousands of recoverable files appearing on my SSD. The situation was still a mystery.

So at this point I reached the conclusion that I probably should not rely on TRIM to keep my SSD free of deleted files while I continued to use that drive on a day-to-day basis. In other words, secure erasure at the end of drive usage might not be a simple matter of deleting the last files on the SSD and watching them get TRIMed away.
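
The wear-leveling problem underlying all of this can be illustrated with a toy model. The sketch below is a deliberate oversimplification (no real SSD firmware works this way); the point is that a logical “overwrite” lands on a fresh physical block, while the old data survives in a block the operating system can no longer address.

```python
# Toy flash-translation-layer (FTL) model -- purely illustrative, not how any
# real SSD firmware works.  It shows why an overwrite issued by the OS can
# leave the old data intact in a physical block the OS can no longer reach.

class ToySSD:
    def __init__(self, physical_blocks=8):
        self.physical = [None] * physical_blocks  # raw NAND blocks
        self.mapping = {}                         # logical block -> physical block
        self.next_free = 0

    def write(self, logical, data):
        # Wear leveling: every write lands on a fresh physical block; the old
        # physical block is merely unmapped, not erased.
        self.physical[self.next_free] = data
        self.mapping[logical] = self.next_free
        self.next_free += 1

    def read(self, logical):
        return self.physical[self.mapping[logical]]

ssd = ToySSD()
ssd.write(0, "secret data")
ssd.write(0, "0000000000")            # the OS thinks it overwrote block 0

print(ssd.read(0))                    # prints 0000000000: the OS sees only new data
print("secret data" in ssd.physical)  # prints True: the old copy still exists
```

In miniature, this is why overwriting tools, and even a properly functioning TRIM, could not be trusted to have scrubbed every physical block.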

Reformat the SSD

Giving up on the real-time, file-by-file approach proffered by TRIM, I returned to the question of how to securely erase the whole SSD at once. A new possibility had emerged: someone at Crucial suggested using Windows 7’s Disk Management (diskmgmt.msc) tool to wipe a drive. The suggestion was to simply right-click on the partitions (or perhaps the volume) that I wanted to delete, there in Disk Management, and choose Delete Volume. (This would not work on the Windows system drive, at least not until that drive was connected to a system booted by some other drive.) Once the volume was deleted, the final recommended step was to let the drive sit overnight (though Belkasoft said it might take only a few minutes), presumably still connected to the machine, so as to allow TRIM to wipe everything out.

That suggestion seemed to imply that TRIM would work on an unformatted drive, as long as it was connected in the ways described above (e.g., SATA or eSATA, not USB). An alternative, suggested by a Ghacks article, was to right-click on the drive (in Disk Management or Windows Explorer) and choose the Format option (specifying NTFS format). The authors found that this was sufficient to eliminate all recoverable data. The drive would then be NTFS-formatted, eliminating another possible barrier to the proper functioning of TRIM (above). The Ghacks article advised a thorough rather than Quick format, but other sources contradicted that. In that suggestion and also in their recommendation of DBAN, the Ghacks authors seemed to be thinking in HDD terms.

In my own case, somewhere among the multiple approaches I had tried on my Kingston SSD, the recoverable files seemed to have vanished. I thought that reformatting might have been the cure, but later I realized that this could be mistaken. I had done the formatting with the drive connected via USB. As noted above, TRIM would not work over a USB connection. I would apparently have to plug the drive into an internal drive bay with a SATA connection before TRIM would do its job. Even then, I was not sure that TRIM or even reformatting would erase the extra memory chips that, as described above, were not software-accessible and might thus yield only to the physical reset achieved by a properly implemented ATA Secure Erase command.

Encrypt the SSD

The Ghacks article and others suggested another possibility. The reasoning seemed to be that, if we couldn’t be certain that our files were getting wiped out, at least we could be confident that the pieces that someone would recover would contain no user-readable data. That is, I could use something like TrueCrypt to encrypt the entire SSD. This would convert all of its contents to scrambled data that could only be pieced back together if you knew the password or could crack it, and a complex password would be practically uncrackable. (Another post discusses TrueCrypt security developments in summer 2014.)
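
To illustrate the reasoning, here is a toy stream cipher built from SHA-256 in counter mode. This is an illustration only; it is not TrueCrypt and not production-grade cryptography. The point is that any encrypted residue someone recovered from the drive would be unreadable without the password:

```python
# Illustration only: a toy stream cipher (SHA-256 in counter mode).  NOT
# TrueCrypt and NOT production-grade crypto; it just shows that encrypted
# residue recovered from a drive is unreadable without the key.

import hashlib

def keystream(password: bytes, n: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(password + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(data: bytes, password: bytes) -> bytes:
    # XOR is its own inverse, so the same function encrypts and decrypts.
    ks = keystream(password, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

plaintext = b"tax records 2013"
ciphertext = xor_crypt(plaintext, b"a long, complex password")

print(plaintext in ciphertext)   # False: no readable fragment survives
print(xor_crypt(ciphertext, b"a long, complex password") == plaintext)  # True
```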

Within TrueCrypt, Ghacks recommended using the “Create an encrypted file container” option, whereas I thought it might be better to choose the “Encrypt a non-system partition/drive” option. Since encryption could be a very slow process (though faster on an SSD than on an HDD), it would probably be best to do a Windows 7 quick format before (as well as after) the encryption process. That way, there would be no files to encrypt, the encryption should take only a few minutes, and then TRIM would clean up any encrypted residue, assuming the drive was SATA-connected and otherwise free of the potential problems mentioned above. But I was not entirely sure that the encryption (even without TrueCrypt’s quick format option) would succeed in reaching every part of an SSD, in some way that an overwriting program like DBAN would fail to do.

According to an Ars Technica piece (2011), encryption could also be an appropriate option for securely deleting individual files on an SSD. A search pointed toward advice on how to do that. A StackExchange article suggested that security in general (not just at wipe time) would be enhanced by encrypting the drive at the outset, before putting any data onto it. Belkasoft said, however, that encryption would remove files from the self-cleaning operation of the TRIM command, and that some types of encryption would actually make data easier to recover, at least for someone with sufficient forensic expertise who knew “either the original password or binary decryption keys for the volume.” Decryption keys, it seemed, could be obtained from RAM, if the computer in question was still running, and also from memory dumps and from paging and hibernation files. In addition, Belkasoft said, encryption tended to require rewrites, significantly degrading SSD performance. So the full sequence suggested in the preceding paragraph seemed advisable: format, encrypt, reformat, and then leave the drive attached via its SATA connection long enough to let TRIM complete any cleanup that it might attempt.

Summary

There appeared to be some myths — and, for me and others, much confusion — in the area of SSD wiping. Given the complexities noted in research on the matter, I could not be confident that I, with my novice understanding of such matters, was going to achieve a secure erase by relying on any one tool. It seemed, rather, that a combination of recommended techniques would be advisable, at least until SSD manufacturers began to implement reliable internal technologies. This summary does not necessarily recommend taking all of the routes explored above; it merely recaps that exploration.

I began (above) with DBAN and other disk wiping tools. These did seem likely to eliminate, with an uncertain degree of thoroughness, at least some of the data on an SSD. At the same time, they had potential to make the SSD slower by cluttering it up, and also to shorten its life somewhat by imposing additional wear on its memory cells.

Next, there was the option of using the SSD manufacturer’s SSD toolbox, if one was available. I guessed that Intel’s toolbox was probably pretty good. There did not seem to be one for my Kingston SSD. There were warnings that using one manufacturer’s toolbox on another manufacturer’s SSD could brick it.

For purposes of securely erasing an SSD, the key component of a manufacturer’s SSD toolbox would be the secure erase tool that would effectively implement the ATA Secure Erase command. Ideally, this command would promptly and thoroughly clear both the software-accessible data storage areas of the SSD and also those extra chips, added by the manufacturer for purposes of speed or longevity, that could be accessed only by the drive’s internal firmware. That is, the ATA Secure Erase command was implemented by SSD firmware, not by the operating system. The problem here was that research had demonstrated that manufacturers did not reliably implement that command — that, in the worst cases, the command was completely ineffectual. It did not appear that there would be any harm in running any Secure Erase tool that the manufacturer might have supplied, but it did not seem advisable to rely solely on that tool.

The dependence upon firmware, and thus upon the erasure competence of the manufacturer, was also a problem for several software approaches that sought to provide alternate ways of triggering the ATA Secure Erase command. The Linux hdparm command and the DOS-based HDDerase program both seemed to assume that the Secure Erase command would function as expected; likewise for the Parted Magic tool that apparently provided a GUI for hdparm. Unlike the SSD manufacturer’s toolbox (if any), these alternative tools were not guaranteed to be merely useless at worst. With hdparm in particular, the wrong command options could ruin an SSD and cause other damage. I doubted that Parted Magic or HDDerase would be comparably risky. Of those two, the latter appeared harder to use and had not been updated in some years, leading to reports that it would not work under certain conditions. So, again, these were methods that might add another degree of security to the data erasure effort, albeit at some potential cost. Of the several options in this brief review, Parted Magic generally seemed the safest and most accessible.
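
For reference, the hdparm sequence commonly cited for triggering ATA Secure Erase on Linux could be sketched as a dry run. The device path /dev/sdX and the temporary password “p” are placeholders, and the sketch only prints the commands; actually executing them would wipe the drive:

```python
# Dry-run sketch of the hdparm-based ATA Secure Erase sequence commonly cited
# for Linux.  DANGER: actually running these commands erases the drive.  The
# device path /dev/sdX and the temporary password "p" are placeholders.

DEVICE = "/dev/sdX"  # placeholder -- substitute the real device with care

steps = [
    # 1. Inspect the drive; Secure Erase is blocked if the drive is "frozen":
    ["hdparm", "-I", DEVICE],
    # 2. Set a temporary ATA user password to enable security mode:
    ["hdparm", "--user-master", "u", "--security-set-pass", "p", DEVICE],
    # 3. Issue the Secure Erase command itself:
    ["hdparm", "--user-master", "u", "--security-erase", "p", DEVICE],
]

for cmd in steps:
    print(" ".join(cmd))  # print only; do not execute
```

Even this sequence, of course, depended on the manufacturer’s firmware actually honoring the Secure Erase command.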

Another approach was to rely on a different aspect of SSD firmware. SSDs had a built-in TRIM capability that would promptly and automatically clean out the contents of deleted files, rather than leave recoverable traces of them as on HDDs. Here, again, it was reported that manufacturers did not necessarily implement the capability as expected. In my own case, I found thousands of supposedly deleted files still available for recovery via easy-to-use freeware. It seemed that the persistence of deleted files could be due to a variety of factors, including drive corruption, non-NTFS formatting, or use of a non-SATA connection. Those factors did not seem to explain the residue of recoverable files on my drive C, however. I was also unsuccessful in attempts to use the Recuva file recovery program to delete most potentially recoverable files from my SSD. At best, adjustments to the Recycle Bin had the potential to remove at least some persisting deleted files from possible recovery.

Assuming TRIM was properly implemented in the SSD, and assuming the drive was otherwise eligible (e.g., no corruption), it seemed that one potentially workable approach was simply to perform an NTFS quick format of a SATA-connected SSD, encrypt the entire drive with something like TrueCrypt, do another format, and then leave the drive connected for a while, so as to let TRIM do its work. Since the implementation of TRIM in a particular SSD could not be verified, it appeared that a combination of these steps with other methods (above) might provide the most reliable response to the task of securely erasing an SSD.

(I did not investigate, at this point, the subsequent discovery that, as in the case of Lenovo ThinkPad laptops, some systems might offer a downloadable BIOS utility to erase an SSD.)


36 Responses to Ways to Securely Erase a Solid State Drive (SSD)

  1. ste says:

    i don’t mean to be offensive, but i think you could write a 1000 page synopsis of rather someone should step outside their door on a saturday morning. dude, learn to visit forums, and this process would have been infinitely more succinct. better yet, discover youtube, and you could have secure erased your ssd in minutes.

    • Ray Woodcock says:

      Or at least you would think you had erased it. Not sure how much of the discussion you read, but that was one of the issues addressed.

    • Chris Sadler says:

      Ste – your answer shows your lack of understanding by implying that you understand whilst this EXCELLENT in depth article shows the process of understanding being like an onion comprising of layers of understanding interspersed (pun intended) with yet more layers of confusion. I have been through the same journey with HDDErase, HDParm, S1 & S3, IDE v AHCI etc. What YOU should discover Ste, are the layers of complexity. When your fingers hit the DEL key, spend a moment to consider the numerous and ARBITRARY levels of translation between the firmware interfaces. You have INT H13 , ATAPI-8 and your “BIOS” before you even get to the gates of your SSD firmware never mind the electrons behind the NAND flash gates. This HAS TO MEAN that there is NO SINGLE METHOD that “fits all” as regards secure erasure of an SSD’s data. Yes, in simple a terms the SSD sends a voltage spike to turn all the zeroes to ones (yes it is that way round) – what this article describes in a very detailed way – are the obstacles to crossing the rubicon of that process’s completion. My personal opinion is that the manufacturing world could learn a lot from the collaborative mindset of the open source community. Perhaps only then will we see an end to the ridiculous situation which Ray describes. That said, yes Ray you did bang on a bit. 🙂 My empathy for you, however stems from the fact that I also can be verbose. The gem here is that you were brave enough to expose your journey of discovery, rather than giving a “Here’s the problem – there’s the solution” Post which perhaps readers like Ste like to be spoon fed. Now Ste – I don’t mean to be offensive, but I look forward to your answering of all the issues outlined by Ste above . . . and by the way, Ray’s article has 7415 words and not 1000. Jus’ saying’. Peace out.

  2. Jaromir Sys says:

    Thank you, this post was very usefull in learning how ssds work

  3. Hensel says:

    If you want to learn more…

  4. Yuang says:

    Just SPLENDID article

  5. Manny says:

    The reason for TRIM not working on your SSD is that you had it connected via USB. TRIM is only supported on internally connected drives. There is no support for TRIM over USB or Firewire.

  6. Wakfu says:

    So, I should quick format, encrypt, quick format, let it connected for a while, and then trying to use Parted Magic to trigger a ATA Secure Erase command. Or just try using Parted Magic, and then running Recuva-like software to check if it really worked.

  7. JimS says:

    I used the Windows 7 Disk Manager to delete the volume (a generic 60 GB SSD), added the volume back, and then formatted the drive. I used a hex editor to look at the file space. Under the quick format, there was random data in the file space. Running it again with a full format (7 minutes), the drive appeared to have only data in the boot sector, with the rest of the drive filled with zeros, with no exceptions. I do believe this works, and is verified.

  8. Phil says:

    Excellent article, great depth and really quite concerning. I wonder, with my conspiracy head on, how much influence governments have in with manufacturers to make it more difficult for end users to securely erase data.

    Excellent point JimS, although it does seem to depend on drive model, manufacturer, firmware, etc on exactly what process happens internally in the drive when TRIM is invoked.

    Heres an article explaining a bit more: http://articles.forensicfocus.com/2014/09/23/recovering-evidence-from-ssd-drives-in-2014-understanding-trim-garbage-collection-and-exclusions/

  9. I was wanting to refresh my Adata SSD only to discover that Secure Erase cannot be implemented in Windows 8! Astounding. Why on Earth would it not be allowed in Windows 8? Anyway, Minitool Partition Wizard will let you write ones, zeroes, both, DoD 5220+ (no idea what that is). I heard somewhere that one should write all ones to an SSD to restore its performance.

    • JimS says:

      The DoD standard was intended to have minimum standards for wiping free space, making it impossible to restore removed files. Secure erase is more likely to be done in a DOS mode, although there are utilities written in Linux and Unix. The biggest issue with SSDs is that the manufacturers have a poor history in adhering to established standard. Use a hex editor to see what actually is on the SSD. Windows 8 does a good job of cleaning, if the SSD can support it. Most hex editors are free to download and use.

  10. Joshua says:

    Very good research, thanks very much!

  11. Ray Woodcock says:

    Just received this query: “Is there a non-destructive way to clean JUST the free space and wiped space on SSDs, so that they work faster or is the premise on which my question is based simply wrong-headed. I’d like my quite sophisticated Dell laptop [XPS 14z] to go somewhat faster, if at all possible.”

    Answer: I don’t know. But I would guess that, if your SSD is not already fast, it’s a problem with the hardware or with your Windows installation. Maybe you could test it by making a drive image, installing that image on another SSD, and boot from that other SSD. Maybe it will run faster. Or, after making an image, wipe your SSD and restore the image. Something like that may at least give you some clues as to whether there is in fact a problem with the drive. Good luck.

  12. SSD Bob says:

    The Linux blkdiscard command will issue immediate TRIMs and make the drive appear to be zero-filled. Recovering data at this point requires going down to the flash layer and probably disassembling the SSD.

  14. naltar says:

    Thank you for this text (well, in this day and age anything longer than a tweet makes the population nervous). Somewhat ironically, I have come across your “unpeeling the onion” in the middle of running the recuva myself, having gone through deleting volume and quick reformatting the ssd in windows (computer / disk management), as was suggested elsewhere on the internet.
    I was, kind of, expecting to find, as a counterpoint to your slow build-up of attempts to secure-erase, a conclusion, that NO method exists… so I’m kind of disappointed at the truecrypt solution, as I’m not comfortable with truecrypt. As I have the ssd in question connected to a laptop, via an ata/usb bridge, I was disappointed to see a couple of comments that trim won’t work (for a usb connection). Anyway, unhappy to go the truecrypt route, I have chosen a “secure overwrite” in recuva. Hopefully, when I run recuva again, it doesn’t find those same files on the ssd – ready to recover again. But (purely for my peace of mind, as the data on the disk is not sensitive), I have no real way of knowing that / if those “recoverable” files on the ssd, now – allegedly “securely overwritten” – really have been securely overwritten. I guess I can try a bunch of other recovery software. But then, what they find those files – still there? Oh dear, better close my eyes and pretend I don’t see them…

  15. JimS says:

    Forensic tools can tell you what traces of files still exist on an ssd. I do believe that the better these devices comply with drive standards, the more likely it is that secure erase and secure wipe will be very effective. Recently, jetico, the vendor for bcwipe, has released a whole disk utility, bcwipe total wipe out (two). Documentation states that it can wipe even an ssd, if the ansi standards are supported. It has a wipe option that does secure erase on the third pass. At my work, we use samsung ssd on dell Ultrabook laptops. The bcwipe application did work, for a complete disk wipe, verified with winhex. Not everyone will be using highend solutions, but some solutions can eradicate the data. One thing to note about encryption on ssd: most forensic tools work best on encrypted drives because trace elements of the key are left unencrypted.

  16. Juan says:

    Wtf is wrong with you people? PM doesn’t “write” anything to SSD’s. It simply cuts all current to transistors. No current..no on/off (0/1) state..no on/off state, no data. The SSD’s need current to open/close transistor gates.

    • Ray Woodcock says:

      Juan — not sure if I’m understanding your comment, but it sounds like you may be mixing up static and dynamic RAM.

      • Juan says:

        If you apply a charge to every transistor the transistor’s gates will either all open or all close creating state of ALL zero’s or ALL ones in EVERY transistor. The “erase” term is probably a misnomer. However, any data that was stored in the transistor before is non-recoverable. There’s no way to determine the prior on/off state in a transistor that’s been charged/discharged using Parted Magic.

        • Juan says:

          Here’s the less simplistic version from HSW.com:
          The NAND flash of a solid-state drive stores data differently. Recall that NAND flash has transistors arranged in a grid with columns and rows. If a chain of transistors conducts current, it has the value of 1. If it doesn’t conduct current, it’s 0. At first, all transistors are set to 1. But when a save operation begins, current is blocked to some transistors, turning them to 0. This occurs because of how transistors are arranged. At each intersection of column and row, two transistors form a cell. One of the transistors is known as a control gate, the other as a floating gate. When current reaches the control gate, electrons flow onto the floating gate, creating a net positive charge that interrupts current flow. By applying precise voltages to the transistors, a unique pattern of 1s and 0s emerges.

        • ConcernedSSD says:

          But then how are files able to be recovered?

  17. Morgan T says:

    Linux secure erase invoke within a simple GUI:
    1. Download the free YUMI Multiboot USB Creator tool and with a 1GB or larger flash drive plugged in, run it as administrator.
    2. Select redobackup-livecd from the drop down options and there will be a link to download the iso file if you don’t have it yet.
    3. Use the bootable redobackup flash drive to boot up your computer and close the backup application
    4. Click the gear icon (Start), Disk Tools, Drive Reset and follow directions.
    5. If the intended drive is USB adapter connected it will fail to wipe data but it works fine to secure erase a direct SATA connected drive
    6. Under Disk Tools there is also a Disk Utility which will allow deleting partitions on an SSD in scenarios where various Windows methods had failed to allow deletion or drive wipe

  18. Ken Schleede says:

    Interesting article and good info in the replies. One thing I have been trying to figure out for a security application is when Trim is working correctly, is there a way to say that it is done? Example: I delete a set of files (and hopefully Deterministic Read Zeroes After Trim is on, see Phil’s reply above), and then the drive will garbage collect and clear the cells “soon” or “almost immediately”. Is there a way to ask the drive if it is done doing that? Anybody have any experience with that? Is there an API that says: “Garbage Collect has work to do or is idle”? Thanks for any reply. And thanks again for the interesting article and replies.

  19. Ramon Escalante says:

    It might be worth noting that of all the techniques you consider, 60% of them have the disk run whatever implementation of the ATA Secure Erase operation is in the disk firmware, if any.
    If correctly implemented by the manufacturer, that’s however the safest way (without physically shredding the SSD to pieces) since, as Juan mentions, it should physically “reset” the whole drive.

    None of the other techniques (trim, format, overwrite…) are really efficient since the SSD controller doesn’t let you control what physical areas of the drive you access, so you have no guarantee that you actually scrubbed everything.

    Your proposal to format, then encrypt the whole disk with TrueCrypt after the fact will not work either: SSDs are over-provisioned to cope with wear; they actually have more physical space than they report, so “encrypting the whole disk” would not actually overwrite all blocks.

    Encrypting the drive from the start, then securing erasing it would probably be more effective.

    But in the end, it all comes down to the fact that you don’t know and don’t control what the SSD actually does.

  20. rambler78 says:

    Erasing SSDs is quite a thing, thanks for the exhaustive post. Unfortunately I didn’t find time to read it all.
    I wanted to add a couple of bits of info:
    1) It is not provably effective in securing a drive, but if you want to generate a file (possibly encrypted) the exact size of your disk, you can use fsutil in windows (I’m not much across platforms but I think it’s mkfile in linux).
    2) from a discussion with a vendor’s tech:
    “In Windows 7 and newer, when you create a new partition on your drive, the OS automatically issues a full-drive TRIM command. What this does is tell the drive that the entire user space contains what’s known as “invalid” data. This lets the drive know that it’s okay to start erasing these blocks, and so the drive will… at its leisure. As the drive sees that it’s necessary to erase old data in order to have space ready for new data, it will do so. However, this operation does not take precedence over new data writes from the host, so it’s not really certain when the physical erase operations will take place… but they will eventually take place.”

    So again writing a file the size of the disk may be “useful” compared to overwrite programs (DBAN etc) which will just thrash an SSD without necessarily erasing.

    Vendor strongly recommended using vendor-supplied free program with SANITIZE command, or using third party software that runs SANITIZE.

    Failing that physical shredder., which polarises people into more/less fun.

  21. Albert says:

    On my Yoga 13 running Windows 8.1, the tools found under Properties for the Samsung SSD drive have an optimize function that sets the schedule for optimization. When I manually executed the feature, it displayed the percentage trimmed. Must be that the software recognized the SSD and executed the trim command.

    Neither Samsung, the SSD maker, nor Lenovo knew the method to execute the trim command. Samsung said its disk tool would not work because of Lenovo original equipment manufacturer (OEM) specificity.

    This article states that secure erase of entire disk is available in disk since 2001. This may be the command that parted magic initiates. http://cmrr.ucsd.edu/people/Hughes/documents/DataSanitizationTutorial.pdf

    Secure Erasure Implementation and Certification
    CMRR has studied secure erase for the Federal Government for many years, and its research4 demonstrates three distinct protocols for user data deletion:
    Weak deletion by users deleting files in public operating systems such as Windows or Linux (“usual computer erase’ in Figure 1). This deletes only file directory entries, not the user data itself.
    Block overwrite utilities overwrite all user accessible blocks (at the time of overwriting). It gives a higher level of deletion confidence than file erase, and these utilities claim to meet Federal Government requirements in DoD 5220. Today’s hard drive technology has obsoleted this document, and NIST 800-88 should be used instead.
    Disk drive Secure Erase is a drive command defined in the ANSI ATA and SCSI disk drive interface specifications, which runs inside drive hardware. It completes in about 1/8 the time of 5220 block erasure.

    Thank you for your research and background information.

  22. Albert says:

    Would a reset, refresh, reinstall eliminate traces of previous ssd disk images?
    Refreshing your PC will reinstall Windows and keeps your personal files and settings. It also keeps the apps that came with your PC and the apps you installed from the Windows Store. Resetting your PC will reinstall Windows but deletes your files, settings, and apps—except for the apps that came with your PC.

  23. dg1261 says:

    “Would a reset, refresh, reinstall eliminate traces of previous ssd disk images?”

    I’m not sure what you mean by “disk images”, but that approach is not effective in securely erasing a SSD (which is the subject of Ray’s blog post).

    I’m late to this thread, but as Ramon correctly pointed out (18-Mar-2016), any method that does not invoke the ATA Secure Erase command built into the SSD’s firmware cannot be assumed to be fully effective because of the nature of TRIM and overprovisioning. That’s because TRIM and overprovisioning regularly move data around, so the OS cannot be sure whether previously used blocks have actually been erased when the firmware’s sleight-of-hand is redirecting the OS’s viewport to different physical blocks. Only the SSD’s firmware knows which physical blocks it is mapping to which virtual “sectors” the OS sees at any given time.

    It’s worth reiterating that the subject of Ray’s outstanding research is how to securely erase all contents of a SSD so nothing can be recovered, including from the overprovisioned area which in some cases may not be visible to an OS. The fact linux or Windows might not see those extra blocks is no guarantee they’re erased or that the NSA couldn’t dismantle the SSD and recover something by reading the chips directly.

    As Ramon stated, the ATA Secure Erase command is probably the most certain way of erasing everything, but as Ray’s research has revealed, not every manufacturer fully implements the command as we might think.

    Also, Juan was correct when he stated (7-Sep-2015), “There’s no way to determine the prior on/off state in a [cell] that’s been [reset].” (I hope I’ve paraphrased him correctly.)

    The physics of flash memory cells are fundamentally different from magnetic memory (traditional hard disks), so don’t get sucked down the rabbit hole of DBAN, Truecrypt, or DOD-level multiple, random pattern overwrites. Those technologies may be relevant to magnetic memory because erased/overwritten magnetic surfaces can retain residual magnetism that could reveal prior contents, so their purpose is to scramble that residual magnetism as much as possible.

    In contrast, once a NAND flash cell is reset, it’s reset. It can only be a 1 or a 0, and once it’s been reset to 1 there’s no way to tell what it may have been before it was reset. Encrypting a cell or overwriting it with random data multiple times will make no difference. (The context here is erasing a given physical cell.)

    The trick, however, is to make sure all blocks get reset, including the blocks OS’s won’t normally see. That’s what we hope the firmware’s Secure Erase command will do.

    Magnetic memory is actively written as 1’s (magnetic flux in one direction) or 0’s (flux in the opposite direction). Thus, both 1’s and 0’s can be overwritten–albeit, possibly with residual flux along the edges which might give away the previous flux direction–but nonetheless, for typical users there is no reason the hard disk can’t write changed cells back to the same location because 1’s and 0’s can be overwritten in either direction.

    In contrast, and to expand on the point Juan started, the input line of a NAND flash cell can only change a 1 to a 0, not a 0 to a 1. A byte that has been reset starts out as 11111111. If you try to overwrite it with 11110000, the left four cells are left alone while the right four cells are pulled low and locked in that state. It will correctly read back as 11110000. But note what happens if you then try to overwrite that with 10101010: alternating cells are either pulled low or left alone, but this time the result will be 10100000. That’s not what you wanted.
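    The behavior described above (a program operation can only pull bits from 1 to 0) means a second write without an erase is effectively AND-ed with what is already in the cells. A minimal sketch, reproducing the comment’s own 11110000 / 10101010 example:

```python
# Model of NAND programming without an intervening erase:
# bits can only go 1 -> 0, so successive writes AND together.

ERASED = 0b11111111  # a freshly erased byte reads as all 1s

def program(current: int, data: int) -> int:
    """Model a NAND program operation: bits can only be pulled from 1 to 0."""
    return current & data

byte = program(ERASED, 0b11110000)  # first write after erase: reads back 11110000
byte = program(byte, 0b10101010)    # second write, no erase in between
print(f"{byte:08b}")                # prints 10100000, not the intended 10101010
```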

    The solution is to reset to 11111111 before writing 10101010. But by design, flash chips can only be reset a whole block at a time, and an erase block is far larger than the pages that get written (pages are typically 4KB to 16KB, while erase blocks run from 128KB to several megabytes). Thus, if you needed to change just a couple of bytes, you’d have to read the whole block from the SSD, make the couple of changes desired, reset the entire block, then write everything back to the newly reset block. That’s hugely inefficient.

    That, plus the desirability of wear leveling, is the reason for TRIM. It’s more efficient to read the affected data, make the changes, and write it back to some other physical block that had previously been reset and was ready and waiting. Remapping the blocks is all handled on the fly by the firmware, and the OS is none the wiser that the block it thought it was overwriting is now physically somewhere else.

    When storage access is otherwise idle, the SSD’s firmware will engage in “garbage collection”, rounding up the old, now unmapped blocks and resetting them in the background in preparation for reuse.
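    The out-of-place writes, remapping, and background erasure described above can be sketched as a toy flash translation layer. Everything here is illustrative (the class and its structures are invented for the sketch, not a real firmware design):

```python
# Toy flash translation layer: logical block numbers are just names;
# updates go to a fresh physical block, and the stale copy waits for
# background "garbage collection" to erase it.

class TinyFTL:
    def __init__(self, physical_blocks=8):
        self.flash = {p: None for p in range(physical_blocks)}  # physical store
        self.map = {}                              # logical -> physical block
        self.free = list(range(physical_blocks))   # erased, ready to program
        self.dirty = []                            # unmapped, awaiting erase

    def write(self, logical, data):
        new = self.free.pop(0)                     # grab a pre-erased block
        self.flash[new] = data
        if logical in self.map:                    # retire the old copy instead
            self.dirty.append(self.map[logical])   # of overwriting in place
        self.map[logical] = new

    def garbage_collect(self):                     # runs when the drive is idle
        while self.dirty:
            p = self.dirty.pop()
            self.flash[p] = None                   # model a block erase
            self.free.append(p)

ftl = TinyFTL()
ftl.write(0, "old data")
ftl.write(0, "new data")   # same logical block, new physical location
# The stale "old data" still physically exists until garbage collection:
stale = [d for d in ftl.flash.values() if d == "old data"]
print(len(stale))          # 1 before GC
ftl.garbage_collect()      # now the stale copy is gone
```

This is also why a wiping program addressing "location A" may never touch the physical cells it thinks it is overwriting.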

    (BTW, the fsutil command Ray referenced doesn’t “enable” TRIM; it merely reports whether Windows allows it, and that doesn’t mean the SSD is actually doing it. To tell whether it’s really happening, try the trimcheck utility. To quote from its GitHub page, “The program will set up a test by creating and deleting a file with unique contents, then (on the second run) checks if the data is still accessible at the file’s previous location.” If the deleted data is still accessible after the second run, that would prove TRIM is not working.)

    So to summarize: as Juan intimated, don’t bother with solutions involving encryption, random non-zero patterns, or multiple pass overwrites. They’re just a waste of time. As Ramon stated, look for solutions invoking the SSD’s internal ATA Secure Erase feature (and hope the manufacturer has fully implemented it). But most of all, thanks to Ray for exposing the reality vs. the theoretical process.
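    For reference, on Linux the drive’s internal Secure Erase is usually invoked with hdparm. A hedged sketch only: /dev/sdX and the temporary password "p" are placeholders, the drive must not be in the "frozen" state (a suspend/resume cycle often unfreezes it), and these commands destroy all data on the drive:

```shell
# 1. Confirm the drive supports Secure Erase and is "not frozen"
hdparm -I /dev/sdX | grep -A8 "Security:"

# 2. Set a temporary user password (required before erase)
hdparm --user-master u --security-set-pass p /dev/sdX

# 3. Issue the erase; the same password unlocks it
hdparm --user-master u --security-erase p /dev/sdX
```

Whether the firmware actually resets every block, including overprovisioned space, is exactly the manufacturer-implementation question raised above.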

  24. steve remo says:

    DAMN! Mr. Woodcocks, what are the chances of coming across you twice in a day.. lol 🙂
    Oh man, I love how your mind functions = it really schools me on thoughts lurking in my
    GOD blocks of memory, that are also inaccessible.
    Ok, big kudos to all who contributed.. excellent insights and information on the physics.
    My feelings / thoughts ran the gamut of many espionage thrillers and high-tech movies.. lol.
    Yet, what was touched on softly, and of which I am certain, is that all of us are CONSUMERS.
    What I’m saying is that nary a one of us has the RESOURCES of a Nation State to really know
    precisely what back-doors there are.
    From the days of MIC (Military Industrial Complex) to today, the Super-Powers have Einsteinian
    computer scientists like so many nickels and dimes in their pockets. (Quote: The Godfather.)
    For every 1 integration / implementation of security / defense, I can guarantee you, they have 5
    ways to defeat it.
    That being said, I can’t count the hours spent on my ignorance of these fascinating machines.
    Even a poor, disabled bastard like me, who only makes $15k a year, would take the $150 SSD
    outside and use a sledgehammer :-). But I digress, we’re here to learn. Thank you again Mr.

  25. Kes says:

    To add a tiny bit of clarification, the ignored files that Ray found when running Recuva are the count of live files. Live files, by definition, don’t feature much in deleted file recovery. A great deal of misunderstanding would be avoided if Recuva simply labelled the ignored files as live files.

    Ray (and others) is puzzled that, after ensuring that TRIM is in effect, Recuva can still find a large list of deleted files, some apparently quite old. This is not a failure of TRIM or anything else, but a misunderstanding of how Recuva (and other recovery software) works, or a failure to read the documentation. Recuva scans the MFT to find and list deleted file names and their cluster addresses. When a file is deleted and its data TRIM’d away into oblivion, the entry in the MFT remains, suitably flagged. The clusters are generally still there too, but will contain zeroes*. Very short files (say, fewer than 700 bytes) are contained entirely in the MFT, and all the TRIMs in the world won’t remove those; they can still be recovered.

    There are methods to get rid of these file names in the MFT, or at least overwrite them, but these methods can incur data cluster overwrites which should be avoided on an SSD – because they’re pointless.

    *There is no such physical thing as a cluster of zeroes in an SSD. When a cluster is TRIM’d the data cluster is unmapped: when an unmapped cluster is requested the SSD controller returns a default cluster full of zeroes.
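    The footnote’s point can be sketched in a few lines: nothing zero-filled is ever stored; the controller simply synthesizes a zero buffer for any cluster with no mapping (the names here are illustrative):

```python
# TRIM modeled as dropping a logical-to-physical mapping: reads of an
# unmapped cluster return a freshly synthesized buffer of zeroes.

CLUSTER_SIZE = 4096
mapping = {}  # logical cluster number -> stored bytes

def read_cluster(lcn: int) -> bytes:
    # Unmapped (TRIM'd) clusters are not read from flash at all;
    # the controller just returns zeroes on the fly.
    return mapping.get(lcn, bytes(CLUSTER_SIZE))

mapping[7] = b"secret".ljust(CLUSTER_SIZE, b"\x00")  # a live cluster
del mapping[7]                                       # model a TRIM
assert read_cluster(7) == bytes(CLUSTER_SIZE)        # reads back all zeroes
```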
