I was running Windows 7, with a solid state drive (SSD) that I wanted to securely erase. People commonly sought secure erasure before selling a used drive or otherwise making it available to others, for the simple reason that a casually erased drive might still contain data that others could retrieve.
The problem confronting me was that, as I had lately learned, secure erasure of an SSD was entirely different from secure erasure of a hard drive (HDD). It would apparently require new tools and techniques. This post discusses the secure SSD erasure options I reviewed. The Summary (at the end of this post) offers a brief recap. Another post applies insights from this post to a specific SSD wiping task.
DBAN and Other Drive Wiping/Overwriting Tools
It didn’t take much browsing to discover that simply running an HDD wiping program like DBAN or Eraser would not provide a reliable solution. I did not master the technical details, but the general idea seemed to be that the writes issued by a wiping program would pass through the SSD’s flash translation layer, the firmware component that implements the drive’s wear-leveling feature.
What this meant, in effect, was that the wiping program might think it was writing a set of ones and zeroes to location A on the SSD, but the flash translation layer would instead send those ones and zeroes to locations B, C, and D, so as to ensure that no one part of the SSD would be worn down too quickly by too many writes. So while the wiping program might eventually succeed in overwriting much of the SSD, it would not do so in the systematic way its programmer intended, and would probably not reach some portions of the SSD. Belkasoft seemed to say that, in addition, those ones and zeroes might clutter up the SSD and thereby degrade its performance. I wasn’t sure, but it seemed those concerns might be at least somewhat addressed by an approach in which I would generate random junk files to fill the drive’s free space, and then format the drive to clean off the junk. I assumed it would be better to create small random files for wiping purposes, so as not to leave a large chunk of drive space unfilled.
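That fill-with-junk idea can be sketched in a few lines of script. This is a rough illustration of the approach, not a vetted wiping tool: the 16MB file size and the count cap are arbitrary choices of mine, and, per the discussion above, there is no guarantee that the SSD's firmware actually puts the random data where the file system thinks it does.

```shell
# Fill free space on a target drive with small random junk files, then
# delete them (after which one would reformat the drive). The count cap
# is a safety guard; in real use it would be set high enough to fill the
# drive, and the loop would stop on its own when a write fails.
fill_free_space() {
  dir="$1"    # a directory on the drive to be filled
  cap="$2"    # maximum number of 16MB files to write
  mkdir -p "$dir" || return 1
  i=0
  while [ "$i" -lt "$cap" ] &&
        dd if=/dev/urandom of="$dir/junk_$i.bin" bs=1M count=16 2>/dev/null; do
    i=$((i + 1))
  done
  echo "$i"   # report how many junk files were written
}

# Example (hypothetical mount point): fill_free_space /mnt/ssd/junk 100000
```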
It was also often said that wiping programs would reduce the SSD’s longevity by conducting excessive writes. Such programs would typically perform only a handful of writes, on drives built to survive 1,000 if not 10,000 writes on each memory cell. But perhaps the SSD’s firmware would sometimes misdirect many such writes to a single spot in the SSD’s chips, prematurely aging that particular spot.
It appeared that failure to achieve a secure wipe would also follow, for the same reasons, from attempts to use SDelete, the cipher /w:D command, or the format e: /fs:NTFS /p:2 command to wipe free space on a drive (after deleting all files, so that the whole drive was free space). Presumably cipher and the like would not run into this problem on a dumb USB flash drive, as distinct from a too-clever SSD.
Secure Erase Tools Provided by SSD Manufacturers
To achieve a secure wipe, many posts advised using the ATA Secure Erase command. This, however, was a mystery: what exactly was that command? Nobody seemed to say.
Eventually, I found a MakeUseOf article that seemed to be telling me that the Secure Erase command was a firmware capability, internal to the SSD and accessed by the drive tools offered by some SSD manufacturers. I assumed this would be the case in, for example, the secure-erase option found in the Intel Solid-State Drive Toolbox. I did not have an Intel SSD, however, and there were indications that using the wrong tool could “brick” an SSD (i.e., turn it into a useless block). I had a Kingston SSD, and Kingston’s Toolbox 2.0 offered no such secure-erase option. But even if there had been a Kingston Secure Erase tool, there would be the concern (below) that it might not work as advertised.
It seemed that another way to use the ATA Secure Erase command, also cited in that MakeUseOf article (and in 1 2 3 others), was to reboot with Parted Magic ($4.99) and use its Erase Disk option. In the version of Parted Magic that I used (pmagic_2014_02_26.iso, installed on a YUMI drive), the Erase Disk option was available as a desktop icon and also via Start > Disk Management > Erase Disk; in the version of Parted Magic used by MakeUseOf, apparently the option was Start > System Tools > Erase Disk. Within Erase Disk, there were multiple (in my case, seven) options. The last of those options was, “Internal Secure Erase command writes zeroes to entire data area.”
That description provoked some doubts. It seemed to conflict with a forensic webpage that said, “There is no internal linear mapping of sectors in a SSD” and “For practical purposes, it is impossible to overwrite new data in place of old data.” In this light, the Parted Magic claim to write zeros to the entire data area sounded much like the approach taken by hard drive utilities like DBAN (above). Maybe Parted Magic was going to attempt to write zeroes to the entire area, but there was no telling where on the SSD those zeroes would actually be entered. Or maybe they meant that they would be using the ATA Secure Erase command to write zeroes, if indeed that was what the ATA Secure Erase command did.
To explore further, I booted Parted Magic on a laptop containing an mSATA SSD. I went into the Erase Disk > Internal Secure Erase option. In the Secure Erase dialog that appeared, tooltips (i.e., pop-up explanations that appeared when I let the mouse hover on one of the listed drives) estimated 228 minutes to securely erase the laptop’s 1TB HDD, and 60 minutes to securely erase the unit’s 240GB SSD. In other words, Parted Magic seemed to be estimating an erasure rate of about 250GB per hour for both drives. It did not make sense that data would be erased from an HDD and an SSD at the same rate. That is, Parted Magic did not seem to be making special arrangements for an SSD.
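As a quick sanity check on those tooltip numbers, here is my own back-of-the-envelope arithmetic (treating 1TB as 1000GB and rounding to whole GB per hour):

```shell
# Erase rates implied by the two Parted Magic tooltip estimates:
hdd_rate=$((1000 * 60 / 228))   # 1TB HDD in 228 minutes -> ~263 GB/hour
ssd_rate=$((240 * 60 / 60))     # 240GB SSD in 60 minutes -> 240 GB/hour
echo "HDD: ${hdd_rate} GB/h, SSD: ${ssd_rate} GB/h"
```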
Moreover, the tooltip provided estimates for both Normal and Enhanced erasure methods (discussed in more detail below). These options would normally be expected to proceed at very different rates. Yet for the SSD, the tooltip provided the same estimate — 60 minutes — for both methods. That was not consistent with a forensic study (p. 11) reporting that complete and secure (presumably Enhanced) erasure on an SSD could transpire within “only a few minutes.” Given various claims that Parted Magic used the Linux hdparm command, this 60-minute estimate was also not consistent with a Linux wiki page (below) stating that hdparm achieved secure erasure of an 80GB SSD in about 40 seconds.
I did make a brief and unsuccessful effort to find more information on what Parted Magic’s Erase Disk option was programmed to do, or what it actually achieved. I encountered reports that Parted Magic did not work for some users. Given my doubts, I decided to continue with investigation of the hdparm command that Parted Magic supposedly used.
The Linux HDPARM Command
In my browsing, I encountered many indications that hdparm provided a way to achieve a secure erase of an SSD.
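Those indications generally pointed to a three-step procedure along the following lines. I present it as a sketch of what the ata.wiki.kernel.org page (discussed below) described, not as something I verified end-to-end; /dev/sdX stands for the target drive, and the password ("Eins", from that page's example) is temporary and arbitrary:

```shell
# 1. Confirm the drive's security state is "not frozen" (a suspend/resume
#    cycle reportedly unfreezes some systems):
hdparm -I /dev/sdX | grep -i frozen

# 2. Set a temporary user password, which the ATA spec requires before an
#    erase can be issued:
hdparm --user-master u --security-set-pass Eins /dev/sdX

# 3. Issue the Secure Erase (some hdparm builds also accept
#    --security-erase-enhanced for the Enhanced variant):
hdparm --user-master u --security-erase Eins /dev/sdX
```

If the erase step failed partway through, the drive could reportedly be left in a password-locked state, which was one way these experiments were said to go wrong.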
In Linux-speak, “man” (short for manual) pages are usually primary sources of guidance on the use of commands. The man pages for hdparm at man7.org and cornell.edu repeatedly warned that hdparm was a dangerous command, capable of damaging drives and systems if misused — and that misuse, for these purposes, might involve nothing more than the entry of the wrong command options, or failure to issue the command with certain necessary options. This did not mean that one should not use hdparm. It did suggest that, between the command’s dangers and the relative difficulty of getting into Linux to use it in the first place, hdparm would not be the leading candidate for the typical Windows user.
According to Wikipedia, the purpose of hdparm was “to set and view ATA hard disk drive hardware parameters.” This, and the very name of the command (i.e., hdparm), did not make it immediately obvious that hdparm was appropriate for SSDs. Man pages repeatedly indicated that the hardware parameters being examined were often specific to hard drives — involving drive rotation speed, for example, and drive geometry (cylinders, heads, sectors). I noticed, in addition, that the Cornell man page contained no instances of “secur” (e.g., secure, security) and that, in discussing the ATA Security Feature set, the man7.org page said, “These switches are DANGEROUS to experiment with, and might not work with some kernels” — with, that is, some Linux installations. Similarly, the ata.wiki.kernel.org page offered numerous disclaimers, including this one:
If you hit kernel or firmware bugs (which are plenty with not widely-tested features such as ATA Secure Erase) this procedure might render the drive unusable or crash the computer it’s running on.
This acknowledgement that hdparm’s ATA Secure Erase feature had not been widely tested raised, once again, the prospect that a great deal of faith was being placed in a black-box command — in, that is, a mysterious gizmo that would somehow produce magical results. Along those lines, a Linux Magazine article said,
Secure erase has two pitfalls: hdparm can only initiate a secure erase when the BIOS also allows it. Beyond that, the method is considered to be experimental.
Additional concern arose from this statement on the ata.wiki.kernel.org page:
The security-erase command is a single command which typically takes minutes or hours to complete, whereas most ATA commands take milliseconds, or seconds to complete.
If it would take hdparm an hour or more to complete a secure erase, then it did seem plausible that Parted Magic (above) was using hdparm; but with that kind of time lag, I could not tell whether this experimental hdparm tool was indeed correctly implementing the ATA Secure Erase command or was, instead, just doing a DBAN-type overwrite. At this point in my investigation, there did not appear to be empirical verification that hdparm was functioning reliably.
It seemed to me that use of hdparm could yield inconsistent results. Consider, for example, this statement from the ata.wiki.kernel.org page: Secure Erase via hdparm “took about 40 seconds for an Intel X25-M 80GB SSD, for a 1TB hard disk it might take 3 hours or more!” (In fact, one user reported that it did take three hours for a 1TB drive.) If hdparm was able to erase 80GB in 40 seconds, it would seem that data could be erased at a rate of 2GB per second. Hence a 1TB drive should take about 500 seconds (i.e., less than ten minutes). Why would it take three hours? It didn’t seem to be because hdparm was using Normal mode in one case and Enhanced mode (see below) in the other: a search of 1 2 3 4 different man pages yielded no clear indication of how one might specify Enhanced mode in hdparm (which, itself, raised the question of why a correct implementation of the ATA Secure Erase command would offer only Normal mode). Rather, I wondered whether something akin to write amplification was underway — whether, that is, hdparm might be getting lost in the multiplying complexities that its approach to data deletion might imply as drive size increased.
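The projection in the preceding paragraph works out as follows (my own integer arithmetic, again treating 1TB as 1000GB):

```shell
# If 80GB erases in 40 seconds, the implied rate and 1TB projection are:
rate=$((80 / 40))         # 2 GB per second
secs=$((1000 / rate))     # 500 seconds, i.e., less than ten minutes
echo "${rate} GB/s -> ${secs}s for 1TB"
```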
These reflections left me baffled. How could it be that the ATA Secure Erase command had been built into virtually all drives manufactured in at least the past ten years, and yet there would be such confusion about what that command implied, or how to make it work?
I could certainly not say that Parted Magic or hdparm was obviously flawed or inappropriate for use on SSDs. I noted that, for instance, Corsair recommended using Parted Magic to achieve a secure erase. It was just that the situation did not seem as straightforward as one might hope or expect. To cite the manufacturer whose device got me started on this investigation, it appeared that Kingston did not yet endorse Parted Magic for secure erases, apparently preferring HDDerase (below). In short, there seemed to be incompatible impressions that I was not able to resolve in a brief examination. For such reasons, it appeared advisable to continue my search.
HDDerase

Numerous webpages cited HDDerase (sometimes spelled with a capital E, as HDDErase) as another tool capable of issuing the ATA Secure Erase command that had supposedly been built into SSD firmware. HDDerase reportedly came into existence at about the same time as the Secure Erase command, back in 2001. This was not surprising: HDDerase was a product of Gordon Hughes at the University of California – San Diego (UCSD), who (with NSA funding) reportedly helped to develop ATA Secure Erase.
That authorship suggested that one might expect HDDerase to work — that, among other things, it could erase a drive within minutes, as anticipated by the foregoing discussion. There was some support for that belief. For instance, a Kingston webpage stated that HDDerase would erase a 256GB Kingston SSD in two minutes.
This was the point in my exploration in which I began to understand some things about the different speeds at which an SSD might be securely erased. A Tutorial on Disk Drive Sanitization co-written by Hughes and Coughlin (date not specified, but apparently circa January 2007) stated that there were two kinds of ATA Secure Erase commands. There was an “Enhanced Secure Erase command that takes only milliseconds to complete” (p. 1). But there was also a Normal Secure Erase that would take up to two hours for a 100GB drive (p. 2). The difference was that the Enhanced approach achieved faster and higher-security erasure by changing a key that would encrypt all data on the drive — in essence, converting all of the drive’s data to garbage as soon as that key was deleted — whereas the Normal approach would take the slower route of manually overwriting all data on the drive.
At first, I thought the numbers stated in the preceding paragraph would respond to some concerns expressed above. Hughes and Coughlin seemed to be saying that I should not be surprised if it took two hours to securely erase a 100GB drive using the ATA Secure Erase command in Normal mode. But then I realized they were talking about hard drives. SSDs had not yet emerged on the scene, or at least did not seem to figure in their calculations. As noted above, then, I continued to expect that even a Normal (not to mention an Enhanced) secure erasure of a 100GB SSD should be completed in minutes.
Even in the HDD world — to expand upon a concern already mentioned in passing (above) — if ATA Secure Erase had been implemented on virtually all hard drives since 2001, why would anyone still be using Normal Secure Erase rather than the much faster Enhanced method? For that matter, why would people have to use tools like DBAN, if their hard drives already incorporated secure erase technology? A search yielded bits of insight, but did not lead immediately to any definitive discussion of what was happening with Hughes’s 2007 concepts of Normal and Enhanced Secure Erase. To the contrary, I found that Kingston’s webpages for my own Kingston SSD did not seem to contain any reference to “enhanced.”
So there was a real question of what drive manufacturers had implemented, and how they had implemented it. Reports from UCSD’s Non-Volatile Systems Laboratory (2010-2011) led to a discouraging conclusion:
Our results show that naively applying techniques designed for sanitizing hard drives on SSDs, such as overwriting and using built-in secure erase commands[,] is unreliable and sometimes results in all the data remaining intact.
Unfortunately, as summarized in a Tom’s Hardware article, the second of those UCSD reports indicated that most SSDs’ built-in Erase commands failed to delete all data from SSDs. There could be data left on the software-accessible chips in the SSD, and there could also be a failure to wipe special memory chips, containing user data, that would not be software-accessible but that could apparently be read with moderately skilled hardware tinkering. I was not at all clear on whether a 2007 protocol for ATA hard drive erasure, not obviously oriented toward SSDs, would itself accurately anticipate contemporary SSD architectures.
In short, if programs or commands like HDDerase and hdparm simply triggered an ATA Secure Erase command, resting on the assumption that the command would function as intended in the SSD, then those programs would be ineffectual in the many instances where SSD manufacturers had not implemented the ATA Secure Erase command properly. As noted in a PCWorld article, efforts to achieve secure erasure had run into assorted problems, including “buggy implementations, an out-of-date BIOS, or a drive controller that won’t pass along the commands,” as well as the apparent need to install the drive internally in order to get the command to work and potential issues with “ATA/IDE/AHCI settings in your BIOS.”
That PCWorld article also said that HDDerase, in particular, was not for inexperienced users, and was unable to bypass the frozen security status that most newer drives would use to prevent malware erasures. It was also not always compatible with current hardware: someone reported that, while Hughes’s UCSD webpage still offered HDDerase 4.0 (from 2008), for some Intel systems (and possibly for others as well) it would be necessary to use the older HDDerase 3.3.
While research papers cited above had called for the use of verifiable techniques (so as to ensure that erasures were really taking place as claimed), I had not yet run into solid demonstrations that erasure verifiability was now the watchword of the SSD industry. A Computerworld article (2011) indicated that there was going to be a new Sanitize Device Set addition to the serial ATA specification, but a search for recent articles on that didn’t turn up much. For the time being, it appeared that HDDerase, like hdparm, depended upon SSD manufacturing decisions beyond the control of the program itself. Thus it seemed that HDDerase, like hdparm, could not necessarily be counted on to deliver a secure SSD erasure.
The TRIM Function

As just described, I was having difficulty finding a way to achieve secure erasure at the end of a drive’s service, when I was preparing to shelve or sell it. Another possibility was to do the erasure on the fly, keeping the drive clean on a day-to-day basis. One way to do this would be to use an SSD’s native ability to securely erase files as soon as they were deleted. This was a matter of the TRIM function — which, like ATA Secure Erase, was supposedly built into SSD firmware.
According to a HowToGeek webpage, current operating systems (including Linux and Windows 7+) supported TRIM. In the TRIM function, the operating system would notify the SSD when a file was deleted. Wikipedia said this notification was necessary because the SSD would not have its own direct way of knowing when the user or a program was telling the operating system to delete a file. Upon receiving that notification, the SSD would erase, within its memory cells, those places that contained that file’s data. That erasure would take place shortly (often immediately) after the file was deleted in the operating system. TRIM would thus make sure that deleted things were truly and irreversibly deleted, and would also keep the SSD decluttered for best performance.
This was completely different from an HDD, which would mark the space of the deleted file as available for overwriting but would not actually overwrite or delete it until a new file arrived. Thus SSDs, unlike HDDs, would not be full of supposedly deleted files, just waiting to be restored by file recovery software. Instead, if an SSD said that it had free space, that would really be free space, with no recoverable data in it.
In addition, SSDs and HDDs differed in the effectiveness of their erase processes. Data recorded on an HDD was theoretically able to linger after deletion until new data was written over it multiple times, so as to erase all magnetic traces of the file that had once been stored there. Therefore, HDD-wiping programs like DBAN (above) would offer options to write and rewrite each sector of an HDD repeatedly. By contrast, SSDs did not use magnetization to store data. There would be no lingering signal, hence no need for multiple overwrites. Once TRIM deleted file fragments from an SSD, the contents of those files would be truly and finally gone. As with HDDs, there might be other places (e.g., paging files, temp files) where the operating system would keep copies of deleted files, or data from those files, but it would not be possible to recover data from where the actual deleted files had been stored.
These remarks implied that there should be no need for a secure erase function on an SSD. Just delete the files, leave the drive connected to a power source, and wait for TRIM to do its garbage collection work. According to one forensic article cited above (Bell & Boddington, 2010, p. 11), this would take place “after only a few minutes of sitting idle.” Similarly, if the user deleted all files from the drive, TRIM should then clean the drive, and every space that was cleaned should be cleaned completely, with no magnetic residue or other means of recovering supposedly deleted files.
There was a problem with that ideal scenario. Drive manufacturers reportedly postponed erasures. I guessed that they would do this to reduce wear on the drive and/or to eliminate one potential source of unmarketable delay in speed tests.
For whatever reason, when I ran Recuva file recovery software on my SSD, it reported that there were lots of recoverable files. (DiskDigger and other programs offered competing freeware unerase capabilities. Someone said that HDD Recovery Pro and other nonfree programs would be even better at recovering data from drives.)
Many of the files that Recuva marked as recoverable were in the Recycle Bin, so I right-clicked on it in Windows Explorer, emptied it, and ran Recuva again. That didn’t seem to make much difference: there were still thousands of files, many of which Recuva labeled with a green light to indicate that prospects of full recovery were excellent. I tried recovering one, and was successful: it was back, in good working order. Recuva didn’t show a deletion date, so I couldn’t tell how long these files had been in a deleted condition where TRIM should have wiped out all traces. But it looked like some had been around for weeks if not months, whereas TRIM was supposed to operate in minutes if not seconds.
It seemed that, on my machine, TRIM was not running, or at least it was not running well. One source said that, to make sure TRIM was running, I should go to a command prompt (or maybe an elevated command prompt) and type “fsutil behavior query DisableDeleteNotify.” I got back, “DisableDeleteNotify = 0.” The source said that was good; it meant TRIM was enabled. (If it hadn’t been, apparently the command to turn it on was “fsutil behavior set DisableDeleteNotify 0” — though if it was turned off now, I would have to check again, later, to see if something was turning it off.)
But since TRIM appeared to be enabled, why did I still have thousands of deleted files lying around, including many that were not in the Recycle Bin? Another site suggested checking the SSD manufacturer’s website, to make sure the SSD’s firmware supported TRIM. For my Kingston 120GB SSDNow V300, neither the webpage nor the accompanying datasheet or specifications said anything about TRIM. Some Amazon comments said it was there, however, and CrystalDiskInfo agreed. (Note: at this point I was using a laptop whose drives C and D were on an SSD, and which also had that Kingston SSD connected as drive H via an external USB dock. I had already tried a variety of steps described above and below on the Kingston, so I was mostly running these Recuva tests on drives C and/or D.)
It seemed that TRIM, functioning properly on my drive C, should have deleted forever all those fragments of deleted files. I thought maybe the problem was that I had to fix some other setting. A search led to numerous lists of ways to optimize your SSD. One forum post said that I needed to enable AHCI in my BIOS before installing Windows. Had I done that? No idea. But Wikipedia said, “It is confirmed that with native Microsoft drivers the Trim command works in AHCI and legacy IDE/ATA Mode.” So maybe the AHCI thing was not essential. The installation instructions for my internal Crucial SSD did not say anything about AHCI; then again, they didn’t say much, period.
That forum post debunked a number of other optimization suggestions, and another thread agreed: most such suggestions were unhelpful or simply wrong. One suggestion that did seem worthwhile was to make sure I was using a current chipset driver. But the Crucial firmware updater said my internal SSD didn’t need updating. There didn’t seem to be any other downloads available at the Crucial support downloads page, and (as usual) Device Manager reported that my driver was up to date. So the driver didn’t seem to be the reason why that SSD had so many undeleted “deleted” files.
It seemed that optimization was not the issue. Continued browsing confirmed that, on one hand, people were saying that TRIM immediately put deleted files beyond the possibility of recovery — and, on the other hand, users in one forum indicated that they, too, were able to recover deleted files from SSDs.
Using Recuva to Set the Stage for TRIM
The seeming failure of TRIM made me wonder whether the manufacturer of my SSD had deliberately impaired TRIM, so that the SSD would give users more of an HDD-like experience. Maybe the manufacturer decided that SSD buyers do not know or care much about secure data deletion, but are very upset when file-undelete programs fail to recover accidentally deleted files. Or maybe that was how TRIM was actually supposed to work: maybe the SSD would run it only on those data fragments that the user signaled as being ready for permanent deletion via something like the Shift-Delete key combination in Windows Explorer.
In that case, it seemed that I might be able to use Recuva or some other program to identify those deleted files that were lingering in a half-dead state, and send them to their final resting place. Recuva allowed me to scan an entire drive for deleted files, and then right-click on any selected file and choose Secure Overwrite. But when I went ahead with that, I got a warning:
Files you are going to overwrite might be located on a SSD drive (H:). Are you sure you want to continue?
I guessed that I was getting this warning because Recuva was prepared to do some kind of multipass overwrite process of the type that would be OK on an HDD but ineffective and/or overly wearing on an SSD. So I bailed out of that. Interestingly, I did not get that same error when I chose Secure Overwrite on files on the TrueCrypt encrypted SSD where my drives C and D were located. Possibly TrueCrypt emulated an HDD in a way that Recuva could not see through. In that case, instead, on the encrypted SSD, I got this message:
Are you sure you want to overwrite 2614 file(s)? Checking Yes will mean they are gone forever.
When I clicked Yes, Recuva deleted those files within approximately 20 seconds, suggesting that it might not be doing a multipass HDD-style wipe. The results window, upon completion, indicated that some files were “Overwritten” while others were “Not overwritten,” mostly because “File is resident in the MFT” or because “File is already overwritten by existing file(s).” Unfortunately, a rescan in Recuva confirmed that there continued to be numerous recoverable files on the SSD. It appeared that Recuva might not be effective for achieving final deletion from SSDs.
Taking a different approach, I wondered whether the process of recovering a deleted file, using Recuva, would remove it from the list of recoverable files. If it did, that would be a way to get the deleted files off the SSD and onto an HDD, where I could do a traditional secure erase if desired. Unfortunately, the answer to this question was negative: the deleted but recoverable files remained on the SSD after being restored to a folder on the HDD. All I achieved by this maneuver was to have two copies of the deleted file instead of one: a deleted copy on the SSD, and a recovered copy on the HDD. What I needed here was a way to move deleted files, and no Windows option I knew of would do anything that bizarre.
As an aside, I found Recuva’s operation puzzling in certain regards. For instance, when it started one scan, it claimed to have found 322,307 potentially recoverable files on the SSD containing my unencrypted drive C. When it finished the scan, it reported (at the bottom of the screen) that it had found 143,595 files and had ignored another 180,594 files. That should have totaled 324,189. I guessed that the explanation of the seemingly bad arithmetic might be that its initial reading of 322,307 files went up as it continued its scan: I did not sit there and watch throughout the scanning process. Even so, I did not understand why it ignored 180,594 files. A search led to speculation that perhaps the files were ignored because Recuva found no recoverable data in them, though that would not explain why it went ahead and listed a lot of files that it considered unrecoverable.
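For the record, the arithmetic behind that discrepancy (using the counts Recuva reported) looks like this:

```shell
# Recuva's end-of-scan counts versus its initial count of 322,307:
found=143595
ignored=180594
total=$((found + ignored))    # 324189, the sum actually reported
gap=$((total - 322307))       # 1882 files beyond the initial reading
echo "total=${total}, gap=${gap}"
```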
Basically, I needed a utility that would completely wipe files that the operating system believed were deleted, as TRIM was supposed to do. A search led to a list of free tools for the purpose, but they all seemed oriented toward eliminating deleted files from hard drives by using repeated overwriting techniques. In short, my brief investigation did not reveal that Recuva or related programs were useful for the specific purpose of securely erasing files from SSDs, in such a manner as to ensure that TRIM would clean up their last remnants as advertised.
Adjusting the Recycle Bin to Facilitate TRIM
At this point, I discovered (or perhaps rediscovered) that it was possible to alter the operation of the Recycle Bin in Windows 7. I believed that such alteration might at least reduce the magnitude of the file recovery problem. A search led to a Microsoft webpage that offered several options. One was to empty the Recycle Bin by right-clicking the Recycle Bin icon on the desktop and choosing the Empty Recycle Bin option. That seemed to work: a pass with Recuva, immediately after selecting that option, did not seem to find anything in the Recycle Bin.
That Microsoft webpage also pointed toward other Recycle Bin possibilities. To find those possibilities, I was advised to right-click on Recycle Bin (desktop icon) > Properties. At first, this didn’t work well, perhaps because of a display driver issue on my system; but when I tried again, I got a list of the partitions on my computer, with an indication of how much space each had. In this dialog, I had a couple of options. One possibility was to set the Recycle Bin to be small. That might still give me the option of undeleting accidentally deleted files, but only for a short time, before other incoming, newly deleted files pushed them out, potentially making them vulnerable to the TRIM function. It was unfortunate that Windows 7 did not offer an option to locate the Recycle Bins for all partitions on a single hard drive partition. That might have simplified the process of obtaining a confident secure wipe of the Recycle Bin.
The other possibility, in that Recycle Bin dialog, was to click on the button that said, “Don’t move files to the Recycle Bin. Remove files immediately when deleted.” (That option could also be implemented via registry edit.) According to one poster, it was common practice to turn off the Recycle Bin on an SSD. I decided to do that for drive C, where I would rarely want to recover any accidentally deleted files. Then I ran Recuva again on drive C. Having not only emptied but also turned off the Recycle Bin, I did not expect Recuva to find any recoverable files in the Recycle Bin. Those expectations were fulfilled.
There was, however, the continuing problem that Recuva identified about 140,000 other recoverable files on drive C. (It looked like roughly a quarter of those files were adjudged to be in excellent condition, for purposes of recovery.) This time, Recuva’s Secure Overwrite option estimated that four (increasing to five, then six, then seven) hours would be necessary to complete the overwrite process. Having thus essentially confirmed that we were once again using a slow and potentially futile HDD-type approach, I canceled the overwrite. Regarding the files that Recuva had attempted to delete, a post-operation pop-up told me that, as above, some had been overwritten but others could not be. My Recycle Bin adjustments may have reduced the number of files that would be sheltered from TRIM within the Recycle Bin, but they had not resolved the problem of many thousands of other recoverable files lurking undeleted on the SSD for a potentially long time.
I had assumed that, after checking to make sure that TRIM was enabled (above), TRIM would simply go to work. Now, however, I found a webpage confirming that, even when TRIM was enabled by the operating system, it might still fail to keep an SSD clean of potentially recoverable data fragments.
That webpage (seconded by Belkasoft) said that TRIM would be impaired or nonexistent with older SSDs, older versions of Windows and Mac OS X, non-NTFS file systems, non-SATA (e.g., USB, NAS) drive connections, PCI-Express SSDs, RAID configurations, and corrupted drives. But none of those exceptions applied to my drive C, so they could not explain the tens of thousands of recoverable files appearing on my SSD. The situation was still a mystery.
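As an aside, the standard Windows 7 check for whether TRIM is enabled is to run `fsutil behavior query DisableDeleteNotify` from an elevated Command Prompt; a reported value of 0 means Windows is sending TRIM commands, and 1 means it is not. A small shell sketch of how to read that output follows; the helper function and the canned sample line are my own illustration, since fsutil exists only on Windows:

```shell
# On Windows 7 (elevated prompt):  fsutil behavior query DisableDeleteNotify
# Output "DisableDeleteNotify = 0" means TRIM commands are being sent.
trim_status() {
  case "$1" in
    *"DisableDeleteNotify = 0"*) echo enabled ;;
    *"DisableDeleteNotify = 1"*) echo disabled ;;
    *) echo unknown ;;
  esac
}

# Canned sample of fsutil's output, for illustration:
trim_status "DisableDeleteNotify = 0"   # prints "enabled"
```

The same fsutil command with `set` instead of `query` (and a 0 or 1 argument) toggles the setting, though on a TRIM-capable SATA SSD Windows 7 normally enables it by default.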
So at this point I reached the conclusion that I probably should not rely on TRIM to keep my SSD free of deleted files while I continued to use that drive on a day-to-day basis. In other words, secure erasure at the end of drive usage might not be a simple matter of deleting the last files on the SSD and watching them get TRIMmed away.
Reformat the SSD
Giving up on the real-time, file-by-file approach proffered by TRIM, I returned to the question of how to securely erase the whole SSD at once. A new possibility had emerged: someone at Crucial suggested using Windows 7’s Disk Management (diskmgmt.msc) tool to wipe a drive. The suggestion was to simply right-click on the partitions (or perhaps the volume) that I wanted to delete, there in Disk Management, and choose Delete Volume. (This would not work on the Windows system drive, at least not until that drive was connected to a system booted by some other drive.) Once the volume was deleted, the final recommended step was to let the drive sit overnight (though Belkasoft said it might take only a few minutes), presumably still connected to the machine, so as to allow TRIM to wipe everything out.
That suggestion seemed to imply that TRIM would work on an unformatted drive, as long as it was connected in the ways described above (e.g., SATA or eSATA, not USB). An alternative, suggested by a Ghacks article, was to right-click on the drive (in Disk Management or Windows Explorer) and choose the Format option (specifying NTFS format). The authors found that this was sufficient to eliminate all recoverable data. The drive would then be NTFS-formatted, eliminating another possible barrier to the proper functioning of TRIM (above). The Ghacks article advised a thorough rather than Quick format, but other sources contradicted that. In that suggestion and also in their recommendation of DBAN, the Ghacks authors seemed to be thinking in HDD terms.
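The delete-and-reformat sequence could also be scripted with Windows’ diskpart utility (run as `diskpart /s script.txt` from an elevated prompt). A sketch follows, assuming for illustration that the SSD shows up as disk 1; the disk number must be confirmed with `list disk` first, because `clean` destroys the partition table of whatever disk is selected:

```
rem Verify the disk number with "list disk" before running!
select disk 1
clean
create partition primary
format fs=ntfs quick
```

As with the Disk Management approach, this would not work on the running Windows system drive; the drive would have to be attached to a machine booted from some other disk.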
In my own case, somewhere among the multiple approaches I had tried on my Kingston SSD, the recoverable files seemed to have vanished. I thought that reformatting might have been the cure, but later realized that this conclusion could be mistaken. I had done the formatting with the drive connected via USB. As noted above, TRIM would not work over a USB connection. I would apparently have to plug the drive into an internal drive bay with a SATA connection before TRIM would do its job. Even then, I was not sure that TRIM or even reformatting would erase the extra memory chips that, as described above, were not software-accessible and might thus yield only to the physical reset achieved by a properly implemented ATA Secure Erase command.
Encrypt the SSD
The Ghacks article and others suggested another possibility. The reasoning seemed to be that, if we couldn’t be certain that our files were getting wiped out, at least we could be confident that the pieces that someone would recover would contain no user-readable data. That is, I could use something like TrueCrypt to encrypt the entire SSD. This would convert all of its contents to scrambled data that could only be pieced back together if you knew the password or could crack it, and a complex password would be practically uncrackable. (Another post discusses TrueCrypt security developments in summer 2014.)
Within TrueCrypt, Ghacks recommended using the “Create an encrypted file container” option, whereas I thought it might be better to choose the “Encrypt a non-system partition/drive” option. Since encryption could be a very slow process (though faster on an SSD than on an HDD), it would probably be best to do a Windows 7 quick format before (as well as after) the encryption process. That way, there would be no files to encrypt, the encryption should take only a few minutes, and then TRIM would clean up any encrypted residue, assuming the drive was SATA-connected and otherwise free of the potential problems mentioned above. But I was not entirely sure that the encryption (even without TrueCrypt’s quick format option) would succeed in reaching every part of an SSD, in some way that an overwriting program like DBAN would fail to do.
According to an Ars Technica piece (2011), encryption could also be an appropriate option for securely deleting individual files on an SSD. A search pointed toward advice on how to do that. A StackExchange article suggested that security in general (not just at wipe time) would be enhanced by encrypting the drive at the outset, before putting any data onto it. Belkasoft said, however, that encryption would remove files from the self-cleaning operation of the TRIM command, and that some types of encryption would actually make data easier to recover, at least for someone with sufficient forensic expertise who knew “either the original password or binary decryption keys for the volume.” Decryption keys, it seemed, could be obtained from RAM, if the computer in question was still running, and also from memory dumps and from paging and hibernation files. In addition, Belkasoft said, encryption tended to require rewrites, significantly degrading SSD performance. So the full sequence suggested in the preceding paragraph seemed advisable: format, encrypt, reformat, and then leave the drive attached via its SATA connection long enough to let TRIM complete any cleanup that it might attempt.
Summary
There appeared to be some myths — and, for me and others, much confusion — in the area of SSD wiping. Given the complexities noted in research on the matter, I could not be confident that I, with my novice understanding of such matters, was going to achieve a secure erase by relying on any one tool. It seemed, rather, that a combination of recommended techniques would be advisable, at least until SSD manufacturers began to implement reliable internal technologies. This summary does not necessarily recommend taking all of the routes explored above; it merely recaps that exploration.
I began (above) with DBAN and other disk wiping tools. These did seem likely to eliminate, with an uncertain degree of thoroughness, at least some of the data on an SSD. At the same time, they had potential to make the SSD slower by cluttering it up, and also to shorten its life somewhat by imposing additional wear on its memory cells.
Next, there was the option of using the SSD manufacturer’s SSD toolbox, if one was available. I guessed that Intel’s toolbox was probably pretty good. There did not seem to be one for my Kingston SSD. There were warnings that using one manufacturer’s toolbox on another manufacturer’s SSD could brick it.
For purposes of securely erasing an SSD, the key component of a manufacturer’s SSD toolbox would be the secure erase tool that would effectively implement the ATA Secure Erase command. Ideally, this command would promptly and thoroughly clear both the software-accessible data storage areas of the SSD and also those extra chips, added by the manufacturer for purposes of speed or longevity, that could be accessed only by the drive’s internal firmware. That is, the ATA Secure Erase command was implemented by SSD firmware, not by the operating system. The problem here was that research had demonstrated that manufacturers did not reliably implement that command — that, in the worst cases, the command was completely ineffectual. It did not appear that there would be any harm in running any Secure Erase tool that the manufacturer might have supplied, but it did not seem advisable to rely solely on that tool.
The dependence upon firmware, and thus upon the erasure competence of the manufacturer, was also a problem for several software approaches that sought to provide alternate ways of triggering the ATA Secure Erase command. The Linux hdparm command and the DOS-based HDDerase program both seemed to assume that the Secure Erase command would function as expected; likewise for the Parted Magic tool that apparently provided a GUI for hdparm. Unlike the SSD manufacturer’s toolbox (if any), however, these alternative tools were not necessarily just useless at worst. In the case of hdparm in particular, the wrong command options could ruin an SSD and cause other damage. I doubted that Parted Magic or HDDerase would be comparably risky. Between those two, HDDerase appeared harder to use and had not been updated in some years, leading to reports that it would not work under certain conditions. So, again, I was looking at methods that might add another degree of security to the data erasure effort, albeit at some potential cost. Among these several options, Parted Magic generally seemed to be the safest and most accessible in this brief review.
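For completeness, the commonly documented hdparm sequence underlying those tools is sketched below as a dry run that only prints the commands it would execute. The device name and password are placeholders; with the echo removed, the erase command is immediate and irreversible, so the device name must be triple-checked:

```shell
# Dry-run sketch of the usual hdparm ATA Secure Erase sequence (Linux).
DEV=/dev/sdX   # placeholder: substitute the actual device, verified twice
PASS=p         # throwaway password; the erase itself clears it

run() { echo "$@"; }  # dry run: print each command instead of executing it

run hdparm -I "$DEV"  # inspect first: the drive must report "not frozen"
run hdparm --user-master u --security-set-pass "$PASS" "$DEV"
run hdparm --user-master u --security-erase "$PASS" "$DEV"
```

If `hdparm -I` reports the drive as “frozen,” a suspend/resume cycle of the machine often clears that state, though the details vary by system — which is part of why a packaged tool like Parted Magic could be more approachable than raw hdparm.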
Another approach was to rely on a different aspect of SSD firmware. SSDs had a built-in TRIM capability that would promptly and automatically clean out the contents of deleted files, rather than leave recoverable traces of them as in HDDs. Here, again, it was reported that manufacturers did not necessarily implement the capability as expected. In my own case, I found thousands of supposedly deleted files still available for recovery via easy-to-use freeware. It seemed that the persistence of deleted files could be due to a variety of factors, including drive corruption, non-NTFS formatting, or use of a non-SATA connection. Those factors did not seem to explain the residue of recoverable files on my drive C, however. I was also unsuccessful in attempts to use the Recuva file recovery program to delete most potentially recoverable files from my SSD. At best, adjustments to the Recycle Bin seemed to have the potential to remove at least some persisting deleted files from possible recovery.
Assuming TRIM was properly implemented in the SSD, and assuming the drive was otherwise eligible (e.g., no corruption), it seemed that one potentially workable approach was simply to perform an NTFS quick format of a SATA-connected SSD, encrypt the entire drive with something like TrueCrypt, do another format, and then leave the drive connected for a while, so as to let TRIM do its work. Since the implementation of TRIM in a particular SSD could not be verified, it appeared that a combination of these steps with other methods (above) might provide the most reliable response to the task of securely erasing an SSD.
(I did not investigate, at this point, the subsequent discovery that, as in the case of Lenovo ThinkPad laptops, some systems might offer a downloadable BIOS utility to erase an SSD.)