This post presents my first serious attempt to understand Windows security concerns, from the perspective of an ordinary user. The subject proved much more complicated and multifaceted than I had expected.
The companion post describes the steps I took to install Windows 10 version 1903 in 2019. I thought security would be one aspect of installation. It turned out to be involved in almost every aspect of installation, starting from the beginning — and in much that happened after installation.
It seemed that security was better addressed, not as an incidental aspect of installation, but rather as a fundamental aspect of computing, worth taking seriously on a continuing basis. Hence, this post offers, not an installation guide, but rather a tour through the issues I encountered, with links for further reading.
Summary
This long post has a table of contents (below). This summary briefly recaps the major sections listed there, drawing largely on the summary paragraphs that appear at the ends of most of those sections.
If there is one overriding message from these materials, it is that computing security is complicated because very intelligent and highly motivated people — often, but not always, criminals — are watching every angle of users’ computing experience, looking for an opening that they can use for their purposes. Such openings could entail varied uses of sensitive data, at your expense or that of people you care about: personal embarrassment; loss of the data itself, or of friends, spouses, or clients; being robbed, sued, or sent to prison. As a general rule, it seemed that one should not behave inappropriately (e.g., selfishly, stupidly); that if one does behave inappropriately, a computer’s capacity for performing and documenting such behavior may compound one’s resulting problems; and that if one does find it necessary to involve a computer in inappropriate behavior, one’s thoughts will probably turn to the possibility of covering or expunging evidence of that involvement. Or, as some might say: do no evil; don’t document the evil that you do; and don’t let others find the documentation.
After preliminary remarks, the following discussion introduces the concept of threat modeling. The gist of it is that the user could not protect against every possible intrusion. If a well-funded government agency (e.g., law enforcement, anti-terrorism) considered it important to access your data, sooner or later they would probably find a way. For most users, the more realistic challenge was to make a start, and to keep learning and developing good habits and accumulating good tools, so as to keep reducing one’s exposure. Passwords were an example. They were in use everywhere; everyone knew what they were; but not everyone knew what counted as a good password, and why.
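To put a number on what counts as a good password: resistance to guessing is commonly measured in bits of entropy, which (for characters chosen uniformly at random) is simply the password length times log2 of the alphabet size. The following minimal Python sketch, with example lengths chosen for illustration, shows why a longer password drawn from a larger character set is dramatically harder to crack:

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    """Bits of entropy for a password of `length` characters drawn
    uniformly at random from an alphabet of `alphabet_size` symbols."""
    return length * math.log2(alphabet_size)

# An 8-character all-lowercase password vs. a 16-character password
# drawn from upper, lower, digits, and ~32 punctuation characters.
weak = entropy_bits(26, 8)        # roughly 37.6 bits
strong = entropy_bits(26 + 26 + 10 + 32, 16)  # roughly 104.9 bits
print(f"8-char lowercase: {weak:.1f} bits")
print(f"16-char full set: {strong:.1f} bits")
```

Each additional bit doubles the average work an attacker must do, which is why length and randomness matter far more than clever letter substitutions.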
The ensuing sections consider two attack vectors: physical and online. The physical angle began with the question of who could access or remove the computer. Ideally, the computer would be locked in a room to which no untrusted person had access, with no contact with the Internet or with other computers having Internet access, in a tamper-resistant case securely locked or bolted to a wall or furniture, with constant video surveillance and (especially for laptops) device recovery (e.g., LoJack) and remote access software.
That was at least an approximation of a security ideal. As real-world circumstances required departures from that ideal, the user faced the constant tradeoff between convenience and efficiency, on one hand, and security on the other. Thus the user might want to consider how much of his/her data really had to be copied or moved from its safest possible storage location (e.g., on an encrypted, unwriteable Blu-ray disc kept in a safebox, backed up in another comparably safe form) to a more accessible disk drive, and how much of that data needed to be exposed to the Internet or to the outside world, via laptop or other mobile device.
Assuming someone did gain physical access to the computer, the goal remained the same: make it as difficult as possible for most attackers to gain access to the user’s data, while continuing to make the data available to the user. The next line of protection against a physical intruder was at the level of the BIOS/UEFI setup. Without a BIOS login password, someone could boot the computer with a DVD or USB drive, avoiding the user’s Windows login (a/k/a lock) screen and examining the contents of the system’s hard disk drive (HDD) or solid state drive (SSD). The user could prevent that by requiring a BIOS login password, by using the BIOS setup utility to control which devices could be booted, and by using UEFI with Secure Boot. Unfortunately, an intruder could do much the same thing: boot the computer, go into the BIOS setup utility, and change those settings. To prevent that — that is, to protect those settings — the user would also assign a BIOS setup password. An intruder might still be able to get past these passwords by flashing the BIOS (i.e., resetting it to factory default settings, and then changing them as needed) from a USB drive, but that would tend to wipe out the BIOS passwords, and then at least the user might notice that the machine oddly failed to request a BIOS password. That would be a warning of tampering — for instance, that a keylogger might have been installed. The intruder could restore the requirement of a password, but could not restore the correct password without knowing what it was. Suspicions of BIOS tampering could justify re-flashing the BIOS, so as to wipe out any altered BIOS that a spy might have installed, and restoring a backup system drive image believed to be untainted. Anti-rootkit software might also be justified.
We would want to place additional barriers in the path of a thief or other intruder who got past the BIOS and proceeded to boot this machine (using either its Windows installation or a bootable USB drive), or who removed the computer’s SSD or HDD for viewing on another machine. The next step would be to encrypt the contents of the computer’s drives, preferably using VeraCrypt, so as to make them inaccessible without the VeraCrypt password. As at every other step along the way, there could be keyloggers or other ways of observing or extorting passwords and otherwise defeating security measures; and at some point in the future there would apparently be quantum computers capable of cracking any password (resulting in potential vulnerability for any disk drive not under the user’s control); but for present purposes, in the vast majority of day-to-day situations, defeating VeraCrypt would require a relatively determined and sophisticated effort.
So at this stage, it seems, the intruder would have to get past the BIOS password, to boot the machine into any operating system, and would also have to get past the VeraCrypt password, to boot the installed Windows 10 system. Unfortunately, the computer’s RAM (and its paging and hibernation files, if enabled on an unencrypted drive) could store VeraCrypt passwords entered by the user to unlock data drives, along with contents of and considerable information about data files, so it might not be sufficient to encrypt data drive D while leaving system drive C unencrypted. The contents of RAM would tend to be retained as long as power continued to be supplied, so one would want to verify that the particular machine was being shut down in a way that would clear RAM. Of course, that would not be a problem, from the intruder’s perspective, if the machine was powered up. In that case, regardless of how many BIOS and VeraCrypt and system passwords were in use, an intruder could harvest password information by inserting a suitably configured USB drive when the user was briefly away from the running machine.
In the simple case of an intruder who turned on a computer, sat down at the keyboard, and tried to hack his/her way in, s/he would reach the point of logging into Windows only after getting past the BIOS and VeraCrypt barriers. Getting past those barriers could be easier than one might expect if, for instance, the user lazily used obvious passwords (e.g., “password”) for the BIOS and VeraCrypt. Upon briefly verifying that the passwords worked, the intruder could log out and await an opportunity to remove the computer’s drive, or to boot the computer with a USB drive. In that case, the intruder could simply copy all desired data to another computer without even having to deal with the Windows lock screen, and the user might never know that any such thing had happened. These remarks illustrate the concept of “defense in depth”: maintain multiple layers of security, so that if one failed, another would be there to make things difficult for an intruder. In this case, the user should not rely on the Windows login to keep his/her data safe. Instead, the user should make sure s/he was using recommended procedures (e.g., appropriately complex and unique passwords) at each step along the way. Each time the user did something right, by way of computer security, the would-be intruder would need additional time, opportunities, and/or knowledge to succeed.
If a would-be intruder did reach the Windows login screen, and if circumstances did not favor workarounds like those just mentioned (e.g., the computer had no USB ports to use for booting an alternate operating system; it was not feasible to dismantle and haul away a disk drive; s/he had just this one opportunity), the question was whether the intruder would be able to get through that Windows lock screen and proceed to use the computer as an authorized user. At this point, multifactor authentication (MFA, including the more limited subset known as two-factor authentication (2FA)) could become an important response to an easy or discovered password. The debit card was an example of 2FA: to use it at the store, one would need not only the password (i.e., the PIN) but also the device (i.e., the card).
In Windows 10, the account that one logged into could be a standard user or an administrator account. When the standard user sought to make system changes requiring a higher level of permission, s/he had to enter the administrator’s password. If that password was difficult and rarely used (thus reducing odds of detection), there would be limits on the amount of damage that an intruder, logged in as standard user, would be able to do. There was also a choice between setting up the user or administrator account as a local (a/k/a “Windows,” i.e., traditional standalone computer) or an online (a/k/a “Microsoft”) account. The latter offered the convenience of linking all things Microsoft (e.g., one’s Outlook email account plus one’s Skype account plus one’s login into all of one’s computers) under a single Microsoft account name and password — which was, of course, a bad idea from a security perspective.
Computers were not yet generally equipped to require any factor other than a password at the BIOS level, and disk encryption methods (including but not limited to VeraCrypt) likewise tended to be password-oriented. The situation was slightly different at the Windows 10 lock screen. There, it was possible to allow non-password ways of logging in (e.g., PIN, USB key, fingerprint). Regrettably, Windows 10 did not allow the user to designate a desired combination (e.g., fingerprint plus PIN), so as to require a real MFA login. Instead, these login options were alternatives, increasing convenience and reducing security: depending on what the user had enabled, one could log in with a password or a PIN or a fingerprint or a photo, among other things. Thus, an intruder could succeed if s/he had a working solution for any one of the enabled login options (e.g., a PIN).
Microsoft and some third parties did offer what Microsoft called “two-step” (as distinct from two-factor) authentication. The difference was that the user of two-step authentication was not interacting with the identity checker in two different ways, as would be the case when using the debit card at the store: swiping the card plus typing in the PIN. Instead, in two-step authentication, the user would enter a password, trigger an authentication code delivered to a smartphone (for example), and then enter that code. The interaction was still all keyboard-based, and as such could theoretically be recorded and copied via keylogger. The main protection would be that the smartphone receiving the one-time authentication code would remain in the possession of the authorized user, but — depending on the authentication system — there could be ways of hacking into that information. Fortunately, successful attacks of this nature were apparently not common.
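For the curious, the one-time codes used in such two-step schemes are typically produced by the TOTP algorithm of RFC 6238: the server and the phone share a secret key, and each independently derives a short code from that key plus the current time. A minimal sketch in Python, using only the standard library (the secret below is the RFC’s published test key, not a real credential):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, period=30):
    """RFC 6238 time-based one-time password (HMAC-SHA-1, the variant
    used by most authenticator apps)."""
    key = base64.b32decode(secret_b32.upper())
    # The moving factor is the number of `period`-second intervals
    # elapsed since the Unix epoch.
    counter = int((for_time if for_time is not None else time.time()) // period)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation: the last nibble picks a 4-byte window.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII key "12345678901234567890", time 59
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, for_time=59))  # "287082"
```

Note that the code depends only on the shared secret and the clock, which is why anyone who obtains the secret (or the phone) can generate valid codes; the scheme protects against a stolen password, not a stolen second factor.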
Remote access was the other big attack vector. The preceding comments are largely focused on scenarios involving an intruder with physical access to the computer. But a hacker capable of accessing a running computer via the Internet could do enormous damage without ever seeing the machine’s BIOS, VeraCrypt, or Windows password screens. The focus could be rather different: the physical intruder might want documents or other materials that s/he believed could be found on this particular computer, whereas the typical online intruder would tend to be searching for any system that could yield bank account or other financially valuable information. Software keyloggers could be valuable in either kind of intrusion. Only the physical intruder would be capable of using a hardware keylogger. Online intruders might use browser extensions capable of collecting passwords and other user information from particular websites. For example, one hacker modified a legitimate extension without its creator’s knowledge, resulting in infection of more than a million computers. Online intruders also used phishing and man-in-the-middle attacks to collect login data and other sensitive information.
In terms of countermeasures, a software keylogger or corrupt browser extension might or might not be detected by antimalware tools. A hardware keylogger would be obvious if it was inserted into a USB port on a laptop; less so if it was inserted into a USB port on the back of a desktop computer and then removed within a day or less, after obtaining the desired passwords; and not obvious at all if it was inserted inside a desktop computer’s keyboard. Browsing privacy could be enhanced by using the Tor network, by hiding the user’s IP address, and by using a VPN.
A password manager (PM) (e.g., LastPass) could provide a set of particularly useful countermeasures. On the downside, a PM was capable of swallowing all website passwords and hiding them forever from the user who did not remember his/her PM’s master password. On the upside, a PM might facilitate website password entry not detectable by a keylogger. PMs also appeared to contribute greatly to the objective of choosing and using passwords whose characters were random enough to be truly unguessable and long enough to be uncrackable by current technology. PMs were apparently able to detect some malware. The PM also finally offered an opportunity for real MFA, so that even a keylogger might not give the intruder what s/he would need to discover the user’s website passwords.
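One reason a PM produces stronger passwords than a human is that it draws characters from a cryptographically secure random source rather than from memory or habit. A rough approximation of such a generator, assuming Python’s standard secrets module (the length and character set here are illustrative choices, not any particular PM’s defaults):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a password the way a password manager typically does:
    characters drawn from a CSPRNG over a large alphabet."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different on every run
```

Because the output is random and unique per site, the user need not (and could not) memorize it; that is precisely the tradeoff that makes the PM’s master password so critical.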
Especially for online use, there were additional possible security measures that would tend to require learning and time investment beyond the interest and ability of most Windows users. While Windows 10 seemed to be developing a reputation as a relatively secure operating system (OS), especially taking into account its many features, other OSs offered more secure computing. Against an intruder who did not have the benefit of a keylogger or other additional information, VeraCrypt’s offer of a hidden OS could afford an additional layer of security. There were also more secure browsers than Chrome and Firefox. Such less-traveled paths seemed likely to entail not only fewer features, however, but also less stability. This was also the case for numerous possible hacks intended to improve security in Chrome and Firefox, among others.
Those are the principal subjects explored in the following materials. Despite its length, this post could barely hope to introduce and raise awareness of various Windows 10 security risks and possibilities. Only full-time study and exploration could keep a user reasonably well apprised of all possible dimensions of computing security. For the vast majority of users, the best security advice seemed to be to keep sensitive data away from possible intrusion whenever feasible, take a cautious attitude toward untested and potentially risky technologies, and understand one’s preferred technologies well enough to recognize when changed circumstances might result in new exposure.
Contents
Summary
Contents
Preliminaries
What to Protect Against
Passwords
Physical Security
BIOS Setup
Drive Encryption
System Encryption Options
Cold Boot and DMA Attacks
Evil Maid Attacks
Authentication Methods in VeraCrypt
Reducing Data Exposure
Data on Drive D
Data on Drive C
Data in RAM
Other System Security Measures
Windows and Program Updates
Disable Microphone and Webcam
Backup
Accounts
Administrator vs. Standard User Accounts
Local vs. Microsoft Accounts
Sign-In Options
Bypassing the Lock Screen
MFA for Local Accounts
MFA for Microsoft Accounts
Password Manager
Commercial PM vs. DIY
Vulnerabilities: LastPass vs. KeePass
Another Possibility: Triage
Setting Up LastPass
LastPass MFA
Other Safe Browsing & Email Precautions
Browsing and Email Attacks
Firewall
Secure Operating Systems
Secure Browsing
Hiding IP Address
DNS Server
Security Email and Phone
Other Security Software
Security Suites
Antivirus vs. Internet/Total Security Programs
Anti-Malware
Preliminaries
This post joins other long treatments covering Windows security. For instance, TechRadar (Carey & Turner, 2019) recommended a handful of online cybersecurity courses, from sources as varied as Udemy and the U.S. Department of Homeland Security. There were also less formal but perhaps comparably comprehensive writeups from many sources (e.g., Hackernoon, 2016; SecurityInABox, 2016; CyberYozh, 2017). Long as this post may be, it is nothing compared to some of these more comprehensive sources. As such, this writeup is likely to be of most value for people who find themselves in circumstances somewhat like mine. In all events, and especially where our needs diverge, users would be best advised to check alternate sources of guidance. Compared to those longer sources, my interest in getting some actual work done required me to try to distinguish essential security from possible overkill. I cannot be certain that I made the right decision in every instance.
In this post, I refer to the Windows key (sometimes called WinKey) as simply Win. So, for example, Win-I means hold the Windows key and hit the I key, and Win-R means WinKey-R. The Windows key is the key with the Windows logo on it, typically located near the lower left (and, at least on desktop keyboards, also near the lower right) corner of the keyboard.
Commands in this post are shown in italics. It is possible to run commands via Win-R. But if a command fails, it may be because it is not being run in an elevated command window. (Easy way to open an elevated command window: Win-R > cmd > Ctrl-Shift-Enter.) If you can’t figure out a certain character in a command (e.g., is that a one or an L; is it a zero or an oh; is there a space between those two characters), copy and paste it into Notepad or directly into the command box, and view it there.
This post contains remarks accumulated during several Windows 10 installations, though at times it specifies just one (e.g., installing on my Acer Aspire 5 A515-51-563W laptop). Some of those remarks may apply only to certain configurations. For instance, I tended to use UEFI rather than legacy BIOS tools, but there may still be a few BIOS-oriented remarks in the text. (That is a separate matter from the fact that, like many others, from old habit I often use “BIOS” to refer to UEFI.)
The text offers many examples of possible scenarios. It is unlikely that all examples apply to any single user. Some such examples may even seem paranoid. Over the course of my reading, however, I concluded that users probably should be aware that such possibilities exist.
At various points, this writeup mentions pieces of third-party software. The companion post lists a number of programs that might be useful from the outset. As discussed there, some might be installed on drive C; others might be run from a USB drive. This post does not detail the steps that one might take to improve security with various downloads. For purposes of security, some users may be interested in using a Linux computer to visit the relevant websites and download the desired software, in an environment less likely to harbor malware. For example, I obtained good results by booting a computer with a Peppermint Linux live USB drive and using Peppermint’s Chromium browser to download software onto an otherwise empty and previously wiped external hard disk drive (HDD) or solid-state drive (SSD), connected via USB cable. Note, however, that while Linux was often considered safer than Windows, various sources emphasized that there were no sure bets. SentinelOne (2019) offered a brief, readable, and interesting summary of that.
Like Windows itself, third-party software could be (and in notorious cases was) used to inject crapware and malware into users’ systems. Even legitimate software could be hijacked at times. Established software warehouses (e.g., Major Geeks, Softpedia, FileHippo) claimed (accurately, in my use of Softpedia especially) to run virus tests on all software before offering it for download. To protect downloads from possible infection, this might be a good time to obtain a VPN subscription and use it when downloading to a computer believed to be secure. Ideally, the sources of these downloads would provide hash values and explain the use of a hash tool, so that the user could verify that what s/he downloaded had not been altered.
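Verifying a hash is straightforward where a tool is available. As one sketch of what a hash tool does, the following Python computes a file’s SHA-256 digest for comparison against a publisher’s stated value (the filename in the usage comment is a hypothetical placeholder):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks so that
    large downloads do not have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage: compare against the value the publisher lists.
# if sha256_of("installer.exe") != published_value: do not run it.
```

A matching digest shows the file was not altered in transit, though it cannot show that the publisher’s own copy was clean in the first place.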
As a matter of terminology, DGR News Service (2018) said that, in effect, “privacy means the government doesn’t know who sent the message, but can read its contents,” whereas “security means they know who sent the message, but cannot read it.” While this post is focused on security, the two are closely related because, among other things, an attacker’s knowledge about a system’s configuration and contents could affect his/her ability to penetrate it.
This post often refers to bootable USB drives. The general idea is that it was possible to configure a USB thumb drive (or DVD, or SD card, or other device) so that it could provide an operating system (OS) to run the computer. In other words, a computer could be started and run by using the installed Windows 10 OS; but a computer could also be started and run by using an OS installed on a thumb drive. That OS could be Windows 10, for instance, using a Windows To Go (WTG) setup, or it could be a version of Linux. A bootable USB drive could be used by friend or foe — to inspect or tinker with the contents of an unencrypted HDD or SSD, for example; to open (with the password) an encrypted drive; to read the contents of system memory, and more — even without the ability to run the installed Windows 10 system.
A “safebox,” for purposes of this post, is a safe place in which the user could store passwords written or printed on paper, backup copies of hard drives, backup hardware tokens, and other essential security-oriented materials and media. It might literally be a “safe box”: for example, a heavy floor safe, a concealed wall safe at home or office, a safe deposit box, or some other box concealed in a wall or floor, welded into a car, buried in the woods, or submerged at the bottom of a lake. It would ideally be located in a secure, somewhat remote location, so as to remain untouched and available despite any fire, flood, theft, riot, tornado, or other disaster that would imperil one’s computer and related (advisable or inadvisable) security paraphernalia (e.g., password notebook sitting beside one’s computer; backup offline HDD sitting on a shelf in the basement). Perhaps the safebox would contain a USB drive with bootable USB tools and portable versions of the various security software needed to recreate one’s secure system. The concept of the safebox was simply that catastrophe could arrive; the most important data and/or security materials could be lost, stolen, or destroyed; and yet the user could be back in business soon after s/he found a computer — notwithstanding the occasional safe deposit box fiasco, like those described in a New York Times article (Cowley, 2019; see also StackExchange).
Generally, a focus on computer security would be intended to protect what was commonly called “sensitive” data. The exact nature of such data would vary according to the user and the situation. For example, the recoverable contents of a system’s RAM might not hold full copies of all files that a user would consider most sensitive — but it might hold a password that the user had previously entered to access those files. What a user would consider sensitive, in various countries and settings, could include usernames and passwords; business or governmental secrets; legal documents; engineering plans; evidence of software use incompatible with applicable laws or business purposes (e.g., on a company laptop); pornographic material sufficient to warrant a long prison term; financial documents and spreadsheets; web browsing history; love letters or diaries; and other items that would have monetary or political value, or could cause potentially life-changing embarrassment, or might incriminate oneself or one’s friends in illegal activities. In the case of ransomware and other backup-related situations, of course, sensitive data would include everything that the user would prefer not to see lost forever.
The focus here is not primarily on symptoms. That’s because funky computer behavior was not necessarily a good guide as to whether you had been hacked: not all hackers were after the same things. Someone who just wanted access to your bank account would presumably try not to make your system behave oddly, so as not to alert you that something might be wrong: they would just want to grab the information and go. Likewise, someone using your webcam to watch you might take pains to avoid doing anything to attract attention to their activity — such as making sure that the webcam light did not turn on while they were using it. Thus, one might be alerted by signs of being hacked, but the absence of such signs was not necessarily reassuring. According to various sources (e.g., The Spectrum, HellPC), such signs could include unexplained online activity (e.g., unfamiliar emails you’ve supposedly sent; unknown new social media friends), program crashes and sudden reboots, pop-up ads, unwanted bookmarks or browser toolbar icons, system slowdown or slow loading of webpages or videos, high Internet data usage, unexpected interruption of printer or USB devices, unexpected shutdown of antivirus software, unexpected file additions or deletions, and independent mouse cursor movement.
This post’s references to drive imaging software include several good, free tools capable of taking a more or less exact snapshot of the current Windows installation, saving it in a relatively compressed form, and restoring from that image back to drive C, so as to replace the previous Windows installation with this previously saved version. I had used Acronis True Image for this purpose for some years, but gathered that its later versions had declined in quality, and thus did not buy an update capable of working on Windows 10. Instead, for the past several years, I had been having consistently good experiences with the free version of AOMEI Backupper Standard. Oft-mentioned alternatives included Macrium Reflect, EaseUS ToDo Backup, and Paragon.
What to Protect Against
When I started this post, I believed that my ideal security solution would be to protect against pretty much every threat I might hear about. That turned out to be unworkable. I found that protection tended to cost money, take time, and/or reduce efficiency. The quest to find, acquire, audit, and implement the most secure software and procedures could be very onerous. And even then, after imposing a seemingly endless string of burdens on one’s ability to get anything done, there would still be vulnerabilities, some of which might never even be recognized by the user.
Especially in an enterprise context, discussions of digital security often referred to “threat modeling.” TechTarget defined threat modeling as a matter of “optimizing network security by identifying objectives and vulnerabilities, and then defining countermeasures to prevent, or mitigate the effects of, threats to the system.” Varonis recommended that
a threat modeling team should be made up of representatives from application owners, architects, administrators, and even customers. Pull all of those people into a room to ask questions, flag concerns, discuss potential resolutions, and troubleshoot issues.
Even then, according to a source quoted by Security Intelligence, “Threats are always changing …. Often — even soon after you’ve completed the process — the results are no longer valid.” Imagine oneself as a billionaire, able to hire security experts to spend months, if necessary, scrutinizing every aspect of one’s computing environment and activity, and imagine some of those experts remaining on staff, providing continuing security guidance. This would obviously be far beyond the means of the ordinary user. And yet not even this could provide certainty. One need only realize that a domestic or foreign governmental agency, or a personal adversary or corporate competitor, might be able to threaten or bribe one of those security experts, so as to insert an undetected opening or provide compromising information. In the edited words of a Reddit commenter,
I’ve seen people on this sub who say a locked down Windows 10 isn’t enough, you absolutely need Linux, which is equally ridiculous.
The problem is, at that threat model level, those same people really shouldn’t be on Reddit, have credit cards, bank accounts, or use interstate highways …. When your threat model changes from “I want to reduce the amount of personal data that goes to marketers” to “I want to evade governmental surveillance,” you’re kind of screwed, because it’s impossible to do the latter.
In this post, I did not develop a clear threat model. Doing a good job of it would have become a separate project unto itself. Rather, I took the approach of reducing risks as I became aware of them, to the extent that I could do so without significant expense and time investment. This meant, for example, that I would be willing to buy minor devices and endure minor hassles (e.g., entering a password every time I attempted some activities), but I was not going to acquire a new home, job, or computer in order to reduce security threats. It was more a matter of improving security within my existing situation.
With that as my starting point, I proceeded to look into the things that various people were telling me about computer security. In 2002, Microsoft published what it called the Ten Immutable Laws of Security. As it turned out, those supposedly immutable laws would be both muted (because going to that link now produced “the page you requested cannot be found”) and mutated (because, in 2011, Microsoft produced version 2.0 of that page — and now, in 2019, it too produced a “page can’t be found” error — but at least Microsoft’s (2011) commentary was still available in the Internet Archive). I summarize those laws as follows:
Your computer is no longer reliably under your control if you run a bad person’s program or active content on it, or if s/he has unrestricted physical access to it or can alter its operating system. System security cannot exceed the strength of the password, the trustworthiness of the administrator, the security of the decryption key, or the recency of the antivirus updates. Anonymity is never absolute, and technology is not a panacea.
Wikipedia cited alternate ways of summarizing key security issues. Examples included OECD’s set of nine principles for information systems (i.e., awareness, responsibility, response, ethics, democracy, risk assessment, security design and implementation, security management, and reassessment) and the Parkerian Hexad (i.e., confidentiality, possession, integrity, authenticity, availability, and utility). Another standard bit of lore: the CIA Triad. Although the origins of this term appear uncertain, it was apparently an acronym for data Confidentiality, Integrity, and Availability (or Authenticity, according to some). But as with any simplistic formulation (e.g., Jesus Saves), the devil was in the details.
Wikipedia identified numerous responses to security threats that might affect Windows 10 users. Within the sphere of “security by design,” one principle of interest called for “least privilege,” where a given part of the system has only the privileges needed to function. So, for instance, a user would choose to operate his/her computer using a standard (i.e., non-administrator) account, so as to reduce the possible damage that a hacker could do. Another principle, “defense in depth,” called for efforts to arrange things so that, even if one form of security failed, a hacker would face additional barriers. In practical terms, this seemed to entail securing everything that could be secured, except where other priorities (e.g., system stability, system performance, workflow) demanded a reduction in security measures.
Although I did not see it articulated, it appeared one guiding principle should be to observe the old adage, “An ounce of prevention is worth a pound of cure.” Or, as Billy Joel sang, “Get it right the first time / That’s the main thing.” Another impression that I did see articulated repeatedly: computer security efforts would rarely offer perfect solutions. Rather, security was a matter of not making it easy for an attacker — of raising the cost and/or difficulty of intrusion. The sources I reviewed seemed to agree that intensive, potentially expensive, and probably illegal attacks were rarely aimed at a single ordinary (i.e., non-celebrity, non-CEO, otherwise not especially noteworthy) user. Attackers aren’t going to focus on you if they don’t even know you exist.
Computer security seemed to be something of a moving target. From Schneier’s Data and Goliath (2015):
Even when technologies are developed inside the NSA [i.e., the U.S. government’s National Security Agency], they don’t remain exclusive for long. Today’s top-secret programs become tomorrow’s PhD theses and the next day’s hacker tools.
There were overwhelmingly many things to think about and do, for ideal computer security. The goal here was not to get everything exactly right. This post contains little to nothing on user scenarios (e.g., peer-to-peer or P2P networking or file sharing) in which I was mostly not involved. Nor was the goal to achieve all feasible outcomes immediately. It was, rather, to start to become familiar with the possibilities, to keep closing doors that did not have to be open — and thus, over time, to form habits that would reduce various risks. Precautions that seemed strange, when I first encountered them, could eventually become familiar and routine.
I had devoted quite a bit of effort to various attempts to use Linux rather than Windows. That effort was based primarily on the realization that Microsoft’s profit orientation led it to abuse its customers, year after year, and also on the belief that Linux was more secure than Windows. While I still believed that, it appeared that much of the security problem in Windows (as distinct from the problem of Microsoft’s deplorable behavior) was due to the ongoing effort to give users greater choice, functionality, and pleasure. That is, if Linux could offer the things Windows offered, it would probably have many of the same security problems. In that case — in our real-world situation — Windows attracted the bulk of the malware writers and crooks, but it also had the advantage of being watched and protected by many different users and companies.
Passwords
The password may have been the most commonly encountered aspect of computer security. Most security measures relied to some extent on passwords. This section reviews that fundamental topic.
Passwords could be cracked in multiple ways. Among several varieties of brute-force methods discussed by TechRepublic, the simplest and probably slowest involved the exhaustive key search method, in which a fast computer would go down the list, trying every possible combination of characters (e.g., aaa, aab, aac …). The brute-force approach would apparently be most appropriate with passwords consisting of random characters. For example, How Secure Is My Password? calculated that a password of eight random characters (e.g., d3i_7$;B) would be cracked in about two days. Gibson’s Password Haystack Calculator said that same password would be cracked in 2,000 centuries in an “online attack scenario,” in 18.6 hours in an “offline fast attack scenario,” and in 1.12 minutes in a “massive cracking array scenario,” which apparently referred to distributed computing arrangements in which large numbers of computers, networked online, would share the load (perhaps without their owners’ knowledge). There were other password strength calculators, of varying quality (see Dropbox). Note the advice against entering your actual password into such calculators, as it could then conceivably be added to a dictionary (below) that might eventually be used to crack your password, with or without the knowledge of the sites offering such calculators.
Obviously, trying many different passwords to crack someone’s bank account would quickly trigger an account lockout — except that, in practice, that wasn’t how it worked. A search led to explanations saying, in effect, that hackers would run their brute-force attacks, not against bank websites themselves, but against data obtained from bank websites. According to one Quora answer (May, 2016), “[T]he dominant password cracking (guessing) paradigm of our era is where an online service’s user database is stolen … [and then] attacks can be brought to bear on the entire database, trying billions of passwords per second.” Graham Cluley (2015) said, “Since this is all done on a machine that is not subject to an incorrect password lockout threshold, the tool can run as long as necessary to churn through all the possibilities until the passwords are revealed. The hackers never need to type the password into the website. Once the passwords are revealed, they can be sold on the criminal market” (i.e., the Dark Web; see Tor network, below). The purpose of a long password was thus to put one’s account among those that would take too long to figure out: the hackers would sell those that could be cracked within a reasonable amount of time.
Calculations of how long it might take to crack a password were subject to adjustment. For one thing, if it would take a year to crack all passwords of a certain length and complexity using some specified hardware, it would take only half a year to crack half of the passwords, and some would be cracked within the first week. Also, if passwords were being changed frequently — with, perhaps, the aid of an automatic password change feature — the password would only need to be strong enough to defeat brute-forcing until the next password change. The exception: a new password that happened to fall earlier in the attacker’s search order than the old one, in which case the change would actually speed up the crack.
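The arithmetic behind such estimates is simple enough to sketch. The following minimal Python example divides keyspace size by guess rate; the 95-character printable-ASCII alphabet and the 100-trillion-guesses-per-second rate are illustrative assumptions (the rate appears roughly consistent with Gibson’s “massive cracking array scenario”), not measurements of any real cracking rig:

```python
# Worst-case brute-force time: keyspace size divided by guess rate.
# Both the guess rate and the 95-character printable-ASCII alphabet
# are illustrative assumptions, not measurements.

def seconds_to_search(length, alphabet_size, guesses_per_second):
    """Time to exhaust every password of the given length."""
    keyspace = alphabet_size ** length
    return keyspace / guesses_per_second

MASSIVE_ARRAY = 1e14  # assumed: one hundred trillion guesses per second

# 8 random printable-ASCII characters: roughly a minute at this rate
print(seconds_to_search(8, 95, MASSIVE_ARRAY) / 60, "minutes")

# 12 random characters: the same attack needs well over a century
print(seconds_to_search(12, 95, MASSIVE_ARRAY) / (3600 * 24 * 365), "years")
```

As noted above, these are worst-case figures: on average an attacker finds a given password after searching about half the keyspace, so each result can roughly be halved.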
The advice on passwords was evolving. How-To Geek (Hoffman, 2018) said the traditional view was that a password should be long (i.e., at least 12 characters, rather than the eight characters that the experts used to recommend; more recently, RootSecDev (2020) was recommending 16 characters); should include at least one each of numbers, symbols, and upper- and lower-case letters; should not be a combination of dictionary words; and should not rely on common substitutions (e.g., “H0U$3” in place of “HOUSE”), because those were already listed in password-cracking dictionaries, or could be readily generated by password-producing software.
The advice on password length was still good: a long password would require more time than even the fastest password cracker could afford to spend. How Secure Is My Password? calculated that a password of 20 random characters (i.e., the length recommended by an increasing number of sources, e.g., VeraCrypt) would require 32 septillion years to crack by brute force. Even so, CyberYozh said that a good password should actually be at least 50 characters (plus a keyfile, below). Regardless, the user’s plans might take account of the claim that, within about five years, “Quantum computers will be able to instantly break the encryption of sensitive data protected by today’s strongest security” (ZDNet, 2018). The question would then be, how long until hackers are able to buy or build quantum computers, or to bribe or steal access to them? It wasn’t hard to imagine someone in Russia or China getting access to that sort of thing to crack passwords, long before you and I have access to such machines for purposes of obtaining quantum-proof security.
Even without quantum computing, a long password would still be vulnerable if it didn’t consist of random characters. That’s because there was more than one type of brute-force method. A word list (a/k/a dictionary) attack was based on the observation that enormous numbers of users employed the same, predictable passwords (e.g., password, 123456, qwerty). (Instead of trying to guess one user’s password, a reverse dictionary attack would take a common password (e.g., qwerty) and try it on multiple user accounts.) Such word lists could be easily supplemented with spelling variations taking account of common character substitutions (e.g., the “H0U$3” example), as well as what Alpine Security called “rule-based” permutations (e.g., try adding certain numbers to the end of each word in the list). Such permutations might, again, be based on behavioral observations. For example, from a sample of ten million passwords, WPEngine determined that, among passwords ending with any number, 24% ended with 1. Of course, as Moore (2017) noted, within a dictionary of ~170,000 words, a password consisting of a simple combination of three words (e.g., catbrowntall) would be quickly cracked: it was relatively long, in terms of characters, but the attack would only have to try three-word combinations of the dictionary’s entries.
The general point was that password cracking with that so-called “dictionary” might not proceed by going through all the character possibilities until finally hitting “password” or “qwerty1”; those items could be tested almost instantly, if they were in the list or were included in a related rule. Certain techniques (e.g., rainbow tables; see CrackStation, 2018; StackExchange, 2018) could make a brute force (including dictionary) attack even faster. Multiple sites (e.g., Infosec 1 & 2, SpyAdvice) listed popular, free tools available to assist in such cracking efforts. Various sites (e.g., Ars Technica, CyberArms) reported on experts’ remarkable success in cracking large numbers of passwords very rapidly. This was potentially big money. A high-level criminal could justify a considerable investment in experts and equipment to do it well.
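To make the rule-based idea concrete, here is a hypothetical sketch of candidate generation. The word list, substitution map, and suffixes are tiny made-up examples; the real tools mentioned above apply thousands of such rules to lists of millions of leaked passwords:

```python
# Illustrative rule-based candidate generator for a dictionary attack.
# The substitution map and suffix list are small examples of the kinds
# of rules real cracking tools apply at much larger scale.
LEET = str.maketrans({"a": "4", "e": "3", "o": "0", "s": "$"})

def candidates(words):
    for word in words:
        for base in (word, word.capitalize(), word.translate(LEET)):
            yield base                        # the variant by itself
            for suffix in ("1", "123", "!"):  # common appended endings
                yield base + suffix

# "house" immediately yields h0u$3, House1, and similar "clever" variants
for guess in candidates(["house"]):
    print(guess)
```

The point of the sketch is that each “clever” variant costs the attacker essentially nothing: the substituted and suffixed forms are generated and tested as quickly as the plain dictionary word.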
In response to various realities of brute-force cracking, the U.S. National Institute of Standards and Technology (NIST, 2018) issued new guidelines. As summarized by ForcePoint, changes included the following:
- Passwords should not expire without good reason, because when they do, users tend to recycle previous passwords from other accounts. “Good reason” for password expiration could include suspicion or evidence of compromise or intrusion, including a report from Have I Been Pwned? In addition to reports of security breaches, LastPass suggested several other instances when a password should be changed, including evidence of unauthorized access to your account, or of malware or other compromise of your device; login at a public or otherwise unsecured location (e.g., library, restaurant, hotel); prior shared access to the site with someone who no longer needs access; and passage of a year without changing the password.
- In an era in which people overshared on social media, and in which the facts of their life were often easily researched, ForcePoint said NIST recommended against requiring users to answer knowledge-based questions (e.g., “What was the make of your first automobile?”). Indeed, in the worst case, such questions could actually impair security, if used to provide a backdoor for a hacker, pretending to be the user who had lost his/her password. InfoWorld advised users to give false answers to password reset questions when setting up, for instance, a bank account. Of course, the user would need to remember which false answers s/he had given. Possibly a solution there would be to use true answers for someone else (e.g., a friend’s first automobile), or for a somewhat different question (e.g., the first automobile ever built). Again, though, the user would have to remember what strategy s/he was using.
- Longer passphrases using words (e.g., a memorized quote from a song or story), up to 64 characters, could be more secure and more easily remembered than the traditional mixed code. Among other things, as passwords became longer and more random, it was more likely that users would have to write them on a list kept nearby, or on Post-It notes stuck onto their computer monitors, thus eliminating security against intruders who were able to gain physical access. For some purposes, the user would need passwords s/he could remember. Where codes were required, one response would be to use first letters from a passphrase, perhaps with some modification (e.g., PCMag, HTG). Many websites offered various tips and tricks for developing secure passwords. In addition, many sites (e.g., LastPass, Norton, Avast) offered password-generation tools; some (e.g., 1Password) offered to generate relatively random and hard-to-guess passphrases. PCMag provided advice on how not to reduce the number of possibilities offered by such generators. In a previous post, I offered further discussion of passphrases and keyfiles.
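As a concrete (and purely illustrative) example of the first-letters approach, the following sketch turns a memorized phrase into a code. The phrase and the substitution rules are arbitrary choices for demonstration, not recommendations:

```python
# Illustrative passphrase-to-code scheme: take the first letter of each
# word in a memorized phrase, then apply a couple of arbitrary
# character substitutions. Phrase and rules are examples only.
SUBS = {"o": "0", "i": "1"}

def first_letter_code(phrase):
    letters = [word[0] for word in phrase.lower().split()]
    return "".join(SUBS.get(ch, ch) for ch in letters)

print(first_letter_code("An ounce of prevention is worth a pound of cure"))
```

The result looks random to an attacker who does not know the source phrase, but remains reconstructible by a user who does; its real strength, of course, depends on the phrase not being a famous quote already present in cracking dictionaries.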
There were many forms of, and alternatives to, brute-force methods of cracking passwords. As summarized by Alphr, possibilities included phishing (i.e., getting the targeted user to enter his/her login credentials on a fake webpage), social engineering (e.g., use a phone call or visit to pose as information technology (IT) security personnel, or impersonate an individual user, in order to get or reset the desired password), shoulder surfing (i.e., watching people enter their passwords), and spidering (i.e., becoming familiar with a target company’s literature, competitors, and customers to build a customized word list for a brute-force attack). Similarly, CyberYozh said that a “mask” attack on an individual user would begin with the collection of personal facts (e.g., mother’s maiden name, pet’s name, home address) and would feed those facts into software that would generate millions of possible passwords (e.g., Smith12345) from that data — again, to design a customized word list for a dictionary attack.
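A hypothetical sketch of the “mask” idea: combine scraped personal facts with common endings to build a targeted word list for a dictionary attack. The facts and suffixes here are invented:

```python
# Illustrative "mask" attack word-list builder: combine personal facts
# (all invented here) with common suffixes such as digits or a year.
from itertools import product

facts = ["smith", "rex", "elm"]        # surname, pet's name, street (made up)
suffixes = ["", "1", "12345", "1990"]  # common endings, e.g., a birth year

def mask_candidates(facts, suffixes):
    for fact, suffix in product(facts, suffixes):
        yield fact + suffix
        yield fact.capitalize() + suffix

targeted_list = list(mask_candidates(facts, suffixes))
print(len(targeted_list), "candidates, including", targeted_list[5])
```

Even this toy version generates “Smith12345” among its first few candidates; real tools feed many more facts through many more rules to produce millions of guesses tailored to one person.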
Since few people would be able to remember large numbers of long, relatively random passwords that may have been updated repeatedly for various reasons, one common solution was to use password manager (PM) software (below). Someone pointed out that a PM — especially one secured with a substandard master password — would not necessarily be superior to a simple notebook in which the user recorded his/her passwords: a robber might be the only intruder capable of gaining access to the notebook. And even the notebook could be made functionally invisible. For instance, at least for purposes of remembering a few master passwords, it might be feasible to use a pattern of characters from identifiable pages in a book nearby. A simple example would be to use page 24, go 2 lines down and 4 characters over on that page, choose the character there, and repeat on later lines (e.g., two more lines down) or pages (e.g., doubled page numbers, in this case 48, 96, 192 …). If the logic or sequence of characters so selected was not obvious, it might even be safe to mark them. Some writers said they kept their master passwords written down and stored in lockboxes, or among other papers, or even in their wallets (hopefully without an explanatory note like, “This is my master password!”). For example, one commenter said his company saved a backup of its password list on a USB drive in the CEO’s safe deposit box.
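The book-based scheme just described can be made concrete. In this hypothetical sketch, the “book” is a small dict mapping page numbers to lists of lines, positions are counted from zero, and the filler text is invented:

```python
# Hypothetical book-cipher-style password recall: start at page 24, take
# the character 2 lines down and 4 characters over, then repeat on
# doubled page numbers (48, 96, ...). The "book" is invented filler.

def book_password(book, start_page, down, over, picks):
    chars, page = [], start_page
    for _ in range(picks):
        line = book[page][down]   # `down` lines down on the page (0-based)
        chars.append(line[over])  # `over` characters over (0-based)
        page *= 2                 # doubled page numbers: 24, 48, 96, ...
    return "".join(chars)

book = {
    24: ["it was the best of times", "it was the worst", "it was the age"],
    48: ["call me ishmael", "some years ago", "never mind how long"],
    96: ["in a hole in the ground", "there lived a hobbit", "not a nasty one"],
}
print(book_password(book, 24, 2, 4, 3))
```

As the text notes, the security of such a scheme rests entirely on the selection logic staying non-obvious: anyone who learns the rule can reproduce the password from the same book.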
Having said that, one might want to avoid a situation occurring while this post was in progress: Gerald Cotten, age 30, the founder and CEO of QuadrigaCX (Canada’s biggest cryptocurrency exchange) died suddenly, leaving no known access to the encrypted laptop on which Cotten had stored $190 million of the company’s digital assets. The company went bankrupt and Cotten’s wife was left on the receiving end of enormous customer anger. In other words, don’t die holding passwords that others will need, leaving no way of recovering them. In a similar vein, one user reported losing ten years’ work due to a freak power outage wiping out the long, randomly generated passwords that s/he had stored solely on his/her computer.
This discussion of password problems appears early in this post because passwords are often mentioned, in many aspects of computer security. The remainder of this post focuses especially on protections against various ways of obtaining access to a user’s system, including especially those that involve the use of malware to detect passwords and other login credentials.
Physical Security
As we shall see, even a single instance of fairly brief physical access to a computer — especially but not only a computer where the user is already logged in — could be enough for a knowledgeable intruder to gain control over the machine and/or to set it up so that s/he could access its data. The first step in physical security was to eliminate or at least reduce and complicate a would-be intruder’s options in this regard.
Needless to say, computer security included efforts to ensure that people would not be able to steal the physical machine or relevant parts (e.g., disk drives). General theft prevention and deterrence measures were a first line of defense. In this category, at home or office, MakeUseOf (2014) and other sources pointed toward desktop and laptop cable locks, heavy steel enclosures, security cameras including the computer’s own webcam, and theft alarms. Other things to consider included locking doors and windows, storing devices in a safe, installing a home security system or at least motion-detecting lights, and using smart lights or lights on a timer to turn on when you’re away.
There were also specific suggestions to prevent and discourage theft of laptops and other devices taken out into the world. Various sources recommended:
- engraving them, or using a Sharpie or tamper-resistant tag (e.g., STOP tags), with your name and phone number;
- installing a laptop alarm;
- always being mindful of your situation;
- choosing secure tables or other workspaces (e.g., against a wall, in a corner, facing a security camera);
- using a nondescript (but TSA-approved) bag or briefcase rather than an obvious laptop carrying case;
- avoiding unnecessary display (e.g., not using devices on the street unless necessary; keeping them out of sight when not in use);
- not leaving them unattended, even in a locked vehicle’s trunk or an airplane’s overhead storage bin;
- not packing them in checked luggage unless required to do so;
- double-checking where you’ve sat and any other place where you might have accidentally left them behind;
- being especially attentive to where your device is, and which airport and TSA employees are near it, while you go through airport security;
- keeping up with your items as they go through the airport scanner;
- asking fellow travelers to mutually keep an eye on each other’s items;
- immediately reporting any missing items;
- making sure hotel room doors are locked; and
- using hotel safes to secure devices whenever you are not in the room.
Other suggestions focused on preparing for the possibility of theft or loss of the device, reducing its potential impact, and increasing the likelihood of device recovery. These included:
- keeping data backed up;
- recording and storing serial numbers;
- registering the laptop and its software with their manufacturers, and notifying them in case of theft;
- insuring the device (via the merchant, manufacturer, or cellular service, or perhaps by adding a device-specific rider to homeowner’s or renter’s insurance policies);
- recording its MAC address;
- activating its operating system’s built-in device-finding service (Win-I > Update & security > Find my device) and then tracking it, if stolen, using your Microsoft account on another computer, perhaps via Windows To Go or other bootable USB (below); and
- putting a disposable or secondary email address and/or phone number (e.g., Google Voice, below) on a tag or sticker or on a startup screen, so that a finder can contact you (framework present in Win10RegEdit.reg; personal contact info still needed there).
(Note the desirability of using USB devices that are unlikely to have fallen into the hands of a potentially malicious actor, so as to avoid a BadUSB-style attack that, seven years after its discovery (Ars Technica, 2014), remained virtually impossible to detect (Comparitech, 2021).)
Several sources (i.e., Lifewire, Gecko & Fly, WindowsChimp) identified a number of services to help laptop owners recover their lost or stolen laptops. Windows-capable solutions included Prey, Absolute (which confusingly provided seemingly distinct websites for its Home and Office device retrieval service and its LoJack theft recovery solutions, though ultimately they seemed to lead to the same place), LockItTight, LaptopLock, FrontDoor, and Pombo, with offerings ranging from free to enterprise-level; and antivirus packages (typically paid rather than free) with anti-theft tools (e.g., Bitdefender, ESET), and Exo5 (enterprise only; see also ConnectWise/LabTech, below). (There seemed to be more services of this nature for Android devices. Note also that some antivirus companies (e.g., McAfee, Norton) offered identity theft protection.) There would be a question of whether the odds of theft, and the cost of replacing the laptop (or the importance of recovering it before its data could be accessed), warranted the price of such tools. Lifewire (2019) explained that these tools had to be set up before the laptop was stolen or lost; that the missing laptop had to connect to the Internet in order to be detected; and that recovery applications embedded in a passworded BIOS would betray some thieves. One might also have to ask whether local police would bother to assist in its recovery (e.g., Reddit: “We had a NAME and couldn’t get the police to care. The owner of the laptop went to the person’s house and demanded the laptop back with success”). When needed for laptop recovery, at least, it seemed these programs typically collected location and usage (e.g., Wi-Fi or other network details) information and used the webcam to photograph the thief.
Lifewire liked LoJack for its ability to delete the laptop’s data remotely, and Prey’s tracking of laptop location only when needed, thus not constantly reporting the user’s location. WindowsChimp (2017) liked Exo5, LoJack, and Prey, in that order. LoJack offered standard and premium plans for laptops ($40-60/year). Prey’s personal plan ($60/year) did, but its free plan did not, offer remote data wipe and file retrieval. My search did not seem to confirm a fear that LoJack’s remote wipe would fail to function on UEFI devices. Both LoJack and Prey tracked up to three devices, suggesting that any user could consider installing at least the free version on phone and laptop, so as to use one to track the other. Lifewire noted that Prey was visible in Win-R > control > Programs and Features and, as such, could be easily uninstalled by a thief who recognized its name, assuming the thief was able to steal the machine while it was running, or could get past relevant protections (e.g., drive C encryption). Possibly the user could work around that by running Prey in a virtual machine on a separate virtual desktop.
A document purporting to be the Absolute User Guide, seemingly written for enterprise rather than individual users, containing no apparent references to LoJack and only two to CompuTrace, provided what may have been more relevant insight into the remote wipe (actually called Data Delete) feature than I was able to find on Absolute’s website. Among other things, it advised users (p. 215) to prepare a Data Delete Policy in advance, for best results in deleting user files and operating system; it seemed to indicate that the deletion process could be performed in a “stealthy” manner (possibly meaning it would not require reboots that might alert the thief that something was happening); and it appeared to provide an option to specify the number of data overwrites (p. 216). It was not clear whether the hard drive light (if any) would flash on and off during the deletion, nor whether the user could be confident that the device would have enough battery power to complete the deletion. Possibly the ideal policy, if possible, would be to execute a one-write pass, and then return (battery permitting) for an additional write, and so forth. It seemed that such instructions would be conveyed to the laptop through an “agent call,” defined as “A secure connection established between the agent [presumably meaning the LoJack agent installed on the laptop] and the Monitoring Center” (p. 310). A separate Administrator’s Guide seemed to provide some further information about the agent. In a brief look, I was not able to confirm the existence of a reported feature by which LoJack would attempt to “phone home” and would automatically lock the BIOS (or, perhaps more usefully, begin wiping the drive) if that attempt failed.
Alternatively, Lifewire said the owner could use remote access software (e.g., GoToMyPC, TeamViewer, RemotePC, LogMeIn, VNC Connect, Splashtop Business Access, pcAnywhere, SharedView) to utilize the same tools and information (e.g., webcam) to identify the thief and his/her location. Several of these would be quite expensive for an individual user (see Business.com and links in the previous sentence). Enabling remote access could entail its own security gaps. A search led to a PCMag (2018) comparison favoring GoToMyPC and TeamViewer. Tech-Vise (2018) said that another approach would be to track the IP address of the device by simply going into the settings of popular websites from another computer and checking the locations of their latest login sessions:
Dropbox is great for this because it syncs your information in the background, so it is technically constantly logging in. Gmail is great if you are tracking a thief because they will often times scroll through your mail to find private information. Facebook has a specific tab for login locations, which should make tracking your laptop that much easier from there.
TeamViewer, well-known and offering a free (and a highly rated portable) version, offered a large array of features, including remote device control and cross-platform (e.g., mobile to PC) access. Beebom (2018) listed other alternatives in response to TeamViewer’s reported complexity, potential privacy risks, and limits on its free version. It was not immediately clear which features were available in the free version; possibly all were, with ads and/or nag screens. It seemed that further exploration would be necessary to determine how helpful such software might be in the event of a missing or stolen laptop. Presumably such software would be helpful only if the finder or thief was running the Windows system on which the software was installed, which would not be the case if (1) other security precautions (e.g., VeraCrypt system drive encryption) prevented that, (2) the finder/thief preferred to use a bootable USB drive to access drive contents, or (3) the laptop was simply being wiped for resale. One plausible scenario would entail a thief who grabbed a running laptop whose screensaver had been turned off, or had not yet kicked in, and a user who promptly employed his/her phone to lock or wipe the laptop before the thief could access its data. But it seemed that there might be many other scenarios in which the necessary security factors did not align as desired.
BIOS Setup
Having addressed certain matters that can arise at multiple points, let us turn to the sequence of events in typical computer use, beginning with bootup.
The Basic Input-Output System (BIOS) was created in 1975. The Unified Extensible Firmware Interface (UEFI), a replacement for BIOS, began as an Intel contribution to the nonprofit Unified EFI Forum in 2005. MakeUseOf (2013) said that UEFI was considered more secure than BIOS especially because of its Secure Boot feature. According to How-To Geek (2017),
Typical PCs will normally find and boot the Windows boot loader, which goes on to boot the full Windows operating system. …
However, it’s possible for malware, such as a rootkit, to replace your boot loader. The rootkit could load your normal operating system with no indication anything was wrong, staying completely invisible and undetectable on your system. The BIOS doesn’t know the difference between malware and a trusted boot loader–it just boots whatever it finds.
Secure Boot is designed to stop this. Windows 8 and 10 PCs ship with Microsoft’s certificate stored in UEFI. UEFI will check the boot loader before launching it and ensure it’s signed by Microsoft. If a rootkit or another piece of malware does replace your boot loader or tamper with it, UEFI won’t allow it to boot.
On my system, recommended steps started with configuring UEFI (which, as noted above, is sometimes still referred to as the “BIOS” setup utility, in this post and elsewhere) to make sure the machine was booting in UEFI mode, to turn on Secure Boot, to specify the boot order (i.e., the order in which HDDs, SSDs, and/or USB drives would be given a chance to boot the system) (see How-To Geek, 2018), and then to protect those settings by requiring a password to change them. There were several ways of getting into BIOS at bootup, depending on the specific machine and installation:
- Hit (or keep hitting, or just hold down) a certain key at bootup. F2 was the key used most often for this purpose, but other possibilities included Esc, Del, F1, F10, or perhaps F8 or F12. (See the user’s manual for the computer or its motherboard.)
- Boot the computer with a Windows 10 installation USB drive or DVD > Repair.
- Suggested as an alternative by some, but not my favorite, for fear that Windows crashes could mean corruption: repeatedly hold the power button to shut Windows down while it was booting, as soon as the Windows logo became visible. After several repetitions, Windows would open the boot options menu.
There were also several ways to get into BIOS from within a running Windows 10 installation:
- Shift-Restart, available wherever a power or shutdown button offers a Restart option: hold Shift while clicking Restart. Examples: Start > Power > Shift-Restart; or click the power button on the lock (i.e., login) screen > Shift-Restart; or Start > Shutdown > Shift-Restart in a system running Classic Shell.
- Run shutdown /r /fw (restarts directly into the firmware setup utility; requires an administrator command prompt) or shutdown /r /o (restarts to the advanced startup options, from which UEFI Firmware Settings can be selected).
- Long, menu-driven version: Win-I > Update & security > Recovery > Advanced Startup section > Restart now > Choose an Option screen > Troubleshoot > Advanced options > UEFI Firmware Settings > Restart.
Once in the BIOS, it was generally advisable to page through the various menu picks, to become acquainted with the kinds of things that the user could set there. The UEFI, Secure Boot, and password options were most often found within menu picks labeled Boot, Security, or Authentication (AppGeeker, 2019), or possibly in Troubleshoot > Advanced Options. Note also the option to Reset to factory defaults. Turning on Secure Boot usually meant turning on UEFI, and vice versa. UEFI required a GPT partition style on the drive being booted; typically, in Windows, that would be drive C. (Choosing MBR instead of GPT would be an option on drives up to 2TB, and might enable VeraCrypt, below, to encrypt an entire drive rather than select partitions, but it would do so at the expense of losing Secure Boot.)
An attacker could find it useful to change some of the BIOS settings just mentioned (e.g., Secure Boot). To do so, s/he would have to get past the BIOS password. There were various methods for doing this. For some of those methods, the attacker might find it helpful to do some research in advance. There were several ways of acquiring relevant information, including the BIOS manufacturer and the location of the CMOS reset pins:
- View Power-On Self-Test (POST) information displayed onscreen at bootup, perhaps with the aid of the Pause key to prevent it from disappearing too quickly. (So the user might want to turn off the POST display in BIOS, though it could be helpful for non-security purposes.)
- In this or any other computer of this model, with Windows 10 running, use systeminfo. Or use msinfo32 > System Summary in left pane > BIOS Version/Date in right pane.
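To illustrate the systeminfo approach, here is a minimal sketch that pulls the BIOS line out of captured output. The sample text below is invented (real use would capture the live output of `systeminfo` on a Windows machine, e.g., via `subprocess`); only the `BIOS Version:` field name matches the actual report.

```python
import re

# Invented sample of `systeminfo` output; field alignment varies by system.
SAMPLE = """Host Name:       DESKTOP-1
OS Name:         Microsoft Windows 10 Pro
BIOS Version:    American Megatrends Inc. P2.10, 12/05/2018
System Model:    Aspire E5-575"""

def bios_version(systeminfo_output):
    """Return the 'BIOS Version' value from systeminfo output, or None."""
    m = re.search(r"^BIOS Version:\s*(.+)$", systeminfo_output, re.MULTILINE)
    return m.group(1).strip() if m else None

print(bios_version(SAMPLE))  # → American Megatrends Inc. P2.10, 12/05/2018
```

This is the same information an attacker (or a defender auditing exposure) could read from the POST screen or msinfo32.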
- Research the computer or motherboard model online, to find who manufactured the BIOS (e.g., Phoenix, AMI), the type of motherboard, and the motherboard’s layout. Such information might be found in a user’s manual or elsewhere. For instance, my brief searches found a schematic online for my laptop’s motherboard, labeled “Compal Secret Data,” Compal being the Taiwanese manufacturer. Apparently someone had copied and published some of Compal’s corporate secrets. Perhaps a search on the Dark Web would have produced photos or other detailed information, so that an intruder would know where to find the CMOS, and how to reset it, before s/he opened the computer’s cover. An experienced attacker, familiar with various types of hardware, might be able to conduct such research quickly (e.g., after walking past the user’s computer in a restaurant or public library while carrying a smartphone with the video camera running, or wearing video-capturing smartglasses).
With or without such information, various sources (e.g., Gecko & Fly, 2019) described several methods of bypassing the BIOS password. First, software methods:
- Backdoor password: enter an incorrect password three times. This would supposedly produce a System Disabled message with a number below it. On my Acer laptop, this procedure yielded a BIOS menu offering me a choice: Enter Unlock Password or System Shut Down. The former produced a dialog asking for the Unlock Password, and giving me a key number. Entering that number at BIOS-PW.org produced a recommended code that did not work, along with a list of manufacturers, none of which were Acer. The key number was too long for the field in BiOSBug. Alternately, an intruder could ask around, hoping someone would share the Unlock Password. Although moderators in some forums did remove posts regarding password bypass efforts, my brief searching found that participants in at least one HP forum seemed to be providing translations (i.e., you give them the key from the laptop’s screen; they give you a working Unlock Password). There also appeared to be utilities (e.g., UnlockHD.exe, HDD_PW.exe) that might do the same — or might just install malware. Again, there was no telling what might be available on the Dark Web. A successful attacker might have to invest some time to learn the ropes.
- Run software (e.g., PC CMOS Cleaner; a utility provided with the computer, e.g., Acer Recovery Management; Hiren’s Boot CD; CmosPwd — but its webpage said that it might brick a laptop) to reset the BIOS to its default state or to reveal the password. Or (according to Wondershare), run a series of commands: debug > o 70 2E > o 71 FF > quit. These options would presumably not be available if Secure Boot was enabled and if Windows itself was locked: there would be nowhere to run such software or to enter such commands.
- Overload the keyboard buffer by hitting a key (e.g., Esc) at least 100 times. This was apparently for older computers. On my Acer, this had no effect on either the first or second password dialogs.
There were also hardware-based methods of bypassing the BIOS password, and the Secure Boot and other BIOS settings that such a password would protect. Some of these methods would ordinarily be easier in an unlocked desktop computer case than in a laptop:
- Remove the computer’s SSD or HDD and view its contents on another computer.
- Reset the BIOS by removing or unplugging the coinlike CMOS battery (perhaps encased in black plastic, in a laptop) and waiting at least five to ten (some said 20) minutes before replacing it and restarting the computer. (Disconnect power, remove the laptop’s battery if possible, and hit the power button before going inside the computer’s case.)
- Reset the BIOS via the CMOS jumper. For a three-pin setup, it seemed the jumper should be moved for a few seconds from pins 1 and 2 to pins 2 and 3, or vice versa, and then back to its original location. For a two-pin jumper, apparently it was sufficient just to remove the jumper (if any) or to short across the two pins (e.g., touch a coin or screwdriver to both simultaneously). In any case, the advice seemed to be to wait 30 seconds before restoring the jumper and trying to boot. Lifewire (2018) said most desktops would, but most laptops would not, have such a jumper. But there appeared to be numerous hardware-oriented videos and webpages on how to reset the BIOS on a laptop, including some that were fairly close to the specific model of my own laptop. Sometimes, at least, the jumper would be located near the CMOS battery. On desktop and perhaps some laptop motherboards, apparently the jumper tended to be blue and/or to have a label like BIOS Config, Clear CMOS, Clear, CLR, JCMOS1, PWD, PSWD, or PASSWORD. From an intruder’s perspective, the best situation would be to know in advance that the jumper existed, and where it was located. In that case, the intruder could remove the cover from the computer, reset the CMOS, and restore the cover — within a matter of a minute or two, for a laptop, and even faster in a desktop. In the worst case (from the intruder’s perspective), s/he would have to open the cover, photograph the motherboard in detail, close the cover, research the CMOS issue, and then complete the reset on a later visit.
The user would probably realize that the BIOS password had been reset, the next time s/he tried to edit the BIOS and was not stopped by a demand for the password — but how long would it be before the user tried to edit the BIOS? Until then, the intruder could return to the laptop at any time to install malware or copy data, if that couldn’t all be completed during a single intrusion. The user who wished to be notified immediately of a BIOS reset would presumably want to set the BIOS to impose a password, not only for entry into the BIOS setup utility, but also for system bootup. That way, the user would know immediately that the BIOS had been reset, if s/he was not asked for the password at startup. Of course, after an intruder finished his/her work, s/he could reimpose the startup password demand. S/he, presumably not knowing the original password, could not tell the system to ask for it specifically. It was not clear whether s/he would be able to program the BIOS to accept any password. If so, the user might want to form the habit of entering an incorrect password first. If that worked, s/he would know the BIOS had been compromised, without giving away any correct passwords (for the BIOS itself or for the next steps, involving VeraCrypt or the Windows 10 lock screen) to a BIOS-based keylogger (below) that the attacker might have installed. In any event, as How-To Geek (2017) warned, it was important not to forget these BIOS passwords, because it could be difficult to regain use of the computer without them.
The purpose of an intruder’s efforts, with respect to the BIOS, would probably be to install a keylogger and/or to disable Secure Boot, so that the system could be booted with a USB drive, containing tools and an operating system that the attacker could use to alter and/or extract information from the system. These measures would not be necessary, of course, if the system did not impose any barriers — if, that is, an intruder could simply turn on the machine and go directly into Windows, without having to enter a password or otherwise gain permission.
Ordinarily, users could not alter BIOS from within a running Windows system; BIOS settings had to be changed at bootup. But LoJack (previously called CompuTrace, below) did develop a way to alter the BIOS from within Windows. This had two consequences:
- A thief, wishing to neutralize LoJack so that s/he could make off with the computer, might want to flash (i.e., update, or reinstall an update to) the BIOS. Depending on the computer, that might require a bootable USB drive, or it might require the user to run a program from within Windows. Thus, to neutralize LoJack, a thief might need to turn off Secure Boot and/or to boot into Windows — both of which would be difficult, if security measures just discussed were in place.
- An intruder, seeking access to system files by installing malware (e.g., a keylogger) in the system’s BIOS might be able to do so from within Windows (StackExchange). In fact, ESET (2018) discovered that the Russian Sednit group had hacked LoJack’s technology into malware called LoJax, which was then used to infect BIOS/UEFI:
UEFI rootkits are widely viewed as extremely dangerous tools for implementing cyberattacks, as they are hard to detect and able to survive security measures such as operating system reinstallation and even a hard disk replacement [but not a BIOS flash]. …
Computrace attracted attention from the security community, mostly because of its unusual persistence method. Since this software’s intent is to protect a system from theft, it is important that it resists OS re-installation or hard drive replacement. Thus, it is implemented as a UEFI/BIOS module, able to survive such events. This solution comes pre-installed in the firmware of a large number of laptops.
ESET indicated that LoJax would at least be removable by flashing the BIOS — that is, by reinstalling or updating the software programmed into the UEFI firmware. Flashing the BIOS was not hard, but it was delicate in some regards. Among other things, apparently the system could be damaged, perhaps bricked, if power failed during the flashing process. Moreover, the common advice was that BIOS updates were like software updates — while adding desirable features, they could also create new problems. And then there was the problem that it was at least theoretically possible for a rootkit to reside, not only in the BIOS chip, but in any flashable (i.e., updateable) chips in the computer. For example, Ars Technica (Goodin, 2015) described a rootkit and keylogger based in the graphics processing unit (GPU) — that is, the sometimes very powerful processor on the computer’s graphics card — and Domburg described his exploration into hacking the firmware on an HDD.
In the case of the LoJax attack, ESET (2018) said Intel’s Boot Guard technology (included in Intel CPUs starting with the Haswell (2013) series of processors) would have resulted in the machine refusing to boot after its UEFI was compromised. Nonetheless, ESET said,
[Y]ou should make sure that you are using the latest UEFI/BIOS available for your motherboard. Also … make sure that critical systems have modern chipsets with the Platform Controller Hub (introduced with Intel Series 5 chipsets in 2008). …
In [this case] … to remove the rootkit, the SPI flash memory needs to be reflashed with a clean firmware image specific to the motherboard. … The only alternative to reflashing the UEFI/BIOS is to replace the motherboard of the compromised system.
Heimdal Security explained that rootkits could be installed in a variety of ways, and could be used to take control of a computer or to permit installation of a keylogger. TechTarget said, “One common symptom of a rootkit infection is that antimalware protection stops working.” TechTarget said other symptoms included unexplained changes in Windows settings and other general malware symptoms (e.g., unusually slow performance, high CPU usage, other unusual behavior). Trend Micro pointed out that rootkits could also target Linux systems. For those inclined toward technical solutions, Intel’s CHIPSEC tool would apparently facilitate preparation of a whitelist of executables in a good firmware image, which the user could then compare against a similar list from the installed BIOS, to see if anything unwanted had snuck in.
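The whitelist-comparison idea behind that CHIPSEC workflow can be sketched in a few lines (CHIPSEC’s actual commands and file formats differ; the module names and contents below are invented for illustration): hash every module in a known-good firmware image, then diff that list against the modules extracted from the installed BIOS.

```python
import hashlib

def module_hashes(modules):
    """Map each firmware module name to a SHA-256 hash of its contents."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in modules.items()}

def diff_firmware(good, installed):
    """Report modules that were added, removed, or changed vs. the whitelist."""
    added = set(installed) - set(good)
    removed = set(good) - set(installed)
    changed = {n for n in set(good) & set(installed) if good[n] != installed[n]}
    return added, removed, changed

# Invented example: the installed image gained one unexpected module.
good = module_hashes({"BootLoader": b"\x01\x02", "SetupUi": b"\x03"})
installed = module_hashes({"BootLoader": b"\x01\x02", "SetupUi": b"\x03",
                           "NotLoJax": b"\xde\xad"})
added, removed, changed = diff_firmware(good, installed)
print(added)  # → {'NotLoJax'}
```

Of course, the comparison is only as trustworthy as the tools used to dump the installed firmware: a sufficiently capable rootkit could lie to them.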
It seemed that rootkit exploits were rare and difficult. Oh (2018) said that, with a few very sophisticated exceptions, “The Windows rootkit era ended with the release of Windows Vista, mainly due to Windows signing requirements.” Varonis (2018) agreed that “rootkits as a method of cyberattack are in decline.” If a rootkit seemed likely, the most thorough response (as just hinted) was apparently to wipe the hard drive (so that Windows software on the drive would not reinstall the rootkit), reflash the BIOS (perhaps using the same BIOS version, not necessarily an update), and reinstall Windows with a safe installer.
There were also rootkit searching tools, recommended by various sources. These included GMER, Kaspersky TDSSKiller, Sophos Virus Removal Tool, McAfee RootkitRemover, F-Secure, and Malwarebytes Anti-Rootkit Beta. I ran a few of those on my stable Win10 Pro desktop computer. The focused Kaspersky product found no TDSS rootkits. Malwarebytes likewise reported, “No malware found!” GMER immediately produced an extensive list of what it called “Rootkit/Malware” items — but in response to an attempt to interpret such a list, the moderator in a BleepingComputer forum warned, “If you’re unsure how to use a particular Anti-rootkit (ARK) tool or interpret the log it generates, then you probably should not be using it” (see also PCWorld). What we did not yet seem to have, then, was an antivirus equivalent for the BIOS, a tool capable of doing a scan and identifying malware for the end user.
This discussion of BIOS security suggested several conclusions. One was that protecting one aspect of a computer could protect another as well. For instance, a Windows password could reduce threats to the BIOS, if it prevented an intruder from running software that would infect the BIOS, and in turn (as we shall see) protection of the BIOS could mean no installation of a BIOS keylogger that would capture the VeraCrypt password allowing access to the Windows installation. Another conclusion, of course, was that (as indicated in the Ten Immutable Laws cited above) giving an attacker physical access could greatly improve his/her opportunities to circumvent all levels of system security and gain access to the user’s data. Security was thus not merely a matter of erecting defenses against direct intrusions (e.g., using antivirus software to detect and remove viruses); it was also a matter of backstopping other security measures, so that if one failed, perhaps another would still make things difficult for the intruder.
Measures suggested in this section thus included the following: prevent physical access to the computer, or at least find a way to physically lock the case and keyboard; use relatively recent hardware with security improvements; use UEFI with Secure Boot and BIOS/boot passwords; be alert to unusual or undesirable symptoms; consider flashing the BIOS to restore factory settings; and consider investing time and money in anti-rootkit software. As with other possible measures, there would always be a question of what seemed advisable within the particular user’s circumstances.
Drive Encryption
Updating a previous post, I decided to endure the risks and hassles of encrypting drive C (i.e., the system drive, on Windows machines). The encryption password would not be a barrier when the system was up and running; but if the system was shut down, an attacker would not be able to explore the drive. Thus, access to the contents of drive C would be prevented even if the foregoing BIOS protections failed, unless a rootkit captured the drive C encryption password. This section focuses on various considerations involved in encrypting the Windows system drive.
System Encryption Options
VeraCrypt was probably a better encryption tool than Microsoft’s BitLocker. Wikipedia mentioned the widespread concern that BitLocker’s closed-source code, not available for public audit, might contain a backdoor, intended to serve law enforcement but potentially exploitable by hackers. By contrast, VeraCrypt was open-source; its code had been audited; problems had been found and largely fixed. Interpretations of those audit findings varied.
A StackExchange answer described several BitLocker configuration steps that were reportedly necessary to achieve security on a par with VeraCrypt’s default. (Note also the option, for data, of using a VeraCrypt container inside a BitLocker drive, and the VeraCrypt option of using a hidden operating system.) How-To Geek (Hoffman, 2018) reported that BitLocker might rely on potentially flawed SSD hardware encryption, unless the user made a manual change to Win-R > gpedit.msc > Computer Configuration > Administrative Templates > Windows Components > BitLocker Drive Encryption > Fixed Data Drives > Configure use of hardware-based encryption > Disabled > OK > unencrypt and then re-encrypt the drive. Of course, BitLocker was not even an option on some versions of Windows. For example, my Acer laptop ran Windows 10 Home, which did not offer BitLocker (and I did not find any reliable tweaks to make BitLocker available on the Home edition).
According to How-To Geek (HTG, 2014), starting with XP, Windows also offered a file-by-file encryption option known as Encrypted File Service or, more commonly, Encrypting File System (EFS). A later HTG article (2015) explained that EFS encrypted only individual files and folders. In Windows 10, this option was available via File Explorer > right-click > Properties > General tab > Advanced > Encrypt contents to secure data. HTG noted that discussion of Windows encryption options tended to skip over EFS because BitLocker was superior. For present purposes, individual file encryption was essentially irrelevant to the question of how to prevent a would-be attacker from installing malware on an unguarded system drive. But for those who wished to use EFS in Windows 10, TenForums provided a tutorial.
Wikipedia said that recent Windows versions also offered Device Encryption, described as “a feature-limited version of BitLocker that encrypts the whole system.” Wikipedia (and Microsoft) said, however, that Device Encryption required the device to meet the InstantGo (a/k/a Connected Standby) specifications. The advice to test for InstantGo capability was to run powercfg /a. InstantGo was available if that command returned “Standby (S0 Low Power Idle) Network Connected,” “Standby (S0 Low Power Idle),” or simply “Standby (Connected).” InstantGo was reportedly not available if that command returned “Standby (S3)” or “Standby <Connected>.” On both my desktop and my laptop, the command resulted in a list of sleep states, some of which were available, some of which were not. The latter included “Standby (S0 Low Power Idle).” It appeared that Device Encryption was not an option on these two computers. People concerned about governmental snooping might be interested in How-To Geek’s (2018) indication that Device Encryption would upload the user’s recovery key to Microsoft’s servers, where it would apparently be available to law enforcement.
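The powercfg test just described could be automated with a small parser. This is only an illustrative sketch: the sample output below is invented, and the exact wording of `powercfg /a` output varies by Windows version, so the marker strings are an assumption based on the phrases quoted above.

```python
# Phrases that, per the sources above, indicate InstantGo/Connected Standby.
INSTANTGO_MARKERS = (
    "Standby (S0 Low Power Idle) Network Connected",
    "Standby (S0 Low Power Idle)",
    "Standby (Connected)",
)

def supports_instantgo(powercfg_output):
    """True if an available-state line names an InstantGo sleep state."""
    # `powercfg /a` lists available states first, then the ones that
    # "are not available on this system"; keep only the first part.
    available = powercfg_output.split("are not available")[0]
    return any(marker in available for marker in INSTANTGO_MARKERS)

# Invented sample resembling a machine (like mine) without InstantGo.
SAMPLE_S3 = """The following sleep states are available on this system:
    Standby (S3)
    Hibernate
    Fast Startup
The following sleep states are not available on this system:
    Standby (S0 Low Power Idle) Network Connected
        The system firmware does not support this standby state."""

print(supports_instantgo(SAMPLE_S3))  # → False
```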
Wikipedia said the InstantGo specifications required non-removable RAM (to prevent cold boot attacks, below) as well as SSDs and a Trusted Platform Module (TPM) 2.0 (i.e., a hardware device built into some motherboards, “to ensure that the boot process starts from a trusted combination of hardware and software” (Wikipedia)). A StackExchange discussion provided further information on TPM and encryption. Microsoft said the TPM required UEFI firmware. An HP Community thread indicated that the TPM version number referred to firmware, not hardware, and thus TPM could be updated via BIOS update (above). Windows Wave and others seemed to indicate that the TPM version could be identified in several places: in the BIOS; via tpm.msc > TPM Manufacturer Information > Specification Version; and in devmgmt.msc > Security Devices, which should show Trusted Platform Module 2.0. As a feature-limited version of BitLocker, however, it seemed that Device Encryption would still be inferior to VeraCrypt.
Another possibility was to buy an SSD with its own built-in encryption technology. Crucial (2019) explained that a self-encrypting SSD would be superior to something like VeraCrypt in multiple ways: it would not require a slow encryption process, would not be vulnerable to rootkit attack, would encrypt without impairing system performance, and would use stronger security protocols. But Wikipedia offered a discussion, ostensibly focused on hardware-based full disk encryption for HDDs, that identified multiple vulnerabilities seemingly relevant to SSDs as well. For one thing, Müller et al. (c. 2013) found that “hardware-based full disk encryption (FDE) is as insecure as software-based FDE …. [and] there exists a new class of attacks that is specific to hardware-based FDE.” In the latter set, the researchers found that drives were vulnerable to “hot plug” attacks, where the power cable was left intact, on a sleeping or running computer’s drive, while the data connection was switched to a different computer. Wikipedia also pointed out that drive firmware could be compromised. Bleeping Computer (2018) reported on more recent research (Meijer & van Gastel, 2018) finding that SSDs from Crucial and Samsung had “critical security weaknesses, for many models allowing for complete recovery of the data without knowledge of any secret.” Bleeping Computer noted that these weaknesses also applied to BitLocker, because it would default to drive encryption if supported. Multiple sources echoed that SSD manufacturers had not performed well or consistently in developing their hardware-based disk encryption technologies. Such remarks were consistent with a StackExchange answer summarized as, “Do not use hardware encryption on any storage medium.”
Cold Boot and DMA Attacks
Thus my laptop’s best (indeed, only) full system encryption option was to use VeraCrypt. But VeraCrypt shared some vulnerabilities with BitLocker. First, Wikipedia said, was the cold-boot attack. This kind of attack took advantage of the fact that RAM held data for several minutes (in some cases, up to 90 minutes) after power was turned off — indeed, according to another Wikipedia page, for days and even weeks, with the aid of coolant. (See e.g., Encase; Elcomsoft.) The contents of RAM might also remain available if the system was immediately rebooted with a live CD or USB drive (StackExchange). The RAM in question could be accessed in its host machine — or, if removable, in another. Any data in RAM could be of interest in some situations, but the focus seemed to be on recovery of encryption keys, so as to gain access to the system’s data.
(Wikipedia described a password as being intended for human use, while an encryption key was intended for use by cryptographic software. A StackExchange answer (sort of) explained that this distinction allowed users to change passwords without having to re-encrypt their data: the key would remain the same. Thus, VeraCrypt said that an adversary who obtained the VeraCrypt password could use it to obtain an encrypted volume’s master key, and could use that master key to unlock the volume even after the password was changed. If someone obtained the master key to a VeraCrypt volume, the advice was to encrypt a new VeraCrypt volume and move all files to it.)
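The password/key distinction above can be modeled in a few lines. This is a deliberately toy sketch: the XOR wrap and SHA-256 derivation below are not VeraCrypt’s actual scheme (which involves PBKDF2-style key derivation and an encrypted volume header), but they show why changing the password merely re-wraps the same master key, leaving the encrypted data (and a previously leaked master key) unchanged.

```python
import hashlib, secrets

def kdf(password, salt):
    """Derive a 32-byte wrapping key from a password (toy stand-in for PBKDF2)."""
    return hashlib.sha256(salt + password.encode()).digest()

def xor(a, b):
    """XOR two equal-length byte strings (toy stand-in for real key wrapping)."""
    return bytes(x ^ y for x, y in zip(a, b))

# The master key is what actually encrypts the volume; the password only wraps it.
master_key = secrets.token_bytes(32)
salt = secrets.token_bytes(16)
wrapped = xor(master_key, kdf("old password", salt))

# Unwrapping with the password recovers the master key...
unwrapped = xor(wrapped, kdf("old password", salt))
assert unwrapped == master_key

# ...and a password change just re-wraps the SAME master key: no re-encryption,
# so an adversary who already copied the master key is unaffected.
rewrapped = xor(master_key, kdf("new password", salt))
assert xor(rewrapped, kdf("new password", salt)) == master_key
```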
The advice to prevent cold-boot attacks was to hibernate the machine or power it down when the owner was not physically present — not to leave it running or in sleep mode. Then again, contrary to what one might expect, powering down was reportedly less effective than a restart, for purposes of clearing RAM, especially if Fast Startup was enabled. Apparently the most secure power-down procedure would thus begin with a restart. In addition, F-Secure (2018) recommended making sure that the encryption (e.g., VeraCrypt) password was required on bootup (but see Wired, 2018). One source said that running Memtest86+ or system diagnostics would effectively clear RAM. So-called memory optimizers (i.e., programs to clear the contents of RAM) (e.g., CleanMem, Wise Memory Optimizer) were apparently not ideal for security purposes. For instance, How-To Geek (2014; similarly, MakeUseOf, 2018) said such tools would merely force RAM contents to be written to the paging file, though that would be protected by VeraCrypt when the drive was dismounted. Some sources (e.g., CCM.net; a Bleeping Computer post) suggested a Clear RAM Cache shortcut, constructed by using right-click (on the Win10 desktop or in File Explorer) > New > Shortcut. Sources varied on the next step, but the advice seemed to be to enter something like this for the location of the item: %windir%\system32\rundll32.exe advapi32.dll,ProcessIdleTasks. Ideally, one assumes, the user would first close programs and use Win-R > taskmgr (or Ctrl-Shift-Esc), or a predefined batch file with taskkill commands, to kill any other processes that might promptly reload RAM with sensitive contents.
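The predefined batch file mentioned above might be generated with a short script like the following sketch. The process names are hypothetical examples; a real list would name whatever programs tend to reload sensitive data into RAM on a particular system.

```python
# Hypothetical process list; substitute the programs that actually hold
# sensitive data on your own machine.
PROCESSES = ["firefox.exe", "thunderbird.exe", "KeePass.exe"]

def taskkill_batch(processes):
    """Emit a Windows batch script that force-kills each named process,
    then runs the ProcessIdleTasks command discussed above (which triggers
    deferred housekeeping; it does not securely wipe RAM)."""
    lines = ["@echo off"]
    lines += [f"taskkill /f /im {p}" for p in processes]
    lines.append(r"%windir%\system32\rundll32.exe advapi32.dll,ProcessIdleTasks")
    return "\r\n".join(lines)

print(taskkill_batch(PROCESSES))
```

The resulting .bat file would be run just before the restart-then-shutdown sequence described above.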
To make cold boot attacks more difficult, Wikipedia suggested soldering or gluing RAM modules into the computer and using Secure Boot. Cold boot attacks had reportedly become rare in the real world partly due to improved RAM designs that tended to discard their contents very quickly after shutdown. Researchers suggested that this kind of attack would not generally interest a casual finder or thief of a misplaced laptop — that this would rather be a task for a determined adversary who would be willing to spend money to make it happen.
Another potential threat: DMA attacks. Direct Memory Access (DMA) hardware (used in PCI, PCIe, Firewire, some NVMe, and other connections) was susceptible to attacks that would bypass the operating system to obtain direct read-write access to RAM, so as “to read all that the computer is doing, steal data or cryptographic keys, install or run spyware and other exploits, or modify the system to allow backdoors or other malware” (Wikipedia). Delaunay (2018) described such an attack, leading to options to unlock the login screen (using PCILeech) or install a backdoor. A Microsoft article (2018) focusing on vulnerability in Thunderbolt 3 hardware said,
Drive-by DMA attacks … occur while the owner of the system is not present and usually take less than 10 minutes, with simple to moderate attacking tools (affordable, off-the-shelf hardware and software) that do not require the disassembly of the PC. A simple example would be a PC owner leaves the PC [turned on during] a quick coffee break, … [an] attacker steps in, plugs in a USB-like device and walks away with all the secrets on the machine, or injects a malware that allows them to have full control over the PC remotely.
A search led to multiple sources of further information on DMA attacks. For instance, Elcomsoft (2018) indicated that its Forensic Disk Decryptor ($599) was capable of extracting data from an encrypted drive (including BitLocker and VeraCrypt) by using a key in RAM (video); Belkasoft offered its Live RAM Capturer for free, but recommended using it in combination with its Evidence Center software (price not available); and Passware provided an overview of VeraCrypt decryption using data taken from memory. A StackExchange discussion illustrated the potentially enormous challenge of trying to harden an unattended, running machine against such attacks.
It appeared that, as above, the first line of defense against a DMA attack would be to dismount the data drive and then hibernate or power down the computer before the user left the machine unattended. In that and another article, Microsoft recommended additional protective steps, pertaining especially to BitLocker. More recent versions of Windows 10 (starting with 1803) seemed to be designed to mitigate some aspects of this form of attack. The details varied according to the version of Windows 10, the hardware setup, and the system state (see Bleeping Computer, 2018). It tentatively appeared that BitLocker was (or at least had been) more vulnerable to DMA attacks than VeraCrypt. My impression was that, when Decent Security (2017) claimed BitLocker was superior for its role in a “chain of trust that flows from UEFI [and Secure Boot] to Windows bootloader to BitLocker,” the better conclusion was that a single weak link jeopardizes an entire chain. The “chain of trust” concept seemed incompatible with the principle of defense in depth — that is, having multiple defenses at any point.
Evil Maid Attacks
Another vulnerability, for BitLocker and VeraCrypt alike, was the Evil Maid attack, named for a hypothetical domestic worker or hotel employee — in other words, a bad actor with physical access — who would take advantage of the user’s absence to install hardware or software that would later capture (at least) the encryption password when the user entered it. Hardware keyloggers could include external USB or other dongles, perhaps installed just long enough to capture the password. These would usually be quite noticeable on a laptop; less so at the rear of a desktop computer’s case. There were also hardware keyloggers designed to be installed inside computers or keyboards. These included an older mini-PCI keylogger as well as a currently available KeyGrabber module. Searches on Amazon for USB hardware keyloggers recommended by PhoneHack (2018) yielded the general impression that hardware keyloggers were rarely purchased, not highly rated, and/or antiquated (e.g., Keyllama 4MB (sic)). On slim evidence so far (i.e., only four ratings), the main exception appeared to be the relatively sophisticated $160 Wi-Fi Premium USB MCP. But possibly there were better places to buy such devices.
An Evil Maid attack could also use a software keylogger. A StackExchange answer explained that steps in a software-based Evil Maid attack might be (1) pull the computer’s HDD and connect it to another machine, (2) use that other machine to replace the HDD’s VeraCrypt (or other) bootloader (which was not encrypted; it would not be able to function if it were encrypted) with a similar bootloader containing a software keylogger, (3) restore the HDD in the original computer, (4) wait for the computer owner to use the machine and enter the VeraCrypt password, and then (5) retrieve the VeraCrypt password saved by the keylogger. Retrieval could entail repeating the HDD remove-and-replace process, or perhaps the keylogger would have the ability to transmit captured data via WiFi.
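One hedged countermeasure against that bootloader swap is to record a hash of the unencrypted bootloader image and re-check it later. The sketch below shows only the hashing logic; in real use the check would need raw access to the boot partition (administrator rights) and, crucially, would have to run from trusted media, since a compromised system could feed it a clean-looking copy.

```python
import hashlib

def sha256_file(path, chunk=1 << 20):
    """SHA-256 of a file, e.g., a dumped copy of the boot partition."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

def bootloader_tampered(path, known_good_hash):
    """True if the current bootloader image no longer matches the recorded hash."""
    return sha256_file(path) != known_good_hash
```

Any legitimate bootloader update would, of course, also change the hash, so the recorded value would need refreshing after each such update.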
If an attacker did manage to install malware on a computer, the next line of defense would be to limit its effectiveness. In the case of a software keylogger, that could be challenging. While PCWorld said that keyloggers would sometimes display symptoms (e.g., pauses or mistakes in keyboard or mouse responses), other discussions said it could be difficult if not impossible to determine whether a keylogger was running, even for technically advanced users. For instance, Raymond said that, once installed, Actual Keylogger was invisible during Windows startup, did not appear in Task Manager or the uninstallation list, and had a hidden program folder (though presumably that would be evident to a folder-by-folder comparison system, potentially achieved by a batch file running frequently). Lifehacker said the same about StupidKeylogger. Similarly, while SafetyDetective (2019) listed antivirus programs that would supposedly offer some protection from keyloggers (with Bitdefender at the top of the list), Super Keylogger claimed to be undetectable by antivirus. Keyloggers.com compared eight keylogging programs (notably, Spyrix) on various aspects of invisibility (e.g., hidden startup) and performance (e.g., ability to capture screenshots).
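The folder-by-folder comparison idea mentioned above can be sketched as a snapshot diff. An important caveat: a rootkit-grade keylogger that hides its folder from the running OS would also hide it from this scan, so the “after” snapshot is only meaningful when taken from clean boot media that sees the drive directly.

```python
import os

def snapshot(root):
    """Set of all directory paths under `root`, relative to it."""
    return {os.path.relpath(os.path.join(dirpath, d), root)
            for dirpath, dirnames, _ in os.walk(root)
            for d in dirnames}

def new_folders(before, after):
    """Folders present in the new snapshot but absent from the old one."""
    return after - before
```

In practice the “before” snapshot would be saved to removable media, and the scan would be repeated periodically over the Windows program folders.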
Wikipedia said that anti-keylogging software — that is, software designed to detect and, ideally, to delete or neutralize keyloggers — tended to be of two types. Signature-based anti-keyloggers would scan the Windows program drive, looking for files matching a database of known keyloggers, while heuristic analysis anti-keyloggers would scrutinize processes running in a computer, to see whether any of them behaved like a keylogger. Wikipedia said the problem with the database approach was that hackers could modify the keylogger enough to make it appear dissimilar to known keyloggers, while the heuristic approach would yield false positives — that is, would sometimes incorrectly identify legitimate Windows processes as keylogging activities.
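The signature-based approach can be sketched in a few lines: hash every file under a directory and compare against a database of known-bad hashes. This is only a conceptual illustration — the hash database here is a made-up example, not how any particular anti-keylogger product actually works:

```python
import hashlib
from pathlib import Path

# Hypothetical database of SHA-256 hashes of known keylogger binaries.
# (Example entry only; a real product would ship a regularly updated database.)
KNOWN_BAD_HASHES = {
    "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824",
}

def sha256_of(path):
    """Hash a file in chunks so large files do not exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(root):
    """Return files whose hash matches a known-bad signature."""
    return [p for p in Path(root).rglob("*")
            if p.is_file() and sha256_of(p) in KNOWN_BAD_HASHES]
```

As Wikipedia's observation suggests, changing even one byte of the keylogger binary would change its hash and defeat this kind of scan — which is exactly why heuristic analysis exists as a complement.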
Such problems did not necessarily make anti-keyloggers useless. Wikipedia noted that they were in regular use to protect public computers and financial institutions, among others. There was, however, a question of whether private individuals could use and afford industrial solutions. As the leading case in point, multiple sources (e.g., WindowsReport, Gecko&Fly, OSAuthority) recommended SpyShelter, which advertised itself as not requiring a signature database and claimed to be “capable of stopping both commercial and custom-made keyloggers,” including “advanced zero-day malware,” which might not “be detected by any anti-virus software.” To live up to such claims, SpyShelter would presumably use an exacting heuristic analysis that, as just noted, might snag many legitimate processes. In an exchange of emails, a SpyShelter representative confirmed to me that SpyShelter’s free version was essentially defunct. The paid version of SpyShelter currently cost $33/year for one machine ($43 for two). For that price, they said I would get an anti-keylogger that would detect keyloggers as well as protection against unwanted screen capturing. A Wilders Security discussion gave me the impression that learning to use SpyShelter effectively could require a considerable time investment and some degree of technical sophistication. It wasn’t clear whether that explained the results achieved in testing by PCMag (Rubenking, 2015): installations of genuine programs were blocked despite user intervention, while many pieces of test malware were able to run.
Zemana, which apparently used a signature database approach, appeared to be the leading competitor to SpyShelter. The free version of Zemana Antilogger (premium version $30/year or $60/3 years) likewise seemed not to be available on the Zemana website, and was marked as “discontinued” (last updated 4/10/17) on Softpedia. (A discontinued item relying on a regularly updated database would presumably not detect threats emerging since the database was last updated, or perhaps the database was now completely unavailable.) SpyShelter and Zemana offered other features (e.g., secure SSL, HIPS module, webcam protection, anti-ransomware, malware scanner, adware removal, zero-day malware protection), some of which might be provided by the user’s antivirus software or by other measures. The general impression from this brief review was that there might be a real need for effective antikeylogging software, but that the world was not yet convinced that SpyShelter or Zemana had found the solution.
The principle of defense in depth called for a tool that, ideally, would detect and remove keyloggers without snagging an annoying number of legitimate programs. Since we did not yet seem to have that, a second-best solution would be software that merely prevented an installed keylogger from obtaining what it sought. On that level, SpyShelter offered a Silent Anti-Keylogger whose real-time keystroke encryption function would apparently tunnel keystrokes only to the intended application. Other programs, attempting to observe what was coming from the keyboard, would then see nothing but seemingly random characters. For this purpose, pending good testing, it was not clear whether SpyShelter’s Silent Anti-Keylogger ($13/year) was any better than the free Ghostpress (which also claimed to offer screenshot protection; recommended by TheWindowsClub and MakeUseOf, and the subject of a hopeful but incomplete discussion at Wilders Security; 4.1 stars from 38 raters on Softpedia) and KeyScrambler Personal (four stars from 532 raters at CNet, 3.7 stars from 70 raters at Softpedia). Presumably a keylogger’s first task would be to determine whether something like Ghostpress was running and, if so, to shut it down or cripple it, preferably without any warning that the user’s keystrokes were no longer being camouflaged. In that case, one might want to have a batch file regularly checking that programs like Ghostpress were still running. In my own testing, Ghostpress was irritating: among other things, it missed characters, forcing me to retype them. It was not clear whether SpyShelter’s key scrambler would have such problems.
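The idea of regularly verifying that protective programs are still running could be sketched as follows. This sketch parses the CSV output of the Windows tasklist command; the process names are placeholders, and a real deployment would also need to decide what to do (alert, shut down, dismount drives) when a watched program disappears:

```python
import csv
import io

def running_names_from_tasklist(csv_text):
    """Parse the CSV output of Windows' `tasklist /fo csv` into process names."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    # First row is the header: "Image Name","PID","Session Name",...
    return {row[0].lower() for row in rows[1:] if row}

def missing_watchdogs(running, watched):
    """Return watched protective programs (e.g., Ghostpress) not currently running."""
    return {w for w in watched if w.lower() not in running}
```

A scheduled task could run this check every minute or so; silence from a keystroke-encryption tool would then at least not go unnoticed indefinitely.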
A different response to the threat of keylogging software would be to use a virtual keyboard. Various sources listed several free tools that would display a keyboard onscreen, allowing the user to “type” by clicking on the desired letters. For instance, How-To Geek (Hoffman, 2017) explained how to use the two virtual keyboards built into Windows. WindowsReport (Stanojevic, 2017) and Gecko & Fly (2019) identified other virtual keyboard software. It might be possible for a touch-screen keyboard to approach the comfort of a traditional keyboard, but otherwise it appeared unlikely that a virtual keyboard would be realistic for activities requiring extensive typing (e.g., by writers). For users who did not have or did not want to use a touch screen, mouse-clicking on letters displayed onscreen would be vastly slower than regular typing. Presumably some security-oriented software offered what VeraCrypt called its Secure Desktop feature to prevent keylogging of characters typed into that space — but, again, pending good testing, it was not clear whether such tools reliably defeated keyloggers with screen capture capabilities. A StackExchange discussion seemed to indicate that the possibilities and limits of Secure Desktop software were not widely understood even among relatively sophisticated users — but it appeared that, in the words of a 1Password forum participant who did seem to have a relatively clear understanding,
The Secure Desktop feature has only limited utility. It offers protection against a very narrow attack vector — namely, someone has access to your computer enough to log keystrokes (to try to capture the Master Password [for the 1Password software]), but somehow cannot collect data as you access it (after unlocking yourself). I’m not sure I would call that “useless”, but it does not have the security properties some people seem to imagine it does. It’s just one option people can use, and that’s why we present it as such. You don’t have to use it if you don’t see a value in it. If [you] have reason to believe that you have a keylogger on your machine, you should not be using 1Password at all until you address that.
A SuperUser discussion appeared to confirm that screen capture was still possible within Secure Desktop. The working conclusion here seemed to be that a security-conscious user might want to become acquainted with the built-in virtual keyboard, and might consider using it for entry of especially sensitive information, such as the passwords required by banks and password managers — but should not assume that it offered much protection against good keylogging software. For the record, then, one built-in Windows 10 virtual keyboard was available via Win-R > osk.
Direct installation of a software keylogger could apparently be achieved merely by plugging in an infected USB drive. That would evidently be easiest if the spy was able to work inside an active login on the targeted computer (e.g., an employer installing a keylogger on an employee’s computer). Even then, however, good antivirus software would apparently tend to detect and prevent the installation. Installation would evidently become progressively more difficult if the spy was dealing with a machine that s/he could not log into, or whose drive encryption posed a barrier preceding Windows login. Kaspersky (2013) said that, unlike hardware keyloggers, software keyloggers did not require physical access to the computer. But KeyloggerDownloads (2016) said remote (as distinct from direct) installation was “quite difficult.” To put up barriers to keylogger installation, MakeUseOf recommended multiple measures, including using a firewall, anti-keylogger, password manager, and good password practices, and keeping one’s system updated. Later, it occurred to me that perhaps a system could be configured to disable USB devices upon certain events (e.g., when the screensaver turns on).
VeraCrypt’s documentation said, “You must not use VeraCrypt on a computer that an attacker has physically accessed.” In the case of an Evil Maid attack during the computer owner’s absence, it seemed that the keylogger would capture the VeraCrypt password when the owner entered it. As Ghacks (2018) observed, VeraCrypt (Settings > Preferences) offered an option to “Use Secure Desktop for password entry.” That option, when enabled, reportedly used the CreateDesktop function in Windows — which, Ghacks (2017) said, “isolates the dialog from the rest of the desktop and other processes on the operating system” in which, according to a Ghacks commenter, “what you type cannot be captured by [software] keyloggers.” The VeraCrypt secure desktop appeared to be implemented at a high level, preventing interaction with other programs, but presumably the password would still be stored in RAM.
In at least some instances of Evil Maid intrusion, VeraCrypt would detect an intrusion. For example, a StackExchange answer addressed the situation where a computer using a VeraCrypt-encrypted drive C produced a message saying, “WARNING: The verification of VeraCrypt bootloader fingerprint failed! Your disk may have been tampered with by an attacker (‘Evil Maid’ attack).” The explanation was that VeraCrypt kept “a cryptographic fingerprint of the bootloader to see if it’s been tampered with.” While that feature seemed very helpful, there were two caveats. First, StackExchange noted that other things (e.g., backup tools) might also trigger that VeraCrypt warning. Second, the answer went on to explain that “a skilled attacker could thwart this [cryptographic fingerprint] as well unless the machine is using a TPM (above) or similar that checks the bootloader against a key which the attacker can’t overwrite.” VeraCrypt’s FAQs contended that a TPM was actually not effective for foiling this sort of attack.
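The general mechanism behind that warning — record a cryptographic fingerprint of the unencrypted bootloader at install time, then recompute and compare it at each boot — can be sketched conceptually. This illustrates the idea only; VeraCrypt's actual implementation details differ, and as the StackExchange answer notes, an attacker who can also overwrite the stored fingerprint defeats this check unless the fingerprint is anchored in hardware:

```python
import hashlib
import hmac

def fingerprint(bootloader_bytes):
    """Compute a fingerprint of the (necessarily unencrypted) bootloader region."""
    return hashlib.sha256(bootloader_bytes).hexdigest()

def verify(bootloader_bytes, stored_fingerprint):
    """Constant-time comparison against the fingerprint recorded at install time."""
    return hmac.compare_digest(fingerprint(bootloader_bytes), stored_fingerprint)
```

Any single-byte modification to the bootloader image — such as a swapped-in keylogging bootloader — changes the hash and triggers the mismatch.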
The VeraCrypt warning just quoted (i.e., that the “fingerprint failed”) did not seem to be widely taken as a definitive solution to the Evil Maid attack. That is, it did not appear that everyone was trusting VeraCrypt to know when an Evil Maid attack might have occurred. VeraCrypt’s people certainly did not seem to be claiming that kind of assurance: their documentation (above) flatly warned against using VeraCrypt on “a computer that an attacker has physically accessed,” regardless of whether the warning appeared. The whole point of the Evil Maid attack was to evade detection. An Evil Maid attacker would be relatively sophisticated. Hence, it seemed that users would often receive no reliable warning, and would not know whether “an attacker has physically accessed” the computer. At least at the level of corporate espionage, the conscientious precaution would apparently be to keep the computer under secure lock and key whenever it was out of the user’s sight, and to assume that any machine not secured in that way had been compromised.
If an Evil Maid attack was suspected, the StackExchange answer suggested changing the password immediately. That suggestion seemed to assume that a keylogger would focus only on capturing the login password. I was not sure that assumption was justified. My impression of keyloggers was that a good one could capture everything entered at the keyboard, and more, and thus would capture the change of password. For instance, the makers of pcTattletale bragged that, along with keyboard input, their software would also capture video showing what appeared onscreen. It was not clear whether such screenshots could capture what was entered into VeraCrypt’s Secure Desktop. As noted above, it was quite possible that the captured information was being relayed immediately via WiFi. Even if the attacker was unable to explore the targeted computer from a remote location, s/he might not have to wait long for another opportunity for physical access, so as to retrieve the relevant passwords.
VeraCrypt’s admonition (above) — that you “must not use VeraCrypt on a computer that an attacker has physically accessed” — seemed to contradict the StackExchange answer; it seemed to say that entering a new password at that point should be considered futile or worse. My guess at an interpretation was as follows: if the machine has been left in a potentially compromising location, don’t enter the VeraCrypt password. If you’ve already entered the VeraCrypt password, change it. Once you’ve changed it, don’t log in with the new password, lest you make it visible to a keylogger that is able to capture only certain kinds of input. Instead, prevent (further) access — perhaps by disabling the computer’s WiFi and ethernet (in devmgmt.msc and/or physically) or taking other prudent security measures — and then shut down the system and decide whether to gamble that at least the computer’s data drive can be safely used (or at least copied from) on a different machine.
VeraCrypt’s admonition did not offer an end date, after which it would be OK to start using the potentially infected computer again, nor did there seem to be tools offering assurance. If one did not wish to assume that flashing the BIOS (above) would take care of any firmware infection, and that wiping the system’s drives and restoring the operating system and data would take care of any software infection, then presumably the next step would be to sell, dismantle, or discard any potentially targeted machine that was ever out of the user’s sight. Users who could not afford the ensuing time, hassle, and expense would presumably choose, instead, to rely on the hope that some combination of external precautions (e.g., use a surveillance camera system where the computer is kept), along with other internal security measures discussed below (e.g., use good antivirus software), would be sufficient to send ordinary hackers elsewhere. Generally, people seemed to be responding to the threat of an Evil Maid attack by assuming that it would take some money and motivation to undertake and succeed at this evidently illegal activity (see discussions at Reddit (1 2) and Quora), and that they and their data were not worth that kind of effort.
Similar observations seem to apply to the discovery that it was possible to capture VeraCrypt passwords, along with other typed characters, without any intrusion into the targeted computer, by using electromagnetic or audio logging of the typing itself (or, in at least one study, of the user’s smartwatch), and that this kind of logging could be performed through walls and some distance away. Users appeared to be mostly unresponsive to this news — and rightly so, perhaps, at least on a practical level, at least in most locations (e.g., not in law enforcement facilities), at least until the requisite spyware advanced to the point of being portable and concealable.
Authentication Methods in VeraCrypt
It appeared, in short, that VeraCrypt passwords could be captured, in principle, through cold boot and Evil Maid attacks, and also through remote listening. It seemed doubtful that such attacks could be reliably and consistently prevented, or that the computer owner would necessarily know if any such attack occurred. Users who had the time and interest (or the aid of a technical support department) to monitor WiFi emanations, to dismantle and thoroughly inspect their hardware, and to track changes in the program files on their C drives, might be best positioned to detect and perhaps even retaliate against attackers. The rest of us would have to find some other way.
One other way would be, ideally, not to rely solely on keyboard passwords to authenticate VeraCrypt. According to Wikipedia, authentication could be accomplished by using one or more of several items: something you know (e.g., PIN, password); something you have (e.g., debit card, USB key, cellphone); or something you are (e.g., fingerprint, retina scan). Multifactor authentication (MFA), typically involving just two factors (i.e., 2FA), would use more than one such item. For example, possession of your debit or credit card at the grocery store would not be enough, by itself, to make a purchase; the person holding that card would also need the PIN. Another way of seeing it, expressed in a StackExchange discussion, was that getting past an MFA checkpoint would require at least two different kinds of theft.
As HTG (2016) observed, the hassle of MFA would be warranted for important websites (e.g., bank account, primary email, master password in password manager software). Many sources agreed that SMS (i.e., ordinary text messaging) provided an unsafe method of MFA — though, according to Sophos (2018), even that was better than using no MFA at all. The better smartphone choice was a dedicated authentication app, among which various sites (e.g., PCWorld, Kaspersky) named Google Authenticator as the best. Even better, multiple sources (e.g., The Verge, Sophos) agreed that a hardware token (e.g., the USB YubiKey device) was by far the most secure form of MFA, as long as you didn’t lose it: apparently YubiKey couldn’t be backed up in software, though evidently it was possible to program multiple YubiKeys as backup for one another.
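The authenticator-app approach is standardized as TOTP (RFC 6238), which is the scheme Google Authenticator implements: both sides share a secret, and the code is an HMAC of the current 30-second time step. A minimal version, assuming a shared secret, fits in a dozen lines:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, timestamp=None, digits=6, period=30):
    """RFC 6238 time-based one-time password over a shared secret (bytes)."""
    counter = int(time.time() if timestamp is None else timestamp) // period
    msg = struct.pack(">Q", counter)  # 8-byte big-endian time-step counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # RFC 4226 dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Because the code depends on the current time step, a captured code expires within seconds — which is why even a keylogger that sees the six digits gains little, so long as it cannot replay them immediately.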
YubiKey could function as an authentication device for some purposes. I posted a question on whether it could authenticate use of a computer’s motherboard. Unfortunately, there did not seem to be any particular arrangement by which YubiKey could function as a second authenticator for VeraCrypt specifically. Some might consider that just as well: there were security concerns related, at least in part, to YubiKey’s switch to closed-source code. One source suggested using YubiKey merely to add part of the VeraCrypt password. The concept was that the user would enter the first part, from memory, and would then plug in the YubiKey and let it deliver the (potentially long) second part — so that, in effect, both the user’s memory and the USB device would be needed to complete the process of gaining access to the drive. A comment to a Lifehacker article (2013) suggested, however, that a keylogger would be able to capture the YubiKey portion as well as the user-typed portion of the password. It seemed the benefit of this method would be to enable a very long password, and the risk would be that the YubiKey would malfunction or be lost, in which case the user seeking to access his/her drive would have to retrieve (hopefully s/he would have written down or otherwise backed up) the long password partially entrusted to the YubiKey.
While VeraCrypt did not seem to support YubiKey or other authentication devices (beyond the limited use just described), it did support keyfiles. The VeraCrypt documentation explained that a keyfile was “a file whose content is combined with a password,” such that the encrypted volume would not be mounted until both the password and the keyfile were provided. Although VeraCrypt said any kind of file (e.g., .mp3) could be used, VeraCrypt recommended using VeraCrypt’s own Tools > Keyfile Generator to produce a keyfile unless there was good reason to do otherwise. Besides introducing a kind of MFA, a keyfile could complicate brute-force cracking efforts. But a StackExchange discussion yielded the impression that a keyfile would not add anything beyond what one could already achieve simply by using a long random password. That discussion reiterated that modern software keyloggers and other malware were capable of reading any VeraCrypt login input (even if VeraCrypt was set to “wipe cache” and if its password caching was disabled, as it should be). The primary benefit of the keyfile would apparently be to foil attacks seeking physical keyboard input (e.g., a camera pointed at the keyboard, a hardware keylogger installed in the keyboard). Another StackExchange discussion considered the possibility that the attacker would figure out which file on the user’s computer contained the keyfile used to decrypt a particular file or container. The suggestion in response appeared to be that, if you’re going to use a keyfile, treat it as an MFA authentication factor: put it on a USB drive, don’t leave it plugged into the computer very long, and maybe encrypt it. For this purpose, a YubiKey might be superior in being harder to write to (i.e., to infect).
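The way a keyfile strengthens a password can be illustrated conceptually: derive the volume key from both the password and the keyfile's contents, so that neither alone suffices. This sketch is not VeraCrypt's actual keyfile algorithm (which mixes keyfile bytes into a pool before key derivation); it only shows the combining principle:

```python
import hashlib

def derive_key(password, keyfile_bytes, salt, iterations=200_000):
    """Combine a password with keyfile content; both are needed to reproduce
    the key, so a guessed password alone yields nothing."""
    material = password.encode() + hashlib.sha256(keyfile_bytes).digest()
    return hashlib.pbkdf2_hmac("sha512", material, salt, iterations)
```

Changing either input — a different password or a different keyfile — produces an entirely different key, which is what makes the keyfile function as a second factor.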
The keyfile concept had some drawbacks. First, having lost data to malfunctioning USB drives (and having heard real horror stories in which others lost irreplaceable data by trusting such drives), I considered it very unwise to make access to an encrypted drive completely dependent upon a USB drive. That might make sense in a corporate setting where the IT department controls passwords and retains backups: you take the machine out into the world, your USB drive fails, you go back and they replace it. I wasn’t sure I would want to depend on a USB drive even if I kept a backup, or a printout of the keyfile’s contents, in a safe location that might be inaccessible when needed.
A second drawback of keyfiles was that, unfortunately, VeraCrypt’s documentation said, “Keyfiles are currently not supported for system encryption.” All this talk about keyfiles was relevant only for data (i.e., non-system) drives. Hence, for maximum security within operational constraints on a system drive, the only question was whether the user would choose a password that s/he could remember (probably due to being either relatively short or not substantially random) or a long, random (i.e., completely uncrackable) password that might be at least partly copied and pasted from a USB or other external drive or device (and thus apparently still detectable by contemporary software keyloggers).
VeraCrypt offered an additional security feature: the Personal Iterations Multiplier (PIM). A StackExchange discussion provided opaque statements on exactly what the PIM was, but the point of the PIM was apparently to inject delay into password calculation, so as to slow down brute-force attacks. The PIM was criticized for adding another thing to remember, without making the attacker’s job any more difficult than one would achieve with a comparable increase in the number of characters in the password — meanwhile slowing down the user as well, when s/he entered the correct password.
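The effect of a higher iteration count can be demonstrated with PBKDF2: every additional iteration makes both legitimate unlocking and each brute-force guess proportionally slower. (The formula VeraCrypt uses to map a PIM value to an iteration count is its own; this sketch only shows the underlying principle.)

```python
import hashlib
import time

def timed_derive(password, salt, iterations):
    """Derive a key and report how long it took; a brute-force attacker pays
    this same cost for every single password guess."""
    start = time.perf_counter()
    key = hashlib.pbkdf2_hmac("sha512", password, salt, iterations)
    return key, time.perf_counter() - start
```

Running this with, say, 1,000 versus 500,000 iterations shows the tradeoff the critics describe: the defender's login slows down by the same factor as the attacker's guessing.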
To recap, ideal measures suggested or implied in this section included the following: use VeraCrypt (not BitLocker or other software, and not encryption tools built into SSDs or other hardware) to encrypt drives or partitions; power down or hibernate (as distinct from sleep) the system when not in use; dismount data drives before hibernating or powering down; do not leave the system running (even if the screen is locked or the user is logged out) where someone could insert a USB drive unobserved; use keyboard encryption software; be aware of alternate means of identifying keystrokes (especially but not only password entry) (e.g., audio or video observation; hacking the gyroscope on the user’s smartwatch); arrange computers so that USB and keyboard ports will be readily visible; and weigh carefully the tradeoff between a long password partly contained on a USB drive or YubiKey, increasing data security at the potential risk of losing data access in the event that such a device is lost or corrupted. In addition, in the best practice, immediately cease all interaction with (e.g., do not even enter a password into) a computer that has been left in an unsafe location (e.g., not stored under secure lock and key, not under constant video surveillance).
Reducing Data Exposure
If an attacker gained access to my Windows 10 installation, it would presumably be because s/he broke through my VeraCrypt protection, or because I left the machine running with my VeraCrypt password entered, or otherwise took VeraCrypt out of his/her way. In that case, many program and data files could be available for the attacker to inspect, copy, delete, or alter, as s/he chose. Even if the Windows login screen prevented an intruder from going right on into my system and viewing, copying, or deleting its files (see below), s/he might still be able to view and tinker with the contents of my drive by connecting it to another computer. The question then would be, what would the drive contain?
Data on Drive D
Some attackers might not care about data files on a user’s system. For instance, a remote attacker might wish to enslave your machine to assist in some massive Internet scheme, or might be seeking bank account passwords stored in a program installed on drive C. But what about espionage, or plain old personal snooping, where the intruder is specifically interested in your data files?
As we have seen, program and data partitions could be arranged and encrypted in different ways: someone might use BitLocker instead of VeraCrypt, for example, or might use an encrypted data container on drive C. For simplicity, this section assumes that drive C is a user’s Windows 10 system partition, drive D is the data partition, and both are encrypted using VeraCrypt.
On that basis, a user’s first objective might be to reduce the size of his/her potential exposure. For instance, data files stored on a laptop taken out into unsecure locations might be limited to those likely to be needed there, and some of those materials might be further encrypted, within the VeraCrypt-encrypted data drive, using a VeraCrypt container (perhaps hidden) or a tool like 7-Zip or WinRAR, encrypted with a different password. Depending on the value or sensitivity of the data materials, one might store some items back home, or at one’s office, on a separate drive kept on a shelf or in a safebox, or in encrypted containers on DVD or Blu-ray discs, synchronized using something like Beyond Compare, and occasionally connected to the computer with the aid of an external USB dock. Of course, backup would always be essential, with at least one copy in a safe or lockbox and/or at a different location. The concept here is that, as we learn from various horror stories, it is not advisable to store a company’s or a nation’s entire database on a single computer that gets lost, stolen, or destroyed, with poor or nonexistent backup.
Assuming the data files on drive D — on, say, a laptop taken out into the world — really did need to be there, as opposed to being more safely stored somewhere else, the question was how to secure them. Assuming a competent VeraCrypt encryption (with, presumably, different passwords for drives C and D), maybe the key questions were when and for how long they should be decrypted. On that, VeraCrypt offered several potentially useful options in its Settings > Preferences:
- Mount all device-hosted VeraCrypt volumes at Windows logon. Aside from the fact that it could apparently be a struggle to get those volumes to take the desired drive letter (if there was more than one data partition), this setting would automatically expose the data partition (i.e., drive D) in every session, even when it was not needed. For instance, in a work session limited to browsing online or working with nonsensitive files on a USB drive, it might be unnecessary to decrypt a carefully encrypted data drive, thus potentially exposing its password or contents to an intruder watching online or located nearby.
- Auto-dismount all volumes when screen saver is launched. The VeraCrypt documentation said, “VeraCrypt does not dismount encrypted system volumes.” Thus, “all volumes” here seemed to mean all drives other than C. If the screensaver was set to turn on after just a few minutes of mouse and keyboard inactivity, so as to narrow the window of opportunity during which an attacker could gain physical access via the keyboard (due to e.g., some distraction or emergency affecting the user), this setting would auto-dismount drive D as well, even if a task was actively using it. At the same time, it would fail to dismount drive D as long as the user was actively using the system, even if drive D was not needed.
- Auto-dismount all volumes when entering power saving mode. Presumably this option, too, would have no effect on drive C. VeraCrypt’s documentation seemed to say that “power-saving mode” would include hibernation, sleep, and hybrid sleep, all of which could be set in the Control Panel dialog available at Win-R > control.exe powercfg.cpl,,3. Ideally, this meant that VeraCrypt would gracefully shut down drive D when it sensed that Windows was about to enter hibernation. It appeared this option would not do any harm, and might be helpful. But it would not address the desire to make D unavailable when it was not being used.
- Auto-dismount volume after no data has been read/written to it for ___ minutes. It could take some tinkering to figure out which programs (e.g., antivirus scans) might keep drive D alive longer than expected. This option could presumably be set to dismount D gracefully, thus avoiding the risk of file corruption, before drive C would be hibernated or shut down.
It appeared that at least some of those options could be helpful. The situation could be more difficult if the user needed disparate auto-dismounting options for multiple drives, or if the foregoing solutions turned out to affect drive C as well. In that case, it might be ideal to use two separate encryption programs, one for each drive. But, BitLocker aside (at least on my Windows 10 Home laptop), it was tough to find competitive encryption alternatives (see e.g., TheWindowsClub, 2018). Grasping at straws, How-To Geek (2017) suggested maybe going back in time to the problematic TrueCrypt 7.1a (see my previous post). Since VeraCrypt was built from TrueCrypt, that might be rather like an attempt to run portable and installed versions of VeraCrypt simultaneously: it might work, at least until some conflict caused it to fail catastrophically. A different solution would be to access a drive only from within a virtual machine (VM), and to use a VeraCrypt installation in the VM to dismount that drive. Yet another possibility, on which I posted a question, might be to use mountvol, perhaps in conjunction with a utility capable of identifying disk, mouse, or keyboard activity, to dismount a specified drive. Some such solution might also be useful if VeraCrypt’s auto-dismount options (above) failed for some other reason to produce the desired outcome.
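The mountvol idea could be driven by simple idle-tracking logic: record the last time the drive saw activity, and dismount once a threshold passes. A minimal sketch of just the decision logic follows; the actual dismount step on Windows (e.g., shelling out to mountvol or to VeraCrypt's command line) is deliberately not shown, since the right command depends on the setup:

```python
import time

class IdleDismounter:
    """Decide when an encrypted data drive has been idle long enough to dismount."""

    def __init__(self, idle_limit_seconds):
        self.idle_limit = idle_limit_seconds
        self.last_activity = time.monotonic()

    def record_activity(self):
        """Call whenever disk, mouse, or keyboard activity is observed."""
        self.last_activity = time.monotonic()

    def should_dismount(self, now=None):
        """True once the idle threshold has elapsed since the last activity."""
        current = time.monotonic() if now is None else now
        return current - self.last_activity >= self.idle_limit
```

A small scheduled script polling this check would approximate VeraCrypt's own inactivity auto-dismount, but with a user-chosen definition of "activity."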
Data on Drive C
Assuming drive D was encrypted with its own unbreakable password, contained only data that the user actually needed, and was dismounted when not needed, there would be a separate question of whether data from drive D was capable of being discovered on drive C. That would matter if, for instance, an attacker could use or steal the computer at a time when Windows was running, or (perhaps using one of the attacks described above) was able to find or figure out the VeraCrypt password for drive C but not drive D.
In that case, unfortunately, the attacker probably would find some data from drive D sitting on drive C. According to the VeraCrypt documentation,
When a VeraCrypt volume is mounted [such as an encrypted data drive D], the operating system and third-party applications may write … [information to drive C] about the data stored in the VeraCrypt volume (e.g. filenames and locations of recently accessed files, databases created by file indexing tools, etc.), or the data itself …. Note that Windows automatically records large amounts of potentially sensitive data.
VeraCrypt’s Security Requirements and Precautions page listed a number of specific concerns. Some of those concerns seemed primarily relevant to users who needed what VeraCrypt called plausible deniability — that is, a computer setup allowing the user to conceal data in a hidden partition, and perhaps even to use a hidden operating system, whose existence would not be revealed even if the user was forced (by e.g., an authority or an extorter) to give up the first-level VeraCrypt password. The plausible deniability scenario did seem to entail some additional complications. For my purposes, not involving plausible deniability, the primary concerns identified in VeraCrypt’s documentation seemed to be as follows:
- Paging File. The Windows paging file(s) held overflow program information and data (i.e., material that did not fit in RAM) for current and prior operations. Trivedi (2014) and Chandel (2015) illustrated the kinds of potentially sensitive data that pagefile.sys could contain. I was able to view the contents of a paging file on my system by running ShadowExplorer (GUI alternatives: RawCopy and others) > select a recent image from the dropdown box at upper left (it could take a day or two to generate multiple images; see manual) > C:\pagefile.sys > right-click > Export. I renamed it as pagefile.txt and found that none of several large-file viewers suggested in a StackOverflow discussion was able to display the contents of that 25GB pagefile.txt file. (Note the existence of sometimes expensive tools for such purposes, e.g., Belkasoft Evidence Center.) The location of the paging file was specified at Win-R > SystemPropertiesAdvanced > Advanced tab > Performance > Settings > Advanced tab > Virtual memory > Change > uncheck “Automatically manage paging file size for all drives” > select a drive > select its desired configuration > Set. For drive C, the setting might be System Managed Size. When I tried to set it to a tiny value, I got a message:
If you disable the paging file or set the initial size to less than 800 megabytes and a system error occurs, Windows might not record details that could help identify the problem. Do you want to continue?
- For all other drives, it seemed the choice should be No Paging File, to prevent drive C material from going to other drives that may be less secure. On my desktop computer, the System Managed Size setting produced a recommended size of about 4GB, but also said that, as just noted, I had a massive currently allocated size of about 25GB. Microsoft indicated that the Win10 paging file could grow to 3 x RAM — which, since I had 24GB RAM installed, would be 72GB. Mine apparently stopped at 25GB because Microsoft said there was also a limit of volume size (in this case, an unformatted total of more than 200GB for my drive C) divided by 8. It seemed that the system might be giving me such a huge page file just because I had a lot of RAM and a fair amount of free space on drive C. If the system was set (as in my case, via Win10RegEdit.reg or Ultimate Windows Tweaker > Customization > File Explorer tab) to delete the paging file at shutdown, a smaller paging file could make for a much faster shutdown. A smaller paging file would also store less data from drive D that might be exposed to an attacker who obtained access to drive C. TechTarget (2018) seemed to concur that, on a Windows 10 x64 system, the paging file could be set in the range of roughly 2-4GB, depending on RAM. It appeared that some programs might not function properly with an excessively small or nonexistent paging file, though there were also reports of users faring well with such settings. TechTarget did not echo WinHelp’s (2017) seemingly mistaken claim that the recommended 4GB manually set paging file size would impose excessive wear on an SSD. TechNet (2015) said that, in modern systems, “the logic should be: the more RAM you have, the less you need paging file.” Russinovich (2008) recommended a procedure for estimating the optimal pagefile size. 
In lieu of that procedure, I decided to try something like the 4GB that seemed to be working for others — more precisely, the 3096MB (initial and maximum) value recommended for my system in the Windows 10 Virtual Memory dialog. I had noticed some disk thrashing and some struggle in loading my many open tabs, and hoped that, as others reported, this change would help. It didn’t. To the contrary, I noticed an immediate and substantial impairment in my (Chrome and Firefox) browsers’ ability to handle many open tabs. Apparently Windows was actually using all that pagefile space. But later, with only an 800MB maximum pagefile size, I had no such delay. Possibly the change was due to intervening improvements in software.
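The growth limits mentioned above reduce to simple arithmetic. A sketch, assuming (per Microsoft’s figures cited above) a ceiling of 3 × RAM, further capped at volume size ÷ 8; max_pagefile_gb is a hypothetical name:

```python
def max_pagefile_gb(ram_gb: float, volume_gb: float) -> float:
    """Automatic pagefile growth limit: the lesser of 3 x RAM and volume / 8."""
    return min(3 * ram_gb, volume_gb / 8)

# My desktop: 24GB RAM and a >200GB drive C yields the ~25GB cap observed above.
print(max_pagefile_gb(24, 200))  # 25.0
```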
- Hibernation File. Like pagefile.sys, hiberfil.sys would be as safe as drive C itself, assuming I did not relocate the hibernation file to some other drive. Many of the other observations about the paging file (above) were likewise applicable to the hibernation file. The VeraCrypt documentation added one comment: if drive C was not encrypted, they recommended disabling hibernation “for each session during which you work with any sensitive data and during which you mount a VeraCrypt volume.” Possibly they meant “or” rather than “and”: presumably they were concerned that hiberfil.sys might store potentially ephemeral RAM contents including either VeraCrypt keys (for e.g., drive D) or sensitive data. Passware seemed to say that keys to an encrypted volume could be obtained from hiberfil.sys even if the computer was shut down, if the encrypted volume was not dismounted before its most recent hibernation.
- Shutdown. To ensure that volumes were properly dismounted at shutdown, I wrote a batch file called CUSTOMEND.BAT, with a VeraCrypt force dismount command and a shutdown command allowing enough time for that dismount; and I placed a shortcut to this batch file in C:\ProgramData\Microsoft\Windows\Start Menu\Miscellany, one step away from the main Start Menu, so it would be available but not likely to be triggered by accident. I was still tweaking this batch file, but at the moment it looked like this:
:: CUSTOMEND.BAT
:: Prerequisites for proper functioning:
:: (1) May need to right-click > Properties > Security tab > give Full Control to Everyone (or a smaller group).
:: (2) Don't name this file "Shutdown," lest it be confused with the Windows shutdown command.
:: (3) Put a copy of VeraCrypt.exe in C:\Windows.
@echo off
:: Display the VeraCrypt window.
veracrypt
cls
echo.
echo.
echo This is CUSTOMEND.BAT.
echo.
echo.
echo The next step will dismount all non-system VeraCrypt mounted drives.
echo.
echo.
pause
veracrypt /f /d /q
cls
echo.
echo.
echo VeraCrypt drives are dismounted.
echo.
echo The next step will restart unless you hit Ctrl-C or close this window.
echo.
echo.
pause
taskkill /f /im veracrypt.exe
echo.
echo.
timeout /t 2
:: Miscellaneous shutdown tasks
:: Delete desktop.ini file in Startup folder, to keep it from opening in Notepad at bootup.
cd "%appdata%\Microsoft\Windows\Start Menu\Programs\Startup"
attrib -r -s -h desktop.ini & del desktop.ini
cls
echo.
echo.
shutdown /r /t 8 /c "Drives dismounted. Restarting momentarily."
- Memory (or Crash) Dump Files. The VeraCrypt documentation said that memory dump files, created to store the contents of RAM at the time of a system error, could contain passwords, encryption keys, and the contents of sensitive files. The advice here was the same as with hibernation: if drive C was not encrypted, the user should disable memory dump generation when working with VeraCrypt volumes “and” sensitive data. That adjustment was at Win-R > SystemPropertiesAdvanced > Advanced tab > Startup and Recovery > Settings > Write Debugging Information section > dropdown box > choose None.
- Defragmentation. VeraCrypt’s documentation seemed to say that defragmenting a drive could leave an ostensibly erased but recoverable copy of parts of a VeraCrypt data container (including information useful for decrypting the entire container) in a drive’s free (i.e., unencrypted) space. As a solution, the documentation recommended using a VeraCrypt partition rather than a container, securely erasing free space (below), or turning off defragmentation for filesystems containing VeraCrypt volumes. Presumably this problem would be addressed by not leaving any unencrypted space on the drive.
- Other Data Leaks. The documentation said that Windows, and programs running on it, tended to record “large amounts of potentially sensitive data” in a variety of locations (especially on drive C) other than those just discussed (e.g., Microsoft Office cache) — not to mention possible locations in the cloud. To keep such data secure, the advice was to encrypt drive C or else run the operating system from a live CD or DVD (e.g., live Linux or Windows To Go), configured to ensure that any data written to the system drive would be written to a RAM disk. Presumably they recommended a DVD rather than a USB because things could easily be written to a USB (and perhaps also to a DVD-RW). When using a live DVD, they also recommended that the user “ensure that only encrypted and/or read-only filesystems are mounted during the session,” apparently to make sure that Windows or its programs didn’t write sensitive data to any unencrypted partitions.
As a backup precaution in case an encrypted drive C was compromised, these potential avenues of data leakage suggested several possible responses:
- Minimizing Accessible Sensitive Data. It seemed that one should think of drive C as a clearinghouse for sensitive data — attracting, collecting, and sometimes distributing it. In Windows, the information in a vital spreadsheet or document would often be preserved, not only in its own file, but also in RAM, in a pagefile, in a hibernation file, and/or in the relevant program’s cache folder, and the user would typically have no practical access to or monitoring of the contents of such locations. A consummately important password could likewise be saved, not only in some of those same places, but also in a keylogger, in a password manager, and in a clipboard utility that the user might not even remember installing. These and other sorts of data could moreover be pasted in the wrong place, attached to the wrong email, or otherwise proliferated by user error or overly energetic applications. It seemed that the exposure of data to Windows tended to multiply the number of ways in which others could acquire it. In instances requiring maximum security, this observation seemed to call for the use of simple or non-Windows technology (e.g., Notepad or LibreOffice; Linux; kiosk software (e.g., Webconverger, U.S. DoD’s Trusted End Node Security); pen and paper) that would tend to limit the Windows tendency to grab and distribute data.
- Deleting Unnecessary Files. There did not appear to be a reliable plan or structured approach by which a user could be sure of identifying, for deletion, data traces of the kinds just mentioned. For example, in a writeup oriented toward securing a computer for resale without deleting its Windows installation, Notenboom provided remarks, and obtained reader responses, indicating that the user could take a number of steps to reduce the number of such files, but secure erasure of the drive (below) was ultimately the only reliable solution. Potentially helpful steps, short of drive erasure, included deleting data files; uninstalling unnecessary programs; removing all but one user account; running CCleaner (after making a secured backup image; see How-To Geek) to remove unwanted files and potentially revealing registry entries (see also searches for various filenames and other materials in e.g., O&O RegEditor); setting paging file size to zero, turning off hibernation, and then deleting pagefile.sys and hiberfil.sys; searching (with e.g., Everything) for stray filenames and types (e.g., *.doc*); and viewing and managing hidden files.
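The last step just listed — searching for stray filenames and types — can be sketched in a few lines of Python, with pathlib’s rglob standing in for a tool like Everything. The pattern list and function name are hypothetical:

```python
from pathlib import Path

# Illustrative patterns for stray user documents (extend as needed).
SUSPECT_PATTERNS = ["*.doc*", "*.xls*", "*.pdf", "*.txt"]

def find_stray_files(root: str) -> list:
    """Recursively list files under `root` matching any suspect pattern."""
    hits = set()
    for pattern in SUSPECT_PATTERNS:
        hits.update(p for p in Path(root).rglob(pattern) if p.is_file())
    return sorted(hits)
```

Reviewing such a listing before resale or disposal would at least catch ordinary data files, though (as noted above) it would say nothing about pagefiles, caches, or registry traces.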
- Securely Erasing Part or All of Drive C. The traditional advice, reflected by VeraCrypt, was to use an overwriting tool that would (perhaps repeatedly) overwrite each sector of an HDD, so as to remove residual, theoretically readable magnetism remaining from the sector’s previous contents. Such tools could overwrite the entire drive, or they could overwrite only free space (i.e., those sectors potentially containing the remnants of supposedly deleted files). VeraCrypt recommended SDelete. How-To Geek (2017) listed that among other tools for the purpose. Unfortunately, as I noted in an earlier post, these conventional disk wiping tools were not effective for wiping space on SSDs, which were becoming increasingly common for system drives. As that post indicated, the TRIM function that was supposed to clean free space on an SSD was also not necessarily reliable. In addition, to improve drive life and performance, SSDs tended to use a feature known as “wear leveling,” in which existing and/or overprovisioned free space on the SSD (provided by the manufacturer and/or added by the user — see another previous post) would be rather randomly used for data storage. The problem here was that the operating system would tell the SSD to store a certain file, but the SSD would decide where to store it; the operating system would not know where the file was actually stored; and some storage locations within an SSD might never be reached by a traditional HDD wipe command, no matter how frequently repeated. Therefore, the operating system’s command to erase sector X, a command that an HDD would execute faithfully, would be reinterpreted by the SSD. At the level of the operating system — that is, at the user’s level — there was no way of knowing what, if anything, the SSD might erase, in response to such a command. Thus, for example, in their study of what was actually erased in an SSD, Wei et al. (2011) had to dismantle the SSD and use custom-built testing hardware. 
What Wei et al. found — that “none of the existing hard drive-oriented techniques for individual file sanitization are effective on SSDs” — was still the case in recent research (Kopchak, 2016), necessitating continuing proposals to overcome the problem (e.g., Onarlioglu et al., 2018; Yang et al., 2018; Huang, 2017). Summarizing Kopchak’s video presentation, TechRepublic (2016) suggested that “pricier, more mature SSDs delete files and leave fewer traces behind than budget models.” That, however, was not what I found with a Samsung SSD. In short, there might not be any completely reliable wiping of free space on drive C on an SSD. Hence VeraCrypt’s advice: don’t use VeraCrypt on drives (i.e., SSDs) that use wear-leveling. Or if you’re going to ignore that advice, follow VeraCrypt’s recommended procedure to encrypt the drive before putting any sensitive data on it. Or if there already was sensitive data on it (and also for those looking for a secure wiping solution when they were preparing to discard their SSDs), ZDNet (2017) recommended using utilities provided by the SSD manufacturer or secure-erase methods discussed in my previous post — with the caveats that such methods did not work consistently, as one could (sometimes) detect with a tool like Recuva. (See also MakeUseOf’s (2018) suggestion to use the PSID Revert key provided on the labels of some SSDs.)
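For reference, the HDD-style free-space wipe that tools like SDelete perform can be sketched as follows: fill free space with random data, flush it to disk, then delete the filler. This is a simplified single-pass illustration with a hypothetical byte budget as a safety cap (a real tool writes until the disk is full); as just discussed, on an SSD the drive’s wear-leveling means these writes may never touch the sectors one hoped to overwrite.

```python
import os

def wipe_free_space(directory: str, budget_bytes: int,
                    block_size: int = 1024 * 1024) -> int:
    """Overwrite up to `budget_bytes` of free space under `directory`.

    Returns the number of bytes written. On an HDD this reclaims and
    overwrites previously freed sectors; on an SSD there is no such
    guarantee.
    """
    path = os.path.join(directory, "wipe.tmp")
    written = 0
    try:
        with open(path, "wb") as f:
            while written < budget_bytes:
                chunk = os.urandom(min(block_size, budget_bytes - written))
                f.write(chunk)
                written += len(chunk)
            f.flush()
            os.fsync(f.fileno())  # force the data out of OS caches
    finally:
        if os.path.exists(path):
            os.remove(path)  # the filler's sectors return to free space
    return written
```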
- Restoring a Prior Windows Installation. It appeared that some and possibly all of the foregoing issues could be addressed by restoring a prior Windows installation to an SSD that was more or less satisfactorily wiped using one or more of the methods just suggested. In this procedure, one would hope or assume that attackers would not go to the trouble of a hardware-based inspection of the flash chips on the SSD, as performed by various researchers and forensic investigators; one would use the secure-erase command and/or encrypt the drive; and then one would restore an image of a previous, similarly secured Windows system drive, created shortly after installation, when the recommended procedures had been followed, before any sensitive data was read and possibly stored on drive C, and before an attacker would have an opportunity to install malware. Ideally, the system would include as much as possible of one’s preferred software and customization, so as to reduce the time investment of re-configuring the prior installation, to bring it up to date. The key point, for present purposes, was that, at this starting point, user data would not yet be acquired and stored in pagefile.sys, in various Windows system folders, and elsewhere on drive C.
Data in RAM
After thus addressing the task of making user data on drives C and D less available to an attacker, there was the separate question of doing likewise with user data stored in RAM. This topic was already addressed in connection with cold boot attacks (above), but VeraCrypt introduced additional considerations. VeraCrypt’s Settings > Preferences contained an option to “Force auto-dismount even if volume contains open files or directories.” That option seemed to apply to the several auto-dismount options discussed above. VeraCrypt’s documentation seemed to say that forced auto-dismount would leave the contents of any such open file in RAM. Apparently RAM would contain part or all of the data files used by various programs, because “most programs do not clear the memory area (buffers) in which they store unencrypted (portions of) files they load from a VeraCrypt volume.” RAM would also contain the unencrypted master keys for VeraCrypt volumes. If a non-system drive (e.g., drive D) was properly dismounted, the documentation said VeraCrypt would erase its master keys from RAM. But if VeraCrypt was prevented from doing that due to a non-clean shutdown (e.g., system crash or reset), RAM would still contain the master keys for any non-system drive mounted during that session. In any case, VeraCrypt could not erase, from RAM, the master keys for the system drive (i.e., drive C). Aside from being recoverable in a cold boot attack, these potential RAM contents might be saved in the paging or hibernation files. Hence, users were apparently advised, in effect, to encrypt drive C, if they planned to use hibernation; to verify that hiberfil.sys and pagefile.sys, if any, were located only on drive C; to shut down following a procedure like the sequence in the CUSTOMEND.BAT file (above); and to restart the system (so as to clear RAM) before leaving it where someone could access the contents of RAM.
In summary, this section explored the risk of data exposure in RAM and on system and data drives. The risk could be summarized in the expression, “Information wants to be free,” interpreted as the observation that information naturally tends to become widely distributed (Wikipedia). The sage advice was, if you don’t want a secret told, don’t tell it. Consistent with that advice, apparently the best way to prevent Windows from perpetrating its notorious habit of distributing information promiscuously was not to use Windows. Although that might sound flippant, it was a real possibility. For instance, one could use Linux software to open sensitive documents in a Linux virtual machine in VirtualBox, running on Windows. Second-best was to use Windows with an encrypted drive C. Even then, someone might find or crack the VeraCrypt password to system drive C, or might encounter (or steal) the system after the user had already entered that password. Precautions in that case would be to keep sensitive data in locations that were few and relatively secure — to leave such data on a drive in a safebox, for instance, and to access it only in a physically guarded and Internet-isolated condition — and to venture forth with only the data needed for the errand at hand. Likewise, it would be ideal to use tools (e.g., kiosk software, Linux VMs) and applications (e.g., Notepad) that did not tend to proliferate data in multifarious directions. The user of a computer physically taken out into the world could mitigate the risk of intrusion into data stored on that machine’s data drive by setting VeraCrypt to dismount that data drive automatically when the screensaver ran, when the machine entered power-saving mode, or when the computer was inactive for a specified number of minutes.
In response to the fact that Windows tended to bring information from and about data files onto drive C, precautions included minimizing the paging file (especially where there was ample RAM or drive C was on an SSD) and setting it for deletion on shutdown; disabling Fast Startup and, perhaps, hibernation; verifying that no insecure partitions were mounted during the computing session; regularly doing a secure wipe of free space on all drives; securely erasing (or destroying) drives before selling, discarding, or otherwise making them publicly available; and possibly disabling defragmentation and memory dump generation and exploring the use of a VeraCrypt hidden operating system. The techniques of restoring a virginal drive image and/or a pristine virtual machine could also help to eliminate accumulated data from a Windows installation.
Other System Security Measures
This section discusses a few security options that were potentially important but did not seem to require extensive discussion.
Windows and Program Updates
As noted above, BIOS updates could bring new problems, and only sometimes addressed security issues. That seemed to be approximately the case for updates to Windows as well. The situation appeared less problematic with updates to third-party programs. For the most part, those seemed to improve security and functionality without introducing significant new problems.
To update third-party programs, Lifewire (2019) offered a list of freeware software updaters, among which Patch My PC (4.4 stars from 209 raters at Softpedia) was perhaps most highly ranked for those who wanted an automatic updater and Glary Utilities (4.7 stars from 150 raters at Softpedia) (go to Optimize & Improve > Software Update) was apparently quite good for those who preferred to download and retain the actual installers (see also Chocolatey, Ninite, and Windows Remix).
In the case of updates to Windows itself, Woody Leonhard at Computerworld made a career of logging various problems in Windows 10 updates and providing advice on how to postpone if not avoid those problems. The situation was evolving: each new version of Windows 10 seemed to differ in how much control it gave to users, and there were also differences between editions (in my case, between Win10 Pro on the desktop and Win10 Home on the laptop). For instance, Leonhard (2019) offered several suggestions on handling Windows 10 updates. One suggestion was to block automatic updating. On Windows 10 Professional, the steps to do that were to use Win-I > Update & Security > Windows Update > choose Semi-Annual Channel and delay feature updates (including major new updates) by at least 120 days, and quality updates by 10 to 20 days (so as to balance the security need against the risk of a bad update: Leonhard said Microsoft “usually yanks bad Win10 cumulative updates within a couple of weeks or so”). On Windows 10 Home, which Leonhard called “Windows 10 Guinea Pig Edition” for its particular notoriety in treating consumers as beta testers (as I had personally experienced), the advice was to go to Win-I > Network & Internet > for each of WiFi and Ethernet, set as a metered connection. This step would block all updates. A good procedure might be to make an image of drive C (i.e., a system backup) once a month and, once that was done, turn off the metered connection and let Windows update itself. Leonhard emphasized never clicking “Check for updates,” as Microsoft interpreted that as a request to be given all of the most recent and least tested updates. How-To Geek (Hoffman, 2017) offered advice on uninstalling and blocking specific problematic updates. To emphasize, however, things continued to change in these regards; this advice might not apply to later versions.
In a different vein, there was the problem of telemetry, commonly interpreted as Microsoft’s use of Windows 10 to spy on its users. This topic was perhaps best handled in two parts. First, on the general level of corporate ethics, Microsoft was, and always had been, patently evil and destructive, in a variety of ways, for the purpose of maximizing the wealth of its shareholders and senior employees. The question for users was not, and apparently never would be, a question of whether Microsoft cared about them. It was, rather, a deal-with-the-devil question of whether problems with Windows were worse than the alternatives that were able to survive within Microsoft’s substantial monopoly. In the spirit of this paragraph, many sources (e.g., EFF, 2016; The Register, 2017; Forbes, 2015) discussed the various ways in which Microsoft continued to lie to and manipulate its users, with respect to various aspects of Windows 10 telemetry. Even a shill like ZDNet (Bott, 2016), after saying that “Telemetry is not a four-letter word” and accusing critics of “a misunderstanding of the basic technology,” had to admit that users might not be aware that, on its higher settings, Windows 10 telemetry did indeed enable Microsoft to collect business and personal information, including “any user content that might have triggered” system function problems. Later, Bott backtracked on this point, falsely characterizing this deliberate data collection as mere “possible inadvertent leakage.”
But in the dance-with-the-devil spirit that kept us all continuing to use this Rosemary’s Baby, Microsoft’s ethics were generally treated as de facto irrelevant, and the only functional question was how, why, and to what extent a user could and should take action to restrict telemetry. A few years after the earlier article, ZDNet (Bott, 2018) admitted that, for each individual, there did exist a “creepy line,” beyond which Microsoft’s knowledge of the user’s private matters would be very unsettling. To help the user draw that line, Bott highlighted several points of possible adjustment, including these:
- To prevent Win10 from connecting automatically with open hotspots that Microsoft has marked as known and trusted: Win-I > Network & Internet > Wi-Fi > turn off options pertaining to hotspots, as desired. To verify the status of a network to which the machine is currently connected, go to Network & Internet > Status > Change Connection Properties.
- Control the location information sent to Microsoft via Win-I > Privacy > Location.
- Control the personal information (including calendar, contacts, location, and browsing history) uploaded for Cortana: for detailed information regarding settings on various devices, Bott pointed to a Microsoft webpage. Perhaps its key statement: “If you would prefer not to send any character data to Microsoft, you should choose not to use Cortana.” Some of the primary settings were at Win-I > Cortana > Permissions & History. To turn off Cortana completely, Bott (2016) and others recommended a registry edit simplified in Win10RegEdit.reg.
- Go down the list in Win-I > Privacy to enable or disable various Windows 10 apps’ access to personal information.
To set telemetry to the basic level, involving least disclosure of personal computing activity, How-To Geek (Hoffman, 2017) advised using Win-I > Privacy > Feedback & diagnostics > Basic. In an earlier article, Hoffman provided a more exhaustive list of ways to restrict Microsoft’s logging activities. Elsewhere, Hoffman advised against using third-party tools that attempted to restrict telemetry more dramatically but, in the process, posed a risk to system functioning and stability. (Ghacks offered a list of such tools.) Microsoft did say, however, that a simple registry tweak (captured, again, in Win10RegEdit.reg) could disable telemetry.
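That registry tweak is commonly given as the AllowTelemetry policy value. A .reg sketch follows; my reading of Microsoft’s documentation was that the value 0 (“Security”) was honored only on Enterprise and Education editions, with Home and Pro treating it as 1 (Basic):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\DataCollection]
"AllowTelemetry"=dword:00000000
```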
Disable Microphone and Webcam
WonderHowTo recommended physically disconnecting microphones to prevent hackers from using the system’s plugged-in microphone to hear everything the user might say. Where that was not feasible, TenForums offered several ways to disable the microphone until the user chose to re-enable it, including Win-R > devmgmt.msc > expand Audio Inputs and Outputs > right-click on the relevant device (e.g., microphone, headphones with microphone) > Disable device. Similarly, for the webcam, the advice was to physically unplug it, disable it (under Cameras) in devmgmt.msc, or at least cover it with tape, a Band-Aid, or a Post-It, and follow safe webcam practices (e.g., use the webcam only over secure connections, don’t entertain unsolicited telemarketing calls).
TechAdvisor noted that hackers might be able to re-enable devices disabled in devmgmt.msc. Uninstalling the device in devmgmt.msc might provide a solution, but might entail hassles when deciding to reinstall. Gecko & Fly noted that certain utilities (e.g., ShieldApps’s Webcam Blocker, SpyShelter (above)) and Internet security suites (e.g., Bitdefender, Kaspersky) offered the ability to turn off webcams.
Backup
Wikipedia defined backup as “the copying into an archive file of computer data so it may be used to restore the original after a data loss event.” How-To Geek (2018) noted the distinction between a backup of the Windows system installation and a backup of user data. Both were advisable. From a backup perspective, data files were best kept on some drive other than drive C. (Parts of Win10RegEdit.reg moved standard Windows 10 data folders to drive D.) That way, the user would lose no data even if the entire Windows installation suddenly failed to function (due to e.g., malware). Changes to drive C were infrequent but, especially at first, could involve hours of work. One would not want to have to redo all that just because one rogue program screwed up the system. So drive C images were highly advisable, especially before installing potentially problematic software and after time-consuming system modifications. System restore points were also useful but were not reliable.
Backing up system and data drives tended to call for two different types of backup. Typically, drive C was backed up in a single compressed image file. This file would have to capture complex relationships among Windows system files, some of which would always be in use whenever Windows was running. As such, one way to image drive C was to use a bootable USB drive to run a drive imaging tool when Windows was not running. In the system setup contemplated in this post, that method (which I used for quite a few years) might require turning off Secure Boot in the BIOS, in order to allow the USB drive to boot. Even then, the entire (e.g., 150GB) size of an encrypted drive C would appear as a single file, requiring a huge (i.e., ~150GB) and incompressible sector-by-sector backup — even if the disk space filled by the Windows installation was actually only 40GB.
The alternative was to make the drive C image while Windows was running, using Windows 10 software capable of doing that. This much more compact (typically compressed) backup image would be saved to some drive other than drive C. If that output drive was not encrypted, and if the backup software did not offer to securely encrypt the resulting drive image, the contents of the image (potentially including passwords and other sensitive information saved in its paging file or in other files on the backed-up drive C) would be available to an intruder, at least until the user ran something like 7-Zip to encrypt it. For maximum flexibility, it might therefore be advisable to save a drive C image that consisted of Windows 10 in its virgin state, periodically updated and perhaps with new software installed, before the installation was used to access any user files.
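The leakage concern can be demonstrated in miniature: a byte-for-byte, unencrypted image (here simulated by copying a single file) still contains any plaintext the source held, discoverable by a simple byte scan. The filenames and the contains_plaintext helper are hypothetical:

```python
import os
import shutil
import tempfile

def contains_plaintext(image_path: str, needle: bytes) -> bool:
    """Scan a backup image for a plaintext byte string."""
    with open(image_path, "rb") as f:
        return needle in f.read()

# Simulate a sensitive file and an unencrypted "image" of it.
workdir = tempfile.mkdtemp()
source = os.path.join(workdir, "notes.txt")
image = os.path.join(workdir, "backup.img")
with open(source, "w") as f:
    f.write("password: hunter2\n")
shutil.copyfile(source, image)  # an unencrypted image preserves plaintext

print(contains_plaintext(image, b"hunter2"))  # True
```

Forensic tools scan real backup images the same way, which is why encrypting the image (or its destination drive) mattered.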
Of course, if the backup image was encrypted, or was stored on an encrypted drive, the user would need to be able to mount and decrypt that drive and/or that image, in order to restore the backup image to drive C, replacing the old (i.e., presumably nonworking or rejected) drive C installation. The resulting new (i.e., restored) drive C installation would then not be secured until the user ran VeraCrypt to encrypt it. If the image was decrypted on another drive before being restored to C, it would presumably also be necessary to ensure that the other drive, or at least its free space, was satisfactorily wiped — at least if the image contained any sensitive data (e.g., passwords). Finally, the software making the backup would need to provide a bootable version of itself, sufficient to restore the backup image when Windows was not running — because, of course, the restoration would wipe out everything on drive C.
A search for recommended drive C imaging software led to a How-To Geek (2018) tutorial covering the System Image Backup tool in Windows 10. The advice was, in effect, to use Win-R > control > Backup and Restore (Windows 7) > Create a system image > choose desired destination. That produced an error:
Windows Backup
Windows could not find backup devices on this computer. The following information might explain why this problem occurred:
STATUS_WAIT_1 (0x80070001)
Close Windows Backup and try again.
Closing and trying again did not fix it. A search led to multiple sources essentially advising, in the words of one Reddit post, “Don’t use the built-in system imaging tool. It’s way too picky about things and can really f*ck you over during restore if you don’t have the exact drives you had before.” That post and others recommended Macrium Reflect. TechRadar (2019) recommended some cloning tools, but merely cloning (i.e., copying) disks wasn’t imaging. Raymond (2017) noted that, from a long-term perspective, Acronis had long been the gold standard; but my impression was that, a few years back, its reputation had slid on account of buggy software. A search led to a number of sites mentioning AOMEI Backupper Standard (free, 4.5 stars from 314 raters at Softpedia; see feature comparison with paid versions), which I had been using for the last few years. Their instructions said that, as I had found, drive imaging could proceed from within Windows 10; the restore process could commence within Windows as well, though it would require a restart to complete; and restore was also possible using bootable media — which one would want to prepare before the system went bad.
With a good drive C backup plan in place, it would also be necessary to back up the data drive(s) (e.g., drive D). Data drives were different from the system drive. It was necessary to back up drive C as a single unit because its files interacted with one another. By contrast, drive D files tended to be largely independent from one another: some would change, between one backup and another, but many would stay the same. It was not usually necessary to back up all of drive D at once. Thus, the user could opt for incremental or differential backups to update a full backup. The other consideration was that drive C did not usually change in important ways from one day to the next, whereas last week’s version of a data file could be profoundly inferior to today’s. Hence it would be advisable to make those incremental or differential backups frequently. Having been burned by black-box automated backup software, I had worked out a more hands-on data backup scheme; but certainly there were many automated tools for the purpose.
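The incremental idea can be made concrete with a minimal, hypothetical Python sketch: copy into a backup folder only those data files modified since the last backup run, judged by file modification times. Real backup tools also handle deletions, renames, and verification; this only illustrates the principle.

```python
import shutil
from pathlib import Path

def incremental_backup(source, dest, last_backup_time):
    """Copy files under `source` modified after `last_backup_time` (a Unix
    timestamp) into `dest`, preserving the relative folder structure.
    Returns the list of source files copied."""
    source, dest = Path(source), Path(dest)
    copied = []
    for f in source.rglob("*"):
        if f.is_file() and f.stat().st_mtime > last_backup_time:
            target = dest / f.relative_to(source)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)   # copy2 preserves timestamps
            copied.append(f)
    return copied
```

A full backup is the same call with `last_backup_time=0`. A differential scheme would always compare against the time of the last full backup, so each differential is self-contained; an incremental scheme would compare against the time of the most recent backup of any kind, producing smaller but chained backups.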
There remained the question of where to store one’s backups. If the user preferred cloud storage, it seemed there would be two approaches: either make a backup on the computer and then preserve a copy in cloud storage, or use an online backup service. In the latter category, PCMag (2018) favored iDrive, Acronis, and SOS; TechRadar recommended iDrive, Backblaze, Carbonite, and CrashPlan; How-To Geek (2018) and Wirecutter (2019) preferred Backblaze, with iDrive as the primary alternative; and IGN (2019) agreed, but in the opposite order of priority. PCMag’s favorites (including iDrive) offered at most 2TB storage, whereas storage was unlimited at Backblaze ($60/year).
Lifewire (2018) contended that cloud backup was superior to local (e.g., HDD) backup, because cloud backup was safe from fire, flood, theft, hardware (e.g., disk) failure, and all the other things that could happen to a computer’s internal drive or to an external drive connected by, for instance, a USB cable and stored on a shelf. These remarks seemed odd to me. Cloud backup was not intrinsically safe from fire, flood, theft, or hardware failure. It would be safer if its data was saved in several locations, but that did not always happen. (See e.g., Computerworld, 2015, reporting on lightning strikes that caused Google to lose some of its customers’ cloud-stored data; Genaro, 2018, noting that Tencent Cloud Storage lost a company’s entire database.) A user who was on the road, or who did not maintain an offsite backup of his/her data, or who lived in a high-crime area, or who insisted on entrusting everything to a single aging HDD, might be strongly encouraged to use cloud backup. On the other hand, a user who was worried about massive data breaches and other signs of imperfection (not to say incompetence or lack of focus) at major data corporations that supposedly existed in order to provide security (see e.g., LastPass, below) might reasonably suspect that his/her data would be more sensibly kept where s/he could personally keep an eye on it. And then there was the question of what would happen if the cloud company holding the data suddenly went bankrupt, or merged into some other company, or experienced other corporate disruption, or if the Internet connection stopped working due to some problem with AT&T or some other communications company in the data chain: would the user’s data be unavailable until someone picked up the pieces?
In those remarks, Lifewire (2018) neglected to mention drawbacks of cloud backup before glibly recommending switching to it “for most people in most situations.” In Lifewire’s answers to various questions on other webpages, however, more of the truth came out. On one such page, the same Lifewire writer (Fisher, 2018) said there were too many variables to know how long the initial backup might take, but admitted that his initial backup of “a few hundred GB with Backblaze … only [sic] took 3 days.” Backing up that quantity of data would take only a few hours on a local drive. Suppose, then, that the user rearranged his/her files and folders, so that things were now in different places, some with different names. Oops: another three days — or three weeks, if the user had several TB to back up. Or longer: according to Cloudwards (2018), “One of the most frequent reader complaints we hear is of online backup solutions taking not just days or weeks to complete initial backups, but even months.” That’s not backup in any meaningful sense; that’s more like a headache. And while that massive data transfer was underway, Fisher seemed to say, Internet speed might be slowed down for ordinary browsing and other purposes. Fisher’s discussion of security did not address the problem that data stored in a cloud might always be vulnerable to rapid online attack — whereas, toward the other extreme, data stored on a local external drive would not be vulnerable to such attacks at all except when the drive was connected.
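The gap Fisher described can be checked with simple arithmetic. The sketch below uses hypothetical but plausible numbers — a 10 Mbps upload link for the cloud case versus a 100 MB/s (i.e., 800 Mbps) local external drive — to estimate how long moving 300 GB would take in each case.

```python
def transfer_days(size_gb, rate_megabits_per_s):
    """Days needed to move `size_gb` gigabytes at `rate_megabits_per_s`."""
    bits = size_gb * 8e9                      # 1 GB = 8e9 bits (decimal units)
    seconds = bits / (rate_megabits_per_s * 1e6)
    return seconds / 86400                    # 86,400 seconds per day

# 300 GB over a 10 Mbps uplink vs. a 100 MB/s (800 Mbps) local drive:
cloud_days = transfer_days(300, 10)           # roughly 2.8 days
local_hours = transfer_days(300, 800) * 24    # well under an hour
print(f"cloud: {cloud_days:.1f} days, local: {local_hours:.1f} hours")
```

So a roughly three-day initial cloud backup for “a few hundred GB” is exactly what the arithmetic predicts on an ordinary residential uplink, while the same data moves to a local drive in a fraction of a day.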
In a similar vein, TechRadar (Zoir, 2018) said it might be “time to ditch your external hard drive” because such drives were “just not that reliable. … This means a hard drive is pretty much good for two or three years, and then you can kiss it, and your data, goodbye.” That was ridiculous; one would think Zoir would know better. A sensible user would regularly monitor drives and replace them at a certain age or number of hours of use, or at least when they showed signs of problems. And how did she think cloud storage companies stored all that data? Surprise: hard drives! It was just their drive rather than yours. Yes, you were paying them to make sure those drives were always in excellent condition, always backed up in case of disaster. But as just indicated, nothing was for sure. A person who was not going to pay attention and invest some time to do things well would be at risk of screwing up any backup scheme. Cloud storage or backup had its strengths, but it was not a panacea.
It seemed irresponsible to encourage users to put all their eggs in one basket by relying solely on cloud backup, as those Lifewire and TechRadar writers did. The better advice, according to cloud backup company Carbonite — along with other cloud backup companies (e.g., Backblaze, Acronis) and many other sources — was to follow something like the classic 3-2-1 rule, “a best practice for backup and recovery.” That rule called for keeping three copies of data, of which two were kept on different types of storage and one was offsite (e.g., cloud storage; an HDD stored at another location, ideally not within the path of the wildfire, flood, hurricane, or tornado that would hypothetically wipe out home or office). There were several options for those “two different types” of storage. Along with the cloud, Carbonite said that internal and external HDDs counted as two types. In a previous post, I covered a different option, suitable for files that wouldn’t change often and also, perhaps, as a fail-safe against inadvertent drive overwriting: encrypted large-capacity Blu-ray discs. Another option, at least for large installations: magnetic tape — which, despite its drawbacks, was still used by major players (e.g., Microsoft, possibly Google) for virtually unhackable and relatively affordable huge (i.e., up to petabytes of) data storage. Another medium that got little respect but could still provide the best solution for some needs: paper. Whatever the medium, the 3-2-1 model could fail if distraction, laziness, or lack of time commitment resulted in one form of backup being done improperly, on the theory that, after all, there were still two others. The user would presumably want to choose his/her three methods of backup with an eye toward how much time and expertise s/he was prepared to devote to, or develop in, the task.
For many, cloud backup would be an important part of the picture — not as the sole or even primary backup destination, bogged down in capturing hundreds of gigabytes of user data, but rather as an always-on backup focused on keeping up with the most recent work, especially in the case of a laptop that would often be at risk of theft.
Authentication (i.e., MFA) for backup seemed likely to work the same as authentication for other purposes. If the backup was on a local drive, it would presumably be encrypted with VeraCrypt, posing the same security issues as VeraCrypt on any other drive. If the backup was in the cloud, there would be a need for MFA while logging into that service. For example, 2FA options at Backblaze (2017) apparently included SMS and authenticator apps but not hardware tokens, while some cloud services still apparently offered no MFA options at all.
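For context on the authenticator-app option just mentioned: TOTP codes (RFC 6238) are computed locally from a shared secret and the current time, so — unlike SMS codes — nothing has to be transmitted to the phone at login time. The following stdlib-only Python sketch shows the computation; the secret shown is the published RFC test key, not anything real.

```python
import hmac, hashlib, struct, time

def hotp(secret, counter, digits=6):
    """HMAC-based one-time password (RFC 4226)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, for_time=None, step=30, digits=6):
    """Time-based one-time password (RFC 6238): HOTP over a 30-second counter."""
    t = time.time() if for_time is None else for_time
    return hotp(secret, int(t) // step, digits)

# RFC 6238 test vector (SHA-1, 8 digits): Unix time 59 -> "94287082"
print(totp(b"12345678901234567890", 59, digits=8))
```

Because server and app each compute the code independently, intercepting one code is of little use after its 30-second window closes — which is why authenticator apps are generally considered stronger than SMS, though still weaker than hardware tokens against real-time phishing.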
Without denying the vested interests of data recovery service Ontrack, that service was not obviously mistaken in suggesting that “public cloud storage … opens up a deep hole of security concerns.” MakeUseOf (2017) asked, “Are cloud storage services peeking at your data? Or even selling your data? We can’t know for sure.” OwnCloud said,
Today, most people have their digital life stored on online servers from various companies. … But you might wonder: “Where is this data? Who has access to it?” These questions have become more pressing since the revelations that our own government is spying on us, and collecting and snooping into virtually all of our online communications. We know that foreign and our own governments have access. Criminals and large corporations, too.
Generally, placing data into the cloud would subject it to vulnerabilities of local attack (e.g., keyloggers, shoulder surfing) as well as online attacks (e.g., remote password cracking). Placing data in the cloud also created the possibility that one’s data might be preserved in that location longer than the user would intend. Surely that would not happen often; surely it would happen sometimes — due, perhaps, to imperfect cloud service compliance with data erasure expectations, or to the bankruptcy scenario mentioned earlier, or to lax standards in the country where the data center was located. There seemed to be a heightened risk that at least one such country could have that sort of vulnerability because, according to TechTarget (2018), “Cloud services … [typically copy data] across servers located across the country or around the world.” One could not be confident that cloud storage companies, encountering financial difficulties resulting in layoffs in a rapidly evolving tech marketplace, would consistently focus their remaining personnel on proper disk wiping. The day could come when suddenly the lights went out, the employees were not there anymore, and the next people to work in that place (or the liquidator who cleaned it out) might not know or care how things were done previously, while the user’s data continued to sit there on a hard drive. Even if those drives were encrypted (which was unlikely, as it would impose a significant performance penalty), the potential of quantum computers (above) raised the prospect that, at some point in the not-too-distant future, people could be worrying that they may have exposed more data than necessary to cloud storage.
Another data backup and/or storage variation that could make sense for some purposes: the personal cloud. Elaborating on that term, Wikipedia indicated that the user could set up several different types of hardware that would make his/her files available for him/her to access remotely (i.e., online, while working on, e.g., a laptop somewhere). Cloudwards (2018) distinguished a “personal cloud storage system” from “a subscription service like Dropbox.” The personal alternative, they said, had advantages of speed, freedom from subscription costs, and privacy: “Services like Dropbox and Google Drive may claim to respect your privacy, but they must share your information with government agencies if required to do so.” They said the personal alternative could have the disadvantage of subjecting one’s storage hardware to environmental threats (e.g., heat, humidity, theft) and typically could not match the backup protections in place at subscription companies. A search for further enlightenment led to a Key Microsystems (KM) article identifying several personal cloud hardware options, with varying degrees of vulnerability to attacks discussed elsewhere in this post, depending on the specifics of the setup:
- Network-attached storage (NAS). This was the approach KM recommended. MakeUseOf (MUO, 2017) characterized NAS as an external drive system (i.e., a single drive, or an array of drives for greater speed and/or more protection against drive failure) networked (usually via Ethernet, sometimes via Wi-Fi) with a computer and a router: “[I]f you set up your network for remote access, you can access a NAS from anywhere as long as you have an internet connection, effectively replicating cloud storage functionality without the privacy-related downsides.” KM, providing setup details (plus links to several software setup how-to videos), said that a NAS would have an advantage of providing “excellent safety against data loss” (i.e., if one hard drive failed, one or more others in the NAS would still have the data). The cost of a NAS would be at least several hundred dollars plus drives (see also TechRadar, 2019). MUO linked to a separate article explaining that NAS was great for providing additional storage, facilitating collaboration, supporting a private cloud, enabling automatic remote backups to protect against laptop damage or loss, and offering a private media server. On that last point, ExtremeTech (2018) went a step further, walking through the issues that would confront someone who decided to use a NAS to host a photo sharing cloud made available to other users.
- DIY NAS box. Instead of buying a NAS, KM said that an old desktop computer could be repurposed to function as a homemade NAS: it would be “more complicated to set up” but superior in terms of flexibility and cooling (which would extend drive life). This was, first of all, a process of assembling a computer — which, these days, was pretty simple. MUO (2013) said it would suffice to use a basic motherboard, CPU, power supply, minimal RAM, and a large, reliable 5400 RPM HDD. (I would add a battery backup, to keep the system running through the occasional brief power interruption.) Windows would work, but it “costs money.” The DIY Life (2017) explained how to use a Raspberry Pi ($100, not including drives and their docks or enclosures, apparently limited at present to four USB 2.0 ports, which would still presumably keep up with remote (e.g., public Wi-Fi) transmission speeds). The first-choice operating system would be FreeNAS, “a free, open-source project that is fairly easy to use and … built for NAS specifically.” A RAID setup, if added, would supply reliability like that of the store-bought NAS (above). The machine’s BIOS would be set to Wake-on-LAN so that it could sleep until it was accessed.
- Router-based solutions. KM said that a budget option would be to connect a hard drive directly to a router and, as above, provided a link to a how-to video. KM recommended against this option, characterizing it as “a lot of work to get it to run correctly.” MUO described how to modify a router’s software to set up a personal VPN server, in which case the remote user would apparently be contacting the network as a whole, including any networked devices (e.g., NAS).
- Personal cloud services. Cloudwards (2018) said it was also possible to set up one’s own cloud on rented hardware. In another article, they elaborated that this could require use of a personal cloud storage service, among which their favorite was Nextcloud (which seemed to have a free home version), with a setup process that could be “difficult.” (See also Gecko & Fly, 2019; Hongkiat, 2018.) They advised that basic cloud storage (see also zero-knowledge cloud service) was a simpler alternative.
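The Wake-on-LAN arrangement mentioned in the DIY NAS option above works by having some always-on device on the home network broadcast a “magic packet” — six 0xFF bytes followed by the sleeping machine’s MAC address repeated sixteen times — to which the NAS box’s network card responds by powering the system up. A minimal Python sketch (the MAC address shown is hypothetical):

```python
import socket

def build_magic_packet(mac):
    """Build a Wake-on-LAN 'magic packet': 6 bytes of 0xFF followed by the
    target MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac, broadcast="255.255.255.255", port=9):
    """Send the magic packet as a UDP broadcast on the local network."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(build_magic_packet(mac), (broadcast, port))

# Hypothetical MAC address of the sleeping NAS box (packet built, not sent):
packet = build_magic_packet("00:11:22:33:44:55")
print(len(packet))  # 102
```

Waking the box from outside the home network is a separate problem: the broadcast does not cross the Internet, so remote access schemes typically have the router or another always-on device send the packet on the user’s behalf.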
Yet another approach to backup could involve treating the remote laptop as a sort of client of the desktop computer back home or at the office. Regardless of whether the home (or office) desktop was connected to a NAS or a cloud storage service or had other backup arrangements, the laptop might use one of the remote programs mentioned earlier (e.g., TeamViewer, ConnectWise) to work with software and files on the desktop machine. TeamViewer contended that this would be secure and more efficient on public Wi-Fi than using VPN. So, for example, the laptop might have a version of Microsoft Excel installed, but the user might prefer to use the version of Excel installed on the desktop, and the desktop’s larger storage capacity, to process a large spreadsheet requiring intensive computation. In this scenario, there could be a vulnerability through imperfections in the VPN connection, but perhaps not on the laptop’s own HDD or SSD, which would not contain a copy of that large spreadsheet. The laptop might still use cloud or other backup for its own contents, but the desktop would be the primary focus of backup effort and/or hardware. Further investigation would be necessary to determine whether, for instance, the use of TeamViewer for Chrome OS would facilitate such computation from a minimally expensive Chromebook. In a Chrome Remote Desktop how-to, Computerworld (2019) said, “It isn’t the most elegant way to get around a computer — and you probably wouldn’t want to use it for any sort of intensive work — but it can be handy for quick-hit tasks.” (See also Windows Remote Desktop.) Presumably the situation would be similar for a decision to carry, instead, an inexpensive old laptop running TeamViewer for Linux on a lightweight Linux distribution.
To summarize, this section discusses security issues pertaining to updates and backup. The chief concerns with respect to updates include controlling them to maintain system stability and revisiting security settings to control telemetry. Backup entails system drive images and data drive backups. Options for the latter include local HDDs, public and personal cloud backup, and a server-client arrangement between a remote laptop and a home desktop.
Accounts
During initial installation and in later system configuration, the situation facing me would vary according to the type of account I was using. There were several things to take into consideration here.
Administrator vs. Standard User Accounts
The default Windows installation used an administrator account. An administrator could do things that a regular user was not normally permitted to do. The admin account was convenient, for purposes of system setup and program installation: I didn’t have to re-enter my password every time I wanted to install a program or tweak something. But as Windows Central pointed out, using an admin account was not ideal from a security perspective; it made things easier for an attacker. According to a study cited in Forbes, 93% of Win10 vulnerabilities could be mitigated simply by removing administrator privileges. Thus, at some point, no later than the conclusion of system installation and configuration, I would want to switch to using a standard user account.
To switch to using a standard user account, I could create a new standard user, or I could convert the existing admin account via Win-R > control > User Accounts > select the account to be changed > change your account type > Standard > Change Account Type. I felt I would probably want to keep an admin account and create a separate standard user. When installing software, if given a choice, I would want to install software for all users, not just for the current user. That would save me from having to reinstall the same software for each account separately.
There was also the suggestion to create a second administrator account, in case the first one became corrupted. To do that, I could run control userpasswords2 > Add > Ray (Admin2). That added the new account to the list as a standard user. Next, I selected it > Properties > Group membership > Administrator > OK. Then, if desired, I could restart > log in as Ray (Admin2) > go back to control userpasswords2 > remove defaultuser0. Then delete the C:\Users\defaultuser0 folder. But some claimed this account should be left alone. Microsoft (2019) said it was “a best practice to disable the Administrator account when possible to make it more difficult for malicious users to gain access to the server or client computer.” The linked advice was to go into Win-R > compmgmt.msc > System Tools > Local Users and Groups > Users > right-click on the account > Properties > check Account is disabled.
Beyond the default administrator account, TenForums (Brink, 2014) explained that I could also enable the built-in elevated administrator account. It seemed the main advantage of the elevated admin account was just to avoid all User Account Control (UAC) prompts, which otherwise would pop up every time I tried to install a program or make other adjustments. I did not find those prompts bothersome — for an administrator, unlike a standard user, they did not require entering a password — and therefore I saw no real need to enable the elevated account. As noted in a previous post, various sources recommended against using it. If I did want to enable it, however, the advice was to run net user Administrator /active:yes at an elevated command prompt (Win-R > cmd > Ctrl-Shift-Enter); and to disable it, net user Administrator /active:no. I could verify that this worked because the active:yes command gave me an account named Administrator (i.e., not Ray) in Win-R > control > User Accounts > Manage another account. Running lusrmgr.msc > Users would give me a full list of accounts.
Despite the potential risks, I expected to do most of my software installation while running as administrator. In some cases, I might have to reinstall that software in the standard user account. To minimize the instances of that, when given the choice, I would install software for all users, rather than just for the current user. Or, if I really didn’t intend to use the administrator account often, I could proceed the other way around: switch to the standard user account before installing programs, and again choose to install for all users, so as to make most software available to the administrator account as well. To create a backup of my fully configured standard user account, potentially capable of being saved and restored to a later Win10 installation, a search led to (e.g., 1 2) assorted solutions of varying complexity. In brief review, the possible solution most appealing to me was that offered by the free version of Forensit’s User Profile Wizard.
Local vs. Microsoft Accounts
During installation, the Windows 10 installer defaulted to setting up a “Microsoft” account. The alternative was to set up a “local” account. The terminology here could be confusing: sometimes Microsoft also referred to a local account as a “Windows” account. So you could set up a Microsoft account, or you could set up a Windows/local account.
A local account basically meant a traditional kind of login, where the password is saved on the specific computer and not checked online. (Though it would be insecure to do so, the user could also opt to set a local account to log in without a password.) By contrast, a Microsoft account would require me to log in using the password that I used to log into any other Microsoft device or service (e.g., Skype, Hotmail). Microsoft’s concept here seemed to be one of convenience rather than security: a single Microsoft account password would provide entry, and would reportedly be useful and in most cases required, not only for this Windows installation, but also for some items (e.g., apps) from the Microsoft store, for syncing Win10 settings between devices, for logging into OneDrive, for using FindMyDevice, for at least some Cortana functions, for family features, for Xbox and Skype logins, for some versions of Microsoft Office, for Hotmail and Outlook — and (in at least some versions of Windows) for Microsoft drive encryption.
In terms of convenience, logging in via a Microsoft account could simplify use of and interaction among the various devices and services just listed. It could also have other benefits. For one thing, Digital Citizen informed me that I could use the Microsoft account to link to my Microsoft family. But I didn’t have one. I also thought synchronization of settings and some programs among Windows 10 computers would be a good reason to use a Microsoft account. I did find it convenient. But it wasn’t a giant leap forward. How-To Geek (2017) said a Microsoft account would also facilitate reactivation of the Windows installation (at Win-I > Update & security > Activation) after making hardware changes. So far, I hadn’t noticed any problems with a local account in that regard. A Microsoft account login would have potential use on a mobile device, once I had turned on the Find My Device setting (below). In that case, if my computer was stolen, I could log into my Microsoft account on another device and perhaps locate it and/or lock its data. But on a desktop computer, this possibility was not especially compelling.
In terms of security as distinct from convenience, setting up the computer with a Microsoft account seemed like a step backward. There was the problem that use of a Microsoft account meant conveying a potentially substantial amount of potentially sensitive information to Microsoft (MakeUseOf, 2016). In addition, anyone who could intercept, observe, or find the Microsoft password would have access to all of the tools and services listed above (e.g., Skype, Hotmail), on all machines using that account and password. As Yubico (2018) explained, password interception could succeed even against an experienced user who was not paying attention at the crucial moment. For instance, a hacker might acquire it through a phishing attack, if the user was tricked into entering his/her Microsoft account username and password into a fake Microsoft website. Another problem was that companies were not always immediately aware when their servers were hacked, and the resulting lists of usernames and passwords could be explored for possible payoffs until the breach was discovered and the passwords were changed. The risk of password interception or observation would be multiplied because the user might be repeatedly using that same password in a variety of settings and on multiple devices.
The risk of password interception was further enhanced by the fact that the password entered by the user would be sent online to Microsoft’s computers for verification. But, for reasons discussed below, that particular risk could be mitigated by opting to set up a PIN during Windows installation. Also, by default, the user’s Microsoft account name — in my case, a Hotmail address — would be displayed on the Windows lock screen, but it was possible to turn that off via Win-I > Accounts > Sign-in options > Privacy > Show account details.
As Sarkar pointed out, if I used a Microsoft account, I would be best advised to give it a long and complicated password, “because it can be hacked at 24/7 from the Internet”; but using a password of that nature to log into Windows “makes unlocking your computer needlessly complicated.” The response was that I could use a PIN instead of that long password to unlock my computer. But this would not change the fact that detection of one password would still enable an intruder to open multiple devices and services. As long as it was the same Microsoft account, the password would still work, even after the PIN had been assigned. The potential for exposure of private data was illustrated by the comments of a tech support guy who said that logging into his customers’ computers, by using their Microsoft accounts — most of which apparently used their actual email addresses — gave him access to the contents of those email accounts, when all he wanted was to do maintenance, without any potential customer complaints or liability.
Many users encountered complications from mixing a Microsoft login account (such as I was considering for this computer) with a Microsoft (e.g., Outlook or Hotmail) email account. For example, if I decided to use multifactor authentication (MFA) with this computer’s Microsoft login account, would I also have to use it with my Outlook or Hotmail email account? In the words of one guru,
I have actually come to despise the fact that Microsoft itself (or at least it would seem) adopted the term “Microsoft Account” not only for an actual account on the Microsoft site but also for a Windows 10 user account that is linked to a specific Microsoft Account. It makes it very confusing because people believe that they are one and the same when they are not.
To get past that confusion, the solution I arrived at (see also advice from BruceB) was to create a new and separate Microsoft account that I would not use for email. As ZDNet suggested, “If you’re worried about privacy, set up a new Microsoft account and use it exclusively for this purpose and don’t associate the @outlook.com address with any other service.” So I would have a Microsoft account for services like Hotmail and Skype, and I would have another Microsoft account for my login on my computer. To create this account, in Firefox, I logged out of my Hotmail account and then went to Outlook.com. It gave me a Sign In dialog offering an option to create an email account. I did that, choosing an account name that I would (hopefully) never confuse with my Hotmail email. (See also MakeUseOf on switching the email account associated with a Microsoft account.)
So the choice was to set up my computer to use either this separate Microsoft account or a local account. I could set up some computers and devices to use local accounts, and others to use the Microsoft account. I could log into all of the Microsoft account devices with the single password for this separate Microsoft account, but I could set up PINs for some of those devices, and the PINs could differ from one machine to another. On devices set up with a local account, the password would not be checked online or shared in common with other devices; each device would have its own password (though of course I could decide to set up the same password on all).
The Windows 10 installation process set up a Microsoft account by default. But I could choose a local account, during installation, by declining to give my Microsoft login credentials. The installation steps after that point would differ somewhat. In at least one installation, the Microsoft account installation gave me an account name based on my Hotmail email address. I didn’t like that account name, but I wasn’t sure whether I had much control over it. Apparently for this reason, BruceB recommended setting up a local account first, and then changing it to a Microsoft account if desired.
For various reasons, then, I was inclined to set up a local account during initial installation, and then change it to a Microsoft account at least temporarily, if I wanted to sync this computer’s settings with whatever I had set up in the chosen Microsoft account previously, or if I wanted other benefits of a Microsoft account. It was possible to switch back and forth between Microsoft and local accounts. To do that, I would go to Win-I > Accounts > Your info > Sign in with a Microsoft account instead > enter Microsoft credential (e.g., Hotmail address). To switch back to a local account, at that same location, I chose Sign in with a local account. Such changes did not give me two separate accounts listed in Win-R > control > User Accounts. Note also the option, there, to rename the local account. LifeWire (Kingsley, 2019) warned that a local account could also be changed to a Microsoft account without the user’s conscious consent “if you log in to the Microsoft store or install any app with your Microsoft account.”
How-To Geek (Heddings, 2017) said the list of existing accounts was visible via Win-R > netplwiz > Users tab. Those listed with an email account were Microsoft accounts; those with normal names (e.g., Ray) were local accounts. Having thus identified the exact name of the user in question, Heddings said I could prevent that username from appearing on the lock screen via Win-R > regedit > Ctrl-L > paste this path: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon > right-click on Winlogon in the left pane of Registry Editor > New > Key > name it SpecialAccounts > right-click on SpecialAccounts > New > Key > name it UserList > right-click on UserList > New > DWORD (32-bit) Value > give the new value the exact name of the user account that you want to hide from the lock screen > make sure its DWORD value is 0 (that’s a zero, not an oh). Then exit Registry Editor. This was a rather far-reaching change, however: it would hide the user account, not only from login, but also from Control Panel and Settings, presumably including the list just viewed via netplwiz. Heddings said that using this trick on your last administrator account would make it impossible to log in as administrator, in which case the fix would presumably require editing the registry by some other route (e.g., offline, from bootable media). (Note that the menu in Registry Editor offered a Favorites feature to save frequently visited registry keys.) Likewise, to make all user accounts visible again, one approach was to delete the newly created SpecialAccounts key from the registry. Alternately, to make a specific account visible, its DWORD value should be 1 rather than 0.
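For those who prefer a mergeable file to manual clicking, the same change can be sketched as a .reg file, in the spirit of the Win10RegEdit.reg file mentioned elsewhere in this post. This is an illustration only, reusing the example account name Ray from above:

```reg
Windows Registry Editor Version 5.00

; Hide the example local account "Ray" from the sign-in screen.
; Caution: this also hides the account from Control Panel and Settings.
; Merging this file creates the SpecialAccounts and UserList keys
; if they do not already exist.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList]
"Ray"=dword:00000000
```

To reveal the account again, change the value to dword:00000001, or delete the SpecialAccounts key entirely.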
Sign-In Options
As detailed by MashTips (2018), the available Windows 10 login options were listed at Win-I > Accounts > Sign-in options. Aside from my already established password, those options were as follows (with some variation among different versions of Windows 10):
- Windows Hello. Both of my computers said, “Windows Hello isn’t available on this device,” and pointed me toward a Microsoft page providing additional information. At least one problem was that I didn’t have the right hardware. Computerworld (2018) said Windows Hello used the Fast Identity Online (FIDO, currently in version 2, FIDO2) specification in Microsoft-approved computers and in third-party devices, to replace passwords with (among other possibilities) a fingerprint (which could be scanned by e.g., a USB dongle), iris scan, facial recognition, or palm vein scanning.
- PIN. For both local and Microsoft accounts, the PIN was an alternative to the password. As noted earlier, Microsoft (2017) said the PIN, like Windows Hello input devices, was tied to the specific computer, just as a local account’s password was. A password (especially for a Microsoft account) needed to be long enough to defeat online cracking efforts, whereas a PIN, stored only on the user’s machine and guarded by a lockout after a few failed attempts, could be much shorter, and would not have to be changed periodically to protect against not-yet-discovered server breaches somewhere. Microsoft said the PIN, unlike a password, had additional security features due to its link to the motherboard’s TPM chip, if any — and, if desired, could have the same length, complexity, expiration, and other characteristics as a password. The PIN would thus rectify some of the security drawbacks of using a Microsoft account. I added a registry hack (see Win10RegEdit.reg) setting a higher minimum length on the PIN, if only to make it harder to figure out by shoulder-surfing.
- Security Key. As discussed in more detail below, there were various kinds of security keys. The basic idea was that the user would insert a USB device into the computer as part of a login sequence. Examples of such devices cited by TenForums (2018A, 2018B) included USB devices, smartphones, and the YubiKey.
- Picture Password. The concept here was that a picture, previously designated by the user, would appear onscreen, and the user would use the mouse (or, on a touchscreen, his/her finger) to draw shapes on that picture (e.g., following the outline of a building) — chosen, again, by the user — as another non-password way of logging in. I was hesitant about this method on the computer because I had found it rather unforgiving on the smartphone — and if it was too forgiving, I would join others in worrying that it was vulnerable in the same ways as on the cellphone (e.g., others could observe; smudges on the touchscreen would leave a path).
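As one example of the kind of registry hack mentioned in the PIN bullet above, a minimum PIN length can reportedly be enforced through the PIN-complexity policy keys. The path and value name here are my assumption, based on Microsoft's Windows Hello for Business policy documentation, so verify them on your build before relying on this:

```reg
Windows Registry Editor Version 5.00

; Assumed policy path for PIN complexity; verify on your Windows build.
; Require PINs of at least 8 characters instead of the short default.
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\PassportForWork\PINComplexity]
"MinimumPINLength"=dword:00000008
```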
Microsoft was continually insisting that we were approaching a world without passwords. The emotion was appealing, but the concept seemed brainless. For one thing, Microsoft’s (2017) explanation of “Why a PIN is better than a password” included this statement:
A PIN can be a set of numbers, but enterprise policy might allow complex PINs that include special characters and letters, both upper-case and lower-case. Something like t758A! could be an account password or a complex Hello PIN. It isn’t the structure of a PIN (length, complexity) that makes it better than a password, it’s how it works.
In other words, you are still going to have to remember a password, but we are going to call it a PIN instead of a password because, when you get into the technical details, it doesn’t work the same. By that logic, we should have a different word for a Microsoft account’s password too, because it doesn’t function the same as a local account’s password. And we should have different words for the different ways in which “keys” work, instead of using the one concept to make them all familiar. This seemed to be a matter of marketing, where Microsoft was trying to add dazzle by using a new word for a familiar concept. As Hunt (2019) put it,
[T]he future of passwords is more passwords …. [T]he thing that passwords have going for it … [is that] everyone knows how to use passwords …. [W]hen your marketing manager is making a decision about how people are going to log onto the website, and some enterprising end developer comes along and says ‘Hey, this is awesome. All you’ve gotta do is pull your phone out and there’s a QR code or dongle or something’. The market manager is like ‘this is going to slow people down from registering and making us money, which is what we’re actually here to do’.
It might make sense to insist on using a different term to distinguish the PIN from the password, if the addition of a PIN were improving security. A PIN, by itself, could indeed do that, for reasons noted earlier: for instance, a PIN could connect with a motherboard’s TPM. The problem was that we weren’t getting the PIN by itself. Multiple sources (e.g., Mashable, InfoTex, Reddit, Windows Central) echoed Microsoft’s own admission that these alternate login methods were oriented toward “ease of use” at the expense of security. An intruder who didn’t have the password could always try fooling the facial recognition option (if your computer had it) by using a photo of you.
At present, it was apparently not possible to force the user to enter a PIN as distinct from a password. According to How-To Geek (Hoffman, 2017A and 2017B), “A YubiKey you keep with you on your keychain may be more convenient than typing a long numerical PIN, but there’s no way to require a physical YubiKey to sign in” and “At sign in, you always have the option of using your regular password instead of the picture password or PIN you have set up.” So when I set a PIN on my laptop (running Win10 x64 1803), its login screen gave me — and thieves — a choice of entering either the PIN or the password. And since it was called a PIN, I had unthinkingly treated it as the PINs chosen for banking and other purposes, and made it just four digits — which could be fairly easy for a shoulder-surfer (especially one aided by a smartphone’s camera) to figure out by watching me enter it. Reports (by e.g., WindowsClub, 2019) suggested that Microsoft was planning to offer a “Passwordless” setup option that would entirely remove passwords from Microsoft accounts. But there didn’t seem to be any plan to eliminate PINs — which users would have to remember and type in, just like passwords.
Bypassing the Lock Screen
My levels of protection, at this point, included the BIOS password, the VeraCrypt password for drive C, and the Windows password required to get past the lock (a/k/a login) screen. The option to require a password at that screen was available at Win-R > netplwiz. That produced a Users tab with the option, “Users must enter a user name and password to use this computer.” I checked that, and then went into the Advanced tab > Secure sign-in > Require users to press Ctrl-Alt-Del. The explanation for that option said, “This guarantees that the authentic Windows sign-in screen appears, protecting the system from programs that mimic a sign-in to retrieve password info.”
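The secure sign-in requirement set via netplwiz is commonly associated with the DisableCAD policy value. A hedged sketch of the equivalent registry entry (verify on your build) would be:

```reg
Windows Registry Editor Version 5.00

; Require Ctrl-Alt-Del at the sign-in screen (0 = require, 1 = skip).
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System]
"DisableCAD"=dword:00000000
```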
My safest course of action would be to shut down the computer when I was not actually using it, so as to require an intruder to go through those steps — BIOS password, VeraCrypt password, Windows password or other login method. In case I found myself distracted or forced away from the running computer (by e.g., sudden emergency), I might want the lock screen (indeed, hibernation or shutdown) to come on rather quickly. The precise setting would depend on whether I was using, say, the laptop in a public location, as distinct from the desktop at home. Generally, though, the lock screen would appear at bootup or upon awakening from sleep or hibernation, according to the settings in Win-I > System > Power & sleep. It would also appear automatically when the screensaver came on, according to the settings at Win-I > Personalization > Lock screen > Screen saver settings > On resume, display logon screen. (For a screensaver, there were many possibilities.) In addition, I could make the lock screen appear by hitting Win-L (alternately, Ctrl-Alt-Del > Enter). There was also a Dynamic Lock option, misplaced at Win-I > Accounts > Sign-in options. TechRepublic said this option would lock the computer shortly after a Bluetooth-capable phone connected to it was moved beyond Bluetooth range.
As discussed above, it would be risky to leave a machine up and running, in a place where a person could insert a USB drive or otherwise tinker in the owner’s absence. The intruder’s situation would clearly be much better if s/he could sit down at the computer before its screensaver kicked in, or before it went to sleep or otherwise invoked its lock screen. In that case, within a few minutes, the intruder might be able to install a software keylogger, create a password reset disk, and/or copy the contents of RAM, pagefile.sys, and hiberfil.sys. In such materials, s/he might be able to find the user’s passwords for Windows and VeraCrypt. Thus prepared, s/he might be able to make good use of a later opportunity to access and copy large amounts of data from that computer.
But let us assume that the lock screen was on, or that an intruder was somehow able to boot the machine and get to that screen (due to e.g., failure, temporary disuse, or knowledge of boot and VeraCrypt passwords), and that circumstances required the intruder to try to break through or bypass that lock screen (rather than e.g., removing the drive and examining its contents on another computer). What would the intruder’s options be? The answer depended to some extent on whether the system was using a Microsoft or local account. Generally, however, various sources mentioned a number of possible methods, including these:
- Use an alternate login method. As discussed above, the lock screen might offer PIN, picture, or other login methods that the attacker might be able to use without a password.
- Brute force with Ophcrack or similar. Ophcrack was a bootable (live CD) tool that used precomputed rainbow tables to crack Windows password hashes. Wikipedia said free tables for Windows XP and Vista were included, but attackers would have to buy (or, no doubt, steal) rainbow tables for later versions of Windows. A Lifewire tester (2019; see also 2018) found that Ophcrack cracked an eight-character (letters and numbers) password in Windows 8 in about 3.5 minutes, but was completely ineffectual against Windows 10. A participant in a StackExchange discussion suggested using John the Ripper (reportedly requiring paid wordlists) or hashcat instead. Those were among a handful of alternatives listed by Wikipedia. Not on that list: Lazesoft Password Recovery, for which PC Steps provided instructions. Windows Report recommended several commercial alternatives, including Active Password Changer Professional and Windows Password Reset Standard. To inhibit unlimited guessing at the correct password, I added entries to Win10Setup.bat to limit the number of failed attempts before lockout. For what it was worth, among logon auditing measures suggested by MakeTechEasier (2016) and How-To Geek (2017), I also added an entry to Win10RegEdit.reg that, at startup, would report a count of prior unsuccessful login attempts.
- Boot with a previously created password reset disk. The user may have created a bootable USB password reset drive (below) as a precaution against forgetting his/her password, and the intruder may be able to find that drive.
- Answer security questions. Microsoft said that, starting with Windows 10 version 1803, the user had the option of answering security questions to reset the password on local accounts. If the user took that option and chose weak questions, a knowledgeable attacker could use that route to bypass the lock screen. As noted above, users could improve upon this situation by entering false answers. In version 1903, the method for changing the security questions was Win-I > Accounts > Sign-in options > click on Password > Update your security questions. But that would only allow entry of new answers to the same half-dozen lame questions (e.g., “What’s the name of the city where you were born?”). To avoid being asked for security questions during setup, The Windows Club recommended either starting with a Microsoft (rather than local) account, and converting that to a local account later, or skipping the option of entering a password during initial installation of a local account. Lifehacker (Murphy, 2018) discussed a PowerShell script reportedly capable of disabling those security questions.
- Revert to a previous, very recently changed password. Boot the computer > hit F8 before Windows loads > Advanced boot options > Repair your computer > System restore.
- Boot with a Windows 10 installation DVD or USB drive. TechSpot (2017; see also PCSteps) provided steps along these lines: at the first installation prompt asking about language etc., hit Shift-F10 > diskpart > list volume > identify the drive letter temporarily assigned to the Windows system drive (the following steps assume it is X) > exit Diskpart > enter move X:\windows\system32\utilman.exe X:\windows\system32\utilman.exe.bak > then enter copy X:\windows\system32\cmd.exe X:\windows\system32\utilman.exe > remove the DVD/USB and reboot normally > at the login screen, click the Ease of Access (i.e., Utility Manager) button, which now opens a command prompt instead > net user > identify the desired user account (let’s say it’s Ray, and let’s assume the desired new password is simply PASS) > net user Ray PASS > try the login screen > if that fails, reopen that command prompt > net user administrator /active:yes > reboot > log in with the Administrator account > repeat the foregoing instructions, starting with net user Ray PASS (using the correct account user name and desired password), because this administrator account won’t necessarily have access to the actual user’s (in this example, Ray’s) files > end with net user administrator /active:no to remove the option of logging in as administrator. How-To Geek (2017) seemed to say that a variation on such procedures might give an attacker access to files on drive C, without however providing the ability to fully use the computer as the regular user. That difference could be significant: for example, the user might have configured his/her installation to remember passwords for valuable websites; that access would not be available to an intruder who did not have access to that account.
- Boot with Kali (or other) Linux. The ISO would be installed on a USB drive per Kali’s instructions. As explained by various sources (e.g., a video), this approach might provide a simpler way of making the utilman.exe changes suggested in the preceding bullet point.
- Use chntpw. The instructions for the Offline NT Password & Registry Editor (a/k/a chntpw) recommended creating and booting a chntpw live USB/CD, but TechSpot (2017) provided instructions for using chntpw functionality that was apparently already included in Kali Linux, Hiren’s Boot CD, and Trinity Rescue. After booting Kali Linux from a USB drive, for example, the instructions were to mount the Windows system drive > cd /media/Windows/System32/config/ > chntpw -u [username] SAM > Clear or edit password.
- Obtain an automated password reset without rebooting. For Microsoft as distinct from local accounts, How-To Geek (2017) recommended trying Microsoft’s password reset website (on a separate computer, presumably). Alternately, Microsoft indicated that the Windows 10 login screen would offer an “I forgot my password” option leading to further steps. WikiHow elaborated that those steps could include answering security questions unless MFA was set up. Windows Central (2016) said the attacker would then be given an option of receiving a security code by email or text, as long as s/he could fill in the blanks. For instance, the attacker might know the last four digits of the user’s phone number, or that the user’s alternate email address was wacko@work.com. The attacker would then have to be able to intercept the security code sent to that phone number or email address, using means discussed elsewhere in this post.
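Returning to the defensive measures mentioned in the Ophcrack bullet above: the Win10Setup.bat entry could be as simple as a net accounts /lockoutthreshold:10 line, and the logon-auditing entry is commonly given as the DisplayLastLogonInfo policy value. The following is a sketch to verify, not an exact reproduction of my Win10RegEdit.reg:

```reg
Windows Registry Editor Version 5.00

; Report previous successful and failed sign-in attempts at logon.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System]
"DisplayLastLogonInfo"=dword:00000001
```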
An attacker encountering the computer already running, but displaying its lock screen, might be able to use a few of those methods without rebooting; but most would require a reboot. Rebooting with something other than the installed Windows 10 operating system would require the attacker to turn off Secure Boot in BIOS and, again, would run into problems if the BIOS and/or VeraCrypt passwords were operational and not known to the attacker.
MFA for Local Accounts
Ideally, every step requiring a password or PIN would also require at least a second factor for authentication (2FA). The BIOS password would require a retinal scan, the VeraCrypt password would require a YubiKey to be inserted into a USB port, and so forth. Unfortunately, I did not encounter much of that in the early stages, except as discussed above: neither VeraCrypt nor my computer’s BIOS seemed to be set up for effective 2FA. The subject did not really find its footing until we got to this point of thinking about logging into Windows 10.
As indicated above, Windows did not really offer MFA for local accounts. Rather, the sign-in options were cumulative: in addition to a password, the user could set up a PIN and Windows Hello and so forth, but could not require a specified combination of such factors. Instead, it seemed that the user seeking MFA for a local account might have to turn to third-party devices, such as these:
- Yubico’s Windows Login Tool (a/k/a YubiKey Windows Logon, or YWL) was “deprecated.” By email in February 2019, Yubico Support said that meant only that it was no longer being updated. The webpage said “New Tool Coming Soon,” but the page appeared to date from September 2017, and in the email they said they didn’t know when the replacement tool would be released. The YWL Configuration Guide (2016) indicated that the YWL installation process required the YubiKey Personalization Tool. The webpage for that tool seemed to contradict itself, saying that it did, and also that it did not, work with the FIDO U2F security keys, which were the ones recommended for use (below). The Yubico support tech said he was still able to set up YWL with no problems other than a “cosmetic” duplicate user account suddenly showing up on the login screen: both, he said, would point to the same real account. By early September 2019, the Yubico Blog was still not offering anything more recent than a Public Preview of the Yubico Login for Windows Application, described as best suited for local rather than Microsoft accounts.
- Duo MFA ($3/user/month) was Duo’s least-expensive edition supporting U2F. Duo MFA supported YubiKey and “any OATH HOTP-compatible tokens” including Google Titan, Feitian ePass FIDO (recommended by Hunt, 2018), and Thetis FIDO. Duo Authentication for Windows Logon could be disabled in Safe Mode, but this could apparently be counteracted at least to some extent. Duo said, “U2F security key support is limited to Offline Access only,” and that apparently required one of the U2F security keys just listed (of which only one could be registered at a time: “Registering a second offline device deactivates the first one”), along with a Duo MFA subscription and an installation of the Duo Authentication for Windows Logon software. Altogether, the documentation did not make entirely clear that Duo was intended for single users as distinct from system administrators managing multiple users.
- Google Titan oddly seemed to offer U2F only in a security key bundle ($50), including a Bluetooth security key that I, for one, would not presently have a need for (see Gizmodo, 2018). Bleeping Computer (2018) reported that Titan might soon be useful for logging into Windows. That report seemed oriented toward Microsoft accounts rather than local accounts, however, and toward Enterprise rather than Home users. Tom’s Guide (2018) said that Google’s focus, with this device, was not on everyday users, but rather on “savvy hackers trying to spear-phish politicians and other high-profile targets” because the kind of attack on which this device focused was “extremely inefficient, and would not work as a large-scale phishing attack.” In any event, at this writing, it had not materialized into a useful option for purposes of providing MFA on a Windows 10 local account.
- It was apparently possible to use a bootable password reset USB drive as a kludge replacement for the local account password. This would not work if Secure Boot in the BIOS was preventing USB booting. Presumably the user would have the BIOS password and could turn that off, though doing so might entail significant vulnerability. The user could create a Windows 10 local account password reset disk (i.e., USB), following Microsoft’s advice (alternately, PassFab 4WinKey, though that seemed to have some problems). The password reset USB would work only for the specific user and computer. It could replace the local account password if the user changed that password to something that nobody (including him/herself) would know or remember. In that case, the password reset USB would provide the only officially approved way to log in. Obviously, one would be well advised to test this concept before relying on it.
Microsoft warned that relying on such alternatives could result in the user being locked out if an update broke the third-party credential provider. Assuming that user data was stored on a separate encrypted (drive D) partition and would therefore not be lost in the event of a lockout, and that the user retained a pre-update backup image of drive C on a separate drive, this scenario would require restoration of that image and reinstallation of any recently installed programs and settings. Absent such precautions and time to experiment with the foregoing options, it appeared that local account logins would continue to be secured by only a single factor, be it a password, PIN, fingerprint, or something else, and each such factor would have its own vulnerabilities.
MFA for Microsoft Accounts
Microsoft (2018) explained that, for a Microsoft account, its concept of MFA (which it called “two-step verification”) required “two different forms of identity: your password [or something else entered at a Windows login screen, e.g., facial recognition] and a contact method (also known as security info)” that would not be entered on the computer’s login screen. ZDNet (2016; see also e.g., MashTips, TechNorms, Windows Central) said that users could log in at https://account.live.com/proofs/ to set up MFA to increase security on the Microsoft account, where the second factor could include a smartphone text message or authenticator app signal. Within Windows, Microsoft said the verification process would be set up at the user’s Security Basics page > More security options (at bottom of page) > Additional security options. Those options were as follows:
- Manage how you sign in to Microsoft. This led to an option to allow sign-in from various email addresses, phone numbers, and Skype names.
- My account page. When I middle-clicked on certain links on this page (i.e., press on mouse wheel), intending to open them in separate tabs in Firefox, I got what seemed to be a main page for my account, with further links to, among other things, specific devices (evidently those from which I had logged into my Microsoft account), the personal data stored with my Microsoft account, and my security information. The specific devices link to my desktop computer led eventually to a webpage reporting, among other things, my last check for updates, my BitLocker status, and how much material existed on my non-encrypted drives. At this point, seeing that Microsoft had that information about my system, I reminded myself that this was all in the name of security.
- Two-step verification and Identity verification apps. I had to left-click on these links to proceed. Left-clicking on either of these led to a choice between the Microsoft Authenticator app or some other. The link to the latter led to a page that would allow me to scan a code, so as to link that other authenticator app to my Microsoft account.
- Recovery code. The note for this one said, “You can use your recovery code if you lose access to your security info. You need to print out your recovery code and keep it in a safe place.”
- Trusted devices. The only options here were to learn more or remove all trusted devices associated with my account. The “learn more” option led to a Microsoft page explaining that I could designate my computer as a trusted device that would not be asked for authentication every time I logged in.
That information left me in the dark, regarding the question of exactly what I could use (other than the Microsoft Authenticator app) as a second login factor. But other sources (e.g., Kaspersky, 2018; Lifehacker, 2016) led me to understand that there were several relevant technologies for what Microsoft (above) called the “contact method”:
- Short Message Service (SMS). This technology, dating back to the 1980s, typically involved text messages sent to cellphones. Multiple (e.g., 1 2 3 4) sites reported that SMS was relatively easy to intercept, yet it remained the most widely used form of MFA. As Sophos (2016) observed, SMS could be the only MFA option, for those whose phone was not a smartphone capable of running apps; besides, a simple phone used only to receive SMS codes could function as a second factor even if the first factor was an app on one’s smartphone. How-To Geek (2017) sketched out some methods of interception and observed that, despite its vulnerabilities, MFA with SMS was still more secure than having no MFA at all. It appeared that the potentially vast data breaches associated with SMS (e.g., BankInfoSecurity, 2018) tended to involve online (e.g., Gmail) accounts rather than physical computer logins. Nonetheless, the point was clear: given good alternatives, SMS was not a logical first choice for MFA. Lifehacker (2016) indicated that emailed login codes were comparable to SMS in their lack of security.
- One-time codes. Kaspersky (2018) said these codes, prepared in advance for emergencies, could be saved on paper in a safe, for instance, or as encrypted notes in a password manager. The main thing was never to allow them to be lost or stolen, as they seemed to function as an ultimate account unlocker — and also to make sure that they did not all get used up before the moment when they were really needed. AllThingsAuth (2018) said the moment of need could arrive when one’s phone was lost or stolen, or when the authentication app (below) was uninstalled. Such codes would work because “it is unlikely that a random passerby who finds [the user’s] trusted device (e.g. her phone) will also have her password and be able to quickly log into her accounts.”
- Authenticator apps. Tom’s Guide (2018) explained that these apps provided “a constantly updating 2FA code that you can enter without having it texted to you” via SMS. Kaspersky (2018) provided some information on these apps, of which there were apparently many, and suggested that the best included Google Authenticator (easiest to use, but not configurable), Duo Mobile (better than Google Authenticator in the sense of keeping codes hidden by default), Microsoft Authenticator (could be configured to hide codes individually; provided extra features to simplify Microsoft logins), FreeOTP (open source, many configuration options), Authy (preferred by How-To Geek, Wired, Lifehacker, and Cloudflare, among others; could apparently add additional devices and revoke access for a lost or stolen phone; simplified migration to new devices by storing tokens in the cloud; secured app login by PIN or fingerprint, supported more kinds of devices; and claimed to be superior to Google Authenticator), and Yandex.Key (secured by PIN or fingerprint, option for password-protected cloud storage like Authy, generally simple, but required effort to find the desired token among many available). Hunt (2018) described authentication apps in MFA — what he called “password and soft token” — as “probably the best balance of security, usability and cost we have going for us today.”
As explained by AllThingsAuth (2018), authentication apps required a trusted device — typically, a smartphone. The user would start the authentication app registration process in his/her computer’s browser. The service provider (e.g., Authy) would display a Quick Response (QR) code in the browser. (AllThingsAuth pointed to the QR Code Generator to illustrate how codes could be created for any text.) The QR Code would contain a value comprised of relevant data (e.g., the user’s email address) and what cryptographers referred to as a “shared secret,” defined as a piece of data known only to the parties in a secure communication. (It was not clear to me how a code displayed in the browser could be known to be secure.) This secret functioned, in effect, as the key to a mathematical formula that would operate exactly the same way on the service provider’s server and on the trusted device (e.g., smartphone). The shared secret would thus enable the server and the trusted device to generate exactly the same login code at the same time, without any need to communicate again. Thus, to get a one-time password (OTP), the user would consult the phone, and it would provide the same value as that being calculated at that moment on the server. The user would enter that time-based one-time password (TOTP) into the browser, and it would be communicated along with the user’s password to the website being logged into. Thus, according to Krebs (2018), this was really two-step rather than two-factor authentication, because both the password and the OTP were being entered into the same webpage, and were thus vulnerable to some of the same attacks (e.g., phishing, MITM). Authentication apps were thus more secure than SMS simply because they cut the phone company out of the security loop.
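The shared-secret mechanism just described is standardized as TOTP (RFC 6238, building on RFC 4226's HOTP). A minimal Python sketch shows why the server and the phone can agree without further communication: both feed the same 30-second time counter into the same HMAC computation. The function name and defaults here are my own; real authenticator apps implement this same algorithm.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(shared_secret_b32, for_time=None, digits=6, period=30):
    """Compute a time-based one-time password (RFC 6238 / RFC 4226).

    The server and the trusted device each hold the shared secret
    (delivered once, e.g., via the QR code); afterward, both sides can
    compute the same short-lived code independently, because both run
    the same HMAC over the same time counter.
    """
    key = base64.b32decode(shared_secret_b32.upper())
    if for_time is None:
        for_time = time.time()
    counter = int(for_time) // period              # same value on server and phone
    msg = struct.pack(">Q", counter)               # counter as 8-byte big-endian
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238's published SHA-1 test secret is the base32 encoding of the
# ASCII string "12345678901234567890":
RFC_SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
```

At time 59 seconds, totp(RFC_SECRET, for_time=59, digits=8) reproduces the RFC 6238 published test vector 94287082, which is a convenient way to check any such implementation.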
Authentication apps had certain vulnerabilities. One was that the phone or other trusted device could be lost or stolen. In that case, according to AllThingsAuth (2018), the user could log into the websites secured by the authentication app and disavow the (formerly trusted, now untrusted) device. Without the trusted device, however, that login would require some alternative form of authentication. As CNet said, “If you use two-step verification … you won’t be able to bypass it without your phone.” In other words, according to FreeCodeCamp, “[If] you use a 2-factor authentication app and you lose your phone … you can be locked out of your account.” One solution was to use the one-time recovery codes (above) typically supplied during app registration, hopefully printed or otherwise saved in a secure location. Another solution was to use another phone or tablet as a backup trusted device. Either way, AllThingsAuth said, the user would now have to log in and change the 2FA at each website.
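The one-time recovery codes mentioned above were, at bottom, just unpredictable strings minted at registration time and usable once each. A hypothetical sketch of how a service might generate them — the code format and alphabet here are illustrative assumptions, not any particular provider’s scheme:

```python
import secrets


def make_recovery_codes(count=10, groups=2, group_len=5):
    """Generate printable one-time recovery codes.

    Each code is drawn from a cryptographically secure random source.
    A real service would store only a hash of each code and mark it
    used after its first successful login.
    """
    # Alphabet omits look-alike characters (i, l, o, 0, 1) so the codes
    # are easy to read back from a printed sheet.
    alphabet = "abcdefghjkmnpqrstuvwxyz23456789"

    def one_code():
        return "-".join(
            "".join(secrets.choice(alphabet) for _ in range(group_len))
            for _ in range(groups))

    return [one_code() for _ in range(count)]


codes = make_recovery_codes()
```

A printed sheet of such codes, kept somewhere physically secure, is what stands between a lost phone and a 30-day account lockout.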
Those remarks suggested that the user might want to print out (or save on one’s computer) some one-time codes for emergencies, but rely on Authy or perhaps the Google, Duo, or Microsoft authentication app running on a phone for day-to-day use. AllThingsAuth (2018) warned that trusted devices (e.g., the phone and, perhaps, the backup phone) “can be used to generate valid [one-time passwords capable of unlocking the Windows system], so enable encryption on all of them and don’t misplace them!” Microsoft warned:
If you turn on two-step verification [for your Microsoft account], you will always need two forms of identification. This means that if you forget your password, you need two contact methods. Or if you lose your contact method, your password alone won’t get you back into your account—and it can take you 30 days to regain access.
Note, again, that for those who used their primary Microsoft email account as their Microsoft account for purposes of computer login, losing the contact method would mean being locked out of their email too. Given potential penalties of this magnitude, possibly for a crime as minor as misplacing one’s phone, I wondered if I could get simpler and/or better Microsoft account security by using a hardware token (i.e., small device, such as the YubiKey), which would typically connect with a computer via USB and/or with a smartphone via wireless near-field communication (NFC) when held next to the phone. Among the various forms of tokens, smart cards, and other hardware devices that had been offered for such purposes, there seemed to be two principal generations:
- FIDO/U2F hardware tokens. Wikipedia said the Fast Identity Online (FIDO) Alliance was formed to resolve issues regarding “strong” (typically two-factor) authentication devices, and also to improve the situation of users who had to create and remember multiple usernames and passwords. Elsewhere, Wikipedia said FIDO built on work by Google, Yubico, and NXP to develop the Universal 2nd Factor (U2F) standard. Yubico (2018) described U2F as a second factor in the sense that it would typically accompany a password. Yubico (2016) summarized research by Google, including a study (2016) involving use of U2F security keys by 50,000 Google employees, finding that such keys provided greater security, faster authentication, and reduced support cost due to the simplicity of the U2F device. Sophos (2018) agreed that “The most secure option by far is to use a FIDO U2F (or the more recent FIDO2) hardware token such as the YubiKey because bypassing it requires physical access to the key.” Yubico (2016) characterized U2F as an improvement over OTP because, among other things, U2F provided high privacy, insofar as “no personal information is associated with a key” (see Yubico, 2014); ease of use, because there were “no codes to re-type and no drivers to install”; the ability to protect an unlimited number of accounts with a single device; and protection against various kinds of online attacks (see Yubico, n.d.). How-To Geek (HTG, 2018) clarified that a U2F token protected against phishing attacks by providing automatic verification that it was connecting with the correct website, at least when used with a browser; against MITM attacks by protecting the authentication code from interception; and against password interception, insofar as the U2F device would facilitate use of an easy-to-remember PIN, communicated only to the device. HTG also said,
The best thing about U2F is that nothing is physically stored on the key. … That means if you misplace a U2F key … it doesn’t matter where it ends up — no one will be able to pull private information from the key to connect it to your account. … If you happen to lose your U2F key, the first (and really, only) thing you’ll need to do is remove that form of authentication from your accounts [because otherwise it will still work on your computer].
- FIDO2 hardware tokens. The FIDO Alliance (TFA) indicated that the FIDO2 Project drew principally on FIDO’s Client-to-Authenticator Protocol (CTAP) and the World Wide Web Consortium’s (W3C) Web Authentication specification (WebAuthn) to create FIDO2 tokens that would be backward-compatible with U2F tokens. (In that light, it was not clear why, as reported above, Yubico’s FIDO2 key would be incompatible with the Yubico Windows Login Tool that might allow 2FA with a Windows local account.) IBM (2018) characterized FIDO2 as providing a means of “alternative-to-password authentication.” Thus, Yubico described its FIDO2-compatible Security Key NFC as offering “not only two-factor authentication, but also support for single factor passwordless login and multi-factor authentication in conjunction with user touch and PIN.” While there seemed to be a lot of excitement about eliminating passwords, it seemed that MFA of almost any form (even involving a FIDO2 key plus a password) would be more secure than a FIDO2 device (which could be stolen or misplaced) by itself.
The question at hand was whether a U2F or FIDO2 hardware token would provide MFA for a Microsoft account login. On that, the situation was a bit confusing. Announcements in autumn 2018 (by e.g., Microsoft, Yubico, Tom’s Hardware) indicated that Win10 1809 supported 2FA for Microsoft accounts by inserting a Microsoft-compatible FIDO2 (not FIDO U2F) key (e.g., YubiKey 5, Feitian BioPass FIDO2) and then entering a PIN. Microsoft indicated that the setup would be done through Windows Hello (i.e., Win-I > Accounts > Sign-in options). (See also GitHub’s reviews of individual U2F devices.) Pending an opportunity to test a YubiKey, it was not clear how that information should be reconciled with the observation that, within Yubico’s catalog of compatible software, the Computer Login category contained a Windows Hello link involving the YubiKey for Windows Hello app. That app, like other Windows Hello options (above), was not 2FA; it seemed to be merely an alternative to a password, not a required factor in addition to a password (or PIN, or fingerprint scan, etc.). Moreover, its compatibility appeared to be limited to the YubiKey 4 series; MusicPhotoLife (2018) said the YubiKey 5 FIDO2 devices were not compatible with it.
So there was a real question as to whether Windows Hello would facilitate solid 2FA login security with a FIDO2 key, or would instead continue to offer alternate login methods oriented toward user convenience rather than security. The latter appeared to be the answer: for instance, Laptop (2018) said that the password would still be “a backup credential” in case the FIDO2 key went missing. But Laptop went on to say that the user could close off the password route by opting to set up 2FA. Again, without a FIDO2 key, I was not able to test that in more detail.
Assuming 2FA would be an option for Microsoft account logins using a FIDO2 key, the next question was which FIDO2 keys would work for this purpose. Results of a search seemed to indicate that, at this point, the YubiKey and Feitian offerings mentioned above were, in fact, the only officially listed Microsoft-compatible FIDO2 keys available. Yubico indicated that users could log into Microsoft accounts using the YubiKey Series 4 or 5 or the YubiKey Security Key Series. Series 4 and 5, among other YubiKeys (but not the Security Key Series) were also listed for LastPass (below) and for local account logins (above) using the Windows Logon Tool. Having found YubiKey’s website confusing on multiple matters, I was inclined to sympathize with one-star reviews at Amazon for the YubiKey 4, YubiKey Neo, and YubiKey 5, among which I saw numerous complaints about documentation poor enough to baffle seemingly knowledgeable consumers. On the other hand, the Feitian BioPass FIDO2 webpage was even more inscrutable.
As with a phone using an authenticator app, use of a hardware token would be tempered with concern about what would happen if the user lost it. The advice (by e.g., Kaspersky, 2018) was to keep a backup hardware token in a safe place to protect against being locked out of one’s own system. Such devices were reportedly not capable of being copied or backed up, by the rightful purchaser or anyone else.
To recap, this section discusses administrator vs. standard user accounts and local vs. Microsoft accounts. Local and Microsoft accounts had some similarities and some differences in their sign-in options, in the means by which an intruder might bypass the sign-in screen, and in their MFA options. It appeared, generally, that a standard user account would be safer for ordinary use, and that a local account would be simpler in some regards than a Microsoft account. Absent personal experience (especially with hardware tokens), it was not clear whether the Microsoft account security options would still tend to be superior to those of the local account, though that had been the impression previously.
Password Manager
The discussion of passwords (above) emphasized the importance of long and complex passwords. As noted in that discussion, there were techniques for creating passwords that would meet or exceed current recommendations (e.g., 20 characters), that might survive an attack using a dictionary containing variations on passwords gleaned from previous database breaches, and yet would be easy to remember. But it appeared that, rather than trust their own abilities to create and remember passwords meeting such requirements, most people were inclined to use password manager (PM) software, or at least some do-it-yourself (DIY) alternative.
Commercial PM vs. DIY
Among the many PM programs available for free download or for a price, a few drew most of the attention. Many sources (e.g., MakeUseOf, Wirecutter, DigitalTrends, TechRadar, PCWorld) favored LastPass. Dashlane appeared to be a solid alternative, favored by some sources (e.g., Mashable, Tom’s Guide, the latter actually giving the same score to LastPass). PCMag (2018) favored Dashlane and Keeper. (PCMag’s review of premium PMs criticized LastPass Premium for not adding much to LastPass Free; the latter was the Editor’s Choice on PCMag’s review of free PMs.) Oft-cited factors affecting critics’ preferences included popularity (making LastPass more of a target for criminals), price (all offered free versions, but only LastPass offered extensive free functionality to multiple devices, and Dashlane premium was now $60/year vs. $36 for LastPass and $30 for Keeper), and automatic password updating (for 80 popular sites in LastPass vs. 500 in Dashlane and none in Keeper, according to PCMag). The following discussion focuses on LastPass, with which I was most familiar, but similar issues arise when using other competing PMs.
A first question was whether a commercial PM was better than one’s own informal system. That could be construed as a question of whether it made more sense to trust a black-box solution offered by something like LastPass, whose inner workings would be a mystery to most users, or to store one’s passwords instead in a plain text file or spreadsheet, and then encrypt that file. Multiple sources explained how to use a spreadsheet, not only to store passwords, but also to generate them. For example, PCMag (Rubenking, 2018) offered a set of instructions to build a random (in all cases, evidently, the more accurate term would be “pseudo-random”) password generator in Excel or Google Sheets.
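A spreadsheet generator of that kind would lean on Excel’s general-purpose pseudo-random functions, which were not designed for security. A DIY generator could instead draw on the operating system’s cryptographic randomness; the following is a minimal Python sketch, with the character classes and the 20-character length chosen to reflect the recommendations discussed earlier (the particular symbol set is an illustrative assumption):

```python
import secrets
import string


def generate_password(length=20):
    """Generate a password from a cryptographically secure source,
    retrying until at least one character from each class appears."""
    classes = [string.ascii_lowercase, string.ascii_uppercase,
               string.digits, "!@#$%^&*()-_=+"]
    pool = "".join(classes)
    while True:
        # secrets.choice uses the OS entropy source, unlike random.choice
        pw = "".join(secrets.choice(pool) for _ in range(length))
        if all(any(c in cls for c in pw) for cls in classes):
            return pw


pw = generate_password()
```

The retry loop is a simple way to satisfy “at least one of each class” rules without biasing character positions, and at 20 characters it almost always succeeds on the first pass.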
The benefits of a do-it-yourself (DIY) approach included, in Rubenking’s words, the fact that “the bad guys can study the password generator in any publicly available password manager, while they have no access to your home-built one.” Of course, they would have access to Rubenking’s model, and they could study Excel’s pseudo-random number generator. But there was a question of whether they would bother. Efforts to crack passwords seemed to be focused on situations where the hacker did the hard work of figuring out an approach that might work, and then tried it on vast numbers of people, in hopes of hitting pay dirt with a few. In that perspective, it would not make sense to go digging around for ways to beat the few wackos, probably not especially rich, who would invent their own solutions, complete with bizarre twists that would just waste the hacker’s time.
A hacker who did decide to target eccentric DIY password systems might choose, not to try to defeat the encryption scheme, but rather to look for the spreadsheet or other file in which the resulting passwords were stored. That is, the user would generate a password using a spreadsheet whose output might or might not be predictable, and then the user would write down what password s/he had actually decided to use for his/her bank account. An online hacker wouldn’t have access to that resulting password if it was recorded on a slip of paper hidden somewhere around the user’s house, or marked by some secret system in a book on the user’s shelf, but the hacker might be able to get access if the password was listed in a regular computer data file. Programs like Microsoft Word and Excel had their own ways of encrypting files, and apparently their older versions were relatively easy to crack, but it appeared that (starting with Office 2010 or thereabouts) the encryption was now the very secure AES-256 — which would also be an option if one instead used something like 7-Zip or WinRAR to encrypt the file.
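Part of what made that modern AES-256 file protection hard to crack was not the cipher itself but the “key stretching” step that turned a typed password into an encryption key slowly enough to frustrate bulk guessing. That step alone can be sketched with Python’s standard library; the salt size and iteration count below are illustrative assumptions, not the exact parameters used by Office, 7-Zip, or WinRAR:

```python
import hashlib
import os


def derive_key(password, salt=None, iterations=600_000):
    """Derive a 256-bit key from a password via PBKDF2-HMAC-SHA256.

    A random salt ensures two users with the same password get
    different keys (and defeats precomputed tables); the high
    iteration count makes every guess in a brute-force attack
    proportionally more expensive.
    """
    if salt is None:
        salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                              iterations, dklen=32)
    return key, salt


key, salt = derive_key("correct horse battery staple")
```

The derived key would then feed an AES-256 routine; the point of the sketch is that a weak master password undermines even AES-256, because the stretching only multiplies the attacker’s cost, it does not make guessing impossible.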
There were tricks that could help to some extent, in the DIY approach. For instance, CNet (2012) suggested a technique for pasting a password with excessive characters that the user would then manually delete, after changing the text color to match the background color, disabling the spelling and grammar checker, and excluding the file from indexing. How-To Geek noted that a plain-text file could contain a secret data compartment, though that data would still be readily discoverable by users familiar with that possibility. Wikipedia and WonderHowTo indicated that steganography, the practice of concealing information within a file, could help to hide the password file.
Even so, participants in a Spiceworks discussion (2017) noted several advantages for the PM rather than the simple encrypted list: the PM would encrypt the passwords themselves, not merely the list of passwords, thus requiring an attacker to break each password individually (assuming an essentially unbreakable master password); the PM would typically be designed to facilitate password entry and use, while the spreadsheet or other word list could require many manual steps (for e.g., decrypting, sorting, searching, pasting); a password list file accidentally left open or unencrypted, or whose data was temporarily stored by Windows in RAM or in a temporary file, could be vulnerable to intrusion and even to shoulder-surfing, whereas a PM might offer a more secure form of copying and pasting, without displaying characters onscreen, or might autofill without using the clipboard at all. In addition, if other users on a team (especially a team with remote workers) needed access to the list of passwords, it appeared the spreadsheet scenario could become unwieldy very quickly, while a PM might be built for that sort of thing, and a PM would also allow auditing to track password usage.
Before switching to LastPass, I had used some of the DIY methods just mentioned. I made the switch after facing up to certain concerns. One was that I didn’t really know what I was doing. Yes, I could screw around with hiding or encrypting a password list file or its contents in various ways, but I had no idea whether my efforts were so familiar as to be amusing — like the guy who stuffs cash into his mattress — to criminals who spent their days observing and in some cases profiting from the behavior of people like me. It seemed more prudent to leave the handling of passwords to the pros.
Another concern was that LastPass had been hacked, and presumably other PMs could be too. Wikipedia listed several security incidents, the most recent of which occurred in 2015 (Harvard Information Security). ZDNet (Osborne, 2019) indicated that a relatively recent audit had found some imperfections, but for the most part these seemed to be limited in effect to those situations where a hacker already had administrative privileges on the targeted system. LastPass seemed to have a reputation for acting quickly and responsibly to notify users and to take remedial steps in case of a breach, and apparently such breaches had never given hackers the master passwords or specific website passwords for individual users. Another concern, arising from my limited prior research, was that some PMs seemed complicated and/or unsuited to my needs.
One factor pushing me toward using a PM was that, in effect, I was already doing so: I was allowing Firefox and Chrome to remember passwords, and at a certain point I became aware that this was actually considered less secure than using a dedicated PM. For instance, Techlicious (2018) indicated that, while browsers like Chrome and Firefox did encrypt passwords, they did not offer means of generating unique, random passwords; their level of encryption was more easily cracked; and advertisers had been found to scrape email addresses (and the same technology could scrape passwords) from browser login autofill boxes. (See also How-To Geek, BestVPN, LastPass, Reddit.) I also wanted to manage a single list of sites, using just one PM, not separate lists and PMs in each browser.
Vulnerabilities: LastPass vs. KeePass
I started using LastPass, probably because it seemed to offer the best free PM. Like its most commonly mentioned competitors (above), LastPass was cloud-based, meaning that I could use it to access my saved passwords from anywhere, as long as I had an Internet connection. This arrangement posed vulnerabilities, some mentioned above, that would not exist with a purely local solution, such as my old password spreadsheet. In particular, with a local (a/k/a offline) PM, there would be no remote connection that might somehow be intercepted, and no remote database that might be hacked. KeePass was the best-known offline alternative. In the words of How-To Geek (HTG, 2018), “KeePass is the best password manager for the DIYer who is willing to trade the convenience of cloud-based systems like LastPass for total control …. There’s no click, setup, and done with the KeePass system.”
It appeared that some users favored KeePass because it did not send the user’s data to the cloud; but it seemed there might be a misconception in that. LastPass explained that it encrypted user data at the device (e.g., on my computer) — that LastPass’s own servers never had access to user passwords, including the master password. Nonetheless, at this point I undertook a more detailed investigation of possible LastPass vulnerabilities, and what I learned was rather different from the standard story (above).
In particular, I wondered whether users would be more vulnerable to a hack of the LastPass code, conceivably affecting many users very quickly, whereas my KeePass PM would not be updated until I chose to update it. For example, Martin Vigo (2017) found a self-defeating flaw in LastPass’s MFA, and The Verge (2017) reported on a LastPass vulnerability that “could have allowed malicious attackers to steal users’ passwords.” That particular vulnerability, and several others that LastPass downplayed as “largely the same,” was discovered by a Google researcher named Tavis Ormandy. Ormandy had also discovered “a complete remote compromise” in LastPass the preceding year — which, in turn, followed research finding “critical vulnerabilities” in LastPass in 2014. The error Ormandy discovered in 2017 seems to have been rather glaring: Spiceworks described it as involving
a poorly designed content script that could be exploited with [just] two lines of JavaScript code. … LastPass users could have lost all their passwords by simply visiting a malicious website …. [and the script] can also be potentially abused to execute commands on the victim’s computer.
I was not sure why Ormandy had access to the LastPass code, but in any event what seemed to be his relatively solitary efforts would hardly substitute for the scrutiny of many eyes on an open source product like KeePass. According to Slant, “The most important reason people chose KeePass is: KeePass being open source means that a number of people have reviewed the code and found it to be secure.” Those reviewers included formal auditors — specifically, the European Commission’s Free and Open Source Software Auditing Project (EU-FOSSA, 2016) — who found “no critical vulnerabilities” (see Ghacks, 2016). EU-FOSSA seemed, in fact, to be quite invested in KeePass, to the point of offering a bug bounty.
Of course, having no critical vulnerabilities within its own code would not mean that KeePass kept its users completely safe. Along with garden-variety keyloggers, capturing anything that came along, KeePass was targeted by the Citadel malware. InfoWorld (2014) reported that Citadel “was configured to initiate a key-logging operation” if it found that the user’s computer was running KeePass, Password Safe, or neXus Personal Security Client. According to Zemana, “When a user visits an infected website, Blackhole exploits a vulnerability in the user’s web browser to install Citadel. Citadel could hijack control of users’ Windows PCs and even attempt to grab the master passwords of some third-party password managers, and block access to anti-virus vendor websites.” ZDNet (2014) said that Citadel was “highly evasive” and able to “bypass threat detection systems,” though Heimdal Security (2016) seemed to indicate that at least antivirus software was making progress in identifying Citadel variants. Another KeePass vulnerability, discovered in 2016 and apparently resulting in no actual losses, involved the potential download of a fake and potentially malicious KeePass update. (Presumably that vulnerability was resolved before the European Commission’s audit.) These vulnerabilities in KeePass did not entail the LastPass scenario of uploading user data (encrypted or not) to the cloud; these were instances where KeePass users were vulnerable due to a targeted keylogger and the possibility of a fake update, both occurring on their own computers.
I did not agree with Rouse (2018), who concluded that such incidents called for avoiding PMs altogether. I believed the better conclusion, regarding KeePass, was just that, as with Windows 10 updates, the risks of an unsafe KeePass update might be mitigated by waiting a bit before proceeding with it, giving my hastier brethren a chance to discover malware lurking in that update. As with LastPass, it was scary to think that some of the world’s cleverest computing minds would be looking for ways to circumvent the KeePass protections. But that still didn’t make a DIY solution superior overall. It seemed there was always the risk of a mistake and a breach, but the trend seemed to be toward tighter and more sophisticated protection in PMs generally.
There was one major exception. A StackExchange discussion highlighted the fact that ultimately PMs were not designed or able to protect against keyloggers. Rather, as one participant said, the threats that PMs were designed to protect against were (1) websites that would capture the user-entered password (especially insofar as PMs facilitated the use of unique passwords for each site, making one captured password useless on other sites) and (2) the computer thief who, not knowing the PM’s master password, would be unable to access the user’s accounts at various sites, as s/he could do if the relevant passwords were stored in the browser or were otherwise not encrypted. One participant in an earlier discussion suggested that the auto-type feature in KeePass could be supplemented with Two-Channel Auto-Type Obfuscation technology that would defeat keyloggers available at that time (2013), but another participant said, “The only strong defense against key-loggers is some type of one-time method, such as a one time password,” used with “a trusted connection to the server” (presumably including VPN) to ensure that the attacker “can’t intercept the one time password or hijack the user’s session.” These seemingly informed views departed from the remarks encountered, even at some mainstream and seemingly security-oriented websites, suggesting that — in the words of NordVPN (archived), for example — “When your passwords are not typed in but filled in automatically, keyloggers can’t get them.” For a PM, it appeared that the most reliable protection against keyloggers would require (1) installing the PM and setting up its master password before the keylogger could be installed and (2) using MFA.
At this point, unfortunately, MFA options for KeePass were apparently almost nonexistent. A SourceForge thread presented the impression that KeePass had no way of authenticating a user — that, indeed, KeePass would provide nothing more than the functionality of a password file on a USB thumb drive. The KeePass documentation seemed to describe this as the “static password mode” of YubiKey when used with KeePass. That documentation did say that a YubiKey could also function in a “one-time password mode” that might provide at least some of the functionality of Google Authenticator; but a GitHub discussion conveyed the views that this functionality was not great, and also that U2F was not on the horizon for KeePass, never mind FIDO2/WebAuthn. Thus, it seemed that KeePass could be vulnerable to malware capable of compromising its database when it was unlocked. Moreover, my reading in KeePass materials and discussions suggested that doing KeePass well could require a degree of technical ability and/or a level of ongoing attention that I might not devote — because, ultimately, I wanted to have a security plan in place, and to turn my efforts to other subjects, rather than continue to monitor my PM. Thus an offline PM like KeePass was not looking like my first choice in password management.
Another Possibility: Triage
Not using KeePass or my own spreadsheet to manage all passwords did not necessarily mean putting everything into LastPass. Another possibility would be to use LastPass to manage the many websites whose logins were not important to me. They wanted to control who was allowed to post comments on their site, or they wanted to consider me a customer, and that was fine. I didn’t want to spend my day entering authentication credentials to defeat hackers for each of these websites. As long as they didn’t have my credit/debit card on file, didn’t have a bank account containing my money, weren’t a government agency with information related to my identity, or were otherwise not likely to be hacked and not likely to hurt me if they were hacked, I would want my logins to such sites to be as secure as I could make them without inconveniencing myself, and therefore it made sense to assign these sites to LastPass, which could synchronize my login credentials for such sites in the cloud under a secure master password.
With those out of the way, I could focus my attention on the much shorter list of sites that involved actual amounts of money or significant matters pertaining to personal identity. For these sites — banks and such — I might enter a password from memory, or use a different PM, or make some other arrangement. The central point was that I wanted to get the clutter out of the way — I wanted to let LastPass handle the vast majority of logins — so that I could enforce a habit, and an arrangement, tending toward paying attention and thinking carefully when logging into a small number of highly sensitive sites. As Sophos (2017) said about multi-factor security, “[It] doesn’t scale well. The technology is great for a handful of sites, but apply it to dozens and it starts to weigh people down in exactly the same way passwords do.” At a certain point, as noted in NIST’s findings about passwords (above), the expectation of investing enormous time and effort into creating an ideal security situation could become counterproductive, because users who were trying to get things done in the real world might stop using it, or might use easy workarounds. As stated by Sophos (2014, citing Microsoft research on passwords),
User effort available for managing password portfolios is finite. Users should spend less effort on password management issues … for don’t-care and lower consequence accounts, allowing more effort on higher consequence accounts.
This way of seeing the situation — important vs. unimportant websites and passwords — was not necessarily best for all purposes. For example, The Register (2016) reported on a hacker who was able to piece together a picture of a random individual, with bits of information from various sites. But the key parts of that information came from publicly available sources over which my passwords would have no control in any event. The decision was, in effect, that less stringent, easier-to-use LastPass security for these many, relatively unimportant websites would probably be adequate.
Setting Up LastPass
Apparently the preferred way to install LastPass was via its Universal Installer, setting up a LastPass account at the same time. The Universal Installer would install the LastPass Binary Component, enabling additional features. I had LastPass installed on Firefox and Chrome, and those were almost always the only browsers I used. Various sites (e.g., How-To Geek, Lifehacker) offered LastPass setup instructions for those who were not already using it. Such sources seemed to favor the slow approach of bringing login credentials into LastPass one at a time: the user would visit various sites, the browser’s PM would log into them, and then LastPass would offer to save the login information. For a faster way of handling this, someone at SuperUser recommended using ff-password-exporter. There may also have been approved Firefox add-ons for the same purpose; see also e.g., UptakeDigital.
With LastPass installed, and with me logged into it, I went to its icon on my Firefox toolbar > left-click > Open My Vault > More Options (at the bottom left corner) > Advanced > Export. That gave me a list of all of the sites for which Firefox had stored my login credentials. I could instead have used the list provided in the main panel in Open My Vault, but LastPass had sorted my sites into maybe two dozen different categories (e.g., Business, Entertainment). I would have had to open and page through the items listed under those categories. It was easier to go down the list provided via Export — especially when I parsed that list, using Excel, to highlight the website names, apart from all the other information exported.
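The parse-and-highlight step done in Excel could equally have been scripted. A small sketch, assuming (hypothetically) that the export was a CSV file whose first column held the site URL — real exports would include more columns, such as username, password, and notes:

```python
import csv
import io

# A miniature stand-in for an exported credentials file; the column
# names here are illustrative, not an exact export format.
exported = """url,username,name
https://example-bank.com,alice,Bank
https://forum.example.org,alice99,Forum
"""

# Pull out just the site URLs, sorted, so they can be compared against
# another browser's export without exposing any passwords on screen.
sites = sorted(row["url"] for row in csv.DictReader(io.StringIO(exported)))
```

Extracting only the URL column has the side benefit noted in the text: the comparison can be done without ever displaying the password column.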
Since I wanted LastPass to completely replace the PMs built into my browsers, following advice from Dashlane and LastPass, I went into Firefox > menu > Tools > Options > Privacy & Security > uncheck “Remember logins and passwords for websites” > Saved Logins > review the list, to see if there was anything I needed to transfer to LastPass. As above, I didn’t yet expose and transfer credentials; but when I got the system secured, I would be returning here, to this place in the Firefox Options, to copy and then delete the login credentials for any sites remaining on this list that were not already entered into LastPass. Similarly, in Chrome, I went into menu > Settings > Passwords > turn off Offer to Save Passwords and Auto Sign-in. At the menu option at the top of the list of Saved Passwords, I started to use the option to Export, so as to compare against the Firefox list — but that option warned, “Your passwords will be visible to anyone who can see the exported file” so, as with Firefox, I postponed that step. Chrome Help said that, later, when I was ready to clear all saved passwords, I would have to go into the Clear Browsing Data option and specify Passwords.
Now it was time to adjust some settings in LastPass. They seemed to be scattered across a few different locations. First, I went into the LastPass icon on my Firefox toolbar > left-click > Preferences. A LastPass support page seemed to say that settings in this Preferences location would not be synchronized, so the user would have to set these items for each computer and each browser individually. Maybe I was doing it wrong, but it seemed that the option at the bottom of each tab (e.g., “Restore ‘General’ Defaults”) was not working in Chrome or Firefox on my computer.
Within these Preferences, I started with the General tab. The first question, there, was whether I wanted LastPass to log out automatically, either when all browsers were closed or when the computer was idle (i.e., no keyboard or mouse activity; see Mouse Jiggler) for a certain number of minutes. The best choices here would depend on the user’s situation. For me, sitting at home, there would be little risk of anyone exploiting an unattended computer, so I was relatively safe in unchecking both of these items. A search for further insight led to a StackExchange discussion that called on me to make certain decisions and assumptions, mostly discussed above:
- I would shut down my laptop (i.e., not leave it in sleep mode or otherwise logged in) whenever I was not actually using it, in locations where it might be lost or stolen;
- I was safer using autofill than not using it, on non-sensitive sites, because presumably LastPass would only autofill if it was looking at the intended website (i.e., it would verify https: and the domain, each time); and
- I would probably notice weird behavior if anyone else had actual control of aspects of my system.
On that basis, the conclusion seemed to be that, at home, it would be safer (and certainly more convenient) to stay logged into LastPass, for my non-sensitive websites, than to keep getting logged out and logging back in. I would probably do the same even when I took the laptop out to potentially risky locations; but for reasons explored in the discussion of VeraCrypt (above), I would probably want to power down the machine whenever it was out of my sight in such a location (e.g., cable-locked to a table among multiple patrons, in the public library, while I went to the restroom). As a fallback precaution against unexpected distractions, one solution would be to follow Leo’s advice: do tell LastPass to log out automatically after a period of inactivity that would be oddly long for the situation (e.g., 30 minutes in the library; ten hours overnight at home) and, at least in unsafe locations, foil other easy access (during e.g., brief distractions) by setting the Windows 10 password-locked screen saver to turn on after a short period of inactivity. When using public WiFi or otherwise operating in especially unsafe environments, it seemed that one-time passwords could also provide some security.
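The auto-logout rule being configured here is simple enough to model directly: log out once the time since the last keyboard or mouse activity exceeds the configured threshold. A sketch of that logic (purely illustrative; LastPass and the Windows screen saver implement this internally):

```python
import time

class IdleLogout:
    """Model of the auto-logout rule: log out after `timeout_seconds`
    with no keyboard or mouse activity. Illustration only."""

    def __init__(self, timeout_seconds):
        self.timeout = timeout_seconds
        self.last_activity = time.monotonic()

    def record_activity(self):
        """Called on any keyboard or mouse event; resets the clock."""
        self.last_activity = time.monotonic()

    def should_log_out(self, now=None):
        now = time.monotonic() if now is None else now
        return (now - self.last_activity) >= self.timeout

# A long timeout at home (ten hours), a shorter one in the library.
home = IdleLogout(10 * 60 * 60)
library = IdleLogout(30 * 60)
```

The same structure covers both fallbacks discussed above: a long LastPass timeout as a backstop, and a short screen-saver timeout for quick distractions in unsafe locations.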
Continuing the project of adjusting LastPass Preferences > General tab, I varied from the defaults by checking Hide Context Menu Options (which seemed to refer to the LastPass option that would come up when right-clicking on a webpage). In the Advanced tab, I turned on all options except Automatically login to sites, Warn before filling insecure forms, Enable infield popup, Respect AutoComplete=off, and Open login dialog when browser starts. Finally, I clicked Save. Note that the enabled items thus included the option to save one or more one-time passwords, for use on public wifi or in other places where the password might be copied (and also as an aid in surviving a lost master password — but be sure to change the master password, and remember it, before logging out of that one-time login!). To generate the actual one-time passwords, the advice here said to go to Firefox > LastPass icon > Open My Vault > More Options > Advanced > One Time Passwords > enter master password > Add a new One Time Password. After creating one or more of those, hit Print > output to paper, text, or PDF > keep them in a safe place. The option to Clear All OTPs would invalidate them. To use a One Time Password, it seemed I could not start from the LastPass icon in Firefox. Instead, I had to use the LastPass website > Log In > Log in using a One Time Password. Note again that the settings in this area, including the OTPs, had to be set up separately for each browser on each computer: my OTPs for one browser would apparently not work in another.
Turning to another area for LastPass settings, I went into Firefox > LastPass icon > left-click > Open My Vault > Account Settings > General. For this area, How-To Geek (2012) recommended a number of steps. Among those, I would not be increasing password iterations, as discussed above. But some of the other items were still good advice. I opted for a master password that was long but memorable, and then clicked the Show Advanced Settings button. That opened several possibilities requiring further exploration:
- Recovery Phone. A search oddly led to almost nothing on the Recovery Phone option. The slightly better option, still not turning up much, was a search for SMS Account Recovery. The advice here was to add a phone number where the user could receive a verification text message to recover from a lost master password. The advice said, “We strongly recommend adding an account recovery phone number if you store the password for your email address in LastPass.” Answers in a StackExchange discussion pointed out that, as noted above, “SMS is not a secure channel for MFA or account recovery,” though in this case the recovery SMS would simply activate a recovery one-time password (rOTP). An rOTP, like any other One Time Password, would work only on the specific computer and browser from which it was generated. So the attacker (and, for that matter, the person using the Recovery Phone option) would need access to that computer and browser to (re)gain master password access to LastPass. The rOTP was apparently different from a generic OTP because the rOTP would become available to the user only via the recovery SMS. I was not sure it was any more secure than other SMS, however. As noted at StackExchange, this orientation toward the browser suggested that, in some situations, clearing the browser cache could make recovery impossible.
- Export and Backup. It appeared that a LastPass installation set up via the Universal Installer (above) could export websites’ login data (though presumably not the user’s LastPass settings) to an encrypted file via LastPass icon > More Options > Advanced. LastPass also said the user could export usernames and passwords from that same location to a .csv file. LastPass said it was also possible to export plain text from the Online Vault, but I didn’t see that option. Presumably the user would want to use these latter options only on a secure machine, and encrypt the output. Though the support page wasn’t clear, I assumed all of those exports could be re-imported into a new PM account, at LastPass or elsewhere, if the user had to start over from scratch due to a lost master password for which no LastPass recovery options worked. In other words, it seemed advisable to save periodic encrypted LastPass backups, preferably made on a secure machine and then stored securely.
- Only Allow Login from Selected Countries. Here, LastPass said the user would be able to log into LastPass only from an IP address originating in the selected countries. I wasn’t sure how much help this would be, given advice from e.g., MakeUseOf (2018) on how to use a fake IP address. But, again, it seemed it couldn’t hurt, at least not for those of us who were almost always restricted to just one or two countries. As I learned the hard way (below), the country restriction needed to allow whatever country my VPN service would put me into, which in my case happened to be Canada.
- Disallow Logins from Tor Networks. LastPass said the purpose of this option was to prevent people from logging into LastPass via Tor. The idea was that Tor was often used by hackers who were trying to remain anonymous, and was rarely used by legitimate LastPass users. So unless the user planned to log into LastPass from Tor, this option would ordinarily be checked.
- Master Password Reverting. LogMeInInc (owner of LastPass) said the option to “Allow reverting of LastPass master password changes” would restore the database to where it was at the time of the last password change (thus wiping out all LastPass changes since then), and would only go back to password changes within the past 30 days. The user would submit a reversion request and receive an email valid for two hours. I didn’t expect to change my master password every 30 days, so I doubted this would be as useful as a simple backup. More than most other precautions, this one also seemed to open some unknowns and possible vulnerabilities. I decided against it.
- Remove Duplicates. Having seen duplicate entries in LastPass, this button seemed to provide a useful option — after verifying one’s backup (above).
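The earlier advice to choose a master password that is long but memorable is commonly implemented as a randomly generated passphrase. A sketch using Python's `secrets` module (the short word list is purely illustrative; a real generator would draw from a published list of several thousand words, such as a diceware list):

```python
import secrets

# Tiny illustrative word list; a real passphrase generator would use
# a large published list (a diceware list has 7,776 words).
WORDS = ["correct", "horse", "battery", "staple", "orbit", "velvet",
         "glacier", "mustard", "pylon", "quartz", "ribbon", "saddle"]

def passphrase(n_words=5, sep="-"):
    """Pick n_words uniformly at random with a CSPRNG."""
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())  # e.g., "glacier-staple-orbit-ribbon-pylon"
```

With a realistically large word list, five random words provide far more entropy than a typical human-invented password while remaining easy to memorize.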
In that same LastPass area (i.e., Firefox > LastPass icon > left-click > Open My Vault > Account Settings), for the moment, I saw that some of the other headings — specifically, Multifactor Options, Trusted Devices, and Mobile Devices — were going to require a separate discussion. The Never URLs heading produced a list of websites that I did not want LastPass to add or ask about. There were a few entries there already, but primarily this list would need to correspond with the list of important sites for which I planned to use KeePass. The Equivalent Domains section would allow me to designate multiple websites that all used the same login credentials (e.g., Verizon.com and Verizon.net). Finally, there was a URL Rules section that, at present, did not seem particularly compelling.
I moved on to another LastPass settings area: Firefox > LastPass icon > left-click > Open My Vault > More Options > Advanced. In this location, I noticed a history area whose functioning was not clear without further exploration — it seemed to go back only a few months, and to record only a few kinds of events — and a generation option that seemed to use only some top-row special keys (e.g., @#$%, but not {+\;). Some but not all of the items available here seemed to overlap with options on other LastPass menus. For instance, the import/export and one-time password functions, mentioned earlier, were also available at various points via Firefox > LastPass icon > left-click > More Options > Advanced.
Techlicious (2018) recommended choosing a PM that did not allow for master password recovery (since hackers could use that), using MFA, being wary of browser add-ons, and turning off autofill. The browser add-on version of LastPass was able to store passwords for websites, and also had a Secure Notes area that could be used to store passwords for non-browser applications (e.g., VeraCrypt) — or one could use the LastPass desktop app.
LastPass MFA
As mentioned above, the LastPass setup options included Multifactor Options, Trusted Devices, and Mobile Devices sections (at Firefox > LastPass icon > left-click > Open My Vault > Account Settings > menu). These all pertained to authentication — for the LastPass account itself, not for the many websites that a person might log into using LastPass. The Trusted Devices section said,
When you turn on multifactor authentication for your LastPass account, you can choose to ‘trust’ a device. When logging in on a trusted device, you will not be asked to provide your multifactor authentication. Trusted devices automatically expire after 30 days, after which you must re-trust them.
The concept appeared to be that, for example, if I trusted the security situation for my desktop computer at home, I could log into LastPass on that computer by just entering my LastPass master password, without any MFA. Next, the Mobile Devices section said,
Control what smartphones and tablets may access your LastPass account. By default, a unique identifier (UUID) is created to track each device, but you can edit the device label at any time.
I was not presently using LastPass on my smartphone. So, for now at least, I skipped that option. That left me with the Multifactor Options menu pick. There, I saw free, premium, and enterprise sections. SalesForce, the sole enterprise option, did not seem relevant to my situation. The premium options were YubiKey, a fingerprint sensor or smart card reader, and Sesame, described as a “Software application that can be placed on a USB key to generate one time verification codes.” Sesame was apparently intended solely for use with LastPass. Since it would require an additional device that apparently could be readily infected with malware, and since in any event I had not found USB flash drives consistently reliable, I decided against that. I didn’t have a smart card reader, and presently felt I would rather spend money on a YubiKey than on a fingerprint reader dongle. When LastPass called YubiKey a premium item, they meant that, to use it, I would need a premium account ($36/year). Since I probably wouldn’t buy that for the relatively unimportant accounts that I planned to entrust to LastPass, I would probably lean toward the free choices, there in the Multifactor Options section. LogMeIn (2018) said there were some options beyond the ones I had seen on the LastPass webpage, but these others (e.g., RSA SecurID, Symantec VIP) seemed to be less commonly recommended and/or to require enterprise accounts. Although Authy was not listed in either place, a Reddit post (see also Notenboom, 2017) did say that it was another option.
Another possibility, listed among the Multifactor Options in LastPass, was a Grid option that would give me a “Printable spreadsheet of numbers and letters used to enter different values when logging in.” It seemed this spreadsheet, stored securely somewhere, might provide a fallback in case of a lost phone. (See LogMeIn instructions in case of a lost or compromised grid.) Otherwise, if my LastPass account was compromised, a help site recommended steps to limit the damage; likewise for a lost master password. LastPass recommended using a “security email” account (below) as a backup in case of a lost phone for those using Google Authenticator. In the worst case, it seemed that losing one’s LastPass account might not be tragic, if s/he maintained a recent backup (via export, above) of his/her LastPass vault: presumably s/he could just start a new LastPass account and re-import the information.
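The Grid idea, a printed sheet of random values from which the service asks for particular cells at login, can be illustrated as follows (this shows only the concept, not LastPass's actual grid algorithm):

```python
import secrets
import string

ALPHABET = string.ascii_uppercase + string.digits

def make_grid(rows=5, cols=5):
    """Generate a printable grid of random single-character cells."""
    return [[secrets.choice(ALPHABET) for _ in range(cols)]
            for _ in range(rows)]

def cell(grid, col_letter, row_number):
    """Look up a spreadsheet-style cell, e.g. cell(grid, "B", 3)."""
    return grid[row_number - 1][ord(col_letter) - ord("A")]

grid = make_grid()
# At login, the service might ask for the values at, say, A1 and C4;
# only someone holding the printed sheet can answer.
print(cell(grid, "A", 1), cell(grid, "C", 4))
```

Because the challenge cells change at each login, an observer who sees one login cannot replay it, which is what makes the grid a form of second factor.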
I found several websites offering guidance in setting up LastPass MFA. It presently appeared that doing so with an authentication app could add some security without much cost or risk, and that LastPass would then simplify the task of managing passwords for many sites whose login credentials were not extremely important to me. To maximize security and avoid having to manage the same passwords in two or more places, I would want to make sure that my browsers were not saving passwords, and I would also want to tell LastPass not to save passwords for sites whose passwords I intended to keep in my memory or in some other PM.
To summarize, this section examines various aspects of selecting and configuring a password manager (PM). While PMs have displayed some vulnerabilities, it appeared that a PM like LastPass would protect against multiple potential threat vectors that users might not be aware of or constantly focused on. That is, it seemed that a commercial PM had a good chance of providing better protection than a do-it-yourself password management system. It also seemed that a cloud-based PM like LastPass would have better usability (and thus would be more likely to be used regularly) than a local PM like KeePass. Users did have the option of using their own memory, or some other PM, for a small number of high-priority websites that did call for special focus (e.g., banks). Underscoring the fact that a hacker could gain access to a computer’s sensitive data through its Internet connection, without ever having to go through its BIOS and VeraCrypt password barriers, LastPass offered many settings to close many potential Internet-related vulnerabilities. The section also looks at external precautions (e.g., MFA, recovery phone) that the user might consider, to enhance PM security.
Other Safe Browsing & Email Precautions
An Internet connection would typically bring more security risks than any other aspect of computing. This section examines the most commonly mentioned online security issues.
Browsing and Email Attacks
Phishing was the use of fraudulent social engineering techniques by a purportedly trustworthy entity, typically directing users to enter personal information (e.g., username and password) at a seemingly legitimate website, in order to capture that information (Wikipedia). PCMag (2018) noted that some fake websites were poorly done, such that even a moderately alert user should recognize something was wrong, and that a particularly attentive user should also notice an illegitimate webpage address (e.g., paypal.payme.com, as distinct from payme.paypal.com) as well as the absence of the https: (as distinct from http:) protocol and its accompanying padlock icon (though Krebs (2018) said “half of all phishing scams are now hosted on Web sites whose Internet address includes the padlock and begins with ‘https://'”). PCMag noted that the user could also check the domain using ICANN’s WhoIs service (see Wikipedia) to discover that, for example, the seemingly legitimate netbanksecure.com website was registered through CrazyDomains.com — which, for a bank, might seem a little funny. AVG (2017) suggested using VirusTotal and Google Safe Browsing webpages to determine whether a URL was legitimate.
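The paypal.payme.com versus payme.paypal.com distinction comes down to which registered domain the hostname actually ends in. A naive sketch of that check (it takes the last two labels, so real code would need the Public Suffix List to handle suffixes like co.uk):

```python
from urllib.parse import urlparse

def registered_domain(url):
    """Naive: take the last two labels of the hostname. Real code
    should consult the Public Suffix List (co.uk needs three)."""
    host = urlparse(url).hostname or ""
    return ".".join(host.split(".")[-2:])

def looks_like(url, expected_domain):
    return registered_domain(url) == expected_domain

print(looks_like("https://payme.paypal.com/login", "paypal.com"))  # True
print(looks_like("https://paypal.payme.com/login", "paypal.com"))  # False
```

This is essentially the comparison a password manager performs automatically before autofilling, which is why a PM refuses to enter credentials on a lookalike page.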
The advice, in any case, was not to log into a bank or other site by clicking on a link in an email, but rather by reaching the bank’s website the way you normally would (e.g., use your bookmark, or enter the URL manually). PCMag noted that the best antivirus software would block nearly all phishing attempts, and that a PM could also help. Forbes (2019) clarified that a PM would refuse to enter login credentials into an illegitimate webpage: in other words, it would detect that the website facing a user was not right, even if the user failed to notice. Another bit of email advice: The Atlantic (2018) recommended telling people not to send email attachments, but instead to put their files in Dropbox, and then view those files using Google Docs.
To expand on the remark regarding https:, note that the HTTPS Everywhere browser add-on (below) was designed to ensure that communications between the browser and the website would be encrypted. Unfortunately, it was possible to spoof HTTPS: see The Windows Club, 2014. To guard against that, Double Octopus (2018) recommended disabling Punycode (see Wandera) in Firefox and Chrome. (Punycode was, in effect, the ability to use characters that looked like the ordinary English alphabet, but were actually from other languages — so that what looked like a link to apple.com might not be: see Krebs, 2018.)
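Punycode lookalike domains are detectable in their encoded form, because internationalized labels carry an "xn--" prefix. A sketch using Python's built-in idna codec (the Cyrillic "а" below is the classic lookalike character):

```python
def has_punycode(hostname):
    """Flag internationalized (IDNA) labels, which display as
    lookalike characters but are encoded with an 'xn--' prefix."""
    return any(label.lower().startswith("xn--")
               for label in hostname.split("."))

# A Cyrillic 'а' in place of the Latin 'a' in "apple.com"
# IDNA-encodes to an xn-- label, exposing the spoof.
spoofed = "аpple.com".encode("idna").decode("ascii")
print(spoofed)                    # an xn--... form of the lookalike name
print(has_punycode(spoofed))      # True
print(has_punycode("apple.com"))  # False
```

Disabling Punycode display in the browser has the same effect: the raw "xn--" form is shown in the address bar, where it no longer resembles the legitimate domain.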
A man-in-the-middle (MITM) attack was so named because an intruder would intervene in communications between two parties, relaying and possibly altering their messages to one another (Wikipedia). Norton listed several kinds of MITM attacks, including an attacker’s use of public WiFi connections to glean login credentials and anything else that the user might enter. To guard against MITM attacks, Norton repeated some of the phishing advice (above) and recommended always using VPN on public WiFi networks, using updated antivirus software, and, at home, making sure the router and connected devices used strong, unique passwords. AVG echoed those precautions and suggested that a MITM attack could be noticeable from sudden, unexplained, long page load delays and from URLs suddenly switching to http:. For technically adept users, AVG also suggested using tools like Wireshark. Double Octopus (2018) further advised avoiding public WiFi wherever possible and disabling automatic WiFi connections, so that one’s computer would not automatically connect to a public WiFi network with the right name but created by the wrong people.
Firewall
An Internet connection would probably be needed for malware to receive commands from a remote computer, or to send data to such a computer. Firewall software could be helpful in controlling such activity. For instance, WonderHowTo said, “It’s possible lazy [keylogging] attackers won’t go through the effort of disguising their payloads to appear as being normal DNS (port 53) or HTTP (port 80) transmissions. A firewall might catch suspicious packets leaving your computer on port 35357.”
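The quoted idea, flagging outbound traffic that is not on a commonly allowed port, can be sketched as a simple filter (the connection snapshot is hypothetical; a real tool would read live connections from the OS, e.g., via netstat):

```python
# Ports commonly allowed outbound; anything else gets flagged.
COMMON_PORTS = {53: "DNS", 80: "HTTP", 443: "HTTPS"}

def flag_suspicious(connections):
    """connections: iterable of (process_name, remote_port) pairs.
    Returns the pairs whose remote port is not commonly allowed."""
    return [(proc, port) for proc, port in connections
            if port not in COMMON_PORTS]

# Hypothetical snapshot of outbound connections.
snapshot = [("firefox.exe", 443), ("svchost.exe", 53),
            ("updater.exe", 35357)]
print(flag_suspicious(snapshot))  # [('updater.exe', 35357)]
```

A firewall's outbound rules implement roughly this logic, with the caveat from the quote: a careful attacker will tunnel over port 53 or 80 precisely to pass such a filter.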
DigitalCitizen said that I could enter the built-in Windows Defender Firewall with Advanced Security through the Control Panel, by using Start > Windows Administrative Tools, or simply by running wf.msc. The “Advanced Security” part involved manual editing of rules to permit or forbid access. I found myself going into that when I installed NordVPN (below).
Various sources listed potential standalone alternatives to the built-in Windows firewall (e.g., ZoneAlarm, Comodo, GlassWire, Tinywall), or recommended the firewalls built into various antivirus suites (e.g., WindowsReport), but multiple sources (e.g., MakeUseOf, WindowsCentral, MakeTechEasier, PCMag) said the default Windows firewall was scoring very well in recent tests, imposed little burden on the operating system because it was built-in, and would be sufficient for most users.
Secure Operating Systems
Although this post addresses security in Windows 10, important details could vary according to the specific configuration. For instance, perhaps some Windows 10 vulnerabilities could be mitigated by running Windows 10 within a virtual machine on a Linux system. Going further afield, a search led to a StackExchange discussion of how one could gauge an operating system’s (OS) security. A lengthy answer to that question said,
Windows, especially 10, is actually surprisingly secure. … [I]t is likely more secure than an equivalently feature-rich Linux kernel …. The biggest issue … [is] that you cannot compile out features you do not need or audit the source code. Additionally, the Snowden leaks have shown that Microsoft gives the intelligence community early access to 0days before they are patched. … Windows may be useful if you do not care about privacy and want a well-supported system that has decent resistance to exploitation.
The “0days” in that quote was shorthand for “zero-day.” Wired (2014) explained that “zero days” referred to the amount of prior warning of the vulnerability. A zero-day vulnerability would be a new one, not yet known to software vendors and antivirus firms. The point here was that Microsoft apparently allowed law enforcement to use zero-day vulnerabilities to access computers whose owners were not yet aware of the vulnerability. Thus Microsoft was relatively good about security, but not about privacy.
That StackExchange answer named several alternative OSs. For anonymity, the answer recommended Whonix or Tails — both of which used the Tor network, according to DGR News Service (2018); a key difference was that Tails ran from a bootable USB drive or DVD. According to Wikipedia, Tor (an acronym for the original project name: The Onion Router) bounced traffic through a series of servers, slowing performance considerably but also making it difficult if not impossible to trace Internet activity (e.g., instant messages, visits to websites, online posts) to the user. Wikipedia said that the U.S. National Security Agency (NSA) “targets Tor users for close monitoring” — ironically, since the U.S. government reportedly provided most of the funding for Tor’s development. The Tor website said that, among other things, “Journalists used Tor to communicate more safely with whistleblowers and dissidents” (see also Cloudflare’s Project Galileo). Digital Trends (Nicol, 2016) explained that Tor was useful, not only to protect privacy, but also as a gateway to the Deep Web, which meant simply the 80% of the world’s webpages that were not cataloged by Google and were reachable only through specific protocols. For example, Tor reportedly provided a route past China’s censors. Tor’s own notoriety was due to its use in various criminal activities. Nicol noted that Tor had vulnerabilities, especially for users not well versed in it, who might unknowingly reveal where they had been and might find themselves in legal trouble merely for landing on the wrong website. EFF among others offered a tutorial. AVG (2018) said,
Law enforcement agencies like the NSA and FBI, and even more troubling agencies abroad, have been accused of setting up dozens of Tor exit nodes. As a tool so often used to commit cybercrime, you can bet Tor is a major target for intelligence services. … If your online activism is putting your life at risk, we recommend using Tor. Otherwise, a VPN is probably all you need to hide your IP.
(Users interested in the Tor network would perhaps also be interested in the Invisible Internet Project (I2P). Wikipedia characterized it as a free and open-source “volunteer-run network of roughly 55,000 computers distributed around the world” providing “censorship-resistant, peer to peer communication.” The I2P website welcomed volunteers but appeared half-dead. CleberTech said “I2P is a network in itself, isolated from other networks.”)
Among other operating systems, the StackExchange answer preferred OpenBSD for simplicity, security, and stability, with inferior hardware and software compatibility. Fedora was “among the most secure while still being user-friendly.” Qubes was based on Fedora, with an emphasis on isolating hardware for security reasons. ChromeOS (i.e., the Linux version) was “possibly the most secure Linux system out of the box” but “obviously not great” for privacy.
With a somewhat different perspective, OnlinePrivacyTips (Raudo, 2018) considered OpenBSD most secure, but unfortunately it required “a comparatively informed and experienced user.” Qubes, second on the list, was considered “incredibly difficult to use” but also “incredibly safe” because the isolation of each program prevented a security breach in one from affecting others. Mac OS and Windows were deemed “relatively insecure,” with Windows having at least the advantage of responding rapidly to security flaws. Linux could “be compromised as easily as Windows or Mac … [but] security flaws find it hard to come into existence in an OS which is constantly upgraded and verified by [so many] people” and hackers were not very excited about an OS existing in so many different forms, such that an attack might reach only a subset of the already small user base. The Daily Dot (Hubby, 2016) put OpenBSD second on her list, behind Linux.
Secure Browsing
On the most general level, the National Security Agency (NSA) of the U.S. government recommended a handful of basic steps to achieve more secure web browsing. These steps included enabling automatic browser updates, enabling reputation services (e.g., Microsoft SmartScreen and Google Safe Browsing), disabling unsafe plugins and extensions, disabling unnecessary features (e.g., hardware acceleration, untrusted fonts, perhaps VBScript and ActiveX), and browser isolation (e.g., running the browser in the cloud, or in a container, with no access to the local network). Some of this advice appeared to be directed at developers rather than users. For instance, by early 2019, Chrome and Firefox had implemented browser isolation, or were in the process of doing so.
A search led to many sources (e.g., ExpressVPN, AddictiveTips) that seemed to agree that Chromium and Firefox (along with security-oriented browsers based on Chromium and Firefox) were best positioned to provide secure browsing. There may have been other options not covered by such sources. For example, Citrix offered a secure browser service, apparently oriented toward enterprises, with a particular emphasis on browser isolation. NordVPN recommended what could seem like a poor man’s version of browser isolation:
Use a separate private browser window for each different website while surfing the internet. For example make sure you do not have any other windows opened while accessing your Google account, so data would not be traced or associated with it.
I found it possible to commence private browsing by using Ctrl-Shift-N in Chrome or, in Firefox, right-clicking on a link > Open in New Private Window. Compared to browser isolation, private browsing seemed to be intended primarily to limit the gathering of data; it did not seem to prevent the execution of code for other purposes.
Chromium was an open-source browser developed and maintained by the Chromium Project. Google had apparently commenced that project in 2008. It appeared that Google’s Chrome browser continued to use Chromium source code, to which Google added certain features in its proprietary Chrome browser. Computerworld (2018) said, “Chromium is rough, and not just around the edges. In practical terms, the latest version of the Chromium browser will be far buggier, much more prone to crashes, than even the rawest version of Chrome.” In my brief research, it was not clear whether Google provided feedback, so as to help the open-source volunteers improve Chromium.
It seemed that the strongest security-oriented alternatives to (or improvements upon) Chromium and Firefox were the Tor browser (4.4 stars from 638 raters on Softpedia), designed to use (but apparently not directly related to) the Tor network, and Brave, a Chromium-based browser that also used the Tor network for its Private Tabs Using Tor option (Computerworld, 2018). Other Chromium- and Firefox-based browsers, oriented toward security to varying degrees, and recommended by various sources for their varying degrees of features and user-friendliness, included Epic, Pale Moon, Waterfox, Avira Scout, Freenet, SRWare, Dooble, Opera, and Vivaldi. It had lately been announced that Microsoft Edge would be switching to a Chromium base, but would presumably continue to exploit user data in largely the same way as before. Since I rarely used any browsers other than Chrome and Firefox, the following discussion is largely limited to those two.
Extensions (i.e., add-ons) were available for many browsers, including Chrome and other Chromium- and Firefox-based browsers. Some extensions could contribute to improved browsing security. In Firefox Quantum, multiple sites (notably VikingVPN and PrivateInternetAccess) recommended a handful of extensions intended to enhance privacy and security, including particularly uBlock Origin, uMatrix, Disconnect, HTTPS Everywhere, Privacy Badger (see Reddit), NoScript Security Suite, and Lightbeam. Simply installing these extensions would be a significant improvement; various sources (e.g., TechRepublic, Ghacks) advised on further tweaking for best effect. In Chrome, a search led similarly to lists of security-oriented extensions. Aside from those with names or capabilities discussed elsewhere in this post (e.g., LastPass; an antivirus program’s Chrome extension), these included especially HTTPS Everywhere, Click&Clean, Privacy Badger, uBlock Origin, ScriptSafe, and Vanilla Cookie Manager; note also Google’s own Password Checkup.
Meanwhile, other extensions could detract from security. How-To Geek (HTG, 2017) said that, among other things, extensions could function as keyloggers (e.g., capture passwords and credit card details) for every visited site, and track everything the user did online: “Even an extension that only does a minor thing to web pages you visit may require access to everything you do in your web browser.” HTG cited an instance where the developer of a valid and useful Chrome extension fell for a phishing attack, allowing the attacker to upload a modified version of the extension to more than a million computers. HTG recommended paring down the list of extensions to those that were important, using extensions produced only from trusted sources, and paying attention to the permissions that an extension requested upon installation. Consumer Reports (2018) cited a different example (involving a browser extension that accumulated private Facebook messages) to support the same advice (see also Hacker News, 2018). Reddit carried a discussion of offers to buy a popular extension, from people seeking to use that extension for malicious purposes (see also Bleeping Computer, 2017). Kaspersky (2018) cited yet another example, where extensions that supposedly just provided sticky notes were actually clicking on ads in users’ browsers to generate profits for the hackers. Kaspersky said, “If an extension already installed on your computer requests a new permission, that should immediately raise flags; something is probably going on.” Kaspersky also said that its Internet Security package could detect at least some malicious browser add-ons; presumably that was also true of other competing packages (below).
Various sites recommended certain configuration steps to increase privacy and security. Regarding Firefox, the following paragraphs present the suggestions offered by VikingVPN, which several sites cited. I did later wonder whether the Restore Privacy guide to Firefox privacy would have been a better source. Note the existence of other guides, varying in some regards (e.g., by BestVPN, PrivateInternetAccess, SecurityGladiators). I did not attempt a comprehensive review and reconciliation of such sources. Some such sites tended toward detailed tweaks that, I suspected, could increase browser instability. It would apparently be up to the individual user to decide how far to go down this road. A user who was not certain how such matters would play out for purposes of his/her priorities (i.e., the tradeoff between security and functionality, as well as the payoff for his/her time invested in taking the recommended steps) might find it advisable to implement some of the following suggestions gradually, trying one or two for a few days before trying another few.
For Firefox Quantum, some of these changes were available in the Options area, which one could reach either by using the top menu > Tools > Options or by using the hamburger menu (i.e., three parallel horizontal bars, on a button at the right end of the address bar) > Options. Within the Options area, specific suggestions included the following, along with other suggestions noted elsewhere in this post:
- Keep Firefox updated — though it appeared that, while updates would ideally provide cutting-edge security enhancements, sometimes users found the side effects unbearable.
- Search. Allow only one or two one-click search engines. Do not provide search suggestions.
- Privacy & Security. Don’t remember passwords. Never remember history. Warn when sites try to install add-ons. Disallow all Firefox data collection. Block dangerous downloads. Warn about unwanted and uncommon software. Regarding certificates: ask you every time, and query OCSP responder servers.
- Firefox Account > Manage Account (or just go to the Firefox webpage) > Account recovery > Enable > Generate recovery key > download, print, screen capture, memorize, or copy and paste it > store it somewhere safe so you’re never locked out of your Firefox account. This code can be used only once. These steps replace the Enable > Generate buttons with Change > Revoke.
- Firefox Account > Manage Account > Two-step authentication > Enable > scan using an authentication app (e.g., Authy, above). In Authy, the process for scanning is Menu > Add Account > Scan QR Code.
- about:config. (These steps can be automated, but probably should not be, until the user has first verified that the individual items being changed exist, and have the same form and function, in the current version of Firefox.) Go to Firefox > address bar > about:config > search for each of the following items > double-click to toggle (or enter) the indicated value:
media.peerconnection.enabled (disables WebRTC): false
security*des (should lead to something like security.ssl3.rsa_des_ede3_sha; disables weak Firefox encryption): false
security.tls.version.min (sets the minimum TLS version): 3
security.ssl.require_safe_negotiation (makes Firefox reject insecure negotiation with websites): true
security.ssl.treat_unsafe_negotiation_as_broken: true
browser.formfill.enable (prevents Firefox from remembering form information): false
privacy.resistFingerprinting (hinders browser “fingerprinting”): true
camera.control.face_detection.enabled (disables face detection using cameras): false
browser.cache.disk.enable (prevents Firefox from caching data to disk): false
browser.cache.disk_cache_ssl: false
browser.cache.offline.enable: false
dom.event.clipboardevents.enabled (prevents websites from getting access to the clipboard): false
geo.enabled (disables geolocation): false
network.cookie.lifetimePolicy (discards all cookies whenever the browser closes): 2
plugin.scan.plid.all (prevents Firefox from reporting installed plugins, so as to hinder fingerprinting and to block websites from intentionally removing content): false
media.webspeech.synth.enabled (disables web speech synthesis): false
Finally, search for “telemetry” and set the following items to false, to prevent storage of metadata about your connection:
browser.newtabpage.activity-stream.feeds.telemetry
browser.newtabpage.activity-stream.telemetry
browser.pingcentre.telemetry
devtools.onboarding.telemetry-logged
media.wmf.deblacklisting-for-telemetry-in-gpu-process
toolkit.telemetry.archive.enabled
toolkit.telemetry.bhrping.enabled
toolkit.telemetry.firstshutdownping.enabled
toolkit.telemetry.hybridcontent.enabled
toolkit.telemetry.newprofileping.enabled
toolkit.telemetry.unified
toolkit.telemetry.updateping.enabled
toolkit.telemetry.shutdownpingsender.enabled
- Disable untrusted root certificates. VikingVPN said, “This step requires some time and patience, and is only for the most privacy conscious users who are concerned with the advent of mass surveillance. … There have been multiple incidents where governments or individuals have compromised the CA system to steal information. So this is a real threat with real consequence.” VikingVPN said that, among the hundreds of CAs trusted by Firefox, some were “troubling.” Examples included China’s censorship/Great Firewall organization, other governments, and “RSA Security who compromised their own encryption for the NSA.” In general, the recommended process was to “remove the trust of any certificate authorities that you do not regularly use.” To view them, go to Firefox > Options > Privacy & Security > Certificates > View Certificates > Authorities tab in Certificate Manager. It appeared that there had been some changes since the VikingVPN page was created: I did not find the China entity anymore, and WCCFTech (2018) said that Firefox was now distrusting even the Google and Symantec root certificates, because lax security at such companies was reportedly allowing hackers to obtain certification for their malware, so that it would be trusted. VikingVPN said,
If you visit a site that uses a certificate that your PC does not trust, you will get a big ugly warning (rightfully so) warning you about proceeding to the site. You should be especially cautious if you get this error for a site that you can normally visit without errors, because this means that the site’s cert has changed, which can indicate that you are being led to a fake website.
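Rather than clicking through about:config by hand, preference changes like those listed above can be captured in a user.js file placed in the Firefox profile folder, which Firefox reads at startup. Below is a minimal sketch that generates such a file; the pref names are copied from the list above, and the same caution applies: verify that each pref still exists, with the same meaning, in your Firefox version before using it.

```python
# Sketch: emit a user.js file capturing a subset of the about:config
# tweaks discussed above. Firefox applies user_pref() lines from a
# user.js file in the profile folder at each startup.
PREFS = {
    "media.peerconnection.enabled": False,   # disable WebRTC
    "browser.formfill.enable": False,        # no saved form data
    "privacy.resistFingerprinting": True,
    "geo.enabled": False,                    # disable geolocation
    "network.cookie.lifetimePolicy": 2,      # session-only cookies
    "toolkit.telemetry.unified": False,
    "toolkit.telemetry.archive.enabled": False,
}

def to_user_js(prefs):
    """Render prefs in the user_pref() format that user.js expects."""
    lines = []
    for name, value in prefs.items():
        if isinstance(value, bool):
            js = "true" if value else "false"
        else:
            js = str(value)
        lines.append('user_pref("%s", %s);' % (name, js))
    return "\n".join(lines) + "\n"

print(to_user_js(PREFS))
```

The output would be saved as user.js inside the profile folder; prefs set there override the values shown in about:config on every restart, which is also why a stray user.js can make a setting seem impossible to change.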
MakeUseOf (2019) recommended making a backup of the Firefox profile after completing such tweaks. The recommended process was Firefox > menu > Help > Troubleshooting Information > Application Basics section > Profile Folder > Open Folder > go up one level in the File Explorer window that opens, probably to a Profiles folder > close Firefox, but leave this File Explorer window open > back up the xxxxxxxx.default folder (where “x” stands for a random digit or character). Make the backup by copying and pasting that folder (or by right-clicking and using a compression tool (e.g., 7-Zip) to make a compressed copy) somewhere else. (Make sure the compression tool is not set to delete the original after compressing.)
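That manual copy-and-compress backup can also be scripted. Here is a minimal standard-library sketch; the profile path varies per machine (find it via the Troubleshooting Information page as described above), and Firefox should be closed first.

```python
import pathlib
import shutil

def backup_profile(profile_dir, dest_dir):
    """Zip a Firefox profile folder and return the archive's path.
    Close Firefox first, so that no files are mid-write."""
    profile = pathlib.Path(profile_dir)
    # Name the archive after the profile folder, e.g. xxxxxxxx.default
    base = pathlib.Path(dest_dir) / profile.name
    return shutil.make_archive(str(base), "zip", root_dir=str(profile))
```

Unlike a misconfigured compression tool, shutil.make_archive only copies: the original profile folder is left in place.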
Various sites also recommended ways to make Chrome more secure. For example, WhatIsMyIPAddress linked to three videos (though two of the three turned out to be more concerned with performance than with security), and TechAdvisor and especially Heimdal Security recommended a number of specific measures. Collectively, those and other sources suggested the following steps (note that some of the following links may only work in Chrome):
- Keep Chrome updated.
- Google Security Checkup and Google Privacy Checkup (see TechAdvisor, 2018).
- Google Account security page: set a secure password; set up Google two-step verification, with a backup phone number and printed backup codes (TechAdvisor, 2016).
- Remove unwanted ads, pop-ups & malware webpage > “Remove unwanted programs” section > Check your computer for malware.
- In Chrome, go to Settings > People section > Sync > Encryption options > Data is encrypted with your sync passphrase (so that a hacker who gains access to your account cannot sync his/her changes to your other devices).
- Settings > Autofill > Passwords > turn off Offer to save passwords and Auto Sign-in. On a secure machine, export saved passwords, so as to import into your password manager. When that import succeeds, clear saved passwords.
- Settings > Advanced > Privacy and Security section > Content settings > Cookies > Turn on “Keep local data only until you quit your browser” and “Block third-party cookies,” and consider turning off “Allow sites to save and read cookie data.” A look into “See all cookies and site data” will be sobering, in terms of the numbers of cookies used and also the possible extent of lost capability if all are removed.
- Settings > Advanced > Privacy and Security section > Content settings > JavaScript > disallow — because “No Javascript offers … greatly improved page load times and generally a cleaner Internet experience.” Heimdal described this as a “rather hardcore” alternative to simply running extensions like uBlock Origin, because “Sites such as YouTube or Google Docs need [JavaScript] to function, but so do advertising, pop-up software and a whole host of other spammy elements” as well as “malicious ways” in which criminals used it “to infect your device.” Another approach was to disallow JavaScript, but then whitelist sites on which it should run. How-To Geek disapproved of disabling JavaScript.
- Settings > Advanced > Privacy and Security section > Content settings > Unsandboxed plugin access > turn on “Ask when a site wants to use a plugin to access your computer” if it isn’t already on.
- Settings > Advanced > System > turn off “Continue running background apps when Google Chrome is closed.”
- Google described site isolation as a protection that would load each website in its own process — meaning extra security at the price of 10% increase in memory use. The more detailed documentation said it was enabled by default, starting with Chrome 67 (i.e., May 2018). That seemed to be true: at the recommended Chrome Experiments page, there no longer seemed to be an enable site per process entry.
Hiding IP Address
Our good friend Tim Fisher at Lifewire (2018) lucidly defined an Internet Protocol (IP) address as “an identifying number for a piece of network hardware” — a unique identifier, it seemed, that allowed devices to communicate over a network. For this purpose, the Internet evidently counted as a network. Among the several types of IP addresses Fisher identified, the one relevant here was the public IP address for one’s computer, assigned by one’s Internet Service Provider (ISP) and used on the Internet. That public IP address was perhaps best identified by WhatIsMyIPAddress, which provided both the IPv4 (in the form of 111.111.111.111, where 1 could represent any digit) and IPv6 (in approximately the form of xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx, where x could represent any digit or lower-case letter) addresses for the computer. Wikipedia explained that IPv4, first deployed in 1983, eventually ran out of numbers and was thus supplemented (but not necessarily eliminated) by IPv6 in the 2000s.
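The two formats can be told apart programmatically, and private router-assigned LAN addresses distinguished from the public kind an ISP assigns, with Python’s standard ipaddress module. A small illustration (the addresses below are just examples):

```python
import ipaddress

def describe(addr):
    """Report the IP version and whether the address is private
    (LAN, e.g. router-assigned) or public (ISP-assigned)."""
    ip = ipaddress.ip_address(addr)
    kind = "private" if ip.is_private else "public"
    return "IPv%d, %s" % (ip.version, kind)

print(describe("192.168.1.10"))          # a typical LAN address
print(describe("2606:4700:4700::1111"))  # a public IPv6 address
```

Note that a private address like 192.168.1.10 is what your router sees; the public address reported by WhatIsMyIPAddress belongs to your connection as a whole.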
How-To Geek (HTG, 2018) listed several reasons why the user might want to hide his/her public IP address. Some involved attempts to evade local law enforcement. HTG offered the example of a person in China, where much material would be censored, or Germany, which blocked copyrighted YouTube material. In a different vein, HTG said the user might want to hide his/her IP address “simply for more privacy and to prevent misuse of your personal information,” such as the sale of information about which webpages the user visited, how long s/he stayed there, and where the user was located at the moment. A search led to various means of doing an IP lookup. Cloudwards (2019) said that someone’s IP address could be obtained from an email message. For example, in Gmail, the steps were to open the message > click on the three-vertical-dot menu at the upper right corner of the message, next to the Reply arrow (not the other three-dot menu further up, in the Chrome address bar) > Show original > Copy to clipboard > paste into WhatIsMyIPAddress’s Trace Email page > Find Email Sender > scroll down to see source IP address > copy > paste into WhatIsMyIPAddress’s IP Lookup page. AVG (2018) indicated that websites similarly tended to collect IP data on their visitors:
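The “Show original” lookup can be approximated in code: the originating address usually appears in square brackets in the bottom-most Received header (the hop closest to the sender). A sketch using the standard email module; the message text and the 203.0.113.45 address below are made-up illustrations (203.0.113.0/24 is a reserved documentation range):

```python
import email
import re

# A made-up raw message with two Received hops; real messages may
# have many more, and some senders' IPs are hidden by their provider.
RAW = """\
Received: from mx.example.net (mx.example.net [198.51.100.7])
\tby mail.destination.example with ESMTP id 111
Received: from sender-pc (host.example.org [203.0.113.45])
\tby mx.example.net with ESMTPSA id 222
From: someone@example.org
Subject: hello

body text
"""

IPV4_IN_BRACKETS = re.compile(r"\[(\d{1,3}(?:\.\d{1,3}){3})\]")

def source_ip(raw_message):
    """Walk the Received headers bottom-up (the last one listed is the
    hop closest to the sender) and return the first bracketed IPv4."""
    msg = email.message_from_string(raw_message)
    for header in reversed(msg.get_all("Received", [])):
        match = IPV4_IN_BRACKETS.search(header)
        if match:
            return match.group(1)
    return None

print(source_ip(RAW))
```

The result is what one would then paste into an IP lookup page; as the post notes, webmail providers like Gmail often substitute their own server addresses, so this works best on directly delivered mail.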
If a tyrannical government, litigious record company or pesky advertiser matches your IP address to your actual identity, which is all too easy, it’s open season on your online activity.
So if you care about internet privacy and anonymity, blocking your IP address is the very first thing you should do.
To hide one’s IP address, HTG (2018) suggested three techniques. One was to use the Tor browser, which HTG called “great for extreme anonymization” but “very slow.” Another was to use a proxy server, in which case “The internet servers you visit see only the IP address of that proxy,” which then forwards information on to the user. Unfortunately, HTG said, many proxy servers spied on their users or inserted ads into their browsers. ZDNet (2017) agreed that users would get much better protection from HTG’s third option, VPN (below), because “VPNs don’t just spoof the originating IP address, they also encrypt and secure all internet traffic between your machine and the VPN service,” and that’s “a huge difference.” BestVPN (2018) likewise “strongly recommend[ed]” against using free public proxies, because while some did encrypt their traffic, they tended to be slow and unstable, and a fair percentage were unsafe. Lifewire (2018), with a similar warning, did offer a list of free proxies.
DNS Server
As just described, signing up for a VPN or proxy service would route traffic from and to the user’s computer through that (VPN or proxy) server, providing opportunities for that server to do good things (e.g., encrypt the data, hide the user’s actual IP address from third parties) or bad things (e.g., insert ads, collect and sell data about the user). This was different from the function of a DNS server. Wikipedia described the Domain Name System (DNS) as a naming system for computers and other resources connected to a network, especially the Internet. Cloudflare, detailing the process, characterized it as “the phonebook of the Internet,” keeping lists of domain names (e.g., Google.com, CNN.com) and their corresponding unique IP addresses, and completing the online connection between the company name entered by the user (e.g., CNN.com) and the IP address where that company could be found online (e.g., 157.166.226.26).
Cloudflare claimed that its DNS services offered “the fastest performance of any managed DNS provider.” That meant that users might get connected to a website a bit faster, not that the flow of data to the user’s computer from that website (once connected) would be any faster. Cloudflare (2018) further indicated that networks (including the user’s ISP, public Wi-Fi connections, and mobile network provider) would automatically have “a list of every site you’ve visited while using them” (but not information from the user’s interactions with such websites), and that this data could be sold. At the same time, Cloudflare said, governments could prevent DNS lookup services in their countries from completing the connection for websites they deemed troublesome (e.g., Turkey blocked Twitter after it carried reports of government corruption). It was also reportedly possible for intruders to spoof the desired website — in effect, to send the user to the wrong place, where s/he might essentially give the intruder his/her login credentials.
Cloudflare said that the consumer DNS service it constructed would respond to these needs, and would furthermore not save IP addresses and would wipe all transaction logs within 24 hours. (A Reddit discussion questioned the integrity of the accounting firm Cloudflare retained to audit its activities, at least ostensibly to verify that it was living up to its promises.) The address for Cloudflare’s DNS service would be entered (replacing the ISP’s default preference) at Win-R > ncpa.cpl > right-click on the desired network > Properties > Networking tab > select Internet Protocol Version 4 (TCP/IPv4) > Properties > Use the following DNS server addresses > enter the desired values. For Cloudflare’s DNS service, the numbers were 1.1.1.1 (preferred) and 1.0.0.1 (alternate). Google’s well-known alternative, renowned for worldwide access but not privacy, was 8.8.8.8 and 8.8.4.4. Another alternative commonly encountered on lists of best (especially most private and/or secure) DNS services: OpenDNS Home (208.67.222.222 / 208.67.220.220). To varying degrees, these services would apparently allow users to choose the extent to which they preferred to filter websites, individually or in certain categories (for purposes of e.g., parental control).
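On Windows, the same change can be scripted with netsh rather than clicked through in ncpa.cpl. The sketch below only builds the command strings; they would be run in an elevated Command Prompt. “Ethernet” is an assumed adapter name; check yours first with netsh interface show interface.

```python
# Builds (but does not run) netsh commands to set the DNS servers
# discussed above on a Windows network adapter.
RESOLVERS = {
    "Cloudflare": ("1.1.1.1", "1.0.0.1"),
    "Google": ("8.8.8.8", "8.8.4.4"),
    "OpenDNS": ("208.67.222.222", "208.67.220.220"),
}

def netsh_commands(provider, interface="Ethernet"):
    """Return the two netsh commands setting primary and alternate DNS."""
    primary, secondary = RESOLVERS[provider]
    return [
        'netsh interface ip set dns name="%s" static %s' % (interface, primary),
        'netsh interface ip add dns name="%s" %s index=2' % (interface, secondary),
    ]

for cmd in netsh_commands("Cloudflare"):
    print(cmd)
```

Reverting to the ISP’s default is one command: netsh interface ip set dns name="Ethernet" dhcp.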
Security Email and Phone
It seemed that security measures discussed in this post could encourage a user to maintain at least three distinct email addresses. One would be any regular (e.g., Hotmail, Outlook, Live.com) Microsoft email address that a user may already have and use for various purposes. A second could be a Microsoft email address that was created only to generate a Microsoft account that could be used in Microsoft account logins (above) without causing confusion with the user’s regular Microsoft email address. A third email address would be the “security” email address suggested if not required for various security purposes.
As an example of the use of that third email account, LastPass indicated that, if a user adopted its offer to designate a separate security email address, the “obscurity” of that separate address was “intended to provide an extra layer of protection.” In particular, the LastPass email giving the user a verification option, by which to re-enable his/her LastPass account, would go to that address. Thus Wired (2012) recommended that the user use “a unique, secure email address for password recoveries” that s/he would “never use for communications.” A StackExchange discussion, contemplating that LastPass option, emphasized that access to that separate account would have to be compartmentalized, so that an attacker who gained control of the user’s primary email address could not acquire information regarding the existence of, or access to, the secure email address. Suggestions included opening the secure email from a different browser, from within a virtual machine, from a physically separated machine, from a secure operating system (e.g., Qubes) — or, presumably, from a login through a Linux live CD or (perhaps) a Windows To Go USB drive.
WalletHacks (2019) said that, to keep the secure email invisible to an attacker, the user should use the secure email address “where high security is a must — banks, brokers, etc.,” use it only in safe circumstances, and never forward mail from it to the regular email. There was a concern that emails notifying the user of a potential breach, or providing other emergent news, might be sent to the secure email address, where they might not be read promptly. On that question, one LastPass forum discussion included an observation that (at least as of 2014) LastPass emails could go either to the regular or secure email address, depending on circumstances.
The secure email address could perhaps be chosen from among the more secure free email providers. For instance, MakeTechEasier (2018) recommended SCRYPTmail, Disroot, TorGuard, Hushmail, Runbox, Mailfence, ProtonMail, Tutanota, Posteo.de, Kolab Now, and CounterMail. Someone said s/he appreciated his/her email account at TheXyz because it (perhaps like some others just listed) allowed the creation of endless aliases pointing back to his/her actual account — the advantage being that his/her actual email account was never disclosed. But an alias of that nature would presumably not work for these purposes, because email sent to it would presumably be forwarded to the main email address, thereby notifying anyone hacking that email address of the existence of the supposedly secure alias.
In a similar vein, some security measures called for a phone number. For instance, How-To Geek (HTG, 2017) pointed out that phone hackers could arrange, with the phone company, to reassign the user’s phone number to a new phone, so that 2FA SMS codes would appear on the attacker’s phone rather than on the user’s phone. As noted above, authentication apps were better than SMS, but some websites might offer SMS as the only option. In that case, HTG recommended creating a Google Voice phone number and viewing those messages in the user’s Google account, which could be protected by MFA.
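As context for why authentication apps beat SMS: the app’s codes are computed locally from a shared secret (conveyed by the QR code at setup) using the standard TOTP algorithm (RFC 6238), built on HOTP (RFC 4226), so there is no message for a phone-number hijacker to intercept. A minimal stdlib sketch, illustrative only; real apps handle secret storage far more carefully:

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """RFC 4226 HMAC-based one-time password (the core of TOTP)."""
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # "dynamic truncation"
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(base32_secret, period=30, now=None):
    """RFC 6238 time-based code, as displayed by apps like Authy.
    The QR code scanned during setup just conveys base32_secret."""
    key = base64.b32decode(base32_secret, casefold=True)
    t = int(time.time() if now is None else now)
    return hotp(key, t // period)
```

Because the code depends only on the secret and the current time, the same value appears on every device holding the secret, which is also why the backup codes and recovery keys mentioned above matter if that device is lost.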
Other Security Software
MakeUseOf (MUO, 2018) recommended a handful of tools designed to improve network security. Some of those tools, or superior alternatives, are discussed elsewhere in this post. Others were as follows:
- InSpectre. This free tool from the well-known Steve Gibson reported on the computer’s current exposure to the Meltdown and Spectre vulnerabilities. Those vulnerabilities, largely though not exclusively affecting Intel CPUs, seemed so far to be confined to research — there did not seem to be any reports of exploits using them in the wild — but they were worrisome nonetheless, insofar as they would “enable attackers to extract encryption keys and passwords from compromised systems” (TechRepublic, 2019). Research was ongoing; apparently a “good” rating from InSpectre at one point could be reversed as researchers discovered other vulnerabilities, so it would be advisable to continue to run the program from time to time. Fixes were delivered through BIOS and operating system updates — which, again, would advisably be revisited occasionally.
- Angry IP Scanner (AIPS). MUO said this was “a must-have tool if you have a router in your home.” The reason seemed to be that it would be useful to detect wardriving (i.e., searching for Wi-Fi connections by driving through an area with the right software; see also warbiking) or intrusions by neighbors — but it was not clear what one would do upon discovering such intrusions. AlternativeTo suggested that Nmap (optionally, with the Zenmap GUI) was a more widely used tool for the same purpose. Various sources (e.g., PC & Network Downloads, GeekFlare, Addictive Tips) listed a number of other alternatives, including SolarWinds Port Scanner and Advanced IP Scanner. Yet such sources seemed to imply that such tools were designed primarily for network administrators, or at least that they would require some expertise to use effectively. TechWiser (2016) provided a “beginner’s guide” on using AIPS, but did not seem to explain specific security uses. AIPS’s own documentation said the software was “only one of the tools that must be used in order to implement a successful defense strategy.” WindowsReport listed software (including Angry IP Scanner) that would be useful for the specific purpose of blocking others from using one’s Wi-Fi. That, however, seemed to be a subtopic within the larger matter of ensuring that one’s home or office Wi-Fi network was secure. On that issue, TrendMicro (2018) recommended several measures, in addition to those discussed elsewhere in this post, namely, changing default Wi-Fi network names and passwords to more complex alternatives; installing firmware updates for routers and other hardware when they became available; and limiting router signal strength to the level needed for successful connections within the home or other facility. It tentatively appeared that the purpose of an IP scanner like AIPS would mostly be to monitor the effectiveness of one’s primary precautions (e.g., strong passwords), as distinct from adding directly to security.
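For a sense of what tools like AIPS and Nmap actually do, the core probe is simple: attempt a TCP connection to each address and port, and see whether it succeeds. A minimal single-port sketch; real scanners just repeat this across a subnet, in parallel, and add tricks like ping sweeps and service fingerprinting:

```python
import socket

def port_open(host, port, timeout=0.5):
    """Return True if a TCP connection to host:port succeeds; this is
    the basic probe an IP/port scanner repeats across a network."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0
```

Note that scanning hosts you do not own can violate laws or terms of service; as the sources above suggest, this belongs on your own network only.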
Incidentally, MakeTechEasier (2017) advised on removing unwanted WiFi networks from the list of available networks.
This section looks briefly at a variety of browsing- and email-related threats and responses. In particular, phishing, spoofing, and man-in-the-middle attacks call for a combination of responses, provided by a good PM, good antivirus and malware software, and user education and vigilance. Other useful tools include a firewall, a secure operating system, use of a secure browser, judicious use and monitoring of add-ons for Firefox and other mainstream browsers, and an extensive set of possible tweaks to enhance security on such browsers. Users might also want to look into hiding their IP addresses, choosing an appropriate DNS server, setting up an email account and phone number used specifically to assist in maintaining security online, and exploring other security-related utilities.
Security Suites
We come, at last, to what may be the first thing that most people think of, for purposes of computer security: an antivirus or security package or suite. I postponed the subject, in this post, because offerings vary: I wanted a clearer idea of what my system would be like, and which functions would be provided or negated by other arrangements.
Antivirus vs. Internet/Total Security Programs
Now that we were here, I found there was still a question as to what, exactly, I should be looking for. I saw, for example, that Kaspersky offered software packages named Anti-Virus, Internet Security, and Total Security. In that case, the proffered comparison was ascending: Internet Security seemed to offer everything that Anti-Virus covered plus more, and Total Security offered everything that Internet Security offered plus more. But then I saw that Tom’s Guide covered Kaspersky’s Total Security in its “antivirus” comparison, and PCWorld considered AVG Internet Security to be an “antivirus suite.” On the other hand, in their comparisons of “security suites,” TechRadar and PCMag covered only products with names that sounded all-encompassing (offering e.g., “total” or “maximum” protection). And yet, on closer examination, I saw that, for instance, Bitdefender‘s Total Security did not actually seem to offer more protective features than its Internet Security; it merely offered coverage for more devices, on more platforms (e.g., Mac). In other words, merchants did not seem to be using terms consistently, in a way that would establish a clear difference among “antivirus” or “Internet” “packages” or “software” or “suites.” For general guidance, TechSupportAll (2016) contended that “antivirus” meant a standalone antivirus program, without firewall, possibly lacking some Internet and USB threat protection, primarily for home users, for free or at low cost, while an “internet security suite” typically meant something more on all of those points. DifferenceBetween (2018) similarly considered “Internet security software” to have antivirus at its core, along with a firewall and other protections.
To see if I could get any more clarity on that distinction, I looked at two recent software comparisons presented by TechRadar. In early 2019, that source offered its lists of best antivirus software 2019 and best internet security suites of 2019, both summarized here in descending order. The best antivirus packages, they said, were Bitdefender Antivirus Plus 2019, Norton Antivirus Basic, Webroot SecureAnywhere AntiVirus, ESET NOD32 Antivirus, F-Secure Antivirus SAFE, Kaspersky Anti-Virus 2018, Trend Micro Antivirus+ Security, and Panda Antivirus Pro. The best Internet security suites they listed were BitDefender Total Security Multi-Device 2018, Kaspersky Total Security 2018, McAfee LiveSafe, Symantec Norton Security Premium, BullGuard Premium Protection, Trend Micro Maximum Security, Avast Internet Security, Panda Dome Advanced, AVG Ultimate, and F-Secure Total.
Those lists displayed some inconsistencies. For example, I was puzzled to see that McAfee was not included among the contenders in the antivirus comparison — a rather glaring absence, since the writeup referred to McAfee as one of the most familiar names before asking, “but are they really the best?” — and yet McAfee’s LiveSafe suite was listed as fourth-best among the suites, and apparently would have been rated even more highly if it hadn’t been overpriced. It was not clear why a McAfee suite that incorporated an apparently mediocre antivirus component should have been listed among the top Internet security suites.
As another example of seeming inconsistency between those two TechRadar lists, I saw that Norton’s antivirus package scored well, whereas its full suite was lower in the ranks. A problem with the suite, in TechRadar’s view, seemed to be that it did not include “system optimisation tools.” A glance at the User’s Guide for one Bitdefender product suggested that what TechRadar might have in mind, here, could include “profiles” that would control some aspects of system functioning for particular situations (e.g., “Battery Mode Profile” would postpone background programs and Windows automatic updates, and disable external devices) along with “real-time optimization” that “improves your system performance silently” by adjusting the system resources available to various running processes. Such features seemed to have nothing to do with security. Moreover, it was not clear that they would even be helpful. As How-To Geek (2018) observed,
Other antivirus programs may occasionally do a bit better [than Windows Defender] in monthly tests, but they also come with a lot of bloat, like browser extensions that actually make you less safe, registry cleaners that are terrible and unnecessary, loads of unsafe junkware, and even the ability to track your browsing habits so they can make money. Furthermore, the way they hook themselves into your browser and operating system often causes more problems than it solves.
There was a question, then, of whether “total” suites were an excuse to charge extra for stuff that Windows, the user, and/or other software might already be handling adequately — stuff that could actually generate clutter, distraction, confusion, and/or malfunctioning. An example appeared in TechRadar’s indication that F-Secure’s suite was “two products bolted together” and consisted of “Everything including the kitchen sink.” There seemed to have been an attempt to shoot at multiple targets simultaneously — when, for instance, TechRadar rated these packages according to whether they offered “parental controls” (which would be valuable to some users while being a complete distraction from system threat protection for others), or could be run on a smartphone (whose processor, storage, battery, and daily uses might impose very different constraints than those applicable to, say, a desktop computer used to run programs and browse websites extensively).
A similar spectrum of purposes appeared in PCMag’s “Best Security Suites for 2019.” That comparison listed five main criteria: firewall, antispam, parental control, backup, and tune-up. The article seemed to indicate that an antivirus utility would be “a good start” but that some security suites “stick to the basics, while others pile on tons of useful extras, from online backup to dedicated ransomware protection.” But if backup was an “extra,” why did PCMag list it as one of the core features by which it compared these suites? And why would ransomware be an “extra” in a discussion of Internet security? Again, it seemed that the core question of security was at risk of being neglected for the sake of nonessentials.
The remarks just quoted included links to separate PCMag comparisons of the best antivirus and ransomware products for 2019. The connections among those comparisons were unclear. For instance, relying on these several comparisons, a user might simply purchase a PCMag Editor’s Choice security suite, Bitdefender Total Security, which seemed to include the components of Bitdefender Antivirus Plus, an Editor’s Choice for both antivirus and ransomware protection. Or, in a very different scenario, the user might choose (or already possess) McAfee AntiVirus Plus (listed as an Editor’s Choice here, contra TechRadar), and yet might also feel obliged to buy something like Webroot SecureAnywhere AntiVirus for ransomware protection — leading, against expert advice, to the operation of two competing antivirus packages on the same system — and yet might still not have some of the protections conferred by a security suite.
From the user’s perspective, it appeared that some Internet security companies and reviewers were encouraging the marketing of unnecessarily duplicative products, and thus obscuring the situation rather than clarifying it. In contrast to the information just critiqued, for example, a Business.com comparison of The Best Internet Security and Antivirus Software of 2019 logically considered ransomware an intrinsic part of the malware to be guarded against, and treated firewall protection as an element of core security. The Business.com guidance regarding free antivirus software also contrasted sharply with, for example, TechRadar’s assurance that a premium service would provide “more features, such as spam filters” with just “a bit of extra security,” but that “you can get top quality protection absolutely free.” What TechRadar said was pretty much what I had been hearing from various reviewers for many years. But here’s how Business.com viewed that issue, for purposes of a small business owner:
If you’re on a budget and only have one or two computers you need to protect, there are free antivirus programs that provide moderate protection from low-level threats. This is only recommended if you don’t store pertinent financial information on your computer or if the information being stored on that computer is not essential to your business’s everyday operations.
That was consistent with the advice of Heimdal Security (2017): “Our recommendation is to purchase a reliable antivirus program, not use free versions, which are not enough to provide robust protection.” But it was pretty much the opposite of the view expressed by How-To Geek (HTG, 2018): “Windows Defender … does one thing well, for free, and without getting in your way.” I noticed that the HTG writer was not Chris Hoffman, whom I had often found to be highly knowledgeable and on-target; and while I had not studied the matter, my experience as a user did not support this writer’s claim that “Windows 10 already includes … other protections … like the SmartScreen filter that should prevent you from downloading and running malware.” Moreover, I could not readily concur in this writer’s dismissal of the mediocre score given to Windows Defender by AV-TEST. In the latest AV-TEST results (2018), Windows Defender did well (but not as well as some others) in antivirus protection, but was still not among the leaders in terms of performance, which was supposedly its strong point. Similarly, in the latest AV-Comparatives real-world protection tests (2018), Windows Defender performed worst, among all tested products, in terms of wrongly blocking legitimate files, and was a lackluster performer overall.
Such results (as well as the discussion of antimalware software, below) inclined me to question Wirecutter’s (2018) trust in “information security experts” who informed them that “Windows Defender is good-enough antivirus for most Windows PC owners” and that “The virus and malware protection built into Windows and macOS, combined with good habits, are enough for most people.” The question raised by such remarks was, what if I’m not among “most people”? What if I’m the poor shmuck who, for reasons good or bad, does not practice “good habits” every time, without fail? Isn’t that really the point of antivirus software — to make sure I’m covered, where possible, even if I get fooled or make a mistake?
It seemed I might be able to get top-quality protection from a free product, but that product would not be Windows Defender. I hoped I would not have to choose Windows Defender in order to avoid unhelpful intrusions by the software that was supposed to be helping me. ZDNet (2018) cited Microsoft for the view that Windows Defender performed much better in real-world contexts, taking into account protections already built into Windows itself; but ZDNet pointed out that this did not fully account for the performance issues that independent labs had observed in Windows Defender. (Consider, nonetheless, the suggestion to resurrect Windows Defender, when desired, for a second opinion on problems identified by third-party software.)
Steered by those views, my search led to recent comparisons by a number of recognized sources, including PCMag, Tom’s Guide, PCWorld, AV-Test, IGN, Mashable, and Windows Report, along with some lists of best free antivirus (e.g., Lifewire, Digital Trends, Tom’s Guide, PCMag, Mashable). Without getting into specifics of whether they were comparing antivirus programs or security suites, or exactly what they were prioritizing, it was safe to say that the same names tended to come up repeatedly, and that those names were also on TechRadar’s lists (above). Generally speaking, it appeared that, if you chose something by Bitdefender, Kaspersky, or Symantec/Norton (possibly in that order), you were likely to find yourself with a product that would be on a number of top-five or top-ten lists. You could also hit some of those lists with software from McAfee, Webroot, Trend Micro, ESET, Panda, F-Secure, Bullguard, or Sophos.
(I had never entirely forgiven Symantec for buying Peter Norton’s great software and promptly ruining it, some 30 years earlier — though I did appreciate that, at least in this area, they had redeemed themselves somewhat. Regardless, either Symantec didn’t have a free antivirus offering or they weren’t doing well with it. I was inclined to focus on companies that would give me a more complete free-to-expensive spectrum of products to choose from. So at this point my scope was narrowing toward Kaspersky and Bitdefender. Others, needing a different mix of features, may have chosen different semifinalists.)
Some of the companies just listed did offer free alternatives, but it seemed not all were competitive. For instance, Kaspersky said that its free antivirus did not offer the “Advanced protection” (i.e., “Prevents malicious attacks, controls dangerous apps & malware behavior”) that its paid version offered. This seemed to justify the perspective offered by Business.com and Heimdal Security (above). On the other hand, Bitdefender’s comparison seemed to say that its free and Total Security versions would have identical antivirus, threat defense, web attack, anti-phishing and anti-fraud protections — suggesting that Business.com was potentially underselling the free version.
If there was a big difference in malware protection, between Kaspersky’s free and paid versions, and if there was no difference between Bitdefender’s free and paid versions, then it seemed that Bitdefender’s free version must have been far superior to Kaspersky’s free version. But that was not the picture that emerged from comparisons of the two. To the contrary, Kaspersky actually came out a bit ahead in at least some comparisons of free antivirus (by e.g., PCMag, Tom’s Guide). It seemed, again, that I might not be getting the entire picture. One possibility was that Kaspersky was exaggerating or simply falsifying the difference between its free and paid versions, in order to push people toward buying something to alleviate their anxieties.
Eventually, I noticed that Business.com was talking about Kaspersky’s Small Office Security 4.0. Not only was that small business rather than consumer software, with potentially divergent needs (to cover e.g., employee activity); it was also four years old. Version 4.0 apparently came out in 2015; we were on version 6 now. So it appeared that Business.com had attempted to mislead readers with its claim that its review dated from December 2018. (At present, PCMag rated Bitdefender best for small businesses.)
MakeUseOf (2018) drew my attention to the fact that multiple governments had issued warnings against and/or had ruled out buying from Kaspersky. I had not been following the story. From the perspective of a blog post that was taking at least a slightly paranoid stance toward security risks, I had to admit that it could seem slightly ridiculous to be considering software from a company that reportedly maintained deep ties with, and had been used by, the Russian secret service. My only experience on the matter was that I had tried Kaspersky’s free version briefly, within the past year or so, and had stopped using it because it needed more attention than the free Avira antivirus that I had been using otherwise — so I was receptive to various reviewers’ remarks that Bitdefender was better for the set-it-and-forget-it user. Maybe I should have been more of a geek on this stuff, if I was going to write a blog post about it, but the reality was that, as with KeePass, I was probably not going to do a good job of managing software that required me to be consistently attentive to its detailed issues. I had other fish to fry.
Aside from essential protection tasks, Bitdefender’s own comparison listed some additional features that were included in the paid but not in the free version. Some of those (e.g., free 24/7 online support, a Wi-Fi security assessment tool, a Safepay protected browser (i.e., online banking and shopping secure transaction tool), anti-theft tools, a vulnerability scan) could be helpful, depending on how well they were executed, while users might already have others, or be able to acquire them at higher quality and/or more economically elsewhere (e.g., a firewall, a service that vetted the safety of search results, webcam protection, file encryption, a file shredder) — though admittedly it could be convenient to have all those options organized in one place. The paid version offered ransomware protection, but it was not necessary to pay anything for that; there, again, Bitdefender provided a free version. The paid version offered a VPN capability, limited to 200MB per day, that might be sufficient to handle those aspects of online activity that really needed VPN protection (e.g., not Netflix or other normal video), assuming the user would remember to turn it on when needed and off when not needed, or would perhaps use it only with a certain browser (e.g., Chromium, or Bitdefender’s own Safepay) reserved for sensitive activities. But the VPN would rarely if ever be needed by the user who shared my inclination to buy a VPN subscription (above). The paid version would presumably free the user from ads and nags to upgrade, and would also allow the charitable user to support the provision of free antivirus to those who could not afford (and also to those who just didn’t want to pay for) anything else.
Various sources (e.g., Windows Central) said the online threat scene had evolved away from the viruses of yesteryear and toward the intrusions covered by anti-malware programs. The conclusion reached in this subsection was that it might be true that viruses, per se, were no longer the major threat, but that hardly justified choosing the less effective, lower-performing Windows Defender over an equally free and demonstrably superior alternative like Bitdefender. The combination of Bitdefender’s free antivirus and ransomware software was rather obviously superior to Windows Defender, and to most if not all other competitors. Indeed, AVG (2019) and CNet (2014) agreed that Bitdefender Antivirus Free was “clean” and “refreshingly free of the ‘extra’ features and tools that make some apps unwieldy and confusing for non-experts.”
Anti-Malware
Newegg Business (2018) expressed the longstanding view that an antivirus program should be accompanied by an anti-malware program. Similarly, Heimdal Security (2018) said, “Antimalware abilities can cover a broader software solutions [sic], such as anti-spyware, anti-phishing or anti-spam, and is more focused on advanced types of malware threats, such as zero-day malware, quietly exploited by cyber attackers and unknown by traditional antivirus products.” According to How-To Geek (2018),
[A]ntivirus itself is no longer adequate security on its own. … [A]ntivirus will block or quarantine harmful programs that find their way to your computer, while Malwarebytes attempts to stop harmful software from ever reaching your computer in the first place. Since it doesn’t interfere with traditional antivirus programs, we recommend you run both programs for the best protection.
Running a separate tool like Malwarebytes would not be recommended, however, if one’s antivirus software already has an antimalware element built in. Bitdefender’s product comparison chart did not specify anti-malware capabilities, but the webpages for their free and Total Security products did mention that anti-malware protection was included. Indeed, in a comparison of free products, TechRadar (2019) ranked Bitdefender Antivirus Free Edition as “the best anti-malware for your PC,” ahead of AVG Antivirus Free, with Malwarebytes in third place. PCMag (2018) likewise ranked a half-dozen products (including two by Bitdefender) as superior to Malwarebytes, though that comparison seemed unfair: it pitted paid versions of those others against the free version of Malwarebytes, when Malwarebytes made clear that only its paid version had features comparable to those of the competing paid versions. More compellingly, Malwarebytes Premium was among the worst performers in AV-TEST’s (2018) most recent results. While that did not speak to its anti-malware performance, it did raise questions of whether the user would have to forgo more effective antivirus protection in order to run the Malwarebytes Premium product, and of whether Malwarebytes was still enforcing quality standards on its products. Finally, an AV-Comparatives (2017) test found that Malwarebytes Anti-malware was not among the leaders in a comparison of ransomware and other malware detection. I had to concur with Rankin’s (2018) conclusion: “[A]s much as I like MalwareBytes and want to support it, I can’t justify buying it until its effectiveness improves and can be demonstrated by independent test labs.”
As noted above, Wirecutter (2018) advanced the dubious proposition that Windows Defender was as good as any other antivirus software. Wirecutter also adopted the traditional view that the best approach was to install antivirus software plus Malwarebytes Anti-malware. Notwithstanding its status as a New York Times company, it appeared that Wirecutter had opted for a rather lazy conclusion. They claimed to have spent months researching the question, but I did not see evidence of that. There were no actual test results: they claimed to rely on “the experts at independent test labs,” and yet the interviewees they listed did not appear to be affiliated with those labs. Indeed, Wirecutter contradicted itself: having endorsed Windows Defender as “good enough,” they claimed that, actually, they did not recommend traditional antivirus after all — which Windows Defender certainly was. They said, “Good security is not free,” which again contradicted not only their own endorsement of Windows Defender, a few paragraphs earlier, but also the indications that, in the antivirus arena specifically, some of the best security is exactly that — free. They complained that third-party apps were “more likely to collect data.” More likely — than Microsoft? They supported their critique of all traditional third-party antivirus suppliers with citations to a couple of articles faulting Symantec (which, as noted above, was no surprise to me) and a few mostly second-tier antivirus companies (e.g., Comodo, Panda). But what about Bitdefender and Kaspersky? Wirecutter had nothing to say there. The premise seemed to be that all antivirus programs were bad and should be discarded because imperfect code had been discovered (and, in most cases, patched). Again, no surprise: the risk of bad code was part of the reason for the open source movement. 
But that was the logic of an argument that Malwarebytes is bad too — that, indeed, Microsoft Windows itself should be thrown out because it is closed source and has been found to have bugs. I mean, I agree with that logic — as long as a superior alternative exists. And, with many others, I am still waiting for that alternative.
At this writing, the available information seemed to indicate that the best solution, in terms of identifying malware without impairing performance, burdening the user with false positives and unwanted notices, and costing money that didn’t need to be spent, was to disable Windows Defender, skip Malwarebytes, and just rely on Bitdefender’s free version for an effective combination of antivirus and anti-malware (including ransomware) protection, adding features from the paid version (e.g., VPN, secure browser, firewall) as needed, in light of the earlier discussion of such options (above).
This final section examines the use of what has been traditionally called antivirus software to meet a variety of potential threats. Such programs differ, rather confusingly, in what they offer, what they call it, and how highly various reviewers rate it. It appeared that users should at least replace Windows Defender with free versions of top-ranked antivirus software by Kaspersky or, perhaps better, Bitdefender. The latter evidently incorporated antimalware software; Malwarebytes and others offered alternate antimalware options.