A Windows Power User Configures Linux Mint Cinnamon 18.3 on an Acer Laptop (Long Version)

In a previous post, I expressed the view that the Linux community could only make Linux dominant on the desktop by ceasing its attempts to compete with Microsoft for ordinary end users, and instead concentrating its limited resources on attracting Windows power users (as defined in that post), who might then find ways to bring along others in their organizations. This post contributes to that sort of effort by providing a writeup that Windows power users might appreciate.

I was motivated in this direction for reasons discussed in another previous post. Briefly, a series of adverse experiences with Windows 10 persuaded me to switch to a Linux system in which I would run Windows (10, 7, and/or XP) in virtual machines (VMs). This, I felt, would give me greater control over Windows 10, especially, and over the information that my system was providing to Microsoft, while reducing the upheaval and work interference that I had been experiencing due especially to disruptive Windows 10 updates on my desktop computer.

The present post describes how I proceeded to set up a working Linux Mint installation on my Acer Aspire 5 A515-51-563W laptop. Some things went well; some didn’t. The details are presented here for the benefit of anyone who runs into similar problems. Having used Linux infrequently in recent years, I realize that some of my terms and descriptions could stand improvement. As with other posts of this nature, there is a fair chance that I will eventually write an improved and/or condensed version sometime later. Thus, I did not attempt to clean up and finish all of the issues and topics that arose during this process. Such issues would have to await my return to this topic, in a post on using Linux Mint Xfce as my host operating system.

Contents

Reference: Common Terms and Procedures
Permissions
Verifying Downloads
Installation
The EFI Partition
Repositories and Updates
Linux Software Installations
Less Complex Installations
More Complex Installations
—- Firefox
—- PeaZip
Struggling with File Handling Tools
VeraCrypt
Beyond Compare (BC)
Double Commander
NTFS and Other File Systems
Conclusion: Windows File Handling Tools in Linux
Wine Programs
—- Olympus Digital Wave Player (DWP)
—- Other Programs
Setting Up Windows VMs in VirtualBox
A Windows XP VM
A Windows 7 VM
A Windows 10 VM
Tweaks
Disable Touchpad
Create Shortcuts and Edit the Start Menu
Additional Tweaks
Tweaks Not Used


Reference: Common Terms and Procedures

Among the various tasks and ideas discussed below, some came up repeatedly. Rather than describe them each time, this section provides a reference.

  • Start Button. I could imagine why Linux Mint would prefer some name other than Start for the button at the lower left corner of the screen. But I did not think Menu was the best alternate name. It was potentially confusing: Linux Mint and its constituent programs had many menus. Therefore, when this post refers to a menu, it generally means the menu available in the window (usually at the top of the window) containing the program or tool currently being used. This post refers to the button at the lower left corner of the screen as the Start button.
  • Taskbar. Linux could have many panels. This post refers to the panel running across the bottom of the screen in Linux Mint, with the Start button at its left end and the system tray at its right end, as the taskbar.
  • WinKey. The Windows key, referred to in this post as WinKey- or simply Win- (for example, Win-R), was the key with a Microsoft Windows image on it, near the bottom left and/or right corners of the keyboard.
  • Commands appear, in this post, in italics (e.g., swapon). (I also use italics for some titles, such as the word Commands at the start of this paragraph.) Commands were entered in a Terminal session, sometimes called the console. Terminal was represented by the square black icon available at several locations, including the stock taskbar, at Start > left sidebar, and via right-click in a folder (or on the desktop) > Open in Terminal. It was also possible to open Terminal with Ctrl-Alt-T. Within the same Terminal session, previously typed commands could be retrieved by using the up arrow. For purposes of command editing, the home and end keys moved the cursor to the start or end of the command. Terminal commands were case-sensitive.
  • Omitted Steps. Steps listed in the following discussion do not always mention every time I had to click “OK” or “Next” or take some other obvious step. I also don’t always mention every time I just accepted the default values.
  • Disks (available at Start > Preferences > Disks, or in Terminal as gnome-disks) provided a graphical user interface (GUI) for viewing, mounting, and unmounting disk partitions. For example, the address for my BACKROOM partition, mounted through Disks, was /media/ray/BACKROOM, though some tools saw its location as /dev/sda4.
  • Nemo was the default Linux Mint 18.3 file manager program, comparable to Windows Explorer or File Explorer on Windows computers, though it looked very different from those tools. For information on extending Nemo’s capabilities, see the discussion of PeaZip (below).
  • Home. Nemo confusingly offered two different Home folders. These were visible in both of the views (which, themselves, should have been listed in Nemo > menu > View) available at the bottom left corner of the Nemo window. Mousing over those views brought up two different tooltips: Show Places or Show Treeview. Either way, there was a top-level Home location, and then there was a File System > home folder. My search did not lead to an explanation. These looked like two different folders, until I set Nemo (menu > View > Show hidden files) to display hidden files and folders in both. Then it materialized that both were in the same location: /home/ray (or other username). The tilde (~) symbol was used as shorthand for the user’s home folder: in this example, it meant /home/ray. So if I used a command or opened a folder referring to /home/ray/.local, another way to represent that would be simply ~/.local. During the initial setup and configuration of this Linux Mint installation, the ~ folders that I encountered most frequently were Downloads and Desktop.
  • Root (a/k/a superuser). The term “root” was used in two different ways. As a reference to a location within the Linux file system, root was the top level, signified by the / symbol. For example, someone talking about the /usr folder could describe it as the usr subfolder under root. Root was also another term for the superuser — that is, for the system’s most powerful user account, with access to all commands and files on a Linux system. Since it was possible to run bad programs and to make destructive mistakes unintentionally, the recommended procedure was to run the system in a regular user account, for everyday purposes, and only invoke the root account when it was needed for privileged changes. In Linux Mint, the superuser account was invoked by typing “sudo” before a command. So, for instance, to run a Nemo session as superuser, I could type sudo nemo. I would have to enter my password, and then I could proceed to wreck the installation if I wasn’t careful.
  • Posted questions and other links. At those points where I say I posted a question, it may be worthwhile to click the link and see if anyone has answered my question with a solution better than the one I could see at this writing. Links may also be useful for more detail or different phrasing that may clarify things not explained well in this post.
  • Processes. How-To Geek (Hoffman, 2012) offered a discussion of Linux tools for killing processes that did not want to die. Some of these tools were already installed in Mint; others (notably htop) were added to the list of programs I would be installing (below). In the meantime, pstree would provide a tree diagram of running processes. Also, pgrep [program] (e.g., pgrep firefox) would give me the process ID (PID) for the program, and then kill [PID] would kill it. Or I could just use killall [program] (e.g., killall firefox) to kill all processes so named. Finally, xkill would give me a cursor that I could use to indicate graphically which program I wanted to kill.
  • Desktop Version. For those who weren’t sure, I was able to verify that I was using the Cinnamon desktop by going to Start > Preferences > System Info, and also by running inxi -S.
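The process-killing tools mentioned in the Processes entry can be sketched end-to-end. The following is a safe demonstration using a throwaway background process, so nothing real gets killed; only the command names come from the discussion above.

```shell
# Start a disposable process to practice on.
sleep 300 &
pid=$!
pgrep -x sleep                 # prints the PID(s) of processes named exactly "sleep"
kill "$pid"                    # polite termination; kill -9 "$pid" is the last resort
wait "$pid" 2>/dev/null || true
echo "process $pid terminated"
```

On a real system, `pstree` shows the whole process tree, and `xkill` turns the cursor into a point-and-click process killer.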

Permissions

File ownership could prevent copying, moving, and other file activities. For example, when working with VirtualBox (below), I ran into a situation where I could not save the desired settings for a VM. Someone suggested that permissions might be the reason. A search on that led to a Guru99 article explaining that ls -l would provide permissions information for the files and folders where I ran that command. I ran it in /media/ray. It gave this information for the partition where the VM was located:

drwxr-xr-x root root

StackExchange said that the two occurrences of “root” in this example referred, respectively, to the names of the owner and the group. That was the crux of the problem: root owned the VMs partition, so I would have to be running as root to control what could be changed in this partition. The leading “d” (as distinct from a leading hyphen (“-”), which would have indicated a file) meant that I was looking at information for a directory. The three sets of three characters after the “d” conveyed permissions for the owner, the group, and the world, respectively: “r” meant “read,” “w” meant “write,” “x” meant “execute,” and a hyphen meant no permission. So in this case the owner had read, write, and execute permissions for the VMs partition, whereas everyone else (including me, a mere user) had only read and execute permissions. That could indeed have explained why I couldn’t write changes to that directory.

A quick check confirmed that I was the owner of other partitions. I felt I had a right to be the owner of the VMs partition too. To make that happen, I opened Terminal in the directory containing the VMs mount point (in this case, the “ray” level in the /media/ray/VMs path). The prompt said I was at /media/ray $ and what I typed at that prompt was sudo chown ray:ray VMs. I had to type “sudo” to get root-level control. “Chown” meant “change owner,” ray:ray was being substituted for the root/root settings shown in the ls -l output (above), and VMs was the name of the thing being reowned. Once that was done, another ls -l confirmed that I was now the owner, and that the owner still had rwx permissions. Note that permission changes in a folder (e.g., /media/ray/VMs) would not necessarily produce changes in its subfolders (e.g., /media/ray/VMs/Subfolder). Note also the need for quotation marks around pathnames containing spaces (e.g., sudo chown ray:ray “Windows 7 VM Subfolder”).
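The inspection and ownership commands above can be tried safely in a throwaway directory; the recursive variant addresses the point that changes do not propagate to subfolders. The real fix on /media/ray/VMs requires sudo, shown here only in comments.

```shell
# Practice directory, so nothing real gets re-owned.
demo=$(mktemp -d)
mkdir "$demo/VMs"
ls -ld "$demo/VMs"              # e.g. drwxrwxr-x ... user group ...
stat -c '%A %U %G' "$demo/VMs"  # permissions, owner, and group on one line
# On the real root-owned partition the commands would be:
#   sudo chown ray:ray VMs        # change owner and group of the folder itself
#   sudo chown -R ray:ray VMs     # -R also re-owns everything inside it
chmod 755 "$demo/VMs"           # rwxr-xr-x: owner full control, others read+execute
stat -c '%A' "$demo/VMs"        # prints drwxr-xr-x
rm -rf "$demo"
```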

Verifying Downloads

There were various methods to verify that the integrity of a downloaded program file had not been compromised. Those methods included the following:

  • To compare SHA sums on files downloaded on my Windows 10 desktop computer and then installed on the Linux laptop, I used the self-explanatory MD5 & SHA Checksum Utility (4.2 stars from 54 reviewers on Softpedia) as recommended by MakeTechEasier, and then pasted in the value provided in the SHA text file from the download webpage. Some such text files might be more viewable in Firefox than in Notepad.
  • To verify a download offering a PGP Signature in Linux, as advised by It’s Foss and others, I went to Start > Administration > Synaptic Package Manager > search for gnupg > select > Mark for Installation > Apply. Then, from the download page, I made sure I had downloaded the program file, the accompanying PGP Signature file, and the website’s PGP Public Key. I saved the PGP Public Key as a text file named pubkey.gpg in the Downloads folder. Then I opened a Terminal session in the Downloads folder and typed gpg --import pubkey.gpg. That succeeded but also provided a notice, “No ultimately trusted keys found.” A Mandriva Users forum entry suggested this was a common warning that could usually be safely ignored. Then, as advised by GnuPG, I typed a command of this form: gpg --verify [full name of .sig file] [full name of .tar.bz2 file]. That produced messages saying, “Good signature” but also “Warning: This key is not certified with a trusted signature!” and “There is no indication that the signature belongs to the owner.” According to Barclay (2016), those warnings were unimportant; the thing to watch out for would be a message stating, “BAD signature.” After that, the pubkey.gpg and .sig files were disposable.
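The Linux-side equivalent of the checksum comparison can be sketched with sha256sum, using a throwaway file (all filenames here are hypothetical, not the actual Mint ISO):

```shell
tmp=$(mktemp -d)
echo "pretend ISO contents" > "$tmp/linuxmint.iso"
# Record the checksum (the download page supplies this file for the real ISO):
( cd "$tmp" && sha256sum linuxmint.iso > sha256sum.txt )
# Later, verify the file against the recorded sum:
( cd "$tmp" && sha256sum -c sha256sum.txt )    # prints: linuxmint.iso: OK
# PGP verification follows the pattern described above:
#   gpg --import pubkey.gpg
#   gpg --verify program.tar.bz2.sig program.tar.bz2
rm -rf "$tmp"
```

A mismatch produces “FAILED” and a nonzero exit code, which is what the “Hash does not match!” report from the Windows utility corresponds to.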

Installation

I was installing onto an Acer Aspire 5 A515-51-563W laptop. My choice of Linux distributions was Linux Mint. The latest version was 18.3. Both MATE and Cinnamon desktop environments were appealing; I had only a slightly greater preference for Cinnamon. I was interested in Mint’s Debian-based version, but it appeared that it might not yet be as well developed as the Ubuntu-based alternative.

The Acer’s stock hard disk drive (HDD) came with Windows 10 installed. As described in another post, I replaced that HDD with a larger one and added a 500GB Samsung 850 EVO solid state drive (SSD) and more RAM. Before doing so, I made an image of the Windows installation. That image would be useful if I needed to restore Windows for any reason. One such reason would be to create a dual-boot system, which I decided against for reasons described in another post. Another reason would be to restore and use Windows 10 temporarily, to install BIOS/firmware updates that Acer delivered in .exe form that could not be installed via Linux. (In other words, it might make sense to install those updates before removing Windows from the system.)

I downloaded the 64-bit version of Mint Cinnamon 18.3 on my desktop computer and checked its SHA-256 value (above) as provided on the official download page. To my surprise, the checksum utility reported “Hash does not match!” A second download attempt, from a different server on the list, solved that problem.

Now that I had a good ISO, I used YUMI to put it onto my multiboot USB drive (using the option to allow a persistent file offering a few hundred MB for program data), booted the laptop with it, and proceeded to install it on the SSD that I had installed in the Acer.

In that installation process, YUMI presented me with an “acpi=off” option. Wikipedia said ACPI was short for Advanced Configuration and Power Interface. It looked like ACPI was useful, maybe even the trend of the future, though Ubuntu founder Mark Shuttleworth reportedly declared that it, or any proprietary firmware not subject to open-source verification, was a security risk. Mint’s own philosophy, criticized by the Free Software Foundation, was less rigorously opposed to proprietary software. A StackExchange discussion seemed to say that, aside from firmware philosophy and security concerns, some hardware might only work with ACPI turned off. I decided to leave ACPI on (i.e., not to use the acpi=off option).

After a moment, that option resulted in a “live” Linux screen, which I could use as-is, or from which I could choose the “Install Linux Mint” icon. I chose that icon, indicated my wireless network, and chose the option to install third-party software for graphics and other media. The installer observed that some partitions on the Acer laptop were mounted. I accepted its offer to unmount those, so that I would be able to revise those partitions during the installation as needed. As detailed in the other post, when the installer proceeded to its Installation Type screen, I chose the “Something else” option, so that I could indicate manually what I wanted to do with partitions.

The Linux Mint Installation Guide advised creating just a root (“/”) and swap partition, ideally allowing 100GB+ for root and 2x RAM for swap. My previous review of various sources observed that a separate /boot partition (as small as 100MB) was recommended for old and complex (e.g., dual-boot) installations, and that a separate /home partition (20GB) could make it somewhat easier to preserve customized settings during reinstallations and upgrades. On the other hand, Easy Linux Tips Project (ELTP) advised against a separate home partition. I decided to go with the default recommendations: just root and swap partitions.

The next question was, how big should those partitions be? I started with swap. As just noted, its optimum size was supposedly dictated by the amount of RAM on the system. To verify how much RAM I had, I went to Start > System Settings (i.e., the gear icon on the left sidebar) > scroll down to the Hardware section > System Info. As expected, it said I had about 20 GiB of RAM, which I translated as 4GB of RAM soldered to the motherboard plus my 16GB RAM addition. That meant a 40GB swap space if I was going to adhere to the Installation Guide’s 2x RAM recommendation.

My browsing suggested that the 2x RAM recommendation was driven by the size of swap that would be needed if you wanted to use hibernation (reportedly enabled by default in Mint), as I did. But a search led to a suggestion that, with an amount of RAM larger than the amount you would actually want to swap during daily usage (in this case, 20GB), it might suffice to use RAM + 2 GB. In my case, that would make swap = 22GB. Likewise, the preferred answer in a Quora discussion suggested, roughly, 1.5x RAM for swap on systems with up to 8GB RAM, and RAM + 2GB above that. Another source, seemingly experienced with hibernation, used RAM + 4 GB. The additional 2GB to 4GB was apparently intended to let swap keep whatever it was already using: RAM + 4GB would save the contents of RAM to swap during hibernation, and would also allow up to 4GB of used swap space to remain unchanged. Since I was installing Linux on an SSD, I wasn’t as worried that having a lot of swap would reduce performance as it would on an HDD. Some participants in a Linux Mint forum said their systems ran fine with swap turned off (toggled using the swapoff -a and swapon -a commands), so there was a chance I would be using swap only for hibernation. I decided to make the swap partition 22GB.
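From Terminal, the inputs to this sizing decision can be checked directly. This is a sketch: free and swapon report the actual figures on a given machine, and the RAM + 2GB rule of thumb is worked through here with this laptop's round numbers.

```shell
free -h                 # total RAM and swap, in human-readable units
swapon --show || true   # active swap devices; empty output when swap is off
# The RAM + 2GB rule of thumb, for this machine's 20GB of RAM:
ram_gb=20
swap_gb=$((ram_gb + 2))
echo "suggested swap: ${swap_gb}GB"   # prints: suggested swap: 22GB
```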

My partitioning decision was driven, in part, by the question of how much space (if any) to set aside for overprovisioning (OP). As detailed in another post, I decided to allocate 16% of available space for OP, and I planned to use the remaining space to store one or more VMs. I wasn’t able to do all of the partitioning during this installation phase — I would have to use the GParted program for some of it later — but, for reference, here is the final list of partitions I decided on, with the sizes I would eventually see in GParted:

  1. /dev/sdb1: FAT32: EFI (see below): 300MiB
  2. /dev/sdb2: ext4: for Linux system installation: 120GiB
  3. /dev/sdb3: linux-swap: 22GiB
  4. /dev/sdb4: ext4: for VMs: 250GiB
  5. remaining space: unpartitioned: 73.47 GiB

But for now, in the installation phase, I just created partitions 2 and 3. The values I entered during the actual installation were not the ones just listed — I revised the installation using GParted — but to produce partitions 2 and 3, with the sizes shown here, I think I would have had to proceed as follows:

  • In the Linux Mint “Installation Type” window, select the 500GB free space, click the plus sign to add a partition, type in 120002 MB, use the defaults (i.e., primary partition, beginning of the space, ext4 format), and designate / as the mount point.
  • Select the remaining free space, type in 22000 MB, make it primary, and format it as swap area.

In any case, after creating those two partitions, I clicked Install Now. That took a minute, and then I continued with the remaining installation questions (e.g., time zone). When I got to the option of encrypting my home folder, I ran a search, reviewed a Lifewire (2018) article and a Linux Mint forum discussion, and concluded that the contents of the home folder, if not encrypted, would be available to anyone having physical access to the machine. The home folder would apparently store passwords saved by e.g., a web browser, among other things. So it seemed very advisable to encrypt the home folder on a laptop; perhaps less so on a desktop. The primary risk was that I might forget the password. Possibly an option would be to make an occasional zipped, encrypted copy of the home folder, with a different password I would not forget, or perhaps a zipped, unencrypted copy stored on another computer. Another risk was that it would be hard if not impossible to recover data from an encrypted folder, if the drive went bad. I would have to make a point of backing up the home folder frequently if I felt it had irreplaceable data. Encryption of the home folder could also impose a performance hit, though I suspected it would not be major. Taking all such factors into account, I decided to encrypt the home folder on this laptop.
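The “zipped, encrypted copy of the home folder” fallback mentioned above can be sketched as follows. This is my own illustration, not the post's method: the directories and passphrase are hypothetical, and openssl is just one of several tools (gpg or an encrypted zip would also work) that could produce such a copy.

```shell
# Throwaway stand-ins for the home folder and the backup destination.
src=$(mktemp -d)
dest=$(mktemp -d)
echo "irreplaceable data" > "$src/notes.txt"
# Pack and encrypt in one pass (AES-256, key derived with PBKDF2):
tar czf - -C "$src" . \
  | openssl enc -aes-256-cbc -pbkdf2 -pass pass:'a-password-I-will-not-forget' \
      -out "$dest/home-backup.tar.gz.enc"
# To check or restore, decrypt and unpack (tzf just lists the contents):
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:'a-password-I-will-not-forget' \
    -in "$dest/home-backup.tar.gz.enc" | tar tzf -
rm -rf "$src" "$dest"
```

Using a different, memorable passphrase for the backup addresses the forget-the-password risk, while the backup itself addresses the bad-drive risk.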

The EFI Partition

With those decisions made, I continued the installation. Soon, unfortunately, the installer produced an error. Here, I discuss several error messages, for future reference, and then I present the solution, which was to create an EFI partition. The first error was as follows:

Unable to install GRUB in /dev/sda

Executing ‘grub-install /dev/sda’ failed.

This is a fatal error.

That didn’t sound good. Clicking OK brought me an option to “Unmount partitions that are in use.” The specific partition it named was /dev/sdc. I went to Start > Administration > GParted to see what that was. I got a warning that I chose to ignore. As expected, GParted said /dev/sdc was the YUMI multiboot USB drive. That was what I was installing Linux Mint from. I didn’t want to unmount that. That left me with another error:

Bootloader install failed

Sorry, an error occurred and it was not possible to install the bootloader at the specified location.

How would you like to proceed?

I wasn’t sure, but it sounded like the installer had been trying to install the bootloader on my YUMI drive. That wouldn’t make sense, and it also wasn’t what the first error message (above) actually said, but otherwise why would it want to unmount /dev/sdc? This seemed a little screwy, so I chose the option to cancel the installation. But clicking OK didn’t do anything. I powered down the computer. (If all else fails, use a bigger hammer.)

Starting over, the installer took me almost immediately to that option to unmount /dev/sdc. Again, I said no to that. But now it looked like maybe I had made a mistaken assumption. When I chose “Something else” and saw the list of partitions, I saw that a new /dev/sdc1 had been created, and it was named Windows Recovery Environment. Was this somehow a residue of, or a new creation by, the Windows 10 installation I had previously attempted on the SSD? I started GParted, to take a look, but it gave me a message:

Libparted Error

Partition(s) 4 on /dev/sdc have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use. As a result, the old partition(s) will remain in use. You should reboot now before making further changes.

I clicked Cancel. Now GParted confirmed that /dev/sdc was definitely the Multiboot drive, and that disagreed with the installer. So I did reboot, to take another look. This time, I got a new error: “ubi-partman crashed.” I decided this installation was corrupt. I rebooted the laptop using the YUMI drive, used GParted to delete partitions from the laptop’s SSD and HDD, and started the installation over again.

It wouldn’t start. There was an error indicating that there was no space left on the USB drive. It occurred to me that the USB drive might be the problem. I started the installation over with a different YUMI drive. Later, it occurred to me that maybe the problem was, I had not allowed enough persistent drive space on the YUMI drive when adding Linux Mint to it. Apparently the installer could require more than a few hundred megabytes to remember what I was telling it. Be that as it may, the other YUMI drive ran into the same question about unmounting partitions in use. This time, I decided to say yes. It didn’t seem to affect anything: installation continued as usual. When we got to the partitioning phase, this too showed the existence of a Windows Recovery partition. So apparently that was the nature of the YUMI drive, from a Linux Mint perspective.

Unfortunately, the installation stopped at the same “Unable to install GRUB in /dev/sda” error. A search led to an indication that I had to boot the USB in UEFI mode, not Legacy mode. I rebooted, hit F2 during bootup, saw that the laptop was indeed set to boot in Legacy mode, changed that, disabled Secure Boot, and rebooted. This gave me “No Bootable Device.” One source said that I would need to use CSM options in my BIOS, and that those might appear only after I disabled Secure Boot. But I wasn’t seeing them. A search led to an indication that the solution was not to disable TPM; doing so could apparently cause further problems.

A StackExchange discussion said the problem could be that I hadn’t created an EFI partition. To do that, I set the BIOS to boot in Legacy mode. I started the Linux Mint installer. In Linux Mint, I went into Start > Administration > GParted. For my purposes, the advice was essentially to make sure there were no partitions on the SSD, and then create a 300MiB FAT32 primary partition labeled EFI. I clicked Apply. Now I could right-click on it > Manage Flags > check “boot” and “esp” > Apply. I had noticed that the other partitions I had created in the Linux Mint installer showed weird sizes in GParted, so at this point I used GParted to create the 120GiB (i.e., 122880 MiB) root and 22GiB (i.e., 22528 MiB) swap partitions too.
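The same layout can be expressed as a parted sketch. CAUTION: these commands are destructive, and /dev/sdb is an assumption (verify the device name with lsblk first). Each line is prefixed with echo so the sketch only prints the plan; removing the echo would run it for real.

```shell
DISK=/dev/sdb   # assumption: the target SSD; check with lsblk before trusting this
echo sudo parted -s "$DISK" mklabel gpt
echo sudo parted -s "$DISK" mkpart primary fat32 1MiB 301MiB        # 300MiB EFI
echo sudo parted -s "$DISK" set 1 esp on
echo sudo parted -s "$DISK" set 1 boot on
echo sudo parted -s "$DISK" mkpart primary ext4 301MiB 123181MiB    # 120GiB root
echo sudo parted -s "$DISK" mkpart primary linux-swap 123181MiB 145709MiB  # 22GiB swap
```

The end points are cumulative MiB offsets: 301 + 122880 = 123181, then + 22528 = 145709.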

GParted was still giving me the Libparted Error shown above. A search led to several sources suggesting that I use Ubuntu’s “Disks” program, which was apparently Gnome Disk Utility. I intended to see about running that in Linux Mint, but when I tried to restart Mint from a different YUMI drive, I got Error 61 (“Too many fragments”). A search for that led to only five hits, none very informative. Removing Linux Mint from that YUMI drive and then reinstalling it there solved that problem.

Booting into Ubuntu, I saw that running GParted there (via right-click on desktop > Open Terminal > sudo gparted) produced the same Libparted error. So the problem wasn’t specific to Linux Mint. Further testing confirmed that the error came up when I loaded Linux Mint via two different YUMI USB drives. I tried using a different program, Universal USB Installer, to put Linux Mint 18.3 x64 on a different USB drive. Running GParted from within Linux Mint on that single-purpose USB drive did not produce the Libparted error in GParted. It appeared, then, that YUMI might be causing a problem with GParted.

With the single-purpose USB drive, I proceeded through the Linux Mint installation. When I got back to the partitioning phase, in the Installation Type window, having already created that EFI partition and marked it as boot and esp (above), I went down to “Device for boot loader installation.” There, I selected /dev/sdb1, since that was the 300MB partition that I had created as advised (above). Then I selected the 120GB partition, clicked on Change, set it to be used as ext4, checked “Format the partition,” and designated it as the / mount point. With that done, the installation succeeded.

The solution here, then, was to start by using GParted to add the EFI partition and to create my other partitions; run the installer; and designate the partitions as described in the preceding paragraph. It was not clear that the Libparted errors actually hurt anything, but if I wanted to avoid them, apparently I needed to run the Linux Mint installer from a single-purpose USB drive, rather than from a YUMI multiboot drive. I did not check to see whether other multiboot tools would have the same problem, nor did I check other versions of YUMI, to see whether the problem persisted.
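One check worth running from the live session, given the Legacy-versus-UEFI confusion above: the /sys/firmware/efi directory exists only when the system booted in UEFI mode, so a quick test tells you which mode you are actually in before you start installing.

```shell
# Report the firmware mode of the current boot session.
if [ -d /sys/firmware/efi ]; then
    boot_mode=UEFI
else
    boot_mode="Legacy (BIOS/CSM)"
fi
echo "Booted in $boot_mode mode"
```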

Repositories and Updates

After installation, I unplugged the USB drive and rebooted. The Linux Mint installation worked. I had already provided the password to log onto my wireless connection, and I saw that it was active now. So I could start downloading and installing things as needed.

I recalled that Linux used repositories. How-To Geek (Hoffman, 2016) explained that, unlike Windows, Linux users would ordinarily get their software, and its updates, from repositories specific to the particular distribution. This task would be handled by a package manager, where a “package” was mostly a list of files that would need to be downloaded and installed in order for the desired program to work. Hoffman said it was possible to add other repositories, in addition to the official one(s) for the specific distribution.

At this point, repeated searches did not lead directly to clear guides for where to find lists of repositories, which ones to use, or how to install them. That was what I had also found in my most recent prior exploration, in three posts I wrote in June 2016. The steps I used then — and retook now, adapted to Mint 18.3 — were as follows:

  • Software Sources was available via either Start > Administration > Software Sources or Synaptic > menu > Settings > Repositories. In Software Sources, I went to Official Repositories > click on each mirror (i.e., Main and Base), one at a time > allow it to test the speeds of the entire list of mirrors, except maybe the ones whose flags show they are in other countries (if there’s a delay, wait; it may be trying to contact an Unreachable source) > select one of the fastest ones (revisit this occasionally to make sure the chosen source is reliable) > Apply. Then, back in the main Software Sources window, click the Update the Cache button at upper right.
  • Still in Software Sources, go to the PPAs tab (left side). PPA was short for Personal Package Archive. PCWorld (Hoffman, 2015) explained that a PPA was a minor software source, usually limited to some particular program. ELTP warned against PPAs. The PPA would typically offer the latest versions of software. Those versions were not yet adopted by the official repositories, and were therefore potentially unsafe. (Most often, that would apparently mean the software might conflict with other software.) An AskUbuntu discussion said that, when choosing a PPA, there were some practical questions (e.g., who made it, how many users have used it), but there were also some unknowns (e.g., did someone later add unstable or malicious software to an initially trustworthy PPA). There were many PPAs. (See e.g., Ubuntu’s database.) While I was here in Software Sources, I wanted simply to work through the PPA process once, for future reference. I decided to use the Wine PPA that Hoffman also used as an example. (Wine was a Linux tool that enabled some Windows programs to run in Linux. Its name was short for “Wine Is Not an Emulator.”) The name of the desired PPA could come from a variety of sources (e.g., the Ubuntu database or, in this case, Hoffman’s recommendation). To add the Wine PPA, Hoffman clicked on Add a New PPA (in Software Sources > PPAs) and typed its name (i.e., ppa:ubuntu-wine/ppa). An Ubuntu PPA was relatively safe because, as noted above, I had chosen the Ubuntu-based version of Linux Mint, as distinct from the Debian-based version. It seemed Hoffman’s selection was outdated, however: clicking OK got me a note that this PPA was deprecated. So I canceled out of that and tried instead the WineHQ website that Wikipedia named as Wine’s official site. A search on that site led to a recent indication that the desired PPA was now ppa:wine/wine-builds — but it turned out that was deprecated too! 
The advice in both of those messages was instead to enter a set of commands in Terminal. Those commands did not immediately add anything to Software Sources. I had to close and restart Software Sources to see that I now had Winehq under Additional Repositories and WineHQ packages under Authentication Keys. (I would henceforth see occasional indications, in Terminal, that the system was “Ignoring file ‘Release.key’ . . . [because it had] an invalid filename extension.”) The commands I entered (for Linux Mint 18) were:
sudo wget -nc https://dl.winehq.org/wine-builds/Release.key
sudo apt-key add Release.key
sudo apt-add-repository 'deb https://dl.winehq.org/wine-builds/ubuntu/ xenial main'
  • If I wanted to see what repositories my system was using, I could look at the PPAs and Additional Repositories tabs in Software Sources. Alternatively, in Terminal, I could type cd /etc/apt/sources.list.d and then ls. The ls command would list the files in that directory. I could view those files (notably official-package-repositories.list and additional-repositories.list) by typing sudo xed (filename), where sudo gave root permissions and xed was the name of the default text editor. There, again, in additional-repositories.list, I saw the Wine repository listed.
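The same check could be done in a single Terminal command; a minimal sketch, assuming the standard locations just mentioned:

```shell
# Print every active "deb" line from the main sources file and from the
# per-repository files under sources.list.d (commented lines are skipped)
grep -h '^deb' /etc/apt/sources.list /etc/apt/sources.list.d/*.list 2>/dev/null
```

If the Wine repository was added as above, its deb line should appear in this output.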

Now I was in a position where I could see and update the list of installed software packages, including those available through my additional repositories. I would soon be adding to the list of packages, but for the moment I just wanted to take a first run through the process. The New Features page said that Linux Mint 18.3 Cinnamon used a new Software Manager, offering Flatpak access to bleeding-edge software. For now, at least, I planned to stick with more stable software.

For that purpose, I went into Start > Administration > Synaptic Package Manager. There, I scrolled down to see what packages were installed. Clicking on menu > Help > Icon Legend showed me what different colored checkboxes meant (e.g., green = installed). The orange sun or sunburst icon meant “Package is supported.” By whom was not clear, but a forum discussion led me to think it probably meant supported by Linux Mint. What that entailed, in turn, was unclear, and a search suggested that nobody was saying. Clicking the Status button (at lower left) produced lists of packages that were Installed or Not Installed. Clicking menu > Edit > Fix Broken Packages produced a status bar notice (bottom left corner of Synaptic window): “Successfully fixed dependency problems.” (I had no idea whether there were any such problems in the first place.) Menu > Settings > Repositories brought up the Software Sources list of repositories (above).

Next, in Synaptic, I went into menu > Settings > Preferences > General tab > check Consider recommended packages as dependencies. I clicked the Reload button, selected Status > Installed (upgradeable), and saw that many packages had the exclamation-in-a-box icon. I thought that meant the package had been upgraded since the Mint 18.3 Cinnamon ISO was assembled. But then, if I selected Status > All, the status bar said, “56644 packages listed, 2231 installed, 0 broken, 0 to install/upgrade, 0 to remove.” At first glance, that seemed to say no further upgrading was needed. In fact, those last counts referred to packages marked for action in Synaptic, not to packages with upgrades available; that would explain how “0 to install/upgrade” could coexist with a long Installed (upgradeable) list.

I was not clear on the relationship between Synaptic and Start > Administration > Update Manager. Contrary to the Synaptic indication that there were zero packages marked to install or upgrade, Update Manager said there were many. Update Manager also provided a Level number for each such update. ELTP said that levels 1-3 were safe, while levels 4 and 5 were riskier but might add functionality. Update Manager itself differed slightly: in menu > Edit > Preferences > Levels, it described level 3 as having a “Large” impact and advised, “Apply with caution. Impact on multiple applications.” By default, it selected only levels 1 and 2. I decided to leave that as it was, but I unchecked the Visible entry for level 4. By contrast, ELTP suggested going ahead and installing Level 4 updates, but adding them one at a time, with a reboot after each, so as to see what impact each had. It warned that “if you’re unlucky and your system does get messed up because of a level 4 update, a clean re-installation is sometimes the only solution.” While I was there in Preferences, I also went to Options > Only show a tray icon when updates are available; and I went to Auto-refresh > “Then, refresh the list of updates every 8 hours.” I closed that and clicked Install Updates. That triggered more updates, so I ran those too.

Now I saw that a Linux kernel upgrade (to 4.13.0) was shown as a level 4 update, with an exclamation mark icon evidently signaling that it was a security update. That was probably why it showed up even though I had set the Preferences to make only levels 1-3 visible. Some participants in a Reddit discussion felt that kernel upgrades could cause instability. ELTP said that “security fixes for the kernel usually only repair small risks” and advised, “If your machine functions well on the default kernel series, I strongly advise to stick with it” because the whole installation “has been designed around the ‘engine’ of a particular kernel series.” But ELTP also noted that it might be necessary to update the kernel to accommodate “very new hardware,” and further said that “updating the kernel is currently a necessity . . . [for] protection against the highly dangerous Meltdown/Spectre vulnerabilities” and that “Only the 4.4 and the 4.13 kernel series contain these fixes.” ELTP said that, if the new one failed, the GRUB menu at bootup (via Advanced Options for Linux Mint) would allow me to revert to my prior kernel, and (after a reboot) Start > Administration > Update Manager > View > Linux Kernels > Remove would remove the selected kernel. To find out what kernel series my system had, I typed uname -r. It returned “4.10.0-38-generic.” I was not sure why Meltdown and Spectre fixes would have been incorporated in 4.4 and 4.13 but not 4.10. A Linux Mint forum post explained that the first two were LTS (i.e., long-term support) releases and thus were given priority. Thus, I did need to install this kernel upgrade. So I did. After that, I clicked Refresh. It said, “Your system is up to date.”
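The kernel bookkeeping described above has command-line equivalents; a sketch (the dpkg filter is my own addition, for listing roughly what Update Manager shows under View > Linux Kernels):

```shell
# Kernel currently running (e.g., 4.10.0-38-generic before the upgrade)
uname -r
# Kernel image packages currently installed; old ones are candidates
# for removal once the new kernel has proven stable
dpkg --list | grep -E '^ii +linux-image-[0-9]' || true
```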

Later, I would see, in the Debian Handbook, that such updates and upgrades could be performed from the command line with two commands:

sudo apt-get update
sudo apt-get upgrade

To remember the order of the commands, it might help to observe that they were to be entered in alphabetical order. The update command was recommended, in fact, before any of the apt-get software installation commands suggested below: according to ItsFoss, sudo apt-get update would update the package database, apparently ensuring that the right files were being installed. So, for example, after adding a new repository, an update command would make sure that the system was updated with respect to it along with other sources.

It seemed that those commands did not observe the Linux Mint “level” settings. To find out how that worked, I went into Update Manager > Edit > Preferences > Levels > make all levels visible > Apply. Now (at the later time when I was adding these remarks) I saw that I had various updates at levels 1, 2, and 4. I ran the update and upgrade commands just listed. After the update command, the items listed in Update Manager remained the same. In response to the upgrade command, Terminal informed me that it was going to install a bunch of stuff, apparently including items listed at level 4 in Update Manager. I went ahead and approved the upgrade. Then I refreshed Update Manager. Sure enough, almost everything was gone from the list now, other than a kernel upgrade and one other Level 4 package. The conclusion: the apt-get upgrade command ignored the Mint level system. If I wanted to install software only at certain levels, I needed to use Update Manager instead. But the upgrade command appeared so frequently, in various instructions, that for now I decided to use it where people included it in their recommended commands, and to postpone use of the level system until I saw a need for it.
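One way to see in advance what apt-get upgrade would sweep in, without committing to anything, is apt-get's simulate flag; a sketch (the awk filter, my own addition, just extracts package names from the simulated Inst lines):

```shell
# -s (simulate) prints the actions an upgrade would take without making
# any changes; no root required
apt-get -s upgrade | awk '/^Inst/ {print $2}'
```

Comparing that list against Update Manager shows which level entries a plain upgrade would cross.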

To get a command that could be configured for the Mint level settings, a StackExchange discussion pointed toward mintupdate-tool and provided a discussion of how to use it. In addition, note that those commands would not upgrade to a newer version of the distribution (in this case, Linux Mint). For that, the command would be sudo apt-get dist-upgrade. But a Linux Mint forum discussion said that this command would actually not function that way in Mint: for that kind of upgrade, you would have to change your sources.list file (above). A Linux Mint Community tutorial provided further detail, along with a recommendation not to upgrade unless you need something specific in the newer distribution.

Along with software updates, there was also the possibility of firmware updates. First, the Linux Mint Installation Guide said I might want to go into Start > Administration > Driver Manager. It had an Intel microcode update for me. I installed that, rebooted, and went back in. (ELTP said, to the contrary, that Intel or AMD microcode should not be installed, due to potentially severe boot problems, but that advice did not seem responsive to the recent Meltdown and Spectre CPU vulnerability concerns.) There were no more driver updates. Having been online during installation, I didn’t need to go into Start > Sound & Video > Install multimedia codecs; that option wasn’t even there for me.

As noted above, Acer offered various updates. Some were specific to Windows, and thus appeared irrelevant to Linux Mint. But there were also several BIOS/firmware updates. One appeared general (i.e., “Improve system performance”); others were focused on specific problems (e.g., “Adjust brightness during POST”). These were unfortunately provided in .exe files that would have to be installed on a system running Windows. I hadn’t installed them before removing the Windows 10 HDD from the system. It now seemed, then, that my options were these: don’t install them; install them by installing or restoring Windows on the computer (entailing some complications for the present Linux installation); or run Windows 10 from a USB stick. Given the hassle involved — not to mention what I had just been through, with Windows 10 aggressively installing itself whenever and wherever possible — the Win10 options were not thrilling. Wine was an alternative, but I was not very interested in entrusting a BIOS update to Wine’s potentially imperfect translation. ELTP offered a Linux-only alternative for installing the latest Intel or AMD microcode, but I wasn’t sure whether that included everything in the latest Acer system update. Windows 7 rather than 10 on a USB drive would have been a good alternative, but I found that Windows 10 wouldn’t run the Windows USB/DVD Download Tool recommended by How-To Geek. Another possibility was to use WinToUSB, recommended by PC Mag. I tried it. It didn’t seem to accept Windows XP at all. It tried but failed to produce a bootable USB with Windows 7. I decided to wait until I had a Windows 7 VM running in Linux, and see if I could use that to create a bootable Win7 USB drive that would run the .exe files (which might turn out to be designed to work only on Windows 10) to modify the BIOS.

ELTP recommended that I also take a look at SSD drivers and firmware updates. For my 850 EVO, Samsung did have a firmware update, in the form of an ISO. I burned that onto a USB drive using Universal USB Installer, tried to boot the laptop from it, and found it wouldn’t boot. I tried again with Rufus. (Later, I saw that Samsung’s SSD Firmware Installation Guide, listed further down on that Samsung webpage, recommended using UNetbootin.) Rufus worked, but when I booted it, the Samsung firmware updater said, “No supported SSD detected for Firmware Update!!!” I booted back into Linux Mint > Terminal > sudo lshw -short. It did confirm “500GB Samsung SSD 850.” I went back to the Samsung download site. On second glance, it seemed that my download may have been intended for the old 2.5″ form factor 850 EVO, not for my M.2 850 EVO. I wasn’t sure; it’s just that I noticed they did have a separate ISO download for the 840 EVO mSATA. I tried re-downloading the only listed 850 EVO download. A DoubleKiller byte-for-byte comparison confirmed that it was identical to what I already had, so I deleted it. I called Samsung’s Customer Service (800-726-7864 in the U.S.). The technician said the M.2 software had never been updated, so there was no update.
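For future reference, the firmware revision already on a drive can be read from Linux itself, which makes comparison against a vendor download page easier; a hedged sketch (smartctl comes with smartmontools, pulled in by the gsmartcontrol package installed below; /dev/sda is an assumption about the device name):

```shell
# Print the drive's identity block and filter for the model and
# firmware lines (device name is an assumption; check lshw or lsblk)
sudo smartctl -i /dev/sda | grep -iE 'model|firmware'
```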

Linux Software Installations

Now that I had the drive partitioned, Linux installed, and repositories and updates in place, it was time to start thinking about software. In the past, I had swung both ways on this topic. Circa 2010, I had run Windows XP in a VM on Ubuntu, and had done most of my work in the VM. In 2016, during my last serious look at Linux, I had recognized that I could divide much of my Windows workload among Linux programs, and Windows programs running on Linux via Wine. As detailed in numerous (1 2 3 4 5 6 7 8) posts, I had reviewed multiple lists of recommended Linux programs and had spent a bunch of time trying to find ways to avoid working in Windows, with mixed results.

This time around, I felt myself to be more in the middle. I had an established set of Windows programs that did things I wanted. For those, I hadn’t found good Linux alternatives. I was going to use those Windows programs in a VM. I wasn’t going to knock myself out trying to use inferior Linux alternatives. But, consistent with the long-term objective of reducing dependence on Microsoft, if I could do something in Linux, I would. (This post does not provide much discussion of cloud-based alternatives.)

I started with my 2016 post in which I went through my favorite Windows programs and looked at which I could replace in Linux. That post divided Windows programs along a spectrum, ranging from those that were irrelevant in Linux (e.g., antivirus, Windows tweakers, Windows registry editors) through those for which there were Linux versions (e.g., Firefox, Audacity, Google Earth) and those with good Linux alternatives (e.g., file search tools, hardware information utilities) to those whose Linux alternatives were tolerable (e.g., video editors, PDF readers) or less-than-tolerable (e.g., ImgBurn, IrfanView, Acrobat). Another post described my experiences of using some of those Linux alternatives.

This section presents and discusses the Linux programs that I installed on this system. The order of installation matters in some cases: some of these programs needed to be installed before others could be installed, or before various tweaks or the Windows VM discussed in later sections were feasible. Before skipping sections, it might be best to determine whether any of their contents are necessary for other programs.

Note that program installation and removal commands shown here tend to use apt-get. As described by TecMint (Khera, 2016), apt (short for Advanced Package Tool) installed the specified package, along with all packages listed as dependencies (i.e., packages needed in order for the specified package to run — in other words, packages on which this package depends). Khera indicated that Aptitude was a superior alternative (see also StackExchange); but for whatever reason Aptitude was not the tool recommended in the commands I drew upon here.
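Those dependency relationships can be inspected before committing to an install; a sketch using apt-cache, with gedit (from the install list below) as an arbitrary example:

```shell
# What would this package pull in? (lists Depends/Recommends entries)
apt-cache depends gedit
# And what already depends on it?
apt-cache rdepends gedit
```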

Note also that this post generally avoids re-listing software already installed in Linux Mint 18.3; that is, these remarks may leave out some programs not pre-installed in other distributions. Note, too, that software installation commands would produce an error (e.g., “could not get lock”) if another software installation tool (e.g., Synaptic) was open. Recall also the advice to re-run the sudo apt-get update and upgrade commands (above) after adding any new repositories, and before running commands to install any other software.

Less Complex Installations

For some Linux programs, the questions of whether and how to install were relatively simple. Those less complex programs and related issues were as follows:

  • Mono. ELTP recommended uninstalling mono, on grounds that it posed a minor security threat. It seemed mono had also been controversial, years earlier, for its involvement with Microsoft patents. It now seemed, though, that mono had largely been accepted as serving useful purposes, including support of popular cross-platform tools (e.g., KeePass, Pinta, Wine). I took its default inclusion in Linux Mint 18.3 as a vote of confidence, and decided against ELTP’s recommendation.
  • Minor and underlying packages. From various sources, I accumulated a list of Linux programs recommended for sundry purposes. Some of these recommendations came from ELTP; their specific provenance should be available by adding the specific program name to a generic search of ELTP. The list of packages is presented here in a single command — a long one, but not remotely approaching the maximum Linux command length. That command does not include installations required for specific purposes, such as those needed for VMs (below): I thought it was probably better to keep those together with related instructions. Preceding this long installation command is a shorter uninstallation command, incorporating plausible recommendations mostly presented on several ELTP webpages. I entered each of these commands, one at a time, watching their messages and waiting to see if they ran successfully. (Note: removing the old Intel driver would not be appropriate for computers built before 2008.)
sudo apt-get remove gnome-orca ndiswrapper* xserver-xorg-video-intel
sudo apt-get install \
audacity doublecmd-gtk p7zip-full p7zip-rar gnome-network-admin unrar xterm gdebi htop gnumeric gtkhash gedit gsmartcontrol fonts-crosextra-carlito fonts-crosextra-caladea ttf-mscorefonts-installer mint-meta-codecs xpad xfburn dconf-editor libdvd-pkg
  • Opera. I went to the Opera website, using Firefox on the Linux laptop, and allowed it to autodetect Linux and offer me a Linux download. (If I went there on the Windows machine, it offered a Windows download instead.) The download was a Debian (.deb) file. I chose to save it rather than open it, so as to have it for future reinstallation. I double-clicked on it, in the Downloads folder, and proceeded with Install Package. I kept the default selection to update Opera with the rest of the system. When it was done, I went to Start > Internet > Opera. I made sure Opera was synchronized on my Windows machine, and then, in Opera on the Linux machine, I went to menu (i.e., click the O in the upper left corner) > Settings > Browser > Synchronization > Sign in. Once that was sorted out, I went to menu > Extensions > Get extensions > search for LastPass > Add to Opera > log in. Then I searched for another extension, to enable the classic WordPress interface for my blogging: Tampermonkey. With that installed, I was able to install the requisite script. To prevent it from being discarded, I considered tinkering with /usr/lib/x86_64-linux-gnu/opera/opera_autoupdate, or at least trying to figure out how to add --disable-update to the startup icon’s command, but I settled for a hope that those days of wiping out my script were behind us.
  • 32-bit Architecture. I planned to install i686 (i.e., 32-bit) versions (as distinct from x64 or x86_64, i.e., 64-bit) of Beyond Compare (below) and perhaps some other programs. Also, WineHQ said that 64-bit Wine (below) required the installation of 32-bit libraries to run 32-bit Windows applications. This command was thus a common first step for installation of such packages:
sudo dpkg --add-architecture i386
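To confirm that the command took effect, dpkg can report its foreign architectures:

```shell
# Should now list i386 alongside the native amd64 architecture
dpkg --print-foreign-architectures
```

A subsequent sudo apt-get update then makes the :i386 packages resolvable.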
  • Wine. As noted above, Wine enabled Linux to run some Windows programs. Here, I revisited the procedure described in more detail in a post written in spring 2016, with the aid of instructions from TecMint (Saive, 2018) and my own more recent post. Note that, as described above, (1) update would be run before any install command, and (2) I had already installed the 32-bit architecture as well as (3) the Wine repository as an example of a PPA. With that done, I ran these commands to install Wine, accepting its offer to install Mono, and approving its default to Windows 7 (opting for the 32-bit WINEARCH because of an impression that the 64-bit was still problematic):
sudo apt-get install --install-recommends winehq-stable
WINEARCH=win32 winecfg
  • Brother DCP-7065DN Printer. Another post provided a more detailed elaboration for installing Brother’s scan software. I wasn’t sure I would need that. I expected to be using Windows software with my scanning. So for now, I just plugged in the USB cable. The laptop recognized the printer automatically. A test page printed successfully.
  • Google Earth. At this writing, recent reports indicated that the latest version of Google Earth was not functioning correctly in Linux Mint. Multiple websites indicated that people were having better luck with an older version. For that purpose, it seemed that my efforts in a previous post could be instructive. I did not use Google Earth often. For the simplest solution, I decided to start with Google Earth in a Windows virtual machine.
  • Adobe Reader. A previous post describes my efforts to run several different Linux and Windows versions of Adobe Reader in Linux Mint. FOSSLinux offered a simpler route to install the same version (i.e., Adobe Reader 9.5.5, the last one built for Linux). Assuming gdebi was already installed (above), the essential steps were simply to download the .deb file from the Adobe download page and then double-click on it > Install Package — or (if I didn’t want to keep a backup) just run it directly from the downloader. The icon was installed at Start > Office. It worked: the program opened, and it was able to open and view .pdf files.

More Complex Installations

Unlike the programs just described, some programs required more thought and/or effort to install and configure.

Firefox

Firefox was already installed. I changed it to show its menu by right-clicking on its tab bar > Menu Bar. I had already set my older version of Firefox in Windows to synchronize via Firefox > menu > Tools > Options > Sync. In the Firefox installation in Linux Mint, I went to Tools > Sign In To Sync > Sign In. That began the process of installing my preferred add-ons.

Unfortunately, as I now saw, this was Firefox 59, and it did not support important legacy add-ons, notably Tab Mix Plus and Session Manager. On the Windows machine, I was using Firefox 52.7 Extended Support Release (ESR). At this writing, the next major Firefox ESR release was Firefox 60 ESR, scheduled to occur within just a few weeks.

I decided to revert to Firefox 52 ESR on this Linux laptop. I was tempted to follow Mozilla’s advice and download the 64-bit version from the Firefox ESR download page. Instead, as indicated at AskUbuntu, with Firefox not running, I entered these commands:

sudo add-apt-repository ppa:mozillateam/ppa 
sudo apt-get install firefox-esr

The first command advised me to consult the “Mozilla Team” webpage for that PPA. The key aspect of that page, for present purposes, was a repeated reminder of the impending upgrade to 60 ESR. When those commands were done, I went to Start > Internet > Firefox Web Browser. There were now multiple Firefox entries there. The one I wanted was the boldfaced one, boldface apparently indicating a newly added program. Its icon was round rather than square. It worked.

But now, belatedly, I realized that possibly I should have uninstalled Firefox 59 before installing Firefox 52 ESR. I went to Synaptic > search for Firefox. Next to Firefox 59.0.1, it showed a gray icon with an exclamation mark. Synaptic > Help > Icon Legend indicated that this merely meant it was installed and upgradeable. I clicked on the icon > Mark for Removal > Apply.

When that was done, I went back to Start > Internet > Firefox Web Browser. I repeated the sync process as just described. When it was done adding tabs, corresponding to the extensions it was installing (as dictated by the Firefox installation on my synchronized Windows computer), I went to Tools > Add-ons > Extensions. Several add-ons indicated that they needed Firefox to restart to complete their installation. I clicked Restart Now on one of them. When it restarted, I went back into Extensions and modified settings on some (including previously saved settings) as desired. I did not yet have a Saved Settings folder set up on this computer, but eventually I would be exporting settings from some of these extensions (e.g., Tab Mix Plus, Session Manager) and saving them there, for use in future installations (in case synchronization didn’t work).

PeaZip

For file compression, my first choice was WinRAR. WineHQ indicated, however, that even at its best, WinRAR on Wine in Linux had imperfections. RARLab offered a page of unsupported user-contributed RAR options, but in the area of file compression I wanted an established tool. Alternatively, 7-Zip came with Linux Mint, in the form of p7zip (optional p7zip-full and p7zip-rar packages), but it was limited to the command line. I didn’t want that: I wanted to see very clearly what I was doing and which files would be affected. (There was discussion of GUI front ends, but the situation appeared unsettled.)

Linux Mint Cinnamon 18.3 came with Archive Manager. In an earlier exploration, I had concluded that PeaZip seemed to be the leading GUI archiver. A search led to a Manjaro forum entry suggesting File Roller as an alternative. File Roller did not rank as highly on AlternativeTo, but that was probably just because it didn’t seem to have a Windows version. Those two — PeaZip first — headed a ToppersWorld list of five file archive utilities for Linux.

I decided to try PeaZip. PeaZip had a help file and some information on its website, but I found those were mostly oriented toward Windows users. Fortunately, FossLinux provided installation guidance. In effect, the advice was to run this command (assuming I had already installed gdebi, as listed above):

sudo apt-get install libatk1.0-0:i386 libc6:i386 libcairo2:i386 libgdk-pixbuf2.0-0:i386 libglib2.0-0:i386 libgtk2.0-0:i386 libpango1.0-0:i386 libx11-6:i386 libcanberra-gtk-module:i386

Then download and run the .deb installer. In most cases, including mine, that seemed to be the GTK2 version. (GTK was the standard GNOME desktop interface, from which Cinnamon was derived; Qt was the standard KDE desktop interface.) As UbuntuHandbook said, this put PeaZip files into /usr/local/share/PeaZip. (The portable installation alternative would put files elsewhere.)

After installation, unfortunately, I found that PeaZip did not exist as a Start menu item. PeaZip’s Help webpage said I could find guidance in a lower folder: /usr/local/share/PeaZip/FreeDesktop_integration. The only guidance there was in a file named readme_Linux2.txt. It contained instructions for integrating PeaZip with the Nautilus file manager, but not with Nemo. That folder also contained several PeaZip shortcuts (launchers, in Linux-speak). I tried to copy one to the /usr/share/applications folder mentioned in another post, but it seemed I had to be running as root to do that. But I was able to copy that PeaZip launcher to ~/.local/share/applications. At first, that did not seem to add PeaZip to the Start menu. But then, later, without the aid of a reboot, it was there, in the Administration folder.

The next problem was getting PeaZip to rise to the occasion when I needed it. For instance, if I right-clicked on a file that I wanted to compress using PeaZip, I did see a Compress option, but that appeared to be just the standard Linux Mint archive manager. Using that option did not open PeaZip. So the question was, how could I add PeaZip’s context menu options to Nemo’s right-click menu?

One answer seemed to be, just use the scripts included with the installation. Those scripts were installed in /usr/local/share/PeaZip/FreeDesktop_integration/nautilus-scripts/Archiving/PeaZip/. As that path name suggested, however, those scripts were intended for the Nautilus file manager, not for Nemo. It appeared possible but not easy to use those scripts with Nemo. AskUbuntu said that Nemo (which began as a fork of Nautilus) could use Nautilus scripts that were invoked by .nemo_action files. StackExchange seemed to say that .nemo_action files would tell Nemo which actions to include in the Nemo context menu. So, for example, a file named clamscan.nemo_action could add a ClamScan entry to the Nemo context menu. To work, the .nemo_action file needed to be located in /usr/share/nemo/actions or in ~/.local/share/nemo/actions. Altogether, this seemed to be something I might be able to figure out in a few hours. Of course, I didn’t want to invest hours; I just wanted a file zipper.

It looked like I could download various Nemo extensions and .nemo_action files. For instance, a search led to a list of Nemo extensions, including a File Roller extension. Another list provided about ten .nemo_action files, several of which had peazip in their names. These turned out to be, not downloads, but text files from which I would apparently create my own .nemo_action files. For instance, the one named peazip-add-archive.nemo_action read as follows:

[Nemo Action]

Name=Peazip add to archive
Icon-Name=peazip
Exec=peazip -add2multi %F
Selection=any
Extensions=any;
Name[tr]=Peazip arşive ekle
Dependencies=peazip;

That penultimate Name[tr] line was a Turkish translation of the Name entry (“add to archive”); apparently its non-ASCII characters didn’t affect program execution, so I left it as it was. I right-clicked on the Linux Mint desktop > Create New Document > Empty Document; I pasted those file contents into that file; and I saved it on the desktop with the name given on the webpage (i.e., peazip-add-archive.nemo_action). I did the same thing for the others (i.e., peazip-extract-archive.nemo_action and peazip-open-archive.nemo_action). I tried to put those three files into /usr/share/nemo/actions, but the Paste button was grayed out; presumably that meant I would have had to do it as root. Instead, I pasted them into ~/.local/share/nemo/actions. Viewed in Nemo, that folder had a message across the top that said, “Actions: Action files can be added to this folder and will appear in the menu.”
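The create-and-paste routine just described could equally be scripted; a minimal sketch that writes one of the three files straight into the per-user actions folder (content reproduced from the listing above, minus the Turkish Name[tr] line):

```shell
#!/bin/sh
# Write the PeaZip "add to archive" action into the per-user Nemo
# actions folder (no root needed, unlike /usr/share/nemo/actions)
ACTIONS_DIR="$HOME/.local/share/nemo/actions"
mkdir -p "$ACTIONS_DIR"
cat > "$ACTIONS_DIR/peazip-add-archive.nemo_action" <<'EOF'
[Nemo Action]
Name=Peazip add to archive
Icon-Name=peazip
Exec=peazip -add2multi %F
Selection=any
Extensions=any;
Dependencies=peazip;
EOF
```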

OK, I thought, let us see about that. I right-clicked on some random file and, sure enough, there was a context menu option: “Peazip add to archive,” exactly as shown in the .nemo_action file text shown above. Selecting that opened PeaZip, with many options for the archive I was about to create. Presumably it worked the same with the extract-archive and open-archive .nemo_action files. Right-clicking on the empty desktop brought up several options: Open with Peazip, Peazip extract from archive, Peazip add to archive. I might want to translate, into better English, the contents of the Name lines in each of the three .nemo_action files I had just copied, but otherwise it appeared that these three .nemo_action files were all I would need.

So far, then, I was able to get what I wanted from Nemo by using .nemo_action files that others had created. If that turned out not to be the case — if I needed to develop my own .nemo_action file — PCSteps (Kyritsis, 2015) offered instructions. It looked very straightforward. As suggested in the text of the foregoing .nemo_action file, the key thing would be to know the command required to run the selected program (e.g., PeaZip) in the desired way.

It looked like the only remaining task, to finish the PeaZip installation, was to associate it with the filetypes that I wanted it to create or open. For this, I created an empty text file, named it x.zip, right-clicked on it, and chose Properties > Open With > select PeaZip > Set as default > Close. I tried the same trick with x.rar, x.7z, and x.tar.gz, but somehow those were already set by default. Maybe the .zip had been too, and I hadn’t noticed. I could continue that exercise with other compressed file extensions as they came to mind. This step wasn’t essential; it just meant that the file would automatically open in PeaZip if I double-clicked on it. I could instead have used right-click > Open with > Other Application > PeaZip, each time I wanted to open a file.

Later, I saw that last approach was mistaken. Apparently Linux didn’t decide filetypes by extension; it looked at file contents (i.e., MIME types). It understood me to be saying that PeaZip should be the default opener for files with text contents, regardless of their filename or extension. Aside from that, it appeared PeaZip was set up and operational.
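The point about extensions can be demonstrated from the terminal. This is a sketch: the peazip.desktop name is a hypothetical placeholder, and xdg-mime is the standard freedesktop tool for querying and setting these associations.

```shell
# A text file named x.zip is still text/plain to Linux: the type follows the
# file's contents, not its filename extension
printf 'not really a zip' > /tmp/x.zip
file --mime-type -b /tmp/x.zip    # reports text/plain despite the .zip name

# So the right way to make PeaZip the default for real zip archives is to
# bind it to the MIME type (desktop-file name is hypothetical):
# xdg-mime default peazip.desktop application/zip
# xdg-mime query default application/zip    # verify the association
```

This is why "Set as default" on a fake x.zip changed the handler for text files rather than for zip archives.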

Eventually I used PeaZip to extract files from a .rar archive. Its functioning was not exactly what I expected. First, when I right-clicked on a .rar file, it did not offer to extract files from the archive. The only available option (provided by one of the foregoing scripts) was to add to archive. That didn’t really matter — it seemed those scripts might do nothing more than open PeaZip. I wasn’t yet very familiar with PeaZip’s functioning, so I killed that and restarted it from Start > Administration > PeaZip. In PeaZip, I selected toolbar > Extract. I modified a few options — I didn’t remember what the default settings were, and there was no button to restore them. On this first try, I made sure the option to delete the archive after extraction was not selected, just in case things did not go well. Then I dragged the .rar file into the space labeled “Drag here archives to extract, or right-click for more options.” But when I clicked OK, I got an error:

“86456240870” is an invalid integer.

Press OK to ignore and risk data corruption.
Press Abort to kill the program.

A search failed to turn up information on that error. But search results did lead to websites where users encountered this message in programs that appeared to have very different purposes. It seemed that PeaZip’s developer might have borrowed code without thoroughly working through how it would function in this context. It did not seem likely that PeaZip would actually be corrupting anything in this situation, and anyway I planned to compare the unzipped files against the backup eventually, so I tried OK. That produced an apparent stall: everything was grayed out in PeaZip, and nothing appeared to be happening. I gave it a while, in case PeaZip was quietly doing some preparatory calculation before the intended extraction. After a while, I concluded that it wasn’t. The attempt to use PeaZip to extract a .rar file, which it was supposedly able to do, had apparently caused the program to freeze. I typed pkill peazip to get rid of it and any processes it might have spawned.

By this point, I was gaining familiarity with the Linux command line — I was seeing that its performance could actually be clearer and more manageable than the black box of a GUI, with PeaZip here providing a prime example — and so I decided to install p7zip-full and p7zip-rar after all, and thus used p7zip as described in another post. (Synaptic seemed to say that I had to specify p7zip-full, even though p7zip-rar depended on it. I thought dependency meant that simply specifying p7zip-rar would be enough to install both of them.) I also installed unrar. Note, however, that its companion rar program — to create .rar files — appeared to be RARLabs shareware that might require purchase after a trial period.
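Once those packages were in place, extraction from the command line was straightforward. A sketch, with placeholder filenames:

```shell
# Extract a .rar preserving folder structure; 7z can read .rar archives once
# p7zip-rar is installed, and unrar does the same job on its own
extract_rar() {
  local archive="$1" dest="${2:-.}"
  7z x -o"$dest" "$archive"
  # equivalently: unrar x "$archive" "$dest"/
}
# usage (placeholder names): extract_rar backup.rar /tmp/out
```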

Struggling with File Handling Tools

As I worked through the process of installing software in Linux, I encountered a separate category of programs that presented particular difficulties. For reasons detailed in this section, I would conclude that these programs were probably better run within a Windows VM.

VeraCrypt

I had previously found that VeraCrypt was among the leading Windows drive encryption tools. At this point, on Windows, I’d been using VeraCrypt, and its predecessor TrueCrypt, for some years. Addictive Tips (Diener, 2017) confirmed that VeraCrypt was still widely considered the best Linux drive encryption tool. As discussed in that earlier post, I planned to use the same approach as in Windows: create and encrypt the data partitions, but not the program partition (i.e., drive C, on a Windows system), lest I lock myself out of my own computer.

VeraCrypt was most effectively used before loading data onto partitions, and I wanted to get my data partitions set up and loaded so that I could organize files and program settings (e.g., links to frequently used files; locations of cache folders) as I began to use other programs. So it was time to set up VeraCrypt.

A search led to several sources offering guidance on using VeraCrypt in Linux. A search for VeraCrypt in Synaptic on my Linux Mint machine did not find VeraCrypt. Addictive Tips (Diener, 2017) said that, in Ubuntu, the solution was to add a repository. A search turned up a few others cautiously acknowledging that approach for Linux Mint as well. On the other hand, a different search led to sites supporting the approach of downloading and installing it as a standalone package, not through the apt-get system — so that, in other words, apt-get would not be monitoring updates and dependencies.

I followed the Manjaro.site suggestion that I download VeraCrypt in .tar.bz2 format from the VeraCrypt website. (I probably could also have downloaded the Debian version (from e.g., the openSUSE website), as Diener recommended for Debian systems.) After verifying the PGP signature, in Nemo, I right-clicked on the .tar.bz2 file > Extract Here. That gave me a Veracrypt setup folder. I saved the download for possible future reinstallation and went into that Veracrypt folder. I had a 64-bit system, and I wanted to install using the GUI rather than the console version, so I double-clicked on the file whose name specified an x64 GUI version. It gave me several options, among which Run didn’t work. I had to choose the Run in Terminal option instead. I proceeded with the Install VeraCrypt option. It opened a message saying, “To uninstall VeraCrypt, please run ‘veracrypt-uninstall.sh’.” I had to kill that message for installation to proceed. When I did, it did. At one point, it paused with, “Press Enter to exit ….” Eventually I realized that was my cue: I was supposed to press Enter to exit. And then I guess we were done. It added an icon at Start > Accessories > VeraCrypt. Alternately, I could type veracrypt to make it go.
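The same unpack-and-run sequence could be done entirely in Terminal. A sketch; the filename patterns are illustrative and should match the actual download:

```shell
# Unpack the VeraCrypt tarball and launch the x64 GUI installer, which
# presents the same Install/Extract choices described above
install_veracrypt() {
  tar xjf veracrypt-*-setup.tar.bz2
  ./veracrypt-*-setup-gui-x64
}
```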

VeraCrypt reported partition sizes in some places that were inconsistent with sizes it reported in other places, and also inconsistent with sizes shown elsewhere in Linux (e.g., GParted). For instance, to create a partition that would appear with some (not full) consistency as a 400GiB (or, as reported in some programs, “GB”) partition, I used GParted to create a partition of 410000 MiB. (The partition was in NTFS format because I wanted the contents to be usable by Windows programs.) GParted reported that as a 400.39GiB partition. That little bit extra seemed necessary to make VeraCrypt see it as a 400GiB partition rather than 399GiB. (I didn’t have a formula for calculating the extra amount; I just rounded up from the 409600MiB value produced by a GiB-to-MiB converter.) Similarly, to create a partition that VeraCrypt would see as 300GiB, in GParted I entered 308000 MiB, and for 1.1TiB I entered 1154000 MiB.
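The underlying arithmetic is simple — 1 GiB = 1024 MiB — with a rounded-up margin added on top. A sketch of the conversion I was doing by hand:

```shell
# Exact GiB-to-MiB conversion; the value entered in GParted was then
# rounded up from this figure (e.g., 409600 became 410000)
gib_to_mib() { echo $(( $1 * 1024 )); }
gib_to_mib 400    # 409600
gib_to_mib 300    # 307200
```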

Now it was time to encrypt partitions. I would not be encrypting the system partition, both to improve performance and to avoid the risk of locking myself out of my own system. For reasons of performance, I would also not encrypt the VMs partition; and for reasons of access (i.e., to be able to restore drive images if things went wrong), I would not be encrypting the BACKROOM partition where I saved drive images. The following instructions thus apply only to those data partitions that I did want to encrypt.

I went to Create Volume > Create a volume within a partition/drive > Standard VeraCrypt volume > select the first partition that I wanted to encrypt (in this case, /dev/sda1). I told it I was going to create files larger than 4GB. I chose Quick Format. A VeraCrypt User Guide (2015, p. 25) explained that Quick Format would be less secure because, “until the whole volume has been filled with files, it may be possible to tell how much data it contains (if the space was not filled with random data beforehand).” But I was not using the hidden volume setting; I didn’t see why I should care if someone knew I had a bunch of data. I left the setting at “I will mount the volume on other platforms” because it was possible I would install a Windows dual boot at some point. That one brought up a notice telling me that I might have to install additional drivers to achieve that.

When I was finished with volume creation, I proceeded to mount the partitions. In this regard, VeraCrypt in Linux was a little different from VeraCrypt in Windows: there were no drive letters. There were just slots. So I selected Slot 1 and then went to Select Device > /dev/sda1. That way, the first mounted volume would also be the first VeraCrypt volume on the disk. Easier to keep track of. Then Mount > enter password. Volumes mounted a lot faster in Linux than in Windows. It was OK to exit VeraCrypt after the volumes were mounted: it wasn’t needed to keep them mounted.
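VeraCrypt also has a text-mode interface, so the same mount could be scripted. A sketch, assuming the flags of the veracrypt command-line client; the device and mount point are examples:

```shell
# Mount /dev/sda1 into slot 1 (prompts for the password), and dismount all
mount_first_volume() {
  sudo veracrypt --text --mount /dev/sda1 /media/veracrypt1 --slot=1
}
dismount_all() {
  sudo veracrypt --text --dismount
}
```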

VeraCrypt worked the first time, to connect my drives and allow me to see their contents. I attempted to copy files from a full, encrypted drive in an external USB dock to an empty, internal HDD partition. The copying process ran very, very slowly: after hours, only a small number of files had copied. I stopped the process and told VeraCrypt to Dismount All. It failed to do so. Eventually, I rebooted anyway. When I tried to use VeraCrypt to open a drive after rebooting, it gave me an error:

$MFTMirr does not match $MFT (record 0).

Failed to mount /dev/mapper/veracrypt1: Input/output error

NTFS is either inconsistent, or there is a hardware fault, or it’s a SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows then reboot into Windows twice. The usage of the /f parameter is very important!

It seemed that my decision to reboot while VeraCrypt still had partitions mounted had caused this error. I knew I wasn’t supposed to do that, but I had grown impatient when it failed to dismount. Besides, I hadn’t expected dire consequences: VeraCrypt in Windows did not produce those kinds of problems when the system crashed or, for whatever reason, VeraCrypt was shut down abruptly.

I wound up using GParted to recreate, and VeraCrypt to re-encrypt, the partitions on that HDD. Then the same thing happened again: attempts to copy files proceeded very slowly; VeraCrypt wouldn’t dismount partitions; I finally force-rebooted; and VeraCrypt again produced the error just quoted.

There was also the problem that, contrary to the error message just quoted, there did not seem to be a way to run chkdsk in Linux. Since it was an internal HDD, and I did not want to take the back panel off my laptop every time this happened, there was not an easy option of plugging the HDD into an external dock and running chkdsk from the Windows computer. A search produced only a half-dozen relevant English-language hits. Of those, only a few offered clear and distinct suggestions. Markito (2010) and Ubuntu Forums (2013) recommended sudo apt-get install ntfsprogs (preceded, of course, by an update command) and then sudo ntfsfix /dev/sda1 (or whatever the partition is). AskUbuntu (2017) seemed to say ntfsprogs was obsolete; in the sequence just given, it should be replaced with ntfs-3g. That discussion also raised the possibility of (1) trying a different USB port, if this had been an external drive, and (2) physical drive damage.
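Pulled together, that suggested repair sequence looks like the following sketch; the device name is a placeholder for the affected partition:

```shell
# ntfsfix ships with ntfs-3g on current systems (ntfsprogs being obsolete);
# it repairs some common NTFS inconsistencies but is no substitute for chkdsk
repair_ntfs() {
  local dev="$1"    # e.g. /dev/sda1
  sudo apt-get update
  sudo apt-get install -y ntfs-3g
  sudo ntfsfix "$dev"
}
```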

I wasn’t sure whether HDD damage, if any, would have preceded or been caused by my Linux tinkering. To check on that, for this Seagate HDD, I booted a YUMI drive containing Seagate SeaTools for DOS (2011). Unfortunately, at present the old file from which I created that option (SeagateSeaToolsDOS223ALL.ISO) was the only non-Windows tool Seagate seemed to offer. It did not recognize this 2TB HDD. I tried the Windows 7 Repair Disk on the YUMI. In that, the mouse wasn’t working, but a combination of tab and Alt-[letter] and Enter keys got me to Command Prompt. I tried chkdsk /f. For all partitions other than X (presumably the YUMI’s boot partition), if it saw a partition at all, it said, “The volume does not contain a recognized file system” — which was not surprising, since the partitions were encrypted. I rebooted into Linux and tried the ntfs-3g and ntfsfix sequence. Terminal said ntfs-3g was already installed. The ntfsfix command produced a sequence of errors:

Mounting volume… NTFS signature is missing.
FAILED
Attempting to correct errors… NTFS signature is missing.
FAILED
Failed to startup volume: invalid argument
NTFS signature is missing
Trying the alternate boot sector
Unrecoverable error
Volume is corrupt. You should run chkdsk.

I was wondering whether it was indeed a case of physical HDD damage. A different search led to Bleeping Computer (2016) and Linux Mint Forum (2016) suggestions for programs run within Mint (e.g., gsmartcontrol, e2fsck, testdisk, diskscan) and also from USB (e.g., MiniTool Partition Wizard, Ultimate Boot CD, GSmartControl on Parted Magic). I found that Start > Accessories > Disks (i.e., gnome-disks) offered only a benchmark option for the HDD. It seemed to be benchmarking an encrypted partition. I was not sure how it was managing to read from and write to that partition, but that’s what it reported it was doing. I installed and ran gsmartcontrol. It said the HDD model was “Unknown” and the SMART usage was “Unsupported.” I tried MiniTool Partition Wizard. It wouldn’t boot from the YUMI drive; I had to install it on a single-purpose USB drive (using Rufus). In MiniTool, I right-clicked on the first VeraCrypt partition > Surface Test. That partition was 400GB. After running for three minutes, MiniTool was estimating more than 45 hours to complete. I canceled that and, from the YUMI, tried Ultimate Boot CD > HDD > Diagnosis. It listed multiple HDD diagnosis tools. Most were for specific models (e.g., Samsung) or for FAT drives. ViVARD appeared to be one of the few exceptions, but it too seemed to offer only a surface test. As another option on my YUMI drive, I tried Parted Magic (32-bit from RAM) > Disk Health > double-click on the HDD > Perform Tests tab > Short Self-Test > Execute. Results: “Completed without error.” So maybe there wasn’t a problem of HDD damage after all.

I wondered whether the problem was that, in both of these problematic instances, I had started by trying to use Beyond Compare (below) to copy a few files from the external HDD to the internal HDD. Had Beyond Compare confused the system and/or VeraCrypt? Using GParted, I deleted and recreated one of the partitions on the internal HDD. Before encrypting it with VeraCrypt, I tried copying and pasting files to it using Nemo. That worked just fine. I canceled the transfer and tried to delete what had already been copied. I got, “Error while deleting.” The reason: “Error removing file: Directory not empty.” Apparently Nemo would not let me delete a folder containing files? Oddly, right-click > Delete did not work, but selecting the folder and hitting the Del key did. That error aside, I encrypted the partition in VeraCrypt and again tried the simple copy-and-paste process in Nemo. That ran quickly at first, but then slowed significantly.

Collectively, these experiences prompted me to conclude that my NTFS partitions might be better accessed by using VeraCrypt in the Windows guest system in a VM, rather than by using VeraCrypt in the Linux Mint host. The latter would still be an option, in a pinch, but at this point I was interested in seeing performance and reliability in the Windows guest.

Beyond Compare (BC)

In Windows, for some years, I had found BC and DoubleKiller to be very reliable and useful tools for a cautious backup scheme requiring some manual oversight. My version 3 (32-bit) license was good for both the Windows and Linux versions. I decided to try the Linux version. I downloaded the Debian package (bcompare-3.3.13.18981_i386.deb) and followed the installation instructions, which offered GUI or Terminal methods. I chose the GUI. That entailed simply double-clicking on the .deb package > Install Package. Then I ran it from Start > Programming > Beyond Compare. It took a moment to start up. I clicked its Enter Key button and pasted my license key into the space. Then I went to my Windows computer > Beyond Compare > Tools > Export Settings > Mark All (i.e., the default in each of several screens) > export a .bcpkg file > copy to the Linux machine > click the Import button in BC on the Linux machine > use the defaults (i.e., don’t select everything). Using BC with encrypted partitions required installation of VeraCrypt (above).

I found, unfortunately, that BC 3 for Linux ran very slowly and seemed somewhat unstable. I could see that BC on the Linux laptop was barely causing the disk access light to flicker on the external USB dock, whereas BC on the Windows desktop would keep that light very busy.

Anyway, I exported the revised settings, closed BC, typed sudo apt-get remove bcompare:i386, and then tried downloading and installing BC 4 x64 (bcompare-4.2.4.22795_amd64.deb). BC4 looked like it had been more fully developed, but the slowness was the same. Just as before, almost nothing was happening.

No doubt BC would function well with files on Linux (e.g., ext4) partitions. But as with VeraCrypt (above), BC was performing unacceptably against my NTFS partitions on the Linux host. Here, again, I was interested in seeing whether NTFS partitions would be better accessed by running the software in a Windows VM.

Double Commander

Like numerous other users (cited in at least 1 2 sites), I found that — aside from being rather kludgy, compared to my preferred Q-Dir file manager in Windows — the default Nemo file manager was not reliable. The first really bothersome problem I found was that sometimes it would not respond to cut-and-paste commands (i.e., right-clicking or using the Ctrl-X / Ctrl-V combination). But perhaps it was responding, in its own way: it appeared to be continuing file copy processes, at an extremely slow rate, many hours after its GUI was closed and the file copying appeared to be at an end. This behavior prompted me to look for an alternative.

It would apparently be a bad idea to try to uninstall Nemo. Multiple sources warned that the file manager was an integral part of the desktop manager (i.e., Cinnamon, for this version of Linux Mint) and that removing or replacing it would introduce instabilities and bugs. Besides, a search led to a Slant article that suggested the Linux alternatives were not especially impressive. Leaving aside file managers that were KDE-oriented, keyboard-based, reportedly slow and/or difficult to learn, the most likely option seemed to be Double Commander. (See also AlternativeTo and ELTP.) My own (1 2) previous explorations likewise pointed toward Double Commander (DC) or Midnight Commander. A search led to a SourceForge page indicating that DC was still in beta status but was being actively developed; to a SourceForge homepage; and to an active official forum.

To install DC on some versions of Linux, ELinuxBook (Sahu, 2018) recommended certain installation commands. (Contrary to his advice, I would have wanted the -gtk version, not the KDE-oriented -qt version.) But I saw that, in Mint, doublecmd-gtk was already listed in Synaptic, so I just added it to the list of programs to add with a single install command (above).

Once installed, I was able to run DC through Start > Accessories > Double Commander, and also via doublecmd. I was pleased to see that I could run multiple sessions of DC simultaneously. I worked my way through DC’s menu > Configuration > Options, revising and experimenting with possibilities along the way, mostly in the Layout area. My changes were saved in the location specified in Configuration > Options > Configuration. By default, that location was in a hidden folder in my home directory (i.e., /home/ray/.config/doublecmd).

I posted a list of features that I most immediately missed, when comparing DC to Q-Dir. The biggest one: I didn’t see an option to set DC to use only one panel. But I was able to achieve nearly the same thing by using menu > Show > Horizontal Panels Mode, then dragging the top edge of the lower panel to the bottom of the screen. I was pleased to see that DC remembered that configuration when I restarted it.

DC had a strange concept of file copying. When I right-clicked on an item and selected Copy, it acted like it was immediately going to work, copying the selected item to some unknown location, not giving me a chance to designate the location I preferred. I wasn’t sure whether it was actually going ahead with a copy procedure or, if so, where that stuff might be getting copied to. Generally, DC seemed awkward for purposes of file selecting, cutting, and pasting, and it offered no right-click options in its Tree panel. I hoped I would adapt and would find workarounds for the things it presently seemed unable to do. Alternately, I was interested in reports that Q-Dir would run well via Wine (below). Another post presents a slightly later and different look at file manager alternatives.

NTFS and Other File Systems

It was clear that I was going to use Windows tools in a VM to handle my data files: I had established that Linux did not offer competitive tools for my purposes. (See the discussion of software installed in a Windows VM, below.) Therefore, I was going to need the VM-accessible partitions to use a Windows-friendly format. That led to additional wrinkles.

Conclusion: Windows File Handling Tools in Linux

There seemed to be at least three reasons for the dramatically worse performance I observed when copying data from the external USB HDD to the internal HDD in this Linux laptop, as compared to similar file copying to the Windows desktop computer:

  • As just discussed, the use of Linux, with an imperfect NTFS-3G implementation, apparently imposed a greater load on the CPU than Windows with its proprietary NTFS.
  • On both the laptop and desktop, VeraCrypt encryption of each copied file placed an additional burden on the CPU.
  • PassMark rated the desktop’s Intel Core i7-4790 CPU as more than twice as fast as the laptop’s Core i5-7200U.

I hoped this laptop would prove capable of video editing and other heavy tasks. To achieve that, it did appear that I would need to be performance-minded: if I were going to use Linux software for such projects, I would want to give it an efficient Linux-compatible file system (e.g., ext4), and likewise I would want to use NTFS for Windows. Even if a native Linux installation could not make the most of an NTFS partition, presumably a Windows VM could.

It seemed unlikely that I would stop using VeraCrypt. I had researched competing programs in some detail (2015) and had not subsequently heard of any better alternative. I didn’t have any reason to think that VeraCrypt for Linux was any less efficient than VeraCrypt for Windows, but I did have this recurrent experience of the former behaving awkwardly by the standards of the latter. That is, over several days of struggling to create partitions, move large amounts of files, troubleshoot partitions, re-create partitions, and so forth, I had seen that (possibly because of NTFS issues) VeraCrypt in Linux might recover poorly from a crash, to the point of losing the contents of a partition. This was immediately and dramatically different from my experience in Windows on the desktop.

I had seen an equally dramatic contrast, between the desktop and the laptop, in the performance of Beyond Compare. In my Windows usage, it had performed well, and reliably, for years. In Linux, I couldn’t even get it to complete its first task. Finally, Double Commander was the best of the Linux file managers, and it wasn’t bad; but like Beyond Compare and Nemo, it was simply not moving files rapidly, at least not when I was copying or moving the contents of large partitions.

Taken together, these results consistently supported the conclusion that, on this laptop at least, most of the file-intensive work would be happening in a Windows VM, mostly using NTFS partitions. For purposes of the three programs discussed in this section, the conclusions were as follows:

  • I wouldn’t want VeraCrypt (Linux) to access those partitions; I would access them exclusively through VeraCrypt running in the VM. The Linux and Windows (VM) installations could swap files, as needed, through the unencrypted BACKROOM (NTFS) partition.
  • While I might have some use for Beyond Compare for Linux to work with ext4 partitions, I would back up the NTFS partitions using Beyond Compare for Windows in the VM.
  • For a file manager, it was possible that Nemo would work better when it wasn’t working with NTFS partitions. There was also the possibility of running Q-Dir in Wine on Linux. I would be able to run Q-Dir in the VM.

I was not yet sure that things would work out that way, but those did appear to be the conclusions warranted by the experiences described in this section.

Wine Programs

As described in another post, in summer 2016 I tested a number of Windows programs, to see whether I could get them to run on Linux via Wine. Some ran; some didn’t. But now, in 2018, it was not so much a question of whether I could get a certain program to run. Based on the findings reported in this section (above), the more important question seemed to be, Does this program involve working with data files stored on an NTFS partition? If so, it tentatively seemed I would be best advised to run that Windows program in a Windows VM, not via Wine on the Linux host.

That conclusion severely limited the number of Windows programs that I expected to run via Wine. The question now seemed to be, Is this Windows program better than the Linux alternatives, for purposes of doing the work that I can do within the Linux host? That question seemed to be limited largely to my work with Linux files. At present, there was not much of that. So until I saw a need, in Linux, for something that a Windows program could do better, it seemed the Wine installation would largely go unused. But there were a few exceptions.

Olympus Digital Wave Player (DWP)

It may have been possible to use DWP, running in a Windows VM, to detect an Olympus digital voice recorder (DVR) and upload from it. I did not choose that as my first option because, as detailed in another post, the quality of recordings uploaded from the DVR would be reduced when uploading into DWP on Windows, but not when uploading in Linux.

As that post further relates, in Linux I had found that the odvr project (presumably short for Olympus Digital Voice Recorder) produced a file named odvr_0.1.4.1_i386.deb, which I downloaded and ran via double-click. I connected the Olympus DVR device via mini USB cable and used these commands:

  • sudo odvr -h for help;
  • sudo odvr -l (that’s a lowercase L, not a one) to see the files on the recorder;
  • sudo odvr -e to upload from the DVR to the folder where the Terminal cursor was presently located;
  • sudo odvr -r to force a reset if the DVR gets hung, optionally combined with other options (e.g., sudo odvr -r -l to reset and then list recordings) (note: hitting Enter would also get the software moving again after some hangups);
  • sudo odvr -c to delete recordings from the DVR.

I wound up using tee to produce a log of the files being uploaded. The best set of commands for my purposes appeared to be these:

sudo odvr -l -e | tee -a DVR.log
sudo odvr -c

As that earlier post describes, I worked out a technique to translate the contents of the resulting DVR.log into a set of commands to rename the uploaded recordings, so that their filenames would capture the date and time of recording.
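The tee -a pattern above is easy to verify with ordinary commands; here echo stands in for the odvr calls, and the filenames are made up:

```shell
# tee -a shows each line on screen and appends it to the log at the same time
log=/tmp/DVR.log
rm -f "$log"
echo "recording01.wav" | tee -a "$log"
echo "recording02.wav" | tee -a "$log"
wc -l < "$log"    # 2
```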

Other Programs

I had previously worked through options for the installation of IrfanView in Linux Mint 17 using Wine. A search led to sources (including my own post from a few months earlier) indicating that, with Wine installed, IrfanView had potential. Similarly, Q-Dir’s developer reported that Q-Dir worked well under Wine. I did not explore these programs further at this point, however, as I wasn’t sure whether I would need them in Linux. Instead, I turned to the development of Windows VMs.

Setting Up Windows VMs in VirtualBox

I wanted to use KVM as my hypervisor in Linux. Sources indicated that it would perform much faster. But getting KVM to work as desired became a separate project. Meanwhile, I needed a working VM in order to proceed with this basic task of getting Linux Mint running on the laptop. So, for now, I went with the more user-friendly VirtualBox, in which I had some previous experience. The focus here was just on getting VMs set up; I did not work out a detailed list of which programs I would install in which VM.

A previous post explored methods of installing VirtualBox. One method was summarized in a tutorial by PC Steps (Kyritsis, 2017), which for my purposes was as follows:

  • sudo nano /etc/apt/sources.list > look at the name of the repository: in this case, xenial
  • Download the relevant version of VirtualBox: in this case, VirtualBox 5.2.10 for Xenial (for Ubuntu 16.04, because Linux Mint conservatively lagged the bleeding edge) > AMD64 (because I had a 64-bit machine).
  • Download the corresponding VirtualBox Extension Pack.
  • Double-click on the downloaded .deb file and proceed through installation.
  • Mount the VMs partition on the Linux laptop (using Start > Accessories > Disks). That partition was now located at /media/ray/VMs.

This approach might not be properly updated. Other methods explored in that previous post would probably be better. But for now, until it produced problems or until KVM took over, maybe this would suffice.

Now I could start using VirtualBox. I went to Start > Administration > Oracle VM VirtualBox (or type VirtualBox). That opened VirtualBox Manager (i.e., the main VirtualBox screen). There, I went to menu > File > Preferences > General > Default Machine Folder > drop-down arrow > Other > select the SSD partition where I wanted to save my VMs > Choose. Next, in that same Preferences dialog, I went to Extensions > right side, next to the Version heading > click the folder with the plus symbol on it > navigate to /home/ray/Desktop > select the downloaded .vbox-extpack Extension Pack file > Open > Install > close Preferences.

A Windows XP VM

Now I could configure my Windows VM. I decided to start with a Windows XP VM. I did not expect to connect this VM to the Internet, and therefore would not be installing antivirus software or worrying about updates.

As described in an earlier post, I had previously created a Windows XP VirtualBox VM, named it WinXP x32 SP3 Basic, and stored it in a folder of that same name. That folder contained a .vbox file and a .vdi file with that name, as well as a Snapshots subfolder. I copied that WinXP x32 SP3 Basic folder, comprising about 1.7GB, to the VMs partition. Then I went to VirtualBox Manager > Machine > Add > navigate to that folder (at /media/ray/VMs, where “ray” was my username) > open the .vbox file. Now the WinXP x32 SP3 Basic VM was listed in VirtualBox Manager.

The previous post describes the steps I had already taken to configure the WinXP x32 SP3 Basic VM. I still had to make some changes to make it work in this Linux environment. First, with the VM selected in VirtualBox Manager, I went into Settings > General > Advanced tab > make sure the Snapshots folder was in the desired subfolder of this VM.

Next, in Settings > Storage, I had a yellow triangle containing a red exclamation mark next to “WinXP x32 SP3 Basic.vdi.” I proceeded to fool around with this, producing a selection of error messages discussed in another post. The core problem seemed to be that, in steps not detailed here, I had briefly tinkered with the VM while it was located in another folder, including renaming the .vdi file and/or the VM itself, and now VirtualBox could not find the necessary .vdi file. The solution was to delete this VM (in VirtualBox, and then in the Linux file manager) and start over with the steps described here. On this second try, my notes weren’t clear on whether the .vdi file was already listed in this Storage area, or whether I had to add it by going to Controller: IDE > right-click > Add Hard Disk.

The last Setting that I had to revise now was at Settings > Shared Folders > right-click in the open space > Add Shared Folder > Folder Path > drop-down > Other > navigate to /media/veracrypt1 (or whatever the available NTFS partitions are encrypted/mounted as) > select > Choose > OK. Repeat for each NTFS partition to be accessed from this VM. If I did this (or revisited it) when the VM was running, I would have the option to make these settings permanent, but Shared Folders added to a running VM seemed to be effective only after reboot. As advised, I did not set the partitions to auto-mount. In the running VM, the Shared Folders were visible in Windows Explorer > My Network Places > Entire Network > VirtualBox Shared Folders\Vboxsvr. For each one, I right-clicked > Map Network Drive > assign a desired drive letter.

That took care of the changes that I needed to make in VirtualBox. Now it was time to start the VM and make changes in Windows XP. In VirtualBox Manager, I selected the VM and clicked Start. Windows XP started up. It said that I would have to activate it before I could log on. That involved what some called an infinite product activation loop. I worked out a solution to that, as described in my post on activating Windows XP.

Once I was past that, with the VM running, I went to the menu at the top of that VirtualBox window > Devices > Insert Guest Additions CD image. It didn’t show any response, so I went to menu > Machine > Settings > Storage. There, I saw the Guest Additions had been added.

The previous post listed the changes I had already made to the Windows XP installation in that VM. The only additional steps I took inside the VM, at this point, were as follows:

  • Control Panel > Display > Appearance tab > Windows Classic style > Color Scheme > Desert. Also, while I was there: Desktop tab > Background: Autumn.
  • Start > Run > diskmgmt.msc > right-click on Guest Additions (i.e., Vbox_GAs) disc > Change Drive Letter and Paths > select some obscure drive letter (e.g., W), so the Guest Additions wouldn’t occupy the D slot and confuse my NTFS data file scheme. Likewise change the CD drive to Y:.
  • Start > Run > cmd > chkdsk /f > reboot.
  • Start > Run > cmd > sfc /scannow. This produced a message:

Files that are required for Windows to run properly must be copied to the DLL Cache.

Insert your Windows XP Professional Service Pack 3 CD now.

The appearance of this message signaled that, when setting up this VM, I had apparently forgotten the old method of ensuring that WinXP setup files would be included in the installation, so that this message would not appear. I wasn’t sure whether I could get the VM to see the laptop’s CD drive — which was just as well, because the laptop didn’t actually have one. One possible solution was to use something like Virtual CloneDrive to mount a WinXP ISO. I downloaded VCD 5.5 (see also OldApps), copied it into the VM, along with a WinXP ISO (which I had probably created using ImgBurn), via a shared folder (above), and installed it. Then I right-clicked on the WinXP ISO > Mount. VCD assigned it as drive D. That was apparently sufficient: it looked like sfc /scannow detected it automatically and went to work.

I considered a registry edit to change the default location of the Start menu, so as to use my customized, relocated Start Menu, a copy of which was now located on the BACKROOM drive in this laptop. This would have the advantage of giving me immediate access to the many portable programs installed on that Start Menu, some of which would run in Windows XP. But I had recently discovered that portables on that Start Menu would run in a Windows 7 VM. (When the Win7 VM was running on a Windows host, it could even run some Windows programs installed on the host.) I wanted to keep that menu mostly for Windows 7 usage. I decided instead just to add a WinXP desktop shortcut to that Start Menu folder. If I found myself using the contents of the BACKROOM Start Menu frequently, I could put a copy of it into the WinXP VM, and use various tools to pare it down, as described in the other post.

With the Windows XP VM in place, I could explore the question (above) of whether the extraordinarily slow file copying that I was experiencing in Linux, with my NTFS partitions, was due to CPU overload regardless of operating system and file system, or whether I did indeed need Windows to move files effectively among NTFS partitions. To test this, all I needed was to try moving some files, using Windows Explorer in the WinXP VM. But there was a problem: the WinXP VM did not see the 2.7TB external HDD from which I was copying files onto this laptop. Microsoft said that, to address a device of more than 2TB, I needed to use Windows Vista or later — not XP. So it was time to proceed to the next step: a Windows 7 VM.

A Windows 7 VM

I expected the WinXP VM (above) to be useful at times, for some purposes. But Win7 would be my real production machine. As with Windows XP, I had already set up a Win7 VirtualBox VM, as detailed in another post. The mission here was, again, to bring it up to speed in this Linux environment.

With the VM’s folder copied from the desktop computer to the SSD on this laptop, I went into VirtualBox Manager > menu > Machine > Add > navigate to the VM’s .vbox file > Open. That added the VM to the list in VirtualBox Manager. I had already configured the VirtualBox Manager > menu > File > Preferences (above), so the next step was to select the VM > Settings > make changes as noted in the preceding section.

As I had repeatedly experienced, VirtualBox did not function intuitively, from the perspective of the novice user, when I went into VirtualBox Manager > menu > Settings > Storage > Storage Devices. Its first problem, there, was that it did not remember relative references, so as to associate the proper .vdi file with the VM. Its second problem was that it did not remove unwanted media from its Virtual Media Manager (accessed via VirtualBox Manager > menu > File) after they had been removed from the list of Storage Devices. Its third problem was that, once the incorrect .vdi was removed from that Storage Devices area, it would not allow me to add the correct .vdi, incorrectly insisting instead that the .vdi already existed. It didn’t. There was only one of them, and that was the one I was trying to add. To resolve that problem, I went into Virtual Media Manager > right-click on the unwanted .vdi (marked with a yellow warning) > Release (if necessary) > Remove. Then I could add the correct .vdi.

Once that was done, I was able to power up the Win7 VM. I added Guest Additions and mapped network drives as described above, including the encrypted NTFS partitions already mounted in VeraCrypt on Linux. To see those partitions listed in Windows Explorer > Network, so that I could map them, without exposing the VM to the Internet (see above), the VirtualBox manual seemed to recommend host-only networking. To achieve that, I went into VirtualBox Manager > File > Host Network Manager > Create. Then click Machine Tools to return to VirtualBox Manager > select VM > Settings > Network > Enable Network Adapter > Attached to: Host-Only Adapter. I started the VM > Control Panel > Network and Sharing Center > Change advanced sharing settings > expand the Public (current profile) section > turn on Network Discovery > Save changes. Then I went into Windows Explorer > Network > optionally, expand > right-click > Map network drive.
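The host-side half of that setup also has a VBoxManage equivalent. A hedged sketch (the VM name is an example; vboxnet0 is the usual name VirtualBox gives the first host-only interface it creates; the vbox helper only echoes each command):

```shell
#!/bin/sh
# Sketch of the same host-only networking setup via VBoxManage.
# "vbox" only echoes each command; change it to run VBoxManage for real.
vbox() { echo VBoxManage "$@"; }

vbox hostonlyif create                    # File > Host Network Manager > Create
vbox modifyvm "Win7" --nic1 hostonly --hostonlyadapter1 vboxnet0
```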

At this point, I went back through the tweaks listed and discussed in two previous posts. Some had already been implemented; some hadn’t, or had reverted during the transition. That included resolution (Control Panel > Display > Adjust resolution) and activation. Control Panel > System said, at the bottom, that I had three days to activate. A previous post contained further information on methods of activation. At the moment, I didn’t pursue the option to prune a copy of my customized Start Menu (as described in another post) so as to bring its portable Windows programs into the VM.

A Windows 10 VM

At its best, Windows 10 was an improvement over Windows 7. Since I did not intend to run antivirus software on my WinXP or Win7 VMs, a Win10 VM would be the only one that could go online with some safety. Given the availability of a relatively safe Linux host system, I would not expect to need that aspect of the Windows 10 VM often, but I might need it sometimes, for some Windows software. I certainly had beloved tools that would run in Windows XP and/or 7 but not 10; likewise, there would probably be programs that would run in Win10 but not in those earlier versions. I was still leery of Windows 10 for its remarkable ability to seize control of systems, but I hoped that would not be an issue when running Win10 in a Linux VM.

The question then was, how could I get a Win10 VM? There seemed to be several routes, discussed at greater length in a prior post.

There could also be other funky ways of running Windows 10: run, remotely, a Win10 VM installed on another computer; or use a VM to boot Windows to Go installed on a live Windows USB drive. Most if not all of the options just listed were also available on Windows 7.

I had spent a bunch of time, with mixed results, on efforts to get a good working Windows VM installation through conversion and other techniques just listed, and was thus inclined to focus on relatively straightforward methods. I wasn’t inclined to spend $200, and perhaps I wouldn’t need to: back when the upgrade from Windows 7 to Windows 10 was free, I had installed Win7 in a VM and upgraded it. So now I had a basic Win10 VM from a few years earlier. If that didn’t work, I also had basic and tweaked Acronis .tib backups of physical installations, along with some Windows 10 Media Creation Tool .iso files that might work, as well as an .iso obtained from Acer (for this laptop) and another Win10 .iso that looked like it might have been created using ImgBurn from a physical installation.

So I started with that basic Windows 10 VM. I made its VirtualBox settings like those of the Windows 7 VM (above), except that I only gave it 2GB RAM, and I gave it a NAT network adapter. I set up only one non-encrypted Shared Folder, as a route into and out of this VM: I wasn’t sure it would need any others. (This Shared Folder was for the BACKROOM drive, where my customized Start Menu (above) was located.) In Settings > Storage, we had problems. I went to VirtualBox Manager > File > Virtual Media Manager > right-click on the problematic .vdi > Release > right-click again > Remove. We weren’t done: Settings > Storage also reported a problem with the Windows 10 Pro .iso file that apparently I had used to create this VM. I guessed which .iso that might be. I right-clicked to remove the dead one, and then clicked on the disc icon to add the proper .iso. I also had to add the correct .vdi as a virtual HDD. With that, the VM started and ran.
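That Virtual Media Manager surgery (releasing stale media, then reattaching the right ones) can be expressed in VBoxManage terms. A hedged sketch: the controller names, ports, and paths below are examples and vary per VM, and the vbox helper only echoes each command (change it to run VBoxManage for real):

```shell
#!/bin/sh
# Sketch of the same media cleanup and reattachment via VBoxManage.
# "vbox" only echoes each command; change it to run VBoxManage for real.
vbox() { echo VBoxManage "$@"; }

# Drop the stale registrations (Release/Remove in Virtual Media Manager)
vbox closemedium disk "/old/location/Win10.vdi"
vbox closemedium dvd  "/old/location/Win10Pro.iso"

# Attach the correct .vdi and .iso to the VM's storage controllers
vbox storageattach "Win10" --storagectl "SATA" --port 0 --device 0 \
  --type hdd --medium "/media/ray/VMs/Win10/Win10.vdi"
vbox storageattach "Win10" --storagectl "IDE" --port 1 --device 0 \
  --type dvddrive --medium "/media/ray/VMs/Win10Pro.iso"
```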

With the VM running, I went to VirtualBox menu > Devices > Insert Guest Additions CD Image. Then I went to menu > Machine > Settings > Storage to make sure that took. Then I approved the option, appearing in a pop-up window, to let the Win10 VM be discoverable on its network.

I went into Settings (i.e., Win-I) and saw, “Windows isn’t activated. Activate Windows now.” I clicked on that link and entered the product key. That worked. To verify which version of Windows 10 I was running, I went to Settings > System > About. It confirmed 64-bit Win10 version 1511. (As PCWorld explained, 32-bit Windows would have effective use of less than 4GB RAM, while 64-bit Windows could use much more.) I went to Settings > Update & Security and saw that it was in the process of installing updates. These were major updates, accumulating over a period of two years or more, to bring us up to version 1803. (That proceeded via an intermediate update to 1709.) So that took some hours, and several reboots.

At this point, I was not inclined to spend much time tweaking the Windows 10 VM. If and when I did, I would be following the guidance of my earlier post, in which I had worked through most of the Windows 10 tweaks I would want. For the moment, I worked through most items on the shorter list of adjustments provided in another post. One addition to that short list, from the longer one: Win10RegEdit.reg, an experimental registry edit file that made a number of changes with one stroke.

Tweaks

This section lists various things that I changed as I went along. Some were recommended by the ELTP; others came up elsewhere. I didn’t make all of these changes at this point in the process; this is a collection of notes accumulated throughout the installation project. Some later discussions may not reflect some of these changes. Note that some of these changes would take effect only after a reboot. The changes I made were as follows. Some depend on this particular sequence of steps; some depend upon installation of programs named above.

Disable Touchpad

There seemed to be several approaches:

  • Disable touchpad manually. It seemed that at least some laptops had a built-in toggle. Mine was Fn-F7, as indicated by an icon on the F7 key. It was also reportedly possible to map a function key for the same purpose: Start > Preferences > System Settings > Hardware > Keyboard > Shortcuts > System > Hardware > Toggle Touchpad State > Fn-F7 (or other preferred key or key combination). That did not work for me. I posted a question on it.
  • Disable touchpad automatically when typing. Start > Preferences > Mouse and Touchpad > Touchpad tab > General > Disable touchpad while typing > On > Close. Then Start > Preferences > Startup Applications > Add > Custom Command > Name = Syndaemon, Command = syndaemon -i 1.0 -K -R -t, and Comment = Disable touchpad while typing, 1-second delay, only for tapping and scrolling. Startup delay = 10.
  • Disable touchpad automatically when a mouse is plugged in. A search led to another suggestion of options that did not appear in my version of Mint.
  • Use Touchpad Indicator (recommended by multiple sources) as follows:
sudo add-apt-repository ppa:atareao/atareao
sudo apt update
sudo apt install touchpad-indicator

I was a bit hesitant about that. In apparently boilerplate language noted on its Launchpad page, the atareao-team PPA was initially untrusted. Should I trust it? An AskUbuntu discussion offered the rather faint encouragement that “I have never used their PPA but searching around on the net gives a hint that they are a bit trusted and also recommended by some sites” — to which a commenter responded critically. At the moment, the preceding alternatives seemed to be working, so I postponed this. But if I did come back to it, the rest of the usage advice was to go to its icon in the system tray (to me, it looked a little like an oncoming bus, but I think it was supposed to represent a touchpad) > click > Preferences > Actions. The available options included all of the above.
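For completeness, there is also a more direct route that avoids the PPA question entirely: disabling the touchpad with xinput. This is a hedged sketch, assuming an X11 session and a touchpad whose device name contains “touchpad” (which covers the common Synaptics and ELAN names); it exits quietly if no such device is found:

```shell
#!/bin/sh
# Sketch: find the touchpad's xinput device id and disable it.
# parse_tp_id extracts the id from "xinput list" output on stdin.
parse_tp_id() { grep -i touchpad | head -n 1 | grep -o 'id=[0-9]*' | cut -d= -f2; }

id=$(xinput list 2>/dev/null | parse_tp_id)
if [ -n "$id" ]; then
  xinput disable "$id"   # "xinput enable <id>" turns it back on
fi
```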

Create Shortcuts and Edit the Start Menu

As described in another post, I had struggled to find a way to organize the Linux Start menu in my preferred style. The end of that post did provide some information on how to edit and arrange items in the standard Start menu.

For my purposes, however, the standard Linux Mint Start menu was too limited, and too limiting. Instead, as described in that post, I decided to create a new Start menu on another drive, look in the relevant folders to see what shortcuts this system had, copy those shortcuts to that drive, and arrange those copies there.

Of course, that approach would only work where this system already had shortcuts (called “launchers” in Linux) in the folder where I was looking. In some cases, I had to look elsewhere to find a launcher. In the case of PeaZip, for instance, as described above, I found the shortcut in the folder where the program had been installed (i.e., /usr/local/share/PeaZip/FreeDesktop_integration).

In other cases, there was no launcher at all. In those cases, I had to create my own launcher. Once I had the launcher, of course, I could move or copy it to the desired folder, unless there was a permissions issue (above).

There were a couple of ways to create launchers. Xmodulo (Nanni, 2013) and Ubuntu Documentation described a manual method, using a text editor. Alternately, I could open the Create Launcher dialog by going to the computer’s desktop > right-click > Create a new launcher here (or by entering gnome-desktop-item-edit [target folder] --create-new, where [target folder] could be, for example, one of the standard locations for launchers listed above, such as ~/.local/share/applications). Within the easier right-click method, there would still be the question of what command the launcher was supposed to execute. That depended on what I was trying to do. The situations I encountered were as follows:

  • Open a folder. In this case, the command would indicate the program (i.e., Nemo) and then the folder, like this: nemo "/path name/" (case-specific: Nemo wouldn’t work; quotes only required if the path name contained spaces). When I finished, I accepted the option to add it to the Start menu.
  • Run a command. I wanted my Start menu to contain launchers for commands whose details I might not remember. For example, I wanted a shortcut to run the inxi -S command, for a brief report on a few essential bits of system information (inxi -Fxz for a longer report). To run that command from within a launcher, the kind advice in response to my question was to enter this in the launcher’s command space: gnome-terminal -x sh -c "inxi -S; read -p Hit_Enter_to_Close VAR" where the items before inxi -S apparently told Linux to read it as a shell command, and where the items after inxi -S paused the window so that it wouldn’t close as soon as the command ran.
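The manual, text-editor method mentioned above boils down to writing a small .desktop file. Here is a hedged sketch for the inxi launcher just described; the filename and Name/Comment text are my own choices, and ~/.local/share/applications is one of the standard launcher locations listed earlier:

```shell
#!/bin/sh
# Sketch: create the inxi launcher by hand (filename is an example).
dir="${XDG_DATA_HOME:-$HOME/.local/share}/applications"
mkdir -p "$dir"

cat > "$dir/sysinfo-inxi.desktop" <<'EOF'
[Desktop Entry]
Type=Application
Name=System Info (inxi)
Comment=Brief system report; window stays open until Enter is pressed
Exec=gnome-terminal -x sh -c "inxi -S; read -p Hit_Enter_to_Close VAR"
Terminal=false
Categories=Utility;
EOF
```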

Additional Tweaks

  • Language Support. In Start > Preferences > Languages > Language Support > Install / Remove Languages, Mint already had full English installations for the U.S. and for a boatload of minor places but, oddly, was missing some materials for the U.K. and Australia. I didn’t know if I needed to, but I selected those two and clicked Install Language Packs.
  • Enter Commands. ELTP recommended the following commands for various purposes. I entered them one at a time, allowing time for each to completely finish, and entering other information as needed.
sudo sed -i 's/false/true/g' /etc/apt/apt.conf.d/00recommends
sudo dpkg-reconfigure libdvd-pkg
sudo apt-mark hold libdvd-pkg libdvdcss2 libdvdcss-dev
sudo ufw enable
sudo passwd
sudo sed -i '/lid_options =/,+1 {s/("suspend", _("Suspend")),/&\n ("shutdown", _("Shutdown immediately")),/}' /usr/share/cinnamon/cinnamon-settings/modules/cs_power.py
  • Adjust Power Settings. In the system tray (i.e., bottom right corner of the Linux Mint desktop, i.e., right end of its taskbar), right-click on the power/battery icon > Configure > show percentage and time remaining. Also, after running the commands listed above, choose among the options available by left-clicking on the power/battery icon > Power Settings. Raw Computing (Waddilove, 2015) said that, at least in Windows, VM performance was improved by choosing a high power setting in the host, though of course that could drain the battery faster.
  • Make Terminal Opaque. I went into Terminal > menu > Edit > Profile Preferences > Colors tab > uncheck “Use transparent background.”
  • Disable the Switch User Option. Type dconf-editor > org > cinnamon > desktop > lockdown > check “disable user switching” > close (not “Set to default”).
  • Turn NumLock on Automatically. My laptop’s keyboard had a separate numeric keypad. Therefore, I typed sudo apt-get install numlockx. That didn’t persist after reboot, so I went to Start > Preferences > Startup Applications > Add > Custom command > Name = numlockx, Command = numlockx, Startup delay = 20 > Add.
  • Configure Nemo. I wanted the Nemo file manager to show the Details view by default. To set that, I went to Nemo > Edit > Preferences > Views tab > List View. Other tweaks: in the Behavior tab, I unchecked “Include a Delete command that bypasses Trash”; in the Display tab > change date format.
  • Change Wallpaper. I went to Start > Preferences > System Settings > Appearance > Backgrounds > Settings > Play backgrounds as a slideshow > On > 999 minutes > Close. I found advice on making my own, if I wanted to go that route.
  • Change Theme. A search led to various pages displaying pictures of desktop themes for Linux Mint. Some were very pretty. I suspected, though, that the best advice was to stick with default themes. Those were in Start > Preferences > System Settings > Appearance > Themes > Add/Remove. It asked if I wanted to update the cache. After doing so, I saw many themes. Had I somehow customized this list, or were these all really standard with Linux Mint? My searches weren’t answering that question. Well, since they were here, I thought I’d try one. The thumbnails were tiny, but I had seen Thunderbolt during my searching, so I tried that. Clicking on the arrow apparently just downloaded it. Now what? From that Add/Remove tab, I went back to the Themes tab > Desktop > click on Linux Mint > select Thunderbolt > Close. Later, though, I decided I preferred the default, now that I had pretty wallpaper.

Tweaks Not Used

  • Tweaks for Old Computers. For computers that were older or had less than 4GB RAM, recommendations included reducing swappiness, removing apt-xapian-index, and enabling zRam.
  • Turn Off Visual Effects. Visual effects were adjustable at several places in Start > Preferences. Those included Effects; System Settings > Preferences > General; and Window Tiling. Adjustments here would reportedly improve performance somewhat but might reduce some functionality.
  • Turn Off Some Startup Applications. Start > Preferences > Startup Applications. There were only a half-dozen, and I didn’t see any that I was sure I could turn off.
  • SSD Tweaks. ELTP advised against trying to align the SSD.
  • Antivirus and Malware Protection Software. There were sources for Linux antivirus software. It appeared, however, that most sources said it wasn’t necessary. ELTP contended that, in fact, antivirus software itself could provide an attack vector, insofar as it (unlike e.g., an MP3 player) was typically capable of opening every kind of file on the system, and still didn’t provide full protection — and imposed a performance hit as well.
  • The Eight Deadly Commands. How-To Geek kindly provided a list of commands that could wreck the system. I looked at them, in hopes I would remember what they looked like, so as to avoid entering them on anyone’s advice.
  • Gnome Tweaks (a/k/a Gnome Tweak Tool). I did install this, took a look, and decided it was not essential, except when users found themselves with a specialized problem or need for which it was the recommended solution. When installed, I found its icon at Start > Preferences > Tweak Tool, but it also ran via gnome-tweak-tool.
  • Alacarte. According to one entry in a Linux Mint Forum discussion, “You are not supposed to use alacarte, but the menu editor built-in to Cinnamon. The Cinnamon menu editor is an alacarte fork, yet, it will do its job, whereas genuine alacarte will not.” That was consistent with some complaints.
  • Other Tweaks. There seemed to be no end to the list of possible tweaks. I declined to list all possible tweaks here. I was particularly unlikely to include those whose primary benefit was reportedly a very minor performance improvement, those that carried a risk of instability, and those that seemed irrelevant to my purposes or my computer. Other users are advised to consult the foregoing sources for other tweaks of possible interest.

There were some tweaks that I wanted but couldn’t find. Those included:

  • Hit Enter Twice. I had to hit the Enter key twice, after typing my password (e.g., after the screensaver darkened the screen), to get it to register. A search did not lead immediately to a solution to this bug (or feature).
  • Set BIOS/UEFI to AHCI. This option did not seem to exist, within the very limited BIOS interface utility provided with this computer. Acer did offer a SATA AHCI driver for this unit, so AHCI was apparently enabled by default.

That was as far as this attempt went. By this point, I had encountered a number of issues and ideas that prompted me to start again, trying CentOS and then Linux Mint Xfce as my host operating system.
