
Trying to Use tar and rsync to Back Up a Linux System

This post describes my efforts to use tar, rsync, and gnome-disks to back up a Linux Mint Xfce installation. These efforts succeeded, in the sense that I could create backups; but these efforts failed, in the sense that (at least for tar and rsync) I did not arrive at a clear, working understanding of how to restore the GRUB bootloader, and therefore wound up with a nonbooting system. I provide these notes for any (including myself) who may find them useful for purposes of focusing in on the GRUB issue, before undertaking all the other issues that can accompany a Linux system backup effort.

Contents

Introduction
Considerations Favoring rsync and the Command Line
A Simple Case: Local Backup: Using rsync to Back Up the Linux SSD to the Internal HDD
Using rsync to Create a Single Backup File
Using rsync to Create a Mirror
Simple Scenario Revisited: Local Backup via tar
The Basic tar Command
Assembling the Desired tar Command
Excluding Directories from tar
A tar Script
Running It: A Scripting Environment
Restoring from the tar File
Trying to Restore GRUB
Restoring from the rsync Backup
GUI Backup via Disks Utility


Introduction

I was in the process of installing a Linux Mint Xfce (LMX) system. I wanted to make a backup. The question was, how should I do that?

That question seemed to include (a) what do I want to back up and (b) where do I want to back it up to? What I wanted to back up was just the LMX system installation, for now, though I might want to include more later. The LMX system was installed on — it was the sole user of — the laptop’s internal solid state drive (SSD). As for where, the computer in question was a laptop, so possible backup destinations would include the laptop’s internal hard disk drive (HDD), an external (USB) HDD, and the cloud. Later, when I had the networking sorted out, I would also want to be able to copy the backup to, or simply sync the installation with, my desktop computer.

The next question was, how? To answer that, there seemed to be many options. A search led to lists of supposedly great recent Linux backup solutions (by e.g., LinuxTechi, UbuntuPit). It seemed that much had changed within the last few years. For example, in my post from April 2016, I said, “Search results gave the impression, consistent with my previous browsing, that dd and Clonezilla were the most widely used Linux imaging tools.” Now, by contrast, dd was barely mentioned, and Clonezilla was largely overshadowed by more user-friendly GUI-based backup tools like Bacula, available in both corporate and (free) community versions (see comparison), the latter with its own SourceForge page (see Softpedia for the Windows version). Also, a search would remind me that EaseUS Disk Copy ($20) was able to clone Linux systems — which, in some cases, might be good enough. There was also Acronis Backup ($499/year).

Possibly the best of the lot was also the most accessible. In the LMX installation, I could go to Start > Accessories > Disks (or run gnome-disks) > select the drive > click the hamburger menu (i.e., the button with three parallel horizontal lines, near the upper-right corner of the Disks window) > Create Disk Image > follow further instructions. I didn’t notice that option until I was well into this project. It might have made this backup easier, but maybe that was OK. There were other considerations at work, as described below.

Note: commands in this post are rendered in italics.

Considerations Favoring rsync and the Command Line

Unexpectedly, however, I found myself leaning away from GUI and more toward command line solutions, particularly rsync. (Note: I could also have explored the dd command.) This inclination away from the GUI could have been a mere manifestation of a contrarian spirit, but I didn’t think so. I believed it resulted from several considerations:

  • In part, I think it was a reaction against the phenomenon just described, in light of recent experience with other software. Yes, these new tools had emerged upon the scene within the past two years — and where would they be two years from now? Would I set up a backup scheme, only to have to revise it later because the developer of my preferred backup software went out of business, or decided to start selling only a corporate version, or failed to keep up with evolving needs, or with changes in Linux? In that case, would I be able to restore an old backup if needed?
  • Now that I had been working with Linux for a while, I was gradually recovering a degree of comfort with the command line, and appreciating its advantages. Among other things, I was tired of endless mousing and clicking. It was appealing to visualize a command that would run, and do the job, and rarely require revision — and if it did require revision, all of its options and settings would be visible right there on the command line, not hidden off in some obscure checkbox beneath some random submenu.
  • I had been recurrently reminded, for some years now, that a tool like rsync could simply persevere — that is, it could continue to do the job — for a very long time, with very little change, and with many potentially useful options.
  • Rsync was already included in my LMX 18.3 installation. By contrast, there was the prospect that — with Bacula, for instance — I would have to add a lot of extra software. A quick test with sudo apt-get install --install-suggests bacula indicated that, with suggested packages, I would be adding 57 new packages (using 218MB of disk space) to this supposedly relatively minimal Linux installation. It would only be about a quarter of that without the suggested packages, but that was still something — and I wouldn’t know, until I got into it, whether I would need some of those suggested packages to make it work as desired.
  • As noted, I was now looking for a backup solution for the Linux system files. I expected to use a different backup solution for the NTFS data files. But it appeared that rsync would provide a fast and arguably secure means of backing up a VeraCrypt container. In other words, it seemed rsync might prove to be an acceptable one-stop solution for various backup and copying needs going forward. For instance, I imagined keeping the laptop and desktop computers in sync — in terms of data files, and possibly even in terms of some program files (between VMs, at least (using DeltaCopy), if the desktop was still running Windows, and between home partitions as well, if both were running Linux). Previously, I had seen that people were using rsync to back up their websites. It appeared I might be able to use rsync for all of the above — in which case the initially worrisome learning curve might eventually pay for itself.
  • These appeared to be reasons why, according to one source, “It seems that rsync is the de-facto standard for efficient file backup and sync in Unix/Linux.” If that was correct, it would make sense to become conversant in that de facto standard.

A Simple Case: Local Backup:
Using rsync to Back Up the Linux SSD to the Internal HDD

As noted above, my laptop had an SSD along with its HDD, and the LMX installation was on the SSD. What I wanted, first, was a simple restorable copy of the Linux installation, preferably compressed into a single zip file that I could move around and/or copy as needed. The separate home partition was big, but so far it didn’t contain much, so for now I could include that in this backup. Later, I might want to treat it separately. The target or destination location for this backup would be an ext4 partition on the HDD named BACKROOM.

Using rsync to Create a Single Backup File

To understand rsync, I could have started with its official “man” (short for “manual”) page. I didn’t readily see one for Linux Mint, but the one for Ubuntu ran to about 28,000 words, mostly discussing what looked to be about 130 options and suboptions. That was intimidating. I felt I would rather find practical examples that I could use to learn from, one step at a time.

For guidance in that project, a search led to a How-To Geek article (Brown, 2013) suggesting something as simple as rsync -av --delete [/source directory]/ [/destination directory]. (In such commands, note that WordPress, host of this blog, unfortunately renders two hyphens (i.e., “- -” without the space between them) as a dash (i.e., “–”). For Linux commands, the two hyphens would work; the dash would not. If in doubt, for that and any other ambiguities, it may help to copy and paste the command to a plain text editor.)

In Brown’s suggested command, the “a” option meant “recurse” — that is, start at the designated directory and include all of its subdirectories. (Actually, according to the man(ual) page, it was shorthand for a bunch of things: recurse, copy symlinks as symlinks, preserve permissions, preserve file modification times, preserve group, preserve (superuser) owner, preserve device files, and preserve special files.) The “v” option meant “verbose” (i.e., provide details about what’s happening). The “delete” option meant “delete anything else that might already be located in the destination directory”; “delete-before” would do that deleting before copying, in case space was limited. There was also a -z option to compress files. A TecMint article (Shrivastava, 2016) said I might want to use -h, short for “human-readable,” which evidently meant that a long number might be reported in shorter form, such as 138GB instead of 138 followed by a bunch of other digits. Other possible options: --progress, to show how the copying project was faring, and --dry-run, to try it out without making any actual changes. Shrivastava said that, if I specified a destination directory that didn’t exist, rsync would create it.

Combining what Shrivastava and Brown advised, I came up with this possible command: sudo rsync -avzh --dry-run --progress --delete-before /home/ "/media/ray/BACKROOM/2018-06-18 Backup/". In that, there were still some unknowns. Was it OK to use quotation marks around path or file names containing spaces? Would this command suffice to preserve the source directory’s structure — to ensure that, upon restoration, files would go back into the correct subfolders? Could I combine multiple source directories (i.e., /home plus /boot plus /root) into a single command? How was I supposed to name the compressed output file?

To resolve those unknowns, I looked further. TheGeekStuff (Natarajan, 2011) said I could use --exclude with a quoted file name to exclude a specific file from the rsync process, and similarly to exclude a specified folder (and, with a wildcard, to exclude all folders or files whose names fit a certain pattern), within the source directory. It was possible to use more than one --exclude option, or to use --exclude-from 'exclude-list.txt' to exclude all files listed in exclude-list.txt. The man page said I could use --include-from=FILE to list files or patterns to be included, one per line.

Continuing the exploration, Juan Valencia said the trailing slash (e.g., specifying /home/ as the source, instead of /home) indicated that it was not necessary to create a destination folder with the same name as the source folder. If there was no folder named /home on the destination, and if I wanted there to be such a folder, then apparently /home (not /home/) would be the way to go. Also, it was possible to “escape” (i.e., accept as-is) spaces and weird (Valencia called them “rare”) characters (e.g., “{” ) in the filename by using a backslash (e.g., referring to a file named we{ird as we\{ird) or by using single quotes (e.g., 'we{ird'). OSTechnix (2017) further recommended using -A (preserve ACLs) and -X (preserve extended attributes) options.

This was all just fascinating, but we were still wondering about the part where I was going to use rsync to compress multiple source directories into a single zip file on the destination — in effect, an image file that I could later unzip to restore the system to its state as of a certain moment. For that, a post in a LinuxQuestions forum said no, rsync didn’t do archiving: its -z option would only temporarily compress files during transfer. So basically I was investigating the wrong tool.

The rsync scenario seemed to be more like mirroring. Rsync would be very efficient because it would detect which files — indeed, which parts of files — had changed, and it would transfer only those changes from the source to the destination. As I proceeded into the next option (below), I would find myself wondering whether certain system folders would have to be excluded from an rsync mirror.

So that was where I left the matter. I proceeded to develop the following section, regarding tar. Then, as that section describes, I found myself wanting to compare the results of the tar method against the results of another tool. That was the point at which I decided to return to this section and develop the method of using rsync to mirror the Linux installation, as described in the following subsection.

Using rsync to Create a Mirror

As just indicated, I returned to write this section after substantially completing the following section, regarding the tar command. This was rather disorganized, but this was also the reality of my learning process. I would have rewritten it, but I was afraid I’d make even more of a hash of it that way. The best I could advise was that it might make sense to skip to the following section at this point, read in detail about tar, and then come back here — because I did not plan to repeat things that I had already figured out in the following section.

At this point, I considered the sketch of an rsync command that I had developed, modified in light of other comments (above). Using that information plus what I had learned about scripting (below) and some further reading and experimentation, I came up with this:

#!/bin/bash
# This is rsyncbup
DATE=$(date +%Y-%m-%d-%H.%M.%S)
SOURCE="/"
DESTINATION="/media/ray/BACKROOM/MintMirror-$DATE"
cd /
sudo rsync -avh --dry-run --progress --delete-before \
$SOURCE $DESTINATION \
--exclude={proc,sys,dev,run,tmp,lost+found,cdrom,media,mnt} \
--exclude={var/log,var/cache/apt/archives,usr/src/linux-headers*} \
--exclude=home/*/{.cache,.gvfs,.local/share/Trash} \
--exclude={var/spool/squid,archive}

An AskUbuntu comment stated that “the exclude path is relative to the source path.” So, for instance, when I said --exclude=proc, I was specifying /proc (i.e., a subfolder under the source, i.e., root). That seemed to be the same as in tar (below). Note that I could have used a separate exclude list file, but then I would have had to keep it with the script.

As below, I put that script into xed, saved it with the specified name (i.e., rsyncbup) in $HOME/bin (i.e., /home/ray/bin), and ran it as rsyncbup. Its dry run looked vaguely good. I removed the --dry-run option from the script and tried again. That ran. It took only a minute or two. In Thunar (i.e., the LMX default file manager), I went to the destination folder > right-click > Properties. It said the total was 4.8GB. That was less than the 5.3GB reported for the source partition, excluding most of the excluded directories but not excluding those buried in /home and /var. So I wasn’t sure, but maybe it had run correctly. I would have to see when I compared restores (below).

Simple Scenario Revisited: Local Backup via tar

To back up the system to a single file, MakeTechEasier (Diener, 2016) offered the tar command: “you’ll be compressing an exact copy of your entire Linux file system into a TAR archive.” That sounded appealing, like when I could use Acronis or AOMEI to compress my entire Windows installation into a single image file.

As such, this was different from the scenario where I had system folders (e.g., home, root) on different partitions. In that case, the advice from an AskUbuntu discussion was that I should make separate backup files for each partition. Then, when it came time to restore, as advised in an Ars Technica discussion, I would boot from a live CD and use it to restore each backup file to its respective folder.

That actually was the situation on my SSD, at about the time when I started this post. Since then, however, as described in another post, I had reinstalled LMX and, during that installation, I had accepted the installer’s offer to set up my SSD for Logical Volume Management (LVM). So now my whole SSD was one big pool, and my separate home, root, and boot partitions had ceased to exist. Therefore, my .tar.gz backup file was going to contain all those folders in one package. To create that backup file, Diener said I would want to use cd / to get myself to the root directory (i.e., / ), and then I could run something like sudo tar -cvpzf backup.tar.gz --exclude=/backup.tar.gz --one-file-system /. I set about developing that prototype for my purposes.

The Basic tar Command

According to various (e.g., 1 2 3 4) sources, tar was reportedly short for “tape archive.” Tar was apparently designed to preserve Linux file metadata (e.g., Linux permissions), which zip would not necessarily preserve. Tar did not compress; tar merely combined files into a single archive, which would then have to be compressed to save space. Zip would typically compress files individually before combining them, thus producing larger files than the best Linux compression formats. It seemed the best Linux compression formats were gzip and xz (the latter being similar to the 7z format produced by 7-zip in Windows). A tarball (i.e., tar file) compressed with gzip would typically have the .tar.gz or .tgz extension. Another format, bzip2 (with the .bz2 extension), seemed to be mostly falling into disuse. The main tradeoff was between degree of compression (i.e., reduced file size) and the amount of time and CPU power required to compress (or decompress). Generally, assuming a fairly powerful computer, it sounded like xz was the best compression format on Linux, just as 7z was arguably the best on Windows.
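The gzip-versus-xz tradeoff is easy to see on a sample file. This sketch assumes xz support is installed (it was in stock LMX); the sample data is artificial, so the ratios prove nothing about real systems:

```shell
# Build the same archive with gzip (-z) and with xz (-J), then compare.
set -e
WORK=$(mktemp -d); cd "$WORK"
mkdir payload
head -c 1M /dev/zero > payload/zeros.bin   # trivially compressible sample

tar -czf sample.tar.gz payload    # gzip: fast, decent compression
tar -cJf sample.tar.xz payload    # xz: slower, usually tighter
ls -l sample.tar.gz sample.tar.xz
```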

Given that background, the GNU tar manual (also available via man tar, but not as easily accessed by links) explained the command suggested by MakeTechEasier (above). The current version of the manual said the options in that command were as follows: c = create (i.e., creates a new tar archive, as distinct from adding files to an existing archive), v = verbose (see above), p = preserve permissions, z = gzip (i.e., use gzip for compression, as distinct from the option to use xz, i.e., -J or --xz), f = specify output (i.e., archive) file name (in that command, it was backup.tar.gz). As I went on, I saw that numerous websites used exactly, or almost exactly, that same set of options. There seemed to be no particular order to these options, except that the last one (f) was basically announcing the filename that would immediately follow it. The --exclude option meant “prevent the specified file [or directory] from being operated on” (i.e., archived and compressed). In this particular example, the specified file was none other than the file being created, as designated by the -f option, namely, backup.tar.gz. In other words, don’t get into a loop of trying to include the partly completed archive file within itself.
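A related option worth knowing: -t lists an archive’s contents without extracting it, which gives a quick way to confirm that an exclusion actually worked. A sandbox illustration (paths invented for the demo):

```shell
# Create a small archive with an exclusion, then list it to verify.
set -e
ARCH=$(mktemp -d); cd "$ARCH"
mkdir -p etc home/secret
echo "cfg" > etc/settings
echo "key" > home/secret/id

tar -cpzf backup.tar.gz --exclude=home/secret etc home
tar -tzf backup.tar.gz    # lists members; home/secret does not appear
```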

Understanding the --one-file-system option required a departure from how I thought of things in Windows terms. In Windows, the operating system resided on drive C. There were not typically parts of it on other drives. So I would just use Acronis to make an image of drive C. But here, the manual said, “This option is useful for making full or incremental archival backups of a [single] file system.” This seemed to be another instance of confused terminology in Linux, like “root” to mean the top-level folder or, very differently, the superuser, or “menu” to mean any old menu or, instead, what Windows users knew as the Start button. In Linux, a filesystem could be the format of a drive (e.g., NTFS or ext4 filesystems), or “the” Linux file system (i.e., the whole installation) — or, in the present context, it apparently meant something like the lesser of (a) a directory (e.g., the root directory) or (b) the part of a directory contained on a single partition (e.g., excluding /home if it was on a different partition).

So the general idea of the --one-file-system option seemed to be that you could use it to capture everything in “the Linux filesystem” that wasn’t located on another partition. So, for example, you wouldn’t have to exclude an external USB drive, because it would not be part of the / filesystem. Or, in my case, the internal HDD partition on which I wanted to save this tar file, ordinarily mounted as /media/ray/BACKROOM, would automatically be excluded if I used the --one-file-system option, because that location was not on the same partition as the root partition that I would be naming as my source or treating as the default source (i.e., by running the tar command from that root location).
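To preview which paths --one-file-system would refuse to descend into, it helps to list the mount points; everything shown other than / itself lives on a separate filesystem (this assumes GNU df, which stock LMX has):

```shell
# List mounted filesystem targets; any path below these, other than /
# itself, would be skipped by tar's --one-file-system option.
df --output=target | tail -n +2
```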

Assembling the Desired tar Command

Now I had enough knowledge to be dangerous. With the aid of contributions to a Linux Questions discussion, I came up with an experimental tar command:

sudo tar -cvpJf /media/ray/BACKROOM/MintBackup-(date +%y%m%d).tar.xz --exclude=/media/ray/BACKROOM/ /

There were a couple of things to notice about that command. First, it didn’t work. The reason may have been that I forgot to add a dollar sign ($) before the parenthetical “date” entry. The date thing itself was interesting: it was possible to include the current date (and time, as I would soon see) in the output (i.e., destination) filename. Note, also, that this command (and several to follow) used simply / (i.e., the slash character, symbolizing the root of the whole Linux installation) as the source directory — and, in rather bass-ackwards fashion, that source identification came last, at the very end of the command. I wasn’t yet seeing my error with the date command, so I just removed it and tried again, like this:

sudo tar -cvpJf /media/ray/BACKROOM/MintBackup.tar.xz --exclude=/media/ray/BACKROOM/ /

It looked like that worked. I had a Thunar session open and displaying the contents of the BACKROOM partition, and there I could see the MintBackup.tar.xz file getting bigger as the filenames scrolled down the screen in Terminal. I wished there were a progress option, as in rsync. It seemed the best available method for watching progress was watching Thunar update its report of the MintBackup.tar.xz file size. It was pretty slow. In what may or may not have been a representative snippet, it took 42 seconds to grow the archive file by 20MB. So at that rate, according to my calculations, it would take about 12 years to finish. I was recalling that AOMEI Backupper Standard was able to back up my Windows installation into a 50GB compressed file in maybe 15 minutes.
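For what it is worth, GNU tar does offer a modest built-in progress facility via its --checkpoint options. Here is a sandbox sketch of the idea (not the command I actually ran at the time):

```shell
# GNU tar can print a dot at every Nth 10KiB record via --checkpoint,
# giving rough progress indication without any extra tools.
set -e
PROG=$(mktemp -d); cd "$PROG"
mkdir payload
head -c 5M /dev/urandom > payload/blob   # incompressible sample data

tar -czf out.tar.gz --checkpoint=100 --checkpoint-action=dot payload
echo    # newline after the dots
```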

I decided this was a good chance to grab a snack. When I returned, four hours later, it seemed to be stuck at almost the point where I left it: 1.2GB compressed, and many gigabytes left to go. Did the exclude option not work — had the command choked on its own tail, attempting to include BACKROOM in the archive that it was creating on BACKROOM? Or was 1.2GB as large as it could go? The last filename listed on Terminal was /proc/kcore, but I didn’t know what to make of that: the files listed in Terminal were in seemingly random order. I hit Ctrl-C, and Terminal didn’t object; it seemed glad to be done with that ordeal. I opened the MintBackup.tar.xz file that had been created. I saw that it hadn’t finished backing up the /proc folder. This suggested maybe it hadn’t yet started trying to chew on BACKROOM. I tried again, this time using .gz instead of the more demanding xz compression:

sudo tar -cvpzf /media/ray/BACKROOM/MintBackup.tar.gz --exclude=/media/ray/BACKROOM/ /

Yes, that was much, much faster. But then it got to that /proc/kcore file and froze again. A search revealed that I was not alone in this problem. I hadn’t previously noticed it, but now that I was seeing it in these other posts, I noticed that the kcore file’s size was reported at 140.7TB (sic). According to an Ars Technica discussion, the /proc directory was a virtual filesystem, reporting files that did not actually exist, and /proc/kcore “is a file that represents the contents of your memory.” I hoped not. If people knew what was in my memory … seriously, I could see where that would be problematic. How it took 140TB to capture 24GB of RAM, I did not know. I was not asking. It was none of my business.

Excluding Directories from tar

People were saying that I wouldn’t want to include /proc/kcore, and perhaps some others, in my backup. But then where would those files come from, if I explicitly excluded them from the backup, and then had to do a restore? Would they just be recreated automatically? In comparison to the Windows drive image scenario, where I’d just make an image and then restore it and everything would be fine, this was starting to sound convoluted. And it got worse. Consider another remark in that Ars Technica discussion: “Keep in mind that you’ll probably need to re-install the system anyway, if the system goes down. Most of the time you’ll only care about restoring pieces of /etc and /var, /usr/local/, that sort of thing, anyway.” The hell. Which pieces? I was an amateur. I had no concept of such things.

Expressing a rather different view, responses on Quora suggested that (at least if I wasn’t going to use the --one-file-system option, above) my backup should include the top-level directories /opt, /etc, and at least some parts of /var, along with /home (assuming it would not be better backed up separately), but should exclude the virtual filesystems /proc and /sys, whose contents were not actual files but, rather, simply “windows into the variables of the running kernel.” That quote came from a well-written Ubuntu community webpage that said I should also exclude /dev and /run (and /tmp, according to other sources), which were temporary filesystems that did not need to be backed up. In addition, these sources indicated that I should exclude any paths on which other volumes were mounted, notably /mnt and /media (and, I believed, /cdrom). I was able to verify at least part of that last suggestion: on my laptop, using Start > Accessories > Disks (i.e., gnome-disks) > highlight an individual partition on the laptop’s HDD > click to mount partitions, it said they were mounted on /media/ray, so it did make sense to exclude the /media directory.

To clarify all this advice, I decided to list the filesystems in my LMX installation, grouping them as follows. First, to review the preceding advice, here were the ones that I was supposed to exclude:

  • the virtual filesystems /proc and /sys
  • the temporary filesystems /dev, /run, and /tmp, along with /lost+found
  • filesystems that I believed were largely for mounted media: /cdrom, /media, /mnt
  • optionally, other directories that could safely be either backed up or excluded, depending on whether I wanted to make my backup smaller (apparently at the risk of possibly having to re-download some .deb files later). These optional exclusions are listed in the script shown below.

In contrast to those exclusions, there were other filesystems that people were telling me to back up: /home, /opt, /etc, and /var (aside from optional exclusions, below). That left some directories unaccounted for. On my system, those were /bin, /boot, /lib, /lib64, /sbin, /srv, and /usr. I assumed I wouldn’t want or need to back up the hidden /lost+found folder, but I wasn’t sure what to do about the hidden /root folder. There could be other filesystems on other computers or perhaps, later, on this one. For instance, TLDP mentioned /archive. For guidance on what to do about those unmentioned filesystems, I took a hint from the “Alternate Backup” section of the Ubuntu community webpage. That section (and the discussion preceding it) specified some exclusions in addition to those just named — implying that everything else would be included.
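Before settling the list, it also helped to see where the disk space actually was. A rough survey along these lines would do it (the -x flag stays on the root filesystem, the same boundary --one-file-system respects; without sudo, unreadable system files make the totals undercounts):

```shell
# Per-directory totals for the root filesystem, largest first;
# permission errors are discarded, so sizes may be low without sudo.
du -xsh /* 2>/dev/null | sort -rh | head -n 10
```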

A tar Script

Putting this all together, I decided to construct a general backup command with specified exclusions. Listing all those exclusions on a single line would make for a long command. To make it more manageable, there was the option of specifying a file that would contain a list of exclusions, as noted above. Alternatively, I could write a script that would preserve all this information. 1&1 Digital Guide offered such a script, which I would soon (but not yet) revise and save as follows:

#!/bin/bash
# This is tarbup
DATE=$(date +%Y-%m-%d-%H.%M.%S)
SOURCE="/"
DESTINATION="/media/ray/BACKROOM"
cd /
tar -cvpzf $DESTINATION/MintBackup-$DATE.tar.gz \
--exclude={proc,sys,dev,run,tmp,lost+found,cdrom,media,mnt} \
--exclude={var/log,var/cache/apt/archives,usr/src/linux-headers*} \
--exclude=home/*/{.cache,.gvfs,.local/share/Trash} \
--exclude=var/spool/squid \
$SOURCE

With the aid of that website, and drawing on some of the foregoing information, I understood the contents of that script as follows:

  • Line 1: the “shebang,” indicating that this script would be interpreted by the bash command shell.
  • Line 2: example of a comment (i.e., informational, not to be executed, marked with an initial # sign).
  • Line 3: trying again to include the date in the name of the tar archive file. The combination of the dollar sign ($) and the parentheses (i.e., “(” and “)” ) meant, calculate this value (i.e., the current date) and present it as shown (i.e., extracting year, then month, then day …). The DATE variable was thus defined as that date value, presented in that way.
  • Line 4: the SOURCE variable stated what was being backed up. In this case, everything was being backed up, starting at the root level (i.e., “/”). Here, it would have been easier just to type “/” than to set up the SOURCE variable. But I might want to use this script again later, with some other directory. For instance, the user might want to back up only a part of their /home folder or partition. In that case, the 1&1 webpage specified SOURCE="$HOME/sourcedirectory". I would find it easier to use the script if all I needed to do, in a new situation, was to change the value of a few of its variables up front, rather than go rooting through it to make sure I had caught all the places where those variables were used. As shown on the bottom line of this script, $SOURCE was also less likely to be overlooked or accidentally deleted than the ending slash (“/”) by itself. Note also that dollar sign or parentheses were unnecessary, here in line 4, because (unlike the date situation) nothing was being calculated; SOURCE simply had to remember the specified location (e.g., root).
  • Line 5: defining DESTINATION as a variable stating where the backup would be saved.
  • Line 6: moving to root directory. I did try without this, specifying instead the absolute locations of excluded directories (e.g., “/proc”), with a leading slash. It wouldn’t accept that: it couldn’t find what I was talking about. So I had to position the processor at the root directory and remove those leading slashes from the “exclude” lines.
  • Line 7: starting the actual tar command. Dollar signs provided notice that variables were being used — that the command should process the value of the DESTINATION variable, for example, rather than just try to do something with the word “DESTINATION.” Note that this and subsequent lines ended with a backslash, indicating that the command continued on the next line.
  • Lines 8+: apparently versions of tar varied in their syntax. On my computer, man tar seemed to indicate that options, such as exclude, should come after tar but before the specified source path (in this case, $SOURCE). (If I wanted to do a Google search for further information on my particular version, tar --version informed me that the version of tar in use on my computer was 1.28.) Curly brackets accommodated multiple comma-delimited directories, in lieu of retyping “--exclude” for each. The translation of the curly brackets appeared to be something like, “After each comma inside these curly brackets, repeat everything that appears on this line before the opening curly bracket(s).” Note that addresses were relative to the current location (set by the cd / command). I think this script failed when I tried using e.g., --exclude={/proc, . . ., with a forward slash, instead of the --exclude={proc, version shown here. Note that I was later advised to include, in my restore command (below), a mkdir command for each folder excluded from the tar backup here. That would be impossible for the ones designated with a wildcard: at restoration time, I would have no idea what subfolders had been excluded, so I could not re-create them.
  • Last line: end with $SOURCE (i.e., the specification of the source directory, i.e., root).
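Putting those pieces together, the script under discussion would have looked something like the following sketch. The DESTINATION path and filename pattern are illustrative, and the exclusion list is reconstructed from the mkdir list in the restore script shown later in this post:

```shell
#!/bin/bash
# tarbup (sketch): back up the root filesystem to a compressed tarball.
# DESTINATION is a placeholder path; adjust to the actual backup drive.
SOURCE="."
DESTINATION="/media/ray/BACKROOM/MintBackup-$(date +%F-%H.%M.%S).tar.gz"
cd /
# Brace expansion turns each comma-separated name into its own
# --exclude=NAME option; names are relative because of the cd / above.
sudo tar -cvpzf "$DESTINATION" \
  --exclude={proc,sys,dev,run,tmp,lost+found,cdrom,media,mnt} \
  --exclude={var/log,var/cache/apt/archives,var/spool/squid} \
  --exclude={usr/src/linux-headers*,home/ray/.cache,home/ray/.gvfs} \
  --exclude=home/ray/.local/share/Trash \
  $SOURCE
```

Here SOURCE is given as “.” (i.e., the root directory, relative to the cd /) so that the archived names match the relative exclude patterns; the original script may have specified root differently.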

In this case, I did not opt to exclude the /home partition. I had not yet copied my VMs back onto it, so (as I could see in Thunar) its contents were only 24MB. Including it would give me a more or less complete system backup. Once I had the VMs back in the /home partition, events calling for a backup of that partition might not be in sync with events calling for a backup of the main LMX installation. At that point, I might want to add /home to the list of exclusions.

Later, it seemed to me that maybe I should have made a backup of the EFI boot partition. I wasn’t sure, but it seemed that doing so might have saved me a bunch of time and effort to recreate it (below).

Running It: A Scripting Environment

So now I had a script in mind, containing my tar command and its accompaniments. I needed to figure out how and where to save it and run it. For that, I decided to follow advice found in the Arch Linux wiki. The advice was to set up a “scripting environment.” This seemed to mean just a place and a method for saving and running scripts.

To make that work, and to understand what I was doing, I explored the PATH variable. I didn’t intend to, originally; I just got there by following advice that might not have been right for Linux Mint, and then trying to undo what I had done.

The PATH variable was the list of places where the computer knew to look for an executable command, if I typed its name at the command prompt. The places were separated by colons. I could see the current contents of the PATH variable by typing echo $PATH.

What I should have done was to type echo $PATH at the start, and record the default PATH, so that I would know what it was supposed to be if I changed it to something undesirable. It appeared that the original PATH may have been this:

/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games

That may have been set by a file that I could have edited using sudo xed /etc/environment, where xed was the default text editor in LMX (also available via Start > Accessories > Text Editor). But I was not certain that that was the original PATH: I also saw that xed ~/.profile displayed these two lines at the bottom:

# set PATH so it includes user's private bin directories
PATH="$HOME/bin:$HOME/.local/bin:$PATH"

The second line seemed to say, “Add these two bin folders to the standard system PATH.” Sources said the PATH was set by multiple files, so I couldn’t be sure, but it seemed that those two lines might have been why my PATH contained two additional directories, in addition to the standard PATH shown above (i.e., the one starting with /usr/local/sbin). Those two additional directories were /home/ray/bin and /home/ray/.local/bin. According to a post in a Linux Mint forum (2017), /home/ray/bin would automatically be included in $PATH.

I didn’t think I had added those two lines at the bottom of ~/.profile. I wasn’t sure I should remove them. I tried nonetheless. To do that, I followed the advice to use export PATH=[desired path, excluding what I didn’t want]. That changed the PATH, but apparently it was only a temporary fix, persisting only during the current Terminal session or perhaps until I logged out. To change the PATH permanently, the advice was to use xed .bash_profile or perhaps xed ~/.profile. But I suspected the former (i.e., xed .bash_profile) was outdated advice, or perhaps not applicable to LMX: it seemed to be creating a new .bash_profile file, not editing the existing one. I tried again, using locate .bash_profile to find where the real .bash_profile was located, and then revising the command: xed /[full path]/.bash_profile. That opened a file that contained nothing operational, only a statement about how it was empty by default.

Reviewing, then, it seemed that I might alter my PATH with sudo xed /etc/environment and/or xed ~/.profile. At any rate, my current PATH (and possibly the original default) was this:

/home/ray/bin:/home/ray/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games

If I did want to add more directories to the PATH, the advice was to use export PATH=$PATH:/[new directory] (e.g., export PATH=$PATH:$HOME/bin).
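The moving parts described above can be seen in a short Terminal session. This is a generic sketch; the directory added is only an example:

```shell
# Show the colon-separated list of directories searched for commands.
echo "$PATH"

# Add a directory for the current Terminal session only; the change
# disappears at logout.
export PATH="$PATH:$HOME/bin"

# For a permanent change on Mint, the equivalent line would instead go
# into ~/.profile (or /etc/environment for all users), then be reloaded:
source ~/.profile
```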

Despite all these words, my PATH command actually didn’t work. I had to specify the path to /home/ray/bin/tarbup in order to get it to run. If this comment is still here, that probably means I haven’t yet returned to this issue long enough to figure it out.

With that informative if imperfect PATH information in mind, I proceeded to follow the (modified) advice to set up the scripting environment, as follows:

  • mkdir ~/bin. On my laptop, this command created a “bin” directory to store scripts at /home/ray/bin.
  • Open an xed session, enter the text of the script (above) into the blank document, and save it in /home/ray/bin as tarbup (i.e., using the name shown on its comment line). Apparently adding an extension (e.g., tarbup.sh) was no longer recommended. I learned, the hard way, that saving tarbup in Notepad in Windows, and then bringing it over to the Linux laptop via USB jump drive, was problematic: it seemed that, by doing so, I added invisible codes. It would have been better to type it into xed.
  • Add the new /home/ray/bin folder to my PATH (discussed above), if it wasn’t there already. With that in place, I would not have to move the Terminal prompt to any particular folder in order to run the script, and I would also not have to enter the full path (i.e., /home/ray/bin/, or ~/bin/, followed by the name of the script) in order to run the script.
  • To facilitate future script production, run commands to (1) create (or use unalias to remove) an alias (i.e., a typed shortcut) that would allow me to move the prompt to that bin folder without requiring a lot of typing, and then (2) update the current system information on that point. I chose “mybin” as the alias. After I entered these commands, mybin moved the prompt to /home/ray/bin:
alias mybin="cd ~/bin"
source ~/.bashrc
  • With the prompt in ~/bin, run the “change mode” command chmod to make my new script executable. The command format: chmod +x [script name]. In this case, it was chmod +x tarbup.

The Arch Linux advice continued a bit further, with instructions on using chroot. This was apparently to facilitate running the backup from a live CD. I wasn’t sure whether that was necessary in all cases, or was perhaps specific to Arch or ideal for advanced users. Other sources didn’t seem to recommend it. It seemed I might have to feel my way into that.

Judging from the 1&1 Digital Guide, I was ready to go. All I needed was to run sudo tarbup. After this, according to that Guide, I could run a different script to do incremental backups. So, after some playing around, I managed to get the script to run. In a few minutes, I had my .tar backup. I opened it and compared its list of top-level directories to those actually installed on the laptop. The desired ones seemed to be there; the undesired ones weren’t. The archive was 1.9GB, as compared to a 5.2GB installation (excluding unwanted directories). That was serious compression, but evidently it was possible.

Now that I had finished that, the only thing missing was a backup. Seriously, I was about to wipe my Linux Mint installation and see if I could restore the tar backup. Of course I would want another backup, just in case. I might not trust GUI backups for ongoing use; but for purposes of making sure I had a one-time alternative to this tarball, it seemed like a good idea. The built-in (Start > System > ) Backup Tool fell far short of a full system backup, offering to back up only Personal data or Software Selection. Instead, I downloaded Bacula, extracted files from the tar.gz archive, and opened its INSTALL file. That file conveyed these nuggets:

This file is rather out of date, and if you want to avoid a lot of pain, you will read the manual, which you can find at www.bacula.org. . . . Note, in configuring Bacula, you cannot get by with a simple ./configure, it is much more complicated than that (unfortunately).

That link actually redirected to a page in the Bacula blog, from which I found a Manuals page, whose Main Reference Guide contained a Quick Start section that I found only moderately dismaying.

It did occur to me, at about this point, that there might actually be a friendlier GUI backup solution. With that possibility dangling before me, I returned to the LinuxTechi list (Kumar, 2018) of 12 top open source backup tools. Duplicati was No. 2 on the list, but it appeared to be cloud-oriented. The next GUI solution on the list, the Advanced Maryland Automatic Network Disk Archiver (a/k/a Amanda), offered an FAQs page that seemed to say that I would have to use the command line to do a full backup. I went a little further down the list and concluded that these were tools for administrators, not simple disk imaging tools for end users.

The whole concept here was that I was trying to learn tar in order to avoid having to screw around with endless proprietary and/or eccentric tools to accomplish relatively simple file moving, archiving, and backup tasks. I wasn’t eager to screw around with these tools in order to see whether tar had given me a way of avoiding the need to screw around with these tools. So at this point, logic being what it is, and having seen rsync mentioned on multiple lists of backup tools, I decided to return to the previous section of this post and develop the rsync method of mirroring the Linux installation.

Restoring from the tar File

Now that I had two backups — one provided by my rsyncbup script and one by my tarbup script — it was time to see whether they worked.

I started by taking a look at GParted, to see how things were arranged on the SSD in the installed system. (GParted was included in the LMX live CD ISO but, for some reason, was not installed on my system. I added it to my installation via sudo apt-get install gparted.) GParted said that /dev/sdb (i.e., the 466GiB SSD) consisted of the following:

  • /dev/sdb1: an EFI System Partition (FAT32) (512MiB)
  • /dev/sdb2: a /boot partition (ext2) (488MiB) (flags: boot, esp)
  • /dev/sdb3: the LVM partition (crypt-luks) (465GiB) (flag: lvm)
  • /dev/sdb4: unallocated (1.02MiB)

Now I rebooted the laptop with a single-purpose USB drive (i.e., not a YUMI or other multiboot drive, except as detailed in another post). This was, of course, an LMX live USB. My own preference, for creating that tool from a downloaded LMX ISO, was to use Rufus. I interrupted the bootup process, using F2 at the splash screen (i.e., the Acer logo, on this laptop) to verify that the machine was booting in UEFI, not BIOS/Legacy, mode, with Secure Boot disabled. I restarted and, this time, I hit F12 at the splash screen, to see the boot menu, and selected the USB drive. The live USB booted and ran LMX.

Now, in the Linux live CD session, I ran GParted (Start > System > GParted). For the SSD, it showed the partitions listed above. In GParted, I deleted those partitions. I went to GParted > menu > View > Device Information. That gave me a left-hand sidebar providing information about the SSD. It said it was formatted as gpt, not mbr, msdos, or something else. GPT was the desired option, so that was good. Now, as advised by the Ubuntu wiki, I used GParted to recreate the first three partitions. I decided to try leaving a larger unallocated space for SSD overprovisioning, so I made the LVM partition only 400GiB (i.e., 409600MiB). (I wasn’t sure that would work, but I wanted to try.) Crypt-luks wasn’t a formatting option, so I chose ext4. Then I clicked Apply. Then, for the partitions that I had named BOOT and LVM, in GParted, I right-clicked > Manage Flags > select boot for the /boot partition and lvm for the LVM partition.

Continuing in the LMX live CD session, I went to Start > Accessories > Disks. My mission here was to mount the necessary partitions. So on the HDD, I selected the BACKROOM partition where the tar and rsync backups were stored > click the Mount arrow; and on the SSD, I similarly mounted the EFI, boot, and LVM partitions. I was logged in to the live CD under the “mint” username, so Thunar reported mount points EFI, BOOT, BACKROOM, and LVM at /media/mint. These mount points functioned like folders, so I could see the contents of BACKROOM in the BACKROOM mount point, whereas the others were empty.

Then I opened Terminal and began to develop the command needed to restore the tar backup. According to the Ubuntu community webpage, an appropriate restore command would be something like sudo tar -xvpzf /path/to/backup.tar.gz -C /restore/location --numeric-owner. Some of those options were the same as those used to create the tar. One exception was that the GNU tar manual said the restore would call for the -x (extract from tar) rather than the -c (create tar) option. The create command (above) also didn’t use the -C option, which told tar where the top-level restore would begin. The --numeric-owner option would apparently tell tar to disregard user names in the current environment. Specifically, in this LMX live CD session, my login under the default “mint” user had nothing to do with the “ray” username that I wanted to restore to the SSD. On that basis, I developed the following script:

#!/bin/bash
# This is tarrestore
SOURCE="/media/mint/BACKROOM/MintBackup-2018-06-22-00.40.11.tar.gz"
DESTINATION="/media/mint/LVM"
cd /
sudo tar -xvpzf $SOURCE -C $DESTINATION --numeric-owner
cd $DESTINATION
sudo mkdir -p proc sys dev run tmp lost+found cdrom media mnt \
  var/log var/cache/apt/archives usr/src/linux-headers \
  home/ray/.cache home/ray/.gvfs home/ray/.local/share/Trash \
  var/spool/squid

I created that script on the desktop in the live CD session, used sudo chmod +x tarrestore to make it executable, and ran it from Terminal via /home/mint/Desktop/tarrestore. It ran. It seemed to work. The LVM mount point was no longer empty.

Regarding those mkdir command lines: if I tried to make directories using full paths (e.g., /proc), of course, the live CD environment would try to create them in its own root (/) directory, not in the LVM volume that I was trying to reconstruct. Therefore, the names of the folders listed in this mkdir command do not begin with a slash. Instead, they are installed under $DESTINATION — that is, relative to the LVM mount point specified by cd $DESTINATION.

Trying to Restore GRUB

Restoring from the tar file gave me the operating system files, but did not give me a bootable system. For that, I would need to restore the GNU Grand Unified Bootloader (GRUB). The GNU GRUB Manual 2.02 described a bootloader as the first software program that runs when a computer starts, with the purpose of loading and transferring control from firmware (i.e., basic startup code built into the computer’s hardware) to an operating system kernel, which then initializes the rest of the operating system.

The Ubuntu community webpage (and, with a little more detail, AskUbuntu) offered instructions to restore GRUB, so that I would have a bootable system. With my laptop in its present state (i.e., booted from the live CD, and having just run the foregoing script), those instructions seemed to translate into the following commands, entered one at a time:

sudo -s
cd /media/mint/LVM
for f in proc sys dev ; do mount --bind /$f /media/mint/LVM/$f ; done
chroot /media/mint/LVM
apt-get update
apt-get install grub-pc
dpkg-reconfigure grub-pc
for f in proc sys dev ; do umount /media/mint/LVM/$f ; done
exit

The apt-get update line seemed necessary because, without it, dpkg-reconfigure grub-pc would produce an error: “package ‘grub-pc’ is not installed and no information is available.” Apparently it was a question of whether grub-pc was installed within the superuser environment: for the ordinary user, Start > System > Synaptic Package Manager reported that, in fact, grub-pc was installed. But the apt-get update command produced its own errors, starting with “temporary failure resolving ‘packages.linuxmint.com.'” It seemed the superuser was not online. Yet ping -n 8.8.8.8 -c3 said otherwise: those pings to the Google server were successful. In that case, an AskUbuntu answer said this was a DNS problem — but the solution wasn’t specified. I was baffled, so I posted a question.
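In hindsight, a commonly suggested explanation for exactly this symptom (pings by IP address succeed, but apt-get cannot resolve hostnames) is that the chroot has no usable /etc/resolv.conf. The usual workaround, which I had not found at the time, is to copy the live session’s copy into the target system before chrooting; a sketch:

```shell
# From the live session, before running chroot: give the target
# system the live session's DNS configuration.
sudo cp /etc/resolv.conf /media/mint/LVM/etc/resolv.conf

# Then chroot and retry the update.
sudo chroot /media/mint/LVM apt-get update
```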

I wasn’t having the best of luck getting responses on the Linux Mint forums. Usually, I took that as a sign that I was getting pretty far down the rabbit hole. So after waiting a while I thought maybe I should try other possibilities. TecMint (Cezar, 2017) said I could use Ubuntu Server (18.04 LTS) edition on a USB drive to reinstall GRUB. Until now, I had been creating my live USB drives on my Windows 10 desktop machine, but for this I tried using the laptop, running an LMX live USB drive. On that laptop system, I downloaded and ran Etcher. (It was portable, not requiring installation.) I used Etcher to install the Ubuntu Server ISO on a USB drive. I rebooted the laptop using that Ubuntu Server USB drive and went through its basic questions (e.g., language, keyboard), accepting default options. But what I was seeing didn’t look like Cezar’s screenshots, and it couldn’t get past the Network Connections dialog. On the Windows machine, using Rufus to burn the USB drive, I tried again with an x64 server install version that would have been available at the time of Cezar’s article (i.e., Ubuntu 16.04 LTS). That was better. It still didn’t look the same onscreen, but at least now I saw the option to “Rescue a broken system.” But by the time I was done with this approach, I had installed Ubuntu Server on the laptop, so I had to abandon this approach and start over, repartitioning with GParted and then re-running the tar command to restore the LMX backup to my laptop’s SSD.

Another possibility: Super Grub2 Disk. I had already downloaded the SGD ISO for version 2.02s9 (“recommended download” for “floppy, CD & USB in one”), so now I used Rufus to install that on a USB drive on my Windows 10 desktop, and then tried to boot the laptop with that USB drive. But the laptop wasn’t seeing it. I tried again with another USB, created the same way, but the laptop didn’t see that either. I tried again, this time using the EFI x86_64 standalone version. It had an .efi extension, which Rufus didn’t see. Lifewire (Fisher, 2017) seemed to say that an .efi file should be placed in the EFI partition that I had created using GParted. Unfortunately, I wasn’t sure where to put it. Fisher said its location varied among Linux versions. A search led to various pages that, within my limited browsing, did not resolve the mystery. Rod Smith advocated using his own rEFInd, but it looked complicated. An AskUbuntu discussion addressed the situation on Ubuntu, but I was not confident LMX would be identical.

A Linux Mint forum discussion made me wonder whether some of this advice was perhaps suited for MBR or BIOS systems rather than GPT or UEFI. I would have been happy to try booting some of these tools with the laptop in Legacy mode, but it seemed like people were saying that this sort of mixing would cause problems later, on a system that was going to be operating in UEFI mode for the most part.

Another possibility was to run Boot-Repair. An Ubuntu Community document said it was possible to use Boot-Repair either on a live USB drive, as Boot-Repair-Disk (which I had used in the past, from my BIOS-oriented YUMI drive), or from an Ubuntu installation or live CD. For that last option, with a working Internet connection, the suggested commands were as follows:

sudo add-apt-repository ppa:yannubuntu/boot-repair
sudo apt-get update
sudo apt-get install -y boot-repair
boot-repair

Having booted with the LMX live USB, I tried those. The second command produced an error:

Failed to fetch cdrom://Linux Mint 18.3 _Sylvia_ – Release amd64 20171213/dists/xenial/contrib/binary-i386 Package

Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs

I ignored that message. The remaining commands seemed to run OK. Boot Repair started up, seemed to be running some scans, and then gave me a choice among Recommended Repair, Create a BootInfo Summary, or Advanced Options. Experience with Boot Repair Disk on the YUMI drive had taught me to simply click the Recommended Repair. It ran, and then gave me a set of four commands to copy and paste into Terminal, which I did. Those commands were as follows:

sudo chroot "/mnt/boot-sav/sdb3" dpkg --configure -a
sudo chroot "/mnt/boot-sav/sdb3" apt-get install -fy
sudo chroot "/mnt/boot-sav/sdb3" apt-get install -y lvm2
sudo chroot "/mnt/boot-sav/sdb3" apt-get purge -y grub*-common grub-common:i386 lupin-s* shim-signed

Those commands all seemed to run OK, with a couple of exceptions for the last one. It produced notices that some of the specified packages were not installed, and therefore were not removed. It also produced some error statements, starting with this one:

mktemp: failed to create directory via template ‘/var/tmp/mkinitramfs_XXXXXX’: No such file or directory

Since that last command was a purge command, presumably the error related only to a failure to remove one or more packages. I hoped that was a problem only for purposes of keeping things orderly, without consequences for actual functioning. Returning to the main Boot Repair dialog, I clicked Forward. It gave me another command to copy and paste:

sudo chroot "/mnt/boot-sav/sdb3" apt-get install -y grub-efi-amd64-signed shim-signed linux-headers-generic linux-signed-generic

Sadly, that command produced indications that the errors from the previous one were problematic. Specifically (among other things), it said that shim-signed had unmet dependencies and I had “held broken packages.” I tried re-running the previous command (i.e., the fourth in the preceding list of four commands). Among other things, it said “package ‘shim-signed’ is not installed, so not removed.” So it seemed I was mistaken: the previous problem with shim-signed seemed irrelevant. And yet when I repeated this last command, involving apt-get install, once again I got those errors about unmet dependencies and broken packages involving shim-signed. A search revealed that I was not alone. Among the recommended solutions, I noticed that srs5694 (apparently the username for Rod Smith, above) provided a simplified summary of using rEFInd (above), which I interpreted as follows:

  • Download and unzip the ISO and use Rufus to install its .img file onto a USB drive.
  • Reboot with that USB drive. Hit F12 during bootup to select the USB drive.
  • Assuming it boots correctly (mine did), select the Linux Mint icon (or Ubuntu if no Linux Mint icon appears) and hit Enter.

That’s as far as we got. It started giving me errors:

WARNING: Failed to connect to lvmetad. Falling back to internal scanning.

Reading all physical volumes. This may take a while …

/run/lvm/lvmetad.socket: connect failed: No such file or directory

It did that for a few minutes and then dumped me at an initramfs prompt after giving me a warning:

ALERT! /dev/disk/by-uuid/[UUID number] does not exist.

Was that the problem? I had restored to a different partition; perhaps its UUID number was different from the one I had backed up with the tar command, and therefore any boot attempt would fail? Novell (Record, 2007) said that recovering or restoring data on an LVM partition would require me to restore the old disk’s UUID and its LVM metadata. Was that advice still valid, or was it outdated? Novell worked through several scenarios that might have been informative. Regrettably, this was starting to look like one of those fights that you can only win by staying with it until neither of you can stand up anymore.
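One way to test that guess from the live session would have been to compare the UUIDs the new partitions actually had against the ones the restored system expected, using mount points and device names matching those above (a sketch, not what I ran at the time):

```shell
# UUIDs of the newly created partitions.
sudo blkid /dev/sdb1 /dev/sdb2 /dev/sdb3

# UUIDs the restored system expects, per its fstab.
grep -i uuid /media/mint/LVM/etc/fstab
```

For a plain ext partition like /boot, a mismatch could be cured either by editing fstab to the new UUID or by writing the old UUID back with sudo tune2fs -U [old UUID] /dev/sdb2; the encrypted LVM partition would be more involved, as the Novell article suggested.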

At about this point, I came across a HowtoForge tutorial on repairing Linux boot failures in GRUB 2 rescue mode. I rebooted the laptop, without USB drives, and observed that I wound up with a grub> prompt. According to the tutorial, “This is the screen mode you see when GRUB has found everything except the configuration file. This file probably will be grub.conf.” The tutorial seemed to say that GRUB had failed to load its normal module, and therefore I was in GRUB’s Rescue Shell. Beyond that, I found the tutorial unclear, so I tried again with the GNU GRUB Manual 2.02. But, wow, it was really long and complicated. I didn’t want to earn college credit for this. I just wanted the machine to run.

Now, honestly, I didn’t expect anything better from Boot Repair Disk, running from a USB drive, than I had gotten from the boot-repair command in Linux (above). But I had not actually tried it and — who could say? — maybe it would work miracles. In this optimistic if not delirious state of mind, I went to the Boot-Repair website, downloaded its latest version, used Rufus to burn it to USB, and booted the laptop with it. It did the same scanning as before. But then it said, “This will install the [lvm2] packages. Do you want to continue?” I said sure, why not? It wanted me to connect to the Internet, and then it went to work. It gave me the same choice as before, and again I chose Recommended Repair. It told me to enter the same commands, and when I got errors, this time it concluded, “GRUB is still absent. Please try again.” Instead of doing that, I clicked Discard. It said, “GRUB reinstallation has been cancelled.” I rebooted, with the intention of trying something other than Recommended Repair, but I was no longer getting any alternatives.

At this point, I was ready to cut my losses. My working conclusion was that, when people occasionally said that tar was not really suited for system backup, this might be the sort of thing they were talking about. I mean, it had been real, and it had been fun, but it hadn’t been real fun, and now I believed it was time for me to mosey on.

Restoring from the rsync Backup

If I wanted to restore from the rsync backup instead of the tar backup, OSTechnix (2017) said I could just repeat the rsync command that I had used to create the rsync backup, except that I would want to reverse the source and destination. But there didn’t seem to be much point in restoring what I had saved via rsync, if it was just going to give me a partition full of Linux system files that wouldn’t boot. Without a solution to the GRUB problem, it seemed I could save myself the effort. It appeared that would be the conclusion for other Linux backup tools, such as fsarchiver, for which a Linux Mint forum discussion seemed to indicate that it would be necessary to install Grub to a computer that didn’t already have it.

I did wonder whether the tar and rsync backups differed much. A ServerFault discussion (2009) on exactly that question yielded multiple suggestions on how to perform such a comparison. I tried Beyond Compare and Meld, but could not figure out how to get them to look at root partitions. Moreover, I realized, I was running low on enthusiasm for this project. I decided to shelve it for now.

GUI Backup via Disks Utility

As noted above, at some point in this process I became aware of the Disks Utility (a/k/a gnome-disks) method of doing a backup. Disks was built into the LMX installation. Of course, I no longer had a working LMX installation, so I had to start over with a new install. But once that was in place, I was ready to try Disks. Was it really going to be a simple solution to this complex problem?

To find out, I went to Start > Accessories > Disks. The scenario seemed to be that I would do separate backups of each partition. As Disks informed me, the system partitions were the three on the SSD: the EFI partition, the boot partition, and the large LVM partition. According to TechRepublic (Wallen, 2017), these were going to be images of the entire drive: “so if you have a 1TB drive the resulting image will be 1TB.” That was sobering. It pretty much ensured that backups would be made only to an external drive. Wallen said the backup and restore would be done from a live USB or CD, not from within the running Linux system being backed up: “you cannot create an image of a currently mounted drive.”

The Disks utility seemed to offer different image options. As just indicated, Wallen was talking about backing up an entire drive. To get that, he was selecting a drive in the left pane of the Disks utility dialog, and then clicking on the gear icon at the upper right corner of the dialog. In my version of Linux, that gear icon had become a hamburger icon (i.e., three little parallel horizontal lines). Either way, that icon did offer options to create and restore disk images. But there was another option. The little gear icon below the rectangles graphically representing disk partitions also offered options to create and restore partition images. They could still be huge, but in some configurations (i.e., not one featuring a huge LVM pool, like I had here), that would enable me to back up some partitions while ignoring others.

It seemed it might be possible to use GParted on the live USB to shrink partitions to their minimum size temporarily, for purposes of the backup, and then restore their full size afterwards, but possibly that would break things. I also thought it might be possible to compress the backups, for purposes of storage, especially if most of their contents were empty space. Of course, these repartitioning and compressing operations would take time and CPU power, potentially detracting from other tasks, and they might also add to the risk of image corruption.
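The same kind of partition image, with compression applied on the fly, can be produced at the command line. A sketch, with an example device and output path; as with Disks, the partition should be unmounted first:

```shell
# Image a single partition; piping through gzip means empty space
# costs almost nothing in the stored file.
sudo dd if=/dev/sdb2 bs=4M status=progress | gzip > /media/mint/BACKROOM/boot.img.gz

# Restoring reverses the pipeline (commented out here for safety):
# gunzip -c /media/mint/BACKROOM/boot.img.gz | sudo dd of=/dev/sdb2 bs=4M
```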

At present, on my minimal Linux installation, it was going to be faster to just reinstall Linux from scratch than to go through all these steps to create a backup image. Rsync and tar continued to seem like excellent tools for the purpose, if I could get past the GRUB barrier. I was sure I would, at some point. But at present I didn’t know how to do that, and I wasn’t seeing good backup alternatives. I could continue to experiment with various tools, but this was time-consuming, and I had other projects in mind.

MEBx Error State 0106 and 0303 – Grub Rescue Mode

As described in another post, I was installing Linux Mint 17.3 KDE on a Lenovo ThinkPad Edge E430 laptop, in a secondary partition to dual-boot with Windows 7. I had installed Linux Mint 17.3 Cinnamon successfully in that same partition twice before. But for some reason the KDE installation ended with an error message referring to an MEBx Error State. This post describes the steps I took to resolve that situation.

Becoming Familiar with the Problem

The first time I got that message, it said “MEBx Error State : : 0106.” A search yielded the statement that there were no results for this exact phrase, but of course Google kindly ran the search again without the quotes — and pulled up a confirmation that, in the words of one writer, “Our support did not recognize (and never seen before).” That and another source suggested starting with a reboot and, if that failed, trying additional steps.

Instead of a simple reboot, I powered the laptop off (but did not remove its power cable or battery) for at least a half-minute. This time, the error was different: “MEBx Error State : : 0303.” A search for that phrase (and also a variation) yielded one result in Czech and one in Chinese. We did not progress on to any other numbers; subsequent reboots returned to that 0303 error.

Looking at the screen where the error message appeared, I saw the words, “Intel(R) Management Engine BIOS Extension.” MEBx appeared to be short for that. I was puzzled to observe that that screen also said the firmware version was 0.0.0.0. It sounded like Intel was just starting out when they designed my machine.

There were suggestions to short pins 1 and 2, in order “to reset the Intel ME configuration to the factory defaults,” as described on p. 67 of the Intel NUC Board NUC5i5MYBE Technical Product Specification document. I didn’t think I had that particular board, but was not inclined to disassemble my laptop to make sure. I saw that this procedure failed to help at least one person who went to the trouble.

Each time, at startup, I noticed that the words “FW Status Recovery Error” appeared briefly onscreen. That search was a bit more fruitful. As with the previous searches, there were recurrent references to BIOS upgrades: some people said that such upgrades helped them get past this problem. To clarify, in light of the preceding paragraph, it appeared that what I might need to upgrade would be the MEBx — that is, the Intel BIOS Extension — which was surely not the same as the Phoenix BIOS whose information screen appeared as soon as I started the machine. (I had set my BIOS to display the diagnostic screen rather than the splashscreen, so that was why I was seeing this sort of detail.)

I guessed that maybe I was getting this error because KDE wanted to take advantage of BIOS features newer than those currently available on the ThinkPad. I had told the KDE installer to reformat the partition where Cinnamon was installed, so I thought I was starting from a blank slate in that partition.

The GRUB Issue

I saw some indications that hitting Ctrl-P at bootup would put me into the MEBx. Repeatedly doing so put me back at the same MEBx error, but this time (and henceforth) it did not remain stuck there; after a few seconds it moved on to the “grub rescue” prompt. It seemed that hitting Ctrl-P had made a change in the machine’s response to the MEBx error.

This time, I noticed these words before the GRUB prompt: “error: no such device,” followed by a long string of numbers and letters. I guessed that the long string was a Universally Unique Identifier (UUID) for some device that GRUB thought it should be finding on my computer. Most likely it was the Cinnamon installation. Apparently the KDE installer had not changed that to refer to the UUID corresponding to the new KDE installation.
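For future reference, the standard way to probe such a situation from the “grub rescue” prompt is to list devices and point GRUB at a partition that actually contains its files. I did not run these commands at the time, and the device names here are hypothetical, but a typical sequence looks like this:

```
grub rescue> ls
(hd0) (hd0,msdos1) (hd0,msdos2)
grub rescue> ls (hd0,msdos2)/boot/grub
grub rescue> set prefix=(hd0,msdos2)/boot/grub
grub rescue> set root=(hd0,msdos2)
grub rescue> insmod normal
grub rescue> normal
```

If that boots the system, running sudo update-grub and sudo grub-install from the booted system makes the repair permanent.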

Another suggestion was to change the hard drive boot sequence in my BIOS. I took a look, and rearranged things slightly, but I did not expect that to be the answer, and in fact it wasn’t. I had been booting from the same drive for the past year or two, and expected to continue to do so. In my case, it seemed the change was not in the physical drives, or even in the order of bootable partitions; it was just that I had replaced one bootable partition with another, having a different UUID.

A search for the “error: no such device” message led to various suggestions, including using Boot-Repair-Disk. I had that on my YUMI drive, and for a minute I thought this MEBx error was preventing me from booting it. But when I took the error message’s advice to “Press any key to continue,” it put me through to the USB drive (I had hit F12 during the initial bootup and then selected the USB drive). In the spirit of exploration, I chose Advanced Options and made two changes to the default values: (1) in the Main Options tab, I checked Restore MBR; (2) in the MBR Options tab, I chose sdb, which in my case was the drive containing the Windows and Linux dual-boot partitions. Then I clicked Apply. Unfortunately, the problem persisted.

I re-ran Boot-Repair-Disk and this time used Recommended Repair. At first, it looked like that didn’t work: on reboot, I had the same error messages. But after I pressed a key to continue, it put me through to the desired GNU GRUB multiboot menu, giving me the choice of booting Linux Mint KDE (with or without advanced options), Memtest86+ (in two configurations), or Windows 7.

I tried the Windows 7 option. That worked. I rebooted out of Windows. The “FW Status Recovery Error” message was still there, and so was the “MEBx Error State : : 0303.” I responded to the suggestion, “Press any key to continue,” and the GNU GRUB menu was back. I didn’t respond to it, and in a few seconds it defaulted to the first item on the list, which was to boot into Linux Mint KDE. That worked too.
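That behavior (a timeout, followed by booting the first entry) is governed by GRUB’s settings file. On Mint and Ubuntu systems that file is /etc/default/grub; the values below are the common defaults, not necessarily what Boot-Repair wrote on my machine:

```
# /etc/default/grub (excerpt)
GRUB_DEFAULT=0      # index of the default menu entry; 0 = first item
GRUB_TIMEOUT=10     # seconds to show the menu before booting the default
# Apply any edits by running: sudo update-grub
```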

So it seemed we were making progress. My guess, at this point, was that I had started with two problems. One was a GRUB problem caused by the KDE installation. I doubted that the problem was in the KDE installer per se, or in the choice of partitions, or in the way I had partitioned. I suspected that GRUB got screwed up because of the other problem, which had to do with the Intel MEBx. Apparently there was a way to update that BIOS Extension; and if I did that, maybe the error message would go away; maybe it wouldn’t be there after a reinstallation; maybe it would never have appeared, if I had done that BIOS update first.

If I was wrong in that understanding of the situation, I might have to refer back to some of the other sources that I had found, regarding the task of fixing GRUB. For future reference, those included Ubuntu Community documents on installing and troubleshooting Grub2 and on recovering Ubuntu after installing Windows; there was also an oft-cited StackExchange answer on repairing GRUB, and another on the “no such partition” error, as well as an Ubuntu forums thread on that last topic.

The MEBx Issue

If the GRUB problem was fixed, then my attention needed to turn, now, to that BIOS update issue. The first question was, what exactly was I supposed to be updating? I rebooted and took another look at the MEBx screen, where the error message appeared. The top line said I had MEBx v8.0.0.0065. I wondered if maybe Lenovo had a relevant update for this machine. The most recent BIOS update on their webpage dated from 2012. I was sure I had already installed that. Well, how about Intel? Along with that previous advice about shorting pins 1 and 2 (above), a search led to an unanswered question about resetting the MEBx by briefly removing the CMOS battery (not an option, to my knowledge, on a laptop); an Intel AMT Implementation and Reference Guide page on Restoring Intel AMT to Factory Mode; and the following exchange in an Intel forum:

Lance Atencio (Intel): The MEBx is an extension to the BIOS and controlled by the OEM/BIOS provider. Each one has different ways of providing access to the MEBx settings. Ctrl-P is the typical means to access, but some vendors do it differently.
You might try looking for other settings in the BIOS that would display the CTRL-p during boot or try other boot hotkeys that are available.

plmanikandan: In BIOS setting ->Advance chipset feature ->Intel AMT is enabled. I downloaded the AMT tools from Acer website and tried MEInfoWin.exe. I’m getting error as “Error 8199: Communication error between application and Intel ME (Get RCS Connectivity v2)”

Lance Atencio (Intel): It appears that this system does have AMT, but since you cannot get access to the MEBx I’m afraid I’ll have to refer you back to Acer to get help with getting your system working. The OEMs control the firmware which is where the issue seems to be.

plmanikandan: If I do a local firmware update using tool provided by Intel(downloaded from Acer website), will it recover MEBx configuration?

Paul Carbin (Intel): Updating the firmware may recover the MEBx configuration, but you are really in the domain of the OEM. Intel AMT is only a small portion of the system firmware, and Intel does not control how the OEMs implement their firmware. Updating the firmware may cause other issues, and I recommend that you work with the OEM before upgrading firmware.

Also, did you try resetting BIOS to factory defaults?

That exchange led me to think the solution might be in my BIOS. I rebooted the machine and hit F1 (on other machines it might be F2, DEL, or some other key) to get into the BIOS settings. Unlike plmanikandan in the foregoing exchange, however, I did not see an Advanced Chipset Feature option. A search led to a Lenovo forum statement that “In general, BIOS menus are limited in Notebooks,” to which the user responded that s/he did somehow hit an unknown hotkey to get into advanced mode. A revised search led to another user who likewise believed there was some such option — because s/he had discovered it, by accident, hitting some random combination of keys, but was not sure which keys s/he had hit. Another search led to a Lenovo page listing ways to access the BIOS. There was an option of hitting Ctrl-Alt-F11 from a system booted in DOS on some models, but again there was no mention of advanced options.

I tried the BIOS option of loading setup defaults. On reboot, that did it: the error messages were gone. It wasn’t just that the default (i.e., relatively pretty) ThinkPad splash screen was hiding them: the system moved immediately from that introductory splash screen to the GRUB menu. I rebooted again and hit F1, F2, and DEL (I wasn’t sure which one would cut through the splash screen) to get back into BIOS. I reconfigured everything as it was before, as far as I could remember. This took away the splash screen and replaced it with the detailed diagnostic bootup screen, among other things. And now the error messages were back.

It appeared, then, that one of my changes to the BIOS settings was triggering the error messages. Back in the BIOS settings, I reverted once more to the factory defaults, but this time I altered them in just one way: I allowed the diagnostic screens instead of the splash screen. On reboot, the error messages were back. So I was probably wrong: it wasn’t that my BIOS changes were triggering the errors. They were there in any case; the splash screen was simply bypassing them on its way to the GRUB menu.

One source suggested another possible solution: re-flash the BIOS. The concept here was that the BIOS was firmware — that is, software built into chips on the computer’s motherboard — and it could be updated through certain procedures. I had saved the ISO that I downloaded when I last flashed the ThinkPad’s BIOS, so now I added that to the YUMI USB drive and rebooted the ThinkPad from that BIOS update ISO. For some reason, though, what appeared on the screen was mostly illegible.

I tried again with a fresh download from the Lenovo webpage. That worked. Possibly the old version had become corrupted. The flashing involved several steps and took a few minutes. When it was done, the error messages were gone. I booted into Windows and then immediately went back out and customized the BIOS settings. Then I rebooted again. Still no error messages. They were gone. I was able to boot into Linux Mint as well. This problem appeared to be solved.

“Fatal! Inconsistent Data Read” Error

I was using YUMI to install multiple ISOs on a multiboot USB drive.  All of its top-level menu picks worked correctly except one.  That one, “Directly Bootable ISOs or Windows XP,” gave me a rapidly scrolling list of “Fatal!  Inconsistent data read from” errors.  After each iteration of “inconsistent data read from,” it would name some kind of numerical address (e.g., 0x80 3272020888+127).  After a number of these iterations, it would put me at a GRUB prompt.  This post describes my efforts to fix that problem.

I began with a search that led to solutions that looked likely to be complex and very time-consuming.  I wondered if the problem was due to YUMI 0.0.8.1.  I searched for a copy of 0.0.7.9, which I had used with more success previously, but that search was unsuccessful.  I restored a copy from backup and tried starting over with that.  I had to reformat the YUMI multiboot drive in Windows in order to wipe it out and redo from scratch; the YUMI formatter did not overwrite the drive.

Unfortunately, switching to 0.0.7.9 did not fix the “Fatal!” error.  I tried the YUMI drive in another machine, as I should have done before.  It did not seem to scroll the same “Fatal!” errors, but it may have done so very rapidly; all I saw was a flash and then the GRUB prompt.  I concluded it was indeed a problem with the YUMI drive.  Yet that was puzzling.  I had used this same 16GB USB drive for a YUMI installation previously, and it had worked on these same machines, using this same version of YUMI.  These computers would still boot into Windows with no problem.

Since the problem arose from one particular menu pick within the multiboot drive, I decided to try to understand YUMI’s menu setup.  A look at the multiboot drive in Windows Explorer showed me the following contents at the root level (i.e., H:\).  All of these items were the names of folders; there were no filenames in the root directory.

.disk
antivir
avupdate
fsecure
multiboot
system
tables
TRK
trk3

The .disk folder contained only an “info” file that contained only one word:  “This.”  The antivir and avupdate folders seemed to belong to the Avira antivirus ISO that I had installed on the multiboot drive.  The fsecure folder obviously belonged to the F-Secure antivirus ISO.  The multiboot folder contained subfolders for various ISOs (e.g., GParted, Ubuntu, Knoppix).  It also contained certain files that appeared to be key system files for the YUMI system (e.g., grub.exe, syslinux.cfg).  I was not quite sure what the system folder was about.  The tables folder seemed to belong to the Ophcrack ISO.  Files in the TRK folder (e.g., memtest.x86, trinity.ico) suggested that it was for some of the other ISOs I had installed on the USB drive.  I guessed, but could not tell for sure, that the trk3 folder was related to the TRK folder.

From those candidates, the multiboot folder seemed to be the place to start.  I figured out that the .cfg files in that folder, and in its menu subfolder, were individual menu entries in the YUMI system.  The top-level .cfg file seemed to be syslinux.cfg.  Viewed in Notepad, its contents were as follows (line breaks restored here, one directive per line; Notepad wrapped most of this as a single long line):

# Menu Entry Created by Lance http://www.pendrivelinux.com for YUMI – (Your USB Multiboot Installer)
default vesamenu.c32
prompt 0
timeout 300
menu title Your Universal MultiBoot Installer
menu background yumi.png
MENU TABMSG http://www.pendrivelinux.com
MENU WIDTH 72
MENU MARGIN 10
MENU VSHIFT 3
MENU HSHIFT 6
MENU ROWS 15
MENU TABMSGROW 20
MENU TIMEOUTROW 22
menu color title 1;36;44 #66A0FF #00000000 none
menu color hotsel 30;47 #C00000 #DDDDDDDD
menu color sel 30;47 #000000 #FFFFFFFF
menu color border 30;44 #D00000 #00000000 std
menu color scrollbar 30;44 #DDDDDDDD #00000000 none

label Boot from first Hard Drive
menu label Continue to Boot from ^First HD (default)
KERNEL chain.c32
APPEND hd1
MENU DEFAULT

label Linux Distributions
menu label Linux Distributions ->
MENU INDENT 1
kernel vesamenu.c32
APPEND /multiboot/menu/linux.cfg

label Antivirus Tools
menu label Antivirus Tools ->
MENU INDENT 1
kernel vesamenu.c32
APPEND /multiboot/menu/antivirus.cfg

label Directly Bootable ISOs
menu label Directly Bootable ISOs or Windows XP ->
MENU INDENT 1
KERNEL /multiboot/grub.exe
APPEND --config-file=/multiboot/menu/menu.lst

label System Tools
menu label System Tools ->
MENU INDENT 1
kernel vesamenu.c32
APPEND /multiboot/menu/system.cfg

The directives at the top of the file (which Notepad displayed as one very long wrapped line) seemed to provide general formatting instructions for how the menu would appear onscreen, among other things.  As noted above, the lines that were causing me problems seemed to be under the “Directly Bootable ISOs” label, toward the end of the list.  Unlike the others, that item referred not to kernel vesamenu.c32 but to grub.exe.  Also, unlike the others, that item’s APPEND command did not link to a subordinate .cfg menu; instead, it named menu.lst.  So it seemed that my problem might be connected to either grub.exe or menu.lst.
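To make that structure concrete, here is a quick sketch (my own, not part of YUMI) that parses label blocks like the ones above and reports which loader each menu entry chains to.  The config text is abbreviated from the file shown above, with grub.exe’s flag written as --config-file:

```python
# Hypothetical illustration: map each syslinux.cfg label to its
# kernel and APPEND values, to spot which entry behaves differently.
cfg = """\
label Linux Distributions
menu label Linux Distributions ->
MENU INDENT 1
kernel vesamenu.c32
APPEND /multiboot/menu/linux.cfg

label Directly Bootable ISOs
menu label Directly Bootable ISOs or Windows XP ->
MENU INDENT 1
KERNEL /multiboot/grub.exe
APPEND --config-file=/multiboot/menu/menu.lst
"""

def parse_entries(text):
    """Collect kernel/APPEND values under each 'label' heading."""
    entries = {}
    current = None
    for line in text.splitlines():
        words = line.split(None, 1)
        if not words:
            continue
        key = words[0].lower()
        if key == "label":
            current = words[1]
            entries[current] = {}
        elif current is not None and key in ("kernel", "append"):
            entries[current][key] = words[1]
    return entries

entries = parse_entries(cfg)
for name, entry in entries.items():
    print(name, "->", entry["kernel"])
```

Run against the full file, the “Directly Bootable ISOs” entry stands out as the only one not handled by vesamenu.c32.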

Advice from RMPrepUSB led me to understand that I was winding up at a Grub4DOS prompt.  I hit Esc and found myself at a basic GRUB4DOS window with options for “find /menu.lst” etc., commandline, reboot, or halt.  I tried commandline.  RMPrepUSB suggested that I might type in the commands and see what would happen.  I tried with “KERNEL /multiboot/grub.exe,” though I suspected that would achieve nothing because, as I say, I was already looking at the “grub>” prompt.  It said, “Warning!  No such command : KERNEL.”  I could have predicted that response:  onscreen advice said that hitting the Tab key would list the available commands, and — talk about persnickety — kernel was listed there, but in lowercase.  So, alright, I tried again:  “kernel /multiboot/grub.exe.”  This time, it gave me “Error 25:  Disk read error.”  That seemed pretty close to the “Inconsistent data read” error message that had commenced this expedition.  But why was it happening?

Had grub.exe or menu.lst somehow become corrupted?  I had tried two separate installations, using YUMI 0.0.7.9 and 0.0.8.1, and had wound up with the same “Fatal” error in both cases.  In between, I had reformatted the USB drive.  And 0.0.7.9 had created successfully working YUMI installations on this same USB drive previously.  That didn’t seem like the answer.

I took a look at H:\multiboot\menu\menu.lst.  (H was the YUMI drive.)  I didn’t know much about how it was supposed to look, but in Notepad it did not look obviously corrupted.  It seemed, tentatively, that my problem was with grub.exe, not with menu.lst.  I wondered if I could download a replacement for grub.exe.  I renamed H:\multiboot\grub.exe to be grub.old, figuring that this would keep it around (in case I needed it) but would render it non-executable.  A search led to a recent webpage where I was able to download a copy of grub.exe.  Putting that into H:\multiboot changed the situation, but did not solve the problem.  That is, I still wound up at a grub> prompt, but without the “Fatal!  Inconsistent Data Read” errors.  The fact that something had changed suggested that the commands were finding grub.exe, at least.

It seemed that, somehow, two different versions of Grub were operating.  As just noted, I had to use “kernel” as a lowercase command, but YUMI was successfully using it (above) as KERNEL.  Likewise, the commands shown above use “config-file,” with a hyphen, but pressing Tab at the Grub prompt told me that the proper command was “configfile.”

I tried typing “ls” (that’s lowercase LS) at the grub prompt.  It said, “Error 25:  Disk read error.”  So both the simple file listing command (ls) and the foregoing reference to grub.exe failed because of a disk read problem.  Why was Grub unable to read my USB drive?

I tried creating another YUMI drive, using a different USB drive; and on that other YUMI drive I began by just adding one of the items that would appear in the  “Directly Bootable ISOs” section.  It was the Windows 7 32-bit System Recovery CD.  The menu worked properly in this case.  So now it appeared the problem might be related to the USB drive — even though that USB drive had functioned correctly in the past.

I repeated the same steps with the problematic drive as with the one that had just worked correctly:  reformat it in Windows Explorer, start YUMI, add the Win7 Recovery CD as an Unlisted ISO.  When attempting to reformat, I got an error.  It said, “Windows was unable to complete the format.”  It was showing “Unknown capacity” in the format window.  There was something wrong with this drive.  A search led to the advice to use Start > Run > diskmgmt.msc > right-click on the USB drive.  It was showing as Unallocated, so I selected New Simple Volume.  But nothing happened.  I plugged it into another computer.  It offered to format the USB drive.  I accepted the offer.  But it was still showing unknown capacity, and Windows was unable to complete the format there too.  I tried the HP Drive Key Boot Utility.  This was apparently different from the HP USB Disk Storage Format Tool.  I wasn’t sure if a USB drive had to be manufactured by HP to use these tools, but this one was.  The boot utility said it was not supported for installation on my Windows 7 system.  But the format tool worked.  When it was done, I successfully reformatted in the usual way, in Windows Explorer, and then installed the Win7 32-bit System Recovery CD on this newly rescued HP USB drive.  I tried booting the other computer with that, and it worked.
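For future reference, the usual command-line route for a flash drive stuck at “Unknown capacity” is diskpart’s clean command, which wipes the partition table so the drive can be repartitioned.  The disk number below is hypothetical, and clean destroys everything on the selected disk, so the list disk output must be read carefully:

```
C:\> diskpart
DISKPART> list disk
DISKPART> select disk 2
DISKPART> clean
DISKPART> create partition primary
DISKPART> format fs=fat32 quick
DISKPART> assign
DISKPART> exit
```

I did not try this at the time; the HP format tool happened to do the job instead.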

*  *  *  *  *

Update:  I spoke too soon.  It worked at first, and then stopped working.  It stopped working in a weird way.  When I would boot with the HP USB drive, it would give me a black screen with just this part of a word showing:  “figuration.”  It did it consistently, over a half-dozen reboots, so it didn’t seem that the drive was just decaying or something.  I wondered whether (a) I had screwed up a menu so that this was all that would show or (b) this had something to do with whatever Steve Si was saying in the December 18 (3:30 AM) comment (below) that he had just posted in response to the foregoing paragraphs.

It didn’t seem like it was a case of a menu screwup on my part.  I hadn’t edited syslinux.cfg, which I believed was the first menu that would appear, and anyway the “figuration” string did not appear in that file.  It wasn’t getting to menu.lst, not that I had edited that either.

I decided to try to figure out Steve’s comment.  RMPrepUSB Tutorial No. 7 said that I could test a USB drive quickly (using the Quick Size Test option in RMPrepUSB) or thoroughly (using H2TESTW).  I didn’t want to spend the hours that the latter would take (but did download a copy anyway).  I had made a backup of the files on the HP USB drive, so I tried RMPrepUSB, which was going to wipe the disk during its test.  Using RMPrepUSB involved obtaining RMPrepUSB.  I downloaded RMPrepUSB_Portable_2.1.648wee.zip.  (I wasn’t sure why there was a “wee” in the filename.)  I extracted the files from the ZIP and ran RMPREPUSB.exe.  I selected the HP USB drive and clicked Quick Size Test.  For some reason, it minimized several other windows I had open on the screen, including the browser window in which I was typing these notes.  But it didn’t close them, so I resumed.  The test ran.  It indicated that it was going to take about 27 minutes — much more than the 6-10 minutes estimated on the RMPrepUSB website for a drive of this size.  I suspected the time required might depend on the condition of the drive — that, in other words, RMPrepUSB might be finding problems on the USB drive.  But no, at the end of 27 minutes, it reported that my drive passed the quick test.  So, pending a more thorough test, USB drive corruption did not appear to be the reason for my problems.

Steve suggested trying the QEMU Emulator button in RMPrepUSB.  I tried to restore the bootable image to the USB drive, using Acronis True Image Home 2011.  This would be about the time when I discovered that Acronis did not see USB drives as targets for image restores, at least not without the Universal (hardware-independent) option, which I knew I had bought and thought I had installed.  So, oops: I had apparently used the wrong form of backup.  One search led to another, which led to going into Acronis (in Windows 7) and choosing Tools and Utilities > Mount Image > select the image, etc.  Now I had the Acronis TIB image loaded as a virtual drive and could use Windows Explorer to copy its contents to another folder.  In the spirit of experimentation, I copied those files to the newly reformatted HP USB drive.  I realized this would not be bootable.  I was hoping to achieve a “Boot error” message, because that’s what I had managed to achieve on the other, 32GB Patriot USB drive by this point.  Symmetry.  That’s what it’s all about:  symmetry.  Anticipating some such development, while the files were copying — and, anyway, to try to rescue that Patriot drive — I did a search and formed the belief that the problem could be in the BIOS settings of the computer I was trying to boot.  But the computer in question at this moment was an ASUS Eee PC with virtually no options to adjust.  I had already tried setting the boot flag on the Patriot drive in GParted, with no luck.

The situation at this point was overwhelming, not to say fubar.  With a whiff of desperation in the air, I resorted to the possibility of biting the bullet and doing this the hard, painful RMPrepUSB way.  The logic, here, was that I had used the HP tool on an HP drive and yet said drive was still recalcitrant, whereas good Steve quoth that this was not surprising.  He seemed to have the answer.  I examined that possibility in another post.

*  *  *  *  *

Later (after the comment shown below), I came back here.  RMPrepUSB had not turned out to be my preferred tool after all.  I knew that YUMI had worked for me.  It seemed my problem had to be with the USB drive, or somewhere else in the process.  One thing I noticed was that YUMI did not always actually format a USB drive when I checked the Format option.  It would just flash an error message and proceed on to load the requested ISO onto the USB drive.  So I had to do a separate formatting process to be sure.  I also noticed that Windows Explorer > right-click > Format sometimes had problems formatting.  I found it handy to use MiniTool Partition Wizard to be sure.  There also seemed to be a situation where the computer would get confused, and it would be better to take such steps after rebooting, or on another machine.  I wasn’t sure about all this; these were just impressions or possibilities.

I was not entirely sure what caused the problem to be sorted out, in the end.  My sense was that proper formatting of the USB drive was a factor.  At some point, the problem disappeared, apparently due to continued tinkering along the lines described above.