This post describes my efforts to use tar, rsync, and gnome-disks to back up a Linux Mint Xfce installation. These efforts succeeded, in the sense that I could create backups; but they failed, in the sense that (at least for tar and rsync) I did not arrive at a clear, working understanding of how to restore the GRUB bootloader, and therefore wound up with a non-booting system. I provide these notes for anyone (including myself) who may find them useful for focusing on the GRUB issue before undertaking all the other issues that can accompany a Linux system backup effort.
Contents
Introduction
Considerations Favoring rsync and the Command Line
A Simple Case: Local Backup: Using rsync to Back Up the Linux SSD to the Internal HDD
— Using rsync to Create a Single Backup File
— Using rsync to Create a Mirror
Simple Scenario Revisited: Local Backup via tar
— The Basic tar Command
— Assembling the Desired tar Command
— Excluding Directories from tar
— A tar Script
— Running It: A Scripting Environment
Restoring from the tar File
Trying to Restore GRUB
Restoring from the rsync Backup
GUI Backup via Disks Utility
Introduction
I was in the process of installing a Linux Mint Xfce (LMX) system. I wanted to make a backup. The question was, how should I do that?
That question seemed to include (a) what do I want to back up and (b) where do I want to back it up to? What I wanted to back up was just the LMX system installation, for now, though I might want to include more later. The LMX system was installed on — it was the sole user of — the laptop’s internal solid state drive (SSD). As for where, the computer in question was a laptop, so possible backup destinations would include the laptop’s internal hard disk drive (HDD), an external (USB) HDD, and the cloud. Later, when I had the networking sorted out, I would also want to be able to copy the backup to, or simply sync the installation with, my desktop computer.
The next question was, how? To answer that, there seemed to be many options. A search led to lists of supposedly great recent Linux backup solutions (by e.g., LinuxTechi, UbuntuPit). It seemed that much had changed within the last few years. For example, in my post from April 2016, I said, “Search results gave the impression, consistent with my previous browsing, that dd and Clonezilla were the most widely used Linux imaging tools.” Now, by contrast, dd was barely mentioned, and Clonezilla was largely overshadowed by more user-friendly GUI-based backup tools like Bacula, available in both corporate and (free) community versions (see comparison), the latter with its own SourceForge page (see Softpedia for the Windows version). Also, a search would remind me that EaseUS Disk Copy ($20) was able to clone Linux systems — which, in some cases, might be good enough. There was also Acronis Backup ($499/year).
Possibly the best of the lot was also the most accessible. In the LMX installation, I could go to Start > Accessories > Disks (or run gnome-disks) > select the drive > click the hamburger menu (i.e., the button with three parallel horizontal lines, near the upper-right corner of the Disks window) > Create Disk Image > follow further instructions. I didn’t notice that option until I was well into this project. It might have made this backup easier, but maybe that was OK. There were other considerations at work, as described below.
Note: commands in this post are rendered in italics.
Considerations Favoring rsync and the Command Line
Unexpectedly, however, I found myself leaning away from GUI and more toward command line solutions, particularly rsync. (Note: I could also have explored the dd command.) This inclination away from the GUI could have been a mere manifestation of a contrarian spirit, but I didn’t think so. I believed it resulted from several considerations:
- In part, I think it was a reaction against the phenomenon just described, in light of recent experience with other software. Yes, these new tools had emerged upon the scene within the past two years — and where would they be two years from now? Would I set up a backup scheme, only to have to revise it later because the developer of my preferred backup software went out of business, or decided to start selling only a corporate version, or failed to keep up with evolving needs, or with changes in Linux? In that case, would I be able to restore an old backup if needed?
- Now that I had been working with Linux for a while, I was gradually recovering a degree of comfort with the command line, and appreciating its advantages. Among other things, I was tired of endless mousing and clicking. It was appealing to visualize a command that would run, and do the job, and rarely require revision — and if it did require revision, all of its options and settings would be visible right there on the command line, not hidden off in some obscure checkbox beneath some random submenu.
- I had been recurrently reminded, for some years now, that a tool like rsync could simply persevere — that is, it could continue to do the job — for a very long time, with very little change, and with many potentially useful options.
- Rsync was already included in my LMX 18.3 installation. By contrast, there was the prospect that — with Bacula, for instance — I would have to add a lot of extra software. A quick test with sudo apt-get install --install-suggests bacula indicated that, with suggested packages, I would be adding 57 new packages (using 218MB of disk space) to this supposedly relatively minimal Linux installation. It would only be about a quarter of that without the suggested packages, but that was still something — and I wouldn’t know, until I got into it, whether I would need some of those suggested packages to make it work as desired.
- As noted, I was now looking for a backup solution for the Linux system files. I expected to use a different backup solution for the NTFS data files. But it appeared that rsync would provide a fast and arguably secure means of backing up a VeraCrypt container. In other words, it seemed rsync might prove to be an acceptable one-stop solution for various backup and copying needs going forward. For instance, I imagined keeping the laptop and desktop computers in sync — in terms of data files, and possibly even in terms of some program files (between VMs, at least (using DeltaCopy), if the desktop was still running Windows, and between home partitions as well, if both were running Linux). Previously, I had seen that people were using rsync to back up their websites. It appeared I might be able to use rsync for all of the above — in which case the initially worrisome learning curve might eventually pay for itself.
- These appeared to be reasons why, according to one source, “It seems that rsync is the de-facto standard for efficient file backup and sync in Unix/Linux.” If that was correct, it would make sense to become conversant in that de facto standard.
A Simple Case: Local Backup:
Using rsync to Back Up the Linux SSD to the Internal HDD
As noted above, my laptop had an SSD along with its HDD, and the LMX installation was on the SSD. What I wanted, first, was a simple restorable copy of the Linux installation, preferably compressed into a single zip file that I could move around and/or copy as needed. The separate home partition was big, but so far it didn’t contain much, so for now I could include that in this backup. Later, I might want to treat it separately. The target or destination location for this backup would be an ext4 partition on the HDD named BACKROOM.
Using rsync to Create a Single Backup File
To understand rsync, I could have started with its official “man” (short for “manual”) page. I didn’t readily see one for Linux Mint, but the one for Ubuntu ran to about 28,000 words, mostly discussing what looked to be about 130 options and suboptions. That was intimidating. I felt I would rather find practical examples that I could use to learn from, one step at a time.
For guidance in that project, a search led to a How-To Geek article (Brown, 2013) suggesting something as simple as rsync -av --delete [/source directory]/ [/destination directory]. (In such commands, note that WordPress, host of this blog, unfortunately renders two hyphens (i.e., “- -” without the space between them) as a dash (i.e., “–”). For Linux commands, the two hyphens would work; the dash would not. If in doubt, for that and any other ambiguities, it may help to copy and paste the command into a plain text editor.)
In Brown’s suggested command, the “a” option meant “recurse” — that is, start at the designated directory and include all of its subdirectories. (Actually, according to the man(ual) page, it was shorthand for a bunch of things: recurse, copy symlinks as symlinks, preserve permissions, preserve file modification times, preserve group, preserve (superuser) owner, preserve device files, and preserve special files.) The “v” option meant “verbose” (i.e., provide details about what’s happening). The --delete option meant “delete anything else that might already be located in the destination directory”; --delete-before would do that deleting before copying, in case space was limited. There was also a -z option to compress files during transfer. A TecMint article (Shrivastava, 2016) said I might want to use -h, short for “human-readable,” which evidently meant that a long number might be reported in shorter form, such as 138GB instead of 138 followed by a bunch of other digits. Other possible options: --progress, to show how the copying project was faring, and --dry-run, to try it out without making any actual changes. Shrivastava said that, if I specified a destination directory that didn’t exist, rsync would create it.
Combining what Shrivastava and Brown advised, I came up with this possible command: sudo rsync -avzh --dry-run --progress --delete-before /home/ "/media/ray/BACKROOM/2018-06-18 Backup/". In that, there were still some unknowns. Was it OK to use quotation marks around path or file names containing spaces? Would this command suffice to preserve the source directory’s structure — to ensure that, upon restoration, files would go back into the correct subfolders? Could I combine multiple source directories (i.e., /home plus /boot plus /root) into a single command? How was I supposed to name the compressed output file?
To resolve those unknowns, I looked further. TheGeekStuff (Natarajan, 2011) said I could use --exclude 'filename' to exclude a specific file from the rsync process, and similarly to exclude a specified folder (and, with a wildcard, to exclude all folders or files whose names fit a certain pattern), within the source directory. It was possible to use more than one --exclude option, or to use --exclude-from 'exclude-list.txt' to exclude all files listed in exclude-list.txt. The man page said I could use --include-from=FILE to list files or patterns to be included, one per line.
Continuing the exploration, Juan Valencia said the trailing slash (e.g., specifying /home/ as the source, instead of /home) indicated that it was not necessary to create a destination folder with the same name as the source folder. If there was no folder named /home on the destination, and if I wanted there to be such a folder, then apparently /home (not /home/) would be the way to go. Also, it was possible to “escape” (i.e., accept as-is) spaces and weird (Valencia called them “rare”) characters (e.g., “{”) in the filename by using a backslash (e.g., referring to a file named we{ird as we\{ird) or by using single quotes (e.g., ‘we{ird’). OSTechnix (2017) further recommended using the -A (preserve ACLs) and -X (preserve extended attributes) options.
This was all just fascinating, but I was still wondering about the part where I was going to use rsync to compress multiple source directories into a single zip file on the destination — in effect, an image file that I could later unzip to restore the system to its state as of a certain moment. For that, a post in a LinuxQuestions forum said no, rsync didn’t do archiving: its -z option would only temporarily compress files during transfer. So basically I was investigating the wrong tool.
The rsync scenario seemed to be more like mirroring. Rsync would be very efficient because it would detect which files — indeed, which parts of files — had changed, and it would transfer only those changes from the source to the destination. As I proceeded into the next option (below), I would find myself wondering whether certain system folders would have to be excluded from an rsync mirror.
So that was where I left the matter. I proceeded to develop the following section, regarding tar. Then, as that section describes, I found myself wanting to compare the results of the tar method against the results of another tool. That was the point at which I decided to return to this section and develop the method of using rsync to mirror the Linux installation, as described in the following subsection.
Using rsync to Create a Mirror
As just indicated, I returned to write this section after substantially completing the following section, regarding the tar command. This was rather disorganized, but this was also the reality of my learning process. I would have rewritten it, but I was afraid I’d make even more of a hash of it that way. The best I could advise was that it might make sense to skip to the following section at this point, read in detail about tar, and then come back here — because I did not plan to repeat things that I had already figured out in the following section.
At this point, I considered the sketch of an rsync command that I had developed, modified in light of other comments (above). Using that information plus what I had learned about scripting (below) and some further reading and experimentation, I came up with this:
#!/bin/bash
# This is rsyncbup
DATE=$(date +%Y-%m-%d-%H.%M.%S)
SOURCE="/"
DESTINATION="/media/ray/BACKROOM/MintMirror-$DATE"
cd /
sudo rsync -avh --dry-run --progress --delete-before \
$SOURCE $DESTINATION \
--exclude={proc,sys,dev,run,tmp,lost+found,cdrom,media,mnt} \
--exclude={var/log,var/cache/apt/archives,usr/src/linux-headers*} \
--exclude=home/*/{.cache,.gvfs,.local/share/Trash} \
--exclude={var/spool/squid,archive}
An AskUbuntu comment stated that “the exclude path is relative to the source path.” So, for instance, when I said --exclude=proc, I was specifying /proc (i.e., a subfolder under the source, which in this case was the root directory). That seemed to be the same as in tar (below). Note that I could have used a separate exclude list file, but then I would have had to keep it with the script.
As below, I put that script into xed, saved it with the specified name (i.e., rsyncbup) in $HOME/bin (i.e., /home/ray/bin), and ran it as rsyncbup. Its dry run looked vaguely good. I removed the --dry-run option from the script and tried again. That ran. It took only a minute or two. In Thunar (i.e., the LMX default file manager), I went to the destination folder > right-click > Properties. It said the total was 4.8GB. That was less than the 5.3GB reported for the source partition, which excluded most of the excluded directories but not those buried in /home and /var. So I wasn’t sure, but maybe it had run correctly. I would have to see when I compared restores (below).
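One way I later realized I could sanity-check a mirror is to rerun rsync as an itemized dry run: if the mirror is current, nothing is listed. A minimal sketch with invented paths:

```shell
# Set up and mirror a tiny throwaway tree.
mkdir -p /tmp/mir-src /tmp/mir-dst
echo one > /tmp/mir-src/a.txt
rsync -a /tmp/mir-src/ /tmp/mir-dst/

# -i itemizes changes, -n makes it a dry run; an up-to-date mirror
# produces no output here.
rsync -ain /tmp/mir-src/ /tmp/mir-dst/
```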
Simple Scenario Revisited: Local Backup via tar
To back up the system to a single file, MakeTechEasier (Diener, 2016) offered the tar command: “you’ll be compressing an exact copy of your entire Linux file system into a TAR archive.” That sounded appealing, like when I could use Acronis or AOMEI to compress my entire Windows installation into a single image file.
As such, this was different from the scenario where I had system folders (e.g., home, root) on different partitions. In that case, the advice in an AskUbuntu discussion was that I should make separate backup files for each partition. Then, when it came time to restore, as advised in an Ars Technica discussion, I would boot from a live CD and use it to restore each backup file to its respective folder.
That actually was the situation on my SSD, at about the time when I started this post. Since then, however, as described in another post, I had reinstalled LMX and, during that installation, I had accepted the installer’s offer to set up my SSD for Logical Volume Management (LVM). So now my whole SSD was one big pool, and my separate home, root, and boot partitions had ceased to exist. Therefore, my .tar.gz backup file was going to contain all those folders in one package. To create that backup file, Diener said I would want to use cd / to get myself to the root directory (i.e., / ), and then I could run something like sudo tar -cvpzf backup.tar.gz --exclude=/backup.tar.gz --one-file-system /. I was about to adapt that prototype for my purposes.
The Basic tar Command
According to various (e.g., 1 2 3 4) sources, tar was reportedly short for “tape archive.” Tar was apparently designed to preserve Linux file metadata (e.g., Linux permissions), which zip would not necessarily preserve. Tar did not compress; tar merely combined files into a single archive, which would then have to be compressed to save space. Zip would typically compress files individually before combining them, thus producing larger files than the best Linux compression formats. It seemed the best Linux compression formats were gzip and xz (the latter being similar to the 7z format produced by 7-Zip in Windows). A tarball (i.e., tar file) compressed with gzip would typically have the .tar.gz or .tgz extension. Another format, bzip2 (producing .bz2 files), seemed to be mostly falling into disuse. The main tradeoff was between degree of compression (i.e., reduced file size) and the amount of time and CPU power required to compress (or decompress). Generally, assuming a fairly powerful computer, it sounded like xz was the best compression format on Linux, just as 7z was arguably the best on Windows.
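The gzip-versus-xz tradeoff is easy to see with a throwaway file (paths invented; -z selects gzip, -J selects xz):

```shell
# A compressible test file: 100KB of zeros.
mkdir -p /tmp/tardemo
head -c 100000 /dev/zero > /tmp/tardemo/zeros.bin

# Same content, two compressors; xz is typically smaller but slower.
tar -czf /tmp/tardemo/out.tar.gz -C /tmp/tardemo zeros.bin
tar -cJf /tmp/tardemo/out.tar.xz -C /tmp/tardemo zeros.bin

# Compare the resulting archive sizes.
ls -l /tmp/tardemo/out.tar.gz /tmp/tardemo/out.tar.xz
```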
Given that background, the GNU tar manual (also available via man tar, but not as easily accessed by links) explained the command suggested by MakeTechEasier (above). The current version of the manual said the options in that command were as follows: c = create (i.e., creates a new tar archive, as distinct from adding files to an existing archive), v = verbose (see above), p = preserve permissions, z = gzip (i.e., use gzip for compression, as distinct from the option to use xz, i.e., -J or --xz), f = specify output (i.e., archive) file name (in that command, it was backup.tar.gz). As I went on, I saw that numerous websites used exactly, or almost exactly, that same set of options. There seemed to be no particular order to these options, except that the last one (f) was basically announcing the filename that would immediately follow it. The --exclude option meant “prevent the specified file [or directory] from being operated on” (i.e., archived and compressed). In this particular example, the specified file was none other than the file being created, as designated by the -f option, namely, backup.tar.gz. In other words: don’t get into a loop of trying to include the partly completed archive file within itself.
Understanding the --one-file-system option required a departure from how I thought of things in Windows terms. In Windows, the operating system resided on drive C. There were not typically parts of it on other drives. So I would just use Acronis to make an image of drive C. But here, the manual said, “This option is useful for making full or incremental archival backups of a [single] file system.” This seemed to be another instance of confused terminology in Linux, like “root” to mean the top-level folder or, very differently, the superuser, or “menu” to mean any old menu or, instead, what Windows users knew as the Start button. In Linux, a filesystem could be the format of a drive (e.g., NTFS or ext4 filesystems), or “the” Linux file system (i.e., the whole installation) — or, in the present context, it apparently meant something like the lesser of (a) a directory (e.g., the root directory) or (b) the part of a directory contained on a single partition (e.g., excluding /home if it was on a different partition).
So the general idea of the --one-file-system option seemed to be that you could use it to capture everything in “the Linux filesystem” that wasn’t located on another partition. So, for example, you wouldn’t have to exclude an external USB drive, because it would not be part of the / filesystem. Likewise, in my case, the internal HDD partition on which I wanted to save this tar file, ordinarily mounted as /media/ray/BACKROOM, would automatically be excluded, because that location was not on the same partition as the root directory I would be naming as my source (i.e., by running the tar command from that root location).
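As I understood it, tar detects such a boundary by comparing device IDs, which stat can display. /proc, being a separate virtual filesystem, reports a different ID than /:

```shell
# %d = device ID, %n = name; a directory with a different device ID than
# the source is what --one-file-system skips.
stat -c 'device %d  %n' / /proc
```

On my system the two lines showed different device numbers, confirming that /proc sits on its own (virtual) filesystem.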
Assembling the Desired tar Command
Now I had enough knowledge to be dangerous. With the aid of contributions to a Linux Questions discussion, I came up with an experimental tar command:
sudo tar -cvpJf /media/ray/BACKROOM/MintBackup-(date +%y%m%d).tar.xz --exclude=/media/ray/BACKROOM/ /
There were a couple of things to notice about that command. First, it didn’t work. The reason may have been that I forgot to add a dollar sign ($) before the parenthetical “date” entry. The date thing itself was interesting: it was possible to include the current date (and time, as I would soon see) in the output (i.e., destination) filename. Note, also, that this command (and several to follow) used simply / (i.e., the slash character, symbolizing the root of the whole Linux installation) as the source directory — and, in rather bass-ackwards fashion, that source identification came last, at the very end of the command. I wasn’t yet seeing my error with the date command, so I just removed it and tried again, like this:
sudo tar -cvpJf /media/ray/BACKROOM/MintBackup.tar.xz --exclude=/media/ray/BACKROOM/ /
It looked like that worked. I had a Thunar session open and displaying the contents of the BACKROOM partition, and there I could see the MintBackup.tar.xz file getting bigger as the filenames scrolled down the screen in Terminal. I wished there were a progress option, as in rsync. It seemed the best available method for watching progress was watching Thunar update its report of the MintBackup.tar.xz file size. It was pretty slow. In what may or may not have been a representative snippet, it took 42 seconds to grow the archive file by 20MB. So at that rate, according to my calculations, it would take about 12 years to finish. I was recalling that AOMEI Backupper Standard was able to back up my Windows installation into a 50GB compressed file in maybe 15 minutes.
I decided this was a good chance to grab a snack. When I returned, four hours later, it seemed to be stuck at almost the point where I left it: 1.2GB compressed, and many gigabytes left to go. Did the exclude option not work — had the command choked on its own tail, attempting to include BACKROOM in the archive that it was creating on BACKROOM? Or was 1.2GB as large as it could go? The last filename listed on Terminal was /proc/kcore, but I didn’t know what to make of that: the files listed in Terminal were in seemingly random order. I hit Ctrl-C, and Terminal didn’t object; it seemed glad to be done with that ordeal. I opened the MintBackup.tar.xz file that had been created. I saw that it hadn’t finished backing up the /proc folder. This suggested maybe it hadn’t yet started trying to chew on BACKROOM. I tried again, this time using .gz instead of the more demanding xz compression:
sudo tar -cvpzf /media/ray/BACKROOM/MintBackup.tar.gz --exclude=/media/ray/BACKROOM/ /
Yes, that was much, much faster. But then it got to that /proc/kcore file and froze again. A search revealed that I was not alone in this problem. I hadn’t previously noticed it, but now that I was seeing it in these other posts, I noticed that the kcore file’s size was reported at 140.7TB (sic). According to an Ars Technica discussion, the /proc directory was a virtual filesystem, reporting files that did not actually exist, and /proc/kcore “is a file that represents the contents of your memory.” I hoped not. If people knew what was in my memory … seriously, I could see where that would be problematic. How it took 140TB to capture 24GB of RAM, I did not know. I was not asking. It was none of my business.
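The strange sizes made more sense once I realized that /proc entries are generated on demand. For example (using /proc/version, which exists on any Linux system), stat reports a size of zero even though reading the file yields content:

```shell
# Virtual /proc files have no real on-disk size; most report 0 bytes
# (kcore instead reports the size of the kernel's address space).
stat -c '%s bytes  %n' /proc/version

# ...yet the file still has readable content.
cat /proc/version
```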
Excluding Directories from tar
People were saying that I wouldn’t want to include /proc/kcore, and perhaps some others, in my backup. But then where would those files come from, if I explicitly excluded them from the backup, and then had to do a restore? Would they just be recreated automatically? In comparison to the Windows drive image scenario, where I’d just make an image and then restore it and everything would be fine, this was starting to sound convoluted. And it got worse. Consider another remark in that Ars Technica discussion: “Keep in mind that you’ll probably need to re-install the system anyway, if the system goes down. Most of the time you’ll only care about restoring pieces of /etc and /var, /usr/local/, that sort of thing, anyway.” The hell. Which pieces? I was an amateur. I had no concept of such things.
Expressing a rather different view, responses on Quora suggested that (at least if I wasn’t going to use the --one-file-system option, above) my backup should include the top-level directories /opt, /etc, and at least some parts of /var, along with /home (assuming it would not be better backed up separately), but should exclude the virtual filesystems /proc and /sys, whose contents were not actual files but, rather, simply “windows into the variables of the running kernel.” That quote came from a well-written Ubuntu community webpage that said I should also exclude /dev and /run (and /tmp, according to other sources), which were temporary filesystems that did not need to be backed up. In addition, these sources indicated that I should exclude any paths on which other volumes were mounted, notably /mnt and /media (and, I believed, /cdrom). I was able to verify at least part of that last suggestion: on my laptop, using Start > Accessories > Disks (i.e., gnome-disks) > highlight an individual partition on the laptop’s HDD > click to mount partitions, it said they were mounted on /media/ray, so it did make sense to exclude the /media directory.
To clarify all this advice, I decided to list the filesystems in my LMX installation, grouping them as follows. First, to review the preceding advice, here were the ones that I was supposed to exclude:
- the virtual filesystems /proc and /sys
- the temporary filesystems /dev, /run, and /tmp, along with /lost+found
- filesystems that I believed were largely for mounted media: /cdrom, /media, /mnt
- optionally, other directories that could safely be either backed up or excluded, depending on whether I wanted to make my backup smaller (apparently at the risk of possibly having to re-download some .deb files later). These optional exclusions are listed in the script shown below.
In contrast to those exclusions, there were other filesystems that people were telling me to back up: /home, /opt, /etc, and /var (aside from optional exclusions, below). That left some directories unaccounted for. On my system, those were /bin, /boot, /lib, /lib64, /sbin, /srv, and /usr. I assumed I wouldn’t want or need to back up the hidden /lost+found folder, but I wasn’t sure what to do about the hidden /root folder. There could be other filesystems on other computers or perhaps, later, on this one. For instance, TLDP mentioned /archive. For guidance on what to do about those unmentioned filesystems, I took a hint from the “Alternate Backup” section of the Ubuntu community webpage. That section (and the discussion preceding it) specified some exclusions in addition to those just named — implying that everything else would be included.
A tar Script
Putting this all together, I decided to construct a general backup command with specified exclusions. Listing all those exclusions on a single line would make for a long command. To make it more manageable, there was the option of specifying a file containing a list of exclusions, as noted above. Alternately, I could write a script that would preserve all this information. 1&1 Digital Guide offered such a script, which I would soon (but not yet) revise and save as follows:
#!/bin/bash
# This is tarbup
DATE=$(date +%Y-%m-%d-%H.%M.%S)
SOURCE="/"
DESTINATION="/media/ray/BACKROOM"
cd /
tar -cvpzf $DESTINATION/MintBackup-$DATE.tar.gz \
--exclude={proc,sys,dev,run,tmp,lost+found,cdrom,media,mnt} \
--exclude={var/log,var/cache/apt/archives,usr/src/linux-headers*} \
--exclude=home/*/{.cache,.gvfs,.local/share/Trash} \
--exclude=var/spool/squid \
$SOURCE
With the aid of that website, and drawing on some of the foregoing information, I understood the contents of that script as follows:
- Line 1: the “shebang,” indicating that this script would be interpreted by the bash command shell.
- Line 2: example of a comment (i.e., informational, not to be executed, marked with an initial # sign).
- Line 3: trying again to include the date in the name of the tar archive file. The combination of the dollar sign ($) and the parentheses (i.e., “(” and “)” ) meant, calculate this value (i.e., the current date) and present it as shown (i.e., extracting year, then month, then day …). The DATE variable was thus defined as that date value, presented in that way.
- Line 4: the SOURCE variable stated what was being backed up. In this case, everything was being backed up, starting at the root level (i.e., “/”). In this case, it would have been easier just to type “/” than to set up the SOURCE variable. But I might want to use this script again later, with some other directory. For instance, the user might want to back up only a part of their /home folder or partition. In that case, the 1&1 webpage specified SOURCE="$HOME/sourcedirectory". I would find it easier to use the script if all I needed to do, in a new situation, was to change the value of a few of its variables up front, rather than go rooting through it to make sure I had caught all the places where those variables were used. As shown on the bottom line of this script, $SOURCE was also less likely to be overlooked or accidentally deleted than the ending slash (“/”) by itself. Note also that the dollar sign and parentheses were unnecessary here in line 4, because (unlike the date situation) nothing was being calculated; SOURCE simply had to remember the specified location (e.g., root).
- Line 5: defining DESTINATION as a variable stating where the backup would be saved.
- Line 6: moving to root directory. I did try without this, specifying instead the absolute locations of excluded directories (e.g., “/proc”), with a leading slash. It wouldn’t accept that: it couldn’t find what I was talking about. So I had to position the processor at the root directory and remove those leading slashes from the “exclude” lines.
- Line 7: starting the actual tar command. Dollar signs provided notice that variables were being used — that the command should process the value of the DESTINATION variable, for example, rather than just try to do something with the word “DESTINATION.” Note that this and subsequent lines ended with a backslash, indicating that the command continued on the next line.
- Lines 8+: apparently versions of tar varied in their syntax. On my computer, man tar seemed to indicate that options, such as exclude, should come after tar but before the specified source path (in this case, $SOURCE). (If I wanted to do a Google search for further information on my particular version, tar --version informed me that the version of tar in use on my computer was 1.28.) Curly brackets accommodated multiple comma-delimited directories, in lieu of retyping --exclude for each. The translation of the curly brackets appeared to be something like, “After each comma inside these curly brackets, repeat everything that appears on this line before the opening curly bracket(s).” Note that addresses were relative to the current location (set by the cd / command). I think this script failed when I tried using e.g., --exclude={/proc, . . ., with a forward slash, instead of the --exclude={proc, version shown here. Note that I was later advised to include, in my restore command (below), a mkdir command for each folder excluded from the tar backup here. That would be impossible for the ones designated with a wildcard: at restoration time, I would have no idea what subfolders had been excluded, so I could not re-create them.
- Last line: end with $SOURCE (i.e., the specification of the source directory, i.e., root).
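Putting those pieces together, the script described above would have looked something like the following sketch. The exact exclusion list and the DESTINATION filename here are illustrative reconstructions, not necessarily the script I actually ran:

```shell
#!/bin/bash
# tarbup -- sketch of the backup script assembled above;
# exclusion list and destination filename are illustrative
SOURCE=/
DESTINATION="/media/mint/BACKROOM/MintBackup-$(date +%Y-%m-%d-%H.%M.%S).tar.gz"
cd /
sudo tar -cvpzf $DESTINATION \
  --exclude={proc,sys,dev,run,tmp,lost+found,cdrom,media,mnt} \
  --exclude={var/log,var/cache/apt/archives,var/spool/squid} \
  --exclude='usr/src/linux-headers*' \
  --exclude={'home/*/.cache','home/*/.gvfs','home/*/.local/share/Trash'} \
  $SOURCE
```

Note that the curly-bracket lists rely on bash brace expansion, which happens before tar ever sees the command, so each bracketed list becomes a series of separate --exclude options.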
In this case, I did not opt to exclude the /home partition. I had not yet copied my VMs back onto it, so (as I could see in Thunar) its contents were only 24MB. Including it would give me a more or less complete system backup. Once I had the VMs back in the /home partition, events calling for a backup of that partition might not be in sync with events calling for a backup of the main LMX installation. At that point, I might want to add /home to the list of exclusions.
Later, it seemed to me that maybe I should have made a backup of the EFI boot partition. I wasn’t sure, but it seemed that doing so might have saved me a bunch of time and effort to recreate it (below).
Running It: A Scripting Environment
So now I had a script in mind, containing my tar command and its accompaniments. I needed to figure out how and where to save it and run it. For that, I decided to follow advice found in the Arch Linux wiki. The advice was to set up a “scripting environment.” This seemed to mean just a place and a method for saving and running scripts.
To make that work, and to understand what I was doing, I explored the PATH variable. I didn’t intend to, originally; I just got there by following advice that might not have been right for Linux Mint, and then trying to undo what I had done.
The PATH variable was the list of places where the computer knew to look for an executable command, if I typed its name at the command prompt. The places were separated by colons. I could see the current contents of the PATH variable by typing echo $PATH.
What I should have done was to type echo $PATH at the start, and record the default PATH, so that I would know what it was supposed to be if I changed it to something undesirable. It appeared that the original PATH may have been this:
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
That may have been set by a file that I could have edited using sudo xed /etc/environment, where xed was the default text editor in LMX (also available via Start > Accessories > Text Editor). But I was not certain that that was the original PATH: I also saw that xed ~/.profile displayed these two lines at the bottom:
# set PATH so it includes user's private bin directories
PATH="$HOME/bin:$HOME/.local/bin:$PATH"
The second line seemed to say, “Add these two bin folders to the standard system PATH.” Sources said the PATH was set by multiple files, so I couldn’t be sure, but it seemed that those two lines might have been why my PATH contained two additional directories, in addition to the standard PATH shown above (i.e., the one starting with /usr/local/sbin). Those two additional directories were /home/ray/bin and /home/ray/.local/bin. According to a post in a Linux Mint forum (2017), /home/ray/bin would automatically be included in $PATH.
I didn’t think I had added those two lines at the bottom of ~/.profile. I wasn’t sure I should remove them. I tried nonetheless. To do that, I followed the advice to use export PATH=[desired path, excluding what I didn’t want]. That changed the PATH, but apparently it was only a temporary fix, persisting only during the current Terminal session or perhaps until I logged out. To change the PATH permanently, the advice was to use xed .bash_profile or perhaps xed ~/.profile. But I suspected the former (i.e., xed .bash_profile) was outdated advice, or perhaps not applicable to LMX: it seemed to be creating a new .bash_profile file, not editing the existing one. I tried again, using locate .bash_profile to find where the real .bash_profile was located, and then revising the command: xed /[full path]/.bash_profile. That opened a file that contained nothing operational, only a statement about how it was empty by default.
Reviewing, then, it seemed that I might alter my PATH with sudo xed /etc/environment and/or xed ~/.profile. At any rate, my current PATH (and possibly the original default) was this:
/home/ray/bin:/home/ray/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
If I did want to add more directories to the PATH, the advice was to use export PATH=$PATH:[new directory] (e.g., export PATH=$PATH:$HOME/bin).
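Since export only lasts for the current session, a permanent version of that advice would presumably mean appending the export line to ~/.profile (assuming, as discussed above, that ~/.profile is indeed one of the files LMX reads at login):

```shell
# append the export line so it survives logout
echo 'export PATH="$PATH:$HOME/bin"' >> ~/.profile
# re-read the file in the current shell instead of logging out and back in
source ~/.profile
# confirm the new directory is present
echo $PATH
```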
Despite all these words, my PATH command actually didn’t work. I had to specify the path to /home/ray/bin/tarbup in order to get it to run. If this comment is still here, that probably means I haven’t yet returned to this issue long enough to figure it out.
With that informative if imperfect PATH information in mind, I proceeded to follow the (modified) advice to set up the scripting environment, as follows:
- mkdir ~/bin. On my laptop, this command created a “bin” directory to store scripts at /home/ray/bin.
- Open a xed session, enter the text of the script (above) into the blank document, and save it in /home/ray/bin as tarbup (i.e., using the name shown on its comment line). Apparently adding an extension (e.g., tarbup.sh) was no longer recommended. I learned, the hard way, that saving tarbup in Notepad in Windows, and then bringing it over to the Linux laptop via USB jump drive, was problematic: it seemed that, by doing so, I added invisible codes. It would have been better to type it into xed.
- Add the new /home/ray/bin folder to my PATH (discussed above), if it wasn’t there already. With that in place, I would not have to move the Terminal prompt to any particular folder in order to run the script, and I would also not have to enter the full path (i.e., /home/ray/bin/, or ~/bin/, followed by the name of the script) in order to run the script.
- To facilitate future script production, run commands to (1) create (or use unalias to remove) an alias (i.e., a typed shortcut) that would allow me to move the prompt to that bin folder without requiring a lot of typing, and then (2) update the current system information on that point. I chose “mybin” as the alias. After I entered these commands, mybin moved the prompt to /home/ray/bin:
alias mybin="cd ~/bin"
source ~/.bashrc
- With the prompt in ~/bin, run the “change mode” command chmod to make my new script executable. The command format: chmod +x [script name]. In this case, it was chmod +x tarbup.
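Put together, the steps above amount to something like this (using a trivial stand-in script here, since tarbup itself was shown earlier):

```shell
mkdir -p ~/bin                 # a place for scripts
cat > ~/bin/hello <<'EOF'      # stand-in for tarbup; type scripts in xed, not Windows Notepad
#!/bin/bash
echo "hello from ~/bin"
EOF
chmod +x ~/bin/hello           # make it executable
~/bin/hello                    # run it ('hello' alone works once ~/bin is in PATH)
```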
The Arch Linux advice continued a bit further, with instructions on using chroot. This was apparently to facilitate running the backup from a live CD. I wasn’t sure whether that was necessary in all cases, or was perhaps specific to Arch or ideal for advanced users. Other sources didn’t seem to recommend it. It seemed I might have to feel my way into that.
Judging from the 1&1 Digital Guide, I was ready to go. All I needed was to run sudo tarbup. After this, according to that Guide, I could run a different script to do incremental backups. So, after some playing around, I managed to get the script to run. In a few minutes, I had my .tar backup. I opened it and compared its list of top-level directories to those actually installed on the laptop. The desired ones seemed to be there; the undesired ones weren’t. The archive was 1.9GB, as compared to a 5.2GB installation (excluding unwanted directories). That was serious compression, but it was possible.
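One way to make that top-level comparison from the command line (the archive name here is assumed):

```shell
# list the top-level directories inside the archive ...
tar -tzf /media/mint/BACKROOM/MintBackup-2018-06-22-00.40.11.tar.gz | cut -d/ -f1 | sort -u
# ... and compare with what is actually installed at root
ls /
```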
Now that I had finished that, the only thing missing was a backup. Seriously, I was about to wipe my Linux Mint installation and see if I could restore the tar backup. Of course I would want another backup, just in case. I might not trust GUI backups for ongoing use; but for purposes of making sure I had a one-time alternative to this tarball, it seemed like a good idea. The built-in (Start > System > ) Backup Tool fell far short of a full system backup, offering to back up only Personal data or Software Selection. Instead, I downloaded Bacula, extracted files from the tar.gz archive, and opened its INSTALL file. That file conveyed these nuggets:
This file is rather out of date, and if you want to avoid a lot of pain, you will read the manual, which you can find at www.bacula.org. . . . Note, in configuring Bacula, you cannot get by with a simple ./configure, it is much more complicated than that (unfortunately).
That link actually redirected to a page in the Bacula blog, from which I found a Manuals page, whose Main Reference Guide contained a Quick Start section that I found only moderately dismaying.
It did occur to me, at about this point, that there might actually be a friendlier GUI backup solution. With that possibility dangling before me, I returned to the LinuxTechi list (Kumar, 2018) of 12 top open source backup tools. Duplicati was No. 2 on the list, but it appeared to be cloud-oriented. The next GUI solution on the list, the Advanced Maryland Automatic Network Disk Archiver (a/k/a Amanda), offered an FAQs page that seemed to say that I would have to use the command line to do a full backup. I went a little further down the list and concluded that these were tools for administrators, not simple disk imaging tools for end users.
The whole concept here was that I was trying to learn tar in order to avoid having to screw around with endless proprietary and/or eccentric tools to accomplish relatively simple file moving, archiving, and backup tasks. I wasn’t eager to screw around with these tools in order to see whether tar had given me a way of avoiding the need to screw around with these tools. So at this point, logic being what it is, and having seen rsync mentioned on multiple lists of backup tools, I decided to return to the previous section of this post and develop the rsync method of mirroring the Linux installation.
Restoring from the tar File
Now that I had two backups — one provided by my rsyncbup script and one by my tarbup script — it was time to see whether they worked.
I started by taking a look at GParted, to see how things were arranged on the SSD in the installed system. (GParted was included in the LMX live CD ISO but, for some reason, was not installed on my system. I added it to my installation via sudo apt-get install gparted.) GParted said that /dev/sdb (i.e., the 466GiB SSD) consisted of the following:
- /dev/sdb1: an EFI System Partition (FAT32) (512MiB)
- /dev/sdb2: a /boot partition (ext2) (488MiB) (flags: boot, esp)
- /dev/sdb3: the LVM partition (crypt-luks) (465GiB) (flag: lvm)
- /dev/sdb4: unallocated (1.02MiB)
Now I rebooted the laptop with a single-purpose USB drive (i.e., not a YUMI or other multiboot drive, except as detailed in another post). This was, of course, an LMX live USB. My own preference, for creating that tool from a downloaded LMX ISO, was to use Rufus. I interrupted the bootup process, using F2 at the splash screen (i.e., the Acer logo, on this laptop) to verify that the machine was booting in UEFI, not BIOS/Legacy, mode, with Secure Boot disabled. I restarted and, this time, I hit F12 at the splash screen, to see the boot menu, and selected the USB drive. The live USB booted and ran LMX.
Now, in the Linux live CD session, I ran GParted (Start > System > GParted). For the SSD, it showed the partitions listed above. In GParted, I deleted those partitions. I went to GParted > menu > View > Device Information. That gave me a left-hand sidebar providing information about the SSD. It said it was formatted as gpt, not mbr, msdos, or something else. GPT was the desired option, so that was good. Now, as advised by the Ubuntu wiki, I used GParted to recreate the first three partitions. I decided to try leaving a larger unallocated space for SSD overprovisioning, so I made the LVM partition only 400GiB (i.e., 409600MiB). (I wasn’t sure that would work, but I wanted to try.) Crypt-luks wasn’t a formatting option, so I chose ext4. Then I clicked Apply. Then, for the partitions that I had named BOOT and LVM, in GParted, I right-clicked > Manage Flags > select boot for the /boot partition and lvm for the LVM partition.
Continuing in the LMX live CD session, I went to Start > Accessories > Disks. My mission here was to mount the necessary partitions. So on the HDD, I selected the BACKROOM partition where the tar and rsync backups were stored > click the Mount arrow; and on the SSD, I similarly mounted the EFI, boot, and LVM partitions. I was logged in to the live CD under the “mint” username, so Thunar reported mount points EFI, BOOT, BACKROOM, and LVM at /media/mint. These mount points functioned like folders, so I could see the contents of BACKROOM in the BACKROOM mount point, whereas the others were empty.
Then I opened Terminal and began to develop the command needed to restore the tar backup. According to the Ubuntu community webpage, an appropriate restore command would be something like sudo tar -xvpzf /path/to/backup.tar.gz -C /restore/location --numeric-owner. Some of those options were the same as those used to create the tar. One exception was that the GNU tar manual said the restore would call for the -x (extract from tar) rather than the -c (create tar) option. The create command (above) also didn't use the -C option, which told tar where the top-level restore would begin. The --numeric-owner option would apparently tell tar to disregard user names in the current environment. Specifically, in this LMX live CD session, my login under the default "mint" user had nothing to do with the "ray" username that I wanted to restore to the SSD. On that basis, I developed the following script:
#!/bin/bash
# This is tarrestore
SOURCE="/media/mint/BACKROOM/MintBackup-2018-06-22-00.40.11.tar.gz"
DESTINATION="/media/mint/LVM"
cd /
sudo tar -xvpzf $SOURCE -C $DESTINATION --numeric-owner
cd $DESTINATION
sudo mkdir -p proc sys dev run tmp lost+found cdrom media mnt \
  var/log var/cache/apt/archives usr/src/linux-headers \
  home/ray/.cache home/ray/.gvfs home/ray/.local/share/Trash \
  var/spool/squid
I created that script on the desktop in the live CD session, used sudo chmod +x tarrestore to make it executable, and ran it from Terminal via /home/mint/Desktop/tarrestore. It ran. It seemed to work. The LVM mount point was no longer empty.
Regarding those mkdir command lines: if I tried to make directories using full paths (e.g., /proc), of course, the live CD environment would try to create them in its own root (/) directory, not in the LVM volume that I was trying to reconstruct. Therefore, the names of the folders listed in this mkdir command do not begin with a slash. Instead, they are installed under $DESTINATION — that is, relative to the LVM mount point specified by cd $DESTINATION.
Trying to Restore GRUB
Restoring from the tar file gave me the operating system files, but did not give me a bootable system. For that, I would need to restore the GNU Grand Unified Bootloader (GRUB). The GNU GRUB Manual 2.02 described a bootloader as the first software program that runs when a computer starts, with the purpose of loading and transferring control from firmware (i.e., basic startup code built into the computer’s hardware) to an operating system kernel, which then initializes the rest of the operating system.
The Ubuntu community webpage (and, with a little more detail, AskUbuntu) offered instructions to restore GRUB, so that I would have a bootable system. With my laptop in its present state (i.e., booted from the live CD, and having just run the foregoing script), those instructions seemed to translate into the following commands, entered one at a time:
sudo -s
cd /media/mint/LVM
for f in proc sys dev ; do mount --bind /$f /media/mint/LVM/$f ; done
chroot /media/mint/LVM
apt-get update
apt-get install grub-pc
dpkg-reconfigure grub-pc
exit
for f in proc sys dev ; do umount /media/mint/LVM/$f ; done
exit
The apt-get update line seemed necessary because, without it, dpkg-reconfigure grub-pc would produce an error: “package ‘grub-pc’ is not installed and no information is available.” Apparently it was a question of whether grub-pc was installed within the superuser environment: for the ordinary user, Start > System > Synaptic Package Manager reported that, in fact, grub-pc was installed. But the apt-get update command produced its own errors, starting with “temporary failure resolving ‘packages.linuxmint.com.'” It seemed the superuser was not online. Yet ping -n 8.8.8.8 -c3 said otherwise: those pings to the Google server were successful. In that case, an AskUbuntu answer said this was a DNS problem — but the solution wasn’t specified. I was baffled, so I posted a question.
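I later gathered, though I did not verify it at the time, that a chroot has no DNS configuration of its own unless you give it one, which would explain pings by IP address succeeding while name resolution failed. A commonly suggested fix is to copy the live session's resolv.conf into the target system before chrooting:

```shell
# run from the live session, before the chroot command
sudo cp /etc/resolv.conf /media/mint/LVM/etc/resolv.conf
# afterwards, inside the chroot, name resolution should work:
#   apt-get update
```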
I wasn’t having the best of luck getting responses on the Linux Mint forums. Usually, I took that as a sign that I was getting pretty far down the rabbit hole. So after waiting a while I thought maybe I should try other possibilities. TecMint (Cezar, 2017) said I could use Ubuntu Server (18.04 LTS) edition on a USB drive to reinstall GRUB. Until now, I had been creating my live USB drives on my Windows 10 desktop machine, but for this I tried using the laptop, running an LMX live USB drive. On that laptop system, I downloaded and ran Etcher. (It was portable, not requiring installation.) I used Etcher to install the Ubuntu Server ISO on a USB drive. I rebooted the laptop using that Ubuntu Server USB drive and went through its basic questions (e.g., language, keyboard), accepting default options. But what I was seeing didn’t look like Cezar’s screenshots, and it couldn’t get past the Network Connections dialog. On the Windows machine, using Rufus to burn the USB drive, I tried again with an x64 server install version that would have been available at the time of Cezar’s article (i.e., Ubuntu 16.04 LTS). That was better. It still didn’t look the same onscreen, but at least now I saw the option to “Rescue a broken system.” But by the time I was done with this approach, I had installed Ubuntu Server on the laptop, so I had to abandon this approach and start over, repartitioning with GParted and then re-running the tar command to restore the LMX backup to my laptop’s SSD.
Another possibility: Super Grub2 Disk. I had already downloaded the SGD ISO for version 2.02s9 (“recommended download” for “floppy, CD & USB in one”), so now I used Rufus to install that on a USB drive on my Windows 10 desktop, and then tried to boot the laptop with that USB drive. But the laptop wasn’t seeing it. I tried again with another USB, created the same way, but the laptop didn’t see that either. I tried again, this time using the EFI x86_64 standalone version. It had an .efi extension, which Rufus didn’t see. Lifewire (Fisher, 2017) seemed to say that an .efi file should be placed in the EFI partition that I had created using GParted. Unfortunately, I wasn’t sure where to put it. Fisher said its location varied among Linux versions. A search led to various pages that, within my limited browsing, did not resolve the mystery. Rod Smith advocated using his own rEFInd, but it looked complicated. An AskUbuntu discussion addressed the situation on Ubuntu, but I was not confident LMX would be identical.
A Linux Mint forum discussion made me wonder whether some of this advice was perhaps suited for MBR or BIOS systems rather than GPT or UEFI. I would have been happy to try booting some of these tools with the laptop in Legacy mode, but it seemed like people were saying that this sort of mixing would cause problems later, on a system that was going to be operating in UEFI mode for the most part.
Another possibility was to run Boot-Repair. An Ubuntu Community document said it was possible to use Boot-Repair either on a live USB drive, as Boot-Repair-Disk (which I had used in the past, from my BIOS-oriented YUMI drive), or from an Ubuntu installation or live CD. For that last option, with a working Internet connection, the suggested commands were as follows:
sudo add-apt-repository ppa:yannubuntu/boot-repair
sudo apt-get update
sudo apt-get install -y boot-repair
boot-repair
Having booted with the LMX live USB, I tried those. The second command produced an error:
Failed to fetch cdrom://Linux Mint 18.3 _Sylvia_ - Release amd64 20171213/dists/xenial/contrib/binary-i386 Package
Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
I ignored that message. The remaining commands seemed to run OK. Boot Repair started up, seemed to be running some scans, and then gave me a choice among Recommended Repair, Create a BootInfo Summary, or Advanced Options. Experience with Boot Repair Disk on the YUMI drive had taught me to simply click the Recommended Repair. It ran, and then gave me a set of four commands to copy and paste into Terminal, which I did. Those commands were as follows:
sudo chroot "/mnt/boot-sav/sdb3" dpkg --configure -a
sudo chroot "/mnt/boot-sav/sdb3" apt-get install -fy
sudo chroot "/mnt/boot-sav/sdb3" apt-get install -y lvm2
sudo chroot "/mnt/boot-sav/sdb3" apt-get purge -y grub*-common grub-common:i386 lupin-s* shim-signed
Those commands all seemed to run OK, with a couple of exceptions for the last one. It produced notices that some of the specified packages were not installed, and therefore were not removed. It also produced some error statements, starting with this one:
mktemp: failed to create directory via template ‘/var/tmp/mkinitramfs_XXXXXX’: No such file or directory
Since that last command was a purge command, presumably the error related only to a failure to remove one or more packages. I hoped that was a problem only for purposes of keeping things orderly, without consequences for actual functioning. Returning to the main Boot Repair dialog, I clicked Forward. It gave me another command to copy and paste:
sudo chroot "/mnt/boot-sav/sdb3" apt-get install -y grub-efi-amd64-signed shim-signed linux-headers-generic linux-signed-generic
Sadly, that command produced indications that the errors from the previous one were problematic. Specifically (among other things), it said that shim-signed had unmet dependencies and I had “held broken packages.” I tried re-running the previous command (i.e., the fourth in the preceding list of four commands). Among other things, it said “package ‘shim-signed’ is not installed, so not removed.” So it seemed I was mistaken: the previous problem with shim-signed seemed irrelevant. And yet when I repeated this last command, involving apt-get install, once again I got those errors about unmet dependencies and broken packages involving shim-signed. A search revealed that I was not alone. Among the recommended solutions, I noticed that srs5694 (apparently the username for Rod Smith, above) provided a simplified summary of using rEFInd (above), which I interpreted as follows:
- Download and unzip the ISO and use Rufus to install its .img file onto a USB drive.
- Reboot with that USB drive. Hit F12 during bootup to select the USB drive.
- Assuming it boots correctly (mine did), select the Linux Mint icon (or Ubuntu if no Linux Mint icon appears) and hit Enter.
That’s as far as I got. It started giving me errors:
WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
Reading all physical volumes. This may take a while …
/run/lvm/lvmetad.socket: connect failed: No such file or directory
It did that for a few minutes and then dumped me at an initramfs prompt after giving me a warning:
ALERT! /dev/disk/by-uuid/[UUID number] does not exist.
Was that the problem? I had restored to a different partition; perhaps its UUID number was different from the one I had backed up with the tar command, and therefore any boot attempt would fail? Novell (Record, 2007) said that recovering or restoring data on an LVM partition would require me to restore the old disk’s UUID and its LVM metadata. Was that advice still valid, or was it outdated? Novell worked through several scenarios that might have been informative. Regrettably, this was starting to look like one of those fights that you can only win by staying with it until neither of you can stand up anymore.
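A sketch of how one might at least confirm such a mismatch follows. The device name and the tune2fs step are assumptions on my part, and tune2fs only relabels plain ext filesystems, so it would not apply directly to a LUKS/LVM container like mine:

```shell
sudo blkid                            # UUIDs the partitions actually have now
sudo grep UUID /media/mint/LVM/etc/fstab   # UUIDs the restored system expects
# for a plain ext4 partition, the filesystem UUID could be rewritten to match:
#   sudo tune2fs -U <uuid-from-fstab> /dev/sdXN
```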
At about this point, I came across a HowtoForge tutorial on repairing Linux boot failures in GRUB 2 rescue mode. I rebooted the laptop, without USB drives, and observed that I wound up with a grub> prompt. According to the tutorial, “This is the screen mode you see when GRUB has found everything except the configuration file. This file probably will be grub.conf.” The tutorial seemed to say that GRUB had failed to load its normal module, and therefore I was in GRUB’s Rescue Shell. Beyond that, I found the tutorial unclear, so I tried again with the GNU GRUB Manual 2.02. But, wow, it was really long and complicated. I didn’t want to earn college credit for this. I just wanted the machine to run.
Now, honestly, I didn’t expect anything better from Boot Repair Disk, running from a USB drive, than I had gotten from the boot-repair command in Linux (above). But I had not actually tried it and — who could say? — maybe it would work miracles. In this optimistic if not delirious state of mind, I went to the Boot-Repair website, downloaded its latest version, used Rufus to burn it to USB, and booted the laptop with it. It did the same scanning as before. But then it said, “This will install the [lvm2] packages. Do you want to continue?” I said sure, why not? It wanted me to connect to the Internet, and then it went to work. It gave me the same choice as before, and again I chose Recommended Repair. It told me to enter the same commands, and when I got errors, this time it concluded, “GRUB is still absent. Please try again.” Instead of doing that, I clicked Discard. It said, “GRUB reinstallation has been cancelled.” I rebooted, with the intention of trying something other than Recommended Repair, but I was no longer getting any alternatives.
At this point, I was ready to cut my losses. My working conclusion was that, when people occasionally said that tar was not really suited for system backup, this might be the sort of thing they were talking about. I mean, it had been real, and it had been fun, but it hadn’t been real fun, and now I believed it was time for me to mosey on.
Restoring from the rsync Backup
If I wanted to restore from the rsync backup instead of the tar backup, OSTechnix (2017) said I could just repeat the rsync command that I had used to create the rsync backup, except that I would want to reverse the source and destination. But there didn’t seem to be much point of restoring what I had saved via rsync, if it was just going to give me a partition full of Linux system files that wouldn’t boot. Without a solution to the GRUB problem, it seemed I could save myself the effort. It appeared that would be the conclusion for other Linux backup tools, such as fsarchiver, for which a Linux Mint forum discussion seemed to indicate that it would be necessary to install Grub to a computer that didn’t already have it.
I did wonder whether the tar and rsync backups differed much. A ServerFault discussion (2009) on exactly that question yielded multiple suggestions on how to perform such a comparison. I tried Beyond Compare and Meld, but could not figure out how to get them to look at root partitions. Moreover, I realized, I was running low on enthusiasm for this project. I decided to shelve it for now.
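For the record, a crude command-line comparison would have been possible without those GUI tools, assuming both backups were mounted or extracted somewhere (the mount points here are hypothetical):

```shell
# -r: recurse; -q: report only which files differ, not how they differ
sudo diff -rq /mnt/tar-extracted /mnt/rsync-mirror
```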
GUI Backup via Disks Utility
As noted above, at some point in this process I became aware of the Disks Utility (a/k/a gnome-disks) method of doing a backup. Disks was built into the LMX installation. Of course, I no longer had a working LMX installation, so I had to start over with a new install. But once that was in place, I was ready to try Disks. Was it really going to be a simple solution to this complex problem?
To find out, I went to Start > Accessories > Disks. The scenario seemed to be that I would do separate backups of each partition. As Disks informed me, the system partitions were the three on the SSD: the EFI partition, the boot partition, and the large LVM partition. According to TechRepublic (Wallen, 2017), these were going to be images of the entire drive: “so if you have a 1TB drive the resulting image will be 1TB.” That was sobering. It pretty much ensured that backups would be made only to an external drive. Wallen said the backup and restore would be done from a live USB or CD, not from within the running Linux system being backed up: “you cannot create an image of a currently mounted drive.”
The Disks utility seemed to offer different image options. As just indicated, Wallen was talking about backing up an entire drive. To get that, he was selecting a drive in the left pane of the Disks utility dialog, and then clicking on the gear icon at the upper right corner of the dialog. In my version of Linux, that gear icon had become a hamburger icon (i.e., three little parallel horizontal lines). Either way, that icon did offer options to create and restore disk images. But there was another option. The little gear icon below the rectangles graphically representing disk partitions also offered options to create and restore partition images. They could still be huge, but in some configurations (i.e., not one featuring a huge LVM pool, like I had here), that would enable me to back up some partitions while ignoring others.
It seemed it might be possible to use GParted on the live USB to shrink partitions to their minimum size temporarily, for purposes of the backup, and then restore their full size afterwards, but possibly that would break things. I also thought it might be possible to compress the backups, for purposes of storage, especially if most of their contents were empty space. Of course, these repartitioning and compressing operations would take time and CPU power, potentially detracting from other tasks, and they might also add to the risk of image corruption.
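If I ever pursued the compression idea, doing it on the fly would presumably look something like this (the device name and destination path are assumptions; this would be run from a live session with the partition unmounted):

```shell
# image a partition and gzip it in a single pass
sudo dd if=/dev/sdb3 bs=4M status=progress | gzip > /media/backup/lvm.img.gz
# restoring would reverse the pipe:
#   gunzip -c /media/backup/lvm.img.gz | sudo dd of=/dev/sdb3 bs=4M
```

With mostly empty space in the partition, gzip could shrink the image dramatically, at the cost of CPU time during backup and restore.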
At present, on my minimal Linux installation, it was going to be faster to just reinstall Linux from scratch than to go through all these steps to create a backup image. Rsync and tar continued to seem like excellent tools for the purpose, if I could get past the GRUB barrier. I was sure I would, at some point. But at present I didn’t know how to do that, and I wasn’t seeing good backup alternatives. I could continue to experiment with various tools, but this was time-consuming, and I had other projects in mind.