V2P: Converting a Linux Virtual Machine to a Physical Installation

Note: a later post offers a boiled-down (and in some ways improved) version of this one. See also the Summary, below.

Contents

Summary
Introduction
File Format Options
Setting Up the VirtualBox VM
Possible Solutions

INTERNAL SOLUTIONS

Using Clonezilla
Background & Setup
VM Drive Cloning with Clonezilla
Troubleshooting the VM with fsck
VM Drive Imaging with Clonezilla
Trying GParted
Trying Timeshift
Using dd Internally
Linux Virtual to ISO Solutions
Difficult and (Apparently) Dead Options
Systemback
Distroshare Ubuntu Imager
Customizer
Clonezilla ISO Creation
Summary of Internal Solutions

EXTERNAL SOLUTIONS

False Leads
VM-RAW-dd Method
Windows Solutions
Acronis and AOMEI
Macrium
VHD to ISO in Windows
Other Windows Tools
Rescuezilla
Paid Linux Alternatives
Summary of External Solutions


Summary

I was testing software and otherwise developing an Ubuntu Linux installation in a virtual machine (VM), so as to be able to roll back undesirable changes. The question for this post was, when it all finally came together, how could I convert that perfect, final, virtual Linux installation into a physical installation on a real computer? (Note: some of the methods discussed here may also work for Windows VMs.)

The answer developed here was that there were both internal and external solutions. An internal solution would run inside the VM. There were two ways to do that: the tool could be installed and run in the Ubuntu installation; or it could boot instead of Ubuntu, inside the VM, and use that outsider’s perspective to copy the Ubuntu files to an external destination. An external solution, by contrast, would not run inside the VM. Like a simple copy-and-paste operation in Windows (or whatever host system), it would treat the entire VM as a single file or folder; it would simply convert the VM to a different format, like when a user converted a VirtualBox VDI file to VMDK format for use on VMware.

I might have expected the external solutions (e.g., QEMU, VBoxManage) to be simpler and more direct. Indeed, there were a few paid external solutions (e.g., Macrium, TeraByte, PowerISO) that might have worked. I didn’t explore their licensing rules for users who might want to run them on more than one machine, perhaps including more than one VM.

But for the most part, the working solutions were internal solutions. Among a number of candidates (e.g., Distroshare, Linux From Scratch and other relatively complex options, Remastersys and many other abandonware possibilities), I got good results (for imaging and cloning) from Clonezilla, and from dd and related tools; but for my purposes it appeared that the best solution might be to use Systemback to produce an ISO. (But note a later post’s discovery of a 4GB limit on Systemback.) Various sections of this post (see Contents, above) provide relevant links and further information on all of those options.

Introduction

In Linux, as in Windows, I would generally be keeping my non-system personal files on an entirely separate drive, so that I could restore and overwrite the system from a backup without jeopardizing my data, and so that I could have different schedules for data and program backups. As in Windows, there would be system-related data (e.g., program settings) for the specific user, but that would be very different from really personal data (e.g., a PDF copy of an article that I wanted to read; a video of friends). I would want a system clone or image to capture all of the program stuff and none of the personal stuff.

When I speak of a full installation, I mean the kind of installation that a user would get by downloading an installer and running it, to install program files on a drive. In the Linux world, this was the difference between a “live” USB and a “full” or “complete” (or, as I sometimes saw it, a “regular”) installation. (I was not looking for solutions like Ventoy’s vDisk Boot Plugin, which would reportedly run Linux from a drive image (e.g., VDI, VHD) file sitting on an HDD.)

The use of a USB drive was not the distinguishing factor between live and full installations. A USB drive was simply a type of drive. Linux could be installed on a USB drive as easily as on any other type of drive. Rather, according to MakeTechEasier (Damien, 2013) and SuperUser (2017), the primary points of difference were as follows:

  • Persistence. It was possible to create a live USB with a predefined space for persistent storage — with, that is, an optional space that would hold downloaded files, system updates, installed applications, and other user changes and additions. Without persistence, such materials would vanish upon reboot. But I found that, even with persistence, a live USB might not remember stuff. This was very different from a regular installation, where every change was for real, and where there would continue to be enough space for changes, updates, new programs, and more user data files, until the system ran out of disk space.
  • Size. Live USBs were typically designed to fit in a relatively small space.
  • Currency. Live USBs were not designed to be updated with the latest versions of software and of their installed Linux distributions. Attempts to update posed the risk of breaking the system due to software conflicts. A full installation could ordinarily be updated and upgraded to the fullest extent intended by developers.
  • Security. Live USBs were typically not restrictive, whereas a secure login, and password entry for administrative changes, were standard expectations on full installations.
  • Performance. A live USB could perform faster than a full installation on USB, as the live USB might be designed to run from RAM, whereas a full installation would not.
  • Adaptability. A live USB typically used generic drivers to run on the largest possible variety of systems. No computer was assumed safe; therefore bootup took much longer, as the live system ran compatibility tests. By contrast, an installation would select drivers designed for the specific system’s hardware, improving bootup and operating efficiency, but making the installed system less suited for use on other systems.

In the best case, some Windows tools (e.g., Acronis, AOMEI, Macrium) offered “universal” or “dissimilar hardware” capabilities, whereby a clone of a full installation would supposedly run on a different computer with few if any problems. For instance, I had used the free AOMEI Backupper Standard to restore a heavily customized Windows 10 desktop backup image to a USB drive that I could then boot on my laptop, and also to restore that Win10 image directly to the laptop. Both of those restores were still functioning well a year later. But I had also found that there were limits on what such software could achieve (see e.g., SuperUser, Spiceworks). Altaro’s Windows-oriented V2P guide said, “The biggest challenge in V2P is hardware drivers,” and suggested several potentially complex hardware neutralization steps. At Spiceworks, Erik8532 (2018) asserted that “Linux is way more dynamic than Windows, in the sense that an installation will run, even on a different machine with different hardware.” But others seemed less certain. It appeared that there could be some difficulty in troubleshooting GRUB and in resolving other boot issues (e.g., How-To Geek). VMware offered some support for server-level V2P conversion, including a nice 22-page technical note. But as a lead-in to that technical note, VMware said,

Neither Converter, nor any other VMware product, currently supports going from a virtual machine to a physical machine. … The technical note and sample configurations for performing a virtual to physical conversion are provided for information only.

So the question here was not, how can I convert a VM to a live USB? It was, rather, how can I develop a full installation in a VM, and then clone that installation to a drive — be it USB flash or internal or external solid state (SSD) or hard disk (HDD) drive — without boot or driver issues, as if I had installed and tweaked the installation on the drive in the first place, without ever using a VM?

File Format Options

This post repeatedly refers to certain VM-related file types. It may be helpful to begin by sketching out those types.

Some file types were more or less linked to certain hypervisors. Among those most relevant to this post, Parallels (Hunter, 2021) characterized VDI as the default disk format for VirtualBox, VHD and VHDX as the standard disk formats for Microsoft’s past and present virtualization products, and VMDK as VMware’s previously proprietary virtual drive format. According to Hunter,

VMDK allows incremental backups of changes to data from the time of the last backup, unlike VDI and VHD. This makes the backup process for VMDK files much faster compared to VDI and VHD. Unofficial tests also show that VMDK is significantly faster than VDI or VHD.

Another relevant filetype: RAW. OpenStack described it as “an unstructured disk image format. … the bit-equivalent of a block device file.” In other words, as I was about to see, a RAW copy of a 60GB VM would require 60GB of disk space.

Wikipedia characterized IMG as “a raw disk image file format with .img filename extension.” For example, Wikipedia said, “QEMU uses the .img file extension for raw images of hard disk drives, calling the format simply ‘raw.'” That appeared to be the general meaning in most of the sources I reviewed for this post. But IBM offered examples of QEMU commands that used an IMG extension for QCOW files.

ISO was probably the most confusing VM filetype. It had two meanings, and sources could muddy the difference. For instance, that same Wikipedia page said, “ISO images are another type of optical disc image files, which commonly use the .iso file extension, but sometimes use the .img file extension as well. They are similar to the raw optical disc images.” The difference between the two meanings of ISO was perhaps made clearer in a VirtualBox.org discussion: one meaning was RAW; the other was ISO 9660:

Pie05: I had made a virtual machine with a 50 gb dynamically allocated vdi. I wanted to convert this to an iso using vboxmanage clonehd --format RAW. When I used this command, it outputted a 50 gb iso even though the vdi was only 11 gb on the host. …

Martin: Please keep in mind that what you have created is a raw image = binary harddisk image, even when you add a file name extension of “.iso”. This is NOT an ISO. ISO is an image format / file system for CD / DVD = ISO9660.

In other words, within the VM context, it seemed that references to “ISO” files should be construed as specifying an ISO 9660 file unless otherwise indicated. Creodias said, “ISO is not frequently considered a virtual machine image format … [but] ISOs contain bootable filesystems with an installed operating system … [and thus] can be treated like other virtual machine image files.” Wikipedia explained that, unlike the unstructured RAW image, in the VM conversion context, “The data inside the ISO image will be structured according to the file system that was used …. [in the source] disk image file.” Hetman Recovery (Artiukh, 2021) encouraged the reader to

Think of an ISO image as a complete copy of all data stored on a physical optical disk such as a CD, DVD or Blu-ray disk, including its own file system. It is, in fact, a sector by sector copy of the physical disk, without any additional compression applied.

If an ISO image was smaller than a RAW image of the same source material, apparently that would be because the ISO would contain only actual data; it would not blindly copy blank space as well. Hetman said that, in Windows, tools like WinRAR and 7-Zip could extract data from ISOs.

Various sources indicated that an ISO file could be mounted as a virtual disk drive, allowing access to its files. In the VM context, it seemed the primary use of an ISO 9660 image would be to add it to a VM, so that it could be booted in place of the VM’s installed operating system. For example, as described below, I was able to add a Clonezilla ISO to VirtualBox, so as to boot Clonezilla instead of the installed Ubuntu system. This procedure gave Clonezilla access to that installed system, so that Clonezilla could back it up or clone it.
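To illustrate, a minimal sketch of mounting an ISO for inspection in Linux (assuming a file named clonezilla.iso; the loop option tells mount to treat an ordinary file as a block device):

    sudo mkdir -p /mnt/iso                       # create a mount point
    sudo mount -o loop clonezilla.iso /mnt/iso   # mount the ISO (read-only for ISO 9660)
    ls /mnt/iso                                  # browse the files inside
    sudo umount /mnt/iso                         # detach when done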

Setting Up the VirtualBox VM

For this investigation, I decided to use VirtualBox. A search yielded many indications that people still tended to consider VMware (Workstation Pro or Player) and VirtualBox the best free VM tools, though others (e.g., Gnome Boxes, QEMU) continued to draw attention in Linux. (See also the discussion of Hyper-V, below.) Between the two, MakeUseOf (Khawaja, 2021) favored VMware Workstation Player in terms of performance, ease of use, and reliability, but VirtualBox for support of multiple platforms and disk formats, and for snapshots and cloning. Those last two factors were of particular importance here. (A later post describes the setup process I used in VMware Player.)

I decided to set up VirtualBox on Windows. I was more familiar with Windows, and thought I might benefit from access to certain Windows tools. Therefore, I started by downloading VirtualBox 6.1.24 for Windows, with its accompanying Extension Pack and a link to its online manual. I chose the installed rather than portable version, based on a recollection that there could be some differences between the two, and that the installed version might be better supported.

I also began by downloading and configuring a canned VirtualBox (VDI) image from OSBoxes.org, as detailed in another post. (Note that Microsoft also offered free, temporary VMs for VirtualBox and other hypervisors.) Later, however, I would find that the canned OSBoxes VDI entailed some complications — specifically, multiple large dynamic partitions — that got in the way of what I was trying to do here. Instead, for present purposes, the best strategy was to create my own VirtualBox VM running Ubuntu 21.04. For that purpose, I downloaded the Ubuntu ISO and installed it manually. This section describes the installation process.

I think it may have been possible to install the Extension Pack at this point. If so, the first step would be to go into VirtualBox Manager — that is, the main screen in VirtualBox — and to use its top menu. This menu offered three options: File, Machine, and Help. In this writeup, sometimes I refer to these menu options for clarity, because VirtualBox could be a little confusing.

I express uncertainty about the Extension Pack because, on my VM, it was already installed. But to check on that, I think the installation procedure was to go to menu > File > Preferences > Extensions tab > click the blue plus ( + ) icon at the right side > navigate to the downloaded Extensions file > select the Extension Pack > Open > Install > scroll through VirtualBox License > I Agree.
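For those preferring the command line, VBoxManage also offered an extpack subcommand. A sketch, assuming the downloaded file name (adjust the version to match the installed VirtualBox):

    VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-6.1.24.vbox-extpack
    VBoxManage list extpacks    # verify that the pack is now installed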

Next, I went to VirtualBox Manager > menu > Machine > New. This opened a dialog for creating a new VM. On my machine, it defaulted to Expert mode. I could tell because the button at the bottom of the screen said Guided Mode. If I clicked on that button, I would be in a different screen, and then the button would say Expert Mode. In other words, the choice was between Expert and Guided Mode, with that button as the way to switch between the two. What the bottom button said was the opposite of the mode I was actually in.

So. In Expert Mode, I named the new VM simply Ubuntu. I didn’t give it a version number, because I hoped I would be able to upgrade it to higher version numbers in later months and years. I put it on drive W. This meant that the VM’s files would be stored in the W:\Ubuntu folder. I didn’t have to create that folder — indeed, doing so would confuse VirtualBox. I just had to enter that information here on the Create Virtual Machine dialog.

The dialog defaulted to 64-bit Ubuntu Linux. I gave it 4096MB of RAM, because I had plenty. (I could change that at any time, as desired.) I left it at “Create a virtual hard disk now,” and hit Create. That took me to the next screen, on which I had to designate a file size. There was some chance that I would be converting this VM to run on a USB drive, and for that purpose the minimum drive size would be 64GB — which, in practice, meant around 57.8 GiB. (See other post for explication of GB and GiB.) As an allowance for possible differences among drives, I set the maximum at 57GB, and chose dynamically allocated. Since I might be using the VM in VMware Player, I chose VMDK format. (As the discussion of solutions (below) may suggest, this choice seemed to have no significant impact.) Then I clicked Create.
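For reference, the same VM could presumably have been created from the command line with VBoxManage. A hedged sketch, reusing the names and sizes chosen above (VBoxManage takes the --size value in megabytes; 58368 corresponds roughly to the 57GB chosen above):

    VBoxManage createvm --name Ubuntu --ostype Ubuntu_64 --register --basefolder W:\
    VBoxManage modifyvm Ubuntu --memory 4096    # 4096MB of RAM, as above
    VBoxManage createmedium disk --filename W:\Ubuntu\Ubuntu.vmdk --size 58368 --format VMDK
    VBoxManage storagectl Ubuntu --name SATA --add sata
    VBoxManage storageattach Ubuntu --storagectl SATA --port 0 --device 0 --type hdd --medium W:\Ubuntu\Ubuntu.vmdk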

Now, in VirtualBox Manager, with the Ubuntu VM selected, I went to Settings (i.e., menu > Machine > Settings) and made these changes:

  • General: Advanced tab: enable Shared Clipboard and Drag ‘n’ Drop: bidirectional. Note that this might not be advisable on a security-oriented system.
  • System: Motherboard tab: Base Memory = 4096 MB (on a system with 24GB RAM). Processor tab: assign up to half the number of CPUs shown. (See discussion of hyperthreading.) It appeared that I could always select PAE/NX. I wasn’t sure whether that was true of the option to Enable Nested VT-x/AMD-V. I selected both.
  • Display: Screen tab: unsure about the right video memory, I set it at half the available total. I also selected Enable 3D Acceleration.
  • Storage: Storage Devices: Controller: IDE: It began with an Empty virtual optical disc. I could use this to mount an ISO file as a virtual CD-ROM disc. I selected Empty and then clicked the blue disc icon at far right (tooltip: “Choose a virtual optical disk or a physical drive …”) > Choose a disk file > navigate to and select the downloaded Ubuntu ISO > Open. That caused Empty to be replaced by the Ubuntu ISO.
  • Network: Adapter 1 tab: Enable Network Adapter was checked; I changed Attached To from NAT to Bridged Adapter.
  • USB: Option not available until Extension Pack (above) was installed: choose USB 3.0. Then OK.

Back in VirtualBox Manager, with the Ubuntu VM selected, I clicked on Start. That caused the installer to run. Note: as indicated by the reminder on the status bar at the bottom of the VM window, the right Ctrl key would release the mouse and keyboard from the VM, so that I could use them again in the world outside of the VM.

Now I was looking at the Ubuntu installation screen. Some of the default settings worked for me, so I clicked Install Ubuntu > Continue. On the Updates and Other Software screen, I added the option to install third-party software.

At the Installation Type screen, it appeared that Ubuntu defaulted to creating a single partition, whereas there were important advantages to having at least a separate /home partition. Guided by multiple sources (including AskUbuntu 1 2, Lifewire, and my own prior post), I chose Something Else > New Partition Table. That created a Free Space entry filling the VM. Selecting that free space each time, I clicked the + (plus) button > Size: 1MB > Use as: Reserved BIOS boot area > OK. Then + (plus) > 500MB (though it later seemed this should perhaps have been at least 512MB) > Use as: EFI System Partition > OK. Then + (plus) > Size: 35000 MB > Mount point: / (because that was essential) > OK. Then + (plus) > 20001 MB > Mount point: /home > OK. Then + (plus) > Use as: swap area > OK. That used up the available space. Then Install Now.

I filled in user data when requested. Installation completed. I clicked a button to restart. A notice told me to remove the installation medium and then hit Enter, but the installation medium was removed automatically from Settings > Storage, so I just hit Enter. The VM started up. I logged into Ubuntu and skipped through its first-time setup questions.

The last minimum requirement was to install the Guest Additions. To do that, I went to the VM window’s top menu > Devices > Insert Guest Additions CD image. It ran some additional software, and then we were done. I had a working VM. I powered it down, closed VirtualBox Manager, and used WinRAR (or could have used an alternative program like 7-Zip or PeaZip) to make a compressed archive of the folder containing the files for the Ubuntu VM. Windows File Explorer > right-click > Properties told me that those files totaled about 9.0GB. The compressed archive was only 3.7GB.

This VM was not the one that I expected to develop, with many programs installed and many customized tweaks in place. This was just an example, a test case for my hope that, eventually, I would be able to convert that customized virtual Ubuntu installation into a physical installation whenever I needed one.

Now it was time to figure out how to make that conversion happen.

Possible Solutions

To convert a VirtualBox VM, the tools of most obvious and immediate interest were VirtualBox’s own VBoxManage and, for some purposes, CloneVDI. But these tools did not appear to offer any direct means of converting or exporting a VDI — or, in this case, a VMDK — to a physical installation. They were not like an imaging tool that could reportedly save a backup of a VM and restore it to a physical system.

As an alternative to a direct export from VDI to a physical installation, it seemed that VBoxManage might at least convert a VDI to some other format that would be more easily converted into a physical installation. It was not entirely clear which output formats VBoxManage could produce, however. In brief searching, I did not find a single, definitive list of them.

The VirtualBox manual said that VirtualBox could import from and export to OVF and “cloud services such as Oracle Cloud Infrastructure” (see e.g., A-Team). But I was not very interested in a cloud solution. Drive image files tended to be large, which would mean potentially long delays in up- and downloading them. The delays would be even longer if the system that needed a new operating system installation happened to be the one that the user intended to use for downloading.

I was more interested in the file types that VBoxManage could produce for local use. The VirtualBox manual said that OVF format “appliances” could appear in VMDK or OVA forms. LinuxSecrets clarified that VBoxManage could convert VDI to VMDK and VHD, and that another option was to use qemu-img to convert VDI to QCOW2 (see OpenStack).  A RedHat discussion said VBoxManage could also convert VDI to IMG. StarWind V2V Converter also offered various conversion possibilities among these formats.
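For illustration, hedged sketches of the sorts of conversion commands those sources described (file names are placeholders; clonemedium is the current name for the older clonehd subcommand):

    VBoxManage clonemedium disk Ubuntu.vdi Ubuntu.vmdk --format VMDK   # VDI to VMDK
    VBoxManage clonemedium disk Ubuntu.vdi Ubuntu.vhd --format VHD     # VDI to VHD
    VBoxManage clonemedium disk Ubuntu.vdi Ubuntu.img --format RAW     # VDI to raw IMG
    qemu-img convert -f vdi -O qcow2 Ubuntu.vdi Ubuntu.qcow2           # VDI to QCOW2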

Assuming such tools could convert among VDI, OVF, VMDK, VHD, QCOW2, RAW, and/or IMG formats, the next question was whether any such format was best suited for conversion into a physical installation. My incomplete investigation suggested that, for this purpose, IMG was perhaps the most popular among these formats. Apparently ISO would be another possible intermediary format (see e.g., How-To Geek, SuperUser).

Ask Ubuntu and other sites (e.g., Ask Ubuntu, StackExchange, Server Fault, SuperUser, Reddit, Linux.org) suggested using dd to copy files from an IMG or, perhaps, from an ISO to the target drive. The dd command in question was potentially rather simple and powerful — which was probably why people who made a typographical error while entering such a command might say that dd was short for “data destroyer.”
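A minimal sketch of the kind of dd command those discussions proposed, assuming a raw image named Ubuntu.img and a target drive at /dev/sdX (triple-check that device name, for exactly the data-destroying reason just mentioned):

    sudo dd if=Ubuntu.img of=/dev/sdX bs=4M status=progress conv=fsync
    # if= source image; of= target drive; bs=4M for speed; conv=fsync flushes writes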

The preceding remarks involve cloning or conversion of a single file (e.g., VDI, VMDK) containing an entire VM. There also seemed to be methods of V2P conversion that worked inside the VM, or that would be booted alongside the VM, in order to copy its files from an internal perspective. (This distinction will become clearer as we get into the specifics.)

On that level, among the possibilities raised in response to an old Ask Ubuntu question with some relatively recent answers, Clonezilla and dd appeared to be the ones that drew the most upvotes — and that, not coincidentally, were not obscurely complicated, unsuited for reasons discussed above, or simply obsolete by now.

SuperUser mentioned potential drawbacks of dd in the related P2V context. The full procedure seemed to entail techniques that I was not familiar with, unfortunately, and that was also true of a ServerFault answer (2010) proposing rsync instead of dd. Nonetheless, I saw so many references to dd that it seemed almost obligatory to review it (below).
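For the record, the rsync approach amounted to copying a mounted source filesystem onto a mounted, pre-partitioned target, and then setting up a bootloader separately. A hedged sketch, with both mount points as placeholders:

    sudo rsync -aAXH /mnt/vmroot/ /mnt/target/
    # -a archive mode; -A ACLs; -X extended attributes; -H hard links
    # the target would still need its own bootloader (e.g., via grub-install)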

As I worked through various uses of dd (below), I became aware that two other Linux tools — namely, pv and cat — could do some if not all of the same things as dd. After brief research, I decided not to explore those two alternatives. I had several reasons:

  • By the time I discovered cat and pv, I had already worked out dd solutions that seemed to work for present purposes, and was running low on time for, and interest in, further exploration in this project.
  • While I encountered sources who said that dd, pv, and cat were functionally similar, I did not explore far enough to verify that that was an established, widely shared opinion. It was possible that, with further exploration, I would find that there were good reasons why dd seemed to be by far the most commonly used tool in this context.
  • One reason was, perhaps, that dd had more options — it could deal with a bigger variety of situations (e.g., SuperUser).
  • The combination of options and usage seemed to explain why some dd discussions in various Stack Exchange sites (e.g., Unix & Linux) had dozens of answers and comments, and hundreds of upvotes. This seemed to be where there was the greatest depth of experience, such that I could get answers to any questions that might arise.

The conclusion seemed to be that pv and/or cat could accomplish some but possibly not all of the tasks I needed; that in some cases their commands might be simpler, or they might be less powerful and/or less risky; but that there were plenty of websites offering sample dd commands with good explanations, and seemingly plenty of people motivated to help with dd problems. (A later post elaborates further on those alternatives to dd.)
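For the curious, hedged sketches of the pv and cat equivalents of the simple dd copy above (same placeholder names; the sh -c wrapper is needed so that the redirection itself runs as root):

    sudo sh -c 'pv Ubuntu.img > /dev/sdX'    # like dd, with a built-in progress bar
    sudo sh -c 'cat Ubuntu.img > /dev/sdX'   # plain sequential copy
    sync                                     # flush buffers before unplugging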

As another alternative to dd, an Ask Ubuntu answer suggested using the Disks tool (i.e., gnome-disks) to restore a RAW output file to a hard drive. An Ubuntu Forums comment elaborated on the use of Disks for V2P — but again noted some potential complications. (See also ArchLinux wiki; SuperUser.) Disks had the unfortunate drawback that it apparently could not save sparse (what some called “intelligent”) backups, containing only the sectors actually in use on the drive: Disks images were reportedly as large as the partitions they backed up, or would require extra steps to shrink.

Various sources (e.g., SuperUser) indicated that imaging a Linux system partition would be best done while that partition was not running. Windows offered a Volume Shadow Snapshot (VSS, a/k/a Volume Shadow Copy Service) that facilitated copying system files even when they were in use. But according to Microsoft (2021), Linux still lacked a VSS equivalent. Thus, in Linux, as in the case of Windows programs (e.g., Kyhi) that did not use VSS, it appeared that these various file copying approaches would best begin by mounting an inactive VDI as a drive, and would then copy from that drive. I was not sure how to reconcile this impression with what seemed to be a claim, by MakeTechEasier (Diener, 2015), that the Disks utility could create a bootable backup while the system drive was running.
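As for mounting an inactive VDI, one common recipe used qemu-nbd to expose the virtual disk as a block device. A hedged sketch, assuming qemu-utils is installed and that the VM’s root filesystem is the third partition (adjust p3 to suit):

    sudo modprobe nbd max_part=8                  # load the network block device module
    sudo qemu-nbd --connect=/dev/nbd0 Ubuntu.vdi  # expose the VDI as /dev/nbd0
    sudo mount /dev/nbd0p3 /mnt                   # mount the root partition
    # ... copy files from /mnt here ...
    sudo umount /mnt
    sudo qemu-nbd --disconnect /dev/nbd0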

Since I had not used rsync or dd in this context, I was not sure why those methods of copying files would be superior to simply using GParted to clone partitions. (GParted was included in some Linux installations but would have to be installed in others.) I had not used GParted for that purpose. Both GParted and Clonezilla would assume that the partition being cloned was not presently running.

In the webpages I visited, Clonezilla was probably the most frequently mentioned V2P tool. Clonezilla’s homepage described it as being based on and/or using several other tools, notably Partclone, Partimage, ntfsclone, and dd. At this point, Clonezilla and dd seemed to be the tools that I definitely had to try.

Mondo Rescue was another possibility. But TechRepublic (Wallen, 2021) appeared to be scraping the bottom of the barrel in highlighting it as one of five noteworthy drive cloning tools. Its homepage seemed to indicate that progress in updating the program was very slow. Its last and possibly only mention in Distrowatch appeared to date from 2013 — consistent with the Last Update date cited at SourceForge, where it had been downloaded only four times in the past week. Its Softpedia page presented an average of 2.9 stars from a total of 23 raters, with 7,088 downloads, as compared to Clonezilla’s 4.3 stars from 424 raters, with 83,758 downloads, last updated in November 2019. LinuxHelp offered a closer look into Mondo’s installation process, as did Tecmint. Both of those articles appeared to date from around 2012. For my purposes, there was a concern that relatively old software could entail issues that may have been resolved in more up-to-date software. Therefore, I did not explore further.

Timeshift seemed to be a popular tool for saving snapshots of the Linux system. It was not clear, however, what those snapshots might include. Multiple sources (e.g., Linux.org, Reddit) said Timeshift was not suited or designed for full system restore, but was rather like Windows System Restore: a system repair tool useful only within a working Linux system. ItsFOSS (2020) agreed that it was like Windows System Restore, but also confusingly said, “Each snapshot is a full system backup that can be browsed with a file manager” — which, to my knowledge, was not at all like Windows System Restore. Some (e.g., StackExchange, LinuxNewbie) said Clonezilla was more reliable and/or complete because it took a cold snapshot (i.e., when the system was not running) whereas Timeshift took a warm snapshot. Installation guidance (from e.g., LinuxTechi, Otodiginet) seemed to confirm that Timeshift would only run within a working Linux system (i.e., could not run from a standalone USB drive). Nonetheless, Timeshift did reportedly offer a “Clone Ubuntu” feature that would seem useful for present purposes.

Continuing through the field of possible tools, CyberCiti recommended Trinity Rescue Kit for cloning and Redo Rescue (formerly Redo Backup & Recovery) for image backup and restore. Like Mondo, Trinity did not seem to have been highly active. Its blog reported that its most recent (and apparently minor) updates were in 2014 and 2016. The version on Softpedia (with only 4 raters and a total of 5,110 downloads) dated from early 2018; apparently the blog hadn’t been updated to account for that presumably minor update. Softpedia’s editor gave it 4.5 stars, describing it as an “Oldschool Linux distribution” that could be useful in various data rescue situations. In other words, it was not registering here as a compelling alternative.

At first, I blew past Rescuezilla, but its developer diligently registered its abilities in comments following this and another post. I decided to try it, next time I needed to image the Ubuntu VM. That writeup appears below. Finally, as described in another post, I had found Pinguy unsuitable as an Ubuntu backup tool.

The Ubuntu documentation, Wikipedia, and Tecmint listed other backup programs, but it appeared that most of them were intended for general-purpose backup, not for making restorable clones, images, or other copies of the Linux system drive. As far as I could tell, I had identified the primary candidates.

*** INTERNAL SOLUTIONS ***

The list of possible solutions (above) was roughly divisible into two groups. That is, there seemed to be two ways to go with efforts to clone or make an image of a VM’s virtual system drive.

On one hand, there were the “internal” solutions, in which the user would boot and/or use a tool on the VM’s own system drive — or at least on a Linux system booted inside the VM — so as to examine and copy the contents of the VM’s system drive when it was not running. Programs or packages like Clonezilla, GParted, dd, and Timeshift could or perhaps had to be installed on a Linux system, and would work on drives available to that system.

That seemed very different from “external” solutions, where the user would use a conversion tool or would otherwise execute commands that treated the entire VM as a single file, in Linux or perhaps in Windows. For example, at this level, it might be a matter of converting the VirtualBox VDI, VMDK, or VHD into a form that could be used to create a working installation on some other drive.

The following sections discuss internal solutions. Later sections turn to external solutions. Note that, while I found working solutions, I might have had driver issues (above) with some of them, if I had developed and used the VM more extensively before cloning or imaging it.

Using Clonezilla

Since a number of people said they had achieved V2P success with Clonezilla, and since I had some experience with Clonezilla myself, I decided to start there. My general impression, as detailed in the following discussion, was that it had an awkward and in some ways confusing interface — but that, after using it a few times, that came to seem less important than the fact that it worked. (Note: for those who chose an LVM installation, Clonezilla reportedly supported LVM2, but not LVM1.)

Background & Setup

My first task was to get Clonezilla in position to work with the VM’s virtual HDD. For that, I downloaded a bootable Clonezilla ISO. Then I went into VirtualBox Manager > select the Ubuntu VM > Settings > Storage > Storage Devices > select Controller: IDE (not SATA). The only thing listed there was VBoxGuestAdditions.iso.

To add the Clonezilla ISO to that list, I clicked on the (round) disc-plus icon next to the Controller: IDE heading. That opened the Optical Disk Selector. The Clonezilla ISO was not yet listed there. To add it to the list, I clicked Add > navigate to the downloaded Clonezilla ISO > select > Open > wait until it was added to the list > Choose.

Now the Clonezilla ISO appeared on the Storage list under Controller: IDE. I selected the Clonezilla ISO and moused over the Live CD/DVD checkbox to the right. It said, “When checked, the virtual disk will not be removed when the guest system ejects it.” I figured I would probably be starting and stopping the VM several times before this project was done, and I didn’t want to have to repeat these steps each time, so I checked the Live CD/DVD checkbox.

Later, when I was done with Clonezilla, I would want to come back to this place in Settings > Storage and remove the Clonezilla ISO. I would also want to go into VirtualBox Manager > top menu > File > Virtual Media Manager > Optical Disks tab > right-click the unwanted ISO > Remove (or select it and use the Remove button at top) > Close.

There was one other thing. I was going to be telling the VirtualBox boot menu that I wanted it to run a virtual CD-ROM drive — in this case, Clonezilla — rather than the virtual hard drive (i.e., Ubuntu). But if Clonezilla didn’t come first on the list of virtual CD-ROM drives, it wouldn’t boot. If I had installed VirtualBox Guest Additions, then its ISO would be listed under Controller: IDE in the VirtualBox Settings > Storage area. To change the order, I would want to select one of these ISOs and assign it, at right, to a higher-ranked IDE primary or secondary device. The highest ranking one was listed first (i.e., IDE Primary Device 0). Without that change, I would get an error: “Fatal: Could not read from the boot medium! System halted.”

Now that I had Clonezilla on the list of IDE storage devices, it was time to get out of Settings and run the VM. To do that, I clicked OK and then went into the main screen of VirtualBox Manager > select the Ubuntu VM > Start. When the VM began to run, it showed me a black screen — and then, within a few seconds, it displayed a VirtualBox splash screen and said, “Press F12 to select boot device.” I clicked in the box (so that my keystrokes would register there) and then hit F12. That paused the boot process and gave me a black-and-white text menu of the available boot options. The two of interest here were the 1 key, to start the Linux VM (on a virtual HDD), and the c key, to boot the virtual CD-ROM drive (containing Clonezilla, in this case). But I didn’t proceed with either of those options yet. (Here, again, right-Ctrl was the hotkey to make the VM release the mouse and keyboard.)

Before proceeding with that boot menu, I needed a target drive to which I could send Clonezilla’s output. One possible solution would have been to explore the concept of passthrough, where apparently any drive could be mounted from within a VM. A set of instructions toward providing raw hard disk access suggested that this might not be terribly hard to do. Maybe passthrough would have enabled me to attach a physical drive to the VM like I had just attached a virtual optical drive (i.e., the Clonezilla ISO). But that didn’t seem to be an option now. In Settings > Storage > Controller: SATA, I clicked on the rectangular icon, to add a hard disk. But when I clicked Add at that location, it said, “Please choose a virtual hard disk file.” There didn’t appear to be a way to make it select a physical drive.

But no problem. I had a 64GB USB drive, large enough to hold the VM’s partitions, formatted as NTFS. When installing Ubuntu in the VM, however, I had gone with the default ext4 file system. I wondered whether that would require the target drive to be formatted in ext4 as well. The answer seemed to be no. A StackExchange discussion made clear that people often used the standard Windows NTFS filesystem with Linux. A comment in that discussion also indicated, however, that NTFS was “not as well implemented on Linux as on Windows.” Maybe I would try NTFS later. For now, to give Clonezilla every chance at success, I used diskmgmt.msc (or could have used MiniTool Partition Wizard or AOMEI Partition Assistant or some other tool) to quick-format it as exFAT, which one comment said was “well implemented in Linux.”

This seemed to work OK with this VM. It did not work well in a previous try, using the preconfigured OSBoxes VM download mentioned above on an Ubuntu host system. In that case, for posterity, I will mention that there seemed to be several possible explanations:

  • An AskUbuntu answer seemed to say that the process of attaching a physical drive to the VM began with making sure the user was in the vboxusers and/or vboxsf groups, in the host and guest operating systems.
  • Another source led me to discover that installing VirtualBox on the host by using sudo apt-get install virtualbox instead of using the installer downloaded from the VirtualBox downloads page could leave me with a version of VirtualBox that was less up-to-date, and that updating from the wrong place could then yield a mismatch between VirtualBox and its Extension Pack. (See e.g., the VirtualBox Old Builds webpage.)
  • In the older version that I initially installed from the command line, I went into VirtualBox Manager > top menu > File > Preferences > Extensions. There, I saw not only the Extension Pack, but also a slightly mysterious VNC extension.

With those prior maldiscoveries noted here for posterity, I went back to my new, ready-to-go Ubuntu VM on this Windows desktop system. It was still frozen at the boot menu. Before proceeding with that, I went to its top menu > Devices > USB. This gave me a list of USB devices plugged into the computer. I selected the USB flash drive. (Note the suggestion to attach a USB drive by using the USB filter method instead.) Attaching the USB drive here made it unavailable to the Windows host system. And now I was ready for Clonezilla.

VM Drive Cloning with Clonezilla

As just described, I had set up the VM to boot Clonezilla or Ubuntu, as I preferred, and had attached a physical USB drive to serve as Clonezilla’s target. I had booted the VM and hit F12 to access its boot menu. Now I proceeded to choose c to boot from the Clonezilla virtual CD-ROM. That gave me the Clonezilla startup screen. I arrow-keyed down to the Other Modes of Clonezilla Live option > To RAM.

Clonezilla scrolled various commands that it was executing, with occasional pauses, and then stopped to offer several introductory options — which, for me and perhaps for most users in the U.S., amounted to just hitting the Enter key three times in a row, to select a few default options (i.e., language, keyboard, start Clonezilla).

That put me at a screen offering a half-dozen different modes. Leaving aside the remote and lite-server and -client options, the only real choices for me here were device-image and device-device. As those names may suggest, this was a choice between making an image to a target drive, which I would then have to restore to another drive, or directly cloning the VM’s drive to a target drive, which would ideally be bootable without any further hassle.

This was the point where, in my tinkering with the previous OSBoxes VM, I discovered that I should have investigated what my VM was all about. There were some easy ways to acquire information on that. In the main screen of VirtualBox Manager, I could click on the little box at the right end of the colored bar where the VM was listed and select Details > Storage section. This would tell me the size of the configured VM. It, or the slightly more detailed VirtualBox Manager > Settings > Storage area, would have told me that the OSBoxes VM was configured to expand dynamically to a size as large as 500GB. I could get similar information inside the VM via its Start Menu > System Tools > Disks (i.e., gnome-disks) tool. Size mattered, here, because Clonezilla treated the VM as if it had already been expanded to its maximum possible size: I wound up having to use a 1TB external HDD as the target disk for both of these Clonezilla options (i.e., device-image and device-device). Since that was obviously not the desired solution, my next trick was to use GParted to shrink the several bloated partitions to a size that would fit on the USB drive. But I was still not out of the woods. OSBoxes had also gifted me a farrago of so many partitions as to infuse new dimensions into a disk junkie’s concept of a “cluster.”
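For reference, a command-line way to check a VM’s configured versus actual disk size, sketched here with a placeholder path:

    VBoxManage showmediuminfo disk W:\Ubuntu\Ubuntu.vmdk
    # compare the "Capacity" (maximum) and "Size on disk" (current) lines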

This time, we weren’t going there. We had created a nice, tight little VM with just a few partitions and a size that would fit on the target USB drive. Instead, it was a simple matter of choosing device-device, the second option on this Clonezilla menu, described there as “work directly from a disk or partition to a disk or partition.” I was going to try to clone the VM’s virtual system drive directly to my exFAT USB drive, and I would hope that the latter would then prove bootable. (HowToForge offered a tutorial on a similar process using VMware instead of VirtualBox.)

With that decision made, Clonezilla’s next question was whether I was a Beginner or an Expert. You might say they amounted to the same thing. The Expert mode asked a few more questions, but either I agreed with their default values or I didn’t understand them (in which case Clonezilla’s advice was, “If you have no idea, keep the default value”). So for several of the questions in Expert mode, it was a matter of just hitting the Enter key to accept the default.

Either way, for my purposes, the first meaningful question was whether I was talking about disks or just partitions. Since I wanted to clone the whole virtual disk, with any hidden boot partitions that might exist in the VM, I made sure disk_to_local_disk was selected, and then hit Enter.

Gratifyingly, Clonezilla now admitted that I had succeeded in giving it access to two drives. First on the list was VBOX_HARDDISK. That would be the VirtualBox virtual hard drive containing the Ubuntu installation. Second was the USB drive. In this screen, Clonezilla wanted to know which would be the source drive for my cloning process. The VBOX drive was already selected, and that was correct — I wanted to copy from the VBOX drive — so I just hit Enter. On the next screen, that left the USB drive as the only possible target, so it was Enter once more.

After hitting Enter a few more times, to accept additional default choices, we got to the point where the action was about to begin. Clonezilla repeatedly asked whether I really knew what I was doing and had carefully reviewed which drive was going to overwrite which. I confirmed and reconfirmed my way through those. Then Clonezilla went to work. Unlike other recent episodes, it ran for a respectable amount of time, like you’d expect from an effort to clone some tens of gigabytes. When it was done, I chose the option to power down the VM.

Then I tried booting a computer with the USB drive. It worked. In some previous attempts, it didn’t. For those, a search for answers led to the possibility that it was a case of GPT-MBR mismatch. Other possibilities were that Clonezilla’s documentation recommended using the Other Modes > To RAM option, rather than the default boot option that I had always used (though I did not see how that could make any difference); or the source drive was larger than the target (though that would have been true, in this case, only if the maximum dynamic sizing of the VM was treated as its actual size).

In exploring the possibility of GPT-MBR mismatch, the Disks utility (i.e., gnome-disk-utility) in the host indicated that the partitioning of the target USB drive was Master Boot Record (i.e., MBR). In the VM, it said that the virtual drive used GUID Partition Table (i.e., GPT). So, bingo! It seemed that could explain it. To fix that, I went into GParted in the host system > select the Kingston drive > menu > Device > Create Partition Table > select gpt > Apply. Then right-click > New > create a new ext4 partition on the Kingston > green Apply button. That completed successfully, but did not resolve the problem.
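For reference, a hedged command-line equivalent of those GParted steps, assuming the Kingston drive was at /dev/sdX:

    sudo parted /dev/sdX mklabel gpt                             # replace MBR with a GPT partition table
    sudo parted -a optimal /dev/sdX mkpart primary ext4 0% 100%  # create one full-size partition
    sudo mkfs.ext4 /dev/sdX1                                     # format it as ext4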

Fortunately, as I say, the procedure outlined in this section evaded that problem: I had a working physical installation from a virtual machine.

Troubleshooting the VM with fsck

The next question was whether the newly created physical installation had any problems that weren’t obvious. To find out, I tried several ways of running fsck, with the following results:

  • PhoenixNAP suggested sudo reboot > hold the Shift key down during bootup > Advanced options for Ubuntu > choose the latest Recovery Mode entry > fsck > Enter > Yes. That produced an error.
  • Ask Ubuntu recommended that I boot a separate Linux USB drive and run fsck from there. I booted the laptop with Ubuntu installed on a USB drive (i.e., not live), and used that to examine the USB drive containing Clonezilla’s clone of the VM. Specifically, I ran Disks to ascertain that the new cloned USB drive was /dev/sdb. On that basis, I ran sudo umount /dev/sdb. It said, “umount /dev/sdb: not mounted.” So then I ran sudo fsck /dev/sdb as Tecmint advised. That produced a perplexing result: “/dev/sdb is in use” and “e2fsck: Cannot continue, aborting.”
  • Re-creating the clone on a USB drive formatted as ext4 on the laptop (booted in Linux) did not solve the problem. It merely established that Clonezilla, running in the VM on the Windows desktop computer, was willing and able to work with an ext4 USB drive.
  • I replaced the Clonezilla ISO, in the VM, with another Linux ISO. I used the 64-bit LXLE 18.04.3 distro. Hitting F12 on bootup then allowed me to run LXLE instead of Ubuntu in the VM. In LXLE, I chose the legacy boot option. From its (upper left corner) menu button, I ran Control Menu > Utilities > GParted. The only drive it could see was /dev/sda, whose partitions and size seemed pretty clearly to be those of the Ubuntu VM. So then, in LXLE’s upper left corner, I clicked on its Terminal icon (or could have used Ctrl-Alt-T) and ran the same umount and fsck commands (but for sda rather than sdb), and got the same results. So it seemed that the problem was not with the resulting USB drives, but rather with the source drive.
  • I shut down LXLE, went into VirtualBox Manager > Settings > Storage, removed the LXLE ISO, restarted the Ubuntu VM, ran sudo touch /forcefsck as Tecmint suggested, and rebooted the VM. I expected to see fsck in process, but no: all I got was an extended view of the Ubuntu splash screen. That taught me to disable the splash screen. I did it again, but still didn’t see anything that looked like a checking of the filesystem. I ran ls /forcefsck to verify that the newly created forcefsck file no longer existed, so that the check would not re-run every time I started the VM. I shut down the VM, reinstated the LXLE ISO, went back into it, retried umount and fsck, and saw that the problem remained unchanged.

A search led to no other immediate solutions. Until some new insight resolved the situation, it seemed that I had to accept that fsck might not run on this VM.
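One possibility I did not get around to testing: fsck (here, e2fsck) normally expects a specific filesystem — that is, a partition — rather than a whole-disk device like /dev/sdb. A hedged sketch of the per-partition approach, with the partition number assumed:

    lsblk /dev/sdb            # list the drive's partitions
    sudo umount /dev/sdb3     # unmount the root partition, if mounted
    sudo fsck -f /dev/sdb3    # force a check of that one filesystem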

VM Drive Imaging with Clonezilla

The alternative procedure, in Clonezilla, was to use device-image, the second option on the Clonezilla menu. The menu described that option as “work with disks or partitions using images.” In other words, I was going to create and then restore an image of the VM’s Ubuntu installation. Dedoimedo’s Clonezilla tutorial (2011) was useful for general-purpose guidance, but wasn’t written for V2P.

You may notice that this option was first on Clonezilla’s menu, but I did not begin with it. There were three reasons. First, I didn’t like Clonezilla’s idea of a drive image. It turned out to be a folder, not a single image file like an ISO or an Acronis TIB or an AOMEI ADI. That could get confusing if the user didn’t remember how Clonezilla images worked, or if those folders’ duplicative files got deleted by accident in a search for duplicate files on a drive. Second, I didn’t favor the two-step process of having to create an image and then restore from an image. Granted, this would give you a backup, whether you wanted it or not. But it was less direct than simply cloning the drive — and, as just seen, it might prove unnecessary: Clonezilla’s direct cloning seemed to work. Third, if I wanted to be fooling with image files, I might not go to the trouble to set up and run Clonezilla. I might try, instead, to do one of the simple conversions described above, from the VirtualBox VMDK (or VDI, or VHD) to an IMG or ISO or other format that might be restored directly to bare metal.

Yet here we were. I decided to cover Clonezilla’s imaging procedure after all. In some regards, it turned out not to be terribly different from its cloning procedure (above). I had to add the Clonezilla ISO in VirtualBox Manager > Settings > Storage; I had to connect a USB drive in the VM’s top menu > Devices area. Actually, in this case I connected two different USB drives. I chose drives by different manufacturers so that it would be easy to tell them apart in program menus and windows. One was a Patriot; the other was a Silicon Power.

Proceeding through Clonezilla’s menus, I went with the default local_dev option. Next, I had to designate a partition or drive to serve as /home/partimag. This was the place that Clonezilla would be writing an image file to (in the case of backup) or reading an image file from (in the case of restore). In other words, as its name suggests, it was the place for images, not for the user’s system or data files. In the present case, I hoped the correct answer here was the Patriot drive. It was only 32GB, and I wasn’t sure how Clonezilla would respond to that.

So I designated the USB drives. That added them to the yellow-white-black menu where Clonezilla listed the drives that it considered available for the imaging process. When I saw that the list included the two USB drives and the VirtualBox (VBOX) drive, I could go to the next step, so I hit Ctrl-C as instructed there, and then I could select the Patriot as /home/partimag and hit Enter.

The next screen said that Clonezilla assumed the target directory for the image was its root (i.e., / ) directory. That was fine. If I had wanted it to be in a subfolder, I might have had to create that folder before starting down this road. Since the root folder was OK with me, I tabbed to Done and hit Enter. It confirmed and gave me a choice between Beginner and Expert, as above. I went with Expert.

The next choice was savedisk, not saveparts: I wanted to capture the whole drive as a single image. I didn’t bother changing the name; I would do that later, on the zip file that I would be saving as a backup in Windows. Next, I hit the spacebar next to the VBOX HARDDISK entry, to designate it as the source. I went with the defaults except to add a bunch of zeroes on the size screen, to force Clonezilla not to split the image into multiple files, though I suppose that wasn’t necessary.

After confirming that I was sure, Clonezilla got underway. It was done in maybe 15 minutes. I told Clonezilla to reboot, hit F12, checked top menu > Devices > USB, and then hit c to begin the restore process. It was still a device-image process, and the Patriot drive was still /home/partimag (i.e., the drive holding the VM’s drive image). The screen asking for the location of the Clonezilla image repository said the (confusingly) highlighted CZ_IMG entry was actually not the correct choice. The correct choice was still the root ( / ) partition, and the screen said that “/” was still the “Current selected dir name,” even though it sure didn’t look like it. So I tabbed to Done and hit Enter.

Next, I chose Expert mode > restoredisk. The target disk was obviously not going to be the VBOX drive that I had just saved into the image file: it was, rather, the Silicon Power USB drive. In the “advanced extra parameters” screen, I could have arrowed down and hit the spacebar to select -icds (“Skip checking destination disk size before creating partition table”) if I’d had any worries about Clonezilla not liking my target disk size, but in this case that didn’t look like it would be a problem. So I just went with the default extra parameters. Other than that, I accepted the default values and, once again — after several more confirmations — we were on our way.

When Clonezilla finished the restore process, I powered it down. That released the USB drives, so I could take a look at what I had. Windows promptly sought to reformat the Silicon Power USB drive, since Clonezilla had correctly copied some of its partitions as ext4, which Windows didn’t recognize. In Windows File Explorer, I used right-click > Eject on the partitions that Windows didn’t seem to understand. Then I removed the Silicon Power USB drive from the Windows desktop and plugged it into the laptop, which was running Linux. In that Linux system, I used Disks to compare the partition table for the Silicon Power USB drive against the partition table that Disks showed me for the system drive inside the Ubuntu VM on the Windows desktop. The partitions were all there, and they were almost but not exactly the same size.

Overall, GParted on the laptop said that used space on the Silicon Power USB drive totaled about 8.8 GiB. In Windows File Explorer > right-click > Properties, I saw that the VM used 9.7GB. The image folder on the Patriot drive was 4.0GB, as was the WinRAR compressed RAR file.

And yes, the Silicon Power USB drive did boot and run Linux, just like in the VM. So Clonezilla succeeded both in cloning and in image backup-and-restore operations.

Trying GParted

Once I knew how to boot another tool (e.g., Clonezilla) alongside an operating system (e.g., Ubuntu) inside a VM, I wondered why a person would go to the trouble of using a tool like Clonezilla if — as some sources suggested (above) — s/he could instead use GParted to clone the VM to an external drive.

The first steps in this approach seemed obvious enough: find a GParted ISO; install it as a virtual disc in the VM, just as I had done with Clonezilla; and boot that tool to conduct operations on the VM’s primary operating system drive. Without rehashing in detail the steps (above) required for those operations, let us proceed here to the question of how I used GParted — once it was booted in, and an external USB drive was attached to, the VM.

Briefly, after replacing Clonezilla with GParted in Settings > Storage, I started the VM, hit F12 to open the boot menu, went into top menu > Devices > USB to attach the target USB drive, and then hit c to boot GParted as a virtual CD-ROM drive. As with Clonezilla, I ran GParted in Other Modes > To RAM.

GParted required me to hit Enter three times, at the start, to accept default settings that worked for me, and then I was looking at the usual GParted screen, with a choice among available drives in a drop-down menu at upper right. That menu showed /dev/sda, which was pretty clearly the Ubuntu installation in the VM, and /dev/sdb, which was the empty NTFS USB drive. According to GParted, as expected, the latter had a maximum capacity of 57.87 GiB, which was just above the 57.00 GiB of the Ubuntu installation.

So it seemed that cloning should work. To try that, I directed GParted’s attention to /dev/sda. But now I saw that the only relevant option on the GParted menu was Partition > Copy. It (and the Copy button on the toolbar) was grayed out, presumably because I wanted to copy the entire virtual drive. I selected a single partition and revisited that menu pick. It quickly developed that GParted seemed interested in copying only some partitions: fat32, ext4, NTFS, and linux-swap, but not GRUB2. So apparently the plan here was that I would create a GRUB2 partition manually, and would figure out how to set it up, but I could copy the other partitions, one at a time. Suddenly it wasn’t so certain that this would be faster and easier than the Clonezilla image create-and-restore operation.

Was I understanding this situation correctly? A search led to articles by AddictiveTips (Diener, 2019) and AOMEI that confirmed I had the basic idea: cloning with GParted would proceed one partition at a time. Neither of them said anything about GRUB2, which made sense: they were talking about Windows or data drives, not about Linux system drives. It was telling that GParted was not among the four methods of cloning your Linux drive discussed in a MakeUseOf (Cawley, 2021) article. I decided to shelve this approach until I had time and motivation to explore what seemed to be a potentially complicated process.

Trying Timeshift

In browsing various sources for an introductory sense of what Timeshift was (above), I had arrived at the impressions that it might be able to clone the VM's Ubuntu installation; that it might fail to capture user settings in the process; and that it might not be as reliable as Clonezilla. I decided to take a firsthand look, to see whether my own experience squared with those initial impressions.

iTecTec said, “User settings are stored in the Home folder by design. … Settings (Firefox profile, appearance, …) are often stored in hidden folders (or files). Hidden folders/files are prefixed with a dot, like .mozilla for Firefox (and other Mozilla applications).” To see those in the Ubuntu file manager, I just had to hit Ctrl-H. In the VM’s Home folder, I did see a .mozilla/firefox folder.
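From a terminal, the same hidden entries could be listed with something like ls -A ~, where -A includes the dotfiles that plain ls omits.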

I hadn’t done much to personalize the Ubuntu installation in the VM. I suspected that I didn’t need to, right now — that if Timeshift was going to fail to bring over user settings, it would probably fail to bring over anything in the Home folder. But to make sure that the VM did have at least a bit of a personal touch, I made two changes:

  • In Firefox, I went into Sync and Save Data > Sign In > Get Started. That installed my Firefox add-ons and, no doubt, a number of personalized settings.
  • While I was in Firefox, I downloaded a new “Sunrise” desktop wallpaper. With (I think) the aid of sudo nautilus, I moved it to /usr/share/backgrounds. Then I right-clicked on the desktop > Display Settings > Background > Add Picture > Other Locations > Computer > usr > share > backgrounds > Sunrise.jpg > Open > click on Sunrise.jpg.

With that done, I proceeded to install Timeshift. LinuxTechLab reported that, starting with Ubuntu 20.04, it only took a single command to install Timeshift: sudo apt install timeshift. To run it, I went into Show Applications > scroll to the very last page > click on Timeshift icon.

When I first started Timeshift, and thereafter in its Settings > Type screen, I could choose between RSYNC and BTRFS. NewBeDev said BTRFS was supported only on certain older systems. I left it at the default RSYNC. But the next tab in Settings (i.e., Location) seemed to say we were going to have a problem: it allowed me to select only one of the partitions in the VM’s virtual system drive. As I looked again, it seemed that this might be intended as the target location, not the source. Moreover, the Schedule tab did not offer a one-time backup option. The Users tab did allow me to include /root and /home/ray files in the backup. But the Clone Ubuntu button that seems to have existed in Timeshift c. 2014 was no longer to be found.

I actually found the tool dangerous. When I unwisely clicked its Create button, expecting that this would lead into a set of choices as the other buttons did, or at least a finalizing "Are you sure?" question as in Clonezilla, it instead lurched immediately into creating a snapshot at some unspecified location, presumably on one of the two VM partitions listed in its Location tab. A bit of hunting turned up a folder at Files > Other Locations > Computer > Home > Timeshift.

To uninstall Timeshift, TeeJeeTech suggested running Timeshift and viewing “the list.” I did not see a list. In Timeshift > Restore, I saw “No snapshots selected” and none listed. If there had been one, TeeJee said, using Ctrl-A > Delete would “delete all snapshots and remove the /timeshift folder in the root directory.” It seemed that I could only proceed with TeeJee’s other suggestion, which was to close Timeshift and run sudo apt-get remove timeshift. After that, the folder still existed at Other Locations > Computer > Home > Timeshift. With (I think) the aid of sudo nautilus, I was able to delete it.

Later, when I ran lsblk (below), I saw that Timeshift seemed to have taken over my 20GiB partition on the VM’s virtual drive: that partition was now labeled /run/timeshift/backup. I used sudo apt-get install gparted and then sudo gparted to get a clearer look at the situation. The situation seemed to be that, probably in response to my selection of that partition, Timeshift had added a /run/timeshift/backup mountpoint to that /home partition. I restarted the VM and looked again. Now the /run mountpoint was gone.

As far as I could tell, then, Timeshift was not suitable for cloning or imaging the Ubuntu VM to a new location. There may have been a way, using Timeshift's command-line interface (CLI) capabilities, but I did not find a guide to those, nor did the GUI convey any hint that the CLI had such capabilities.
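(For the record, later reading suggested that Timeshift did have at least basic CLI operations, along roughly these lines. I did not verify them against my installed version, so treat them as illustrative rather than authoritative:

sudo timeshift --list
sudo timeshift --create --comments "one-off snapshot"
sudo timeshift --restore

Whether a restore could target a different physical device, so as to amount to cloning, was exactly the part I could not confirm.)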

Using dd Internally

Note, again, that a later post improves upon this one, particularly with respect to the desirable form of the dd command. The following material may nonetheless be useful as background.

Yarygin (2021) said that dd (actually short for “data definition”) could be used for cloning drives or partitions, compressing a drive into an image, creating a disk image, or erasing a drive. The first two were the ones of interest here. To clone a drive or partition, Yarygin (with the aid of other sources, e.g., ServerFault) recommended several command variations that boiled down to this:

sudo dd if=(source) of=(target) bs=16M status=progress

(I have to use parentheses rather than brackets, here, due to another example of WordPress taking it upon itself to reinterpret what the writer has posted: bracketed references to “source” disappear.)

In this command, “if” was (or at least could be taken as) short for “input file,” and “of” essentially meant “output file.” You were cloning everything, byte for byte, from sda to sdb. The bs=16M portion specified the block size, to speed up copying. The status=progress part merely told dd to give you progress reports.

That form of the dd command was not adjusted to compensate for bad blocks. Such compensation might include conv=sync,noerror at the end, and might also use bs=4096. The thinking was that a drive with bad blocks might be much better handled by ddrescue. Be that as it may (see further debate), I was deterred from pursuing Ubuntu’s ddrescue (or variations) because ddrescue was reportedly unable to use gzip (below) to compress the resulting file, at least without complex and/or potentially unavailable workarounds.
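To illustrate, a bad-block-tolerant variant along those lines (untested here, and using the same sda source and sdb target as below) would have looked something like this:

sudo dd if=/dev/sda of=/dev/sdb bs=4096 conv=sync,noerror status=progress

Here, noerror tells dd to continue past read errors, and sync pads any short read with zeros, so that the target stays aligned with the source.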

Everyone emphasized being careful with dd. Yarygin suggested that it might be best not to drink alcohol while using dd. Although I naturally found it very tempting to get completely smashed and then play with powerful Linux commands I had never tried before, I realized that he was probably right. I really had no desire to bork the VM, much less my Windows desktop system. For people thinking this way, the advice was: check and double-check your dd command before you enter it. Even if you truly are three sheets to the wind. Even if it all seems fun.

The mission at hand was to clone the VM’s virtual HDD to an external USB drive. To verify that I was selecting the right USB device, I viewed it in Windows Explorer, and then watched it disappear when, as above, I went into the VM’s top menu > Devices > USB > select the Silicon Power USB drive, which I had labeled SP01. In the VM, that move added SP01 to Ubuntu’s file manager’s list of mounted drives.

Now it was time to work up the dd command. To identify the input and output (i.e., source and target) drives, I ran lsblk. Aside from some loop items that did not seem relevant, it was pretty obvious that (as GParted also said) the Ubuntu VM’s drive was sda and the USB drive was sdb. So my own version of Yarygin’s command (above) was sudo dd if=/dev/sda of=/dev/sdb bs=16M status=progress.

So I ran that. The Silicon Power USB drive had the virtue of having an LED that flashed when it was active. That light started to flash. So we knew that either its empty bytes were overwriting the VM’s virtual HDD or (hopefully) vice versa. Whatever it was doing, the progress indicator said it was doing it at a rate of 10.4 MB/s (and slowing). At that rate, it was going to take close to two hours to copy 60GB byte-for-byte. Definitely not as fast and intelligent as Clonezilla. But, on the plus side, entering the command was a lot easier. And the process might have been significantly faster if I had chosen an external HDD or SSD instead of a USB flash drive.

While this was going on, the VM’s hyperactive power setting blanked the screen, leading to a situation where I accidentally hit the Enter key in Terminal where dd was running. No harm done, apparently: it just used that as an excuse to stop the progress indicator on one line and continue it on the next.

When it was done, I detached the USB drive from the VM, ejected it from Windows 10, and booted another computer with it. It worked. The Firefox and wallpaper customizations that I had installed in the VM were there too. GParted indicated that partition sizes were identical, though for some reason there were some differences in amounts of space used.

Later, I saw a ServerFault warning that I should not use dd to clone (or, presumably, as described below, to make an image of) a drive or partition that was being used. In that case, apparently the solution would be to boot the VM with another Linux ISO, as described in the Clonezilla and GParted processes (above), and run the dd command from there, still treating the VM’s virtual HDD as the source. My apparent success in this case was presumably due to the fact that this Ubuntu installation was new and had little going on while I was cloning it, so there was not much opportunity for anything to get corrupted.

Yarygin (2021) said that, in addition to cloning, dd could also produce a disk image. For that, the recommended command took this form:

sudo dd if=(source) bs=16M | sudo gzip -c > (target)

In this command, gzip refers to the gzip data compression program (see documentation regarding the -c parameter). The basic idea is that, instead of sending output to a USB drive, this command would run the output of dd through gzip and save it in the target file. Sending the output into a compressed file made it unnecessary to use a target drive at least as large as the source drive (e.g., MakeUseOf). With a large source partition, it could also make the process much faster. (A ServerFault answer suggested a procedure for first defragmenting the source partition, so as to further shrink the output.) Note that the sudo that mattered was the one on dd, which needed root access to read the raw device; the > redirection was performed by my own shell, with my own permissions, regardless of any sudo, so the sudo on gzip was just a precaution.

With a file as the target, MSNiner explained that dd could not use the /dev/sdb address used above. Instead, lsblk told me that /dev/sdb was mounted at /media/ray/SP03. So the command I actually used was sudo dd if=/dev/sda bs=16M | gzip -c > /media/ray/SP03/UbuntuVM.image.gz. Here, again, it seems I should have been running that command in some other booted Linux, still treating the VM’s virtual HDD as source.

This time, dd did not immediately begin to write to the USB drive; it was some seconds before the Silicon Power’s LED light began to flicker. I assumed this was because it was adding compressed stuff to the output file in batches. As with the clone process, dd copied the VM’s full ~60GB. Here, again, it might have been faster to write to something other than a USB flash drive.

The process did not reformat the target drive; it remained NTFS-formatted with the same SP03 name. The resulting UbuntuVM.image.gz file on the USB drive was 5.3GB. When I opened it using WinRAR, I saw that it contained only a single 57GB UbuntuVM.image file. Windows didn't seem able to look inside it. I copied it to a 1TB HDD attached to my laptop, which was running Ubuntu at the moment. There were permissions issues. I didn't research them. Instead, I just used sudo nautilus and then right-clicked on open space in the Nautilus file manager > Open in Terminal. In Terminal, as advised by CyberITHub, I ran gunzip -v UbuntuVM.image.gz. That command did not provide a progress indicator, but I could see the HDD's light flashing; I just had to wait until it was done. It deleted the source .gz file, replacing it with UbuntuVM.image (61.2GB). A SuperUser answer explained that I was unable to extract or inspect further because, of course, this was "a whole disk image," comprising multiple partitions, so it wasn't as though I could simply mount it as a virtual drive. That answer provided steps to set up "a loop device for each partition." I didn't pursue that.
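(For the curious, the loop-device approach would presumably have run along the following lines. I did not test this, and the partition number is a guess:

sudo losetup -fP --show UbuntuVM.image
sudo mount /dev/loop0p2 /mnt
sudo umount /mnt && sudo losetup -d /dev/loop0

The -P flag asks the kernel to scan the image's partition table, exposing e.g. /dev/loop0p1 and /dev/loop0p2, while --show prints the loop device that was assigned. Mounting whichever partition held the root filesystem would then allow browsing the image's files without restoring it to a physical drive.)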

I was curious about the contents of the .image file because I wondered why the image produced by Clonezilla was smaller than the one produced by dd. I suspected the answer was that Clonezilla intelligently excluded the contents of the swap partition. Presumably that wouldn’t have been an issue if I had run dd from a different Linux partition as it later seemed I should have done: perhaps in that case there would have been nothing in the swap partition.
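Relatedly, a common trick for shrinking dd images (which I did not test here) was to zero out the source filesystem's free space just before imaging, since gzip compresses long runs of zeros almost to nothing:

dd if=/dev/zero of=~/zero.fill bs=16M
sync
rm ~/zero.fill

The first command runs until the disk is full and then stops with a harmless "No space left on device" error; deleting the file afterward leaves the free space zeroed. Presumably that would have narrowed the gap between the dd and Clonezilla image sizes.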

There remained the question of how to restore the .gz archive that dd had produced, so as to create a bootable drive. One approach might be to unzip it and then use something like Yarygin's (2021) suggested sudo dd if=UbuntuVM.image of=/dev/sda bs=16M. But rather than devote time and space to manipulating that large archive, I followed CyberCiti's suggested approach of keeping it compressed in the .gz file and restoring directly from that file, like this:

sudo gunzip -c [.gz file] | sudo dd of=[target]

To make that work, I attached two different brands of USB drive to the VM (via top menu > Devices > USB). The Patriot drive contained UbuntuVM.image.gz. The Silicon Power drive was the target. Here, again, the source was a file rather than a drive, so the command would refer to it by its mounted path rather than by a device name like /dev/sdb. With information from lsblk, the foregoing command translated into sudo gunzip -c /media/ray/PATRIOT/UbuntuVM.image.gz | sudo dd of=/dev/sdc.

That worked: the Silicon Power drive booted the laptop with an Ubuntu session resembling what I had in the VM. This gave me a dd procedure that, while a bit more roundabout, produced a compact backup image along the way.

In short, dd was able to do most of the same things as Clonezilla: clone the VM to a physical drive, image the VM to a backup file, and restore that backup file onto a physical drive. There was still the caution that, as one SuperUser answer put it, multiple complexities “make dd not the best tool for creating disk images in general, especially if you don’t know its quirks very well.” With few exceptions, that writer preferred ddrescue. It appeared that dd was unable to create an ISO from a VM, or at least I was seeing nothing on that, so I posted a question about it. The later post has more on that.

Linux Virtual to ISO Solutions

Some internal Linux tools were oriented specifically toward converting a running Linux system to an ISO file. If all went well, a working ISO could be used to create a bootable (e.g., USB, CD) “live” drive running exactly the Linux system that had been encapsulated into the ISO file format. An ISO could be run either in a standalone capacity — that is, as the only operating system installed on a drive — or as one of many ISOs installed on a multiboot (e.g., YUMI) device. In addition to “live” functionality, an ISO could be used to install its encapsulated (in this case, Ubuntu) operating system on a target drive.

Difficult and (Apparently) Dead Options

LinuxAddicts (Isaac, 2021) listed numerous methods for converting a Linux installation to an ISO. I looked into the methods listed on that webpage.

Among such methods, Remastersys was apparently once a leading tool for ISO creation. The original Remastersys website evidently fell into disuse, however. At this point, a safe (and apparently the only) way to view the original site was via archive, such as the Wayback Machine. The Wayback archive confirmed that the purpose of Remastersys was to enable users to make and distribute backups of their customized Linux installations. Wikipedia said that development of Remastersys ceased in 2013. An Ask Ubuntu comment (2015) suggested that “the old .deb [version of Remastersys] would not work with modern versions of ubuntu.” (See e.g., Dedoimedo (2008) for historical instructions.) It appeared that Remastersys was still able to work on Ubuntu 14.04 (Dev.to, 2020; see also Ask Ubuntu (1 2 3), Wasta, LinuxMint, and ProgrammerSought.) But by now, even Ubuntu 14.04 was seven years old.

Wikipedia said that Remastersys had forked to Debian and Ubuntu versions of Respin. The LinuxRespin webpage offered links for Debian (SourceForge) and Ubuntu (Launchpad) systems. MakeTechEasier (Kourafalos, 2020) confirmed my impression that one thing to dislike about Respin was “its almost nonexistent documentation” (but see Linux.org, 2021). Moreover, it appeared that the project itself may have run out of gas. The Ubuntu-based (Launchpad) site indicated that the “latest bugs reported” dated from 2016, and its “external downloads” link led to a GitHub page that was likewise last updated in 2016. An Ask Ubuntu question drew a comment agreeing that “all the Linux ISO remastering apps seem to have been abandoned” as of 2020.

Abandonment seemed to be the right concept for (Free) Ubuntu Customized Kit (UCK), another remastering tool on the Linux Addicts list. The leading items appearing in a search for information on UCK included an Ubuntu manpage pointing toward a SourceForge page announcing “!!!PROJECT DISCONTINUED!!!” Similarly, upon getting a warning of a suspicious page for the InstaLinux website, I tried its Wayback page — which said, “Site Decomissioned” (sic). I wasn’t sure how to search for Builder or Hook, two other items on the Linux Addicts list: what I found seemed to indicate yet another ghost town, with an echo down the canyon for LinuxLive (LiLi) (“This project is not maintained anymore”) and Debian Live Magic (linking to a dead GitHub page). Softpedia said that Ubuntu Builder was last updated in 2014 (see also Ubuntu Geek, 2012; iTecTec, c. 2013). ReLinux was apparently another promising upstart, back in the day (e.g., HowToForge, 2012) — though it might not have gotten past beta (see Ask Ubuntu, 2017). Novo Builder (see MakeTechEasier, 2010) seems to have survived until sometime around 2016. A few of the tools listed on the Linux Addicts list (i.e., Reviewer, SuSE Studio, Pungi) were limited to Linux distributions other than Debian and/or Ubuntu. See also Turnkey Linux (2010).

That left only a few potentially useful ISO-conversion survivors, whether included on the Linux Addicts list or not. Of those, some appeared complex enough to disregard: before I went to that much trouble (read: before I invited that many ways for things to go wrong), I would probably try paid software or just rely on other methods discussed in this post. One of these avoidably complex solutions was Linux From Scratch, which promised to provide me with “step-by-step instructions for building your own custom Linux system, entirely from source code” — which, according to one Quora estimate, would probably take around two days. Other complex solutions: the Live CD Customization page offered by the Ubuntu documentation, and a squashfs-tools method (see also Linux Mint). (I did not read closely enough to verify that these methods were all truly different from one another.)

For those willing to undertake a relatively complex solution, Cubic appeared promising. According to its Launchpad webpage, “Cubic (Custom Ubuntu ISO Creator) is a GUI wizard to create a customized Ubuntu Live ISO image.” TechRepublic (Wallen, 2018) explained that the purpose was not to create an ISO, but rather to customize an existing ISO, such as one downloaded from the Ubuntu website. Wallen’s example: add Kubernetes to the official Ubuntu 16.04 ISO. OSTechnix (Sk, 2020) said, “You can update the packages, install your favorite applications, remove unwanted applications from the ISO, install additional Kernels, add files and folders and add wallpapers, install themes, modify the software repositories and so on.” Sk provided what appeared to be the current commands for installation (TechRepublic’s differed slightly), and offered walk-throughs of a number of customizations (see also Ask Ubuntu). A glance at Cubic’s Questions page suggested that its use would entail complexities far beyond what I had in mind. Examples: “Auto-mount ext4 partitions at startup by modifying fstab or something else?” and “Autoinstall server iso not loading local user-data config.” For purposes of this project, I was more at the level of, “Punching the ‘Copy’ button doesn’t work.”

Finally, Linux Live Kit offered “a set of shell scripts” designed “to create your own Live Linux from an already installed Linux distribution” that would then be “bootable from CD-ROM or USB Flash Drive.” I wasn’t actually sure whether that (or any custom) ISO could function as an installer, though that’s what I wanted and was hoping for. Linux Live Kit provided an overview that did sound approximately like the other complex methods just mentioned.

Systemback

Having eliminated many possible V2ISO solutions that were no longer maintained or seemed overly complex, I was left with a few tools that, at first glance, appeared to be alive, or at least still working. Systemback was among them.

TechRepublic (Wallen, 2014) described use of Systemback to convert a Linux system into a live ISO. According to Alibaba Cloud (2020), the “Systemback author stopped its development in 2016.” The Systemback homepage said, “DEVELOPMENT AND SUPPORT ENDED,” and pointed to a Launchpad page indicating that the latest supported Ubuntu release was 16.10, and to a SourceForge page whose last update was in 2017. (Note alternatives to and updated forks of Systemback. See also an Ubuntu MATE Community thread.)

Despite Systemback’s near-demise, I found current guidance from LinuxBabe (2021; see also LinuxAddicts, 2021; Alibaba Cloud, 2020; sTechalon, 2019; UnixMen, c. 2016). With that guidance, I proceeded to install and use Systemback in the Ubuntu 21.04 VM. To do so, I began with these commands:

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 382003C2C8B7B4AB813E915B14E4942973C62A1B
sudo add-apt-repository "deb http://ppa.launchpad.net/nemh/systemback/ubuntu xenial main"
sudo apt update
sudo apt install systemback

The first command gave me a warning: “Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).” A search led to an Ask Ubuntu answer explaining the recommended alternative steps. For the moment, since the command worked, I left it as-is. The second command gave me an introduction to Systemback, repeating verbiage from the Systemback homepage, and closed with “Adding repository.” In response to its invitation, I hit Enter to continue. The remaining commands produced a series of messages about the installation process, which finished without incident.
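For reference, the non-deprecated keyring approach would have looked roughly like the following; the keyring file name here is my own invention, and since apt-key still worked, I did not test this:

gpg --keyserver keyserver.ubuntu.com --recv-keys 382003C2C8B7B4AB813E915B14E4942973C62A1B
gpg --export 382003C2C8B7B4AB813E915B14E4942973C62A1B | sudo tee /etc/apt/trusted.gpg.d/systemback.gpg > /dev/null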

After rebooting the VM, I went into Ubuntu’s left panel > Show Applications (i.e., the 3×3 dot matrix) > scroll to Systemback (or I could have run sudo systemback). Among the available options in Systemback, LinuxBabe (2021) pointed out that I could create a system restore point; copy the system to, or install the system on, another partition; or create a bootable live system ISO. I chose that last option. It defaulted to setting /home as the storage directory.

I tried to designate an ext4 USB drive. Windows was not able to recognize that; therefore it was not available to the VM. But even if it had been, I was not able to get Systemback to save the ISO on a shared NTFS drive, or anywhere other than /home. So I went with that. I named it UbuntuVM and checked the box to include user data files. Then I clicked the Create New button. It said, “Creating live system.”

That took a while. When it was done, I went into the VM’s left panel > Files application > Other Locations > Computer > home. There, I saw the UbuntuVM.sblive file that was also listed in Systemback’s Created Live Images list. Now, LinuxBabe (2021) said, I could convert that file to ISO or burn it to a USB drive. In Systemback, I selected that Created Live Images entry and clicked Convert to ISO. That ran, with a dialog that said, “Converting Live system image.” That took only a few minutes. When it was done, Files showed UbuntuVM.iso in the home folder. I moved both of those files to the VM’s shared (external, NTFS) folder.

Now the question was whether the ISO would (a) boot a computer and (b) function as an installer. To test those possibilities, I used Rufus in Windows to install it on a USB drive, accepting default values except adding (optional) persistence; then I used that USB drive to boot an old laptop. On that computer, F12 brought up the boot menu. I chose the USB drive, and that produced a Systemback Live menu with two key options: Boot Live System and Boot System Installer. The Live System option ran Ubuntu and gave me a login screen. But after I logged in, it died: the screen went black, and the USB drive’s LED showed no further activity. On a newer laptop, it worked: I got the VM’s wallpaper and was able to run Firefox. Back on the old laptop, I used GParted to wipe out an existing Windows 7 installation; it had seemed to me that I’d had problems trying to boot other USB drives on that computer. After that, the Systemback USB drive booted the old laptop too.

Then I rebooted the old laptop and tried the Boot System Installer option. It greeted me with a dialog requesting basic login information: user name, password, etc. I selected the desired target partition and checked “Transfer user configuration and data files.” That partition remained unformatted, in the wake of GParted, so I had to click the green arrow to format it. Then I selected it again, designated root (i.e., ” / “) as its mount point, and clicked the green arrow again. For simplicity, I left it at that. I clicked Next > Start. A few minutes later, it said, “The system install is completed.” I rebooted and tried it. It worked.

In this brief look, I liked Systemback. It worked, it offered a GUI, and it could produce both a bootable drive and an ISO (which itself could then be used to produce a bootable drive).

Distroshare Ubuntu Imager

MakeTechEasier (Orosz, 2015) described Distroshare as an automation of the detailed tutorial for creating a Live CD from an Ubuntu installation. It appears to have been a short-lived effort: MakeTechEasier pointed to a GitHub page listing files that were last updated in 2016. Unlike Systemback, Distroshare did not offer an option to directly create a bootable USB drive.

Ubunlog (Darkcrizt, 2021) echoed the advice of MakeTechEasier (Orosz, 2015) and others (e.g., OS Radar, 2018, AddictiveTips, 2018) to begin by installing Git. OS Radar (2018) offered instructions on how to do that for various operating systems. For Ubuntu, the recommended commands were as follows:

sudo add-apt-repository ppa:git-core/ppa
sudo apt update
sudo apt install git

I ran those in the Ubuntu VM without difficulty. Now that I had Git, Ubunlog (Darkcrizt, 2021) recommended these commands:

cd ~
git clone https://github.com/Distroshare/distroshare-ubuntu-imager.git
cd distroshare-ubuntu-imager
sudo chmod +x distroshare-ubuntu-imager.sh

The first of these commands put the user into the home directory (e.g., /home/ray), on the assumption that that is where s/he would want the distroshare-ubuntu-imager subfolder to be created. Next, Ubunlog (Darkcrizt, 2021) recommended completely updating the system, to make the image up-to-date:

sudo apt update
sudo apt upgrade
sudo apt dist-upgrade

MakeTechEasier (Orosz, 2015) and ProgrammerSought (n.d.) suggested some expert tweaks, but I ignored those. (Note: those customizations did not appear to include an option to include personal settings.) Finally, one command to run Distroshare:

sudo ./distroshare-ubuntu-imager.sh

That command took a while to run. That was not surprising: in its words, it was “Copying the current system to the new directories.” There was neither a GUI nor a progress indicator. These aspects of Distroshare, plus the requirement of Git, left me feeling that the Distroshare installation required more time, complexity, and software than the Systemback installation, and that its actual copying process was less user-friendly.

The process ended with, “Writing to ‘stdio:/home/distroshare/live-cd.iso’ completed successfully.” To move the file from /home/distroshare/live-cd.iso to the shared (NTFS) drive, I used sudo nautilus. Then, as in Systemback (above), I used Rufus in Windows to burn the ISO to a USB drive. MakeTechEasier (Orosz, 2015) said that, in Linux, I could use something like dd if=live-cd.iso of=/dev/sdX bs=1M to achieve the same thing; AddictiveTips (2018) recommended the safer Etcher tool instead of dd.

As above, I used the USB drive to boot the old laptop. It worked, but it didn’t bring over my wallpaper, confirming that it did not copy personal settings — which may be just what some users would want. It wasn’t what I wanted; I wanted a complete copy. So that was another drawback of Distroshare, for my purposes.

Customizer

MakeTechEasier (Kourafalos, 2020) included Customizer at the very end of its list of tools to create a custom Linux distro. It said this:

Customizer isn’t under active development anymore, but that, according to its developer, is because it is considered stable. It is another tool with which you can remix Ubuntu, but it also supports its different flavors, like Xubuntu and Kubuntu. A critical restriction, though, is that the host system under which you are using it should share the same release number and architecture as the guest system you are remixing.

I wasn’t sure whether other tools (e.g., Systemback) likewise supported Xubuntu et al. The linked page seemed to confirm that Customizer was last updated in 2019. The Read-Me file on that page cited a manual and a wiki, and confirmed that “Customizer is stable and not under active development” — but also said that “Recent releases … support remastering … [only up to] Ubuntu 17.04.” That sounded like declaring victory and then retreating. The wiki’s First Guide page clarified,

IMPORTANT Use the same release and architecture of both host system and Live CD, but not necessarily be [sic] the same operating system.

For example, a user can run Xubuntu 14.04 32-bit host system to remaster Ubuntu Mini Remix 14.04 32-bit ISO image. Using same release (14.04) and same architecture (32-bit).

That wiki page appeared to assume basic familiarity with certain aspects of Linux that were no problem for me when I was using Linux more frequently. But that had been some years in the past. Having succeeded with Systemback and Distroshare, and having already dismissed other relatively complex solutions (above), I was not presently inclined to jump through these hoops.

Clonezilla ISO Creation

Clonezilla’s documentation described a Clonezilla option to create a recovery ISO. I set up the VM as above, with Clonezilla mounted as a virtual optical disc in Settings > Storage and with a USB drive enabled via the VM window’s top menu > Devices > USB, as described above.

For this purpose, the documentation said not to boot from RAM, so I used Clonezilla’s default boot option. The documentation said to use the device-image option > local_dev. The USB drive was going to be my save-to target (i.e., /home/partimag), so when the list of “Available disk(s) on this machine” confirmed that Clonezilla recognized the USB drive, I hit Ctrl-C to get out of that list. After a moment, I got an option to select the USB drive (in my case, again, a Silicon Power drive); I arrow-keyed down to select it and hit Enter. The USB drive didn’t have anything on it, so the next time Clonezilla paused, it said the “Current selected dir name” was ” / ” (i.e., the top-level folder on the USB drive); and since that was what I wanted, I tabbed to Done and hit Enter. Clonezilla confirmed that the USB drive had plenty of space, so I hit Enter again.

Now the documentation told me I could choose the Beginner option. But the next screen in my VM showed only two choices — savedisk and saveparts — whereas the documentation showed many options, including the recovery-iso-zip option that I wanted. A Clonezilla forum thread explained that the recovery-iso-zip option would only appear if the drive or folder that I selected as /home/partimag (above) contained a Clonezilla image.

In other words, I had to use the procedure described above (“VM Drive Imaging with Clonezilla”) to create the image, and then I had to convert that image to ISO. I did not attempt that at this point. Later, however, I did give it a whirl. My notes from that later episode (incorporating some mention of VMware Player as well as VirtualBox) were as follows:

As detailed above, I added a downloaded Clonezilla ISO as a virtual optical disc (IDE, in VirtualBox). In VMware Player, the process was similar, except that I also had to slow down its POST to give me time to react. Then I booted the VM and hit the appropriate key (F12 in VirtualBox, Esc in VMware) to see a list of bootable devices. This was a good time to plug in an SSD or a fast USB drive to serve as a target, and to attach that to the VM. The devices I used for this were NTFS-formatted. Then I hit the key to boot from the virtual CD-ROM drive, and I was in Clonezilla.

In the steps described here, I did not search for nor follow the official instructions; I just followed what the program seemed to be requiring. This may have entailed some mistakes, but at least it helped me avoid those interminable instructions.

First, I had to create an image. For that, after starting Clonezilla, I accepted the default options for the first several items (i.e., language, keyboard, start Clonezilla). I chose device-image > Enter > local_dev > Enter > wait until my target drive (i.e., SSD or USB) was listed. That could take a few seconds, if I had just plugged it in. Then Ctrl-C > arrow-key down to select the target (i.e., SSD/USB) as /home/partimag > Enter. This left me at a screen that required careful attention. It could appear to be offering a choice between the Recycle Bin or aborting. But what it actually said was that the “Current selected dir name” on the target drive was “/” (i.e., the top level), because I didn’t have any subfolders on the target drive. So this was right, as-is, and all I had to do was to hit Tab (not Enter) to get to Done, and then hit Enter. Clonezilla now confirmed that it was associating the SSD with /home/partimag. I hit Enter > Beginner > savedisk > Enter > Enter to accept its default name (because I didn’t plan to keep it for long) > Enter to accept sda as the source drive (the only one listed for me; arrow-key around and then use spacebar to select the right one, if there was more than one) > Enter through the next several items to accept defaults unless otherwise desired. Then I got yellow print specifying what was going to be saved, and what it would be called on /home/partimag. I confirmed, and it ran.

That ran for a while. When it was done, Clonezilla said, “Press ‘Enter’ to continue.” I chose the rerun1 method of starting over. Now we were into the ISO-creation part of the process. The sequence here was device-image > local_dev. It was time to attach another external drive to the VM. This one would be the target for the ISO we were about to create. With that done, I could hit Enter to see the list of attached devices. Both of my external drives were listed, so I hit Ctrl-C. For /home/partimag, I wanted the first drive, the one where I had created the image (above). So I arrowed down to that one and hit Enter. Now Clonezilla showed me that it was still set to the root (“/”) folder, but this time I wanted to select the folder it had created with the image. So I arrowed to that and hit Tab > Done > Enter. It confirmed I had selected the right one. I hit Enter > Beginner. Now I had a list with about ten options. I arrowed down to recovery-iso-zip > Enter. Clonezilla said, “Choose the image file to restore.” There was only one, so I hit Enter. It asked for “The device to be restored when using this Clonezilla live recovery CD/USB.” It offered sda, which sounded about right: that was normally the first drive in the list, and as I recalled it was the one I had intended to back up. So Enter > skip checking > hit Enter a few more times > create both ISO and ZIP files, just in case (assuming sufficient space on the target drive). Clonezilla warned me that the resulting ISO would be too large to fit on a DVD. I confirmed go-ahead. It proceeded.

When Clonezilla finished, I went to its command prompt and ran mc (apparently short for Midnight Commander) to get a file manager. I tried to expand its media and mnt folders, but there didn’t seem to be anything there. I hit F10 to exit mc and tried lsblk. It saw the various partitions (e.g., sdb1). I wasn’t sure whether I actually should mount anything; I typed exit and then just killed the VM.

I took a look at the contents of the intermediate and final destination drives in Windows. Apparently I had screwed up something: everything (i.e., the intermediate image, as well as the final .zip and .iso files) was on the SSD I had used; there was nothing on the USB drive. Well, whatever: I had it. For a VM whose files totaled 23GB according to Windows File Explorer > right-click > Properties, Clonezilla had given me an image folder (welcome to Clonezilla’s concept of an image) requiring only 5.8GB. The ISO and ZIP were a bit larger, at 6.1GB each. The contents of the ZIP and ISO looked similar, if I right-clicked and opened them with a compression tool (e.g., 7-Zip): folders that you’d expect to see in a bootable ISO, like .disk and syslinux.

The big news was that, yes, I did have an ISO. Woo hoo! And it only cost me 12 hours and my right arm. Exaggerating somewhat, but anyone reading the preceding paragraphs might begin to understand why I wanted a command-line ISO creation solution. No more weaving around through funky menus that, despite my best efforts, I apparently messed up somehow, or at least not all of my files ended up where I intended. Just enter the damned command already. But what was the single, simple command that I craved so desperately? That was still, and might forever remain, a mystery.

The remaining question was whether the Clonezilla ISO functioned as desired. I had three key tasks in mind: use the ISO to create a bootable USB drive, use that USB drive to install Ubuntu on a target computer, and use the ISO to boot a VM as interloper, as I had just done with the Clonezilla ISO.

To create the bootable USB drive, I used the Rufus tool in Windows. To test whether the ISO would work under different circumstances, I set Rufus to create a 1GB persistent partition with a GPT partition scheme and NTFS formatting. Before proceeding, I made a backup of all the stuff Clonezilla had just given me. Then I ran Rufus.

The resulting USB drive booted my laptop. But it did not give me what I expected. First, it gave me an indication that the USB drive was not properly unmounted from Windows, and said it was fixing that. Then, instead of a live Ubuntu screen, it displayed a live Clonezilla screen. It was the familiar orange Clonezilla display; the difference was that its options said, “Clonezilla live with img 2021-08-27-03-img,” where that was the name of the image that Clonezilla had created. It appeared that booting Clonezilla from RAM might not work here. Choosing the first option on the list gave me this:

///WARNING/// filesystem.squashfs not found! No idea where is LIVE MEDIA!!! Assume this is running in DRBL client.

/usr/sbin/ocs-live-run-menu is run in Clonezilla Live!

Program terminated!

A search turned up no wisdom on that. I tried recreating the bootable USB drive, this time leaving all Rufus settings at their default values (no persistence, MBR, Large FAT32 filesystem, 32KB cluster size). I even allowed the default volume label (though that proved fruitless: Windows cut it off at “2021-08-27-“). This time, I got an indication that this was an ISOHybrid image; I went with the default ISO (rather than DD) image mode.

That seems to have been the solution. When Rufus was done, I even did the proper thing of right-clicking on the drive in Windows File Explorer > Eject. Trying again — booting that new USB drive on the laptop — got past the previous error (above), but now there was a new one:

The directory for this inputted image name does NOT exist: /home/partimag/2021-08-27-

Program terminated!!!!

I think I got the number of exclamation marks right. Anyway, it wasn’t clear from the instructions, but it looked like I wasn’t supposed to use Rufus to create the bootable USB drive. I was supposed to use the VM, or possibly the target laptop, to boot the basic Clonezilla ISO (or a USB drive created with Rufus from that ISO); and then I was supposed to follow another sequence of Clonezilla steps to combine the bootable Clonezilla with the image that I wanted to restore. Then the Clonezilla ISO that I had burned onto USB would restore the Clonezilla backup image to the target drive.

I noticed that the directory name in the error message (2021-08-27-) was the same as the truncated name that resulted from the Rufus USB creation process. The instructions said, “You must first create an image and it should exist in dir /home/partimag.” So could I just doctor this thing? To view its contents, I had to boot a Linux machine: Windows wasn’t able to see the drive. Not sure why not: GParted said it was still FAT32. Anyway, in GParted, I changed the partition label from 2021-08-27- to a simple CZ. I mounted the USB drive, went into its /home/partimag folder, and confirmed that the image folder was there. I changed its name from 2021-08-27-03-img to CZ too. I had no idea what that might accomplish, but I tried booting it. Unfortunately, that didn’t help.

Truly, I believed that if I plodded through the instructions or just got lucky, I would be able to make this ISO thing work. But even then, we would still be pretty far afield from the original concept. I didn’t get a live ISO; I got a Clonezilla installer ISO, customized for a single Clonezilla image; and it didn’t even give me an option to run the Clonezilla USB and go searching around for my newly renamed CZ image. This Clonezilla product was designed solely to run the 2021-08-27-03-img file — which apparently couldn’t run because Rufus or Windows truncated the drive’s label.

I’m not describing all this to find fault with the Clonezilla tool. In the grand scheme, I was grateful to have it. It was undeniably useful. I’m just saying that I had an objective of winding up with something that corresponded to the usual concept of an ISO that goes onto a USB drive and is bootable as a live version and/or as a straightforward installer. Instead, what I got was more like, you are now locked into Clonezilla Land, and you will have to jump through these seven hoops and climb the magical mountain every time you want to create and use Clonezilla to create an ISO — and it still won’t be the ISO you were looking for.

In other words, Clonezilla did not appear to be able to deliver a bootable Ubuntu ISO. I thought it could, but I was wrong. So it was back to the drawing board.

Summary of Internal Solutions

In this exploration of internal solutions, I found that Clonezilla offered three ways to convert a VM to a physical drive: it could clone the VM directly to an external USB drive; it could create a backup image of the VM, and could then restore that image to the external USB drive; and it could package that image as a recovery ISO. The first two worked well. The third, when I eventually tested it (above), yielded only a single-purpose Clonezilla recovery ISO, not a bootable Ubuntu ISO. For true ISO creation, I looked into many abandonware solutions (e.g., Remastersys, Respin, LiLi, Novo), and observed in passing several possibly viable but seemingly complex alternatives (i.e., Linux From Scratch, Ubuntu Live CD Customization, and Linux Live Kit), before finding success with Distroshare and Systemback. I also explored using dd to clone the VM to a USB drive, to create a compressed (i.e., .gz) image, and to create an ISO. dd met the first two of those objectives; I was hoping for insight from a posted question about the third. Other methods, tested but found wanting, included GParted and Timeshift.

Among those internal solutions, I favored Systemback for ISO creation. For imaging, I preferred dd's ability to produce a single compressed image file over Clonezilla's creation of a relatively awkward directory full of files. I also preferred dd for cloning directly from the VM to an external drive, while remaining mindful that dd could be complex and that exploration of competing tools (e.g., ddrescue) wouldn't hurt. Among those choices, I was likely to rely primarily on Systemback.

*** EXTERNAL SOLUTIONS ***

This post discusses ways of converting VMs to physical installations. As noted above, there seemed to be two general categories of solutions. On one hand, there were internal solutions, involving cloning or imaging of system files from a tool running in, or booted alongside, the VM’s virtual drive. On the other hand, there were external tools — running in Linux or, perhaps, in Windows — that might clone or image the VM as a whole. The preceding sections have explored several internal solutions. This section turns to the external alternatives.

Before going into the specifics, an external approach did offer the easiest and most obvious of all methods of cloning a virtual machine: simply copy it. For instance, the VM I was using existed as a handful of VirtualBox files (e.g., Ubuntu.vbox, Ubuntu.vmdk) within a folder in Windows 10. I could copy the whole folder to another computer and run it there. To make it easier to handle and store, I could use a compression tool (e.g., 7-Zip, WinRAR) to boil it down into a single and potentially much smaller file. For purposes of this post, the failing of that solution was just that it didn’t give me a physical installation.

False Leads

From an external perspective, cloning a VM was as simple as it gets: just make a copy of its files, or at least of its key (e.g., VDI, VMDK, VHD) file. It also wasn’t super-difficult to use various tools and commands to convert a file of one type (e.g., VDI) to another type (e.g., VMDK).
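As an example of such a conversion, with placeholder file names (newer VirtualBox releases call this subcommand clonemedium, keeping clonehd as a legacy alias):

VBoxManage clonehd source.vdi target.vmdk --format VMDK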

The hard part was in transforming VM files into the physical installation. In my browsing, I encountered a bewildering variety of ideas for how this might be done. I would eventually conclude that the people suggesting some of these ideas may not have actually tried them, else they would have seen that they didn’t work. I might have been mistaken about that. But that’s how it seemed.

As an example, one suggestion in a SuperUser discussion was to use a command of this form:

VBoxManage clonehd source.vmdk target.iso --format VMDK

I tried that. On my laptop computer, booted in Ubuntu (i.e., not a VM), I went to /media/ray. There, I saw the two Silicon Power USB drives that I had plugged in. SP1 contained a copy of the Ubuntu.vmdk file from my Ubuntu VM on my Windows desktop computer. SP2 was empty. Using the form just provided, the command I entered in that folder was VBoxManage clonehd /media/ray/SP1/Ubuntu.vmdk /media/ray/SP2/Ubuntu.iso --format VMDK.

I had already installed VirtualBox, so the VBoxManage tool worked. The command ran, and it did produce Ubuntu.iso on SP2 in maybe 15-20 minutes. The size of Ubuntu.iso was only 12.2GB — the same size as the VMDK. Unfortunately, I was not able to figure out how that ISO was supposed to be useful for a physical installation. Efforts to burn it to a USB drive in Windows (using Rufus) and in Linux (using Etcher as well as Ubuntu’s built-in Startup Disk Creator) consistently produced indications that, as Etcher said,

Missing partition table

It looks like this is not a bootable image. The image does not appear to contain a partition table, and might not be recognized or bootable by your device.

Maybe the suggested solution would have worked on a Windows VM. Or maybe the purpose of the VBoxManage command was to produce an ISO that would function, not as a source for direct burning onto USB, but merely as a compact file repository whose contents would then be transferred onto a USB drive. Linux.org (Buse, 2015) said that the ISO produced by VBoxManage could be written to a drive with the basic sudo dd if=(source) of=(target) command. As above, the command would state a path for the source (unless it was run in the source ISO’s folder), because it was a file; but it would specify a drive (e.g., /dev/sdb) for the target — because, well, the whole drive was the target: I was trying to make it bootable.

In its fullest form (adding quotation marks as a test, even though there were no spaces in this source path), the command I used was sudo dd if="/media/ray/SP2/Ubuntu.iso" of=/dev/sdd bs=64K status=progress. In this case, sdd was not formatted, but dd didn't care: it ran. The result was confusing, though: dd concluded with a statement that it had processed billions of bytes, and yet GParted reported that the target USB drive continued to be unformatted. Had dd put those billions of bytes somewhere else?

I had no idea. But when I started to format the target USB drive as ext4, GParted said, “No partition table found on device /dev/sdd.” To fix that, with the target USB drive selected in GParted, I went to Device > Create Partition Table > GPT. GParted said the drive was still unallocated — there were no formatted partitions — but I was curious if simply having a partition table was enough. It did seem that GParted had accomplished something: this time, starting down the same path of formatting the USB drive did not produce that error message.

So now I tried the dd command again. The USB drive’s LED light started flashing (although I thought it had done so previously too). When dd was done, I checked again in GParted — and we were back to square one! Once again, “No partition table found.” This suggested that dd actually was writing to the USB drive, but was doing so in some way that GParted found incoherent. I was befuddled. I tried booting the laptop with this odd USB drive. It didn’t work. It seemed that the ISO still didn’t have a partition table, and there was nothing that dd or any other relatively simple command could do about that. Possibly there was a solution using raw2iso. But that was beyond me.

So it was not clear what I had achieved by using VBoxManage to convert the VMDK to ISO. In saying this, I don’t mean to lean too heavily on that single SuperUser suggestion. It was one among many that I had to dig my way through, trying to figure out what people were suggesting, and to learn whether their suggestions were workable. (Users seeking only to create a nonbootable ISO might consider suggestions at Ask Ubuntu and WikiHow.)

VM-RAW-dd Method

Along with the various methods that didn’t work, I found one that did work. It was one of two suggestions presented in a highly upvoted Ask Ubuntu answer. The more complicated of the two was for situations involving dual booting or otherwise needing to install Ubuntu within a partition, without disturbing other contents on the target drive. I was not dual booting and, as noted above, was also not excited about the author’s suggestion that this more complicated solution might result in “nasty grub issues” to which the user could respond by “chrooting and fixing things.”

Therefore, I leaned toward the simpler of that writer’s two solutions. In that simpler answer, the basic idea was as follows: use QEMU to convert the VMDK or VDI into RAW format, and then use dd to copy its contents to the target drive. This was somewhat like the internal dd solution (above); but in this case the source consisted of a single image file, viewed externally.

It seemed that the conversion step could be performed in Windows as well as in Linux. There was a Windows version of QEMU, and instructions for using it; and of course VirtualBox (including its VBoxManage tool) could run on Windows. But since the Ask Ubuntu answer called for using dd in any case, it seemed that I would still have to use Linux, either natively installed (as on my laptop) or in a VM (as on my desktop) or possibly in WSL. So these instructions are written for use on a Linux system.

I decided to use an Ubuntu VM on a Windows 10 desktop computer to explore this suggested method. That made this writeup more complicated than most users would need. That may have been OK, in the end, because at least it provided an opportunity to go into more detail on the use of a VM in this context. The outcome would apparently have been inferior, in any event, compared to some of the other methods discussed in this post. I chose to do it in a VM because I wanted to keep writing about it, without interruption, while observing what went on in the VM.

As a starting point, if I wanted the VM to see the internal SSD that I intended to use as a source, I would have to add it as a Shared Folder in the VM’s Settings; and if I wanted the VM to see the USB drive on which I intended to install the final result, I would have to attach it to the VM (see Clonezilla writeup, above, for details). In that sense, running the dd command in the VM (instead of on a native Ubuntu installation) could limit the amount of damage that dd could do: it would have access to only those drives or folders that I made available to it.

Among the several methods for adding the SSD as a Shared Folder suggested by Ask Ubuntu, I started with sudo mkdir ~/Desktop/w_mount. That worked: using Ubuntu’s Files tool > Desktop, I could see the newly created w_mount folder. This folder would be Ubuntu’s mount point for the Windows system’s W: drive. That is, it would be the name that Ubuntu would understand, if I wanted to refer to the W: drive in Ubuntu. (There was no magic to the w_mount folder name; it was just a name I made up.) If desired, I was able to remove that folder in Files by simply right-clicking > Move to Trash > Delete.

Then I used sudo mount -t vboxsf w_drive ~/Desktop/w_mount, where w_drive was the name I had entered in VirtualBox Manager > Settings > Shared Folders > Folder Name. Windows automatically spelled it in all caps — W_DRIVE — when I designated the W: drive as the Folder Path; I changed it to lowercase to reduce the possibility of confusion. While I was there, in Shared Folders, I also checked the boxes to make it Auto-mount and Permanent, even though I would be removing it later.

That command mounted the shared folder (i.e., w_drive) at w_mount. So now w_mount wasn’t just a randomly created empty folder; it was a point of connection between host and guest operating systems. Ubuntu would know what I meant, when I referred to w_mount; through it, I would be reaching the W: drive in Windows. Anyway, the command worked: when I went into Ubuntu > Files > /Desktop/w_mount, I saw the contents of the W: drive that I could also view in Windows File Explorer. A forum entry suggested that I could remove or undo the designation of w_drive by using umount -t vboxsf w_drive — or, possibly, umount -i or umount ~/Desktop/w_mount.

As I came to grips with the project, I realized that I might want a source drive (W:) for the VMDK file, an empty intermediate drive (G:) for the IMG file, and a target USB drive that would hopefully give me a bootable physical copy of the VM’s virtual Ubuntu installation. For an intermediate drive, I had a 1TB HDD in an external drive dock. So I repeated the sharing process for that drive, in VirtualBox Manager > Settings > Shared Folders and also in the two Ubuntu commands specified above. That added g_mount as a second mount point in my Desktop folder in Ubuntu, and in g_mount I could now see drive G’s hidden Windows folder named System Volume Information.

With the W: and G: drives set up as shared folders, I could proceed with the conversion. First, I had to convert the VM’s file to an intermediate file. The recommended command for converting a VDI to RAW was VBoxManage clonehd source.vdi intermediate.img --format RAW. Since I was using a VMDK, I didn’t test that. Instead, I installed QEMU (sudo apt-get install qemu-kvm); then I navigated to the source folder (from the Desktop folder, that was cd w_mount); then I ran the recommended command for converting a VMDK to RAW, in this form:

qemu-img convert [source].vmdk -O raw [intermediate].img

In my case, the exact command was qemu-img convert Ubuntu.vmdk -O raw ~/Desktop/g_mount/Intermediate.img. (I know — Intermediate was not a very imaginative name, but it could help to keep things straight.) The command provided no onscreen feedback, but I could see the LED flashing in the external drive dock.
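
In hindsight, there was a way to get onscreen feedback: qemu-img offered a -p option to display a progress percentage. So a minor variation on the preceding command would have provided feedback:

qemu-img convert -p Ubuntu.vmdk -O raw ~/Desktop/g_mount/Intermediate.img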

The process was slow. At this point, I hadn’t yet figured out that the IMG file would be a byte-for-byte copy of the full virtual drive, not just its used space. That is, Ubuntu.vmdk was only 11.3GB, whereas Intermediate.img on drive G was the full 57GB. I didn’t time the conversion process. It might have taken an hour.

Now it was time for the final step, converting the IMG file from drive G: into an Ubuntu installation on the USB flash drive. By now I had plugged the USB drive into the computer, and had used the Devices menu pick (above) to attach it to the VM. In the VM, I could see the USB drive listed in the left-hand panel.

As discussed above, a dd command that used an image file as the target — or, in this case, as the source — would not refer to that file by a /dev/sd? device address. Instead, I would refer to it by its path, through the mount point. Therefore, I would designate the input file by borrowing the destination terms from the preceding command: ~/Desktop/g_mount/Intermediate.img. Lsblk indicated that the target USB drive was /dev/sdb — and I believed I could state that device (rather than one of its partitions) as the destination, since I wanted dd to overwrite the whole drive. Thus, with a slight variation from other parameters used above, the form of this command was

sudo dd if=[Intermediate].img of=[USB drive] bs=64K status=progress

Within my VM specifically, the command I used was sudo dd if=~/Desktop/g_mount/Intermediate.img of=/dev/sdb bs=64K status=progress. That seemed to work, or at least the right drive lights were flickering. (Note the option of using a similar QEMU command without dd, though with the same need to be careful in designating the correct drives.)
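
Given dd’s capacity for overwriting the wrong drive, a pre-flight check was worth the few extra seconds. A sketch of the full sequence, assuming (as in my case) that the target USB drive really was /dev/sdb:

lsblk -o NAME,SIZE,MODEL   # confirm which device is the intended target
sudo dd if=~/Desktop/g_mount/Intermediate.img of=/dev/sdb bs=64K status=progress
sync                       # flush write caches before unplugging the drive

The QEMU command just mentioned would presumably have taken a similar form (untested), skipping dd and writing the conversion output straight to the device: qemu-img convert -O raw Ubuntu.vmdk /dev/sdb.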

When it was done, I found that the 57GB Intermediate.img had produced a working USB drive. It booted, and the result looked like the Ubuntu installation in the VM, complete with my Firefox and wallpaper customizations and a working Internet connection. For my purposes, pending more detailed exploration, its chief drawback seemed to be the relatively huge, space- and time-consuming intermediate IMG file.

Having seen (above) that VBoxManage could produce a relatively compact intermediate ISO, I wondered if QEMU could do the same, in place of that bulky intermediate IMG. Unfortunately, my search did not lead directly to any indication that QEMU could produce an ISO. To the contrary, it turned up a list of QEMU-supported formats on which ISO did not appear.

Windows Solutions

As mentioned above, I had been using the free AOMEI Backupper Standard to produce a sort of Windows To Go bootable Windows 10 installation on a USB flash drive, and to clone the desktop’s Windows 10 installation to my laptop. Acronis was even better-known for consumer drive imaging tools. Among Acronis’s offerings, possibly the tool most suited for my situation would be its Cyber Backup Standard — costing as little as $69 for a Windows desktop, or $469 for a Windows or Linux server. Macrium Reflect was another oft-cited solution. Server tools of similar purpose, mentioned less frequently in a Spiceworks discussion, included StorageCraft and Unitrends; see also Fog Project and BackupChain.

Acronis and AOMEI

It had apparently long been the case that Clonezilla and dd could clone and image Windows (e.g., NTFS) drives and partitions. Judging from an Acronis forum discussion, it seemed that Acronis True Image 2020 (and apparently other Acronis products) had finally achieved a similar cross-platform capability to work with ext4 and perhaps other Linux filesystems on an intelligent (i.e., not merely the slow and bulky sector-by-sector) basis. For present purposes, this capability would be useful from an internal perspective only if the user could figure out how to boot Windows (so as to run Acronis) within the VM. Maybe that could happen by converting the VM’s Linux installation to a bootable ISO and adding that ISO as a virtual storage device to a Windows VM in which Acronis was installed. But if I already knew how to convert the Ubuntu VM’s installation to a bootable ISO, I wasn’t sure I would need an image produced by Acronis.

In lieu of some such internal approach, the involvement of Windows tools like Acronis seemed more promising from an external perspective, where I would be dealing with the VM as a single file, in Windows, without needing to engage with its ext4 filesystem. On that level, there was a question of whether such a tool could convert between VMDK and that tool’s preferred image file format (e.g., TIB, for Acronis). If the right version of Acronis could convert the VMDK (or something like it) into a TIB, and if that version also had ext4 and dissimilar hardware capabilities, then presumably it could restore that TIB to a physical drive.

Once upon a time, according to Techwalla (Manning, 2011), it was easy to do that with Acronis. But in later years, after hitting a peak with True Image Home 2011, Acronis fell into a ditch, actually losing quality and capabilities. A search for more recent insight didn’t turn up much. Judging from the Acronis True Image (ATI) 2020 documentation, it looked like ATI was now limited to converting TIB to VHD. To verify, I posted a question. The prompt response indicated that ATI 2021 could convert TIBX (apparently their upgraded file format) to VHD, but ATI 2021 had “minimal support for Linux” and therefore could not assist in V2P nor in conversion of a physical Windows installation to a bootable ISO.

As another option, AOMEI Backupper seemed a bit simplistic at times and, as with Acronis, my search to explore that product’s relevance produced little.

Macrium

The situation was more promising for Macrium: I didn’t get much from a search, but forum discussions at VMware and VirtualBox suggested some relevant capabilities.

On closer examination, another forum entry indicated that Macrium’s IMG2VHD conversion feature was eliminated in version 7, but users still had ways to restore a Macrium drive image to a VHD. Macrium’s blog (2019) indicated that Reflect could do intelligent copying and cloning of ext2-4, but that other Linux filesystems (e.g., XFS, JFS, BTRFS) would require a “forensic sector copy” — which, however, would be compressed, thus not requiring an amount of disk space equal to the size of the imaged partition. So apparently it would be slow, but its output might not be unusually large. The blog said that the Linux drive had to be either MBR or GPT, not LVM.

That blog post said that, to browse Linux ext2-4 filesystems in Windows, Macrium users would have to install an additional driver. Unfortunately, the webpage pointing toward the home page (archive) for that additional driver (archive) indicated that it was “no longer compatible with Windows 10.” Since the driver was still available, presumably it would still work on Windows XP, 7, and 8 systems. It was not clear whether users of Macrium in a WinXP-8 VM would be able to browse ext2-4 partitions.

According to that blog post, Windows users who wanted to image or clone an ext2-4 drive (as distinct from browsing its contents) could do so either in Windows or by booting the system using Macrium’s Rescue Media. I wanted to explore those options, so I downloaded and installed Macrium Reflect.

Somehow, I wound up with a 30-day trial of Macrium Reflect Home 8 ($70), instead of the Macrium Reflect Free 7.3 (x64) that I thought I had requested. On Windows 10, Home 8 offered a viBoot option that might or might not be available in Free 7. According to Macrium, this option “enables you to instantly create, start and manage Microsoft Hyper-V virtual machines using one or more Macrium Reflect image files” and could “instantly present a Macrium Reflect image file as a Microsoft Virtual Disk (.VHDX) file.” This option was interesting but not relevant here.

When I uninstalled Macrium Reflect Home 8 and tried to install Personal Free 7, I got an error message: “This app can’t run on your PC.” A search indicated that this was rare but not unheard of. Following a suggestion from one site listed in that search, I tried again, this time downloading the Commercial version of Free 7. In both cases, the downloader said, “Download Warning: one or more files failed to download, WinPE components can be downloaded automatically in the main application.” The Commercial download (110MB) was much larger than the Personal (18MB) — more like the Home download (161MB). It seemed that maybe the Personal download had aborted.

I proceeded to install Macrium Reflect Commercial Free 7. It said, “What Free edition licence do you require?” I think I must have selected “Home” on the previous go-round; apparently Macrium converted that into a request for the Home trial that I then had to uninstall and start over. This time, I chose “Commercial.” I entered the registration code that they had emailed me. The installation finished. I ran it. The title bar said that I had the “Free Edition for both home and commercial use.”

The Windows 10 desktop computer on which I was installing Macrium had a USB drive plugged into it, formatted as ext4. I had just seen that Macrium 8 Home recognized that drive as an ext4 drive; and when I clicked on that drive, Home 8 gave me options to clone or image it. But now, in Macrium 7 Free, that functionality was absent: the ext4 USB drive was not recognized. I had seen an indication, during download or installation, that Macrium was now in the process of preparing Macrium 8 Free. I was not sure whether that version would have that ext4 capability. At any rate, Macrium 7.3 Free, running in Windows, was useless for present purposes.

There was one possible exception to that statement. Both Macrium Free 7 and Home 8 gave me a menu pick: Other Tasks > Create Rescue Media. In both, I chose ISO File, so as to create a bootable copy of Macrium Reflect. In the Advanced menu, I left it at the default Windows RE base WIM. In the Advanced > Options tab, I added all available support options. I put the resulting ISOs on my YUMI multiboot drive and booted my laptop with them. The Free 7 ISO was willing to image selected disks, but the only disk it could see was the laptop’s internal SSD; it could not see the external ext4 USB drive. The Home 8 ISO could see, and would apparently make an image of, the external USB drives, including the ext4 drive. There was not a clone option per se, but Home 8’s Restore tab offered an option to ReDeploy to New Hardware — which, Macrium said, “modifies an existing operating system to work on new hardware or on a virtual machine (VM).” That sounded like the “universal restore” or “dissimilar hardware” options in Acronis and AOMEI (above). But when I chose that option, the Home 8 ISO gave me an error:

ReDeploy is not available in this edition of Macrium Reflect.

If you have purchased a license then please re-create your rescue media to add this functionality

Macrium elaborated that the ReDeploy option was available only in its Professional (apparently now renamed Workstation, $75) and Server ($300) editions.

For imaging or cloning physical drives, the conclusion here seemed to be that, unless the user spent $75 for Macrium Reflect Workstation 8, his/her best hope would be that Free 8 (when it arrived), installed in Windows 10, would be able to see and image ext4 drives connected to the Win10 machine. Even in that optimistic case, the user of Free (or even Home) 8 would apparently not have Macrium’s dissimilar hardware restore feature.

From an external, Windows 10 perspective on the Ubuntu VM, I did not see any menu picks within Macrium Reflect, nor did my search lead directly to any other indication, suggesting that any version of Reflect would restore a Linux VDI or VMDK to physical hardware. I did find, for instance, a TenForums thread discussing a relatively complex VHD to physical technique for Windows systems, but that technique seemed to require Windows-specific tweaks.

As a final option, it seemed Macrium might offer an internal solution. On the Win10 desktop, I ran VirtualBox and attempted to add the Reflect Free 7 ISO to Ubuntu VM > Settings > Storage, so as to boot the Macrium rescue media with the VM, following the technique described at the start of the Clonezilla section (above). Unfortunately, VirtualBox in Win10 seemed unable to see the Reflect Free 7 or Home 8 ISOs. That problem did not exist on a machine booted in Ubuntu: its version of VirtualBox saw the Macrium ISOs without difficulty. A search led to confirmation that there could be a difference in VirtualBox behavior in this regard, between Windows and Linux, and to a VirtualBox forum discussion identifying possible causes (e.g., incorrect file extension or size, or (in Linux) filename capitalization). These did not resolve the issue.

My working conclusion about Macrium at this point was that, unless I was willing to spend more on it than competing tools would cost, it would be helpful only if (a) the forthcoming Free 8 version offered more functionality than Free 7 or (b) VMware Player (below) was willing to mount the Macrium ISOs that VirtualBox was unable to see.

VHD to ISO in Windows

According to Wikipedia, “VHD (Virtual Hard Disk) and its successor VHDx … are the native file format for Microsoft’s hypervisor (virtual machine system), Hyper-V.” Microsoft (2019) said that “Hyper-V is built into Windows as an optional feature — there is no Hyper-V download” and, moreover, that it “cannot be installed on Windows 10 Home” but only on Enterprise, Pro, or Education (though apparently there were workarounds). Enabling it required simply running either of these commands in PowerShell as Administrator:

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
DISM /Online /Enable-Feature /All /FeatureName:Microsoft-Hyper-V

The latter could also run in CMD as admin. Another way to enable Hyper-V was to run C:\Windows\System32\OptionalFeatures.exe > enable Hyper-V. Microsoft said that last location was also sufficient to disable Hyper-V, as was this PowerShell command:

Disable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-Hypervisor

StackOverflow discussed those and other methods. It was apparently necessary to disable Hyper-V to run other virtualization tools (e.g., VirtualBox, VMware).

Although my sources had seemed to favor VirtualBox and/or VMware over Hyper-V, a search that insisted on considering Hyper-V did lead to sources that considered it a reasonable option for tech-savvy desktop users. For instance, MakeUseOf (Phillips, 2019) found that Hyper-V and VMware Player delivered better performance than VirtualBox, and that Hyper-V and VirtualBox beat VMware in offering snapshots, though Hyper-V did still have some shortcomings (e.g., Hyper-V made file sharing “vastly more complicated than VirtualBox or VMware,” did not offer seamless mode, and did not support macOS).

Instead of using Hyper-V, an alternative was to set VirtualBox to create its disk image file in VHD format. That is, both Hyper-V and VirtualBox could use VHD files: VirtualBox.org and Oracle agreed, “VirtualBox also fully supports the VHD format.” In VirtualBox, the choice arose during VM creation, at the step where the user selects the hard disk file type (VDI, VHD, or VMDK).

See WindowsCentral (2018) and Britec (2013) for older/alternate ways to create and use a VHD.

So while I had chosen VMDK for its speed in VirtualBox (above), I could have chosen VHD instead. I might also be able to convert the VMDK to VHD or VHDX:
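
One route here would be VBoxManage itself, which could clone a disk image into VHD format. An untested sketch, using my own filenames:

VBoxManage clonemedium disk Ubuntu.vmdk Ubuntu.vhd --format VHD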

Sysprobs (Dinesh, 2020) warned that, for use in VirtualBox, VHDX (unlike VHD) would have to be converted to VDI (using e.g., VBoxManage.exe clonemedium disk "[input].vhdx" "[output].vdi" --format vdi).

I installed and ran that VMC (apparently Microsoft’s Virtual Machine Converter) download. It appeared to support conversion to VHD only from physical Windows installations and from VMDK virtual machines. Moreover, aside from those who intended to upload their VHDs to Azure (which would tend not to apply to desktop users), its conversion to VHD required the user to “enable remote access through Windows Management Instrumentation (WMI) on the Hyper-V destination.” This portended a level of complexity beyond my present interest. I did try to proceed, at least partway, but the tool immediately demanded the address of my Hyper-V host as the VM’s destination. I wasn’t going to set up Hyper-V now. So that was the end of that.

Instead of converting a VMDK or VDI to VHD, or creating an Ubuntu VHD from scratch, there was the possibility of downloading a VHD. Linux VHD downloads seemed scarce. It looked like there might be at least a few available, for those inclined to turn on Hyper-V and to use Microsoft’s Hyper-V gallery. I posted a tip on finding a few; mine came from Canonical.

It appeared, in short, that I could download or create a VHD, or I could convert other formats into VHD. But that didn’t answer the question of whether I could convert VHD into ISO. I had gone down this rabbit hole because it appeared that Microsoft, Acronis, Macrium, and others were more interested in VHD than in other VM formats. As discussed above, recent versions of Acronis True Image were reportedly able to convert TIB or TIBX images to VHD, and Macrium Reflect apparently supported use of Macrium images as VHDX files. But to get an ISO from a Linux VM, it still seemed that I would have to look elsewhere.

There were ways to convert VHDs containing Windows VMs to Windows ISOs (e.g., SevenForums). VHD2ISO was a portable beta tool that sources sometimes mentioned. I did not find documentation on its use, but on its face it depended on the undeniably Windows-specific install.wim, and also seemed to require other Windows tools (e.g., oscdimg.exe). Thus, while this discussion may be useful for purposes of converting a Windows VHD, it did not appear to offer any aid to conversion of a Linux VHD, and therefore, for V2P purposes, it seemed unhelpful to convert a Linux VM into VHD format.

Other Windows Tools

I did not attempt a program-by-program review of the many Windows tools capable of making drive images, to see which others were able to work with Linux partitions. There may have been other free or consumer-priced Windows drive imaging tools comparable to Acronis or Macrium. If so, people didn’t seem to be talking about them much.

In a different direction, there seemed to be quite a few Windows tools (e.g., ImgBurn, AnyBurn, PassMark’s OSFMount (see UUByte), IMG to ISO, ISO Toolkit + ImDisk Toolkit, UltraISO + GimageX + Disk Management (see EaseUS, TDFTips, and video)) that were supposedly able to produce (or at least to assist in producing) a bootable ISO from an IMG file, from running within a Windows VM, and/or from a collection of files. But they did not seem to be producing a bootable ISO from an existing Linux system. My searches did not turn up details of procedures, using such tools, that would lead toward my objectives. Similarly, there were some (e.g., ISOLINUX) that might have had the desired capability, but that seemed to require technical knowledge above my present level.

The Windows imaging and cloning software (e.g., Acronis, Macrium) that was able to see and work with ext4 partitions may have used the same technology as tools like DiskGenius or DiskInternals Linux Reader, which reportedly allowed the user to browse files on a Linux system. Here again, however, I did not encounter any indication that anyone was using such tools with a Windows cloning or imaging program, to capture an image of a Linux installation. Precluding another possibility, it appeared that Linux might never be installable on exFAT or other Microsoft-owned filesystems, which might have made a Linux drive more accessible to other Windows imaging programs.

Although I spent quite a few hours wallowing around in various websites, I did not come away from this overview with a sense that I had achieved a comprehensive grasp of what might be possible through various Windows tools. Instead, I had merely arrived at the impression that, if there were alternatives not mentioned here, they didn’t seem to be making much of an impression.

Rescuezilla

A few weeks after finishing this post, I decided to go ahead with cloning the VM to create physical installations on a laptop for actual use, not just testing. The developer of Rescuezilla had just registered his reactions to my initial writeup, in a comment at the end of this post. The initial writeup read as follows:

Turning to another option, Rescuezilla was apparently not “the Clonezilla GUI” that it claimed to be. Its Softpedia writeup indicated, rather, that it was a fork of Redo Rescue, commenced when the latter ceased for some years to be developed. The creation of Rescuezilla may have triggered a perverse reawakening at Redo Rescue, whose developer then came back to life with a new version. There were some signs that the latter was well-received (e.g., 72 upvotes at AlternativeTo, compared to 418 for Clonezilla; not listed at Softpedia; 1,165 downloads this week at SourceForge, compared to 5,466 for Clonezilla). Lifewire (2021) found Redo Rescue to be “quick and easy” to use, but disliked the fact that it could only restore images to drives at least as large as the source. By the time I reached a point of deciding whether to test Redo Rescue, I had run into situations where that limitation would have been problematic. I was inclined to favor Clonezilla, perhaps pending some future update of Redo Rescue.

I’m sure I was getting rather tired of this project by the time I wrote those words. I think that fatigue (along with my earlier success with Clonezilla and dd, and Rescuezilla’s apparent lack of the ISO-creation capability that particularly interested me) explains why I didn’t try Rescuezilla at that point. But, OK, I was ready to try it now. Since Rescuezilla was a fork of Redo Rescue, it seemed that a full test of the latter would probably be unnecessary; I felt that Rescuezilla would most likely give me an informed sense of what to expect from Redo.

Rescuezilla apparently used both its own website and a GitHub page. At the latter, release notes suggested that the previous year’s “focal” version might boot more reliably than the current “hirsute” version. Since I didn’t plan to explore multiple versions, I went with focal. That gave me an ISO that I could readily boot inside the VM, just as I had done with Clonezilla (above). It loaded, giving an impression of something like Lubuntu, and presented me with a choice: backup, restore, clone, or explore an existing image.

I had already gathered that Rescuezilla did not yet have the intended capability of producing compressed backups. In that regard, it was inferior to the dd-and-pigz command that, by this point, I had prepared and used for highly compressed and relatively rapid backups, without the hassle of Clonezilla’s menu system. On the other hand, by this point I had also done what they warn about: I had accidentally used dd’s power to overwrite the wrong drive, which is very easy to do with a single command. So I was not 100% gung-ho on the command line. But I generally kept good backups, and would therefore still prefer that risk and power to the Clonezilla alternative. Anyway, I could still run a separate command to compress a Rescuezilla backup image; it was just that this would defeat any speed advantage of the combined image-and-compress operation that I had worked out with dd.
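
For reference, that dd-and-pigz combination ran along these lines (a sketch; the device name, block size, and destination path would vary with the situation):

sudo dd if=/dev/sda bs=64K status=progress | pigz > /media/backups/ubuntu.img.gz   # image and compress in one pass
pigz -dc /media/backups/ubuntu.img.gz | sudo dd of=/dev/sda bs=64K                 # restore

The point of the pipe was that pigz compressed on all CPU cores while dd was still reading, rather than compressing in a separate pass afterward.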

Rescuezilla did not seem to offer an ISO-creation capability. There was no mention of ISOs on its Features list, nor any menu pick to that effect. Therefore, for my purposes, Rescuezilla offered me only the possibility of a direct cloning process that might compete against my dd command for that purpose. Like Rescuezilla, dd‘s cloning was byte-for-byte across the entire space of the source drive (in this case, ~60GB). So I wasn’t thinking of a speed contest, but rather of usability.

When I set forth in the VM via the cloning wizard, the Rescuezilla interloper identified a 57GB “VBOX HARDDISK” drive that had to be the VM’s Ubuntu installation. So I selected that and clicked Next. It gave me the same screen over again. Or maybe it wasn’t the same screen: the Rescuezilla window wasn’t fitting entirely within the VM window, and unfortunately VirtualBox Guest Additions weren’t available for an interloper, so I wasn’t getting the usual, desirable screen resizing or scrollbars that could help me see what was going on. I figured out the situation when I selected the 57GB VBOX HARDDISK and clicked Next again: it said, “No destination drive selected. Please select destination drive to overwrite.”

So, OK, that second screen was for selecting a destination. I backed up and, since the list wasn’t showing me any drives large enough to serve as a destination, I took a hint and plugged in an empty USB drive, and then went to the VirtualBox top bar > Devices > USB > select that USB drive. So now I could select the VM’s VBOX HARDDISK in the first screen and the empty USB drive in the second screen. The latter was big enough to accommodate the former, so I got no error message from that choice. Rescuezilla gave me a couple of confirmation screens, reminiscent of Clonezilla but certainly easier to read. And then we were off. Copying speed appeared about the same as with dd. I couldn’t tell for sure: I used a faster USB drive this time. It still took 52 minutes.

When Rescuezilla finished, it seemed to hang. I wasn’t using a USB drive with an LED light, so I couldn’t tell what was happening on that end. The Rescuezilla window itself said it was 100% completed, but both the Back and Next buttons remained grayed out. The progress bar looked like it had gotten hung at 58%. But then, to my surprise, after sitting still for at least several minutes, it proceeded on to partition 4. So I guess it wasn’t finished after all; the 100% statement was just for partition 3. Good thing I didn’t pull the plug.

When Rescuezilla really did finally finish — when there was no further action for quite a while — it gave me an overall time (73 minutes) and reported the things it had done (e.g., “Successfully re-installed GRUB bootloader”). We were about to test all that. I clicked Next, closed down the VM, and booted the target laptop with the USB drive. It worked: it ran the system with no obvious problems.

Although I had probably not articulated it previously, this test reminded me that one thing I liked about the command-line approach was that I could use tools like Files and GParted during the process — to see what was happening, how large various partitions were, and so forth. On balance, assuming the results worked out in additional testing, and assuming I were working in a normal VM window with Guest Additions active, I felt that I would probably prefer Rescuezilla over Clonezilla. But it was not the tool for me in the interloper context.

Paid Linux Alternatives

The only Linux tool I found that appeared comparable to Windows imaging tools (e.g., Macrium Reflect) was TeraByte’s Image for Linux ($30, or $39 with a Windows component, 30-day free trial). Its GUI might be easier to use, but it was not as widely used and tested, and might not be as reliable or enduring. Its impressive manual indicated that it was capable of creating intelligent backups. The manual did not seem to address V2P specifically.

PowerISO ($30, 4.4 stars from 445 raters at Softpedia), a Windows tool, offered a Linux command ability to copy a disk to an ISO. I assumed but was not certain that it could do that while running inside a VM.

Summary of External Solutions

The foregoing discussion of external solutions began with a roundabout VM-RAW-dd method. In that method, I used QEMU to convert the VM’s VMDK file to IMG format, and then used dd to write that file to a USB drive. That was successful, but not appealing: the IMG file was huge and took a long time to write, and the process as a whole was relatively complicated.

I hoped that Windows tools like Acronis, AOMEI, and Macrium would be able to work with the VMDK, or at least with a VHD (whether used in the VM or produced by converting the VMDK). Sadly, it seemed that the leading Windows tools had pulled back from their prior (albeit limited) support for Linux and/or for other relevant capabilities. Possibly a relatively expensive (~$70) version of Macrium Reflect would be useful for these purposes. Otherwise, I found but did not test two paid programs — PowerISO (for Windows, $30) and TeraByte’s Image for Linux ($30-40) — that looked like they might be able to convert the Ubuntu VM file to an ISO.

Hence, what seemed to be the simplest path to a solution — just convert the VMDK or other VM file to ISO — was ironically the least accessible. If any such simple solution did exist, it had not yet penetrated my consciousness. Certainly my tinkerings with conversion tools (e.g., VBoxManage, QEMU) had not yielded anything of that nature. Exploration of a free trial (of e.g., Macrium, TeraByte, PowerISO) would make sense primarily if I intended to buy some such program instead of relying upon free internal methods (above).


3 Responses to V2P: Converting a Linux Virtual Machine to a Physical Installation

  1. rescuezilla says:

    Rescuezilla supports a wide variety of virtual machine formats (VirtualBox’s VDI, VMWare’s VMDK, Qemu’s QCOW2, HyperV’s VHDx, raw .dd/.img), since version 2.2 (released in June 2021).

    This gives Rescuezilla the V2P (virtual machine image to physical machine) feature, so that you can use the easy-to-use graphical wizard to “restore” a VDI file (for example) to a physical hard drive.

    Though as of writing, Rescuezilla can only restore to drives equal to or larger in size. But this limitation will be lifted in the future.

    I suspect most users coming across this blog post will be interested in using Rescuezilla, especially since it’s free and open-source, and by far the easiest solution.

    By the way, Rescuezilla is more than just a Clonezilla GUI. It’s not merely a frontend: it carefully ported all of Clonezilla’s backup and restore logic to a different programming language. I have recently implemented an automated testing suite that will help make sure compatibility never breaks, by continually testing Clonezilla and Rescuezilla’s interoperability.

    • Ray Woodcock says:

      In that case, I’d suggest revising the statements on the website to remove the remark about Rescuezilla being a Clonezilla GUI. Maybe you have done so recently. I haven’t revisited that.

      Based on your remarks, I have added a review of Rescuezilla. As explained above, it is not my tool of choice for this particular purpose. I do think that, with further testing, I would probably prefer it over Clonezilla in normal usage. But at present, as indicated above, I prefer the command line.

  2. stevyn says:

    wow! Thank you for taking the time to research all these options and to write it up and also include references & links! Bravo!!!!
