As discussed in a previous post, I decided to try using the CentOS distribution of the Linux operating system (OS) on my laptop computer. This post describes the process of installing and tweaking CentOS as desired. This effort failed, in the sense that my installation proved unable to run virtual machines in KVM. This post may nevertheless be useful for its relatively detailed exploration of certain issues and perspectives that arose while installing and configuring CentOS and KVM.
Setting Up the CentOS Installer
Installing CentOS: Software Selection
Installation: Other Settings
Completing the Installation
Configuration, Tweaks, and Fixes
Setting Up Virtualization
Setting Up KVM in Linux and Converting a VirtualBox VM to KVM
Recently, as indicated in another post, I had spent some time on an effort to install and tweak Linux Mint on that computer. That post provides more detail on some steps and concepts taken for granted or described more briefly in this post.
Unfortunately, that Mint installation’s networking got screwed up during an attempt to set up an ethernet connection between the Linux laptop and my Windows desktop. I could have continued to mess with that. But as described toward the end of a post on that networking issue, that experience (among other things) nudged me toward a different perspective on what I needed from Linux, and what its various distributions provided. I was now wondering whether the Red Hat family, including particularly CentOS, might be better suited for my needs.
The previous post acknowledged that CentOS was not designed to provide an end-user experience as rich as that provided by Ubuntu or Linux Mint. It seemed that many desktop users were nonetheless satisfied with what CentOS offered, but plainly there were good reasons why the majority preferred distributions from the Debian family.
My situation seemed to be different from that of most Linux end users because, at present, I expected to use Linux primarily to host one or more Windows virtual machines (VMs). I had arrived at that expectation through previous investigations, cited in those earlier posts, in which it seemed there would not be many instances when I would prefer a Linux program in place of my familiar Windows software. In the unusual case where I did prefer a Linux program, if CentOS couldn’t run it, I would still be able to run it in a Linux VM.
This way of seeing the situation was new for me. I had tweaked Linux Mint in a number of ways, essentially trying to make it a replacement for Windows. There was nothing wrong with that. I might still attempt that in a VM. But I was now realizing that what I really needed from the Linux desktop system was a safe, stable, easily replicable home for my VMs. I no longer wanted to play with revisions that might jeopardize fundamental system stability. In that sense, it really did seem that I had switched to a corporate mentality, seeking an installation that would have minimal downtime and that I could rebuild quickly if necessary. That sounded like a job for CentOS.
In effect, it seemed that I had switched from a supply-driven installation model to a demand-driven model. Instead of seeking to equip the Linux installation with everything of potential relevance that I might be able to add, so as to anticipate possible future needs, I had essentially decided to add things as I needed them. A drawback of the demand-driven approach was that a person might not even know that a certain tweak or capability existed until s/he went looking at what other people had achieved. The tradeoff, of course, was that the less you messed with the thing, the less likely you were to create problems that did not need to exist.
At this introductory point, I will just say that my perspective changed as I worked through this process. The last section of this post says more on that.
(Notes: Commands in this post are written in italics. WordPress, host of this blog, incorrectly rendered a double hyphen (i.e., “- -“) as a dash (i.e., a single long character), but it was necessary to enter a double hyphen for the command to work properly. If in doubt, copying and pasting commands to a text editor would normally clarify the exact content of the command. In this writeup, I was using a Windows 10 desktop to assist with blogging, downloading, and other tasks involved in installing CentOS on an Acer Aspire 5 A515-51-563W 15.6 Core i5-7200U laptop. I always favored having at least one spare computer around, to play this kind of supporting role in case things went wrong with one’s production machine.)
Setting Up the CentOS Installer
I began by downloading the latest CentOS release. The main download page provided a link to the continuously updated Release Notes.
Among the sites recommended by the Release Notes for How to help and get help and Further Reading, it appeared the most useful would be the wiki and the forums. The Virtualization Special Interest Group (Virt-SIG) did not seem oriented toward KVM, which my reading had suggested would be the way to go. (A search led to sources that, at a glance, seemed to confirm that KVM was compatible with CentOS 7.) The wiki listed other sources of information. After weeding out those that appeared ancient and/or no longer maintained, these included the installed help file, available via rpm -qd [package name]; Red Hat Enterprise Linux (RHEL) 7 documentation; and of course Google searches, optionally tailored to focus on site:centos.org.
For CentOS 7 (i.e., the current version), the Release Notes said there were multiple installation images, all capable of being burned to USB “or dd’ed to an [sic] USB memory stick.” About those images, the Release Notes said this:
- The DVD image “contains all packages that can be selected from the GUI installer.”
- The Everything DVD “is almost twice the size of the ordinary DVD and is not required for most common installs – it is intended for use by sysadmins who want to run their own local mirror.” Even then, “For most users installing from the DVD image and then installing the other packages with ‘yum install <packagename>’ instead is probably easier.”
- The “live” images, available for both GNOME and KDE desktop environments, “allow you to test out CentOS by booting from the DVD or USB stick.” Installing from a live image was not recommended: “For more flexibility in selecting which packages you want to have installed, please use the DVD image.”
- The netinstall image “can be used for doing installs over networks.”
The Release Notes recommended a minimum of 1280MB RAM for regular installation and 1536MB RAM for installation from the live images. The Release Notes also provided sha256sum values for each image.
On the Windows computer, I downloaded the DVD ISO (4.16GB), ran MD5 & SHA Checksum Utility (Softpedia), clicked its File: Browse button, browsed to the newly downloaded .iso file, calculated its SHA256 value (as advised by CentOS), pasted in the SHA256 value provided by CentOS, and clicked Verify. The checksum utility said they matched. We were good.
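On a Linux machine, the same verification can be done with sha256sum. A minimal sketch, using a tiny stand-in file in place of the 4.16GB ISO (all filenames here are illustrative):

```shell
# Create a stand-in for the downloaded ISO (illustrative; normally this is the 4GB+ DVD image).
printf 'pretend ISO contents\n' > CentOS-stand-in.iso

# Record its checksum; this file plays the role of the SHA256 value CentOS publishes.
sha256sum CentOS-stand-in.iso > sha256sum.txt

# Verify: prints "CentOS-stand-in.iso: OK" on a match and exits nonzero on a mismatch.
sha256sum -c sha256sum.txt
```

The -c option does the comparison that the Windows utility's Verify button performed.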
Later, I would discover a RHEL 7.5 Installation Guide (see below). I did not follow its advice, so I could not say whether its approach was better than the one I took. What I did was, first, to add the downloaded ISO to a YUMI USB multiboot drive. YUMI’s setting for CentOS assumed the live ISO, which was not what CentOS recommended using, so I installed the ISO on YUMI using YUMI’s Unlisted ISO (GRUB) setting. I booted the laptop with that, hit F12 (the key varies on some computers) after seeing the Acer splashscreen to bring up the boot menu, and selected the YUMI USB drive. YUMI booted successfully and gave me CentOS as an option. I selected that. Unfortunately, the CentOS installation process died with error messages.
I tried again, this time installing the downloaded ISO to the same USB drive using Rufus 3.0 (4.4 stars from 538 raters on Softpedia). In Rufus, I chose Device = the USB drive, Boot Selection = Disk or ISO Image > click Select > navigate to the CentOS ISO > Open > Start > Write in ISO Image Mode (Recommended). Rufus said I could choose the DD Image Mode alternative if the ISO Image Mode selection didn’t work. When that finished, I tried booting the laptop with it. That worked. I got a Welcome to CentOS 7 screen with a choice of language.
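Rufus’s DD Image Mode corresponds to the Release Notes’ dd route. A hedged sketch of that route, copying to an ordinary file rather than a real USB stick; on a real system, of= would instead name the device (e.g., /dev/sdX, confirmed first with lsblk), and dd would overwrite it without asking:

```shell
# Stand-in for the downloaded ISO (illustrative filename).
printf 'pretend ISO contents\n' > stand-in.iso

# Raw byte-for-byte copy, as dd would do to a USB stick; conv=fsync flushes
# writes to the target before dd exits.
dd if=stand-in.iso of=usb-image.bin bs=4M conv=fsync

# Confirm the copy is identical to the source.
cmp stand-in.iso usb-image.bin && echo "images match"
```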
Later, I would decide that, before booting the laptop with the CentOS installer, I should have booted it with a tool capable of reacquainting me with the existing partition information, so that I could write it down for reference during installation. My YUMI drive already had some tools of that nature. Examples included Parted Magic, MiniTool Partition Wizard 8, and GParted. I would be revisiting those tools (below).
Installing CentOS: Software Selection
Now that the installer was running, I approved English as my language and set the local time zone. Then it was time to deal with more substantial questions, presented by the Installation Summary screen.
The CentOS forums and wiki did not seem to have sections focused specifically on installation. A search of the Red Hat documentation led to a RHEL 7.5 Installation Guide. This Guide addressed numerous installation scenarios not applicable to me. For instance, I would not be using text, remote, RAID, or automated installation. The Guide seemed to say that the default GUI was Red Hat’s Anaconda installer. TecMint (Cezar, 2015) provided screenshots depicting more or less the installer as I encountered it.
Within the Installation Summary screen, some matters did not presently seem to require my attention. For instance, the English (US) keyboard and language settings were fine. It seemed obvious that the USB drive was serving as my Installation Source, but I clicked on that option anyway. There, “Auto-detected installation media” was selected, but I saw that I could have specified an ISO file on another device or a network location. In this and other screens, I noticed that clicking the Help button in the upper right corner gave me a Help dialog that was blank. I clicked Done (oddly located in the upper-left corner of the window) to return to the main installation screen.
The Software Selection option was more challenging. When I clicked on that, I saw a set of Base Environment alternatives. In my reading, multiple sources had recommended the GNOME Desktop environment. But the Installation Guide (8.13) said,
If you are not sure what package should be installed, Red Hat recommends you to select the Minimal Install environment. Minimal install only installs a basic version of Red Hat Enterprise Linux with only a minimal amount of additional software. This will substantially reduce the chance of the system being affected by a vulnerability. After the system finishes installing and you log in for the first time, you can use the Yum package manager to install any additional software you need.
Luppeng (2016) attempted to list the specific programs that were installed in each of those program groups. I examined that list. The differences between the Minimal and GNOME Desktop installations appeared to be extensive and not simple. A SuperUser comment said that the Minimal installation would provide “a truly utterly bare CentOS OS install with pretty much no GUI. This is good for server environments where you really only need Terminal access and a core OS.” Tecmint (2018) listed a set of 30 steps, following the CentOS Minimal installation, that would be recommended to set up a server. Among these were multiple items (e.g., “Install and Configure sudo”) that, for my purposes, were integral to any functional Linux installation. They were not things that I wanted to be responsible for installing and configuring. I was indeed growing more comfortable with the command line — but only at the level of achieving minor tasks, not for designing a Linux installation. ShellHacks (2016) said, “If you have installed CentOS/RHEL Minimal server installation, you may have lots of troubles with not installed packages,” and I suspected that was true. Meanwhile, I doubted that the CentOS/RHEL designers had added risky items to the GNOME Desktop option. For my purposes, then, that option seemed to be the safest functional choice. (For those who did prefer the Minimal installation, ITzGeek (2018) and VPSCheap.net (2017) provided instructions on adding the GNOME desktop later.)
When I selected GNOME Desktop in the left pane on that screen, I got a list, in the right pane, of Add-Ons for Selected Environment. Those add-ons were organized into ten groups. Several of these (e.g., Backup Client, described as “Client tools for connecting to a backup server and doing backups”) were rather obviously suited for a server environment unrelated to my desktop situation. The Installation Guide (8.13) said that add-ons listed above the dim horizontal line in the right pane were specific to the selected environment (i.e., GNOME). Among these, GNOME Applications (“A set of commonly used GNOME Applications”) sounded like a useful addition, whereas others (e.g., Office Suite and Productivity, which apparently meant LibreOffice) were things I could add later, as needed. According to the Installation Guide, items listed below that dim horizontal line were available for all environments. I was not sure I would ever need any of those. I clicked Done, and that took me back to the main installation screen.
At the time of this installation, I was not too clear on what GNOME was about. Later, however, I would want more clarity, and at that point I returned here to add this note. Key points for the lay reader, from the Wikipedia article, were that GNOME (pronounced more like ga-nome than nome) was a loose affiliation of developers; GNOME 2 (released in 2002) provided a Windows-like desktop interface; and GNOME 3 (2011), used in CentOS 7 and Ubuntu, was a controversial departure, more like Mac than Windows.
Now that I had settled the Software Selection option within the Installation Summary screen, the next item was Installation Destination. This presented me only with a choice of disks. This laptop contained a 450GiB solid state drive (SSD) and a hard disk drive (HDD). I wanted to install my program files on the SSD. That was /dev/sdb. The SSD contained a couple of partitions (i.e., Linux Mint plus swap space) from my previous Linux Mint installation, along with a VMs partition and an unallocated overprovisioning area (73 GiB).
I selected the SSD and clicked “I will configure partitioning” and then Done. That opened a Manual Partitioning screen. In the left pane, the installer presented two boldface drop-down headings. From the installer’s perspective, these were apparently divided into the “known” and the “unknown” (I’d have suggested, instead, the terms “recognized” or “familiar”). In the known category, we had (a) the available New CentOS 7 Installation options that I could choose and (b) the existing Linux Mint system partition. Oddly, the supposedly Unknown category included the swap space partition, along with the SSD’s other partitions. The overprovisioning space did not appear in this list. If I had wanted to make that available to CentOS, without disrupting other existing partitions, apparently I would have needed to use something like GParted or MiniTool to repartition the SSD before starting this installation.
The New CentOS 7 Installation section did not provide an intuitive explanation of what it was doing. Regardless of whether I chose “Click here to create them automatically” or one of the partitioning schemes available in the drop-down menu, nothing changed onscreen. It was not clear what effect one of these choices would have. Moreover, changing my choice, among those drop-down options, did not ungray the Modify or Update Settings buttons. It was as if I had not done anything. But then I noticed a red banner across the bottom, bearing the words, “Automatic partitioning failed. Click for details.” I clicked. It said, “You have not defined a root partition (/), which is required for installation of CentOS to continue. You have not created a bootable partition.” So I was wrong: something did happen. That said, “automatic” may have been the wrong word for a process requiring me to take manual steps first.
There was a question of which Linux partitions I should create. On a single webpage, the Installation Guide (8.14) said that “the recommended file systems for a typical installation” were a boot partition, a root partition, and a swap volume — but then, that “Red Hat recommends at least” those three plus a home partition. I understood the advantage of having a separate home partition, and decided to follow that suggestion. So I would have four partitions.
At this point, it occurred to me that, if I was not going to install and configure a maximal Linux Mint system, with all kinds of additional software — if, indeed, this CentOS installation was going to be rather basic — I would probably not need to give it the entire 120GB that I had allocated to the Mint installation (plus 22GB swap). I was more likely to wish, sooner or later, for more space in the VMs partition. Therefore, I bailed out of the installation, rebooted into Parted Magic, and used its GParted component to rearrange and resize partitions on the SSD. The considerations that I took into account at this point were as follows:
- In GParted, I saw, first, that the SSD had a 300MiB EFI partition. I dimly recalled that this was essential for the laptop’s UEFI firmware and/or the SSD’s GPT structure. The system was working. So I did not plan to disturb that EFI partition.
- In configuring my Linux Mint installation, various sources had led me to conclude that, on a laptop with 20GB RAM, to enable hibernation, a 22GB swap partition would be sufficient. The Installation Guide (Table 8.3) recommended 1.5x RAM for systems with 8GB to 64GB of RAM. That would mean 30GB. My reading suggested that was the old wisdom, reflecting a more conservative approach, and that there was a risk of system slowdown when swap was too large. For now, I decided to stick with 22GB swap and see if hibernation worked.
- The Installation Guide recommended a boot partition of at least 1GiB.
- The Guide recommended home = 100MB minimum. The home partition would store system settings and could also store user files. Sources indicated that files in home could survive a Linux reinstallation, as long as the user told the installer not to format the home partition. I had an existing 300GB partition on the HDD that I wanted to use for this, if possible. Various (e.g., 1 2) sources dealt with the option of setting this up after installation. Their instructions seemed complicated. I wanted to see if I could include it in the installation.
- The Guide recommended root = 10GiB. TecMint (Cezar, 2015) recommended at least 10GB. ZDNet (Watson, 2016) said at least 12-16GB.
- For boot and root partitions, the Guide recommended using the ext4 filesystem. Responses to a Stack Exchange question indicated that the home partition would also best be on ext4. I planned to use ext4 throughout. Lifewire (Newell, 2018) warned that Windows could not read ext4 partitions, so user files stored there would not be visible on a dual-boot system when running Windows. I had found, however, that Windows running in a VirtualBox VM could read the contents of an ext4 partition mounted as a Shared Folder in the VM. I wanted to use KVM rather than VirtualBox for better performance, but I would probably have a VirtualBox VM as well, and could use it for this purpose if necessary (though apparently KVM and VirtualBox, or perhaps any two virtualization tools, could not be running at the same time). So using ext4 format for the home partition did not seem problematic for my purposes.
- The Guide also recommended encrypting the home partition, if not others as well.
Based on these considerations, I wound up creating these partitions in GParted, in this order, all using ext4 except swap, which had its own format:
- boot: 1GB
- root: 17GB
- swap: 22GB
plus the already existing 300GB ext4 partition that I wanted to use for home. I had to create an extended partition to accommodate all these partitions (i.e., the total number of partitions on that drive now exceeded the limit of four primary partitions), and I had to enter weird values in GParted to get them to have exactly those sizes, but it worked. In the process, I also rearranged the unformatted space so that it was located between the extended partition and the VMs partition, for easier addition in either direction if needed. (Though some insisted that overprovisioning required that the unformatted space appear after all partitions, others — and logic — suggested that the SSD would use any unformatted space. There was not, after all, such a thing as the “end” of an SSD’s memory.)
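To keep the pieces straight before returning to the installer, the layout now looked roughly like this (a summary of the choices above; I did not record device names, so none are shown):

```
SSD:  300 MiB  EFI system partition   (pre-existing; untouched)
        1 GB   ext4  -> /boot         (new)
       17 GB   ext4  -> /             (new)
       22 GB   swap                   (new)
      ~73 GiB  unallocated overprovisioning space
      VMs partition                   (pre-existing)
HDD:  300 GB   ext4  -> /home         (pre-existing; not to be reformatted)
```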
Now that I had created partitions of the desired sizes, with the desired labels, I could return to the CentOS installation and see whether it accepted what I had wrought. Due to distractions in the middle of this process, I would initially be flying by the seat of my pants here, reverting to the ordinary approach of relying on the GUI and forgetting that there was an Installation Guide, and that would cost me a bit.
In the Installation Destination screen, I checked the SSD > I will configure partitioning > Done. Then, in the Manual Partitioning screen, I went into Unknown > select the 1000 MiB partition > Mount Point = /boot > click the Reformat box. Reformatting was probably not necessary — GParted had probably formatted it properly — but I didn’t mind letting CentOS make sure it was happy with the format. Clicking the Reformat box ungrayed the File System box: I had to make sure it was still ext4. I also had to verify that the Label was what I wanted.
Then I selected the 17 GiB partition using the same steps, mounting it as /root. That would turn out to be a mistake, provoking an orange error banner across the bottom of the screen: I was supposed to mount root as simply a slash (“/”), not as “/root.” When I clicked on the 17 GiB partition, the installer moved the 1000 MiB partition from the Unknown list up to the New CentOS 7 Installation list. That happened as soon as I clicked on the 17 GiB partition; I didn’t need to click Update Settings to make it happen.
Finally, I selected the 22 GiB swap partition. Apparently swap space was not actually mounted: the Mount Point box was grayed, and would apparently remain that way unless I clicked Reformat (which I did) and then chose some format other than swap (which I didn’t).
That took care of the partitions I intended to use for CentOS installation on the SSD. I clicked Update Settings. The New CentOS 7 Installation list at the upper left corner of the screen now listed the partitions I had just selected. This was the point at which I got, and rectified, the orange warning about /root (above).
Now I wanted to get back to the list of disks, add the HDD, and set its 300GB partition as /home. There didn’t seem to be a way to do that. Neither the plus (“+”) symbol nor the “1 storage device selected” link at the bottom left corner of the screen led to any such option. I tried Reset All > Home, thinking that would let me start over, but it didn’t. Finally, I found the solution: I had to click on the circular “Refresh” arrow next to the plus symbol at lower left. That gave me this:
You can remove or insert additional disks at this time and press ‘Rescan Disks’ below for the changes to take effect.
Warning: All storage changes made using the installer will be lost when you press ‘Rescan Disks’.
I clicked the Rescan Disks button. It said I could click OK to go back to the disk selection screen. I did that. So I started over. This time, I selected both the SSD and the HDD, as I should have done in the first place. (See RTFM.) Now, as I proceeded back through the partitioning steps, the Unknown area listed partitions from both disks. It would have been wise, when using GParted (above), to write down the location of the 300 GB partition (e.g., /dev/sda3): if there had been multiple similarly sized partitions on these drives, that note could have been my only way of being sure I was choosing the correct partition. As with the other partitions, I selected the desired partition; gave it /home as a mount point — and, unlike the others (as warned above), made sure not to reformat it, so as to preserve its contents. In this case, the Label box autofilled with the name I had previously given that partition, so that was another way to be sure I was selecting the right one.
I clicked Update Settings and then clicked on each of the partitions listed under the New CentOS 7 Installation heading, at the upper left corner of the screen, to review their settings. The list looked right. The only thing that seemed not quite right was that, for some reason, the 300 GB partition that I had selected for /home was listed in both the upper (i.e., New CentOS 7 Installation) and lower (i.e., Unknown) parts of the left pane. None of the other partitions appeared in both lists. The Update Settings button was grayed out, so that wasn’t the solution.
Well, whatever. Aside from that quirk, everything looked good. I clicked Done. That gave me a Summary of Changes list. The 300 GB partition was not on it. I guessed that was because I wasn’t doing either of the types of activities listed: Destroy Format or Create Format. I was apparently just Including the 300 GB partition. I clicked Accept Changes. That just took me back to the Installation Summary screen. Evidently the actual formatting wouldn’t happen until I clicked Begin Installation, and we weren’t quite ready for that.
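Once installation completed, these mount assignments would be recorded in /etc/fstab. A hypothetical sketch of the resulting entries, with made-up device names; a real CentOS install identifies partitions by UUID rather than device path:

```
# /etc/fstab (sketch; device names are illustrative, not recorded values)
/dev/sdb5   /boot   ext4   defaults   0 2
/dev/sdb6   /       ext4   defaults   0 1
/dev/sda3   /home   ext4   defaults   0 2
/dev/sdb7   swap    swap   defaults   0 0
```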
Installation: Other Settings
The Installation Summary screen contained an icon for KDUMP, and these words: “Kdump is enabled.” The Installation Guide (8.16) said,
Kdump is a kernel crash dumping mechanism which, in the event of a system crash, captures information that can be invaluable in determining the cause of the crash.
Note that if you enable Kdump, you must reserve a certain amount of system memory for it. As a result, less memory is available for your processes.
That was pretty much all the Guide said on the matter. But Red Hat also offered a Kernel Administration Guide whose first chapter was a Kernel Crash Dump Guide. A section of that crash dump guide (1.7.1) said, in effect, that, for a regular 64-bit PC with less than 1TB (!) of RAM, the default amount of memory reserved for Kdump would be between 160MB and 224MB. The installer offered only basic choices regarding Kdump; the guide seemed to say I could refine its configuration after installation. The guide also offered information on how to test it, which I might actually get around to after my first really frustrating system crash.
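The reservation is visible from a running system. A short check, assuming standard RHEL/CentOS 7 tooling, and guarded so that the commands merely report absence elsewhere:

```shell
# The kernel command line shows the crashkernel= memory reservation, if any.
grep -o 'crashkernel=[^ ]*' /proc/cmdline || echo "no crashkernel= reservation"

# Ask systemd whether the kdump service is running (guarded for systems without systemctl).
command -v systemctl >/dev/null 2>&1 && systemctl is-active kdump || true
```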
The Installation Summary screen also contained an option to adjust Network & Host Name for installation. The Installation Guide (8.12) said, “When the installation finishes and the system boots for the first time, any network interfaces which you configured during the installation will be activated.” This seemed like an opportunity to get those set up and working right away.
I clicked on that Network & Host Name option. It showed that this laptop had Ethernet and Wireless networks. For the Ethernet, its button (upper right corner) said Off. I switched that to On. That changed its status indicator from Disconnected to Connecting. I guessed that was about all it would achieve; the Ethernet cable just ran to a switch, from which another cable continued onwards to the Windows 10 desktop computer: there was no Internet access. Sure enough, after a minute of trying, it went back to Disconnected and turned itself Off. I tried again with the Wireless network. That connected almost immediately, once I entered the WiFi network name and password. I clicked Done.
Back in the Installation Summary screen, the last option was Security Policy. The Installation Guide (8.10) said, “Important: Applying a security policy is not necessary on all systems. This screen should only be used when a specific policy is mandated by your organization rules or government regulations.” I noticed that the default setting, there in the Installation Summary screen, was “No profile selected.” I clicked on the icon. That gave me a list of policies approved, I guess, by various organizations. For instance, along with various tech policies (e.g., Red Hat Corporate Profile for Certified Cloud Providers), there was one for Criminal Justice Information Services. The Installation Guide said that additional custom profiles could be loaded. I made sure the button at the top, “Apply security policy” was turned Off and then clicked Done.
Completing the Installation
I reviewed everything. It felt a little like reviewing what my tax software said, just before clicking the button that said, “File your taxes with the IRS!” I took a deep breath and clicked Begin Installation.
That took me immediately to a Configuration screen with the words User Settings. Under that heading, there were two icons. The text accompanying the first one, labeled Root Password, said, “Root password is not set.” The second, User Creation, said, “No user will be created.” I wasn’t sure whether it was saying that I could have opted to create a user, somewhere in the vicinity of the Installation Summary screen. The Installation Guide (8.18) seemed to say this was normal — the installer was configured to yank everybody’s chain at this point. Or, actually, as I read on, it seemed I could go ahead and click on those icons and set those passwords now. The Guide said,
Creating a user account is optional and can be done after installation, but it is recommended to do it on this screen. . . . Best practice suggests that you always access the system through a user account, not the root account. . . . The root account gives you complete control over your system. For this reason, the root account is best used only to perform system maintenance or administration.
I assumed the user password would be required anytime I wanted to log in, if I set one now. For a laptop, that seemed like a good idea. I chose OICU812 . . . (kidding). Actually, it was B4I4Q . . . It hadn’t occurred to me that there might be websites listing goofy passwords, but now that I was sitting here, waiting for the installation to finish, I could hardly think of a better use of my time than to go find out. Freemake suggested “Virus Infected WiFi” for the name of a wireless network. TechnoClever’s list began with “TellMyWifiLoveHer.”
Seriously, various sites advised on how to choose secure and memorable passwords. There were also password generators. Whatever I came up with, I could test its security at The Password Meter (which, itself, was not at a secure https: address, which could mean I would be giving away my password) and How Secure Is My Password? The latter offered rather dismaying information on how quickly someone could crack passwords that seemed safe enough to me, and also seemed to disagree somewhat with the password tester built into the CentOS installer. Anyway, I glanced at the Advanced Configuration option, but decided to set nothing there.
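As an alternative to typing candidate passwords into third-party websites, a password can be generated locally with standard tools; the 16-character length and alphanumeric character set here are arbitrary choices, not recommendations from the sources above:

```shell
# Draw 256 random bytes, keep only letters and digits, and truncate to 16 characters.
# (256 bytes comfortably yields more than 16 alphanumeric characters after filtering.)
password=$(head -c 256 /dev/urandom | tr -dc 'A-Za-z0-9' | head -c 16)
echo "$password"
```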
By the time I was done fooling with this stuff, the installer said, “CentOS is now successfully installed, but some configuration still needs to be done. Finish it and then click the Finish configuration button please.” I couldn’t see what else it thought I needed to do, however, so I clicked Finish configuration. It fiddled with a few things and then gave me a Reboot button. The Guide said,
Congratulations! Your Red Hat Enterprise Linux installation is now complete!
Click the Reboot button to reboot your system and begin using Red Hat Enterprise Linux. Remember to remove any installation media if it is not ejected automatically upon reboot.
I was never sure exactly when to unplug USB drives, before or during reboot. I decided to try doing it now. Then I clicked Reboot. Apparently my timing was off: that gave me a frozen black screen displaying only “4m[terminated].” Ctrl-Alt-Del didn’t work. I tried the power button. That worked. If it hadn’t, I suppose I could have removed the motherboard.
With that, the system booted. It flashed dire error warnings like those that had become normal for me, in Linux, and then it dropped me at an Initial Setup screen. This screen had two icons. The first, License Information, said “License not accepted.” The second was Network & Host Name. I wasn’t sure why that one appeared; it said that my wireless network was connected. I went into the License Information option. It just wanted my acceptance of the CentOS license agreement. I guessed the Network item was listed because my Ethernet was still disconnected. Unsurprisingly, I didn’t see some of the options listed in the Installation Guide (29.1), notably the Red Hat Subscription Manager — because I was not a Red Hat subscriber.
I clicked Finish Configuration down in the lower right corner of the screen. That led to what looked like another reboot, and then I was looking at a sign-in screen. I clicked on my username, entered my password, and I was in. There was more configuring to do. The first two questions were repeats of what I had told the installer. Then I had to decide whether to let applications determine my geographical location. Following that, I faced the question, “Give Color access to your location?” Absent information in the Guide, a search led to just 1 2 results, both suggesting this was a bug. I said OK. Now I was looking at a dialog titled Connect Your Online Accounts. The three options listed here were Google, Nextcloud, and Microsoft. With or without a Windows VM, I did expect to be going online primarily through the Linux host, so I went ahead with that, and then clicked Next. It said, “You’re ready to go! Start using CentOS Linux.” I clicked that button, and now they gave me a Getting Started dialog. So that was it.
Configuration, Tweaks, and Fixes
I closed the Getting Started dialog and went to the desktop menu (in the upper left corner of the screen) > Applications. It looked like the GNOME installation option had given me a GUI and a small number of applications (e.g., Firefox, Rhythmbox, calculator, text editor). Applications and Places (e.g., Home, Documents, Downloads) were the only menu options. In the upper right corner, comparable to the system tray at the lower right corner of a Windows desktop, I had the clock and icons for WiFi, speaker, and battery, all leading to one brief shared menu (e.g., “VPN Off”; “Fully Charged”).
From that beginning, this section summarizes the steps I took to modify the new installation. As noted above, the philosophy was very different from that guiding my previous Linux Mint exploration, detailed in another post: this time there would be far fewer attempts to modify the system for full-time use. I also declined to write up at length items that the previous post already covered in some detail.
Except as otherwise indicated, I did not attempt to customize the GNOME 3 desktop. There were two reasons. One was that keeping it simple was consistent with my very basic needs from the host system. The less I messed with, the less likely I was to mess up. The other reason was that, if I had wanted to do much customization, I probably would have been better advised to try a different desktop.
To clarify that latter remark, and also to sort out some terminology, Red Hat’s Desktop Migration and Administration Guide (DMAG) said that the GNOME Desktop was the heart of GNOME 3, and that the GNOME Shell was the user interface of that desktop. This evidently meant that, to GNOME.org, the “desktop” was the whole collection of software packages comprising GNOME version 3, and what the user would consider a desktop was actually the “shell.” The DMAG said that the default GNOME shell mode in RHEL 7 was the “classic” version. In this version, we had the taskbar at the bottom of the screen, with its indicator (at the right edge) of which workspace was active; we had what I would call the menu bar at the top, holding what they called the Applications menu and the Places menu in the upper left corner, and the system menu in the upper right, with the message tray (Win-M) next to it. Judging from their writeup, GNOME Classic had no panels. That was perplexing because (a) everyone else seemed to talk confusingly of “the panel” and “the panels,” and (b) there was a dock, not mentioned in the writeup and “not supposed to be a part of GNOME Classic,” nonetheless visible upon pressing the Win key or by going to menu > Applications > Activities Overview, but not autohiding in the usual sense of coming up when the mouse approached. Overall, to me, this desktop (i.e., “shell”) was not very well thought out. ZDNet (Watson, 2015), among others, offered a customization guide for the shell (what he called the “desktop”) as a whole, including the dock, which he and others said was actually the Dash. (I know — dash, dock, dash, like someone was trying to send us a message.)
Leaving aside most options for desktop customization, I did consider it worthwhile to modify a number of items, and to learn more about Linux in the process, as follows:
- Disks. I went to Applications > Utilities > Disks to confirm that the installation had indeed given me the desired partitions. When I clicked on the partitions specified for inclusion in this installation, Disks indicated where they were mounted.
- WinKey. CentOS referred to Win- or WinKey- (i.e., on a typical PC, the key near the lower right and/or left corners of the keyboard bearing the Microsoft Windows logo) as the Super key. It was likely that I would refer to it sometimes as the Win- key.
- Display. I went to menu > Applications > System Tools > Settings > Power > turn off options to dim or blank the screen when inactive, at least during this setup phase.
- Terminal. By default, Terminal was available via menu > Applications > Favorites or System Tools, and also via right-click on the desktop or on an open space in the file manager. In addition, I could open Terminal with Alt-F2 > gnome-terminal. Win-T and Ctrl-Alt-T did not open Terminal.
- Root. In CentOS, I could not perform superuser commands by simply preceding them with sudo. The CentOS wiki seemed to say there were at least three levels here: user, user with specific superuser permissions, and superuser. To become superuser, I had to enter su – with that trailing hyphen (unfortunately displayed here in WordPress as a dash), and then type exit to return to being a standard user. There was no need for sudo when I was running as su, and there was also no option to run sudo when I wasn’t. If I tried to run sudo commands, I got an ominous message: “[User] is not in the sudoers file. This incident will be reported.” To fix that, the wiki seemed to say that I had to type su – and then visudo to edit /etc/sudoers. But the purpose of doing so would be to give specified superuser powers to the regular user (in this case, me), exercisable with his/her own password, not the root password. That scenario didn’t really fit my situation, here on my own desktop computer. I was not an office worker whose duties called for a few specific administrative abilities. I didn’t see the need to become mired in superuser arcana in order to give myself a more complex division of responsibilities. Being a regular user with the occasional aid of su – seemed sufficient. It appeared that I would just have to cope with those error messages, in those instances when I would enter sudo without thinking. (Note: in some of the following instances, I precede commands with (su -) to indicate that I would probably want to precede them with a separate su – command.)
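For reference, the kind of entry the wiki describes adding via su - and visudo would look roughly like the following. This is a sketch of standard sudoers syntax, not something I actually applied; the username matches my own account, and the command in the second example is purely illustrative:

```
## In /etc/sudoers, edited only via: su -  and then  visudo
## Full sudo rights for one user, authenticated with that user's own password:
ray     ALL=(ALL)       ALL
## Or, closer to the wiki's scenario, permission for one specific command only:
ray     ALL= /usr/bin/systemctl
```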
- File Manager. Applications > Accessories > Files opened the default file manager (a/k/a “file browser”). Sadly, it lacked any menu > Help > About option providing its name or version. Doubling down on unhelpfulness, the file manager further jettisoned the Address bar that some users, myself included, found very helpful for purposes of navigation. I was not able to find any listing or command that would bring up the name of the default file manager. Various sources (e.g., Installation Guide 3.1) seemed to indicate that the default file manager was Nautilus. As indirect confirmation, nautilus opened a new session of the default file manager. I did see that Ctrl-H toggled to hide or display hidden items. Previous reading suggested that Thunar would be a superior alternative, but that replacing the integrated file manager of a desktop environment (e.g., GNOME) could produce instability. I also saw that Thunar in CentOS would entail numerous additional packages. Therefore I opted to stay with Nautilus, and to plan on using a Linux VM for any extensive file work in Linux.
- Define Hotkeys. I went to Applications > System Tools > Settings > scroll down to Devices > Keyboard to view existing shortcuts. I clicked on the plus (“+”) key at the bottom of the list to add a Custom Shortcut. (See lists of default hotkeys.) I used that procedure to create the following hotkeys: (1) Terminal: Win-T. I created a custom shortcut and, for its command, I typed gnome-terminal. (The disown option recommended by one source produced an unresponsive mouse, resolved below.) I clicked Set Shortcut and then I hit Win-T. The Custom Shortcut dialog captured that as Super+T. I clicked Add and then tested it. It worked: Win-T now opened a Terminal session. The launcher did not seem to have been added to the Applications menu, but it did appear in the Custom Shortcuts section at the bottom of the Keyboard Shortcuts list. (2) Run Dialog: Win-R. To configure Win-R as an alternative to Alt-F2 (i.e., to open the Linux counterpart to the Run dialog in Windows), following the same general procedure, I created a Custom Shortcut; I named it Run Dialog; and, for its command, I used zenity --entry --text="Run command:" --width=400 | sh -s. Unfortunately, I found that, while that worked (more or less) on the command line, Win-R so defined did nothing. So, for the moment, that was a solution awaiting a rescue. (3) File Manager: Win-E. In Windows and Linux Mint, I had found that Win-E opened the default file manager. Here, Applications > Accessories > Files (and also the Places menu) opened a session of Nautilus. Therefore, to the Win-E key combination, I assigned File Manager and nautilus.
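A plausible explanation for the Win-R failure (my own assumption, not something the sources confirmed) is that GNOME custom shortcuts execute their command directly, not through a shell, so the pipe in that zenity command never runs. If that is right, the fix would be to paste an explicitly shell-wrapped version into the shortcut's command field, something like this untested sketch:

```
sh -c 'zenity --entry --text="Run command:" --width=400 | sh -s'
```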
- Unresponsive Mouse Click. At one point, I found that the mouse cursor continued to be mobile, but clicking did not achieve anything. It developed that I was trapped in a Terminal session created by an earlier version of the Terminal hotkey (above) using the disown option. The solution was to hit Ctrl-Alt-F2. That put me at a terminal. I logged in there, typed pkill terminal, and then hit Ctrl-Alt-F1 to get back to the GUI.
- Fluttering Mouse. The mouse had another problem. Sometimes a context (i.e., right-click) menu would come up only for a moment, and then disappear. I would right-click again, and it would happen again. It would take multiple tries to get the menu to persist so that I could make a selection from it. Attempting to force persistence by holding the right mouse button down was not entirely successful: sometimes it would result in selection of a higher item on the context menu instead. I posted a question on this problem but, as of the time when I completed this post, no solution had been suggested.
- Reduce Item Size in File Manager and on Desktop. In Nautilus, I went to the upper right corner > hamburger icon (i.e., three parallel horizontal lines) > click the minus ("-") symbol, to reduce the size of listed items. I toggled the button next to the hamburger to switch from icon view to list view. Then I entered gsettings set org.gnome.nautilus.icon-view default-zoom-level small.
- Repositories. I obtained the list of repositories actually installed on this system with yum repolist. That list indicated three repos — CentOS 7 Base, Extras, and Updates — listing a total of 10,870 available packages. The CentOS wiki listed several repositories “that are not included in the default base and updates repositories.” For Extras, in particular, the wiki indicated that “The CentOS development team have tested every item in [the CentOS Extras repository] and they all work with CentOS. This repository is shipped with CentOS and is enabled by default.” The wiki said that the CentOSPlus repository likewise shipped with CentOS, and that every item in it had likewise been tested and was found to work with CentOS, but was not enabled by default. The wiki listed other repositories, but none seemed to be comparably tested. As noted in the previous post, repositories praised by at least a few commenters, encountered during my browsing, included nux-dextop, EPEL, SCL Software Collections, and Google Chrome. I decided that I would wait and add repositories as needed.
- “Filesystem type ntfs not configured in kernel.” When I plugged in an NTFS-formatted USB drive, I got an “Unable to access” error. As if to respond to the preceding remarks about repositories, a CentOS forum moderator advised resolving this error with (su -) yum install epel-release and then yum install ntfs-3g. (See How-To Forge for more detailed instructions.) There was no question but that I did need to be able to access NTFS partitions, so I went ahead with that. That seemed to solve the problem: now the CentOS laptop could see the contents of the USB drive.
- Desktop Shortcuts. As discussed in a previous post, shortcuts were stored in various locations. Notably, menu shortcuts were in /usr/share/applications. I could navigate there via Terminal > cd /usr/share/applications or via Nautilus (i.e., Applications > Accessories > Files) > Other Locations > Computer (or simply menu > Places > Computer) > /usr/share/applications. In that location, I could use GUI or command tools to copy, edit, drag, and/or move existing .desktop files (i.e., shortcuts) to other locations. In particular, to add shortcuts to my own user GUI desktop, I would either put them there graphically or create/move them to the desktop’s filesystem location (i.e., /home/ray/Desktop, a/k/a $HOME/Desktop). I could create a new shortcut by editing a copy of an existing shortcut to use a new name, icon, and command. Thus, I created a few desktop shortcuts: (1) Terminal. This one already existed in /usr/share/applications. I just needed to copy it to the desktop. The icon there changed into the proper Terminal icon when I double-clicked on it and indicated that it was trusted. I edited that shortcut by using right-click > Open with Other Application > View All Applications > Text Editor > Select. To sharpen focus on the important lines, I deleted the inessential lines. According to LinuxCritic (2010), the essential ones were Type, Name, and Exec, but I also kept Icon. (See The Debian Administrator’s Handbook (Hertzog & Mas, 2015, p. 359), citing FreeDesktop.org, whose list of standards included desktop entry specifications, whose latest version listed required and optional keys for .desktop files.) I already knew the Exec program for Terminal was gnome-terminal, but if I had needed to find its executable, apparently I could have used which gnome-terminal. (2) Settings. I wanted a desktop shortcut leading to the Settings dialog (i.e., menu > Applications > System Tools > Settings). To find the name of the program responsible for that dialog, I used ps ax. 
Among the results, I saw (and ran) gnome-control-center --overview. That worked: running that command in Terminal started the Settings dialog. To create the Settings shortcut, I used right-click > copy the desktop Terminal shortcut > paste it to the desktop > edit in the same manner as the Terminal shortcut (above). In the editor, I revised it to Name=Settings, Exec=gnome-control-center --overview, and left Type=Application. I came up with an icon for this shortcut, as detailed in another post, named it LinuxSettings.png, put it into a new subfolder (i.e., /usr/share/icons/Other), and entered Icon=/usr/share/icons/Other/LinuxSettings.png. And that worked. After I used it, I saw that it had been changed: now Name[en-US]=Settings.
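Putting those pieces together, the finished Settings shortcut file would have looked roughly like this (the required keys per the desktop entry specification being Type, Name, and Exec, with Icon optional):

```
[Desktop Entry]
Type=Application
Name=Settings
Exec=gnome-control-center --overview
Icon=/usr/share/icons/Other/LinuxSettings.png
```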
- Change Passwords. If I wanted to change passwords later, the first step was, as in life, to just be yourself. In other words, if I wanted to change my user password, and I was already logged in as myself, then I had passed this hurdle. And, as in life, if being yourself isn’t working, then try to be someone else instead. In this case, that would be superuser, achieved easily enough with su -. Once I was logged in as the real or fake me, the next step was to type passwd and enter the new password.
- p7zip. I found that this CentOS installation was not able to unzip .7z files. Thus, as discussed in another post, I ran (su -) yum install p7zip. (It seemed that installing p7zip-full would require using more cutting-edge Fedora software that I wasn’t ready to inflict on this system.)
- Updates. The Red Hat System Administrator’s Guide (9.1) indicated that, among other possibilities, I could choose to install all updates, updates for only specified packages, or only security updates. Several (1 2 3) sources explained how to set up automatic updates. But administrators in a Red Hat discussion criticized the idea of applying updates automatically to production systems. Having lost work to Windows 10 updates that Microsoft insisted upon applying immediately and without my present consent, I was inclined to share that sentiment. Red Hat also indicated that it was possible to set the yum installer to install only security updates, as distinct from bug fixes or enhancements. I decided that I was being conservative enough already, and that I would rather have those fixes and enhancements. Participants in a CentOS discussion (2016) said that at least some of these variations did not work in CentOS anyway. The advice was, in effect, just to run (su -) yum update on a regular basis. In my case, this first update involved more than 300MB. Somebody had been busy while I was screwing around. When it was done, I ran it again, just to be sure — and, yeah, it was sure.
Setting Up Virtualization
Now we were starting to get to the fun part, where I might actually someday get to use this computer for something. The next step in the journey was to get some virtual machines running.
To me, this could go a couple different ways. I had already run VMware and VirtualBox in Ubuntu and Linux Mint. So possibly that’s where we were headed. In fact, as noted above, I thought I probably would have Windows and/or a Debian flavor — Ubuntu, probably — running in at least one VirtualBox VM. Meanwhile, however, I had seen indications that KVM provided much better performance than VirtualBox, and was now a leading contender in the virtualization world. So I was hoping to get KVM running at least one Windows VM. Finally, I had seen references to virtualization along the way, during this CentOS setup, leading me to believe they had their own in-house solution. So now it was time to learn what that was all about.
As I was about to see, Red Hat Virtualization turned out to be based on KVM, with some other elements thrown in (Wikipedia). Red Hat offered a writeup of different kinds of virtualization. Their main RHEL 7 documentation page had a whole section on it. Much of that material was not going to be meaningful to me until I gained more experience with CentOS. Much of it was not going to be meaningful to me, period, because in the end I was just one end user, not an administrator of many users and systems.
The RHEL 7 Virtualization Getting Started Guide (VGSG, ch. 4) said the command line provided more detailed control than the GUI. I believed that. I also believed, however, that something was better than nothing — that, for my purposes, it made good sense to start with the GUI. Note the Arch Linux wiki‘s warning, “If you start your VM with a GUI tool and experience very bad performance, you should check for proper KVM support, as QEMU may be falling back to software emulation.” Note also that, as an alternative to these steps, the Virtualization Deployment and Administration Guide (VDAG, 2.1) pointed out that, during installation, I could have gone into Software Selection and, instead of the GNOME Base Environment, I could have selected Virtualization Host > Virtualization Platform.
To use the GUI, VGSG (5.1) suggested virt-manager. But on my minimalistic installation, that produced “command not found.” A search led to (1 2 3 4 5) sources advising me to start with something like egrep -c '(vmx|svm)' /proc/cpuinfo to make sure the CPU supported hardware virtualization, without which I could not reasonably run a proprietary guest operating system (OS) like Windows. I knew it could — I had already run a Windows guest VM on this laptop — but I ran the command anyway and got output that was not zero (it was 4). So I was good. Now the sources were inclined toward approximately these commands (running as root):
yum install -y qemu-kvm qemu-img libvirt virt-install libvirt-python virt-manager libvirt-client virt-viewer bridge-utils "@X Window System" xorg-x11-xauth xorg-x11-fonts-* xorg-x11-utils
systemctl start libvirtd
systemctl enable libvirtd
adduser $USER kvm
lsmod | grep kvm
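The CPU check those sources recommend boils down to counting the lines of /proc/cpuinfo that advertise vmx (Intel VT-x) or svm (AMD-V). Here is the same grep logic as a self-contained illustration, run against sample cpuinfo text so it works anywhere; on a real machine you would point it at /proc/cpuinfo itself:

```shell
# Sample of /proc/cpuinfo flag lines (one per core) standing in for the real file.
sample='flags : fpu vmx sse
flags : fpu vmx sse
flags : fpu vmx sse
flags : fpu vmx sse'

# Count lines mentioning vmx or svm; a nonzero count indicates hardware
# virtualization support (my quad-core laptop reported 4).
count=$(printf '%s\n' "$sample" | grep -Ec 'vmx|svm')
echo "virtualization-capable lines: $count"
```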
I may have erred in relying on those sources. The Red Hat documentation was voluminous and, at the moment, I was impatient, so I did not find the official Red Hat instructions exactly on point. Later, however, I saw that, among the packages listed in the foregoing command, the Virtualization Deployment and Administration Guide (VDAG, 2.2) called for only those up through libvirt-client (i.e., not virt-viewer et seq.). Some of the packages in that command (possibly including the later ones; I did not keep detailed notes) were duplicative: they were already installed on my system, or installing one included the other. Finally, for python-virthost (recommended by one source), I got “No package python-virthost available.”
In any event, the lsmod command, last of those just listed, was supposed to (and did) produce a response containing something like (for my machine) “kvm_intel.” Now (su -) virt-manager worked: I was running Virtual Machine Manager. I skipped advice on how to create a VM (VDAG 3.1). At this point my concern was with how to import or convert the ones I had already created. On that subject, a Red Hat article (2018) said I would use virt-v2v. That package was not yet installed on my system. Judging from the commands just listed, it seemed I was supposed to use (su -) yum install -y virt-v2v to install it. Unfortunately, install --help didn’t explain the -y option; perhaps I was using the help option incorrectly. StackExchange indicated that -y would make the command automatically answer “yes” to any question arising during the process. I wondered what questions might arise, so I used the yum install command without -y. The only question was, “Is this ok?” which came after the installer listed what it was about to do. I decided that, for now, it might be best if I took the time to review that sort of information before proceeding. I decided to exclude the -y option from at least some of my future install commands. Incidentally, among the information provided when I ran yum install without the -y option, I noticed that virt-v2v was coming from the epel repository that I had added in order to install ntfs-3g (above).
The Red Hat article (2018) seemed to say that virt-v2v could convert VMs running RHEL 3.9 through 7, as well as Windows versions released since 2001 (i.e., starting with XP and Windows Server 2003) and perhaps (i.e., without Red Hat support) other Linux distributions (e.g., Debian). The article also said, however, that virt-v2v conversions were supported only from RHEL5 Xen and VMware vSphere ESX. I saw a Red Hat statement that the RHEL 6 version of virt-v2v had been deprecated; therefore, I constructed a search for recent information. That search led to a Red Hat acknowledgement of workarounds: an off-topic link to third-party instructions for P2V (i.e., converting a physical installation (e.g., Windows) to a VM); making an image (with e.g., Acronis) of one installation, and then restoring it in another; or using Windows’s own backup software to make and restore a backup. The wording of the last of those options suggested that this might work only for Windows Server (i.e., not e.g., Windows 7) backups. As this source indicated, the change in drivers from one VM to another could be a problem, though possibly not with imaging software offering a universal restore feature (though my few attempts with Acronis along those lines had failed). To mitigate that, the advice was to restore a new VM with the same characteristics (e.g., RAM, CPU).
At this point, I dusted off a draft post that I had begun earlier, when installing Linux Mint on this laptop. The draft post addressed the same issue: installing and using KVM in Linux. Unfortunately, I ran into problems for which I was not finding solutions. The following section contains the text of that draft post, starting in Linux Mint and then switching to CentOS. As indicated there, the conclusion was that it seemed I might have to start over with a fresh installation to get past those problems. Thus, the next section is both choppy and abortive: it conveys how far I got in this attempt to complete the CentOS installation with KVM.
Setting Up KVM in Linux and Converting a VirtualBox VM to KVM
As described in previous posts, I had explored VirtualBox and VMware as hypervisors. My conclusion was that both provided an acceptable albeit visibly slower alternative to the Windows 10 host that I was using for that exploration — and that, between the two, VirtualBox was a little faster and had better features.
Now I was using a Linux host. (In the early paragraphs of this section, the host was Linux Mint; in the later paragraphs, it was CentOS.) Unlike the situation I encountered back in 2010, when I was running a Windows XP guest virtual machine (VM) on an Ubuntu host, the choice was no longer limited to VirtualBox and VMware. In the words of Virtualization & Cloud Review (Pott, 2016), KVM had reached a level of “mass market adoption.” As noted in a previous post, KVM achieved much better performance than VirtualBox and VMware. That would matter for me, especially but not only when doing video editing in the VM.
A RedHat page explained that KVM had been a part of Linux since 2007. To use it, a search pointed toward materials from Linux.com, TechRepublic, and YouTube. Those sources (especially TechRepublic) suggested the following procedure:
- Examine your system’s BIOS or UEFI setup utility, typically available by hitting F2 (sometimes DEL or other keys; check your computer’s or motherboard’s manual) at startup, as soon as the splash screen (e.g., on my Acer laptop, the screen that says “Acer”) appears, or when (or right after) the bootup procedure lists the drives in your system. The BIOS setup utility (sometimes just called the BIOS) will ideally say that it can support hardware virtualization (i.e., Intel VT-x or AMD-V), and if needed you will turn that on. You can also run this Linux command: egrep -o '(vmx|svm)' /proc/cpuinfo. (Those single quotation marks are just plain apostrophes.) It should return vmx for Intel, and svm for AMD, indicating that your CPU has one type of virtualization support or the other. Most do, and have, for some years. In response to this command, my Acer returned vmx vmx vmx vmx, apparently indicating a quad-core Intel CPU.
- Install additional useful software. I ran these commands one at a time, waiting for each to complete, and observing any messages produced. [User], here, is the name of the user who will be using KVM. To find the current user, use whoami or id -un. (In retrospect, I wasn’t sure whether I actually did enter the second command shown here, involving useradd. See below for further discussion. Note also that, as detailed in my other post, an installation on CentOS would apparently call for some changes in the first command as well.)
sudo apt-get install kvm qemu-kvm libvirt-bin virtinst virt-manager bridge-utils
sudo useradd -g libvirtd [user]
- Log out of Linux Mint. Log back in. Go to Start > Administration > Virtual Machine Manager. It should list QEMU/KVM. Go to the top menu > File > New Virtual Machine.
That worked. Virtual Machine Manager (i.e., virt-manager) was now telling me that potential sources of VMs included Local install media (i.e., CD, DVD, or ISO; see advice on creating a VM in KVM) as well as existing VM disk images. This situation prompted a review of options, spanning the next several paragraphs.
First, the early paragraphs of another post discuss sources for downloading Windows, Linux, and other installation ISOs. VM images could likewise be downloaded, in addition to those created by the user. In a previous post, I identified multiple sources of such images. Microsoft offered VM downloads, expiring after 90 days (but capable of being rolled back via snapshots indefinitely), for x86 versions of Internet Explorer installed on Windows 7 and 8.1, and for Microsoft Edge (x64) on Windows 10, as well as a simple Win10 trial. (I did not test the functionality of those VMs. ELTP offered advice on using them.) VirtualBox VDI-format (and in some cases other, e.g., VMware-format) images of current (or at least relatively recent) versions of installed Linux distributions were available from OSBoxes, VirtualBoxImages, and VirtualBoxes.
As described in a previous post, I had already created and to some extent customized a Windows 7 VM in VirtualBox. I didn’t want to redo that work, and I also didn’t want to use up a Windows activation, which I would have to do if I reinstalled Win7 from scratch and activated it. (Apparently too many activations would provoke Microsoft to deny further activation requests, leaving the Windows installation potentially useless.) So I was particularly interested in advice on how to convert an existing VirtualBox VM to KVM.
For the recommended format of KVM disk images, a KVM FAQ page said, “KVM inherits a wealth of disk formats support from QEMU; it supports raw images, the native QEMU format (qcow2), VMware format, and many more.” (LWN (Shah, 2016) and Stratoscale (Grinberg, 2015) seemed to say that those references to QEMU (short for Quick Emulator, probably best pronounced kew-em-yoo) were due to KVM’s use of certain preexisting QEMU functionality.) Despite that KVM FAQ statement, multiple sources viewed only raw and qcow2 as official KVM formats, and indicated that other formats would have to be converted.
On the choice between raw and qcow2 formats, a RHEL 3.5 document (2005) said that raw format would perform better, at the price of requiring the entire VM space to be allocated in advance rather than expanding dynamically as needed. More recently, however, Red Hat (2017) said that the performance of the latest version of qcow2 (i.e., QCOW2v3, a/k/a QCOW3) was “almost as good as RAW format.” It appeared the preallocation of virtual disk space was the reason for the raw performance advantage. The drawback of preallocation was that raw images were much larger, though one participant in a ServerFault discussion said the preallocated space could be compressed (e.g., when backing up in a .zip file). Unlike qcow2, raw format did not support snapshots — that is, capturing the current state of a VM, so that the user could revert if something went wrong later (e.g., returning to the state of a Windows installation prior to the installation of software that turned out to be bad). But apparently it was possible to construct “overlays” for approximately the same purpose as snapshots (see e.g., DustyMabe, 2015, FedoraPeople, 2012). In my limited reading, it appeared qcow2 was much more widely used than raw format. I decided to start with that.
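The overlay idea those sources describe is a qcow2 image that records only the changes made relative to a read-only backing file, so the backing file can serve as a snapshot to fall back on. The command sketched below reflects my reading of those sources rather than anything I tested; bracketed names are placeholders, and note that newer qemu versions also require a backing-format flag (-F):

```
qemu-img create -f qcow2 -b [BASE.qcow2] [OVERLAY.qcow2]
```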
There seemed to be several ways to convert other VM formats to qcow2 format. A search led to third-party sources suggesting that it might help to begin by merging snapshots (as detailed in 1 2 sources) and shutting down Windows in the VM gracefully. For Windows XP, 1 2 3 4 sources mentioned additional tweaks that might help. For the actual conversion, Joseph Zikusooka (2014) recommended qemu-img convert -f vdi -O qcow2 [VBOX-IMAGE.vdi] [KVM-IMAGE.qcow2]. The converted image could then supposedly be imported using virt-manager > Disk 1 > Advanced Options > change Storage Format to qcow2. According to a comment at Random Hacks (2014), this approach replaced the older advice (e.g., Useful Stuff, 2014) requiring use of the VirtualBox VBoxManage command (which included clonevm and clonemedium subcommands, see e.g., Utappia video) to convert from .vdi to a raw .img, and then to convert from .img to .qcow2.
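Pulling that advice together, the two conversion routes look roughly like this. I present it as a sketch of what the sources described, not a tested recipe; bracketed names are placeholders:

```
## Direct route (Zikusooka): convert the VirtualBox .vdi straight to qcow2
qemu-img convert -f vdi -O qcow2 [VBOX-IMAGE.vdi] [KVM-IMAGE.qcow2]

## Older two-step route: clone the .vdi to a raw image with VBoxManage,
## then convert the raw image to qcow2
VBoxManage clonemedium [VBOX-IMAGE.vdi] [RAW-IMAGE.img] --format RAW
qemu-img convert -f raw -O qcow2 [RAW-IMAGE.img] [KVM-IMAGE.qcow2]
```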
It appeared that I could use virt-manager (i.e., KVM’s Virtual Machine Manager) to convert the VirtualBox VM. I tried this twice, first in Linux Mint and then later in CentOS. In Mint, the process went like this. In virt-manager, I chose File > New virtual machine > “Import existing disk image” > Forward > browse to a copy of the .vdi file produced by VirtualBox > indicate its operating system type and version > Forward. It produced a warning: “The emulator may not have search permissions for the path” that I specified. I didn’t know what that meant, but I accepted its offer to correct it. I told it to use 8192MiB RAM and 3 of 4 available CPUs. I named it and selected the custom configuration box. The only thing I was relatively confident about changing there was to make it USB 3. Then I clicked Begin Installation (at the top of the custom hardware box). It created the VM, and tried to boot Windows, but failed. By default, it led into Windows startup repair; and after a minute’s worth of black screen, I got the usual options. I chose to skip Windows System Restore. It said, “Attempting repairs.” After several minutes, it said, “Startup Repair cannot repair this computer automatically.” That was apparently the result that others achieved by using the methods mentioned above, involving qemu-img convert and VBoxManage. Multiple sources observed that, when converting Windows VMs, outputs from both of those Linux conversion approaches tended to be unbootable.
Later, I took similar steps in CentOS. I used Applications > Utilities > Disks to mount my VMs partition. It said the partition was mounted at /run/media/ray/VMs. Then, as above, I ran virt-manager (also available at Applications > System Tools > Virtual Machine Manager). It asked for the administrator password. I entered that. I ran ps aux | grep virt-manager. Its output named ray in the first column. This indicated that I was the owner of the process. I went to File > New virtual machine > Import existing disk image > Forward > Browse > Browse Local > Other Locations > browse to and select the .vdi file > Open > indicate OS type = Windows > indicate Version = Show all OS options > Windows 7 > Forward. First time around, I got a message indicating that the emulator might not have search permissions for that .vdi file or path. I accepted its offer to correct that and not to ask again. It said, “Errors were encountered . . . It is very likely the VM will fail to start up.” I think the reason may have been that, that time, I had actually started virt-manager from a Nautilus session where I was running as root. Second time, I did the ps aux check just mentioned, and didn’t get that error. Instead, I was looking at the option to specify RAM and CPU. I set those as in the previous paragraph. This time, I didn’t attempt any custom configuration. When I clicked on through to finish the setup, I got an error, whose key elements seemed to be as follows:
Unable to complete install: ‘internal error: process exited while connecting to monitor . . . could not open disk image . . . permission denied.’
A search led to a Level One Techs forum where the question arose, did you add your user to the libvirtd group? On reflection, I was not sure that I had run the command recommended above for that purpose (i.e., useradd -g libvirtd [user]). When I tried that as ray, I got “Permission denied.” When I tried it as root, I got “group ‘libvirtd’ does not exist.” A search for the latter message led to an AskUbuntu discussion recommending running these commands as root:
addgroup libvirtd
adduser YOURUSERNAME libvirtd
But that returned “addgroup: command not found.” Additional investigation revealed that, in Red Hat/CentOS, the command was groupadd. Once I did groupadd libvirtd, the useradd -g libvirtd ray command no longer complained about a missing group. Instead, it said, “user ‘ray’ already exists.” So, OK, I was confused. How could I already be a member of a group that hadn’t previously existed? Someone else in that AskUbuntu discussion said that, in Ubuntu, the group was actually named libvirt, not libvirtd. I tried adding myself to libvirt and repeating the command. That produced the same result: “user ‘ray’ already exists.” Someone else in that discussion said s/he got better results starting virt-manager from the command line than by running Virtual Machine Manager from the menu. I tried that. I still got the same permissions error. A comment on reddit suggested that I could run virt-manager as a standard user after running (su – ) usermod -a -G libvirt [username]. I was a little too confused to know whether that information was helpful or even fully relevant; I’m just passing it on as befuddled babble. Another reddit remark: the group of actual importance here was the kvm group. So I tried useradd -g kvm ray. But it said I already existed there too.
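For what it’s worth, the confusion above has a mundane explanation: useradd creates new accounts (hence “user ‘ray’ already exists”), while usermod -a -G appends an existing account to a supplementary group. A sketch of the membership commands (the root-only lines are shown as comments; only a harmless read-only check actually runs):

```shell
# useradd creates accounts; it cannot add an existing user to a group.
# The append operation is usermod -a -G (or gpasswd -a), run as root:
#   groupadd libvirt              # only if the group is missing
#   usermod -a -G libvirt ray     # -a (append) matters: without it,
#                                 # -G REPLACES the supplementary group list
#   id -nG ray                    # verify (takes effect at next login)
# Non-destructive check of the current user's groups:
groups=$(id -nG)
echo "current groups: $groups"
```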
I was lost. I posted a question and waited for insight. Not much arrived. Eventually, I tried again in a different forum. While that was unfolding, I decided to try something else. I already had my Windows VMs set up and configured, and hopefully I would be able to bring those over to KVM; but I didn’t have any Linux VMs yet, and I was pretty sure I would want at least one. For this purpose, I thought I might start with a preconfigured Ubuntu image. I had previously become aware of several sources of such images, notably VirtualBoxImages, OSBoxes, and VirtualBoxes. A look at their websites indicated that VirtualBoxImages and VirtualBoxes were no longer keeping up with new releases. OSBoxes offered VMware and VirtualBox images for Ubuntu GNOME, which seemed like a sensible counterpart to my CentOS GNOME host. It looked like OSBoxes was coming out with one new addition a year. At this writing, their latest offering was a .7z file for Ubuntu Gnome 17.04 x64, so that’s what I downloaded. I installed p7zip to gain the ability to unzip it. That gave me a .vdi file, which I stored on the VMs drive, along with the Windows 7 VM mentioned above. I tried the same steps as above to add this VM to KVM. I got the same permissions error. Thus, the problem did not seem to arise from the Windows 7 VM specifically.
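For reference, the unzip step looked roughly like this on CentOS (the install lines need root, p7zip comes from EPEL, and the archive name is a placeholder for whatever OSBoxes ships). The live part only demonstrates the extraction syntax on a scratch archive, if 7za is present:

```shell
# Install and extract (as root; EPEL provides p7zip on CentOS):
#   yum install -y epel-release p7zip
#   7za x UbuntuGNOME.7z        # placeholder for the downloaded archive name
# Demonstration of the a (add) and x (extract) syntax on a scratch file:
if command -v 7za >/dev/null 2>&1; then
  echo "hello" > /tmp/demo.txt
  7za a /tmp/demo.7z /tmp/demo.txt >/dev/null
  7za x -o/tmp/demo-out -y /tmp/demo.7z >/dev/null
  cat /tmp/demo-out/demo.txt
  rm -rf /tmp/demo.7z /tmp/demo.txt /tmp/demo-out
else
  echo "p7zip (7za) not installed; commands shown for reference only"
fi
status=done
```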
At this point, a response to my posted question asked whether the VM had “the right SELinux context.” I didn’t know what that meant. A search led to the RHEL 7 SELinux User’s and Administrator’s Guide (SUAG). According to SUAG (ch. 1), the point of Security Enhanced Linux (SELinux) was to “enable system administrators to create comprehensive and fine-grained security policies, such as restricting specific applications to only viewing log files, while allowing other applications to append new data to the log files.”
That sounded like way too much administrative overhead for a desktop user, especially one who didn’t even plan to run many applications on the host system. My first question was whether and how I could turn this off. I know this will come as a shock, but a search demonstrated that I was actually not the only person who had that question. Standing against the tide of people who wanted to disable SELinux, I found Rich Owen (2017), who said that SELinux “actually” is “really not” that difficult. Its problems, he said, “typically include” mislabeling, policy modification, and policy bugs — all of which seemed to support the comment by Christof Damian:
While I agree that turning off SELinux is a bad idea. I don’t agree that there are simple fixes for most users. For some problems even experienced users can have a hard time figuring them out. [Damian offers an example of a documented bug.] . . . The problem can be fixed with a simple relabel of the correct file, but it is very difficult to find this out. . . . The problem was caused by the nfs-server package itself, which was also written by experienced developers.
In similar spirit, another commenter, Doctor Dbx, said, “Selinux is a great concept but is fundamentally broken when default configurations from official repos trigger violations.” Another, Michael Nielsen, agreed: “While SE linux is a great concept, it is in no way easy to work with. . . . Currently the tools are good for those who pretty much always work with SELinux, but are a pain for anyone else.” There were other comments in the same vein. SUAG itself contained a Troubleshooting section (ch. 11) discussing numerous SELinux problems.
I was not a Linux administrator. I was going to have to be content with the level of security I’d get from any other Linux distro. I would have to substantially disable SELinux. On that topic, SUAG (4.4) said that SELinux could be either enabled (in either enforcing or permissive mode) or disabled. In permissive mode, SUAG said, “SELinux policy is not enforced. The system remains operational and SELinux does not deny any operations but only logs AVC messages . . . .” That sounded tolerable. To change to permissive mode, SUAG said I would have to edit /etc/selinux/config and then reboot. The edit consisted of (su – ) gedit /etc/selinux/config > change the line that said “SELINUX=enforcing” to read “SELINUX=permissive” > save > close. (The file’s comment lines said that I could instead have changed it to be “SELINUX=disabled.”) I did that and then rebooted.
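After the edit, the relevant lines of /etc/selinux/config would read along these lines (comment lines abridged; SELINUXTYPE is the stock CentOS default):

```
# /etc/selinux/config
# SELINUX= can be: enforcing, permissive, or disabled
SELINUX=permissive
# SELINUXTYPE= can be: targeted, minimum, or mls
SELINUXTYPE=targeted
```

For a temporary switch without a reboot, setenforce 0 (as root) drops a running system to permissive mode until the next boot, and getenforce reports the current mode.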
With SELinux thus changed to permissive mode, I tried virt-manager again. I got further than before. This time, the error was, “Unable to complete install: ‘Cannot get interface MTU on '': No such device’.” Along with that, I got an alert as to an SELinux problem. When I clicked on it, that alert read as follows:
SELinux has detected a problem.
The source process: /usr/sbin/virtlogd
Attempted this access: write
On this fifo_file: /run/systemd/inhibit/13.ref
That seemed like a pretty good example of something that I did not want to troubleshoot, because (a) I felt like I was already troubleshooting everything else at the same time and (b) I had no idea what it was about. The SELinux dialog said, “Would you like to receive alerts?” I felt it was wonderful that there were people and systems out there, doing this sort of work; but for me, it was not ardently desired. So I gracefully declined, and hoped the whole apparatus would quietly sink beneath the waves.
Which brought us back to the main error, which I interpreted as “Cannot get interface MTU on this nonexistent device.” No idea what it meant, but at least it sounded like English. A search led to multiple pages mentioning bridge connections. I wondered if the error was due to my attempt to disable networking: I didn’t want the Windows 7 VM to have Internet access. I trashed that attempt and started over, this time with only default settings except for larger RAM and CPUs. That seemed to resolve the “Cannot get interface MTU” error, but now I was back at the “Unable to complete install . . . could not open disk image . . . permission denied” error.
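Consistent with those bridge-connection pages, this error commonly means the VM references a libvirt network (normally the one named “default,” bridged on virbr0) that is not running, so the empty device name in the message fits. The virsh commands to inspect and start it would be roughly as follows (the start/autostart lines need root and a reachable libvirtd, so they are shown as comments; only the harmless listing actually runs):

```shell
# Start libvirt's default NAT network and make it start at boot (as root):
#   virsh net-start default
#   virsh net-autostart default
# Read-only check of which libvirt networks exist and are active:
if command -v virsh >/dev/null 2>&1; then
  virsh net-list --all || echo "virsh present but libvirtd not reachable"
else
  echo "virsh not installed; commands shown for reference only"
fi
status=done
```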
After a while, I found a possible solution. I was looking at permissions for the folder, and assuming they applied to the files within the folder. Evidently that was not the case. I changed the permissions for the .vdi file. I was thinking maybe there were additional problems with the Windows VM, so I tried this on a Linux Mint .vdi download from OSBoxes — only to find that KVM did not have any settings for Linux Mint. When I got to the box that said, “Choose an operating system type and version,” the Linux choices were ALT Linux, CentOS, Debian, Fedora, Mageia, openSUSE, RHEL, SUSE Linux Enterprise (Desktop and Server), Ubuntu, and Others, which contained versions of GNOME, Mandriva, and Red Hat. So, OK, I tried with the Ubuntu 17.04 download from OSBoxes. I gave it 2048MB RAM and 2 CPUs. But no, permission denied — and that was when both the .vdi and its folder were owned by ray. I tried setting them to be owned by root, but that made no difference.
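The distinction that tripped me up: a directory’s permissions govern traversal and listing, not access to the files inside it; each file carries its own mode and owner and must be set separately. A sketch with a scratch directory and file (substitute the real .vdi path in practice):

```shell
# A directory's mode does not set its files' modes; each is independent.
dir=$(mktemp -d)
touch "$dir/disk.vdi"
chmod 770 "$dir"            # directory: rwx for owner and group
chmod 660 "$dir/disk.vdi"   # file: rw for owner and group, set separately
perms=$(stat -c %a "$dir/disk.vdi")
echo "file mode: $perms"
rm -rf "$dir"
```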
One response to one of my posted questions asked how I was mounting the VMs partition. I unmounted it using CentOS menu > Applications > Utilities > Disks, then remounted it by navigating to it in Nautilus. This time, when I went back through the Virtual Machine Manager process of creating a new disk, I got a question I hadn’t seen before:
Default pool is not active.
Storage pool ‘default’ is not active. Would you like to start the pool now?
I wasn’t sure what that was about. As of this date, a search suggested this message was quite rare. In the only relevant hit, the original poster answered his own question by manually creating /var/lib/libvirt/images and then changing ownership. I could have attempted to recreate what he seemed to be saying, but the larger message appeared to be that I had a funky installation and should just go back to bed, or at least consider reinstalling from scratch.
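For the record, that poster’s manual fix corresponds to (re)defining libvirt’s default storage pool, which virsh can do explicitly; /var/lib/libvirt/images is libvirt’s conventional default target. The defining commands need root and a running libvirtd, so they are shown as comments, with only a read-only listing actually run:

```shell
# Recreate and start libvirt's default storage pool (as root):
#   virsh pool-define-as default dir --target /var/lib/libvirt/images
#   virsh pool-build default        # creates the directory if missing
#   virsh pool-start default
#   virsh pool-autostart default
# Read-only check of defined pools:
if command -v virsh >/dev/null 2>&1; then
  virsh pool-list --all || echo "virsh present but libvirtd not reachable"
else
  echo "virsh not installed; commands shown for reference only"
fi
status=done
```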
That conclusion was aided by the discovery that, around this time, I was starting to get notices reading, “A problem in the kernel package has been detected.” Not to say that was the kiss of death but, according to one SuperUser contributor, “When it comes to Linux kernel issues and the way they’re being reported in the UI, there’s rarely an easy path to diagnosing it.”
This section contains the start of what was going to be a separate post. As described here, I started down the road of reinstalling CentOS; but as I continued with the following steps, I had second thoughts.
For this installation retry, I used the installation method recommended by the Red Hat Enterprise Linux version 7 (RHEL 7) Installation Guide (3.2.2). Specifically, on my Windows 10 desktop computer, I downloaded and ran FedoraMediaWriter-win32-4.1.1.exe from its GitHub webpage, and used that to install the 4.16GB CentOS DVD ISO onto a USB thumb drive. The steps in Fedora Media Writer were to plug in the USB drive > select Custom Image > navigate to the ISO > select the USB drive > Write to Disk > Close. On the USB drive, that process produced a 4.5GB CentOS partition plus a 9.2MB Anaconda FAT partition. Those contents were visible in Linux, but not in Windows Explorer. It seemed I couldn’t even see the USB drive in Windows unless I used something like MiniTool Partition Wizard or diskmgmt.msc.
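The rough command-line equivalent of what Fedora Media Writer does is a raw dd copy of the ISO onto the USB device. The ISO filename and /dev/sdX device are placeholders — writing to the wrong device destroys its contents — so the live demonstration below exercises the same copy mechanics on scratch files instead:

```shell
# Real-world form (do NOT run without verifying the device name first):
#   dd if=CentOS-DVD.iso of=/dev/sdX bs=4M conv=fsync
# Same mechanics demonstrated on throwaway files:
src=$(mktemp); dst=$(mktemp)
head -c 1048576 /dev/urandom > "$src"     # 1 MiB of sample data
dd if="$src" of="$dst" bs=4M conv=fsync 2>/dev/null
match=$(cmp -s "$src" "$dst" && echo yes) # verify a byte-identical copy
echo "copies match: $match"
rm -f "$src" "$dst"
```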
I already had the desired ext4 partitions on the solid state drive (SSD) on the laptop: 1GB for boot and 17GB for root, plus 22GB for swap. On the laptop’s hard disk drive (HDD), I also had a preexisting 300GB partition for /home. If I hadn’t had those partitions, I would have used GParted to create them before starting installation.
I plugged the USB drive into the laptop, booted it, hit F12 at startup (just after the Acer splash screen) to bring up the boot menu, chose this USB drive, and chose its Install CentOS 7 option. Minor adjustments aside, the first key point was to select Software Selection > GNOME Desktop with GNOME Applications. For Installation Destination, I selected both the SSD and HDD > I will configure partitioning > designate partitions. This was already configured for me, as I did not erase my previous CentOS installation before commencing this one. I still had to select each of the previous partitions, designate mount points, and check the Reformat box. Exception: I did not reformat /home or swap.
At this point, while attempting to configure the Networking > Wireless option, I got an error message: “An unknown error has occurred.” The first line of the details referred to an anaconda exception report. Unless there was a problem with the Fedora installation on the USB drive, I suspected the reason for the error was that I attempted to configure Wireless while the installer was still wrestling with my Ethernet selection. That bug crashed the installer; clicking Quit forced a reboot.
On reboot, the partition configuration process resulted in an orange error banner across the bottom of the screen: “Error checking storage configuration.” When I clicked on its link, I got a statement, “Your BIOS-based system needs a special partition to boot from a GPT disk label. To continue please create a 1MB ‘biosboot’ type partition.” That was new. I Quit > reboot > F2 > examined the system’s BIOS utility. It was set to Legacy. I switched it to UEFI > reboot. It seemed that the Fedora media writer method of setting up the USB drive may have made it UEFI-friendly in a way that the previous YUMI method didn’t. But this did not eliminate the “Error checking storage configuration,” when I got to that point. This time, the detail said,
No valid boot loader target device found. See below for details.
For a UEFI installation, you must include an EFI System Partition on a GPT-formatted disk, mounted at /boot/efi.
I did already have such a partition, in the Unknown section of this Manual Partitioning screen; it was just a matter of selecting it and designating its mount point as /boot/efi and its File System as EFI System Partition > Reformat > Update Settings. Or not: the error persisted. I clicked the circular-arrow Rescan button near the bottom left corner and retried all partitions. That didn’t fix it. I bailed out of the installer and went back into the BIOS and changed it back from UEFI to Legacy. But still no luck.
I guessed the problem was that I had set up the SSD as MBR rather than GPT. I didn’t know why that hadn’t seemed to be a problem in the previous installation. Regardless, at this point, I did a reconfiguration: I moved things that I wanted to save off the SSD; I repartitioned it as GPT; I set the BIOS to UEFI (and turned off Secure Boot); and then I restarted the installation. I went through the same steps, this time allowing sufficient time for the Ethernet setup to try Connecting and then become Disconnected (it was connected only to another computer, not to the Internet). In the next step, when I had an opportunity to create root and user accounts, the hassles I had experienced with sudo in the previous setup persuaded me to check the box, “Make this user administrator,” as some (e.g., TecMint, LinOxide) suggested.
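The repartitioning step can be sketched with parted: a GPT label plus a small FAT-formatted partition flagged as the EFI System Partition. On real hardware the target would be a device like /dev/sda and the operation destroys its contents, so this demonstration runs against a scratch disk image file instead (parted operates on files the same way):

```shell
# GPT label + EFI System Partition, demonstrated on a 100MB scratch image.
img=/tmp/demo-disk.img
truncate -s 100M "$img"
if command -v parted >/dev/null 2>&1; then
  parted -s "$img" mklabel gpt                  # new GPT partition table
  parted -s "$img" mkpart ESP fat32 1MiB 50MiB  # small partition for /boot/efi
  parted -s "$img" set 1 esp on                 # mark it as the EFI System Partition
  parted -s "$img" print
else
  echo "parted not installed; commands shown for reference only"
fi
rm -f "$img"
status=done
```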
At this point, I paused to reflect on what I had experienced with CentOS.
I had begun with the belief that Red Hat was designed with a particular focus on stability for server administration in large corporations. I knew that world to be cautious. I also began with the understanding that CentOS was virtually identical to Red Hat, but without the investment in Red Hat training, certification, and support. It seemed that, by taking this route, I would be choosing a Linux distribution that favored the careful, sometimes slow road, and in exchange I would be getting an operating system that produced consistent, predictable, reliable results.
The original concept was, in effect, that I would be getting Red Hat corporate software for free. In many regards, that was probably true. But my experience suggested that, in some regards, it was not. The most important example involved KVM. I had been able to use KVM to run my VMs on Linux Mint using Virtual Machine Manager. But on CentOS, using the same tool, attempting to mount those same VMs, following official instructions, I could not get past a permissions problem that had not existed on Mint. Everything seemed to be right, but I was getting weird results for which neither Google searches nor questions posted on two of the most highly trafficked CentOS discussion forums yielded solutions. I never did get the VMs to work in CentOS.
I did not think that was just a matter of CentOS being cautious. Nor did I think it was a case of having a GUI problem that wouldn’t have existed if I had used the command line. As I say, these were simple procedures that had worked on Mint. And Red Hat devoted considerable effort to documentation explaining how to use Virtual Machine Manager for precisely this purpose. It seemed, in other words, that CentOS might not be behaving as Red Hat would behave.
I encountered other instances like that. At the outset, for instance, I encountered an unpolished, sometimes unintuitive, and in one or two places apparently buggy CentOS installer, capable of crashing for no apparent reason. As another example, I have mentioned (above) the CentOS forum discussion in which participants said that commands available to Red Hat users, to choose among all updates, specified package updates, or security-only updates, were nonworking in CentOS. Likewise, when I ran into context menus that would not stay open until I right-clicked on the same item several times (manifested again after reinstallation), I seemed to be encountering a glaring bug, one that I doubted Red Hat itself would tolerate, in the user experience that it was selling to corporate buyers.
I did believe that the command-line-oriented server administrators comprising Red Hat’s primary market might not care much about CentOS’s remarkably unhelpful default file browser or its mediocre implementation of the GNOME desktop. This was a second area of concern in my experience: I was seeing what it meant to choose a Linux distribution oriented primarily toward server administrators. Such users might want and/or need to invest the time required to master the arcana — to understand and implement the various security possibilities involved in sudo configuration and group memberships, for instance, and in SELinux. That wasn’t me. It did appear that SELinux, too, was (as I recalled the words of one commenter) “fundamentally broken”; but even if it wasn’t, this was precisely the sort of complexity that I was not looking for. In these regards, I seemed to be swimming against the tide within the CentOS world.
Before undertaking this effort, I saw the user base of distributions like Mint and Ubuntu as something of a liability. There seemed to be so many people, creating and trying to resolve so many problems, arriving at so many beliefs that were not necessarily true or compatible with each other. It was a hubbub. I believed, in effect, that it didn’t have to be that complicated. And that might be true. But at present it seemed that the semi-chaos of the user experience in something like Ubuntu was also very helpful, for purposes of finding at least some kind of help, for almost anything that might go wrong. Imagine, by contrast, my experience of getting few if any relevant hits for Google searches seeking solutions to problems with CentOS, and few if any responses to multiple questions posed in two of the most heavily trafficked CentOS forums. There just didn’t seem to be a lot of people trying to resolve the problems that were blocking my progress.
Now, not only was I running into a KVM permissions problem that didn’t exist in Mint; I was finding that nobody seemed to be able to explain it. In addition, by simply following normal steps in the KVM GUI, I was getting a “default pool” error message that virtually nobody else got. I wound up with a kernel problem without even trying. I had done far more tweaking in Linux Mint, and yet had experienced far fewer problems that stumped me.
In such regards, I got a sense that CentOS might be more fragile than Mint. It seemed that it might work fine, for corporate users who would all do things in exactly the same way, though that did not seem to describe the remarks that various troubleshooting admins had posted in various discussions. But it seemed that CentOS would break down more easily if the user departed from standard operating procedure. The impression here was that, along with its bulk and complexity, a Mint installation would also have the advantage of many more user experiences to sort out its bugs, starting in Debian and continuing in Ubuntu and continuing further among Mint users.
Lifewire (Newell, 2017) said, “New users may find it painful at first because Ubuntu does an awful lot for you and with CentOS you may find yourself hunting for the solution to certain issues.” But now I believed that was an understatement. I was afraid that I might find it painful, not only at first — when I had been essentially sidelined, doing other things for days on end, while waiting for advice and trying to find answers or think of solutions that I might have overlooked — but at unpredictable times in the future, when I would run into a CentOS problem that nobody else seemed to be dealing with. This was, again, not my concept of OS stability; this was not what I wanted when I went searching for an OS with long-term support.
I began this CentOS attempt in pursuit of an OS that would provide a stable long-term experience. My tentative impression at this point was that Red Hat might be great, for corporate users willing and able to pay, and CentOS might be fine, for experienced server administrators who did not need to depart from standard procedures. But it presently seemed that neither distro would be suitable for end users lacking corporate funds and seeking solutions for varying individual purposes.
I thought that, for purposes of running VMs, CentOS would give me a very plain-vanilla solution, but it didn’t turn out that way. I concluded that I probably did belong back in the Ubuntu world, though possibly not with a distro as heavily laden as Mint or Ubuntu per se.