I just encountered a video by one Matthew Moore. He calls it “Mythbusting Linux”. You can find it over on YouTube here. For the most part, what Matthew says in the video is accurate enough. However, there are a couple of points where he misrepresents or misunderstands the specific situation.
He has comments blocked on his video, so it’s not possible to argue with his points directly on the video page. I’m only writing this because his video reflects an incomplete or mistaken understanding of several points, and I think it’s worth having something out there to help set the record straight.
I’ll take his four myths one by one in the order he presents them in the video.
Malware is no problem
He states that Linux can, in fact, be infected with malware. I have no argument with his conclusion here. However, the examples he shows appear to be Windows-targeted malware sitting in what appears to be a temporary directory used by the program on his system that fails to start. I suspect that reveals a bug in that program rather than an active malware infection on his system. Still, even though his example doesn’t actually prove his conclusion, the conclusion itself is accurate.
Fragmentation is no problem
I’m not going to argue his conclusion here. His reasoning is basically accurate, though his case for ever needing defragmentation is a bit overstated. Still, following his advice here is not going to cause problems.
You Never Need to Reboot
The myth is that you never need to reboot Linux. His conclusion is right – there are cases where a reboot is needed. In particular, when the kernel is updated, a reboot is definitely required for the new kernel to take effect. In many cases, even a device driver update (when the driver is a kernel module) will require a reboot, because that may be the only way to take the device offline so the driver can be switched. However, his assertion is that you need to reboot because that is the only way an updated system library gets used. This is where his understanding of the system falls apart.
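As a quick aside, it’s easy to see when the kernel on disk differs from the one actually running. A minimal sketch (module paths vary by distribution):

```shell
uname -r                       # version of the currently running kernel
ls /lib/modules/ 2>/dev/null   # kernel versions installed on disk
# If the newest entry on disk doesn't match `uname -r`, a reboot is
# needed before the new kernel is actually in use.
```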
He asserts that the new version of the library will not be used by anything at all until the old version is removed from memory. This is where his understanding of libraries is wrong. The new version will, in fact, be used by any program started after the library is updated. Only programs that were already running before the update will continue to use the old version, and simply restarting those programs will cause them to pick up the new one. Almost all system software can be restarted after such an update, and for most library updates that is all that is required.
To prove his point, he uses the example of uninstalling Firefox while it is running and showing that it still works after it has been uninstalled. However, all he proved is that something that is already loaded can continue to run after the file it was loaded from is deleted, which should not be surprising. If he had tried to start a second Firefox without closing the first one, he would have seen that the Firefox binary is no longer present to be executed. This is the same effect described above for libraries: whatever is currently present in the file system is what gets loaded when a program is started, and if a program isn’t present in the file system, it can’t be started. For his test to be meaningful, he should have used two different versions of a single executable with a trivially visible difference in their interfaces: start one version, leave it running, remove the binary, install the other version, and then start that one. He would have seen that the second instance started is the newly installed version. If his assertion about libraries were accurate, it would instead be the first version.
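That experiment is easy to reproduce with a trivial stand-in for Firefox. This sketch (the `prog` name and banners are made up for the demo) runs version 1, deletes it, installs version 2, and then starts a fresh copy:

```shell
# "Version 1" of a program that prints a banner and keeps running:
printf '#!/bin/sh\necho this is v1\nexec sleep 5\n' > prog
chmod +x prog
./prog & pid=$!        # prints "this is v1" and stays running
sleep 1                # give v1 a moment to start
rm prog                # "uninstall" -- the running v1 is unaffected

# Install "version 2" and start it fresh:
printf '#!/bin/sh\necho this is v2\n' > prog
chmod +x prog
./prog                 # prints "this is v2" -- new starts see the new file
kill "$pid"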
Now, to be fair, the easiest way to make sure nothing is still using the old version of a library is to reboot. However, that is not strictly required if everything using the old version can be stopped and restarted, which will usually be the case for anything other than the C library itself.
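On Linux you can even find the processes still holding a deleted (i.e. replaced) file through /proc, which tells you exactly what needs a restart. A sketch (the `stale-sleep` name is made up for the demo):

```shell
# Create a process whose executable has been deleted out from under it:
cp /bin/sleep ./stale-sleep
./stale-sleep 10 & pid=$!
sleep 1                      # let it start before removing the file
rm ./stale-sleep

# Its memory maps now flag the old file, marking it as a restart candidate:
grep '(deleted)' "/proc/$pid/maps" | head -n 1

kill "$pid"
```

Scanning `/proc/*/maps` for `(deleted)` after a library update gives the full list of processes still on old code, no reboot required.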
One Install Will Boot Anywhere
Here, he’s basically right. However, the reason he gives is actually wrong.
In this segment, he takes the hard drive out of his laptop, installs it in his desktop, starts it up, and it fails to boot. He then concludes that it is due to the UUID not matching the hardware. His conclusion is, however, flawed.
The UUID in question is attached to the file system, not to the overall computer system. Even if the UUID were associated with the hard drive itself, that hard drive is present, so it can’t be the problem. I have personally copied the entire contents of a hard drive, partition table and all, to a separate drive, installed that in a system, and it booted just fine. I have also taken a modern Ubuntu derivative from one machine and installed it in a new machine, with only the hard drive in common, and it booted just fine modulo an issue with the video driver due to a different video card.
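This is easy to verify without any real hardware: a file system created inside an ordinary file carries its own UUID, which travels with the file system wherever it goes. A sketch, assuming e2fsprogs and util-linux are installed:

```shell
# Make a file system inside a plain file -- no disk, no "computer system":
truncate -s 16M demo.img
mkfs.ext4 -q -F demo.img

# blkid reports a UUID that belongs to the file system's own superblock;
# copy demo.img to another machine and the UUID goes with it unchanged.
blkid demo.img
```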
The problem he had was that when the system booted, the BIOS found the drive just fine and, thus, GRUB was able to read it. However, for whatever reason, the Linux kernel and whatever magic Arch Linux is doing failed to find the hard drive at all. That means that either the BIOS failed to initialize the disk controller properly (say, by hiding it from the operating system) or the desktop machine has a disk controller chipset not supported by that Arch Linux installation. (Just because the CPU and video card are similar, it doesn’t mean the disk controller chipset is the same.) This could be because Arch Linux customized the “initrd” image to the hardware in the laptop, or it could be because of something the BIOS in the desktop is doing. Either way, the failure to boot had nothing to do with “UUIDs in fstab”, since those are attached to file systems or hard drives, not to computer systems themselves, and the hard drive was clearly present for the boot to get as far as that error.
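For what it’s worth, Arch’s initramfs generator really does trim the image to the build machine’s hardware when the `autodetect` hook is enabled, which is one plausible cause of his failure. An illustrative config fragment (the exact hook list varies per install):

```shell
# /etc/mkinitcpio.conf (excerpt) -- the autodetect hook limits the
# initramfs to kernel modules for hardware present at build time:
HOOKS=(base udev autodetect modconf block filesystems keyboard fsck)

# mkinitcpio -P regenerates the images; the "fallback" image it builds
# omits autodetect and so carries drivers for hardware the build
# machine doesn't have -- often enough to boot on a different box.
```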
As a bonus, he asserts that UUID-based systems boot faster because the /dev-based systems (the “old way”) have to identify all the drives before they can boot. In actual fact, he has that backwards. The old system is actually faster, because a /dev path identifies the hardware device directly, while with UUIDs the system has to scan the actual disks to find the file system carrying each UUID.
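For illustration, here are the two fstab styles side by side (the UUID below is made up):

```shell
# /etc/fstab -- two ways to name the same root file system:

# UUID form: the system must probe disks to find which file system
# carries this identifier, but the entry survives drives being
# reordered or renamed.
UUID=1b4e28ba-2fa1-11d2-883f-b9a761bde3fb  /  ext4  defaults  0  1

# Device-path form: names the hardware device directly (no lookup),
# but breaks if the kernel enumerates the drives in a different order.
#/dev/sda2                                 /  ext4  defaults  0  1
```

The UUID form trades a small lookup cost for robustness against device renaming, which is why modern installers prefer it.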
To be fair, transferring a Linux installation from one computer to another is not guaranteed to work. If drivers are missing or there is some other architectural incompatibility, it will certainly fail. However, the UUID thing will not be the cause of such a failure.