Linux distributions

If you used a live image to install, you can boot from your DVD once again and select "Try Fedora" (you have to click twice because of a bug).

If you have a non-QWERTY keyboard, you'll have to find the settings and change the keyboard layout.

Then you can open a terminal (from the menu hidden by "Activities", or ALT-F2 and type "gnome-terminal"), and type this command to see what it reveals:

sudo parted -l

With the disk configuration set to automatic, I had something like this; note the boot partition at number 1. It needs at least 512 MB for /boot; here the installer chose 1 GB, although that wasn't necessary.

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  1075MB  1074MB  primary  ext4         boot
 2      1075MB  42.9GB  41.9GB  primary  btrfs


That's where I realized that it uses Btrfs by default too :(
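You can also check the layout without sudo using lsblk — a quick sketch (the column selection is just one possibility, and device names will differ on your machine):

```shell
# List block devices with their size, filesystem type, and mount point.
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT
```

On the layout above you'd expect to see the ~1 GB ext4 partition and the Btrfs root.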

If you don't have this boot partition, you can still shrink your main partition and create one, since the system has just been installed, but at that point it's probably easier to restart the installation.

PS: booting from a USB drive requires enabling it in the BIOS. And if you are creating the USB drive from Windows, you can try Rufus, which works with simple images like Fedora's (it won't work with Manjaro and some others).
 
Joined
Aug 29, 2020
Messages
10,157
Location
Good old Europe
EDIT: If you selected an automatic setup, it's strange that you have a problem. We probably need to investigate that first.

Generally speaking, creating the partitions manually isn't a bad idea, and it's easy with the installer.

I tried it a few days ago. Without messing with LVM, it's really straightforward. In Installation Destination:
  • you can choose from Automatic / Custom / Advanced custom; either of the two custom options is fine
  • click "Done" (the buttons are really badly positioned, they're all over the place)
  • delete your partitions with the cross (select each non-free partition, and click on the delete icon, just on the right of the + button)
  • add one 512-MB boot partition (select the SSD, click on free space, then +, then fill in the panel)
  • add the other partitions you need (*)
  • click "Done", it should tell if you if something's missing or wrong

(*) I usually create a swap partition. You only have 4 GB of memory, so it depends on what you do with the machine, but 4 or 8 GB of swap should be fine. I also usually create a separate partition for the users' home directories. You could use your SSD for the OS and your HD for the users, so:
  • 512 MB on your SSD, filesystem = BIOS Boot
  • 4 or 8 GB on your SSD, filesystem = SWAP
  • the rest of the SSD, filesystem = ext4, mount point = /
  • the HD, filesystem = ext4, mount point = /home
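For illustration, here's roughly how that layout would map into /etc/fstab — the installer generates this for you, so this is only a sketch, and the UUIDs are placeholders (use `blkid` to see the real ones):

```
# /etc/fstab sketch for the layout above (UUIDs are placeholders)
UUID=aaaa-...  /      ext4  defaults  0 1   # rest of the SSD
UUID=bbbb-...  /home  ext4  defaults  0 2   # the HD
UUID=cccc-...  none   swap  defaults  0 0   # 4-8 GB swap on the SSD
```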
 
I'm a firm believer that creating swap on an SSD is a mistake if that disk is also your boot disk. I'm also not sure whether LVM supports TRIM; but even if it does, if your system unexpectedly swaps a lot, you could add a lot of wear to your SSD. I would rather use a scratch SSD for swap (with a file system that supports TRIM).
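If you'd rather not commit a partition to swap at all, a swap file is a flexible middle ground: it can be created, resized, or removed later, and moved to whichever disk you prefer. A sketch, assuming the path /swapfile and root privileges (note that fallocate-based swap files work on ext4/XFS but not on Btrfs or ZFS):

```shell
# Create and enable a 4 GB swap file (run as root).
fallocate -l 4G /swapfile   # reserve the space
chmod 600 /swapfile         # swap files must not be world-readable
mkswap /swapfile            # write the swap signature
swapon /swapfile            # enable it immediately
# To make it permanent, add to /etc/fstab:
#   /swapfile none swap defaults 0 0
```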
 
Joined
Jun 26, 2021
Messages
271
It's nice to see interest in Linux growing around here. It used to be mostly tumbleweeds or snark when it was mentioned. :p
 
Joined
Nov 8, 2014
Messages
12,085
I'm a firm believer that creating swap on an SSD is a mistake if that disk is also your boot disk. I'm also not sure whether LVM supports TRIM; but even if it does, if your system unexpectedly swaps a lot, you could add a lot of wear to your SSD. I would rather use a scratch SSD for swap (with a file system that supports TRIM).

Maybe, maybe not.

My thinking was that the boot partition doesn't contain valuable data, nor does the OS, so I'd rather see those wear out than the user's data, even if an SSD still costs a bit more than an HD. The idea was to use each component for what it's good at: keeping the OS on a faster medium with no random-access penalty, which can be much smaller than the HD and yet hold all the OS files, and keeping a large but slower HD for the user's data.

Besides, I may be wrong, but I think swapping on an HD can be much more damaging than on an SSD, which automatically distributes wear across its cells and has no mechanical parts. If there's so much swapping that it becomes a real issue, it means there's another problem, and the user would notice the slowdown.

It could depend on how the PC is used, of course.
 
It is said that many of us just read the first and last letters of a word before interpreting it. Which in this case might l..d to s..e … misunderstandings.

pibbuR, who didn't (he thinks) misinterpret this one, but who regularly makes similar mistakes when reading keywords in crossword puzzles, which, no surprise, makes them harder to solve than intended.

PS. One trivial example, not worth mentioning, but of course I do it anyway. Keyword: "figur" ("figure"), which I read as "fugler" ("birds"). Took me some time to get around this one. DS

PPS: @Redglyph;: Thanks for the suggestions. I haven't got around to working on it yet. DS.

It was really only about the hard disk, since size matters in this case. ;)
 
I've just seen this informative video about Linux filesystems: a performance comparison of OpenZFS, XFS, Btrfs, and ext4. It's not directly about distros, but it's part of the installation process, so it's relevant enough here.

View: https://www.youtube.com/watch?v=G785-kxFH_M


His conclusions are:
- ZFS is best for non-root filesystems
- ext4 without journaling is best for root filesystems (dropping the journal avoids SSD wear, or gains some performance, at the cost of potential data loss)
- XFS is good for non-SSD drives (with an SSD, the journal could be moved to another drive...)
- Btrfs offers no real advantage, plus it has stability issues with RAID 5 & 6
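For reference, turning the journal off is just an mkfs/tune2fs feature flag. A sketch with a placeholder device name — double-check the device with lsblk first, since formatting erases everything on it:

```shell
# Format a partition as ext4 WITHOUT a journal (destructive!).
# /dev/sdX2 is a placeholder -- replace it with your actual partition.
mkfs.ext4 -O ^has_journal /dev/sdX2

# Or remove the journal from an existing (unmounted) ext4 filesystem:
tune2fs -O ^has_journal /dev/sdX2
```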
 
I didn't watch the video, but about 8 years ago I spent a year, as part of my job, benchmarking these file systems for our storage system (the company I worked for offered a cloud storage service); XFS was significantly faster than ext4. Another note: it's a shame that after all these years Btrfs still has unstable RAID. In theory this was the filesystem that was going to offer everything, and it was stability issues 5 years ago that made me go with ZFS for my home system.

As for ext4 without journaling: just don't. The damage to the file system after a crash isn't worth the benefit, except for a file system like /tmp where you throw everything away.

There is also a difference between maximum throughput (the most bits per second) and latency (how long it takes to access a file while other disk activity is taking place). As I said, I didn't watch the video, but we were more concerned about latency for disk reads on a new request than about absolute maximum bit/s performance. This should also concern a home user: do you want maximum copy speed for a large file, or do you want to be able to run a program when you type a command (for example), or watch a video without stutter? Faster is not always better.
 
He addresses many of those points in the video, which was the point of my post. Besides, many things can change in 8 years. ZFS is a clear winner, with a few exceptions revealed in those tests, but I think it's also very heavy on memory.

I was also a little puzzled by the ext4-without-journaling suggestion, but it's a trade-off between performance plus SSD wear on one side, and on the other the inconvenience of reinstalling some packages, or at worst the OS, if a crash or power cut happens while you're installing them. So it's not entirely unsound to me.
 
Not using a journal is nutty; it should only be considered in the most specialized/extreme circumstances. And SSD wear is a non-issue in all but the most extreme situations today as well.

I use XFS on all our production systems, which is also what Red Hat has been recommending for many years now. I don't see much reason to use ext4 in 2022. Maybe if you need the ability to shrink your filesystem (which XFS is missing).
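A sketch of that shrink asymmetry, with placeholder device and mount-point names (ext4 shrinks offline; XFS only grows, and does so while mounted):

```shell
# ext4 can be shrunk offline (unmount, check, then resize):
umount /dev/sdX2
e2fsck -f /dev/sdX2         # resize2fs requires a fresh fsck before shrinking
resize2fs /dev/sdX2 20G     # shrink the filesystem to 20 GB

# XFS can only grow, on a mounted filesystem:
xfs_growfs /mnt/data        # expand to fill the underlying device
```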

I'd say ZFS is too widely unsupported to really consider for production usage, unless you've got a special situation. Red Hat and SuSE both don't support it. In my industry, those are the only two Linux vendors that are used. I know that Ubuntu is popular in some other industries, and they do support ZFS. But also as another example, I don't think any enterprise backup software (stuff like Veeam, CommVault, whatever) supports it.
 
Joined
Sep 26, 2007
Messages
3,444
Note that the ext4 no-journal recommendation was for the root filesystem only, not the user or data partitions. Still, I don't think I would follow it; there are other ways to get more performance.

Isn't there a legal issue with Ubuntu's version of ZFS? One of the shady things they've done IIRC.

Curiously, openSUSE has made Btrfs the default choice at installation for a long time, and probably still does. And the default configuration is not great, making the whole system freeze regularly because it generates too many snapshots. I don't know if that's the same for SUSE; I suppose not.
 
Hmm, I don't have any recent experience with SuSE, but their documentation says their current default is to use Btrfs for the root filesystem, and XFS for all other filesystems. Red Hat doesn't even support Btrfs.

SuSE has always been a little weird with their filesystem choice. Back in the day, they were the only major distro to use ReiserFS as their default filesystem. But they moved away from it soon after Reiser murdered his wife - and also insisted that Reiser murdering his wife had nothing to do with their reasons for the switch.
 
Joined
Sep 26, 2007
Messages
3,444
Nasty move from Red Hat. By putting the sources behind a paywall and forbidding their use in other downstream distributions, they're killing CentOS replacements like Rocky Linux and AlmaLinux OS. Is that a move to try to get their users back? It's not going to end well for them.

They're no better than Ubuntu, after all.

 