I just had an SSD installed alongside the pre-existing hard drives within my Hetzner server. Ideally, I would like to put the two HDDs into RAID 0 for storage (the data is unimportant) and then have the SSD as the boot drive outside of the RAID array.
I’m able to install an OS onto the SSD (/dev/sdc) via Hetzner’s installimage script (I comment out the two hard drives and disable software RAID), but then the server refuses to boot. I’m sure it’s something simple, but I just can’t figure it out.
my guess would be that commenting out the HDDs makes installimage ignore them completely, so that no grub (or whatever bootloader) gets installed into the MBR of /dev/sda - hence you can’t boot your OS.
I’d go into rescue mode and rewrite that MBR with grub-install (mount your root drive, chroot into it, etc.)
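from the rescue system that’s roughly something like this (just a sketch - the device/partition names here are assumptions, adjust them to your layout):

```
mount /dev/sdc1 /mnt                 # whichever partition holds your root fs
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys
chroot /mnt /bin/bash
grub-install /dev/sda                # put the bootloader where the BIOS actually looks
update-grub
```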
maybe as an alternative, instead of commenting out sda/sdb in installimage, you could configure them there already. you should be able to put your root onto sdc anyway.
I do have a server with additional SSDs as well but luckily they appear as sda and sdb - so no fancy stuff needed at all
I’ve also seen the need on some servers to enter BIOS and change the order of the disks there.
Can’t remember where it was but BIOS only had the first two disks visible and I had to swap one for the third disk.
Then it could boot and install on that disk. OS found all disks after the boot.
for those interested, installing grub as proposed did the trick. it just came with the hurdle that those 3TB HDDs need GPT, and therefore you have to have a small BIOS boot partition where grub can embed itself…
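in case someone else runs into it, creating that BIOS boot partition looked roughly like this (sketch only - the device name is just an example, and mklabel wipes the existing partition table):

```
parted -s /dev/sda mklabel gpt                  # new GPT label (destroys existing partitions!)
parted -s /dev/sda mkpart biosboot 1MiB 11MiB   # tiny ~10MB partition for grub to embed into
parted -s /dev/sda set 1 bios_grub on           # flag it as a BIOS boot partition
grub-install /dev/sda                           # grub can now embed its core image there
```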
@mikho is also right, changing the disk/boot order in BIOS would be an option. sadly with hetzner you have no easy way to achieve that, as they have no IPMI - you need to ask for a LARA KVM etc.
From my experience with the above, if you just drop a support ticket they will go and make the change for you, saving you the full KVM process. But the GRUB workaround also works!
@Falzo, would you mind outlining for Linux noobs what to do? I have an NVMe drive and then 2x 4TB drives, with the NVMe drive listed first. I’ve tried commenting out/not commenting out the 2x 4TB drives during the installimage script process but haven’t been successful at all. All I want is the OS installed on the NVMe drive and the 2x HDDs in RAID0, but I can’t get it to boot even though the installimage script says everything worked perfectly.
You’re going to need to install GRUB to whatever your primary/NVMe drive is and chain through to your choice. I suggest making a 200MB /boot and / on it; the system should then be able to chain accordingly, going by the last time I did something similar (about two years ago).
Afraid I can’t give a step-by-step on this because I’m sure minor things have been altered since then.
As the others mentioned, if you are still confused/not comfortable with doing this… you can likely just ask Hetzner to do it. Most unmanaged providers will do custom partitioning on the house once or so per billing cycle.
so is your NVMe listed as /dev/sda, or as something else?
have you tried using no RAID at all during the installimage setup - instead of commenting out the 4TBs…
if it is sda, you could ideally set swraid 0 (for no raid) and declare the partitions as you need them, which should then only use the nvme drive anyway.
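the relevant bits of the installimage config could then look roughly like this (just a sketch - filesystems and sizes are examples, adjust to your needs):

```
DRIVE1 /dev/sda        # only the NVMe; leave the 4TB disks commented out
SWRAID 0               # no software raid for the OS install
PART /boot ext3 512M
PART swap  swap 4G
PART /     ext4 all
```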
ideally the system starts with that and you can create your sw-raid for the 4TBs afterwards easily.
if the nvme is not sda or the system does not come up properly, I’d comment out the 4TBs and do the install through installimage on the nvme only anyway
after it finishes, boot into rescue mode and create a small (~10MB) partition on each 4TB disk, with its type set to “BIOS boot” (via fdisk).
figure out which partition on the nvme is your root partition and mount it somewhere, e.g. /mnt (and mount the boot partition into it, if separate, e.g. /mnt/boot)
bind-mount /dev, /proc and /sys into that
now chroot into that system (chroot /mnt /bin/bash)
use grub-install to write the bootloader onto each disk (grub-install /dev/sdX).
run update-grub at the end. reboot, and hopefully it starts correctly.
now create your sw-raid from the remaining space on the 4TBs with mdadm, mount it, adjust fstab, etc. - roughly as in the sketch below.
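put together, and assuming the nvme ended up as /dev/nvme0n1 and the 4TBs as /dev/sda and /dev/sdb (pure assumption - check with lsblk first), the whole thing is roughly:

```
# 1) tiny BIOS boot partition on each 4TB disk (GPT), so grub can embed itself
parted -s /dev/sda mklabel gpt
parted -s /dev/sda mkpart biosboot 1MiB 11MiB
parted -s /dev/sda set 1 bios_grub on
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart biosboot 1MiB 11MiB
parted -s /dev/sdb set 1 bios_grub on
# 2) second partition on each disk for the raid later on
parted -s /dev/sda mkpart raid 11MiB 100%
parted -s /dev/sdb mkpart raid 11MiB 100%

# 3) mount the installed system and chroot into it
mount /dev/nvme0n1p3 /mnt            # root partition from the installimage run
mount /dev/nvme0n1p2 /mnt/boot       # only if /boot is separate
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys
chroot /mnt /bin/bash
# --- now inside the chroot ---

# 4) write grub to every disk the BIOS might boot from, then regenerate the config
grub-install /dev/sda
grub-install /dev/sdb
grub-install /dev/nvme0n1
update-grub
exit

# 5) after a successful reboot: RAID0 over the second partitions of the 4TB disks
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb2
mkfs.ext4 /dev/md0
mkdir -p /storage && mount /dev/md0 /storage
# then add it to /etc/fstab and update mdadm.conf (e.g. mdadm --detail --scan >> /etc/mdadm/mdadm.conf)
```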
Just my two cents about boot ordering: I recently had to set up again an auction box whose layout was like this:
sda & sdb: SSD
sdc & sdd: HDD
sda & sdb were supposed to be the RAID1 lvmcache for the main partitions hosted on sdc & sdd (which were in RAID1 as well): the server was supposed to boot from either sdc or sdd. sda & sdb weren’t supposed to host any other partition.
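For reference, the cache part was set up roughly along these lines (only a sketch - the md names and sizes here are made up, not the exact commands I ran):

```
# /dev/md0 = RAID1 over sdc+sdd (HDD), /dev/md1 = RAID1 over sda+sdb (SSD) - assumed names
pvcreate /dev/md0 /dev/md1
vgcreate vg0 /dev/md0 /dev/md1
lvcreate -n data -L 3T vg0 /dev/md0                       # main LV on the HDD mirror
lvcreate --type cache-pool -n cache -L 400G vg0 /dev/md1  # cache pool on the SSD mirror
lvconvert --type cache --cachepool vg0/cache vg0/data     # attach the cache to the data LV
```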
Opened a ticket; 30 minutes later I got a Lantronix KVM and configured the server to boot from sdc first, sdd second. I then did grub-install on sdd as well.
I did the whole rescue mode + chroot rollercoaster as well on different boxes; there are some quirks if you’re using LVM on top of LUKS on top of RAID that I’m actually too sober to detail (make sure, whichever distro you picked, that the initramfs regeneration in the chroot picks up the required drivers and modules if you ever happen to touch that after the first installation steps; it’s even funnier if the chroot’d distro is not Debian nor a Debian derivative).
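On Debian and friends that boils down to something like this inside the chroot (a sketch; the grep pattern is just an example of what to look for):

```
update-initramfs -u -k all             # regenerate the initramfs for all installed kernels
# sanity check that the raid/crypt/lvm bits actually made it into the image
lsinitramfs /boot/initrd.img-* | grep -E 'mdadm|dm-|crypt|lvm'
```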
that’s not funny, but a serious issue with the operator. debian thx
other than that I totally agree that it might be easier to just rearrange the boot order in BIOS via KVM (as has been suggested earlier in this thread), especially in more complex use cases…
@Falzo - thank you, that worked! I didn’t set up the boot partitions on the 4TB disks and wasn’t mounting /mnt/boot. So I was getting nowhere pretty fast last night.
Hi!
I know this thread is old but could anyone help me figure this out?
I have the same issue: new server, and I want to use my SSD as the OS boot drive.
Hetzner told me that I need to install the bootloader on all drives, but I can’t seem to find out how I’m supposed to do that.
I’ve googled for a few hours and tried many commands, but nothing worked.