Feedback on Co-Location Enquiry


I found a fairly cheap co-location deal here locally that made me think about buying a server and having it deployed in the data center. The server would replace a dedi and a few VPSes I use for hosting websites right now.

The main reason for the switch would be to get some kick-ass performance for around the same price per month as I pay now. The second reason would be to have the sites hosted in the country where I live.

Budget for the server and hard drives would be around maximum €1300.
What I’m looking for is a 1U server with 4 × 250 or 500 GB NVMe drives in RAID 10.
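For anyone sanity-checking the sizing: RAID 10 mirrors pairs of drives and stripes across the pairs, so usable space is half the raw total. A quick back-of-the-envelope sketch (drive sizes from the post above):

```python
# RAID 10 stripes across mirrored pairs, so usable capacity is half the raw total.

def raid10_usable_gb(drive_count: int, drive_size_gb: int) -> int:
    """Usable capacity of a RAID 10 array; tolerates one failure per mirror pair."""
    assert drive_count % 2 == 0, "RAID 10 needs an even number of drives"
    return (drive_count // 2) * drive_size_gb

print(raid10_usable_gb(4, 500))  # 4 x 500 GB -> 1000 GB usable
print(raid10_usable_gb(4, 250))  # 4 x 250 GB -> 500 GB usable
```

So with 4 × 500 GB you end up with roughly 1 TB of usable space.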

I found a PowerEdge R440 with an Intel Xeon Silver 4110 (8 cores) and 16 GB RAM for around €850. That server looks to be compatible with adding NVMe drives AFAIK, and it also leaves me a bit of budget to buy the NVMe drives and the BOSS S-1 controller cards, which seem to be what I need to be able to add 4 NVMe drives.

Would that be a sane way to go, or are there better alternatives for my budget, do you think?

You are correct EXCEPT good luck getting 4 flipping sockets in a 1U chassis. :confused:

You mean it will be hard to fit 4 NVMe drives in there? It seems like the BOSS S-1 takes two NVMe drives per controller board, and that the server has 2 x PCI-Express slots.

I’m getting a bit confused though. Specs say “Front Bays: Up to 10 x 2.5” SAS/SATA (HDD/SSD) with up to 4 NVMe”. Would that be native NVMe support without PCI-Express boards?

I have only had one 2U server at home for some testing, so it feels like I’m in deep water right now :slight_smile:

I thought you meant processors. Anyway, you shouldn’t have a problem finding room for 4 NVMe drives, because in your example the specs mean that you can fit…
10 × 2.5″ drives AND 4 NVMes

Though if you’re concerned about being able to hot-swap them, I would check whether the NVMes would be installed internally or whether you can access them from the bay.


No, they don’t say anything about U.2 drives, which would plug into the front bays if it had an NVMe backplane. The config you mentioned would have 10 × 2.5″ slots up front, and then if you have both risers and BOSS cards you could get 4 M.2 NVMe drives installed internally.

From googling, it looks like you can do a single full-height riser or 2 low-profile risers, and you’d need the dual-riser config to fit 4 × M.2.

Honestly, unless you really need the NVMe performance I’d consider going with SATA SSDs as primary storage. They keep getting cheaper, and you could step up to a 6-, 8-, or 10-SSD RAID 10 if you need the throughput and IOPS.
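To put rough numbers on the “step up to more SSDs” idea, here’s an optimistic scaling sketch. The per-drive figure is an assumption, not a benchmark (~550 MB/s sequential read, which is around the SATA interface ceiling for a decent SSD):

```python
# Optimistic RAID 10 scaling model (per-drive figure is an assumption, not a benchmark).
# Reads can be striped across all drives; writes must land on both halves of each mirror.

PER_DRIVE_MBPS = 550  # assumed sequential read for one SATA SSD

def raid10_throughput_mbps(drive_count: int) -> tuple[int, int]:
    """Return (best-case read MB/s, best-case write MB/s) for a RAID 10 array."""
    read = drive_count * PER_DRIVE_MBPS
    write = (drive_count // 2) * PER_DRIVE_MBPS
    return read, write

for n in (6, 8, 10):
    print(n, raid10_throughput_mbps(n))
```

Even under this idealized model, an 8- or 10-drive SATA RAID 10 can get into multi-GB/s territory on reads, which is why SATA SSDs are often “enough”.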


Awesome. Thank you very much for your input.

I’ll do some more research before ordering. Or is there maybe a more reasonable 1U server available with native support for 4 × NVMe that I should be aiming at instead of the R440?

I could probably live with SSD instead of NVMe, I guess, but I was just blown away by the NexusBytes NVMe RAID 10 VPS I bought last week. I would love to have a whole dedicated server with similar performance.

Remember, your VPS is not the only one getting that performance; usually at least 9+ others are sharing it as well. So right now you’re likely getting RAID 10 SSD performance, or even less, in terms of how many IOPS you can use at a given time under sustained usage.

So at the end of the day, UNLESS your combined usage really needs it, don’t waste time, money, and effort chasing the specs the “strongest” upstream provider is using. You’re not going to host 9+ VPSes that are also eating away at the IOPS, right?
