Vultr runs off of 3½-inch floppy disks in a hybrid RAID0 setup.
Source: They won’t deny it
Ahahaha, I consider RAID0 to be a manly trait.
jkjk
MORE POOOORN PLZ.
This is a sentence, for real.
All I can say is please hurry…
SOOOO SEXYYYYY!!!
Sorry everyone!
Should have more this week.
Had some delays with some purchases, so it's put me back over a week.
Next round will have storage node assembling, racking, and likely getting all of the InfiniBand wired. I’d like to get some initial benchmarks going too.
We’ll migrate our old storage customers into the cluster to see how it rolls and give me time to upgrade the current slices with ConnectX-3 cards.
Francisco
A few new things arriving and getting things moved over to the DC in the next couple days.
NVMe cache drives:
InfiniBand switch for storage fabric:
Mellanox ConnectX-3 cards (for the InfiniBand fabric):
Now, to explain some of the choices.
The cache only takes “hot spot” data, meaning it’ll mostly serve workloads like torrents, databases, and things like that, where the same data gets read over and over again, justifying the move to NVMe. Past that, everything else stays on the platters, so spending 3x more wouldn’t do much anyway given these NVMe drives are rated for 8PB+ of endurance each.
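To put that endurance figure in perspective, here’s a back-of-envelope calc. The 8PB rating is from the post above; the daily write rate is a made-up illustration, not a measurement:

```shell
# Rough endurance math. 8PB+ rated endurance is from the post;
# the 4TB/day cache churn is a hypothetical worst case.
endurance_tb=8000      # 8PB expressed in TB
daily_writes_tb=4      # assumed sustained daily writes to the cache
echo "$((endurance_tb / daily_writes_tb)) days"   # prints: 2000 days
```

Even under that kind of hammering, the drive outlasts 5 years, which is why paying 3x for higher-endurance models wouldn’t buy much here.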
The InfiniBand switches are a few generations back, but given that the total throughput of a single node is around 4GB/sec~5GB/sec, the 40Gbit option is not only cheap but plenty of capacity. Worst case, I can double up the ports and get an 80Gbit link to the fabric on each node.
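A quick sanity check on that capacity claim, using the simple gigabit-to-gigabyte conversion (divide by 8) and ignoring link encoding and protocol overhead:

```shell
# One 40Gbit port vs. a node that streams ~4-5 GB/sec off its platters.
port_gbit=40
echo "one port:  $((port_gbit / 8)) GB/sec"       # prints: one port:  5 GB/sec
echo "doubled:   $((2 * port_gbit / 8)) GB/sec"   # prints: doubled:   10 GB/sec
```

So a single 40Gbit link roughly matches what one node can push, and doubling the ports leaves comfortable headroom.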
As for the cards, Mellanox makes excellent cards and I got these for borderline a steal. They support 40Gbit & 56Gbit per port, as well as some other nifty things that I’ll keep to myself. Fantastic power draw at only 5W/card keeps that side of things in check.
If we were offering pure SSD/NVMe block storage we’d have to go with something like 100Gbit to keep up with things, but we’d have to charge prices similar to DO, Linode, Vultr, etc, and I just don’t think there’s enough demand in our side of the market to justify that kind of investment.
Karen & I will be heading to the DC in the coming couple days, so the next photos should be of the chassis being populated & racked.
Francisco
You sir, are nothing but god-like.
Went to the DC today to start assembling gear. Nothing too crazy; I was caught up on a few things, but here’s what I snagged.
We destroyed Fiberhub’s back wall building area, as is tradition.
A completed node. Some of the 8087<>8087 cables are too short, so I can’t wire everything at full speed, but I can at least get badblocks running for the time being while I wait on those to arrive. Each backplane (24 drives hang off the first, 12 off the second) gets its own HBA. No hardware RAID in any of this.
The chassis are Supermicro 847s, dual-sided nodes with 36 drives total. The motherboard tray slides out the back to let you mount some OS drives in the cavity under it.
In the daytime we’ll head back, start racking things up, and get a mass badblocks run going.
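For anyone curious what a “mass badblocks” pass looks like, here’s a rough sketch. The device names are placeholders, not the actual node layout, and the leading echo keeps it a dry run; drop it, background each scan with &, and add a final wait to run for real:

```shell
# Dry-run sketch of a destructive surface scan across several drives.
# /dev/sdb../dev/sdd are placeholder device names.
for dev in /dev/sdb /dev/sdc /dev/sdd; do
  # -w: destructive write test, -s: show progress, -v: verbose,
  # -b 4096: test block size, -o: write found bad blocks to a per-drive log
  echo badblocks -wsv -b 4096 -o "badblocks-$(basename "$dev").log" "$dev"
done
```

Keeping one log per drive makes it easy to spot which disks to RMA once the pass finishes.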
Francisco
Damn those nodes are huge. I have a feeling I couldn’t push one over, much less pick one up.
The servers w/ motherboard are around 70 pounds.
With drives it’d be an easy 100+ pounds.
The weight isn’t the hardest part about it; it’s just really clunky to move about.
I’ll snag pictures of them once they’re racked later today.
Francisco
Non-issue for the stallion.
Damn those look pretty nice in that rack.
Agreed, Francisco sure does have a great rack!
Cheap storage is beautiful, so excited for these toppings.
Been taking a while to get this online since we’ve had some bad RAID cards delivered, as well as just waiting on badblocks to run.
One internet please.
Francisco
Out of curiosity, how many drives have been RMA’d so far? With that volume purchased, I’d expect at least a few needed to be returned?