MAvL: joined #clearfoundation
Morning guys!
the sun is shining after days of rain
marclaporte: joined #clearfoundation
Benjamin4: joined #clearfoundation
MAvl: joined #clearfoundation
left #clearfoundation
MAvL: joined #clearfoundation
joined #clearfoundation
MarcelvanLeeuwen: joined #clearfoundation
MAvL: marclaporte, I have spoken with erry of freenode.org
he is waiting for some feedback
and he is apologizing for the delay
so let's hope for the best
marclaporte: tks
MAvL: no problem!
Hey Benjamin4 ! What do you think of my new home for ClearOS?
http://img842.imageshack.us/img842/9106/9sa1.jpg
It's the server on the top
not ready yet, just the case
Benjamin4: haha... nice.
MAvL: I have to order some hardware
not sure which motherboard I'm going to use
suggestions?
Should I go i7 or Xeon...
Also this server has to run at least 5 years without hardware changes
Benjamin4: I'm not a good resource for hardware recommendations.
You should post to the forums.
Some guys, like Darryl, are great resources for stuff like that.
Lacrocivious: MAvL: I've been building systems since 1985, and the mainboards which have had the fewest field failures for my clients, by far, have been from Asus. For what that's worth. As for i7 vs Xeon, I'd probably choose i7, but either is fine
MAvL: joined #clearfoundation
Lacrocivious, thanks for the info!
my desktop also has an Asus motherboard
the Asus Z87-Pro c2
I built this system last summer and it's really stable
Lacrocivious: For longevity, it is more important to choose a midline to middle-high-end board. The least expensive models will be built more to price points than for longevity, using less expensive components. The very high end will be bleeding-edge gamer boards for the most part, and quite volatile. The middle-line boards are best for business use where longevity is important
MAvL: I used an i7-4770K
okay
of course I need at least two NICs
Lacrocivious: At least. More if you want more than one internal subnet
MAvL: not sure if I'll go the onboard route or buy a separate NIC
I think an Intel NIC is the best solution
Lacrocivious: Here again, for longevity, consider discrete NIC adapters and do not rely entirely upon on-board NICs. The single most common mainboard failure is the NIC; usually only it gets damaged and the board remains viable
Yes, Intel is the safest bet there, particularly on the WAN side
MAvL: hmmm, okay
Can you suggest an HBA card to connect all those drives? 16 drives in total...
Lacrocivious: My point is that you might not want to subject your board to the known most common failure (the on-board NIC) because of the chance that whatever surge takes it out also damages other board components
MAvL: I'm not sure you've seen the photo of the case I uploaded
Lacrocivious: MAvL: I can't suggest any that aren't true RAID and therefore very expensive (*starting* at $300, for reasons of royalties)
MAvL: I looked at that photo, yes
Noisy bastard to have in your home, isn't it?
MAvL: okay
haha, yes
but it's going to the garage
Lacrocivious: Ah
Keep the mice awake
MAvL: so then it's no problem
haha
Lacrocivious: You *might* want to take a look at one of the ASRock mainboards that has 10 or 12 SATA ports on it. I don't know about the longevity, but they're decent enough and have improved their quality quite a bit over the past three years or so
MAvL: I want to do software RAID on that server and maybe, in the future, BTRFS
okay
Lacrocivious: MAvL: Is this for home use?
MAvL: yes
:)
Lacrocivious: Take this for what it's worth, but I actually discourage RAID for home use. I realize that may sound weird. I'll explain
MAvL: I'm all ears
Lacrocivious: RAID is designed and intended for Enterprise-level computing, which *assumes* full redundancy and backup. Home users never have that
RAID works just fine until you get a unit failure
Whereupon, you discover that the array rebuild probably doesn't work as well as the not-quite-enterprise-level hardware manufacturer claimed it would. If it works at all
Plus you don't know what data is on what drive
So potentially you lose everything
That's a worst case, but you'd be horrified to know how many times I've seen that happen, particularly when even vendors like Adaptec turn out to have built RAID cards that won't actually rebuild an array
RAID doesn't actually help a home user much anyway, in my opinion. I'd go for JBOD instead, and also avoid LVM for the same reasons; you don't know what data is on what physical drive
And unless you have fully redundant hardware, once you have a failure, the headache is massive
MAvL: okay
So you suggest just using every drive separately
Lacrocivious: I fully realize this is a contrarian view, and software-oriented (as differentiated for this argument from hardware-oriented) people will laugh derisively at me for 'opposing' the convenience and 'out-of-sight, out-of-mind' advantages of RAID of any stripe (pun!) ;-)
The other consideration is, of course, to only use RAID-certified or RAID-capable hard drives in an array. If you buy end-user-grade drives for an array, you are going to suffer later
MAvL: I'm using the WD Reds
Lacrocivious: Western Digital Red or Black will do, or any of their HGST line (which is Enterprise grade); Seagate calls their Enterprise models names that usually have 'enterprise' in the description
MAvL: okay
Lacrocivious: I encourage you to get more than my own opinion on these issues, then make your purchase and architecture decisions based on what feels best for you
marclaporte: joined #clearfoundation
MAvL: I appreciate your input :)
but JBOD is also interesting. You only lose the drive which fails and all other data is still there...
if I'm correct
Lacrocivious: Yep. It is less efficient, because you will have some empty space on each physical drive rather than all your empty space in one usable chunk across all drives
MAvL: but is JBOD a feature of an HBA card?
Lacrocivious: But you always know Where Your Stuff Is
Just about any HBA -- even the fake 'RAID' chipsets on mainboards -- also supports JBOD
You can treat your JBOD pile sort of like a physical manifestation of your directory tree: one drive for Applications and Development, one or more for TV, one or more for Movies, one or more for your Massive Pr0... er, *Linux Distro* Collection, etc ;-)
MAvL: okay
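A rough sketch of that per-drive layout idea; the device names and mount points below are purely illustrative, not taken from the conversation:

    /dev/sdb1  ->  /srv/apps      (applications and development)
    /dev/sdc1  ->  /srv/tv        (TV)
    /dev/sdd1  ->  /srv/movies01  (movies, first drive)
    /dev/sde1  ->  /srv/movies02  (movies, overflow drive)
    /dev/sdf1  ->  /srv/distros   (the distro collection)

Each physical drive gets one clearly named purpose, so when a drive fails you know exactly what lived on it.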
I've never used JBOD
Because I thought it was a sort of RAID 0
one drive fails and everything is gone
Lacrocivious: No, JBOD is Just a Bunch Of Disks. You can combine them in volumes that span physical drives, e.g., LVM, but by itself JBOD is more the absence of RAID. It simply means that each physical device is separately and distinctly addressable
MAvL: yes, I did some googling :)
nice!
Lacrocivious: With WinOS, separately addressable drives become problematic after you run out of drive letters, but with *nix you don't have that problem
MAvL: Linux rulezzz
Lacrocivious: Every time I think of 'drive letters' I break out in hives ;-)
MAvL: :)
I have to do some testing with JBOD
interesting solution...
Lacrocivious: Not much to test, really. Mainly you need to think carefully about how to assign 'purposes' and therefore relevant directories to each physical drive, rather than simply dumping everything into one HD until it fills up, then moving to the next one
Same kind of consideration you'd give up front to designing a database; everything you plan for on the front end is one less problem to deal with once you start using it
MAvL: If I have a directory movies, I can expand this directory over a few drives
and can I grow this volume?
Lacrocivious: MAvL: Of course, using LVM or some other spanning method. But if you are going to do that, you lose the advantage of knowing what data is on which drive, and you might as well use RAID
MAvL: Okay, but with JBOD you can't grow?
Lacrocivious: Instead, consider for example a top-level directory named /pub/ under which you create mount points for each physical drive, named movies01, movies02...
MAvL: JBOD in and of itself has nothing to do with whether you can grow volumes
MAvL: hmmm... I'm lost
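A minimal /etc/fstab sketch of that /pub/ layout, assuming hypothetical device names and an ext4 filesystem (neither is specified in the conversation):

    # one mount point per physical drive under /pub/
    /dev/sdd1   /pub/movies01   ext4   defaults,noatime   0 2
    /dev/sde1   /pub/movies02   ext4   defaults,noatime   0 2

When movies01 fills up, you add another drive and another mount point; nothing already on movies01 has to move.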
lol
Lacrocivious: My point isn't so much that you should use JBOD. Rather, it is that if you consider each physical HD as a separate and distinct volume, you don't have any trouble figuring out what data you need to back-up-right-now! when unit failure is imminent
MAvL: okay
Lacrocivious: e.g., when S.M.A.R.T. warns of failure signs
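A quick way to watch for those S.M.A.R.T. warning signs is smartctl from the smartmontools package; the device name here is just an example:

    smartctl -H /dev/sdd        # overall health verdict (PASSED / FAILED)
    smartctl -A /dev/sdd        # attribute table; watch Reallocated_Sector_Ct and Current_Pending_Sector

A rising reallocated or pending sector count is the usual cue to back that drive up right away.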
One way to look at 'JBOD' is merely as a convenient way of referring to drives that are not part of a RAID array
MAvL: When you create a JBOD volume, you can't grow this at a later stage
you can do this with LVM or RAID
but then you never know where everything is
Lacrocivious: MAvL: You are conflating JBOD with LVM. JBOD is not LVM. LVM is not JBOD. RAID is not LVM. LVM is not RAID.
MAvL: and when you lose a drive with LVM or RAID 0 you lose the whole volume
Lacrocivious: MAvL: Wikipedia is your friend ;-)
MAvL: not with JBOD
haha
Lacrocivious: MAvL: You have that risk, yes, because you don't have control over where the spanning technology decides to put files within the boundaries of its volume
MAvL: So I create a JBOD volume with several drives
if I lose one drive, all data on the other drives is intact
Lacrocivious: Yes. Except there is no such thing in that case as a 'JBOD volume'
Any system with multiple drives that aren't spanned with LVM or the like, or part of a RAID array, is using JBOD.
That same system can use LVM to span those multiple physical drives and still be called JBOD. You need to get clear on what exactly JBOD is, and what it is not
MAvL: true
Lacrocivious: MAvL: Your comments suggest this epiphany has thus far eluded you ;-)
Benjamin4: joined #clearfoundation
MAvL: So JBOD is just a bunch of disks which lets me combine different HDDs into a single large unit
joined #clearfoundation
marclaporte: joined #clearfoundation
MarcelvanLeeuwen: joined #clearfoundation
MAvL: I think I like the idea of a top-level directory with mount points to each physical HDD...
no hassle
easy to maintain
easy to expand
if a drive crashes you just lose that one
with software RAID 5 you lose the whole array when losing two drives
I've read several times that this happens during rebuilding...
marclaporte: joined #clearfoundation
Benjamin4: joined #clearfoundation
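For anyone who does go the software RAID 5 route discussed above, mdadm is the standard Linux tool; this is only a sketch with made-up device names, not something recommended in the conversation:

    # create a 4-disk RAID 5 array (hypothetical devices)
    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

    # check array health and rebuild progress
    cat /proc/mdstat
    mdadm --detail /dev/md0

Checking /proc/mdstat during a rebuild shows whether a second drive is throwing errors, which is exactly the failure mode that loses the whole array.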