[02:14] marclaporte joined #clearfoundation
[10:21] MAvL joined #clearfoundation
[11:01] MAvL Morning guys!
[11:02] MAvL the sun is shining after days of rain
[14:19] marclaporte joined #clearfoundation
[14:41] Benjamin4 joined #clearfoundation
[15:33] MAvl joined #clearfoundation
[15:33] MAvl left #clearfoundation
[15:36] MAvL joined #clearfoundation
[15:38] MAvL joined #clearfoundation
[15:41] MarcelvanLeeuwen joined #clearfoundation
[15:43] MAvL marclaporte, I have spoken with erry of freenode.org
[15:43] MAvL he is waiting for some feedback
[15:43] MAvL and he apologizes for the delay
[15:43] MAvL so let's hope for the best
[16:02] marclaporte tks
[16:07] MAvL no problem!
[16:15] MAvL Hey Benjamin4! What do you think of my new home for ClearOS?
[16:15] MAvL http://img842.imageshack.us/img842/9106/9sa1.jpg
[16:16] MAvL It's the server on the top
[16:16] MAvL not ready yet, just the case
[16:16] Benjamin4 haha...nice.
[16:17] MAvL I have to order some hardware
[16:17] MAvL not sure which motherboard I'm going to use
[16:18] MAvL suggestions?
[16:18] MAvL Should I go i7 or Xeon...
[16:19] MAvL Also, this server has to run at least 5 years without hardware changes
[16:32] Benjamin4 I'm not a good resource for hardware recommendations.
[16:32] Benjamin4 You should post to the forums.
[16:32] Benjamin4 Some guys, like Darryl, are great resources for stuff like that.
[16:37] Lacrocivious MAvL: I've been building systems since 1985, and the mainboards which have had the fewest field failures for my clients, by far, have been from Asus. For what that's worth. As for i7 vs Xeon, I'd probably choose i7, but either is fine
[17:05] MAvL joined #clearfoundation
[17:08] MAvL Lacrocivious, thanks for the info!
[17:08] MAvL my desktop also has an Asus motherboard
[17:09] MAvL the Asus Z87-Pro C2
[17:09] MAvL I built this system last summer and it's really stable
[17:10] Lacrocivious For longevity, it is more important to choose a midline to middle-high-end board. The least expensive models will be built more to price points than for longevity, using less expensive components. The very high end will be bleeding-edge gamer boards for the most part, and quite volatile. The middle-line boards are best for business use where longevity is important
[17:10] MAvL I used an i7-4770K
[17:11] MAvL okay
[17:11] MAvL of course I need at least two NICs
[17:12] Lacrocivious At least. More if you want more than one internal subnet
[17:12] MAvL not sure if I'll go the onboard route or buy a separate NIC
[17:13] MAvL I think an Intel NIC is the best solution
[17:13] Lacrocivious Here again, for longevity, consider discrete NIC adapters and do not rely entirely upon on-board NICs. The single most common mainboard failure is the NIC; usually only it gets damaged and the board remains viable
[17:14] Lacrocivious Yes, Intel is the safest bet there, particularly on the WAN side
[17:14] MAvL hmmm, okay
[17:15] MAvL can you suggest an HBA card to connect all those drives? 16 drives in total...
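(An aside on the NIC discussion above: a minimal way to see which Ethernet controllers a board actually exposes, on-board and discrete, assuming a stock Linux install with pciutils and ethtool available; the interface name eth0 is illustrative, not from the log.)

    # List every Ethernet controller the system sees, on-board and discrete
    lspci | grep -i ethernet

    # Show which driver and PCI address back a given interface
    ethtool -i eth0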
[17:15] Lacrocivious My point is that you might not want to subject your board to the known most common failure (the on-board NIC), because of the chance that whatever surge takes it out also damages other board components
[17:15] MAvL I'm not sure you've seen the photo of the case I uploaded
[17:16] Lacrocivious MAvL: I can't suggest any that aren't true RAID and therefore very expensive (*starting* at $300, for reasons of royalties)
[17:16] Lacrocivious MAvL: I looked at that photo, yes
[17:16] Lacrocivious Noisy bastard to have in your home, isn't it?
[17:16] MAvL okay
[17:16] MAvL haha, yes
[17:17] MAvL but it's going in the garage
[17:17] Lacrocivious Ah
[17:17] Lacrocivious Keep the mice awake
[17:17] MAvL so then it's no problem
[17:17] MAvL haha
[17:18] Lacrocivious You *might* want to take a look at one of the ASRock mainboards that has 10 or 12 SATA ports on it. I don't know about the longevity, but they're decent enough and have improved their quality quite a bit over the past three years or so
[17:18] MAvL I want to do software RAID on that server and maybe in the future BTRFS
[17:19] MAvL okay
[17:19] Lacrocivious MAvL: Is this for home use?
[17:19] MAvL yes
[17:19] MAvL :)
[17:20] Lacrocivious Take this for what it's worth, but I actually discourage RAID for home use. I realize that may sound weird. I'll explain
[17:20] MAvL I'm all ears
[17:20] Lacrocivious RAID is designed and intended for Enterprise-level computing, which *assumes* full redundancy and backup. Home users never have that
[17:20] Lacrocivious RAID works just fine until you get a unit failure
[17:21] Lacrocivious Whereupon you discover that the array rebuild probably doesn't work as well as the not-quite-enterprise-level hardware manufacturer claimed it would. If it works at all
[17:21] Lacrocivious Plus you don't know what data is on what drive
[17:21] Lacrocivious So potentially you lose everything
[17:22] Lacrocivious That's a worst case, but you'd be horrified to know how many times I've seen that happen, particularly when even vendors like Adaptec turn out to have built RAID cards that won't actually rebuild an array
[17:23] Lacrocivious RAID doesn't actually help a home user much anyway, in my opinion. I'd go for JBOD instead, and also avoid LVM for the same reasons; you don't know what data is on what physical drive
[17:23] Lacrocivious And unless you have fully redundant hardware, once you have a failure, the headache is massive
[17:24] MAvL okay
[17:25] MAvL So you suggest just using every drive separately
[17:25] Lacrocivious I fully realize this is a contrarian view, and software-oriented (as differentiated for this argument from hardware-oriented) people will laugh derisively at me for 'opposing' the convenience and 'out-of-sight, out-of-mind' advantages of RAID of any stripe (pun!) ;-)
[17:26] Lacrocivious The other consideration is, of course, to only use RAID-certified or RAID-capable hard drives in an array. If you buy end-user-grade drives for an array, you are going to suffer later
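(On the drive-grade point above: a minimal sketch of how to check what a drive reports about itself, assuming smartmontools is installed; /dev/sda is an illustrative device name.)

    # Print model, serial, and firmware so you can confirm the drive is a
    # NAS/RAID-rated model (e.g. a WD Red) rather than a desktop-grade unit
    smartctl -i /dev/sda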
[17:27] MAvL I'm using the WD Reds
[17:28] Lacrocivious Western Digital Red or Black will do, or any of their HGST line (which is Enterprise grade); Seagate's Enterprise models usually have 'enterprise' in the description
[17:28] MAvL okay
[17:30] Lacrocivious I encourage you to get more than my own opinion on these issues, then make your purchase and architecture decisions based on what feels best for you
[17:31] marclaporte joined #clearfoundation
[17:31] MAvL I appreciate your input :)
[17:34] MAvL but JBOD is also interesting. You only lose the drive which fails and all other data is still there...
[17:34] MAvL if I'm correct
[17:34] Lacrocivious Yep. It is less efficient, because you will have some empty space on each physical drive rather than all your empty space in one usable chunk across all drives
[17:34] MAvL but is JBOD a feature of an HBA card?
[17:35] Lacrocivious But you always know Where Your Stuff Is
[17:36] Lacrocivious Just about any HBA -- even the fake 'RAID' chipsets on mainboards -- also supports JBOD
[17:38] Lacrocivious You can treat your JBOD pile sort of like a physical manifestation of your directory tree: one drive for Applications and Development, one or more for TV, one or more for Movies, one or more for your Massive Pr0... er, *Linux Distro* Collection, etc ;-)
[17:44] MAvL okay
[17:44] MAvL I've never used JBOD
[17:45] MAvL because I thought it was a sort of RAID 0
[17:45] MAvL one drive fails, everything is gone
[17:47] Lacrocivious No, JBOD is Just a Bunch Of Disks. You can combine them in volumes that span physical drives, e.g., LVM, but by itself JBOD is more the absence of RAID. It simply means that each physical device is separately and distinctly addressable
[17:48] MAvL yes, I did some googling :)
[17:48] MAvL nice!
[17:48] Lacrocivious With WinOS, separately addressable drives become problematic after you run out of drive letters, but with *nix you don't have that problem
[17:49] MAvL Linux rulezzz
[17:49] Lacrocivious Every time I think of 'drive letters' I break out in hives ;-)
[17:50] MAvL :)
[17:51] MAvL I have to do some testing with JBOD
[17:51] MAvL interesting solution...
[17:52] Lacrocivious Not much to test, really. Mainly you need to think carefully about how to assign 'purposes' and therefore relevant directories to each physical drive, rather than simply dumping everything into one HD until it fills up, then moving to the next one
[17:53] Lacrocivious Same kind of consideration you'd give up front to designing a database; everything you plan for on the front end is one less problem to deal with once you start using it
[17:56] MAvL If I have a directory movies, can I expand this directory over a few drives?
[17:56] MAvL and can I grow this volume?
[18:10] Lacrocivious MAvL: Of course, using LVM or some other spanning method. But if you are going to do that, you lose the advantage of knowing what data is on which drive, and you might as well use RAID
[18:12] MAvL Okay, but with JBOD you can't grow?
[18:12] Lacrocivious Instead, consider for example a top-level directory named /pub/ under which you create mount points for each physical drive, named movies01, movies02...
[18:13] Lacrocivious MAvL: JBOD in and of itself has nothing to do with whether you can grow volumes
[18:13] MAvL hmmm... I'm lost
[18:13] MAvL lol
[18:14] Lacrocivious My point isn't so much that you should use JBOD. Rather, it is that if you consider each physical HD as a separate and distinct volume, you don't have any trouble figuring out what data you need to back-up-right-now! when unit failure is imminent
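(A minimal sketch of the /pub/ layout suggested at 18:12, assuming two data drives; the device names and the ext4 filesystem are illustrative, not from the log.)

    # One mount point per physical drive under a common top-level directory,
    # so each drive stays separately and distinctly addressable
    mkdir -p /pub/movies01 /pub/movies02

    # Illustrative /etc/fstab entries:
    #   /dev/sdb1  /pub/movies01  ext4  defaults  0 2
    #   /dev/sdc1  /pub/movies02  ext4  defaults  0 2

    mount /pub/movies01
    mount /pub/movies02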
[18:14] MAvL okay
[18:14] Lacrocivious e.g., when S.M.A.R.T. warns of failure signs
[18:16] Lacrocivious One way to look at 'JBOD' is merely as a convenient way of referring to drives that are not part of a RAID array
[18:17] MAvL When you create a JBOD volume, you can't grow this at a later stage
[18:18] MAvL you can do this with LVM or RAID
[18:18] MAvL but then you never know where everything is
[18:18] Lacrocivious MAvL: You are conflating JBOD with LVM. JBOD is not LVM. LVM is not JBOD. RAID is not LVM. LVM is not RAID.
[18:19] MAvL and when you lose a drive with LVM or RAID 0 you lose the whole volume
[18:19] Lacrocivious MAvL: Wikipedia is your friend ;-)
[18:19] MAvL not with JBOD
[18:20] MAvL haha
[18:20] Lacrocivious MAvL: You have that risk, yes, because you don't have control over where the spanning technology decides to put files within the boundaries of its volume
[18:21] MAvL So I create a JBOD volume with several drives
[18:22] MAvL I lose one drive, all other data on the other drives is intact
[18:22] Lacrocivious Yes. Except there is no such thing in that case as a 'JBOD volume'
[18:23] Lacrocivious Any system with multiple drives that aren't spanned with LVM or the like, or part of a RAID array, is using JBOD.
[18:24] Lacrocivious That same system can use LVM to span those multiple physical drives and still be called JBOD. You need to get clear on what exactly JBOD is, and what it is not
[18:24] MAvL true
[18:24] Lacrocivious MAvL: Your comments suggest this epiphany has thus far eluded you ;-)
[18:32] Benjamin4 joined #clearfoundation
[18:32] MAvL So JBOD is just a bunch of disks, which lets me combine different HDDs into a single large unit
[18:47] MAvL joined #clearfoundation
[18:47] marclaporte joined #clearfoundation
[18:59] MarcelvanLeeuwen joined #clearfoundation
[19:20] MAvL I think I like the idea of a top-level directory with mount points to each physical HDD...
[19:20] MAvL no hassle
[19:21] MAvL easy to maintain
[19:21] MAvL easy to expand
[19:24] MAvL if a drive crashes you just lose that one
[19:24] MAvL with software RAID 5 you lose the whole array when losing two
[19:25] MAvL I've read several times that this happens during rebuilding...
[20:09] marclaporte joined #clearfoundation
[21:28] Benjamin4 joined #clearfoundation
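(On the S.M.A.R.T. failure warnings mentioned at 18:14: a minimal health check you could run by hand or from cron, assuming smartmontools is installed; /dev/sdb is an illustrative device name.)

    # Quick pass/fail health verdict for one drive
    smartctl -H /dev/sdb

    # Full attribute dump; rising Reallocated_Sector_Ct or
    # Current_Pending_Sector counts are the classic back-up-right-now signs
    smartctl -A /dev/sdb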