Who | What | When |
---|---|---|
marclaporte | joined #clearfoundation | [02:14] |
.................................................................................................. (idle for 8h7mn) | ||
MAvL | joined #clearfoundation | [10:21] |
......... (idle for 40mn) | ||
Morning guys!
the sun is shining after days of rain | [11:01] | |
........................................ (idle for 3h17mn) | ||
marclaporte | joined #clearfoundation | [14:19] |
..... (idle for 22mn) | ||
Benjamin4 | joined #clearfoundation | [14:41] |
........... (idle for 52mn) | ||
MAvl | joined #clearfoundation
left #clearfoundation | [15:33] |
MAvL | joined #clearfoundation
joined #clearfoundation | [15:36] |
MarcelvanLeeuwen | joined #clearfoundation | [15:41] |
MAvL | marclaporte, I have spoken with erry of freenode.org
he is waiting for some feedback, and he apologizes for the delay, so let's hope for the best | [15:43] |
.... (idle for 19mn) | ||
marclaporte | tks | [16:02] |
MAvL | no problem! | [16:07] |
Hey Benjamin4! What do you think of my new home for ClearOS?
http://img842.imageshack.us/img842/9106/9sa1.jpg It's the server on the top; it's not ready yet, just the case | [16:15] | |
Benjamin4 | haha...nice. | [16:16] |
MAvL | I have to order some hardware
not sure which motherboard I'm going to use. Suggestions? Should I go i7 or Xeon? Also, this server has to run at least 5 years without hardware changes | [16:17] |
Benjamin4 | I'm not a good resource for hardware recommendations.
You should post to the forums. Some guys, like Darryl, are great resources for stuff like that. | [16:32] |
Lacrocivious | MAvL: I've been building systems since 1985, and the mainboards which have had the fewest field failures for my clients, by far, have been from Asus. For what that's worth. As for i7 vs Xeon, I'd probably choose i7 but either is fine | [16:37] |
...... (idle for 28mn) | ||
MAvL | joined #clearfoundation
Lacrocivious, thanks for the info! My desktop also has an Asus motherboard, the Asus Z87-Pro c2. I built this system last summer and it's really stable | [17:05] |
Lacrocivious | For longevity, it is more important to choose a midline to middle-high-end board. The least expensive models will be built more to price points than for longevity, using less expensive components. The very high end will be bleeding edge gamer boards for the most part, and quite volatile. The middle line boards are best for business use where longevity is important | [17:10] |
MAvL | I used an i7-4770K
okay, of course I need at least two NICs | [17:10] |
Lacrocivious | At least. More if you want more than one internal subnet | [17:12] |
MAvL | not sure if I'll go the onboard route or buy a separate NIC
I think an Intel NIC is the best solution | [17:12] |
Lacrocivious | Here again, for longevity, consider discrete NIC adapters and do not rely entirely upon on-board NICs. The single most common mainboard failure is the NIC; usually only it gets damaged and the board remains viable
Yes, Intel is the safest bet there, particularly on the WAN side | [17:13] |
MAvL | hmmm, okay
can you suggest an HBA card to connect all those drives? 16 drives in total... | [17:14] |
Lacrocivious | My point is that you might not want to subject your board to the known most common failure (the on-board NIC) because of the chance that whatever surge takes it out also damages other board components | [17:15] |
MAvL | I'm not sure you've seen the photo of the case I uploaded | [17:15] |
Lacrocivious | MAvL: I can't suggest any that aren't true RAID and therefore very expensive (*starting* at $300, for reasons of royalties)
MAvL: I looked at that photo, yes. Noisy bastard to have in your home, isn't it? | [17:16] |
MAvL | okay
haha, yes, but it's going into the garage | [17:16] |
Lacrocivious | Ah
Keep the mice awake | [17:17] |
MAvL | so then it's no problem
haha | [17:17] |
Lacrocivious | You *might* want to take a look at one of the Asrock mainboards that has 10 or 12 SATA ports on it. I don't know about the longevity, but they're decent enough and have improved their quality quite a bit over the past three years or so | [17:18] |
MAvL | I want to do software RAID on that server and maybe, in the future, BTRFS
okay | [17:18] |
Lacrocivious | MAvL: Is this for home use? | [17:19] |
MAvL | yes
:) | [17:19] |
Lacrocivious | Take this for what it's worth, but I actually discourage RAID for home use. I realize that may sound weird. I'll explain | [17:20] |
MAvL | I'm all ears | [17:20] |
Lacrocivious | RAID is designed and intended for Enterprise level computing, which *assumes* full redundancy and backup. Home users never have that
RAID works just fine until you get a unit failure. Whereupon, you discover that the array rebuild probably doesn't work as well as the not-quite-enterprise-level hardware manufacturer claimed it would. If it works at all. Plus, you don't know what data is on what drive, so potentially you lose everything. That's a worst case, but you'd be horrified to know how many times I've seen that happen, particularly when even vendors like Adaptec turn out to have built RAID cards that won't actually rebuild an array. RAID doesn't actually help a home user much anyway, in my opinion. I'd go for JBOD instead, and also avoid LVM for the same reasons; you don't know what data is on what physical drive. And unless you have full redundant hardware, once you have a failure, the headache is massive | [17:20] |
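As a rough back-of-the-envelope sketch of the trade-off being described here, the figures below are illustrative only, assuming a hypothetical pool of sixteen 4 TB drives (not MAvL's actual build):

```python
# Illustrative arithmetic only: usable capacity and worst-case data loss
# on a single drive failure, for 16 x 4 TB drives under RAID 0, RAID 5,
# and plain JBOD. Assumed numbers, not a recommendation.

DRIVES = 16
SIZE_TB = 4

configs = {
    # (usable capacity in TB, data lost if a single drive fails)
    "RAID 0": (DRIVES * SIZE_TB, DRIVES * SIZE_TB),   # striped: one failure loses the array
    "RAID 5": ((DRIVES - 1) * SIZE_TB, 0),            # survives one failure (until the rebuild)
    "JBOD":   (DRIVES * SIZE_TB, SIZE_TB),            # only the failed drive's data is gone
}

for name, (usable, lost) in configs.items():
    print(f"{name:7s} usable = {usable:3d} TB   worst-case loss on one failure = {lost} TB")
```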
MAvL | okay
So you suggest just using every drive separately | [17:24] |
Lacrocivious | I fully realize this is a contrarian view, and software-oriented (as differentiated for this argument from hardware-oriented) people will laugh derisively at me for 'opposing' the convenience and 'out-of-sight, out-of-mind' advantages of RAID of any stripe (pun!) ;-)
The other consideration is, of course, to only use RAID-certified or RAID-capable hard drives in an array. If you buy end-user-grade drives for an array, you are going to suffer later | [17:25] |
MAvL | I'm using the WD-Red's | [17:27] |
Lacrocivious | Western Digital Red or Black will do, or any of their HGST line (which is Enterprise grade); Seagate calls their Enterprise models names that usually have 'enterprise' in the description | [17:28] |
MAvL | okay | [17:28] |
Lacrocivious | I encourage you to get more than my own opinion on these issues, then make your purchase and architecture decisions based on what feels best for you | [17:30] |
marclaporte | joined #clearfoundation | [17:31] |
MAvL | I appreciate your input :)
but JBOD is also interesting. You only lose the drive that fails and all the other data is still there... if I'm correct | [17:31] |
Lacrocivious | Yep. It is less efficient, because you will have some empty space on each physical drive rather than all your empty space in one usable chunk across all drives | [17:34] |
MAvL | but is JBOD a feature of an HBA card? | [17:34] |
Lacrocivious | But you always know Where Your Stuff Is
Just about any HBA -- even the fake 'RAID' chipsets on mainboards -- also supports JBOD. You can treat your JBOD pile sort of like a physical manifestation of your directory tree, one drive for Applications and Development, one or more for TV, one or more for Movies, one or more for your Massive Pr0... er, *Linux Distro* Collection, etc ;-) | [17:35] |
MAvL | okay
I've never used JBOD because I thought it was a sort of RAID 0: one drive fails, everything is gone | [17:44] |
Lacrocivious | No, JBOD is Just a Bunch Of Disks. You can combine them in volumes that span physical drives, e.g., LVM, but by itself JBOD is more the absence of RAID. It simply means that each physical device is separately and distinctly addressable | [17:47] |
MAvL | yes, I did some googling :)
nice! | [17:48] |
Lacrocivious | With WinOS, separately addressable drives become problematic after you run out of drive letters, but with *nix you don't have that problem | [17:48] |
MAvL | Linux rulezzz | [17:49] |
Lacrocivious | Every time I think of 'drive letters' I break out in hives ;-) | [17:49] |
MAvL | :)
I have to do some testing with JBOD. Interesting solution... | [17:50] |
Lacrocivious | Not much to test, really. Mainly you need to think carefully about how to assign 'purposes' and therefore relevant directories to each physical drive, rather than simply dumping everything into one HD until it fills up, then moving to the next one
Same kind of consideration you'd give up front to designing a database; everything you plan for on the front end is one less problem to deal with once you start using it | [17:52] |
MAvL | If I have a directory 'movies' I can expand this directory over a few drives
and can I grow this volume? | [17:56] |
Lacrocivious | MAvL: Of course, using lvm or some other spanning method. But if you are going to do that, you lose the advantage of knowing what data is on which drive, and you might as well use RAID | [18:10] |
MAvL | Okay but with JBOD you can't grow? | [18:12] |
Lacrocivious | Instead, consider for example a top-level directory named /pub/ under which you create mount points for each physical drive, named movies01, movies02...
MAvL: JBOD in and of itself has nothing to do with whether you can grow volumes | [18:12] |
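To make that /pub/ layout concrete, here is a minimal sketch, assuming hypothetical device names and an ext4 filesystem; nothing here is taken from MAvL's actual machine:

```python
# A minimal sketch of the layout Lacrocivious describes: one mount point per
# physical drive under a single top-level directory. Device names and the
# drive-to-purpose mapping are hypothetical examples. Creating directories
# under /pub normally requires root.
import os

PUB = "/pub"
drives = {
    "movies01": "/dev/sdb1",   # hypothetical: first movies drive
    "movies02": "/dev/sdc1",   # hypothetical: second movies drive
    "tv01":     "/dev/sdd1",
    "distros":  "/dev/sde1",
}

# Create the mount point directories.
for name in drives:
    os.makedirs(os.path.join(PUB, name), exist_ok=True)

# Print matching /etc/fstab-style lines (ext4 assumed) for reference.
for name, dev in drives.items():
    print(f"{dev}  {os.path.join(PUB, name)}  ext4  defaults  0 2")
```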
MAvL | hmmm... I'm lost
lol | [18:13] |
Lacrocivious | My point isn't so much that you should use JBOD. Rather, it is that if you consider each physical HD as a separate and distinct volume, you don't have any trouble figuring out what data you need to back-up-right-now! when unit failure is imminent | [18:14] |
MAvL | okay | [18:14] |
Lacrocivious | e.g., when S.M.A.R.T. warns of failure signs
One way to look at 'JBOD' is merely as a convenient way of referring to drives that are not part of a RAID array | [18:14] |
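A rough sketch of that "back up right now" trigger might look like the following, assuming smartmontools is installed, the script runs as root, and example device names; smartctl -H reports each drive's overall SMART self-assessment:

```python
# Poll each drive's SMART health with smartctl (from smartmontools) and flag
# anything whose overall self-assessment is not PASSED. Device names are
# examples only; adjust to the machine at hand.
import subprocess

DEVICES = ["/dev/sda", "/dev/sdb", "/dev/sdc"]  # example device list

for dev in DEVICES:
    result = subprocess.run(
        ["smartctl", "-H", dev],
        capture_output=True, text=True,
    )
    healthy = "PASSED" in result.stdout
    status = "OK" if healthy else "WARNING - back this drive up now"
    print(f"{dev}: {status}")
```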
MAvL | When you create a JBOD volume you can't grow it at a later stage
you can do this with LVM or RAID, but then you never know where everything is | [18:17] |
Lacrocivious | MAvL: You are conflating JBOD with LVM. JBOD is not LVM. LVM is not JBOD. RAID is not LVM. LVM is not RAID. | [18:18] |
MAvL | and when you lose a drive with LVM or RAID 0 you lose the whole volume | [18:19] |
Lacrocivious | MAvL: Wikipedia is your friend ;-) | [18:19] |
MAvL | not with JBOD
haha | [18:19] |
Lacrocivious | MAvL: You have that risk, yes, because you don't have control over where the spanning technology decides to put files within the boundaries of its volume | [18:20] |
MAvL | So I create a JBOD volume with several drives
if I lose one drive, all the data on the other drives is intact | [18:21] |
Lacrocivious | Yes. Except there is no such thing in that case as a 'JBOD volume'
Any system with multiple drives that aren't spanned with LVM or the like, or part of a RAID array, is using JBOD. That same system can use LVM to span those multiple physical drives and still be called JBOD. You need to get clear on what exactly JBOD is, and what it is not | [18:22] |
MAvL | true | [18:24] |
Lacrocivious | MAvL: Your comments suggest this epiphany has thus far eluded you ;-) | [18:24] |
Benjamin4 | joined #clearfoundation | [18:32] |
MAvL | So JBOD is just a bunch of disks which lets me combine different HDDs into a single large unit | [18:32] |
.... (idle for 15mn) | ||
joined #clearfoundation | [18:47] | |
marclaporte | joined #clearfoundation | [18:47] |
MarcelvanLeeuwen | joined #clearfoundation | [18:59] |
..... (idle for 21mn) | ||
MAvL | I think I like the idea of a top-level directory with mount points to each physical HDD...
no hassle, easy to maintain, easy to expand. If a drive crashes you just lose that one. With software RAID 5 you lose the whole array when losing two; I've read several times that this happens during rebuilding... | [19:20] |
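The rebuild failures MAvL mentions can be illustrated with some hedged, back-of-the-envelope arithmetic: the chance of hitting at least one unrecoverable read error (URE) while re-reading every surviving drive during a RAID 5 rebuild. The numbers below are assumed spec-sheet values, not measurements of his WD Reds:

```python
# Illustrative arithmetic behind the "RAID 5 dies during the rebuild" stories.
# Assumed values: consumer-class URE rate of 1 per 1e14 bits, 4 TB drives,
# a 16-drive RAID 5 rebuilding after a single failure.
ure_rate = 1e-14            # probability of a URE per bit read (assumed)
drive_bytes = 4e12          # 4 TB drive (assumed)
surviving_drives = 15       # drives that must be read in full to rebuild

bits_to_read = surviving_drives * drive_bytes * 8
p_at_least_one_ure = 1 - (1 - ure_rate) ** bits_to_read

print(f"Bits read during rebuild: {bits_to_read:.2e}")
print(f"Chance of at least one URE: {p_at_least_one_ure:.1%}")
```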
......... (idle for 44mn) | ||
marclaporte | joined #clearfoundation | [20:09] |
................ (idle for 1h19mn) | ||
Benjamin4 | joined #clearfoundation | [21:28] |