Good Evening All,

I have a question regarding CentOS 6 server partitioning. I know there are a lot of different ways to partition a system, and opinions differ depending on the use of the server. I currently have a quad-core Intel system with 8GB of RAM and a single 1TB hard drive. In the past, as a FreeBSD user, I always made separate partitions for the root filesystem (/), swap, /tmp, /usr, /var, and /home. In the partitioning manager I would specify 10GB for root, 2GB or so for swap, 20GB for /var, 50GB for /usr, 10GB for /tmp, and allocate all remaining space to /home as my primary data volume (assuming all my applications are installed and run from my home directories). I was recently told that this is an old style of partitioning and is not used in modern Linux distributions. So, more specifically, here are my questions to the list:

1) What is a good partition map/scheme for a server whose primary purpose is a LAMP server, DNS (BIND), and possibly game servers?

2) The CentOS docs recommend 10GB of swap for 8GB of RAM (1x the amount of physical memory plus 2GB added; reference: http://www.centos.org/docs/5/html/Installation_Guide-en-US/s1-diskpartitioning-x86.html). I was told this is ridiculous and will severely slow down the system. Is this true? If so, what is a good swap size for 8GB of RAM? MIT recommends making MULTIPLE 2GB swap spaces totaling 10GB if this is the case. Please help!

3) Is ext4 better or worse than XFS for what I am planning to use the system for?

Thanks in advance for all your help, guys.

Kind Regards,
Jonathan Vomacka
At Wed, 31 Aug 2011 21:21:25 -0400, Jonathan Vomacka wrote:

> 2) The CentOS docs recommend 10GB of swap for 8GB of RAM (1x the amount
> of physical memory plus 2GB added; reference:
> http://www.centos.org/docs/5/html/Installation_Guide-en-US/s1-diskpartitioning-x86.html).
> I was told this is ridiculous and will severely slow down the system.
> Is this true? If so, what is a good swap size for 8GB of RAM? MIT
> recommends making MULTIPLE 2GB swap spaces totaling 10GB if this is
> the case.

Given that machines come with multiple gigs of RAM now, swap is pretty much not needed (and if it is, the solution is to stuff more memory in the box or look for memory leaks). Usually 1-2 gig of swap is enough to cover 'emergencies'. If you are hitting this limit, something is wrong somewhere (this assumes you have enough physical RAM).

The 1X + 2GB rule cited in the page above is excessive (where did that come from?). Short of memory leaks or memory-intensive activities, you should never use much swap space -- some little-used system daemons might get swapped out early on, but that should have little impact on system performance.

The idea of MULTIPLE 2GB swap spaces is also dumb, and I believe it relates to older kernels (2.4?) which could not handle swap partitions larger than 2GB (that may have been a 32-bit limitation as well).

--
Robert Heller             -- 978-544-6933 / heller at deepsoft.com
Deepwoods Software        -- http://www.deepsoft.com/
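As a rough sanity check of whether a box is actually leaning on swap (a minimal sketch; exact output formats vary a bit between releases):

  free -m          # total vs. used swap and RAM, in MB
  swapon -s        # each swap device with its size, usage, and priority
  vmstat 5         # watch the si/so columns; sustained non-zero values mean real paging

If si/so stay at or near zero under normal load, the exact swap size matters far less than the numbers being argued about here.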
On 08/31/2011 09:21 PM, Jonathan Vomacka wrote:

> 1) What is a good partition map/scheme for a server whose primary
> purpose is a LAMP server, DNS (BIND), and possibly game servers?

If you have a currently running system serving the same purpose and running the same OS version, consult the partitioning scheme there, adjusting the sizes up or down depending upon how much is actually used.

The size of /home is going to depend upon how many users will have accounts on the system and how much space you're going to allow per user. I'd double the space for a typical user, then multiply that by the number of users. Set up a quota system with hard and soft limits to eliminate surprises.

And why not use LVM? Then you can adjust the sizes of the volumes as you need to. If this is to be an enterprise system on which you'll be doing live backups, you may also want to set up a snapshot LV (a rough sketch of that is at the end of this post).

> 2) The CentOS docs recommend 10GB of swap for 8GB of RAM [...]. I was
> told this is ridiculous and will severely slow down the system. Is this
> true? If so, what is a good swap size for 8GB of RAM? MIT recommends
> making MULTIPLE 2GB swap spaces totaling 10GB if this is the case.

In the absence of actual evidence to the contrary, I'd go with the recommendations in the docs regarding swap. As for swap 'severely slowing down the system', I think that's bunk. Well, theoretically any empty disk space is going to slow down disk reads a bit, but then so would occupied disk space that isn't used; neither of these is a 'severe' problem. So she must be speaking of the algorithm used to determine what gets written to swap and when. To have an intelligent opinion on the usefulness and efficiency of swap would require a detailed understanding of that algorithm alongside an understanding of the system as a whole. Did she do her master's thesis or doctoral dissertation on this topic?

Okay, maybe swap is totally unnecessary and the developers are keeping everyone in the dark about the details just to create and keep jobs for themselves. But the code is open source, so that sounds too much like an April Fool's Day posting to slashdot.

As for MIT's suggestion of multiple swap spaces, yes, that is a good idea. I did this on some machines years back and noticed a considerable speed increase.
In most instances, though, all swap spaces should have the same priority.

> 3) Is ext4 better or worse than XFS for what I am planning to use the
> system for?

Which features of each might you plan on making use of?

If you're truly interested in performance, run some metrics before any users get on the system and periodically thereafter. This will give you a baseline for evaluating the changes you'll inevitably make and the ones that just happen.

I predict a long and contentious thread on these topics (when do we not have one?), along with scattered earthquakes and sunspots. I'll be missing the impact of most of those as I'm bringing this machine down for 'maintenance'. Just thought everyone would be interested in knowing that. :)
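To make the snapshot-LV idea above concrete, here is a minimal sketch. The volume group and LV names (vg0, lv_home, snap_home) and the sizes are made up for illustration; substitute your own, and make sure the VG has free extents left over for the snapshot:

  # create a 5G copy-on-write snapshot of the home LV
  lvcreate --size 5G --snapshot --name snap_home /dev/vg0/lv_home

  # mount it read-only and back it up at your leisure
  # (add -o nouuid as well if the filesystem is XFS)
  mkdir -p /mnt/snap_home
  mount -o ro /dev/vg0/snap_home /mnt/snap_home
  tar czf /backup/home-$(date +%Y%m%d).tar.gz -C /mnt/snap_home .

  # then tear it down
  umount /mnt/snap_home
  lvremove -f /dev/vg0/snap_home

The snapshot only has to be big enough to absorb the writes that land on lv_home while the backup runs, not the full size of the filesystem.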
On Thursday, September 01, 2011 03:21:25 AM, Jonathan Vomacka wrote:

> In the partitioning manager I would specify 10GB for root, 2GB or so
> for swap, 20GB for /var, 50GB for /usr, 10GB for /tmp, and allocate all
> remaining space to /home as my primary data volume

I don't think those figures are bad. Then again, the CentOS default (/boot + /) with your /home added may be more flexible. After that, if I split it further, I'd make a standalone /var and maybe /tmp. Splitting /usr from / seems like more trouble than it's worth to me. Also, I'd use LVM for everything but /boot and leave some unused space in the VG that I could use for lvextend + resize2fs later. Just my take on it.

> 2) The CentOS docs recommend 10GB of swap for 8GB of RAM [...]. I was
> told this is ridiculous and will severely slow down the system. Is
> this true?

Disclaimer: the following is based on CentOS 5 and I'm not 100% sure whether all or any of it applies to the CentOS 6 kernel.

* Some swap (as opposed to no swap) seems to increase system stability under OOM conditions (depending on a lot of factors).

* You'll need at least as much swap as the max stack size you intend to set (ulimit -s). Usually this is very low, but in some instances you need a significant percentage of your RAM size. An alternative is to set the max stack size to unlimited when needed (which _does not_, thankfully, require an infinite amount of swap...).

Based on this I'd say just add some swap (like a gig or two) unless you know you want a high max stack size. If you left space in your VG you can always add another chunk of swap later.

> If so, what is a good swap size for 8GB of RAM? MIT recommends making
> MULTIPLE 2GB swap spaces

This shouldn't really make much difference. Long ago swap size was limited to 2GB, but I don't even remember whether that was per swap space or in total. Either way, you can have 5x 2GB or 1x 10GB. Linux will balance its usage over all available swap spaces, so if you have several independent drives then put swap on each of them for maximum performance (although my feeling is that if you need swap performance, you're probably doing something wrong...).

> 3) Is ext4 better or worse than XFS for what I am planning to use the
> system for?

Much has been said here already. I'd stay with the distribution default unless I had specific reasons not to. If you need >16TB you have to use XFS. If you're on 32-bit you have to use ext*.
If you're trying to decide based on performance, then try it out on your hardware (where "it" is preferably something close to your actual workload).

/Peter
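For a quick first-order feel before committing, something like the following gives a crude sequential-throughput number (this is only a sketch; the device, mount point, and file name are placeholders, and it is nothing like a real workload):

  # raw-ish sequential read speed of the underlying disk
  hdparm -t /dev/sda

  # sequential write through the filesystem, forcing data to disk at the end
  dd if=/dev/zero of=/mnt/test/ddtest.bin bs=1M count=4096 conv=fdatasync
  rm -f /mnt/test/ddtest.bin

For anything resembling a real answer you'd want to run your actual application mix, or at least a tool like bonnie++ or fio, against a filesystem created the same way you intend to use it.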
From: Jonathan Vomacka <juvix88 at gmail.com>

> I have a question regarding CentOS 6 server partitioning.

http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Installation_Guide/s2-diskpartrecommend-x86.html
On Wednesday, August 31, 2011 09:21:25 PM, Jonathan Vomacka wrote:

> I was recently told that this is an old style of partitioning and is
> not used in modern Linux distributions.

The only thing old-style I saw in your list was the separate /usr partition. I like having separate /var, /tmp, and /home: /var because lots of data resides there that can fill partitions quickly; logs, for instance. I have built machines where /var/log was separate, specifically for that reason.

> 1) What is a good partition map/scheme for a server whose primary
> purpose is a LAMP server, DNS (BIND), and possibly game servers?

Splitting filesystems into separate partitions makes sense primarily, in my opinion and experience, for seven basic reasons:

1.) I/O load balancing across multiple spindles and/or controllers;

2.) Disk space isolation in case of filesystem 'overflow' (that is, you don't want your mail spool in /var/spool/mail overflowing and corrupting an online database in, say, /var/lib/pgsql/data/base). While quotas can help with this when two trees are not fully isolated, filesystems in different partitions/logical volumes have hard overflow isolation;

3.) In the case of really large data stores with dynamic data, isolating the impact of filesystem corruption;

4.) The ability to stagger fscks between boots (fsck time doesn't seem to increase linearly with filesystem size);

5.) I/O 'tiering' (like EMC's FAST), where you can allocate your fastest storage to the most rapidly changing data and slower storage to data that doesn't change frequently;

6.) Putting things into separate filesystems forces the admin to really think through and design the system, taking all the requirements into account, instead of just throwing it all together and then wondering why performance is suboptimal;

7.) Filesystems can be mounted with options specific to their use cases, and built with filesystem technology appropriate to the use case (noexec, for instance, on filesystems that have no business holding executables; enabling/disabling journalling and other options as appropriate; and using XFS, ext4, etc. as appropriate, just to mention a few things).

> 2) The CentOS docs recommend 10GB of swap for 8GB of RAM (1x the amount
> of physical memory plus 2GB added).

If you put swap on LVM and give yourself room to grow, you can increase the swap space size at will should you find you need to.

Larger RAM (and the virtual RAM embodied by swap) does not always make things faster. I have a private e-mail from an admin of a large website showing severe MySQL performance issues that were reduced by making the RAM size smaller (it turned out to be a cache mismanagement issue caused by poorly written queries).

Consider swap to be a safety buffer: the Linux kernel is by default configured to overcommit memory, and swap exists to prevent the oom-killer from reaping critical processes in that situation. Tuning the swap size, the 'swappiness' of the kernel, and the overcommit policy should be done together; the default settings produce the default recommendation of 'memory size plus 2GB' that was given for CentOS 5. Not too long ago, the recommendation was for swap to be twice the memory size.

Multiple swap partitions can improve performance if those partitions are on different spindles; however, this reduces reliability, too.
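For reference, the knobs mentioned above live in sysctl; the values below are only illustrative, not a recommendation for this particular box:

  # see the current settings
  sysctl vm.swappiness vm.overcommit_memory vm.overcommit_ratio

  # try a lower swappiness at runtime (0-100; lower = prefer dropping
  # page cache over swapping out process pages)
  sysctl -w vm.swappiness=10

  # make it persistent by adding lines like these to /etc/sysctl.conf:
  #   vm.swappiness = 10
  #   vm.overcommit_memory = 0    (0 is the default heuristic overcommit)

As the paragraph above says, these interact with swap size, so change them together and measure rather than cargo-culting a single value.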
I don't have any experience with benchmarking the performance of multiple 2GB swap spaces; I'd find the results of such benchmarks to be useful information.

> 3) Is ext4 better or worse than XFS for what I am planning to use the
> system for?

That depends; consult some filesystem comparisons (the Wikipedia filesystem comparison article is a good starting place). I've used both, and I still use both. XFS as a filesystem is older and presumably more mature than ext4, but age is not the only indicator of something that will work for you.

One thing to remember is that XFS filesystems cannot currently be reduced in size, only increased. Ext4 can go either way if you realize you made too large a filesystem (see the sketch below). XFS is very fast to create, but repairing it requires absolutely the most RAM of any recovery process I've ever seen. XFS has seen a lot of use in the field, particularly with large SGI boxes (the Altix series, primarily) running Linux (with the requisite 'lots of RAM' needed for repair/recovery).

XFS is currently the only filesystem on which I have successfully made a larger than 16TB filesystem. Don't try that on a 32-bit system (in fact, if you care about data integrity, don't use XFS on a 32-bit system at all, unless you have rebuilt the kernel with 8k stacks). The mkfs.xfs on a greater-than-16TB partition/logical volume will execute successfully on a 32-bit system (the last time I tried it), but as soon as you go over 16TB of data you will no longer be able to mount the filesystem. The wisdom of making a greater-than-16TB filesystem of any type is left as an exercise for the reader....
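A quick sketch of what "grow only" versus "grow or shrink" looks like in practice on top of LVM. The LV names and sizes are made up; XFS can only grow, and only while mounted, while ext4 grows online but must be unmounted to shrink:

  # grow an XFS filesystem after extending its LV (takes the mount point)
  lvextend -L +50G /dev/vg0/lv_data
  xfs_growfs /srv/data

  # grow an ext4 filesystem after extending its LV
  lvextend -L +50G /dev/vg0/lv_home
  resize2fs /dev/vg0/lv_home

  # shrink ext4 (offline): shrink the filesystem first, leaving some slack,
  # then reduce the LV, then let the filesystem grow back to fill it
  umount /home
  e2fsck -f /dev/vg0/lv_home
  resize2fs /dev/vg0/lv_home 78G
  lvreduce -L 80G /dev/vg0/lv_home
  resize2fs /dev/vg0/lv_home
  mount /home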
You've already received some good responses so I won't rehash a lot of what was said. However, here are a few more comments without a lot of backing detail (they should give you enough to google for the details):

1. Despite the Red Hat link someone provided, I think the advice of putting almost everything on the root filesystem is a lot of bunk, at least for servers. The old arguments for separate filesystems still apply. I suspect that the single-filesystem perspective comes from desktop scenarios, especially laptop users and those coming from MS Windows.

2. Putting /boot on its own filesystem and using LVM for everything else is generally a good idea from both the management and snapshot perspectives, as someone previously described. However, be aware that most (if not all) LVM configurations will disable write barriers -- this is probably mostly of interest when you're running a database. You need to put on your combined DBA and sysadmin hat and look at your underlying disks, disk controller, filesystem stack, database, UPS/power-fail monitoring, and budget to see where your balancing point is. Yes, I have databases on LVM on top of RAID on top of SATA; but it's better to know your risks than to have them be a surprise.

3. Pay attention to whether your disks use the old 512-byte sector size or the new 4k sector size (sometimes called Advanced Format), and whether or not your disks lie to the OS about the sector size. The RAID and other MD layers and the filesystem need to know the truth, or you can run into performance and/or lifespan issues.

4. Regarding swap: yes, having it is still a good idea under most circumstances. The old "2 * physical memory" rule no longer applies; follow the sizing guidelines from Red Hat that someone posted. The kernel is smart enough to use it when necessary and avoid it otherwise. Having it can get your server through unusual circumstances without crashing, but you should have enough memory that you're not paging under normal circumstances. See also point #6.

5. Consider encrypting swap. See crypttab(5), including the comments about using /dev/urandom for the key.

6. Putting /tmp on tmpfs is a good idea in that it ensures /tmp gets cleaned out at least when the system reboots. (Running cron jobs to clear it out periodically can cause problems under some circumstances.) This is also a good argument for having swap: with swap available, you can use tmpfs without /tmp tying up a significant amount of physical RAM. Also see the 'tmp' option in crypttab(5). (A rough sketch of this point and point 5 is below.)

7. Under CentOS 5, having less than 2GB for /var could cause problems with updates, especially between minor versions. I've increased my minimum to 4GB under RHEL 6 due to kdump concerns.

Devin
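To make points 5 and 6 concrete, here is roughly what the relevant config lines can look like. The device name and tmpfs size are placeholders; double-check crypttab(5) on your own box, since the supported options have shifted a bit between releases:

  # /etc/crypttab -- swap encrypted with a throwaway random key on every boot;
  # the 'swap' option runs mkswap on the mapped device for you
  swap   /dev/vg0/lv_swap   /dev/urandom   swap

  # /etc/fstab -- use the mapped device for swap, and put /tmp on tmpfs
  # (size= caps how much RAM/swap it can consume)
  /dev/mapper/swap   swap   swap    defaults          0 0
  tmpfs              /tmp   tmpfs   defaults,size=2G  0 0

Note that randomly keyed encrypted swap means you can't hibernate and resume from it, which usually doesn't matter on a server.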