| To: | Martin Steigerwald <Martin@xxxxxxxxxxxx> |
|---|---|
| Subject: | Re: xfs_growfs / planned resize / performance impact |
| From: | Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx> |
| Date: | Sun, 05 Aug 2012 07:34:58 -0500 |
| Cc: | xfs@xxxxxxxxxxx, Eric Sandeen <sandeen@xxxxxxxxxxx>, Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx> |
| In-reply-to: | <201208051303.09337.Martin@xxxxxxxxxxxx> |
| References: | <5017E426.2040709@xxxxxxxxxxxx> <501B4D7E.1000303@xxxxxxxxxxx> <501B6B04.2090002@xxxxxxxxxxxx> (sfid-20120803_140801_426027_0A33D973) <201208051303.09337.Martin@xxxxxxxxxxxx> |
| Reply-to: | stan@xxxxxxxxxxxxxxxxx |
| User-agent: | Mozilla/5.0 (Windows NT 5.1; rv:14.0) Gecko/20120713 Thunderbird/14.0 |
On 8/5/2012 6:03 AM, Martin Steigerwald wrote:

> Well the default was 16 AGs for volumes < 2 TiB AFAIR. And it has been
> reduced to 4 for, as I remember, exactly performance reasons. Too many
> AGs on a single device can incur too much parallelism. That at least is
> what I understood back then.

For striped md/RAID or LVM volumes, mkfs.xfs will create 16 AGs by default because it reads the configuration and finds a striped volume. The theory here is that more AGs offer better performance in the average case on a striped volume.

With hardware RAID, a single drive, or any storage configuration for which mkfs.xfs is unable to query the parameters, mkfs.xfs creates 4 AGs by default. The 4 AG default has been with us for a very long time. It was never reduced.

-- 
Stan
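A minimal way to see which AG count mkfs.xfs would pick on a given box is its dry-run mode, which prints the geometry without creating anything; the device names and mount point below are only placeholders:

```
# Dry run (-N): print the filesystem parameters mkfs.xfs would use, create nothing.

# Striped md/RAID or LVM volume: geometry is visible, so 16 AGs by default.
mkfs.xfs -N /dev/md0        # placeholder device
#   meta-data=/dev/md0   ...   agcount=16, agsize=...

# Hardware RAID LUN or single disk: no geometry to query, so 4 AGs by default.
mkfs.xfs -N /dev/sdb        # placeholder device
#   meta-data=/dev/sdb   ...   agcount=4, agsize=...

# AG count of an existing, mounted filesystem:
xfs_info /srv/data | grep agcount   # placeholder mount point

# The default can be overridden explicitly at mkfs time:
#   mkfs.xfs -d agcount=8 /dev/sdb
```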