To: Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: Re: [PATCH 3/3] XFS: Print error when unable to allocate inodes or out of free inodes.
From: Raghavendra Prabhu <raghu.prabhu13@xxxxxxxxx>
Date: Wed, 26 Sep 2012 11:44:02 +0530
Cc: xfs@xxxxxxxxxxx, Ben Myers <bpm@xxxxxxx>, Alex Elder <elder@xxxxxxxxxx>
In-reply-to: <20120921071644.GA20650@Archie>
References: <cover.1347396641.git.rprabhu@xxxxxxxxxxx> <93d9b37ce9ad720e14e2f9311e623a8e3e3139f5.1347396641.git.rprabhu@xxxxxxxxxxx> <20120911232144.GH11511@dastard> <20120921071644.GA20650@Archie>
Hi,

On Fri, Sep 21, 2012 at 12:46 PM, Raghavendra D Prabhu
<raghu.prabhu13@xxxxxxxxx> wrote:
> Hi,
>
>
>>
>>> +                               goto out_spc;
>>> +                       }
>>> +                       return 0;
>>>                 }
>>>         }
>>>
>>> +out_spc:
>>> +       *inop = NULLFSINO;
>>> +       return ENOSPC;
>>>  out_alloc:
>>>         *IO_agbp = NULL;
>>>         return xfs_dialloc_ag(tp, agbp, parent, inop);
>>
>>
>> Default behaviour on a loop break is to allocate inodes, not return
>> ENOSPC.
>>
>> BTW, there's no need to cc LKML for XFS specific patches. LKML is
>> noisy enough as it is without unnecessary cross-posts....
>>
>> Cheers,
>>
>> Dave.
>> --
>> Dave Chinner
>> david@xxxxxxxxxxxxx
>>
>

Please ignore the previous version; I am resending the patch with the fixes mentioned above.
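
Just for reference, the behaviour I intend to keep is what Dave described: a break out of the AG loop should still fall through to the allocation path (out_alloc / xfs_dialloc_ag()), and ENOSPC should only be returned from an explicit out-of-space branch. Below is a rough, self-contained sketch of that control flow, not the actual xfs_dialloc() code; the helpers (ag_has_free_inodes, fs_is_full, alloc_inode_chunk) are made up for illustration.

        /*
         * Standalone sketch of the intended control flow: falling out of
         * the AG loop continues to the allocation path, and ENOSPC only
         * comes from the explicit out-of-space branch.
         */
        #include <errno.h>
        #include <stdio.h>

        #define NUM_AGS   4
        #define NULLFSINO (-1L)

        static int ag_has_free_inodes(int agno)  { (void)agno; return 0; }
        static int fs_is_full(void)              { return 0; }
        static int alloc_inode_chunk(long *inop) { *inop = 128; return 0; }

        static int dialloc_sketch(long *inop)
        {
                int agno;

                for (agno = 0; agno < NUM_AGS; agno++) {
                        if (ag_has_free_inodes(agno))
                                goto out_alloc;   /* free inodes cached: use them */
                        if (fs_is_full())
                                goto out_spc;     /* explicit out-of-space case */
                }
                /*
                 * Loop break: no AG had free inodes cached.  The default is
                 * to fall through and allocate a new inode chunk, not to
                 * return ENOSPC -- which is what the previous hunk got wrong.
                 */
        out_alloc:
                return alloc_inode_chunk(inop);

        out_spc:
                *inop = NULLFSINO;
                return ENOSPC;
        }

        int main(void)
        {
                long ino = NULLFSINO;
                int error = dialloc_sketch(&ino);

                printf("error=%d ino=%ld\n", error, ino);
                return 0;
        }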
