From danny@zadarastorage.com Wed Apr 1 09:09:19 2015
Return-Path:
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on oss.sgi.com
X-Spam-Level:
X-Spam-Status: No, score=0.0 required=5.0 tests=HTML_MESSAGE autolearn=ham
version=3.3.1
X-Original-To: xfs@oss.sgi.com
Delivered-To: xfs@oss.sgi.com
Received: from relay.sgi.com (relay3.corp.sgi.com [198.149.34.15])
by oss.sgi.com (Postfix) with ESMTP id 252537F5A
for ; Wed, 1 Apr 2015 09:09:19 -0500 (CDT)
Received: from cuda.sgi.com (cuda2.sgi.com [192.48.176.25])
by relay3.corp.sgi.com (Postfix) with ESMTP id 76624AC005
for ; Wed, 1 Apr 2015 07:09:15 -0700 (PDT)
X-ASG-Debug-ID: 1427897351-04cbb06cc9295190001-NocioJ
Received: from mail-wg0-f47.google.com (mail-wg0-f47.google.com [74.125.82.47]) by cuda.sgi.com with ESMTP id 7gUPrSsR12aXtRKF (version=TLSv1 cipher=RC4-SHA bits=128 verify=NO) for ; Wed, 01 Apr 2015 07:09:12 -0700 (PDT)
X-Barracuda-Envelope-From: danny@zadarastorage.com
X-Barracuda-Apparent-Source-IP: 74.125.82.47
Received: by wgoe14 with SMTP id e14so54565107wgo.0
for ; Wed, 01 Apr 2015 07:09:11 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
d=1e100.net; s=20130820;
h=x-gm-message-state:mime-version:date:message-id:subject:from:to:cc
:content-type;
bh=V4//W8etbY4ndyi2H6nTSWXmUJ5/B8g0lSlty+fLzuE=;
b=cjMEHZJ98Z1ihddGqyRW4rsKX+gkPaK+1k4XQEYEiKMFvrH8BmPf7X5otHv6aiV4la
JIQJehX4ZucsSKYDGLySq9zKXZqmMNUPKiKcde+fotXLcw9XkuHfgjoLB2UmQPtozJoj
tWcAxTV+QSYXPWSDgBeVk4jVctEJ3m/lKSSz00vL9Q0DnExya2aiHy1H6xwBDqhcTVPg
i82XjYE5ln4aHGf21K7AvTVebuZsuEbUaiZQvKwjzKjvMR2X8Un45tOQRMB+mxsYM4gF
RacsdqxqnHaQQaEAphOLUmBfKmWFSWlOZOjwbbIO5Ay1PNx00G5nm5O9h0LONqRQQ3uJ
Gz+g==
X-Gm-Message-State: ALoCoQk4YHpwi1MgE1GMG22Pc+gglUSxKwrE1nsjSs8o6gwbLJorTmOGMnvAdKBZHTa8FpUirE3P
MIME-Version: 1.0
X-Received: by 10.194.208.229 with SMTP id mh5mr84432468wjc.108.1427897351276;
Wed, 01 Apr 2015 07:09:11 -0700 (PDT)
Received: by 10.28.60.68 with HTTP; Wed, 1 Apr 2015 07:09:11 -0700 (PDT)
Date: Wed, 1 Apr 2015 17:09:11 +0300
Message-ID:
Subject: xfs corruption issue
From: Danny Shavit
X-ASG-Orig-Subj: xfs corruption issue
To: xfs@oss.sgi.com, Dave Chinner
Cc: Alex Lyakas , Lev Vainblat
Content-Type: multipart/alternative; boundary=001a11338f6c661fed0512aa42e9
X-Barracuda-Connect: mail-wg0-f47.google.com[74.125.82.47]
X-Barracuda-Start-Time: 1427897352
X-Barracuda-Encrypted: RC4-SHA
X-Barracuda-URL: http://192.48.176.25:80/cgi-mod/mark.cgi
X-Virus-Scanned: by bsmtpd at sgi.com
X-Barracuda-BRTS-Status: 1
X-Barracuda-Spam-Score: 1.00
X-Barracuda-Spam-Status: No, SCORE=1.00 using per-user scores of TAG_LEVEL=1000.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.7 tests=BSF_SC0_TG232, HTML_MESSAGE
X-Barracuda-Spam-Report: Code version 3.2, rules version 3.2.3.17435
Rule breakdown below
pts rule name description
---- ---------------------- --------------------------------------------------
1.00 BSF_SC0_TG232 BODY: Custom Rule TG232
0.00 HTML_MESSAGE BODY: HTML included in message
--001a11338f6c661fed0512aa42e9
Content-Type: text/plain; charset=UTF-8
Hello Dave,
My name is Danny Shavit and I am with Zadara Storage.
We would appreciate your feedback regarding an xfs corruption and xfs_repair
issue.
We found a corrupted xfs volume on one of our systems. It is around 1 TB in
size and holds about 12 M files.
We ran xfs_repair on the volume, which succeeded after 42 minutes.
We noticed that memory consumption rose to about 7.5 GB.
Since some customers are using only 4 GB (and sometimes even 2 GB), we tried
running "xfs_repair -m 3200" on a 4 GB RAM machine.
However, this time an OOM event occurred while handling AG 26 during
phase 3.
The log of xfs_repair is enclosed below.
We would appreciate your feedback on the amount of memory needed by
xfs_repair in general, and when using the "-m" option specifically.
The xfs metadata dump (taken prior to xfs_repair) can be found here:
https://zadarastorage-public.s3.amazonaws.com/xfs/xfsdump-prod-ebs_2015-03-30_23-00-38.tgz
It is a 1.2 GB file (5.7 GB uncompressed).
We would also appreciate your feedback on the corruption pattern itself.
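[Editor's note, not part of the original mail: xfs_repair's -m option takes an approximate memory limit in megabytes, as in the "-m 3200" run above. One way to pick a conservative value on a given machine is to derive it from physical RAM; the 75% headroom fraction below is an illustrative assumption, not an official xfsprogs recommendation, and /dev/dm-55 is the device from the log.]

```shell
# Sketch: choose an -m limit for xfs_repair as ~75% of physical RAM, in MB.
# The 75% factor is an assumed safety margin for illustration only.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
maxmem_mb=$(( mem_kb * 3 / 4 / 1024 ))
echo "would run: xfs_repair -v -m ${maxmem_mb} /dev/dm-55"
```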
--
Thank you,
Danny Shavit
Zadarastorage
---------- xfs_repair log ----------------
root@vsa-00000428-vc-1:/export/4xfsdump# date; xfs_repair -v /dev/dm-55;
date
Tue Mar 31 02:28:04 PDT 2015
Phase 1 - find and verify superblock...
- block cache size set to 735288 entries
Phase 2 - using internal log
- zero log...
zero_log: head block 1920 tail block 1920
- scan filesystem freespace and inode maps...
agi_freecount 54, counted 55 in ag 7
sb_ifree 947, counted 948
- found root inode chunk
Phase 3 - for each AG...
- scan and clear agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
- agno = 7
- agno = 8
- agno = 9
- agno = 10
- agno = 11
- agno = 12
- agno = 13
- agno = 14
- agno = 15
- agno = 16
- agno = 17
- agno = 18
- agno = 19
- agno = 20
- agno = 21
bad . entry in directory inode 5691013154, was 5691013170: correcting
bad . entry in directory inode 5691013156, was 5691013172: correcting
bad . entry in directory inode 5691013157, was 5691013173: correcting
bad . entry in directory inode 5691013163, was 5691013179: correcting
- agno = 22
- agno = 23
- agno = 24
- agno = 25
- agno = 26 (Danny: OOM occurred here with -m 3200)
- agno = 27
- agno = 28
- agno = 29
- agno = 30
- agno = 31
- agno = 32
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
- agno = 7
- agno = 8
- agno = 9
- agno = 10
- agno = 11
- agno = 12
- agno = 13
- agno = 14
- agno = 15
- agno = 16
- agno = 17
- agno = 18
- agno = 19
- agno = 20
- agno = 21
- agno = 22
- agno = 23
- agno = 24
- agno = 25
- agno = 26
- agno = 27
- agno = 28
- agno = 29
- agno = 30
- agno = 31
- agno = 32
Phase 5 - rebuild AG headers and trees...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
- agno = 7
- agno = 8
- agno = 9
- agno = 10
- agno = 11
- agno = 12
- agno = 13
- agno = 14
- agno = 15
- agno = 16
- agno = 17
- agno = 18
- agno = 19
- agno = 20
- agno = 21
- agno = 22
- agno = 23
- agno = 24
- agno = 25
- agno = 26
- agno = 27
- agno = 28
- agno = 29
- agno = 30
- agno = 31
- agno = 32
- reset superblock...
Phase 6 - check inode connectivity...
- resetting contents of realtime bitmap and summary inodes
- traversing filesystem ...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
- agno = 7
- agno = 8
- agno = 9
- agno = 10
- agno = 11
entry "SavedXML" in dir inode 2992927241 inconsistent with .. value
(4324257659) in ino 5691013156
will clear entry "SavedXML"
rebuilding directory inode 2992927241
- agno = 12
- agno = 13
- agno = 14
- agno = 15
- agno = 16
entry "Out" in dir inode 4324257659 inconsistent with .. value (2992927241)
in ino 5691013172
will clear entry "Out"
rebuilding directory inode 4324257659
- agno = 17
- agno = 18
- agno = 19
- agno = 20
- agno = 21
entry "tocs_file" in dir inode 5691012138 inconsistent with .. value
(3520464676) in ino 5691013154
will clear entry "tocs_file"
entry "trees.log" in dir inode 5691012138 inconsistent with .. value
(3791956240) in ino 5691013155
will clear entry "trees.log"
rebuilding directory inode 5691012138
entry "filelist.xml" in directory inode 5691012139 not consistent with ..
value (1909707067) in inode 5691013157,
junking entry
fixing i8count in inode 5691012139
entry "image001.jpg" in directory inode 5691012140 not consistent with ..
value (2450176033) in inode 5691013163,
junking entry
fixing i8count in inode 5691012140
entry "OCR" in dir inode 5691013154 inconsistent with .. value (5691013170)
in ino 1909707065
will clear entry "OCR"
entry "Tmp" in dir inode 5691013154 inconsistent with .. value (5691013170)
in ino 2179087403
will clear entry "Tmp"
entry "images" in dir inode 5691013154 inconsistent with .. value
(5691013170) in ino 2450176007
will clear entry "images"
rebuilding directory inode 5691013154
entry "286_Kellman_Hoffer_Master.pdf_files" in dir inode 5691013156
inconsistent with .. value (5691013172) in ino 834535727
will clear entry "286_Kellman_Hoffer_Master.pdf_files"
rebuilding directory inode 5691013156
- agno = 22
- agno = 23
- agno = 24
- agno = 25
- agno = 26
- agno = 27
- agno = 28
- agno = 29
- agno = 30
- agno = 31
- agno = 32
- traversal finished ...
- moving disconnected inodes to lost+found ...
disconnected dir inode 834535727, moving to lost+found
disconnected dir inode 1909707065, moving to lost+found
disconnected dir inode 2179087403, moving to lost+found
disconnected dir inode 2450176007, moving to lost+found
disconnected dir inode 5691013154, moving to lost+found
disconnected dir inode 5691013155, moving to lost+found
disconnected dir inode 5691013156, moving to lost+found
disconnected dir inode 5691013157, moving to lost+found
disconnected dir inode 5691013163, moving to lost+found
disconnected dir inode 5691013172, moving to lost+found
Phase 7 - verify and correct link counts...
resetting inode 81777983 nlinks from 2 to 12
resetting inode 1909210410 nlinks from 1 to 2
resetting inode 1909707067 nlinks from 3 to 2
resetting inode 2450176033 nlinks from 18 to 17
resetting inode 2992927241 nlinks from 13 to 12
resetting inode 3520464676 nlinks from 13 to 12
resetting inode 3791956240 nlinks from 13 to 12
resetting inode 4324257659 nlinks from 13 to 12
resetting inode 5691013154 nlinks from 5 to 2
resetting inode 5691013156 nlinks from 3 to 2
XFS_REPAIR Summary Tue Mar 31 03:11:00 2015
Phase Start End Duration
Phase 1: 03/31 02:28:04 03/31 02:28:05 1 second
Phase 2: 03/31 02:28:05 03/31 02:28:42 37 seconds
Phase 3: 03/31 02:28:42 03/31 02:48:29 19 minutes, 47 seconds
Phase 4: 03/31 02:48:29 03/31 02:55:40 7 minutes, 11 seconds
Phase 5: 03/31 02:55:40 03/31 02:55:43 3 seconds
Phase 6: 03/31 02:55:43 03/31 03:10:57 15 minutes, 14 seconds
Phase 7: 03/31 03:10:57 03/31 03:10:57
Total run time: 42 minutes, 53 seconds
done
Tue Mar 31 03:11:01 PDT 2015
--001a11338f6c661fed0512aa42e9--
From jack@suse.cz Wed Apr 1 09:34:37 2015
Return-Path:
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on oss.sgi.com
X-Spam-Level:
X-Spam-Status: No, score=0.0 required=5.0 tests=none autolearn=ham
version=3.3.1
X-Original-To: xfs@oss.sgi.com
Delivered-To: xfs@oss.sgi.com
Received: from relay.sgi.com (relay1.corp.sgi.com [137.38.102.111])
by oss.sgi.com (Postfix) with ESMTP id BBC377F5A
for ; Wed, 1 Apr 2015 09:34:37 -0500 (CDT)
Received: from cuda.sgi.com (cuda2.sgi.com [192.48.176.25])
by relay1.corp.sgi.com (Postfix) with ESMTP id B42FE8F8052
for ; Wed, 1 Apr 2015 07:34:34 -0700 (PDT)
X-ASG-Debug-ID: 1427898868-04cbb06cca2965a0001-NocioJ
Received: from mx2.suse.de (cantor2.suse.de [195.135.220.15]) by cuda.sgi.com with ESMTP id v9mqGWxR0g0xezN1 (version=TLSv1 cipher=AES256-SHA bits=256 verify=NO) for ; Wed, 01 Apr 2015 07:34:29 -0700 (PDT)
X-Barracuda-Envelope-From: jack@suse.cz
X-Barracuda-Apparent-Source-IP: 195.135.220.15
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (charybdis-ext.suse.de [195.135.220.254])
by mx2.suse.de (Postfix) with ESMTP id DF5B6AC28;
Wed, 1 Apr 2015 14:34:27 +0000 (UTC)
Received: by quack.suse.cz (Postfix, from userid 1000)
id C3D3682878; Wed, 1 Apr 2015 16:34:23 +0200 (CEST)
Date: Wed, 1 Apr 2015 16:34:23 +0200
From: Jan Kara
To: Dave Chinner
Cc: xfs@oss.sgi.com, linux-fsdevel@vger.kernel.org, willy@linux.intel.com,
jack@suse.cz
Subject: Re: [PATCH 1/8] xfs: mmap lock needs to be inside freeze protection
Message-ID: <20150401143423.GO26339@quack.suse.cz>
X-ASG-Orig-Subj: Re: [PATCH 1/8] xfs: mmap lock needs to be inside freeze protection
References: <1427194266-2885-1-git-send-email-david@fromorbit.com>
<1427194266-2885-2-git-send-email-david@fromorbit.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1427194266-2885-2-git-send-email-david@fromorbit.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Barracuda-Connect: cantor2.suse.de[195.135.220.15]
X-Barracuda-Start-Time: 1427898869
X-Barracuda-Encrypted: AES256-SHA
X-Barracuda-URL: http://192.48.176.25:80/cgi-mod/mark.cgi
X-Virus-Scanned: by bsmtpd at sgi.com
X-Barracuda-BRTS-Status: 1
X-Barracuda-Spam-Score: 0.00
X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=1000.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.7 tests=
X-Barracuda-Spam-Report: Code version 3.2, rules version 3.2.3.17436
Rule breakdown below
pts rule name description
---- ---------------------- --------------------------------------------------
On Tue 24-03-15 21:50:59, Dave Chinner wrote:
> From: Dave Chinner
>
> Lock ordering for the new mmap lock needs to be:
>
> mmap_sem
> sb_start_pagefault
> i_mmap_lock
> page lock
>
>
> Right now xfs_vm_page_mkwrite gets this the wrong way around.
> While technically it cannot deadlock due to the current freeze
> ordering, it's still a landmine that might explode if we change
> anything in future. Hence we need to nest the locks correctly.
Looks good to me. You can add:
Reviewed-by: Jan Kara
Honza
>
> Signed-off-by: Dave Chinner
> ---
> fs/xfs/xfs_file.c | 11 ++++++++---
> 1 file changed, 8 insertions(+), 3 deletions(-)
>
> diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
> index dc5f609..a4c882e 100644
> --- a/fs/xfs/xfs_file.c
> +++ b/fs/xfs/xfs_file.c
> @@ -1449,15 +1449,20 @@ xfs_filemap_page_mkwrite(
> struct vm_fault *vmf)
> {
> struct xfs_inode *ip = XFS_I(vma->vm_file->f_mapping->host);
> - int error;
> + int ret;
>
> trace_xfs_filemap_page_mkwrite(ip);
>
> + sb_start_pagefault(VFS_I(ip)->i_sb);
> + file_update_time(vma->vm_file);
> xfs_ilock(ip, XFS_MMAPLOCK_SHARED);
> - error = block_page_mkwrite(vma, vmf, xfs_get_blocks);
> +
> + ret = __block_page_mkwrite(vma, vmf, xfs_get_blocks);
> +
> xfs_iunlock(ip, XFS_MMAPLOCK_SHARED);
> + sb_end_pagefault(VFS_I(ip)->i_sb);
>
> - return error;
> + return block_page_mkwrite_return(ret);
> }
>
> const struct file_operations xfs_file_operations = {
> --
> 2.0.0
>
--
Jan Kara
SUSE Labs, CR
From jack@suse.cz Wed Apr 1 09:53:36 2015
Return-Path:
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on oss.sgi.com
X-Spam-Level:
X-Spam-Status: No, score=0.0 required=5.0 tests=none autolearn=ham
version=3.3.1
X-Original-To: xfs@oss.sgi.com
Delivered-To: xfs@oss.sgi.com
Received: from relay.sgi.com (relay3.corp.sgi.com [198.149.34.15])
by oss.sgi.com (Postfix) with ESMTP id 577567F5A
for ; Wed, 1 Apr 2015 09:53:36 -0500 (CDT)
Received: from cuda.sgi.com (cuda2.sgi.com [192.48.176.25])
by relay3.corp.sgi.com (Postfix) with ESMTP id C734CAC002
for ; Wed, 1 Apr 2015 07:53:35 -0700 (PDT)
X-ASG-Debug-ID: 1427900012-04cbb06ccb2974a0001-NocioJ
Received: from mx2.suse.de (cantor2.suse.de [195.135.220.15]) by cuda.sgi.com with ESMTP id uTnUzDB3DimHitL3 (version=TLSv1 cipher=AES256-SHA bits=256 verify=NO) for ; Wed, 01 Apr 2015 07:53:33 -0700 (PDT)
X-Barracuda-Envelope-From: jack@suse.cz
X-Barracuda-Apparent-Source-IP: 195.135.220.15
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (charybdis-ext.suse.de [195.135.220.254])
by mx2.suse.de (Postfix) with ESMTP id A3637AC54;
Wed, 1 Apr 2015 14:53:31 +0000 (UTC)
Received: by quack.suse.cz (Postfix, from userid 1000)
id 1313882878; Wed, 1 Apr 2015 16:53:28 +0200 (CEST)
Date: Wed, 1 Apr 2015 16:53:28 +0200
From: Jan Kara
To: Dave Chinner
Cc: xfs@oss.sgi.com, linux-fsdevel@vger.kernel.org, willy@linux.intel.com,
jack@suse.cz
Subject: Re: [PATCH 2/8] dax: don't abuse get_block mapping for endio
callbacks
Message-ID: <20150401145328.GP26339@quack.suse.cz>
X-ASG-Orig-Subj: Re: [PATCH 2/8] dax: don't abuse get_block mapping for endio
callbacks
References: <1427194266-2885-1-git-send-email-david@fromorbit.com>
<1427194266-2885-3-git-send-email-david@fromorbit.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1427194266-2885-3-git-send-email-david@fromorbit.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Barracuda-Connect: cantor2.suse.de[195.135.220.15]
X-Barracuda-Start-Time: 1427900012
X-Barracuda-Encrypted: AES256-SHA
X-Barracuda-URL: http://192.48.176.25:80/cgi-mod/mark.cgi
X-Virus-Scanned: by bsmtpd at sgi.com
X-Barracuda-BRTS-Status: 1
X-Barracuda-Spam-Score: 0.00
X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=1000.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.7 tests=
X-Barracuda-Spam-Report: Code version 3.2, rules version 3.2.3.17438
Rule breakdown below
pts rule name description
---- ---------------------- --------------------------------------------------
On Tue 24-03-15 21:51:00, Dave Chinner wrote:
> From: Dave Chinner
>
> dax_fault() currently relies on the get_block callback to attach an
> io completion callback to the mapping buffer head so that it can
> run unwritten extent conversion after zeroing allocated blocks.
>
> Instead of this hack, pass the conversion callback directly into
> dax_fault() similar to the get_block callback. When the filesystem
> allocates unwritten extents, it will set the buffer_unwritten()
> flag, and hence the dax_fault code can call the completion function
> in the contexts where it is necessary without overloading the
> mapping buffer head.
>
> Note: The changes to ext4 to use this interface are suspect at best.
> In fact, the way ext4 did this end_io assignment in the first place
> looks suspect because it only set a completion callback when there
> wasn't already some other write() call taking place on the same
> inode. The ext4 end_io code looks rather intricate and fragile with
> all its reference counting and passing to different contexts for
> modification via inode private pointers that aren't protected by
> locks...
Yeah, the io_end handling is currently buggy when you try to do more than
one write in parallel (normally we don't allow that and serialize
everything behind i_mutex). That needs fixing, but what you did here looks
good enough for this patch set. You have my
Acked-by: Jan Kara
Honza
> Signed-off-by: Dave Chinner
> ---
> fs/dax.c | 17 +++++++++++------
> fs/ext2/file.c | 4 ++--
> fs/ext4/file.c | 16 ++++++++++++++--
> fs/ext4/inode.c | 21 +++++++--------------
> include/linux/fs.h | 6 ++++--
> 5 files changed, 38 insertions(+), 26 deletions(-)
>
> diff --git a/fs/dax.c b/fs/dax.c
> index ed1619e..431ec2b 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -310,14 +310,11 @@ static int dax_insert_mapping(struct inode *inode, struct buffer_head *bh,
> out:
> i_mmap_unlock_read(mapping);
>
> - if (bh->b_end_io)
> - bh->b_end_io(bh, 1);
> -
> return error;
> }
>
> static int do_dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
> - get_block_t get_block)
> + get_block_t get_block, dax_iodone_t complete_unwritten)
> {
> struct file *file = vma->vm_file;
> struct address_space *mapping = file->f_mapping;
> @@ -418,7 +415,15 @@ static int do_dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
> page_cache_release(page);
> }
>
> + /*
> + * If we successfully insert the new mapping over an unwritten extent,
> + * we need to ensure we convert the unwritten extent. If there is an
> + * error inserting the mapping, we leave the extent as unwritten to
> + * prevent exposure of the stale underlying data to userspace.
> + */
> error = dax_insert_mapping(inode, &bh, vma, vmf);
> + if (!error && buffer_unwritten(&bh))
> + complete_unwritten(&bh, 1);
>
> out:
> if (error == -ENOMEM)
> @@ -446,7 +451,7 @@ static int do_dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
> * fault handler for DAX files.
> */
> int dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
> - get_block_t get_block)
> + get_block_t get_block, dax_iodone_t complete_unwritten)
> {
> int result;
> struct super_block *sb = file_inode(vma->vm_file)->i_sb;
> @@ -455,7 +460,7 @@ int dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
> sb_start_pagefault(sb);
> file_update_time(vma->vm_file);
> }
> - result = do_dax_fault(vma, vmf, get_block);
> + result = do_dax_fault(vma, vmf, get_block, complete_unwritten);
> if (vmf->flags & FAULT_FLAG_WRITE)
> sb_end_pagefault(sb);
>
> diff --git a/fs/ext2/file.c b/fs/ext2/file.c
> index e317017..8da747a 100644
> --- a/fs/ext2/file.c
> +++ b/fs/ext2/file.c
> @@ -28,12 +28,12 @@
> #ifdef CONFIG_FS_DAX
> static int ext2_dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
> {
> - return dax_fault(vma, vmf, ext2_get_block);
> + return dax_fault(vma, vmf, ext2_get_block, NULL);
> }
>
> static int ext2_dax_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf)
> {
> - return dax_mkwrite(vma, vmf, ext2_get_block);
> + return dax_mkwrite(vma, vmf, ext2_get_block, NULL);
> }
>
> static const struct vm_operations_struct ext2_dax_vm_ops = {
> diff --git a/fs/ext4/file.c b/fs/ext4/file.c
> index 33a09da..f7dabb1 100644
> --- a/fs/ext4/file.c
> +++ b/fs/ext4/file.c
> @@ -192,15 +192,27 @@ errout:
> }
>
> #ifdef CONFIG_FS_DAX
> +static void ext4_end_io_unwritten(struct buffer_head *bh, int uptodate)
> +{
> + struct inode *inode = bh->b_assoc_map->host;
> + /* XXX: breaks on 32-bit > 16GB. Is that even supported? */
> + loff_t offset = (loff_t)(uintptr_t)bh->b_private << inode->i_blkbits;
> + int err;
> + if (!uptodate)
> + return;
> + WARN_ON(!buffer_unwritten(bh));
> + err = ext4_convert_unwritten_extents(NULL, inode, offset, bh->b_size);
> +}
> +
> static int ext4_dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
> {
> - return dax_fault(vma, vmf, ext4_get_block);
> + return dax_fault(vma, vmf, ext4_get_block, ext4_end_io_unwritten);
> /* Is this the right get_block? */
> }
>
> static int ext4_dax_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf)
> {
> - return dax_mkwrite(vma, vmf, ext4_get_block);
> + return dax_mkwrite(vma, vmf, ext4_get_block, ext4_end_io_unwritten);
> }
>
> static const struct vm_operations_struct ext4_dax_vm_ops = {
> diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
> index 5cb9a21..43433de 100644
> --- a/fs/ext4/inode.c
> +++ b/fs/ext4/inode.c
> @@ -657,18 +657,6 @@ has_zeroout:
> return retval;
> }
>
> -static void ext4_end_io_unwritten(struct buffer_head *bh, int uptodate)
> -{
> - struct inode *inode = bh->b_assoc_map->host;
> - /* XXX: breaks on 32-bit > 16GB. Is that even supported? */
> - loff_t offset = (loff_t)(uintptr_t)bh->b_private << inode->i_blkbits;
> - int err;
> - if (!uptodate)
> - return;
> - WARN_ON(!buffer_unwritten(bh));
> - err = ext4_convert_unwritten_extents(NULL, inode, offset, bh->b_size);
> -}
> -
> /* Maximum number of blocks we map for direct IO at once. */
> #define DIO_MAX_BLOCKS 4096
>
> @@ -706,10 +694,15 @@ static int _ext4_get_block(struct inode *inode, sector_t iblock,
>
> map_bh(bh, inode->i_sb, map.m_pblk);
> bh->b_state = (bh->b_state & ~EXT4_MAP_FLAGS) | map.m_flags;
> - if (IS_DAX(inode) && buffer_unwritten(bh) && !io_end) {
> + if (IS_DAX(inode) && buffer_unwritten(bh)) {
> + /*
> + * dgc: I suspect unwritten conversion on ext4+DAX is
> + * fundamentally broken here when there are concurrent
> + * read/write in progress on this inode.
> + */
> + WARN_ON_ONCE(io_end);
> bh->b_assoc_map = inode->i_mapping;
> bh->b_private = (void *)(unsigned long)iblock;
> - bh->b_end_io = ext4_end_io_unwritten;
> }
> if (io_end && io_end->flag & EXT4_IO_END_UNWRITTEN)
> set_buffer_defer_completion(bh);
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index 937e280..82100ae 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -70,6 +70,7 @@ typedef int (get_block_t)(struct inode *inode, sector_t iblock,
> struct buffer_head *bh_result, int create);
> typedef void (dio_iodone_t)(struct kiocb *iocb, loff_t offset,
> ssize_t bytes, void *private);
> +typedef void (dax_iodone_t)(struct buffer_head *bh_map, int uptodate);
>
> #define MAY_EXEC 0x00000001
> #define MAY_WRITE 0x00000002
> @@ -2603,8 +2604,9 @@ ssize_t dax_do_io(int rw, struct kiocb *, struct inode *, struct iov_iter *,
> int dax_clear_blocks(struct inode *, sector_t block, long size);
> int dax_zero_page_range(struct inode *, loff_t from, unsigned len, get_block_t);
> int dax_truncate_page(struct inode *, loff_t from, get_block_t);
> -int dax_fault(struct vm_area_struct *, struct vm_fault *, get_block_t);
> -#define dax_mkwrite(vma, vmf, gb) dax_fault(vma, vmf, gb)
> +int dax_fault(struct vm_area_struct *, struct vm_fault *, get_block_t,
> + dax_iodone_t);
> +#define dax_mkwrite(vma, vmf, gb, iod) dax_fault(vma, vmf, gb, iod)
>
> #ifdef CONFIG_BLOCK
> typedef void (dio_submit_t)(int rw, struct bio *bio, struct inode *inode,
> --
> 2.0.0
>
--
Jan Kara
SUSE Labs, CR
From jack@suse.cz Wed Apr 1 10:07:22 2015
Return-Path:
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on oss.sgi.com
X-Spam-Level:
X-Spam-Status: No, score=0.0 required=5.0 tests=none autolearn=ham
version=3.3.1
X-Original-To: xfs@oss.sgi.com
Delivered-To: xfs@oss.sgi.com
Received: from relay.sgi.com (relay2.corp.sgi.com [137.38.102.29])
by oss.sgi.com (Postfix) with ESMTP id 0D1DA7F5A
for ; Wed, 1 Apr 2015 10:07:22 -0500 (CDT)
Received: from cuda.sgi.com (cuda3.sgi.com [192.48.176.15])
by relay2.corp.sgi.com (Postfix) with ESMTP id 06177304043
for ; Wed, 1 Apr 2015 08:07:18 -0700 (PDT)
X-ASG-Debug-ID: 1427900833-04cb6c3fdb25ac70001-NocioJ
Received: from mx2.suse.de (cantor2.suse.de [195.135.220.15]) by cuda.sgi.com with ESMTP id Gdn379ZN4s1QBRAx (version=TLSv1 cipher=AES256-SHA bits=256 verify=NO) for ; Wed, 01 Apr 2015 08:07:13 -0700 (PDT)
X-Barracuda-Envelope-From: jack@suse.cz
X-Barracuda-Apparent-Source-IP: 195.135.220.15
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay1.suse.de (charybdis-ext.suse.de [195.135.220.254])
by mx2.suse.de (Postfix) with ESMTP id BF4F8AC54;
Wed, 1 Apr 2015 15:07:12 +0000 (UTC)
Received: by quack.suse.cz (Postfix, from userid 1000)
id 4C35E82878; Wed, 1 Apr 2015 17:07:09 +0200 (CEST)
Date: Wed, 1 Apr 2015 17:07:09 +0200
From: Jan Kara
To: Dave Chinner
Cc: xfs@oss.sgi.com, linux-fsdevel@vger.kernel.org, willy@linux.intel.com,
jack@suse.cz
Subject: Re: [PATCH 3/8] dax: expose __dax_fault for filesystems with locking
constraints
Message-ID: <20150401150709.GQ26339@quack.suse.cz>
X-ASG-Orig-Subj: Re: [PATCH 3/8] dax: expose __dax_fault for filesystems with locking
constraints
References: <1427194266-2885-1-git-send-email-david@fromorbit.com>
<1427194266-2885-4-git-send-email-david@fromorbit.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1427194266-2885-4-git-send-email-david@fromorbit.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Barracuda-Connect: cantor2.suse.de[195.135.220.15]
X-Barracuda-Start-Time: 1427900833
X-Barracuda-Encrypted: AES256-SHA
X-Barracuda-URL: http://192.48.176.15:80/cgi-mod/mark.cgi
X-Virus-Scanned: by bsmtpd at sgi.com
X-Barracuda-BRTS-Status: 1
X-Barracuda-Spam-Score: 0.00
X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=1000.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.7 tests=
X-Barracuda-Spam-Report: Code version 3.2, rules version 3.2.3.17437
Rule breakdown below
pts rule name description
---- ---------------------- --------------------------------------------------
On Tue 24-03-15 21:51:01, Dave Chinner wrote:
> From: Dave Chinner
>
> Some filesystems cannot call dax_fault() directly because they have
> different locking and/or allocation constraints in the page fault IO
> path. To handle this, we need to follow the same model as the
> generic block_page_mkwrite code, where the internals are exposed via
> __block_page_mkwrite() so that filesystems can wrap the correct
> locking and operations around the outside.
>
> This is loosely based on a patch originally from Matthew Wilcox.
> Unlike the original patch, it does not change ext4 code, error
> returns or unwritten extent conversion handling. It also adds a
> __dax_mkwrite() wrapper for .page_mkwrite implementations to do the
> right thing, too.
We will need a normal error return from __dax_mkwrite() for proper ENOSPC
handling in ext4. You could do that when touching this code here if you
feel like it, but if not, I can do it as a separate patch.
Anyway, feel free to add:
Reviewed-by: Jan Kara
Honza
>
> Signed-off-by: Dave Chinner
> ---
> fs/dax.c | 15 +++++++++++++--
> include/linux/fs.h | 5 ++++-
> 2 files changed, 17 insertions(+), 3 deletions(-)
>
> diff --git a/fs/dax.c b/fs/dax.c
> index 431ec2b..0121f7d 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -313,7 +313,17 @@ static int dax_insert_mapping(struct inode *inode, struct buffer_head *bh,
> return error;
> }
>
> -static int do_dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
> +/**
> + * __dax_fault - handle a page fault on a DAX file
> + * @vma: The virtual memory area where the fault occurred
> + * @vmf: The description of the fault
> + * @get_block: The filesystem method used to translate file offsets to blocks
> + *
> + * When a page fault occurs, filesystems may call this helper in their
> + * fault handler for DAX files. __dax_fault() assumes the caller has done all
> + * the necessary locking for the page fault to proceed successfully.
> + */
> +int __dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
> get_block_t get_block, dax_iodone_t complete_unwritten)
> {
> struct file *file = vma->vm_file;
> @@ -440,6 +450,7 @@ static int do_dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
> }
> goto out;
> }
> +EXPORT_SYMBOL(__dax_fault);
>
> /**
> * dax_fault - handle a page fault on a DAX file
> @@ -460,7 +471,7 @@ int dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
> sb_start_pagefault(sb);
> file_update_time(vma->vm_file);
> }
> - result = do_dax_fault(vma, vmf, get_block, complete_unwritten);
> + result = __dax_fault(vma, vmf, get_block, complete_unwritten);
> if (vmf->flags & FAULT_FLAG_WRITE)
> sb_end_pagefault(sb);
>
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index 82100ae..7e5a2d6 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -2606,7 +2606,10 @@ int dax_zero_page_range(struct inode *, loff_t from, unsigned len, get_block_t);
> int dax_truncate_page(struct inode *, loff_t from, get_block_t);
> int dax_fault(struct vm_area_struct *, struct vm_fault *, get_block_t,
> dax_iodone_t);
> -#define dax_mkwrite(vma, vmf, gb, iod) dax_fault(vma, vmf, gb, iod)
> +int __dax_fault(struct vm_area_struct *, struct vm_fault *, get_block_t,
> + dax_iodone_t);
> +#define dax_mkwrite(vma, vmf, gb, iod) dax_fault(vma, vmf, gb, iod)
> +#define __dax_mkwrite(vma, vmf, gb, iod) __dax_fault(vma, vmf, gb, iod)
>
> #ifdef CONFIG_BLOCK
> typedef void (dio_submit_t)(int rw, struct bio *bio, struct inode *inode,
> --
> 2.0.0
>
--
Jan Kara
SUSE Labs, CR
From bfoster@redhat.com Wed Apr 1 11:38:30 2015
Return-Path:
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on oss.sgi.com
X-Spam-Level:
X-Spam-Status: No, score=0.0 required=5.0 tests=none autolearn=ham
version=3.3.1
X-Original-To: xfs@oss.sgi.com
Delivered-To: xfs@oss.sgi.com
Received: from relay.sgi.com (relay1.corp.sgi.com [137.38.102.111])
by oss.sgi.com (Postfix) with ESMTP id 2188E7F5D
for ; Wed, 1 Apr 2015 11:38:30 -0500 (CDT)
Received: from cuda.sgi.com (cuda1.sgi.com [192.48.157.11])
by relay1.corp.sgi.com (Postfix) with ESMTP id 025BC8F8081
for ; Wed, 1 Apr 2015 09:38:26 -0700 (PDT)
X-ASG-Debug-ID: 1427906305-04bdf036242cc5e0001-NocioJ
Received: from mx1.redhat.com (mx1.redhat.com [209.132.183.28]) by cuda.sgi.com with ESMTP id ctqn1a5bw0yUPmwk (version=TLSv1 cipher=AES256-SHA bits=256 verify=NO) for ; Wed, 01 Apr 2015 09:38:25 -0700 (PDT)
X-Barracuda-Envelope-From: bfoster@redhat.com
X-Barracuda-Apparent-Source-IP: 209.132.183.28
X-ASG-Whitelist: Client
Received: from int-mx14.intmail.prod.int.phx2.redhat.com (int-mx14.intmail.prod.int.phx2.redhat.com [10.5.11.27])
by mx1.redhat.com (Postfix) with ESMTPS id 6FF1B8E3CC;
Wed, 1 Apr 2015 16:38:25 +0000 (UTC)
Received: from bfoster.bfoster (dhcp-41-237.bos.redhat.com [10.18.41.237])
by int-mx14.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP id t31GcOwA026783;
Wed, 1 Apr 2015 12:38:24 -0400
Received: by bfoster.bfoster (Postfix, from userid 1000)
id AF3A41208AB; Wed, 1 Apr 2015 12:38:23 -0400 (EDT)
Date: Wed, 1 Apr 2015 12:38:23 -0400
From: Brian Foster
To: Danny Shavit
Cc: xfs@oss.sgi.com, Dave Chinner ,
Lev Vainblat ,
Alex Lyakas
Subject: Re: xfs corruption issue
Message-ID: <20150401163822.GC4756@bfoster.bfoster>
X-ASG-Orig-Subj: Re: xfs corruption issue
References:
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To:
User-Agent: Mutt/1.5.23 (2014-03-12)
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.27
X-Barracuda-Connect: mx1.redhat.com[209.132.183.28]
X-Barracuda-Start-Time: 1427906305
X-Barracuda-Encrypted: AES256-SHA
X-Barracuda-URL: http://192.48.157.11:80/cgi-mod/mark.cgi
X-Virus-Scanned: by bsmtpd at sgi.com
X-Barracuda-BRTS-Status: 1
On Wed, Apr 01, 2015 at 05:09:11PM +0300, Danny Shavit wrote:
> Hello Dave,
> My name is Danny Shavit and I am with Zadara storage.
> We would appreciate your feedback regarding an xfs corruption and xfs_repair
> issue.
>
> We found a corrupted xfs volume in one of our systems. It is around 1 TB
> in size with about 12 M files.
> We ran xfs_repair on the volume, which succeeded after 42 minutes.
> We noticed that memory consumption rose to about 7.5 GB.
> Since some customers are using only 4 GB (and sometimes even 2 GB) we tried
> running "xfs_repair -m 3200" on a 4 GB RAM machine.
> However, this time an OOM event happened while handling AG 26 during
> phase 3.
> The log of xfs_repair is enclosed below.
> We would appreciate your feedback on the amount of memory needed for
> xfs_repair in general and when using the "-m" option specifically.
> The xfs metadata dump (prior to xfs_repair) can be found here:
> https://zadarastorage-public.s3.amazonaws.com/xfs/xfsdump-prod-ebs_2015-03-30_23-00-38.tgz
> It is a 1.2 GB file (and 5.7 GB uncompressed).
>
> We would appreciate your feedback on the corruption pattern as well.
Have you tried something smaller, perhaps -m 2048? I just ran repair on
the metadump on a 4 GB VM. It OOM'd with default options and completed in
a few minutes with -m 2048, though RSS still peaked at around 3.6 GB.
Using -P seems to help at the cost of time: that run took me ~20 minutes,
but RSS peaked around 2.4 GB.
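That kind of sizing can be sketched as a shell rule of thumb. The helper name and the 50% ratio are assumptions extrapolated from the -m 2048 on a 4 GB VM data point above, not anything xfs_repair itself documents:

```shell
# Hypothetical helper (not part of xfsprogs): derive an xfs_repair -m
# value as half of the machine's RAM in MB, leaving headroom for the
# rest of the system. The 50% ratio is an assumption.
suggest_repair_mem() {
    ram_mb=$1
    echo $(( ram_mb / 2 ))
}

# On a 4096 MB machine this would suggest:
#   xfs_repair -m $(suggest_repair_mem 4096) /dev/dm-55
suggest_repair_mem 4096
```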
FWIW, I'm also on a recent xfsprogs:
# xfs_repair -V
xfs_repair version 3.2.2
Brian
> --
> Thank you,
> Danny Shavit
> Zadarastorage
>
> ---------- xfs_repair log ----------------
> root@vsa-00000428-vc-1:/export/4xfsdump# date; xfs_repair -v /dev/dm-55;
> date
> Tue Mar 31 02:28:04 PDT 2015
> Phase 1 - find and verify superblock...
> - block cache size set to 735288 entries
> Phase 2 - using internal log
> - zero log...
> zero_log: head block 1920 tail block 1920
> - scan filesystem freespace and inode maps...
> agi_freecount 54, counted 55 in ag 7
> sb_ifree 947, counted 948
> - found root inode chunk
> Phase 3 - for each AG...
> - scan and clear agi unlinked lists...
> - process known inodes and perform inode discovery...
> - agno = 0
> - agno = 1
> - agno = 2
> - agno = 3
> - agno = 4
> - agno = 5
> - agno = 6
> - agno = 7
> - agno = 8
> - agno = 9
> - agno = 10
> - agno = 11
> - agno = 12
> - agno = 13
> - agno = 14
> - agno = 15
> - agno = 16
> - agno = 17
> - agno = 18
> - agno = 19
> - agno = 20
> - agno = 21
> bad . entry in directory inode 5691013154, was 5691013170: correcting
> bad . entry in directory inode 5691013156, was 5691013172: correcting
> bad . entry in directory inode 5691013157, was 5691013173: correcting
> bad . entry in directory inode 5691013163, was 5691013179: correcting
> - agno = 22
> - agno = 23
> - agno = 24
> - agno = 25
> - agno = 26 (Danny: OOM occurred here with -m 3200)
> - agno = 27
> - agno = 28
> - agno = 29
> - agno = 30
> - agno = 31
> - agno = 32
> - process newly discovered inodes...
> Phase 4 - check for duplicate blocks...
> - setting up duplicate extent list...
> - check for inodes claiming duplicate blocks...
> - agno = 0
> - agno = 1
> - agno = 2
> - agno = 3
> - agno = 4
> - agno = 5
> - agno = 6
> - agno = 7
> - agno = 8
> - agno = 9
> - agno = 10
> - agno = 11
> - agno = 12
> - agno = 13
> - agno = 14
> - agno = 15
> - agno = 16
> - agno = 17
> - agno = 18
> - agno = 19
> - agno = 20
> - agno = 21
> - agno = 22
> - agno = 23
> - agno = 24
> - agno = 25
> - agno = 26
> - agno = 27
> - agno = 28
> - agno = 29
> - agno = 30
> - agno = 31
> - agno = 32
> Phase 5 - rebuild AG headers and trees...
> - agno = 0
> - agno = 1
> - agno = 2
> - agno = 3
> - agno = 4
> - agno = 5
> - agno = 6
> - agno = 7
> - agno = 8
> - agno = 9
> - agno = 10
> - agno = 11
> - agno = 12
> - agno = 13
> - agno = 14
> - agno = 15
> - agno = 16
> - agno = 17
> - agno = 18
> - agno = 19
> - agno = 20
> - agno = 21
> - agno = 22
> - agno = 23
> - agno = 24
> - agno = 25
> - agno = 26
> - agno = 27
> - agno = 28
> - agno = 29
> - agno = 30
> - agno = 31
> - agno = 32
> - reset superblock...
> Phase 6 - check inode connectivity...
> - resetting contents of realtime bitmap and summary inodes
> - traversing filesystem ...
> - agno = 0
> - agno = 1
> - agno = 2
> - agno = 3
> - agno = 4
> - agno = 5
> - agno = 6
> - agno = 7
> - agno = 8
> - agno = 9
> - agno = 10
> - agno = 11
> entry "SavedXML" in dir inode 2992927241 inconsistent with .. value
> (4324257659) in ino 5691013156
> will clear entry "SavedXML"
> rebuilding directory inode 2992927241
> - agno = 12
> - agno = 13
> - agno = 14
> - agno = 15
> - agno = 16
> entry "Out" in dir inode 4324257659 inconsistent with .. value (2992927241)
> in ino 5691013172
> will clear entry "Out"
> rebuilding directory inode 4324257659
> - agno = 17
> - agno = 18
> - agno = 19
> - agno = 20
> - agno = 21
> entry "tocs_file" in dir inode 5691012138 inconsistent with .. value
> (3520464676) in ino 5691013154
> will clear entry "tocs_file"
> entry "trees.log" in dir inode 5691012138 inconsistent with .. value
> (3791956240) in ino 5691013155
> will clear entry "trees.log"
> rebuilding directory inode 5691012138
> entry "filelist.xml" in directory inode 5691012139 not consistent with ..
> value (1909707067) in inode 5691013157,
> junking entry
> fixing i8count in inode 5691012139
> entry "image001.jpg" in directory inode 5691012140 not consistent with ..
> value (2450176033) in inode 5691013163,
> junking entry
> fixing i8count in inode 5691012140
> entry "OCR" in dir inode 5691013154 inconsistent with .. value (5691013170)
> in ino 1909707065
> will clear entry "OCR"
> entry "Tmp" in dir inode 5691013154 inconsistent with .. value (5691013170)
> in ino 2179087403
> will clear entry "Tmp"
> entry "images" in dir inode 5691013154 inconsistent with .. value
> (5691013170) in ino 2450176007
> will clear entry "images"
> rebuilding directory inode 5691013154
> entry "286_Kellman_Hoffer_Master.pdf_files" in dir inode 5691013156
> inconsistent with .. value (5691013172) in ino 834535727
> will clear entry "286_Kellman_Hoffer_Master.pdf_files"
> rebuilding directory inode 5691013156
> - agno = 22
> - agno = 23
> - agno = 24
> - agno = 25
> - agno = 26
> - agno = 27
> - agno = 28
> - agno = 29
> - agno = 30
> - agno = 31
> - agno = 32
> - traversal finished ...
> - moving disconnected inodes to lost+found ...
> disconnected dir inode 834535727, moving to lost+found
> disconnected dir inode 1909707065, moving to lost+found
> disconnected dir inode 2179087403, moving to lost+found
> disconnected dir inode 2450176007, moving to lost+found
> disconnected dir inode 5691013154, moving to lost+found
> disconnected dir inode 5691013155, moving to lost+found
> disconnected dir inode 5691013156, moving to lost+found
> disconnected dir inode 5691013157, moving to lost+found
> disconnected dir inode 5691013163, moving to lost+found
> disconnected dir inode 5691013172, moving to lost+found
> Phase 7 - verify and correct link counts...
> resetting inode 81777983 nlinks from 2 to 12
> resetting inode 1909210410 nlinks from 1 to 2
> resetting inode 1909707067 nlinks from 3 to 2
> resetting inode 2450176033 nlinks from 18 to 17
> resetting inode 2992927241 nlinks from 13 to 12
> resetting inode 3520464676 nlinks from 13 to 12
> resetting inode 3791956240 nlinks from 13 to 12
> resetting inode 4324257659 nlinks from 13 to 12
> resetting inode 5691013154 nlinks from 5 to 2
> resetting inode 5691013156 nlinks from 3 to 2
>
> XFS_REPAIR Summary Tue Mar 31 03:11:00 2015
>
> Phase Start End Duration
> Phase 1: 03/31 02:28:04 03/31 02:28:05 1 second
> Phase 2: 03/31 02:28:05 03/31 02:28:42 37 seconds
> Phase 3: 03/31 02:28:42 03/31 02:48:29 19 minutes, 47 seconds
> Phase 4: 03/31 02:48:29 03/31 02:55:40 7 minutes, 11 seconds
> Phase 5: 03/31 02:55:40 03/31 02:55:43 3 seconds
> Phase 6: 03/31 02:55:43 03/31 03:10:57 15 minutes, 14 seconds
> Phase 7: 03/31 03:10:57 03/31 03:10:57
>
> Total run time: 42 minutes, 53 seconds
> done
> Tue Mar 31 03:11:01 PDT 2015
> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs
From sandeen@sandeen.net Wed Apr 1 12:12:40 2015
Return-Path:
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on oss.sgi.com
X-Spam-Level:
X-Spam-Status: No, score=0.0 required=5.0 tests=none autolearn=ham
version=3.3.1
X-Original-To: xfs@oss.sgi.com
Delivered-To: xfs@oss.sgi.com
Received: from relay.sgi.com (relay2.corp.sgi.com [137.38.102.29])
by oss.sgi.com (Postfix) with ESMTP id D7A127F5E
for ; Wed, 1 Apr 2015 12:12:40 -0500 (CDT)
Received: from cuda.sgi.com (cuda2.sgi.com [192.48.176.25])
by relay2.corp.sgi.com (Postfix) with ESMTP id C7B46304039
for ; Wed, 1 Apr 2015 10:12:37 -0700 (PDT)
X-ASG-Debug-ID: 1427908352-04cbb06ccb2a5b50001-NocioJ
Received: from sandeen.net (sandeen.net [63.231.237.45]) by cuda.sgi.com with ESMTP id MvGv0RkwQObYMRmG for ; Wed, 01 Apr 2015 10:12:32 -0700 (PDT)
X-Barracuda-Envelope-From: sandeen@sandeen.net
X-Barracuda-Apparent-Source-IP: 63.231.237.45
Received: from Liberator.local (unknown [184.49.98.206])
(using TLSv1.2 with cipher DHE-RSA-AES128-SHA (128/128 bits))
(No client certificate requested)
by sandeen.net (Postfix) with ESMTPSA id A178163C3A22;
Wed, 1 Apr 2015 12:12:31 -0500 (CDT)
Message-ID: <551C26FC.10803@sandeen.net>
Date: Wed, 01 Apr 2015 13:12:28 -0400
From: Eric Sandeen
MIME-Version: 1.0
To: Danny Shavit , xfs@oss.sgi.com,
Dave Chinner
CC: Lev Vainblat ,
Alex Lyakas
Subject: Re: xfs corruption issue
References:
X-ASG-Orig-Subj: Re: xfs corruption issue
In-Reply-To:
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: 8bit
X-Barracuda-Connect: sandeen.net[63.231.237.45]
X-Barracuda-Start-Time: 1427908352
X-Barracuda-URL: http://192.48.176.25:80/cgi-mod/mark.cgi
X-Virus-Scanned: by bsmtpd at sgi.com
X-Barracuda-BRTS-Status: 1
X-Barracuda-Spam-Score: 1.00
X-Barracuda-Spam-Status: No, SCORE=1.00 using per-user scores of TAG_LEVEL=1000.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.7 tests=BSF_SC0_TG232
X-Barracuda-Spam-Report: Code version 3.2, rules version 3.2.3.17443
Rule breakdown below
pts rule name description
---- ---------------------- --------------------------------------------------
1.00 BSF_SC0_TG232 BODY: Custom Rule TG232
On 4/1/15 10:09 AM, Danny Shavit wrote:
> Hello Dave,
> My name is Danny Shavit and I am with Zadara storage.
> We would appreciate your feedback regarding an xfs corruption and xfs_repair issue.
>
> We found a corrupted xfs volume in one of our systems. It is around 1 TB in size with about 12 M files.
> We ran xfs_repair on the volume, which succeeded after 42 minutes.
> We noticed that memory consumption rose to about 7.5 GB.
> Since some customers are using only 4 GB (and sometimes even 2 GB) we tried running "xfs_repair -m 3200" on a 4 GB RAM machine.
> However, this time an OOM event happened while handling AG 26 during phase 3.
> The log of xfs_repair is enclosed below.
> We would appreciate your feedback on the amount of memory needed for xfs_repair in general and when using the "-m" option specifically.
> The xfs metadata dump (prior to xfs_repair) can be found here:
> https://zadarastorage-public.s3.amazonaws.com/xfs/xfsdump-prod-ebs_2015-03-30_23-00-38.tgz
> It is a 1.2 GB file (and 5.7 GB uncompressed).
>
> We would appreciate your feedback on the corruption pattern as well.
> --
> Thank you,
> Danny Shavit
> Zadarastorage
>
> ---------- xfs_repair log ----------------
Just a note ...
> bad . entry in directory inode 5691013154, was 5691013170: correcting
101010011001101011111100000100010
101010011001101011111100000110010
                            ^ bit flip
> bad . entry in directory inode 5691013156, was 5691013172: correcting
101010011001101011111100000100100
101010011001101011111100000110100
                            ^ bit flip
etc ...
> bad . entry in directory inode 5691013157, was 5691013173: correcting
> bad . entry in directory inode 5691013163, was 5691013179: correcting
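Eric's observation can be checked mechanically: two inode numbers differ by a single bit flip exactly when their XOR is a nonzero power of two. A small sketch (the function is mine, not an xfsprogs tool):

```shell
# Return success if two inode numbers differ in exactly one bit,
# as in the "bad . entry" pairs from the repair log above.
is_single_bit_flip() {
    x=$(( $1 ^ $2 ))
    # a nonzero power of two has exactly one bit set: x & (x - 1) == 0
    [ "$x" -ne 0 ] && [ $(( x & (x - 1) )) -eq 0 ]
}

is_single_bit_flip 5691013154 5691013170 && echo "single bit flip"
```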
From kdhall@binghamton.edu Wed Apr 1 14:53:41 2015
Return-Path:
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on oss.sgi.com
X-Spam-Level:
X-Spam-Status: No, score=0.0 required=5.0 tests=HTML_MESSAGE autolearn=ham
version=3.3.1
X-Original-To: xfs@oss.sgi.com
Delivered-To: xfs@oss.sgi.com
Received: from relay.sgi.com (relay3.corp.sgi.com [198.149.34.15])
by oss.sgi.com (Postfix) with ESMTP id 825387F60
for ; Wed, 1 Apr 2015 14:53:41 -0500 (CDT)
Received: from cuda.sgi.com (cuda1.sgi.com [192.48.157.11])
by relay3.corp.sgi.com (Postfix) with ESMTP id 1AE7DAC001
for ; Wed, 1 Apr 2015 12:53:37 -0700 (PDT)
X-ASG-Debug-ID: 1427918010-04bdf036252d92a0001-NocioJ
Received: from mail-qc0-f173.google.com (mail-qc0-f173.google.com [209.85.216.173]) by cuda.sgi.com with ESMTP id kyGoTO0fpMY9kQSQ (version=TLSv1 cipher=RC4-SHA bits=128 verify=NO) for ; Wed, 01 Apr 2015 12:53:31 -0700 (PDT)
X-Barracuda-Envelope-From: kdhall@binghamton.edu
X-Barracuda-Apparent-Source-IP: 209.85.216.173
Received: by qcay5 with SMTP id y5so50657350qca.1
for ; Wed, 01 Apr 2015 12:53:30 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
d=1e100.net; s=20130820;
h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
:cc:subject:references:in-reply-to:content-type;
bh=63afQYk4HEie/+syY9QksjdvgIzMLYXVBKzFYr1ewO8=;
b=BOS6SwAybLP8hnNBBAVwz6S5t4VEhRRCQs7kGtpaXVF+MWq48Fbg8UVNQcBdue28te
DzZRStBrYYsbInVy7yUFn0SPV8OAzIi80TSucO/JoU01uAn5OdwMPTNiS9RDw0M7lmPa
xX5u/zc5/4UvfsPsiKErzvB5+/dkJdJwEdbxme6L93AnTQbLhJb/xPjpN5/7LNZxnaPg
F4QTaXKXqKPMWMJb4vpi0rwSlkeL8uBdll4KYDMUObpZqnHcVtmua/sDa8N3q71FeTwz
6y46tU/oIPZJpWXFGpo/C2/JHE63gaAItrU8+fC6/RdOEyJlqKQd+IekpKBa6jMrFG4O
j5fw==
X-Gm-Message-State: ALoCoQnFzxEDnbX7EFLA7+Hc2/B86Q2dqlq6EH+eHMG0WI9DxwxqKrqXRexgTi75ysRLPMNVQwGx
X-Received: by 10.229.122.70 with SMTP id k6mr42937219qcr.27.1427918010310;
Wed, 01 Apr 2015 12:53:30 -0700 (PDT)
Received: from [128.226.118.196] (omega.cs.binghamton.edu. [128.226.118.196])
by mx.google.com with ESMTPSA id h128sm1959938qhc.6.2015.04.01.12.53.29
(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
Wed, 01 Apr 2015 12:53:29 -0700 (PDT)
Message-ID: <551C4CB8.7@binghamton.edu>
Date: Wed, 01 Apr 2015 15:53:28 -0400
From: Dave Hall
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.16) Gecko/20121215 Icedove/3.0.11
MIME-Version: 1.0
To: Dave Chinner
CC: xfs@oss.sgi.com
Subject: Re: Slightly Urgent: XFS No Space Left On Device
References: <551993CF.4060908@binghamton.edu> <20150330194510.GD28621@dastard>
X-ASG-Orig-Subj: Re: Slightly Urgent: XFS No Space Left On Device
In-Reply-To: <20150330194510.GD28621@dastard>
Content-Type: multipart/alternative;
boundary="------------000606080309030203050905"
X-Barracuda-Connect: mail-qc0-f173.google.com[209.85.216.173]
X-Barracuda-Start-Time: 1427918011
X-Barracuda-Encrypted: RC4-SHA
X-Barracuda-URL: http://192.48.157.11:80/cgi-mod/mark.cgi
X-Barracuda-BRTS-Status: 1
X-Virus-Scanned: by bsmtpd at sgi.com
X-Barracuda-Spam-Score: 0.00
X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=1000.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.7 tests=HTML_MESSAGE
X-Barracuda-Spam-Report: Code version 3.2, rules version 3.2.3.17450
Rule breakdown below
pts rule name description
---- ---------------------- --------------------------------------------------
0.00 HTML_MESSAGE BODY: HTML included in message
This is a multi-part message in MIME format.
--------------000606080309030203050905
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Please pardon the 'top-post', but here is the additional information
requested:
This is a Dell R720xd dual 8-core Xeon system with 128 GB RAM. The RAID
controller is a Dell PERC H710 Mini with 12 × 2 TB disks in RAID 6.
The OS is Debian 6 with kernel 3.2.0-0.bpo.4-amd64 #1 SMP Debian
3.2.65-1+deb7u2~bpo60+1 x86_64.
From /proc/mounts:
/dev/sdb1 /data xfs rw,noexec,noatime,attr2,delaylog,allocsize=64k,logbsize=64k,sunit=128,swidth=1280,usrquota,prjquota 0 0
Content-wise there are 7 first-level directories. Four contain just a
couple of files; one of these holds a 4.9 TB file. The other 3
directories are multi-terabyte, but contain many hundreds of thousands
of smaller files. There are nearly 5 million files in about 6500
directories, but fewer than 500 files are over 1 GB in size, with only 200
over 20 GB and fewer than 10 over 1 TB.
The output from xfs_info was previously included, but is repeated here:
# xfs_info /data
meta-data=/dev/sdb1              isize=256    agcount=19, agsize=268435440 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=4882431488, imaxpct=5
         =                       sunit=16     swidth=160 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
# df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 19T 12T 7.0T 62% /data
# df -ih .
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sdb1 3.7G 4.7M 3.7G 1% /data
Here are the more extensive freesp outputs for each of the 19 AGs:
# xfs_db -r /dev/sdb1 -c 'freesp -s -a0'
from to extents blocks pct
1 1 747 747 19.68
2 3 1045 2496 65.77
4 7 138 552 14.55
total free extents 1930
total free blocks 3795
average free extent size 1.96632
(I don't recall the output from AG0 being so terse on Monday when I
first posted, but the summary information is the same.)
# xfs_db -r /dev/sdb1 -c 'freesp -s -a1'
from to extents blocks pct
1 1 4494 4494 0.00
2 3 42096 91313 0.05
4 7 41096 232953 0.13
8 15 121930 1401067 0.81
16 31 44994 1067002 0.61
32 63 4 209 0.00
64 127 20 1888 0.00
128 255 38 7408 0.00
256 511 14728 5936038 3.41
512 1023 308 246748 0.14
1024 2047 14893 22978919 13.22
2048 4095 1229 4118315 2.37
4096 8191 778 4743029 2.73
8192 16383 329 3694322 2.13
16384 32767 51 1098154 0.63
32768 65535 3 98794 0.06
65536 131071 3 275197 0.16
131072 262143 4 957177 0.55
1048576 2097151 1 1968807 1.13
2097152 4194303 1 3085945 1.78
4194304 8388607 1 5131888 2.95
8388608 16777215 3 33124064 19.06
16777216 33554431 1 28684574 16.50
33554432 67108863 1 54883950 31.57
total free extents 287006
total free blocks 173832255
average free extent size 605.675
# xfs_db -r /dev/sdb1 -c 'freesp -s -a2'
from to extents blocks pct
1 1 5695 5695 0.01
2 3 15405 39309 0.04
4 7 52230 296302 0.31
8 15 112686 1303036 1.38
16 31 967 20943 0.02
32 63 67 2983 0.00
64 127 343 31251 0.03
128 255 100 17428 0.02
256 511 76672 30821379 32.69
512 1023 4 2800 0.00
1024 2047 7537 11762194 12.47
2048 4095 326 1130975 1.20
4096 8191 251 1615591 1.71
8192 16383 105 1184516 1.26
16384 32767 14 274014 0.29
65536 131071 1 73535 0.08
131072 262143 1 234632 0.25
262144 524287 2 788639 0.84
524288 1048575 1 738305 0.78
1048576 2097151 17 34302421 36.38
8388608 16777215 1 9645304 10.23
total free extents 272425
total free blocks 94291252
average free extent size 346.118
# xfs_db -r /dev/sdb1 -c 'freesp -s -a3'
from to extents blocks pct
1 1 5793 5793 0.01
2 3 30667 70359 0.06
4 7 53174 301241 0.27
8 15 129098 1480652 1.34
16 31 4875 116797 0.11
32 63 148 6755 0.01
64 127 13192 1200672 1.09
128 255 1754 286342 0.26
256 511 35132 14122824 12.81
512 1023 6 4349 0.00
1024 2047 11609 18155617 16.47
2048 4095 447 1557312 1.41
4096 8191 342 2120360 1.92
8192 16383 147 1685429 1.53
16384 32767 21 438149 0.40
32768 65535 5 221907 0.20
65536 131071 4 384869 0.35
131072 262143 3 576503 0.52
524288 1048575 1 524974 0.48
1048576 2097151 2 2718327 2.47
33554432 67108863 1 64229173 58.28
total free extents 286421
total free blocks 110208404
average free extent size 384.778
# xfs_db -r /dev/sdb1 -c 'freesp -s -a4'
from to extents blocks pct
1 1 5399 5399 0.01
2 3 29098 67289 0.06
4 7 50889 287977 0.27
8 15 125018 1433485 1.34
16 31 4601 108565 0.10
32 63 86 3986 0.00
64 127 12587 1145709 1.07
128 255 1537 250472 0.23
256 511 35982 14464615 13.50
512 1023 2 1039 0.00
1024 2047 11074 17306417 16.16
2048 4095 428 1488906 1.39
4096 8191 343 2130436 1.99
8192 16383 141 1574556 1.47
16384 32767 22 437491 0.41
32768 65535 2 74530 0.07
65536 131071 2 198418 0.19
131072 262143 2 399680 0.37
262144 524287 1 278259 0.26
524288 1048575 1 858623 0.80
2097152 4194303 1 2357798 2.20
4194304 8388607 1 7007241 6.54
8388608 16777215 2 24665312 23.03
16777216 33554431 1 30572144 28.54
total free extents 277220
total free blocks 107118347
average free extent size 386.402
# xfs_db -r /dev/sdb1 -c 'freesp -s -a5'
from to extents blocks pct
1 1 5623 5623 0.01
2 3 28053 65224 0.06
4 7 51000 288250 0.27
8 15 122593 1405739 1.32
16 31 4439 104165 0.10
32 63 107 4913 0.00
64 127 10904 992287 0.93
128 255 1458 237872 0.22
256 511 37480 15066766 14.19
512 1023 4 3298 0.00
1024 2047 11035 17206454 16.20
2048 4095 416 1447533 1.36
4096 8191 367 2264983 2.13
8192 16383 132 1507258 1.42
16384 32767 17 369018 0.35
32768 65535 5 252737 0.24
65536 131071 1 93292 0.09
131072 262143 2 369218 0.35
262144 524287 1 371390 0.35
8388608 16777215 1 11907027 11.21
16777216 33554431 1 17447945 16.43
33554432 67108863 1 34801264 32.77
total free extents 273640
total free blocks 106212256
average free extent size 388.146
# xfs_db -r /dev/sdb1 -c 'freesp -s -a6'
from to extents blocks pct
1 1 5485 5485 0.01
2 3 28092 65622 0.06
4 7 51124 288408 0.27
8 15 122946 1411945 1.32
16 31 4295 99036 0.09
32 63 136 6165 0.01
64 127 10723 975901 0.91
128 255 1393 227148 0.21
256 511 37816 15202240 14.21
512 1023 9 6955 0.01
1024 2047 11001 17139027 16.02
2048 4095 452 1570875 1.47
4096 8191 310 1937437 1.81
8192 16383 140 1622878 1.52
16384 32767 22 432606 0.40
32768 65535 3 119928 0.11
65536 131071 2 201539 0.19
131072 262143 1 242792 0.23
524288 1048575 2 1642283 1.53
1048576 2097151 2 2522760 2.36
4194304 8388607 2 9405762 8.79
16777216 33554431 2 51878521 48.48
total free extents 273958
total free blocks 107005313
average free extent size 390.59
# xfs_db -r /dev/sdb1 -c 'freesp -s -a7'
from to extents blocks pct
1 1 5728 5728 0.01
2 3 27342 63963 0.06
4 7 51098 288588 0.27
8 15 122083 1400413 1.29
16 31 4154 95945 0.09
32 63 125 5696 0.01
64 127 10490 954737 0.88
128 255 1377 225554 0.21
256 511 38215 15362799 14.12
512 1023 5 3014 0.00
1024 2047 11138 17383490 15.98
2048 4095 446 1547400 1.42
4096 8191 314 1940099 1.78
8192 16383 138 1553781 1.43
16384 32767 26 526808 0.48
32768 65535 5 198738 0.18
65536 131071 3 306072 0.28
131072 262143 1 204457 0.19
524288 1048575 1 675084 0.62
4194304 8388607 1 6256240 5.75
8388608 16777215 1 16700425 15.35
16777216 33554431 2 43106323 39.62
total free extents 272693
total free blocks 108805354
average free extent size 399.003
# xfs_db -r /dev/sdb1 -c 'freesp -s -a8'
from to extents blocks pct
1 1 5545 5545 0.01
2 3 27537 64379 0.06
4 7 50486 284834 0.28
8 15 121719 1398087 1.35
16 31 4169 96146 0.09
32 63 140 6404 0.01
64 127 10168 925246 0.90
128 255 1347 219934 0.21
256 511 38396 15435162 14.96
512 1023 9 6657 0.01
1024 2047 11038 17234155 16.70
2048 4095 411 1427988 1.38
4096 8191 337 2110360 2.04
8192 16383 134 1540661 1.49
16384 32767 29 608663 0.59
32768 65535 4 194772 0.19
65536 131071 1 103722 0.10
131072 262143 1 204540 0.20
1048576 2097151 1 1177573 1.14
16777216 33554431 1 19036961 18.45
33554432 67108863 1 41120777 39.84
total free extents 271474
total free blocks 103202566
average free extent size 380.156
# xfs_db -r /dev/sdb1 -c 'freesp -s -a9'
from to extents blocks pct
1 1 5614 5614 0.01
2 3 27343 63817 0.06
4 7 50789 286921 0.26
8 15 122085 1402116 1.28
16 31 4116 95310 0.09
32 63 152 6954 0.01
64 127 10679 971872 0.89
128 255 1315 215145 0.20
256 511 38557 15499672 14.19
512 1023 6 4435 0.00
1024 2047 11119 17330956 15.86
2048 4095 428 1485414 1.36
4096 8191 313 1932235 1.77
8192 16383 158 1823615 1.67
16384 32767 20 427607 0.39
32768 65535 4 162954 0.15
65536 131071 1 74125 0.07
262144 524287 2 782823 0.72
524288 1048575 1 979230 0.90
4194304 8388607 1 6064549 5.55
33554432 67108863 1 59625070 54.58
total free extents 272704
total free blocks 109240434
average free extent size 400.582
# xfs_db -r /dev/sdb1 -c 'freesp -s -a10'
from to extents blocks pct
1 1 5451 5451 0.01
2 3 27619 64469 0.06
4 7 50888 287306 0.27
8 15 122129 1401775 1.30
16 31 4156 96465 0.09
32 63 112 5115 0.00
64 127 10378 944415 0.87
128 255 1336 218180 0.20
256 511 38056 15298154 14.15
512 1023 6 3630 0.00
1024 2047 10908 17025890 15.75
2048 4095 443 1541035 1.43
4096 8191 326 2036141 1.88
8192 16383 150 1670607 1.55
16384 32767 23 497495 0.46
32768 65535 6 259503 0.24
65536 131071 1 80765 0.07
131072 262143 2 466041 0.43
8388608 16777215 2 24552174 22.72
16777216 33554431 2 41626231 38.51
total free extents 271994
total free blocks 108080842
average free extent size 397.365
# xfs_db -r /dev/sdb1 -c 'freesp -s -a11'
from to extents blocks pct
1 1 5753 5753 0.01
2 3 28506 66164 0.06
4 7 51222 289018 0.27
8 15 122115 1400237 1.31
16 31 4325 100622 0.09
32 63 121 5515 0.01
64 127 11218 1020941 0.95
128 255 1419 231469 0.22
256 511 37233 14967258 13.96
512 1023 13 10433 0.01
1024 2047 11040 17243570 16.08
2048 4095 438 1528105 1.42
4096 8191 313 1948122 1.82
8192 16383 137 1545209 1.44
16384 32767 17 340315 0.32
32768 65535 3 135239 0.13
524288 1048575 1 806510 0.75
1048576 2097151 1 1670160 1.56
2097152 4194303 1 3359120 3.13
4194304 8388607 1 4927086 4.59
8388608 16777215 2 26372734 24.59
16777216 33554431 1 29269614 27.29
total free extents 273880
total free blocks 107243194
average free extent size 391.57
# xfs_db -r /dev/sdb1 -c 'freesp -s -a12'
from to extents blocks pct
1 1 5373 5373 0.01
2 3 27530 64216 0.06
4 7 50788 286603 0.27
8 15 121652 1396720 1.31
16 31 4188 97008 0.09
32 63 71 3299 0.00
64 127 10446 950836 0.89
128 255 1349 220210 0.21
256 511 37835 15209592 14.28
512 1023 1 918 0.00
1024 2047 10950 17087135 16.04
2048 4095 416 1445170 1.36
4096 8191 341 2103801 1.98
8192 16383 146 1643458 1.54
16384 32767 27 551354 0.52
32768 65535 5 173876 0.16
65536 131071 3 273193 0.26
262144 524287 2 695714 0.65
524288 1048575 2 1740580 1.63
16777216 33554431 1 22797321 21.40
33554432 67108863 1 39761770 37.33
total free extents 271127
total free blocks 106508147
average free extent size 392.835
# xfs_db -r /dev/sdb1 -c 'freesp -s -a13'
from to extents blocks pct
1 1 5756 5756 0.01
2 3 27074 63268 0.06
4 7 50796 287174 0.26
8 15 121675 1397015 1.28
16 31 4260 98417 0.09
32 63 136 6191 0.01
64 127 10324 939549 0.86
128 255 1315 215314 0.20
256 511 39195 15756002 14.40
512 1023 8 5675 0.01
1024 2047 11129 17335479 15.84
2048 4095 419 1457554 1.33
4096 8191 321 2012733 1.84
8192 16383 143 1666063 1.52
16384 32767 23 460740 0.42
32768 65535 2 103286 0.09
65536 131071 2 193585 0.18
262144 524287 1 356370 0.33
33554432 67108863 1 67081225 61.29
total free extents 272580
total free blocks 109441396
average free extent size 401.502
# xfs_db -r /dev/sdb1 -c 'freesp -s -a14'
from to extents blocks pct
1 1 5585 5585 0.01
2 3 26740 62793 0.06
4 7 50781 286750 0.27
8 15 120804 1388061 1.30
16 31 4186 96930 0.09
32 63 160 7192 0.01
64 127 9898 900897 0.84
128 255 1318 215049 0.20
256 511 38427 15446911 14.48
512 1023 7 4130 0.00
1024 2047 11116 17330340 16.25
2048 4095 390 1356701 1.27
4096 8191 307 1917633 1.80
8192 16383 150 1679866 1.57
16384 32767 24 490742 0.46
32768 65535 3 156921 0.15
65536 131071 3 290496 0.27
524288 1048575 1 715032 0.67
1048576 2097151 1 1570472 1.47
33554432 67108863 1 62750353 58.83
total free extents 269902
total free blocks 106672854
average free extent size 395.228
# xfs_db -r /dev/sdb1 -c 'freesp -s -a15'
from to extents blocks pct
1 1 5734 5734 0.01
2 3 15777 40616 0.05
4 7 51372 290289 0.39
8 15 121640 1396823 1.88
16 31 3105 69153 0.09
32 63 14 700 0.00
64 127 19 1760 0.00
128 255 3157 530350 0.71
256 511 18 7797 0.01
512 1023 7 4504 0.01
1024 2047 44155 71890115 96.61
2048 4095 5 13601 0.02
4096 8191 4 20168 0.03
8192 16383 3 24586 0.03
16384 32767 4 80524 0.11
32768 65535 1 37430 0.05
total free extents 245015
total free blocks 74414150
average free extent size 303.713
# xfs_db -r /dev/sdb1 -c 'freesp -s -a16'
from to extents blocks pct
1 1 5458 5458 0.01
2 3 29896 69017 0.07
4 7 50646 286147 0.28
8 15 123250 1414603 1.37
16 31 4257 99155 0.10
32 63 112 5139 0.00
64 127 13228 1203844 1.17
128 255 1363 222544 0.22
256 511 31264 12567433 12.17
512 1023 8 5911 0.01
1024 2047 11091 17306832 16.76
2048 4095 452 1573760 1.52
4096 8191 356 2239416 2.17
8192 16383 135 1522673 1.47
16384 32767 17 350269 0.34
32768 65535 3 122543 0.12
65536 131071 3 374987 0.36
131072 262143 5 1169749 1.13
524288 1048575 2 1884165 1.83
1048576 2097151 1 1237015 1.20
8388608 16777215 1 9194667 8.91
16777216 33554431 2 50384042 48.80
total free extents 271550
total free blocks 103239369
average free extent size 380.185
# xfs_db -r /dev/sdb1 -c 'freesp -s -a17'
from to extents blocks pct
1 1 5788 5788 0.01
2 3 26404 61921 0.06
4 7 50904 287563 0.27
8 15 120710 1385219 1.30
16 31 4204 97175 0.09
32 63 76 3490 0.00
64 127 10045 914186 0.86
128 255 1392 228552 0.21
256 511 36867 14820192 13.90
512 1023 7 4938 0.00
1024 2047 11286 17637792 16.54
2048 4095 441 1532071 1.44
4096 8191 334 2078958 1.95
8192 16383 123 1419610 1.33
16384 32767 19 396082 0.37
32768 65535 5 224967 0.21
65536 131071 4 362807 0.34
131072 262143 1 155224 0.15
262144 524287 2 866414 0.81
524288 1048575 1 999449 0.94
1048576 2097151 1 1158766 1.09
2097152 4194303 1 2528878 2.37
8388608 16777215 3 39151859 36.72
16777216 33554431 1 20313097 19.05
total free extents 268619
total free blocks 106634998
average free extent size 396.975
# xfs_db -r /dev/sdb1 -c 'freesp -s -a18'
from to extents blocks pct
1 1 5588 5588 0.03
2 3 24900 58887 0.32
4 7 50929 287739 1.56
8 15 120592 1386142 7.52
16 31 4089 93924 0.51
32 63 141 6372 0.03
64 127 8468 770640 4.18
128 255 1339 218783 1.19
256 511 22 8582 0.05
512 1023 4 2711 0.01
1024 2047 9719 15333235 83.15
2048 4095 4 10960 0.06
4096 8191 1 4097 0.02
8192 16383 1 8769 0.05
16384 32767 2 32791 0.18
32768 65535 5 210714 1.14
total free extents 225804
total free blocks 18439934
average free extent size 81.6635
Dave Hall
Binghamton University
kdhall@binghamton.edu
607-760-2328 (Cell)
607-777-4641 (Office)
On 03/30/2015 03:45 PM, Dave Chinner wrote:
> On Mon, Mar 30, 2015 at 02:19:59PM -0400, Dave Hall wrote:
>
>> Hello,
>>
>> I have an XFS file system that's getting 'No space left on device'
>> errors. xfs_fsr also complains of 'No space left'. The XFS Info
>> is:
>>
>> # xfs_info /data
>> meta-data=/dev/sdb1              isize=256    agcount=19, agsize=268435440 blks
>>          =                       sectsz=512   attr=2
>> data     =                       bsize=4096   blocks=4882431488, imaxpct=5
>>          =                       sunit=16     swidth=160 blks
>> naming   =version 2              bsize=4096   ascii-ci=0
>> log      =internal               bsize=4096   blocks=521728, version=2
>>          =                       sectsz=512   sunit=16 blks, lazy-count=1
>> realtime =none                   extsz=4096   blocks=0, rtextents=0
>>
>> # df -h .
>> Filesystem Size Used Avail Use% Mounted on
>> /dev/sdb1 19T 12T 7.0T 62% /data
>> # df -ih .
>> Filesystem Inodes IUsed IFree IUse% Mounted on
>> /dev/sdb1 3.7G 4.7M 3.7G 1% /data
>>
>>
>> xfs_db freesp shows that AG 0 seems to be full. I've included the
>> freesp for the first few AGs, but the rest seem pretty consistent
>> with AGs 1 - 4 that I've included below.
>>
>> xfs_db> freesp -s -e 1000000000 -a 0
>>
> Can you please drop the "-e 1000000" from these commands and post it
> again? The histogram of differing free space sizes is information
> we actually need to diagnose the problem...
>
> Also, kernel version, mount options and machine details are
> necessary to determine why AG0 might be full (e.g. allocation policy
> in use).
>
> Cheers,
>
> Dave.
>
Please pardon the 'top-post', but here is the additional information
requested:
This is a Dell R720xd dual 8-core Xeon system with 128GB RAM. The RAID
controller is Dell PERC H710 Mini with 12 2TB disks in RAID6.
The OS is Debian 6 with kernel 3.2.0-0.bpo.4-amd64 #1 SMP Debian
3.2.65-1+deb7u2~bpo60+1 x86_64.
From /proc/mounts:
/dev/sdb1 /data xfs
rw,noexec,noatime,attr2,delaylog,allocsize=64k,logbsize=64k,sunit=128,swidth=1280,usrquota,prjquota
0 0
Content-wise, there are 7 first-level directories. Four contain just a
couple of files; one of these holds a 4.9TB file. The other 3
directories are multi-terabyte, but contain many hundreds of thousands
of smaller files. There are nearly 5 million files in about 6500
directories, yet fewer than 500 files are over 1GB in size, with only
200 over 20GB and fewer than 10 over 1TB.
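A census like that can be approximated with GNU find (a sketch; `/data` is the mount point above, and GNU find's `-size` suffixes like `G` are binary units, so `+1G` means larger than 1 GiB):

```shell
# Count files and directories under the data mount by size class.
DATA=${DATA:-/data}    # mount point from the report above
if [ -d "$DATA" ]; then
    find "$DATA" -type f | wc -l                  # total files
    find "$DATA" -type d | wc -l                  # total directories
    find "$DATA" -type f -size +1G | wc -l        # files larger than 1 GiB
    find "$DATA" -type f -size +20G | wc -l       # files larger than 20 GiB
    find "$DATA" -type f -size +1024G | wc -l     # files larger than 1 TiB
fi
```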
The output from xfs_info was previously included, but is repeated here:
# xfs_info /data
meta-data=/dev/sdb1              isize=256    agcount=19, agsize=268435440 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=4882431488, imaxpct=5
         =                       sunit=16     swidth=160 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
# df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 19T 12T 7.0T 62% /data
# df -ih .
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sdb1 3.7G 4.7M 3.7G 1% /data
Here are the more extensive freesp outputs for each of the 19 AGs:
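Dumps like the ones below can be collected for every AG in one pass with a small loop (a sketch; DEV and MNT are the device and mount point from this report, and agcount is pulled out of the xfs_info output shown above):

```shell
# Print the freesp summary histogram for each allocation group in turn.
DEV=${DEV:-/dev/sdb1}
MNT=${MNT:-/data}
if [ -b "$DEV" ]; then
    # Extract the AG count from xfs_info's "agcount=19," field.
    agcount=$(xfs_info "$MNT" | sed -n 's/.*agcount=\([0-9]\+\).*/\1/p')
    for ag in $(seq 0 $((agcount - 1))); do
        echo "=== AG $ag ==="
        xfs_db -r "$DEV" -c "freesp -s -a$ag"
    done
fi
```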
# xfs_db -r /dev/sdb1 -c 'freesp -s -a0'
from to extents blocks pct
1 1 747 747 19.68
2 3 1045 2496 65.77
4 7 138 552 14.55
total free extents 1930
total free blocks 3795
average free extent size 1.96632
(I don't recall the output from AG0 being so terse on Monday when I
first posted, but the summary information is the same.)
# xfs_db -r /dev/sdb1 -c 'freesp -s -a1'
from to extents blocks pct
1 1 4494 4494 0.00
2 3 42096 91313 0.05
4 7 41096 232953 0.13
8 15 121930 1401067 0.81
16 31 44994 1067002 0.61
32 63 4 209 0.00
64 127 20 1888 0.00
128 255 38 7408 0.00
256 511 14728 5936038 3.41
512 1023 308 246748 0.14
1024 2047 14893 22978919 13.22
2048 4095 1229 4118315 2.37
4096 8191 778 4743029 2.73
8192 16383 329 3694322 2.13
16384 32767 51 1098154 0.63
32768 65535 3 98794 0.06
65536 131071 3 275197 0.16
131072 262143 4 957177 0.55
1048576 2097151 1 1968807 1.13
2097152 4194303 1 3085945 1.78
4194304 8388607 1 5131888 2.95
8388608 16777215 3 33124064 19.06
16777216 33554431 1 28684574 16.50
33554432 67108863 1 54883950 31.57
total free extents 287006
total free blocks 173832255
average free extent size 605.675
# xfs_db -r /dev/sdb1 -c 'freesp -s -a2'
from to extents blocks pct
1 1 5695 5695 0.01
2 3 15405 39309 0.04
4 7 52230 296302 0.31
8 15 112686 1303036 1.38
16 31 967 20943 0.02
32 63 67 2983 0.00
64 127 343 31251 0.03
128 255 100 17428 0.02
256 511 76672 30821379 32.69
512 1023 4 2800 0.00
1024 2047 7537 11762194 12.47
2048 4095 326 1130975 1.20
4096 8191 251 1615591 1.71
8192 16383 105 1184516 1.26
16384 32767 14 274014 0.29
65536 131071 1 73535 0.08
131072 262143 1 234632 0.25
262144 524287 2 788639 0.84
524288 1048575 1 738305 0.78
1048576 2097151 17 34302421 36.38
8388608 16777215 1 9645304 10.23
total free extents 272425
total free blocks 94291252
# xfs_db -r /dev/sdb1 -c 'freesp -s -a3'
from to extents blocks pct
1 1 5793 5793 0.01
2 3 30667 70359 0.06
4 7 53174 301241 0.27
8 15 129098 1480652 1.34
16 31 4875 116797 0.11
32 63 148 6755 0.01
64 127 13192 1200672 1.09
128 255 1754 286342 0.26
256 511 35132 14122824 12.81
512 1023 6 4349 0.00
1024 2047 11609 18155617 16.47
2048 4095 447 1557312 1.41
4096 8191 342 2120360 1.92
8192 16383 147 1685429 1.53
16384 32767 21 438149 0.40
32768 65535 5 221907 0.20
65536 131071 4 384869 0.35
131072 262143 3 576503 0.52
524288 1048575 1 524974 0.48
1048576 2097151 2 2718327 2.47
33554432 67108863 1 64229173 58.28
total free extents 286421
total free blocks 110208404
average free extent size 384.778
# xfs_db -r /dev/sdb1 -c 'freesp -s -a4'
from to extents blocks pct
1 1 5399 5399 0.01
2 3 29098 67289 0.06
4 7 50889 287977 0.27
8 15 125018 1433485 1.34
16 31 4601 108565 0.10
32 63 86 3986 0.00
64 127 12587 1145709 1.07
128 255 1537 250472 0.23
256 511 35982 14464615 13.50
512 1023 2 1039 0.00
1024 2047 11074 17306417 16.16
2048 4095 428 1488906 1.39
4096 8191 343 2130436 1.99
8192 16383 141 1574556 1.47
16384 32767 22 437491 0.41
32768 65535 2 74530 0.07
65536 131071 2 198418 0.19
131072 262143 2 399680 0.37
262144 524287 1 278259 0.26
524288 1048575 1 858623 0.80
2097152 4194303 1 2357798 2.20
4194304 8388607 1 7007241 6.54
8388608 16777215 2 24665312 23.03
16777216 33554431 1 30572144 28.54
total free extents 277220
total free blocks 107118347
average free extent size 386.402
# xfs_db -r /dev/sdb1 -c 'freesp -s -a5'
from to extents blocks pct
1 1 5623 5623 0.01
2 3 28053 65224 0.06
4 7 51000 288250 0.27
8 15 122593 1405739 1.32
16 31 4439 104165 0.10
32 63 107 4913 0.00
64 127 10904 992287 0.93
128 255 1458 237872 0.22
256 511 37480 15066766 14.19
512 1023 4 3298 0.00
1024 2047 11035 17206454 16.20
2048 4095 416 1447533 1.36
4096 8191 367 2264983 2.13
8192 16383 132 1507258 1.42
16384 32767 17 369018 0.35
32768 65535 5 252737 0.24
65536 131071 1 93292 0.09
131072 262143 2 369218 0.35
262144 524287 1 371390 0.35
8388608 16777215 1 11907027 11.21
16777216 33554431 1 17447945 16.43
33554432 67108863 1 34801264 32.77
total free extents 273640
total free blocks 106212256
average free extent size 388.146
# xfs_db -r /dev/sdb1 -c 'freesp -s -a6'
from to extents blocks pct
1 1 5485 5485 0.01
2 3 28092 65622 0.06
4 7 51124 288408 0.27
8 15 122946 1411945 1.32
16 31 4295 99036 0.09
32 63 136 6165 0.01
64 127 10723 975901 0.91
128 255 1393 227148 0.21
256 511 37816 15202240 14.21
512 1023 9 6955 0.01
1024 2047 11001 17139027 16.02
2048 4095 452 1570875 1.47
4096 8191 310 1937437 1.81
8192 16383 140 1622878 1.52
16384 32767 22 432606 0.40
32768 65535 3 119928 0.11
65536 131071 2 201539 0.19
131072 262143 1 242792 0.23
524288 1048575 2 1642283 1.53
1048576 2097151 2 2522760 2.36
4194304 8388607 2 9405762 8.79
16777216 33554431 2 51878521 48.48
total free extents 273958
total free blocks 107005313
average free extent size 390.59
# xfs_db -r /dev/sdb1 -c 'freesp -s -a7'
from to extents blocks pct
1 1 5728 5728 0.01
2 3 27342 63963 0.06
4 7 51098 288588 0.27
8 15 122083 1400413 1.29
16 31 4154 95945 0.09
32 63 125 5696 0.01
64 127 10490 954737 0.88
128 255 1377 225554 0.21
256 511 38215 15362799 14.12
512 1023 5 3014 0.00
1024 2047 11138 17383490 15.98
2048 4095 446 1547400 1.42
4096 8191 314 1940099 1.78
8192 16383 138 1553781 1.43
16384 32767 26 526808 0.48
32768 65535 5 198738 0.18
65536 131071 3 306072 0.28
131072 262143 1 204457 0.19
524288 1048575 1 675084 0.62
4194304 8388607 1 6256240 5.75
8388608 16777215 1 16700425 15.35
16777216 33554431 2 43106323 39.62
total free extents 272693
total free blocks 108805354
average free extent size 399.003
# xfs_db -r /dev/sdb1 -c 'freesp -s -a8'
from to extents blocks pct
1 1 5545 5545 0.01
2 3 27537 64379 0.06
4 7 50486 284834 0.28
8 15 121719 1398087 1.35
16 31 4169 96146 0.09
32 63 140 6404 0.01
64 127 10168 925246 0.90
128 255 1347 219934 0.21
256 511 38396 15435162 14.96
512 1023 9 6657 0.01
1024 2047 11038 17234155 16.70
2048 4095 411 1427988 1.38
4096 8191 337 2110360 2.04
8192 16383 134 1540661 1.49
16384 32767 29 608663 0.59
32768 65535 4 194772 0.19
65536 131071 1 103722 0.10
131072 262143 1 204540 0.20
1048576 2097151 1 1177573 1.14
16777216 33554431 1 19036961 18.45
33554432 67108863 1 41120777 39.84
total free extents 271474
total free blocks 103202566
average free extent size 380.156
# xfs_db -r /dev/sdb1 -c 'freesp -s -a9'
from to extents blocks pct
1 1 5614 5614 0.01
2 3 27343 63817 0.06
4 7 50789 286921 0.26
8 15 122085 1402116 1.28
16 31 4116 95310 0.09
32 63 152 6954 0.01
64 127 10679 971872 0.89
128 255 1315 215145 0.20
256 511 38557 15499672 14.19
512 1023 6 4435 0.00
1024 2047 11119 17330956 15.86
2048 4095 428 1485414 1.36
4096 8191 313 1932235 1.77
8192 16383 158 1823615 1.67
16384 32767 20 427607 0.39
32768 65535 4 162954 0.15
65536 131071 1 74125 0.07
262144 524287 2 782823 0.72
524288 1048575 1 979230 0.90
4194304 8388607 1 6064549 5.55
33554432 67108863 1 59625070 54.58
total free extents 272704
total free blocks 109240434
average free extent size 400.582
# xfs_db -r /dev/sdb1 -c 'freesp -s -a10'
from to extents blocks pct
1 1 5451 5451 0.01
2 3 27619 64469 0.06
4 7 50888 287306 0.27
8 15 122129 1401775 1.30
16 31 4156 96465 0.09
32 63 112 5115 0.00
64 127 10378 944415 0.87
128 255 1336 218180 0.20
256 511 38056 15298154 14.15
512 1023 6 3630 0.00
1024 2047 10908 17025890 15.75
2048 4095 443 1541035 1.43
4096 8191 326 2036141 1.88
8192 16383 150 1670607 1.55
16384 32767 23 497495 0.46
32768 65535 6 259503 0.24
65536 131071 1 80765 0.07
131072 262143 2 466041 0.43
8388608 16777215 2 24552174 22.72
16777216 33554431 2 41626231 38.51
total free extents 271994
total free blocks 108080842
average free extent size 397.365
# xfs_db -r /dev/sdb1 -c 'freesp -s -a11'
from to extents blocks pct
1 1 5753 5753 0.01
2 3 28506 66164 0.06
4 7 51222 289018 0.27
8 15 122115 1400237 1.31
16 31 4325 100622 0.09
32 63 121 5515 0.01
64 127 11218 1020941 0.95
128 255 1419 231469 0.22
256 511 37233 14967258 13.96
512 1023 13 10433 0.01
1024 2047 11040 17243570 16.08
2048 4095 438 1528105 1.42
4096 8191 313 1948122 1.82
8192 16383 137 1545209 1.44
16384 32767 17 340315 0.32
32768 65535 3 135239 0.13
524288 1048575 1 806510 0.75
1048576 2097151 1 1670160 1.56
2097152 4194303 1 3359120 3.13
4194304 8388607 1 4927086 4.59
8388608 16777215 2 26372734 24.59
16777216 33554431 1 29269614 27.29
total free extents 273880
total free blocks 107243194
average free extent size 391.57
# xfs_db -r /dev/sdb1 -c 'freesp -s -a12'
from to extents blocks pct
1 1 5373 5373 0.01
2 3 27530 64216 0.06
4 7 50788 286603 0.27
8 15 121652 1396720 1.31
16 31 4188 97008 0.09
32 63 71 3299 0.00
64 127 10446 950836 0.89
128 255 1349 220210 0.21
256 511 37835 15209592 14.28
512 1023 1 918 0.00
1024 2047 10950 17087135 16.04
2048 4095 416 1445170 1.36
4096 8191 341 2103801 1.98
8192 16383 146 1643458 1.54
16384 32767 27 551354 0.52
32768 65535 5 173876 0.16
65536 131071 3 273193 0.26
262144 524287 2 695714 0.65
524288 1048575 2 1740580 1.63
16777216 33554431 1 22797321 21.40
33554432 67108863 1 39761770 37.33
total free extents 271127
total free blocks 106508147
average free extent size 392.835
# xfs_db -r /dev/sdb1 -c 'freesp -s -a13'
from to extents blocks pct
1 1 5756 5756 0.01
2 3 27074 63268 0.06
4 7 50796 287174 0.26
8 15 121675 1397015 1.28
16 31 4260 98417 0.09
32 63 136 6191 0.01
64 127 10324 939549 0.86
128 255 1315 215314 0.20
256 511 39195 15756002 14.40
512 1023 8 5675 0.01
1024 2047 11129 17335479 15.84
2048 4095 419 1457554 1.33
4096 8191 321 2012733 1.84
8192 16383 143 1666063 1.52
16384 32767 23 460740 0.42
32768 65535 2 103286 0.09
65536 131071 2 193585 0.18
262144 524287 1 356370 0.33
33554432 67108863 1 67081225 61.29
total free extents 272580
total free blocks 109441396
average free extent size 401.502
# xfs_db -r /dev/sdb1 -c 'freesp -s -a14'
from to extents blocks pct
1 1 5585 5585 0.01
2 3 26740 62793 0.06
4 7 50781 286750 0.27
8 15 120804 1388061 1.30
16 31 4186 96930 0.09
32 63 160 7192 0.01
64 127 9898 900897 0.84
128 255 1318 215049 0.20
256 511 38427 15446911 14.48
512 1023 7 4130 0.00
1024 2047 11116 17330340 16.25
2048 4095 390 1356701 1.27
4096 8191 307 1917633 1.80
8192 16383 150 1679866 1.57
16384 32767 24 490742 0.46
32768 65535 3 156921 0.15
65536 131071 3 290496 0.27
524288 1048575 1 715032 0.67
1048576 2097151 1 1570472 1.47
33554432 67108863 1 62750353 58.83
total free extents 269902
total free blocks 106672854
average free extent size 395.228
From david@fromorbit.com Wed Apr 1 19:12:43 2015
Date: Thu, 2 Apr 2015 11:12:35 +1100
From: Dave Chinner
To: Dave Hall
Cc: xfs@oss.sgi.com
Subject: Re: Slightly Urgent: XFS No Space Left On Device
Message-ID: <20150402001235.GI28621@dastard>
References: <551993CF.4060908@binghamton.edu>
<20150330194510.GD28621@dastard>
<551C4CB8.7@binghamton.edu>
In-Reply-To: <551C4CB8.7@binghamton.edu>
On Wed, Apr 01, 2015 at 03:53:28PM -0400, Dave Hall wrote:
> Please pardon the 'top-post', but here is the additional information
> requested:
>
> This is a Dell R720xd dual 8-core Xeon system with 128GB RAM. The
> RAID controller is Dell PERC H710 Mini with 12 2TB disks in RAID6.
>
> The OS is Debian 6 with kernel 3.2.0-0.bpo.4-amd64 #1 SMP Debian
> 3.2.65-1+deb7u2~bpo60+1 x86_64.
So defaults to inode32 allocation....
> From /proc/mounts:
>
> /dev/sdb1 /data xfs
> rw,noexec,noatime,attr2,delaylog,allocsize=64k,logbsize=64k,sunit=128,swidth=1280,usrquota,prjquota
> 0 0
... and inode64 is not in the mount options.....
> The output from xfs_info was previously included, but is repeated here:
>
> # xfs_info /data
> meta-data=/dev/sdb1 isize=256 agcount=19,agsize=268435440 blks
Inode allocation requires contiguous free space of 16k aligned to 8k
boundaries to allocate new inode chunks. Also, 1TB AGs, so with
inode32, inodes can only be allocated in AG 0.
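The numbers behind that statement check out as follows (a sketch using shell arithmetic; isize, agsize and bsize come from the xfs_info above, and 64 inodes per chunk is the XFS inode-chunk size):

```shell
# A 32-bit inode number can only address inodes in the first
# 2^32 * isize bytes of the filesystem; with isize=256 that is 1 TiB.
echo $(( (1 << 32) * 256 ))     # 1099511627776 bytes reachable with inode32

# Each AG is agsize * bsize bytes -- just under 1 TiB -- so only AG 0
# lies inside the reachable range.
echo $(( 268435440 * 4096 ))    # 1099511562240 bytes per AG

# An inode chunk is 64 inodes: 64 * 256 = 16384 bytes, i.e. 4 blocks of
# 4096 -- hence the need for 16k of contiguous, 8k-aligned free space.
echo $(( 64 * 256 ))            # 16384
```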
> Here are the more extensive freesp outputs for each of the 19 AGs:
>
> # xfs_db -r /dev/sdb1 -c 'freesp -s -a0'
> from to extents blocks pct
> 1 1 747 747 19.68
> 2 3 1045 2496 65.77
> 4 7 138 552 14.55
> total free extents 1930
> total free blocks 3795
> average free extent size 1.96632
And that says you have no correctly aligned free 16k extents that
can be allocated in AG 0. i.e. no more inodes can be allocated, and
that's where the ENOSPC is coming from.
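One way to confirm that directly is to scan the per-extent dump (a sketch; it assumes `freesp -d` prints one `agno agbno length` triple per free extent, which can vary between xfsprogs versions):

```shell
# Count free extents in AG 0 that are at least 4 blocks long (16 KiB at
# bsize=4096) AND start on an even block number (8 KiB aligned).
# A count of zero means no new inode chunk can be allocated in this AG.
DEV=${DEV:-/dev/sdb1}
if [ -b "$DEV" ]; then
    xfs_db -r "$DEV" -c 'freesp -d -a 0' |
        awk 'NF == 3 && $2 ~ /^[0-9]+$/ && $3 >= 4 && $2 % 2 == 0 { n++ }
             END { print n + 0 }'
fi
```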
Unmount, add the inode64 mount option, and you'll be able to
allocate inodes again as they will be allowed to be allocated in
any AG, not just AG 0.
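Concretely, the fix looks like this (a sketch; the mount commands are guarded behind APPLY=1, and the sed rewrite of the fstab line is illustrative, assuming a whitespace-separated entry for /data):

```shell
# One-off fix: remount with inode64 so new inode chunks can land in any
# AG. Set APPLY=1 to actually run the mount commands.
if [ "${APPLY:-0}" = "1" ]; then
    umount /data
    mount -o inode64 /dev/sdb1 /data
fi

# To make it persistent, append inode64 to the options field of the
# /data line in /etc/fstab; a sed expression like this does the rewrite
# (demonstrated here on a sample line rather than the live file):
add_inode64() {
    sed 's|\(/data[[:space:]]\+xfs[[:space:]]\+\)\([^[:space:]]\+\)|\1\2,inode64|'
}
add_inode64 <<'EOF'
/dev/sdb1 /data xfs rw,noexec,noatime 0 0
EOF
# prints: /dev/sdb1 /data xfs rw,noexec,noatime,inode64 0 0
```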
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
From rjevskiy@gmail.com Thu Apr 2 06:42:17 2015
Sender: Dmitry Monakhov
From: Dmitry Monakhov
To: xfs@oss.sgi.com, Dave Chinner
Subject: FYI: xfstests generic/019 result panic. 4.0.0-rc5
Date: Thu, 02 Apr 2015 14:40:26 +0300
Message-ID: <87r3s2g3md.fsf@openvz.org>
Hi, I've been testing a recent kernel, 4.0.0-rc5 (Al Viro's tree,
vfs.git/for-next), and have found two issues (I do not know whether
they have already been fixed in xfs.git, so I just want to let you know).
The first is a panic caused by xfstest generic/019 (the disk-failure
simulation test); see the attachment.
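For context, invoking that test from an xfstests checkout looks roughly like the following configuration sketch; the device and mount-point names are taken from the log below and are assumptions about this particular setup:

```shell
# Environment for xfstests: TEST_* is the long-lived filesystem,
# SCRATCH_* is recreated per test run.
export FSTYP=xfs
export TEST_DEV=/dev/vdc    TEST_DIR=/vdc
export SCRATCH_DEV=/dev/vdd SCRATCH_MNT=/vdd

# Then, from the root of the xfstests checkout:
#   ./check generic/019
```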
[attachment: xfs-generic-019-panic.txt]
/dev/vdb: 68/327680 files (5.9% non-contiguous), 59205/1310720 blocks
FSTESTVER: fio fio-2.2.5-2-g64666f8-dirty (Thu, 22 Jan 2015 00:57:00 +0100)
FSTESTVER: quota 52f4e0a (Mon, 5 Jan 2015 17:13:22 +0100)
FSTESTVER: xfsprogs v3.2.2 (Thu, 4 Dec 2014 07:56:44 +1100)
FSTESTVER: xfstests-bld 5a41f87 (Thu, 22 Jan 2015 17:26:16 +0300)
FSTESTVER: xfstests linux-v3.8-571-gad5c393 (Tue, 20 Jan 2015 15:37:19 +0400)
FSTESTVER: kernel 4.0.0-rc5-196354-gcf5ffe9 #18 SMP Tue Mar 31 17:23:06 MSK 2015 x86_64
FSTESTCFG: "xfs"
FSTESTSET: "generic/019"
FSTESTEXC: ""
FSTESTOPT: "aex"
MNTOPTS: ""
meta-data=/dev/vdd isize=256 agcount=4, agsize=327680 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0
data = bsize=4096 blocks=1310720, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
total used free shared buffers cached
Mem: 1974 69 1905 9 0 17
-/+ buffers/cache: 51 1923
Swap: 0 0 0
xfs_dqtrx 0 0 576 14 2 : tunables 0 0 0 : slabdata 0 0 0
xfs_dquot 0 0 720 22 4 : tunables 0 0 0 : slabdata 0 0 0
xfs_icr 0 0 144 28 1 : tunables 0 0 0 : slabdata 0 0 0
xfs_inode 0 0 1792 18 8 : tunables 0 0 0 : slabdata 0 0 0
xfs_efd_item 0 0 400 20 2 : tunables 0 0 0 : slabdata 0 0 0
xfs_buf_item 34 34 232 17 1 : tunables 0 0 0 : slabdata 2 2 0
xfs_da_state 0 0 480 17 2 : tunables 0 0 0 : slabdata 0 0 0
xfs_btree_cur 0 0 208 19 1 : tunables 0 0 0 : slabdata 0 0 0
xfs_log_ticket 0 0 184 22 1 : tunables 0 0 0 : slabdata 0 0 0
xfs_ioend 52 52 152 26 1 : tunables 0 0 0 : slabdata 2 2 0
BEGIN TEST: XFS Tue Mar 31 13:30:30 UTC 2015
Device: /dev/vdd
mk2fs options:
mount options: -o block_validity
FSTYP -- xfs (debug)
PLATFORM -- Linux/x86_64 kvm-xfstests 4.0.0-rc5-196354-gcf5ffe9
MKFS_OPTIONS -- -f -bsize=4096 /dev/vdc
MOUNT_OPTIONS -- /dev/vdc /vdc
generic/019 [13:30:32][ 17.619593] XFS (vdc): xlog_verify_grant_tail: space > BBTOB(tail_blocks)
[ 41.914283] XFS (vdc): metadata I/O error: block 0x503d1f ("xlog_iodone") error 5 numblks 64
[ 41.917326] XFS (vdc): xfs_bmap_check_leaf_extents: BAD after btree leaves for 6623 extents
[ 41.917376] XFS (vdc): Log I/O Error Detected. Shutting down filesystem
[ 41.917378] XFS (vdc): Please umount the filesystem and rectify the problem(s)
[ 41.918098] fsstress (3180) used greatest stack depth: 11392 bytes left
[ 41.918876] XFS (vdc): metadata I/O error: block 0x503d5f ("xlog_iodone") error 5 numblks 64
[ 41.918966] XFS (vdc): xfs_log_force: error -5 returned.
[ 41.930237] Kernel panic - not syncing: xfs_bmap_check_leaf_extents: CORRUPTED BTREE OR SOMETHING
[ 41.932793] CPU: 0 PID: 3214 Comm: fio Not tainted 4.0.0-rc5-196354-gcf5ffe9 #18
[ 41.933500] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
[ 41.933500] 00000000000019df ffff880072b4f508 ffffffff816effa5 000000000000001e
[ 41.933500] ffffffff81ac0665 ffff880072b4f588 ffffffff816efc10 ffff880000000010
[ 41.933500] ffff880072b4f598 ffff880072b4f538 ffff880072b4f598 ffff880072b4f558
[ 41.933500] Call Trace:
[ 41.933500] [] dump_stack+0x48/0x5b
[ 41.933500] [] panic+0xd4/0x21c
[ 41.933500] [] xfs_bmap_check_leaf_extents+0x495/0x506
[ 41.933500] [] xfs_bmap_add_extent_hole_real+0x786/0x7ae
[ 41.933500] [] xfs_bmapi_write+0x6da/0xbb9
[ 41.933500] [] xfs_iomap_write_direct+0x26d/0x321
[ 41.933500] [] __xfs_get_blocks+0x1cb/0x4a1
[ 41.933500] [] ? trace_hardirqs_on_caller+0x164/0x19b
[ 41.933500] [] xfs_get_blocks_direct+0x14/0x16
[ 41.933500] [] do_blockdev_direct_IO+0x64a/0xb83
[ 41.933500] [] ? local_clock+0x1a/0x23
[ 41.933500] [] ? __xfs_get_blocks+0x4a1/0x4a1
[ 41.933500] [] __blockdev_direct_IO+0x4c/0x4e
[ 41.933500] [] ? __xfs_get_blocks+0x4a1/0x4a1
[ 41.933500] [] ? xfs_setfilesize+0xf3/0xf3
[ 41.933500] [] xfs_vm_direct_IO+0x8a/0x8c
[ 41.933500] [] ? __xfs_get_blocks+0x4a1/0x4a1
[ 41.933500] [] ? xfs_setfilesize+0xf3/0xf3
[ 41.933500] [] generic_file_direct_write+0xc1/0x150
[ 41.933500] [] xfs_file_dio_aio_write+0x21c/0x265
[ 41.933500] [] ? aio_run_iocb+0x163/0x28d
[ 41.933500] [] ? kvm_clock_read+0x1e/0x20
[ 41.933500] [] ? xfs_file_buffered_aio_write+0x1e8/0x1e8
[ 41.933500] [] xfs_file_write_iter+0x7c/0x107
[ 41.933500] [] aio_run_iocb+0x172/0x28d
[ 41.933500] [] ? might_fault+0x42/0x92
[ 41.933500] [] ? might_fault+0x42/0x92
[ 41.933500] [] do_io_submit+0x34c/0x3e3
[ 41.933500] [] SyS_io_submit+0x10/0x12
[ 41.933500] [] system_call_fastpath+0x12/0x17
[ 41.933500] Kernel Offset: 0x0 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffff9fffffff)
[ 41.933500] ---[ end Kernel panic - not syncing: xfs_bmap_check_leaf_extents: CORRUPTED BTREE OR SOMETHING
The second is a lockdep complaint from splice. It looks like a false positive, but still.
[attachment: xfs-lockdep-complain.txt]
[ 468.667585] ======================================================
[ 468.669774] [ INFO: possible circular locking dependency detected ]
[ 468.669983] 4.0.0-rc5-196355-gd50b8fe-dirty #19 Not tainted
[ 468.669983] -------------------------------------------------------
[ 468.669983] splice-fcntl/2950 is trying to acquire lock:
[ 468.669983] (&sb->s_type->i_mutex_key#11){+.+.+.}, at: [] xfs_rw_ilock+0x21/0x31
[ 468.669983]
but task is already holding lock:
[ 468.669983] (&pipe->mutex/1){+.+.+.}, at: [] pipe_lock+0x1c/0x1e
[ 468.669983]
which lock already depends on the new lock.
[ 468.669983]
the existing dependency chain (in reverse order) is:
[ 468.669983]
-> #2 (&pipe->mutex/1){+.+.+.}:
[ 468.669983] [] lock_acquire+0xd7/0x112
[ 468.669983] [] mutex_lock_nested+0x63/0x5ab
[ 468.669983] [] pipe_lock+0x1c/0x1e
[ 468.669983] [] splice_to_pipe+0x2d/0x203
[ 468.669983] [] __generic_file_splice_read+0x41f/0x440
[ 468.669983] [] generic_file_splice_read+0x49/0x73
[ 468.669983] [] xfs_file_splice_read+0xfb/0x144
[ 468.669983] [] do_splice_to+0x74/0x81
[ 468.669983] [] SyS_splice+0x4b6/0x55e
[ 468.669983] [] system_call_fastpath+0x12/0x17
[ 468.669983]
-> #1 (&(&ip->i_iolock)->mr_lock){++++++}:
[ 468.669983] [] lock_acquire+0xd7/0x112
[ 468.669983] [] down_write_nested+0x4b/0xad
[ 468.669983] [] xfs_ilock+0xdb/0x14b
[ 468.669983] [] xfs_rw_ilock+0x2c/0x31
[ 468.669983] [] xfs_file_buffered_aio_write+0x59/0x1e8
[ 468.669983] [] xfs_file_write_iter+0x83/0x107
[ 468.669983] [] new_sync_write+0x64/0x82
[ 468.669983] [] vfs_write+0xb5/0x14d
[ 468.669983] [] SyS_write+0x5c/0x8c
[ 468.669983] [] system_call_fastpath+0x12/0x17
[ 468.669983]
-> #0 (&sb->s_type->i_mutex_key#11){+.+.+.}:
[ 468.669983] [] __lock_acquire+0xbd6/0xefb
[ 468.669983] [] lock_acquire+0xd7/0x112
[ 468.669983] [] mutex_lock_nested+0x63/0x5ab
[ 468.669983] [] xfs_rw_ilock+0x21/0x31
[ 468.669983] [] xfs_file_buffered_aio_write+0x59/0x1e8
[ 468.669983] [] xfs_file_write_iter+0x83/0x107
[ 468.669983] [] vfs_iter_write+0x4c/0x6b
[ 468.669983] [] iter_file_splice_write+0x230/0x33a
[ 468.669983] [] SyS_splice+0x409/0x55e
[ 468.669983] [] system_call_fastpath+0x12/0x17
[ 468.669983]
other info that might help us debug this:
[ 604.889687] serial8250: too much work for irq4
[ 468.669983] Chain exists of:
&sb->s_type->i_mutex_key#11 --> &(&ip->i_iolock)->mr_lock --> &pipe->mutex/1
[ 468.669983] Possible unsafe locking scenario:
[ 468.669983] CPU0 CPU1
[ 468.669983] ---- ----
[ 468.669983] lock(&pipe->mutex/1);
[ 468.669983] lock(&(&ip->i_iolock)->mr_lock);
[ 468.669983] lock(&pipe->mutex/1);
[ 468.669983] lock(&sb->s_type->i_mutex_key#11);
[ 468.669983]
*** DEADLOCK ***
[ 468.669983] 2 locks held by splice-fcntl/2950:
[ 468.669983] #0: (sb_writers#9){.+.+.+}, at: [] SyS_splice+0x3d6/0x55e
[ 468.669983] #1: (&pipe->mutex/1){+.+.+.}, at: [] pipe_lock+0x1c/0x1e
[ 468.669983]
stack backtrace:
[ 468.669983] CPU: 1 PID: 2950 Comm: splice-fcntl Not tainted 4.0.0-rc5-196355-gd50b8fe-dirty #19
[ 468.669983] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
[ 468.669983] ffffffff8247a700 ffff88006b03fa18 ffffffff816eff65 0000000000000001
[ 468.669983] ffffffff824924b0 ffff88006b03fa68 ffffffff810c2fae ffff88006b03fa68
[ 468.669983] ffffffff822781f0 ffff88007c062de0 ffff88007c0624b0 ffff88007c062de0
[ 468.669983] Call Trace:
[ 468.669983] [] dump_stack+0x48/0x5b
[ 468.669983] [] print_circular_bug+0x1f8/0x209
[ 468.669983] [] __lock_acquire+0xbd6/0xefb
[ 468.669983] [] ? xfs_rw_ilock+0x21/0x31
[ 468.669983] [] lock_acquire+0xd7/0x112
[ 468.669983] [] ? xfs_rw_ilock+0x21/0x31
[ 468.669983] [] mutex_lock_nested+0x63/0x5ab
[ 468.669983] [] ? xfs_rw_ilock+0x21/0x31
[ 468.669983] [] ? xfs_rw_ilock+0x21/0x31
[ 468.669983] [] ? mark_held_locks+0x59/0x77
[ 468.669983] [] ? slab_free_hook+0x7a/0x9a
[ 468.669983] [] ? kvm_clock_read+0x1e/0x20
[ 468.669983] [] xfs_rw_ilock+0x21/0x31
[ 468.669983] [] xfs_file_buffered_aio_write+0x59/0x1e8
[ 468.669983] [] ? trace_hardirqs_on_caller+0x164/0x19b
[ 468.669983] [] ? trace_hardirqs_on+0xd/0xf
[ 468.669983] [] ? pipe_lock+0x1c/0x1e
[ 468.669983] [] xfs_file_write_iter+0x83/0x107
[ 468.669983] [] vfs_iter_write+0x4c/0x6b
[ 468.669983] [] iter_file_splice_write+0x230/0x33a
[ 468.669983] [] SyS_splice+0x409/0x55e
[ 468.669983] [] ? __fd_install+0x9f/0xab
[ 468.669983] [] ? trace_hardirqs_on_caller+0x164/0x19b
[ 468.669983] [] system_call_fastpath+0x12/0x17
From kdhall@binghamton.edu Thu Apr 2 09:33:02 2015
Message-ID: <551D5316.8050201@binghamton.edu>
Date: Thu, 02 Apr 2015 10:32:54 -0400
From: Dave Hall
To: Dave Chinner
CC: xfs@oss.sgi.com
Subject: Re: Slightly Urgent: XFS No Space Left On Device
References: <551993CF.4060908@binghamton.edu> <20150330194510.GD28621@dastard> <551C4CB8.7@binghamton.edu> <20150402001235.GI28621@dastard>
In-Reply-To: <20150402001235.GI28621@dastard>
Thanks for the help. Rookie error. I didn't set this mount option,
but I see that it is set for all of the other XFS volumes I have.
I am wondering why XFS would default this way though. Seems like
heuristically you could assume that a large volume on a 64-bit OS would
need 64-bit inodes. At least perhaps put out a message from mkfs.xfs
suggesting the use of inode64 on the mount command?
Thanks.
-Dave
Dave Hall
Binghamton University
kdhall@binghamton.edu
607-760-2328 (Cell)
607-777-4641 (Office)
On 04/01/2015 08:12 PM, Dave Chinner wrote:
> On Wed, Apr 01, 2015 at 03:53:28PM -0400, Dave Hall wrote:
>
>> Please pardon the 'top-post', but here is the additional information
>> requested:
>>
>> This is a Dell R720xd dual 8-core Xeon system with 128GB RAM. The
>> RAID controller is Dell PERC H710 Mini with 12 2TB disks in RAID6.
>>
>> The OS is Debian 6 with kernel 3.2.0-0.bpo.4-amd64 #1 SMP Debian
>> 3.2.65-1+deb7u2~bpo60+1 x86_64.
>>
> So defaults to inode32 allocation....
>
>
>> From /proc/mounts:
>>
>> /dev/sdb1 /data xfs
>> rw,noexec,noatime,attr2,delaylog,allocsize=64k,logbsize=64k,sunit=128,swidth=1280,usrquota,prjquota
>> 0 0
>>
> ... and inode64 is not in the mount options.....
>
>
>> The output from xfs_info was previously included, but is repeated here:
>>
>> # xfs_info /data
>> meta-data=/dev/sdb1 isize=256 agcount=19,agsize=268435440 blks
>>
> Inode allocation requires contiguous free space of 16k aligned to 8k
> boundaries to allocate new inode chunks. Also, 1TB AGs, so with
> inode32, inodes can only be allocated in AG 0.
>
>
>> Here are the more extensive freesp outputs for each of the 19 AGs:
>>
>> # xfs_db -r /dev/sdb1 -c 'freesp -s -a0'
>> from to extents blocks pct
>> 1 1 747 747 19.68
>> 2 3 1045 2496 65.77
>> 4 7 138 552 14.55
>> total free extents 1930
>> total free blocks 3795
>> average free extent size 1.96632
>>
> And that says you have no correctly aligned free 16k extents that
> can be allocated in AG 0. i.e. no more inodes can be allocated, and
> that's where the ENOSPC is coming from.
>
> Unmount, add the inode64 mount option, and you'll be able to
> allocate inodes again as they will be allowed to be allocated in
> any AG, not just AG 0.
>
> Cheers,
>
> Dave.
>
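The arithmetic behind the diagnosis above can be checked with a short sketch. This is a hedged illustration, not XFS code: the 4096-byte block size is assumed (it is the mkfs default), and the inode-number layout follows XFS's AG/block/inode encoding.

```python
import math

# Values taken from the xfs_info and freesp output quoted above.
block_size = 4096          # bytes per filesystem block (assumed default)
isize = 256                # bytes per inode, from xfs_info
agsize = 268435440         # blocks per allocation group, from xfs_info

# An inode chunk is 64 inodes: 64 * 256 bytes = 16 KiB = 4 blocks,
# which must be aligned to 8 KiB (2 blocks) -- hence "contiguous free
# space of 16k aligned to 8k" in the explanation above.
chunk_blocks = 64 * isize // block_size          # -> 4

# XFS inode numbers encode (AG number, block within AG, inode within
# block). Count the bits consumed before the AG number even starts:
inopblog = int(math.log2(block_size // isize))   # 16 inodes/block -> 4 bits
agblklog = math.ceil(math.log2(agsize))          # -> 28 bits for a ~1 TiB AG
bits_before_ag = agblklog + inopblog             # -> 32

# With 32 bits already used for the AG-relative part, any inode outside
# AG 0 needs a number wider than 32 bits, so the inode32 allocator can
# only place new inode chunks in AG 0 -- matching the diagnosis above.

# The freesp histogram for AG 0, as (min, max, extents, blocks) rows:
freesp = [(1, 1, 747, 747), (2, 3, 1045, 2496), (4, 7, 138, 552)]
total_extents = sum(row[2] for row in freesp)    # 1930
total_blocks = sum(row[3] for row in freesp)     # 3795
avg_extent = round(total_blocks / total_extents, 5)  # 1.96632
```

Only the 4-7 bucket could even hold a 4-block inode chunk, and per the reply above none of those 138 extents is correctly aligned, so inode allocation in AG 0 fails and the filesystem returns ENOSPC.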
From neutrino8@gmail.com Thu Apr 2 09:37:01 2015
In-Reply-To: <551D5316.8050201@binghamton.edu>
References: <551993CF.4060908@binghamton.edu>
<20150330194510.GD28621@dastard>
<551C4CB8.7@binghamton.edu>
<20150402001235.GI28621@dastard>
<551D5316.8050201@binghamton.edu>
Date: Thu, 2 Apr 2015 16:36:58 +0200
Message-ID:
Subject: Re: Slightly Urgent: XFS No Space Left On Device
From: Grozdan
To: Dave Hall
Cc: Dave Chinner , Xfs
On Thu, Apr 2, 2015 at 4:32 PM, Dave Hall wrote:
> Thanks for the help. Rookie error. I didn't set these mount options, but I
> see that this option is set for all of the other XFS volumes I have.
>
> I am wondering why XFS would default this way though. Seems like
> heuristically you could assume that a large volume on a 64-bit OS would need
> 64-bit inodes. At least perhaps put out a message from mkfs.xfs suggesting
> the use of inode64 on the mount command?
inode64 has been made the default, even for 32-bit systems, by recent
versions of xfsprogs, so I'd suggest upgrading your xfsprogs.
>
> Thanks.
>
> -Dave
>
> Dave Hall
> Binghamton University
> kdhall@binghamton.edu
> 607-760-2328 (Cell)
> 607-777-4641 (Office)
>
>
> On 04/01/2015 08:12 PM, Dave Chinner wrote:
>>
>> On Wed, Apr 01, 2015 at 03:53:28PM -0400, Dave Hall wrote:
>>
>>>
>>> Please pardon the 'top-post', but here is the additional information
>>> requested:
>>>
>>> This is a Dell R720xd dual 8-core Xeon system with 128GB RAM. The
>>> RAID controller is Dell PERC H710 Mini with 12 2TB disks in RAID6.
>>>
>>> The OS is Debian 6 with kernel 3.2.0-0.bpo.4-amd64 #1 SMP Debian
>>> 3.2.65-1+deb7u2~bpo60+1 x86_64.
>>>
>>
>> So defaults to inode32 allocation....
>>
>>
>>>
>>> From /proc/mounts:
>>>
>>> /dev/sdb1 /data xfs
>>>
>>> rw,noexec,noatime,attr2,delaylog,allocsize=64k,logbsize=64k,sunit=128,swidth=1280,usrquota,prjquota
>>> 0 0
>>>
>>
>> ... and inode64 is not in the mount options.....
>>
>>
>>>
>>> The output from xfs_info was previously included, but is repeated here:
>>>
>>> # xfs_info /data
>>> meta-data=/dev/sdb1 isize=256 agcount=19,agsize=268435440
>>> blks
>>>
>>
>> Inode allocation requires contiguous free space of 16k aligned to 8k
>> boundaries to allocate new inode chunks. Also, 1TB AGs, so with
>> inode32, inodes can only be allocated in AG 0.
>>
>>
>>>
>>> Here are the more extensive freesp outputs for each of the 19 AGs:
>>>
>>> # xfs_db -r /dev/sdb1 -c 'freesp -s -a0'
>>> from to extents blocks pct
>>> 1 1 747 747 19.68
>>> 2 3 1045 2496 65.77
>>> 4 7 138 552 14.55
>>> total free extents 1930
>>> total free blocks 3795
>>> average free extent size 1.96632
>>>
>>
>> And that says you have no correctly aligned free 16k extents that
>> can be allocated in AG 0. i.e. no more inodes can be allocated, and
>> that's where the ENOSPC is coming from.
>>
>> Unmount, add the inode64 mount option, and you'll be able to
>> allocate inodes again as they will be allowed to be allocated in
>> any AG, not just AG 0.
>>
>> Cheers,
>>
>> Dave.
>>
>
>
> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs
--
Yours truly
From neutrino8@gmail.com Thu Apr 2 09:41:40 2015
In-Reply-To:
References: <551993CF.4060908@binghamton.edu>
<20150330194510.GD28621@dastard>
<551C4CB8.7@binghamton.edu>
<20150402001235.GI28621@dastard>
<551D5316.8050201@binghamton.edu>
Date: Thu, 2 Apr 2015 16:41:38 +0200
Message-ID:
Subject: Re: Slightly Urgent: XFS No Space Left On Device
From: Grozdan
To: Dave Hall
Cc: Dave Chinner , Xfs
On Thu, Apr 2, 2015 at 4:36 PM, Grozdan wrote:
> On Thu, Apr 2, 2015 at 4:32 PM, Dave Hall wrote:
>> Thanks for the help. Rookie error. I didn't set these mount options, but I
>> see that this option is set for all of the other XFS volumes I have.
>>
>> I am wondering why XFS would default this way though. Seems like
>> heuristically you could assume that a large volume on a 64-bit OS would need
>> 64-bit inodes. At least perhaps put out a message from mkfs.xfs suggesting
>> the use of inode64 on the mount command?
>
>
> inode64 has been made default, even for 32-bit systems, by recent
> versions of xfsprogs so I'd suggest to upgrade your xfsprogs
Sorry, I was thinking of the crc flag. XFS uses inode64 by default
from kernel versions 3.7 and up.
>
>>
>> Thanks.
>>
>> -Dave
>>
>> Dave Hall
>> Binghamton University
>> kdhall@binghamton.edu
>> 607-760-2328 (Cell)
>> 607-777-4641 (Office)
>>
>>
>> On 04/01/2015 08:12 PM, Dave Chinner wrote:
>>>
>>> On Wed, Apr 01, 2015 at 03:53:28PM -0400, Dave Hall wrote:
>>>
>>>>
>>>> Please pardon the 'top-post', but here is the additional information
>>>> requested:
>>>>
>>>> This is a Dell R720xd dual 8-core Xeon system with 128GB RAM. The
>>>> RAID controller is Dell PERC H710 Mini with 12 2TB disks in RAID6.
>>>>
>>>> The OS is Debian 6 with kernel 3.2.0-0.bpo.4-amd64 #1 SMP Debian
>>>> 3.2.65-1+deb7u2~bpo60+1 x86_64.
>>>>
>>>
>>> So defaults to inode32 allocation....
>>>
>>>
>>>>
>>>> From /proc/mounts:
>>>>
>>>> /dev/sdb1 /data xfs
>>>>
>>>> rw,noexec,noatime,attr2,delaylog,allocsize=64k,logbsize=64k,sunit=128,swidth=1280,usrquota,prjquota
>>>> 0 0
>>>>
>>>
>>> ... and inode64 is not in the mount options.....
>>>
>>>
>>>>
>>>> The output from xfs_info was previously included, but is repeated here:
>>>>
>>>> # xfs_info /data
>>>> meta-data=/dev/sdb1 isize=256 agcount=19,agsize=268435440
>>>> blks
>>>>
>>>
>>> Inode allocation requires contiguous free space of 16k aligned to 8k
>>> boundaries to allocate new inode chunks. Also, 1TB AGs, so with
>>> inode32, inodes can only be allocated in AG 0.
>>>
>>>
>>>>
>>>> Here are the more extensive freesp outputs for each of the 19 AGs:
>>>>
>>>> # xfs_db -r /dev/sdb1 -c 'freesp -s -a0'
>>>> from to extents blocks pct
>>>> 1 1 747 747 19.68
>>>> 2 3 1045 2496 65.77
>>>> 4 7 138 552 14.55
>>>> total free extents 1930
>>>> total free blocks 3795
>>>> average free extent size 1.96632
>>>>
>>>
>>> And that says you have no correctly aligned free 16k extents that
>>> can be allocated in AG 0. i.e. no more inodes can be allocated, and
>>> that's where the ENOSPC is coming from.
>>>
>>> Unmount, add the inode64 mount option, and you'll be able to
>>> allocate inodes again as they will be allowed to be allocated in
>>> any AG, not just AG 0.
>>>
>>> Cheers,
>>>
>>> Dave.
>>>
>>
>>
>> _______________________________________________
>> xfs mailing list
>> xfs@oss.sgi.com
>> http://oss.sgi.com/mailman/listinfo/xfs
>
>
>
> --
> Yours truly
--
Yours truly
From david@fromorbit.com Thu Apr 2 17:36:29 2015
Date: Fri, 3 Apr 2015 09:36:08 +1100
From: Dave Chinner
To: Dave Hall
Cc: xfs@oss.sgi.com
Subject: Re: Slightly Urgent: XFS No Space Left On Device
Message-ID: <20150402223608.GF8465@dastard>
References: <551993CF.4060908@binghamton.edu>
<20150330194510.GD28621@dastard>
<551C4CB8.7@binghamton.edu>
<20150402001235.GI28621@dastard>
<551D5316.8050201@binghamton.edu>
In-Reply-To: <551D5316.8050201@binghamton.edu>
On Thu, Apr 02, 2015 at 10:32:54AM -0400, Dave Hall wrote:
> Thanks for the help. Rookie error. I didn't set these mount
> options, but I see that this option is set for all of the other XFS
> volumes I have.
>
> I am wondering why XFS would default this way though. Seems like
> heuristically you could assume that a large volume on a 64-bit OS
> would need 64-bit inodes.
The historical argument against inode64 on 64 bit machines was NFS
export and 32 bit clients, which could then break if we used 64 bit
inodes. The Linux NFS client handles this just fine on 32bit
machines (has for a few years), so there's no reason for us to care
very much about this anymore....
> At least perhaps put out a message from
> mkfs.xfs suggesting the use of inode64 on the mount command?
inode64 is now the default for upstream kernels (has been for a
couple of years), but older distros are not going to get those
kernel updates, or anything we might have put in xfsprogs...
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
From david@fromorbit.com Thu Apr 2 17:43:30 2015
Date: Fri, 3 Apr 2015 09:43:10 +1100
From: Dave Chinner
To: Dmitry Monakhov
Cc: xfs@oss.sgi.com
Subject: Re: FYI: xfstests generic/019 result panic. 4.0.0-rc5
Message-ID: <20150402224310.GG8465@dastard>
References: <87r3s2g3md.fsf@openvz.org>
In-Reply-To: <87r3s2g3md.fsf@openvz.org>
On Thu, Apr 02, 2015 at 02:40:26PM +0300, Dmitry Monakhov wrote:
>
> Hi I've played with recent kernel 4.0.0-rc5 (AlViro's tree vfs.git/for-next)
>
> And have found two issues (I do not know whenever it was fixed in
> xfs.git already, so I just want to let you know)
> First one is Panic caused by xfstest generic/019 (disk failure
> simulation test) see attachment
.....
>
> generic/019 [13:30:32][ 17.619593] XFS (vdc): xlog_verify_grant_tail: space > BBTOB(tail_blocks)
> [ 41.914283] XFS (vdc): metadata I/O error: block 0x503d1f ("xlog_iodone") error 5 numblks 64
So the test has shut down the filesystem via device pull...
> [ 41.917326] XFS (vdc): xfs_bmap_check_leaf_extents: BAD after btree leaves for 6623 extents
in the middle of a bmbt update operation, which aborted in an
inconsistent state in memory due to shutdown...
> [ 41.917376] XFS (vdc): Log I/O Error Detected. Shutting down filesystem
> [ 41.917378] XFS (vdc): Please umount the filesystem and rectify the problem(s)
> [ 41.918098] fsstress (3180) used greatest stack depth: 11392 bytes left
> [ 41.918876] XFS (vdc): metadata I/O error: block 0x503d5f ("xlog_iodone") error 5 numblks 64
> [ 41.918966] XFS (vdc): xfs_log_force: error -5 returned.
> [ 41.930237] Kernel panic - not syncing: xfs_bmap_check_leaf_extents: CORRUPTED BTREE OR SOMETHING
And debug code detected that inconsistent in-memory state, and threw
out the panic. Production machines won't run this code (it's
CONFIG_XFS_DEBUG=y specific) so they'll just shut down normally.
> The second one is a lockdep complaint from splice. It looks like a false positive, but still.
No, that's a real one. splice has inverted locks and we've been able
to deadlock it since, well, forever. The recent rework that Al Viro
did removed the old lock inversion problem, and created a new one
w.r.t. to the pipe_lock and filesystem locks. I've reported this to
him previously, but I've never got any response about it...
Thanks for the reports, though, Dmitry.
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
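The inversion described above can be illustrated with a toy version of what lockdep does: record each "held A, then acquired B" ordering as a graph edge and look for a cycle. This is a hedged sketch for the archive, not kernel code; the lock names mirror the report earlier in the thread.

```python
# Lock-ordering edges "held -> acquired", from the report's chain:
#   i_mutex --> i_iolock --> pipe->mutex   (recorded on earlier paths)
# plus the new acquisition that triggers the warning:
#   pipe->mutex --> i_mutex                (the splice write path)
edges = {
    "i_mutex": {"i_iolock"},
    "i_iolock": {"pipe->mutex"},
    "pipe->mutex": {"i_mutex"},
}

def has_cycle(graph):
    """Depth-first search for a back edge, like lockdep's chain walk."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GREY                  # on the current DFS path
        for nxt in graph.get(node, ()):
            if color.get(nxt, WHITE) == GREY:
                return True                 # back edge: ordering cycle
            if color.get(nxt, WHITE) == WHITE and visit(nxt):
                return True
        color[node] = BLACK                 # fully explored, no cycle here
        return False

    return any(color[n] == WHITE and visit(n) for n in list(graph))

print(has_cycle(edges))      # True: the splice path can deadlock

# Dropping the new edge leaves the recorded chain acyclic:
acyclic = {k: set(v) for k, v in edges.items()}
acyclic["pipe->mutex"] = set()
print(has_cycle(acyclic))    # False
```

The cycle is exactly the "Possible unsafe locking scenario" table in the report: one task holds pipe->mutex and wants i_mutex while another can hold i_mutex and eventually want pipe->mutex.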
From lucy@demaxlt.com Fri Apr 3 04:19:00 2015
Return-Path:
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on oss.sgi.com
X-Spam-Level:
X-Spam-Status: No, score=0.0 required=5.0 tests=HTML_MESSAGE,MIME_QP_LONG_LINE,
UNPARSEABLE_RELAY autolearn=ham version=3.3.1
X-Original-To: xfs@oss.sgi.com
Delivered-To: xfs@oss.sgi.com
Received: from relay.sgi.com (relay2.corp.sgi.com [137.38.102.29])
by oss.sgi.com (Postfix) with ESMTP id 2925C7F5D
for ; Fri, 3 Apr 2015 04:19:00 -0500 (CDT)
Received: from cuda.sgi.com (cuda1.sgi.com [192.48.157.11])
by relay2.corp.sgi.com (Postfix) with ESMTP id EE751304032
for ; Fri, 3 Apr 2015 02:18:56 -0700 (PDT)
X-ASG-Debug-ID: 1428052732-04bdf04f8133f50001-NocioJ
Received: from smtp2192-65.mail.aliyun.com (smtp2192-65.mail.aliyun.com [121.197.192.65]) by cuda.sgi.com with ESMTP id KC8gQa0I0nOrtxdv for ; Fri, 03 Apr 2015 02:18:53 -0700 (PDT)
X-Barracuda-Envelope-From: lucy@demaxlt.com
X-Barracuda-Apparent-Source-IP: 121.197.192.65
Received: from WS-web by r41f05012.xy2.aliyun.com at Fri, 03 Apr 2015 17:17:41 +0800
Date: Fri, 03 Apr 2015 17:17:31 +0800
From: "LUCY"
To: "xfs"
Reply-To: "LUCY"
Message-ID:
Subject: =?UTF-8?B?cHZjIHBsYW5rIGZsb29yIGZyb20gREJETUM=?=
X-Priority: 3
X-ASG-Orig-Subj: =?UTF-8?B?cHZjIHBsYW5rIGZsb29yIGZyb20gREJETUM=?=
X-Mailer: Alimail-Mailagent
MIME-Version: 1.0
X-Alimail-AntiSpam: AC=CONTINUE;BC=0.3739916|-1;FP=18102776694863229021|5|1|85|0|-1|-1|-1;HT=r46d02008;MF=lucy@demaxlt.com;PH=DW;RN=35;RT=35;SR=0;
X-Mailer: Alimail-Mailagent revision 2688041
x-aliyun-mail-creator: W4_2689231_V2lTW96aWxsYS81LjAgKGNvbXBhdGlibGU7IE1TSUUgMTAuMDsgV2luZG93cyBOVCA2LjE7IFdPVzY0OyBUcmlkZW50LzYuMCk=Ds
Content-Type: multipart/alternative;
boundary="----=ALIBOUNDARY_12464_56516940_551e5ab5_3936"
X-Barracuda-Connect: smtp2192-65.mail.aliyun.com[121.197.192.65]
X-Barracuda-Start-Time: 1428052733
X-Barracuda-URL: http://192.48.157.11:80/cgi-mod/mark.cgi
X-Virus-Scanned: by bsmtpd at sgi.com
X-Barracuda-BRTS-Status: 1
X-Barracuda-Spam-Score: 0.82
X-Barracuda-Spam-Status: No, SCORE=0.82 using per-user scores of TAG_LEVEL=1000.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.7 tests=HTML_MESSAGE, MIME_QP_LONG_LINE, MIME_QP_LONG_LINE_2, UNPARSEABLE_RELAY
X-Barracuda-Spam-Report: Code version 3.2, rules version 3.2.3.17514
Rule breakdown below
pts rule name description
---- ---------------------- --------------------------------------------------
0.00 UNPARSEABLE_RELAY Informational: message has unparseable relay lines
0.00 HTML_MESSAGE BODY: HTML included in message
0.00 MIME_QP_LONG_LINE RAW: Quoted-printable line longer than 76 chars
0.82 MIME_QP_LONG_LINE_2 RAW: Quoted-printable line longer than 76 chars
------=ALIBOUNDARY_12464_56516940_551e5ab5_3936
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
Dear Manager:
Here is Lucy from Dezhou Demax Building Material Co., Ltd. As one of the largest manufacturers of vinyl plank flooring in North China, our company has exported to more than 60 countries and has a good reputation both at home and abroad.
According to the different installation methods, there are 4 styles of vinyl plank flooring for your choice: Unilin click, dry back, self-stick and loose lay.
size: 6*36, 6*48, 7*48, 9*36, 9*48, 12*12, 18*18, 24*24 inches etc
thickness: 1.5mm to 5.00mm
wearlayer: 0.07mm to 0.7mm UV coating.
If you are interested in our products, please feel free to contact me. We will give you the best service, quality and competitive price.
Any reply from you will be highly appreciated!
Best regards,
Lucy
------=ALIBOUNDARY_12464_56516940_551e5ab5_3936--
From usmyusuf5@gmail.com Fri Apr 3 12:22:36 2015
Return-Path:
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on oss.sgi.com
X-Spam-Level: ****
X-Spam-Status: No, score=4.3 required=5.0 tests=FREEMAIL_ENVFROM_END_DIGIT,
FREEMAIL_FROM,HTML_FONT_FACE_BAD,HTML_MESSAGE,LOTS_OF_MONEY,MONEY_FORM_SHORT,
T_DKIM_INVALID,T_FILL_THIS_FORM_SHORT autolearn=no version=3.3.1
X-Original-To: xfs@oss.sgi.com
Delivered-To: xfs@oss.sgi.com
Received: from relay.sgi.com (relay2.corp.sgi.com [137.38.102.29])
by oss.sgi.com (Postfix) with ESMTP id 18B097F3F
for ; Fri, 3 Apr 2015 12:22:36 -0500 (CDT)
Received: from cuda.sgi.com (cuda2.sgi.com [192.48.176.25])
by relay2.corp.sgi.com (Postfix) with ESMTP id 072BD304048
for ; Fri, 3 Apr 2015 10:22:32 -0700 (PDT)
X-ASG-Debug-ID: 1428081751-04cbb043b8472a0001-NocioJ
Received: from mail-ie0-f193.google.com (mail-ie0-f193.google.com [209.85.223.193]) by cuda.sgi.com with ESMTP id To634hD0GcxSMoJk (version=TLSv1 cipher=RC4-SHA bits=128 verify=NO) for ; Fri, 03 Apr 2015 10:22:31 -0700 (PDT)
X-Barracuda-Envelope-From: usmyusuf5@gmail.com
Received: by iery20 with SMTP id y20so5132910ier.2
for ; Fri, 03 Apr 2015 10:22:31 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
d=gmail.com; s=20120113;
h=mime-version:date:message-id:subject:from:to:content-type;
bh=LrzMDBLYkarYG+noK58vP13WUEgg7vznbrvq/q3M30I=;
b=r/27jld0aB+akTxId7dEJDAS1AndA4UOzi3q0p7fZSxGS8nl9PZjGltNRcaGNO4BHm
qvEJ9S4LZ41DhTbEEJU5euYFW7aCYlC8UXbk0bYJkY1jONds6xH40bAUWnMFnbtaTcL+
R8Br0Jv473WO1hdDmH6SMmLbAWdpzWjnAFTLxMTuWPbXa/Tti5yCdHTWIKkMr8DUr1HT
TgbfPR7tcCSxP4ilgak9BsF4RfZXHM5anRHKQSc0R5nhUK1hO1qixnTQskwAWHI2qpmp
8/ZBxZ/dffiH/KlETjfecS8nZd1BClLtKQ3hlqUt9OBzvzhOo7aUVa1tQ06iLKxa+w4j
rtjA==
MIME-Version: 1.0
X-Received: by 10.107.158.143 with SMTP id h137mr5541816ioe.12.1428081751346;
Fri, 03 Apr 2015 10:22:31 -0700 (PDT)
Received: by 10.64.52.100 with HTTP; Fri, 3 Apr 2015 10:22:31 -0700 (PDT)
Date: Fri, 3 Apr 2015 10:22:31 -0700
Message-ID:
Subject: URGENT,
From: Usman yusuf
X-ASG-Orig-Subj: URGENT,
To: undisclosed-recipients:;
Content-Type: multipart/alternative; boundary=001a114039167fd50e0512d53173
X-Barracuda-Connect: mail-ie0-f193.google.com[209.85.223.193]
X-Barracuda-Start-Time: 1428081751
X-Barracuda-Encrypted: RC4-SHA
X-Barracuda-URL: http://192.48.176.25:80/cgi-mod/mark.cgi
X-Virus-Scanned: by bsmtpd at sgi.com
X-Barracuda-BRTS-Status: 1
X-Barracuda-Spam-Score: 1.71
X-Barracuda-Spam-Status: No, SCORE=1.71 using per-user scores of TAG_LEVEL=1000.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.7 tests=DKIM_SIGNED, DKIM_VERIFIED, HTML_FONT_FACE_BAD, HTML_MESSAGE, TVD_PH_SUBJ_URGENT
X-Barracuda-Spam-Report: Code version 3.2, rules version 3.2.3.17527
Rule breakdown below
pts rule name description
---- ---------------------- --------------------------------------------------
1.10 TVD_PH_SUBJ_URGENT TVD_PH_SUBJ_URGENT
-0.00 DKIM_VERIFIED Domain Keys Identified Mail: signature passes
verification
0.00 DKIM_SIGNED Domain Keys Identified Mail: message has a signature
0.00 HTML_MESSAGE BODY: HTML included in message
0.61 HTML_FONT_FACE_BAD BODY: HTML font face is not a word
--001a114039167fd50e0512d53173
Content-Type: text/plain; charset=UTF-8
Sir,
I wish to inform you that your contract payment has been approved under
category (C) you shall receive your payment under Federal Ministry of
health.
US$4.2m shall be paid to you as part payment; you are advised to confirm
your contact address/telephone number for an immediate payment.
Thank you,
Usman Yusuf.
DEBT MANAGEMENT OFFICE,
--001a114039167fd50e0512d53173--
From darrick.wong@oracle.com Fri Apr 3 13:28:44 2015
Return-Path:
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on oss.sgi.com
X-Spam-Level:
X-Spam-Status: No, score=0.0 required=5.0 tests=UNPARSEABLE_RELAY
autolearn=ham version=3.3.1
X-Original-To: xfs@oss.sgi.com
Delivered-To: xfs@oss.sgi.com
Received: from relay.sgi.com (relay2.corp.sgi.com [137.38.102.29])
by oss.sgi.com (Postfix) with ESMTP id 7E9037F3F
for ; Fri, 3 Apr 2015 13:28:44 -0500 (CDT)
Received: from cuda.sgi.com (cuda2.sgi.com [192.48.176.25])
by relay2.corp.sgi.com (Postfix) with ESMTP id 6C2DB304043
for ; Fri, 3 Apr 2015 11:28:41 -0700 (PDT)
X-ASG-Debug-ID: 1428085717-04cbb043b649940001-NocioJ
Received: from userp1040.oracle.com (userp1040.oracle.com [156.151.31.81]) by cuda.sgi.com with ESMTP id jhYaKdCw8GJD0RwE (version=TLSv1 cipher=AES256-SHA bits=256 verify=NO) for ; Fri, 03 Apr 2015 11:28:37 -0700 (PDT)
X-Barracuda-Envelope-From: darrick.wong@oracle.com
X-Barracuda-Apparent-Source-IP: 156.151.31.81
Received: from aserv0021.oracle.com (aserv0021.oracle.com [141.146.126.233])
by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with ESMTP id t33ISVbP016197
(version=TLSv1 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
Fri, 3 Apr 2015 18:28:32 GMT
Received: from aserv0121.oracle.com (aserv0121.oracle.com [141.146.126.235])
by aserv0021.oracle.com (8.13.8/8.13.8) with ESMTP id t33ISV8S002869
(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
Fri, 3 Apr 2015 18:28:31 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
by aserv0121.oracle.com (8.13.8/8.13.8) with ESMTP id t33ISUQ0030611;
Fri, 3 Apr 2015 18:28:31 GMT
Received: from localhost (/24.21.154.84)
by default (Oracle Beehive Gateway v4.0)
with ESMTP ; Fri, 03 Apr 2015 11:28:29 -0700
Date: Fri, 3 Apr 2015 11:28:25 -0700
From: "Darrick J. Wong"
To: Ric Wheeler
Cc: Linux FS Devel ,
"linux-scsi@vger.kernel.org" ,
device-mapper development ,
linux-ext4@vger.kernel.org, xfs@oss.sgi.com,
linux-btrfs@vger.kernel.org
Subject: LPC2015: File and Storage Systems uconf
Message-ID: <20150403182825.GA11030@birch.djwong.org>
X-ASG-Orig-Subj: LPC2015: File and Storage Systems uconf
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: aserv0021.oracle.com [141.146.126.233]
X-Barracuda-Connect: userp1040.oracle.com[156.151.31.81]
X-Barracuda-Start-Time: 1428085717
X-Barracuda-Encrypted: AES256-SHA
X-Barracuda-URL: http://192.48.176.25:80/cgi-mod/mark.cgi
X-Virus-Scanned: by bsmtpd at sgi.com
X-Barracuda-BRTS-Status: 1
X-Barracuda-Spam-Score: 0.00
X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=1000.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.7 tests=UNPARSEABLE_RELAY
X-Barracuda-Spam-Report: Code version 3.2, rules version 3.2.3.17528
Rule breakdown below
pts rule name description
---- ---------------------- --------------------------------------------------
0.00 UNPARSEABLE_RELAY Informational: message has unparseable relay lines
Hi everyone,
Linux Plumbers is coming up in just four months! I would like for there to be
a file & storage miniconf at this year's LPC, so I've started assembling a plan
for what we might discuss. As a starting point, I've filled the planning page
with the topics that didn't achieve any sort of resolution at LSF/MM:
http://wiki.linuxplumbersconf.org/2015:file_and_storage_systems
There are undoubtedly things that I missed in my initial list, and it would be
very helpful to figure out who's going.
If you'd like to visit Seattle in mid-August (I promise it probably won't be
raining!) and/or have a topic that you'd like to talk about that I missed,
I'd appreciate it if you wrote it into the wiki page.
Thanks,
--Darrick
From viro@ftp.linux.org.uk Sun Apr 5 11:29:21 2015
Return-Path:
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on oss.sgi.com
X-Spam-Level:
X-Spam-Status: No, score=0.0 required=5.0 tests=none autolearn=ham
version=3.3.1
X-Original-To: xfs@oss.sgi.com
Delivered-To: xfs@oss.sgi.com
Received: from relay.sgi.com (relay1.corp.sgi.com [137.38.102.111])
by oss.sgi.com (Postfix) with ESMTP id 8D6C07F37
for ; Sun, 5 Apr 2015 11:29:21 -0500 (CDT)
Received: from cuda.sgi.com (cuda1.sgi.com [192.48.157.11])
by relay1.corp.sgi.com (Postfix) with ESMTP id 690DE8F8033
for ; Sun, 5 Apr 2015 09:29:18 -0700 (PDT)
X-ASG-Debug-ID: 1428251354-04bdf04f7f122150001-NocioJ
Received: from ZenIV.linux.org.uk (zeniv.linux.org.uk [195.92.253.2]) by cuda.sgi.com with ESMTP id 8U9hiykHvxPpYRS0 (version=TLSv1 cipher=AES256-SHA bits=256 verify=NO) for ; Sun, 05 Apr 2015 09:29:15 -0700 (PDT)
X-Barracuda-Envelope-From: viro@ftp.linux.org.uk
X-Barracuda-Apparent-Source-IP: 195.92.253.2
Received: from viro by ZenIV.linux.org.uk with local (Exim 4.76 #1 (Red Hat Linux))
id 1YenOx-0002JJ-6f; Sun, 05 Apr 2015 16:27:59 +0000
Date: Sun, 5 Apr 2015 17:27:59 +0100
From: Al Viro
To: Omar Sandoval
Cc: linux-fsdevel@vger.kernel.org, linux-btrfs@vger.kernel.org,
ceph-devel@vger.kernel.org, linux-cifs@vger.kernel.org,
osd-dev@open-osd.org, linux-ext4@vger.kernel.org,
linux-f2fs-devel@lists.sourceforge.net,
fuse-devel@lists.sourceforge.net, cluster-devel@redhat.com,
jfs-discussion@lists.sourceforge.net, HPDD-discuss@ml01.01.org,
linux-nfs@vger.kernel.org, linux-nilfs@vger.kernel.org,
ocfs2-devel@oss.oracle.com, reiserfs-devel@vger.kernel.org,
v9fs-developer@lists.sourceforge.net, xfs@oss.sgi.com,
linux-kernel@vger.kernel.org, Chris Mason ,
Josef Bacik , David Sterba ,
Yan Zheng , Sage Weil ,
Steve French ,
Boaz Harrosh ,
Benny Halevy , Jan Kara ,
Theodore Ts'o ,
Andreas Dilger ,
Jaegeuk Kim ,
Changman Lee ,
Miklos Szeredi ,
Steven Whitehouse ,
Dave Kleikamp ,
Oleg Drokin ,
Trond Myklebust ,
Anna Schumaker ,
Ryusuke Konishi ,
Mark Fasheh , Joel Becker ,
Eric Van Hensbergen ,
Ron Minnich ,
Latchesar Ionkov ,
Dave Chinner
Subject: Re: [RFC PATCH 0/5] Remove rw parameter from direct_IO()
Message-ID: <20150405162758.GI889@ZenIV.linux.org.uk>
X-ASG-Orig-Subj: Re: [RFC PATCH 0/5] Remove rw parameter from direct_IO()
References:
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To:
User-Agent: Mutt/1.5.21 (2010-09-15)
Sender: Al Viro
X-Barracuda-Connect: zeniv.linux.org.uk[195.92.253.2]
X-Barracuda-Start-Time: 1428251354
X-Barracuda-Encrypted: AES256-SHA
X-Barracuda-URL: http://192.48.157.11:80/cgi-mod/mark.cgi
X-Virus-Scanned: by bsmtpd at sgi.com
X-Barracuda-BRTS-Status: 1
X-Barracuda-Spam-Score: 0.00
X-Barracuda-Spam-Status: No, SCORE=0.00 using per-user scores of TAG_LEVEL=1000.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.7 tests=
X-Barracuda-Spam-Report: Code version 3.2, rules version 3.2.3.17582
Rule breakdown below
pts rule name description
---- ---------------------- --------------------------------------------------
On Mon, Mar 16, 2015 at 04:33:48AM -0700, Omar Sandoval wrote:
> Hi,
>
> Al, here's some cleanup that you mentioned back in December that I got
> around to (https://lkml.org/lkml/2014/12/15/28).
Applied. See #for-next
From danny@zadarastorage.com Mon Apr 6 02:03:11 2015
Return-Path:
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on oss.sgi.com
X-Spam-Level:
X-Spam-Status: No, score=0.0 required=5.0 tests=HTML_MESSAGE autolearn=ham
version=3.3.1
X-Original-To: xfs@oss.sgi.com
Delivered-To: xfs@oss.sgi.com
Received: from relay.sgi.com (relay2.corp.sgi.com [137.38.102.29])
by oss.sgi.com (Postfix) with ESMTP id 259E97F37
for ; Mon, 6 Apr 2015 02:03:11 -0500 (CDT)
Received: from cuda.sgi.com (cuda2.sgi.com [192.48.176.25])
by relay2.corp.sgi.com (Postfix) with ESMTP id 057AB304039
for ; Mon, 6 Apr 2015 00:03:10 -0700 (PDT)
X-ASG-Debug-ID: 1428303779-04cbb043b8137e50001-NocioJ
Received: from mail-wi0-f169.google.com (mail-wi0-f169.google.com [209.85.212.169]) by cuda.sgi.com with ESMTP id jtFF8vIoLfQSl0tz (version=TLSv1 cipher=RC4-SHA bits=128 verify=NO) for ; Mon, 06 Apr 2015 00:03:00 -0700 (PDT)
X-Barracuda-Envelope-From: danny@zadarastorage.com
X-Barracuda-Apparent-Source-IP: 209.85.212.169
Received: by wizk4 with SMTP id k4so22716868wiz.1
for ; Mon, 06 Apr 2015 00:02:59 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
d=1e100.net; s=20130820;
h=x-gm-message-state:mime-version:in-reply-to:references:date
:message-id:subject:from:to:cc:content-type;
bh=obTuewBDB1FtV8eu0WiOEYb82pDCRjd7J7uVHmxcsBA=;
b=PR/sIwk3jSAi6ZQ6kzUxuZkYjLVMSzH3WirI46eP7+hm1ZtsKU6bOfWBNWnlEaivLy
QvsPyg1NylIiWzJK0TfZYQRNUm2pF1/yxG6VAEnaPYeMzM+wzq5JB8DxEESIU9osvlft
OaTD3BcM+ZqLmMA9QQBB4HFDUJQUk+1xr6WoT9LPAHdUc8QwYgcj+xij85cEzoblPX3N
p70KF08tA38Wu0XEpYPVNPhIhBzI7rBnB0q1/pOq+KPBATilX1gBI1DwG2mLDsdhT3ca
yiVx1o6ZznHIcS4dTplsS9jrt/JMHxIkNQXoIAmiNGjvqlQMNe95w5/PXbJ0qD9MzjBs
Y7FQ==
X-Gm-Message-State: ALoCoQmXgrdQmGdh91iKYwvtek69SEZpRS9VjwZFJbWkfxxlyHq9f4XJHVRqF07XKuMpS81qOZ/S
MIME-Version: 1.0
X-Received: by 10.194.61.12 with SMTP id l12mr28203361wjr.139.1428303779592;
Mon, 06 Apr 2015 00:02:59 -0700 (PDT)
Received: by 10.28.60.68 with HTTP; Mon, 6 Apr 2015 00:02:59 -0700 (PDT)
In-Reply-To: <551C26FC.10803@sandeen.net>
References:
<551C26FC.10803@sandeen.net>
Date: Mon, 6 Apr 2015 10:02:59 +0300
Message-ID:
Subject: Re: xfs corruption issue
From: Danny Shavit
X-ASG-Orig-Subj: Re: xfs corruption issue
To: Eric Sandeen
Cc: xfs@oss.sgi.com, Dave Chinner ,
Lev Vainblat , Alex Lyakas
Content-Type: multipart/alternative; boundary=047d7b86df386a105a051308e3a7
X-Barracuda-Connect: mail-wi0-f169.google.com[209.85.212.169]
X-Barracuda-Start-Time: 1428303780
X-Barracuda-Encrypted: RC4-SHA
X-Barracuda-URL: http://192.48.176.25:80/cgi-mod/mark.cgi
X-Virus-Scanned: by bsmtpd at sgi.com
X-Barracuda-BRTS-Status: 1
X-Barracuda-Spam-Score: 1.00
X-Barracuda-Spam-Status: No, SCORE=1.00 using per-user scores of TAG_LEVEL=1000.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=2.7 tests=BSF_SC0_TG232, HTML_MESSAGE
X-Barracuda-Spam-Report: Code version 3.2, rules version 3.2.3.17596
Rule breakdown below
pts rule name description
---- ---------------------- --------------------------------------------------
1.00 BSF_SC0_TG232 BODY: Custom Rule TG232
0.00 HTML_MESSAGE BODY: HTML included in message
--047d7b86df386a105a051308e3a7
Content-Type: text/plain; charset=UTF-8
Thanks, guys.
So far we have not figured out the bit flip.
We will update if there is any interesting information.
Best regards,
Danny
On Wed, Apr 1, 2015 at 8:12 PM, Eric Sandeen wrote:
> On 4/1/15 10:09 AM, Danny Shavit wrote:
> > Hello Dave,
> > My name is Danny Shavit and I am with Zadara storage.
> > We would appreciate your feedback regarding an XFS corruption and
> xfs_repair issue.
> >
> > We found a corrupted XFS volume in one of our systems. It is around 1 TB
> in size and holds about 12 M files.
> > We ran xfs_repair on the volume, which succeeded after 42 minutes.
> > We noticed that memory consumption rose to about 7.5 GB.
> > Since some customers are using only 4 GB (and sometimes even 2 GB) of RAM, we
> tried running "xfs_repair -m 3200" on a 4 GB machine.
> > However, this time an OOM event happened while handling AG 26 in
> step 3.
> > The log of xfs_repair is enclosed below.
> > We would appreciate your feedback on the amount of memory needed for
> xfs_repair in general, and when using the "-m" option specifically.
> > The xfs metadata dump (prior to xfs_repair) can be found here:
> >
> https://zadarastorage-public.s3.amazonaws.com/xfs/xfsdump-prod-ebs_2015-03-30_23-00-38.tgz
> > It is a 1.2 GB file (and 5.7 GB uncompressed).
> >
> > We will appreciate your feedback on the corruption pattern as well.
> > --
> > Thank you,
> > Danny Shavit
> > Zadarastorage
> >
> > ---------- xfs_repair log ----------------
>
> Just a note ...
>
> > bad . entry in directory inode 5691013154, was 5691013170: correcting
>
> 101010011001101011111100000100100
> 101010011001101011111100000110100
> ^ bit flip
>
> > bad . entry in directory inode 5691013156, was 5691013172: correcting
>
> 101010011001101011111100000100100
> 101010011001101011111100000110100
> ^ bit flip
>
> etc ...
>
> > bad . entry in directory inode 5691013157, was 5691013173: correcting
> > bad . entry in directory inode 5691013163, was 5691013179: correcting
>
>
--
Regards,
Danny
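The single-bit pattern Eric diagnoses in the quoted log can be checked mechanically: each "was" inode number differs from the corrected one by exactly one bit. A minimal sketch (the helper name is mine, not from the thread), using the (corrected, was) pairs taken from the xfs_repair output above:

```python
def single_bit_flip(a, b):
    """Return True if integers a and b differ in exactly one bit."""
    x = a ^ b
    # x is nonzero and a power of two <=> exactly one differing bit
    return x != 0 and (x & (x - 1)) == 0

# (corrected, was) inode pairs from the xfs_repair log excerpts
pairs = [
    (5691013154, 5691013170),
    (5691013156, 5691013172),
    (5691013157, 5691013173),
    (5691013163, 5691013179),
]

for good, bad in pairs:
    print(good, bad, hex(good ^ bad), single_bit_flip(good, bad))
```

Every pair XORs to 0x10, i.e. the same bit (bit 4) is flipped in each inode number, which is consistent with the hardware bit-flip hypothesis.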
--047d7b86df386a105a051308e3a7
Content-Type: text/html; charset=UTF-8
Content-Transfer-Encoding: quoted-printable