Hi all,
First off, I am not sure whether this is really an XFS issue, but as most
of you are experts on filesystems, I hope someone can help here.
I am using Red Hat 8 with a Linux 2.4.19 kernel patched with the XFS
patches. A 350GB drive has been formatted with XFS, and one directory
hierarchy on that fs (with 130,000 files) is NFS-exported to another
server. Apart from that, the machine only runs the PostgreSQL database
(the Postgres data files are also on the XFS drive). The NFS client is a
web server machine that serves some files from the NFS drive (only huge
downloads). Today and yesterday that web server crashed due to network
problems.
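
For reference, the export looks roughly like this (the path, hostname,
and options below are placeholders, not the real ones):

    # /etc/exports -- the download tree on the XFS volume, exported to the web server
    /data/downloads    webserver.example.com(rw,sync)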
Today I got the following error message on the NFS server: "too many
open files in system". To get things running again, I first increased
the limit (/proc/sys/fs/file-max) from 8192 to 12000 files. But now I
want to find out why so many files are open, and I would like to know
which files they are.
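
Concretely, this is roughly what I did; on a 2.4 kernel,
/proc/sys/fs/file-nr reports the allocated, free, and maximum handles:

    # show current file handle usage: allocated, free, maximum
    cat /proc/sys/fs/file-nr
    # raise the system-wide limit from 8192 to 12000
    echo 12000 > /proc/sys/fs/file-max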
Our web service is not popular enough for more than a hundred files to
be open at the same time over the NFS link. I tried to find the open
files with "lsof -N" (on both client and server) but got no results, and
plain "lsof" lists only about 600 open files.
So my questions are:
- How do I find out which files are open?
- Can the crash of the web server (the NFS client) be part of the problem?
- Is it "normal" that one has to increase file-max when exporting this
many files, or could there be something bad going on?
TIA, peter