>>> On Tue, 19 Feb 2008 12:44:57 +0100, Hannes Dorbath
>>> <light@xxxxxxxxxxxxxxxxxxxx> said:
[ ... a collection of millions of small records ... ]
>> That sounds like a good use for an LDAP database, but using
>> Berkeley DB directly may be best. One could also do a FUSE
>> module or a special purpose NFS server that presents a
>> Berkeley DB as a filesystem, but then we would be getting
>> rather close to ReiserFS.
light> During testing of HA clusters some time ago I found BDB
light> to always be the first thing to break. It seems to have
light> very poor recovery, and to cope with neither filesystem
light> snapshots nor power failures. [ ... ]
Sometimes BDB had problems, but that seems to be in the past. It
also relies critically on precise behaviour from the storage
layer, from the filesystem downwards; if those conditions are not
met, it cannot do recovery.
Fortunately XFS can meet those conditions (I think also the
page-size one), if properly configured and if the hardware does
not lie about completed writes.
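Concretely, the storage-layer behaviour that recovery depends on
is the write-ahead-logging contract: the log record must be on
stable storage before the data page it describes is overwritten.
A minimal sketch in Python (file names are arbitrary; this is an
illustration of the contract, not BDB's actual logging code):

```python
import os, tempfile

# Write-ahead-logging sketch: a durable log append must complete
# before the corresponding in-place data update.  If the
# filesystem or the drive's volatile cache acknowledges fsync()
# before the bytes are actually stable, crash recovery breaks.

fd, log_path = tempfile.mkstemp()
os.write(fd, b"txn 1: set key=value\n")  # 1. append the log record
os.fsync(fd)                             # 2. force it to stable storage
os.close(fd)
# 3. only now may the data page itself be rewritten in place

with open(log_path, "rb") as f:
    replayed = f.read()  # on restart, recovery replays this record
os.unlink(log_path)
```

If step 2 silently returns before the data is stable (a lying
drive cache, or a filesystem that drops barriers), step 3 can
clobber the old data with nothing to replay, which matches the
unrecoverable-after-power-failure symptom described above.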
light> Personally I ended up doing this for OpenLDAP and never looked back:
Well, PostgreSQL is of course a much nicer, more scalable DBMS
than BDB. But for a relatively small, mostly read-only collection
of small records the latter may be appropriate. XFS works with it
fairly well too.
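The small-record pattern above boils down to a key-value store. A
minimal sketch using Python's stdlib `dbm` module as a stand-in
(Berkeley DB exposes a very similar get/put API through its
DB_HASH and DB_BTREE access methods; the keys and values here are
made up):

```python
import dbm, os, tempfile

# Millions of small records map naturally onto a hashed or
# B-tree key-value store rather than one file per record.
path = os.path.join(tempfile.mkdtemp(), "records")

with dbm.open(path, "c") as db:           # "c": create if missing
    db[b"user:1001"] = b"hannes"          # put a small record
    db[b"user:1002"] = b"peter"
    value = db[b"user:1001"]              # get it back by key

with dbm.open(path, "r") as db:           # reopen read-only
    count = len(db.keys())
```

For a mostly read-only workload this avoids both the per-file
metadata overhead of a filesystem and the full transactional
machinery of a DBMS like PostgreSQL.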