On Fri, Sep 12, 2014 at 08:02:30PM +1000, Dave Chinner wrote:
> On Thu, Sep 11, 2014 at 03:37:35PM -0500, Ben Myers wrote:
> > When comparing unicode strings for equality, normalization comes into play:
> > we must compare the normalized forms of strings, not just the raw sequences
> > of bytes. There are a number of defined normalization forms for unicode.
> > We decided on a variant of NFKD we call NFKDI. NFD was chosen over NFC,
> > because calculating NFC requires calculating NFD first, followed by an
> > additional step. NFKD was chosen over NFD because this makes filenames
> > that ought to be equal compare as equal.
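The distinction between raw byte comparison and comparing normalized forms can be seen with any precomposed/decomposed pair. A minimal sketch using Python's `unicodedata` module (illustrative only, not the kernel code under discussion):

```python
import unicodedata

# "é" can be stored precomposed (U+00E9) or as "e" + combining acute (U+0301).
precomposed = "\u00e9"
decomposed = "e\u0301"

# Raw codepoint comparison says the two spellings differ...
print(precomposed == decomposed)                            # False

# ...but after normalization (NFD here; NFC works equally well)
# they compare equal.
print(unicodedata.normalize("NFD", precomposed) ==
      unicodedata.normalize("NFD", decomposed))             # True

# NFC is the composed form: normalizing the decomposed spelling
# with NFC yields the precomposed codepoint again.
print(unicodedata.normalize("NFC", decomposed) == precomposed)  # True
```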
> But are they really equal?
> Choosing *compatibility* decomposition over *canonical*
> decomposition means that compound characters and formatting
> distinctions don't affect the hash. i.e. "ofﬁce", "oﬃce" and
> "office" all hash and compare as the same name, but then they get
> stored on disk unnormalised. So they are the "same" in memory, but
> very different on disk.
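The ligature case above is exactly the compatibility-vs-canonical split: the fi (U+FB01) and ffi (U+FB03) ligatures have compatibility decompositions but no canonical ones, so NFKD collapses the three names while NFD keeps them distinct. A quick demonstration with Python's `unicodedata` module (illustrative only):

```python
import unicodedata

# "ofﬁce" (fi ligature U+FB01), "oﬃce" (ffi ligature U+FB03),
# and plain ASCII "office".
names = ["of\ufb01ce", "o\ufb03ce", "office"]

# Canonical decomposition (NFD) leaves the ligatures alone:
# the three names remain distinct.
nfd = {unicodedata.normalize("NFD", n) for n in names}
print(len(nfd))   # 3

# Compatibility decomposition (NFKD) expands the ligatures to
# "fi" / "ffi": all three collapse to the same string.
nfkd = {unicodedata.normalize("NFKD", n) for n in names}
print(len(nfkd))  # 1
```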
> I note that the unicode spec says this for normalised forms
> "A normalized string is guaranteed to be stable; that is, once
> normalized, a string is normalized according to all future versions
> of Unicode."
> So if we store normalised strings on disk, they are guaranteed to
> be compatible with all future versions of unicode and anything that
> goes to use them. So why wouldn't we store normalised forms on disk?
I've had a very similar discussion about normalization in ZFS. Sadly, I
can't find where it happened so I can't point you to it. One interesting
point that I remember is that storing the original form may be less
surprising to an application. Specifically, the name it reads back is the
same one it supplied at creation. (Granted, if the file already exists,
the application will read back the new form.)
Only two things are infinite, the universe and human stupidity, and I'm not
sure about the former.
- Albert Einstein