>
>
> * for performance I want to read/write directly to the local volume.
> As DHT has no central metadata this should not give problems as long as
> the data is on the right volume, correct?
>
>
That is correct.
> * The data is already distributed (in the unify/nufa setup) with a
> distributed hashing scheme. My initial idea was to hack my hashing scheme
> into Gluster so they would agree on where the data should be, but then I
> spotted a file nufa.c in the DHT directory. Does this mean I can use
> nufa with DHT? I can't find any documentation on this.
>
>
The 'nufa.c' you spotted in the DHT directory is a development to get rid
of the old 'cluster/unify' module, whose centralized namespace server was a
bottleneck when scaling to more servers. (NOTE: there is also a 'switch.c'
in the dht directory, which should provide the switch scheduler behavior.)
Sorry about the sparse documentation on this. But yes, you can now use nufa
without 'cluster/unify', and you can still see all the data that was
populated with 'cluster/unify' earlier.
Define a nufa volume like below:
---
# instead of cluster/unify in volume write below volume
volume nufa
type cluster/nufa
option local-volume <volumename present in earlier nufa scheduler>
option lookup-unhashed yes
subvolumes <subvolumes list used in 'cluster/unify'>
end-volume
----
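As an illustration only (the subvolume names 'brick1' and 'brick2' are made
up here, not taken from your existing setup), a filled-in version of the
above might look like:
---
volume nufa
  type cluster/nufa
  option local-volume brick1
  option lookup-unhashed yes
  subvolumes brick1 brick2
end-volume
----
Here 'brick1' would be whatever volume name your earlier nufa scheduler
treated as local on this machine.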
> * Is there documentation about how DHT determines the location?
>
>
DHT determines the location based on a hashing algorithm. When you use
NUFA, it overrides the hashing logic with the local-volume name on create;
but to find a file which is not on the local subvolume, it uses the same
hashing algorithm as DHT.
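To make that behavior concrete, here is a rough sketch in Python. It is not
the actual GlusterFS code (the real DHT uses its own 32-bit hash and
per-directory layout ranges); it only illustrates the rule described above:
hash-based placement for lookups, local-volume override for creates.

```python
import hashlib

def dht_subvolume(filename, subvolumes):
    # Plain DHT-style placement: hash the file name and map it
    # deterministically onto one of the subvolumes.
    h = int(hashlib.md5(filename.encode()).hexdigest(), 16)
    return subvolumes[h % len(subvolumes)]

def nufa_subvolume(filename, subvolumes, local_volume, creating):
    # NUFA rule: on create, prefer the local volume; when looking up
    # an existing file, fall back to the normal DHT hash.
    if creating:
        return local_volume
    return dht_subvolume(filename, subvolumes)
```

So two clients creating the same set of files would scatter them
differently (each keeps its own files local), but both resolve existing
files through the same hash, which is why lookups still find everything.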
Regards,
Amar