Hi Antonio,
The current version of gluster-swift has a limitation: each swift
account must map to a gluster volume of the same name.
When using keystone, you will need to create a gluster volume named
after the tenant id (not the tenant name). Then regenerate the ring
using 'gluster-swift-gen-builders' and restart swift.
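A minimal sketch of those steps (the account string is taken from the
proxy-server log below; the hosts, brick paths and replica count in the
commented commands are placeholders for your own cluster):

```shell
# Assumption: the account string comes from the proxy-server log below
# (swift prefixes the Keystone tenant id with "AUTH_").
account="AUTH_a9b091f85e04499eb2282733ff7d183e"

# gluster-swift resolves the account to a Gluster volume named after the
# Keystone tenant id, i.e. the account with the AUTH_ prefix stripped:
tenant_id="${account#AUTH_}"
echo "volume name: $tenant_id"

# The volume is then created and added to the ring, roughly like this
# (not run here; hosts, brick paths and replica count are placeholders):
#   gluster volume create "$tenant_id" replica 2 \
#       node1:/export/brick1 node2:/export/brick1
#   gluster volume start "$tenant_id"
#   gluster-swift-gen-builders "$tenant_id"
#   swift-init main restart
```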
Thiago
On Mon, 2014-02-10 at 13:54 +0100, Antonio Messina wrote:
> Hi all,
>
> I am testing gluster and gluster-swift. I currently have a cluster of
> 8 nodes plus a frontend node which is a peer but it doesn't have any
> bricks.
> On the frontend node I have installed swift (from debian packages,
> havana version) and gluster-swift from git.
>
> I tested it *without* authentication and it basically worked, but
> since I need to enable keystone authentication (and possibly also s3
> tokens eventually) I tried to just add the configuration options for
> the proxy-server I used for my "standard" swift installation (i.e.
> without gluster), but it didn't work.
>
> The error I am getting is "503"; the relevant logs (I'm running the
> daemons with loglevel DEBUG) are:
>
> Feb 10 13:49:36 server-b11f5969-8c3c-4b26-8e8d-defb4d272ce9
> proxy-server Authenticating user token
> Feb 10 13:49:36 server-b11f5969-8c3c-4b26-8e8d-defb4d272ce9
> proxy-server Removing headers from request environment:
> X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Domain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-Tenant-Name,X-Tenant,X-Role
> Feb 10 13:49:36 server-b11f5969-8c3c-4b26-8e8d-defb4d272ce9
> proxy-server Storing 484f080f8436324ea6be721dee58cd0f token in
> memcache
> Feb 10 13:49:36 server-b11f5969-8c3c-4b26-8e8d-defb4d272ce9
> proxy-server Using identity: {'roles': [u'_member_', u'Member'],
> 'user': u'antonio', 'tenant': (u'a9b091f85e04499eb2282733ff7d183e',
> u'demo')} (txn: txaaf80571c35d49818e757-0052f8cae0)
> Feb 10 13:49:36 server-b11f5969-8c3c-4b26-8e8d-defb4d272ce9
> proxy-server allow user with role member as account admin (txn:
> txaaf80571c35d49818e757-0052f8cae0) (client_ip: 130.60.24.12)
> Feb 10 13:49:37 server-b11f5969-8c3c-4b26-8e8d-defb4d272ce9
> account-server STDOUT: ERROR:root:No export found in ['default']
> matching drive, volume_not_in_ring (txn:
> txaaf80571c35d49818e757-0052f8cae0)
> Feb 10 13:49:37 server-b11f5969-8c3c-4b26-8e8d-defb4d272ce9
> account-server 127.0.0.1 - - [10/Feb/2014:12:49:37 +0000] "GET
> /volume_not_in_ring/0/AUTH_a9b091f85e04499eb2282733ff7d183e" 507 -
> "txaaf80571c35d49818e757-0052f8cae0" "GET
> http://130.60.24.55:8080/v1/AUTH_a9b091f85e04499eb2282733ff7d183e?format=json
> "proxy-server 5215" 0.1409 ""
> Feb 10 13:49:37 server-b11f5969-8c3c-4b26-8e8d-defb4d272ce9
> proxy-server ERROR Insufficient Storage
> 127.0.0.1:6012/volume_not_in_ring (txn:
> txaaf80571c35d49818e757-0052f8cae0) (client_ip: 130.60.24.12)
> Feb 10 13:49:37 server-b11f5969-8c3c-4b26-8e8d-defb4d272ce9
> proxy-server Node error limited 127.0.0.1:6012 (volume_not_in_ring)
> (txn: txaaf80571c35d49818e757-0052f8cae0) (client_ip: 130.60.24.12)
> Feb 10 13:49:37 server-b11f5969-8c3c-4b26-8e8d-defb4d272ce9
> proxy-server Account GET returning 503 for [507] (txn:
> txaaf80571c35d49818e757-0052f8cae0) (client_ip: 130.60.24.12)
>
> From the account-server log lines it seems that gluster-swift is
> trying to match the gluster volume (which in my case is called
> 'default') with a "drive" which is actually called
> "volume_not_in_ring", but I don't really understand where this comes
> from: what is a drive in this context? Is it related to swift? Is it
> something inserted by the "authtoken" or "keystone" filters I added
> in the pipeline?
>
> In proxy-server.conf I basically replaced the pipeline with:
>
> [pipeline:main]
> pipeline = catch_errors healthcheck proxy-logging cache proxy-logging
> authtoken keystoneauth proxy-server
>
> i.e. adding the "authtoken" and "keystoneauth" filters, and adding
> the following two stanzas:
>
>
> # Addition to make it work with keystone
> [filter:authtoken]
> paste.filter_factory = keystone.middleware.auth_token:filter_factory
> auth_host = keystone-host
> auth_port = 35357
> auth_protocol = http
> auth_uri = http://keystone-host:5000/
> admin_tenant_name = service
> admin_user = swift
> admin_password = swift-password
> delay_auth_decision = 1
>
> [filter:keystoneauth]
> use = egg:swift#keystoneauth
> operator_roles = Member, admin
>
> Thank you in advance to anyone willing to spend some time helping me
> with this :)
>
> .a.
>