search for: cinder

Displaying 20 results from an estimated 67 matches for "cinder".

2017 Jun 01
0
Who's using OpenStack Cinder & Gluster? [ Was Re: [Gluster-devel] Fwd: Re: GlusterFS removal from Openstack Cinder]
Joe, Agree with you on turning this around into something more positive. One aspect that would really help us decide on our next steps here is the actual number of deployments that will be affected by the removal of the gluster driver in Cinder. If you are running or aware of a deployment of OpenStack Cinder & Gluster, can you please respond on this thread or to me & Niels in private providing more details about your deployment? Details like OpenStack & Gluster versions, number of Gluster nodes & total storage capacity wou...
2013 May 02
4
Kickstart and volume group with a dash in the name
Hi, I'm trying to set up the provisioning of new OpenStack hypervisors with cinder volumes on them. The problem is that kickstart doesn't allow dashes in volume group names. I tried this: volgroup cinder-volumes --pesize=4096 pv.02 and this: volgroup cinder--volumes --pesize=4096 pv.02 but in both cases I end up with a volume group named "cindervolumes" on the...
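A workaround sometimes used for this class of problem (not taken from the thread itself; the group and partition names below simply reuse the poster's) is to let kickstart create the group with a dash-free name and rename it in %post, assuming nothing else in the kickstart references the group by its final name yet:

    volgroup cindervolumes --pesize=4096 pv.02

    %post
    # Rename the group to the dashed name Cinder expects; assumes anaconda
    # really did create it as "cindervolumes".
    vgrename cindervolumes cinder-volumes
    %end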
2020 Feb 04
1
[PATCH v2v] openstack: Increase Cinder volume attach timeout to 5 minutes (RHBZ#1685032).
In some cases we have observed that the time taken for a Cinder volume to attach to the conversion appliance can be longer than the current 60 seconds. Increase the timeout to 5 minutes. Thanks: Ming Xie. --- v2v/output_openstack.ml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/v2v/output_openstack.ml b/v2v/output_openstack.ml index caa...
2015 Nov 24
0
LVM - how to change lv from linear to striped? Is it possible?
...se 7.1.1503 (Core) $ uname -r 3.10.0-229.14.1.el7.x86_64 $ rpm -qa | grep -i lvm lvm2-libs-2.02.115-3.el7_1.1.x86_64 lvm2-2.02.115-3.el7_1.1.x86_64 And the solution proposed in the above examples does not work on it. After (lv xxx is only on /dev/sdb4 before): # lvconvert --mirrors 1 --stripes 4 /dev/cinder-volumes/xxx /dev/sda4 /dev/sdc4 /dev/sdd4 /dev/sdf4 I am getting in "lvdisplay -m": --- Logical volume --- LV Path /dev/cinder-volumes/xxx LV Name xxx VG Name cinder-volumes LV UUID AKjKAo-66cv-Ygc2-4Ykq-sSJQ-RJOY-mfjoMD...
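Where lvconvert refuses to restripe an existing linear LV, a common fallback is to create a new striped LV and copy the data across while the volume is offline. This is only a sketch, not what the thread concluded; the size and the xxx_striped / xxx_linear_old names are placeholders:

    # Create a striped target LV across the four PVs (size must match the original LV).
    lvcreate --stripes 4 --stripesize 64k -L 100G -n xxx_striped cinder-volumes \
        /dev/sda4 /dev/sdc4 /dev/sdd4 /dev/sdf4
    # Copy the data block for block, then swap the names.
    dd if=/dev/cinder-volumes/xxx of=/dev/cinder-volumes/xxx_striped bs=4M
    lvrename cinder-volumes xxx xxx_linear_old
    lvrename cinder-volumes xxx_striped xxx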
2018 Aug 29
2
[PATCH 0/2] v2v: Add -o openstack target.
This patch implements output to OpenStack Cinder volumes using OpenStack APIs. It has only been lightly tested, but appears to work. There are some important things to understand about how this works: (1) You must run virt-v2v in a conversion appliance running on top of OpenStack. And you must supply the name or UUID of this appliance to virt...
2018 Aug 30
3
[PATCH v2 0/2] v2v: Add -o openstack target.
v1 was here: https://www.redhat.com/archives/libguestfs/2018-August/thread.html#00287 v2: - The -oa option now gives an error; apparently Cinder cannot generally control sparse/preallocated behaviour, although certain Cinder backends can. - The -os option maps to Cinder volume type; suggested by Matt Booth. - Add a simple test.
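An invocation under these patches might look roughly like the following (a hedged sketch only; the input disk, the appliance name passed to -oo, and the "fast" volume type are illustrative, not from the thread, and the exact -oo key names may differ):

    # Run inside the conversion appliance that is itself an OpenStack instance.
    virt-v2v -i disk guest.img \
             -o openstack -oo server-id=v2v-appliance -os fast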
2013 Oct 17
0
Gluster Community Congratulates OpenStack Developers on Havana Release
The Gluster Community would like to congratulate the OpenStack Foundation and developers on the Havana release. With performance-boosting enhancements for OpenStack Block Storage (Cinder), Compute (Nova) and Image Service (Glance), as well as a native template language for OpenStack Orchestration (Heat), the OpenStack Havana release points the way to continued momentum for the OpenStack community. The many storage-related features in the Havana release coupled with the growing scop...
2016 Jul 28
1
QEMU IMG vs Libvirt block commit
...Controller to manage the snapshots when the volume is not attached (offline) or calls the Compute (which calls libvirt) to manage snapshots if the volume is attached (online). When I try to create/delete snapshots from a snapshot chain, there are 3 situations: 1 - I create/delete the snapshots in Cinder (it uses qemu-img). It goes OK! I can delete and I can create snapshots. 2 - I create/delete the snapshots in online mode (it uses libvirt). It goes OK as well. 3 - I create the snapshots in Cinder (offline) and delete them in online mode (using libvirt), then it fails with this message[1]: libv...
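The two paths described above roughly correspond to the following commands (a sketch only; the qcow2 file names and the domain name are placeholders, and this is not the exact Cinder code path):

    # Offline (volume detached): manipulate the qcow2 chain directly.
    qemu-img create -f qcow2 -b base.qcow2 snap1.qcow2    # take an external snapshot
    qemu-img commit snap1.qcow2                           # delete it by merging it down
    # Online (volume attached): let libvirt/QEMU merge the active overlay back
    # into its backing file.
    virsh blockcommit mydomain vda --active --pivot --wait --verbose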
2016 Feb 16
3
[PATCH 0/2] v2v: glance: Allow Glance backend to import multiple disks
...'git diff -w' since it is mostly a whitespace change). virt-v2v -o glance will now create multiple disks called: - guestname # assumed system disk - guestname-disk2 # assumed data disks - guestname-disk3 - etc. Probably you'd want to immediately import the data disks into Cinder, and possibly the system disk too. Unfortunately direct import from virt-v2v to Cinder is not possible at this time (https://bugzilla.redhat.com/1155229). Rich.
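Importing one of the resulting Glance images into Cinder afterwards could be done along these lines (hedged; the 20 GB size and the volume name are illustrative):

    # Create a Cinder volume from the Glance image produced by virt-v2v.
    openstack volume create --image guestname-disk2 --size 20 guestname-disk2-vol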
2014 Feb 02
1
Trouble implementing ov_callbacks, endless loop calling seek_func
...oid *datasource ) { return 0; } long SourceFileImplOggVorbis::tellFn( void *datasource ) { auto sourceFile = (SourceFileImplOggVorbis *)datasource; long pos = sourceFile->mStream->tell(); return pos; } The source file for this code can also be found here <https://github.com/richardeakin/Cinder-Audio2/blob/85126bdddc69110d6ea8bbd982bc11ff20d536d3/src/cinder/audio2/FileOggVorbis.cpp#L153>. Thanks a ton, Rich
2018 Sep 26
2
OpenStack output workflow
Hi, There has been discussion about the OpenStack output and Richard asked for a public thread on this list, so here it is. For v2v from VMware to RHV, there is a Python script that does some extra steps to create the virtual machine after the disks have been converted. We want to have the same behavior for OpenStack, i.e. have virt-v2v create the instance once the volumes have been created.
2018 Apr 25
1
RDMA Client Hang Problem
...ng -d mlx5_0 -g 0  local address: LID 0x0000, QPN 0x0001e4, PSN 0x10090e, GID fe80::ee0d:9aff:fec0:1dc8  remote address: LID 0x0000, QPN 0x00014c, PSN 0x09402b, GID fe80::ee0d:9aff:fec0:1b14 8192000 bytes in 0.01 seconds = 7964.03 Mbit/sec 1000 iters in 0.01 seconds = 8.23 usec/iter root@cinder:~# ibv_rc_pingpong -g 0 -d mlx5_0 gluster1  local address: LID 0x0000, QPN 0x00014c, PSN 0x09402b, GID fe80::ee0d:9aff:fec0:1b14  remote address: LID 0x0000, QPN 0x0001e4, PSN 0x10090e, GID fe80::ee0d:9aff:fec0:1dc8 8192000 bytes in 0.01 seconds = 8424.73 Mbit/sec 1000 iters in 0.01 seconds...
2018 Apr 25
2
RDMA Client Hang Problem
...om Ubuntu PPA) to 3 servers. These 3 boxes are running as a gluster cluster. Additionally, I have installed the Glusterfs client on the last one. I have created a Gluster volume with this command: # gluster volume create db transport rdma replica 3 arbiter 1 gluster1:/storage/db/ gluster2:/storage/db/ cinder:/storage/db force (network.ping-timeout is 3) Then I have mounted this volume using the mount command below. mount -t glusterfs -o transport=rdma gluster1:/db /db After mounting "/db", I can access the files. The problem is, when I reboot one of the cluster nodes, the fuse client gives thi...
2018 Sep 26
0
Re: OpenStack output workflow
...ing a VM should not change its state from shutdown to running for what I think are fairly obvious reasons. Complicating this is that OpenStack itself doesn't seem to have a concept of a VM which is created but not running (in this way it is different from libvirt and RHV). We currently create Cinder volume(s) with the VM disk data, plus image properties attached to those volume(s), plus other volume properties [NB: in Cinder properties and image properties are different things], which is sufficient for someone else to start the instance (see the virt-v2v(1) man page for exactly how to start it)...
2017 Aug 02
2
Libvirt fails on network disk with ISCSI protocol
Hi, I am working on oVirt, and I am trying to run a VM with a network disk using the iSCSI protocol (the storage is on a Cinder server). Here is the disk XML I use: <disk device="disk" snapshot="no" type="network"> <address bus="0" controller="0" target="0" type="drive" unit="0" /> <source name=&q...
2018 Apr 25
0
RDMA Client Hang Problem
...> These 3 boxes are running as a gluster cluster. Additionally, I have > installed the Glusterfs client on the last one. > > I have created a Gluster volume with this command: > > # gluster volume create db transport rdma replica 3 arbiter 1 > gluster1:/storage/db/ gluster2:/storage/db/ cinder:/storage/db force > > (network.ping-timeout is 3) > > Then I have mounted this volume using the mount command below. > > mount -t glusterfs -o transport=rdma gluster1:/db /db > > After mounting "/db", I can access the files. > > The problem is, when I reboot one...
2018 Nov 02
2
How can I rebase network disk?
...uest.py#L802 But I use Ceph as the OpenStack block backend, and the disk type is network in the XML. <disk type='network' device='disk'> <driver name='qemu' type='raw' cache='writeback' discard='unmap'/> <auth username='cinder'> <secret type='ceph' uuid='86d3922a-b471-4dc1-bb89-b46ab7024e81'/> </auth> <source protocol='rbd' name='volumes002/volume-127f46fc-ef10-4462-af30-c3893cda31f9'> <host name='172.16.140.63' port='6789'/> <...
2019 Apr 03
1
[PATCH] UNTESTED v2v: openstack: Read server-id from metadata service.
Random old patch that I had in my queue. Posting it as a backup; it is still untested. Rich.
2015 Oct 05
0
Storage statistics
... "bps_wr": 0, "drv": "raw", "encrypted": false, "encryption_key_missing": false, "file": "rbd:volumes/volume-791f56d7-d115-4889-ba6c-85f7795b84db:id=cinder:key=AQCxLlxTYI2gHBAAWztETyuM1Nkgu6+0ANkl1w==:auth_supported=cephx\\;none:mon_host=10.19.24.127\\:6789\\;10.19.24.128\\:6789\\;10.19.24.129\\:6789", "image": { "cluster-size": 4194304, "dirty-flag": false,...
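Statistics like the above can typically be pulled from a running guest in a couple of ways (hedged; the domain and device names are placeholders):

    # Per-device I/O counters via libvirt.
    virsh domblkstat instance-0000002a vda --human
    # The raw QMP view, which is closer to the JSON shown above.
    virsh qemu-monitor-command instance-0000002a --pretty '{"execute": "query-block"}'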
2013 Jun 26
0
Puppet OpenStack Modules Version 2.0 Released to The Forge
I'm happy to announce the release of version 2.0 of the Puppet Labs OpenStack modules to the Puppet Forge. These modules handle the deployment and management of the latest Grizzly releases of OpenStack, including Keystone, Swift, Glance, Cinder, Nova, and Horizon. Additionally, an OpenStack module is provided for single or multi-node deployments. Here is a set of links to the Forge release pages: http://forge.puppetlabs.com/puppetlabs/keystone/2.0.0 http://forge.puppetlabs.com/puppetlabs/swift/2.0.0 http://forge.puppetlabs.com/puppetlab...