search for: rbds

Displaying 14 results from an estimated 14 matches for "rbds".

Did you mean: rbd
2018 May 27
1
Using libvirt to access Ceph RBDs with Xen
Hi everybody, my background: I've been doing Xen for 10++ years, many of them with DRBD for high availability; for some time now I've preferred GlusterFS with FUSE as replicated storage, where I place the image files for the VMs. In my current project we started (successfully) with Xen/GlusterFS too, but because the provider where we placed the servers uses Ceph extensively, we decided to
2013 Nov 06
1
Re: Problem using virt-sysprep with RBD images
...tand the Ceph configuration, so when I create a device with qemu-img I only specify the protocol and pool/device, e.g.: >> qemu-img create rbd:pool-name/device-name 5G (there is some voodoo that I don't understand; I've got a whole thread on trying to get qemu-img to create format 2 rbds by default... but that's for another thread) Would it be possible to specify rbds like this instead? Or is the scope bigger than I'm understanding, and would that cause issues with other disk types specified for the --add parameter? It seems like --add can take either a URI or a physical...
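
A side note on the format 2 issue mentioned in that snippet: qemu-img does not expose an obvious knob for the RBD image format, so one common workaround (a sketch, not taken from the thread) is to create the image with Ceph's own rbd CLI, which accepts an explicit --image-format flag, and then point qemu-img or libguestfs at the existing image. The pool and image names are the placeholders from the post; the monitor setup is assumed to come from /etc/ceph/ceph.conf.

    # create a 5 GiB (5120 MB) format 2 image directly with the Ceph CLI
    rbd create --size 5120 --image-format 2 pool-name/device-name

    # qemu-img can still create or inspect images through the rbd protocol;
    # the image format is then whatever the Ceph defaults dictate
    qemu-img create rbd:pool-name/device-name 5G
    qemu-img info rbd:pool-name/device-name
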
2013 Nov 26
3
Re: Problem Connecting to RBD images using Sys::Guestfs Perl Module
...mu packages. It's funny you should say that. I can't for the life of me get qemu-img to create format 2 rbd images... but I think that may be a Ceph config problem, not an issue with qemu/kvm. >> therefore its qemu packages have no rbd support either. libvirt has no problem with my rbds. It's just libguestfs that doesn't like my rbds... unless I prefix the name of the pool with a "/". I'm open to suggestions; I do most of my work in Perl with the expectation my code will run on other platforms, so I'm not married to Ubuntu/Debian. Thanks for your feedba...
2013 Nov 26
0
Re: Problem Connecting to RBD images using Sys::Guestfs Perl Module
..... (I > don't see a reason to run i386 these days, but I _guarantee_ you, I > have a customer who /refuses/ to use 64-bit VMs...) x86-64 as a platform supports 32-bit operating systems. So there's no special KVM emulator required to run 32-bit VMs. > libvirt has no problem with my rbds. It's just libguestfs that doesn't like > my rbds... unless I prefix the name of the pool with a "/". > > I'm open to suggestions, I do most of my work in Perl with the expectation > my code will run on other platforms, so I'm not married to Ubuntu/Debian. Th...
2012 Nov 17
2
iSCSI Question
...far as I can tell, none do (that I can find). I know some proprietary vendors have this type of functionality, which may or may not be using iSCSI code (but that's a whole set of arguments for later..). Ultimately, my goal is this. I want to take my existing Ceph cluster, expose a handful of RBDs from it to two iSCSI heads running C6 or RHEL6 or what have you, so I can use my inexpensive storage built on Linux -- for my Windows 2008 machines. Thoughts? Steven Crothers steven.crothers at gmail.com
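
For what it's worth, the usual do-it-yourself pattern here (a rough sketch, not from the thread; the target IQN and device names are made up) is to map the RBD into a kernel block device on the iSCSI head with rbd map and then export that device with the stock scsi-target-utils (tgtd/tgtadm) that ship with EL6:

    # on the iSCSI head: map the RBD; this creates /dev/rbd0 (or similar)
    rbd map pool-name/device-name

    # define an iSCSI target and back a LUN with the mapped device
    tgtadm --lld iscsi --op new --mode target --tid 1 \
           --targetname iqn.2012-11.com.example:rbd0
    tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
           --backing-store /dev/rbd0

    # allow initiators to connect (narrow this down in production)
    tgtadm --lld iscsi --op bind --mode target --tid 1 --initiator-address ALL
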
2013 Nov 06
3
Re: Problem using virt-sysprep with RBD images
Hello Richard, Haha, OK, here's a good one: I commented that if statement out at line 300, applied your patch (I see you updated the GitHub copy of this code; perhaps that's the best place to grab the code from), and when I run virt-sysprep, I get the following parameter for my disk drive: >> qemu-system-x86_64: -drive
2020 Aug 19
0
[PATCH 23/28] lib82596: convert from dma_cache_sync to dma_sync_single_for_device
...P32(virt_to_dma(lp, dma->rfds));
 	rfd->cmd = SWAP16(CMD_EOL|CMD_FLEX);
-	DMA_WBACK_INV(dev, dma, sizeof(struct i596_dma));
+	dma_sync_dev(dev, dma, sizeof(struct i596_dma));
 	return 0;
 }
@@ -547,7 +575,7 @@ static void rebuild_rx_bufs(struct net_device *dev)
 	lp->rbd_head = dma->rbds;
 	dma->rfds[0].rbd = SWAP32(virt_to_dma(lp, dma->rbds));
-	DMA_WBACK_INV(dev, dma, sizeof(struct i596_dma));
+	dma_sync_dev(dev, dma, sizeof(struct i596_dma));
 }
@@ -575,9 +603,9 @@ static int init_i596_mem(struct net_device *dev)
 	DEB(DEB_INIT, printk(KERN_DEBUG "%s: starting...
2013 Nov 25
4
Problem Connecting to RBD images using Sys::Guestfs Perl Module
Hello, I'm having trouble connecting to rbd images. It seems like somewhere the name is getting chewed up. I wonder if this is related to my previous troubles [1] [2] with rbd images. I'm trying to add an rbd image, but when I launch the guestfs object I get an error: >> libguestfs: trace: launch = -1 (error) I'm adding a single RBD >> libguestfs: trace: add_drive
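
For reference, a minimal Sys::Guestfs sketch for attaching an RBD image (the pool/image name and monitor address below are placeholders, not values from the thread): the important parts are the rbd protocol, the list of monitor addresses in server, and an explicit format.

    use strict;
    use warnings;
    use Sys::Guestfs;

    my $g = Sys::Guestfs->new ();

    # the image is named "pool/image" relative to the cluster
    $g->add_drive ("pool-name/device-name",
                   readonly => 1,
                   format   => "raw",
                   protocol => "rbd",
                   server   => ["ceph-mon1.example.com:6789"]);

    $g->launch ();
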
2020 Sep 15
0
[PATCH 11/18] lib82596: convert to dma_alloc_noncoherent
...P32(virt_to_dma(lp, dma->rfds));
 	rfd->cmd = SWAP16(CMD_EOL|CMD_FLEX);
-	DMA_WBACK_INV(dev, dma, sizeof(struct i596_dma));
+	dma_sync_dev(dev, dma, sizeof(struct i596_dma));
 	return 0;
 }
@@ -547,7 +575,7 @@ static void rebuild_rx_bufs(struct net_device *dev)
 	lp->rbd_head = dma->rbds;
 	dma->rfds[0].rbd = SWAP32(virt_to_dma(lp, dma->rbds));
-	DMA_WBACK_INV(dev, dma, sizeof(struct i596_dma));
+	dma_sync_dev(dev, dma, sizeof(struct i596_dma));
 }
@@ -575,9 +603,9 @@ static int init_i596_mem(struct net_device *dev)
 	DEB(DEB_INIT, printk(KERN_DEBUG "%s: starting...
2020 Sep 14
2
[PATCH 11/17] sgiseeq: convert to dma_alloc_noncoherent
...P32(virt_to_dma(lp, dma->rfds));
 	rfd->cmd = SWAP16(CMD_EOL|CMD_FLEX);
-	DMA_WBACK_INV(dev, dma, sizeof(struct i596_dma));
+	dma_sync_dev(dev, dma, sizeof(struct i596_dma));
 	return 0;
 }
@@ -547,7 +575,7 @@ static void rebuild_rx_bufs(struct net_device *dev)
 	lp->rbd_head = dma->rbds;
 	dma->rfds[0].rbd = SWAP32(virt_to_dma(lp, dma->rbds));
-	DMA_WBACK_INV(dev, dma, sizeof(struct i596_dma));
+	dma_sync_dev(dev, dma, sizeof(struct i596_dma));
 }
@@ -575,9 +603,9 @@ static int init_i596_mem(struct net_device *dev)
 	DEB(DEB_INIT, printk(KERN_DEBUG "%s: starting...
2014 Jan 15
2
Ceph RBD locking for libvirt-managed LXC (someday) live migrations
Hi, I'm trying to build an active/active virtualization cluster using a Ceph RBD as backing for each libvirt-managed LXC. I know live migration for LXC isn't yet possible, but I'd like to build my infrastructure as if it were. That is, I would like to be sure proper locking is in place for live migrations to someday take place. In other words, I'm building things as if I were
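
One cooperative option is the rbd CLI's advisory locks, scripted around container start/stop; a rough sketch with made-up image and lock names (advisory locks are only honoured by tools that check them, they do not fence anything by themselves):

    # take an advisory lock before starting the container on this host
    rbd lock add pool-name/container-root host1-lxc

    # see which hosts currently hold locks on the image
    rbd lock list pool-name/container-root

    # release the lock after a clean shutdown or migration
    # (the locker id comes from the "rbd lock list" output)
    rbd lock remove pool-name/container-root host1-lxc <locker-id>
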
2020 Sep 14
20
a saner API for allocating DMA addressable pages v2
Hi all, this series replaced the DMA_ATTR_NON_CONSISTENT flag to dma_alloc_attrs with a separate new dma_alloc_pages API, which is available on all platforms. In addition to cleaning up the convoluted code path, this ensures that other drivers that have asked for better support for non-coherent DMA to pages without incurring bounce buffering overhead can finally be properly supported. I'm still a
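
As a rough illustration of the calling convention these cover letters describe (a sketch built around the dma_alloc_pages()/dma_sync_single_*() interfaces named in the series; the surrounding function, direction, and sizes are invented and not taken from any of the patches):

    #include <linux/dma-mapping.h>
    #include <linux/gfp.h>
    #include <linux/mm.h>
    #include <linux/string.h>

    static int example_alloc_and_use(struct device *dev, size_t size)
    {
        struct page *page;
        dma_addr_t dma_handle;

        /* pages are device-addressable but possibly non-coherent */
        page = dma_alloc_pages(dev, size, &dma_handle,
                               DMA_BIDIRECTIONAL, GFP_KERNEL);
        if (!page)
            return -ENOMEM;

        /* CPU owns the buffer: initialise it, then hand it to the device */
        memset(page_address(page), 0, size);
        dma_sync_single_for_device(dev, dma_handle, size, DMA_BIDIRECTIONAL);

        /* ... device DMA runs here ... */

        /* take ownership back before the CPU looks at the results */
        dma_sync_single_for_cpu(dev, dma_handle, size, DMA_BIDIRECTIONAL);

        dma_free_pages(dev, size, page, dma_handle, DMA_BIDIRECTIONAL);
        return 0;
    }
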
2020 Sep 15
32
a saner API for allocating DMA addressable pages v3
Hi all, this series replaced the DMA_ATTR_NON_CONSISTENT flag to dma_alloc_attrs with a separate new dma_alloc_pages API, which is available on all platforms. In addition to cleaning up the convoluted code path, this ensures that other drivers that have asked for better support for non-coherent DMA to pages without incurring bounce buffering overhead can finally be properly supported. As a follow up I
2020 Aug 19
39
a saner API for allocating DMA addressable pages
Hi all, this series replaced the DMA_ATTR_NON_CONSISTENT flag to dma_alloc_attrs with a separate new dma_alloc_pages API, which is available on all platforms. In addition to cleaning up the convoluted code path, this ensures that other drivers that have asked for better support for non-coherent DMA to pages without incurring bounce buffering overhead can finally be properly supported. I'm still a