search for: lvmcach

Displaying 18 results from an estimated 18 matches for "lvmcach".

Did you mean: lvmcache
2012 Oct 30
1
Bug#691808: xcp-storage-managers: Another wrong binary path + wrong parameter in storage managers backend
...3260', 'ipaddr': '192.168.10.100', 'port': 3260} [4168] 2012-10-29 23:25:50.278164 lock: closed /var/lock/sm/iscsiadm/running [4168] 2012-10-29 23:25:50.285797 lock: creating lock file /var/lock/sm/4bc38254-5e28-4cb6-4566-067fd46ab0b2/sr [4168] 2012-10-29 23:25:50.286136 LVMCache created for VG_XenStorage-4bc38254-5e28-4cb6-4566-067fd46ab0b2 [4168] 2012-10-29 23:25:50.295899 ['/sbin/vgs', 'VG_XenStorage-4bc38254-5e28-4cb6-4566-067fd46ab0b2'] [4168] 2012-10-29 23:25:50.434364 FAILED: (rc 5) stdout: '', stderr: ' Volume group "VG_XenStorage-...
2008 Aug 17
2
mirroring with LVM?
...losed /dev/dm-1 /dev/ram2: Not using O_DIRECT Opened /dev/ram2 RW /dev/ram2: block size is 1024 bytes /dev/ram2: No label detected Closed /dev/ram2 Opened /dev/md2 RW O_DIRECT /dev/md2: block size is 4096 bytes /dev/md2: lvm2 label detected lvmcache: /dev/md2: now orphaned /dev/md2: Found metadata at 8192 size 872 for vg0 (lRcg12-Pt0L-NYUv-zJfF-GT62-UKnP-VmhtD0) lvmcache: /dev/md2: now in VG vg0 lvmcache: /dev/md2: setting vg0 VGID to lRcg12Pt0LNYUvzJfFGT62UKnPVmhtD0 lvmcache: /dev/md2: VG vg0: Set creation host to...
2017 Jan 11
2
HSM
Hmm, don't you just love changing terminology! I've been using HSM systems at work since '99. BTW, DMAPI is the Data Management API which was a common(ish) extension used by, amongst others, SGI and IBM. Back to lvmcache. It looks interesting. I'd earlier dismissed LVM since it is block orientated, not file orientated. Probably because my mental image is of files migrating to tapes in a silo. On 11/01/17 10:23, Andrew Holway wrote: > HSM also stands for "Hardware security module" > > Ma...
2013 Jul 04
1
Failed to create SR with lvmoiscsi on xcp1.6[ [opterr=Logical Volume partition creation error [opterr=error is 5]]
...72 IQN match. Incrementing sessions to 1 [23260] 2013-07-04 17:44:51.023807 lock: released /var/lock/sm/4f834073-fda6-765a-c83c-f1ef1edf8c80/sr [23260] 2013-07-04 17:44:51.023906 lock: closed /var/lock/sm/4f834073-fda6-765a-c83c-f1ef1edf8c80/sr [23260] 2013-07-04 17:44:51.024039 LVMCache created for VG_XenStorage-4f834073-fda6-765a-c83c-f1ef1edf8c80 [23260] 2013-07-04 17:44:51.051901 ['/usr/sbin/vgs', 'VG_XenStorage-4f834073-fda6-765a-c83c-f1ef1edf8c80'] [23260] 2013-07-04 17:44:51.121525 FAILED in util.pread: (rc 5) stdout: '', stderr: ' Vol...
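The failing step in this log is a plain vgs lookup, so it can be repeated by hand on the affected host. A hedged troubleshooting sketch, not taken from the thread: the VG name is copied from the log above, and LVM's exit status 5 is its generic "command failed" code, here most likely because the volume group could not be found.

    # What does LVM currently see on this host?
    pvs
    vgs

    # Repeat the exact lookup from the SMlog; a missing VG usually means the
    # iSCSI LUN is not visible to the host or has not been set up as a PV/VG.
    /usr/sbin/vgs VG_XenStorage-4f834073-fda6-765a-c83c-f1ef1edf8c80
    echo "exit status: $?"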
2017 Jan 11
2
HSM
I think there may be some confusion here. By HSM I was referring to Hierarchical Storage Management, whereby there are multiple levels of storage (fast+expensive <-> slow+cheap) and files migrate up or down. Originally it was used to keep data on tape with the metadata residing on disk, though it has been expanded to allow a SAS/SATA hierarchy. Quite where PKI comes in I'm not sure,
2017 Jan 11
0
HSM
...< martinrushton56 at btinternet.com> wrote: > Hmm, don't you just love changing terminology! I've been using HSM > systems at work since '99. BTW, DMAPI is the Data Management API which > was a common(ish) extension used by amongst others SGI and IBM. > > Back to lvmcache. It looks interesting. I'd earlier dismissed LVM > since it is block orientated, not file orientated. Probably because my > mental image is of files migrating to tapes in a silo. > > On 11/01/17 10:23, Andrew Holway wrote: > > HSM also stands for "Hardware security mo...
2018 Dec 08
1
Weird problems with CentOS 7.6 1810 installer
...owever, going back to mdadm 0.9 metadata, that version should only be kernel auto-detected. Theoretically, it gets activated before dracut gets involved. 1.x versions have no kernel autodetect; instead it happens in dracut (by calling mdadm to assemble and run it). Oh, is this really dm-cache, not lvmcache? That might be a source of confusion, if there isn't lvm metadata present to hint at LVM for proper assembly. Of course, lvmcache still uses device mapper, but with LVM metadata. Anyway, it is quite an interesting and concerning problem. Chris Murphy
2017 Oct 10
1
ZFS with SSD ZIL vs XFS
I've had good results using an SSD as LVM cache for gluster bricks (http://man7.org/linux/man-pages/man7/lvmcache.7.html). I still use XFS on bricks. On Tue, Oct 10, 2017 at 12:27 PM, Jeff Darcy <jeff at pl.atyp.us> wrote: > On Tue, Oct 10, 2017, at 11:19 AM, Gandalf Corvotempesta wrote: > > Anyone made some performance comparison between XFS and ZFS with ZIL > > on SSD, in gluster en...
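For reference, attaching an SSD as an lvmcache layer under an existing XFS brick roughly follows the steps below. This is a hedged sketch of the procedure described in lvmcache(7), not the poster's exact setup; /dev/sdb, gluster_vg and brick1 are hypothetical names.

    # Add the SSD to the brick's volume group (hypothetical device/VG names).
    pvcreate /dev/sdb
    vgextend gluster_vg /dev/sdb

    # Carve a cache pool out of the SSD and attach it to the brick LV.
    lvcreate --type cache-pool -L 100G -n brick1_cache gluster_vg /dev/sdb
    lvconvert --type cache --cachepool gluster_vg/brick1_cache gluster_vg/brick1

    # The XFS filesystem on brick1 is untouched; the cache can later be
    # removed with: lvconvert --uncache gluster_vg/brick1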
2017 Feb 09
4
Huge directory tree: Get files to sync via tools like sysdig
As Ben mentioned, ZFS snapshots are one possible approach. Another approach is to have a faster storage system. I have seen considerable speed improvements with rsync on similar data sets by, say, upgrading the storage subsystem. -------------------------------------------------------------------- This email is protected by LBackup, an open source backup solution http://www.lbackup.org
2018 Dec 05
3
Weird problems with CentOS 7.6 1810 installer
Hi, I just updated my installation media for CentOS. I have a few sandbox PCs in my office, and I'm testing CentOS 7.6 1810 on them. There seem to be a few issues with the CentOS 7.6 1810 DVD. Checked DVD integrity on startup: OK. First attempt: installer froze on root password dialog. Second attempt: installer froze on dependency calculation. Third attempt: installer froze on network
2017 Oct 31
3
BoF - Gluster for VM store use case
...f-heal, rebalance process could be useful for this use case * Erasure coded volumes with sharding - seen as a good fit for VM disk storage * Performance related ** accessing qemu images using the gfapi driver does not perform as well as fuse access. Need to understand why. ** Using zfs with cache or lvmcache for the xfs filesystem is seen to improve performance. If you have any further inputs on this topic, please add to the thread. thanks! sahina
2017 Jan 11
0
HSM
HSM also stands for "Hardware security module" Maybe lvmcache would be interesting for you? HSM is more popularly known as "tiering". Cheers, Andrew On 11 January 2017 at 11:15, J Martin Rushton < martinrushton56 at btinternet.com> wrote: > I think there may be some confusion here. By HSM I was referring to > Hierarchical Storage M...
2017 Feb 10
0
Huge directory tree: Get files to sync via tools like sysdig
...rote: > As Ben mentioned, ZFS snapshots are one possible approach. Another > approach is to have a faster storage system. I have seen considerable > speed improvements with rsync on similar data sets by, say, upgrading > the storage subsystem. Another possibility could be to use lvm and lvmcache to throw an SSD in front of the spinning disks. This would only improve things if you didn't otherwise fill up the cache with data -- you want the cache to contain inodes. So this might work only if your SSD cache was larger than whatever amount of data you typically write between rsync runs,...
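Whether such a cache actually ends up holding the hot blocks (inodes rather than bulk data) can be sanity-checked from the dm-cache counters that lvs reports. A hedged sketch, reusing the hypothetical gluster_vg/brick1 names from the earlier example; the cache_* field names are those listed by "lvs -o help" in recent lvm2 releases.

    # Cache occupancy and hit/miss counters for a cached LV
    lvs -o lv_name,data_percent,cache_read_hits,cache_read_misses,cache_write_hits,cache_write_misses gluster_vg/brick1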
2017 Oct 10
0
ZFS with SSD ZIL vs XFS
On Tue, Oct 10, 2017, at 11:19 AM, Gandalf Corvotempesta wrote: > Anyone made some performance comparison between XFS and ZFS with ZIL > on SSD, in gluster environment ? > > I've tried to compare both on another SDS (LizardFS) and I haven't > seen any tangible performance improvement. > > Is gluster different ? Probably not. If there is, it would probably favor
2017 Nov 01
0
BoF - Gluster for VM store use case
...my data / findings. > * Performance related > ** accessing qemu images using gfapi driver does not perform as well as fuse > access. Need to understand why. +1 I have some ideas here that I have come up with in my research. Happy to share these as well. > ** Using zfs with cache or lvmcache for xfs filesystem is seen to improve > performance I have done some interesting stuff with customers here too, nothing with VMs; iirc it was more for backing up bricks without geo-rep (it was too slow for them). -b > > If you have any further inputs on this topic, please add to thread. ...
2017 Oct 10
4
ZFS with SSD ZIL vs XFS
Has anyone made a performance comparison between XFS and ZFS with ZIL on SSD in a gluster environment? I've tried to compare both on another SDS (LizardFS) and I haven't seen any tangible performance improvement. Is gluster different?
2018 Mar 07
4
gluster for home directories?
...gi?id=1405147 * nfs-ganesha does not have the 'async' option that kernel nfs has. I can understand why they don't want to implement this feature, but I do wonder how others are increasing their nfs-ganesha performance. I've put some SSDs in each brick and have them configured as lvmcache for the bricks. This setup only increases throughput once the data is on the SSD, not for just-written data. Regards, Rik [1] 4 servers with 2 1Gbit nics (one for the client traffic, one for s2s traffic with jumbo frames enabled). Each server has two disks (bricks). [2] ioping from the nfs c...
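Part of the "just-written data" limitation is the cache mode: LVM defaults to writethrough, where every write still has to hit the slow origin device, while writeback lets writes be acknowledged from the SSD (at the price of the cache SSD becoming critical for data integrity). A hedged sketch with the same hypothetical gluster_vg/brick1 names as above; the commands follow lvmcache(7).

    # Show the current cache mode of the cached LV
    lvs -o+cachemode gluster_vg/brick1

    # Switch to writeback so fresh writes are absorbed by the SSD first.
    # Caution: with writeback, losing the cache SSD can mean losing data.
    lvchange --cachemode writeback gluster_vg/brick1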
2018 Mar 08
0
gluster for home directories?
...* nfs-ganesha does not have the 'async' option that kernel nfs has. I > can understand why they don't want to implement this feature, but do > wonder how others are increasing their nfs-ganesha performance. I've put > some SSD's in each brick and have them configured as lvmcache to the > bricks. This setup only increases throughput once the data is on the ssd > and not for just-written data. > > Regards, > > Rik > > [1] 4 servers with 2 1Gbit nics (one for the client traffic, one for s2s > traffic with jumbo frames enabled). Each server has two...