search for: vmdata

Displaying 19 results from an estimated 19 matches for "vmdata".

2013 Sep 16
1
Gluster 3.4 QEMU and Permission Denied Errors
Hey List, I'm trying to test out using Gluster 3.4 for virtual machine disks. My environment consists of two Fedora 19 hosts with gluster and qemu/kvm installed. I have a single volume on gluster called vmdata that contains my qcow2 formatted image created like this: qemu-img create -f qcow2 gluster://localhost/vmdata/test1.qcow 8G I'm able to boot my created virtual machine but in the logs I see this: [2013-09-16 15:16:04.471205] E [addr.c:152:gf_auth] 0-auth/addr: client is bound to port 46021 wh...
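The truncated log line above is the usual symptom of a libgfapi client connecting from an unprivileged (>1024) port, which Gluster rejects by default. A hedged sketch of the commonly cited remedy for this setup; the option names are from Gluster's own documentation, but verify them against your 3.4 install before applying:

```shell
# Allow clients connecting from unprivileged ports to talk to the bricks
# of the vmdata volume (the volume name is taken from the post above).
gluster volume set vmdata server.allow-insecure on

# glusterd itself must also accept insecure ports. Add this line inside
# the management volume block of /etc/glusterfs/glusterd.vol on every
# server, then restart glusterd:
#
#     option rpc-auth-allow-insecure on
#
systemctl restart glusterd
```

Both changes are needed because the volume option only covers the brick processes, while qemu's initial volfile fetch goes through glusterd.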
2017 Jun 09
4
Urgent :) Procedure for replacing Gluster Node on 3.8.12
...the 3.2TB over the weekend. In which case, what is the best way to replace the old failed node? The new node would have a new hostname and IP. The failed node is vna; let's call the new node vnd. I'm thinking the following: gluster volume remove-brick datastore4 replica 2 vna.proxmox.softlog:/tank/vmdata/datastore4 force gluster volume add-brick datastore4 replica 3 vnd.proxmox.softlog:/tank/vmdata/datastore4 Would that be all that is required? Existing setup below: gluster v info Volume Name: datastore4 Type: Replicate Volume ID: 0ba131ef-311d-4bb1-be46-596e83b2f6ce Status: Started Snapshot...
2017 Jun 09
2
Urgent :) Procedure for replacing Gluster Node on 3.8.12
On Fri, Jun 9, 2017 at 12:41 PM, <lemonnierk at ulrar.net> wrote: > > I'm thinking the following: > > > > gluster volume remove-brick datastore4 replica 2 > > vna.proxmox.softlog:/tank/vmdata/datastore4 force > > > > gluster volume add-brick datastore4 replica 3 > > vnd.proxmox.softlog:/tank/vmdata/datastore4 > > I think that should work perfectly fine yes, either that > or directly use replace-brick ? > Yes, this should be replace-brick > > ______...
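The single-step alternative suggested in this reply can be sketched as follows, using the hostnames and brick paths from the thread. This is a sketch only: on replicated volumes of this vintage, replace-brick supports only the commit force form, and it needs a healthy cluster and a live heal daemon, so check the 3.8 documentation before running it:

```shell
# Swap the failed brick on vna for a new brick on vnd in one operation;
# self-heal then copies the replica's data onto the new brick.
gluster volume replace-brick datastore4 \
    vna.proxmox.softlog:/tank/vmdata/datastore4 \
    vnd.proxmox.softlog:/tank/vmdata/datastore4 \
    commit force

# Watch the heal catch up on the new brick before trusting the volume.
gluster volume heal datastore4 info
```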
2017 Sep 21
2
Performance drop from 3.8 to 3.10
...ersion I made no changes to the volume settings. op.version is 31004 gluster v info Volume Name: datastore4 Type: Replicate Volume ID: 0ba131ef-311d-4bb1-be46-596e83b2f6ce Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: vnb.proxmox.softlog:/tank/vmdata/datastore4 Brick2: vng.proxmox.softlog:/tank/vmdata/datastore4 Brick3: vnh.proxmox.softlog:/tank/vmdata/datastore4 Options Reconfigured: transport.address-family: inet cluster.locking-scheme: granular cluster.granular-entry-heal: yes features.shard-block-size: 64MB network.remote-dio: enable cluste...
2017 Jun 09
0
Urgent :) Procedure for replacing Gluster Node on 3.8.12
> I'm thinking the following: > > gluster volume remove-brick datastore4 replica 2 > vna.proxmox.softlog:/tank/vmdata/datastore4 force > > gluster volume add-brick datastore4 replica 3 > vnd.proxmox.softlog:/tank/vmdata/datastore4 I think that should work perfectly fine yes, either that or directly use replace-brick ? -------------- next part -------------- A non-text attachment was scrubbed... Name: si...
2017 Jun 09
0
Urgent :) Procedure for replacing Gluster Node on 3.8.12
On 9/06/2017 9:56 PM, Pranith Kumar Karampuri wrote: > > > gluster volume remove-brick datastore4 replica 2 > > vna.proxmox.softlog:/tank/vmdata/datastore4 force > > > > gluster volume add-brick datastore4 replica 3 > > vnd.proxmox.softlog:/tank/vmdata/datastore4 > > I think that should work perfectly fine yes, either that > or directly use replace-brick ? > > > Yes, this should be rep...
2018 Apr 30
3
Finding performance bottlenecks
...CI_IATT Duration: 1655 seconds Data Read: 8804864 bytes Data Written: 612756480 bytes config: Volume Name: gv0 Type: Replicate Volume ID: a0b6635a-ae48-491b-834a-08e849e87642 Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: barbelith10:/tank/vmdata/gv0 Brick2: rommel10:/tank/vmdata/gv0 Brick3: panzer10:/tank/vmdata/gv0 Options Reconfigured: diagnostics.count-fop-hits: on diagnostics.latency-measurement: on features.cache-invalidation: on nfs.disable: on cluster.server-quorum-type: server cluster.quorum-type: auto network.remote-dio: enable cl...
2018 May 01
0
Finding performance bottlenecks
...ytes > Data Written: 612756480 bytes > > config: > Volume Name: gv0 > Type: Replicate > Volume ID: a0b6635a-ae48-491b-834a-08e849e87642 > Status: Started > Snapshot Count: 0 > Number of Bricks: 1 x 3 = 3 > Transport-type: tcp > Bricks: > Brick1: barbelith10:/tank/vmdata/gv0 > Brick2: rommel10:/tank/vmdata/gv0 > Brick3: panzer10:/tank/vmdata/gv0 > Options Reconfigured: > diagnostics.count-fop-hits: on > diagnostics.latency-measurement: on > features.cache-invalidation: on > nfs.disable: on > cluster.server-quorum-type: server > cluster.qu...
2017 Sep 22
0
Performance drop from 3.8 to 3.10
...ion is 31004 > > gluster v info > > Volume Name: datastore4 > Type: Replicate > Volume ID: 0ba131ef-311d-4bb1-be46-596e83b2f6ce > Status: Started > Snapshot Count: 0 > Number of Bricks: 1 x 3 = 3 > Transport-type: tcp > Bricks: > Brick1: vnb.proxmox.softlog:/tank/vmdata/datastore4 > Brick2: vng.proxmox.softlog:/tank/vmdata/datastore4 > Brick3: vnh.proxmox.softlog:/tank/vmdata/datastore4 > Options Reconfigured: > transport.address-family: inet > cluster.locking-scheme: granular > cluster.granular-entry-heal: yes > features.shard-block-size: 64M...
2020 Jan 08
0
Re: bug report
...the problem appears. > > > Here is the error message below. I really appreciate your generous help > > > > > > > > > [root at gz-op-vmhost01 vmdata]# uname -r && cat /etc/redhat-release && virsh -V > 4.19.8-1.el7.elrepo.x86_64 > CentOS Linux release 7.2.1511 (Core) > Virsh command line tool of libvirt 4.5.0 > See web site at https://libvirt.org/ > > Compiled with supp...
2018 Feb 19
2
Upgrade from 3.8.15 to 3.12.5
...296816] I [MSGID: 106490] [glusterd-handler.c:2540:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: a23fa00c-4c7c-436d-9d04-0ceeee16941c [2018-02-19 05:32:50.298392] E [MSGID: 106010] [glusterd-utils.c:3374:glusterd_compare_friend_volume] 0-management: Version of Cksums VMData differ. local cksum = 1127272657, remote cksum = 3816303263 on peer found2.ssd.org [2018-02-19 05:32:50.298747] I [MSGID: 106493] [glusterd-handler.c:3800:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to found2.ssd.org (0), ret: 0, op_ret: -1 [2018-02-19 05:32:50.987194] I [MSGID: 106163]...
2018 Feb 19
0
Upgrade from 3.8.15 to 3.12.5
...490] [glusterd-handler.c:2540:__ > glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from > uuid: a23fa00c-4c7c-436d-9d04-0ceeee16941c > [2018-02-19 05:32:50.298392] E [MSGID: 106010] [glusterd-utils.c:3374: > glusterd_compare_friend_volume] 0-management: Version of Cksums VMData > differ. local cksum = 1127272657, remote cksum = 3816303263 on peer > found2.ssd.org > [2018-02-19 05:32:50.298747] I [MSGID: 106493] [glusterd-handler.c:3800:glusterd_xfer_friend_add_resp] > 0-glusterd: Responded to found2.ssd.org (0), ret: 0, op_ret: -1 > [2018-02-19 05:32:50.987...
2013 Jul 24
1
Cpus_allowed_list issue in RHEL6.4
...# cat /proc/1/status Name: init State: S (sleeping) Tgid: 1 Pid: 1 PPid: 0 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 Utrace: 0 FDSize: 64 Groups: VmPeak: 19364 kB VmSize: 19356 kB VmLck: 0 kB VmHWM: 1544 kB VmRSS: 1544 kB VmData: 328 kB VmStk: 88 kB VmExe: 140 kB VmLib: 2348 kB VmPTE: 52 kB VmSwap: 0 kB Threads: 1 SigQ: 1/256326 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000001000 SigCgt: 00000001a0016623 CapInh: 0000000000000000 C...
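In the status dump above, VmData reports the size of the process's data segment (heap plus initialized and uninitialized data). A minimal, runnable sketch for pulling that one field out of /proc; it reads /proc/self/status rather than PID 1's file, so it works without root:

```shell
# Print just the VmData line from a process's status file.
# /proc/self/status here describes the grep process itself, so the
# number will differ from the "VmData: 328 kB" shown for init above.
grep '^VmData:' /proc/self/status
```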
2020 Jan 03
2
bug report
...o nothing with my server? But the problem appears. Here is the error message below. I really appreciate your generous help [root at gz-op-vmhost01 vmdata]# uname -r && cat /etc/redhat-release && virsh -V 4.19.8-1.el7.elrepo.x86_64 CentOS Linux release 7.2.1511 (Core) Virsh command line tool of libvirt 4.5.0 See web site at https://libvirt.org/ Compiled with support for: Hypervisors: QEM...
2017 Jul 01
3
integrating samba with pam
On Sat, 1 Jul 2017 16:30:25 +0100, Rowland Penny via samba wrote: > On Sat, 01 Jul 2017 11:48:21 -0300 > Guido Lorenzutti via samba wrote: > >> Hi there! I've been using samba3 with ldap for years, and now I'm about to move to samba4 to leave slapd behind. > > I take it you mean that you use Samba as an AD DC Exactly. >> I didn't try yet to migrate the directory from
2005 Oct 10
0
process sigblk
...t signal is SigBlk indicating? Thanks, Jerry ------------ State: S (sleeping) Tgid: 11542 Pid: 11542 PPid: 1 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 256 Groups: 0 1 2 3 4 6 10 VmSize: 4888 kB VmLck: 0 kB VmRSS: 1944 kB VmData: 692 kB VmStk: 112 kB VmExe: 1220 kB VmLib: 2336 kB SigPnd: 0000000000000000 SigBlk: 0000000080000000 SigIgn: 8000000000000000 SigCgt: 00000003fffbfeff CapInh: 0000000000000000 CapPrm: 00000000fffffeff CapEff: 00000000fffffeff
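To answer the question: each of these fields is a 64-bit hex bitmap in which bit N (counting from 0) stands for signal N+1. A small sketch that decodes the SigBlk value from the post; the mask is taken verbatim from the dump, while identifying signal 32 as a real-time signal reserved by glibc's threading library is an assumption about that system:

```shell
# Decode a /proc status signal mask: bit i set => signal i+1 is in the set.
mask=0x0000000080000000
i=0
while [ "$i" -lt 64 ]; do
  if [ $(( (mask >> i) & 1 )) -ne 0 ]; then
    echo "blocked signal: $((i + 1))"
  fi
  i=$((i + 1))
done
# prints: blocked signal: 32
```

Signal 32 is the first real-time signal on Linux; glibc reserves the first couple of them for its thread implementation, which is why a threaded daemon commonly shows them blocked.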
2018 Feb 20
0
Stale File handle
...volume that has the following when i try to access it (ls) ls: cannot access 37019600-c34e-4d10-8829-ac08cb141f19.meta: Stale file handle 37019600-c34e-4d10-8829-ac08cb141f19 37019600-c34e-4d10-8829-ac08cb141f19.lease 37019600-c34e-4d10-8829-ac08cb141f19.meta when i look at gluster volume heal VMData info i get the following Brick found1.ssd.org:/data/brick1/data /8d4b29ee-16c9-4bb9-a7db-937c9da6805d/images/a01b2883-73b1-4d4f-a338-44afcfce57a6/37019600-c34e-4d10-8829-ac08cb141f19 Status: Connected Number of entries: 1 Brick found2.ssd.org:/data/brick1/data Status: Connected Number of e...
2020 Jan 08
3
Re: bug report
...> > Here is the error message below. I really appreciate your generous help > > > [root@gz-op-vmhost01 vmdata]# uname -r && cat /etc/redhat-release && virsh -V > 4.19.8-1.el7.elrepo.x86_64 > CentOS Linux release 7.2.1511 (Core) > Virsh command line tool of libvirt 4.5.0 > See web site at https://libvirt...
2011 Jul 27
4
Creating a vm with a non-existent /dev/mapper/ tap2 device effectively hangs dom0 system
...ukxen2 11327 # Name: tapdisk2 State: D (disk sleep) Tgid: 11327 Pid: 11327 PPid: 1 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: VmPeak: 23056 kB VmSize: 21644 kB VmLck: 21640 kB VmHWM: 3848 kB VmRSS: 3232 kB VmData: 364 kB VmStk: 88 kB VmExe: 224 kB VmLib: 2460 kB VmPTE: 64 kB Threads: 1 SigQ: 2/6081 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000001000 SigCgt: 0000000181000242 CapInh: 0000000000000000 CapPrm: fffffffffffffff...