Displaying 20 results from an estimated 100 matches similar to: "virsh dominfo not returning any stats"
2016 May 11
0
Questions about CMT event statistic
Hi
I'm testing the CMT perf event in libvirt, and I have two questions. I would be very grateful if someone could give me some help.
Q1: "virsh domstats --perf" and the Linux perf tool give different results.
I have a guest with the CMT event enabled; I start the guest and collect the perf statistics every 1s:
# while true; do virsh domstats rhel7.2-1030 --perf; sleep 1; done
Meanwhile, I use the perf tool to get
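(Not part of the original message, which is cut off above. As a rough sketch of the same polling loop through the Python bindings, assuming a libvirt-python new enough to expose VIR_DOMAIN_STATS_PERF; the exact perf.* key names vary by libvirt version.)
# Hedged sketch: poll the same counters as `virsh domstats rhel7.2-1030 --perf`
# once per second via the libvirt Python bindings.
import time
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('rhel7.2-1030')

while True:
    # domainListGetStats() returns a list of (domain, stats-dict) tuples.
    _, stats = conn.domainListGetStats([dom], libvirt.VIR_DOMAIN_STATS_PERF)[0]
    perf = {k: v for k, v in stats.items() if k.startswith('perf.')}
    print(perf)
    time.sleep(1)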
2016 Feb 18
2
query regarding domstats option with virsh
Hi,
I am looking for information about the domstats command-line option of the virsh tool.
1> With which version of libvirt is this command-line option available?
2> How can I get the guest's disk-related information when no storage pool exists on the host?
Thanks and Regards,
Manish
2016 Feb 18
0
Re: query regarding domstats option with virsh
On 18.02.2016 06:01, Jain, Manish (HP Software) wrote:
> Hi,
>
> I am looking for the information w.r.t domstats command line option of virsh tool.
>
>
> 1> This command line option is available with which version of libvirt.
Looks like the command was introduced in commit 5e542970, which is part of
the 1.2.8 release.
>
> 2> How to get Guest's disk related
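(The reply is cut off above; the following is not part of the original. As a hedged sketch for question 2: per-disk statistics come from the domain's <disk> devices rather than from any storage pool, so they are available even when no pool is defined. The domain and device names below are hypothetical.)
# Sketch only: read block I/O stats for a guest disk; no storage pool needed.
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('guest1')                        # hypothetical domain name
rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats('vda')   # hypothetical target device
print('read bytes:', rd_bytes, 'written bytes:', wr_bytes)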
2017 Aug 27
0
self-heal not working
Thanks, Ravi, for your analysis. So as far as I understand there is nothing to worry about, but my question now would be: how do I get rid of this file from the heal info?
> -------- Original Message --------
> Subject: Re: [Gluster-users] self-heal not working
> Local Time: August 27, 2017 3:45 PM
> UTC Time: August 27, 2017 1:45 PM
> From: ravishankar at redhat.com
> To: mabi <mabi at
2017 Aug 28
0
self-heal not working
On 08/28/2017 01:57 AM, Ben Turner wrote:
> ----- Original Message -----
>> From: "mabi" <mabi at protonmail.ch>
>> To: "Ravishankar N" <ravishankar at redhat.com>
>> Cc: "Ben Turner" <bturner at redhat.com>, "Gluster Users" <gluster-users at gluster.org>
>> Sent: Sunday, August 27, 2017 3:15:33 PM
>>
2005 Sep 09
0
[PATCH] Call dominfo.device_delete instead of non-existent dominfo.device_destroy
This patch changes the device destruction function to one that exists in
the XendDomainInfo class, instead of a non-existent method.
Signed-off-by: Sean Dague <sean@dague.net>
Diffstat output:
XendDomain.py | 2 +-
1 files changed, 1 insertion(+), 1 deletion(-)
diff -r 41a74438bcba tools/python/xen/xend/XendDomain.py
--- a/tools/python/xen/xend/XendDomain.py Fri Sep 9 18:36:48
2005 Sep 13
1
[RESEND] [PATCH] Call dominfo.device_delete instead of non-existent dominfo.device_destroy
This is a resend of the patch from late last week, as I haven't seen it in
the changelog yet. Comments welcome if there is an issue with the patch.
-Sean
--
__________________________________________________________________
Sean Dague Mid-Hudson Valley
sean at dague dot net Linux Users Group
http://dague.net
2017 Aug 27
2
self-heal not working
Yes, the shds did pick up the file for healing (I saw messages like
"got entry: 1985e233-d5ee-4e3e-a51a-cf0b5f9f2aea") but no errors afterwards.
Anyway, I reproduced it by manually setting the afr.dirty bit for a zero-byte
file on all 3 bricks. Since there are no afr pending xattrs
indicating good/bad copies and all the files are zero bytes, the data
self-heal algorithm just picks the
2017 Aug 28
0
self-heal not working
On 08/28/2017 01:29 PM, mabi wrote:
> Excuse me for my naive questions, but how do I reset the afr.dirty
> xattr on the file to be healed? And do I need to do that through a
> FUSE mount, or simply on every brick directly?
>
>
Directly on the bricks: `setfattr -n trusted.afr.dirty -v
0x000000000000000000000000
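(The command above is cut off before the file argument. Purely as an illustration, and not part of the original reply: the same reset done from Python directly on each brick would look roughly like the sketch below; the brick path and file name are hypothetical.)
# Hedged sketch: clear trusted.afr.dirty (12 zero bytes) directly on a brick.
# Run as root on every brick host; the path below is hypothetical.
import os

brick_file = '/data/brick1/gv0/some/file'                 # hypothetical brick-side path
os.setxattr(brick_file, 'trusted.afr.dirty', bytes(12))   # 0x000000000000000000000000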
2017 Aug 27
2
self-heal not working
----- Original Message -----
> From: "mabi" <mabi at protonmail.ch>
> To: "Ravishankar N" <ravishankar at redhat.com>
> Cc: "Ben Turner" <bturner at redhat.com>, "Gluster Users" <gluster-users at gluster.org>
> Sent: Sunday, August 27, 2017 3:15:33 PM
> Subject: Re: [Gluster-users] self-heal not working
>
>
2017 Aug 28
0
self-heal not working
Great, can you raise a bug for the issue so that it is easier to keep
track of (plus you'll be notified if the patch is posted)? The
general guidelines are @
https://gluster.readthedocs.io/en/latest/Contributors-Guide/Bug-Reporting-Guidelines
but you just need to provide whatever you described in this email thread
in the bug:
i.e. volume info, heal info, getfattr and stat output of
2015 Jul 24
0
virsh dominfo does not show correct cpuTime
Hi,
I am doing some domain resource monitoring work. I use
cat /dev/urandom | md5sum
to simulate vCPU load, and I wrote a script to calculate vCPU utilization. It
all seems fine at the beginning, but after I abort the ``cat ... md5sum``
command in the domain, I see some strange data, as below:
CPU time: 8410960000000
CPU util: 99.8410609331%
CPU time: 8411970000000
CPU util: 100.843672381%
CPU time:
2015 Jul 24
0
Re: virsh dominfo does not show correct cpuTime
I was getting the vCPU use time outside the guest with the libvirt-python API, and
then calculating the utilization as (cpuTime2 - cpuTime1) / (t2 - t1). I was
not doing this inside the guest OS.
2015-07-24 15:09 GMT+08:00 2020human <human2020@qq.com>:
> What you calculate is the vCPU use time, not the utilization.
>
> use_time/total_cpu_time is the utilization.
>
> total_cpu_time=`cat /proc/stat |sed -n
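(Not part of the original exchange. A hedged sketch of the sampling calculation being discussed, using dom.info() from libvirt-python: cpuTime is cumulative nanoseconds, so dividing by wall time alone can exceed 100% on multi-vCPU guests, hence the extra division by the vCPU count. The domain name and interval are hypothetical.)
# Sketch: sample cumulative guest CPU time twice and normalise by vCPU count.
import time
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('guest1')              # hypothetical domain name

_, _, _, nr_vcpus, cpu_t1 = dom.info()         # cpuTime is in nanoseconds
t1 = time.time()
time.sleep(5)
_, _, _, nr_vcpus, cpu_t2 = dom.info()
t2 = time.time()

util = (cpu_t2 - cpu_t1) / ((t2 - t1) * 1e9 * nr_vcpus) * 100
print('CPU util: %.2f%%' % util)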
2017 Aug 28
2
self-heal not working
Thank you for the command. I ran it on all my nodes and now, finally, the self-heal daemon does not report any files to be healed. Hopefully this scenario can be handled properly in newer versions of GlusterFS.
> -------- Original Message --------
> Subject: Re: [Gluster-users] self-heal not working
> Local Time: August 28, 2017 10:41 AM
> UTC Time: August 28, 2017 8:41 AM
>
2020 Apr 08
0
Getting Intel RDT cache allocation status from libvirt?
Is there a way to get the current cache allocation status of a host
from libvirt, via API or virsh?
I've seen some references to a 'virsh nodecachestats' command in some
patches from 2017 that doesn't seem to exist, and I see that 'virsh
domstats --cpu-total <domain>' has similar information but seems to be
focused on cache monitoring information rather than the
2020 Apr 07
0
Getting Intel RDT cache allocation status from libvirt?
Is there a way to get the current cache allocation status of a host
from libvirt, via API or virsh?
I've seen some references to a 'virsh nodecachestats' command in some
patches from 2017 that doesn't seem to exist, and I see that 'virsh
domstats --cpu-total <domain>' has similar information but seems to be
focused on cache monitoring information rather than the
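(Not an answer from the thread, just a hedged possibility: on newer libvirt releases the host's cache banks, and where supported the allocation-control details, are reported under <host><cache> in the capabilities XML, so one way to inspect them from Python is the sketch below.)
# Sketch: dump the <cache> banks reported in the host capabilities XML.
import libvirt
import xml.etree.ElementTree as ET

conn = libvirt.open('qemu:///system')
caps = ET.fromstring(conn.getCapabilities())
for bank in caps.findall('./host/cache/bank'):
    print(bank.attrib)                      # e.g. id, level, type, size, cpus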
2017 Aug 28
3
self-heal not working
Excuse me for my naive questions, but how do I reset the afr.dirty xattr on the file to be healed? And do I need to do that through a FUSE mount, or simply on every brick directly?
> -------- Original Message --------
> Subject: Re: [Gluster-users] self-heal not working
> Local Time: August 28, 2017 5:58 AM
> UTC Time: August 28, 2017 3:58 AM
> From: ravishankar at redhat.com
>
2017 Aug 25
0
self-heal not working
Hi Ravi,
Did you get a chance to have a look at the log files I have attached in my last mail?
Best,
Mabi
> -------- Original Message --------
> Subject: Re: [Gluster-users] self-heal not working
> Local Time: August 24, 2017 12:08 PM
> UTC Time: August 24, 2017 10:08 AM
> From: mabi at protonmail.ch
> To: Ravishankar N <ravishankar at redhat.com>
> Ben Turner
2018 Mar 08
1
Statistics domain memory block when domain shutdown
Hi
My libvirt version is 3.4.0, the host system is CentOS 7.4, and the kernel is 3.10.0-693.el7.x86_64. When I shut down the domain from inside the virtual system, my program calls virDomainMemoryStats and blocks in this API. The call stack is:
#0 0x00007ff242d78a3d in poll () from /lib64/libc.so.6
#1 0x00007ff243755ce8 in virNetClientIOEventLoop () from /lib64/libvirt.so.0
#2 0x00007ff24375654b in
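(Not part of the original report; a minimal sketch of the call in question through the Python bindings, with a hypothetical domain name. It only illustrates where the block happens, it is not a fix.)
# Sketch: memoryStats() maps to virDomainMemoryStats(); per the report it can
# block inside virNetClientIOEventLoop() if the guest is shutting down.
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('guest1')          # hypothetical domain name
if dom.isActive():
    print(dom.memoryStats())               # e.g. {'actual': ..., 'rss': ...}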
2013 Mar 25
0
Bug in DOMINFO command when balloon driver is used on a VM with more than 8 GB of MaxMemory?
Hi,
I sent this to the wrong list (libvirt-devel) on Friday... so I am trying
to send it to the correct one this time.
Apologies for the double posting.
I also created a ticket on bugzilla.redhat.com for this:
https://bugzilla.redhat.com/show_bug.cgi?id=927336
Still, I am posting it here because it is absolutely possible I am doing
something wrong and someone here will see it.
Description of the