Similar to: Statistics domain memory block when domain shutdown

Displaying 7 results from an estimated 7 matches similar to: "Statistics domain memory block when domain shutdown"

2019 Aug 05 (2 replies): Vm in state "in shutdown"
Description of problem: libvirt 3.9 on CentOS Linux release 7.4.1708 (kernel 3.10.0-693.21.1.el7.x86_64) with QEMU 2.10.0. I'm currently facing a strange situation: sometimes my VM is shown by 'virsh list' as in state "in shutdown", but there is no qemu-kvm process linked to it. The libvirt log when the "in shutdown" state occurs is as follows ("d470c3b284425b9bacb34d3b5f3845fe" is the VM's name),
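
A minimal diagnostic sketch for this situation, using only standard libvirt and process tools (the domain name is taken from the post above; this is a first-pass check, not a confirmed fix):

  # Confirm the state libvirt reports for the domain
  virsh list --all
  virsh domstate d470c3b284425b9bacb34d3b5f3845fe
  # Check whether a qemu-kvm process actually exists for it
  pgrep -af qemu-kvm | grep d470c3b284425b9bacb34d3b5f3845fe
  # As a last resort, ask libvirt to drop the stale domain state
  virsh destroy d470c3b284425b9bacb34d3b5f3845fe
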
2012 Aug 14 (1 reply): question about list directory missing files or hang
Hi Gluster experts, I'm new to glusterfs and I have run into a problem with listing directories on glusterfs 3.3. I have a volume configured as 3 (distribute) x 2 (replica). When writing files on the glusterfs client mount directory, some of the files can't be listed through the ls command even though they exist, and sometimes the ls command hangs. Does anyone know what the problem is? Thank you
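
A hedged first-pass check, assuming a brick or client connection is unhealthy (<volname> and the log file name are placeholders, not taken from the thread):

  # Verify that all bricks of the volume are online
  gluster volume status <volname>
  gluster volume info <volname>
  # The client mount log (named after the mount point) often shows which subvolume went down
  tail /var/log/glusterfs/<mountpoint>.log
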
2007 Nov 11 (2 replies): No more sound
Hello. I've just erased my ~/.wine by mistake, and I don't have sound anymore! I can't figure out why; when I start wine with WINEDEBUG=+winealsa,+dsound I get:
ALSA lib conf.c:3949:(snd_config_expand) Unknown parameters 0
ALSA lib pcm.c:2145:(snd_pcm_open_noupdate) Unknown PCM default:0
ALSA lib conf.c:3949:(snd_config_expand) Unknown parameters 0
ALSA lib
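
Since deleting ~/.wine removes the prefix's audio configuration, one plausible recovery path is to verify ALSA outside of Wine and let Wine recreate the prefix (standard Wine/ALSA commands, not taken from the thread):

  # Verify ALSA still sees the sound card outside of Wine
  aplay -l
  # Recreate/update the Wine prefix with default settings
  wineboot -u
  # Re-check the audio settings in the GUI if needed
  winecfg
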
2013 Apr 30 (1 reply): Volume heal daemon 3.4alpha3
gluster> volume heal dyn_coldfusion
Self-heal daemon is not running. Check self-heal daemon log file.
gluster>
Is there a specific log? When I check /var/log/glusterfs/glustershd.log I see:
glustershd.log:[2013-04-30 15:51:40.463259] E [afr-self-heald.c:409:_crawl_proceed] 0-dyn_coldfusion-replicate-0: Stopping crawl for dyn_coldfusion-client-1, subvol went down
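
A hedged way to bring the self-heal daemon back, using the volume name from the post ("start ... force" restarts the volume's auxiliary daemons without disturbing running bricks):

  # Restart per-volume daemons, including glustershd
  gluster volume start dyn_coldfusion force
  # Confirm the self-heal daemon now shows as online
  gluster volume status dyn_coldfusion
  # Then retry the heal
  gluster volume heal dyn_coldfusion
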
2017 Oct 26 (0 replies): not healing one file
Hey Richard, could you share the following information please?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks: getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
Regards, Karthik
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote: > On a side note, try recently released health
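
For concreteness, the requested commands filled in for the volume discussed later in this thread ("home"); the brick path is hypothetical:

  gluster volume info home
  getfattr -d -e hex -m . /data/brick1/home/path/to/file
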
2017 Oct 26 (3 replies): not healing one file
On a side note, try the recently released health report tool and see if it diagnoses any issues in the setup. Currently you may have to run it on all three machines. On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote: > Thanks for this report. This week many of the developers are at Gluster > Summit in Prague; we will be checking this and respond next
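
Assuming the tool referred to is gluster-health-report (published around that time), a sketch of running it on each node; the install method and command name are assumptions, so check the release announcement for specifics:

  # On every node in the cluster (assumed pip-installable)
  pip install gluster-health-report
  gluster-health-report
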
2017 Oct 26 (2 replies): not healing one file
Hi Karthik, thanks for taking a look at this. I haven't been working with gluster long enough to make heads or tails out of the logs. The logs are attached to this mail, and here is the other information:
# gluster volume info home
Volume Name: home
Type: Replicate
Volume ID: fe6218ae-f46b-42b3-a467-5fc6a36ad48a
Status: Started
Snapshot Count: 1
Number of Bricks: 1 x 3 = 3
Transport-type: tcp