similar to: Clear heal statistics

Displaying 20 results from an estimated 1200 matches similar to: "Clear heal statistics"

2018 Feb 08
2
Thousands of EPOLLERR - disconnecting now
Hello I have a large cluster in which every node is logging: I [socket.c:2474:socket_event_handler] 0-transport: EPOLLERR - disconnecting now At a rate of around 4 or 5 per second per node, which is adding up to a lot of messages. This seems to happen while my cluster is idle.
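For context, one way to reduce this log noise while investigating is to raise the log level, a sketch assuming a reasonably recent GlusterFS and a volume named myvol (both assumptions, not from the thread); this hides the Info-level messages but does not fix the underlying disconnects:

    # Suppress Info-level messages such as the EPOLLERR line on clients
    gluster volume set myvol diagnostics.client-log-level WARNING
    # And on the bricks, if the messages appear in the brick logs
    gluster volume set myvol diagnostics.brick-log-level WARNING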
2018 Feb 08
0
Thousands of EPOLLERR - disconnecting now
On Thu, Feb 8, 2018 at 2:04 PM, Gino Lisignoli <glisignoli at gmail.com> wrote: > Hello > > I have a large cluster in which every node is logging: > > I [socket.c:2474:socket_event_handler] 0-transport: EPOLLERR - > disconnecting now > > At a rate of around 4 or 5 per second per node, which is adding up to a > lot of messages. This seems to happen while my
2017 Nov 21
1
Brick and Subvolume Info
Hello I have a Distributed-Replicate volume and I would like to know if it is possible to see which sub-volume a brick belongs to, e.g.: A Distributed-Replicate volume containing: Number of Bricks: 2 x 2 = 4 Brick1: node1.localdomain:/mnt/data1/brick1 Brick2: node2.localdomain:/mnt/data1/brick1 Brick3: node1.localdomain:/mnt/data2/brick2 Brick4: node2.localdomain:/mnt/data2/brick2 Is it possible
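As background, GlusterFS groups bricks into replica subvolumes in the order they are listed: with replica 2, each consecutive pair forms one subvolume. A sketch of reading that off the quoted layout (the <volname>-replicate-N naming is the usual convention, assumed here):

    # Bricks appear in subvolume order in the volume info output
    # Brick1 + Brick2 -> replicate-0 (node1:/mnt/data1/brick1, node2:/mnt/data1/brick1)
    # Brick3 + Brick4 -> replicate-1 (node1:/mnt/data2/brick2, node2:/mnt/data2/brick2)
    gluster volume info <volname>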
2016 Sep 20
1
[PATCH] libvirt: read disk paths from pools (RHBZ#1366049)
A disk of type 'volume' is stored as <source pool='pool_name' volume='volume_name'/>, and its real location is the volume 'volume_name' inside the pool 'pool_name': in this case, query libvirt for the actual path of the specified volume in the specified pool. Adjust the code so that: - for_each_disk gets the virConnectPtr, needed to do operations with libvirt
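For illustration, the same resolution the patch performs can be done by hand with virsh, assuming the pool and volume names from the quoted XML:

    # Resolve the real path of a volume inside a pool
    virsh vol-path --pool pool_name volume_name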
2009 Jul 13
1
[PATCH] Use volume key instead of path to identify volume.
This patch teaches taskomatic to use the volume 'key' instead of the path from libvirt to identify the volume in the database. This fixes the duplicate iSCSI volume bug we were seeing. The issue was that libvirt changed the way it names storage volumes and included a local ID that changed each time it was attached. Note that the first run with this new patch will cause duplicate
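As a hand-run illustration of the difference (pool and volume names here are placeholders): the key is stable across reattachments, while the path may change:

    # Stable identifier, suitable as a database key
    virsh vol-key --pool mypool myvolume
    # Path, which may differ between attachments
    virsh vol-path --pool mypool myvolume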
2016 Sep 22
1
[PATCH v2] libvirt: read disk paths from pools (RHBZ#1366049)
A disk of type 'volume' is stored as <source pool='pool_name' volume='volume_name'/>, and its real location is the volume 'volume_name' inside the pool 'pool_name': in this case, query libvirt for the actual path of the specified volume in the specified pool. Adjust the code so that: - for_each_disk gets the virConnectPtr, needed to do operations with libvirt
2017 Nov 30
0
Clear heal statistics
Is there any way to clear the historic statistics from the command "gluster volume heal <volume_name> statistics" ?
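For reference, the command in question and its heal-count variant look like this; there is no documented subcommand to clear the counters, and the commonly suggested workaround of restarting the self-heal daemon via 'volume start force' is an assumption, not something confirmed in this thread:

    gluster volume heal <volume_name> statistics
    gluster volume heal <volume_name> statistics heal-count
    # Possible workaround (unverified): restart bricks and self-heal daemon
    gluster volume start <volume_name> force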
2013 Nov 28
1
how to recover an accidentally deleted brick directory?
hi all, I accidentally removed the brick directory of a volume on one node; the replica for this volume is 2. Now the situation is: there is no corresponding glusterfsd process on this node, and 'gluster volume status' shows that the brick is offline, like this: Brick 192.168.64.11:/opt/gluster_data/eccp_glance N/A Y 2513 Brick 192.168.64.12:/opt/gluster_data/eccp_glance
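A commonly used recovery outline for this situation (not from this thread; the volume name and xattr value are placeholders) is to recreate the brick directory, restore the volume-id xattr from a healthy brick, and trigger a full heal:

    # On the healthy node: read the volume-id xattr
    getfattr -e hex -n trusted.glusterfs.volume-id /opt/gluster_data/eccp_glance
    # On the damaged node: recreate the brick and restore the xattr
    mkdir -p /opt/gluster_data/eccp_glance
    setfattr -n trusted.glusterfs.volume-id -v 0x<value-from-above> /opt/gluster_data/eccp_glance
    # Restart the brick process and heal
    gluster volume start <volname> force
    gluster volume heal <volname> full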
2010 Feb 25
1
[PATCH] fix storage problem.
Since the Ruby::Qmf move, the .key() method does not work anymore; this forces the use of .get_attr('key') in order to get the correct value. Signed-off-by: Loiseleur Michel <mloiseleur at linagora.com> --- src/task-omatic/taskomatic.rb | 15 +++++++-------- 1 files changed, 7 insertions(+), 8 deletions(-) diff --git a/src/task-omatic/taskomatic.rb b/src/task-omatic/taskomatic.rb index
2007 Mar 21
4
HELP!! I can't mount my zpool!!
Hi all. One of our servers panicked and now can't mount the zpool anymore! Here is what I get at boot: Mar 21 11:09:17 SERVER142 panic[cpu1]/thread=ffffffff90878200: Mar 21 11:09:17 SERVER142 genunix: [ID 603766 kern.notice] assertion failed: ss->ss_start <= start (0x670000b800 <= 0x6700009000), file: ../../common/fs/zfs/space_map.c, line: 126 Mar 21 11:09:17 SERVER142
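A workaround often cited on zfs-discuss for this class of space_map assertion failure (risky, can mask corruption; an assumption drawn from contemporary lore, not advice from this thread) was to relax the assertion via /etc/system and then retry the import:

    * /etc/system entries (Solaris), then reboot
    set zfs:zfs_recover = 1
    set aok = 1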
2003 Jun 19
4
WinXP can't log on to Samba PDC
I'm following the steps in the unofficial Samba HOWTO. I already joined my WinXP box to the domain but I can't log in from my WinXP box after a restart. There is an error message that says: Windows can't connect to the domain because the domain controller is unable or I'm using Samba-2.2.7a on Red Hat 9.0 with kernel 2.4.20-18.9 this is my smb.conf [global] domain logons = yes
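For Samba 2.2-era domain logons, the usual checklist (assumed here, since the snippet is truncated) includes a machine trust account for the XP box and the XP RequireSignOrSeal registry tweak:

    # On the Samba PDC: machine trust account (note the trailing $)
    /usr/sbin/useradd -d /dev/null -s /bin/false winxpbox$
    smbpasswd -a -m winxpbox
    # On the XP client, set this DWORD value to 0:
    # HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters\requiresignorseal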
2011 Jan 08
1
One shared folder to be HA over CIFS to windows clients
Hi, I'm Emiliano; this is my first mail to the samba mailing list. I have to solve this issue for a company. They need to have a folder, shared over CIFS for Windows/Mac clients, that is always available, even if the server that hosts it hangs or burns. I've looked for a lot of solutions but I cannot find the right one for me. Actually the company has two servers, all running Debian Lenny as Linux
2009 May 28
0
[PATCH server] Use qpid for migration and add more debugging to taskomatic.
This patch uses the qpid migration call which requires the latest libvirt-qpid and libvirt. I also add a bunch of debug logging here which is switchable but I've left it on for now so all users will have this in their logs. Signed-off-by: Ian Main <imain at redhat.com> --- src/task-omatic/taskomatic.rb | 36 ++++++++++++++++++++++++++++-------- 1 files changed, 28 insertions(+), 8
2007 Sep 04
23
I/O freeze after a disk failure
Hi all, yesterday we had a drive failure on an FC-AL JBOD with 14 drives. Suddenly the zpool using that JBOD stopped responding to I/O requests and we got tons of the following messages in /var/adm/messages: Sep 3 15:20:10 fb2 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk at g20000004cfd81b9f (sd52): Sep 3 15:20:10 fb2 SCSI transport failed: reason 'timeout':
2007 Sep 16
3
PLOGI errors
Hello, today we made some tests with failed drives on a zpool. (SNV60, 2xHBA, 4xJBOD connected through 2 Brocade 2800) In the log we found hundreds of the following errors: Sep 16 12:04:23 svrt12 fp: [ID 517869 kern.info] NOTICE: fp(0): PLOGI to 11dca failed state=Timeout, reason=Hardware Error Sep 16 12:04:23 svrt12 fctl: [ID 517869 kern.warning] WARNING: fp(0)::PLOGI to 11dca failed. state=c
2011 Nov 24
5
ActiveRecord::AssociationTypeMismatch
Hi to all, I have this error and I don't understand why. I have three models: Image, Playlist, and PlaylistItem. Everything works fine. The app should also work as an XML REST service. When I make this call I obtain this XML because the playlist doesn't contain images: GET http://0.0.0.0:3000/playlists/7.xml <playlist> <id>7</id>
2007 Sep 18
5
ZFS panic in space_map.c line 125
One of our Solaris 10 update 3 servers panicked today with the following error: Sep 18 00:34:53 m2000ef savecore: [ID 570001 auth.error] reboot after panic: assertion failed: ss != NULL, file: ../../common/fs/zfs/space_map.c, line: 125 The server saved a core file, and the resulting backtrace is listed below: $ mdb unix.0 vmcore.0 > $c vpanic() 0xfffffffffb9b49f3() space_map_remove+0x239()
2007 May 12
3
zfs and jbod-storage
Hi. I'm managing an HDS storage system which is slightly larger than 100 TB and we have used approx. 3/4. We use vxfs. The storage system is attached to a Solaris 9 on SPARC via a fibre switch. The storage is shared via NFS to our webservers. If I were to replace vxfs with zfs I could utilize raidz(2) instead of the built-in hardware RAID controller. Are there any jbod-only storage
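As a sketch of the raidz2 idea (device names are placeholders), a JBOD can be pooled and exported over NFS directly from ZFS:

    # Build a double-parity pool from six JBOD disks
    zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
    # Share it to the webservers over NFS
    zfs set sharenfs=on tank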
2002 Feb 20
2
samba upgrade
I'm trying to upgrade from samba-2.2.1a-4 to samba-2.2.3a but it keeps failing. I've tried numerous ways and I still get the same error. I must be doing something wrong. Thanks # rpm -Uvh samba-2.2.3a-20020206.i386.rpm error: failed dependencies: samba = 2.2.1a is needed by samba-swat-2.2.1a-4 # rpm -qa | grep samba samba-client-2.2.1a-4 samba-2.2.1a-4 samba-swat-2.2.1a-4
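The dependency error arises because samba-swat-2.2.1a-4 pins samba = 2.2.1a; the usual fix is to upgrade all the installed samba packages in one transaction (exact filenames here are assumed):

    rpm -Uvh samba-2.2.3a-20020206.i386.rpm \
             samba-client-2.2.3a-20020206.i386.rpm \
             samba-swat-2.2.3a-20020206.i386.rpm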
2010 Apr 10
3
nfs-alpha feedback
I ran the same dd tests from KnowYourNFSAlpha-1.pdf; performance is inconsistent and the tests cause the server to become unresponsive. My server freezes every time I run the following command: dd if=/dev/zero of=garb bs=256k count=64000 I would also like to mount a path like: /volume/some/random/dir # mount host:/gluster/tmp /mnt/test mount: host:/gluster/tmp failed, reason given by server: No
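On the subdirectory mount: in later GlusterFS releases (an assumption; this may not apply to the nfs-alpha build being tested here), exporting a subdirectory over the built-in NFS server requires opting in explicitly:

    gluster volume set <volname> nfs.export-dirs on
    gluster volume set <volname> nfs.export-dir /some/random/dir
    mount host:/<volname>/some/random/dir /mnt/test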