
Displaying results from an estimated 15 matches for "126g".

2023 Mar 15
1
How to configure?
...-level: CRITICAL
features.scrub-freq: monthly
cluster.data-self-heal: on
cluster.brick-multiplex: on
cluster.daemon-log-level: ERROR
-8<--

htop reports that memory usage is up to 143G; there are 602 tasks and 5232 threads (~20 running) on clustor00, 117G/49 tasks/1565 threads on clustor01, and 126G/45 tasks/1574 threads on clustor02. I see quite a lot (284!) of glfsheal processes running on clustor00 (a "gluster v heal cluster_data info summary" has been running on clustor02 since yesterday, still no output). Shouldn't it be just one per brick? Diego On 15/03/2023 08:30, Strahil Niko...
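Both checks described in this excerpt can be reproduced from a shell on one of the nodes; a minimal sketch, assuming the volume name cluster_data from the quoted message:

  # Per-brick summary of entries pending heal (the command that was hanging on clustor02)
  gluster volume heal cluster_data info summary
  # Count the glfsheal helper processes currently alive
  pgrep -c -f glfsheal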
2009 Jan 16
2
Problem setting quotas on a zfs pool
...rved space
[root@osprey /]# zfs list -o name,used,available
NAME               USED   AVAIL
target             1.32T  206G
target/u02         72.2G  148G
target/u02@1       12.0G  -
target/u03         61.1G  159G
target/u03@1       12.1G  -
target/u04         126G   93.6G
target/u04@1       14.5G  -
target/u05         1.06T  206G
target/u05@1       671G   -
target/zoneroot    3.70G  4.30G
target/zoneroot@1  12.9M  -
zfspool            553G   1018G
zfspool/u02        60.0G  160G
zfspool/u02@1      0      -
zfspool...
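The "...rved space" fragment at the start of the excerpt is most likely the tail of ZFS's "size is less than current used or reserved space" error, raised when a quota is set below a dataset's current usage. A minimal sketch against one of the datasets listed above (the 150G figure is illustrative, not from the thread):

  # target/u04 currently uses 126G, so a 150G cap succeeds;
  # any value below USED plus reservations fails with the error above
  zfs set quota=150G target/u04
  # Inspect the values that constrain the quota
  zfs get used,quota,reservation target/u04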
2004 Jul 14
1
Rsync Problems, Possible Addressed Bug?
...xr-x 3 root root 4096 Jun 27 04:22 weekly.2
drwxr-xr-x 3 root root 4096 Jun 20 04:22 weekly.3

Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/hda3   5.7G  547M  4.8G   10%   /
none        30M   0     30M    0%    /dev/shm
/dev/hdb2   126G  40G   79G    34%   /mnt/storage

-= Beginning backups for db
ssh: connect to host db.music.uga.edu port 22: No route to host
rsync: connection unexpectedly closed (0 bytes read so far)
rsync error: error in rsync protocol data stream (code 12) at io.c(150)
WARNING: there seems to have been an rsync...
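The rsync failure here is downstream of the SSH "No route to host" error rather than a protocol problem. One hedged way to make such a backup run fail gracefully (the host name comes from the error output; the paths and timeout are hypothetical):

  # Probe SSH first so an unreachable host is skipped cleanly
  # instead of rsync dying with code 12
  if ssh -o ConnectTimeout=10 db.music.uga.edu true; then
      rsync -a db.music.uga.edu:/var/backups/db/ /mnt/storage/db/
  else
      echo "db.music.uga.edu unreachable, skipping backup" >&2
  fi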
2017 Aug 16
1
[ovirt-users] Recovering from a multi-node failure
On Sun, Aug 6, 2017 at 4:42 AM, Jim Kusznir <jim at palousetech.com> wrote:
> Well, after a very stressful weekend, I think I have things largely
> working. Turns out that most of the above issues were caused by the Linux
> permissions of the exports for all three volumes (they had been reset to
> 600; setting them to 774 or 770 fixed many of the issues). Of course, I...
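The fix described here is plain filesystem permissions on the volume exports; a minimal sketch (the export path /gluster/brick1/engine is hypothetical; 770 is one of the two modes the poster reports working):

  # Mode 600 locks out the group the services run as;
  # 770 restores group read/write/traverse on the export
  chmod 770 /gluster/brick1/engine
  ls -ld /gluster/brick1/engine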
2023 Mar 16
1
How to configure?
> ...a-self-heal: on
> cluster.brick-multiplex: on
> cluster.daemon-log-level: ERROR
> -8<--
>
> htop reports that memory usage is up to 143G, there are 602 tasks and
> 5232 threads (~20 running) on clustor00, 117G/49 tasks/1565 threads on
> clustor01 and 126G/45 tasks/1574 threads on clustor02.
> I see quite a lot (284!) of glfsheal processes running on clustor00 (a
> "gluster v heal cluster_data info summary" is running on clustor02 since
> yesterday, still no output). Shouldn't it be just one per brick?...
2023 Mar 16
1
How to configure?
> ...er.data-self-heal: on
> cluster.brick-multiplex: on
> cluster.daemon-log-level: ERROR
> -8<--
>
> htop reports that memory usage is up to 143G, there are 602 tasks and
> 5232 threads (~20 running) on clustor00, 117G/49 tasks/1565 threads on
> clustor01 and 126G/45 tasks/1574 threads on clustor02.
> I see quite a lot (284!) of glfsheal processes running on clustor00 (a
> "gluster v heal cluster_data info summary" is running on clustor02 since
> yesterday, still no output). Shouldn't it be just one per brick?...
2023 Mar 15
1
How to configure?
Do you use brick multiplexing? Best Regards, Strahil Nikolov. On Tue, Mar 14, 2023 at 16:44, Diego Zuccato <diego.zuccato at unibo.it> wrote: Hello all. Our Gluster 9.6 cluster is showing increasing problems. Currently it's composed of 3 servers (2x Intel Xeon 4210 [20 cores dual thread, 40 threads total], 192GB RAM, 30x HGST HUH721212AL5200 [12TB]), configured in replica 3
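Brick multiplexing, which this message asks about, is a cluster-wide option queried and set using the special volume name "all"; a short sketch:

  # Show whether bricks are being multiplexed into shared processes
  gluster volume get all cluster.brick-multiplex
  # Disable it cluster-wide (the config dump earlier in the thread shows it on)
  gluster volume set all cluster.brick-multiplex off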
2023 Mar 21
1
How to configure?
> > ...cluster.daemon-log-level: ERROR
> > -8<--
> >
> > htop reports that memory usage is up to 143G, there are 602 tasks and
> > 5232 threads (~20 running) on clustor00, 117G/49 tasks/1565 threads on
> > clustor01 and 126G/45 tasks/1574 threads on clustor02.
> > I see quite a lot (284!) of glfsheal processes running on clustor00 (a
> > "gluster v heal cluster_data info summary" is running on clustor02 since
> > yesterday, still no output)....
2023 Mar 21
1
How to configure?
> > > ...htop reports that memory usage is up to 143G, there are 602 tasks and
> > > 5232 threads (~20 running) on clustor00, 117G/49 tasks/1565 threads on
> > > clustor01 and 126G/45 tasks/1574 threads on clustor02.
> > > I see quite a lot (284!) of glfsheal processes running on clustor00 (a
> > > "gluster v heal cluster_data info summary" is running on clustor02
> > > sinc...
2023 Mar 21
1
How to configure?
> > > ...htop reports that memory usage is up to 143G, there are 602 tasks and
> > > 5232 threads (~20 running) on clustor00, 117G/49 tasks/1565 threads on
> > > clustor01 and 126G/45 tasks/1574 threads on clustor02.
> > > I see quite a lot (284!) of glfsheal processes running on clustor00 (a
> > > "gluster v heal cluster_data info summary" is running on clustor02
> > > since...
2023 Mar 24
1
How to configure?
> > > > ...reports that memory usage is up to 143G, there are 602 tasks and
> > > > 5232 threads (~20 running) on clustor00, 117G/49 tasks/1565 threads on
> > > > clustor01 and 126G/45 tasks/1574 threads on clustor02.
> > > > I see quite a lot (284!) of glfsheal processes running on clustor00 (a
> > > > "gluster v heal cluster_data info summary" is runni...
2023 Mar 24
1
How to configure?
> > > > ...p reports that memory usage is up to 143G, there are 602 tasks and
> > > > 5232 threads (~20 running) on clustor00, 117G/49 tasks/1565 threads on
> > > > clustor01 and 126G/45 tasks/1574 threads on clustor02.
> > > > I see quite a lot (284!) of glfsheal processes running on clustor00 (a
> > > > "gluster v heal cluster_data info summary" is running...
2023 Mar 24
1
How to configure?
> > > > > ...e 602 tasks and
> > > > > 5232 threads (~20 running) on clustor00, 117G/49 tasks/1565 threads on
> > > > > clustor01 and 126G/45 tasks/1574 threads on clustor02.
> > > > > I see quite a lot (284!) of glfsheal processes running on clustor00 (a
> > > > > "gluster...
2023 Apr 23
1
How to configure?
> > > > > ...5232 threads (~20 running) on clustor00, 117G/49 tasks/1565 threads on
> > > > > clustor01 and 126G/45 tasks/1574 threads on clustor02.
> > > > > I see quite a lot (284!) of glfsheal processes running on clustor00 (a...