search for: 143g

Displaying 18 results from an estimated 18 matches for "143g".

2019 Apr 20
3
Do devtmpfs and tmpfs use underlying hard disk storage or physical memory (RAM)?
Hi, I am running the below command on CentOS Linux release 7.6.1810 (Core) # df -hT --total Filesystem Type Size Used Avail Use% Mounted on /dev/xvda1 xfs 150G 8.0G 143G 6% / devtmpfs devtmpfs 7.8G 0 7.8G 0% /dev tmpfs tmpfs 7.8G 0 7.8G 0% /dev/shm tmpfs tmpfs 7.8G 817M 7.0G 11% /run tmpfs tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup tmpfs tmpfs 1.6G 0 1.6G 0% /run/user/995 tmpfs...
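For reference, devtmpfs and tmpfs are memory-backed: they take pages from RAM (and swap), not from the underlying disk, which is why they show up with their own sizes next to /dev/xvda1 above. A minimal shell sketch of how one might confirm this on such a host; nothing here is specific to the poster's system:

# Show only the memory-backed filesystems df reports
df -hT -t tmpfs -t devtmpfs

# Compare with installed RAM; a tmpfs defaults to half of RAM (7.8G here suggests ~16G RAM)
free -h

# No block device backs these mounts, only the virtual filesystem type
grep -E '^(tmpfs|devtmpfs) ' /proc/mounts
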
2007 Apr 14
3
zfs snaps and removing some files
...taking snaps so I have it since last year. I removed month 11 already because I did not have any space left. [11:32:52] root@chrysek: /d/d2 > zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 271G 2.40G 24.5K /mypool mypool/d 271G 2.40G 143G /d/d2 mypool/d@month_10 3.72G - 123G - mypool/d@month_12 22.3G - 156G - mypool/d@month_01 23.3G - 161G - mypool/d@month_02 16.1G - 172G - mypool/d@month_03 13.8G - 168G - mypool/d@month_04 15.7G - 168G - mypo...
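As background for this thread: space that is referenced only by old snapshots is not freed until those snapshots are destroyed. A minimal sketch using the dataset and snapshot names from the listing above (zfs destroy is irreversible, so this is purely illustrative):

# Per-snapshot usage: USED is the space unique to each snapshot
zfs list -r -t snapshot -o name,used,refer mypool/d

# Removing the oldest snapshot releases the blocks only it still references
zfs destroy mypool/d@month_10
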
2010 Mar 05
2
ZFS replication send/receive errors out
....895s It ran about an hour. A full backup that succeeds takes about 6 hours (the backup pools are external USB drives, not so fast). bash-3.2$ zpool list NAME SIZE USED AVAIL CAP HEALTH ALTROOT bup-ruin 928G 63.7G 864G 6% ONLINE /backups/bup-ruin rpool 149G 6.41G 143G 4% ONLINE - zp1 1.09T 637G 479G 57% ONLINE - As you can see, it was nowhere near finished. But nothing was full, nothing crashed. Anybody got a spare clue? It shouldn't, as I understand it, be possible for a full replication stream going into a newly-created filesy...
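For context, the full replication the poster describes is normally a recursive snapshot piped into receive. A minimal sketch using the pool names from the zpool list above; the snapshot name @backup and the target dataset layout are assumptions, not the poster's actual script:

# Recursive snapshot of the source pool
zfs snapshot -r zp1@backup

# Send the complete replication stream into the freshly created backup pool
zfs send -R zp1@backup | zfs receive -Fu bup-ruin/zp1
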
2023 Mar 15
1
How to configure?
...ds: 0 cluster.lookup-unhashed: on config.client-threads: 1 cluster.use-anonymous-inode: off diagnostics.brick-sys-log-level: CRITICAL features.scrub-freq: monthly cluster.data-self-heal: on cluster.brick-multiplex: on cluster.daemon-log-level: ERROR -8<-- htop reports that memory usage is up to 143G, there are 602 tasks and 5232 threads (~20 running) on clustor00, 117G/49 tasks/1565 threads on clustor01 and 126G/45 tasks/1574 threads on clustor02. I see quite a lot (284!) of glfsheal processes running on clustor00 (a "gluster v heal cluster_data info summary" is running on clustor...
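Two quick checks that might help correlate the memory figures with the heal activity described above; a minimal sketch, assuming only the volume name cluster_data that appears in the excerpt:

# Heal-info queries are served by glfsheal helper processes; count them
pgrep -c glfsheal

# Pending-heal summary for the volume (the command the poster is already running)
gluster volume heal cluster_data info summary

# Resident memory of the brick processes on this node
ps -C glusterfsd -o rss,cmd
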
2007 Apr 20
0
problem mounting one of the zfs file systems during boot
...ng that I do with it? As I said, manual mount works just fine, but during boot it complains about mounting it. [11:35:08] root@chrysek: /root > zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 272G 2.12G 24.5K /mypool mypool/d 271G 2.12G 143G /d/d2 mypool/d@2006_month_10 3.72G - 123G - mypool/d@2006_month_12 22.3G - 156G - mypool/d@2007_month_01 23.3G - 161G - mypool/d@2007_month_02 16.1G - 172G - mypool/d@2007_month_03 13.8G - 168G - mypool/d@2007_month_04 15.7G -...
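When a dataset mounts fine by hand but fails during boot, the usual suspects are its mount-related properties or something sitting in the mount point before the automatic mounts run. A minimal sketch of what one might check, using the dataset name from the listing; it is not a diagnosis of this particular failure:

# How the dataset is supposed to be mounted
zfs get mountpoint,canmount,mounted mypool/d

# The mount point must be empty (or ZFS will complain) when mounts run at boot
ls -A /d/d2

# Re-run the automatic mounts by hand to reproduce the boot-time error
zfs mount -a
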
2023 Mar 15
1
How to configure?
...ds: 0 cluster.lookup-unhashed: on config.client-threads: 1 cluster.use-anonymous-inode: off diagnostics.brick-sys-log-level: CRITICAL features.scrub-freq: monthly cluster.data-self-heal: on cluster.brick-multiplex: on cluster.daemon-log-level: ERROR -8<-- htop reports that memory usage is up to 143G, there are 602 tasks and 5232 threads (~20 running) on clustor00, 117G/49 tasks/1565 threads on clustor01 and 126G/45 tasks/1574 threads on clustor02. I see quite a lot (284!) of glfsheal processes running on clustor00 (a "gluster v heal cluster_data info summary" is running on clustor...
2023 Mar 16
1
How to configure?
...se-anonymous-inode: off > diagnostics.brick-sys-log-level: CRITICAL > features.scrub-freq: monthly > cluster.data-self-heal: on > cluster.brick-multiplex: on > cluster.daemon-log-level: ERROR > -8<-- > > htop reports that memory usage is up to 143G, there are 602 tasks and > 5232 threads (~20 running) on clustor00, 117G/49 tasks/1565 threads on > clustor01 and 126G/45 tasks/1574 threads on clustor02. > I see quite a lot (284!) of glfsheal processes running on clustor00 (a > "gluster v heal cluster_data info su...
2023 Mar 16
1
How to configure?
...uster.use-anonymous-inode: off > diagnostics.brick-sys-log-level: CRITICAL > features.scrub-freq: monthly > cluster.data-self-heal: on > cluster.brick-multiplex: on > cluster.daemon-log-level: ERROR > -8<-- > > htop reports that memory usage is up to 143G, there are 602 tasks and > 5232 threads (~20 running) on clustor00, 117G/49 tasks/1565 threads on > clustor01 and 126G/45 tasks/1574 threads on clustor02. > I see quite a lot (284!) of glfsheal processes running on clustor00 (a > "gluster v heal cluster_data info summar...
2023 Mar 15
1
How to configure?
Do you use brick multiplexing? Best Regards, Strahil Nikolov. On Tue, Mar 14, 2023 at 16:44, Diego Zuccato <diego.zuccato at unibo.it> wrote: Hello all. Our Gluster 9.6 cluster is showing increasing problems. Currently it's composed of 3 servers (2x Intel Xeon 4210 [20 cores dual thread, total 40 threads], 192GB RAM, 30x HGST HUH721212AL5200 [12TB]), configured in replica 3
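The volume options quoted elsewhere in this thread already show cluster.brick-multiplex: on. Brick multiplexing is a cluster-wide setting, so a rough sketch of checking and changing it might look like the following; syntax may vary slightly between Gluster releases, and bricks only pick up a change after a restart:

# Show the current global value
gluster volume get all cluster.brick-multiplex

# Turn it off cluster-wide
gluster volume set all cluster.brick-multiplex off
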
2007 Jun 17
18
6 disk raidz2 or 3 stripe 2 way mirror
I'm playing around with ZFS and want to figure out the best use of my 6x 300GB SATA drives. The purpose of the drives is to store all of my data at home (video, photos, music, etc). I'm debating between: 6x 300GB disks in a single raidz2 pool --or-- 2x (3x 300GB disks in a pool) mirrored. I've read up a lot on ZFS, but I can't really figure out which is
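For comparison, the two layouts being weighed would be created roughly as follows; the c1tXd0 device names are placeholders, not the poster's disks:

# Option 1: one 6-disk raidz2 vdev (about 4 disks of usable space, survives any 2 failures)
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

# Option 2: three striped 2-way mirrors (about 3 disks of usable space, 1 failure per mirror)
zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0
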
2023 Mar 21
1
How to configure?
...l: CRITICAL > > features.scrub-freq: monthly > > cluster.data-self-heal: on > > cluster.brick-multiplex: on > > cluster.daemon-log-level: ERROR > > -8<-- > > > > htop reports that memory usage is up to 143G, there are 602 > tasks and > > 5232 threads (~20 running) on clustor00, 117G/49 tasks/1565 > threads on > > clustor01 and 126G/45 tasks/1574 threads on clustor02. > > I see quite a lot (284!) of glfsheal processes running on > clustor00 (a...
2023 Mar 21
1
How to configure?
...> > > cluster.data-self-heal: on > > > cluster.brick-multiplex: on > > > cluster.daemon-log-level: ERROR > > > -8<-- > > > > > > htop reports that memory usage is up to 143G, there are 602 > > tasks and > > > 5232 threads (~20 running) on clustor00, 117G/49 tasks/1565 > > threads on > > > clustor01 and 126G/45 tasks/1574 threads on clustor02. > > > I see quite a lot (284!) of...
2023 Mar 21
1
How to configure?
...> > > cluster.data-self-heal: on > > > cluster.brick-multiplex: on > > > cluster.daemon-log-level: ERROR > > > -8<-- > > > > > > htop reports that memory usage is up to 143G, there are 602 > > tasks and > > > 5232 threads (~20 running) on clustor00, 117G/49 tasks/1565 > > threads on > > > clustor01 and 126G/45 tasks/1574 threads on clustor02. > > > I see quite a lot (284!) of...
2023 Mar 24
1
How to configure?
...on > > > > cluster.brick-multiplex: on > > > > cluster.daemon-log-level: ERROR > > > > -8<-- > > > > > > > > htop reports that memory usage is up to 143G, > there are 602 > > > tasks and > > > > 5232 threads (~20 running) on clustor00, 117G/49 > tasks/1565 > > > threads on > > > > clustor01 and 126G/45 tasks/1574 threads on >...
2006 Aug 24
5
unaccounted for daily growth in ZFS disk space usage
We finally flipped the switch on one of our ZFS-based servers, with approximately 1TB of 2.8TB (3 stripes of 950GB or so, each of which is a RAID5 volume on the Adaptec card). We have snapshots every 4 hours for the first few days. If you add up the snapshot references it appears somewhat high versus daily use (mostly mail boxes, spam, etc. changing), but say an aggregate of no more than 400+MB a
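One way to see where the daily growth ends up is to break usage down per snapshot; a minimal sketch, with tank/mail standing in for the actual dataset name:

# USED is the space unique to each snapshot; REFER is what it still references
zfs list -r -t snapshot -o name,used,refer tank/mail

# On newer ZFS releases, a per-dataset breakdown is also available
zfs get usedbysnapshots,usedbydataset,usedbychildren tank/mail
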
2023 Mar 24
1
How to configure?
...on > > > > cluster.brick-multiplex: on > > > > cluster.daemon-log-level: ERROR > > > > -8<-- > > > > > > > > htop reports that memory usage is up to 143G, > there are 602 > > > tasks and > > > > 5232 threads (~20 running) on clustor00, 117G/49 > tasks/1565 > > > threads on > > > > clustor01 and 126G/45 tasks/1574 threads on > clu...
2023 Mar 24
1
How to configure?
... cluster.brick-multiplex: on > > > > > cluster.daemon-log-level: ERROR > > > > > -8<-- > > > > > > > > > > htop reports that memory usage is up to 143G, > > there are 602 > > > > tasks and > > > > > 5232 threads (~20 running) on clustor00, > 117G/49 > > tasks/1565 > > > > threads on > > > >...
2023 Apr 23
1
How to configure?
...> > > > cluster.daemon-log-level: ERROR > > > > > > -8<-- > > > > > > > > > > > > htop reports that memory usage is > up to 143G, > > > there are 602 > > > > > tasks and > > > > > > 5232 threads (~20 running) on > clustor00, > > 117G/49 > > > tasks/1565 > > >...