Displaying 7 results from an estimated 7 matches for "341m".
2009 Dec 15
7
ZFS Dedupe reporting incorrect savings
bucket             allocated                      referenced
refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
...53 3.31M 3.31M 3.31M
    16        1     64K     64K     64K       16      1M      1M      1M
   512        1     64K     64K     64K      854   53.4M   53.4M   53.4M
    1K        1     64K     64K     64K    1.08K   69.1M   69.1M   69.1M
    4K        1     64K     64K     64K    5.33K    341M    341M    341M
 Total     304K   19.0G   19.0G   19.0G     345K   21.5G   21.5G   21.5G
dedup = 1.13, compress = 1.00, copies = 1.00, dedup * compress / copies = 1.13
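For reference, the 1.13 figure follows directly from the Total line above (referenced size divided by allocated size), so the table and the reported ratio are at least internally consistent:

    dedup   = referenced DSIZE / allocated DSIZE = 21.5G / 19.0G ≈ 1.13
    overall = dedup * compress / copies = 1.13 * 1.00 / 1.00 = 1.13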
Am I missing something?
Your inputs are much appreciated.
Thanks,
Giri
2014 Mar 19
3
Disk usage incorrectly reported by du
...files.
After further investigation I think that the problem is most likely on the
source machine.
Here is the du output for one directory exhibiting the problem:
#du -h |grep \/51
201M ./51/msg/8
567M ./51/msg/9
237M ./51/msg/6
279M ./51/msg/0
174M ./51/msg/10
273M ./51/msg/2
341M ./51/msg/7
408M ./51/msg/4
222M ./51/msg/11
174M ./51/msg/5
238M ./51/msg/1
271M ./51/msg/3
3.3G ./51/msg
3.3G ./51
After changing into the directory and running du again, I get different numbers:
#cd 51
du -h
306M ./msg/8
676M ./msg/9
351M ./msg/6
338M ./msg/0
347M...
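A quick way to narrow down this kind of du discrepancy is to compare on-disk usage with apparent size and to check for hard links; a rough sketch, assuming GNU du and find on the source machine (the ./51/msg path is taken from the listing above):

    # on-disk usage versus apparent (logical) size
    du -sh ./51/msg
    du -sh --apparent-size ./51/msg

    # files with more than one hard link, which different tools may count differently
    find ./51/msg -type f -links +1 -ls | head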
2018 May 08
1
mount failing client to gluster cluster.
...8G 0% /sys/fs/cgroup
/dev/sda1                                      969M  206M  713M  23% /boot
/dev/mapper/centos-tmp                         3.9G   33M  3.9G   1% /tmp
/dev/mapper/centos-home                         50G  4.3G   46G   9% /home
/dev/mapper/centos-var                          20G  341M   20G   2% /var
/dev/mapper/centos-data1                       120G   36M  120G   1% /data1
/dev/mapper/centos00-var_lib                   9.4G  179M  9.2G   2% /var/lib
/dev/mapper/vg--gluster--prod1-gluster--prod1  932G  233G  699G  25% /bricks/brick1
tmpfs...
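For comparison, the usual form of a native GlusterFS client mount is shown below; the server and volume names here are placeholders, not values from the post:

    # manual mount of a gluster volume via the native FUSE client
    mount -t glusterfs gluster-server1:/gv-prod1 /mnt/gluster

    # equivalent fstab entry
    # gluster-server1:/gv-prod1  /mnt/gluster  glusterfs  defaults,_netdev  0 0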
2006 Aug 28
1
Can't update kernel, says not enough space
.../ filesystem
But I have much more than 6M:
[root@mail /]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             145M  116M   21M  85% /
/dev/sda1             168M  7.0M  152M   5% /boot
/dev/sda9             8.2G  3.9G  3.9G  50% /home
/dev/sda8             2.0G  341M  1.5G  19% /home/root
none                  506M     0  506M   0% /dev/shm
/dev/sda7              92M  4.1M   83M   5% /tmp
/dev/sda5             4.2G  1.5G  2.5G  37% /usr
/dev/sda3             981M  743M  189M  80% /var
/dev/shm               53M     0   53M   0% /var/amavis/tmp
What to do?
Th...
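One way to see what is actually consuming the nearly full / and /var filesystems before retrying the update is a per-directory breakdown; a minimal sketch, assuming a yum-based system (paths are standard defaults, not taken from the post):

    # per-directory usage on the root filesystem only, in KB, largest last
    du -xsk /* 2>/dev/null | sort -n

    # cached packages under /var can usually be cleared safely
    du -sh /var/cache/yum
    yum clean packages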
2010 Oct 20
0
Increased memory usage between 4.8 and 5.5
...inux
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 7799 nobody    16   0  348m  101m  43m S  0.0  2.6  1:10.91 httpd
20285 nobody    15   0  340m   94m  43m S  0.0  2.4  1:11.36 httpd
31734 nobody    15   0  340m   91m  41m S  0.0  2.3  1:13.52 httpd
 8904 nobody    15   0  341m   89m  39m S  0.0  2.3  0:35.28 httpd
 7353 nobody    15   0  336m   87m  42m S  0.0  2.2  1:21.17 httpd
26097 nobody    15   0  333m   87m  43m S  0.0  2.2  1:28.84 httpd
20765 nobody    15   0  335m   86m  42m S  0.0  2.2  0:48.50 httpd
23299 nobody    15   0  334m   86m  42m S  0.0  2.2  1:13.35...
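To turn a listing like this into a single number that can be compared between the 4.8 and 5.5 machines, the resident sizes of all httpd workers can be summed; a small sketch, assuming Linux procps ps:

    # total resident memory of all httpd processes, reported in MB
    ps -C httpd -o rss= | awk '{sum += $1} END {printf "%.0f MB\n", sum/1024}'

Note that this double-counts pages shared between the workers (the SHR column), so it overstates the real aggregate footprint.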
2005 Sep 24
0
[Bug 3116] New: large tar files: 1 gig size: retransmitted: rsync_rsh
...01:01 host1-backup-etc.zip
-rwx------ 1 backup backup 581M Sep 23 01:06 host1-backup-home-a2e.zip
-rwx------ 1 backup backup 155M Sep 23 01:07 host1-backup-home-f2j.zip
-rwx------ 1 backup backup 423M Sep 23 01:09 host1-backup-home-k2o.zip
-rwx------ 1 backup backup 341M Sep 23 01:10 host1-backup-home-p2t.zip
-rwx------ 1 backup backup 374M Sep 23 01:12 host1-backup-home-u2z.zip
-rwx------ 1 backup backup 13M Sep 23 01:01 host1-backup-mysql.zip
-rwx------ 1 backup backup 264M Sep 23 01:02 host1-backup-staff.zip
-rwx------ 1 backup backup...
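For archives of this size, the transfer described in the bug report would typically be invoked along these lines; the host and paths are placeholders, not taken from the report:

    # push the backup archives over ssh, keeping partial files so an
    # interrupted transfer of a 1 GB file can resume rather than restart
    rsync -av --partial --progress -e ssh /backup/ backuphost:/backup/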
2006 Aug 24
5
unaccounted for daily growth in ZFS disk space usage
We finally flipped the switch on one of our ZFS-based servers, with
approximately 1TB of 2.8TB (3 stripes of 950GB or so, each of which is
a RAID5 volume on the Adaptec card). We have snapshots every 4 hours
for the first few days. If you add up the snapshot references it
appears somewhat high versus daily use (mostly mailboxes, spam, etc.
changing), but say an aggregate of no more than 400+MB a
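A per-snapshot breakdown makes it easier to see where that daily growth is being charged; a minimal sketch (the pool/filesystem name is a placeholder):

    # space charged to each snapshot versus the data it references
    zfs list -r -t snapshot -o name,used,referenced pool/mail

    # space used by the live filesystems themselves
    zfs list -r -o name,used,available,referenced pool/mail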