search for: 174m

Displaying 6 results from an estimated 6 matches for "174m".

2014 Mar 19
3
Disk usage incorrectly reported by du
...n (verified with fsck.ext4). No sparse files. After further investigation I think that the problem is most likely on the source machine. Here is the du output for one directory exhibiting the problem:

    # du -h | grep \/51
    201M ./51/msg/8
    567M ./51/msg/9
    237M ./51/msg/6
    279M ./51/msg/0
    174M ./51/msg/10
    273M ./51/msg/2
    341M ./51/msg/7
    408M ./51/msg/4
    222M ./51/msg/11
    174M ./51/msg/5
    238M ./51/msg/1
    271M ./51/msg/3
    3.3G ./51/msg
    3.3G ./51

After changing into the directory and running du again I get different numbers:

    # cd 51
    # du -h
    306M ./msg/8
    676M ./msg/9...
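For reference, a gap like this is usually checked against sparse files and hard links first (the poster notes sparse files were already excluded). A minimal sketch, assuming GNU du and find, using the thread's path:

    # Block usage vs. apparent (byte) size; a large gap suggests sparse files
    du -sh ./51/msg
    du -sh --apparent-size ./51/msg
    # Files with multiple hard links, which du counts only once per traversal
    find ./51/msg -type f -links +1 | wc -l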
2010 Feb 08
5
zfs send/receive : panic and reboot
<copied from opensolaris-discuss as this probably belongs here.> I kept on trying to migrate my pool with children (see previous threads) and had the (bad) idea to try the -d option on the receive part. The system reboots immediately. Here is the log in /var/adm/messages:

    Feb 8 16:07:09 amber unix: [ID 836849 kern.notice]
    Feb 8 16:07:09 amber ^Mpanic[cpu1]/thread=ffffff014ba86e40:
    Feb 8
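For context, migrating a pool with child filesystems is typically done with a send/receive pipeline like the sketch below; -d on the receive side (the step reported to trigger the panic) tells zfs receive to derive the target dataset names from the sent paths. Pool and snapshot names here are placeholders:

    # Recursive snapshot, then replicate the whole hierarchy
    zfs snapshot -r tank@migrate
    zfs send -R tank@migrate | zfs receive -d backup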
2008 Jan 10
2
NCQ
    ...t1d0       -      -    457      0  47.3M      0
    c1t1d0        -      -    457      0  47.4M      0
    c0t6d0        -      -    456      0  47.4M      0
    c0t4d0        -      -    458      0  47.4M      0
    c1t3d0        -      -    463      0  47.3M      0
    raidz1     518G   970G  1.40K      0   174M      0
    c1t4d0        -      -    434      0  44.7M      0
    c1t6d0        -      -    433      0  45.3M      0
    c0t3d0        -      -    445      0  45.3M      0
    c1t5d0        -      -    427      0  44.4M      0
    c0t5d0        -      -    424      0  44.3M      0
    ---------- ----- ----- -...
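Per-device figures like these come from zpool iostat's verbose mode; a sketch, with the pool name and sampling interval as placeholders:

    # Per-vdev IOPS and bandwidth, sampled every 5 seconds
    zpool iostat -v tank 5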
2002 Jul 26
1
inflate returned -3
Good day, all. I'm trying to transfer a 174M file to a system with 1.2G free. Other files in the tree come over just fine, but this transfer dies partway through:

    rsync -avvz -e ssh --partial --progress server.with.the.file:/server/directory /local/directory
    opening connection using ssh server.with.the.file rsync --server --sender -vvlog...
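"inflate returned -3" is zlib's Z_DATA_ERROR, raised while decompressing the stream that rsync's -z option produces. A common first diagnostic (an assumption, not something suggested in the excerpt) is to retry the same transfer without compression:

    # Same transfer minus -z, to rule out the compressed stream
    rsync -avv -e ssh --partial --progress server.with.the.file:/server/directory /local/directory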
2011 Jul 07
4
Question on memory usage, garbage collector, 'top' stats on linux
    ...uby
    6218 webappus  20   0  206m   82m 3544 R 98.9  2.1   0:48.81 ruby
    6208 webappus  20   0  179m   59m 4788 S  0.0  1.5   0:07.50 ruby
    6295 postgres  20   0  102m   32m  28m S  0.0  0.8  17:54.62 postgres
    1034 postgres  20   0 98.7m   26m  25m S  0.0  0.7   0:23.67 postgres
     843 mysql     20   0  174m   26m 6648 S  0.0  0.7   0:31.82 mysqld
    6222 postgres  20   0  107m   19m  11m S  0.0  0.5   0:00.61 postgres
    6158 root      20   0 42668  8684 2344 S  0.0  0.2   0:02.48 ruby
     907 postgres  20   0 98.6m  6680 5528 S  0.0  0.2   0:13.14 postgres
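In top's columns, VIRT is the total mapped address space and RES the resident portion actually in RAM. The same snapshot can be taken non-interactively with procps ps, sorted by resident size (a sketch; the column choice is an assumption):

    # PID, owner, virtual and resident size (KiB), command; largest RSS first
    ps -eo pid,user,vsz,rss,comm --sort=-rss | head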
2012 Jul 13
1
[Bug 9041] New: Feature request: Better handling of btrfs based sources
    ...ot root 46897152 Jul 13 17:12 rsync/old/NoTouching.mp3
    # Disk space usage is increased accordingly:
    > df -hT /test/*/
    Filesystem                Type   Size  Used Avail Use% Mounted on
    /dev/mapper/vg-test_btrfs btrfs  1.0G  164M  219M  43% /test/btrfs
    /dev/mapper/vg-test_rsync btrfs  1.0G  209M  174M  55% /test/rsync

Note that I am not asking for rsync to duplicate the subvolume or snapshot functionality. Just recognize that the same file exists in multiple locations, kind of like a hard link but not. It seems to me the quickest way to accomplish this would be to add an option that works kind...
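The "hard link but not" behaviour being asked for matches what btrfs reflinks already provide for manual copies: the new file shares data extents with the original until either side is modified. A sketch using GNU coreutils, with paths adapted from the excerpt purely for illustration:

    # CoW copy on btrfs; df barely moves because the extents are shared
    cp --reflink=always rsync/old/NoTouching.mp3 rsync/new/NoTouching.mp3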