search for: 316k

Displaying 6 results from an estimated 6 matches for "316k".

2009 Sep 09
4
waiting IOs...
...ai hiq siq| read  writ| recv  send|  in   out | int   csw
 0   0  88  12   0   0| 413k   98k|   0     0 |   0     0 | 188   132
 0   1  46  53   0   0| 716k   48k|  19k  420k|   0     0 |1345   476
 0   1  49  50   0   1| 492k   32k|  12k  181k|   0     0 |1269   482
 0   1  63  37   0   0| 316k  159k|  58k  278k|   0     0 |1789  1562
 0   0  74  26   0   0|  84k  512k|1937B 6680B|   0     0 |1200   106
 0   1  44  55   0   1| 612k   80k|  14k  221k|   0     0 |1378   538
 1   1  52  47   0   0| 628k    0 |  17k  318k|   0     0 |1327   520
 0   1  50  49   0   0| 484k   60k|  14k...
2018 Mar 05
0
[Bug 13317] rsync returns success when target filesystem is full
...ke no error is returned and result is a sparse file. I think a sync() would be required otherwise the file is truncated on close to meet the quota.
[postgres at hades ~]$ df -h arch
Filesystem                Size  Used  Avail  Capacity  Mounted on
hydra/home/postgres/arch  1.0G  1.0G  316K   100%      /usr/home/postgres/arch
[postgres at hades ~]$ rsync -av --inplace 000000010000005E00000017 arch/000000010000005E00000017
sending incremental file list
000000010000005E00000017
sent 67,125,370 bytes  received 35 bytes  8,950,054.00 bytes/sec
total size is 67,108,864  speedup is 1.00
[po...
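The report above argues that the error only shows up when the data is forced to stable storage: on a quota-limited ZFS dataset the write() calls can all succeed and the shortfall surfaces at fsync()/close() time. Below is a minimal C sketch of that idea (my illustration, not rsync's actual code; the output path and sizes are arbitrary), showing why a writer that checks only write() can report success on a full target.

    /*
     * Minimal sketch: write a file and check every step that can report
     * ENOSPC/EDQUOT.  On filesystems that account space lazily, write()
     * may succeed and the error may first appear from fsync() or close().
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        const char *path = argc > 1 ? argv[1] : "out.dat";
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        char buf[65536];
        memset(buf, 'x', sizeof buf);

        for (int i = 0; i < 1024; i++) {              /* ~64 MB of data    */
            if (write(fd, buf, sizeof buf) != (ssize_t)sizeof buf) {
                perror("write");                      /* may never trigger */
                return 1;
            }
        }

        if (fsync(fd) != 0) {                         /* ENOSPC/EDQUOT can */
            perror("fsync");                          /* first appear here */
            return 1;
        }
        if (close(fd) != 0) {                         /* ...or here        */
            perror("close");
            return 1;
        }
        puts("all data reached stable storage");
        return 0;
    }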
2007 Aug 22
5
Slow concurrent actions on the same LVM logical volume
Hi to all! I have problems with concurrent filesystem actions on an ocfs2 filesystem which is mounted by 2 nodes. OS=RH5ES and OCFS2=1.2.6. For example: if I have an LV called testlv which is mounted on /mnt on both servers, and I run "dd if=/dev/zero of=/mnt/test.a bs=1024 count=1000000" on server 1 while doing a du -hs /mnt/test.a at the same time, it takes about 5 seconds for du -hs to execute: 270M
2018 Mar 01
29
[Bug 13317] New: rsync returns success when target filesystem is full
https://bugzilla.samba.org/show_bug.cgi?id=13317
Bug ID: 13317
Summary: rsync returns success when target filesystem is full
Product: rsync
Version: 3.1.2
Hardware: x64
OS: FreeBSD
Status: NEW
Severity: major
Priority: P5
Component: core
Assignee: wayned at samba.org
2011 May 13
27
Extremely slow zpool scrub performance
Running a zpool scrub on our production pool is showing a scrub rate of about 400K/s. (When this pool was first set up we saw rates in the MB/s range during a scrub). Both zpool iostat and an iostat -Xn show lots of idle disk times, no above average service times, no abnormally high busy percentages. Load on the box is .59. 8 x 3GHz, 32GB ram, 96 spindles arranged into raidz zdevs on OI 147.
2019 Apr 30
6
Disk space and RAM requirements in docs
...ng/test/Modules/Output/target-platform-features.m.tmp
328K  build/tools/clang/test/CXX/temp/temp.decls
328K  build/lib/Target/Lanai/TargetInfo
320K  build/tools/lld/tools/lld/CMakeFiles
320K  build/tools/clang/test/Modules/Output/odr_hash-Friend.cpp.tmp
320K  build/lib/Target/Lanai/TargetInfo/CMakeFiles
316K  build/tools/lld/tools/lld/CMakeFiles/lld.dir
316K  build/tools/clang/test/Modules/Output/odr_hash-Friend.cpp.tmp/modules.cache
316K  build/lib/Target/XCore/InstPrinter
316K  build/lib/Target/Lanai/TargetInfo/CMakeFiles/LLVMLanaiInfo.dir
312K  build/tools/clang/test/CXX/expr/expr.unary
308K  build/tools/...