Displaying 15 results from an estimated 15 matches for "127m".
2006 Jun 05
4
Swap memory: I can't reconcile this stuff.
...es my butt. I
turned on the swap field in top and sorted on it. Here's an edited snippet
of the results.
Mem:  775708k total, 764752k used, 10956k free, 60780k buffers
Swap: 1572856k total, 160k used, 1572696k free, 377324k cached

  PID   VIRT   RES   SHR %MEM SWAP COMMAND
24729   127m   32m   15m  4.3  94m evolution
 3409  97220  5268  4304  0.7  89m evolution-data-
 2851   115m   36m  7120  4.8  79m X
10937   109m   45m   14m  6.0  63m firefox-bin
 3417  63076  7876  6756  1.0  53m evolution-alarm
 3363  40332  7284  6228  0.9  32m eggcups
24745  37480  8176  6876  1.1  28m evolution-excha
3736 5...
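
The per-process SWAP figures above add up to far more than the 160k shown as
used on the Swap: line because, as far as I know, older versions of top derive
SWAP as VIRT - RES, which counts mapped but never-resident pages rather than
pages actually written to swap. A minimal cross-check, assuming a modern kernel
that exposes VmSwap in /proc/<pid>/status (the 2006-era kernel shown here would
not), is to sum the kernel's own per-process accounting:

# Illustration only; assumes a kernel that reports VmSwap in /proc/<pid>/status.
for f in /proc/[0-9]*/status; do
    cat "$f" 2>/dev/null
done | awk '/^VmSwap:/ { kb += $2 } END { printf "VmSwap total: %d kB\n", kb }'
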
2013 Oct 24
1
failed: Message has been copied too many times
Hello,
I'm running Dovecot 2.1.16 on an Ubuntu 12.04 server, with lazy_expunge,
SIS and the mdbox format.
The problem I'm having is that the index for one of the mailboxes of one
of my users is growing far too large. This is not the first time I've hit
this problem. In previous cases, it was because a message had been duplicated
thousands of times (I haven't found any reason for this). In these cases,
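
A quick way to see how many messages the index currently believes a mailbox
holds, and to rebuild the index if it has been corrupted, is sketched below;
the user and mailbox names are placeholders, not details from the report.

# Placeholders only ("someuser", "INBOX"); not taken from the original report.
doveadm search -u someuser mailbox INBOX all | wc -l   # messages the index knows about
doveadm force-resync -u someuser INBOX                 # rebuild the index for that mailbox
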
2018 Apr 27
3
Size of produced binaries when compiling llvm & clang sources
...4M modularize
443M clang-func-mapping
442M clang-diff
441M libToolingExample00
438M pp-trace
434M diagtool
184M llvm-cfi-verify
170M llvm-objdump
168M sancov
158M llvm-rtdyld
149M llvm-ar
148M llvm-nm
145M llvm-extract
145M llvm-link
142M llvm-dwarfdump
141M llvm-split
131M llvm-mc
127M llvm-pdbutil
126M clang-offload-bundler
122M llvm-mca
121M verify-uselistorder
121M llvm-cat
120M llvm-as
117M llvm-special-case-list-fuzzer
117M llvm-demangle-fuzzer
116M llvm-modextract
114M obj2yaml
112M llvm-xray
105M sanstats
105M llvm-symbolizer
96M llvm-readobj
93M llvm-cov...
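
Binaries in this size range are typical of an unstripped static build with
assertions or debug info enabled. One common way to shrink them, shown here
only as a sketch rather than a recommendation from the thread, is a Release
configuration that links the tools against the shared libLLVM library:

# Sketch only; the source path and generator are placeholders.
cmake -G Ninja ../llvm \
  -DCMAKE_BUILD_TYPE=Release \
  -DLLVM_ENABLE_ASSERTIONS=OFF \
  -DLLVM_BUILD_LLVM_DYLIB=ON \
  -DLLVM_LINK_LLVM_DYLIB=ON
ninja
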
2013 Apr 12
1
after snapshot-delete, the qcow2 image file size doesn't decrease
...-04-12 17:13:42 +0800 running
[root@test1 ]# qemu-img info d0.qcow
image: d0.qcow
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 3.8G
cluster_size: 65536
Snapshot list:
ID   TAG          VM SIZE  DATE                 VM CLOCK
1    1365758005      127M  2013-04-12 17:13:25  00:00:53.141
2    1365758022      127M  2013-04-12 17:13:42  00:01:09.508
[root@test1 ]# ls -lh
total 3.9G
-rw------- 1 qemu qemu 3.9G Apr 12 17:14 d0.qcow
drwx------ 2 root root 4.0K Apr 12 17:12 held
[root@test1 ]# virsh snapshot-delete
bfbe8ca8-8579-11e2-8...
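
qcow2 does not return freed clusters to the host filesystem on its own, so
deleting snapshots frees space inside the image without shrinking the file.
One common way to compact it, sketched here with the file names from the
example and assuming the guest is shut down, is to rewrite the image:

# Guest must be shut down; internal snapshots are not carried over by convert.
qemu-img convert -O qcow2 d0.qcow d0.compact.qcow2
qemu-img info d0.compact.qcow2    # verify the new, smaller image first
mv d0.compact.qcow2 d0.qcow
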
2018 Apr 27
0
Size of produced binaries when compiling llvm & clang sources
...ngExample00
> 438M pp-trace
> 434M diagtool
> 184M llvm-cfi-verify
> 170M llvm-objdump
> 168M sancov
> 158M llvm-rtdyld
> 149M llvm-ar
> 148M llvm-nm
> 145M llvm-extract
> 145M llvm-link
> 142M llvm-dwarfdump
> 141M llvm-split
> 131M llvm-mc
> 127M llvm-pdbutil
> 126M clang-offload-bundler
> 122M llvm-mca
> 121M verify-uselistorder
> 121M llvm-cat
> 120M llvm-as
> 117M llvm-special-case-list-fuzzer
> 117M llvm-demangle-fuzzer
> 116M llvm-modextract
> 114M obj2yaml
> 112M llvm-xray
> 105M sanstats
> ...
2018 Mar 18
4
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
...totally different solution etc. etc.) takes approximately 10-15 seconds(!).
Any advice for tuning the volume or XFS settings would be greatly appreciated.
Hopefully I've included enough relevant information below.
## Gluster Client
root@gluster-client:/mnt/gluster_perf_test/ # du -sh .
127M    .
root@gluster-client:/mnt/gluster_perf_test/ # find . -type f | wc -l
21791
root@gluster-client:/mnt/gluster_perf_test/ # du 9584toto9584.txt
4    9584toto9584.txt
root@gluster-client:/mnt/gluster_perf_test/ # time cp -a private private_perf_test
real 5m51.862s
user 0m0.862...
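
Two checks that usually help frame a small-file problem like this, given here
only as a sketch with placeholder brick path and volume name, are timing the
same copy directly on a brick's local XFS to separate Gluster and network
overhead from disk overhead, and reviewing the volume's client-side caching
options:

# Placeholder brick path: time the same copy on the brick's local filesystem.
time cp -a /bricks/brick1/private /bricks/brick1/private_local_test
# Placeholder volume name: show the options in effect and the tunables
# the installed release supports.
gluster volume get myvol all | grep -E 'performance|cluster.lookup'
gluster volume set help | less
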
2018 Mar 18
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
...5
> seconds(!).
>
> Any advice for tuning the volume or XFS settings would be greatly
> appreciated.
>
> Hopefully I've included enough relevant information below.
>
>
> ## Gluster Client
>
> root@gluster-client:/mnt/gluster_perf_test/ # du -sh .
> 127M    .
> root@gluster-client:/mnt/gluster_perf_test/ # find . -type f | wc -l
> 21791
> root@gluster-client:/mnt/gluster_perf_test/ # du 9584toto9584.txt
> 4    9584toto9584.txt
>
>
> root@gluster-client:/mnt/gluster_perf_test/ # time cp -a private
> private_per...
2001 Sep 30
1
2.4.9-ac18; issues with '/'
...ounted on
/dev/hda5 20G 19G 1.0G 95% /home
/dev/hda10 3.2G 2.5G 549M 83% /usr
/dev/hda6 4.9G 3.2G 1.4G 68% /usr/local
/dev/hda8 509M 51M 433M 11% /var
/dev/hda9 509M 23M 461M 5% /var/tmp
tmpfs 128M 8.0k 127M 1% /tmp
Other system info:
$ uname -a
Linux ja 2.4.9-ac18 #1 Sun Sep 30 11:30:05 EDT 2001 i686 unknown
This runs on a heavily modified Red Hat 7.0 distribution.
Thanks,
-Jeremy
--
Jeremy Andrews <mailto:jeremy@kerneltrap.com>
PGP Key ID: 8F8B617A http://www.kerneltrap.com/
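
The df listing above shows every filesystem except '/' itself; on kernels of
that era a classic cause was a stale or truncated /etc/mtab rather than a real
mount problem, and a quick cross-check (not part of the original message) is
to compare it with the kernel's own mount table:

# Not from the original message: df reads /etc/mtab, so compare it with
# the kernel's view of what is actually mounted.
grep ' / ' /proc/mounts
diff /etc/mtab /proc/mounts
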
2018 Mar 19
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
...-15
> seconds(!).
>
> Any advice for tuning the volume or XFS settings would be greatly
> appreciated.
>
> Hopefully I've included enough relevant information below.
>
>
> ## Gluster Client
>
> root@gluster-client:/mnt/gluster_perf_test/ # du -sh .
> 127M    .
> root@gluster-client:/mnt/gluster_perf_test/ # find . -type f | wc -l
> 21791
> root@gluster-client:/mnt/gluster_perf_test/ # du 9584toto9584.txt
> 4    9584toto9584.txt
>
>
> root@gluster-client:/mnt/gluster_perf_test/ # time cp -a private
> private_perf...
2017 Oct 03
0
multipath
...ne of the disks sde (mpathj) has a mounted file system, the
remaining two do not.
here is df:
Filesystem Type Size Used Avail Use% Mounted on
/dev/sdc2 ext4 31G 26G 4.0G 87% /
tmpfs tmpfs 16G 92K 16G 1% /dev/shm
/dev/sdc1 ext4 969M 127M 793M 14% /boot
/dev/sdc6 ext4 673G 242G 398G 38% /data01
/dev/mapper/mpathjp1 ext4 917G 196G 676G 23% /data02
/dev/sdc5 ext4 182G 169G 3.9G 98% /home
/dev/mapper/mpathep1 ext4 13T 11T 1005G 92% /SAN101
/dev/mapper/mpathep2 ext4 13T 5.0T 7.0T 42% /S...
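
A quick way to see how the three maps are laid out (the device names below
follow the df output, but the commands themselves are not from the original
message) is to dump the multipath topology and check each map for partitions
or filesystem signatures:

# Device names follow the df output above; not from the original message.
multipath -ll                 # list all multipath maps and their paths
lsblk /dev/mapper/mpathj      # partitions and holders on one map (example)
blkid /dev/mapper/mpathjp1    # filesystem signature, if any
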
2018 Mar 19
2
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
...5
> seconds(!).
>
> Any advice for tuning the volume or XFS settings would be greatly
> appreciated.
>
> Hopefully I've included enough relevant information below.
>
>
> ## Gluster Client
>
> root@gluster-client:/mnt/gluster_perf_test/ # du -sh .
> 127M    .
> root@gluster-client:/mnt/gluster_perf_test/ # find . -type f | wc -l
> 21791
> root@gluster-client:/mnt/gluster_perf_test/ # du 9584toto9584.txt
> 4    9584toto9584.txt
>
>
> root@gluster-client:/mnt/gluster_perf_test/ # time cp -a private
> private_per...
2013 Nov 25
2
Syslinux 6 will not boot ISOs on BIOS (i.e. pre-UEFI) systems
...5535 (As Mattias confirmed)
with isolinux.bin from 6.02.
-----
#!/bin/sh
mkdir -p /tmp/test/isolinux
cp ~/isolinux-6.02.bin /tmp/test/isolinux/isolinux.bin
## Push isolinux.bin at LBA 65570 (0x00010022)
truncate -s 128M /tmp/test/coco
## Push isolinux.bin at LBA 65058 (0x0000fe22)
# truncate -s 127M /tmp/test/coco
xorriso -as mkisofs \
  -eltorito-boot isolinux/isolinux.bin \
  -eltorito-catalog isolinux/boot.cat \
  -no-emul-boot -boot-load-size 4 -boot-info-table \
  --sort-weight -1 isolinux/isolinux.bin \
  --sort-weight +1 coco \
  -output /tmp/test.is...
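
My reading of the two pad sizes, which the post itself does not spell out:
ISO 9660 uses 2048-byte sectors, so the 128M pad file alone covers 65536
sectors and pushes isolinux.bin past the 16-bit LBA boundary at 65535, while
the 127M pad keeps it below; the extra 34 sectors in the reported LBAs are the
filesystem structures written ahead of the pad file.

# Sectors occupied by each pad file, at 2048 bytes per ISO sector:
echo $(( 128 * 1024 * 1024 / 2048 ))   # 65536 -> boot image ends up at LBA 65570
echo $(( 127 * 1024 * 1024 / 2048 ))   # 65024 -> boot image ends up at LBA 65058
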
2013 Nov 25
3
Syslinux 6 will not boot ISOs on BIOS (i.e. pre-UEFI) systems
> As stated earlier, the next version of xorriso will have
> sort weight 2 for El Torito boot images by default.
> But it will do no harm to explicitly use --sort-weight
> options with old and new versions of xorriso.
FWIW,
mkisofs is supposed to assign a +2 sort weight by default to the
eltorito boot image and +1 to the boot catalog, at least when no sort
file is provided.
I
2018 Mar 19
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
...Any advice for tuning the volume or XFS settings would be greatly
>> appreciated.
>>
>> Hopefully I've included enough relevant information below.
>>
>>
>> ## Gluster Client
>>
>> root@gluster-client:/mnt/gluster_perf_test/ # du -sh .
>> 127M    .
>> root@gluster-client:/mnt/gluster_perf_test/ # find . -type f | wc -l
>> 21791
>> root@gluster-client:/mnt/gluster_perf_test/ # du 9584toto9584.txt
>> 4    9584toto9584.txt
>>
>>
>> root@gluster-client:/mnt/gluster_perf_test/ # time cp -a...
2010 May 02
8
zpool mirror (dumb question)
Hi there!
I am new to the list, and to OpenSolaris, as well as ZFS.
I am creating a zpool/zfs to use on my NAS server, and basically I want some
redundancy for my files/media. What I am looking to do is get a bunch of
2TB drives, mount them mirrored, and put them in a zpool so that I don't have
to worry about running out of room. (I know, pretty typical I guess.)
My problem is, is that
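
The goal described above, mirrored 2TB drives in a pool that can keep growing,
maps onto the usual mirrored-pairs layout; the sketch below uses placeholder
device names and is not taken from the thread.

# Placeholder device names; start with one mirrored pair...
zpool create tank mirror c0t0d0 c0t1d0
# ...and grow the pool later by adding further mirrored pairs.
zpool add tank mirror c0t2d0 c0t3d0
zpool status tank
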