search for: 190mb

Displaying 20 results from an estimated 22 matches for "190mb".

2015 Feb 28
3
disk space trouble on ec2 instance
...inodes for the files on the disk, I attempted rebooting the instance. After logging in again I did a df -h / on the root volume. And look! Still at 100% capacity used. Grrr.... Ok, so I then did a du -h on the /var/www directory, which was mounted on the root volume, and saw that it was gobbling up 190MB of disk space. So then I reasoned that I could create an EBS volume, rsync the data there, blow away the contents of /var/www/* and then mount the EBS volume on the /var/www directory. So I went through that exercise and, lo and behold, still at 100% capacity. Rebooted the instance again. Logged in...
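The sequence described above, written out as a shell sketch (the EBS device name, mount point and filesystem are assumptions for illustration, not taken from the post):

    du -sh /var/www                              # confirm how much space the directory uses
    mkfs.ext4 /dev/xvdf                          # format the newly attached EBS volume (device name assumed)
    mkdir -p /mnt/ebs && mount /dev/xvdf /mnt/ebs
    rsync -a /var/www/ /mnt/ebs/                 # copy the data onto the EBS volume
    rm -rf /var/www/*                            # blow away the contents on the root volume
    umount /mnt/ebs && mount /dev/xvdf /var/www  # remount the EBS volume over /var/www
    df -h /                                      # re-check usage on the root volume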
2010 Jul 19
2
[LLVMdev] VMkit AOT build problem: llc crashed on glibj compilation to native(.s) file
...sable-fp-elim glibj-optimized.zip.bc -o glibj.zip.s 1. Running pass 'Function Pass Manager' on module 'glibj-optimized.zip.bc'. make: *** [glibj.zip.s] Aborted (core dumped) >>> File sizes: glibj.zip.bc (93Mb), glibj-optimized.zip.bc (93Mb), glibj-optimized.zip.s was ~>190Mb when this crash occurred. How can this be corrected? Thanks, Minas
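For reference, the command truncated at the start of the excerpt is presumably an llc invocation along these lines (the visible fragment looks like the old -disable-fp-elim option; the rest of the flag set is an assumption):

    llc -disable-fp-elim glibj-optimized.zip.bc -o glibj.zip.s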
2015 Feb 28
0
disk space trouble on ec2 instance
On 2/27/2015 10:46 PM, Tim Dunphy wrote: > I'm at a loss to explain how I can delete 190MB worth of data, reboot the > instance and still be at 100% usage. 190MB is less than two percent of 9.9GB aka 9900MB. BTW, for cases like this, I'd suggest using df -k or -m rather than -h to get more precise and consistent values. Also note, Unix (and Linux) file systems usually have a r...
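A quick illustration of the suggestion (paths reused from the thread, output values hypothetical):

    df -m /          # sizes in 1M blocks: a ~9.9GB root volume shows roughly 10000 blocks
    du -sm /var/www  # the ~190MB directory is only a small fraction of that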
2015 Mar 02
1
disk space trouble on ec2 instance
...of 100% used. Maybe a little unconventional, but at least it got the job done. Thanks again, guys! Tim On Sat, Feb 28, 2015 at 2:46 AM, John R Pierce <pierce at hogranch.com> wrote: > On 2/27/2015 10:46 PM, Tim Dunphy wrote: > >> I'm at a loss to explain how I can delete 190MB worth of data, reboot the >> instance and still be at 100% usage. >> > > 190MB is less than two percent of 9.9GB aka 9900MB > > BTW, for cases like this, I'd suggest using df -k or -m rather than -h to > get more precise and consistent values. > > > Also note...
2010 Aug 02
0
[LLVMdev] VMkit AOT build problem: llc crashed on glibj compilation to native(.s) file
...o > glibj.zip.s > 1. Running pass 'Function Pass Manager' on module 'glibj-optimized.zip.bc'. > make: *** [glibj.zip.s] Aborted (core dumped) >>>> > > File sizes: glibj.zip.bc (93Mb), glibj-optimized.zip.bc (93Mb), > glibj-optimized.zip.s was ~>190Mb when this crash occurred. > > How can this be corrected? Attached patch should fix the issue. The unreachable condition was caused by a constant expression involving an inttoptr from i32. This results in a zext to 64 bits, but apparently LowerConstant in AsmPrinter doesn't handle const...
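A hypothetical minimal reproducer for the class of constant described, not taken from the actual patch or test case (LLVM 2.x-era IR with typed pointers):

    cat > inttoptr-repro.ll <<'EOF'
    @p = global i8* inttoptr (i32 1 to i8*)
    EOF
    llc inttoptr-repro.ll -o inttoptr-repro.s   # on a 64-bit target the i32 must be extended to pointer width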
2010 Aug 03
2
[LLVMdev] VMkit AOT build problem: llc crashed on glibj compilation to native(.s) file
...Running pass 'Function Pass Manager' on module > 'glibj-optimized.zip.bc'. > > make: *** [glibj.zip.s] Aborted (core dumped) > >>>> > > > > File sizes: glibj.zip.bc (93Mb), glibj-optimized.zip.bc (93Mb), > > glibj-optimized.zip.s was ~>190Mb when this crash occurred. > > > > How can this be corrected? > > Attached patch should fix the issue. The unreachable condition was > caused by a constant expression involving an inttoptr from i32. This > results in a zext to 64 bits, but apparently LowerConstant in > A...
2015 Feb 27
0
[LLVMdev] SVN dump seed file (was: svnsync of llvm tree)
...ory, then a git clone of the git-svn mirror will give you this very cheaply and with the added bonus that you can then commit to the local copy and still push things upstream (and merge changes from upstream). A fresh clone of the llvm and clang git mirrors transfers about 310MB for LLVM and about 190MB for Clang. What do you want to do with the svnsync copy? David > On 27 Feb 2015, at 10:27, Oliver Schneider <llvm at assarbad.net> wrote: > > Hi folks, > > in a rather old thread on this list titled "svnsync of llvm tree" > <http://comments.gmane.org/gmane....
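For completeness, the clones being measured would have been against the official git-svn mirrors that llvm.org hosted at the time (2015-era URLs, since retired in favour of the GitHub monorepo):

    git clone http://llvm.org/git/llvm.git
    git clone http://llvm.org/git/clang.git llvm/tools/clang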
2003 Apr 16
6
Slow connection
Hi, Last week I discovered that my samba-server seems rather slow. When I searched around I found that I wasn't the only one experiencing this, but I didn't find a solution. I experimented a little, and I hope someone can suggest a solution. First, my server: SuSE 8.1, with Samba 2.2.5 (comes with the distribution). From a WinME-client I uploaded a 160Mb file to the samba-server.
2015 Feb 27
2
[LLVMdev] SVN dump seed file (was: svnsync of llvm tree)
Hi folks, in a rather old thread on this list titled "svnsync of llvm tree" <http://comments.gmane.org/gmane.comp.compilers.llvm.devel/42523> we noticed that an svnsync would fail due to a few particularly big commits that apparently caused OOM conditions on the server. The error and the revision number were consistent for different people. That seems to be fixed now. I succeeded
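A typical svnsync mirror setup of the kind the thread is about, as a hedged sketch (the local path is a placeholder and the upstream URL is the repository root as it existed at the time):

    svnadmin create /srv/llvm-mirror
    printf '#!/bin/sh\nexit 0\n' > /srv/llvm-mirror/hooks/pre-revprop-change   # svnsync needs this hook to allow revprop changes
    chmod +x /srv/llvm-mirror/hooks/pre-revprop-change
    svnsync init file:///srv/llvm-mirror http://llvm.org/svn/llvm-project
    svnsync sync file:///srv/llvm-mirror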
2002 Apr 07
0
syslinux and dos memory limitation
Why did you design syslinux with such a DOS 640k limitation? I couldn't use syslinux on a Netier XL1000 thin client because of less than 608k available to DOS, when I have 190MB! The Netier has built-in PXE boot.
2010 May 05
0
Migration problem
...15 3165] INFO (XendCheckpoint:423) 1: sent 136192, skipped 538, delta 10803ms, dom0 42%, target 0%, sent 413Mb/s, dirtied 5Mb/s 1711 pages 2: sent 1309, skipped 9, delta 48ms, dom0 56%, target 0%, sent 893Mb/s, dirtied 39Mb/s 58 pages 3: sent 58, skipped 0, delta 10ms, dom0 100%, target 0%, sent 190Mb/s, dirtied 32Mb/s 10 pages 4: sent 10, skipped 0, Start last iterationint:423) Saving memory pages: iter 4 0% [2010-05-05 11:32:26 3165] DEBUG (XendCheckpoint:394) suspend [2010-05-05 11:32:26 3165] DEBUG (XendCheckpoint:127) In saveInputHandler suspend [2010-05-05 11:32:26 3165] DEBUG (XendChec...
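For context, per-iteration logs like the one above come from a live migration kicked off with the xm toolstack, roughly as follows (domain and destination host are placeholders):

    xm migrate --live myguest dest-host.example.com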
2018 Apr 21
0
What is the maximum speed for download from a samba share
...rver. Limiting the linux-server's max cpu-speed had the biggest effect on performance: (limited to 1.6GHz instead of 2.4GHz) (33% limitation) Using bs=16.0M, count=256, iosize=4.0G (~35% slowdown) R:4294967296 bytes (4.0GB) copied, 10.4467 s, 392MB/s W:4294967296 bytes (4.0GB) copied, 21.5026 s, 190MB/s Limiting the client (cygwin-win7sp1x64): (~7-13% slowdown) (clock limited to 1.16GHz instead of 3.2GHz) Using bs=16.0M, count=256, iosize=4.0G R:4294967296 bytes (4.0GB) copied, 7.14355 s, 573MB/s W:4294967296 bytes (4.0GB) copied, 15.9781 s, 256MB/s This would indicate that even in the unencrypt...
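The read/write figures above are dd-style transfers of a 4GB file; an equivalent test against a mounted share might look like this (mount point and file name are placeholders):

    dd if=/dev/zero of=/mnt/smbshare/testfile bs=16M count=256 conv=fsync   # write test, 4 GiB
    dd if=/mnt/smbshare/testfile of=/dev/null bs=16M count=256              # read test, 4 GiB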
2010 Aug 04
0
[LLVMdev] VMkit AOT build problem: llc crashed on glibj compilation to native(.s) file
...Pass Manager' on module >> > 'glibj-optimized.zip.bc'. >> > make: *** [glibj.zip.s] Aborted (core dumped) >> >>>> >> > >> > File sizes: glibj.zip.bc (93Mb), glibj-optimized.zip.bc (93Mb), >> > glibj-optimized.zip.s was ~>190Mb when this crash occurred. >> > >> > How can this be corrected? >> >> Attached patch should fix the issue. The unreachable condition was >> caused by a constant expression involving an inttoptr from i32. This >> results in a zext to 64 bits, but apparently...
2018 Apr 20
3
What is the maximum speed for download from a samba share
What is the maximum speed for download from a samba share? I have a 100 kbit/s Internet connection. The maximum speed for a download from my webserver with samba is about 25.000 kbit/s. The bottleneck is the samba access. Without samba I can download with 100 kbit/s. So the question: is it possible to get more speed, or is this the maximum speed with samba? I tried a lot of hints for samba tuning but
2015 Nov 20
0
[PATCH -qemu] nvme: support Google vendor extension
...t;match_data" and "data" mean? I use a real NVMe device as the backend. -drive file=/dev/nvme0n1,format=raw,if=none,id=D22 \ -device nvme,drive=D22,serial=1234 Here are the test results: local NVMe: 860MB/s qemu-nvme: 108MB/s qemu-nvme+google-ext: 140MB/s qemu-nvme-google-ext+eventfd: 190MB/s root at wheezy:~# cat test.job [global] bs=4k ioengine=libaio iodepth=64 direct=1 runtime=60 time_based norandommap group_reporting gtod_reduce=1 numjobs=8 [job1] filename=/dev/nvme0n1 rw=read
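The job file quoted above (flattened by the excerpt) is run simply with:

    fio test.job   # 8 jobs, 4k libaio reads at iodepth 64 against /dev/nvme0n1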
2014 Apr 13
1
gvfs and (lib)smbclient
...l Samba server from a tmpfs to a tmpfs. Note that the latency of this is small, and across a LAN any latency effects would be more noticeable. smbclient with default block size: 1429MB/s smbclient with bs of 65534 bytes: 1644MB/s smbget with default block size: 207MB/s smbget with bs of 65534 bytes: 190MB/s smbget with bs of 1MiB: 884MB/s
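The exact invocations are not shown in the excerpt; a hedged sketch of the kind of transfer being timed, using the documented buffer/block-size options (server, share, credentials and file are placeholders):

    smbclient //server/share -U user%pass -b 65534 -c 'get bigfile /dev/null'
    smbget -b 65534 smb://server/share/bigfile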
2006 Jul 17
11
ZFS bechmarks w/8 disk raid - Quirky results, any thoughts?
...resting numbers happen at 7 disks - it's slower than with 4, in all tests. I ran it 3x to be sure. Note this was a native 7 disk raid-z, it wasn't 8 running in degraded mode with 7. Something is really wrong with my write performance here across the board. Reads: 4 disks gives me 190MB/sec. WOAH! I'm very happy with that. 8 disks should scale to 380 then. Well, 320 isn't all that far off - no biggie. Looking at the 6 disk raidz is interesting though: 290MB/sec. The disks are good for 60+MB/sec individually. 290 is 48/disk - note also that this is better than my r...
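A hedged sketch of the kind of pools being compared (Solaris-style device names are placeholders; the post does not give the exact commands):

    zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0                        # 4-disk raid-z
    zpool destroy tank
    zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0   # 7-disk raid-z
    dd if=/tank/bigfile of=/dev/null bs=1M                                     # simple sequential read check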
2010 Jul 05
21
AoE or iSCSI???
Hi people... Here we use Xen 4 with Debian Lenny... We're using kernel 2.6.31.13 pvops... As a storage system, we use AoE devices... So, we installed VMs on an AoE partition... The "NAS" server is an Intel-based bare-metal box with SATA hard discs... However, sometimes I feel that the VMs are so slow... Also, all VMs have GPLPV drivers installed... So, I am thinking about
2015 Nov 20
2
[PATCH -qemu] nvme: support Google vendor extension
On 20/11/2015 09:11, Ming Lin wrote: > On Thu, 2015-11-19 at 11:37 +0100, Paolo Bonzini wrote: >> >> On 18/11/2015 06:47, Ming Lin wrote: >>> @@ -726,7 +798,11 @@ static void nvme_process_db(NvmeCtrl *n, hwaddr addr, int val) >>> } >>> >>> start_sqs = nvme_cq_full(cq) ? 1 : 0; >>> - cq->head = new_head;