search for: 170mb

Displaying 14 results from an estimated 14 matches for "170mb".

2012 Sep 21
2
cptime/memdisk block size
Is there any documentation available about what block size MEMDISK uses to load (big ISO/hard-disk) image files? Experimenting with cptime.c32 on a Lexar Triton JumpDrive (64GB, 170MB/s read in Windows, 150MB/s write in Windows) on bootable USB3.0 resulted in the following, depending on the specified block size: * 2048 bytes --> 25MB/s * 2MB --> 61MB/s It would be nice to know at which speed MEMDISK is loading, but a stopwatch seems a bit crude as the only method. Bernd
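For comparison outside MEMDISK, raw read throughput at different block sizes can be checked with GNU dd; a rough sketch, where /dev/sdX stands in for the actual USB stick and iflag=direct bypasses the page cache so repeat runs aren't served from RAM:

$ dd if=/dev/sdX of=/dev/null bs=2048 count=524288 iflag=direct
$ dd if=/dev/sdX of=/dev/null bs=2M count=512 iflag=direct

Both read about 1GB; dd prints MB/s on completion, giving a baseline to set against the cptime.c32 numbers above.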
2006 Nov 25
5
Mongrel 0.3.17 PR -- Big Work Day, 1.0 RC1 Very Close
Hello Everyone, We're hard at work getting the hot new win32 service Luis wrote out and ready for production. We're hoping to have that included in the 1.0 RC1 release we make very soon. This pre-release is just to make sure that we didn't step on any toes. Install it with: $ gem install fastthread --source=http://mongrel.rubyforge.org/releases $ gem install
2012 Apr 04
1
memdisk speed diagnostics?
...ading an operating system's data at the intended device/controller/interface/bus (etc.) speed. My own system has a habit of setting a USB2.0 interface to 1.1 speeds, for example. With (hopefully) bootable USB3.0 interfaces coming soon, MEMDISK combined with a sufficiently sized disk image (say a 170MB Parted Magic CD-image file) would show loading speeds between 60MB/s and 500MB/s if a suitable device (USB3.0 flash drive or USB3.0-connected SSD) is used. Unfortunately I'd have no idea how to take compression (gzip/zip) into account; it can alter the numbers significantly. perhaps something like...
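To separate raw device speed from decompression overhead, the two can be timed independently; a rough sketch, where /dev/sdX and image.gz are placeholders for the boot device and a gzipped disk image:

$ time dd if=/dev/sdX of=/dev/null bs=2M count=85      # raw read of ~170MB
$ time gunzip -c image.gz > /dev/null                  # gunzip cost alone, no device I/O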
2000 Oct 02
3
R vs S-PLUS with regard to memory usage
I am trying to translate code from S-PLUS to R and R really struggles! After starting R with the following: R --vsize 50M --nsize 6M --no-restore on a 400 MHz Pentium with 192 MB of memory running Linux (RH 6.2), I run a function that essentially picks up an external dataset with 2121 rows and 30 columns and builds a lm() object and also runs step() ... the step() takes forever to run...(takes very
2004 Sep 28
1
infinite loop in rsync daemon on Mac OSX
...about 35GB of data on the macbox, and if I run a bunch of individual rsync commands to copy all macbox's directories one at a time, they all complete fine. So I run the all-at-once rsync command again, but this time I kill it after 10 minutes to check the nohup.out file (which is now about 170MB), and it turns out that it's full of duplicate lines: [me@linuxbox]$ cat nohup.out |sort|uniq|wc 469433 1016118 53240522 [me@linuxbox]$ wc nohup.out 1591372 3347600 179505609 So that's 1.6 million recv_file_name() lines, 70% of which are duplicates. It appears that it just keeps rec...
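To see which lines are repeating rather than just how many, a quick follow-up on the same log; a sketch using the nohup.out from the post:

$ sort nohup.out | uniq -c | sort -rn | head

uniq -c prefixes each distinct line with its count, so the most-duplicated file names float to the top.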
2000 Mar 30
1
Problem with Samba 2.0.5 and Q&A
...x C). Press Esc to cancel. Ref#:04EC It seems to be related to drive size/samba/fstype or something... The share that I am mapping the clients to (Win95 and Win 3.11) is: [smallhd] comment = H Drive path = /smallhd read only = No guest only = Yes locking = No fstype = FAT which is a small (170MB) hard disk. (It was previously running from a 9GB SCSI drive): [hdrive] comment = H Drive path = /hdrive read only = No guest only = Yes locking = No But due to the info I found at: http://www.qaug.com/faq.htm Is Q&A networkable? All versions of Q&A are networkable on, for example, Windows 95/...
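A quick way to confirm which parameters Samba actually applied to that share is testparm; a sketch, where the config path depends on the install and -s (on versions that have it) suppresses the interactive prompt:

$ testparm -s /etc/smb.conf | grep -A 8 'smallhd'

testparm dumps the parsed service definitions, so a missing fstype line here would mean the override never took effect.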
2012 Oct 11
0
samba performance downgrade with glusterfs backend
...iozone (block=1MB) to test write performance: about 400MB/s. # dd if=/dev/zero of=dd.dat bs=1MB count=1k 1024+0 records in 1024+0 records out 1024000000 bytes (1.0 GB) copied, 2.6142 s, 392 MB/s But when exporting with Samba and using 4 Win7 clients to test with SANergy/Iometer, write performance is only about 170MB/s. Command line used: iozone -s 1g -r 1m -i 0 -t 4 Output is in Kbytes/sec. Time Resolution = 0.000001 seconds. Processor cache size set to 1024 Kbytes. Processor cache line size set to 32 bytes. File stride size set to 17 * record size. Throughput test with 4 processes. Each process writes a 1048576...
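One variable worth eliminating is client- and server-side caching; iozone can request direct I/O so the 170MB/s vs 392MB/s gap isn't muddied by cache effects. A sketch, assuming the mount supports O_DIRECT:

iozone -s 1g -r 1m -i 0 -t 4 -I

-I asks iozone to open files with direct I/O where the platform allows it, which makes the SMB number more directly comparable to the local dd run.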
2016 Jul 06
3
Master-Master replication question
...retrieve their email via webmail/clients) are significantly smaller than the mailboxes on server B. When investigating, it seems that "older" mailboxes (or storage rather, since we use mdbox) are still there on server B, which had already been removed on server A. My personal mailbox was 170MB on server A, while it was still 2.5GB on server B (which was around that size before cleaning up the mailboxes). I enabled debugging on the servers, and I quickly see "Replication requests" on server A, but when an email arrives on server B, I do not see the request at all. My...
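When requests appear on one side only, the replicator queue itself can be inspected on each server; a sketch, where user@example.com is a placeholder:

$ doveadm replicator status '*'
$ doveadm replicator replicate -f user@example.com

status lists each user's sync state and pending requests, and replicate -f forces a full resync for one user, which helps show whether the problem is queuing or transport.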
2019 Feb 23
5
[Bug 2972] New: Add build-time option to use OpenSSL for ChaCha20-Poly1305
...rg Reporter: businesscorrespondence+openssh at rkjnsn.net I am using an ARM board based on the Marvell ARMADA 38x Cortex-A9+NEON CPU to run a custom NAS server. While the CPU power is limited, OpenSSL ships with a NEON-optimized implementation of ChaCha20-Poly1305 that achieves just over 170MB/s on this CPU (as reported by "openssl speed -elapsed -evp ChaCha20-Poly1305 -aead"), making it by far the fastest algorithm with good security on this CPU. Unfortunately, unlike the other algorithms supported by OpenSSH, it will not use OpenSSL support for ChaCha20-Poly1305 even if build...
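For an end-to-end sense of what the cipher choice costs over the wire, per-cipher throughput can be compared; a rough sketch, where host is a placeholder and dd's reported rate approximates how fast ssh consumes the stream:

$ openssl speed -elapsed -evp ChaCha20-Poly1305 -aead
$ dd if=/dev/zero bs=1M count=512 | ssh -c chacha20-poly1305@openssh.com host 'cat > /dev/null'
$ dd if=/dev/zero bs=1M count=512 | ssh -c aes128-ctr host 'cat > /dev/null'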
2006 Jun 13
3
Easiest (best?) linux distribution for dedicated Asterisk box?
First off, I'm sorry for sending so many messages to the list-serv. Hopefully this will be my last for a while! I was going to use my WRT54G router as a small Asterisk box, but I forgot that I had a spare eMachines computer (Intel Celeron 633 MHz, 20GB HD, 64MB RAM). Will this machine work OK as a very simple dedicated home Asterisk box? Also, what is the easiest Linux distribution to use and
2011 Apr 08
4
Fast version of Fisher's Exact Test
Is anyone aware of a fast way of doing Fisher's exact test for a series of 2 x 2 tables in R? fisher.test is really slow if n1 = 1000 and n2 = 1000. -- Thanks, Jim.
2008 Feb 15
38
Performance with Sun StorageTek 2540
Under Solaris 10 on a 4 core Sun Ultra 40 with 20GB RAM, I am setting up a Sun StorageTek 2540 with 12 300GB 15K RPM SAS drives and connected via load-shared 4Gbit FC links. This week I have tried many different configurations, using firmware managed RAID, ZFS managed RAID, and with the controller cache enabled or disabled. My objective is to obtain the best single-file write performance.
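For single-file write numbers on Solaris, one common quick check is timing the creation of a file well past cache size; a sketch, where the pool path and size are placeholders and the file is kept larger than the 20GB of RAM so sustained throughput dominates:

$ time mkfile 64g /pool/testfile

mkfile writes zero-filled blocks, so the result would be skewed if compression were enabled on the dataset.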
2007 Jan 11
4
Help understanding some benchmark results
G'day, all, So, I've decided to migrate my home server from Linux+swRAID+LVM to Solaris+ZFS, because it seems to hold much better promise for data integrity, which is my primary concern. However, naturally, I decided to do some benchmarks in the process, and I don't understand why the results are what they are. I thought I had a reasonable understanding of ZFS, but now
2003 Dec 01
0
No subject
...is the original post (can you check if you are getting the same error messages in your log)? ------------------- I'm getting many occurrences of the message below in the Samba 2.2.1 log file. The size varies, but the rest of the message is the same. I have about 170MB free in /var, so I'm not quite sure why it's complaining. Is it complaining about memory or disk space? Is there some kernel parameter (HP-UX 10.20) that I should bump up for this? [2001/07/20 10:28:22, 2] tdb/tdbutil.c:tdb_log(342) tdb(/var/spool/locks/locking...
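To tell whether the complaint is about disk space or the tdb itself, two quick checks on the HP-UX box; a sketch, with the directory taken from the truncated log line above:

$ bdf /var
$ ls -l /var/spool/locks/

bdf is HP-UX's df; if /var shows plenty free, attention shifts to the tdb files under /var/spool/locks and any kernel limits on file size or mapped memory.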