search for: 200mb

Displaying 20 results from an estimated 247 matches for "200mb".

2010 Mar 17
1
rsync maximum size limit
Hi, is there any way in rsync to transfer only a certain amount of files? For example: my total directory (/var/tmp/testFolder) is 800MB in size and I want to sync only 200MB of files to the other server; whichever files get synced, I don't care, just 200MB of files. I am not talking about a single file here; I want to sync the whole directory until the size reaches 200MB. Is it possible to do this with rsync? Thanks a lot in advance for your help.
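rsync itself has no cumulative-size cap (--max-size limits the size of individual files, not the total), so one common workaround — sketched below as a hypothetical Python helper, not anything proposed in this thread — is to pre-select files up to the budget and hand the list to rsync via --files-from:

```python
import os

def select_up_to(root, budget_bytes):
    """Walk root and collect relative file paths until their
    cumulative size would exceed budget_bytes (greedy selection)."""
    chosen, total = [], 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            size = os.path.getsize(full)
            if total + size > budget_bytes:
                continue  # skip any file that would bust the budget
            total += size
            chosen.append(os.path.relpath(full, root))
    return chosen, total

# The resulting list can be written out and passed to rsync, e.g.:
#   rsync -a --files-from=filelist.txt /var/tmp/testFolder/ host:/dest/
```

The selection is greedy and order-dependent; sorting candidates by size first would pack closer to the 200MB budget.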
2014 Jul 24
1
Default or UI configuration directive for 200MB.vfd
Hello all, I have about 40 bootable floppy images and would like to operate using "200MB.vfd". I created "swd200MB.vfd", but when booted it stops with the remark "No Default or UI configuration directive found!". Could you provide the procedure to follow? Prof. S W Damle
2012 Dec 12
0
download quota of 200MB per voucher
Dear friends, I have a question for you; I am sure someone can help. The pfSense captive portal is up and running. Time-countdown vouchers are working without issue, such as 30m, 45m, 1h and so on. However, I'd like to set up a download quota of 200MB per voucher, but then you need to log in with a username and password instead of a voucher, and I haven't found a way to generate a username and password when generating vouchers. Has someone managed to get this working? At the moment vouchers are only for time-based login. Any clue, l...
2004 Oct 28
1
[Bug 946] scp slow file transfers, even with -1 -c blowfish
...s on a 100Mb/s network and a 10Mb/s network. Using ssh version 1 and blowfish I get 500kB/s-1MB/s, again on both networks. I am running scp on Cygwin on a 1GHz Centrino x86 machine (Windows XP Pro SP2). The process monitor reports the CPU on the sending machine is around 50%. In transferring a 200MB movie (avi) I see the following stats:

$ scp 200MB-movie.avi foo at bar:/a/b
200MB-movie.avi   2%   4848KB  480.8KB/s  06:00

I see this with any destination machine on two different LANs: a Suse64 AMD64 with no load and a RAID5 array, a P133 FreeBSD box with no load and a single IDE drive, and...
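As a back-of-the-envelope check (the numbers are taken from the post above, not from the bug analysis), 480.8KB/s is only a few percent of what a 100Mb/s link can carry:

```python
# Sanity-check the reported scp throughput against the link capacity.
file_mb = 200
rate_kb_s = 480.8            # observed scp throughput from the post
link_mb_s = 100 / 8          # 100Mb/s link is ~12.5MB/s of raw bandwidth

observed_s = file_mb * 1024 / rate_kb_s   # ~426 s (~7 minutes)
ideal_s = file_mb / link_mb_s             # 16 s at full link rate

utilization = (rate_kb_s / 1024) / link_mb_s
print(f"observed: ~{observed_s/60:.0f} min, ideal: ~{ideal_s:.0f} s, "
      f"link utilization: {utilization:.1%}")
```

So the transfer is running at under 4% of the raw link rate, which is why the reporter filed it as a bug rather than a tuning issue.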
2009 Apr 01
1
problem with 'loading file size'
Hi all, I am working on a project which needs to load *.csv files of more than 200MB in size. Is it possible to load a 200MB file into R and do subsetting as per requirement? I am able to load a maximum of 90MB. Is there any way to increase the memory limits, and how much maximum memory can we extend to? Please, someone help me get this working. Thanks in advance.
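The thread is about R, where read.csv's colClasses argument and reading in pieces (nrows/skip) are the usual levers; the streaming idea itself is language-independent, sketched here in Python with the stdlib csv module and hypothetical file names:

```python
import csv

def subset_large_csv(path, keep_row, out_path):
    """Stream a large CSV row by row, writing only rows for which
    keep_row(row_dict) is true -- constant memory regardless of file size."""
    with open(path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        kept = 0
        for row in reader:
            if keep_row(row):
                writer.writerow(row)
                kept += 1
    return kept
```

Filtering down to the subset you actually need before loading it whole is often enough to get a 200MB file under a 512MB memory ceiling.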
2017 Oct 27
5
Poor gluster performance on large files.
...of gluster. The current XFS/samba array is used for video editing, and 300-400MB/s for at least 4 clients is the minimum (currently a single Windows client gets at least 700/700 over samba, peaking to 950 at times using the Blackmagic speed test). Gluster has been getting me as low as 200MB/s when the server can do well over 1000MB/s. I have really been counting on / touting Gluster as being the way of the future for us. However I can't justify cutting our performance to a mere 13% of non-gluster speeds. I've started to reach a give-up point and really need some help/hope ot...
2010 Feb 02
6
Smallest possible Asterisk VM
How small can an Asterisk system be, in terms of disk space utilized? I am looking for just asterisk, with mysql, postgresql, or sqlite, with PHP and Python. After finishing the build and removing the tools, how small can the whole system be? 100Mb, 200Mb? Can packages be used to build the whole system, like using debs and rpms alone? /vfclists
2006 Aug 12
2
problem in reading large files
I was trying to read a large .csv file (80 columns, 400,000 rows, about 200MB in size). I used scan(), R 2.3.1 on Windows XP. My computer is an AMD 2000+ and has 512MB RAM. It sometimes freezes my PC, sometimes just shuts down R quietly. Is there a way (option, function) to better handle large files? Seemingly SAS can deal with it with no problem, but I just persuaded my professor t...
2012 Oct 03
1
no callback on VIR_DOMAIN_EVENT_ID_BALLOON_CHANGE in 0.10.2
...///system setmem <dom> <mem in KB> command and/or subsequent balloon movement inside the guest? Or is my old version of qemu-kvm not passing this event back to libvirt? The latter doesn't seem correct, because I'm getting seemingly valid values from dommemstat - i.e., if I setmem 200MB when the guest OS and applications are consuming 300MB, it reports near 300MB (and the guest no longer responds) instead of just parroting the 200MB value I fed it. Thanks! Ben Clay rbclay at ncsu.edu
2006 Jan 08
4
Finding memory leaks?
Where are the memory leaks and what is the way to fix them? I'm working on a game that was fairly stable in terms of memory consumption, staying at around 200MB. Recently it has gone crazy, and unless I restart it, it goes straight up to 350-400+MB after 30 minutes to 1 hour. I am using Apache 1.3+fcgi in production mode. Also, the dispatch.fcgi processes take a really high toll (3-5%) on the CPU (2.4GHz Xeon with 2 procs). Since it's hosted (still) on...
2012 May 15
3
Missing memory when using maxmem
...ory to set the initial allocation). However, I appear to lose a potentially quite large chunk of memory as I use a larger "maxmem" option. For instance, with memory=512, as I use a higher value for maxmem, I lose more memory according to "free" in the guest. I lose about 200MB (nearly half the memory!) if I have a maxmem of 8GB, and even more when it is higher. I have looked around and couldn't find any mention of maxmem taking a chunk out of the memory by design. Would anybody be able to shed any light on this? I have tested this with both Xen 4.0.1 and 4...
2007 Aug 13
3
imap memory footprint rather large
Dear list, I am experimenting with a new mail handling setup and it involves a single IMAP folder with just under 70'000 messages. When OfflineIMAP connects to the server, the imap process starts to eat up a lot of memory:

  PID  USER     PR  NI  VIRT  RES   SHR  S  %CPU  %MEM  TIME+    COMMAND
15607  madduck  35  19  283m  244m  239m D  16.9  49.3  0:09.96  imap

On the contrary, when
2006 Nov 15
3
AutoCad and ArcView
...nd it causes a hardware issue. (We have burned through two 320GB drives in 2 months.) It is possible, of course, that the controller card is failing, but to take some of the stress off the machine: does anyone have a tweaked smb.conf that has NO issues with serving up files of this size (80-200MB) all day? Thanks in advance -- James C. McLaughlin Montrose County IT Office: (970) 252-4598 Cell: (970) 209-8329
2012 Dec 12
1
captive Portal Pfsense + FreeRadius + MySQL DBMS
Dear friends, greetings. I have a question for you; I am sure someone can help. The pfSense captive portal is up and running. Time-countdown vouchers are working without issue, such as 30m, 45m, 1h and so on. However, I'd like to set up a download quota of 200MB per voucher, but then you need to log in with a username and password instead of a voucher, and I haven't found a way to generate a username and password when generating vouchers. Has someone managed to get this working? At the moment vouchers are only for time-based login. Any clue, l...
2006 Feb 02
4
Newbie - samba 3 as PDC
Dear experts, I am very new to this. I don't understand why my Windows 2000 Professional PC fails when trying to register as a member of the domain "LINUX", but I am able to log in using a Windows 9x client. Below is my /etc/samba/smb.conf; I need advice. Thanks a lot in advance.

[global]
   workgroup = LINUX
   server string = Samba Server
   printcap name = /etc/printcap
   load printers = yes
2009 Nov 01
2
CentOS Mirrored On RapidShare [Links Here]
...are can deliver up to 100Mbps download speeds. You don't have to reply to this post; my plan is to simply continue to mirror CentOS and continually post my mirrors here to keep the archive updated. Anyway, enough of my ramblings, let's get down to business ;) CentOS 5.2 i386 CD Install (these are 200MB ZIP parts):

http://rapidshare.com/files/255564308/CentOS-5.2-i386-bin-1of6.zip.001
http://rapidshare.com/files/255581907/CentOS-5.2-i386-bin-1of6.zip.002
http://rapidshare.com/files/255596630/CentOS-5.2-i386-bin-1of6.zip.003
http://rapidshare.com/files/255603689/CentOS-5.2-i386-bin-2of6.zip.001
htt...
2017 Oct 30
0
Poor gluster performance on large files.
...rrent XFS/samba array is used for video editing and > 300-400MB/s for at least 4 clients is minimum (currently a single windows > client gets at least 700/700 for a single client over samba, peaking to 950 > at times using blackmagic speed test). Gluster has been getting me as low > as 200MB/s when the server can do well over 1000MB/s. I have really been > counting on / touting Gluster as being the way of the future for us > . However I can't justify cutting our performance to a mere 13% of > non-gluster speeds. I've started to reach a give up point and really >...
2011 Sep 05
1
Quota calculation
...ick1: ylal3020:/soft/venus
Brick2: ylal3030:/soft/venus
Brick3: yval1000:/soft/venus
Brick4: yval1010:/soft/venus
Options Reconfigured:
nfs.port: 2049
performance.cache-refresh-timeout: 60
performance.cache-size: 1GB
network.ping-timeout: 10
features.quota: on
features.limit-usage: /test:100MB,/psa:200MB,/:7GB,/soft:5GB
features.quota-timeout: 120

Size of each folder from the mount point:
/test : 4.1MB
/psa : 160MB
/soft : 1.2GB
Total size 1.4GB (if you want the complete output of du, don't hesitate)

gluster volume quota venus list
path    limit_set    size
-------------...
2017 Sep 06
2
3.10.5 vs 3.12.0 huge performance loss
Hi, just did some ingestion tests on a 40-node 16+4 EC 19PB single volume. 100 clients are writing, each with 5 threads, 500 threads total. With 3.10.5 each server has 800MB/s network traffic; cluster total is 32GB/s. With 3.12.0 each server has 200MB/s network traffic; cluster total is 8GB/s. I did not change any volume options in either config. Any thoughts? Serkan
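The quoted cluster totals are just the per-server rate times 40 servers, which makes the regression easy to state as a single factor:

```python
# Check the aggregate throughput figures quoted in the post.
servers = 40
for version, per_server_mb in [("3.10.5", 800), ("3.12.0", 200)]:
    total_gb = servers * per_server_mb / 1000   # MB/s -> GB/s (decimal units)
    print(f"{version}: {per_server_mb}MB/s x {servers} = {total_gb:.0f}GB/s")
# 32GB/s vs 8GB/s: a 4x throughput regression between the two releases
```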
2010 Aug 01
1
Are enormous extents harmful?
...hing. From the point of view of efficient reading, large extents are good, because they minimize seeks in sequential reads. But there will be diminishing returns when the extent gets bigger than the size of a physical disk cylinder. For instance, modern disks have a data transfer rate of (about) 200MB/s, so adding one extra seek (about 8ms) in the middle of a 200MB extent can't possibly slow things down by more than 1%. (And that's the worst-possible case.) But large extents (I think) also have costs. For instance, if you are writing a byte into the middle of an extent, do...
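The under-1% worst-case claim is easy to verify: at 200MB/s a 200MB extent streams in one second, so an 8ms seek in the middle adds less than a hundredth of the transfer time:

```python
# Worked form of the seek-overhead argument from the post.
transfer_rate_mb_s = 200
extent_mb = 200
seek_s = 0.008                               # one extra seek, ~8ms

stream_s = extent_mb / transfer_rate_mb_s    # 1.0 s to stream the extent
overhead = seek_s / stream_s                 # fraction added by that seek
print(f"one extra seek adds {overhead:.1%}")  # 0.8%
```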