search for: 30mb

Displaying 20 results from an estimated 171 matches for "30mb".

2005 Feb 25
2
samba 3 performance
Yes, I get more than 30MB/s performance. The benchmark I use (NetBench) is essentially CPU bound, such that a faster processor = faster performance. With a very fast hardware config (dual 3.2GHz processors), I've been able to hit around 100MB/s. Changing the RAM or other attributes does not buy me much; it seems that pr...
2018 Apr 16
2
Lmtp issues on dovecot 2.3.x with big messages
> >> Messages are being sent to dovecot LMTP by postfix. If I change this email > >> to another server with dovecot 2.2.x the same message is delivered > >> immediately. > > Confirmed. Starts to fail here around 30Mb. Tested with Swaks. > > > > Working on a fix... > > Problem found. It is an explicit limit of 40Mb (for the 30Mb I saw in my > tests, there was also a base64 encoding I forgot about). > > Will fix both the unhelpful error and the fact that there should be no > limit...
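The arithmetic behind the two figures is base64's overhead: every 3 bytes of payload become 4 ASCII characters, so a message body grows by roughly a third in transit. For the case described:

    30MB of payload x 4/3 ~= 40MB on the wire

which is why messages of about 30MB were exactly what tripped the explicit 40Mb limit.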
2008 Apr 15
4
heavy graphs
Dear R community, I am creating large graphs with hundreds of thousands of datapoints. My usual way for output was pdf, but now I am getting file sizes of >30Mb that do not open well (or at all) in Adobe. Is there a way to reduce the resolution or get rid of overlapping datapoints? Any other idea is also warmly welcome! Thank you and wishing you a good day! Georg Ehret, Johns Hopkins, Baltimore - US
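One workaround often suggested for overplotted scatterplots (a sketch with stand-in data, not a reply from this thread) is to render to a raster device such as png(), whose file size does not grow with the number of points, or to thin the data before plotting:

    x <- runif(5e5); y <- runif(5e5)               # stand-in data, 500k points
    png("plot.png", width = 2000, height = 1600, res = 200)
    plot(x, y, pch = ".")                          # raster output: constant file size
    dev.off()

    idx <- sample(seq_along(x), length(x) %/% 10)  # or keep a random 10% of points
    plot(x[idx], y[idx])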
2018 May 23
1
Lmtp issues on dovecot 2.3.x with big messages
...tz: >>>>> Messages are being sent to dovecot LMTP by postfix. If I change >>>>> this email >>>>> to another server with dovecot 2.2.x the same message is delivered >>>>> immediately. >>>> Confirmed. Starts to fail here around 30Mb. Tested with Swaks. >>>> >>>> Working on a fix... >>> Problem found. It is an explicit limit of 40Mb (for the 30Mb I saw >>> in my >>> tests, there was also a base64 encoding I forgot about). >>> >>> Will fix both the unhelpful e...
2006 Aug 31
3
debian unstable & ext3
...U/Linux on a laptop with ext3 on /. Some time ago things started getting weird in the following way: I do a fairly normal hack, ^Z, make, test loop when developing, and it seems that vim is calling fsync or sync and that is then flushing everything to disk. My tests create maybe 10 dozen files, ~30MB in total, and for some reason this is taking 4 seconds to flush. I'm not sure if ext3, the kernel, or vim is the problem. I already googled and set "set swapsync=sync" and "set nofsync" in my .exrc, but that hasn't helped. Has anyone else seen this, and does anyone have a workaround? I'm about to switch...
2017 Nov 06
2
[PATCH] v2v: rework free space check in guest mountpoints
...oint. *)
+  let has_boot = List.exists (fun { mp_path } -> mp_path = "/boot") mpstats in
+
+  let needed_bytes_for_mp = function
+    | "/boot"
+    | "/" when not has_boot ->
+      (* We usually regenerate the initramfs, which has a
+       * typical size of 20-30MB.  Hence:
+       *)
+      50_000_000L
+    | "/" ->
+      (* We may install some packages, and they would usually go
+       * on the root filesystem.
+       *)
+      20_000_000L
+    | _ ->
+      (* For everything else, just make sure there is some free space. *)
+      10_000...
2002 Oct 10
1
Ext-3 journal file growth.
I am amazed that after running Ext3 on our RH 7.2 fileserver for 3 months the .journal file has grown to over 30Mb and has exhausted all the available space on /, with all of the obvious problems this brings. Is Ext3 expected to have unlimited root filesystem space to grow its .journal file? If not, how on earth is one expected to manage it? Ken Patching, with an unmanageable system thanks to Ext3.
2001 Nov 16
2
Block Size
What is the default block size? I have a few files of 30+MB, and data is just added to the end of them. It seems like it takes longer to sync them than it did to send them initially. Should I change the block size or something else? I am running: rsync -z -e ssh *.* user@linuxbox:data/ I need to use ssh because I am going over the internet and sending "company data".
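For background, rsync derives its delta-transfer block size from the file size unless it is forced with -B/--block-size. A hedged variant of the poster's own command (the 32KB value is only an illustration, not a tested recommendation):

    rsync -z -B 32768 -e ssh *.* user@linuxbox:data/

For append-only files a larger block size means fewer checksums to exchange, at the cost of resending more data around each changed region.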
2017 Oct 16
3
Trash plugin unexpected results
...using accounts with 30 MB quotas, but I > feel like the trash plugin should account for the whole quota. > > Any help is appreciated. > > On 10/10/2017 8:20 AM, Stephan Herker wrote: >> >> I have the trash plugin enabled and, testing it out, I had an account >> with a 30MB quota. In the account's trash it had an email with a >> large attachment. I sent the same email again to the account >> expecting the trash plugin to purge the message from trash to make >> space for the new message in the inbox. However I got an error >> saying it couldn't...
2016 Oct 16
3
compile c++ code in an R package without -g
...NSTALL creates C/C++ flags as follows: -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -g However, my package is fairly large. With debug info compiled into the library, the generated .so file is over 200MB. Without debug info, it's about 30MB. I would like debug info to be disabled by default, but I don't see any option in R CMD INSTALL that can disable "-g". Could anyone tell me how to disable it? Many thanks, Da
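The usual lever here (a sketch; it relies on R reading a personal Makevars file, whose flags override those from R's own Makeconf) is ~/.R/Makevars:

    # ~/.R/Makevars -- keep optimisation, drop -g
    CFLAGS = -O2
    CXXFLAGS = -O2

R CMD INSTALL then compiles the package without debug info, which should bring the .so back down toward the 30MB figure.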
2004 Feb 29
6
Samba Gigabit very very slow?
Hi! I'm having trouble with the speed on my Samba server. I just upgraded to gigabit (Realtek 8169 NIC) at home. When I copy stuff from the Samba server I get around 5-6MB/sec, and if I use FTP to access the same file on the same server I get almost 30MB/sec. Does anyone have a clue what causes this problem? When I used my old 3Com 905c for the local net I got normal 100Mbit speed (around 10-12MB/sec). My system: Debian Unstable, Kernel 2.6.3, Athlon XP 2200+, 512MB DDR, 2x 120GB HDD, Realtek 8169 Gigabit NIC, 3Com 905c 10/100Mbit. Best regards Jonas
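A tuning knob commonly suggested for Samba in that era (a hedged sketch, not a confirmed fix for this particular box) was the socket options directive in smb.conf, since TCP buffer sizes that were adequate at 100Mbit can starve a gigabit link:

    [global]
        socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536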
2010 Aug 24
2
Parsing a XML file
I have a 30MB XML file from which I need to read the data. I tried this: library(XML) doc <- xmlDoc("Malaria_Grave.xml") and R answers like this: *** caught segfault *** address 0x5, cause 'memory not mapped' Traceback: 1: .Call("RS_XML_createDocFromNode", node, PACKAGE = "XML") 2: x...
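The segfault is consistent with xmlDoc() being handed a file name: in the XML package, xmlDoc() builds a document from an already-parsed internal node, not from a path. A minimal sketch of reading the file instead (same package, the poster's file name):

    library(XML)
    doc  <- xmlParse("Malaria_Grave.xml")  # parses from a file path
    root <- xmlRoot(doc)                   # top-level node for further traversal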
2009 Jan 11
4
Large uploads with attachment_fu
Hi, I am trying to upload 30MB files with attachment_fu; these seem to just hang (works fine for small image files). I am using mongrel cluster and nginx... Does anyone have any advice? Thanks Richard
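With nginx in front of the mongrels, one likely culprit (an assumption about this setup, not a confirmed diagnosis) is nginx's client_max_body_size, which defaults to 1m, so small images pass while 30MB uploads are cut off:

    # nginx.conf -- http, server, or location context
    client_max_body_size 50m;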
2005 Aug 09
1
Mail disappearing
My users are complaining that their mail is disappearing out of folders, or if they go into a folder, the mail will be there one minute and the next it isn't. Anybody seen this before... and if so, is there a fix? I'm running dovecot 99.11 Brent
2007 Jun 21
1
A HTB problem
My hardware is a Linksys AP with a 300MHz MIPS CPU and Linux kernel 2.4.20. The traffic flows from two LAN switch ports to the WAN port, and it is generated at a rate of 80Mbit on each, so the total traffic to the WAN port is 160Mbit. The shaping works well in that the traffic to the WAN port is about 50/30Mbit according to the configuration. But the priority seems strange when the root rate is 50Mbit. When the root ceil is 50Mb and the ceil for classes 12 and 13 is 50Mb, the actual throughput for classes 12 and 13 is 32Mb and 16Mb. But in theory, class 12 can almost occupy all of the 50Mb throughput, for it has a hi...
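For reference, a minimal HTB hierarchy of the shape being described (a sketch; the device name and exact rates are assumptions, and prio 0 classes are offered spare bandwidth before prio 1 classes):

    tc qdisc add dev eth1 root handle 1: htb default 13
    tc class add dev eth1 parent 1:  classid 1:1  htb rate 50mbit ceil 50mbit
    tc class add dev eth1 parent 1:1 classid 1:12 htb rate 25mbit ceil 50mbit prio 0
    tc class add dev eth1 parent 1:1 classid 1:13 htb rate 25mbit ceil 50mbit prio 1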
2008 Mar 04
1
OCFS2 strange freezes
Good day, everyone. I have a SAN server built with the Openfiler OS, with iSCSI mode turned on. I have two nodes, which connect to that server via iSCSI, using one of two active iSCSI partitions. I've installed ocfs2 1.3.3 with kernel 2.6.23.1, configured it, made an ocfs2 partition, and was successful in mounting it on both nodes. Everything works just fine, I can upload file from one node and
2009 May 13
8
read multiple large files into one dataframe
...for reading many text files of the same format into one table/dataframe? I have around 90 files that contain continuous data over 3 months, but which are split into individual days' data, and I need the whole 3 months in one file for analysis. Each day's file contains a large amount of data (approx 30MB each), so I need a memory-efficient method to merge all of the files into the one dataframe object. From what I have read, I will probably want to avoid using for loops etc.? All files are in the same directory, none have a header row, and each contains around 180,000 rows and the same 25 columns...
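A memory-conscious sketch of the merge (the directory name is assumed; each file is read once and everything is bound in a single step rather than grown inside a loop):

    files  <- list.files("data", full.names = TRUE)   # the ~90 daily files
    pieces <- lapply(files, read.table, header = FALSE)
    big    <- do.call(rbind, pieces)                  # one bind, not 90 copies

At roughly 90 x 30MB this is on the order of 2.7GB of raw text, so it only fits if R has several GB of memory to work with; otherwise the days have to be processed in chunks.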
2003 Oct 17
3
Rsync download traffic
Hello, I started monitoring LAN traffic with RRDTool on a Linux box the other day that runs rsync, and I've found what I would consider a strange traffic pattern. This Linux box rsyncs about 2GB of data to a local Samba share that is connected to a Windows 2003 server. Based on the literal data stat, roughly 20% is changed nightly and uploaded. However, the strange thing is that the traffic
2008 Sep 22
7
performance of pv drivers for windows
...re or less at the same speed, for both network and disk performance. The disk performance was about 10MB/s reading and writing sequentially, and about 1-1.5MB/s for reading and writing randomly. The network speed was about 10-12MB/s, via a gigabit line. The XenSource drivers managed at least about 30MB/s reading and writing sequentially, but for reading and writing randomly it was also only a lousy 1.5MB/s. Via network, over the gigabit line, with the XenSource drivers, the speed was about 78MB/s. The Windows system was XP SP2. hdparm on the dom0 gives about 60MB/s. The network test was an...
2018 Apr 16
0
Lmtp issues on dovecot 2.3.x with big messages
...6/04/2018 om 19:57 schreef Michael Tratz: >>>> Messages are being sent to dovecot LMTP by postfix. If I change this email >>>> to another server with dovecot 2.2.x the same message are delivered >>>> immediately. >>> Confirmed. Starts to fail here around 30Mb. Tested with Swaks. >>> >>> Working on a fix... >> Problem found. It is an explicit limit of 40Mb (for the 30Mb I saw in my >> tests, there was also a base64 encoding I forgot about). >> >> Will fix both the unhelpful error and the fact that there should be...