search for: 48gb

Displaying 20 results from an estimated 63 matches for "48gb".

2016 Jan 27
2
Maildir to mdbox conversion size issue
...I do it piece by piece, so I change mail_location for the user (in sql), then I use doveadm sync -u user path_to_maildir and everything works mostly fine. But now I found that when I migrate a quite large Maildir (about 24G of mail) to mdbox, after the sync the final mdbox is twice as big as the source Maildir (48GB). I tried to find duplicates, but didn't find any. Is there any reason why mdbox consumes so much space? Or am I doing something wrong? Does somebody have the same experience? Thanks Petr
2007 Aug 23
4
Monthly traffic limit
Hi Shorewall Users :) I have found the Shorewall firewall and it seems interesting. I need to set up a configuration for my network users because I only have 50GB of traffic per month. I want to know if Shorewall can enforce a 48GB-per-month limit where, every day from 1:30 PM to 8:30 AM (happy hour), the traffic doesn't count. Can Shorewall do that? -- Regards, Rui Oliveira 351 - Portugal
2017 Nov 04
4
mariadb server memory usage
Hi, is this ok for a database server, or do I need to turn the memory allowance down? The machine has 48GB and mariadb is allowed about 40. The machine is a dedicated database server. Mysql seems to go up to what top says is virtually allocated under some circumstances; I don't know what mariadb does. I don't want anything to get killed because memory runs out. Swap should prevent that anyway, but perha...
2010 Feb 06
2
question about bigmemory: releasing RAM from a big.matrix that isn't used anymore
Hi all, I'm on a Linux server with 48Gb RAM. I did the following: x <- big.matrix(nrow=20000, ncol=500000, type='short', init=0, dimnames=list(1:20000,1:500000)) # Gets around the 2^31 issue - yeah! In Unix, when I run the "top" command, I see R is taking up about 18Gb of RAM, even though the object x is 0 bytes in R. That...
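The usual remedy, in a minimal sketch (assuming the bigmemory package; this is not a quote from the thread): a big.matrix keeps its data outside R's heap, which is why object.size() reports x as nearly empty, so the RAM only comes back once the last R reference is gone and the garbage collector runs the object's finalizer.

    library(bigmemory)
    x <- big.matrix(nrow = 20000, ncol = 500000, type = 'short', init = 0)
    object.size(x)   # tiny: x is just a handle to memory outside R's heap
    rm(x)            # drop the last reference to the big.matrix
    gc()             # run finalizers so the ~18Gb external block is freed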
2017 Mar 20
4
Need help
...we get the >> message ST:3P7Y9Y1 >> Do you know what the problem might be. > I'm guessing you mean the message on the front panel, which is the service > tag number. Indeed, that's the service tag for a Dell PowerEdge T620, shipped in September 2013, with dual Xeon E5-X2667, 48GB RAM, and 10 3TB drives, originally running RHEL 6.5. Indeed, you need to connect a monitor to it to determine what stage of the boot is failing and why. -- john r pierce, recycling bits in santa cruz
2010 Nov 12
4
remus support under debian squeeze
...only one who can get my questions answered. I know how to set up and run a Xen infrastructure; I have several servers in production. The new thing is that I started to use Debian Squeeze because of its native support of Remus (kernel support), as everyone says: hardware ******* HP DL380, 48GB RAM, 8 TB disk, 2 processors software ****** Debian Squeeze amd64, latest updates, Xen 4.0.1 via aptitude, LVM guests ***** Windows 2008R2 x64, Windows 7 x64, Windows XP x86, Debian Lenny. I have an identical second hardware box, same RAM, same disk, same everything. The approach is now to have a...
2017 Mar 20
3
Need help
...be. >>>> >>> I'm guessing you mean the message on the front panel, which is the >>> service >>> tag number. >>> >> >> Indeed, that's the service tag for a Dell PowerEdge T620, shipped in >> September 2013, with dual Xeon E5-X2667, 48GB RAM, and 10 3TB drives, >> originally running RHEL 6.5 >> >> Indeed, you need to connect a monitor to it to determine what stage of >> the boot is failing and why. >> >> >> -- >> john r pierce, recycling bits in santa cruz >> >> >> _...
2010 Apr 28
2
Size limitations for model.matrix?
Hello, I am running: R version 2.10.0 (2009-10-26) Copyright (C) 2009 The R Foundation for Statistical Computing ISBN 3-900051-07-0 on a RedHat Linux box with 48Gb of memory. I am trying to create a model.matrix for a big model on a moderately large data set. It seems there is a size limitation to model.matrix.

    > dim(coll.train)
    [1] 677236 128
    > coll.1st.model.mat <- model.matrix(coll.1st.formula, data = coll.train)
    > dim(coll.1st.model...
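For scale, some back-of-envelope R arithmetic (an illustration, not from the thread; the expanded column count is a made-up example): model.matrix() builds a dense double matrix, and in R 2.10 a single matrix is also capped at 2^31 - 1 elements.

    n <- 677236               # rows in coll.train
    p <- 3000                 # hypothetical width after factor expansion
    n * p * 8 / 2^30          # ~15 GB of doubles for the dense design matrix
    floor((2^31 - 1) / n)     # 3170 columns is the widest legal matrix here
    # Matrix::sparse.model.matrix() is a common workaround for factor-heavy formulas.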
2010 Mar 16
1
Correlation coefficient of large data sets
...ata into R, remember I am a newbie so this is big :) I could find commands that would calculate the correlation between 2 variables, but not for a set of variables. How do I do this? Am I going to be able to do this with R? I have the 64-bit version installed and have access to an 8-core machine with 48GB of memory. *Vincent Davis 720-301-3003 * vincent@vincentdavis.net my blog <http://vincentdavis.net> | LinkedIn<http://www.linkedin.com/in/vincentdavis>
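In base R this is direct: cor() accepts a whole matrix or data frame, not just two vectors, and returns the matrix of pairwise correlations. A small sketch with made-up data:

    set.seed(1)
    d <- data.frame(a = rnorm(100), b = rnorm(100), c = rnorm(100))
    cor(d)                                  # 3 x 3 matrix of pairwise correlations
    cor(d, use = "pairwise.complete.obs")   # same, tolerating missing values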
2017 Nov 08
2
mariadb server memory usage
...own the performance. More RAM equals more performance. > > This link helped me understand memory usage on Linux. > > https://www.linuxatemyram.com/ > > Basically you need to worry about > > free memory is close to 0 > used memory is close to total Almost 3GB available on a 48GB machine is very close to "free memory is 0" and "used memory is close to total", which is why I'm wondering what I can get away with :) This free memory can go away in less than a second. Maybe it never will, so I figured why not use as much as possible --- just not too much,...
2023 May 08
1
Huge differences in Ram Consumption with different versions of R on the same scripts
Hello R Help, I have some R vignettes that run fpp2 time series regressions. I have run these for a long time using R on a 12-core computer system: under Linux, I ran one vignette per core, so that all 12 could work concurrently. With 48GB of RAM, the RAM never filled up. I ran these regressions for hours, one data set right after the other on each core. Recently, I switched to Oracle Linux 8 and R 4.2. Now, with the same scripts and the same data, the RAM fills up and R reserves 4.2GB per instance in some cases. This results in all...
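One way to see what each instance actually holds (a sketch, not from the thread): gc() reports R's own heap usage, and resetting its counters before a run captures that run's peak.

    gc(reset = TRUE)    # zero the "max used" counters before the run
    # ... run one vignette's fpp2 regressions here ...
    gc()                # the "max used" column shows this instance's peak heap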
2015 Mar 12
3
Processor usage of qemu process.
I have been using libvirt for a while now with some Linux guests installed, and everything has been working great. I've got a nice new (used) HP virtual host with 12 x dual-core CPUs and 48Gb of memory. My Windows servers are getting old, so I found it was time to take the next step and also virtualise my Windows systems. Now I've got two Windows guests on my new host: - A Windows 8.1 which runs an Autodesk Job Processor - A Windows Server 2012 R2 which runs Pervasive SQL, Autodesk Vaul...
2012 Apr 03
0
No subject
[root@psanaoss214 /]# ls -al core*
-rw------- 1 root root 4362727424 Jun 8 00:58 core.13483
-rw------- 1 root root 4624773120 Jun 8 03:21 core.8792
On 06/08/2012 04:34 PM, Anand Avati wrote: Is it possible the system was running low on memory? I see you have 48GB, but memory registration failure typically would be because the system limit on the number of pinnable pages in RAM was hit. Can you tell us the size of your core dump files after the crash? Avati On Fri, Jun 8, 2012 at 4:22 PM, Ling Ho < ling at slac.stanford.edu > wrote: Hello, I...
2012 Sep 16
2
Question about R performance on UNIX/LINUX with different memory/swap configurations
...(using a standard deviation test as well as a GBM model) seems to indicate that the Windows desktop (with a small/slow swap footprint) as well as a Solaris 11 X86 server with swap set to half of physical memory perform quicker for these scenarios than a physical server with 16 CPUs and 48GB of memory. I found a few articles searching the group, but they seem to center on Windows performance considerations (for example, the post entitled "Re: [R] Memory limit for Windows 64bit build of R" from a few months ago.) I do plan on running through these different configurations on my...
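A minimal, assumed shape for such a test in R (the thread names a standard-deviation benchmark but shows no code):

    x <- rnorm(1e8)         # ~800 MB of doubles, enough to press on RAM/swap
    system.time(sd(x))      # compare the elapsed times across configurations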
2011 Oct 17
1
Need help with optimizing GlusterFS for Apache
Our webserver is configured as such: the actual website files (php, html, css and so on) are on a dedicated non-glusterfs ext4 partition. However, the website accesses videos and especially image files on a gluster-mounted directory. The write performance for our backend gluster storage is not that important, since it only comes into play when someone uploads a video or image. However, the files
2014 Dec 01
10
best file system ?
Hi, I'm going to set up a new storage for our email users (about 10k). It's a network attached storage (Coraid). In your opinion, what is the best file system for mail server (pop3/imap/webmail) purpose? Thank you
2019 Apr 04
2
[RFC] NEC SX-Aurora VE backend
Hello, we’d like to propose the integration of a new backend into LLVM: the NEC SX-Aurora TSUBASA Vector Engine (VE). We hope to get some feedback here and at EuroLLVM about the path and proper procedure for merging. The SX-Aurora VE is a vector CPU on a PCI-E accelerator card. It has 48GB of memory from six HBM2 stacks, accessible with 1.2TB/s of bandwidth, and 8 cores, each with vector and scalar units. The cores share a last-level cache (LLC); each core has 64 scalar registers and a normal scalar unit with two levels of caches, as well as 64 long vector registers (256 x 64 bits), 16 vector mask re...
2010 Apr 22
1
Odd behavior
...o rsync a lot of files, in a series of about 60 rsyncs, from one server to another. There are about 160 million files. I'm running 3 rsyncs concurrently to increase the speed, and as each one finishes, another starts, until all 60 are done. The machine I'm initiating the rsyncs on has 48GB RAM. This is CentOS linux 5.4, kernel revision 2.6.18-164.15.1.el5. Rsync version 3.0.5 (on both sides). I was able to rsync all the data over to the new machine. But, because there was so much data, I need to run the rsyncs again to catch data that changed during the last rsync run. It so...