Displaying 20 results from an estimated 37 matches for "3.5gb".
2007 Jun 20
5
How To Free memory
Hi All,
My CentOS 5.0 is running on an x86 machine with 4GB RAM. It runs as a webserver
and there is a small Java applet application.
When the system is freshly rebooted, there is under 1GB of used memory; as
time goes on, used memory increases to over 3.5GB.
Is there a way to free up memory, like those programs that do this under
Windows?
Thank you.
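Most of that "used" memory is likely the kernel's page cache, which is reclaimed automatically under memory pressure; it can also be dropped by hand. A minimal sketch, assuming root and a 2.6.16+ kernel:
```
# flush dirty pages to disk first
sync
# drop the clean page cache plus dentries and inodes
echo 3 > /proc/sys/vm/drop_caches
```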
2014 Nov 03
3
strange disk space calculation ext4 df and du
Hi,
in one server I have an SSD RAID 1, 219GB in size.
df shows 9.4GB free, 198GB used.
If I do "du -sch * | sort -h -r" on /, I see only close to 3.5GB used ....
Any hints on what's eating up the space?
CentOS 6.6, fs = ext4.
regards, Götz
--
Götz Reinicke
IT-Koordinator
Tel. +49 7141 969 82 420
E-Mail goetz.reinicke at filmakademie.de
Filmakademie Baden-Württemberg GmbH
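The usual suspects for a df/du gap this large are files deleted while a process still holds them open, data hidden under a mount point, and the ext4 reserved-block pool. A hedged check (the device name is a placeholder):
```
# deleted-but-open files keep their blocks until the process exits
lsof +L1
# ext4 reserves 5% of blocks for root by default
tune2fs -l /dev/md0 | grep -i 'reserved block'
```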
2003 Jan 29
5
Problems making use of 2K PDC
I'm having problems with samba using the 2K PDC.
I've gotten it to successfully join the 2K PDC via smbpasswd. Winbindd
is running and I can ping it. I've tried googling, but was unsuccessful
at finding something useful. The Windows 2K event viewer shows:
The session setup from the computer DATASRV failed to authenticate. The
name of the account referenced in the security database is
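That event usually points at a broken machine-account trust. A quick way to test it from the Samba side (a sketch, assuming winbindd is already configured):
```
# verify the machine account's shared secret against the DC
wbinfo -t
# confirm the domain join is still valid
net rpc testjoin
```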
2016 Aug 01
2
Why does an AWS instance always lose around 500MB of memory
Hi,
I launched an AWS instance `t2.medium` (using the CentOS 7 image "ami-7abd0209",
product code: https://aws.amazon.com/marketplace/pp/B00O7WM7QW), which is
supposed to have 4GB of memory in total, but it turns out there is only "3.5GB".
```
$ free -h
              total        used        free      shared  buff/cache   available
Mem:           3.5G        441M        1.4G         16M
```
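MemTotal excludes whatever the kernel and firmware reserve at boot (including any crashkernel reservation), so a gap of a few hundred MB is normal. A sketch for seeing where it went:
```
# RAM the kernel saw at boot vs. what it kept for itself
dmesg | grep -i 'memory:'
# check for a crashkernel reservation on the kernel command line
grep -o 'crashkernel=[^ ]*' /proc/cmdline
```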
2012 Dec 18
1
Infiniband performance issues answered?
In IRC today, someone who was hitting that same IB performance ceiling
that occasionally gets reported had this to say
[11:50] <nissim> first, I ran fedora which is not supported by Mellanox
OFED distro
[11:50] <nissim> so I moved to CentOS 6.3
[11:51] <nissim> next I removed all distribution related infiniband rpms
and built the latest OFED package
[11:52] <nissim>
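For anyone reproducing those steps, the Mellanox OFED bundle ships its own installer script; a rough sketch (archive name and version are placeholders):
```
# remove the distribution's InfiniBand stack first
yum groupremove "Infiniband Support"
# then unpack and install the vendor OFED distribution
tar xzf MLNX_OFED_LINUX-x.y.tgz && cd MLNX_OFED_LINUX-x.y
./mlnxofedinstall
```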
2005 Jan 28
4
Error: cannot allocate vector of size... but with a twist
Hi,
I have a memory problem, one which I've seen pop up in the list a few
times, but which seems to be a little different. It is the "Error: cannot
allocate vector of size x" problem. I'm running R 2.0 on RH9.
My R program is joining big datasets together, so there are lots of
duplicate cases of data in memory. This (and other tasks) prompted me
to... expand... my swap partition to
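For reference, a swap file is an easy way to grow swap without repartitioning; a minimal sketch (size illustrative, run as root). Note that extra swap cannot help past a 32-bit process's address-space ceiling:
```
# create and enable a 4GB swap file
dd if=/dev/zero of=/swapfile bs=1M count=4096
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
```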
2016 Sep 17
2
(Thin)LTO llvm build
So, when I embark on the next ThinLTO try build, probably this Sunday,
should I append -Wl,-plugin-opt,jobs=NUM_PHYS_CORES to LDFLAGS
and run ninja without -j or -jNUM_PHYS_CORES?
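One plausible way to wire that up with CMake (the jobs value stands in for NUM_PHYS_CORES, and the flags assume linking via the gold plugin):
```
# configure LLVM for ThinLTO, capping the parallel link-time backend jobs
cmake -G Ninja ../llvm \
  -DLLVM_ENABLE_LTO=Thin \
  -DCMAKE_EXE_LINKER_FLAGS="-Wl,-plugin-opt,jobs=8"
# ninja's own -j only governs compile steps, so the default is fine
ninja
```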
2015 Nov 13
2
Samba 4.3 restrictions
Hi,
I have found no information anywhere about the limits of Samba:
How many objects can Samba store in its sam database?
What is the maximum size of a tdb database?
How many domain controllers can there be in one Samba domain?
How many sites can be stored in one Samba domain?
Best regards,
DMITRIY LUCHKO
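Published hard limits seem scarce; one way to watch growth empirically (paths assume a default-prefix install, so treat them as placeholders):
```
# count the objects currently in the sam database
ldbsearch -H /usr/local/samba/private/sam.ldb '(objectClass=*)' dn | grep -c '^dn:'
# on-disk size of the databases
ls -lh /usr/local/samba/private/*.ldb
```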
2006 Jun 22
1
x86 uniprocessor 4GB memory
Hi there,
I'm currently using CentOS 4.3 (Server edition) on an HP DC 5100 with the
Intel 915GV chipset, powered by a P4 3.0GHz.
Now I'm facing a problem with memory.
I got myself 4GB of memory; the system BIOS detects it correctly, but
Linux can only see around 3.5GB of memory. With the default kernel-smp (I use
hyperthreading) or with kernel-hugemem, I have the same results.
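Since the hugemem kernel already handles >4GB physical memory, the missing ~0.5GB is most likely the PCI/MMIO hole below 4GB, which (as far as I know) the 915GV chipset cannot remap above 4GB. The BIOS e820 map shows exactly what the kernel was given:
```
# ranges the BIOS marked as usable RAM
dmesg | grep -i e820
# confirm the CPU supports PAE at all
grep -c pae /proc/cpuinfo
```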
2018 Oct 01
1
unexpected memory.limit on windows in embedded R
Dear All,
I'm linking R into another application and embedding it as described in the
R-exts manual, i.e. with initialization done via Rf_initEmbeddedR.
While everything works the same as in standalone R for Linux, under Windows
I found a difference in the default memory.limit, which is fixed to 2GB
(both win32 and win64) - compared to a limit in standalone R of 3.5GB for
win32 and 16GB on
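For comparison, standalone R on Windows takes its limit from the --max-mem-size startup option; a hedged example of raising it explicitly (the value is illustrative):
```
:: raise the memory limit when starting R on 32-bit Windows
Rterm.exe --max-mem-size=3500M
```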
2007 Feb 23
2
OCFS 1.2.4 memory problems still?
I have a 2 node cluster of HP DL380G4s. These machines are attached via
SCSI to an external HP disk enclosure. They run 32-bit RH AS 4.0 and
OCFS 1.2.4, the latest release. They were upgraded from 1.2.3 only a
few days after 1.2.4 was released. I had reported on the mailing list
that my developers were happy, and things seemed faster. However, twice
in that time, the cluster has gone down due
2005 Apr 24
1
large dataset import, aggregation and reshape
Dear useRs
We have a dataset (comma-delimited) with 12 million rows and 5
columns (in fact many more, but we need only 4 of them): id, factor 'a'
(5 levels), factor 'b' (15 levels), date-stamp, numeric measurement. We
run R on SuSE Linux 9.1 with 2GB RAM (and a 3.5GB swap file).
On average we have 30 obs. per id. We want to aggregate (e.g. the sum of the
measurements under
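Given the column layout described (id first, measurement fifth), pre-aggregating outside R can shrink the input dramatically; a sketch (the file name is hypothetical):
```
# sum the numeric measurement (field 5) per id (field 1) in one pass
awk -F, '{ sum[$1] += $5 } END { for (id in sum) print id "," sum[id] }' data.csv > sums.csv
```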
2016 Sep 17
5
(Thin)LTO llvm build
On Sun, Sep 18, 2016 at 12:32 AM, Mehdi Amini <mehdi.amini at apple.com> wrote:
>
>> On Sep 17, 2016, at 3:19 PM, Carsten Mattner <carstenmattner at gmail.com> wrote:
>>
>> So, when I embark on the next ThinLTO try build, probably this Sunday,
>> should I append -Wl,-plugin-opt,jobs=NUM_PHYS_CORES to LDFLAGS
>> and run ninja without -j or
2006 Jun 22
1
x86 uniprocessor 4GB memory (fwd)
If you have an AGP video card which you aren't actually using, you can try
selecting the minimum possible AGP aperture (window) size (possibly
even disabling it?). This might help. Also, removing the AGP card and using a
junk 1-8MB SVGA card may work. In my experience the linear
framebuffer of video cards is by far the greatest memory hog these days.
Cheers,
MaZe.
2005 Nov 15
1
cannot.allocate.memory.again and 32bit<--->64bit
hello!
------
I use a 32-bit Linux (SuSE) server, so I'm limited to 3.5GB of memory.
I can demonstrate that there is, from time to time, a problem with allocating
objects of large size, for example:
0.state (no objects yet created)
------------------------------------
> gc()
         used (Mb) gc trigger (Mb) max used (Mb)
Ncells 162070  4.4     350000  9.4   350000
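On 32-bit Linux the binding constraint is the ~3GB of user address space per process, not RAM plus swap, and fragmentation of that space is why large allocations fail only some of the time. A quick check:
```
# word size of the userland (32 here)
getconf LONG_BIT
# any additional per-process virtual memory cap
ulimit -v
```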
2009 May 09
5
Reading large files quickly
I'm finding that readLines() and read.fwf() take nearly two hours to
work through a 3.5 GB file, even when reading in large (100 MB) chunks.
The Unix command wc, by contrast, processes the same file in three
minutes. Is there a faster way to read files in R?
Thanks!
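If only some columns matter, trimming the fixed-width records before R parses them can help; a sketch with hypothetical character ranges and file names:
```
# keep only the needed character columns of each record
cut -c1-8,20-27 big.fwf > slim.fwf
```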
2007 Mar 21
1
Unexpected behaviour when deleteing a big mailbox
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Hello,
my last tests left me with a mailbox with > 80'000 messages in
Maildir. Dovecot served them fine. No problem. I usually use IMAP's "Mark
messages as deleted" function and expunge later.
Because "Move to Trash" is the default in many MUAs, I decided to try it
on this mailbox.
I do think that it is hard to
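For what it's worth, later Dovecot versions (2.0+) can empty such a mailbox server-side; a hedged sketch (user and mailbox names are placeholders):
```
# expunge every message in one user's Trash mailbox
doveadm expunge -u jdoe mailbox Trash all
```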
2010 Jul 16
12
Recommended RAM for ZFS on various platforms
I'm currently planning on running FreeBSD with ZFS, but I wanted to double-check
how much memory I'd need for it to be stable. The ZFS wiki currently says you
can go as low as 1 GB, but recommends 2 GB; however, elsewhere I've seen someone
claim that you need at least 4 GB. Does anyone here know how much RAM FreeBSD
would need in this case?
Likewise, how much RAM
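Much of ZFS's appetite is the ARC, which by default may grow to most of RAM; capping it is the usual way to run ZFS on smaller machines. A sketch for FreeBSD (the value is illustrative):
```
# /boot/loader.conf: cap the ARC so the rest of the system keeps headroom
vfs.zfs.arc_max="512M"
```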
2016 Aug 01
0
Why does an AWS instance always lose around 500MB of memory
On 1 August 2016 at 07:30, Dũng Trần-Dương <chris.duong83 at gmail.com> wrote:
> Hi,
>
> I launched an AWS instance `t2.medium` (using the CentOS 7 image "ami-7abd0209",
> product code: https://aws.amazon.com/marketplace/pp/B00O7WM7QW), which is
> supposed to have 4GB of memory in total, but it turns out there is only "3.5GB".
>
A rough guess would be that the system
2015 Feb 28
0
Looking for a life-save LVM Guru
On Fri, Feb 27, 2015 at 8:24 PM, John R Pierce <pierce at hogranch.com> wrote:
> On 2/27/2015 4:52 PM, Khemara Lyn wrote:
>>
>> I understand; I tried it in the hope that I could activate the LV again
>> with a new PV replacing the damaged one. But still I could not activate
>> it.
>>
>> What is the right way to recover the remaining PVs left?
>
>
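When a PV is gone for good, lvm2 can either activate what remains or drop the missing device from the metadata; a hedged sketch (the VG name is a placeholder, and data that lived on the lost PV is not recoverable this way):
```
# activate the surviving LVs despite the missing PV (newer lvm2)
vgchange -ay --activationmode partial vg00
# or permanently remove the missing PV from the VG metadata
vgreduce --removemissing vg00
```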