search for: 192gb

Displaying 20 results from an estimated 28 matches for "192gb".

2018 Apr 04
4
memory cgroup max_usage_in_bytes question
hi all, can someone help explain what we are seeing? it makes no sense to us. this is a host running centos 7.4 with the 3.10.0-693.17.1 kernel, and it has 192GB of ram
> [] free -b
>                total         used         free     shared   buff/cache    available
> Mem:    201402642432  14413479936  75642777600   48586752 111346384896 185689632768
> Swap:    21474832384     31961088  21442871296
> [] cat /sys/fs/cgroup/memory/memory.max_usage_in...
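The usual explanation for a gap like this is that cgroup v1 usage counters include page cache, so memory.max_usage_in_bytes can sit far above what free(1) reports as "used". A minimal sketch for checking that, assuming the default cgroup v1 hierarchy on a CentOS 7 host:

    [] cat /sys/fs/cgroup/memory/memory.usage_in_bytes      # current usage, page cache included
    [] cat /sys/fs/cgroup/memory/memory.max_usage_in_bytes  # high-water mark since boot or last reset
    [] grep -E '^(total_)?(cache|rss) ' /sys/fs/cgroup/memory/memory.stat  # cache vs rss breakdown
    [] echo 0 > /sys/fs/cgroup/memory/memory.max_usage_in_bytes            # reset the watermark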
2012 May 20
4
R Memory Issues
---------- Forwarded message ---------- From: Emiliano Zapata <ezapataika@gmail.com> Date: Sun, May 20, 2012 at 12:09 PM Subject: To: R-help@r-project.org Hi, I have a 64-bit machine (Windows) with a total of 192GB of physical memory (RAM) and a total of 8 CPUs. I wanted to ask how I can make R use all the memory. I recently ran a script requiring approximately 92 GB of memory to run, and got the message: "cannot allocate memory block of size 2.1 Gb" I read on the web that if you increase the memory...
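On Windows builds of R from that era, the per-session allocation cap, rather than physical RAM, is typically what produces "cannot allocate" errors on large machines. A minimal sketch, assuming a pre-4.2 Windows build of R on PATH (memory.limit() was removed in R 4.2.0):

    Rterm.exe --max-mem-size=180G            # start R with a larger session cap
    Rscript -e "memory.limit()"              # query the current cap in MB (Windows only)
    Rscript -e "memory.limit(size=180000)"   # raise it to ~180 GB for the session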
2012 May 20
1
(no subject)
Hi, I have a 64-bit machine (Windows) with a total of 192GB of physical memory (RAM) and a total of 8 CPUs. I wanted to ask how I can make R use all the memory. I recently ran a script requiring approximately 92 GB of memory to run, and got the message: "cannot allocate memory block of size 2.1 Gb" I read on the web that if you increase the memory...
2023 Mar 14
1
How to configure?
Hello all. Our Gluster 9.6 cluster is showing increasing problems. Currently it's composed of 3 servers (2x Intel Xeon 4210 [20 cores dual thread, total 40 threads], 192GB RAM, 30x HGST HUH721212AL5200 [12TB]), configured in replica 3 arbiter 1. Using Debian packages from the Gluster 9.x latest repository. It seems 192GB of RAM is not enough to handle 30 data bricks + 15 arbiters, and I often had to reload glusterfsd because glusterfs processes got killed for OOM. On top o...
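For many-bricks-per-server OOM situations like this, the knob usually suggested is brick multiplexing, which folds all brick processes on a node into a single glusterfsd. A hedged sketch (option names taken from the Gluster 9.x docs; verify against your release before applying):

    gluster volume status VOLNAME mem                        # per-brick memory accounting
    gluster volume set all cluster.brick-multiplex on        # one glusterfsd per node instead of one per brick
    gluster volume set VOLNAME performance.cache-size 256MB  # trim per-volume caches if still tight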
2023 Mar 15
1
How to configure?
...ng? Best Regards, Strahil Nikolov On Tue, Mar 14, 2023 at 16:44, Diego Zuccato <diego.zuccato at unibo.it> wrote: Hello all. Our Gluster 9.6 cluster is showing increasing problems. Currently it's composed of 3 servers (2x Intel Xeon 4210 [20 cores dual thread, total 40 threads], 192GB RAM, 30x HGST HUH721212AL5200 [12TB]), configured in replica 3 arbiter 1. Using Debian packages from the Gluster 9.x latest repository. It seems 192GB of RAM is not enough to handle 30 data bricks + 15 arbiters, and I often had to reload glusterfsd because glusterfs processes got killed for OOM. On top o...
2018 Jan 16
2
Samba46 Listen queue overflow in FreeBSD 11.1
Hello everyone, We are trying to track down some samba issues and wondering if there are some settings we can tweak. We have a new Supermicro server running the following with 192GB of RAM, 32 active CPUs and 54TB of usable zfs mirrors (raid10). uname -a FreeBSD hostname 11.1-RELEASE-p4 FreeBSD 11.1-RELEASE-p4 #0: Tue Nov 14 06:12:40 UTC 2017 root at amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64 [root at hostname /var/log]# freebsd-version -k 11.1-RELEASE...
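"Listen queue overflow" on FreeBSD means the TCP accept backlog filled before smbd could drain it; both the counters and the system-wide cap are easy to inspect. A minimal sketch, assuming stock FreeBSD 11.x tooling:

    netstat -Lan | grep -E '\.(445|139)'   # current vs maximum listen queue on the smbd sockets
    netstat -s -p tcp | grep -i listen     # cumulative "listen queue overflows" counter
    sysctl kern.ipc.soacceptqueue          # system-wide backlog cap (alias of the old kern.ipc.somaxconn)
    sysctl kern.ipc.soacceptqueue=1024     # raise it; the listener must also request a larger backlog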
2023 Mar 15
1
How to configure?
...Mar 14, 2023 at 16:44, Diego Zuccato > <diego.zuccato at unibo.it> wrote: > Hello all. > > Our Gluster 9.6 cluster is showing increasing problems. > Currently it's composed of 3 servers (2x Intel Xeon 4210 [20 cores dual > thread, total 40 threads], 192GB RAM, 30x HGST HUH721212AL5200 [12TB]), > configured in replica 3 arbiter 1. Using Debian packages from the Gluster > 9.x latest repository. > > It seems 192GB of RAM is not enough to handle 30 data bricks + 15 arbiters > and > I often had to reload glusterfsd because gl...
2018 Apr 05
0
memory cgroup max_usage_in_bytes question
On 05/04/18 01:56, Stijn De Weirdt wrote:
> hi all,
>
> can someone help explain what we are seeing? it makes no sense to us.
> this is a host running centos 7.4 with the 3.10.0-693.17.1 kernel, and it
> has 192GB of ram
>
>> [] free -b
>>                total         used         free     shared   buff/cache    available
>> Mem:    201402642432  14413479936  75642777600   48586752 111346384896 185689632768
>> Swap:    21474832384     31961088  21442871296
>> [] cat /sys/fs/cgroup/m...
2018 Apr 04
0
memory cgroup max_usage_in_bytes question
Stijn De Weirdt wrote:
> hi all,
>
> can someone help explain what we are seeing? it makes no sense to us.
> this is a host running centos 7.4 with the 3.10.0-693.17.1 kernel, and it
> has 192GB of ram
>
>> [] free -b
>>                total         used         free     shared   buff/cache    available
>> Mem:    201402642432  14413479936  75642777600   48586752 111346384896 185689632768
>> Swap:    21474832384     31961088  21442871296
>> [] cat /...
2023 Dec 17
1
Gluster -> Ceph
...emons is implemented on quality HW. > With Gluster, it's just files on disks, easily recovered. I've already had to do it twice in a year, and the coming third time will be the "definitive migration". The first time there were too many little files, the second time it seemed 192GB of RAM was not enough to handle 30 bricks per server, and now that I reduced to just 6 bricks per server (creating RAIDs) and created a brand new volume in August, I already find lots of FUSE-inaccessible files that don't heal. That should be impossible since I'm using "replica 3 arbiter...
2014 May 20
0
sluggish behavior
...ues. No I/O wait, the load average shows nothing out of the ordinary, nothing taking CPU or memory. I have noticed that in a couple of the instances the libvirtd daemon was stopped. We have more than twenty (20+) VMs running - they all seem fine. The metal server is a beast of an 820 with 192GB of RAM and 64 CPUs. The load average rarely creeps above 2 or 3. I do not think it is a resource issue. I can write a test file to the filesystem/volume where the images live at over 350MB/s. The sluggishness disappears after a reboot. If I clone a VM ... back again ... Any ideas? Thanks...
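Since the storage tests look healthy, triage usually narrows to libvirtd itself. A quick sketch of the checks implied above (the image path is an assumption; adjust to where the images actually live):

    dd if=/dev/zero of=/var/lib/libvirt/images/ddtest bs=1M count=2048 oflag=direct  # repeat the raw write test
    pgrep -x libvirtd || echo "libvirtd not running"                                 # the daemon found stopped above
    time virsh list --all                   # if this stalls while the VMs run fine, libvirtd is the bottleneck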
2010 Apr 19
20
Lustre Client - Memory Issue
Hi Guys, My users are reporting some issues with memory on our lustre 1.8.1 clients. It looks like when they submitted a single job at a time, the run time was about 4.5 minutes. However, when they ran multiple jobs (10 or fewer) on a client with 192GB of memory on a single node, the run time for each job exceeded 3-4X the run time for the single process. They also noticed that the swap space kept climbing even though there was plenty of free memory on the system. Could this possibly be related to the lustre client? Does it reserve any mem...
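The Lustre client does pin memory for its page cache and DLM locks, which can compete with jobs on a shared node. A hedged sketch using lctl tunables that existed around the 1.8 line (names worth re-checking against that release):

    lctl get_param llite.*.max_cached_mb        # ceiling on the client-side page cache
    lctl set_param llite.*.max_cached_mb=32768  # e.g. cap it at 32 GB on a 192GB node
    lctl get_param ldlm.namespaces.*.lru_size   # lock LRU size, another per-client memory consumer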
2023 Mar 15
1
How to configure?
...Tue, Mar 14, 2023 at 16:44, Diego Zuccato > <diego.zuccato at unibo.it> wrote: > Hello all. > > Our Gluster 9.6 cluster is showing increasing problems. > Currently it's composed of 3 servers (2x Intel Xeon 4210 [20 cores dual > thread, total 40 threads], 192GB RAM, 30x HGST HUH721212AL5200 [12TB]), > configured in replica 3 arbiter 1. Using Debian packages from the Gluster > 9.x latest repository. > > It seems 192GB of RAM is not enough to handle 30 data bricks + 15 arbiters > and > I often had to reload glusterfsd because gluster...
2009 Nov 28
2
R on Large Data Sets (again)
Dear R users, I’ve searched the R site for help on this topic, but it is hard to find a precise answer to my questions. What are the best options for overcoming the RAM limitation problems when using R on “large” data sets (such as 2 or 3 million records)? - Is the freely available version of R (as opposed to the one provided by REvolution Computing) compatible with a Windows 64-bit
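Whether a given R build is 64-bit (and therefore free of the 32-bit address-space ceiling) can be checked directly. A minimal sketch, assuming R is on PATH:

    R --vanilla -q -e '.Machine$sizeof.pointer'   # prints 8 under a 64-bit build, 4 under 32-bit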
2018 Jan 16
0
Samba46 Listen queue overflow in FreeBSD 11.1
...16 Jan 2018, at 20:08, Wallace Barrow via samba <samba at lists.samba.org> wrote: > > Hello everyone, > > We are trying to track down some samba issues and wondering if there are some > settings we can tweak. > > We have a new Supermicro server running the following with 192GB of RAM, 32 > active CPUs and 54TB of usable zfs mirrors (raid10). > > uname -a > FreeBSD hostname 11.1-RELEASE-p4 FreeBSD 11.1-RELEASE-p4 #0: Tue Nov 14 06:12:40 UTC 2017 > root at amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64 > > [root at hostname /var/lo...
2023 Mar 16
1
How to configure?
...uccato at unibo.it>> wrote: > > Hello all. > > > > Our Gluster 9.6 cluster is showing increasing problems. > > Currently it's composed of 3 servers (2x Intel Xeon 4210 [20 > cores dual > > thread, total 40 threads], 192GB RAM, 30x HGST HUH721212AL5200 > [12TB]), > > configured in replica 3 arbiter 1. Using Debian packages from > the Gluster > > 9.x latest repository. > > > > It seems 192GB of RAM is not enough to handle 30 data bricks + 15 > arbiters ...
2023 Mar 16
1
How to configure?
...zuccato at unibo.it>> wrote: > > Hello all. > > > > Our Gluster 9.6 cluster is showing increasing problems. > > Currently it's composed of 3 servers (2x Intel Xeon 4210 [20 > cores dual > > thread, total 40 threads], 192GB RAM, 30x HGST HUH721212AL5200 > [12TB]), > > configured in replica 3 arbiter 1. Using Debian packages from > the Gluster > > 9.x latest repository. > > > > It seems 192GB of RAM is not enough to handle 30 data bricks + 15 > arbiters > ...
2012 Dec 01
3
6Tb Database with ZFS
...Tb database? Obviously the more the better, but I can't set too much memory. Has someone successfully implemented something similar? We ran some tests and the memory usage was as follows (with arc_max at 30Gb):
Kernel     = 18Gb
ZFS DATA   = 55Gb
Anon       = 90Gb
Page Cache = 10Gb
Free       = 25Gb
My system has 192Gb RAM and the database SGA = 85Gb. I would appreciate if someone could tell me about their experience. Best Regards
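On the OpenSolaris-era systems this list covers, the usual lever for capping the ARC is zfs_arc_max in /etc/system. A minimal sketch matching the 30Gb test above (value in bytes; takes effect at the next reboot):

    echo 'set zfs:zfs_arc_max = 32212254720' >> /etc/system   # 30 * 1024^3 bytes, the 30Gb cap tested above
    kstat -p zfs:0:arcstats:c_max                             # verify the cap after reboot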
2023 Mar 21
1
How to configure?
...all. > > > > > > Our Gluster 9.6 cluster is showing increasing problems. > > > Currently it's composed of 3 servers (2x Intel Xeon > 4210 [20 > > cores dual > > > thread, total 40 threads], 192GB RAM, 30x HGST > HUH721212AL5200 > > [12TB]), > > > configured in replica 3 arbiter 1. Using Debian > packages from > > Gluster > > > 9.x latest repository. > > > > > > Seems...
2023 Mar 21
1
How to configure?
...> > Our Gluster 9.6 cluster is showing increasing > problems. > > > > Currently it's composed of 3 servers (2x Intel Xeon > > 4210 [20 > > > cores dual > > > > thread, total 40 threads], 192GB RAM, 30x HGST > > HUH721212AL5200 > > > [12TB]), > > > > configured in replica 3 arbiter 1. Using Debian > > packages from > > > Gluster > > > > 9.x latest repository....