search for: 96gb

Displaying 16 results from an estimated 16 matches for "96gb".

2014 Oct 14
2
CentOS 6.4 kernel panic on boot after upgrading kernel to 2.6.32-431.29.2
I'm on a Supermicro server, X9DA7 motherboard, Intel C602 chipset, 2x 2.4GHz Intel Xeon E5-2665 8-core CPUs, 96GB RAM, and I'm running CentOS 6.4. I just tried to use yum to upgrade the kernel from 2.6.32-358 to 2.6.32-431.29.2. However, I get a kernel panic on boot. The first kernel panic I got included ACPI-related messages, so I tried adding noacpi noapic to the kernel boot parameters, which at least cha...
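For context, on CentOS 6 boot parameters like these are appended to the kernel line in /boot/grub/grub.conf (GRUB legacy). A minimal sketch, assuming the usual el6.x86_64 build of the kernel version from the post; the root device and file paths are placeholders:

    # GRUB legacy stanza in /boot/grub/grub.conf (device and paths are placeholders)
    title CentOS (2.6.32-431.29.2.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-431.29.2.el6.x86_64 ro root=/dev/sda3 noacpi noapic
        initrd /initramfs-2.6.32-431.29.2.el6.x86_64.img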
2012 Mar 29
2
BUG: soft lockup - CPU#0 stuck for 61s!
Ian, I came across the subject line on a 96GB server with over 85 VMs running. Completely frozen and unresponsive, qemu-dm processes hung on event channels. I'm using the XenServer 6.0 dom0 kernel on top of the xen-unstable tip hypervisor. I believe you solved the issue by backporting some event channel patches to the 2.6.32 kernel, as...
2010 Jun 24
1
Gonna be stupid here...
But it's early (for me), and I can't remember the answer here. I'm sizing an Oracle database appliance. I'd like to get one of the F20 96GB flash accelerators to play with, but I can't imagine I'd be using the whole thing for ZIL. The DB is likely to be a couple TB in size. Couple of questions: (a) since everything is going to be zvols, and I'm going to be doing lots of sync writes to them, I'm...
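For context, a dedicated ZIL (slog) device is attached with zpool add; a minimal sketch, with the pool name and device name invented for illustration:

    # Attach a flash device (or a slice of it) as a separate intent log;
    # 'tank' and c3t0d0 are placeholders for the real pool and device.
    zpool add tank log c3t0d0
    # Confirm the log vdev is present:
    zpool status tank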
2011 Aug 03
3
e-mail serving
I am going to try an experiment with e-mail aggregation where I expect to receive over 1 million e-mails a day from public lists. Can anyone shed some light on hard disk space (to retain this e-mail for long periods) and system specs to be able to handle the load? I am looking to buy a low-end box, but one that can hold lots of RAM and accommodate a fair number of HDs to store the e-mail while
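As a rough sanity check on the storage side, assuming an average list message of about 10KB (an assumption, not a figure from the post):

    1,000,000 msgs/day x 10KB/msg ~= 10GB/day
    10GB/day x 365                ~= 3.65TB/year, before compression or indexes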
2003 Apr 17
1
Odd error: Physical size does not match superblock size
Hello, I had something interesting happen on a RH8 ext3 system I set up. I am at a loss to understand what happened. Info: This system has two IDE disks, partitioned identically, and the largest partition on each (/dev/hda3 and /dev/hdb3, 96GB each) was mirrored in a Linux software RAID-1 configuration. It was running fine for many months. Then I updated the kernel and needed to reboot accordingly. (2.4.18-14.8.0smp -> 2.4.18-27.8.0smp) Problem: Upon a reboot, fsck complained that /first1 (/dev/hda3)'s superblock or partition tab...
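One way to see such a mismatch is to compare the size the filesystem has recorded against what the kernel reports for the underlying device; a sketch with the device name from the post:

    # Size the filesystem believes it has (block count x block size):
    dumpe2fs -h /dev/hda3 | grep -i 'block count\|block size'
    # Size of the underlying partition, in 512-byte sectors:
    blockdev --getsize /dev/hda3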
2014 Oct 14
3
Filesystem writes unexpectedly slow (CentOS 6.4)
I have a rather large box (2x8-core Xeon, 96GB RAM) where I have a couple of disk arrays connected on an Areca controller. I just added a new external array, 8 3TB drives in RAID5, and the testing I'm doing right now is on this array, but this seems to be a problem on this machine in general, on all file systems (even, possibly, NFS, bu...
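For anyone reproducing this, a quick way to separate page-cache effects from raw array throughput is a direct-I/O write test; a minimal sketch (mount point and sizes are placeholders):

    # Sequential write that bypasses the page cache:
    dd if=/dev/zero of=/mnt/array/testfile bs=1M count=4096 oflag=direct
    # The same write through the cache, flushed at the end for an honest figure:
    dd if=/dev/zero of=/mnt/array/testfile bs=1M count=4096 conv=fdatasync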
2013 Jul 17
1
syncer causing latency spikes
...result of some behaviour in the syncer. This is with FreeBSD 8.2 (with some local modifications and backports, r231160 in particular). The system has an LSI 9261-8i RAID controller (backed by mfi(4)) and the database and WALs are on separate volumes, a RAID 6 and a RAID 1 respectively. It has about 96GB of RAM installed. What's happening is that the syncer tries to fsync a large database file and goes to sleep in getpbuf() with the corresponding vnode lock held and the following stack: #3 0xffffffff805fceb5 in _sleep (ident=0xffffffff80ca8e20, lock=0xffffffff80d6bc20, priority=-2134554464,...
2011 Dec 29
4
BalloonWorkerThread issue
Hello List, Merry Christmas to all!! Basically I'm trying to boot a Windows 2008R2 DC HVM with 90GB static max memory and 32GB static min. The node config is Dell M610 with X5660 and 96GB RAM, and it's running XCP 1.1. Many times the node crashes while booting the HVM. Sometimes I get success. I have attached the HVM boot log of a successful start. Many times the node hangs as soon as the BalloonWorkerThread is activated. In the attached txt the balloon inflation rate is a constant 4090 *XENUTIL:...
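For reference, on XCP/XenServer the static and dynamic memory bounds described here are set per VM with xe; a sketch, with the VM UUID as a placeholder and the 90GB/32GB values taken from the post:

    # Limits must stay ordered: static-min <= dynamic-min <= dynamic-max <= static-max
    xe vm-memory-limits-set uuid=<vm-uuid> \
        static-min=32GiB dynamic-min=32GiB \
        dynamic-max=90GiB static-max=90GiB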
2017 Feb 17
0
vm running slowly in powerful host
...0 -20 10460 4976 2268 S 3 0.1 19:20.38 atop
10806 root 16 0 5540 1216 880 R 0 0.0 0:00.51 top
126 root 15 0 0 0 0 S 0 0.0 0:23.33 pdflush
3531 postgres 15 0 68616 1600 792 S 0 0.0 0:41.24 postmaster
The host in which the guest runs has 96GB RAM and 8 cores. It does not seem to do much:
top - 11:21:19 up 15 days, 15:53, 14 users, load average: 1.40, 1.39, 1.40
Tasks: 221 total, 2 running, 219 sleeping, 0 stopped, 0 zombie
Cpu0 : 15.9%us, 2.7%sy, 0.0%ni, 81.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu1 : 5.0%us, 3.0%sy,...
2013 Jul 05
4
What FileSystems for large stores and very very large stores?
I was learning about the different filesystems that exist. I was working on systems where ReiserFS was the star, but since there is no longer support from its creator there are other considerations to weigh. I want to ask about a couple of FS options. EXT4 is amazing for one node, but for more it's another story. I have heard about GFS2 and GlusterFS and read the docs and official materials from RH on
2007 Mar 15
20
C'mon ARC, stay small...
Running an mmap-intensive workload on ZFS on an X4500, Solaris 10 11/06 (update 3). All file IO is mmap(file), read memory segment, unmap, close. Tweaked the ARC size down via mdb to 1GB. I used that value because c_min was also 1GB, and I was not sure if c_max could be larger than c_min.... Anyway, I set c_max to 1GB. After a workload run: > arc::print -tad { . . . ffffffffc02e29e8
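For a persistent version of the same cap (rather than poking c_max with mdb at runtime), the ARC ceiling can be pinned in /etc/system on Solaris 10; a sketch using the 1GB value from the post:

    * /etc/system -- cap the ZFS ARC at 1GB (0x40000000 bytes); takes effect on reboot
    set zfs:zfs_arc_max = 0x40000000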
2014 Jan 05
3
Architecture for large Dovecot cluster
Hi All, I am trying to determine whether a mail server cluster based on Dovecot will be capable of supporting 500,000+ mailboxes with about 50,000 IMAP and 5000 active POP3 connections. I have looked at the Dovecot clustering suggestions here: http://blog.dovecot.org/2012/02/dovecot-clustering-with-dsync-based.html and some other Dovecot mailing list threads but I am not sure how many
2005 Jun 29
8
Hot swap CPU
From: Rodrigo Barbosa <rodrigob at suespammers.org> > Btw, don't quote me on this one :) > I'm only 90% sure of the hotswapping capabilities, and less than 50% > sure about the price :) There _are_ systems with hot-swap CPUs, memory, and/or PCI[-X] slots. They are _not_ commodity, they are pricey, and they require OS-level support. In fact, I believe Linux 2.6 has some support for
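The Linux 2.6 support mentioned here is the CPU hotplug interface exposed through sysfs; a minimal sketch (cpu1 chosen arbitrarily, and the kernel must be built with CONFIG_HOTPLUG_CPU):

    # Take a CPU offline, then bring it back:
    echo 0 > /sys/devices/system/cpu/cpu1/online
    echo 1 > /sys/devices/system/cpu/cpu1/online
    # List the processors currently online:
    grep '^processor' /proc/cpuinfo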
2010 Nov 16
0
Bug#603713: xen-hypervisor-4.0-amd64: amd64 Dom0-Kernel crashes in early boot-stage
Package: xen-hypervisor-4.0-amd64 Version: 4.0.1-1 Severity: important The amd64 Dom0 crashes in the early boot stage. For debugging purposes I logged the kernel dump with minicom over a serial console: System is: Dell Poweredge R710 2x Intel XEON X5650 96GB RAM Perc H200 SAS Controller 3x SAS-Drive I see a possible connection with Bug #600241 but acpi=off doesn't solve this problem. Regards, Ulli Hochholdinger Here comes the dump: (XEN) Xen version 4.0.1 (Debian 4.0.1-1) (waldi at debian.org) (gcc version 4.4.5 20100824 (prerelease) (Debian...
2011 Jan 05
52
Offline Deduplication for Btrfs
Here are patches to do offline deduplication for Btrfs. It works well for the cases it's expected to; I'm looking for feedback on the ioctl interface and such. I'm well aware there are missing features in the userspace app (like being able to set a different blocksize). If this interface is acceptable I will flesh out the userspace app a little more, but I believe the
2010 Nov 16
0
Bug#603727: xen-hypervisor-4.0-amd64: i386 Dom0 crashes after doing some I/O on local storage (software Raid1 on SAS-drives with mpt2sas driver)
...enough I/O to trigger this bug in these situations. So I think this is a problem with either the mpt2sas driver, which is relatively new in the kernel, or a combination of RAID1 and this driver, and it appears only when running with a hypervisor in front of the kernel. System is Dell Poweredge R710 2x Xeon X5650 96GB RAM Perc H200 3 SAS hard drives (Linux software RAID1 over some partitions) I attach kernel and Xen debugging output at the end of this report. Regards Ulli Kernel and debugging output: (XEN) Xen version 4.0.1 (Debian 4.0.1-1) (waldi at debian.org) (gcc version 4.4.5 20100728 (prerelease) (Deb...