similar to: Asterisk Performance

Displaying 20 results from an estimated 1000 matches similar to: "Asterisk Performance"

2010 Nov 11
2
How does KVM handle multiple cores?
Hi All, How does KVM handle multiple cores? I have an X5650 with 6 real cores that presents itself to my OS as 12 virtual cores (hyperthreading). Does KVM see 6 or 12 cores? And can I tell KVM how many cores I want it to use? Am I misunderstanding how KVM works? What I am after is: if my guest is 100% busy, I still want some power left over for my host. Many thanks, -T
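KVM itself exposes whatever the host kernel sees (12 logical cores here), but the guest's share is capped in the VM definition. A minimal sketch of a libvirt domain fragment (element names are from libvirt's domain XML format; the pinning values are illustrative assumptions) might look like:

```xml
<!-- Hypothetical fragment of a libvirt domain definition: give the guest
     6 vCPUs so the remaining logical cores stay available to the host. -->
<domain type='kvm'>
  <vcpu placement='static'>6</vcpu>
  <cputune>
    <!-- optionally pin a vCPU to a specific host logical core -->
    <vcpupin vcpu='0' cpuset='0'/>
  </cputune>
</domain>
```

With no pinning, the host scheduler still balances guest threads against host work; pinning just makes the split explicit.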
2010 Nov 10
1
Need sub for virtual box
Hi All, I am running VirtualBox on a CentOS 5.5 x86_64 host. The guest is 32-bit Windows Server 2008 SP2 (not R2). VirtualBox bug 7606 is causing a customer of mine serious financial harm. http://www.virtualbox.org/ticket/7607#comment:3 I have tried in vain, month after month, to get Oracle to sell me a tech support contract so that we can get some priority on this issue.
2017 Nov 04
1
low end file server with h/w RAID - recommendations
John R Pierce wrote: > On 11/3/2017 1:25 AM, hw wrote: >> That only goes when you buy new. Look at what you can get used, and you'll >> see that there's basically nothing that fits 3.5" drives. > > > I bought a used HP DL180g6 a couple years ago, 12 x 3.5" on the front panel, and 2 more in back, came with all 14 HP trays, dual X5650. It's a personal/charity
2012 Jan 17
1
BLAS
I'm setting up an Ubuntu virtual machine that will use 4 Intel Xeon X5650 CPUs. I'd like to compile R with a BLAS, but the question is which one. Seems like the only free ones are GotoBLAS, which I'm not sure is being maintained for newer CPUs, and OpenBLAS for Loongson CPUs. I saw a favorable report on OpenBLAS
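For reference, building R against an external BLAS is done at configure time. A command sketch (package name and flags are assumptions for an Ubuntu-style system with OpenBLAS development headers installed):

```sh
# Hypothetical build steps: link R against OpenBLAS as a shared BLAS.
sudo apt-get install libopenblas-dev
./configure --with-blas="-lopenblas" --with-lapack --enable-BLAS-shlib
make && sudo make install
```

Using `--enable-BLAS-shlib` keeps BLAS as a shared library, so it can later be swapped without rebuilding R.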
2012 May 25
1
R memory allocation
Dear All, I am running R in a system with the following configuration *Processor: Intel(R) Xeon(R) CPU X5650 @ 2.67GHz OS: Ubuntu X86_64 10.10 RAM: 24 GB* The R session info is * R version 2.14.1 (2011-12-22) Platform: x86_64-pc-linux-gnu (64-bit) locale: [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C [3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8 [5] LC_MONETARY=en_US.UTF-8
2010 Apr 05
14
Are there (non-Sun/Oracle) vendors selling OpenSolaris/ZFS based NAS Hardware?
I've seen the Nexenta and EON webpages, but I'm not looking to build my own. Is there anything out there I can just buy? -Kyle
2010 Nov 21
10
Running on Dell hardware?
> From: Edward Ned Harvey [mailto:shill at nedharvey.com] > > I have a Dell R710 which has been flaky for some time. It crashes about once > per week. I have literally replaced every piece of hardware in it, and > reinstalled Sol 10u9 fresh and clean. It has been over 3 weeks now, with no crashes, and me doing everything I can to get it to crash again. So I'm going to
2017 Mar 28
6
Xen C6 kernel 4.9.13 and testing 4.9.15 only reboots.
The mystery gets more interesting... I now have a CentOS 7.3 Dell R710 server doing the exact same thing of rebooting immediately after the Xen kernel load. Just to note, this is a second system and not just the first system with an update. I hope I'm not introducing something odd. The only "interesting" thing I have done for historical reasons is to change the following
2017 Apr 19
2
Xen C6 kernel 4.9.13 and testing 4.9.15 only reboots.
On 04/18/2017 12:39 PM, PJ Welsh wrote: > Here is something interesting... I went through the BIOS options and > found that one R710 that *is* functioning only differed in that "Logical > Processor"/Hyperthreading was *enabled* while the one that is *not* > functioning had HT *disabled*. Enabled Logical Processor and the system > starts without issue! I've rebooted 3
2017 Apr 05
1
Xen C6 kernel 4.9.13 and testing 4.9.15 only reboots.
On Tue, Apr 4, 2017 at 11:13 AM, Johnny Hughes <johnny at centos.org> wrote: > On 03/28/2017 04:55 PM, PJ Welsh wrote: > > The mystery gets more interesting... I now have a CentOS 7.3 Dell R710 > > server doing the exact same thing of rebooting immediately after the Xen > > kernel load. Just to note this is a second system and not just the first > > system with an
2014 Sep 21
1
UEFI PXE / split config / TFTP attempted to DHCP server, not TFTP server
All, I realize this is not strictly a PXELINUX question, so I hope you'll indulge me; hopefully some of these PXELINUX experts have seen this before and can tell me what I'm doing wrong, or confirm my suspicions. I have a test lab server at work, with a split config. The network team manages the DHCP servers, which point to our TFTP server. The test subnet has 3 DHCP pools: BIOS PXE, UEFI PXE and
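In a split config like this, the usual pattern is for DHCP to hand out both the boot filename and `next-server` (the TFTP host); if `next-server` is missing, some firmware falls back to TFTP'ing from the DHCP server itself. A hypothetical ISC dhcpd fragment (addresses and filenames are placeholders) that steers BIOS and UEFI clients separately via the client-architecture option (option 93):

```
# Hypothetical dhcpd.conf fragment: serve a different boot program to
# BIOS and UEFI clients by testing DHCP option 93 (client architecture).
option arch code 93 = unsigned integer 16;
subnet 192.168.0.0 netmask 255.255.255.0 {
  next-server 192.168.0.10;          # the TFTP server, not the DHCP server
  if option arch = 00:07 {
    filename "efi64/syslinux.efi";   # x86-64 UEFI
  } else {
    filename "bios/pxelinux.0";      # legacy BIOS
  }
}
```

If the network team's DHCP config omits `next-server`, that alone can explain clients TFTP'ing to the wrong host.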
2017 Mar 24
2
Xen C6 kernel 4.9.13 and testing 4.9.15 only reboots.
As a follow-up, I was able to test a fresh install on a Dell R710 and a Dell R620 with CentOS 7.3 without issue on the new kernel. My new plan will be to just move this C6 to one of the C7 systems I just created. On Wed, Mar 22, 2017, 6:27 AM PJ Welsh <pjwelsh at gmail.com> wrote: > The last few lines are > NMI watchdog: disabled CPU0 hardware events not enabled > NMI watchdog:
2010 Dec 20
0
simple question about xenoprof
I'm trying to use xenoprof on my machine (Xeon X5650, Westmere). While oprofile works correctly on this machine, xenoprof doesn't work properly. I want to use xenoprof in NMI mode (which is the opposite of timer mode, right?). However, the kernel message always shows that xenoprof works in timer mode only. Is there any restriction or spec which is not
2017 Apr 19
2
Xen C6 kernel 4.9.13 and testing 4.9.15 only reboots.
On 04/19/2017 12:18 PM, PJ Welsh wrote: > > On Wed, Apr 19, 2017 at 5:40 AM, Johnny Hughes <johnny at centos.org > <mailto:johnny at centos.org>> wrote: > > On 04/18/2017 12:39 PM, PJ Welsh wrote: > > Here is something interesting... I went through the BIOS options and > > found that one R710 that *is* functioning only differed in that
2017 Nov 03
2
low end file server with h/w RAID - recommendations
m.roth at 5-cent.us wrote: > hw wrote: >> Richard Zimmerman wrote: >>> hw wrote: >>>> Next question: you want RAID, how much storage do you need? Will 4 or 8 >>>> 3.5" drives be enough (DO NOT GET crappy 2.5" drives - they're *much* >>>> more expensive than the 3.5" drives, and have smaller disk space. For the >>>>
2011 Oct 24
5
[Xen-API] CloudLinux on Xen
Hello List, I am testing an XCP1 CentOS 5 paravirt VM with CloudLinux. http://cloudlinux.com/ CloudLinux is a great product for shared hosting and I was evaluating the same on a paravirt guest. I found a strange thing in XCP. CloudLinux provides a Xen kernel for DomUs, and after installing CloudLinux in a DomU it actually got 32 CPUs inside. I had assigned only 2 vCPUs to that DomU. To confirm this I
2012 Mar 05
12
Cluster xen
Hello, I would like to set up a cluster under Xen or XenServer with 2 Dell R710 servers. I would like to be able to build a cluster using the combined disk space of both servers as well as the memory. What is your experience with this and what configurations do you use? Thanks in advance. Regards, Mat
2011 Feb 02
3
Help for 5000 clients server + 50 sources
Hi, I need a server for 5000 clients and 50 sources at the same time. Can you help me choose the right server? I was thinking of this one: PowerEdge R710, 2 x Intel Xeon E5502 (1.86GHz, 4MB cache, 4.86 GT/s QPI), 2GB memory for 1 CPU (2x1GB single-rank UDIMMs, 1066MHz), 2 x 146GB SAS 15,000rpm 3.5" hot-plug HDs, PERC 6/i RAID controller with 256MB, PCIe 2x4, 1 Gbit network card. Thanks --------------
2017 Mar 20
3
Xen C6 kernel 4.9.13 and testing 4.9.15 only reboots.
Updating my CentOS 6.8 Xen server with the new 4.9.13 kernel yields a few kernel boot messages like "APIC ID MISMATCH" and the system reboots immediately without any other bits of info. This is on a Dell R710 with 64GB RAM and 2x 6-core Intel CPUs. As an additional test, I installed and attempted to run the current "testing" kernel, 4.9.16, with the exact same results.