similar to: Just thanks

Displaying 20 results from an estimated 10000 matches similar to: "Just thanks"

2005 Mar 21
0
audio frequency with wcfxs and K8t
Friday and Saturday I was wrestling with a VoIP system that was having very strange problems. It was playing the outgoing IVR audio at 2-5x faster than it should have been. I found that if I stopped asterisk, removed the wcfxs driver and installed the ztdummy driver, the audio would play fine. I tested this in and out several times and it always worked fine with ztdummy and never worked right
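The driver swap the post describes can be sketched as below. These are Zaptel-era module names taken from the post itself (wcfxs, ztdummy); the service name and exact sequence are assumptions and vary by install. With DRY_RUN=1 (the default here) the commands are only printed, not executed.

```shell
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

swap_to_ztdummy() {
  run service asterisk stop
  run modprobe -r wcfxs    # remove the hardware timing driver
  run modprobe ztdummy     # use the software timing source instead
  run service asterisk start
}

swap_to_ztdummy
```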
2014 Apr 10
1
replication + attachment sis + zlib bug ? (HEAD version from xi.rename-it.nl)
Hi, i have setup with mail_attachment single instance store + replication + zlib and got this bug when i try to replicate one test mailbox: On master1 in mail.log: Apr 10 13:25:22 master1 dovecot: dsync-local(zzz at blabla666.sk): Error: read(/nfsmnt/mailnfs1/attachments1/6b/57/6b57ad34cf6c414662233d833a7801fde4e1cdcb-92b5052558774653a728000013e2b982[base64:18 b/l]) failed: Stream is larger than
2024 Feb 05
1
Graceful shutdown doesn't stop all Gluster processes
Hello Everyone, I am using GlusterFS 9.4, and whenever we use the systemctl command to stop the Gluster server, it leaves many Gluster processes running. So, I just want to check how to shut down the Gluster server in a graceful manner. Is there any specific sequence or trick I need to follow? Currently, I am using the following command: [root at master2 ~]# systemctl stop glusterd.service
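One reason for the leftover processes: `systemctl stop glusterd` only stops the management daemon, while brick (glusterfsd) and auxiliary (glusterfs) processes keep running. A minimal sketch of a more graceful sequence follows; the volume name MYVOL is a placeholder, and DRY_RUN=1 prints the commands instead of executing them.

```shell
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

graceful_gluster_stop() {
  run gluster --mode=script volume stop MYVOL   # stops this volume's brick processes
  run systemctl stop glusterd.service           # stops the management daemon
  run pkill -f gluster                          # catch any remaining gluster processes
}

graceful_gluster_stop
```

Some GlusterFS packages also ship a `stop-all-gluster-processes.sh` helper (often under /usr/share/glusterfs/scripts/) that does roughly this; it is worth checking for before hand-rolling.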
2013 Jul 11
0
gspca - a followup
I *think* the problem I've been having with the gspca_zc3xx video drivers isn't directly that driver. One of my users, on one of the two servers that broke, started having continuing crashes after he enabled mediawiki to serve thumbnails for some images. That crash, according to the [abrt] full crash report, is from /usr/bin/convert, and /var/log/messages tells me kernel:
2012 Jul 19
1
Issue with CentOS 5.5 x64 on AMD machine
Hi, CentOS 5.5 x64 does not boot on certain AMD-based servers. The following error is displayed: Code: 8b 72 40 48 8d 4c 24 1c 48 8b 7a 20 ba c4 01 00 00 e8 5f 77 RIP [<ffffffff8008192f>] cpuid4_cache_lookup+0x256/0x356 RSP <ffff810104737d60> CR2: 000000000040 <0>Kernel panic -- not syncing: Fatal exception This issue was noticed on a Dell PowerEdge R715 with AMD Opteron 12-core
2024 Jan 22
1
Geo-replication status is getting Faulty after few seconds
Hi There, We have a Gluster setup with three master nodes in replicated mode and one slave node with geo-replication. # gluster volume info Volume Name: tier1data Type: Replicate Volume ID: 93c45c14-f700-4d50-962b-7653be471e27 Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: master1:/opt/tier1data2019/brick Brick2: master2:/opt/tier1data2019/brick
2009 Mar 06
2
Dell compatibility
The systems will run CentOS 5.2 32-bit; the model is below. My question is whether this controller can handle a RAID 0+1. PowerEdge SC1435; processor: AMD Opteron 2344HE Quad Core (1.7 GHz, 4x512 KB L2 cache, 1 GHz HyperTransport) - BRH8835; memory: 8GB, 667MHz (4x2GB), single ranked; primary controller: integrated SATA controller, no RAID; first hard drive: SAS
2004 Dec 10
3
OT: How do I know if I should have IO-APIC?
With regards to the IRQ sharing situation on 400P/X100P cards how would I know if I can use IO-APIC? I am running RHEL 3 on a Dell PowerEdge 1400SC. RHEL installs without IO-APIC support. Is this because RH is overly conservative or because it queried my machine and that is the appropriate option? Does RHEL 3 have a kernel for IO-APIC if appropriate or am I expected to do a custom kernel build
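On a running system, /proc/interrupts already answers this: each IRQ line shows whether it is routed through the IO-APIC ("IO-APIC-edge"/"IO-APIC-level") or the legacy PIC ("XT-PIC"). A minimal sketch of the check, demonstrated on a sample file so it can be read without root on any box; point it at /proc/interrupts for real use.

```shell
has_ioapic() {
  # prints a one-line verdict based on the interrupt routing names in the file
  if grep -q 'IO-APIC' "$1"; then echo "IO-APIC in use"; else echo "XT-PIC only"; fi
}

# hypothetical excerpt of /proc/interrupts from an IO-APIC machine
cat > /tmp/interrupts.sample <<'EOF'
  0:   27422446    IO-APIC-edge  timer
 14:      12345    IO-APIC-edge  ide0
EOF
has_ioapic /tmp/interrupts.sample
```

`dmesg | grep -i apic` right after boot is another quick way to see whether the kernel found and enabled an IO-APIC.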
2012 Dec 17
1
multiple puppet masters
Hi, I would like to set up an additional puppet master but have the CA server handled by only 1 puppet master. I have set this up as per the documentation here: http://docs.puppetlabs.com/guides/scaling_multiple_masters.html I have configured my second puppet master as follows: [main] ... ca = false ca_server = puppet-master1.test.net I am using passenger so I am a bit confused how the
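A minimal sketch of the second master's puppet.conf, using the hostname quoted in the post; per the multi-master scaling guide, the non-CA master disables its own CA and names the CA master so agents are redirected there for certificate requests.

```
# puppet.conf on the second (non-CA) master -- sketch using the
# hostname from the post; adjust to your own CA master
[main]
    ca = false
    ca_server = puppet-master1.test.net
```

Under Passenger these settings are still read by the master application running inside Apache, so no Passenger-specific CA configuration should be needed beyond this.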
2024 Jan 24
1
Geo-replication status is getting Faulty after few seconds
Hi All, I have run the following commands on master3, and that has added master3 to geo-replication:

gluster system:: execute gsec_create
gluster volume geo-replication tier1data drtier1data::drtier1data create push-pem force
gluster volume geo-replication tier1data drtier1data::drtier1data stop
gluster volume geo-replication tier1data drtier1data::drtier1data start

Now I am able to start the
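After recreating the session with the commands above, the status and the gsyncd log are the usual places to see why it flips to Faulty. A sketch follows; the volume and slave names come from the post, the log path is the common default and may differ on your install, and the commands are printed (dry-run style) rather than executed.

```shell
VOL=tier1data
SLAVE=drtier1data::drtier1data

geo_status_cmd() { echo gluster volume geo-replication "$VOL" "$SLAVE" status detail; }
geo_log_cmd()    { echo tail -n 100 "/var/log/glusterfs/geo-replication/$VOL/gsyncd.log"; }

geo_status_cmd
geo_log_cmd
```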
2024 Jan 27
1
Geo-replication status is getting Faulty after few seconds
Hi Anant, I would first start by checking whether you can ssh from all masters to the slave node. If you haven't set up a dedicated user for the session, then gluster is using root. Best Regards, Strahil Nikolov -- On Friday, 26 January 2024 at 18:07:59 GMT+2, Anant Saraswat <anant.saraswat at techblue.co.uk> wrote: Hi All, I have run the following commands on master3,
2024 Jan 27
1
Geo-replication status is getting Faulty after few seconds
Don't forget to test with the georep key. I think it was /var/lib/glusterd/geo-replication/secret.pem Best Regards, Strahil Nikolov -- On Saturday, 27 January 2024 at 07:24:07 GMT+2, Strahil Nikolov <hunter86_bg at yahoo.com> wrote: Hi Anant, I would first start by checking whether you can ssh from all masters to the slave node. If you haven't set up a dedicated user for the
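The suggested check can be sketched as below, using the geo-replication key path mentioned above. The slave hostname is taken from the thread; BatchMode makes ssh fail fast instead of prompting for a password. The command is printed here so it can be inspected before running it on each master.

```shell
PEM=/var/lib/glusterd/geo-replication/secret.pem
SLAVE_HOST=drtier1data   # slave hostname from the thread

georep_ssh_check() { echo ssh -i "$PEM" -o BatchMode=yes "root@$1" true; }

georep_ssh_check "$SLAVE_HOST"
```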
2005 May 26
1
How do I know that my machine will support APIC?
Regarding the SMP and interrupt issues. I know my machine is not running APIC now, but how do I determine if it is capable? Can I find out from the running system or is this something I need to know from the mfg? Currently the X100P shares IRQ with the secondary SCSI (yeah, go ahead and laugh). The box is a Dell PowerEdge 1400SC. Apparently the "SC" means Simplified Configuration and
2009 Aug 17
2
Building and Installing Xen 4.3 in Fedora11
Hi there, I am using a Dell PowerEdge 2970 with the following specs: 2 x AMD Dual Core Opteron processors, 2 GHz, 2 MB L2 cache; 8 GB RAM (4 x 2 GB); 4 x 146 GB hard disks; dual-port Gigabit Ethernet NIC; 10 Gbps PCI Ethernet; Fedora 11 operating system installed. What I want here is to build and install Xen 4.3 in Fedora 11. For this I followed the following steps
2001 Jul 22
1
Just back in town
Hi folks, I've been out of town for a few days (by motorcycle, net-less) and just got back into town tonight. I'll be catching up on email tomorrow. (OK, I know I'm more than a few days behind on email, but I'm still going to try ;-) Monty --- >8 ---- List archives: http://www.xiph.org/archives/ Ogg project homepage: http://www.xiph.org/ogg/ To unsubscribe from this list,
2016 Sep 17
0
IPMI?
On Sat, Sep 17, 2016 at 6:25 AM, Alice Wonder <alice at domblogger.net> wrote: > Never used IPMI in my life and while I thought it was cool when I heard > about it, had no plans to. > Under many different names (Sun called it LOM; I forgot IBM's name), this has been out there for a while. And it is IMHO the best way to deal with servers. My normal server installing
2014 Dec 09
0
Two identical hosts, different capabilities and topologies
Hello, I've two twin servers (Dell PowerEdge R815) dedicated to kvm+libvirt and they have the same exact CPU (AMD Opteron(TM) Processor 6272) with the same topology (2 sockets, 16 cores per socket, so a total of 32 CPUs seen by the kernel) but both virsh -r capabilities and virsh nodeinfo say that the two machines are different - but they really are not! - and this prevents live migration
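A usual way to debug this is to dump the capabilities XML from both hosts, diff the <cpu> and <topology> sections, and let libvirt compute a common CPU model for migration-safe domain XML. A sketch follows; the filenames are examples, and the commands are printed rather than executed.

```shell
run() { echo "$@"; }   # dry-run wrapper: prints each command instead of running it

run virsh -r capabilities            # save as r815-a.xml here, r815-b.xml on the other host
run diff r815-a.xml r815-b.xml       # compare the <cpu>/<topology> sections
run virsh cpu-baseline r815-a.xml    # computes a common <cpu> definition for domain XML
```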
2005 Sep 11
4
[RFC] The Early Demise of Myriad (Thanks To Ruby Threads)
Hi Everyone, I figured out this weekend that Ruby's Thread implementation causes the Ruby/Event binding I wrote to completely stall and go dead. After reviewing the Ruby source and watching several strace runs, it's clear that the Ruby Thread implementation uses select in a way that--while not being bad--just isn't compatible with libevent. The second a thread is