similar to: Recommended filesystem for GlusterFS bricks.

Displaying 20 results from an estimated 1000 matches similar to: "Recommended filesystem for GlusterFS bricks."

2013 Aug 21
1
FileSize changing in GlusterNodes
Hi, When I upload files into the gluster volume, it replicates all the files to both gluster nodes. But the file size varies slightly (by 4-10 KB) between the nodes, which changes the md5sum of the file. Command to check file size: du -k *. I'm using GlusterFS 3.3.1 with CentOS 6.4. This is creating inconsistency between the files on the two bricks. What is the reason for this changed file size and how can
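
A likely explanation: du reports allocated disk blocks rather than the byte length of the file, so different allocation behaviour on each brick can change the du figure without the data differing. A quick way to separate the two (a sketch; the file name is hypothetical):

    # Apparent size in bytes -- should match on both bricks
    ls -l somefile
    # Allocated blocks -- may legitimately differ between bricks
    du -k somefile
    # Checksum the content itself; if this differs, the bricks really are inconsistent
    md5sum somefile
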
2013 Sep 19
2
Support for GlusterFS
Hi, Is there an option to procure support for a GlusterFS deployment? As we are moving into core production scenarios with GlusterFS in mind, it would be quite reassuring to have this confirmed. Thanks & Regards, Bobby Jacob
2009 Jun 29
2
Building Custom Kernel - CentOS 4.4
Hi All, I am having an issue when trying to build a custom kernel on CentOS 4.4. The current kernel version is 2.6.9-42.ELsmp and the server is an HP ProLiant DL380 G3. I downloaded the source RPM and installed it, then ran: rpmbuild -bp --target=i686 /usr/src/redhat/SPECS/kernel-2.6.spec. But it throws an error after the patch operations. The following is the error.
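
For reference, the usual CentOS 4 sequence is roughly the following (a sketch; the exact source RPM file name is assumed):

    # Install the kernel source RPM (unpacks into /usr/src/redhat)
    rpm -ivh kernel-2.6.9-42.EL.src.rpm
    # Unpack the sources and apply all patches for the i686 target
    rpmbuild -bp --target=i686 /usr/src/redhat/SPECS/kernel-2.6.spec
    # Once -bp succeeds, build the binary kernel packages
    rpmbuild -bb --target=i686 /usr/src/redhat/SPECS/kernel-2.6.spec
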
2009 May 30
2
Queue - Multiple Transfer
Hi all, I've set up a queue with 2+ agents for managing our inbound calls from customers, using Asterisk 1.2.18 on a CentOS box. Agents log in using the AgentCallbackLogin application, and I use a Bash AGI to accomplish this, as there are some validations done against a MySQL DB. I'm aware that transfers can be done with option 't' in the Queue() application, and I was able to successfully transfer
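
For context, the 't' option is passed to Queue() in the dialplan rather than set in queues.conf; a minimal sketch in 1.2-era syntax (queue name and extension are hypothetical):

    ; extensions.conf -- let the answering agent transfer the caller
    exten => 100,1,Answer()
    exten => 100,2,Queue(inbound|t)
    exten => 100,3,Hangup()
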
2013 Oct 02
1
Shutting down a GlusterFS server.
Hi, I have a 2-node replica volume running with GlusterFS 3.3.2 on CentOS 6.4. I want to shut down one of the gluster servers for maintenance. Is there any best practice to follow when turning off a server, in terms of services etc., or can I just shut the server down? Thanks & Regards, Bobby Jacob
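
With a 2-node replica the surviving node keeps serving while the other is down; a commonly suggested sequence for 3.3.x looks like this (a sketch; the volume name is hypothetical):

    # Make sure nothing is pending heal before taking the node down
    gluster volume heal glustervol info
    # Stop the management daemon, then any remaining brick processes
    service glusterd stop
    pkill glusterfsd
    # ... maintenance, reboot, etc. ...
    # After the node is back, watch self-heal catch up
    gluster volume heal glustervol info
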
2009 Jul 04
2
x86_64 EDAC throwing error
Hi All, We have installed CentOS 5.3 x86_64 on an HP DL585 server with 64-bit AMD Opteron processors and 16 GB RAM. The kernel version is 2.6.18-128.el5. It has now thrown an error message in /var/log/messages: Jul 3 21:41:11 db1 kernel: EDAC k8 MC0: general bus error: participating processor(local node origin), time-out(no timeout) memory transaction type(generic read), mem or i/o(mem
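
If the edac-utils package is installed, the per-controller error counters can be read directly, which helps tell a one-off corrected error from a failing DIMM (a sketch; assumes edac-utils is available):

    # Summarise corrected/uncorrected error counts
    edac-util -v
    # The raw counters are also exposed via sysfs
    grep . /sys/devices/system/edac/mc/mc*/ce_count
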
2009 Mar 10
1
Asterisk and WebIntegration
Hi All, Is there a way I can include call-dialing functionality in a web interface? I have EyeBeam configured with a SIP user, say 8440. Would I be able to design an interface where an agent can choose a number and dial it without punching the number into EyeBeam? I tried using the .call file approach: the agent chooses which number to dial from a web interface, then a .call file is created with
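
The .call spool file format itself is simple; a minimal sketch (the SIP user follows the post, everything else is hypothetical):

    ; written by the web app, then moved into /var/spool/asterisk/outgoing/
    Channel: SIP/8440
    Context: outbound-dial
    Extension: 915551234567
    Priority: 1
    CallerID: WebDialer <8440>

One caveat worth noting: write the file elsewhere and mv it into the outgoing directory, so Asterisk never picks up a half-written file.
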
2009 May 20
1
Queue and Dial operation - Common Variables?
Hi All, I am trying to implement ACD using Asterisk 1.2.18 and I've chosen AgentCallbackLogin for login purposes. One AGI script is executed when an agent dials '1001' (say) from his SIP phone and enters the queue. A second AGI is executed when the Dial operation is performed. I see that the agi_uniqueid values obtained from the two AGI instances are different, and I
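
One common workaround is to stamp the call with an inherited channel variable before it leaves the queue, since variables prefixed with a double underscore are inherited by channels created by Dial() (a sketch; the variable and AGI names are hypothetical):

    ; extensions.conf
    exten => 1001,1,AGI(first-stage.agi)
    exten => 1001,2,Set(__CALL_TAG=${UNIQUEID})   ; __ makes it follow the outbound leg
    exten => 1001,3,Queue(inbound)
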
2013 Jul 09
2
Gluster Self Heal
Hi, I have a 2-node gluster setup with 3 TB of storage. 1) I believe "glusterfsd" is responsible for the self-healing between the 2 nodes. 2) Due to some network error, the replication stopped for some reason, but the application was still accessing the data from node1. When I manually try to start the "glusterfsd" service, it's not starting. Please advise on how I can maintain
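
In 3.3.x the self-heal daemon is managed by glusterd, and heal state can be checked per volume rather than by starting glusterfsd by hand (a sketch; the volume name is hypothetical):

    # Confirm brick and self-heal daemon processes are online
    gluster volume status glustervol
    # See what still needs healing, and trigger a full crawl
    gluster volume heal glustervol info
    gluster volume heal glustervol full
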
2009 Jun 07
2
Call recording in - out
Hello to all, I'm trying to record the calls going to my queues, but Asterisk creates 2 files, one with the inbound and another with the outbound sound. I know sox should mix the 2 files automatically at the end, but this isn't happening. I have sox installed on my server. How can I force sox to mix the files? Here is my config (queues.conf): [general]
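
Mixing the two legs by hand is a quick way to confirm sox works before digging into the monitor configuration (a sketch; file names are hypothetical):

    # Mix the inbound and outbound legs into one recording
    sox -m rec-in.wav rec-out.wav rec-mixed.wav

Depending on the Asterisk version, the automatic mixing is done by the Monitor() 'm' option (which shells out to soxmix), so that flag is worth checking too.
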
2011 Apr 09
1
custom iptables rules
Hi, Is there a way we can add custom iptables rules on a NAT'd physical host? I need some custom rules on the physical host to access some services on the guest systems. Any hints on this? Regards, Kurian Thayil.
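
For a libvirt-style NAT, the usual pattern is a DNAT rule on the host plus an explicit FORWARD accept (a sketch; addresses and ports are hypothetical):

    # Forward host port 2222 to SSH on the NAT'd guest
    iptables -t nat -A PREROUTING -p tcp --dport 2222 \
        -j DNAT --to-destination 192.168.122.10:22
    # Allow the forwarded traffic through to the guest
    iptables -I FORWARD -d 192.168.122.10 -p tcp --dport 22 -j ACCEPT
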
2009 Apr 22
5
Asterisk routine maintenance activities
Hello(s), I know this might be a textbook question, or one best suited for Google, but I'll take the risk of asking. Here I go: what common routine maintenance tasks do you run on your Asterisk box? Thanks, James.
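
For a starting point, a few commonly cited checks (a sketch; paths assume a stock install, CLI syntax is 1.4+):

    # Watch disk usage from recordings, voicemail and CDRs
    du -sh /var/spool/asterisk/* /var/log/asterisk
    # Rotate Asterisk's own logs without a restart
    asterisk -rx "logger rotate"
    # Live health checks
    asterisk -rx "core show channels"
    asterisk -rx "sip show peers"
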
2013 Jul 24
2
Healing in glusterfs 3.3.1
Hi, I have a GlusterFS 3.3.1 setup with 2 servers and a replicated volume used by 4 clients. Sometimes, from some clients, I can't access some of the files. After I force a full heal on the brick, I see several files healed. Is this behavior normal? Thanks -- Paulo Silva <paulojjs at gmail.com>
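
For 3.3.x the heal state can be inspected per volume, which helps tell pending heals from split-brain (a sketch; the volume name is hypothetical):

    # Entries still pending heal, recently healed, and split-brain
    gluster volume heal myvol info
    gluster volume heal myvol info healed
    gluster volume heal myvol info split-brain
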
2008 Dec 01
2
Error while copying/moving file
2012 Jun 11
3
centos 6.2 xfs + nfs space allocation
CentOS 6.2 system with an XFS filesystem, which I'm sharing over NFS. When I create a 10-gigabyte test file from an NFS client system:

dd if=/dev/zero of=10Gtest bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 74.827 s, 140 MB/s

Output from 'ls -al ; du' during this test:

-rw-r--r-- 1 root root 429170688 Jun 8 10:13 10Gtest
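
The gap between the two numbers is expected while the write is in flight: ls -l reports the apparent byte length, du reports blocks actually allocated, and XFS speculative preallocation plus NFS client-side caching can keep them apart until the data is flushed. A quick check (a sketch; the file name follows the poster's test):

    # Compare apparent vs allocated size, then flush and compare again
    ls -l 10Gtest ; du -k 10Gtest
    sync
    ls -l 10Gtest ; du -k 10Gtest
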
2013 Dec 03
3
Self Heal Issue GlusterFS 3.3.1
Hi, I'm running GlusterFS 3.3.1 on CentOS 6.4.

gluster volume status
Status of volume: glustervol
Gluster process                          Port    Online  Pid
------------------------------------------------------------------------------
Brick KWTOCUATGS001:/mnt/cloudbrick      24009   Y       20031
Brick KWTOCUATGS002:/mnt/cloudbrick
2013 Aug 22
2
Error when creating volume
Hello, I've removed a volume and I can't re-create it:

gluster volume create gluster-export gluster-6:/export gluster-5:/export gluster-4:/export gluster-3:/export
/export or a prefix of it is already part of a volume

I've formatted the partition and reinstalled the 4 gluster servers, and the error still appears. Any idea? Thanks.
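
That error usually means the old volume's extended attributes survived on the brick directory (for example if /export is a directory on a filesystem that was not itself reformatted). A commonly cited cleanup, run on each server against its own brick path (a sketch):

    # Remove the gluster markers left over from the previous volume
    setfattr -x trusted.glusterfs.volume-id /export
    setfattr -x trusted.gfid /export
    rm -rf /export/.glusterfs
    # Restart glusterd on each node, then retry the volume create
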
2017 Oct 26
5
Re: Need to increase the rx and tx buffer size of my interface
Hi Ashish, I have tested with your XML from the first mail, and it works for rx_queue_size (see below). Multiqueue needs to work with the vhost backend driver, and when you set "queues=1" it will be ignored. Please check your qemu-kvm-rhev package; it should be newer than qemu-kvm-rhev-2.9.0-16.el7_4.2. And the logs? tx_queue_size='512' will not work in the guest with a direct type interface,
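
For reference, both sizes go on the <driver> element of the interface and only apply with the virtio model (a sketch; device names and values are illustrative):

    <interface type='direct'>
      <source dev='eth0' mode='bridge'/>
      <model type='virtio'/>
      <!-- per the thread, tx_queue_size may have no effect for this type -->
      <driver name='vhost' rx_queue_size='1024' tx_queue_size='512'/>
    </interface>
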
2017 Oct 26
2
Re: Need to increase the rx and tx buffer size of my interface
On 10/26/2017 10:38 AM, Ashish Kurian wrote:
> Hi Yalan and Michal,
>
> Thank you for your response. So what I understand is that I can change
> rx_queue size even if I use direct type interface and qemu driver as long
> as the driver is virtio. Am I right?

Yes.

> If that is the case why am I getting the error saying that
>
> error: XML document failed to validate
2012 Mar 15
2
Usage Case: just not getting the performance I was hoping for
All, For our project, we bought 8 new Supermicro servers. Each server has a quad-core Intel CPU in a 2U chassis supporting 8 x 7200 RPM SATA drives. To start out, we only populated 2 x 2TB enterprise drives in each server and added all 8 peers, with their total of 16 drives as bricks, to our gluster pool as distributed replicated (2). The replicas worked as follows: 1.1 -> 2.1 1.2
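
For reference, a distributed-replicated pool of that shape is created by listing bricks so that each consecutive pair forms a replica set (a sketch; host and path names are hypothetical):

    # replica 2: bricks are paired in the order given
    gluster volume create datavol replica 2 \
        server1:/bricks/b1 server2:/bricks/b1 \
        server1:/bricks/b2 server2:/bricks/b2
    gluster volume start datavol
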