Displaying 20 results from an estimated 20000 matches similar to: "open file heal"
2009 Jul 29
2
Xen - Backend or Frontend or Both?
I have a client config (see below) spread across 6 boxes. I am using
distribute across 3 replicate pairs. Since I am running Xen I need
disable-direct-io, and that slows things down quite a bit. My thought was
to move the replicate/distribute layer to the backend server config so that
self-heal can happen on the faster backend rather than on the frontend
client with disable-direct-io.
Does this
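A minimal sketch of what that server-side arrangement might look like,
assuming the GlusterFS 2.x volfile syntax of that era; volume and host
names are illustrative, not from the original post:

volume posix0
  type storage/posix
  option directory /data/export
end-volume

volume mirror0                     # connection to the pair's other server
  type protocol/client
  option transport-type tcp
  option remote-host peer-host     # hypothetical peer name
  option remote-subvolume posix0
end-volume

volume repl0                       # server-side replicate, so self-heal
  type cluster/replicate           # runs on the backend
  subvolumes posix0 mirror0
end-volume

Clients would then only carry a distribute translator over the three
replicated exports.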
2009 Feb 23
1
Interleave or not
Let's say you had 4 servers and you wanted to set up replicate and
distribute. Which method would be better:
server sdb1
xen0 brick0
xen1 mirror0
xen2 brick1
xen3 mirror1
replicate block0 - brick0 mirror0
replicate block1 - brick1 mirror1
distribute unify - block0 block1
or
server sdb1 sdb2
xen0 brick0 mirror3
xen1 brick1 mirror0
xen2 brick2 mirror1
xen3 brick3 mirror2
replicate block0 -
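The preview cuts off here; extrapolating from the placement table above
(mirrorN sits one host over from brickN), the chained layout would
presumably continue as follows, though this is a guess, not the original
text:

replicate block0 - brick0 mirror0
replicate block1 - brick1 mirror1
replicate block2 - brick2 mirror2
replicate block3 - brick3 mirror3
distribute unify - block0 block1 block2 block3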
2008 Aug 23
7
Bridge Networking stops working at random times
Supermicro X7DWN+, XEON 5410, Centos 5.2, Xen 3.2.1
At what look like random times, network traffic over the Xen bridge stops
working; the only way I have found to fix it is a reboot. Sometimes this
happens within 10 minutes; other times the box may stay up for 10 days.
This happened with the default Xen that comes with RedHat EL 5.2 as well
as with a default install of Fedora 8.
Any ideas?
><>
Nathan Stratton
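A few hedged first steps for inspecting the bridge when this happens
(xenbr0 was the default bridge name; adjust to your setup):

brctl show                # is the bridge up, with the vifs still attached?
brctl showmacs xenbr0     # has the forwarding table gone stale?
ip -s link show xenbr0    # are error/drop counters climbing?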
2009 Feb 27
0
Xen issues with Gluster
Trying to get Xen working with Gluster. I understand you need
--disable-direct-io-mode; however, that cuts my disk I/O from 130 MB/s to
only 22 MB/s.
Also, when I try to use Xen with tap:aio, I only get as far as "Creating
root device":
Red Hat nash version 5.1.19.6 starting
Mounting proc filesystem
Mounting sysfs filesystem
Creating /dev
Creating initial device nodes
Setting up hotplug.
Creating
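For reference, a tap:aio disk line in a domU config of that era looked
roughly like this (the image path is hypothetical):

disk = [ 'tap:aio:/mnt/glusterfs/images/domu-root.img,xvda,w' ]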
2008 Sep 23
1
Converting XenEnterprise image to XenSource
I am running Xen 3.3.0 and am trying to run a XenEnterprise image.
The image has an hda directory with files called chunk-000000000.gz -
chunk-000000042.gz and a file called ova.xml with the following:
<appliance version="0.1">
<vm name="VM">
<label> Manager </label>
<shortdesc> </shortdesc>
<config
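Those chunks are typically just a gzipped raw disk image split into
pieces, so one hedged approach (untested; verify the chunk ordering
first) is to reassemble them:

cd hda
cat chunk-*.gz | gzip -dc > /var/lib/xen/images/manager-hda.img
# then point a tap:aio or file: disk line at the resulting image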
2006 Sep 04
7
Xeon 5160 vs 5080
Chip   Clock    HT    Cache   Bus Speed
---------------------------------------
5080   3.7 GHz  YES   2MB     1066 MHz
5160   3.0 GHz  NO    4MB     1333 MHz
Are the extra 0.7 GHz and HT worth more than the 4MB cache and the higher
bus speed? The application is VoIP, so there is not a lot of I/O, and I
would not think bus speed would matter. I am finding mixed information on
HT; some say it is great, others say it
2009 Jul 18
1
GlusterFS & XenServer Baremetal
Hello,
What do you think is the best GlusterFS scenario for use with XenServer
(I'm not talking about Xen on Linux but bare-metal XenServer) for a web
farm (Apache/Tomcat)? I was thinking of using ZFS as the filesystem for
the different nodes.
The objectives/needs:
* A storage cluster with capacity equal to at least 1 node (assuming all
nodes are the same).
* Being able to lose/take down any
2009 Feb 23
3
Infiniband Drivers under Xen kernel?
Hello folks,
I am running Xen 2.6 under a CentOS 5.2 installation, and am trying to
port Xen migration abilities to InfiniBand. But the InfiniBand drivers do
not work under the Xen kernel, only under non-Xen kernels. It looks like
I need to install some patches to rectify the situation.
Has anyone ever worked with InfiniBand and Xen and would be able to give
me input on this? I would really
2009 Jan 11
2
drifting clock in domUs
Hello,
On a xenserver with several (39) domUs, we experience problems with the
system clock of the domUs. The clock seems to drift away several seconds
up to two minutes from the dom0 clock.
We do have set independent_wallclock=0. According to the docs (i.e.
http://wiki.xensource.com/xenwiki/InstallationNotes), domUs should then
use the dom0 clock, but apparently that's not the case.
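One hedged sanity check: independent_wallclock is exposed as a sysctl
inside each domU, so confirm the value actually in effect there:

cat /proc/sys/xen/independent_wallclock    # should print 0
echo 0 > /proc/sys/xen/independent_wallclock
# persist it, assuming the guest applies /etc/sysctl.conf at boot:
echo "xen.independent_wallclock = 0" >> /etc/sysctl.conf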
2018 Feb 09
1
self-heal trouble after changing arbiter brick
Hi Karthik,
Thank you very much; that puts me much more at ease. Below is the getfattr output for a file from all the bricks:
root at gv2 ~ # getfattr -d -e hex -m . /data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack
getfattr: Removing leading '/' from absolute path names
# file: data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack
2005 Jun 02
2
voip provider request
I am looking for a VoIP provider with good rates (below 5 cents/min, or
unlimited) to UK NCFA numbers. Broadvoice advertises unlimited calling to
NCFA but, per CTO Nathan Stratton, does not have the ability to actually
terminate those calls, and he last said they don't even have contracts in
place to get service provisioned for that. As such I am looking for
another provider to take
2018 Feb 09
0
self-heal trouble after changing arbiter brick
Hey,
Did the heal complete, and do you still have some entries pending heal?
If yes, can you provide the following information to debug the issue:
1. Which version of gluster you are running
2. gluster volume heal <volname> info summary or gluster volume heal
<volname> info
3. getfattr -d -e hex -m . <filepath-on-brick> output of any one of the
files which is pending heal, from all
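Collected into one command list (volume name and brick path are
placeholders):

gluster --version
gluster volume heal <volname> info summary   # or: ... heal <volname> info
getfattr -d -e hex -m . <filepath-on-brick>  # on every brick, for one
                                             # file that is pending heal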
2018 Feb 09
0
self-heal trouble after changing arbiter brick
Hi Karthik,
Thank you for your reply. The heal is still under way, as
/var/log/glusterfs/glustershd.log keeps growing, and there are a lot of
pending entries in the heal info.
The gluster versions are 3.10.9 and 3.10.10 (a version update is in
progress). This version doesn't have info summary [yet?], and the heal
info is way too long to attach here. (It takes more than 20 minutes just
to collect
2009 Jan 27
2
Monitoring time drift of hosts
Reading the Xen list, I see frequent postings regarding NTP time drift
issues for virtualized guests, correct configurations, etc.
We have a no-cost solution for monitoring time drift of hosts, for anyone
needing to do so or wanting to determine whether their environment is
maintaining time synchronization. For etiquette's sake I am not
publishing the information to the mailing list. If interested,
2008 Jul 31
6
drbd 8 primary/primary and xen migration on RHEL 5
Greetings.
I've reviewed the list archives, particularly the posts from Zakk, on
this subject, and found results similar to his. drbd provides a
block-drbd script, but with full virtualization, at least on RHEL 5, this
does not work; by the time the block script is run, the qemu-dm has
already been started.
Instead I've been simply mulling over the possibility of keeping the drbd
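For context, a minimal drbd 8 dual-primary fragment, as a sketch of the
prerequisite configuration (resource name and split-brain policies are
illustrative, and become-primary-on assumes a drbd 8.3-era version):

resource r0 {
  net {
    allow-two-primaries;
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
  }
  startup {
    become-primary-on both;
  }
}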
2009 Jan 27
20
Xen SAN Questions
Hello Everyone,
I recently asked a question about GFS+DRBD clusters for Xen VM storage
that got no responses, but after some consideration (and a lot of
Googling) I have a couple of new questions.
Basically what we have here are two servers that will each have a RAID-5
array of 5 x 320GB SATA drives. I want to have these as usable file
systems on both servers (as they will both be
2018 Feb 08
5
self-heal trouble after changing arbiter brick
Hi folks,
I'm having trouble moving an arbiter brick to another server because of I/O load issues. My setup is as follows:
# gluster volume info
Volume Name: myvol
Type: Distributed-Replicate
Volume ID: 43ba517a-ac09-461e-99da-a197759a7dc8
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gv0:/data/glusterfs
Brick2: gv1:/data/glusterfs
Brick3:
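For context, the usual way to move a brick in gluster 3.10 is
replace-brick; a hedged sketch with placeholder host names:

gluster volume replace-brick myvol \
  old-arbiter:/data/glusterfs new-arbiter:/data/glusterfs \
  commit force
gluster volume heal myvol info    # then watch the resulting heal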
2008 Dec 23
2
DomU strange network behavior
I am experiencing strange network behavior from several DomUs. I can ssh
from a DomU to any host on the local lan/vlan, and I can ping those
hosts. However, when I go to resolve a hostname, DNS fails. I have
verified that three other DomUs are exhibiting the same behavior. I have
also verified that Dom0 is functioning properly and can resolve hostnames
and access hosts outside of the
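One hedged observation: ssh and ping working while name resolution fails
points at UDP specifically, and on Xen guests of that era broken checksum
offload on the virtual NIC was a common culprit (interface name is a
guess):

tcpdump -ni eth0 udp port 53    # do queries leave, and do replies arrive?
ethtool -K eth0 tx off          # workaround: disable TX checksum offload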
2015 Sep 12
0
tinc generating invalid packet checksums?
On Thu, Sep 10, 2015 at 19:34:21 -0400, Nathan Stratton Treadway wrote:
> It seems that when the cksum is incorrect, it is always off by 1.
>
[...]
> Am I correct in concluding that this cksum problem is a bug in Tinc?
After investigating this further, I'm fairly certain that the problem
originates in the following lines of the clamp_mss() function in
route.c:
[...]
csum ^=
2012 Jun 22
1
Fedora 17 GlusterFS 3.3.0 problems
When I do an NFS mount and run ls I get:
[root at ovirt share]# ls
ls: reading directory .: Too many levels of symbolic links
[root at ovirt share]# ls -fl
ls: reading directory .: Too many levels of symbolic links
total 3636
drwxr-xr-x 3 root root 16384 Jun 21 19:34 .
dr-xr-xr-x. 21 root root 4096 Jun 21 19:29 ..
drwxr-xr-x 3 root root 16384 Jun 21 19:34 .
dr-xr-xr-x. 21 root root 4096
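Gluster's built-in NFS server speaks NFSv3 only, while Fedora 17 mounts
NFSv4 by default, so one hedged thing to try is forcing the mount to v3
(server and export path are placeholders):

mount -t nfs -o vers=3,nolock server:/share /mnt/share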