Displaying 14 results from an estimated 14 matches for "volues".
Did you mean: values
2013 Aug 12
2
Error while starting the node on ESXi hypervisor
...t the following error message:
virsh # define /local/myNode/esxdomain.xml
Domain testNode defined from /local/myNode/esxdomain.xml
virsh # start testNode
error: Failed to start domain testNode
error: internal error: Could not start domain: GenericVmConfigFault -
Cannot open the disk
/vmfs/volues/5208f99d-760cf4a2-000c29520788/testNode.vmdk or one of the
snapshot disks it depends on.
I checked the datastore of my ESX server and found out that instead of a file,
one directory is getting created with the name cluster.vmdk, and inside it
there are a few more files, but the cluster.vmdk file is...
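For reference, the libvirt ESX driver addresses disks by datastore path ("[datastore] directory/file.vmdk") rather than by the raw /vmfs/volumes mount point; a minimal sketch of such a disk element (the datastore and directory names below are placeholders, not taken from this post):

    <disk type='file' device='disk'>
      <source file='[datastore1] testNode/testNode.vmdk'/>
      <target dev='sda' bus='scsi'/>
    </disk>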
2013 Aug 13
1
Re: Error while starting the node on ESXi hypervisor
.../local/myNode/esxdomain.xml
> > Domain testNode defined from /local/myNode/esxdomain.xml
> > virsh # start testNode
> > error: Failed to start domain testNode
> > error: internal error: Could not start domain: GenericVmConfigFault - Cannot
> > open the disk /vmfs/volues/5208f99d-760cf4a2-000c29520788/testNode.vmdk or
> > one of the snapshot disks it depends on.
> >
> >
> > I checked the datastore of my ESX server and found out that instead of a file
> > one directory is getting created with the name cluster.vmdk and inside it...
2013 Aug 13
0
Re: Error while starting the node on ESXi hypervisor
...sage:
>
> virsh # define /local/myNode/esxdomain.xml
> Domain testNode defined from /local/myNode/esxdomain.xml
> virsh # start testNode
> error: Failed to start domain testNode
> error: internal error: Could not start domain: GenericVmConfigFault - Cannot
> open the disk /vmfs/volues/5208f99d-760cf4a2-000c29520788/testNode.vmdk or
> one of the snapshot disks it depends on.
>
>
> I checked the datastore of my ESX server and found out that instead of a file
> one directory is getting created with the name cluster.vmdk and inside it
> there are a few more files but c...
2011 Mar 18
5
Replace NIS by Active Directory
Hi,
I'm looking for a wiki or shared experience on replacing NIS authentication with
an existing Active Directory server (W2003). The problem is the management of
uid and gid.
How do we move the 1000 existing NIS users to AD?
How do we keep the same uid and gid for these 1000 users?
What happens with the Linux NFS server and access by uid/gid?
We also want to use the same user/password for Linux and Windows clients.
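One possible approach (a sketch, not from this thread): publish the existing NIS uid/gid numbers into the AD POSIX attributes (uidNumber/gidNumber) and have the Linux clients read those attributes instead of generating new IDs, for example with sssd and ID mapping turned off. The domain name below is a placeholder:

    # /etc/sssd/sssd.conf (sketch)
    [sssd]
    services = nss, pam
    domains = example.com

    [domain/example.com]
    id_provider = ad
    access_provider = ad
    # read uidNumber/gidNumber stored in AD instead of generating
    # algorithmic IDs, so users keep their old NIS uid/gid
    ldap_id_mapping = False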
2011 Sep 18
2
sarg
I am running squid + sarg; how can I change the IP address in the
generated report into a username? The users are free to surf the web
anonymously and do not need to provide a login or any authentication to the
proxy.
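Without authentication squid only logs the client IP, so sarg has no username to print; one workaround (a sketch, paths are examples) is sarg's usertab file, which maps the logged "user" field, here the IP address, to a display name in the report:

    # /etc/sarg/sarg.conf (sketch)
    usertab /etc/sarg/usertab

    # /etc/sarg/usertab: one "id name" pair per line
    192.168.1.10 alice
    192.168.1.11 bob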
Thanks
2012 May 22
2
Limit max number of e-mails sent per hour
Hello list
I use sendmail-8.14.4-8.el6.x86_64 and I wonder how to restrict the
number of emails sendmail sends over an hour.
Is define(`confMAX_QUEUE_RUN_SIZE', `200') what I'm looking for?
Thank you in advance.
Nikos
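For what it's worth, confMAX_QUEUE_RUN_SIZE only caps how many queued messages a single queue run will process; it is not an hourly sending limit. One possible alternative, sketched below rather than a definitive answer, is the ratecontrol feature, which throttles how fast listed clients may connect and hence submit mail (the IP and limit are placeholders):

    dnl sendmail.mc (sketch)
    FEATURE(`ratecontrol', `nodelay', `terminate')dnl

    # /etc/mail/access (sketch): at most 10 connections per rate window
    ClientRate:192.168.1.10         10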
2011 Jun 27
1
No USB 3.0 and audio sound with CentOS 5.6
Hi,
I installed CentOS 5.6 on a Dell Precision M4600 laptop.
This laptop has 2 USB 3.0 connectors. Nothing works (mouse, USB key, ...)
when I connect something to these 2 ports.
Sound is not working on this laptop either.
[root@localhost ~]# lspci
00:00.0 Host bridge: Intel Corporation Sandy Bridge DRAM Controller (rev
09)
00:01.0 PCI bridge: Intel Corporation Sandy Bridge PCI Express Root
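A quick check of whether the kernel has any driver for the USB 3.0 (xHCI) controller, a diagnostic sketch rather than a fix (CentOS 5's 2.6.18-based kernel most likely ships no xhci driver at all):

    # list USB controllers with PCI IDs; USB 3.0 ports hang off an xHCI controller
    lspci -nn | grep -i usb
    # is an xhci module loaded or even available in this kernel?
    lsmod | grep -i xhci
    modinfo xhci_hcd 2>/dev/null || echo "no xhci driver in this kernel"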
2018 Apr 12
2
Turn off replication
On Wed, Apr 11, 2018 at 7:38 PM, Jose Sanchez <josesanc at carc.unm.edu> wrote:
> Hi Karthik
>
> Looking at the information you have provided me, I would like to make sure
> that I'm running the right commands.
>
> 1. gluster volume heal scratch info
>
If the count is non zero, trigger the heal and wait for heal info count to
become zero.
> 2. gluster volume
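A sketch of that sequence, using the volume name from this thread (scratch) and assuming a replicated volume:

    # count of entries still pending heal
    gluster volume heal scratch info
    # trigger the heal
    gluster volume heal scratch
    # then re-run "heal scratch info" until the count reaches zero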
2018 Apr 25
0
Turn off replication
Hello Karthik
I'm having trouble adding the two bricks back online. Any help is appreciated.
Thanks.
When I try the add-brick command, this is what I get:
[root at gluster01 ~]# gluster volume add-brick scratch gluster02ib:/gdata/brick2/scratch/
volume add-brick: failed: Pre Validation failed on gluster02ib. Brick: gluster02ib:/gdata/brick2/scratch not available. Brick may be containing or be
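The "not available ... may be containing or be contained by an existing brick" pre-validation error usually means the brick directory still carries the volume-id metadata from its earlier membership in the volume. A commonly cited cleanup, sketched with the path from the failed command; it is destructive and only appropriate if the brick really is meant to be re-added as an empty brick:

    # on gluster02ib, clear gluster's metadata from the old brick directory
    setfattr -x trusted.glusterfs.volume-id /gdata/brick2/scratch
    setfattr -x trusted.gfid /gdata/brick2/scratch
    rm -rf /gdata/brick2/scratch/.glusterfs
    # then retry the add-brick command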
2018 Apr 25
2
Turn off replication
Looking at the logs, it seems that it is trying to add the brick using the same port that was assigned to gluster01ib.
Any ideas?
Jose
[2018-04-25 22:08:55.169302] I [MSGID: 106482] [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received add brick req
[2018-04-25 22:08:55.186037] I [run.c:191:runner_log] (-->/usr/lib64/glusterfs/3.8.15/xlator/mgmt/glusterd.so(+0x33045)
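To see which TCP port each brick process is actually using (and whether the new brick would collide with an existing one), a quick check with the volume name used in this thread:

    gluster volume status scratch
    # the Port column lists the port each brick listens on; bind errors, if any,
    # show up in the brick logs under /var/log/glusterfs/bricks/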
2018 Apr 27
0
Turn off replication
Hi Jose,
Why are all the bricks visible in volume info if the pre-validation
for add-brick failed? I suspect that the remove brick wasn't done
properly.
You can provide the cmd_history.log to verify this. It would be better to get
the other log messages as well.
I also need to know which bricks were actually removed,
the command used, and its output.
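The command history Karthik refers to is kept per node; a sketch of pulling the relevant entries (the log path is the usual default location):

    # on each node, list the remove-brick / add-brick commands that were run
    grep -E 'remove-brick|add-brick' /var/log/glusterfs/cmd_history.log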
On Thu, Apr 26, 2018 at 3:47 AM, Jose Sanchez
2018 Apr 30
2
Turn off replication
Hi All
We were able to get all 4 bricks distributed, and we can see the right amount of space, but we have been rebalancing 16 TB for 4 days now and are still at only 8 TB. Is there a way to speed it up? There is also data we could remove to speed it up, but what is the best procedure for removing data: from the Gluster main export point, or by going onto each brick and removing it there? We would like
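One knob that may help with speed, sketched here as a possibility rather than a guaranteed fix, is the rebalance throttle, which controls how aggressively files are migrated in parallel:

    # default is "normal"; "aggressive" migrates more files in parallel
    gluster volume set scratch cluster.rebal-throttle aggressive
    # watch progress
    gluster volume rebalance scratch status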
2018 May 02
0
Turn off replication
Hi,
Removing data to speed up the rebalance is not something that is recommended.
Rebalance can be stopped, but if it is started again it will start from the beginning
(it will have to check and skip the files already moved).
Rebalance will take a while; better to let it run. It doesn't have any
downside.
Unless you touch the backend, the data on the gluster volume will be
available for usage
in spite of
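For completeness, the stop/start behaviour described above corresponds to these commands (a sketch; per the reply, a later start re-walks the volume and skips files already migrated):

    gluster volume rebalance scratch stop
    gluster volume rebalance scratch start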
2018 Jul 13
2
Upgrade to 4.1.1 geo-replication does not work
Hi Kotresh,
Yes, all nodes have the same version, 4.1.1, on both master and slave.
All glusterd processes are crashing on the master side.
I will send logs tonight.
Thanks,
Marcus
################
Marcus Pedersén
Systemadministrator
Interbull Centre
################
Sent from my phone
################
On 13 July 2018 at 11:28, Kotresh Hiremath Ravishankar <khiremat at redhat.com> wrote:
Hi Marcus,
Is the
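The usual data to gather for a geo-replication failure like this, sketched with placeholder volume and slave names (not taken from the thread):

    # on the master: session state
    gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL status
    # glusterd crash details
    less /var/log/glusterfs/glusterd.log
    # geo-replication worker logs
    ls /var/log/glusterfs/geo-replication/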