Similar to: how to recover an accidentally deleted brick directory?

Displaying 20 results from an estimated 100 matches similar to: "how to recover an accidentally deleted brick directory?"

2017 Dec 07
4
GlusterFS, Pacemaker, OCF resource agents on CentOS 7
Hi guys, I'm wondering if anyone here is using the GlusterFS OCF resource agents with Pacemaker on CentOS 7? yum install centos-release-gluster yum install glusterfs-server glusterfs-resource-agents The reason I ask is that there seem to be a few problems with them on 3.10, but these problems are so severe that I'm struggling to believe I'm not just doing something wrong. I created
2017 Dec 08
0
GlusterFS, Pacemaker, OCF resource agents on CentOS 7
Hi, can you please explain for what purpose the Pacemaker cluster is used here? Regards, Jiffin On Thursday 07 December 2017 06:59 PM, Tomalak Geret'kal wrote: > > Hi guys > > I'm wondering if anyone here is using the GlusterFS OCF resource > agents with Pacemaker on CentOS 7? > > yum install centos-release-gluster > yum install glusterfs-server
2013 Nov 21
3
Sync data
Hi guys! I have 2 servers in replicate mode; node 1 has all the data, and node 2 is empty. I created a volume (gv0) and started it. Now, how can I synchronize all files from node 1 to node 2? Steps that I followed: gluster peer probe node1 gluster volume create gv0 replica 2 node1:/data node2:/data gluster volume start gv0 thanks!
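For a replicate volume where one brick already holds data, the usual way to copy everything to the empty brick is to trigger a full self-heal. A minimal sketch, reusing the volume and brick names from the post above:

```shell
# From node1: probe the peer, create a 2-way replica over the existing brick paths,
# and start the volume
gluster peer probe node2
gluster volume create gv0 replica 2 node1:/data node2:/data
gluster volume start gv0

# Trigger a full self-heal so files present only on node1's brick
# are replicated to node2's brick
gluster volume heal gv0 full

# Watch progress until no entries remain pending
gluster volume heal gv0 info
```

Mounting the volume and walking it (e.g. `find /mnt -exec stat {} \;`) also triggers healing on access, but `heal ... full` does it proactively.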
2017 Dec 07
0
GlusterFS, Pacemaker, OCF resource agents on CentOS 7
> > With the node in standby (just one is online in this example, but another > is configured), I then set up the resources: > > pcs node standby > pcs resource create gluster_data ocf:heartbeat:Filesystem > device="/dev/cl/lv_drbd" directory="/gluster" fstype="xfs" > pcs resource create glusterd ocf:glusterfs:glusterd > pcs resource create
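The quoted excerpt cuts off mid-setup; a typical way to finish wiring such a stack is with ordering and colocation constraints so the brick filesystem comes up before glusterd. The resource names and agents below are taken from the post; the `--clone` and the constraints are an assumed conventional layout, not something stated in the thread:

```shell
pcs node standby
# Brick filesystem on the DRBD-backed LV (parameters from the post)
pcs resource create gluster_data ocf:heartbeat:Filesystem \
    device="/dev/cl/lv_drbd" directory="/gluster" fstype="xfs"
# glusterd as a clone so it runs on every cluster node (assumption)
pcs resource create glusterd ocf:glusterfs:glusterd --clone

# Start the brick filesystem before glusterd, and keep them together
pcs constraint order gluster_data then glusterd-clone
pcs constraint colocation add glusterd-clone with gluster_data

pcs node unstandby
```

Without the order constraint, glusterd can start before its brick directory exists, which is one common source of the symptoms described.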
2023 Nov 21
1
Announcing Gluster release 11.1
The Gluster community is pleased to announce the release of Gluster 11.1. Packages available at [1]. Release notes for the release can be found at [2]. *Highlights of Release:* - Fix upgrade issue by reverting posix change related to storage.reserve value - Fix possible data loss during rebalance if there is any link file on the system - Fix maximum op-version for release 11 Thanks, Shwetha
2023 Nov 25
1
Announcing Gluster release 11.1
Great news! Best Regards, Strahil Nikolov On Fri, Nov 24, 2023 at 3:32, Shwetha Acharya <sacharya at redhat.com> wrote: The Gluster community is pleased to announce the release of Gluster 11.1. Packages available at [1]. Release notes for the release can be found at [2]. Highlights of Release: - Fix upgrade issue by reverting posix change related to storage.reserve value - Fix
2023 Nov 27
2
Announcing Gluster release 11.1
I tried downloading the file directly from the website but wget gave me errors: wget https://download.gluster.org/pub/gluster/glusterfs/11/11.1/Debian/12/amd64/apt/pool/main/g/glusterfs/glusterfs-client_11.1-1_amd64.deb --2023-11-27 11:25:50-- https://download.gluster.org/pub/gluster/glusterfs/11/11.1/Debian/12/amd64/apt/pool/main/g/glusterfs/glusterfs-client_11.1-1_amd64.deb Resolving
2023 Nov 27
1
Announcing Gluster release 11.1
I am getting these errors: Err:10 https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/bookworm/amd64/apt bookworm/main amd64 glusterfs-server amd64 11.1-1 Error reading from server - read (5: Input/output error) [IP: 8.43.85.185 443] Fetched 35.9 kB in 36s (1,006 B/s) E: Failed to fetch
2023 Nov 27
1
Announcing Gluster release 11.1
Hi, on the console with wget I see this: 2023-11-27 15:04:35 (317 MB/s) - Read error at byte 32408/3166108 (Error decoding the received TLS packet.). Retrying. That looks strange :-) Best regards, Hubert On Mon, 27 Nov 2023 at 14:44, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote: > > I am getting these errors: > > Err:10
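When a transfer aborts mid-download like this, resuming with `-c` and then sanity-checking the file helps distinguish a flaky mirror from a local TLS or proxy problem. A sketch, using the package URL from the earlier message (the checksum step assumes the mirror publishes sums; verify against those if available):

```shell
# Resume the partial download instead of restarting from byte 0
wget -c https://download.gluster.org/pub/gluster/glusterfs/11/11.1/Debian/12/amd64/apt/pool/main/g/glusterfs/glusterfs-client_11.1-1_amd64.deb

# Sanity-check the result before installing:
sha512sum glusterfs-client_11.1-1_amd64.deb       # compare with the mirror's published sums
dpkg --info glusterfs-client_11.1-1_amd64.deb     # fails loudly on a truncated .deb
```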
2016 Sep 20
1
[PATCH] libvirt: read disk paths from pools (RHBZ#1366049)
A disk of type 'volume' is stored as <source pool='pool_name' volume='volume_name'/> and its real location is inside the 'volume_name', as 'pool_name': in this case, query libvirt for the actual path of the specified volume in the specified pool. Adjust the code so that: - for_each_disk gets the virConnectPtr, needed to do operations with libvirt
2009 Jul 13
1
[PATCH] Use volume key instead of path to identify volume.
This patch teaches taskomatic to use the volume 'key' instead of the path from libvirt to identify the volume in the database. This fixes the duplicate iscsi volume bug we were seeing. The issue was that libvirt changed the way they name storage volumes and included a local ID that changed each time it was attached. Note that the first run with this new patch will cause duplicate
2016 Sep 22
1
[PATCH v2] libvirt: read disk paths from pools (RHBZ#1366049)
A disk of type 'volume' is stored as <source pool='pool_name' volume='volume_name'/> and its real location is inside the 'volume_name', as 'pool_name': in this case, query libvirt for the actual path of the specified volume in the specified pool. Adjust the code so that: - for_each_disk gets the virConnectPtr, needed to do operations with libvirt
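The lookup the patch performs can be reproduced by hand with virsh; a sketch, with `pool_name` and `volume_name` standing in for the attributes of the `<source pool='...' volume='...'/>` element:

```shell
# For a disk of type 'volume', ask libvirt for the real on-disk path
# of the named volume inside the named pool
virsh -c qemu:///system vol-path --pool pool_name volume_name
# prints something like /var/lib/libvirt/images/volume_name (path depends on the pool)
```

This is the same pool-plus-volume-to-path resolution the patch does through the libvirt API via the virConnectPtr.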
2017 Aug 30
0
Unable to use Heketi setup to install Gluster for Kubernetes
Hi, I have the following setup in place: 1 node : RancherOS having the Rancher application for Kubernetes setup 2 nodes : RancherOS having the Rancher agent 1 node : CentOS 7 workstation having kubectl installed and a folder cloned/downloaded from https://github.com/gluster/gluster-kubernetes using which I run the Heketi setup (gk-deploy -g) I also have the rancher-glusterfs-server container running with
2018 Jan 07
1
Clear heal statistics
Is there any way to clear the historic statistics from the command "gluster volume heal <volume_name> statistics"? It seems the command takes longer and longer to run each time it is used, to the point where it times out and no longer works.
2010 Apr 10
3
nfs-alpha feedback
I ran the same dd tests from KnowYourNFSAlpha-1.pdf and performance is inconsistent and causes the server to become unresponsive. My server freezes every time when I run the following command: dd if=/dev/zero of=garb bs=256k count=64000 I would also like to mount a path like: /volume/some/random/dir # mount host:/gluster/tmp /mnt/test mount: host:/gluster/tmp failed, reason given by server: No
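Gluster's built-in NFS server historically exports only the volume root, so mounting a subdirectory fails unless that directory is explicitly exported. A hedged sketch using the names from the post (assuming the volume is called `gluster` and the NFS server is the gluster-nfs one; the option name is per the gluster volume options documentation):

```shell
# Allow the subdirectory to be exported over gluster-nfs
gluster volume set gluster nfs.export-dir "/tmp"

# Then the subdirectory mount the poster attempted should be accepted
mount -t nfs host:/gluster/tmp /mnt/test
```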
2016 Nov 16
3
[PATCH 1/2] libvirt: un-duplicate XPath code
Move the checks for an empty xmlXPathObjectPtr, and for extracting the result string out of it, to new helper functions. This is just code motion; there should be no behaviour changes. --- src/libvirt-domain.c | 122 +++++++++++++++++++++------------------------- 1 file changed, 50 insertions(+), 72 deletions(-) diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c index 4d4142d..baab307
2010 Feb 25
1
[PATCH] fix storage problem.
Since the Ruby::Qmf move, the .key() method no longer works. This forces the use of .get_attr('key') in order to get the correct value. Signed-off-by: Loiseleur Michel <mloiseleur at linagora.com> --- src/task-omatic/taskomatic.rb | 15 +++++++-------- 1 file changed, 7 insertions(+), 8 deletions(-) diff --git a/src/task-omatic/taskomatic.rb b/src/task-omatic/taskomatic.rb index
2010 Aug 29
2
OSX 10.6.4 error with -R option
Hi All, I have had reports of problems with the -R option on OSX 10.6.4. Just tested it myself and found this odd result: When I run this "dtruss -f path/to/rsync -aHAXNR --fileflags --force-change --protect-decmpfs --stats -v /Users/astrid/Documents/main.m /Users/astrid/Desktop/rrr" it produces the expected results with the relative folder paths in place
2008 Jul 29
4
OCFS2 and VMware ESX
Hi, We are having some serious issues trying to configure an OCFS2 cluster on 3 SLES 10 SP2 boxes running in VMware ESX 3.0.1. Before I go into any of the detailed errors we are experiencing, I first wanted to ask everyone if they have successfully configured this solution? We would be interested to find out what needs to be set at the VMware level (RDM, VMFS, NICS etc) and what needs to be
2023 Nov 27
0
Announcing Gluster release 10.5
The Gluster community is pleased to announce the release of Gluster 10.5. Packages available at [1]. Release notes for the release can be found at [2]. *Highlights of Release:* - Fix upgrade issue by reverting posix change related to storage.reserve value - Fix the issue of the brick process crashing during the upcall event Thanks, Shwetha References: [1] Packages for 10.5