Displaying 20 results from an estimated 40000 matches similar to: "any testing suite available"
2006 Feb 03
2
warnings on symlinks using link-dest
Hi, I'm using rsync with --link-dest to make snapshot-like backups into
/sawmill/backup/{hostname}/snapshot/{timestamp}/{root}
I'm getting warnings that I don't understand...
On Fri, Feb 03, 2006 at 05:00:01AM -0000, Cron Daemon wrote:
>+ rsync --recursive --links --perms --times --group --owner --devices --numeric-ids --exclude '*.boot' --exclude '*.lock' --exclude
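The cron command above is cut off in the archive; for context, a minimal sketch of a --link-dest snapshot run looks roughly like this (host name, timestamps, and excludes are placeholders, not taken from the report):
prev=/sawmill/backup/host1/snapshot/2006-02-02T0500
dest=/sawmill/backup/host1/snapshot/2006-02-03T0500
rsync --recursive --links --perms --times --group --owner --devices \
    --numeric-ids --exclude '*.boot' --exclude '*.lock' \
    --link-dest="$prev" host1:/ "$dest"
Unchanged files are hard-linked from the previous snapshot directory, so each timestamped tree looks complete while only changed files consume new space.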
2005 Aug 15
2
encrypted destination
In the archives I see the question about encrypted destination and it's
mostly answered with the --source-filter / --dest-filter patch by Kyle
Jones. There are also some proposed updates to the patch.
A lot of these posts are 3 years old; are there plans, or reasons not, to
include them in the mainline code?
// George
--
George Georgalis, systems architect, administrator <IXOYE><
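For readers landing here: the patch under discussion is typically invoked along these lines. The exact option syntax is an assumption on my part (the patch lives outside mainline rsync), and the gpg recipient and paths are placeholders:
# hypothetical use of the out-of-tree --dest-filter patch; the filter command
# is expected to read file data on stdin and write transformed data to stdout
rsync --archive --dest-filter='gpg --batch --encrypt --recipient BACKUP-KEY' /srcdir/ backuphost:/encrypted/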
2007 Jan 25
3
r tidy
Is there an R tidy program? Something that works similarly to perltidy
(http://perltidy.sourceforge.net/), which takes program code
and reformats whitespace with standard indentation and spacing?
I did find a Ruby-based rtidy, but that is for HTML formatting.
// George
--
George Georgalis, systems architect, administrator <IXOYE><
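For later readers: the formatR package covers this use case. A minimal sketch, assuming formatR is installed and a file named messy.R exists (the package appeared a few years after this question was asked):
# print a reformatted version of an R source file to the console
Rscript -e 'formatR::tidy_source("messy.R")'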
2007 Jan 14
4
feature request, hardlink progress......
I'm copying a partition that has a bunch of hardlink-based
snapshots (-aPH). I think there are about
250,000 files in each backup and between 100 and 200
snapshots.
Earlier today, I saw the files had completed and it
was making all the hardlinks. I thought it would be
"not long" but it's been making hardlinks for 12
hours (at least).
There's only 36Gb in snapshot, the
2008 May 19
3
R static is dynamically linked!!
Hi,
After doing all I could find with the configure script...
I set some environment variables too...
export enable_R_static_lib=yes
export want_R_static=yes
export WANT_R_STATIC_TRUE=yes
./configure \
--prefix=${i} \
--enable-R-static-lib \
--enable-static \
--without-readline \
--without-iconv \
&& make \
&& make install \
&& echo "R ${v}
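A quick way to check what the build actually produced is to look under the install prefix; a sketch, assuming the prefix ${i} used above (on some platforms the library directory is lib64 rather than lib):
ls -l ${i}/lib/R/lib/          # --enable-R-static-lib should leave a libR.a here
ldd ${i}/lib/R/bin/exec/R      # shows what the R front-end binary is linked against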
2008 May 08
1
odd behavior on remote
I've been using rsync for some time (years) to generate
many hardlink snapshots per day; but I'm seeing an odd
new problem today.
The remote/destination host gets a file list from the
source machine via ssh, and begins to write files until
it "hangs". On this run only one file was transferred; on
other runs many screenfuls went across
+ rsync --recursive --links --perms
2008 Mar 30
1
using rsync on raw device
Hi -- congratulations on the 3.0 release!
I'm trying to use rsync to manage a raw disk image file.
rsync --checksum --perms --owner --group --sparse --partial --progress \
192.168.80.189:/dev/rwd0d /u0510a/rwd0d.img
skipping non-regular file "rwd0d"
sent 20 bytes received 69 bytes 178.00 bytes/sec
total size is 0 speedup is 0.00
rsync version 2.6.9 protocol version 29
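Stock rsync only transfers the contents of regular files, which is why /dev/rwd0d is skipped. A common workaround (not an rsync feature) is to stream the device over ssh instead, sketched here with the paths from the message; rsync also carries an out-of-tree copy-devices patch in its patches/ directory, if building a patched rsync is an option:
# copy the raw device into a local image file over ssh
ssh 192.168.80.189 'dd if=/dev/rwd0d bs=64k' | dd of=/u0510a/rwd0d.img bs=64k
Subsequent runs can then rsync the local image file as an ordinary (sparse) file.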
2017 Aug 09
0
Release 3.12: RC0 build is available for testing!
Hi,
The 3.12 release has been tagged RC0 and the builds are available here [1]
(signed with [2]).
3.12 comes with a set of new features as listed in the release notes [3].
We welcome any testing feedback on the release.
If you find bugs, please file a bug report at [4]. If a bug is
deemed a blocker, add it to the release tracker (or just drop a note
on the bug itself) [5].
Thanks,
2008 May 22
1
tests/ok-errors.R ## bad infinite recursion
I've come across a handful of tests that
fail at our site. I consider this one the
worst because the process does not return.
The patch below simply bypasses the test,
but the errors in the out file are included
as well. I suspect this is due to more or
tighter ulimits on this system.
But I'm not sure if this is result of
different expectations (kernel/userland) of
what should be done in
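If tighter ulimits are indeed the cause, the relevant limit is the C stack size, which caps how deeply R can recurse. A quick check, sketched for a POSIX shell:
ulimit -s                    # current stack limit in kB; often "unlimited" on build hosts
ulimit -s unlimited          # raise it for this shell before re-running the tests
Rscript -e 'Cstack_info()'   # R reports the stack size and usage it detected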
2017 Aug 21
0
GlusterFS 3.8.15 is available, likely the last 3.8 update
[from http://blog.nixpanic.net/2017/08/last-update-for-gluster-38.html
and also available on https://planet.gluster.org/ ]
GlusterFS 3.8.15 is available, likely the last 3.8 update
The next Long-Term-Maintenance release for Gluster is around the
corner. Once GlusterFS-3.12 is available, the oldest maintained version
(3.8) will be retired and no maintenance updates are planned. With this
last
2017 Jun 29
0
GlusterFS 3.8.13 update available, and 3.8 nearing End-Of-Life
[Repost of a clickable blog to make it easier to read over email
http://blog.nixpanic.net/2017/06/glusterfs-3813-update-available.html
and also expected to arrive on https://planet.gluster.org/ soon. The
release notes can also be found in the release-3.8 branch on GitHub
https://github.com/gluster/glusterfs/blob/release-3.8/doc/release-notes/3.8.13.md]
The Gluster releases follow a 3-month
2011 Sep 29
0
New Blog Series: TheStraightTech
One of the positive outcomes of the community contest (http://gluster.org/contest/, which ends tomorrow!) is a series of helpful HOWTO articles designed for those looking to tune GlusterFS for better performance, or to get up to speed faster on how to use it.
These articles and posts originated on http://community.gluster.org/ -
To see the entire list of articles -
2013 Mar 08
1
Debian Squeeze packages available for Gluster 3.4.0-alpha2
I've made packages for Debian Squeeze for Gluster 3.4.0-alpha2,
they are available on
http://torbjorn-dev.trollweb.net/gluster-3.4.0alpha2-debs/.
They built and installed successfully, and have been running nicely
for a couple of hours,
but your mileage may vary.
The Debian packaging is on
http://torbjorn-dev.trollweb.net/gluster-3.4.0alpha2-debs/glusterfs-3.4.0-debian.tar.gz.
I took the
2018 Jan 05
0
Another VM crashed
Hi all,
I still experience vm crashes with glusterfs.
The VM I had problems with (it kept crashing) was moved away from gluster and
has had no problems since.
Now another VM is doing the same. It just shuts down.
gluster is 3.8.13
I know you are now on 3.10 and 3.12, but I had trouble upgrading
another cluster to 3.10 (although the processes were off and no files
were in use, gluster had to heal
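For reference, the pending-heal state that can block such an upgrade is inspectable per volume; a sketch, with the volume name as a placeholder:
gluster volume heal VOLNAME info                    # entries still needing heal on each brick
gluster volume heal VOLNAME statistics heal-count   # per-brick count of pending heals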
2017 Jul 24
0
gluster-heketi-kubernetes
Hi Bishoy,
Adding Talur who can help address your queries on Heketi.
@wattsteve's github repo on glusterfs-kubernetes is a bit dated. You can
either refer to gluster/gluster-kubernetes or heketi/heketi for current
documentation and operational procedures.
Regards,
Vijay
On Fri, Jul 21, 2017 at 2:19 AM, Bishoy Mikhael <b.s.mikhael at gmail.com>
wrote:
> Hi,
>
> I'm
2017 Jul 31
0
gluster-heketi-kubernetes
Adding more people to the thread. I am currently not able to analyze the logs.
On Thu, Jul 27, 2017 at 5:58 AM, Bishoy Mikhael <b.s.mikhael at gmail.com> wrote:
> Hi Talur,
>
> I've successfully got Gluster deployed as a DaemonSet using k8s spec file
> glusterfs-daemonset.json from
> https://github.com/heketi/heketi/tree/master/extras/kubernetes
>
> but then when I
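The quoted deployment step boils down to labelling the storage nodes and creating the DaemonSet; the storagenode=glusterfs label is an assumption based on the selector used in that repository's examples:
kubectl label node NODE-NAME storagenode=glusterfs   # repeat for each node that should run a gluster pod
kubectl create -f glusterfs-daemonset.json
kubectl get pods -o wide                             # expect one glusterfs pod per labelled node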
2011 Feb 24
1
Experiencing errors after adding new nodes
Hi,
I had a 2-node distributed cluster running on 3.1.1 and added 2 more nodes. I then ran a rebalance on the cluster.
Now I am getting permission denied errors and I see the following in the client logs:
[2011-02-24 09:59:10.210166] I [dht-common.c:369:dht_revalidate_cbk] loader-dht: subvolume loader-client-3 returned -1 (Invalid argument)
[2011-02-24 09:59:11.851656] I
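When diagnosing this kind of post-rebalance error, it usually helps to confirm the rebalance finished and that all bricks are present. A sketch; the volume name ("loader", judging by the log prefixes above) is an inference, not stated in the message:
gluster volume rebalance loader status   # per-node rebalance progress and failure counts
gluster volume info loader               # confirm all four bricks are listed after the add-brick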
2017 Jul 27
2
gluster-heketi-kubernetes
Hi Talur,
I've successfully got Gluster deployed as a DaemonSet using k8s spec
file glusterfs-daemonset.json from
https://github.com/heketi/heketi/tree/master/extras/kubernetes
but then when I try deploying heketi using heketi-deployment.json spec
file, I end up with a CrashLoopBackOff pod.
# kubectl get pods
NAME READY STATUS RESTARTS AGE
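The usual first steps for a CrashLoopBackOff pod are to pull the logs of the crashed container and its events; a sketch, with the pod name as a placeholder:
kubectl logs deploy-heketi-XXXXX --previous   # output of the last crashed container
kubectl describe pod deploy-heketi-XXXXX      # the Events section often shows probe or mount failures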
2014 Jul 15
1
No glusterfs-server available. on CentOS 7
[root at icehouse1 ~(keystone_admin)]# yum install glusterfs glusterfs-server glusterfs-fuse
Loaded plugins: fastestmirror, langpacks, priorities
Loading mirror speeds from cached hostfile
 * base: centos-mirror.rbc.ru
 * epel: mirror.logol.ru
 * extras: centos-mirror.rbc.ru
 * updates: centos-mirror.rbc.ru
16 packages excluded due to repository priority protections
Package
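glusterfs-server is not shipped in the base CentOS 7 repositories; on current CentOS 7 systems the usual fix (not quoted from this thread) is to enable the Storage SIG repository first:
yum install centos-release-gluster    # enables the CentOS Storage SIG gluster repo
yum install glusterfs glusterfs-server glusterfs-fuse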
2017 Oct 17
1
Distribute rebalance issues
Nithya,
Is there any way to increase the logging level of the brick? There is
nothing obvious (to me) in the log (see below for the same time period as
the latest rebalance failure). This is the only brick on that server that
has disconnects like this.
Steve
[2017-10-17 02:22:13.453575] I [MSGID: 115029]
[server-handshake.c:692:server_setvolume] 0-video-server: accepted
client from
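Brick-side logging can be made more verbose per volume through the diagnostics options; a sketch, with the volume name as a placeholder (the log prefix above suggests it is "video"):
gluster volume set VOLNAME diagnostics.brick-log-level DEBUG   # or TRACE for even more detail
gluster volume reset VOLNAME diagnostics.brick-log-level       # restore the default afterwards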