Displaying 20 results from an estimated 11000 matches similar to: "newlines with write-append"
2014 Jan 20
0
Re: newlines with write-append
On Mon, Jan 20, 2014 at 08:54:17PM +0100, Olaf Hering wrote:
> Silly bash scripts have stuff like below to get things done, but equally
> silly guestfish scripts fail to add the required newline. Why is that?
>
> echo "$dev1 $mnt1 $fs $opts 1 2" >> /etc/fstab
> echo "$dev2 $mnt2 $fs $opts 1 2" >> /etc/fstab
>
> write-append /etc/fstab
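The preview cuts off mid-command, but the likely answer (a sketch, not from the thread itself): unlike echo, guestfish's write-append appends its argument verbatim, so the newline must be written explicitly; inside double quotes guestfish expands \n. The disk image name and fstab fields below are placeholders:

guestfish --rw -a disk.img -i <<'EOF'
write-append /etc/fstab "/dev/vdb1 /mnt1 ext4 defaults 1 2\n"
write-append /etc/fstab "/dev/vdc1 /mnt2 ext4 defaults 1 2\n"
EOF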
2007 Nov 15
1
Problem with rsync recent file logic ?
Hello,
I have 2 servers I'm synchronizing using rsync, and I have a situation where I:
1. rsync from rnd-dev2 to rnd-dev1
2. change the rsynched file on rnd-dev1
3. rsync from rnd-dev2 to rnd-dev1 again
4. File gets overwritten on rnd-dev1 even though it has a newer change
time than the file on rnd-dev2.
here is the bug(?) reproduction:
[root@rnd-dev1 test_rsync]# rsync --version
rsync version
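The preview stops at the version output, but the described behavior matches rsync's default: it transfers whenever size or mtime differ, regardless of which side is newer. A sketch of the usual remedy (hosts and paths are placeholders), using -u/--update so files that are newer on the receiver are skipped:

# pull from rnd-dev2, but leave alone any file that is newer on rnd-dev1
rsync -av --update rnd-dev2:/root/test_rsync/ /root/test_rsync/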
2010 Oct 19
4
rename zpool
Hi,
I have two questions:
1) Is there any way of renaming a zpool without export/import?
2) If I take a hardware snapshot of the devices under a zpool (where the snapshot device will be an exact copy including metadata, i.e. the zpool and associated file systems), is there any way to rename the zpool on the snapshotted devices without losing the data?
Thanks & Regards,
sridhar.
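For the archive, the usual answers (a sketch; pool names, the device directory and the numeric ID are hypothetical): 1) export/import is the supported way to rename a pool; 2) a hardware-snapshot copy can be imported under a new name by its numeric pool ID, with the caveat that an exact copy shares the original's GUID, so the original's devices may need to be offline first:

zpool export tank
zpool import tank newtank       # rename on re-import
zpool import -d /dev/dsk        # list importable pools with their numeric IDs
zpool import -d /dev/dsk 6930351837019922713 tankcopy   # import the copy by ID under a new name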
2013 Feb 01
2
Nested loop and output help
Hello Everyone,
My name is Thomas and I have been using R for one week. I recently found
your site and have been able to search the archives of posts. This has
given me some great information that has allowed me to craft an initial
design for an inquiry I would like to make into the breakdown of McNemar's
test. I have read an intro to R manual and the posting guides and hope I am
not violating
2003 Dec 10
3
pridump
Hi All,
Can anyone tell me what <dev1> and <dev2> parameters I should
use to run pridump? I took a look at the source code but couldn't figure
this one out.
Best,
PauloHM
2017 Dec 29
0
"file changed as we read it" message during tar file creation on GlusterFS
Hi Nithya,
thank you very much for your support and sorry for the late reply.
Below you can find the output of the "gluster volume info tier2" command and the gluster software stack version:
gluster volume info
Volume Name: tier2
Type: Distributed-Disperse
Volume ID: a28d88c5-3295-4e35-98d4-210b3af9358c
Status: Started
Snapshot Count: 0
Number of Bricks: 6 x (4 + 2) = 36
Transport-type: tcp
Bricks:
2018 Jan 02
2
"file changed as we read it" message during tar file creation on GlusterFS
Hi All,
any news about this issue?
Can I ignore this kind of error message or do I have to do something to correct it?
Thank you in advance and sorry for my insistence.
Regards,
Mauro
> On 29 Dec 2017, at 11:45, Mauro Tridici <mauro.tridici at cmcc.it> wrote:
>
>
> Hi Nithya,
>
> thank you very much for your support and sorry for the late reply.
> Below
2018 Jan 02
0
"file changed as we read it" message during tar file creation on GlusterFS
I think it is safe to ignore it. The problem exists due to the minor
difference in file timestamps on the backend bricks of the same
subvolume (for a given file); during the course of tar, the timestamp
can be served from different bricks, causing it to complain. The ctime
xlator [1] feature, once ready, should fix this issue by storing
timestamps as xattrs on the bricks, i.e. all bricks
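In the meantime, a possible workaround (an assumption on my part, not from this thread) is to silence exactly that GNU tar message; note tar may still exit with status 1:

tar --warning=no-file-changed -czf backup.tar.gz /mnt/tier2/data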
2018 Jan 02
1
"file changed as we read it" message during tar file creation on GlusterFS
Hi Ravi,
thank you very much for your support and explanation.
If I understand correctly, the ctime xlator feature is not present in the current gluster package but will be in a future release, right?
Thank you again,
Mauro
> On 2 Jan 2018, at 12:53, Ravishankar N <ravishankar at redhat.com> wrote:
>
> I think it is safe to ignore it. The problem exists due to the
2007 Apr 10
15
Poor man's backup by attaching/detaching mirror drives on a _striped_ pool?
Hi,
one quick&dirty way of backing up a pool that is a mirror of two devices is to
zpool attach a third one, wait for the resilvering to finish, then zpool detach
it again.
The third device then can be used as a poor man's simple backup.
Has anybody tried it yet with a striped mirror? What if the pool is
composed of two mirrors? Can I attach devices to both mirrors, let them
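The message is truncated here, but mechanically the idea would look like this (a sketch; pool and disk names are hypothetical): attach one extra disk per mirror vdev, wait for both resilvers, then detach:

zpool attach tank c1t0d0 c9t0d0   # third side for the first mirror
zpool attach tank c1t1d0 c9t1d0   # third side for the second mirror
zpool status tank                 # wait until resilvering completes
zpool detach tank c9t0d0
zpool detach tank c9t1d0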
2018 Feb 13
2
wbinfo -U id gives different users on same dc
Hello.
I have 2 clustered servers and they're using the same DC. But wbinfo gives me
a different user with the same UID, and on every failover I'm facing this
problem.
Server 1:
[root@DEV1 ~]# getfacl a1 -n -dc
user::rwx
user:0:rwx
user:8003:rwx
group::---
group:0:---
mask::rwx
other::---
[root@DEV1 ~]# wbinfo -U 8003
S-1-5-21-3833684748-2620639523-3326022584-1110
I moved the
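The preview ends here, but symptoms like this typically come from non-deterministic ID mapping (e.g. the allocating tdb backend) diverging between the two nodes. A sketch of a deterministic idmap setup for smb.conf (the domain name and ranges are assumptions):

[global]
   idmap config * : backend = tdb
   idmap config * : range = 3000-7999
   # rid computes the same UID from a given SID on every member,
   # so both clustered servers resolve UID 8003 identically
   idmap config SAMDOM : backend = rid
   idmap config SAMDOM : range = 10000-999999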
2009 Oct 14
0
Problem with NLSstClosestX; and suggested fix
Problem is demonstrated with this code, intended to find the approximate 'x'
at which the 'y' is midway between the left and right asymptotes. This
particular data set returns NA, which is a bit silly!
--------------
sXY <- structure(list(x = c(0, 24, 27, 48, 51, 72, 75, 96, 99),
                      y = c(4.98227, 6.38021, 6.90309, 7.77815, 7.64345,
                            7.23045, 7.27875, 7.11394, 6.95424)), .Names =
2010 Dec 09
0
[PATCH linux-2.6.18-xen] make netloop permanent
Hi,
with reference to RH BZ#567540 [0], this patch makes the netloop module permanent (like netback is currently). It reverts parts of xen-unstable c/s 9019:271cb04a4f2b [1] [2] (though that has a typo: "__init clean_loopback", so it was probably changed later too).
The patch fixes the problem of "rmmod netloop" hanging, resulting in blocked tasks and inability to shut down
2002 Aug 30
1
syslinux unable to find vmlinuz
Hi:
Two scripts (mkbd.fails and mkbd.works) are attached to this message. The only
difference between them is in how syslinux.cfg (on a RedHat boot image) is
replaced.
mkbd.fails: syslinux.cfg is replaced via "mv -f". vmlinuz can't be found.
mkbd.works: syslinux.cfg is replaced via "cp". vmlinuz is found.
The only difference I can see between these two methods is that
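The explanation is cut off, but the operational difference between the two scripts reduces to this (a sketch; image, mount point and file names are assumptions): cp rewrites the existing file's data in place, while mv -f replaces the directory entry with a new file:

mount -o loop bootdisk.img /mnt/bd
cp syslinux.cfg.new /mnt/bd/syslinux.cfg        # mkbd.works: overwrite in place
# mv -f syslinux.cfg.new /mnt/bd/syslinux.cfg   # mkbd.fails: new directory entry
umount /mnt/bd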
2017 Sep 06
0
First Gluster Volume deploy: recommended configuration and suggestions?
Dear users,
I just started my first Gluster test volume using 3 servers (each server contains 12 HDDs).
I would like to create a "distributed disperse volume" but I'm a little bit confused about the right configuration schema that I should use.
Should I use JBOD disks? How many bricks should be defined? Ideal redundancy value? Ideal disperse-data count value? 6x(4+2) or 3x(8+4) volume
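The question is truncated, but for reference a 6x(4+2) volume is built by listing bricks in (4+2) groups. A minimal sketch for the first group (hosts and brick paths are hypothetical; with two bricks per server in one group, gluster requires force):

# one (4+2) set, two bricks per server; repeat the pattern over mnt1..mnt12
# to reach 6 x (4 + 2) = 36 bricks
gluster volume create tier2 disperse-data 4 redundancy 2 transport tcp \
  s01:/gluster/mnt1/brick s01:/gluster/mnt2/brick \
  s02:/gluster/mnt1/brick s02:/gluster/mnt2/brick \
  s03:/gluster/mnt1/brick s03:/gluster/mnt2/brick force
gluster volume start tier2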
2019 Nov 12
4
[PATCH 1/2] options: Fixes and enhancements to --key parsing.
The first patch fixes a rather serious bug; the second allows
multiple --key parameters and default parameters.
There is a third patch to libguestfs which adds a test, coming up.
I have not yet reviewed and fixed the documentation. I think we need to
centralize it in one place because at the moment the same
documentation for --key is copy/pasted all over the tools.
Rich.
2017 Dec 29
2
"file changed as we read it" message during tar file creation on GlusterFS
Hi Mauro,
What version of Gluster are you running and what is your volume
configuration?
IIRC, this was seen because of mismatches in the ctime returned to the
client. I don't think there were issues with the files but I will leave it
to Ravi and Raghavendra to comment.
Regards,
Nithya
On 29 December 2017 at 04:10, Mauro Tridici <mauro.tridici at cmcc.it> wrote:
>
> Hi All,
2017 Sep 25
2
df command shows transport endpoint mount error on gluster client v.3.10.5 + core dump
Dear Gluster Users,
I implemented a distributed disperse 6x(4+2) gluster (v.3.10.5) volume with the following options:
[root@s01 tier2]# gluster volume info
Volume Name: tier2
Type: Distributed-Disperse
Volume ID: a28d88c5-3295-4e35-98d4-210b3af9358c
Status: Started
Snapshot Count: 0
Number of Bricks: 6 x (4 + 2) = 36
Transport-type: tcp
Bricks:
Brick1: s01-stg:/gluster/mnt1/brick
Brick2:
2017 Sep 26
0
df command shows transport endpoint mount error on gluster client v.3.10.5 + core dump
Hi Mauro,
We would require the complete log file to debug this issue.
Also, could you please provide some more information on the core after attaching to it with gdb and using the command "bt"?
---
Ashish
----- Original Message -----
From: "Mauro Tridici" <mauro.tridici at cmcc.it>
To: "Gluster Users" <gluster-users at gluster.org>
Sent: Monday, September 25,
2017 Sep 26
2
df command shows transport endpoint mount error on gluster client v.3.10.5 + core dump
Hi Ashish,
thank you for your answer.
Do you need the complete client log file only, or something else in particular?
Unfortunately, I have never used the "bt" command. Could you please provide me with a usage example?
I will provide all the logs you need.
Thank you again,
Mauro
> On 26 Sep 2017, at 09:30, Ashish Pandey <aspandey at redhat.com> wrote:
>
> Hi Mauro,
>
>
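For reference, the usage Mauro asks about is loading the core into gdb together with the binary that produced it, then issuing bt at the prompt (a sketch; the binary and core file paths are assumptions):

gdb /usr/sbin/glusterfs /path/to/core.<pid>   # open the crashed binary with its core
(gdb) bt                                      # print the stack backtrace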