Displaying 20 results from an estimated 38 matches for "repce".
2012 Mar 20
1
issues with geo-replication
...-----------------------------------
[2012-03-20 19:29:10.118295] I [monitor(monitor):60:monitor] Monitor: starting gsyncd worker
[2012-03-20 19:29:10.168212] I [gsyncd:289:main_i] <top>: syncing: gluster://localhost:myvol -> ssh://root at remoteip:/data/path
[2012-03-20 19:29:10.222372] D [repce:130:push] RepceClient: call 23154:47903647023584:1332271750.22 __repce_version__() ...
[2012-03-20 19:29:10.504734] E [syncdutils:133:exception] <top>: FAIL:
Traceback (most recent call last):
File "/opt/glusterfs/3.2.5/local/libexec//glusterfs/python/syncdaemon/syncdutils.py", li...
2024 Oct 31
16
[PATCH v3 00/15] NVKM GSP RPC kernel docs, cleanups and fixes
Hi folks:
Here is the leftover from the previous spin of the NVKM GSP RPC fixes, which
handles the return of large GSP messages. PATCH 1 and PATCH 2 from the previous
spin were merged [1], and this spin is based on top of those two patches.
Besides support for large GSP messages, kernel docs and many cleanups are
introduced according to the comments on the previous spin [2].
2023 Dec 22
11
nouveau GSP fixes
This is a collection of nouveau fixes covering debug prints, a memory leak, a
very annoying race condition causing system hangs in prime scenarios, and a fix
from Lyude to get the panel on my laptop working.
I'd like to get these into 6.7,
Dave.
2024 Nov 11
4
[PATCH 1/2] nouveau: handle EBUSY and EAGAIN for GSP aux errors.
From: Dave Airlie <airlied at redhat.com>
The upper-layer transfer functions expect EBUSY as the return value
when retries should be done.
Fix the AUX error translation, but also check for both errors
in a few places.
Fixes: eb284f4b3781 ("drm/nouveau/dp: Honor GSP link training retry timeouts")
Signed-off-by: Dave Airlie <airlied at redhat.com>
---
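The retry contract described above is easy to picture with a small stand-alone C sketch. The status codes and names below are invented for illustration and are not the nouveau AUX code; the point is only that retryable device statuses collapse into -EBUSY so the upper layers take their retry path:

#include <errno.h>
#include <stdio.h>

/* Hypothetical device-specific AUX status codes, for illustration only. */
enum demo_aux_status {
	DEMO_AUX_OK     = 0,
	DEMO_AUX_BUSY   = 1,    /* transfer should be retried by the caller */
	DEMO_AUX_AGAIN  = 2,    /* transient condition, also retryable */
	DEMO_AUX_FAILED = 3,    /* hard failure */
};

/* Map device status to the generic errno values upper layers expect:
 * both retryable statuses become -EBUSY, everything else a hard error. */
static int demo_aux_xlat(enum demo_aux_status st)
{
	switch (st) {
	case DEMO_AUX_OK:
		return 0;
	case DEMO_AUX_BUSY:
	case DEMO_AUX_AGAIN:
		return -EBUSY;
	default:
		return -EIO;
	}
}

int main(void)
{
	printf("busy -> %d, again -> %d, failed -> %d\n",
	       demo_aux_xlat(DEMO_AUX_BUSY),
	       demo_aux_xlat(DEMO_AUX_AGAIN),
	       demo_aux_xlat(DEMO_AUX_FAILED));
	return 0;
}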
2023 Dec 04
1
[PATCH] nouveau/gsp: add three notifier callbacks that we see in normal operation
These seem to get called, but it doesn't look like we have to care too much
at this point.
Signed-off-by: Dave Airlie <airlied at redhat.com>
---
.../gpu/drm/nouveau/nvkm/subdev/gsp/r535.c | 24 ++++++++++++++++++-
1 file changed, 23 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c
index
2023 Dec 22
1
[PATCH 07/11] nouveau/gsp: convert gsp errors to generic errors
This should let the upper layers retry as needed on EAGAIN.
There may be other values we will care about in the future, but
this covers our present needs.
Signed-off-by: Dave Airlie <airlied at redhat.com>
---
.../gpu/drm/nouveau/nvkm/subdev/gsp/r535.c | 26 +++++++++++++++----
1 file changed, 21 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c
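Seen from the caller's side, converting device-specific errors into generic codes is what makes a simple retry loop possible. A minimal sketch, assuming a made-up demo_rpc_send() that fails transiently before succeeding; none of these names come from the driver:

#include <errno.h>
#include <stdio.h>

/* Hypothetical RPC call: fails transiently twice, then succeeds. */
static int demo_rpc_send(void)
{
	static int attempts;

	return (++attempts < 3) ? -EAGAIN : 0;
}

/* Upper layer: retry only on the generic "try again" codes and give up on
 * anything else; this only works if the backend reports generic errors. */
static int demo_rpc_send_retry(int max_retries)
{
	int ret = -EAGAIN;

	for (int i = 0; i <= max_retries && ret != 0; i++) {
		ret = demo_rpc_send();
		if (ret != -EAGAIN && ret != -EBUSY)
			break;
	}
	return ret;
}

int main(void)
{
	printf("result after retries: %d\n", demo_rpc_send_retry(5));
	return 0;
}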
2017 Aug 17
0
Extended attributes not supported by the backend storage
...10635] E [fuse-bridge.c:3484:fuse_xattr_cbk] 0-glusterfs-fuse: extended attribute not supported by the backend storage
Then it starts syncing the data, but it stops at the exact same point every time I try. On the master, I'm getting the following error messages:
[2017-08-16 12:57:45.205311] E [repce(/mnt/storage/lapbacks):207:__call__] RepceClient: call 17769:140586894673664:1502888257.97 (entry_ops) failed on peer with OSError
[2017-08-16 12:57:45.205593] E [syncdutils(/mnt/storage/lapbacks):312:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
File "/usr/lib/x...
2017 Aug 16
0
Geo replication faulty-extended attribute not supported by the backend storage
...635] E [fuse-bridge.c:3484:fuse_xattr_cbk] 0-glusterfs-fuse: extended attribute not supported by the backend storage
Then it starts syncing the data, but it stops at the exact same point every time I try. On the master, I'm getting the following error messages:
[2017-08-16 12:57:45.205311] E [repce(/mnt/storage/lapbacks):207:__call__] RepceClient: call 17769:140586894673664:1502888257.97 (entry_ops) failed on peer with OSError
[2017-08-16 12:57:45.205593] E [syncdutils(/mnt/storage/lapbacks):312:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
File "/usr/lib/x...
2018 Jan 22
1
geo-replication initial setup with existing data
2013 Mar 20
2
Geo-replication broken in 3.4 alpha2?
...uster volume geo-replication vmstorage slave:/backup/vmstorage start
After that, the geo-replication status was "starting?" for a while, and then it switched to "N/A". I set the log level to DEBUG and saw lines like these appearing every 10 seconds:
[2013-03-20 18:48:19.417107] D [repce:175:push] RepceClient: call 27756:140178941277952:1363798099.42 keep_alive(None,) ...
[2013-03-20 18:48:19.418431] D [repce:190:__call__] RepceClient: call 27756:140178941277952:1363798099.42 keep_alive -> 34
[2013-03-20 18:48:29.427959] D [repce:175:push] RepceClient: call 27756:140178941277952...
2011 Jun 28
2
Issue with Gluster Quota
2011 May 03
3
Issue with geo-replication and nfs auth
...tatus:
2011-05-03 09:57:40.315774] E [syncdutils:131:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
File "/usr/lib/glusterfs/glusterfs/python/syncdaemon/syncdutils.py", line 152, in twrap
tf(*aa)
File "/usr/lib/glusterfs/glusterfs/python/syncdaemon/repce.py", line 118, in listen
rid, exc, res = recv(self.inf)
File "/usr/lib/glusterfs/glusterfs/python/syncdaemon/repce.py", line 42, in recv
return pickle.load(inf)
EOFError
Command line:
gluster volume geo-replication test slave.mydomain.com:/data/test/ start
On /etc/glust...
2024 Jan 29
1
[PATCH] nouveau: offload fence uevents work to workqueue
From: Dave Airlie <airlied at redhat.com>
This should break the deadlock between the fctx lock and the irq lock.
This offloads the processing of the work from the irq into a workqueue.
Signed-off-by: Dave Airlie <airlied at redhat.com>
---
drivers/gpu/drm/nouveau/nouveau_fence.c | 24 ++++++++++++++++++------
drivers/gpu/drm/nouveau/nouveau_fence.h | 1 +
2 files changed, 19
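The commit message describes the classic pattern of deferring work out of interrupt context into a workqueue so that a sleeping lock is never taken under the irq lock. A minimal kernel-style sketch of that pattern follows; the names and structure are hypothetical, not the actual nouveau_fence code, and it only builds in a kernel-module context:

#include <linux/kernel.h>
#include <linux/interrupt.h>
#include <linux/workqueue.h>
#include <linux/spinlock.h>
#include <linux/mutex.h>

struct demo_fence_ctx {
	spinlock_t irq_lock;            /* taken from the interrupt handler */
	struct mutex fctx_lock;         /* sleeping lock, never nested under irq_lock */
	struct work_struct uevent_work;
};

/* Runs in process context, so taking the sleeping fctx lock is safe here. */
static void demo_uevent_work(struct work_struct *w)
{
	struct demo_fence_ctx *ctx =
		container_of(w, struct demo_fence_ctx, uevent_work);

	mutex_lock(&ctx->fctx_lock);
	/* walk and signal completed fences here */
	mutex_unlock(&ctx->fctx_lock);
}

static irqreturn_t demo_irq(int irq, void *data)
{
	struct demo_fence_ctx *ctx = data;
	unsigned long flags;

	spin_lock_irqsave(&ctx->irq_lock, flags);
	/* ack the hardware event under the irq lock only */
	spin_unlock_irqrestore(&ctx->irq_lock, flags);

	/* hand the fence processing off instead of nesting the locks here */
	schedule_work(&ctx->uevent_work);
	return IRQ_HANDLED;
}

static void demo_fence_ctx_init(struct demo_fence_ctx *ctx)
{
	spin_lock_init(&ctx->irq_lock);
	mutex_init(&ctx->fctx_lock);
	INIT_WORK(&ctx->uevent_work, demo_uevent_work);
}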
2018 Mar 06
1
geo replication
...ime time=1520325171
[2018-03-06 08:32:51.926801] E [syncdutils(/gfs/testtomcat/mount):299:log_raise_exception] <top>: master volinfo unavailable
[2018-03-06 08:32:51.936203] I [syncdutils(/gfs/testtomcat/mount):271:finalize] <top>: exiting.
[2018-03-06 08:32:51.938469] I [repce(/gfs/testtomcat/mount):92:service_loop] RepceServer: terminating on reaching EOF.
[2018-03-06 08:32:51.938776] I [syncdutils(/gfs/testtomcat/mount):271:finalize] <top>: exiting.
[2018-03-06 08:32:52.743696] I [monitor(monitor):363:monitor] Monitor: worker died in startup phase...
2018 Jul 13
2
Upgrade to 4.1.1 geo-replication does not work
...hanks,
Kotresh HR
On Fri, Jul 13, 2018 at 1:26 AM, Marcus Pedersén <marcus.pedersen at slu.se> wrote:
Hi Kotresh,
I have replaced both files (gsyncdconfig.py <https://review.gluster.org/#/c/20207/1/geo-replication/syncdaemon/gsyncdconfig.py> and repce.py <https://review.gluster.org/#/c/20207/1/geo-replication/syncdaemon/repce.py>) on all nodes, both master and slave.
I rebooted all servers, but geo-replication status is still Stopped.
I tried to start geo-replication with the response Successful, but the status still shows Stopped on all nodes.
Noth...
2017 Sep 29
1
Gluster geo replication volume is faulty
...lication/secret.pem -p 22 -oControlMaster=auto -S
/tmp/gsyncd-aux-ssh-fdyDHm/78cf8b204207154de59d7ac32eee737f.sock --compress
geo-rep-user at gfs6:/proc/17554/cwd error=12
[2017-09-29 15:53:29.797259] I [syncdutils(/gfs/brick2/gv0):271:finalize]
<top>: exiting.
[2017-09-29 15:53:29.799386] I [repce(/gfs/brick2/gv0):92:service_loop]
RepceServer: terminating on reaching EOF.
[2017-09-29 15:53:29.799570] I [syncdutils(/gfs/brick2/gv0):271:finalize]
<top>: exiting.
[2017-09-29 15:53:30.105407] I [monitor(monitor):280:monitor] Monitor:
starting gsyncd worker brick=/gfs/brick1/gv0
slave_node=...
2017 Oct 06
0
Gluster geo replication volume is faulty
...olMaster=auto -S
> /tmp/gsyncd-aux-ssh-fdyDHm/78cf8b204207154de59d7ac32eee737f.sock
> --compress geo-rep-user at gfs6:/proc/17554/cwd error=12
> [2017-09-29 15:53:29.797259] I
> [syncdutils(/gfs/brick2/gv0):271:finalize] <top>: exiting.
> [2017-09-29 15:53:29.799386] I
> [repce(/gfs/brick2/gv0):92:service_loop] RepceServer: terminating on
> reaching EOF.
> [2017-09-29 15:53:29.799570] I
> [syncdutils(/gfs/brick2/gv0):271:finalize] <top>: exiting.
> [2017-09-29 15:53:30.105407] I [monitor(monitor):280:monitor] Monitor:
> starting gsyncd
> worker...
2023 Dec 04
1
[PATCH] nouveau/gsp: add three notifier callbacks that we see in normal operation
On Tue, 2023-12-05 at 08:55 +1000, Dave Airlie wrote:
> +static int
> +r535_gsp_msg_ucode_libos_print(void *priv, u32 fn, void *repv, u32 repc)
> +{
> +	/* work out what we should do here. */
> +	return 0;
> +}
This is part of my logrm debugfs patch. It contains the printf log from a
PMU exception.
Do you want me to research the other two RPCs and tell you exactly
2005 Apr 24
1
random interactions in lme
Hi All,
I'm taking an Experimental Design course this semester, and have spent
many long hours trying to coax the professor's SAS examples into
something that will work in R (I'd prefer that the things I learn not
be tied to a license). It's been a long semester in that regard.
One thing that has really frustrated me is that lme has an extremely
counterintuitive way for
2024 Jun 18
1
[PATCH 2/2] [v5] drm/nouveau: expose GSP-RM logging buffers via debugfs
On Mon, 2024-06-17 at 21:54 +0200, Danilo Krummrich wrote:
Hi Timur,
thanks for the follow-up on this patch series.
On Wed, Jun 12, 2024 at 06:52:53PM -0500, Timur Tabi wrote:
The LOGINIT, LOGINTR, LOGRM, and LOGPMU buffers are circular buffers
that have printf-like logs from GSP-RM and PMU encoded in them.
LOGINIT, LOGINTR, and LOGRM are allocated by Nouveau and their DMA
addresses are
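As a rough illustration of what reading such a circular log buffer involves, here is a small stand-alone C sketch; the layout, the write-offset convention, and every name are invented for the example and are not the GSP-RM log format:

#include <stdio.h>
#include <string.h>

#define DEMO_LOG_SIZE 12

/* Hypothetical log ring: 'wpos' is where the producer writes next, so the
 * oldest data starts at 'wpos' and the newest ends just before it. */
struct demo_log_ring {
	char   buf[DEMO_LOG_SIZE];
	size_t wpos;
};

/* Copy the ring contents into 'out' in chronological order. */
static size_t demo_log_drain(const struct demo_log_ring *ring,
                             char *out, size_t outlen)
{
	size_t tail = DEMO_LOG_SIZE - ring->wpos;

	if (outlen < DEMO_LOG_SIZE)
		return 0;
	memcpy(out, ring->buf + ring->wpos, tail);  /* oldest part, after wpos */
	memcpy(out + tail, ring->buf, ring->wpos);  /* newest part, before wpos */
	return DEMO_LOG_SIZE;
}

int main(void)
{
	/* Wrapped contents: "GHIJKL" is newer than "ABCDEF". */
	struct demo_log_ring ring = { "GHIJKLABCDEF", 6 };
	char out[DEMO_LOG_SIZE + 1] = { 0 };

	demo_log_drain(&ring, out, sizeof(out));
	printf("%s\n", out);   /* prints ABCDEFGHIJKL */
	return 0;
}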