Displaying 7 results from an estimated 7 matches for "4ff9".
2018 Aug 20
0
[PATCH 1/2] v2v: rhv-upload-plugin: Handle send failures
...d-plugin.py:
pwrite: error: ('%s: %d %s: %r', 'could not write sector offset 1841154048 size 1024', 403,
'Forbidden', b'{"explanation": "Access was denied to this resource.", "code": 403, "detail":
"Ticket u\'6071e16f-ec60-4ff9-a594-10b0faae3617\' expired", "title": "Forbidden"}')
---
v2v/rhv-upload-plugin.py | 22 ++++++++++++++++------
1 file changed, 16 insertions(+), 6 deletions(-)
diff --git a/v2v/rhv-upload-plugin.py b/v2v/rhv-upload-plugin.py
index 2d686c2da..7327ea4c5 100644
---...
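The failure shown above is what this first patch targets: the imageio ticket had expired, the server answered 403 Forbidden, but the plugin surfaced only a low-level write error. A minimal sketch of that kind of handling, assuming an http.client-style connection kept in the plugin handle; the handle fields and the request_failed() helper (sketched under the cover letter excerpt below) are illustrative assumptions, not copied from the actual diff:

def pwrite(h, buf, offset):
    count = len(buf)
    http = h['http']                    # assumed: an http.client connection in the handle
    http.putrequest("PUT", h['path'])   # assumed: the imageio transfer URL path
    http.putheader("Content-Range",
                   "bytes %d-%d/*" % (offset, offset + count - 1))
    http.putheader("Content-Length", str(count))
    http.endheaders()
    try:
        http.send(buf)
    except BrokenPipeError:
        # The server may close the socket mid-send, e.g. once the ticket has
        # expired; fall through and report the HTTP error response instead.
        pass
    r = http.getresponse()
    if r.status != 200:
        request_failed(r, "could not write sector offset %d size %d"
                          % (offset, count))
    r.read()                            # drain the body so the connection can be reused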
2018 Aug 20
3
[PATCH 0/2] v2v: rhv-upload-plugin: Improve error handling
These patches improve error handling when a PUT request fails, including
the error response from the oVirt server. This will make it easier to debug
issues when the oVirt server logs have been rotated.
Nir Soffer (2):
v2v: rhv-upload-plugin: Handle send failures
v2v: rhv-upload-plugin: Fix error formatting
v2v/rhv-upload-plugin.py | 24 +++++++++++++++++-------
1 file changed, 17 insertions(+), 7
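The second patch in the series, the formatting fix, targets what is visible in the pwrite error at the top of this page: the format string '%s: %d %s: %r' and its arguments were emitted as a raw tuple instead of being interpolated. A sketch of such a helper, assuming the plugin reports errors by raising RuntimeError; the helper name and exception type are illustrative, not quoted from the diff:

def request_failed(r, msg):
    status = r.status
    reason = r.reason
    try:
        # Keep the oVirt error body ("Ticket ... expired") so the failure can
        # be diagnosed even after the server-side logs have been rotated.
        body = r.read()
    except EnvironmentError as e:
        body = "(failed to read response body: %s)" % e

    # Broken formatting, as seen in the log excerpt above -- the tuple itself
    # ends up in the error text:
    #   raise RuntimeError(('%s: %d %s: %r', msg, status, reason, body))

    # Fixed: interpolate the values before raising.
    raise RuntimeError("%s: %d %s: %r" % (msg, status, reason, body))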
2017 May 10
2
Kcc connection
Hello Christian,
I've had a quick look at your output, and it looks pretty normal.
I'll have a good look on Friday; tomorrow is a day off.
So my suggestion is to post the output to the list, but -d10 is not needed.
The regular output should be sufficient.
I'm also wondering why there is no dc3 in the KCC output, yet I see over 15 RPC sessions.
Maybe that's normal, maybe not; this I don't know.
Greetz,
Louis
2017 May 10
0
Kcc connection
...PC
DSA object GUID: 27ea875c-f283-4a31-b2ab-70db62cd530d
Last attempt @ NTTIME(0) was successful
0 consecutive failure(s).
Last success @ NTTIME(0)
==== KCC CONNECTION OBJECTS ====
Connection --
Connection name: 4855bd0d-7a85-4ff9-a17a-310933548220
Enabled : TRUE
Server DNS name : dc2.hq.brain-biotech.de
Server DN name : CN=NTDS Settings,CN=DC2,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=hq,DC=brain-biotech,DC=de
TransportType: RPC
options...
2017 Oct 26
0
not healing one file
Hey Richard,
Could you share the following information, please?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks
getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
Regards,
Karthik
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try recently released health
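For reference, the three items requested above can be collected on each brick host with a short script. This is a sketch only: the volume name is guessed from the client/replicate names in the log excerpt further down, the brick file path is a placeholder, and the log locations are the usual glusterfs defaults rather than anything stated in the thread.

import subprocess

VOLNAME = "home"                          # assumption: guessed from "home-replicate-0" below
BRICK_FILE = "/path/on/brick/to/file"     # placeholder: the unhealed file on this brick

def run(cmd):
    """Run a command and return its combined stdout/stderr as text."""
    res = subprocess.run(cmd, capture_output=True, text=True)
    return res.stdout + res.stderr

# 1. volume layout and options
print(run(["gluster", "volume", "info", VOLNAME]))

# 2. extended attributes of the file on this brick (repeat on every brick host)
print(run(["getfattr", "-d", "-e", "hex", "-m", ".", BRICK_FILE]))

# 3. default locations of the self-heal daemon and glfsheal logs to attach
print("/var/log/glusterfs/glustershd.log")
print("/var/log/glusterfs/glfsheal-%s.log" % VOLNAME)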
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it
diagnoses any issues in the setup. Currently you may have to run it on all
three machines.
On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. This week many of the developers are at Gluster
> Summit in Prague, will be checking this and respond next
2017 Oct 26
2
not healing one file
...b9358/C3D98B2367C99072ED0CBF11E5B81B3531CC349B (10b6e836-07f8-4f37-b82d-f746029b76c3) on home-client-2
[2017-10-25 10:14:12.729669] W [MSGID: 108015] [afr-self-heal-entry.c:56:afr_selfheal_entry_delete] 0-home-replicate-0: expunging file a3f5a769-8859-48e3-96ca-60a988eb9358/5685B493ABDD9B5419C9D4DC54FF92107E0C3BCE (9bc3f5a2-e4d8-41e0-9bd3-3d91939bb74c) on home-client-2
[2017-10-25 10:14:12.738990] W [MSGID: 108015] [afr-self-heal-entry.c:56:afr_selfheal_entry_delete] 0-home-replicate-0: expunging file a3f5a769-8859-48e3-96ca-60a988eb9358/56FFEF0BD66C7A33DD271F6206532E3B3DA55236 (dfe2d19b-c044-4421...