Displaying 8 results from an estimated 8 matches for "byte_len".
2004 May 17
1
samba 3.0.4 on SLES8: password sync will not work...(decode_pw_buffer: incorrect password length)
.../17 09:44:08, 0] libsmb/smbencrypt.c:decode_pw_buffer(520)
decode_pw_buffer: check that 'encrypt passwords = yes'
and here is the DEBUG source code from smbencrypt.c generating the
output above:
/* Password cannot be longer than the size of the password buffer */
if ( (byte_len < 0) || (byte_len > 512)) {
        DEBUG(0, ("decode_pw_buffer: incorrect password length (%d).\n", byte_len));
        DEBUG(0, ("decode_pw_buffer: check that 'encrypt passwords = yes'\n"));
        return False;
}
So somehow the byte_...
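For context (an aside, not part of the original mail): the buffer decode_pw_buffer parses is the 516-byte SAMR-style encrypted password blob, i.e. 512 data bytes with the UCS-2 password right-aligned at the end, followed by a 4-byte little-endian length, which is where byte_len comes from. A minimal stand-alone sketch of that layout check, using my own names rather than Samba's:

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Sketch of the 516-byte password blob: 512 data bytes with the password
 * right-aligned at the end, followed by a 4-byte little-endian length. */
static bool sketch_decode_pw_buffer(const uint8_t in[516],
                                    uint8_t *pw_out, uint32_t *pw_len)
{
        /* the length field lives in the last 4 bytes */
        uint32_t byte_len = (uint32_t)in[512] | ((uint32_t)in[513] << 8) |
                            ((uint32_t)in[514] << 16) | ((uint32_t)in[515] << 24);

        /* same bound as the DEBUG(0) branch quoted above (unsigned here,
         * so only the upper limit is needed) */
        if (byte_len > 512)
                return false;

        /* the password occupies the last byte_len bytes of the 512-byte area */
        memcpy(pw_out, &in[512 - byte_len], byte_len);
        *pw_len = byte_len;
        return true;
}

If the client never actually encrypted the buffer (hence the 'encrypt passwords = yes' hint), those last 4 bytes are effectively random, which is why the check fires with an absurd length.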
2018 Apr 25
2
RDMA Client Hang Problem
...nodes, fuse client
gives this error below and hangs.
[2018-04-17 07:42:55.506422] W [MSGID: 103070]
[rdma.c:4284:gf_rdma_handle_failed_send_completion]
0-rpc-transport/rdma: *send work request on `mlx5_0' returned error
wc.status = 5, wc.vendor_err = 245, post->buf = 0x7f8b92016000,
wc.byte_len = 0, post->reused = 135*
When I change transport mode from rdma to tcp, fuse client works well.
No hangs.
I also tried Gluster 3.8, 3.10, 4.0.0 and 4.0.1 (from Ubuntu PPAs) on
Ubuntu 16.04.4 and CentOS 7.4. But results were the same.
Thank you.
Necati.
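As an aside (not from the original thread): wc.status and wc.byte_len are fields of the libibverbs work completion, and ibv_wc_status_str() turns the numeric status into a readable name; in the standard enum, status 5 is IBV_WC_WR_FLUSH_ERR. A minimal sketch of draining one completion and reporting a failure, assuming a completion queue cq has already been set up:

#include <stdio.h>
#include <infiniband/verbs.h>

/* Poll one completion from cq and report failures, roughly the way the
 * gf_rdma_handle_failed_send_completion message above does. */
static void report_one_completion(struct ibv_cq *cq)
{
        struct ibv_wc wc;
        int n = ibv_poll_cq(cq, 1, &wc);

        if (n < 0) {
                fprintf(stderr, "ibv_poll_cq failed\n");
                return;
        }
        if (n == 0)
                return;                 /* nothing completed yet */

        if (wc.status != IBV_WC_SUCCESS) {
                fprintf(stderr,
                        "wr_id %llu failed: %s (status %d), vendor_err %u, byte_len %u\n",
                        (unsigned long long)wc.wr_id,
                        ibv_wc_status_str(wc.status), wc.status,
                        wc.vendor_err, wc.byte_len);
        }
}

A flush error usually means the queue pair had already dropped into the error state, so the root cause tends to be an earlier failed completion or an asynchronous event rather than the flushed request itself.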
2018 Apr 25
0
RDMA Client Hang Problem
...gives
> this error below and hangs.
>
> [2018-04-17 07:42:55.506422] W [MSGID: 103070] [rdma.c:4284:gf_rdma_handle_failed_send_completion]
> 0-rpc-transport/rdma: *send work request on `mlx5_0' returned error
> wc.status = 5, wc.vendor_err = 245, post->buf = 0x7f8b92016000, wc.byte_len
> = 0, post->reused = 135*
>
> When I change transport mode from rdma to tcp, fuse client works well. No
> hangs.
>
> I also tried Gluster 3.8, 3.10, 4.0.0 and 4.0.1 (from Ubuntu PPAs) on
> Ubuntu 16.04.4 and CentOS 7.4. But results were the same.
>
> Thank you.
> N...
2018 Apr 25
1
RDMA Client Hang Problem
...d hangs.
>
> [2018-04-17 07:42:55.506422] W [MSGID: 103070]
> [rdma.c:4284:gf_rdma_handle_failed_send_completion]
> 0-rpc-transport/rdma: *send work request on `mlx5_0' returned
> error wc.status = 5, wc.vendor_err = 245, post->buf =
> 0x7f8b92016000, wc.byte_len = 0, post->reused = 135*
>
> When I change transport mode from rdma to tcp, fuse client works
> well. No hangs.
>
> I also tried Gluster 3.8, 3.10, 4.0.0 and 4.0.1 (from Ubuntu PPAs)
> on Ubuntu 16.04.4 and CentOS 7.4. But results were the same.
>
> Th...
2010 Mar 03
5
[PATCH, PV-GRUB DOC] Add details to PV-GRUB documentation
Add a couple of documentation details about PV-GRUB support
- the menu.lst content can be passed as a ramdisk.
- virtual partitions are not supported.
Signed-off-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
diff -r b8d2a4134a68 stubdom/README
--- a/stubdom/README Wed Mar 03 17:41:58 2010 +0000
+++ b/stubdom/README Wed Mar 03 20:42:53 2010 +0100
@@ -52,11 +52,17 @@
extra =
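By way of illustration (the paths and file names below are examples, not taken from the patch), the two ways of pointing PV-GRUB at a menu would look roughly like this in a domU config:

kernel  = "/usr/lib/xen/boot/pv-grub-x86_64.gz"
# usual setup: PV-GRUB reads menu.lst from inside the guest's own filesystem
extra   = "(hd0,0)/boot/grub/menu.lst"
# alternative described by the patch: keep the menu in dom0 and hand its
# content to PV-GRUB as the ramdisk (example path)
# ramdisk = "/etc/xen/guest1-menu.lst"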
2018 Feb 26
1
Problems with write-behind with large files on Gluster 3.8.4
..._error] 0-rpcsvc: rpc actor failed to complete successfully
[2018-02-22 18:07:45.243416] W [MSGID: 103070] [rdma.c:4282:gf_rdma_handle_failed_send_completion] 0-rpc-transport/rdma: send work request on `mlx4_0' returned error wc.status = 1, wc.vendor_err = 105, post->buf = 0x7f6462b85000, wc.byte_len = 45056, post->reused = 1
[2018-02-22 18:07:45.243692] W [MSGID: 103070] [rdma.c:4282:gf_rdma_handle_failed_send_completion] 0-rpc-transport/rdma: send work request on `mlx4_0' returned error wc.status = 5, wc.vendor_err = 249, post->buf = 0x7f6462b85000, wc.byte_len = 131072, post->re...
2012 Mar 20
13
[PATCH 0 of 3 v2] PV-GRUB: add support for ext4 and btrfs
Hi,
The following patches add support for ext4 and btrfs to
PV-GRUB. These patches are taken nearly verbatim from those provided
by Fedora and Gentoo.
We've been using these patches for the PV-GRUB images available in EC2
for some time now with no problems.
Changes from v1:
- Makefile has been changed to check the exit code from patch
- The btrfs patch has been rebased to apply
2009 Feb 13
44
[PATCH 0/40] ocfs2: Detach ocfs2 metadata I/O from struct inode
The following series of patches attempts to detach metadata I/O from
struct inode. They are currently tied together pretty tightly.
Metadata reads happen via the ocfs2_read_blocks() functions, writes via
both jbd2 and ocfs2_write_blocks().
- Each inode has a cache of associated metadata blocks stored on its
ip_metadata_cache member. The ocfs2_read/write_blocks() functions
take a struct
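Purely as a sketch of the direction described above (the names here are mine, not necessarily the series' final API): detaching metadata I/O from struct inode means the read/write helpers key off a small cache object that any metadata owner can embed, rather than off the inode itself. Roughly:

#include <linux/spinlock.h>
#include <linux/types.h>
#include <linux/buffer_head.h>

/* Sketch: a metadata-block cache that can live outside struct inode, so an
 * inode, a superblock, or any other metadata owner can embed one. */
struct sketch_caching_info {
        spinlock_t   ci_lock;          /* protects the cached-block set */
        unsigned int ci_num_cached;    /* blocks currently tracked */
        sector_t     ci_cached[8];     /* small inline array, sketch only */
};

/* The read path then takes the cache object instead of struct inode, which
 * is the decoupling the series describes. */
int sketch_read_blocks(struct sketch_caching_info *ci, u64 block, int nr,
                       struct buffer_head *bhs[], int flags);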