Displaying 18 results from an estimated 18 matches for "1638400".
2009 Oct 19
1
local copy microsoft/credentials directory profile redirection
...passwd chat = *New*password* %n\n *Retype*new*password* %n\n *all*authentication*tokens*updated*
log level = 5 vfs:0 smb:0
syslog = 0
log file = /var/log/samba/log.%h
max log size = 10000000
max xmit = 65535
socket options = TCP_NODELAY SO_SNDBUF=1638400 SO_RCVBUF=1638400 SO_KEEPALIVE
printcap name = cups
show add printer wizard = No
max stat cache size = 1024
add user script = /usr/sbin/smbldap-useradd -m "%u"
delete user script = /usr/sbin/smbldap-userdel "%u"
add group script =...
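The oversized send/receive buffers in the socket options line above can be verified against the running configuration with Samba's own testparm; a minimal sketch, assuming the default config path (not part of the original post):

# Dump the effective configuration, including defaults, and confirm the socket options
# (/etc/samba/smb.conf is the usual distribution default; adjust the path if needed)
testparm -sv /etc/samba/smb.conf | grep -i "socket options"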
2018 Apr 16
2
Getting glusterfs to expand volume size to brick size
...152
RDMA Port : 0
Online : Y
Pid : 1263
File System : ext4
Device : /dev/sdd
Mount Options : rw,relatime,data=ordered
Inode Size : 256
Disk Space Free : 23.0GB
Total Disk Space : 24.5GB
Inode Count : 1638400
Free Inodes : 1625429
------------------------------------------------------------
------------------
Brick : Brick pylon:/mnt/pylon_block2/dev_apkmirror_data
TCP Port : 49153
RDMA Port : 0
Online : Y
Pid : 1288
File Syst...
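The per-brick block above (file system, inode count, free space) matches what Gluster prints for a detailed volume status; a minimal sketch, where the volume name is inferred from the brick path and is only an assumption:

# Print per-brick filesystem, inode and disk-space details for the volume
gluster volume status dev_apkmirror_data detail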
2018 Apr 17
5
Getting glusterfs to expand volume size to brick size
...: 1263
>> File System : ext4
>> Device : /dev/sdd
>> Mount Options : rw,relatime,data=ordered
>> Inode Size : 256
>> Disk Space Free : 23.0GB
>> Total Disk Space : 24.5GB
>> Inode Count : 1638400
>> Free Inodes : 1625429
>> ------------------------------------------------------------
>> ------------------
>> Brick : Brick pylon:/mnt/pylon_block2/dev_apkmirror_data
>> TCP Port : 49153
>> RDMA Port : 0
>>...
2018 Apr 17
0
Getting glusterfs to expand volume size to brick size
...: Y
> Pid : 1263
> File System : ext4
> Device : /dev/sdd
> Mount Options : rw,relatime,data=ordered
> Inode Size : 256
> Disk Space Free : 23.0GB
> Total Disk Space : 24.5GB
> Inode Count : 1638400
> Free Inodes : 1625429
> ------------------------------------------------------------
> ------------------
> Brick : Brick pylon:/mnt/pylon_block2/dev_apkmirror_data
> TCP Port : 49153
> RDMA Port : 0
> Online : Y
...
2018 Apr 17
0
Getting glusterfs to expand volume size to brick size
...> File System : ext4
>>> Device : /dev/sdd
>>> Mount Options : rw,relatime,data=ordered
>>> Inode Size : 256
>>> Disk Space Free : 23.0GB
>>> Total Disk Space : 24.5GB
>>> Inode Count : 1638400
>>> Free Inodes : 1625429
>>> ------------------------------------------------------------
>>> ------------------
>>> Brick : Brick pylon:/mnt/pylon_block2/dev_apkmirror_data
>>> TCP Port : 49153
>>> RDMA Port...
2018 Apr 17
0
Getting glusterfs to expand volume size to brick size
...> File System : ext4
>>> Device : /dev/sdd
>>> Mount Options : rw,relatime,data=ordered
>>> Inode Size : 256
>>> Disk Space Free : 23.0GB
>>> Total Disk Space : 24.5GB
>>> Inode Count : 1638400
>>> Free Inodes : 1625429
>>> ------------------------------------------------------------
>>> ------------------
>>> Brick : Brick pylon:/mnt/pylon_block2/dev_apkmirror_data
>>> TCP Port : 49153
>>> RDMA Port...
2018 Apr 17
1
Getting glusterfs to expand volume size to brick size
...: ext4
>>>> Device : /dev/sdd
>>>> Mount Options : rw,relatime,data=ordered
>>>> Inode Size : 256
>>>> Disk Space Free : 23.0GB
>>>> Total Disk Space : 24.5GB
>>>> Inode Count : 1638400
>>>> Free Inodes : 1625429
>>>> ------------------------------------------------------------
>>>> ------------------
>>>> Brick : Brick pylon:/mnt/pylon_block2/dev_apkmirror_data
>>>> TCP Port : 49153
...
2008 Jul 30
3
Large file - match process taking days
I've been trying to figure out why some large files are taking a long time
to rsync (80GB file). With this file, the match process is taking days.
I've added logging to verbose level 4. The output from match.c is at the
point where it is writing out the "potential match at" message. In a 9 hour
period the match verbiage has changed from:
potential match at 14993337175 i=2976
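When delta matching dominates on files this large, the usual workarounds are to skip the block-matching stage or to enlarge the block size; a hedged sketch with placeholder paths, not a quote from the thread:

# --whole-file disables the delta-transfer algorithm entirely, so no block matching is done
rsync -av --whole-file /src/bigfile user@dest:/backup/
# or keep delta transfer but checksum far fewer, larger blocks
rsync -av --block-size=131072 /src/bigfile user@dest:/backup/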
2018 Apr 17
1
Getting glusterfs to expand volume size to brick size
...: ext4
>>>> Device : /dev/sdd
>>>> Mount Options : rw,relatime,data=ordered
>>>> Inode Size : 256
>>>> Disk Space Free : 23.0GB
>>>> Total Disk Space : 24.5GB
>>>> Inode Count : 1638400
>>>> Free Inodes : 1625429
>>>> ------------------------------------------------------------
>>>> ------------------
>>>> Brick : Brick pylon:/mnt/pylon_block2/dev_apkmirror_data
>>>> TCP Port : 49153
...
2018 Apr 16
0
Getting glusterfs to expand volume size to brick size
What version of Gluster are you running? Were the bricks smaller earlier?
Regards,
Nithya
On 15 April 2018 at 00:09, Artem Russakovskii <archon810 at gmail.com> wrote:
> Hi,
>
> I have a 3-brick replicate volume, but for some reason I can't get it to
> expand to the size of the bricks. The bricks are 25GB, but even after
> multiple gluster restarts and remounts, the
2018 Apr 14
2
Getting glusterfs to expand volume size to brick size
Hi,
I have a 3-brick replicate volume, but for some reason I can't get it to
expand to the size of the bricks. The bricks are 25GB, but even after
multiple gluster restarts and remounts, the volume is only about 8GB.
I believed I could always extend the bricks (we're using Linode block
storage, which allows extending block devices after they're created), and
gluster would see the
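When the brick filesystems are ext4 on resizable block devices, as in this thread, the missing step is usually growing the filesystem itself on every replica after enlarging the device; a sketch under that assumption, reusing the device and brick path reported elsewhere in the thread:

# On each brick host, grow the ext4 filesystem to fill the enlarged block device
resize2fs /dev/sdd
# Then confirm the new capacity is visible to the mount and to Gluster
df -h /mnt/pylon_block2/dev_apkmirror_data
gluster volume status dev_apkmirror_data detail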
2020 Feb 01
1
[Bug 14260] New: leading / added to file name causing file not found when setting permissions
...=0 n=0 rem=0
send_files mapped 20200125_110331.jpg of size 2649522
calling match_sums 20200125_110331.jpg
20200125_110331.jpg
generate_files phase=1
recv_files(1) starting
recv_files(20200125_110331.jpg)
data recv 32768 at 0
data recv 32768 at 32768
...
data recv 32768 at 1605632
data recv 32768 at 1638400
sending file_sum
false_alarms=0 hash_hits=0 matches=0
sender finished 20200125_110331.jpg
data recv 32768 at 1671168
data recv 32768 at 1703936
...
data recv 28082 at 2621440
got file_sum
set modtime of .20200125_110331.jpg.kkjGaK to (1579946612) Sat Jan 25 10:03:32 2020
rsync: failed to set permis...
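The offsets in the log advance in 32768-byte chunks, so the searched-for value 1638400 is simply the 50th chunk boundary; a quick shell check:

# 1638400 bytes is exactly 50 chunks of 32 KiB
echo $((50 * 32768))        # prints 1638400
echo $((1638400 / 32768))   # prints 50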
2004 May 06
2
ID mismatch
Hi,
When doing sftp to a remote server whose filesystem is full, the sftp connection
gets "ID mismatch" and the connection is closed. Is it supposed to get the "sftp>"
prompt back? I tried with V3.7.1p2 and V3.8.1p1 and got the same result.
Please help.
rdsosl.sef_cdf-831# sftp -v edosuser at rdsosl
Connecting to rdsosl...
OpenSSH_3.8.1p1, OpenSSL 0.9.7d 17 Mar 2004
debug1: Reading
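Every SFTP request carries an ID that the matching reply must echo, so "ID mismatch" means the client received a reply it was not expecting, here apparently triggered by the full remote filesystem. A hedged sketch for confirming the remote disk state, reusing the user and host from the post:

# Check free space and free inodes on the remote side before retrying the transfer
ssh edosuser@rdsosl 'df -h; df -i'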
2002 May 04
2
Failure to update differing file
...at 1277952
data recv 32768 at 1310720
data recv 32768 at 1343488
data recv 32768 at 1376256
data recv 32768 at 1409024
data recv 32768 at 1441792
data recv 32768 at 1474560
data recv 32768 at 1507328
data recv 32768 at 1540096
data recv 32768 at 1572864
data recv 32768 at 1605632
data recv 32768 at 1638400
data recv 32768 at 1671168
data recv 32768 at 1703936
data recv 32768 at 1736704
data recv 32768 at 1769472
data recv 32768 at 1802240
data recv 32768 at 1835008
data recv 32768 at 1867776
data recv 32768 at 1900544
data recv 32768 at 1933312
data recv 32768 at 1966080
data recv 32768 at 1998848
da...
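Each "data recv 32768 at N" line is one literal 32 KiB block written at offset N. When a destination file still differs after a transfer like this, forcing a content comparison rules out the default size-and-mtime quick check; a generic sketch with placeholder paths:

# -c / --checksum makes rsync compare file contents rather than size and mtime
rsync -avc /src/file /dst/file
# or compare the two copies directly
cmp /src/file /dst/file && echo identical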
2008 Dec 09
1
File uploaded to webDAV server on GlusterFS AFR - ends up without xattr!
...glusterfs-fuse: 27: WRITE => 102400/102400,1433600/1536000
2008-12-09 14:53:09 D [fuse-bridge.c:1677:fuse_write] glusterfs-fuse:
28: WRITE (0x1eeadf0, size=102400, offset=1536000)
2008-12-09 14:53:09 D [fuse-bridge.c:1640:fuse_writev_cbk]
glusterfs-fuse: 28: WRITE => 102400/102400,1536000/1638400
2008-12-09 14:53:09 D [fuse-bridge.c:1677:fuse_write] glusterfs-fuse:
29: WRITE (0x1eeadf0, size=102400, offset=1638400)
2008-12-09 14:53:09 D [fuse-bridge.c:1640:fuse_writev_cbk]
glusterfs-fuse: 29: WRITE => 102400/102400,1638400/1740800
2008-12-09 14:53:09 D [fuse-bridge.c:1677:fuse_write] g...
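AFR keeps its replication metadata in trusted.* extended attributes on the backend bricks, so the missing-xattr claim can be checked directly on a brick; a sketch with an illustrative brick path (not taken from the post):

# Dump every extended attribute of the file as stored on the brick (run as root)
getfattr -d -m . -e hex /data/brick1/path/to/uploaded-file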
2014 Mar 26
11
[Bug 10518] New: rsync hangs (100% cpu)
...h (f=1, s=0x192d220, buf=0x192d1e0,
len=214748364800) at match.c:236
l = 131072
done_csum2 = 1
hash_entry = 0
i = 93079
prev = 0x7f7a70dd1fb8
offset = 4668161391
aligned_offset = 0
end = 214748233729
k = 131072
want_i = 1638400
aligned_i = 0
backup = 146704
sum2 = "\021\306D\256\071\233\273a$\335\000\063\371<", <incomplete
sequence \333>
s1 = 4294836224
s2 = 317063168
sum = 0
more = 1
map = 0x256570f
"\377\377\377\377\377\377\377\377\3...
2004 Dec 09
1
resize2fs on LVM on MD raid on Fedora Core 3 - inode table conflicts in fsck
...de bitmap at 1572865, inode table at 1572866
32254 free blocks, 16384 free inodes, 0 used directories
Group 49: block bitmap at 1605634, inode bitmap at 1605635, inode table at 1605636
32252 free blocks, 16384 free inodes, 0 used directories
Group 50: block bitmap at 1638400, inode bitmap at 1638401, inode table at 1638402
32254 free blocks, 16384 free inodes, 0 used directories
Group 51: block bitmap at 1671168, inode bitmap at 1671169, inode table at 1671170
32254 free blocks, 16384 free inodes, 0 used directories
Group 52: block bitma...
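A group-by-group layout like the one above can be printed for comparison with dumpe2fs, which is one way to check whether the inode tables really overlap after the resize; a sketch assuming a typical FC3 LVM device path (illustrative only):

# Show the on-disk layout of group 50; its block bitmap sits at block 1638400 (50 x 32768 blocks per group)
dumpe2fs /dev/VolGroup00/LogVol00 | grep -A 2 'Group 50:'
# An offline resize should be preceded by a clean forced fsck
e2fsck -f /dev/VolGroup00/LogVol00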
2018 Mar 01
29
[Bug 13317] New: rsync returns success when target filesystem is full
https://bugzilla.samba.org/show_bug.cgi?id=13317
Bug ID: 13317
Summary: rsync returns success when target filesystem is full
Product: rsync
Version: 3.1.2
Hardware: x64
OS: FreeBSD
Status: NEW
Severity: major
Priority: P5
Component: core
Assignee: wayned at samba.org
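A minimal way to observe the reported behaviour is to copy into a deliberately tiny filesystem and inspect rsync's exit status; a sketch using Linux-style tooling, not taken from the bug report:

# Create a 1 MiB tmpfs, copy a larger file into it, and check the exit code
mkdir -p /mnt/full && mount -t tmpfs -o size=1m tmpfs /mnt/full
dd if=/dev/zero of=/tmp/big bs=1M count=8
rsync -av /tmp/big /mnt/full/
echo "rsync exit status: $?"   # per the report this can be 0 even though the copy is incomplete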