search for: 262144

Displaying 20 results from an estimated 853 matches for "262144".

2013 May 15
1
still mbuf leak in 9.0 / 9.1?
Hi list, since we activated 10GbE on ixgbe cards + jumbo frames (9k) on 9.0, and now on 9.1, we have noticed that after a random period of time, sometimes a week, sometimes only a day, the system doesn't send any packets out. The phenomenon is that you can't log in via ssh, and nfs and istgt are not operative. Yet you can log in on the console and execute commands. A clean shutdown isn't possible
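To confirm whether mbuf clusters are actually being exhausted on a FreeBSD 9.x box like this, the stock counters can be watched over time (standard FreeBSD tools, not from this thread; the nmbclusters value is only an example):

  netstat -m                            # mbuf/cluster usage, denials, delays
  vmstat -z | egrep 'mbuf|jumbo'        # UMA zones, including the 9k jumbo pool
  sysctl kern.ipc.nmbclusters=262144    # raise the ceiling if the pool runs dry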
2013 Aug 21
1
Gluster 3.4 Samba VFS writes slow in Win 7 clients
Hello, we have used glusterfs 3.4 with the latest samba-glusterfs-vfs lib to test samba performance from a Windows client. Two glusterfs server nodes export a share with the name of "gvol": hardware: each brick uses a RAID 5 logical disk with 8 * 2T SATA HDDs, 10G network connection. One linux client mounts the "gvol" with the cmd: [root at localhost current]# mount.cifs //192.168.100.133/gvol
2011 Aug 22
2
btrfs over nfs
...nd mount /documents as /mnt/documents and all data is present. However, if I mount user1 as /mnt/user and then mount /documents as /mnt/documents the data from the last subvolume mounted shows in both mounts. Mount also shows: 172.16.0.28:/documents/ on /mnt/user type nfs (rw,relatime,vers=3,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=172.16.0.28,mountvers=3,mountport=892,mountproto=udp,local_lock=none,addr=172.16.0.28) 172.16.0.28:/documents/ on /mnt/documents type nfs (rw,relatime,vers=3,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,re...
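A common cause for this symptom is that btrfs subvolumes exported over NFS inherit the same device ID, so both exports look like one filesystem to the client. A hedged sketch of /etc/exports giving each subvolume its own fsid (fsid= is a standard exports(5) option; the paths, network, and numbers here are illustrative assumptions):

  /srv/user       172.16.0.0/24(rw,no_subtree_check,fsid=101)
  /srv/documents  172.16.0.0/24(rw,no_subtree_check,fsid=102)

followed by exportfs -ra to re-export.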
2015 Nov 07
2
mkfs.ext2 succeeds despite nbd write errors?
...to fallocate '/home/cell/nbds/default/chunks/00000000000000031232' Indeed, there is definitely a problem with fallocate, as some of the chunks are the correct size (256k), and some are zero length: cell@pi1$ pwd /home/cell/nbds/default/chunks cell@pi1$ ls -l | tail -rw------- 1 cell cell 262144 Nov 7 06:01 00000000000000032256 -rw------- 1 cell cell 262144 Nov 7 06:01 00000000000000032257 -rw------- 1 cell cell 262144 Nov 7 06:01 00000000000000032258 -rw------- 1 cell cell 262144 Nov 7 06:01 00000000000000032259 -rw------- 1 cell cell 262144 Nov 7 06:01 00000000000000032260 -rw------...
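A quick way to tally how many chunks came out empty versus full 256 KiB, using only standard find(1) in the directory from the post (a sketch, assuming all chunks sit at the top level):

  cell@pi1$ find . -maxdepth 1 -type f -size 0 -print | wc -l        # zero-length chunks
  cell@pi1$ find . -maxdepth 1 -type f -size 262144c -print | wc -l  # full 262144-byte chunks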
2005 Sep 20
2
Nulls instead of data
...interprocess communication endpoint (writing to destination file takes place in another rsync process) 13:21:42.172254 open("/mnt/somedir/rsync-2.6.6.tar.gz", O_RDONLY|O_LARGEFILE) = 3 ... 13:21:47.045818 read(3, "\37\213\10\0\3102\351B\0\3\354<ks\333\266\262\371j\375"..., 262144) = 36864 [rsync requested 262144 bytes, but got only 36864 bytes] ... 13:22:43.980934 write(4, "\37\213\10\0\3102\351B\0\3\354<ks\333\266\262\371j\375"..., 512) = 512 13:22:43.981111 gettimeofday({1127215363, 981132}, NULL) = 0 13:22:43.981325 select(5, NULL, [4], NULL, {60, 0}) = 1 (ou...
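The read(3, ..., 262144) = 36864 above is a legal short read; POSIX allows read() to return fewer bytes than requested, so a correct caller must loop until it has the full count. To capture an equivalent timestamped trace on another run, a sketch with standard strace options (the rsync invocation itself is illustrative):

  strace -f -tt -e trace=open,read,write,select -o rsync.trace \
      rsync -av /mnt/somedir/ dest:/backup/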
2023 Feb 23
1
[nbdkit PATCH] server: Don't assert on send if client hangs up early
libnbd's copy/copy-nbd-error.sh was triggering an assertion failure in nbdkit: $ nbdcopy -- [ nbdkit --exit-with-parent -v --filter=error pattern 5M error-pread-rate=0.5 ] null: ... nbdkit: pattern.2: debug: error-inject: pread count=262144 offset=4718592 nbdkit: pattern.2: debug: pattern: pread count=262144 offset=4718592 nbdkit: pattern.1: debug: error-inject: pread count=262144 offset=4456448 nbdkit: pattern.1: error: injecting EIO error into pread nbdkit: pattern.1: debug: sending error reply: Input/output error nbdkit: pattern.3:...
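A deterministic variant of the reproducer, using nbdkit's captive mode so server and client are torn down together (same tools as in the report; error-pread-rate=1 makes every pread fail instead of half of them):

  nbdkit -U - --filter=error pattern 5M error-pread-rate=1 \
      --run 'nbdcopy "$uri" null:'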
2010 Dec 13
3
Slow I/O on ocfs2 file system
Hello, I have found that ocfs2 is very slow when doing I/O operations without cache. See a simple test: ng-vvv1:~# dd if=/data/verejna/dd-1G bs=1k | dd of=/dev/null 1048576+0 records in 1048576+0 records out 1073741824 bytes (1.1 GB) copied, 395.183 s, 2.7 MB/s 2097152+0 records in 2097152+0 records out 1073741824 bytes (1.1 GB) copied, 395.184 s, 2.7 MB/s The underlying block device is quite
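Note that the test above pushes 1k blocks through a pipe, which mostly measures per-call overhead. To isolate uncached sequential read speed, a sketch with a bigger block size and O_DIRECT (iflag=direct is standard GNU dd; the path is from the post):

  ng-vvv1:~# dd if=/data/verejna/dd-1G of=/dev/null bs=1M iflag=direct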
2016 Oct 16
1
rsync: connection unexpectedly closed
...dred times) ...and apparently the second server-side rsync process is actually still performing reads and writes more than an hour after the connection died... ... read(1, "\337\363\356\1^\26L\316\17\31izD\254\27\346\267\266H\343\223\v\357\252d'h\351\371\0ny"..., 262144) = 262144 write(3, "\337\363\356\1^\26L\316\17\31izD\254\27\346\267\266H\343\223\v\357\252d'h\351\371\0ny"..., 262144) = 262144 ... This makes me wonder if this has something to do with memory buffers for file I/O being exhausted on the server and this server side...
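To see what the lingering server-side process is doing, and whether its socket buffers are full, a sketch with standard Linux tools (the pid is a placeholder):

  strace -tt -p <rsync_pid>     # live syscalls of the stuck process
  ss -tmp | grep -A1 rsync      # per-socket memory/buffer usage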
2018 Apr 11
2
Unreasonably poor performance of replicated volumes
...id=0,group_id=0,default_permissions,allow_other,max_read=131072) *The problem* is that there is a significant performance loss with smaller block sizes. For example: *4K block size* [replica 3 volume] root at centos7u3-nogdesktop2 ~ $ dd if=/dev/zero of=/mnt/gluster/r3/file$RANDOM bs=4096 count=262144 262144+0 records in 262144+0 records out 1073741824 bytes (1.1 GB) copied, 11.2207 s, *95.7 MB/s* [replica 2 volume] root at centos7u3-nogdesktop2 ~ $ dd if=/dev/zero of=/mnt/gluster/r2/file$RANDOM bs=4096 count=262144 262144+0 records in 262144+0 records out 1073741824 bytes (1.1 GB) copied, 12.0...
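For scale: bs=4096 count=262144 writes 1 GiB in 262,144 separate calls, so per-operation latency dominates. Re-running the same test with larger blocks separates round-trip cost from raw throughput (same layout as the post, only bs/count changed):

  root at centos7u3-nogdesktop2 ~ $ dd if=/dev/zero of=/mnt/gluster/r3/file$RANDOM bs=1M count=1024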
2006 Dec 30
1
CentOS 4.4 e1000 and wire-speed
...fault IP send/receive buffer size for improved UDP transmission over 1Gbps. The CPU is a P4 Dual Core 3GHz, not top of the line but adequate for my needs (strictly block I/O). Here are the TCP/IP tunables from my sysctl.conf: # Controls default receive buffer size (bytes) net.core.rmem_default = 262144 # Controls IP default send buffer size (bytes) net.core.wmem_default = 262144 # Controls IP maximum receive buffer size (bytes) net.core.rmem_max = 262144 # Controls IP maximum send buffer size (bytes) net.core.wmem_max = 262144 # Controls TCP memory utilization (pages) net.ipv4.tcp_mem = 49152...
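These take effect after reloading and can be verified without a reboot (standard sysctl usage):

  sysctl -p                                    # reload /etc/sysctl.conf
  sysctl net.core.rmem_max net.core.wmem_max   # confirm the values in effect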
2010 Jul 07
1
Installing Dungeon Siege LOA
...I'm trying to install LOA on Ubuntu 10.04 Lucid with wine 1.2-rc6. Every time I try to install it I get this readout on the konsole ---- fixme:ole:DllRegisterServer stub fixme:exec:SHELL_execute flags ignored: 0x00000100 lime at usr-desktop:~$ fixme:setupapi:SetupDefaultQueueCallbackW notification 262144 params 33f668,0 err:setupapi:SetupDefaultQueueCallbackW copy error 0 L"C:\\users\\lime\\Temp\\IXP000.TMP\\comcat.dll" -> L"C:\\windows\\system32\\comcat.dll" fixme:setupapi:SetupDefaultQueueCallbackW notification 262144 params 33f668,0 err:setupapi:SetupDefaultQueueCallbackW
2013 Sep 03
0
Slow Read Performance With Samba GlusterFS VFS
...er volume also set: performance.read-ahead-page-count: 16 performance.cache-size: 256MB smb.conf [global] workgroup = MYGROUP server string = DCS Samba Server log file = /var/log/samba/log.vfs max log size = 500000 # use sendfile = true aio read size = 262144 aio write size = 262144 aio write behind = true min receivefile size = 262144 write cache size = 268435456 security = user passdb backend = tdbsam load printers = yes cups options = raw read raw = yes write raw = yes...
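To separate the gluster VFS path from Windows-client effects, the share can be baselined from a Linux box with smbclient (a sketch; server, share, user, and file names are placeholders):

  smbclient //server/share -U user -c 'get bigfile /dev/null'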
2014 Jun 09
0
Performance optimization of glusterfs with samba-glusterfs-vfs plugin
...which glusterfs with samba-glusterfs-vfs runs on it, the other for samba client glusterfs version: 3.4.2 samba-glusterfs-vfs: git://forge.gluster.org/samba-glusterfs/samba-glusterfs-vfs.git. gluster volume type : Distributed smb.conf: large readwrite = yes aio read size = 262144 aio write size = 262144 aio write behind = true ;min receivefile size = 262144 ;write cache size = 268435456 read raw = yes write raw = yes max xmit = 262144 socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=262144 SO_SNDBUF=262144 kernel oplocks = no sta...
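A quick sanity check that Samba actually parsed these tuning options as intended (testparm is standard Samba; -s skips the interactive prompt):

  testparm -s 2>/dev/null | egrep 'aio|raw|xmit|socket options'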
2005 Mar 01
2
EXTLINUX 3.07 - trouble with extlinux.conf
...05-03-01 19:35 vmlinuz /mnt/hda1/boot/extlinux: -rw-r--r-- 1 root root 210 2005-03-01 21:32 extlinux.cfg -r--r--r-- 1 root root 9220 2005-03-01 21:28 extlinux.sys My extlinux.cfg is timeout 0 default vmlinuz append loadramdisk=1 initrd=initrd ramdisk_blocksize=4096 root=/dev/rd/0 ramdisk_size=262144 splash=silent vga=791 display thinstation.txt but I tried another configuration in the extlinux.cfg file timeout 0 default /dev/hda1/boot/vmlinuz append loadramdisk=1 initrd=/dev/hda1/boot/initrd ramdisk_blocksize=4096 root=/dev/rd/0 ramdisk_size=262144 splash=silent vga=791 display thinstation.txt W...
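Note that extlinux resolves kernel and initrd names as paths on the filesystem holding the config file, not as /dev/... device paths, so the second variant cannot work as written. A hedged corrected sketch, assuming kernel and initrd sit in /boot on the same partition:

  timeout 0
  default /boot/vmlinuz
  append loadramdisk=1 initrd=/boot/initrd ramdisk_blocksize=4096 root=/dev/rd/0 ramdisk_size=262144 splash=silent vga=791
  display thinstation.txt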
2012 Apr 17
1
Help needed with NFS issue
...abit links in balance-alb mode. Turning off one interface in the bond made no difference. Relevant /etc/sysctl.conf parameters: vm.dirty_ratio = 50 vm.dirty_background_ratio = 1 vm.dirty_expire_centisecs = 1000 vm.dirty_writeback_centisecs = 100 vm.min_free_kbytes = 65536 net.core.rmem_default = 262144 net.core.rmem_max = 262144 net.core.wmem_default = 262144 net.core.wmem_max = 262144 net.core.netdev_max_backlog = 25000 net.ipv4.tcp_reordering = 127 net.ipv4.tcp_rmem = 4096 87380 16777216 net.ipv4.tcp_wmem = 4096 65536 16777216 net.ipv4.tcp_max_syn_backlog = 8192 net.ipv4.tcp_no_metrics_save = 1...
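Whatever the server advertises, the options the client actually negotiated are what matter; they can be read per mount with nfsstat (part of nfs-utils):

  nfsstat -m    # effective rsize/wsize, proto, timeo per NFS mount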
2013 Sep 03
2
rsync -append "chunk" size
I'm transferring 1.1 MB files over a very poor GSM EDGE connection. My rsync command is: rsync --partial --remove-source-files --timeout=120 --append --progress --rsh=ssh -z LOCAL_FILE root at SERVER:REMOTE_PATH The file on the remote server "grows" in size in steps of 262144 bytes. That is a lot, because the system needs to transfer at least 262144 bytes (before compression) every time the connection is established. When I use scp -C the chunk size is about 32Kb. Is there a way to change the chunk size for rsync? I use rsync 3.0.7. -- Regards, Marcin Polkowski Zakład Fizyki Litosf...
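If the 256 KiB step cannot be tuned, one workaround on a link this flaky is to split the file into pieces small enough that each reconnect completes at least one whole piece (split/cat are standard coreutils; paths mirror the post, and --append is dropped since the parts are transferred whole):

  split -b 32k LOCAL_FILE part.
  rsync --partial --timeout=120 --rsh=ssh -z part.* root at SERVER:REMOTE_PATH/
  # then, on the server: cat part.* > LOCAL_FILE && rm part.*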
2018 Apr 12
0
Unreasonably poor performance of replicated volumes
...r,max_read=131072) > > > > *The problem* is that there is a significant performance loss with > smaller block sizes. For example: > > *4K block size* > [replica 3 volume] > root at centos7u3-nogdesktop2 ~ $ dd if=/dev/zero > of=/mnt/gluster/r3/file$RANDOM bs=4096 count=262144 > 262144+0 records in > 262144+0 records out > 1073741824 bytes (1.1 GB) copied, 11.2207 s, *95.7 MB/s* > > [replica 2 volume] > root at centos7u3-nogdesktop2 ~ $ dd if=/dev/zero > of=/mnt/gluster/r2/file$RANDOM bs=4096 count=262144 > 262144+0 records in > 262144+0 record...
2015 Nov 07
0
Re: mkfs.ext2 succeeds despite nbd write errors?
...nks/00000000000000031232' > > > Indeed, there is definitely a problem with fallocate, as some of the > chunks are the correct size (256k), and some are zero length: > > cell@pi1$ pwd > /home/cell/nbds/default/chunks > cell@pi1$ ls -l | tail > -rw------- 1 cell cell 262144 Nov 7 06:01 00000000000000032256 > -rw------- 1 cell cell 262144 Nov 7 06:01 00000000000000032257 > -rw------- 1 cell cell 262144 Nov 7 06:01 00000000000000032258 > -rw------- 1 cell cell 262144 Nov 7 06:01 00000000000000032259 > -rw------- 1 cell cell 262144 Nov 7 06:01 0000000000...
2006 Sep 15
1
Xen Installation problems
...to install Xen but after a while it crashes and starts rebooting repeatedly. I have tried both the entries listed below. I am using the following entries in the grub.conf file: title Xen-bhatia 3.0 / XenLinux-bhatia 2.6.16 root (hd0,0) kernel /bhatia/boot/xen-3.0.gz dom0_mem=262144 console=ttyS0,9600n8 console=tty0 module /bhatia/boot/vmlinuz-2.6-xen root=/dev/VolGroup00/LogVol00 rhgb quiet console=ttyS0,9600n8 console=tty0 title Xen 3.0 / XenLinux 2.6.16 root (hd0,0) kernel /xen.gz dom0_mem=262144 console=ttyS0,9600n8 console=tty0 module /vmli...
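Since both entries already route the console to ttyS0, capturing the serial output from another machine usually shows why dom0 falls over before it reboots (either tool works; the device and speed are taken from the grub entries):

  screen /dev/ttyS0 9600
  # or: minicom -D /dev/ttyS0 -b 9600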
2010 Nov 12
6
xen guest not booting
...g suddenly and giving me the error message below. Any idea what is going wrong here? DOM 0 boots OK though. ata5.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0 ata5.00: irq_stat 0x40000008 ata5.00: failed command: READ FPDMA QUEUED ata5.00: cmd 60/00:00:cd:ee:36/02:00:09:00:00/40 tag 0 ncq 262144 in res 51/40:72:5b:f0:36/d9:00:09:00:00/40 Emask 0x409 (media error) <F> ata5.00: status: { DRDY ERR } ata5.00: error: { UNC } ata5.00: exception Emask 0x0 SAct 0x3 SErr 0x0 action 0x0 ata5.00: irq_stat 0x40000008 ata5.00: failed command: READ FPDMA QUEUED ata5.00: cmd 60/00:00:cd:ee
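READ FPDMA QUEUED failing with a media error and { UNC } points at the disk itself rather than at Xen. A first check with smartmontools (the device node for ata5 is an assumption; match it via dmesg on the affected host):

  smartctl -a /dev/sdX    # SMART health, reallocated/pending sectors, error log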