similar to: default cluster.stripe-block-size for striped volumes on 3.0.x vs 3.3 beta (128kb), performance change if i reduce to a smaller block size?

Displaying 20 results from an estimated 1200 matches similar to: "default cluster.stripe-block-size for striped volumes on 3.0.x vs 3.3 beta (128kb), performance change if i reduce to a smaller block size?"
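For reference, the stripe block size asked about in the title is a per-volume option that can be inspected and changed with the gluster CLI. A minimal sketch, assuming a hypothetical striped volume named stripevol (the accepted value syntax can vary between 3.x releases):

    # show the options currently set, including any explicit cluster.stripe-block-size
    gluster volume info stripevol

    # try a smaller stripe block size, e.g. 64KB instead of the 128KB default
    gluster volume set stripevol cluster.stripe-block-size 64KB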

2010 Mar 02 (2 replies)
crash when using the cp command to copy files off a striped gluster dir but not when using rsync
Hi, I've got this strange problem where a striped endpoint will crash when I try to use cp to copy files off of it but not when I use rsync to copy files off: [user at gluster5 user]$ cp -r Python-2.6.4/ ~/tmp/ cp: reading `Python-2.6.4/Lib/lib2to3/tests/data/fixers/myfixes/__init__.py': Software caused connection abort cp: closing
2011 Feb 24 (0 replies)
No subject
which is a stripe of the gluster storage servers, this is the performance I get (note use a file size > amount of RAM on client and server systems, 13GB in this case) : 4k block size : 111 pir4:/pirstripe% /sb/admin/scripts/nfsSpeedTest -s 13g -y pir4: Write test (dd): 142.281 MB/s 1138.247 mbps 93.561 seconds pir4: Read test (dd): 274.321 MB/s 2194.570 mbps 48.527 seconds testing from 8k -
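The numbers above come from dd runs over an NFS mount of the striped volume. A rough sketch of that kind of test, assuming the mount is at /pirstripe and using a hypothetical ~13 GB test file so the working set exceeds RAM on both client and server:

    # write test: ~13 GB in 4k blocks, flushed before dd exits
    dd if=/dev/zero of=/pirstripe/ddtest bs=4k count=3407872 conv=fdatasync

    # drop the client page cache so the read test is not served from RAM (run as root)
    echo 3 > /proc/sys/vm/drop_caches

    # read test in 4k blocks
    dd if=/pirstripe/ddtest of=/dev/null bs=4k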
2012 Dec 18 (1 reply)
Infiniband performance issues answered?
In IRC today, someone who was hitting that same IB performance ceiling that occasionally gets reported had this to say [11:50] <nissim> first, I ran fedora which is not supported by Mellanox OFED distro [11:50] <nissim> so I moved to CentOS 6.3 [11:51] <nissim> next I removed all distribution related infiniband rpms and built the latest OFED package [11:52] <nissim>
2014 Jul 16 (2 replies)
smbd's using up 100% of all cpu's and load avg slowly going up
Hi, Running samba sernet 4.1.6-7, I've noticed the load avg slowly / steadily creeping up (e.g. > 100). I'm now noticing that several smbd processes are at 100%. I don't actually notice that much bandwidth usage on the system (e.g. iptraf/iftop). Any idea what's causing this? Restarting smbd helps for a few days, but then the high load avg returns. Thanks, Sabuj
2014 Mar 13 (2 replies)
things that break with unix extensions = yes, samba 4.1.5 and osx 10.9 clients?
I'm about to do some testing with an OSX 10.9 client connected to sernet samba 4.1.5 to see what works and what doesn't, both from the Finder and from the terminal, with unix extensions = yes and no. Does anyone know of any show-stopping issues that occur with unix extensions = yes and the latest samba (or 3.6.x, or 4.0.x) and the latest OSX (or latest update with 10.7 & 10.8)?
2013 Jan 15 (2 replies)
1024 char limit for auth.allow and automatically re-reading auth.allow without having to restart glusterd?
Hi, Does anyone know if the 1024 char limit for auth.allow still exists in the latest production version (it seems to be there in 3.2.5)? Also, does anyone know if the new versions check whether auth.allow has been updated without having to restart glusterd? Is there any way to restart glusterd without killing it and restarting the process; is kill -1 (HUP) possible with it (also with the version I'm running)?
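For context, auth.allow is set per volume through the CLI, and the 1024-character limit being asked about applies to the length of that value. A minimal sketch, assuming a hypothetical volume named vol0:

    # long comma-separated lists like this are where the limit historically bites
    gluster volume set vol0 auth.allow "10.0.0.*,10.0.1.*,192.168.10.5"

    # confirm the value currently in effect
    gluster volume info vol0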
2008 Jul 28 (1 reply)
why does mkfs.ocfs2 take so long?
Hi, Why does mkfs.ocfs2 take so long compared to gfs2 (pretty fast iirc), xfs (almost instantaneous), or ext3 (slow but still ok)? I'm using: # mkfs.ocfs2 -F -b 4k -C 4k -L san1 -T mail /dev/vg/san1 mkfs.ocfs2 1.3.9 Overwriting existing ocfs2 partition. WARNING: Cluster check disabled. Proceed (y/N): y Filesystem Type of mail Filesystem label=san1 Block size=4096 (bits=12) Cluster size=4096
2014 Jan 23 (2 replies)
gpfs + sernet samba + ctdb + transparent failover confusion
Hi all, We're running gpfs 3.5.0.12 (5 total nsds & quorum servers, 2 nsds running samba), sernet-samba 4.1.4-7, and ctdb 1.0.114.7-1 and trying to get transparent failover to work from a windows 8 client. We have ctdb failover working, i.e. if I run mmshutdown on one of the nodes the IPs failover in a few seconds after the GPFS mount is unmounted. For our transparent failover test, I
2012 Mar 15 (2 replies)
Usage Case: just not getting the performance I was hoping for
All, For our project, we bought 8 new Supermicro servers. Each server has a quad-core Intel CPU in a 2U chassis supporting 8 x 7200 RPM SATA drives. To start out, we only populated 2 x 2TB enterprise drives in each server and added all 8 peers with their total of 16 drives as bricks to our gluster pool as distributed replicated (2). The replica worked as follows: 1.1 -> 2.1 1.2
2014 Mar 13 (1 reply)
smbcontrol smbd reload-config or service smbd reload doesn't reload include files
Hi, I noticed that smbcontrol smbd reload-config or service smbd reload doesn't reload include files. Is there any way to get a reload to re-read files that have been included from the main smb.conf? Otherwise it looks like only a restart works, but that causes connections to reset, even in a ctdb/clustered environment. The only other option it looks like is to just put everything into the
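A sketch of the layout being described, with hypothetical file names: the main smb.conf pulls in a second file via include, and the reload is requested with smbcontrol. The poster's observation is that edits inside the included file are not picked up by the reload, only by a full restart.

    # /etc/samba/smb.conf
    [global]
        workgroup = EXAMPLE
        include = /etc/samba/shares.conf   # share definitions live here

    # ask the running smbd processes to re-read their configuration
    smbcontrol smbd reload-config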
2014 Feb 28 (1 reply)
can't get one specific group to show up in the output of id on one system but it does show up in another identically configured server in the same cluster
Hi all, I have two rhel 6.3 servers running sernet samba 4.1.4-7 with winbind connecting to AD. They're also running ctdb. For some very strange reason I can't get one specific AD group to show up in the output of "id username" or the gid of that group to show up in "wbinfo -r username" for the user on one of the servers but it shows up fine on the other. The strange
2014 Aug 20 (1 reply)
vfs_acl_xattr doesn't work unless all the inherit and map inherit acl parameters are set to yes, but want to set inherit owner = no
I noticed that vfs_acl_xattr doesn't work unless all the inherit and map inherit acl parameters are set to yes. Which is fine but if I turn off inherit owner it completely breaks inheritance and security.NTACL never gets set for the file/directory that's created by the user. I want the uid of the user who's connected to be written and not the owner of the parent directory. Is there
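A sketch of the share configuration being described (path and values are illustrative, not a recommendation):

    [share]
        path = /data/share
        vfs objects = acl_xattr
        inherit acls = yes
        inherit permissions = yes
        map acl inherit = yes
        # the parameter the poster wants to disable; with it set to no,
        # NT ACL inheritance reportedly stops working
        inherit owner = no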
2014 Jan 24 (2 replies)
vfs_shadow_copy2 with different snapshot format
Hi all, Does anyone have vfs_shadow_copy2 working with an alternate snapshot directory format? My snapshots look like this : drwxr-xr-x 55 root root 32768 Jan 10 12:37 20140113_12:00 drwxr-xr-x 55 root root 32768 Jan 10 12:37 20140114_12:00 drwxr-xr-x 55 root root 32768 Jan 10 12:37 20140115_12:00 drwxr-xr-x 55 root root 32768 Jan 10 12:37 20140116_12:00 drwxr-xr-x 55 root root 32768 Jan 10
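For snapshot names like 20140113_12:00, the usual approach is to describe the naming with shadow:format. A minimal sketch with a hypothetical share path and snapshot directory (behaviour with non-GMT snapshot names can depend on the Samba version):

    [share]
        path = /gpfs/fs0/share
        vfs objects = shadow_copy2
        shadow:snapdir = .snapshots
        shadow:format = %Y%m%d_%H:%M
        shadow:sort = desc
        shadow:localtime = yes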
2012 Mar 14 (2 replies)
QA builds for 3.2.6 and 3.3 beta3
Greetings, There are 2 imminent releases coming soon to a download server near you: 1. GlusterFS 3.2.6 - a maintenance release that fixes some bugs. 2. GlusterFS 3.3 beta 3 - the next iteration of the exciting new hotness that will be 3.3 You can find both of these in the "QA builds" server: http://bits.gluster.com/pub/gluster/glusterfs/ There are source tarballs and binary RPMs
2014 Jul 16 (1 reply)
Must Samba4 AD be provisionned with rfc2307 to use winbind ?
I have been reading through an old thread and to be honest confused.com root at zent1:~# samba-tool domain level show params.c:pm_process() - Processing configuration file "/etc/samba/shares.conf" ldb_wrap open of secrets.ldb Domain and forest function level for domain 'DC=office,DC=zentyal,DC=lan' Forest function level: (Windows) 2003 Domain function level: (Windows) 2003
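For reference, a member-server winbind setup that relies on rfc2307 attributes looks roughly like this (a sketch; realm, workgroup and ID ranges are assumptions based on the DN shown above). If uidNumber/gidNumber are not populated in AD, the rid idmap backend is the usual alternative to rfc2307:

    [global]
        security = ads
        realm = OFFICE.ZENTYAL.LAN
        workgroup = OFFICE
        idmap config * : backend = tdb
        idmap config * : range = 3000-7999
        idmap config OFFICE : backend = ad
        idmap config OFFICE : schema_mode = rfc2307
        idmap config OFFICE : range = 10000-999999
        winbind nss info = rfc2307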
2011 Jul 25 (3 replies)
gluster client performance
Hi- I'm new to Gluster, but am trying to get it set up on a new compute cluster we're building. We picked Gluster for one of our cluster file systems (we're also using Lustre for fast scratch space), but the Gluster performance has been so bad that I think maybe we have a configuration problem -- perhaps we're missing a tuning parameter that would help, but I can't find
2017 Dec 21 (3 replies)
Wrong volume size with df
Sure! > 1 - output of gluster volume heal <volname> info Brick pod-sjc1-gluster1:/data/brick1/gv0 Status: Connected Number of entries: 0 Brick pod-sjc1-gluster2:/data/brick1/gv0 Status: Connected Number of entries: 0 Brick pod-sjc1-gluster1:/data/brick2/gv0 Status: Connected Number of entries: 0 Brick pod-sjc1-gluster2:/data/brick2/gv0 Status: Connected Number of entries: 0 Brick
2018 Jan 10 (0 replies)
Blocking IO when hot tier promotion daemon runs
Hi, Can you send the volume info, the volume status output, and the tier logs? And I need to know the size of the files that are being stored. On Tue, Jan 9, 2018 at 9:51 PM, Tom Fite <tomfite at gmail.com> wrote: > I've recently enabled an SSD-backed 2 TB hot tier on my 150 TB 2 server / 3 > bricks per server distributed replicated volume. > > I'm seeing IO get blocked
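The requested information can be gathered with the standard CLI; a sketch assuming the volume name gv0 used elsewhere in this thread (the tier log file name and path can vary by version and distribution):

    gluster volume info gv0
    gluster volume status gv0
    gluster volume tier gv0 status

    # tier daemon log, typically under /var/log/glusterfs/
    less /var/log/glusterfs/gv0-tier.log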
2018 Jan 18 (2 replies)
Blocking IO when hot tier promotion daemon runs
Hi Tom, The volume info doesn't show the hot bricks. I think you took the volume info output before attaching the hot tier. Can you send the volume info of the current setup where you see this issue? The logs you sent are from a later point in time; the issue is hit earlier than what is available in the logs, so I need the logs from an earlier time. And along with the entire tier
2018 Jan 10 (2 replies)
Blocking IO when hot tier promotion daemon runs
The sizes of the files are extremely varied: there are millions of small (<1 MB) files and thousands of files larger than 1 GB. Attached is the tier log for gluster1 and gluster2. These are full of "demotion failed" messages, which are also shown in the status: [root at pod-sjc1-gluster1 gv0]# gluster volume tier gv0 status Node Promoted files Demoted files