search for: haleys

Displaying 20 results from an estimated 167 matches for "haleys".

2017 Jun 02
2
Slow write times to gluster disk
Are you sure using conv=sync is what you want? I normally use conv=fdatasync. I'll look up the difference between the two and see if it affects your test. -b ----- Original Message ----- > From: "Pat Haley" <phaley at mit.edu> > To: "Pranith Kumar Karampuri" <pkarampu at redhat.com> > Cc: "Ravishankar N" <ravishankar at redhat.com>,
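For anyone reproducing the comparison, a minimal sketch of the two dd invocations being discussed; the mount point and file size are illustrative, not taken from the thread. Note that conv=sync pads input blocks rather than forcing data to disk, while conv=fdatasync flushes the written data once at the end, so the two time very different things.

    # conv=sync pads each input block with zeros to the full block size;
    # it does not force the data to stable storage
    dd if=/dev/zero of=/gluster-mount/ddtest bs=1M count=1024 conv=sync

    # conv=fdatasync issues a single fdatasync() before dd exits, so the
    # reported time includes flushing the written data to disk
    dd if=/dev/zero of=/gluster-mount/ddtest bs=1M count=1024 conv=fdatasync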
2017 Jun 12
0
Slow write times to gluster disk
Hi Guys, I was wondering what our next steps should be to solve the slow write times. Recently I was debugging a large code and writing a lot of output at every time step. When I tried writing to our gluster disks, it was taking over a day to do a single time step, whereas if I had the same program (same hardware, network) write to our NFS disk the time per time step was about 45 minutes.
2017 Jun 20
2
Slow write times to gluster disk
Hi Ben, Sorry this took so long, but we had a real-time forecasting exercise last week and I could only get to this now. Backend Hardware/OS: * Much of the information on our back-end system is included at the top of http://lists.gluster.org/pipermail/gluster-users/2017-April/030529.html * The specific model of the hard disks is Seagate ENTERPRISE CAPACITY V.4 6TB
2017 Jun 27
0
Slow write times to gluster disk
On Mon, Jun 26, 2017 at 7:40 PM, Pat Haley <phaley at mit.edu> wrote: > > Hi All, > > Decided to try another test of gluster mounted via FUSE vs gluster > mounted via NFS, this time using the software we run in production (i.e. > our ocean model writing a netCDF file). > > gluster mounted via NFS the run took 2.3 hr > > gluster mounted via FUSE: the run took
2017 Jun 23
2
Slow write times to gluster disk
On Fri, Jun 23, 2017 at 2:23 AM, Pat Haley <phaley at mit.edu> wrote: > > Hi, > > Today we experimented with some of the FUSE options that we found in the > list. > > Changing these options had no effect: > > gluster volume set test-volume performance.cache-max-file-size 2MB > gluster volume set test-volume performance.cache-refresh-timeout 4 > gluster
2017 Jun 22
0
Slow write times to gluster disk
Hi, Today we experimented with some of the FUSE options that we found in the list. Changing these options had no effect: gluster volume set test-volume performance.cache-max-file-size 2MB gluster volume set test-volume performance.cache-refresh-timeout 4 gluster volume set test-volume performance.cache-size 256MB gluster volume set test-volume performance.write-behind-window-size 4MB gluster
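As a point of reference for the volume-option commands quoted above, a minimal sketch of setting one option and confirming it took effect, assuming the same test-volume name used in the thread:

    gluster volume set test-volume performance.write-behind-window-size 4MB
    # non-default settings appear under 'Options Reconfigured' in the info output
    gluster volume info test-volume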
2017 Jun 26
3
Slow write times to gluster disk
Hi All, Decided to try another test of gluster mounted via FUSE vs gluster mounted via NFS, this time using the software we run in production (i.e. our ocean model writing a netCDF file). gluster mounted via NFS the run took 2.3 hr gluster mounted via FUSE: the run took 44.2 hr The only problem with using gluster mounted via NFS is that it does not respect the group write permissions which
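For context, a minimal sketch of the two mount types being compared; the server name, volume name and mount point are illustrative, not taken from the thread. The built-in gluster NFS server only speaks NFSv3.

    # native FUSE mount
    mount -t glusterfs gluster-server:/gdata /gdata

    # gluster NFS mount (NFSv3 only for the built-in gluster NFS server)
    mount -t nfs -o vers=3 gluster-server:/gdata /gdata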
2017 Jun 24
0
Slow write times to gluster disk
On Fri, Jun 23, 2017 at 9:10 AM, Pranith Kumar Karampuri < pkarampu at redhat.com> wrote: > > > On Fri, Jun 23, 2017 at 2:23 AM, Pat Haley <phaley at mit.edu> wrote: > >> >> Hi, >> >> Today we experimented with some of the FUSE options that we found in the >> list. >> >> Changing these options had no effect: >> >>
2014 Sep 12
1
compiling Asterisk
I am trying to compile the certified-asterisk-11.6-cert5 code, and when I try to start it and then go into the console I get the error message "asterisk dead but subsys locked". Can anyone help with why this is happening? I have never seen this before. This is a fresh install on a new CentOS 6.5 server. Thanks, Scott Haley IS Voice Projects Team Edward Jones Investments Phone:
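On RHEL/CentOS init scripts, "dead but subsys locked" means the process has exited but its lock file under /var/lock/subsys/ was left behind. A hedged sketch of the usual first checks, using the stock CentOS paths (not something stated in the thread):

    # stale lock left behind after the asterisk process died
    ls -l /var/lock/subsys/asterisk
    rm -f /var/lock/subsys/asterisk
    service asterisk restart
    # then look in the Asterisk logs for why the process actually died
    tail -n 100 /var/log/asterisk/messages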
2012 Nov 30
3
Cannot mount gluster volume
Hi, We recently installed glusterfs 3.3.1. We have a 3 brick gluster system running that was being successfully mounted earlier. Yesterday we experienced a power outage and now after rebooting our systems, we are unable to mount this gluster file system. On the gluster client, a df -h command shows 41TB out of 55TB, while an ls command shows broken links for directories and missing files.
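A hedged sketch of the first things worth checking after the reboot, using standard gluster 3.3-era commands (nothing here is taken from the thread itself):

    # on each gluster server
    service glusterd status
    gluster peer status          # are all peers connected?
    gluster volume status        # are all brick processes online?
    gluster volume info          # is the volume started?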
2014 Feb 26
1
SIP 603 Declined error message
I have a SIP trunk from my Asterisk server to an Avaya CM server. If I place calls inbound, everything works fine. If I place calls outbound, originating from the Asterisk box, everything works fine (I have done this with the use of the .call files). If I set up an extension with the findme-followme feature and have it try to hairpin a call back out the same trunk to the Avaya, I get a
2012 Feb 26
1
"Structure needs cleaning" error
Hi, We have recently upgraded our gluster to 3.2.5 and have encountered the following error. Gluster seems somehow confused about one of the files it should be serving up, specifically /projects/philex/PE/2010/Oct18/arch07/BalbacFull_250_200_03Mar_3.png If I go to that directory and simply do an ls *.png I get ls: BalbacFull_250_200_03Mar_3.png: Structure needs cleaning (along with a listing
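"Structure needs cleaning" is the kernel's EUCLEAN error text and usually points at corruption in the brick's local filesystem rather than in gluster itself. A hedged sketch of checking the backend, assuming an illustrative brick mount point and device, and that the brick is taken offline first:

    # unmount the brick filesystem before checking it
    umount /brick1
    # for an ext3/ext4 brick
    e2fsck -f /dev/sdb1
    # for an XFS brick
    xfs_repair /dev/sdb1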
2011 Jun 27
2
Using TSM to back-up glusterfs
Hi, We have been trying to back up a glusterfs (v3.1.4) area using the Tivoli TSM software to an off-site area. The back-up keeps failing with the following typical error messages: 06/14/2011 22:22:58 ANS1587W I/O error reading file attributes for: /gdata/projects/philex/OAG/2011/May16/mdor3km10/coast_den2.in. errno = 22, Invalid argument 06/14/2011 22:22:59 ANS4007E Error processing
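One hedged sanity check is whether the same attribute reads fail outside TSM; the path below is the one quoted in the error message:

    # does a plain stat of the flagged file also fail on the gluster mount?
    stat /gdata/projects/philex/OAG/2011/May16/mdor3km10/coast_den2.in
    # dump its extended attributes, roughly what the backup client is reading
    getfattr -d -m . /gdata/projects/philex/OAG/2011/May16/mdor3km10/coast_den2.in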
2009 Jul 01
1
[LLVMdev] build failure on ARM linux
Andrew Haley wrote: > Nick Lewycky wrote: >> 2009/6/30 Andrew Haley <aph at redhat.com <mailto:aph at redhat.com>> >> >> Nick Lewycky wrote: >> > I'm seeing this new build failure, starting some time yesterday on >> ARM: >> >> Yes. It's just a matter of defining __sync_val_compare_and_swap_4: >> >>
2016 Aug 29
6
CentOS 6: files now owned by nobody:nobody
Hi, We are running a cluster under CentOS 6.6. We recently attached a new NAS device running CentOS 6.8 and rsync'd our user file system to it. We noticed that all the files were owned by nobody (with nobody as the group). We copied over the /etc/passwd and /etc/group files from our front-end server to our NAS server. If we log in to the NAS server, we see the files owned by their
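If the nobody:nobody ownership only shows up on NFSv4 mounts of the NAS, the usual culprit is an NFSv4 id-mapping domain mismatch rather than the copied passwd/group files. A hedged sketch of that check on CentOS 6, with an illustrative server name and export path (this may not be the poster's actual cause):

    # on both the NFS client and the NAS: the Domain lines must match
    grep -i '^Domain' /etc/idmapd.conf
    service rpcidmapd restart

    # or sidestep NFSv4 id mapping entirely by mounting with NFSv3
    mount -t nfs -o vers=3 nas-server:/export /mnt/nas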
2017 Jul 07
2
Slow write times to gluster disk
Hi, On 07/07/2017 06:16 AM, Pat Haley wrote: > > Hi All, > > A follow-up question. I've been looking at various pages on nfs-ganesha > & gluster. Is there a version of nfs-ganesha that is recommended for > use with > > glusterfs 3.7.11 built on Apr 27 2016 14:09:22 > CentOS release 6.8 (Final) For glusterfs 3.7, the nfs-ganesha-2.3-* versions can be used. I see
2009 Jan 19
6
[LLVMdev] Load from abs address generated bad code on LLVM 2.4
This is x86_64. I have a problem where an absolute memory load define i32 @foo() { entry: %0 = load i32* inttoptr (i64 12704196 to i32*) ; <i32> [#uses=1] ret i32 %0 } generates incorrect code on LLVM 2.4: 0x7ffff6d54010: mov 0xc1d9c4(%rip),%eax # 0x7ffff79719da 0x7ffff6d54016: retq should be 0x7ffff6d54010: mov 0xc1d9c4, %eax 0x7ffff6d54016: retq
2009 May 19
3
[LLVMdev] llvm-java
Nicolas Geoffray wrote: > Andrew Haley wrote: >> Right, so that part should be trivial. So, does the array bounds check >> elimination already work? If it does, that will considerably reduce >> the work that Andre needs to do. To say the least... >> >> > > Trivial bounds check elimination already works, such as tab[2] = 1; > tab[1] = 2 (the second
2009 Jun 30
0
[LLVMdev] build failure on ARM linux
Nick Lewycky wrote: > 2009/6/30 Andrew Haley <aph at redhat.com <mailto:aph at redhat.com>> > > Nick Lewycky wrote: > > I'm seeing this new build failure, starting some time yesterday on > ARM: > > Yes. It's just a matter of defining __sync_val_compare_and_swap_4: > >
2017 Jul 05
2
Slow write times to gluster disk
Hi Soumya, (1) In http://mseas.mit.edu/download/phaley/GlusterUsers/TestNFSmount/ I've placed the following 2 log files: etc-glusterfs-glusterd.vol.log gdata.log The first has repeated messages about NFS disconnects. The second had the <fuse_mnt_directory>.log name (but not much information). (2) About the gluster-NFS native server: do you know where we can find documentation on
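On the gluster-NFS question, a hedged sketch of checking whether the built-in NFS server is enabled for a volume on glusterfs 3.7; the volume name is a placeholder, not taken from the thread:

    # the built-in gluster-NFS server is controlled per volume by nfs.disable
    gluster volume get <volname> nfs.disable
    gluster volume status <volname> nfs            # shows the NFS server process and port
    gluster volume set <volname> nfs.disable off   # turn gluster-NFS back on if needed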