Displaying 20 results from an estimated 800 matches similar to: "Slow write times to gluster disk"
2005 Jan 31
1
[LLVMdev] Question about Global Variable
Hi,
Sorry for bothering you guys again.
I ran into a problem when trying to recover the initial value of a global variable. What I did is the following:
ConstantArray *Cstr = dyn_cast<ConstantArray>(gI->getInitializer());
// the above call lets me get the initial string contents of a global variable, e.g. char a[10] = "test global";
And then I make some changes for
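A quick way to see what such an initializer looks like before touching the C++ API is to dump the IR from clang; a minimal sketch, with illustrative file names:

    clang -S -emit-llvm globals.c -o globals.ll
    grep '@a' globals.ll    # shows the [10 x i8] array constant that getInitializer() returns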
2011 Mar 24
1
.Fortran successful, R locks up.
Howdy,
I am having a problem with a library compiled from some legacy Fortran
code. I can call the library, it runs as it should, returns a list,
and gives a ">" prompt, but then locks up the R session. Functions
typed in return nothing. Ctrl-C results in a new prompt that is still
locked up, and R overwhelms the processor. This happens on Mac,
Windows, and Linux exactly the same. I
2005 Feb 02
1
[LLVMdev] RE: Question about Global Variable
Thanks for your reply.
After I changed Cstr to gI, it compiled successfully. Thanks again.
Another question is about constructing getelementptr.
// C code
char gStrA[10] = "test str"; // a global variable, gStrA, with initializer "test str"
char gStrB[10] = "test str2";
int main() {
int i;
char *pGVars[20]; // here, pGVars is for storing the address of each
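To see the getelementptr expressions generated when the addresses of gStrA/gStrB are taken, one can emit the IR for the snippet above; a minimal sketch, file names illustrative:

    clang -S -emit-llvm gstr.c -o gstr.ll
    grep -n 'getelementptr' gstr.ll    # the GEPs indexing into @gStrA / @gStrB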
2016 Aug 31
2
group write permissions not being respected
So far, those look the same
client:
[root@mseas FixOwn]# getfacl /gdata/bibliography/Work/GroupBib/trunk/
getfacl: Removing leading '/' from absolute path names
# file: gdata/bibliography/Work/GroupBib/trunk/
# owner: phaley
# group: mseasweb
# flags: -s-
user::rwx
group::rwx
other::r-x
server:
[root@mseas-data2 ~]# getfacl /gdata/bibliography/Work/GroupBib/trunk/
getfacl:
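A hypothetical sanity check (the username and file name are placeholders) to separate group-membership problems from ACL problems on the client side:

    id someuser    # does the output list mseasweb?
    sudo -u someuser touch /gdata/bibliography/Work/GroupBib/trunk/permtest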
2013 Dec 05
1
Issue mounting /home area from NAS server
Hi,
Just before the Thanksgiving break, we enabled quotas on
the /home areas on the mseas-data server (running CentOS 5.8),
using the following line in the updated /etc/fstab
/dev/mapper/the_raid-lv_home /home ext3 defaults,usrquota,grpquota 1 0
Following the Thanksgiving reboot of mseas-data we have been
experiencing problems with svn on mseas (our front-end machine,
running
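For reference, the usual sequence after adding usrquota,grpquota to an ext3 entry in /etc/fstab is a remount plus an initial quotacheck; a sketch using the paths from the excerpt:

    mount -o remount /home
    quotacheck -cugm /home    # builds aquota.user and aquota.group
    quotaon -v /home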
2017 Aug 08
1
Slow write times to gluster disk
Soumya,
it's
[root@mseas-data2 ~]# glusterfs --version
glusterfs 3.7.11 built on Apr 27 2016 14:09:20
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
2017 Aug 08
0
Slow write times to gluster disk
----- Original Message -----
> From: "Pat Haley" <phaley at mit.edu>
> To: "Soumya Koduri" <skoduri at redhat.com>, gluster-users at gluster.org, "Pranith Kumar Karampuri" <pkarampu at redhat.com>
> Cc: "Ben Turner" <bturner at redhat.com>, "Ravishankar N" <ravishankar at redhat.com>, "Raghavendra
2006 Aug 30
4
Barplot
Dear all,
I have a dataset. I want to make a barplot from this data.
Zero1 <- "
V1 V2 V3 V4 V5 V6 V7 V8 V9
1 1 0 0 0 1 0 0 0 Positive
2 0 0 1 0 1 0 1 1 Negative
3 0 0 1 0 0 0 1 1 Positive
4 0 1 0 1 1 1 0 1 Negative
5 0 0 1 0 1 1 0 0 Positive
6 0 1 0 0 1 1 1 1 Negative
7 1 0 1 1 1 1 1 1 Negative
8 0 0 0 0 1 0 0 1
2016 Aug 30
2
group write permissions not being respected
Hi
We have just migrated our data to a new file server (more space, old
server was showing its age). We have a volume for collaborative use,
based on group membership. In our new server, the group write
permissions are not being respected (e.g. the owner of a directory can
still write to that directory but any other member of the associated
group cannot, even though the directory clearly
2017 Aug 07
2
Slow write times to gluster disk
Hi Soumya,
We just had the opportunity to try the option of disabling the
kernel-NFS and restarting glusterd to start gNFS. However, the gluster
daemon crashes immediately on startup. What additional information
besides what we provide below would help debugging this?
Thanks,
Pat
-------- Forwarded Message --------
Subject: gluster-nfs crashing on start
Date: Mon, 7 Aug 2017 16:05:09
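For context, the switch being attempted usually looks like the following sketch (the volume name is a placeholder; service commands are CentOS 6 style):

    service nfs stop                       # stop kernel NFS
    gluster volume set VOLNAME nfs.disable off
    service glusterd restart
    gluster volume status VOLNAME          # gNFS should appear as an "NFS Server" row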
2014 Feb 04
1
NFS not recognizing available file space
Hi,
I have a server running under CentOS 5.8 and I appear to be in a
situation in which the NFS file server is not recognizing the available
space on a particular disk (actually a hardware RAID-6 of 13 2 TB disks).
If I try to write to the disk I get the following error message
[root@nas-0-1 mseas-data-0-1]# touch dum
touch: cannot touch `dum': No space left on device
However, if I check
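"No space left on device" with plenty of free blocks often means exhausted inodes; comparing the two views usually settles it (mount point from the excerpt):

    df -h /mseas-data-0-1    # block usage
    df -i /mseas-data-0-1    # inode usage; IUse% at 100% would explain the error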
2012 Oct 05
0
No subject
# gluster --version
glusterfs 3.3.1 built on Oct 11 2012 22:01:05
# gluster volume info
Volume Name: gdata
Type: Distribute
Volume ID: eccc3a90-212d-4563-ae8d-10a77758738d
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: gluster-0-0:/mseas-data-0-0
Brick2: gluster-0-1:/mseas-data-0-1
Brick3: gluster-data:/data
[root@mseas-data ~]# ps -ef | grep gluster
root 2783
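Rather than grepping ps, gluster can report on its own daemons; a sketch using the volume name from the excerpt:

    gluster volume status gdata    # per-brick PIDs, ports, and online state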
2017 Jun 24
0
Slow write times to gluster disk
On Fri, Jun 23, 2017 at 9:10 AM, Pranith Kumar Karampuri <
pkarampu@redhat.com> wrote:
>
>
> On Fri, Jun 23, 2017 at 2:23 AM, Pat Haley <phaley@mit.edu> wrote:
>
>>
>> Hi,
>>
>> Today we experimented with some of the FUSE options that we found in the
>> list.
>>
>> Changing these options had no effect:
>>
>>
2017 Jun 12
0
Slow write times to gluster disk
Hi Guys,
I was wondering what our next steps should be to solve the slow write times.
Recently I was debugging a large code that writes a lot of output at
every time step. When I tried writing to our gluster disks, it was
taking over a day to do a single time step, whereas if I had the same
program (same hardware, network) write to our NFS disk the time per
time step was about 45 minutes.
2017 Jun 02
2
Slow write times to gluster disk
Are you sure using conv=sync is what you want? I normally use conv=fdatasync; I'll look up the difference between the two and see if it affects your test.
-b
----- Original Message -----
> From: "Pat Haley" <phaley at mit.edu>
> To: "Pranith Kumar Karampuri" <pkarampu at redhat.com>
> Cc: "Ravishankar N" <ravishankar at redhat.com>,
2017 Jun 22
0
Slow write times to gluster disk
Hi,
Today we experimented with some of the FUSE options that we found in the
list.
Changing these options had no effect:
gluster volume set test-volume performance.cache-max-file-size 2MB
gluster volume set test-volume performance.cache-refresh-timeout 4
gluster volume set test-volume performance.cache-size 256MB
gluster volume set test-volume performance.write-behind-window-size 4MB
gluster
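A quick way to confirm which of those settings actually took effect is to list the volume's reconfigured options; a sketch with the volume name from the excerpt:

    gluster volume info test-volume    # changed options show under "Options Reconfigured"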
2017 Jun 26
3
Slow write times to gluster disk
Hi All,
Decided to try another test of gluster mounted via FUSE vs gluster
mounted via NFS, this time using the software we run in production (i.e.
our ocean model writing a netCDF file).
gluster mounted via NFS: the run took 2.3 hr
gluster mounted via FUSE: the run took 44.2 hr
The only problem with using gluster mounted via NFS is that it does not
respect the group write permissions which
2017 Jun 20
2
Slow write times to gluster disk
Hi Ben,
Sorry this took so long, but we had a real-time forecasting exercise
last week and I could only get to this now.
Backend Hardware/OS:
* Much of the information on our back end system is included at the
top of
http://lists.gluster.org/pipermail/gluster-users/2017-April/030529.html
* The specific model of the hard disks is Seagate ENTERPRISE CAPACITY
V.4 6TB
2017 Jun 23
2
Slow write times to gluster disk
On Fri, Jun 23, 2017 at 2:23 AM, Pat Haley <phaley@mit.edu> wrote:
>
> Hi,
>
> Today we experimented with some of the FUSE options that we found in the
> list.
>
> Changing these options had no effect:
>
> gluster volume set test-volume performance.cache-max-file-size 2MB
> gluster volume set test-volume performance.cache-refresh-timeout 4
> gluster
2009 Dec 03
3
Xen DomU with high IOWAIT and low disk performance (lvm raid1)
Hello list!
My setup:
Dom0: Debian 5.0.3 with xen-hypervisor-3.2-1-i386 (2.6.26-2-xen-686)
DomU: Ubuntu 8.04 2.6.26-2-xen-686
System is running on two hard drives mirrored with raid1 and organized
by LVM. Dom0 and DomU are running on logical volumes.
Partitions for DomUs are connected via 'phy:/dev/lvm/disk1,sda1,w' for
example.
Here are some scenarios I tested, where you