Displaying 20 results from an estimated 22 matches for "carc".

2018 Jan 09
2
Creating cluster replica on 2 nodes 2 bricks each.
...e 1 and 2 (brick A) but I've not been able to add to the replica together, Gluster switches to distributed replica when i add it with only 14Tb. Any help will be appreciated. Thanks Jose --------------------------------- Jose Sanchez Center of Advanced Research Computing Albuquerque, NM 87131 carc.unm.edu <http://carc.unm.edu/> -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.gluster.org/pipermail/gluster-users/attachments/20180109/bb533134/attachment.html>
2018 Apr 25
0
Turn off replication
...4349:__glusterd_handle_status_volume] 0-management: Received status volume req for volume scratch [root at gluster02 glusterfs]# --------------------------------- Jose Sanchez Systems/Network Analyst 1 Center of Advanced Research Computing 1601 Central Ave. MSC 01 1190 Albuquerque, NM 87131-0001 carc.unm.edu <http://carc.unm.edu/> 575.636.4232 > On Apr 12, 2018, at 12:11 AM, Karthik Subrahmanya <ksubrahm at redhat.com> wrote: > > > > On Wed, Apr 11, 2018 at 7:38 PM, Jose Sanchez <josesanc at carc.unm.edu <mailto:josesanc at carc.unm.edu>> wrote: > H...
2018 Apr 12
2
Turn off replication
On Wed, Apr 11, 2018 at 7:38 PM, Jose Sanchez <josesanc at carc.unm.edu> wrote: > Hi Karthik > > Looking at the information you have provided me, I would like to make sure > that I'm running the right commands. > > 1. gluster volume heal scratch info > If the count is non zero, trigger the heal and wait for heal info count to become z...
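The heal check discussed in this thread can be sketched as a short command sequence. This is a minimal sketch, assuming (as in the thread) the volume is named scratch and the commands run on a node in the trusted pool:

```shell
# Check for entries pending heal; a non-zero count means the replicas differ.
gluster volume heal scratch info

# Trigger a heal of the pending entries, then re-check until the
# count reported by "heal info" drops to zero.
gluster volume heal scratch
gluster volume heal scratch info
```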
2018 Jan 10
0
Creating cluster replica on 2 nodes 2 bricks each.
Hi, Please let us know what commands you ran so far and the output of the *gluster volume info* command. Thanks, Nithya On 9 January 2018 at 23:06, Jose Sanchez <josesanc at carc.unm.edu> wrote: > Hello > > We are trying to setup Gluster for our project/scratch storage HPC machine > using a replicated mode with 2 nodes, 2 bricks each (14tb each). > > Our goal is to be able to have a replicated system between node 1 and 2 (A > bricks) and add an addi...
2018 Jan 10
2
Creating cluster replica on 2 nodes 2 bricks each.
...1/scratch Brick3: gluster01ib:/gdata/brick2/scratch Brick4: gluster02ib:/gdata/brick2/scratch Options Reconfigured: performance.readdir-ahead: on nfs.disable: on [root at gluster01 ~]# -------------------------------- Jose Sanchez Center of Advanced Research Computing Albuquerque, NM 87131-0001 carc.unm.edu <http://carc.unm.edu/> > On Jan 9, 2018, at 9:04 PM, Nithya Balachandran <nbalacha at redhat.com> wrote: > > Hi, > > Please let us know what commands you ran so far and the output of the gluster volume info command. > > Thanks, > Nithya > > On...
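The 2x2 layout shown in the volume info above could be created roughly as follows. This is a sketch, not the poster's actual command: brick paths are taken from the output above, and with replica 2 consecutive bricks on the command line form a replica pair, so each brick is mirrored across the two nodes:

```shell
gluster volume create scratch replica 2 \
    gluster01ib:/gdata/brick1/scratch gluster02ib:/gdata/brick1/scratch \
    gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch
gluster volume start scratch
gluster volume info scratch   # should report Type: Distributed-Replicate, 2 x 2 = 4
```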
2018 Apr 25
2
Turn off replication
...a/brick1/scratch N/A N/A N N/A Task Status of Volume scratch ------------------------------------------------------------------------------ There are no active volume tasks [root at gluster02 glusterfs]# > On Apr 25, 2018, at 3:23 PM, Jose Sanchez <josesanc at carc.unm.edu> wrote: > > Hello Karthik > > > Im having trouble adding the two bricks back online. Any help is appreciated > > thanks > > > when i try to add-brick command this is what i get > > [root at gluster01 ~]# gluster volume add-brick scratch glu...
2018 Apr 30
2
Turn off replication
...rebalancing again. Is there a downside to doing this? What happens with the missing data in Gluster when rebalancing? Thanks Jose --------------------------------- Jose Sanchez Systems/Network Analyst 1 Center of Advanced Research Computing 1601 Central Ave. MSC 01 1190 Albuquerque, NM 87131-0001 carc.unm.edu <http://carc.unm.edu/> 575.636.4232 > On Apr 27, 2018, at 4:16 AM, Hari Gowtham <hgowtham at redhat.com> wrote: > > Hi Jose, > > Why are all the bricks visible in volume info if the pre-validation > for add-brick failed? I suspect that the remove brick wasn&...
2011 Jan 27
2
help for a loop procedure
...) has sample events by rows (U1,U2,U3) and detected species by columns. U<-read.table("C:\\Documents \\tre_usc.txt",header=T,row.names=1,sep="\t",dec = ",") U # global matrix with 3 samples
SPECIE Aadi Aagl Apap Aage Bdia Beup Crub Carc Cpam
U1        0    0    0    0    7    0    5    0    1
U2        0    0    0    0    4    2    1    0    0
U3        0    0    0    0    0    0    0...
2018 May 02
0
Turn off replication
...d the data on gluster volume will be available for usage in spite of rebalance running. If you want to speed things up rebalance throttle option can be set to aggressive to speed things up. (this might increase the cpu and disk usage). On Mon, Apr 30, 2018 at 6:24 PM, Jose Sanchez <josesanc at carc.unm.edu> wrote: > Hi All > > We were able to get all 4 bricks are distributed , we can see the right > amount of space. but we have been rebalancing since 4 days ago for 16Tb. and > still only 8tb. is there a way to speed up. there is also data we can remove > from it to speed...
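The throttle suggestion above maps to a single volume option. A sketch, assuming the volume name scratch used throughout this thread (aggressive mode trades higher CPU and disk load for faster rebalance, as the reply warns):

```shell
gluster volume set scratch cluster.rebal-throttle aggressive
gluster volume rebalance scratch status   # monitor scanned/rebalanced file counts per node
```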
2018 Apr 27
0
Turn off replication
...the remove brick wasn't done properly. You can provide the cmd_history.log to verify this. Better to get the other log messages. Also I need to know what are the bricks that were actually removed, the command used and its output. On Thu, Apr 26, 2018 at 3:47 AM, Jose Sanchez <josesanc at carc.unm.edu> wrote: > Looking at the logs , it seems that it is trying to add using the same port > was assigned for gluster01ib: > > > Any Ideas?? > > Jose > > > > [2018-04-25 22:08:55.169302] I [MSGID: 106482] > [glusterd-brick-ops.c:447:__glusterd_handle_add_br...
2018 Jan 11
0
Creating cluster replica on 2 nodes 2 bricks each.
...file will exist on either Brick1 and Brick2 or Brick3 and Brick4. After the add brick, the volume will have a total capacity of 28TB and store 2 copies of every file. Let me know if that is not what you are looking for. Regards, Nithya On 10 January 2018 at 20:40, Jose Sanchez <josesanc at carc.unm.edu> wrote: > > > Hi Nithya > > This is what i have so far, I have peer both cluster nodes together as > replica, from node 1A and 1B , now when i tried to add it , i get the error > that it is already part of a volume. when i run the cluster volume info , i > see th...
2018 Jan 15
1
Creating cluster replica on 2 nodes 2 bricks each.
On Fri, 12 Jan 2018 at 21:16, Nithya Balachandran <nbalacha at redhat.com> wrote: > ---------- Forwarded message ---------- > From: Jose Sanchez <josesanc at carc.unm.edu> > Date: 11 January 2018 at 22:05 > Subject: Re: [Gluster-users] Creating cluster replica on 2 nodes 2 bricks > each. > To: Nithya Balachandran <nbalacha at redhat.com> > Cc: gluster-users <gluster-users at gluster.org> > > > Hi Nithya > > Thanks...
2018 Apr 05
1
Turn off replication
...ble or gain the storage space on our gluster storage node. what happens with the data, do i need to erase one of the nodes? Jose --------------------------------- Jose Sanchez Systems/Network Analyst Center of Advanced Research Computing 1601 Central Ave. MSC 01 1190 Albuquerque, NM 87131-0001 carc.unm.edu <http://carc.unm.edu/> 575.636.4232
2018 Jan 11
3
Creating cluster replica on 2 nodes 2 bricks each.
...k3 and Brick4. > > > After the add brick, the volume will have a total capacity of 28TB and store 2 copies of every file. Let me know if that is not what you are looking for. > > > Regards, > Nithya > > > On 10 January 2018 at 20:40, Jose Sanchez <josesanc at carc.unm.edu <mailto:josesanc at carc.unm.edu>> wrote: > > > Hi Nithya > > This is what i have so far, I have peer both cluster nodes together as replica, from node 1A and 1B , now when i tried to add it , i get the error that it is already part of a volume. when i run the clu...
2018 Jan 12
0
Creating cluster replica on 2 nodes 2 bricks each.
---------- Forwarded message ---------- From: Jose Sanchez <josesanc at carc.unm.edu> Date: 11 January 2018 at 22:05 Subject: Re: [Gluster-users] Creating cluster replica on 2 nodes 2 bricks each. To: Nithya Balachandran <nbalacha at redhat.com> Cc: gluster-users <gluster-users at gluster.org> Hi Nithya Thanks for helping me with this, I understand now , b...
2018 Apr 07
0
Turn off replication
...After the remove-brick erase the data from the backend. Then you can expand the volume by following the steps at [1]. [1] https://docs.gluster.org/en/latest/Administrator%20Guide/Managing%20Volumes/#expanding-volumes Regards, Karthik On Fri, Apr 6, 2018 at 11:39 PM, Jose Sanchez <josesanc at carc.unm.edu> wrote: > Hi Karthik > > this is our configuration, is 2x2 =4 , they are all replicated , each > brick has 14tb. we have 2 nodes A and B, each one with brick 1 and 2. > > Node A (replicated A1 (14tb) and B1 (14tb) ) same with node B (Replicated > A2 (14tb) and B2...
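A hedged sketch of the remove-then-expand sequence Karthik outlines. The brick names follow the thread's 2x2 layout and are illustrative; the removed bricks' backends must be wiped (data, .glusterfs directory, and xattrs) before they are reused:

```shell
# Drop from replica 2 to plain distribute by removing one brick
# from each replica pair.
gluster volume remove-brick scratch replica 1 \
    gluster02ib:/gdata/brick1/scratch gluster02ib:/gdata/brick2/scratch force

# After erasing the removed bricks' backend data, expand the volume
# and rebalance so existing files spread across all bricks.
gluster volume add-brick scratch \
    gluster02ib:/gdata/brick1/scratch gluster02ib:/gdata/brick2/scratch
gluster volume rebalance scratch start
```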
2015 Dec 21
2
Lustre 2.7.0
...cratch and I was wondering if anyone knows what the most current version of CentOS with Lustre support is server-wise. As I poke around the documentation it looks like CentOS 6.3, 6.4, and possibly 6.5 are supported. Regards, Ben J. Archuleta Systems Staff Center for Advanced Research Computing (CARC)
2004 Apr 23
0
Problem at night
I'm using Asterisk with an ISDN HFC-PCI carc (driver zaphfc). All works correctly during the day, but during the night something happens that hangs the card with this message: zaphfc: empty HDLC frame received. Asterisk works without any error message, but ISDN doesn't work. I must stop Asterisk, unload the driver, and reload it, and then all w...
2012 Sep 24
1
Atheros Communications Inc. AR8161 Gigabit Ethernet
..., first of all I must admit I posted this problem on the CentOS forum yesterday, with 0 replies so far. :-( I've been googling this problem all weekend, and getting a bit desperate actually. I'm working on a new Dell Vostro 3460. It has an Atheros Communications Inc. AR8161 Gigabit Ethernet carc that I cannot get to work. There's no entry in /etc/udev/rules.d/70-persistent-net-rules or in /etc/sysconfig/network-scripts for wired network. The laptop has CentOS 6.3 fully updated, lspci -v shows : -------------------------------------------------------------------------------------------...
2018 Feb 26
1
Problems with write-behind with large files on Gluster 3.8.4
Hello, I'm having problems when write-behind is enabled on Gluster 3.8.4. I have 2 Gluster servers each with a single brick that is mirrored between them. The code causing these issues reads two data files each approx. 128G in size. It opens a third file, mmap()'s that file, and subsequently reads and writes to it. The third file, on successful runs (without write-behind enabled)
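To isolate a problem like the one described above, write-behind can be toggled per volume. A sketch; the volume name myvol is hypothetical, since the excerpt does not name the volume:

```shell
# Disable write-behind for the volume, re-run the mmap workload,
# then re-enable it if it turns out not to be the cause.
gluster volume set myvol performance.write-behind off
gluster volume set myvol performance.write-behind on
```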