We have a test 10gR2 RAC cluster using ocfs2 filesystems for the Clusterware files and the Database files.

We need to increase the node slots to accommodate new RAC nodes. Is it true that we will need to umount these filesystems for the upgrade (i.e. Database and Clusterware also)?

We are planning to use the following command format to perform the node slot increase:

# tunefs.ocfs2 -N 3 /dev/mapper/mpath1p1
Marcos E. Matsunaga
2007-Oct-01 05:16 UTC
[Ocfs2-users] Question about increasing node slots
From http://oss.oracle.com/projects/ocfs2/dist/documentation/ocfs2_faq.html:

19 - How do I add a new node to an online cluster?

You can use the console to add a new node. However, you will need to explicitly add the new node on all the online nodes. That is, adding on one node and propagating to the other nodes is not sufficient. If the operation fails, it will most likely be due to bug#741 <http://oss.oracle.com/bugzilla/show_bug.cgi?id=741>. In that case, you can use the o2cb_ctl utility on all online nodes as follows:

# o2cb_ctl -C -i -n NODENAME -t node -a number=NODENUM -a ip_address=IPADDR -a ip_port=IPPORT -a cluster=CLUSTERNAME

Ensure the node is added both in /etc/ocfs2/cluster.conf and in /config/cluster/CLUSTERNAME/node on all online nodes. You can then simply copy the cluster.conf to the new (still offline) node as well as other offline nodes. At the end, ensure that cluster.conf is consistent on all the nodes.

20 - How do I add a new node to an offline cluster?

You can either use the console, use o2cb_ctl, or simply hand-edit cluster.conf. Then either use the console to propagate it to all nodes or hand-copy it using scp or any other tool. The o2cb_ctl command to do the same is:

# o2cb_ctl -C -n NODENAME -t node -a number=NODENUM -a ip_address=IPADDR -a ip_port=IPPORT -a cluster=CLUSTERNAME

Notice the "-i" argument is not required as the cluster is not online.

Regards,

Marcos Eduardo Matsunaga
Oracle USA
Linux Engineering
_______________________________________________
Ocfs2-users mailing list
Ocfs2-users@oss.oracle.com
http://oss.oracle.com/mailman/listinfo/ocfs2-users
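To make FAQ item 19 concrete, here is a dry-run sketch of adding one new node on every online node. The hostnames (rac1, rac2, rac3), node number, IP address, port, and cluster name below are all made-up placeholders, not values from this thread; substitute your own and then run the printed commands (the -i flag is what applies the change to the live cluster):

```shell
# Dry-run: print the o2cb_ctl invocation for each currently-online node.
# Every value here is hypothetical; edit before use.
CLUSTER=ocfs2
NEWNODE=rac3; NODENUM=2; IP=192.168.1.13; PORT=7777
out=""
for host in rac1 rac2; do
    # one invocation per ONLINE node; adding on one node is not enough
    out="$out
ssh $host o2cb_ctl -C -i -n $NEWNODE -t node -a number=$NODENUM -a ip_address=$IP -a ip_port=$PORT -a cluster=$CLUSTER"
done
echo "$out"
```

Afterwards, per the FAQ, verify that /etc/ocfs2/cluster.conf and /config/cluster/CLUSTERNAME/node agree on every online node, and copy cluster.conf to the new (still offline) node.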
So does the o2cb_ctl command touch the ocfs2 filesystem superblock and increase the node slot value in this example?
Marcos E. Matsunaga
2007-Oct-01 06:59 UTC
[Ocfs2-users] Question about increasing node slots
The console does that ("You can use the console to add a new node"). tunefs.ocfs2 is actually the tool that changes the superblock to add more slots (see the man page); it is called by the console (a more user-friendly interface) to perform the action.

o2cb_ctl only defines the new node(s) on the existing nodes that are running.

Regards,

Marcos Eduardo Matsunaga
Oracle USA
Linux Engineering
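The division of labor described above can be summarized as two distinct commands: one touches only the cluster configuration, the other rewrites the on-disk superblock. A dry-run sketch, with hypothetical node name, addresses, and cluster name (only the device path and slot count come from this thread):

```shell
# Two separate concerns (all values below are placeholders):
DEV=/dev/mapper/mpath1p1
# 1. Cluster membership: o2cb_ctl edits the O2CB configuration only;
#    it never touches the filesystem superblock.
cmd_cluster="o2cb_ctl -C -i -n rac3 -t node -a number=2 -a ip_address=192.168.1.13 -a ip_port=7777 -a cluster=ocfs2"
# 2. Node slots: tunefs.ocfs2 rewrites the superblock, and refuses to
#    run while the filesystem is mounted on any node.
cmd_fs="tunefs.ocfs2 -N 3 $DEV"
echo "$cmd_cluster"
echo "$cmd_fs"
```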
So since tunefs.ocfs2 is the tool that actually does the work of adding more slots, and the tunefs.ocfs2 man page states:

DESCRIPTION
tunefs.ocfs2 is used to adjust OCFS2 file system parameters on disk. In order to prevent data loss, tunefs.ocfs2 will not perform any action on the specified device if it is mounted on any node in the cluster. This tool requires the O2CB cluster to be online.

I return to my original question: "Is it true that we will need to umount these filesystems for the upgrade (i.e. Database and Clusterware also)?"

Since our cluster is running entirely on ocfs2 filesystems, this will cause a cluster outage. Is there a way to do this while the filesystems are mounted? If not, are there plans to allow a node slot increase while the filesystems are mounted, and if so, when?
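For the record, the offline procedure implied by the man-page constraint can be sketched as a dry-run plan. Because tunefs.ocfs2 needs the O2CB stack online but the filesystem unmounted on every node, and everything here lives on ocfs2, this does mean a full outage. Hostnames, the mount point, and the CRS commands below are assumptions, not details from this thread:

```shell
# Dry-run plan for the slot increase (all names/paths hypothetical).
# O2CB must stay ONLINE, but the volume must be unmounted EVERYWHERE.
DEV=/dev/mapper/mpath1p1
NODES="rac1 rac2"
plan=""
for h in $NODES; do
    # stop Clusterware (and everything it manages), then unmount
    plan="$plan
ssh $h crsctl stop crs
ssh $h umount $DEV"
done
# any one node performs the slot increase against the shared device
plan="$plan
tunefs.ocfs2 -N 3 $DEV"
for h in $NODES; do
    plan="$plan
ssh $h mount $DEV /u02/oradata
ssh $h crsctl start crs"
done
echo "$plan"   # review the printed plan; execute each line deliberately
```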