Upgrading the GPFS cluster on AIX

All GPFS nodes should be upgraded at the same time. Before starting the OS upgrade, complete the following steps: make sure the application is fully stopped (if any application processes are still running and using the GPFS file systems, the file systems cannot be unmounted); unmount all GPFS file systems; and stop the GPFS cluster.

1) View the cluster information.
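Before shutting down the daemons, it can help to confirm that nothing is still holding the GPFS mounts. A hedged sketch using standard AIX and GPFS commands, to be run on a live cluster (the mount point /gpfs1001 is taken from the example file system names later in this document):

```
# List any processes still using the file system (these must be stopped first)
fuser -cux /gpfs1001

# Unmount all GPFS file systems on all nodes
mmumount all -a

# Confirm the daemon state on every node before the OS upgrade begins
mmgetstate -a
```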
# mmshutdown -a
Example output:
Wed May 11 00:08:22 CDT 2011: 6027-1341 mmshutdown: Starting force unmount of GPFS file systems
Wed May 11 00:08:27 CDT 2011: 6027-1344 mmshutdown: Shutting down GPFS daemons
test3-gpfs: Shutting down!
test2-gpfs: Shutting down!
test3-gpfs: 'shutdown' command about to kill process 516190
test2-gpfs: 'shutdown' command about to kill process 483444
test1-gpfs: Shutting down!
test1-gpfs: 'shutdown' command about to kill process 524420
test1-gpfs: Master did not clean up; attempting cleanup now
test1-gpfs: Wed May 11 00:09:28.423 2011: GPFS: 6027-311 mmfsd64 is shutting down.
test1-gpfs: Wed May 11 00:09:28.424 2011: Reason for shutdown: mmfsadm shutdown command timed out
test1-gpfs: Wed May 11 00:09:28 CDT 2011: mmcommon mmfsdown invoked. Subsystem: mmfs Status: down
test1-gpfs: Wed May 11 00:09:28 CDT 2011: 6027-1674 mmcommon: Unmounting file systems.
Wed May 11 00:09:33 CDT 2011: 6027-1345 mmshutdown: Finished

8) Verify that no cluster processes are still running, then start the update:

# smitty update_all

Example output:

INPUT device / directory for software            .
SOFTWARE to update                               _update_all
PREVIEW only? (update operation will NOT occur)  yes    <- select yes for preview
COMMIT software updates?                         no     <- select no for COMMIT filesets
SAVE replaced files?                             yes    <- select yes here
AUTOMATICALLY install requisite software?        yes
EXTEND file systems if space needed?             yes
VERIFY install and check file sizes?             no
DETAILED output?                                 no
Process multiple volumes?                        yes
ACCEPT new license agreements?                   yes    <- accept the new license agreement
Preview new LICENSE agreements?                  no

If everything is fine in the PREVIEW stage, proceed with upgrading the GPFS filesets by running the update again with PREVIEW set to no.

12) Now verify the GPFS filesets version:

# mmlsconfig

Example output:

Configuration data for cluster HOST.test1-gpfs:
-----------------------------------------------
clusterName HOST.test1-gpfs
clusterId
clusterType lc
autoload yes
minReleaseLevel 3.2.1.5
dmapiFileHandleSize 32
maxblocksize 4096K
pagepool 1024M
maxFilesToCache 5000
maxStatCache 40000
maxMBpS 3200
prefetchPct 60
seqDiscardThreshhold 0
worker1Threads 400
prefetchThreads 145
adminMode allToAll

File systems in cluster HOST.test1-gpfs:
----------------------------------------
/dev/gpfs1001
/dev/gpfs1002
/dev/gpfs1003
/dev/gpfs1004
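As a scripted alternative to eyeballing the output, the release level can be pulled from a saved copy of the mmlsconfig output. A minimal sketch, assuming the output was captured to a file; here the capture is simulated with a fragment of the sample output above, and the /tmp/mmlsconfig.out path is illustrative:

```shell
# Simulate a capture of 'mmlsconfig' output (on a real cluster:
#   /usr/lpp/mmfs/bin/mmlsconfig > /tmp/mmlsconfig.out)
cat > /tmp/mmlsconfig.out <<'EOF'
clusterName HOST.test1-gpfs
autoload yes
minReleaseLevel 3.2.1.5
pagepool 1024M
EOF

# Extract the minimum release level in effect for the cluster
awk '/^minReleaseLevel/ {print $2}' /tmp/mmlsconfig.out
```

The same one-liner works against live output, which is convenient when checking several clusters in a row.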
Overview

This document assumes that you have already purchased your GPFS product, have the Linux rpms or AIX packages available, and are familiar with the GPFS documentation. These instructions are based on GPFS 3.3, GPFS 3.4, and GPFS 3.5. If you are using a different version of GPFS, you may need to make adjustments to the information provided here.
Before proceeding with these instructions, you should have the following already completed for your xCAT cluster:

- Your xCAT management node is fully installed and configured.
- If you are using xCAT hierarchy, your service nodes are installed and running.
- Your compute nodes are defined to xCAT, and you have verified your hardware control capabilities, gathered MAC addresses, and done all the other necessary preparations for a stateful (full-disk) install.
- You have a test node installed with the base OS and xCAT postscripts, to verify that the basic network configuration and installation process are correct.

Linux

To set up GPFS in a stateful cluster, follow these steps.

Install/Build GPFS. Copy the GPFS rpms from your distribution media onto the xCAT management node (MN), following the instructions you received and accepting the product licenses as required.
Put the rpms in a suggested target location on the xCAT MN. To build the GPFS portability layer, follow the instructions in /usr/lpp/mmfs/src/README. NOTE: This requires that the kernel source rpms are installed on your xCAT management node. For example, for SLES11, make sure the kernel-source and kernel-ppc64-devel rpms are installed. For rhels6, make sure the cpp.ppc64, gcc.ppc64, gcc-c++.ppc64, kernel-devel.ppc64, and rpm-build.ppc64 rpms are installed; if not, run 'yum install cpp.ppc64 gcc.ppc64 gcc-c++.ppc64 kernel-devel.ppc64 rpm-build.ppc64' on the rhels6 xCAT MN.
If the kernel of your compute nodes will be different from that of your xCAT management node, you will first need to install one node with all of the GPFS and kernel source rpms, follow these instructions to build the GPFS portability layer rpm there, copy that rpm back to your xCAT management node, and then continue with the rest of these procedures to add the rpm to your image and install/configure GPFS. Install the new rpm on your xCAT management node and copy it to your otherpkgs directory in preparation for installing it into your diskless images:

cd /root/rpmbuild/RPMS/ppc64/
rpm -Uvh gpfs.gplbin.rpm
cp gpfs.gplbin.rpm /install/post/otherpkgs/rhels6/ppc64/gpfs/
createrepo /install/post/otherpkgs/rhels6/ppc64/gpfs/

Note: If the createrepo command is not found, you may need to install the createrepo rpm package that is shipped with your OS distribution.

Add GPFS to your stateful image definition

Include GPFS in your stateful image definition:
Install the optional xCAT-IBMhpc rpm on your xCAT management node and service nodes.
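For reference, the portability layer build itself is run on the node that has the GPFS and kernel source rpms installed, following the steps in /usr/lpp/mmfs/src/README. A hedged sketch of the usual sequence; the exact targets vary by GPFS level, so check the README shipped with your version:

```
cd /usr/lpp/mmfs/src
make Autoconfig        # probe the kernel and generate the build configuration
make World             # compile the GPFS portability layer modules
make InstallImages     # install the modules on the build node
# On GPFS 3.5, 'make rpm' can package the result as a gpfs.gplbin rpm
# suitable for copying into the otherpkgs directory as shown above.
```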
This rpm is available with xCAT and should already exist in the zypper or yum repository that you used to install xCAT on your management node. A new copy can be downloaded from:

To install the rpm in SLES:

#INCLUDE:/opt/xcat/share/xcat/IBMhpc/IBMhpc.rhels6.ppc64.pkglist#

Verify that the above sample pkglist contains the correct packages. If you need to make changes, you can copy the contents of the file into your .pkglist and edit as you wish instead of using the #INCLUDE entry. Note: This pkglist support is available with xCAT 2.5 and newer releases.
If you are using an older release of xCAT, you will need to add the entries listed in these pkglist files to your Kickstart or AutoYaST install template file.

Add to otherpkgs: Edit your /install/custom/install//.otherpkgs.pkglist and add:
#INCLUDE:/opt/xcat/share/xcat/IBMhpc/gpfs/gpfs.otherpkgs.pkglist#

Verify that the above sample pkglist contains the correct gpfs packages. If you need to make changes, you can copy the contents of the file into your .otherpkgs.pkglist and edit as you wish instead of using the #INCLUDE entry. These packages will be installed on the node after the first reboot by the xCAT postbootscript otherpkgs. You can find more information on the xCAT otherpkgs package list files and their use in the xCAT documentation. You should create repodata in your /install/post/otherpkgs///. directory so that yum or zypper can be used to install these packages and automatically resolve dependencies for you.
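For orientation, an otherpkgs pkglist names the rpms relative to the otherpkgs directory, one per line. A hypothetical example for GPFS; the gpfs/ subdirectory and the exact fileset list are illustrative and must match the rpms you actually copied in:

```
gpfs/gpfs.base
gpfs/gpfs.docs
gpfs/gpfs.gpl
gpfs/gpfs.msg.en_US
gpfs/gpfs.gplbin
```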
Review and edit this script to meet your needs. This script will run on the node after the OS has been installed, the node has rebooted for the first time, and the xCAT default postbootscripts have run. NOTE: If you are installing a GPFS I/O node, you MUST make a local copy of the gpfsupdates script and comment out the lines that create a non-functional nsddevices script. You need a working nsddevices script on your I/O server so that GPFS can find the disks it needs to build your filesystems.
Add this script to the postbootscripts list for your node. For example, if all nodes in your compute nodegroup will be using this script and the nodes' postbootscripts attribute value is otherpkgs:

chdef -t group -o compute -p postbootscripts=gpfsupdates

Instructions for adding GPFS software to existing xCAT nodes

If your nodes are already installed with the correct OS, and you are adding GPFS software to the existing nodes, continue with these instructions and skip the next step, 'Network boot the nodes'. The updatenode command will be used to add the GPFS software and run the postscripts using the pkglist and otherpkgs.pkglist files created above. Note that support was added to updatenode in xCAT 2.5 to install packages listed in pkglist files (previously, only otherpkgs.pkglist entries were installed). If you are running an older version of xCAT, you may need to add the pkglist entries to your otherpkgs.pkglist file or install those packages in some other way on your existing nodes.
You will want updatenode to run zypper or yum to install all of the packages, so make sure the nodes' repositories have access to the base OS rpms:

xdsh compute 'yum repolist' | xcoll

If you installed these nodes with xCAT, you probably still have repositories set pointing to your distro directories on the xCAT MN or SNs. If there is no OS repository listed, add appropriate remote repositories using the zypper ar command or by adding entries to /etc/yum.repos.d. Also, for updatenode to use zypper or yum to install packages from your /install/post/otherpkgs directories, make sure you have run the createrepo command for each of your otherpkgs directories (see the instructions earlier in this document). Update the software on your nodes:
updatenode -P

Network boot the nodes

Network boot your nodes:
Run 'nodeset install' for all your nodes.
Run rnetboot to boot and install your nodes. When the nodes are up, verify that the GPFS rpms are all correctly installed.

GPFS installation documentation advises having all your nodes running and installed with the GPFS rpms before creating your GPFS cluster. However, with very large clusters, you may choose to only have your main GPFS infrastructure nodes up and running, create your cluster, and then add your compute nodes later. If so, only install and boot those nodes that are critical to configuring your GPFS cluster and bringing your GPFS filesystems online. You can network boot the compute nodes later and add them to your GPFS configuration using the mmaddnode command.

AIX

As stated at the beginning of this page, these instructions assume that you have already created a stateful image with a base AIX operating system and tested a network installation of that image to at least one compute node.
This will ensure you understand all of the processes: networks are correctly defined, NIM operates well, NFS is correct, xCAT postscripts run, and you can xdsh to the node with proper ssh authorizations. For detailed instructions, see the xCAT document for deploying AIX nodes.

Add GPFS to your stateful image

Include GPFS in your image:
Install the optional xCAT-IBMhpc rpm on your xCAT management node. This rpm is available with xCAT and should already exist in the directory that you downloaded your xCAT rpms to. It did not get installed when you ran the instxcat script.
A new copy can be downloaded from:

To install the rpm, cd to the directory containing it and run:

rpm -Uvh xCAT-IBMhpc.rpm

TEAL GPFS Connector Feature (optional)

If you want to use the optional TEAL GPFS connector feature, install the teal.base and teal.gpfs installp packages onto your xCAT management node; refer to the TEAL documentation for details.
If you have a hierarchical cluster with service nodes and want to use the optional TEAL GPFS connector feature, install the teal.gpfs-sn installp package on the GPFS collector service node (and backup GPFS collector service nodes if possible) following the instructions below. Note: (optional) If you want to use the optional TEAL GPFS connector feature, copy the teal.gpfs-sn package to /install/post/otherpkgs/aix/ppc64/gpfs on your xCAT management node. The teal.gpfs-sn package is shipped with the TEAL product.

The packages that will be installed by the xCAT HPC Integration support are listed in sample bundle files. Review the following file to verify you have all the product packages you wish to install (instructions are provided below for copying and editing this file if you choose to use a different list of packages).

inutoc /install/post/otherpkgs/aix/ppc64/gpfs
nim -o update -a packages=all -a source=/install/post/otherpkgs/aix/ppc64/gpfs

Add additional base AIX packages to your lppsource: Some of the HPC products require additional AIX packages that may not be part of your default AIX lppsource.
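As a point of reference, a NIM installp bundle file lists the filesets to install, one per line with an I: prefix. A hypothetical example for GPFS on AIX; the fileset names follow the GPFS for AIX media, so verify them against your level and the sample bundle files shipped with xCAT-IBMhpc:

```
I:gpfs.base
I:gpfs.docs.data
I:gpfs.msg.en_US
```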
Review the following file to verify all the AIX packages needed by the HPC products are included in your lppsource (instructions are provided below for copying and editing this file if you choose to use a different list of packages).

Review these sample scripts carefully and make any changes required for your cluster. Note that some of these scripts may change tuning values and other system settings. The scripts will be run on the node after it has booted as part of the xCAT diskless node postscript processing.

Instructions for adding GPFS software to existing xCAT nodes

If your nodes are already installed with the correct OS, and you are adding GPFS software to the existing nodes, continue with these instructions and skip the next step, 'Network boot the nodes'. The updatenode command will be used to add the GPFS software and run the postscripts. To have updatenode install both the OS prereqs and the base GPFS packages, complete the previous instructions to add GPFS software to your image.
Update the software on your nodes:

updatenode -P

Network boot the nodes

Follow the instructions in the xCAT AIX documentation to network boot your nodes:
If the nodes are not already defined to NIM, run xcat2nim for all your nodes. Run nimnodeset for your nodes.
Run rnetboot to boot your nodes. When the nodes are up, verify that GPFS is correctly installed. GPFS installation documentation advises having all your nodes running and installed with the GPFS lpps before creating your GPFS cluster.
However, with very large clusters, you may choose to only have your main GPFS infrastructure nodes up and running, create your cluster, and then add your compute nodes later. If so, only install and boot those nodes that are critical to configuring your GPFS cluster and bringing your GPFS filesystems online. You can network boot the compute nodes later and add them to your GPFS configuration using the mmaddnode command. Build your GPFS cluster Follow the GPFS documentation to create your GPFS cluster, define manager nodes, quorum nodes, accept licenses, create NSD disk definitions, and create your filesystems.
Once you have verified that your GPFS cluster is operational and that the GPFS filesystems are available to the currently active nodes, you can add your remaining compute nodes to the GPFS cluster. The mmaddnode command will accept a file containing a list of node names as input. xCAT can help you create this file: simply run the xCAT nodels command against the desired noderange and redirect the output to a file.
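The create-a-node-file step can be sketched end to end. Since nodels needs a live xCAT database, its output is simulated here with printf, and the node names are illustrative:

```shell
# Simulated 'nodels compute' output; on a real cluster:
#   nodels compute > /tmp/gpfsnodes
printf '%s\n' node01 node02 node03 > /tmp/gpfsnodes

# Sanity-check the file before handing it to: mmaddnode -N /tmp/gpfsnodes
wc -l < /tmp/gpfsnodes
cat /tmp/gpfsnodes
```

Checking the line count against the expected node count catches an empty or truncated noderange before mmaddnode acts on it.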
nodels compute > /tmp/gpfsnodes
mmaddnode -N /tmp/gpfsnodes

Starting GPFS on cluster nodes

There are several ways you can start GPFS on your cluster nodes.

Manually start GPFS: Use the xCAT xdsh command to run the GPFS mmstartup command individually on all nodes, or use GPFS to distribute the commands by running 'mmstartup -a'. Note that for very large clusters, running an mmstartup command to all nodes in the cluster at one time can cause a heavy load on your network. Therefore, using xdsh with appropriate fanout values may be a better choice.

GPFS autoload option: The mmchconfig command allows you to set a cluster-wide option to automatically start GPFS anytime a node is booted:

mmchconfig autoload=yes

The default setting for this option is 'autoload=no'.
When you are first setting up GPFS across your cluster, you will probably choose NOT to turn this on until after you have initially installed all your nodes and done some cluster-wide verification.

TEAL GPFS Connector Feature (optional)

If you want to use the optional TEAL GPFS connector feature, then after the GPFS cluster is correctly configured, verify that the teal-base and teal-gpfs rpms for Linux, or the teal.base and teal.gpfs installps for AIX, are correctly installed on your xCAT management node, and that the teal-gpfs-sn rpm for Linux, or the teal.gpfs-sn installp for AIX, is correctly installed on your GPFS collector service node (and backup GPFS collector service nodes if possible). Then, you can specify the selected service node as your TEAL GPFS collector node. Run the following command on the xCAT management node.