
Vmware Workstation + Suse Linux 11 SP3 + db2 purescale V10.5 (Part 5)

    This series documents, in detail, the installation of DB2 pureScale 10.5 on VMware Workstation. If you run into problems while following these posts, feel free to add me on WeChat (84077708) and I will do my best to help.


    In the previous post, Vmware Workstation + Suse Linux 11 SP3 + db2 purescale V10.5 (Part 4), most of the operating-system-level configuration was already completed. This post continues with the three remaining important configuration tasks:

    1. iSCSI server configuration

    2. iSCSI client configuration

    3. Passwordless SSH login configuration

    

1. iSCSI Server Configuration (node01)

    As already explained while creating the virtual machines and installing their operating systems, node01 will act as the iSCSI server, and both node01 and node02 will act as iSCSI clients. The detailed configuration is given below.
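
    Before editing any files, it is worth confirming that the required packages are present on the nodes. The sketch below assumes the standard SLES 11 SP3 repositories, where the iSCSI Enterprise Target is packaged as iscsitarget and the initiator tools as open-iscsi; adjust the package names if your media differs.

node01:~ # rpm -qa | grep -i iscsi      # check what is already installed
node01:~ # zypper install iscsitarget   # target side: provides ietd and /etc/ietd.conf
node01:~ # zypper install open-iscsi    # initiator tools: iscsiadm, iscsid
node02:~ # zypper install open-iscsi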

    The iSCSI server configuration file is /etc/ietd.conf (on the node01 virtual machine). Append the following to the end of the file:


Target iqn.2012-06.com.ibm:pureScaleDisk01

Lun 0 Path=/dev/sda3,Type=fileio,ScsiId=3456789012,ScsiSN=456789012

Target iqn.2012-06.com.ibm:pureScaleDisk02

Lun 1 Path=/dev/sda4,Type=fileio,ScsiId=1234567890,ScsiSN=345678901
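
    After saving /etc/ietd.conf, restart the target service and check that both LUNs are actually exported. This is only a quick sanity check; it assumes the standard iscsitarget init script and the IET /proc interface that SLES 11 provides.

node01:~ # /etc/init.d/iscsitarget restart
node01:~ # cat /proc/net/iet/volume     # pureScaleDisk01 and pureScaleDisk02 should each show their LUN
node01:~ # cat /proc/net/iet/session    # lists connected initiators once the clients log in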



2. iSCSI Client Configuration (node01, node02)

The client side is configured through an init script, /etc/init.d/iscsiclient (this file does not exist by default and must be created manually). Its content is as follows:

#! /bin/sh


### BEGIN INIT INFO

# Provides: iscsiclsetup

# Required-Start: $network $syslog $remote_fs smartd

# Required-Stop:

# Default-Start: 3 5

# Default-Stop: 0 1 2 6

# Description: ISCSI client setup

### END INIT INFO


IPLIST="192.168.142.101"


# Shell functions sourced from /etc/rc.status:

#      rc_check         check and set local and overall rc status

#      rc_status        check and set local and overall rc status

#      rc_status -v     ditto but be verbose in local rc status

#      rc_status -v -r  ditto and clear the local rc status

#      rc_failed        set local and overall rc status to failed

#      rc_reset         clear local rc status (overall remains)

#      rc_exit          exit appropriate to overall rc status

. /etc/rc.status



# catch mis-use right here at the start

if [  "$1" != "start"  -a  "$1" != "stop"  -a  "$1" != "status" -a "$1" != "restart" -a "$1" != "rescan" -a "$1" != "mountall" ]; then

    echo "Usage: $0 {start|stop|status|restart|rescan|mountall}"

    exit 1

fi


# First reset status of this service

rc_reset
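
# The helper functions below mount, unmount and check the GPFS file systems
# listed in /etc/fstab, using the GPFS commands mmmount/mmumount from /usr/lpp/mmfs/bin.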


iscsimount() {

        rc_reset

        echo -n "Mounting $1: "

        /usr/lpp/mmfs/bin/mmmount $1

        rc_status -v

        return $?

}

iscsiumount() {

        rc_reset

        echo -n "Umounting $1: "

        /usr/lpp/mmfs/bin/mmumount $1

        rc_status -v

        return $?

}


iscsicheck() {

        rc_reset

        echo -n "Verify if $1 is mounted: "

        mount | grep "on $1\b" > /dev/null

        rc_status -v

        return $?

}


iscsimountall() {

        # Find all fstab lines with gpfs as fstype

        for mountpoint in `grep "gpfs" /etc/fstab | awk '{print $2}'`

        do

           # Only try to mount filesystems that are not currently mounted

           if ! mount | grep "on $mountpoint\b" > /dev/null

           then

              iscsimount $mountpoint || overallstatus=$?

           fi

        done

        return $overallstatus

}


iscsiumountall() {

        # Find all fstab lines with gpfs as fstype

        for mountpoint in `grep "gpfs" /etc/fstab | awk '{print $2}'`

        do

           # Only try to umount filesystems that are currently mounted

           if mount | grep "on $mountpoint\b" > /dev/null

           then

              iscsiumount $mountpoint || overallstatus=$?

           fi

        done

        return $overallstatus

}

iscsicheckall() {

        # Find all fstab lines with gpfs as fstype

        for mountpoint in `grep "gpfs" /etc/fstab | awk '{print $2}'`

        do

           iscsicheck $mountpoint || overallstatus=$?

        done

        return $overallstatus

}

case "$1" in

  start)

        modprobe -q iscsi_tcp

        iscsid

        for IP in $IPLIST

        do

           ping -q $IP -c 1 -W 1 > /dev/null

           RETURN_ON_PING=$?

           if [ ${RETURN_ON_PING} == 0 ]; then

                ISCSI_VALUES=`iscsiadm -m discovery -t st -p $IP \

                           | awk '{print $2}' | uniq`

                if [ "${ISCSI_VALUES}" != "" ] ; then

                   for target in $ISCSI_VALUES

                   do

                      echo "Logging into $target on $IP"

                      iscsiadm --mode node --targetname $target \

                          --portal $IP:3260 --login

                   done

                else

                   echo "No iscsitarget were discovered"

                fi

           else

               echo "iscsitarget is not available"

           fi

        done

        if [ ${RETURN_ON_PING} == 0 ]; then

           if [ "${ISCSI_VALUES}" != "" ] ; then

              /usr/lpp/mmfs/bin/mmstartup -a &> /dev/null

              iscsimountall

           fi

        fi

        ;;

  stop)        

     for IP in $IPLIST

        do

           ping -q $IP -c 1 -W 1 > /dev/null

           RETURN_ON_PING=$?

           if [ ${RETURN_ON_PING} == 0 ]; then

                ISCSI_VALUES=`iscsiadm -m discovery -t st --portal $IP \

                      | awk '{print $2}' | uniq`

                if [ "${ISCSI_VALUES}" != "" ] ; then

                   for target in $ISCSI_VALUES

                   do

                      echo "Logging out for $target from $IP"

                      iscsiadm -m node --targetname $target \

                         --portal $IP:3260 --logout

                   done

                else

                   echo "No iscsitarget were discovered"

                fi

           fi

        done

        if [ ${RETURN_ON_PING} == 0 ]; then

           if [ "${ISCSI_VALUES}" != "" ] ; then

              iscsiumountall

           fi

        fi

        ;;

  status)

        echo "Running sessions"

        iscsiadm -m session -P 1

        iscsicheckall

        rc_status -v

        ;;


  rescan)

        echo "Perform a SCSI rescan on a session"

        iscsiadm -m session -r 1 --rescan

        rc_status -v

        ;;

  

  mountall)

        iscsimountall

        rc_status -v

        ;;


  restart)

    ## Stop the service and regardless of whether it was

    ## running or not, start it again.

    $0 stop

    $0 start

    ;;

  *)

    echo "Usage: $0 {start|stop|status|restart|rescan|mountall}"

    exit 1

esac


rc_status -r


rc_exit
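
Save the script on both node01 and node02 and make it executable, otherwise it cannot be run or registered with chkconfig:

node01:~ # chmod 755 /etc/init.d/iscsiclient
node02:~ # chmod 755 /etc/init.d/iscsiclient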


At this point the iSCSI server and iSCSI client are configured. Next, run the following commands so that the iscsitarget and iscsiclient services start automatically at boot:

node01:/etc/init.d # chkconfig -a iscsitarget

iscsitarget               0:off  1:off  2:off  3:on   4:off  5:on   6:off

node01:/etc/init.d # chkconfig -a iscsiclient

iscsiclient               0:off  1:off  2:off  3:on   4:off  5:on   6:off


node02:~ # chkconfig -a iscsiclient

iscsiclient               0:off  1:off  2:off  3:on   4:off  5:on   6:off
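
With the services registered, start the client script once on each node and confirm that both LUNs are visible. This is a quick sanity check; at this stage the GPFS calls inside the script find nothing to mount, which is expected, since the GPFS cluster and file systems are only created later during the pureScale installation.

node01:~ # /etc/init.d/iscsiclient start
node01:~ # iscsiadm -m session          # one session per target: pureScaleDisk01 and pureScaleDisk02
node01:~ # fdisk -l                     # the two exported partitions appear as new SCSI disks
node02:~ # /etc/init.d/iscsiclient start
node02:~ # iscsiadm -m session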


3. Passwordless SSH Login Configuration

Passwordless login for the root user:

node01:~ # ssh-keygen -t dsa

node01:~ # ssh-keygen -t rsa


node02:~ # ssh-keygen -t dsa

node02:~ # ssh-keygen -t rsa


node01:~ # cd .ssh 

node01:~/.ssh # cat id_dsa.pub >> authorized_keys

node01:~/.ssh # cat id_rsa.pub >> authorized_keys

node01:~/.ssh # scp authorized_keys  root@node02:~/.ssh/


node02:~ # cd .ssh

node02:~/.ssh # cat id_dsa.pub >> authorized_keys

node02:~/.ssh # cat id_rsa.pub >> authorized_keys

node02:~/.ssh # scp authorized_keys  root@node01:~/.ssh/
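
If sshd is running with StrictModes (the default), the .ssh directory and authorized_keys file must not be group- or world-writable, otherwise key authentication silently falls back to password prompts. Setting the permissions explicitly on both nodes is a cheap precaution; the same applies to the db2sdin1 user below.

node01:~ # chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys
node02:~ # chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys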


node01:~ # ssh node01 date

node01:~ # ssh node02 date

node01:~ # ssh node01.purescale.ibm.local date

node01:~ # ssh node02.purescale.ibm.local date


node02:~ # ssh node01 date

node02:~ # ssh node02 date

node02:~ # ssh node01.purescale.ibm.local date

node02:~ # ssh node02.purescale.ibm.local date
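
The first connection to each host name will ask to confirm the host key; answer yes once for every combination above so that ~/.ssh/known_hosts is populated on both nodes. Alternatively, the prompts can be avoided by pre-loading the host keys with ssh-keyscan, an optional shortcut:

node01:~ # ssh-keyscan node01 node02 node01.purescale.ibm.local node02.purescale.ibm.local >> ~/.ssh/known_hosts
node02:~ # ssh-keyscan node01 node02 node01.purescale.ibm.local node02.purescale.ibm.local >> ~/.ssh/known_hosts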

  

Passwordless login for the db2sdin1 user:

db2sdin1@node01:~> ssh-keygen -t dsa

db2sdin1@node01:~> ssh-keygen -t rsa


db2sdin1@node02:~> ssh-keygen -t dsa

db2sdin1@node02:~> ssh-keygen -t rsa


db2sdin1@node01:~> cd .ssh 

db2sdin1@node01:~/.ssh> cat id_dsa.pub >> authorized_keys

db2sdin1@node01:~/.ssh> cat id_rsa.pub >> authorized_keys

db2sdin1@node01:~/.ssh> scp authorized_keys  db2sdin1@node02:~/.ssh/


db2sdin1@node02:~> cd .ssh

db2sdin1@node02:~/.ssh> cat id_dsa.pub >> authorized_keys

db2sdin1@node02:~/.ssh> cat id_rsa.pub >> authorized_keys

db2sdin1@node02:~/.ssh> scp authorized_keys  db2sdin1@node01:~/.ssh/


db2sdin1@node01:~> ssh node01 date

db2sdin1@node01:~> ssh node02 date

db2sdin1@node01:~> ssh node01.purescale.ibm.local date

db2sdin1@node01:~> ssh node02.purescale.ibm.local date


db2sdin1@node02:~> ssh node01 date

db2sdin1@node02:~> ssh node02 date

db2sdin1@node02:~> ssh node01.purescale.ibm.local date

db2sdin1@node02:~> ssh node02.purescale.ibm.local date
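
To confirm that every connection is genuinely password- and prompt-free, which the pureScale installer expects for both root and the instance owner, the same checks can be repeated non-interactively; a minimal sketch using ssh's BatchMode option:

db2sdin1@node01:~> for h in node01 node02 node01.purescale.ibm.local node02.purescale.ibm.local; do ssh -o BatchMode=yes $h date; done
db2sdin1@node02:~> for h in node01 node02 node01.purescale.ibm.local node02.purescale.ibm.local; do ssh -o BatchMode=yes $h date; done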

     
At this point, the configuration of the Suse Linux 11 SP3 operating system is essentially complete. However, DB2 pureScale cannot be installed just yet: a few finer details still need to be configured before installation, otherwise the installation will fail. For those details, see Vmware Workstation + Suse Linux 11 SP3 + db2 purescale V10.5 (Part 6).


This article is from the "涛哥的回忆" blog. Please do not repost.
