
ipldevice is missing


The /dev/ipldevice file is critical when installing certain products. It should be a hard link to your boot device's character (raw) device file.

You may run into ipldevice-related problems during an OS installation or update, or while taking a mksysb backup.

During OS update

==============

I received the error message below when trying to update from TL 6100-05-01-1016 to 6100-07-03-1207:

installp:  bosboot verification starting…
installp:  An error occurred during bosboot verification processing.
Checking installation of Level 6100-05-01-1016
restarting update. Current Level 6100-05-01-1016
 

 

When I checked for ipldevice under /dev, it was not found.

To solve this problem, I first listed the LUNs in rootvg:
 

# lsvg -p rootvg

rootvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk25           active            1023        742         204..72..57..204..205
hdisk26           active            1023        870         204..200..57..204..205

 

 

To find which LUN holds the boot logical volume hd5:

 

# lslv -m hd5
hd5:N/A

LP    PP1  PV1               PP2  PV2               PP3  PV3
0001  0001 hdisk25           0001 hdisk26

 

Then I went to /dev and searched for rhdisk25 and rhdisk26:

 

# ls -l | egrep "rhdisk25|rhdisk26"

crw——-    2 root     system       17, 26 Jun 17 15:29 rhdisk25
crw——-    1 root     system       17, 25 Jul 21 08:52 rhdisk26

Next, I checked the major and minor numbers:

 

# ls -l | grep "17, 26"

brw——-    1 root     system       17, 26 Jul 21 09:03 hdisk25
crw——-    2 root     system       17, 26 Jun 17 15:29 rhdisk25

 

# ls -l | grep "17, 25"

brw——-    1 root     system       17, 25 Jul 21 09:03 hdisk26
crw——-    1 root     system       17, 25 Jul 21 08:52 rhdisk26

 

ipldevice was not among them, so I created the hard link from the boot disk's character device file.

In our case this is rhdisk25:

 

ln /dev/rhdisk25 /dev/ipldevice

# cd /dev

# ls -l | grep "17, 26"

brw——-    1 root     system       17, 26 Jul 21 09:03 hdisk25
crw——-    2 root     system       17, 26 Jun 17 15:29 ipldevice
crw——-    2 root     system       17, 26 Jun 17 15:29 rhdisk25

 

If ipldevice already exists, force the hard link creation:

 

ln -f /dev/rhdisk25 /dev/ipldevice
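
The recovery can be scripted. Below is a minimal sketch that derives the boot disk from the first copy of hd5 and recreates the link; verify the disk name before running it on a production box (the awk field positions assume the lslv output format shown above).

#!/bin/ksh
# Recreate /dev/ipldevice from the first physical copy of hd5.
BOOTDISK=$(lslv -m hd5 | awk 'NR==3 {print $3}')    # e.g. hdisk25
if [ -z "$BOOTDISK" ]; then
    echo "could not determine boot disk" >&2
    exit 1
fi
# ipldevice must be a hard link to the raw (character) device file.
ln -f "/dev/r${BOOTDISK}" /dev/ipldevice
ls -l "/dev/r${BOOTDISK}" /dev/ipldevice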

 

During mksysb backup

====================

If you hit the missing-ipldevice problem while taking a mksysb backup, follow the same procedure shown above, then recreate the boot image and check the boot list:

 

bosboot -ad /dev/ipldevice

bootlist -om normal

 
The ipldevice problem is now rectified and you are good to go for the mksysb backup.



vg information not found in hacmp ( 0516-034 : Failed to open VG special file )


VG information not found in HACMP: the error message is "0516-034 : Failed to open VG special file".

We faced this problem while reading volume group (VG) information in HACMP. Our SAN team was doing some maintenance, and one LUN path of a mirrored VG had been disabled. The HACMP cluster's resource group showed as online, yet the VG information could not be read with lsvg vgname. The file systems of that volume group were still held open by processes, and our application team was unable to connect to the application.

We asked the SAP application team to bring the application down, but they were unable to do so, so we performed the steps below to resolve the problem.

First, check that the VG is online:

1:root@node1:/usr/es/sbin/cluster/utilities # lsvg -o|grep sapHP1data2vg
sapHP1data2vg
1:root@node1:/usr/es/sbin/cluster/utilities #

Trying to read the VG information fails:

1:root@node1:/usr/es/sbin/cluster/sbin # lsvg sapHP1data2vg

"0516-034 : Failed to open VG special file". Probable cause is the VG was
forced offline. Execute the varyoffvg and varyonvg commands to bring the VG online.
1:root@node1:/usr/es/sbin/cluster/sbin #

The VG has two LUNs: hdisk56, which is working fine, and hdisk57, which belongs to the storage currently under maintenance.

1:root@node1:/usr/es/sbin/cluster/sbin # lspv|grep sapHP1data2vg
hdisk56 00c1905a2dc49d75 sapHP1data2vg
hdisk57 00c1905a2dc491a8 sapHP1data2vg
1:root@node1:/usr/es/sbin/cluster/sbin #

To identify which LUN is healthy, read the PV header with lquerypv:

1:root@node1:/usr/es/sbin/cluster/utilities # lquerypv -h /dev/hdisk56

00000000 C9C2D4C1 00000000 00000000 00000000 |…………….|
00000010 00000000 00000000 00000000 00000000 |…………….|
00000020 00000000 00000000 00000000 00000000 |…………….|
00000030 00000000 00000000 00000000 00000000 |…………….|
00000040 00000000 00000000 00000000 00000000 |…………….|
00000050 00000000 00000000 00000000 00000000 |…………….|
00000060 00000000 00000000 00000000 00000000 |…………….|
00000070 00000000 00000000 00000000 00000000 |…………….|
00000080 00C1905A 2DC49D75 00000000 00000000 |…Z-..u……..|
00000090 00000000 00000000 00000000 00000000 |…………….|
000000A0 00000000 00000000 00000000 00000000 |…………….|
000000B0 00000000 00000000 00000000 00000000 |…………….|
000000C0 00000000 00000000 00000000 00000000 |…………….|
000000D0 00000000 00000000 00000000 00000000 |…………….|
000000E0 00000000 00000000 00000000 00000000 |…………….|
000000F0 00000000 00000000 00000000 00000000 |…………….|

1:root@node1:/usr/es/sbin/cluster/utilities # lquerypv -h /dev/hdisk57

1:root@node1:/usr/es/sbin/cluster/utilities #

lquerypv returns no output for hdisk57, which points to a problem with that PV.
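
To check all PVs of a VG in one pass, the same test can be scripted. A small sketch (the lspv column layout is assumed to match the outputs shown in this article):

#!/bin/ksh
# Flag PVs of a VG whose on-disk headers cannot be read.
VG=sapHP1data2vg
lspv | awk -v vg="$VG" '$3 == vg {print $1}' | while read PV
do
    OUT=$(lquerypv -h "/dev/$PV" 2>/dev/null)
    if [ -n "$OUT" ]; then
        echo "$PV: header readable"
    else
        echo "$PV: header NOT readable - check paths/storage"
    fi
done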

The readvgda command reads the VG descriptor area directly from a PV and can list the LVs of the volume group:

1:root@node1:/usr/es/sbin/cluster/sbin # readvgda -s hdisk56|grep lvname
lvname:         lvHP128
lvname:         loglv01
1:root@node1:/usr/es/sbin/cluster/sbin #

Check whether the file systems of the faulty VG are currently mounted:

1:root@node1:/usr/es/sbin/cluster/sbin # mount|grep lvHP128
/dev/lvHP128     /usr/sap/HP1/D06 jfs2   Apr 22 17:48 rw,log=/dev/loglv01

1:root@node1:/usr/es/sbin/cluster/sbin # mount|grep loglv01
/dev/lvHP128     /usr/sap/HP1/D06 jfs2   Apr 22 17:48 rw,log=/dev/loglv01

Reading the LV information returns question marks for most fields:

1:root@node1:/usr/es/sbin/cluster/sbin # lslv lvHP128
LOGICAL VOLUME:     lvHP128                VOLUME GROUP:   sapHP1data2vg
LV IDENTIFIER:      00c1905a00004c000000013137bfe199.1 PERMISSION:     ?
VG STATE:           active/complete        LV STATE:       ?
TYPE:               jfs2                   WRITE VERIFY:   ?
MAX LPs:            ?                      PP SIZE:        ?
COPIES:             ?                      SCHED POLICY:   ?
LPs:                ?                      PPs:            ?
STALE PPs:          ?                      BB POLICY:      ?
INTER-POLICY:       minimum                RELOCATABLE:    yes
INTRA-POLICY:       middle                 UPPER BOUND:    2
MOUNT POINT:        /usr/sap/HP1/D06       LABEL:          /usr/sap/HP1/D06
MIRROR WRITE CONSISTENCY: ?
EACH LP COPY ON A SEPARATE PV ?: yes
Serialize IO ?:     ?

lslv: open(): There is a request to a device or address that does not exist.

DEVICESUBTYPE : DS_LVZ

Identify which processes are using the file system:

1:root@node1:/usr/es/sbin/cluster/sbin # fuser -cux /usr/sap/HP1/D06

/usr/sap/HP1/D06:   545014c(hp1adm)  589824c(hp1adm)  606232c(hp1adm)  618712c(hp1adm)  643136c(hp1adm)  659560c(hp1adm)  671938c(hp1adm)  700644c(hp1adm)  708684c(hp1adm)  717052c(hp1adm)  721118c(hp1adm)  786506c(hp1adm)  794656c(hp1adm)  798902c(hp1adm)  819374c(hp1adm)  831574c(hp1adm)  852046c(hp1adm)  864462c(hp1adm)  868402c(hp1adm)  876710c(hp1adm)  880874c(hp1adm)  893170c(hp1adm)  897100c(hp1adm)  901248c(hp1adm)  905434c(hp1adm)  913544c(hp1adm)  921614c(h


Use the following SMIT path to bring the resource group offline (and, later, back online):

smit cl_admin

HACMP Resource Group and Application Management
Bring a Resource Group Online
Bring a Resource Group Offline

Then vary off the volume group with varyoffvg vgname and export it with exportvg vgname. Next we tried a learning import:

3:root@node1:/root # importvg -L sapHP1data2vg hdisk56
0516-306 getlvodm: Unable to find volume group sapHP1data2vg in the Device
Configuration Database.
0516-306 redefinevg: Unable to find volume group sapHP1data2vg in the Device
Configuration Database.
0516-780 importvg: Unable to import volume group from hdisk56.
3:root@node1:/root # importvg -cL sapHP1data2vg hdisk56
0516-306 getlvodm: Unable to find volume group sapHP1data2vg in the Device
Configuration Database.
0516-306 redefinevg: Unable to find volume group sapHP1data2vg in the Device
Configuration Database.
0516-780 importvg: Unable to import volume group from hdisk56.

Reading the LUN configuration:

3:root@node1:/root # lscfg -vl hdisk56

hdisk56          U9117.MMA.651902A-V6-C20-T1-W5005076308080613-L4011401000000000  IBM MPIO FC 2107
Manufacturer…………….IBM
Machine Type and Model……2107900
Serial Number……………75LZ7711110
EC Level…………………253
Device Specific.(Z0)……..10
Device Specific.(Z1)……..0100
Device Specific.(Z2)……..075
Device Specific.(Z3)……..07109
Device Specific.(Z4)……..08
Device Specific.(Z5)……..00
3:root@node1:/root #

Since this is an enhanced concurrent VG, we use the -c option:

3:root@node1:/root # importvg -c -L sapHP1data2vg hdisk56

0516-306 getlvodm: Unable to find volume group sapHP1data2vg in the Device
Configuration Database.
0516-306 redefinevg: Unable to find volume group sapHP1data2vg in the Device
Configuration Database.
0516-780 importvg: Unable to import volume group from hdisk56.

3:root@node1:/root # importvg -c -y sapHP1data2vg hdisk56

0516-052 varyonvg: Volume group cannot be varied on without a
quorum. More physical volumes in the group must be active.
Run diagnostics on inactive PVs.
0516-780 importvg: Unable to import volume group from hdisk56.

3:root@node1:/root # lsvg sapHP1data2vg
0516-306 : Unable to find volume group sapHP1data2vg in the Device
Configuration Database.
3:root@node1:/root # varyonvg -c sapHP1data2vg
0516-008 varyonvg: LVM system call returned an unknown
error code (3).

3:root@node1:/root # lsvg|grep sapHP1data2vg

3:root@node1:/root # lspv|grep -w hdisk56
hdisk56         00c1905a2dc49d75                    None

3:root@node1:/root # importvg -c -y sapHP1data2vg hdisk56
0516-052 varyonvg: Volume group cannot be varied on without a
quorum. More physical volumes in the group must be active.
Run diagnostics on inactive PVs.
0516-780 importvg: Unable to import volume group from hdisk56.

3:root@node1:/root # importvg -c -y sapHP1data2vg -Q n hdisk56
getopt: Not a recognized flag: Q
Usage: importvg [ [ [-V MajorNumber] [-y VGname] [-f] [-c] [-x] ] | [-L VGname] ]
[-n] [-F] [-R] PVname


Imports the definition of a volume group.

3:root@node1:/root # importvg -c -y sapHP1data2vg -f hdisk56
PV Status:      hdisk56 00c1905a2dc49d75        PVACTIVE
                00c1905a2dc491a8        NONAME
varyonvg: Volume group sapHP1data2vg is varied on.
sapHP1data2vg
0516-783 importvg: This imported volume group is concurrent capable.
Therefore, the volume group must be varied on manually.

3:root@node1:/root # lsvg -l sapHP1data2vg
0516-010 : Volume group must be varied on; use varyonvg command.

3:root@node1:/root # varyonvg -c sapHP1data2vg
0516-052 varyonvg: Volume group cannot be varied on without a
quorum. More physical volumes in the group must be active.
Run diagnostics on inactive PVs.


Changing the quorum with chvg also fails, because the VG is not varied on:

3:root@node1:/root # chvg -Qn sapHP1data2vg
0516-024 lqueryvg: Unable to open physical volume.
Either PV was not configured or could not be opened. Run
diagnostics.
0516-010 chvg: Volume group must be varied on; use varyonvg command.
0516-732 chvg: Unable to change volume group sapHP1data2vg.
3:root@node1:/root # varyonvg -c sapHP1data2vg -Q n
Usage: varyonvg [-f] [-n] [-M LTGSize] [-u ] [-s] [-c] [-b] [-r] [-p] [-t] VGname
Varies a volume group on.

Since we know the cause, we use the force varyon option:

3:root@node1:/root # varyonvg -fc sapHP1data2vg
PV Status:     hdisk57 00c1905a2dc491a8        PVREMOVED
               hdisk56 00c1905a2dc49d75        PVACTIVE
varyonvg: Volume group sapHP1data2vg is varied on.
3:root@node1:/root # 0516-934 /usr/sbin/syncvg: Unable to synchronize logical volume lvHP128.
0516-932 /usr/sbin/syncvg: Unable to synchronize volume group sapHP1data2vg.
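
The syncvg failure is expected while hdisk57 is still unreachable. Once the SAN maintenance is over and the paths are restored, the stale copies can be resynchronized; a sketch of the usual follow-up (assuming the VG is already varied on, as here):

# After the storage paths for hdisk57 return:
syncvg -v sapHP1data2vg        # resynchronize all stale partitions in the VG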

Now we are able to read the VG information:

3:root@node1:/root # lsvg -l sapHP1data2vg

sapHP1data2vg:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
lvHP128             jfs2       256     512     2    closed/stale  /usr/sap/HP1/D06
loglv01             jfs2log    1       2       2    closed/syncd  N/A

Run a read-only file system check first:

3:root@node1:/root # fsck /usr/sap/HP1/D06
The current volume is: /dev/lvHP128
Primary superblock is valid.
J2_LOGREDO:log redo processing for /dev/lvHP128
Primary superblock is valid.
*** Phase 1 - Initial inode scan
*** Phase 2 - Process remaining directories
*** Phase 3 - Process remaining files
*** Phase 4 - Check and repair inode allocation map
*** Phase 5 - Check and repair block allocation map
File system is clean.

Then run fsck -y to fix anything found:

3:root@node1:/root # fsck -y /usr/sap/HP1/D06
The current volume is: /dev/lvHP128
Primary superblock is valid.
J2_LOGREDO:log redo processing for /dev/lvHP128
Primary superblock is valid.
*** Phase 1 - Initial inode scan
*** Phase 2 - Process remaining directories
*** Phase 3 - Process remaining files
*** Phase 4 - Check and repair inode allocation map
*** Phase 5 - Check and repair block allocation map
File system is clean.

Test the mount manually:

3:root@node1:/root # mount /usr/sap/HP1/D06

3:root@node1:/root # df -k /usr/sap/HP1/D06

Filesystem    1024-blocks      Free %Used    Iused %Iused Mounted on

/dev/lvHP128     16777216   9755396   42%     2834     1% /usr/sap/HP1/D06

3:root@node1:/root # cd /usr/sap/HP1/D06

3:root@node1:/usr/sap/HP1/D06 # ls

data                                         lost+found                                   trace_HP1_node1.140610.152742.tar.1.gz

exe                                          sec                                          trace_HP1_node1.150810.135657.tar.gz

igs                                          trace_HP1_node1.110610.130505.tar.3.gz  work

log                                          trace_HP1_node1.110610.134902.tar.2

Unmount the file system, vary off the VG, and test it again at the cluster level:

3:root@node1:/usr/sap/HP1/D06 # cd /

3:root@node1:/ # umount /usr/sap/HP1/D06

3:root@node1:/ # varyoffvg sapHP1data2vg

3:root@node1:/ # exportvg sapHP1data2vg

3:root@node1:/ # importvg -cL sapHP1data2vg hdisk56
0516-306 getlvodm: Unable to find volume group sapHP1data2vg in the Device
Configuration Database.
0516-306 redefinevg: Unable to find volume group sapHP1data2vg in the Device
Configuration Database.
0516-780 importvg: Unable to import volume group from hdisk56.

3:root@node1:/ # importvg -c -y sapHP1data2vg hdisk56
PV Status:      hdisk56 00c1905a2dc49d75        PVACTIVE
                00c1905a2dc491a8        NONAME
varyonvg: Volume group sapHP1data2vg is varied on.
sapHP1data2vg
0516-783 importvg: This imported volume group is concurrent capable.
Therefore, the volume group must be varied on manually.
3:root@node1:/ #

Finally, check the cluster status with clRGinfo:

1:root@node1:/root # clRGinfo
-----------------------------------------------------------------------------
Group Name     State                        Node
-----------------------------------------------------------------------------
RG_HP1_CI      ONLINE                       node2
               OFFLINE                      node1
RG_HP1_APPL    ONLINE                       node1
1:root@node1:/root #

The "0516-034 : Failed to open VG special file" issue is resolved. Enjoy!


add memory to lpar using hmc command line, HSCL297A some mismatches between pending and current


In this article I explain a problem I faced while adding memory to an LPAR from the HMC command line (error code HSCL297A). While the memory was being added, a network problem disconnected the HMC session, and less memory was added to the LPAR than requested.

The server currently has 384000 MB of memory, as you can see in the output below:

1:root@dehensv123:/ # prtconf -m
Memory Size: 384000 MB
1:root@dehensv123:/ #
1:root@dehensv123:/ # lparstat -i
Node Name : dehensv123
Partition Name : dehensv123
Partition Number : 6
Type : Shared-SMT-4
Mode : Capped
Entitled Capacity : 20.50
Partition Group-ID : 32774
Shared Pool ID : 0
Online Virtual CPUs : 22
Maximum Virtual CPUs : 30
Minimum Virtual CPUs : 4
Online Memory : 384000 MB <——————————————- Current Memory
Maximum Memory : 537600 MB
Minimum Memory : 230400 MB
Variable Capacity Weight : 0
Minimum Capacity : 2.50
Maximum Capacity : 30.00
Capacity Increment : 0.01
Maximum Physical CPUs in system : 64
Active Physical CPUs in system : 64
Active CPUs in Pool : 64
Shared Physical CPUs in system : 64
Maximum Capacity of Pool : 6400
Entitled Capacity of Pool : 2333
Unallocated Capacity : 0.00
Physical CPU Percentage : 93.18%
Unallocated Weight : 0
Memory Mode : Dedicated
Total I/O Memory Entitlement : -
Variable Memory Capacity Weight : -
Memory Pool ID : -
Physical Memory in the Pool : -
Hypervisor Page Size : -
Unallocated Variable Memory Capacity Weight: -
Unallocated I/O Memory entitlement : -
Memory Group ID of LPAR : -
Desired Virtual CPUs : 22
Desired Memory : 384000 MB
Desired Variable Capacity Weight : 0
Desired Capacity : 20.50
Target Memory Expansion Factor : -
Target Memory Expansion Size : -
1:root@dehensv123:/ #

To find which frame the LPAR resides on, run the following for loop on the HMC:

maniperu@henhmcdco2:~> for i in `lssyscfg -r sys -Fname`
> do echo $i
> lssyscfg -m $i -r lpar -Fname,lpar_id|grep dehensv123
> echo ----
> done

Output:

HENDCS701
----
HENDCO703
dehensv123,6
----
HENDCS702
----
HENDCS703
----
HENDCO701
----
HENDCO702
----
maniperu@henhmcdco2:~>
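
The same lookup can be collapsed into a single loop that prints only the matching frame; a sketch, assuming the same lssyscfg fields as above:

# Print only the frame(s) hosting the LPAR dehensv123
for i in `lssyscfg -r sys -F name`; do
    lssyscfg -m $i -r lpar -F name | grep -q dehensv123 && echo $i
done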

Checking the memory information at the frame level:

maniperu@henhmcdco2:~> lshwres -r mem -m HENDCO703 --level sys
configurable_sys_mem=1048576,curr_avail_sys_mem=592128,pend_avail_sys_mem=592128,installed_sys_mem=1048576,max_capacity_sys_mem=deprecated,deconfig_sys_mem=0,sys_firmware_mem=15104,mem_region_size=256,configurable_num_sys_huge_pages=0,curr_avail_num_sys_huge_pages=0,pend_avail_num_sys_huge_pages=0,max_num_sys_huge_pages=60,requested_num_sys_huge_pages=0,huge_page_size=16384,total_sys_bsr_arrays=256,bsr_array_size=4096,curr_avail_sys_bsr_arrays=256,max_mem_pools=1,max_paging_vios_per_mem_pool=2,default_hpt_ratios=1:64,"possible_hpt_ratios=1:32,1:64,1:128,1:256,1:512"
maniperu@henhmcdco2:~>

Querying the memory information by LPAR name:

maniperu@henhmcdco2:~> lshwres -r mem -m HENDCO703 --level lpar --filter lpar_names=dehensv123
lpar_name=dehensv123,lpar_id=6,curr_min_mem=230400,curr_mem=384000,curr_max_mem=537600,pend_min_mem=230400,pend_mem=384000,pend_max_mem=537600,run_min_mem=230400,run_mem=384000,curr_min_num_huge_pages=0,curr_num_huge_pages=0,curr_max_num_huge_pages=0,pend_min_num_huge_pages=0,pend_num_huge_pages=0,pend_max_num_huge_pages=0,run_num_huge_pages=0,mem_mode=ded,curr_mem_expansion=0.0,pend_mem_expansion=0.0,curr_hpt_ratio=1:64,curr_bsr_arrays=0
maniperu@henhmcdco2:~>

Or by LPAR ID:

maniperu@henhmcdco2:~> lshwres -r mem -m HENDCO703 --level lpar --filter lpar_ids=6
lpar_name=dehensv123,lpar_id=6,curr_min_mem=230400,curr_mem=384000,curr_max_mem=537600,pend_min_mem=230400,pend_mem=384000,pend_max_mem=537600,run_min_mem=230400,run_mem=384000,curr_min_num_huge_pages=0,curr_num_huge_pages=0,curr_max_num_huge_pages=0,pend_min_num_huge_pages=0,pend_num_huge_pages=0,pend_max_num_huge_pages=0,run_num_huge_pages=0,mem_mode=ded,curr_mem_expansion=0.0,pend_mem_expansion=0.0,curr_hpt_ratio=1:64,curr_bsr_arrays=0
maniperu@henhmcdco2:~>

I ran the following command to add the memory, but it hung:

maniperu@henhmcdco2:~> chhwres -r mem -m HENDCO703 -o a -p dehensv123 -q 79872

I pressed Ctrl-C, then retried the same command with a 5-minute timeout; it failed:

maniperu@henhmcdco2:~> chhwres -r mem -m HENDCO703 -o a -p dehensv123 -q 79872 -w 5
HSCL3205 The managed system is busy, please try the operation again later.

Hence we restart the RMC daemon (ctrmc) on the LPAR:

1:root@dehensv123:/ # stopsrc -s ctrmc
0513-044 The ctrmc Subsystem was requested to stop.
1:root@dehensv123:/ # lssrc -a|grep -i rmc
ctrmc rsct inoperative
1:root@dehensv123:/ # startsrc -s ctrmc
0513-059 The ctrmc Subsystem has been started. Subsystem PID is 2687666.
1:root@dehensv123:/ # lssrc -a|grep -i rmc
ctrmc rsct 2687666 active
1:root@dehensv123:/ #
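
From the HMC side you can confirm that the RMC connection to the partition is active again before retrying the DLPAR operation (a hedged check; the rmc_state field also appears in the lssyscfg output later in this article):

maniperu@henhmcdco2:~> lssyscfg -r lpar -m HENDCO703 -F name,rmc_state --filter lpar_ids=6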

Checking the LPAR now shows 428544 MB: because the operation was interrupted, only 44544 MB of the requested 79872 MB was added.

1:root@dehensv123:/ # prtconf -m
Memory Size: 428544 MB
1:root@dehensv123:/ #
maniperu@henhmcdco2:~> lshwres -r mem -m HENDCO703 --level lpar --filter "lpar_ids=6"
lpar_name=dehensv123,lpar_id=6,curr_min_mem=230400,curr_mem=428544,curr_max_mem=537600,pend_min_mem=230400,pend_mem=428544,pend_max_mem=537600,run_min_mem=230400,run_mem=428544,curr_min_num_huge_pages=0,curr_num_huge_pages=0,curr_max_num_huge_pages=0,pend_min_num_huge_pages=0,pend_num_huge_pages=0,pend_max_num_huge_pages=0,run_num_huge_pages=0,mem_mode=ded,curr_mem_expansion=0.0,pend_mem_expansion=0.0,curr_hpt_ratio=1:64,curr_bsr_arrays=0
maniperu@henhmcdco2:~>
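
The amount still missing can be computed on the HMC from the current value (a sketch; 463872 is the intended total of 384000 + 79872):

TARGET=463872
CURR=`lshwres -r mem -m HENDCO703 --level lpar --filter lpar_ids=6 -F curr_mem`
echo "still to add: `expr $TARGET - $CURR` MB"    # prints 35328 MB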


Trying to add the remaining 35328 MB produces the HSCL297A error:

maniperu@henhmcdco2:~> chhwres -r mem -m HENDCO703 -o a -p dehensv123 -q 35328 -w 5
HSCL297A There are some mismatches between pending and current values. Run the rsthwres command to re-sync values.
maniperu@henhmcdco2:~>

Now let's try the rsthwres command, which "restores the hardware resource configuration of a managed system, following the failure of a dynamic logical partitioning operation":

maniperu@henhmcdco2:~> rsthwres -r mem -m HENDCO703 -p dehensv123
maniperu@henhmcdco2:~>

Now add the remaining 35328 MB:

maniperu@henhmcdco2:~> chhwres -r mem -m HENDCO703 -o a -p dehensv123 -q 35328 -w 5
maniperu@henhmcdco2:~> lshwres -r mem -m HENDCO703 --level lpar --filter "lpar_ids=6"
lpar_name=dehensv123,lpar_id=6,curr_min_mem=230400,curr_mem=463872,curr_max_mem=537600,pend_min_mem=230400,pend_mem=463872,pend_max_mem=537600,run_min_mem=230400,run_mem=463872,curr_min_num_huge_pages=0,curr_num_huge_pages=0,curr_max_num_huge_pages=0,pend_min_num_huge_pages=0,pend_num_huge_pages=0,pend_max_num_huge_pages=0,run_num_huge_pages=0,mem_mode=ded,curr_mem_expansion=0.0,pend_mem_expansion=0.0,curr_hpt_ratio=1:64,curr_bsr_arrays=0
maniperu@henhmcdco2:~>
1:root@dehensv123:/tmp # prtconf -m
Memory Size: 463872 MB
1:root@dehensv123:/tmp #

Finally, update the partition profile so the extra memory is not lost if the LPAR is rebooted:

maniperu@henhmcdco2:~> lssyscfg -m HENDCO703 -r lpar --filter "lpar_ids=6"
name=dehensv123,lpar_id=6,lpar_env=aixlinux,state=Running,resource_config=1,os_version=AIX 6.1 6100-01-08-1014,logical_serial_num=1084EAB6,default_profile=default,curr_profile=default,work_group_id=none,shared_proc_pool_util_auth=0,allow_perf_collection=0,power_ctrl_lpar_ids=none,boot_mode=norm,lpar_keylock=norm,auto_start=0,redundant_err_path_reporting=0,rmc_state=active,rmc_ipaddr=10.175.192.66,time_ref=0,lpar_avail_priority=127,desired_lpar_proc_compat_mode=default,curr_lpar_proc_compat_mode=POWER7,suspend_capable=0,remote_restart_capable=0,affinity_group_id=none
maniperu@henhmcdco2:~>
maniperu@henhmcdco2:~> lssyscfg -r prof -m HENDCO703 --filter "lpar_ids=6"
name=default,lpar_name=dehensv123,lpar_id=6,lpar_env=aixlinux,all_resources=0,min_mem=230400,desired_mem=384000,max_mem=537600,min_num_huge_pages=0,desired_num_huge_pages=0,max_num_huge_pages=0,mem_mode=ded,mem_expansion=0.0,hpt_ratio=1:64,proc_mode=shared,min_proc_units=2.5,desired_proc_units=20.5,max_proc_units=30.0,min_procs=4,desired_procs=22,max_procs=30,sharing_mode=cap,uncap_weight=0,shared_proc_pool_id=0,shared_proc_pool_name=DefaultPool,affinity_group_id=none,io_slots=none,lpar_io_pool_ids=none,max_virtual_slots=300,"virtual_serial_adapters=0/server/1/any//any/1,1/server/1/any//any/1",virtual_scsi_adapters=none,"virtual_eth_adapters=210/0/801//0/0/ETHERNET0//all/none,230/0/1801//0/0/ETHERNET0//all/none,250/0/601//0/0/ETHERNET0//all/none",vtpm_adapters=none,"virtual_fc_adapters=""101/client/2/henvo703a1/101/c0507603995e0002,c0507603995e0003/1"",""102/client/3/henvo703a2/102/c0507603995e0004,c0507603995e0005/1"",""103/client/2/henvo703a1/103/c0507603995e0006,c0507603995e0007/0"",""104/client/3/henvo703a2/104/c0507603995e0000,c0507603995e0001/0""",hca_adapters=none,boot_mode=norm,conn_monitoring=1,auto_start=0,power_ctrl_lpar_ids=none,work_group_id=none,redundant_err_path_reporting=0,bsr_arrays=0,lhea_logical_ports=none,lhea_capabilities=none,lpar_proc_compat_mode=default,electronic_err_reporting=null
maniperu@henhmcdco2:~>
maniperu@henhmcdco2:~> chsyscfg -r prof -m HENDCO703 -i "name=default,lpar_name=dehensv123,desired_mem=463872"
maniperu@henhmcdco2:~> lssyscfg -r prof -m HENDCO703 --filter "lpar_ids=6"
name=default,lpar_name=dehensv123,lpar_id=6,lpar_env=aixlinux,all_resources=0,min_mem=230400,desired_mem=463872,max_mem=537600,min_num_huge_pages=0,desired_num_huge_pages=0,max_num_huge_pages=0,mem_mode=ded,mem_expansion=0.0,hpt_ratio=1:64,proc_mode=shared,min_proc_units=2.5,desired_proc_units=20.5,max_proc_units=30.0,min_procs=4,desired_procs=22,max_procs=30,sharing_mode=cap,uncap_weight=0,shared_proc_pool_id=0,shared_proc_pool_name=DefaultPool,affinity_group_id=none,io_slots=none,lpar_io_pool_ids=none,max_virtual_slots=300,"virtual_serial_adapters=0/server/1/any//any/1,1/server/1/any//any/1",virtual_scsi_adapters=none,"virtual_eth_adapters=210/0/801//0/0/ETHERNET0//all/none,230/0/1801//0/0/ETHERNET0//all/none,250/0/601//0/0/ETHERNET0//all/none",vtpm_adapters=none,"virtual_fc_adapters=""101/client/2/henvo703a1/101/c0507603995e0002,c0507603995e0003/1"",""102/client/3/henvo703a2/102/c0507603995e0004,c0507603995e0005/1"",""103/client/2/henvo703a1/103/c0507603995e0006,c0507603995e0007/0"",""104/client/3/henvo703a2/104/c0507603995e0000,c0507603995e0001/0""",hca_adapters=none,boot_mode=norm,conn_monitoring=1,auto_start=0,power_ctrl_lpar_ids=none,work_group_id=none,redundant_err_path_reporting=0,bsr_arrays=0,lhea_logical_ports=none,lhea_capabilities=none,lpar_proc_compat_mode=default,electronic_err_reporting=null
maniperu@henhmcdco2:~>

You can also verify the profile in the HMC GUI.

That resolves the HSCL297A error. Have fun!


Log Files in PowerHA SystemMirror ( HACMP )


In this tutorial I just want to share a few details about log files in PowerHA SystemMirror (HACMP), taken from the IBM PowerHA SystemMirror 7.1 Redbook.

Log files in PowerHA SystemMirror: The PowerHA SystemMirror software writes the messages it generates to the system console and to several log files. Because each log file contains a different level of detail, system administrators can focus on different aspects of PowerHA SystemMirror processing by viewing different log files. PowerHA SystemMirror lets you view, redirect, save, and change parameters of these log files, so you can customize them to your particular needs.

The main log files include:

  • /var/hacmp/adm/cluster.log : This file tracks cluster events.
  • /var/hacmp/log/hacmp.out : This file records the output generated by configuration scripts as they execute. Event summaries appear after the verbose output for events initiated by the Cluster Manager, making it easier to scan the hacmp.out file for important information. In addition, event summaries provide HTML links to the corresponding events within the hacmp.out file.
  • /var/hacmp/adm/history/cluster.mmddyyyy : This log file logs the daily cluster history.
  • /var/hacmp/clverify/clverify.log : This file contains the verbose messages output during verification. Cluster verification consists of a series of checks performed against various PowerHA SystemMirror configurations. Each check attempts to detect either a cluster consistency issue or an error. The messages indicate where the error occurred (for example, the node, device, or command).
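
For day-to-day troubleshooting it is usually enough to follow the event log while reproducing the problem and to scan the cluster log for errors; a quick sketch, assuming the default log locations listed above:

# Watch cluster event processing live:
tail -f /var/hacmp/log/hacmp.out
# Scan for failed events afterwards:
grep -i -E "error|fail" /var/hacmp/adm/cluster.log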


Backup VIOS using command backupios with -mksysb flag


In my first article on VIO server backup, I explained how to back up a VIO server by creating a nim_resources.tar file with the backupios command. In this article we will see how to back up the VIOS using backupios with the -mksysb flag, which takes less time and space.

Difference

When backupios is used with the -mksysb flag, the resources used by the installios command are not saved in the image, so a VIO server can be restored from this image only with NIM.

Creating a nim_resources.tar file, on the other hand, produces a backup that can be reinstalled from the HMC using the installios command.

Procedure to run backupios with the -mksysb flag:

Task 1: Log in to the VIO server as padmin.
Task 2: Switch to root privileges with:

$ oem_setup_env

Task 3: Create a mount directory where the .mksysb backup image will be written:

# mkdir /vios01/backup

Task 4: Mount a file system from the NIM master on the mount directory /vios01/backup on VIOS01:

# mount server1:/export/mksysb /vios01/backup

Task 5: Run exit to return to the padmin shell for the backupios command:

# exit

Task 6: Run the backupios command with the -file option, specifying the path to the mounted directory:

$ backupios -file /vios01/backup/`hostname`.mksysb -mksysb
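
To restore later, the image is typically defined as a mksysb resource on the NIM master; a sketch (the resource and object names here are hypothetical):

# On the NIM master:
nim -o define -t mksysb -a server=master \
    -a location=/export/mksysb/vios01.mksysb vios01_mksysb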

Enjoy the backupios -mksysb command!

 


VIOS vs AIX matrix


In this article we publish the VIOS vs AIX matrix (VIOS level vs AIX OS level). I find this matrix very helpful when deciding whether to upgrade the VIO servers in a client environment. Please have a look and let me know if anything is unclear.

To determine the AIX level, run the command:

oslevel -s

To determine the VIOS level:
1) Log in to the VIO partition as the user padmin.

2) Issue the ioslevel command:

$ ioslevel
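
To collect the level from many VIO servers at once, the command can be run remotely over SSH (a sketch; it assumes key exchange is already set up, as described in the article on running remote commands on the VIO server later in this feed, and the host names vio1/vio2 are placeholders):

for h in vio1 vio2; do
    echo "$h: `ssh padmin@$h ioscli ioslevel`"
done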

 

VIO Level                                 IOS Level             AIX Level          bos.mp64

61S
  61S - Gold Install mksysb               2.2.1.4               AIX 6.1 TL7 SP4    6.1.7.15
  FP25-SP02                               2.2.1.4               AIX 6.1 TL7 SP3    6.1.7.15

61Q
  61Q - Gold Install mksysb               2.2.1.0               AIX 6.1 TL7        6.1.7.0
  FP25                                    2.2.1.1               AIX 6.1 TL7 SP1    6.1.7.1
  FP25-SP01                               2.2.1.3               AIX 6.1 TL7 SP2    6.1.7.2

61N
  61N - Gold Install mksysb               2.2.0.12-FP24         AIX 6.1 TL6 SP5    6.1.6.15
  FP24 SP02                               2.2.0.12-FP24-SP02    AIX 6.1 TL6 SP5    6.1.6.15
  FP24 SP03                               2.2.0.13-FP24-SP03    AIX 6.1 TL6 SP5    6.1.6.15

61L
  61L - Gold Install mksysb               2.2.0.0               AIX 6.1 TL6        6.1.6.0
  FP24                                    2.2.0.10-FP24         AIX 6.1 TL6 SP1    6.1.6.1
  FP24-SP1                                2.2.0.11-FP24         AIX 6.1 TL6 SP3    6.1.6.3

61J
  61J - Gold Install mksysb               2.1.3.0               AIX 6.1 TL5        6.1.5.0
  FP23                                    2.1.3.0-FP23          AIX 6.1 TL5 SP2    6.1.5.2

61H
  61H - Gold Install mksysb               2.1.2.0               AIX 6.1 TL4        6.1.4.0
  FP 22                                   2.1.2.10-FP22         AIX 6.1 TL4 SP1    6.1.4.1
  FP 22.1                                 2.1.2.10-FP22.1       AIX 6.1 TL4 SP2    6.1.4.2

61F
  61F - Gold Install mksysb               2.1.1.0               AIX 6.1 TL3        6.1.3.0
  FP 21                                   2.1.1.10-FP21         AIX 6.1 TL3        6.1.3.0

61D
  61D - Gold Install mksysb               2.1.0.0               AIX 6.1 TL2        6.1.2.0
  61D - Gold Migration                    2.1.0.0               AIX 6.1 TL2        6.1.2.0
  FP 20.0                                 2.1.0.1-FP20.1        AIX 6.1 TL2        6.1.2.0
  FP 20.1 (required for NPIV support)     2.1.0.10-FP20.1       AIX 6.1 TL2 SP2    6.1.2.2


hacmp concurrent status not showing on lspv


Concurrent status not showing in lspv output: cluster VG problem

As we know, when we run lspv to list the LUNs, a HACMP cluster VG should show as concurrent on both nodes. When I checked the HACMP cluster volume groups, I found that cllsvg was not showing a volume group that should be in the cluster.

The resulting problems are:
1. Newly added LUNs in the cluster VG are not visible on the pair node.
2. The file systems cannot be extended using cluster commands.
3. Failover may not happen because of the cluster VG problem.

In my case one node in the cluster showed the VG as concurrent while the pair node did not.

On node1, sapP11datavg shows as concurrent:
[node1 lspv output]
On node2, sapP11datavg does not show as concurrent; the node treats it like a local VG:
[node2 lspv output]

The cllsvg command does not show the volume group sapP11datavg:

root@node1:/root # cllsvg
P11 logP11vg
P11 sapP11vg
root@node1:/root #

To solve this problem, first note the VG's major number on node1:

root@node1:/root # lspv|sort +2 -u
hdisk27    00c3e4dd82ff3e85    hb_a_vg
hdisk1     00c3e4dd82fefd62    hb_b_vg
hdisk14    00c3e4dd82fef839    logP11vg        concurrent
hdisk11    00c3e4ddcc5fda30    rootvg          active
hdisk102   00c3e4ddf2777172    sapP11datavg    concurrent
hdisk10    00c3e4dd82ff3927    sapP11vg        concurrent
hdisk0     00c3e4ddccd3e7b9    swapvg          active
root@node1:/root #
root@node1:/root # clRGinfo
-----------------------------------------------------------------------------
Group Name     State          Node
-----------------------------------------------------------------------------
P11            ONLINE         node1
               OFFLINE        node2
P11_APPL       ONLINE         node2
1:root@node1:/root #

The major number is 102 for the VG sapP11datavg:

1:root@node1:/root # ls -l /dev/sapP11datavg
crw-rw—- 1 root system 102, 0 Jun 12 2011 /dev/sapP11datavg
1:root@node1:/root #

On node2, the VG shows like a local volume group:

1:root@node2:/root # lspv|sort +2 -u
hdisk120   00c3e4dd22285152    None
hdisk2     00c3e4dd82ff3e85    hb_a_vg
hdisk27    00c3e4dd82fefd62    hb_b_vg
hdisk13    00c3e4dd82fef8b7    logP11vg        concurrent
hdisk0     00c3e48dbae71875    rootvg          active
hdisk10    00c3e4dd3181b5bb    sapP11datavg
hdisk100   00c3e4ddb1a859ef    sapP11vg        concurrent
hdisk1     00c3e48dbb195897    swapvg          active
1:root@node2:/root #

The major number for this VG on node2 differs from node1:

1:root@node2:/root # ls -l /dev/sapP11datavg
crw-rw—- 1 root system 38, 0 Apr 24 12:22 /dev/sapP11datavg
1:root@node2:/root #

Bring down the resource group and cluster services on the problem node first:

1:root@node2:/tmp # smit cl_admin
HACMP Resource Group and Application Management
Bring a Resource Group Offline
Then stop the cluster services:
1:root@node2:/tmp # smit cl_admin
Manage HACMP Services
Stop Cluster Services

Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]

* Stop now, on system restart or both now +
Stop Cluster Services on these nodes [node2]
BROADCAST cluster shutdown? true
* Select an Action on Resource Groups Bring Resource Groups Offline +

Repeat the same steps on the other node.

Before the learning import on the problem node, note that the VG shows 68 LUNs in total:

1:root@node2:/tmp # lspv|grep sapP11datavg|wc -l
68
1:root@node2:/tmp #

Now do the learning import with the concurrent option for the problem VG on the problem node:

1:root@node2:/tmp # importvg -cL sapP11datavg 00c3e4dd3181b612
sapP11datavg
1:root@node2:/tmp #

After the import, sapP11datavg shows 72 LUNs in total:

1:root@node2:/tmp # lspv|grep sapP11datavg|wc -l
72
1:root@node2:/tmp #

Check the major number for the VG on both nodes:

1:root@node2:/tmp # ls -l /dev/sapP11datavg
crw-rw—- 1 root system 38, 0 Oct 21 02:09 /dev/sapP11datavg
2:root@node1:/root # ls -l /dev/sapP11datavg
crw-rw—- 1 root system 102, 0 Jun 12 2011 /dev/sapP11datavg
2:root@node1:/root #
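
A quick way to compare the VG major number on both nodes in one shot (a sketch; it assumes passwordless ssh between the nodes):

for n in node1 node2; do
    echo "$n: `ssh $n ls -l /dev/sapP11datavg`"
done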

Export the VG on the problem node:

1:root@node2:/tmp # exportvg sapP11datavg
1:root@node2:/tmp #

Now import the VG in concurrent mode with node1's major number; this solves the problem:

1:root@node2:/tmp # importvg -V 102 -y sapP11datavg -c 00c3e4dd3181b612
sapP11datavg
0516-783 importvg: This imported volume group is concurrent capable.
Therefore, the volume group must be varied on manually.
1:root@node2:/tmp # ls -l /dev/sapP11datavg
crw-rw—- 1 root system 102, 0 Oct 21 02:19 /dev/sapP11datavg
2:root@node1:/root # ls -l /dev/sapP11datavg
crw-rw—- 1 root system 102, 0 Jun 12 2011 /dev/sapP11datavg
You have mail in /usr/spool/mail/root
2:root@node1:/root #

Now start the cluster services and resource group:

1:root@node1:/root # smit cl_admin
Manage HACMP Services
Start Cluster Services
Start Cluster Services
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* Start now, on system restart or both both +
Start Cluster Services on these nodes [node1] +
* Manage Resource Groups Automatically +
BROADCAST message at startup? false +
Startup Cluster Information Daemon? true +
Ignore verification errors? false +
Automatically correct errors found during Interactively +
cluster start?

Check that the cluster is in a stable state:

1:root@node1:/root # lssrc -ls clstrmgrES
Current state: ST_STABLE
sccsid = "@(#)36 1.135.4.7 src/43haes/usr/sbin/cluster/hacmprd/main.C, hacmp.pe, 52haes_r541, 1028A_hacmp541 5/7/10 03:11:09"
i_local_nodeid 0, i_local_siteid -1, my_handle 1
ml_idx[1]=0
There are 0 events on the Ibcast queue
There are 0 events on the RM Ibcast queue
CLversion: 9
local node vrmf is 5419
cluster fix level is "0"
The following timer(s) are currently active:
Current DNP values
DNP Values for NodeId – 0 NodeName – node1
PgSpFree = 0 PvPctBusy = 0 PctTotalTimeIdle = 0.000000
DNP Values for NodeId – 0 NodeName – node2
PgSpFree = 0 PvPctBusy = 0 PctTotalTimeIdle = 0.000000
1:root@node1:/root #
1:root@node1:/root # clstat -o
clstat – HACMP Cluster Status Monitor
————————————-
Cluster: node1_34 (1289339159)
Sun Oct 21 02:30:13 UTC 2012
State: UP Nodes: 2
SubState: STABLE
Node: node1 State: UP
Interface: node1b2 (3) Address: 169.254.196.6
State: UP
Interface: node1b1 (2) Address: 192.168.192.5
State: UP
Interface: node1_hdisk27 (0) Address: 0.0.0.0
State: UP
Interface: node1_hdisk1 (1) Address: 0.0.0.0
State: UP
Interface: node1s1 (2) Address: 10.175.192.26
State: UP
Interface: node1s2 (3) Address: 10.175.196.33
State: UP
Resource Group: P11 State: On line
Node: node2 State: DOWN
Interface: node2b2 (3) Address: 169.254.196.7
State: DOWN
Interface: node2b1 (2) Address: 192.168.192.6
State: DOWN
1:root@node1:/root

Do the same on the other node, then check the resource groups:

2:root@node1:/root # clRGinfo
-----------------------------------------------------------------------------
Group Name     State          Node
-----------------------------------------------------------------------------
P11            ONLINE         node1
               OFFLINE        node2
P11_APPL       ONLINE         node2
root@node1:/root #


Download entire website with wget command


Sometimes you need to download all the pages of a particular directory. You could do it with FTP, but what if you do not have a valid user account there?

Let's try this to download an entire website:

wget -m http://www.example.com/

-m, --mirror
shortcut for -N -r -l inf --no-remove-listing.

wget -H -r --level=1 -k -p http://www.example.com

-r, --recursive
Specify recursive download.
-l, --level=NUMBER
Maximum recursion depth (inf or 0 for infinite).
-k, --convert-links
Make links in downloaded HTML point to local files.
-p, --page-requisites
Get all images, etc. needed to display HTML page.
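
When mirroring someone else's site it is polite to throttle the crawl; a sketch combining the options above with wget's standard rate-limiting flags:

# Mirror with local links, page requisites, and a gentle pace
wget -m -k -p --wait=1 --limit-rate=200k http://www.example.com/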

The web is open by nature, but site owners have ways to limit crawling: many use a robots.txt file to stop automated downloads from the server. In such cases you might trick the server by sending a blank user agent:

wget -m --user-agent="" http://www.example.com

There are a lot more options for wget; read the man page for the rest.



English Dictionary for Ubuntu – Artha


Artha is an off-line English dictionary. It is an open-source, cross-platform English thesaurus that works completely off-line and is based on WordNet (a superb database).

[screenshot: Artha, an English dictionary for Ubuntu]

If you are using Debian or one of its derivatives such as Ubuntu, Kubuntu, or Xubuntu, then Artha is already available in the repositories. You can install it with:

sudo apt-get install artha

Both Debian and Ubuntu have version 1.0.2 of Artha in their repositories, so the command above will not get you the latest 1.0.3 release. If you prefer the latest version, there are two options: a PPA and a direct .deb download.

1. PPA
Artha now has a PPA on Launchpad. Add it to your system's repository sources with:

sudo apt-add-repository ppa:legends2k/artha

This will prompt you to accept adding the PPA’s keys to your system. Accept it by pressing RETURN. Once this is done, update the package data and install Artha.

sudo apt-get update
sudo apt-get install artha

2. DEB

Alternatively, you can download the .deb file below for your architecture and install it (instructions below). This is a more manual method, and the PPA method is encouraged over it, since a direct .deb install will not update automatically should a newer version of Artha be released.

Architecture   Binary
i386           artha_1.0.3-1_i386.deb
AMD64          artha_1.0.3-1_amd64.deb
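
The downloaded package installs with dpkg (this fills in the usual .deb install step; pick the file matching your architecture):

sudo dpkg -i artha_1.0.3-1_amd64.deb
sudo apt-get install -f    # pull in any missing dependencies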


How to run remote command on VIO server


This article explains how to run a remote command on a VIO server. A few days back I had to check the IBM Virtual I/O Server version (ioslevel) on more than 200 servers. Only then did I realize that many experienced people, myself included, do not know how to run commands remotely against a restricted shell; as we all know, the VIO server ships with a restricted Korn shell, and we log in with the primary administrator user ID padmin.

Restricted shell : 

After logging in with the padmin user ID on a VIO server, you are placed into a restricted Korn shell. The restricted Korn shell works the same way as a standard Korn shell, except that you cannot:

  • Change the current working directory
  • Set the value of the SHELL, ENV, or PATH variables
  • Specify the path name of the command that contains a forward slash (/)
  • Redirect output of a command using any of the following characters: >, >|, <>, >>

As a result of these restrictions, you cannot execute commands that are not on your PATH, and you cannot send command output directly to a file. Instead, command output can be piped to the tee command.

Workaround:

Secure Shell (SSH) is shipped with the Virtual I/O Server, so scripts and commands can be run remotely after an exchange of SSH keys.

Step 1 ) Generate the public ssh key on the remote system. To know how to generate ssh key in details, please visit my earlier article SSH to a server without password for Admin Ease

Step 2 ) Transfer the ssh key to the Virtual I/O Server. The transfer can be done using File Transfer Protocol (FTP).

Step 3 ) On the Virtual I/O Server, type the following command to copy the public key to the .ssh directory:

$ cat id_rsa.pub >> .ssh/authorized_keys2

Step 4) Run any command from the remote server with the ioscli prefix, as in the example below. The command may prompt for a password the first time if the host has not yet been added as a known host.
Example :

ssh padmin@VIO1 ioscli ioslevel
2.2.1.4

Then I wrote a script, vioversion.sh, on our NIM server to run the command above against the 200+ servers; my work was finished in 10 minutes, and for the rest of the day I was free to roam.

$ cat /tmp/vioversion/vioversion.sh
#!/bin/ksh
echo
echo "Input the VIO server list file name:"
read input

# Append each server's ioslevel to a single report file.
for i in `cat /tmp/vioversion/$input`
do
echo "--------------------------------------------------------------------------" >> /tmp/vioversion/vioversion.out
echo " IOSLEVEL on VIO $i " >> /tmp/vioversion/vioversion.out
ssh padmin@$i ioscli ioslevel >> /tmp/vioversion/vioversion.out
done

This script writes its output to /tmp/vioversion/vioversion.out. Have fun running remote commands on the VIO server!


/etc/services file and Ports Concepts on Unix

On UNIX the /etc/services file maps port numbers to named services. The port numbers on which standard services are offered are defined in the RFC 1700 Assigned Numbers. The /etc/services file enables server and client programs to convert service names to these port numbers. The list is kept on each host in the file /etc/services. For each service, a single line should be present with the following information:

official_service_name     port_number/protocol_name     aliases

For example:

[root@webmanual01 ~]# grep -w 80 /etc/services
http    80/tcp    www www-http    # WorldWideWeb HTTP
http    80/udp    www www-http    # HyperText Transfer Protocol

The fields are as follows: http is the service name, 80 the port number, tcp the protocol name...

Unix Shell Script : While Loop

The general syntax for a bash while loop is as follows:

while [ CONTROL-COMMAND ]; do CONSEQUENT-COMMANDS; done

while [ CONDITION ]
do
    command1
    command2
    commandN
done

Where CONTROL-COMMAND or CONDITION can be any command, program, script, or shell construct that can exit with a success or failure status. As soon as the CONTROL-COMMAND fails, the loop exits. CONSEQUENT-COMMANDS: the CONDITION is evaluated, and if it is true, the CONSEQUENT-COMMANDS are executed; this repeats until the condition becomes false. The return status is the exit status of the last CONSEQUENT-COMMANDS command, or zero if none was executed.

Examples:

Example 1: A simple script with a while loop posting output for five...
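
A minimal runnable loop in the spirit of the truncated example above (assuming it simply counts to five):

#!/bin/ksh
# Print a message five times using a while loop.
i=1
while [ $i -le 5 ]
do
    echo "iteration $i"
    i=$((i + 1))
done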