Concurrent status not showing in lspv output: cluster VG problem
As we know, when we run the lspv command to list the LUNs, an HACMP cluster VG should show as concurrent on both nodes. While checking the HACMP cluster volume groups, I found that cllsvg was not listing a volume group that should be part of the cluster.
The resulting problems are:
1. Newly added LUNs in the cluster VG are not visible on the pair node.
2. The file system cannot be extended using the cluster commands.
3. Failover may not happen because of the cluster VG problem.
In my case, one node in the cluster showed the VG as concurrent while the pair node did not:
On node1: sapP11datavg shows as concurrent.
On node2: sapP11datavg does not show as concurrent; it is treated like a local VG.
The cllsvg command does not list the volume group sapP11datavg:
P11 logP11vg
P11 sapP11vg
root@node1:/root #
To solve this problem, first look at the lspv output on node1, where the VG correctly shows as concurrent:
hdisk27 00c3e4dd82ff3e85 hb_a_vg
hdisk1 00c3e4dd82fefd62 hb_b_vg
hdisk14 00c3e4dd82fef839 logP11vg concurrent
hdisk11 00c3e4ddcc5fda30 rootvg active
hdisk102 00c3e4ddf2777172 sapP11datavg concurrent
hdisk10 00c3e4dd82ff3927 sapP11vg concurrent
hdisk0 00c3e4ddccd3e7b9 swapvg active
root@node1:/root #
The resource group state at this point:
-----------------------------------------------------------------------------
Group Name       State         Node
-----------------------------------------------------------------------------
P11              ONLINE        node1
                 OFFLINE       node2
P11_APPL         ONLINE        node2
1:root@node1:/root #
Note down the major number of the VG device file on node1; here it is 102 for sapP11datavg:
crw-rw—- 1 root system 102, 0 Jun 12 2011 /dev/sapP11datavg
1:root@node1:/root #
Now on node2, the same VG shows up like a local volume group:
hdisk120 00c3e4dd22285152 None
hdisk2 00c3e4dd82ff3e85 hb_a_vg
hdisk27 00c3e4dd82fefd62 hb_b_vg
hdisk13 00c3e4dd82fef8b7 logP11vg concurrent
hdisk0 00c3e48dbae71875 rootvg active
hdisk10 00c3e4dd3181b5bb sapP11datavg
hdisk100 00c3e4ddb1a859ef sapP11vg concurrent
hdisk1 00c3e48dbb195897 swapvg active
1:root@node2:/root #
Here the major number for this VG is different from the one on node1:
crw-rw—- 1 root system 38, 0 Apr 24 12:22 /dev/sapP11datavg
1:root@node2:/root #
Bring down the resource group first, and then the cluster services, on the problem node:
HACMP Resource Group and Application Management
Bring a Resource Group Offline
Now bring down the cluster services:
Manage HACMP Services
Stop Cluster Services
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* Stop now, on system restart or both                  now                           +
  Stop Cluster Services on these nodes                 [node2]
  BROADCAST cluster shutdown?                          true
* Select an Action on Resource Groups                  Bring Resource Groups Offline +
Repeat the same steps on the other node.
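On most HACMP/PowerHA levels the same screens can be reached with SMIT fast paths; the exact fast-path names vary by version, so treat this as a sketch rather than the definitive procedure:

```shell
# SMIT fast path to the stop-cluster-services screen (version-dependent):
smitty clstop
# Choose "Bring Resource Groups Offline" as the action, as in the
# screen above, and run the stop on each node in turn.
```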
Before doing the learning import on the problem node, we note that the VG has 68 LUNs in total:
68
1:root@node2:/tmp #
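The LUN totals quoted here were presumably obtained by counting the disks that lspv attributes to the VG; a minimal sketch:

```shell
# Count the hdisks that lspv assigns to sapP11datavg:
lspv | grep -c sapP11datavg
# When the VG is varied on, lsvg -p gives the authoritative disk list:
# lsvg -p sapP11datavg
```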
Now do the learning import for the problem VG on the problem node:
sapP11datavg
1:root@node2:/tmp #
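The learning import re-reads the VGDA from one of the VG's disks and refreshes the local ODM definition of the VG. A sketch, assuming hdisk10 (the disk lspv shows in sapP11datavg on node2):

```shell
# -L performs a "learning" import against an already-known VG,
# picking up disks that were added on the other node:
importvg -L sapP11datavg hdisk10
```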
After the import, sapP11datavg shows 72 LUNs in total:
72
1:root@node2:/tmp #
Check the major number for the VG on both nodes; they still differ:
crw-rw—- 1 root system 38, 0 Oct 21 02:09 /dev/sapP11datavg
2:root@node1:/root # ls -l /dev/sapP11datavg
crw-rw—- 1 root system 102, 0 Jun 12 2011 /dev/sapP11datavg
2:root@node1:/root #
Now export the VG on the problem node:
1:root@node2:/tmp #
Then import the VG again in concurrent mode with the same major number as node1, and the problem is solved:
sapP11datavg
0516-783 importvg: This imported volume group is concurrent capable.
Therefore, the volume group must be varied on manually.
crw-rw—- 1 root system 102, 0 Oct 21 02:19 /dev/sapP11datavg
crw-rw—- 1 root system 102, 0 Jun 12 2011 /dev/sapP11datavg
2:root@node1:/root #
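The export/import step on node2 can be sketched as follows; the major number 102 and the disk hdisk10 come from the outputs above, so verify both on your own system before running anything:

```shell
# On node2, remove the stale local definition of the VG:
exportvg sapP11datavg
# Re-import it concurrent-capable (-c) with node1's major number (-V 102):
importvg -V 102 -c -y sapP11datavg hdisk10
# Confirm both nodes now show the same major number:
ls -l /dev/sapP11datavg
```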
Now start the cluster services and the resource group:
Manage HACMP Services
Start Cluster Services
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* Start now, on system restart or both                 both            +
  Start Cluster Services on these nodes                [node1]         +
* Manage Resource Groups                               Automatically   +
  BROADCAST message at startup?                        false           +
  Startup Cluster Information Daemon?                  true            +
  Ignore verification errors?                          false           +
  Automatically correct errors found during            Interactively   +
  cluster start?
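As with stopping, there is a SMIT fast path for this screen on most versions, and the cluster manager state can then be checked directly from the command line:

```shell
# SMIT fast path for starting cluster services (version-dependent):
smitty clstart
# Confirm the cluster manager has reached a stable state:
lssrc -ls clstrmgrES | grep -i state
```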
Check the cluster stable state (output of lssrc -ls clstrmgrES):
Current state: ST_STABLE
sccsid = "@(#)36 1.135.4.7 src/43haes/usr/sbin/cluster/hacmprd/main.C, hacmp.pe, 52haes_r541, 1028A_hacmp541 5/7/10 03:11:09"
i_local_nodeid 0, i_local_siteid -1, my_handle 1
ml_idx[1]=0
There are 0 events on the Ibcast queue
There are 0 events on the RM Ibcast queue
CLversion: 9
local node vrmf is 5419
cluster fix level is "0"
The following timer(s) are currently active:
Current DNP values
DNP Values for NodeId – 0 NodeName – node1
PgSpFree = 0 PvPctBusy = 0 PctTotalTimeIdle = 0.000000
DNP Values for NodeId – 0 NodeName – node2
PgSpFree = 0 PvPctBusy = 0 PctTotalTimeIdle = 0.000000
1:root@node1:/root #
clstat – HACMP Cluster Status Monitor
————————————-
Cluster: node1_34 (1289339159)
Sun Oct 21 02:30:13 UTC 2012
State: UP Nodes: 2
SubState: STABLE
Node: node1 State: UP
Interface: node1b2 (3) Address: 169.254.196.6
State: UP
Interface: node1b1 (2) Address: 192.168.192.5
State: UP
Interface: node1_hdisk27 (0) Address: 0.0.0.0
State: UP
Interface: node1_hdisk1 (1) Address: 0.0.0.0
State: UP
Interface: node1s1 (2) Address: 10.175.192.26
State: UP
Interface: node1s2 (3) Address: 10.175.196.33
State: UP
Resource Group: P11 State: On line
Node: node2 State: DOWN
Interface: node2b2 (3) Address: 169.254.196.7
State: DOWN
Interface: node2b1 (2) Address: 192.168.192.6
State: DOWN
1:root@node1:/root
Proceed the same way on the other node. The resource groups are back in their normal state:
-----------------------------------------------------------------------------
Group Name       State         Node
-----------------------------------------------------------------------------
P11              ONLINE        node1
                 OFFLINE       node2
P11_APPL         ONLINE        node2
root@node1:/root #
The post hacmp concurrent status not showing on lspv appeared first on web-manual.net.