NetApp Useful Commands

NETWORK
To check the network interface configuration:
netapp1> ifconfig -a
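
To set an address on a specific interface (a sketch only; the interface name and addresses are illustrative, not from the original post):
netapp1> ifconfig e0a 192.168.1.10 netmask 255.255.255.0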

DISKS
To assign all disks in one go:
netapp1> disk assign all

To check which disks are part of which aggregate/volume/RAID group:
netapp1> vol status -r
or
netapp1> aggr status -r

To check which disks are spares:
netapp1> vol status -s
or
netapp1> aggr status -s

To zero all spare disks:
netapp1> disk zero spares

To see which disks are owned by netapp1:
netapp1> disk show -o netapp1

To assign disks to a node in a cluster:
1. Verify on each filer that each disk has two paths:
netapp1> storage show disk -p
PRIMARY PORT SECONDARY PORT SHELF BAY
------------------------ ---- ------------------------ ---- ---------
switch2:44.126L85 - switch1:12.126L85 - - -
switch1:12.126L86 - switch2:44.126L86 - - -
switch2:44.126L87 - switch1:12.126L87 - - -
switch1:12.126L88 - switch2:44.126L88 - - -

netapp2> storage show disk -p
PRIMARY PORT SECONDARY PORT SHELF BAY
------------------------ ---- ------------------------ ---- ---------
switch2:60.126L85 - switch1:28.126L85 - - -
switch1:28.126L86 - switch2:60.126L86 - - -
switch2:60.126L87 - switch1:28.126L87 - - -
switch1:28.126L88 - switch2:60.126L88 - - -

2. Find out which disks are not owned by any filer:
netapp1> disk show -n
DISK OWNER POOL SERIAL NUMBER
------------ ------------- ----- -------------
switch2:44.126L85 Not Owned NONE 6001438005DEAAE00000700004300000
switch1:12.126L86 Not Owned NONE 6001438005DEAAE00000700004360000
switch2:44.126L87 Not Owned NONE 6001438005DEAAE000007000043C0000
switch1:12.126L88 Not Owned NONE 6001438005DEAAE00000700004420000

3. Assign the disks to the filer:
netapp1> disk assign switch2:44.126L155 -o netapp1
Sat Aug 7 10:17:15 BST [netapp2: diskown.changingOwner:info]: changing ownership for disk switch1:12.126L155 (S/N 6001438005DEAAE00000700005D40000) from unowned (ID -1) to netapp1 (ID 151735370)

STORAGE PORTS
To check the aliases for the storage ports/devices a NetApp is zoned with:
netapp1> storage alias
Alias Mapping
-------------------------------------- -------------------------------------
mc0 WWN[5:005:07630f:0f2601]L1
mc1 WWN[5:005:07630f:0f2611]L1
st0 WWN[5:005:07630f:0f2604]
st1 WWN[5:005:07630f:0f2602]
st10 WWN[5:005:07630f:0f2613]
st11 WWN[5:005:07630f:0f2614]
st12 WWN[5:005:07630f:0f2615]
st13 WWN[5:005:07630f:0f2618]

To unalias the storage port aliases and configure the correct desired names:
netapp1> storage alias st1 WWN[5:005:07630f:0f2601]
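
To remove an existing alias before re-aliasing it, storage unalias should do the job (a sketch; the alias name here is just illustrative):
netapp1> storage unalias st1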

If you have several tape drives to alias, it is easier to put the alias commands together in a file under /etc/, say /etc/tape_aliases.dc1, and then source that file:
netapp1> source /etc/tape_aliases.dc1
netapp1> storage alias
Alias Mapping
-------------------------------------- -------------------------------------
mc0 WWN[5:005:07630f:0f2601]L1
mc1 WWN[5:005:07630f:0f2611]L1
st0 WWN[5:005:07630f:0f2604]
st1 WWN[5:005:07630f:0f2602]
st10 WWN[5:005:07630f:0f2613]
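
For reference, such a file simply contains one storage alias command per line; a minimal sketch (reusing the WWN placeholders above) as seen with rdfile:
netapp1> rdfile /etc/tape_aliases.dc1
storage alias st0 WWN[5:005:07630f:0f2604]
storage alias st1 WWN[5:005:07630f:0f2602]
storage alias mc0 WWN[5:005:07630f:0f2601]L1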

USER
To create a user and set its password:
netapp1> useradmin user add backups -g Users -c "Backup user" -m password
New password:
Retype new password:
User added.

To change the privilege level (advanced/diag):
netapp1> priv set advanced
netapp1> priv set diag
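
To return to the normal administrative level afterwards:
netapp1> priv set admin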

VOLUMES
Create a normal flexible volume
netapp1> vol create vol1 -l en agg1 1112g

Check the size of volume
netapp1> vol size vol1

Find out which aggr a volume is created on
netapp1> vol status vol1 -v
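
To grow (or shrink) a flexible volume later, vol size also accepts a size delta (a sketch; the size change is illustrative):
netapp1> vol size vol1 +100g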

AGGREGATES
Create aggregate
Example: 1
netapp1> aggr create agg_1 -r 17 -t raid_dp 51@137104M
-r 17 -> number of disks in each RAID group
137104M -> size of the disks - obtained from vol status -r
This will create an aggregate spanning 3 RAID groups, each containing 17 disks. If the disks were not in use before (never assigned to a RAID group) the aggregate is created straight away; if they were in use, they will need to be zeroed first, which takes a long time.
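As a quick sanity check of the layout (raid_dp uses 2 parity disks per RAID group): 51 disks at 17 disks per RAID group gives 3 RAID groups, each with 15 data + 2 parity disks, i.e. 45 data disks in total.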

Example: 2
In this case each filer owns 36 disks, so on each filer we can create 2 aggregates using raid0 with RAID groups of 9 disks (disk size 874496M obtained from vol status -r):
netapp1> aggr create agg_101_fa_2 -r 9 -t raid0 18@874496M
Creation of an aggregate with 18 disks has been initiated. The disks need
to be zeroed before addition to the aggregate. The process has been initiated
and you will be notified via the system log as disks are added.
Note however, that if system reboots before the disk zeroing is complete, the
volume won’t exist.
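
As a sanity check: 18 disks with -r 9 gives 2 RAID groups of 9 disks each, and because raid0 has no parity disks all 18 disks hold data.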

Delete an aggregate (it must be taken offline first, and any volumes it contains must be destroyed beforehand):
netapp1> aggr status
Aggr State Status Options
vol0 online raid4, trad root
aggr0 online raid_dp, aggr
netapp1> aggr offline aggr0
netapp1> aggr status
Aggr State Status Options
vol0 online raid4, trad root
aggr0 offline raid_dp, aggr lost_write_protect=off
netapp1> aggr destroy aggr0
netapp1> aggr status
Aggr State Status Options
vol0 online raid4, trad root

To set the snap reserve on aggregate
Aggregates are created with a 5% snapshot reserve by default. Generally we do not reserve space at the aggregate level but per volume, so disable aggregate snapshots and reduce the reserve to 0%.
netapp1> aggr options agg_2
nosnap=off, raidtype=raid_dp, raidsize=16, ignore_inconsistent=off,
snapmirrored=off, resyncsnaptime=60, fs_size_fixed=off,
snapshot_autodelete=on, lost_write_protect=on
netapp1> aggr options agg_2 nosnap 1
netapp1> aggr options agg_2
nosnap=on, raidtype=raid_dp, raidsize=16, ignore_inconsistent=off,
snapmirrored=off, resyncsnaptime=60, fs_size_fixed=off,
snapshot_autodelete=on, lost_write_protect=on
netapp1> snap reserve -A
Volume vol0: current snapshot reserve is 20% or 156217696 k-bytes.
Aggregate agg_1: current snapshot reserve is 0% or 0 k-bytes.
Aggregate agg_2: current snapshot reserve is 5% or 546761940 k-bytes.
netapp1> snap reserve -A agg_2 0
netapp1> snap reserve -A
Volume vol0: current snapshot reserve is 20% or 156217696 k-bytes.
Aggregate agg_1: current snapshot reserve is 0% or 0 k-bytes.
Aggregate agg_2: current snapshot reserve is 0% or 0 k-bytes.

NTP
Options to be set for NTP:

netapp1> options timed.protocol ntp
netapp1> options timed.servers ntp1.domain.com,ntp2.domain.com
netapp1> options timed.sched 5m
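
If the time daemon itself is not already running, it may also need enabling (assuming the standard 7-mode timed options):
netapp1> options timed.enable on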

NIS/DNS
To set up NIS and DNS:

1. Set up nis options

netapp1> options nis.enable on
netapp1> options nis.domainname prod.yourdomain.com

2. Set up DNS options

netapp1> options dns.enable on
netapp1> options dns.domainname yourdomain.com

3. Edit the /etc/rc file so that it looks something like the example below (it should contain the NIS/DNS related lines):
netapp1> rdfile /etc/rc
vif create multi vif1 -b ip e4a e3a
vif create multi vif2 -b ip e4b e3b
vif create single vifnetapp1 vif1 vif2
ifconfig vifnetapp1 `hostname`-vifnetapp1 netmask 255.255.255.0 partner vifnetapp2
route add default 192.168.1.1 1
routed on
options dns.domainname yourdomain.com
options dns.enable on
options nis.domainname prod.yourdomain.com
options nis.enable on
savecore
timezone Europe/London

4. Set up /etc/resolv.conf file

netapp1> rdfile /etc/resolv.conf
nameserver 192.168.1.250
nameserver 192.168.1.251
search yourdomain.com subdomain.yourdomain.com
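
To write the file from the filer console, wrfile can be used (assuming the standard 7-mode behaviour: it overwrites the file and input is ended with Ctrl-C):
netapp1> wrfile /etc/resolv.conf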

5. Verify DNS is set up correctly:
netapp1> dns info

6. Verify NIS is set up correctly:
netapp1> nis info

AUTOSUPPORT
Options to be set for autosupport (will require a reboot to take effect)

netapp1> options autosupport.enable on
netapp1> options autosupport.from admin@domain.com
netapp1> options autosupport.mailhost mailhost.domain.com
netapp1> options autosupport.noteto admin@domain.com
netapp1> options autosupport.to autosupport@netapp.com,admin@domain.com

To generate autosupport manually:
Suppose node1/node2 are clustered nodes, node2 has panicked, and it has been taken over by node1. You are asked to generate an autosupport on node2. Follow the steps below:

node1(takeover)> partner
Login to partner shell: node2
node2/node1> Mon Sep 27 11:24:13 BST [node1 (takeover): cf.partner.login:notice]: Login to partner shell: node2
node2/node1> options autosupport
autosupport.cifs.verbose off
autosupport.content complete
autosupport.doit DONT
.
.
.
node2/node1> options autosupport.doit DO
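
To get back to the node1 prompt afterwards, type partner again (in 7-mode takeover it should toggle between the local and partner shells):
node2/node1> partner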

SYSTEM
To find the WWN/system related information:
netapp1> sysconfig -av -> shows all the system information
netapp1> sysconfig -av 2 -> shows information about the adapter in slot 2 (an FC HBA in this example)
slot 2: FC Host Adapter 2a (QLogic 2432 rev. 3, N-port, )
Board name: QLE2462
Serial Number: RFC0933M00518
Firmware rev: 4.4.0
Host Port Id: 0x614613
FC Node Name: 2:100:001b32:928651
SFF Vendor: FINISAR CORP.
SFF Part Number: FTLF8524E2KNL
SFF Serial Number: PFP45SG
SFF Capabilities: 1, 2 or 4 Gbit
Link Data Rate: 2 Gbit
Switch Port: ??
slot 2: FC Host Adapter 2b (QLogic 2432 rev. 3, N-port, )
Board name: QLE2462
Serial Number: RFC0933M00518
Firmware rev: 4.4.0
Host Port Id: 0x614613
FC Node Name: 2:101:001b32:b28651
SFF Vendor: FINISAR CORP.
SFF Part Number: FTLF8524E2KNL
SFF Serial Number: PFP46EP
SFF Capabilities: 1, 2 or 4 Gbit
Link Data Rate: 2 Gbit

To use the CLI to add disks to an existing aggregate, follow the steps below:

1). First check the aggregate that needs to be extended:

lab> aggr status aggr1 -v
Aggr State Status Options
aggr1 online raid_dp, aggr nosnap=off, raidtype=raid_dp,
raidsize=16,
ignore_inconsistent=off,
snapmirrored=off,
resyncsnaptime=60,
fs_size_fixed=off,
snapshot_autodelete=on,
lost_write_protect=on
Volumes:
Plex /aggr1/plex0: online, normal, active
RAID group /aggr1/plex0/rg0: normal

lab> df -Am
Aggregate total used avail capacity
aggr0 256MB 244MB 12MB 95%
aggr0/.snapshot 13MB 10MB 2MB 79%
aggr1 128MB 0MB 128MB 0%
aggr1/.snapshot 6MB 0MB 6MB 0%

2). Check the available spares by running sysconfig -r; in this example:

lab> sysconfig -r
Aggregate aggr1 (online, raid_dp) (zoned checksums)
Plex /aggr1/plex0 (online, normal, active)
RAID group /aggr1/plex0/rg0 (normal)

RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity v4.24 v4 1 8 FC:B - FCAL N/A 70/144384 77/158848
parity v4.25 v4 1 9 FC:B - FCAL N/A 70/144384 77/158848
data v4.26 v4 1 10 FC:B - FCAL N/A 70/144384 77/158848
data v4.27 v4 1 11 FC:B - FCAL N/A 70/144384 77/158848
data v4.28 v4 1 12 FC:B - FCAL N/A 70/144384 77/158848

Aggregate aggr0 (online, raid0) (zoned checksums)
Plex /aggr0/plex0 (online, normal, active)
RAID group /aggr0/plex0/rg0 (normal)

RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
data v4.16 v4 1 0 FC:B - FCAL N/A 120/246784 127/261248
data v4.17 v4 1 1 FC:B - FCAL N/A 120/246784 127/261248
data v4.18 v4 1 2 FC:B - FCAL N/A 120/246784 127/261248

Spare disks

RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
Spare disks for zoned checksum traditional volumes or aggregates only
spare v4.29 v4 1 13 FC:B - FCAL N/A 70/144384 77/158848
spare v4.32 v4 2 0 FC:B - FCAL N/A 70/144384 77/158848
spare v4.33 v4 2 1 FC:B - FCAL N/A 70/144384 77/158848
spare v4.34 v4 2 2 FC:B - FCAL N/A 70/144384 77/158848
spare v4.35 v4 2 3 FC:B - FCAL N/A 70/144384 77/158848
spare v4.36 v4 2 4 FC:B - FCAL N/A 70/144384 77/158848
spare v4.37 v4 2 5 FC:B - FCAL N/A 70/144384 77/158848
spare v4.38 v4 2 6 FC:B - FCAL N/A 70/144384 77/158848
spare v4.39 v4 2 7 FC:B - FCAL N/A 70/144384 77/158848
spare v4.40 v4 2 8 FC:B - FCAL N/A 70/144384 77/158848
spare v4.41 v4 2 9 FC:B - FCAL N/A 70/144384 77/158848
spare v4.42 v4 2 10 FC:B - FCAL N/A 70/144384 77/158848
spare v4.43 v4 2 11 FC:B - FCAL N/A 70/144384 77/158848
spare v4.44 v4 2 12 FC:B - FCAL N/A 70/144384 77/158848
spare v4.45 v4 2 13 FC:B - FCAL N/A 70/144384 77/158848
spare v4.19 v4 1 3 FC:B - FCAL N/A 1020/2089984 1027/2104448
spare v4.20 v4 1 4 FC:B - FCAL N/A 1020/2089984 1027/2104448
spare v4.21 v4 1 5 FC:B - FCAL N/A 1020/2089984 1027/2104448
spare v4.22 v4 1 6 FC:B - FCAL N/A 1020/2089984 1027/2104448

3). Next add 15 spare disks to aggr1. Because the maximum number of spindles per RAID group here is 16 (raid_dp, raidsize=16), when the 15 new spindles are added the existing RAID group is filled up to 16 disks and a new RAID group is created with the remaining 4 disks.

lab> aggr add aggr1 -d v4.29 v4.32 v4.33 v4.34 v4.35 v4.36 v4.37 v4.38 v4.39 v4.40 v4.41 v4.42 v4.43 v4.44 v4.45
Note: preparing to add 13 data disks and 2 parity disks.
Continue? ([y]es, [n]o, or [p]review RAID layout) p
The RAID group configuration will change as follows:
RAID Group Current NEW
---------- ------- ----
/aggr1/plex0/rg0 5 disks 16 disks
/aggr1/plex0/rg1 4 disks
Continue? ([y]es, [n]o, or [p]review RAID layout) yes

Sun Aug 30 17:25:33 GMT [raid.vol.disk.add.done:notice]: Addition of Disk /aggr1/plex0/rg0/v4.41 Shelf 2 Bay 9 [NETAPP VD-50MB 0042] S/N [37981022] to aggregate aggr1 has completed successfully
Sun Aug 30 17:25:33 GMT [raid.vol.disk.add.done:notice]: Addition of Disk /aggr1/plex0/rg0/v4.40 Shelf 2 Bay 8 [NETAPP VD-50MB 0042] S/N [37981021] to aggregate aggr1 has completed successfully
Sun Aug 30 17:25:33 GMT [raid.vol.disk.add.done:notice]: Addition of Disk /aggr1/plex0/rg0/v4.39 Shelf 2 Bay 7 [NETAPP VD-50MB 0042] S/N [37981020] to aggregate aggr1 has completed successfully
Sun Aug 30 17:25:33 GMT [raid.vol.disk.add.done:notice]: Addition of Disk /aggr1/plex0/rg0/v4.38 Shelf 2 Bay 6 [NETAPP VD-50MB 0042] S/N [37981019] to aggregate aggr1 has completed successfully
Sun Aug 30 17:25:33 GMT [raid.vol.disk.add.done:notice]: Addition of Disk /aggr1/plex0/rg0/v4.37 Shelf 2 Bay 5 [NETAPP VD-50MB 0042] S/N [37980918] to aggregate aggr1 has completed successfully
Sun Aug 30 17:25:33 GMT [raid.vol.disk.add.done:notice]: Addition of Disk /aggr1/plex0/rg0/v4.36 Shelf 2 Bay 4 [NETAPP VD-50MB 0042] S/N [37980917] to aggregate aggr1 has completed successfully
Sun Aug 30 17:25:33 GMT [raid.vol.disk.add.done:notice]: Addition of Disk /aggr1/plex0/rg0/v4.35 Shelf 2 Bay 3 [NETAPP VD-50MB 0042] S/N [13292816] to aggregate aggr1 has completed successfully
Sun Aug 30 17:25:33 GMT [raid.vol.disk.add.done:notice]: Addition of Disk /aggr1/plex0/rg0/v4.34 Shelf 2 Bay 2 [NETAPP VD-50MB 0042] S/N [13292715] to aggregate aggr1 has completed successfully
Sun Aug 30 17:25:33 GMT [raid.vol.disk.add.done:notice]: Addition of Disk /aggr1/plex0/rg0/v4.33 Shelf 2 Bay 1 [NETAPP VD-50MB 0042] S/N [13292614] to aggregate aggr1 has completed successfully
Sun Aug 30 17:25:33 GMT [raid.vol.disk.add.done:notice]: Addition of Disk /aggr1/plex0/rg0/v4.32 Shelf 2 Bay 0 [NETAPP VD-50MB 0042] S/N [13292513] to aggregate aggr1 has completed successfully
Sun Aug 30 17:25:33 GMT [raid.vol.disk.add.done:notice]: Addition of Disk /aggr1/plex0/rg0/v4.29 Shelf 1 Bay 13 [NETAPP VD-50MB 0042] S/N [13292512] to aggregate aggr1 has completed successfully
Sun Aug 30 17:25:33 GMT [raid.vol.disk.add.done:notice]: Addition of Disk /aggr1/plex0/rg1/v4.45 Shelf 2 Bay 13 [NETAPP VD-50MB 0042] S/N [37981126] to aggregate aggr1 has completed successfully
Sun Aug 30 17:25:33 GMT [raid.vol.disk.add.done:notice]: Addition of Disk /aggr1/plex0/rg1/v4.44 Shelf 2 Bay 12 [NETAPP VD-50MB 0042] S/N [37981125] to aggregate aggr1 has completed successfully
Sun Aug 30 17:25:33 GMT [raid.vol.disk.add.done:notice]: Addition of Disk /aggr1/plex0/rg1/v4.43 Shelf 2 Bay 11 [NETAPP VD-50MB 0042] S/N [37981124] to aggregate aggr1 has completed successfully
Sun Aug 30 17:25:33 GMT [raid.vol.disk.add.done:notice]: Addition of Disk /aggr1/plex0/rg1/v4.42 Shelf 2 Bay 10 [NETAPP VD-50MB 0042] S/N [37981123] to aggregate aggr1 has completed successfully
Addition of 15 disks to the aggregate has completed.

lab> aggr status aggr1
Aggr State Status Options
aggr1 online raid_dp, aggr
Volumes:
Plex /aggr1/plex0: online, normal, active
RAID group /aggr1/plex0/rg0: normal
RAID group /aggr1/plex0/rg1: normal

4). Next create vol1 with a total size of 200MB:

lab> vol create vol1 aggre1 200m
vol create: Containing aggregate 'aggre1' was not found

lab> vol create vol1 aggr1 200m
Creation of volume 'vol1' with size 200m on containing aggregate
'aggr1' has completed.

lab> vol status
Volume State Status Options
vol0 online raid0, flex root, no_atime_update=on,
create_ucode=on,
convert_ucode=on,
maxdirsize=2621
vol1 online raid_dp, flex create_ucode=on,
convert_ucode=on
lab>

5). Check the aggregate utilization:

lab> df -Am
Aggregate total used avail capacity
aggr0 256MB 244MB 12MB 95%
aggr0/.snapshot 13MB 10MB 2MB 79%
aggr1 684MB 201MB 482MB 30%
aggr1/.snapshot 36MB 0MB 36MB 0%
lab>
