Know about SAN and Storage Features

EMC VMAX

The major components of Symmetrix VMAX
System Bay:
VMAX Engines
Matrix Interface Board Enclosure (MIBE)
Power subsystem components
Standby power supplies (SPS)
Service Processor (server, keyboard/video/mouse, UPS)

VMAX Engine:
Symmetrix VMAX systems provide up to eight VMAX Engines within a system bay on standard configurations and up to four VMAX Engines on extended drive loop configurations. Each VMAX Engine includes:
– Two directors that support front-end, back-end and SRDF connections. Each director has:
– 16, 32, or 64 GB of physical memory
– One System Interface board (SIB) that connects the director and the Matrix Interface Board Enclosure (MIBE)
– Two Back End I/O Modules (2 ports, 4 Gb/s) that connect to storage bay drives.
– Two I/O Module carriers that provide connectivity between each director and the front-end I/O ports. Front End I/O Modules support:
– Fibre Channel host connectivity (4 ports, 2, 4, or 8 Gb/s)
– Fibre Channel SRDF connectivity (2 ports, 2, 4, or 8 Gb/s)
– FICON host connectivity (2 ports, 2, 4, or 8 Gb/s)
– iSCSI host connectivity (2 ports, 1 Gb/s)
– GigE SRDF connectivity.
– Two Management Modules that provide environmental monitoring
– Two VMAX engine power supplies
– Four cooling fans.

Storage Bay:
Disk array enclosures (DAEs)
Power subsystem: PDPs, PDUs, AC connectors
Standby power supply unit (SPS)

Each disk array enclosure contains:
– Two redundant disk array enclosure power/cooling fans
– Two link control card (LCC) modules
– From 5 to 15 drives per direct-attach enclosure
– From 4 to 15 drives per daisy-chain enclosure.  

The features of VMAX SE
Supported Disks: 48 to 360 disks
Single V-Max Engine—two directors
FICON, Fibre Channel, iSCSI, Gigabit Ethernet connectivity
Up to 128 GB global memory  
The features of VMAX
96 to 2,400 disks, up to 2 PB—three times more usable capacity
One to eight V-Max Engines (16 directors)
Up to 1 TB (512 GB usable) global mirrored memory
Twice the host ports—Fibre Channel, iSCSI, Gigabit Ethernet, FICON connectivity (up to 128 ports)
Twice the back-end connections for Flash, Fibre Channel, and SATA drives (up to 128 ports)
Quad-core 2.3 GHz processors to provide more than twice the IOPS
Virtual Matrix architecture connects and shares resources across director pairs, providing massive scalability
Details about Vaulting in VMAX
Each director pair (odd/even) on the VMAX system requires 200 GB of vault space, that is, 40 x 5 GB chunks of dedicated vault data space
The vault drives are M1 devices with no RAID or mirroring protection
The vault drives cannot be used by any host and are reserved for the Symmetrix
Vault drives cannot be used for TimeFinder/Snap, virtual, or dynamic sparing
The data space created by the vault drives is roughly equivalent to the size of the cache memory installed on the machine
Flash drives (EFDs) cannot be used for vaulting operations
For permanent sparing, 5 vault drives per loop are essential
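The vault sizing above is simple arithmetic; a quick shell check of the figures quoted (40 chunks of 5 GB each per director pair):

```shell
# Vault space per director pair, per the figures above:
# 40 dedicated chunks of 5 GB each.
CHUNKS=40
CHUNK_GB=5
VAULT_GB=$((CHUNKS * CHUNK_GB))
echo "Vault space per director pair: ${VAULT_GB} GB"   # prints 200 GB
```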
The management tools for Symmetrix VMAX
Symmetrix Management Console:
The primary interface for managing Symmetrix arrays.

EMC z/OS Storage Manager (EzSM):
An Interactive System Productivity Facility (ISPF) interface that manages Symmetrix system arrays in mainframe environments.

EMC ControlCenter:
An intuitive, browser-based family of products that provides management of the overall storage environment, including multivendor storage reporting, monitoring, configuration, and control.

EMC Solutions Enabler SYMCLI:
A library of commands that are entered from a command line or from a script.

EMC SMI-S Provider:
An SMI-compliant interface for EMC Symmetrix and CLARiiON arrays.

  
The supported disk drive types in VMAX
Symmetrix VMAX systems support Flash, Fibre Channel, and SATA II drives.
The features of Enginuity 5874
Auto-provisioning Groups:
Simplifies provisioning of configurations with a large number of hosts by allowing the creation of initiator, port, and storage groups. Auto-provisioning Groups is especially helpful in large, virtualized server environments that require the availability of many volumes to many host initiators and many storage ports.

Dynamic configuration changes:
Allows the dynamic configuration of BCV and SRDF device attributes. Decreases impact to hosts during BCV and SRDF set and clear operations.

Concurrent configuration changes:
Provides the ability to run scripts concurrently instead of serially.

New Management Integration:
New Management Integration features free up host resources and simplify Symmetrix system management by allowing you to:
Load Symmetrix Management Console on the Service Processor
Attach the Service Processor to your network
Open a browser window from any server in the network
Manage the Symmetrix system from anywhere in your enterprise

SRDF Features:
SRDF Extended Distance Protection (SRDF/EDP), adding and removing dynamic devices to SRDF/A (consistency exempt), two-mirror SRDF Enginuity Consistency Assist (ECA), SRDF/Star with R22 device protection, 250 SRDF group support, and more.

Tiered storage optimization:
Fully Automated Storage Tiering (FAST) — Reduces cost for performance, saves energy, and simplifies storage tier management by allowing the dynamic allocation of data across storage tiers, based on user-defined policies and on the changing performance requirements of the applications.

Enhanced Virtual LUN technology:
Provides the ability to nondisruptively change the disk and protection type of a logical volume, and allows the migration of open systems, mainframe, and IBM i volumes to unallocated storage or to existing volumes.

Virtual Provisioning:

Simplifies storage management:
Improves capacity utilization by presenting more storage to a host than is physically consumed at the onset; physical storage is allocated, as needed, from a shared pool.

Automates pool rebalancing:
Allows users to nondisruptively balance workloads and extend thin pool capacity, in small increments if required, while maximizing performance and reducing total cost of ownership.

Maximum number of hypers per physical disk in VMAX (Enginuity 5874)
512
Concurrent configuration change sessions on VMAX
Each session holds its own configuration change locks.
Number of configuration change sessions we can run concurrently on VMAX
Up to four concurrent configuration change sessions are allowed to run at the same time, when they are non-conflicting. This means that multiple parallel configuration change sessions can run at the same time as long as the changes do not include any conflicts on the device back-end port, the device front-end port, or the device itself.
Fan-out ratio of VMAX
512:1
Number of mirror positions RAID 5 will occupy in VMAX
One
Number of initiators that can be masked to one FA port in VMAX
256
Can a single FA port be a member of multiple port groups?
Yes.
Which flag has to be enabled on a port before it can be a member of a port group?
ACLX
Can a device be a member of multiple storage groups?
Yes
Can a single HBA be a member of multiple initiator groups?
No
Maximum number of members an initiator group can have
32
The maximum meta size in VMAX
255 members x 240 GB hypers = ~60 TB
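The ~60 TB figure follows directly from the two limits quoted; a quick shell check:

```shell
# Maximum meta size: 255 members of 240 GB hypers each.
MEMBERS=255
HYPER_GB=240
META_GB=$((MEMBERS * HYPER_GB))
echo "Max meta: ${META_GB} GB, ~$((META_GB / 1024)) TB"   # 61200 GB, ~59 TB
```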
How to list the reserved devices
symconfigure -sid "SymID" list -reserved
Advantages of Auto-Provisioning Groups
Eliminates searching for required storage on arrays
Eliminates the separate mapping and masking tasks otherwise required for each initiator/port combination
Eliminates host interruptions
Eliminates manual storage reclamation
Initiators can be dynamically added to or removed from initiator groups
Ports can be dynamically added to or removed from port groups
Storage can be dynamically added to or removed from storage groups
The different types of ports that can be members of a port group
Only Fibre Channel and GigE ports on front-end directors are allowed
The restrictions of initiator groups
An initiator can belong to only one initiator group
An initiator group can contain a maximum of 32 initiators
Initiator groups can be cascaded
The steps to replace a faulty HBA
Find out and note down the old HBA WWN
symaccess list logins
Swap out the old HBA card with the new HBA
Discover the new HBA and note down the WWN
symaccess discover hba or symaccess list hba
Replace the WWN
symaccess -sid "SymID" replace -wwn "old_WWN" -new_wwn "new_WWN"
Establish the new alias for the HBA
symaccess discover hba -rename
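The sequence above can be collected into a dry-run script. This is a sketch only: the SID and WWNs are hypothetical placeholders, and every command is echoed rather than executed so it can be reviewed before running against a real array.

```shell
#!/bin/sh
# Dry-run sketch of the HBA replacement sequence.
# The SID and WWNs below are hypothetical placeholders.
SID=000194900123
OLD_WWN=10000000c9abcd01
NEW_WWN=10000000c9abcd02

run() { echo "+ $*"; }   # drop the echo to actually execute

run symaccess -sid "$SID" list logins                                   # confirm the old WWN
run symaccess -sid "$SID" replace -wwn "$OLD_WWN" -new_wwn "$NEW_WWN"   # swap in the new WWN
run symaccess discover hba -rename                                      # re-establish the alias
```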
The advantages of Thin Provisioning
Reduce the amount of allocated but unused physical storage
Avoid over allocation of physical storage to applications
Reduces energy consumption and footprint
Provision independently of physical storage infrastructure
Minimize the challenges of growth and expansion
Simplifies data layout
Saves costs by simplifying procedures to add new storage
Reduces disk contention and enhances performance
Maximize return on investment
Avoids application interruptions/host downtime.
Step by step procedure to implement Virtual/Thin Provisioning
Creating Data Devices:
symconfigure -sid "SymID" -cmd "create dev count=16, config=2-Way-Mir, attribute=datadev, emulation=FBA, size=4602;" commit -v -nop

Creating Thin Pool:
symconfigure -sid "SymID" -cmd "create pool PoolName type=thin;" commit -nop

Adding Data Devices to thin pool:
symconfigure -sid "SymID" -cmd "add dev 10E4:10E5 to pool PoolName type=thin, member_state=ENABLE;" commit -nop

Creating Thin Devices:
symconfigure -sid "SymID" -cmd "create dev count=16, size=4602, emulation=fba, config=TDEV;" commit -nop

Binding Thin devices to Thin Pool:
symconfigure -sid "SymID" -cmd "bind tdev 10F4:10F7 to pool PoolName;" commit -nop

Mapping and Masking TDEVs to host:
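With Enginuity 5874, this step uses the symaccess masking view commands covered later in this post. Purely as an illustrative sketch, masking the bound TDEVs might look like this (the SID, WWN, device range, and group/view names are all hypothetical; commands are echoed, not executed):

```shell
#!/bin/sh
# Dry-run sketch: mask the bound TDEVs (10F4:10F7) to a host.
# SID, WWN, and group/view names are hypothetical placeholders.
SID=000194900123
run() { echo "+ $*"; }   # drop the echo to actually execute

run symaccess -sid "$SID" create -name ThinSG -type storage devs 10F4:10F7
run symaccess -sid "$SID" create -name ThinPG -type port -dirport 7E:0,8E:0
run symaccess -sid "$SID" create -name ThinIG -type initiator -wwn 10000000c9abcd02
run symaccess -sid "$SID" create view -name ThinMV -sg ThinSG -pg ThinPG -ig ThinIG
```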

Status of TDEVs before they are bound to a thin pool
Not Ready
Number of thin pools we can create in an array
The number of pools that can be configured in a Symmetrix array is 512.
This is the total number of pools, including Virtual Provisioning thin pools, SRDF/A Delta Set Extension (DSE) pools, and TimeFinder/Snap pools.
Maximum number of data devices in a thin pool
Any number of data devices can be members of a thin pool; however, the total number of thin and data devices that can be configured within a Symmetrix system is limited to 64,000.
Thin Pools recommendations when you are adding data devices
Only data devices may be placed in a thin pool.
The data devices must all have the same emulation.
The data devices must all have the same protection type.
It is recommended that data devices in a pool all reside on drives that have the same rotational speed.
The data devices in the pool should generally be spread across as many DAs and drives of a given speed as possible.
The devices should be evenly spread across the DAs and drives.
The wide striping provided by Virtual Provisioning will spread thin devices evenly across the data devices. The storage administrator must ensure that the data devices are evenly spread across the back end.
It is recommended that all data devices in a pool are of the same size. Using different size devices could result in uneven data distribution.
The data device sizes should be as large as possible to minimize the number of devices required to encompass the desired overall pool capacity.
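The last two recommendations interact: fewer, larger, equal-size devices are preferred. A small sketch of the round-up arithmetic (the pool and device sizes here are hypothetical examples, not VMAX limits):

```shell
# How many equal-size data devices cover a desired pool capacity (round up).
POOL_TB=10
DEV_GB=240
DEVS=$(( (POOL_TB * 1024 + DEV_GB - 1) / DEV_GB ))
echo "${DEVS} devices of ${DEV_GB} GB for a ${POOL_TB} TB pool"   # 43 devices
```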
The VMAX Storage Optimization features
Dynamic Cache Partitioning:
Allows the allocation of portions of cache to specific device groups.

Symmetrix Priority Controls:
Allows the prioritization of read I/O and SRDF/S transfers by host applications.

Symmetrix Optimizer:
Optimizes performance by monitoring access patterns on storage arrays and transparently moving data between storage tiers.

Virtual LUN:
Allows the movement of data between storage tiers.

Fully Automated Storage Tiering (FAST):
Provides sophisticated background algorithms that can automate the allocation and relocation of data across storage tiers based on the changing performance requirements of applications.

Storage provisioning with symaccess allows you to create a group of devices, a group of director ports, a group of host initiators, and with one command, associate them in what is called a masking view. Once a masking view exists, devices, ports, and initiators can be easily added or removed from their respective groups.

Step by step procedure for creating Auto-provisioning Groups
The steps for creating a masking view are:
Search the environment for Symmetrix devices on each HBA
symaccess discover hba

Create a storage group (one or more devices)
symaccess -sid XXXX create -name StorageGroupName -type storage devs 3250:3350

Create a port group (one or more director/port combinations)
symaccess -sid XXXX create -name PortGroupName -type port -dirport 7E:0,7G:1,8F:0

Create an initiator group (one or more host WWNs or iSCSIs)
symaccess -sid XXXX create -name InitiatorGroupName -type initiator -wwn wwn

Create a masking view containing the storage group, port group, and initiator group.
When a masking view is created, the devices are automatically masked and mapped.
symaccess -sid XXXX create view -name MaskingViewName -sg StorageGroupName -pg PortGroupName -ig InitiatorGroupName

The purpose of FAST (Fully Automated Storage Tiering)
FAST is Symmetrix software that runs background algorithms to continuously analyze the utilization (busy rate) of the Symmetrix array devices.
FAST can move the most-used data to the fastest storage, such as Enterprise Flash Drives, and the least-used data to the slowest storage, such as SATA drives, while maintaining the remaining data on Fibre Channel drives, based on user-defined Symmetrix tiers and FAST policies.
The objective of tiered storage is to minimize the cost of storage by putting the right data, on the right Symmetrix tier, at the right time.

Configure the Symmetrix array for FAST:
By defining Symmetrix tiers
By defining FAST policies
By defining storage groups

emc powerpath powermt cheatsheet

powermt in a nutshell
Usage:
powermt <command> [class=<class>|all]
powermt check [force] [hba=<hba#>|all] [dev=<path>|all]
powermt check_registration
powermt config
powermt display [ports] [dev=<path>|all] [every=<#seconds>]
powermt display options
powermt display paths [every=<#seconds>]
powermt display unmanaged
powermt manage {dev=<path> | class={invista | hitachi | hpxp | ess | hphsx}}
powermt unmanage {dev=<path> | class={invista | hitachi | hpxp | ess | hphsx}}
powermt load [file=<pathname>]
powermt release
powermt remove [force] hba=<hba#>|all | dev=<path>|all
powermt restore [hba=<hba#>|all] [dev=<path>|all]
powermt save [file=<pathname>]
powermt set mode=active|standby [hba=<hba#>|all] [dev=<path>|all]
powermt set periodic_autorestore=on|off
powermt set policy={ad|bf|co|lb|li|nr|re|rr|so} [dev=<path>|all]
powermt set priority=<#> [dev=<path>|all]
powermt update lun_names
powermt version

usage examples:
powermt display dev=all
powermt check_registration
powermt watch

An advanced usage is as follows:

powermt set policy=policy [class={symm|clariion|hitachi|hpxp|ess|all}]
[dev=device|all]

powermt set policy sets the load balancing and failover
policy for PowerPath devices.

Arguments

policy=policy
Sets the load balancing and failover policy to one
of the following values:

ad Adaptive. I/O requests are assigned to paths
based on an algorithm that takes into account
path load and logical device priority. This
policy is valid only for Hitachi Lightning,
HP xp, and IBM ESS storage systems and is the
default policy for them on platforms with a
valid PowerPath license.

bf Basic failover. Load balancing is not in
effect. I/O routing on failure is limited to
one HBA and one port on each storage
processor. When a host boots, it designates
one path (through one interface) for all I/O.
If an I/O is issued to a logical device that
cannot be reached via that path (that is, the
I/O cannot reach that logical device through
the device’s assigned interface), a trespass
is performed: the logical device is assigned
to the other interface.

Posted August 14, 2012 by g6237118
