Thursday, 27 September 2012

The VIOS lsmap Command :


The lsmap Command : 

Used to list mappings between virtual adapters and physical resources.
List all (virtual) disks attached to the vhost0 adapter 
lsmap -vadapter vhost0

List only the virtual target devices attached to the vhost0 adapter 
lsmap -vadapter vhost0 -field vtd

This line can be used as a list in a for loop 
lsmap -vadapter vhost0 -field vtd -fmt :|sed -e "s/:/ /g"
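For example, a minimal sketch of such a loop (assuming the padmin restricted shell, which is ksh based, allows command substitution and loops; vhost0 is just the adapter from the example above):
for vtd in $(lsmap -vadapter vhost0 -field vtd -fmt : | sed -e "s/:/ /g")
do
    echo "found virtual target device: $vtd"
done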

List all shared ethernet adapters on the system 
lsmap -all -net -field sea

List all (virtual) disks and their backing devices 
lsmap -all -type disk -field vtd backing

List all SEAs and their backing devices 
lsmap -all -net -field sea backing

VIOS Devices Concept :


Devices:

Discover new devices
cfgdev
›››   This is the VIOS equivalent of the AIX cfgmgr command.

List all adapters (physical and virtual) on the system 
lsdev -type adapter

List only virtual adapters 
lsdev -virtual -type adapter

List all virtual disks (created with mkvdev command) 
lsdev -virtual -type disk

Find the WWN of the fcs0 HBA 
lsdev -dev fcs0 -vpd | grep Network

List the firmware levels of all devices on the system 
lsfware -all
›››   The invscout command is also available in VIOS.

Get a long listing of every device on the system
lsdev -vpd

List all devices (physical and virtual) by their slot address 
lsdev -slots

List all the attributes of the sys0 device
lsdev -dev sys0 -attr

List the port speed of the (physical) ethernet adapter ent0
lsdev -dev ent0 -attr media_speed

List all the possible settings for media_speed on ent0
lsdev -dev ent0 -range media_speed

Set the media_speed option to auto negotiate on ent0
chdev -dev ent0 -attr media_speed=Auto_Negotiation

Set the media_speed to auto negotiate on ent0 on next boot 
chdev -dev ent0 \
      -attr media_speed=Auto_Negotiation \
      -perm
Turn on disk performance counters
chdev -dev sys0 -attr iostat=true

Wednesday, 26 September 2012

Storage Pool Concept in VIOS :


Storage Pool Concept :

• Storage pools work much like AIX VGs (Volume Groups) in that they reside on one or more PVs (Physical Volumes). One key difference is the concept of a default storage pool. The default storage pool is the target of storage pool commands where the storage pool is not explicitly specified.
• The default storage pool is rootvg. If storage pools are used in a configuration then the default storage pool should be changed to something other than rootvg.

List the default storage pool
lssp -default

List all storage pools 
lssp

List all disks in the rootvg storage pool 
lssp -detail -sp rootvg

Create a storage pool called client_boot on hdisk22
mksp client_boot hdisk22

Make the client_boot storage pool the default storage pool 
chsp -default client_boot

Add hdisk23 to the client_boot storage pool 
chsp -add -sp client_boot hdisk23

List all the physical disks in the client_boot storage pool 
lssp -detail -sp client_boot

List all the physical disks in the default storage pool 
lssp -detail

List all the backing devices (LVs) in the default storage pool 
lssp -bd

›››   Note: This command does NOT show virtual media repositories. Use the lssp command (with no options) to list free space in all storage pools.

Create a client disk on adapter vhost1 from client_boot storage pool
mkbdsp -sp client_boot 20G \
       -bd lv_c1_boot \
       -vadapter vhost1

Remove the mapping for the device just created, but save the backing device
rmbdsp -vtd vtscsi0 -savebd

Assign the lv_c1_boot backing device to another vhost adapter
mkbdsp -bd lv_c1_boot -vadapter vhost2

Completely remove the virtual target device lv_c1_boot
rmbdsp -vtd lv_c1_boot

Remove the last disk from the storage pool (this also deletes the storage pool)
chsp -rm -sp client_boot hdisk22

Create a client disk on adapter vhost2 from rootvg storage pool
mkbdsp -sp rootvg 1g \
       -bd murugan_hd1 \
       -vadapter vhost2 \
       -tn lv_murugan_1
›››   The LV name and the backing device (mapping) name are both specified in this command, which is different from the previous mkbdsp example. The -tn option does not seem to be compatible with all versions of the command and might be ignored in earlier versions (this command was run on VIOS 2.1). Also note the use of a consistent naming convention for the LV and the mapping - this makes understanding LV usage a bit easier. Finally, note that rootvg was used in this example only because of the limited disk available on the rather small system it was run on; putting client disks on rootvg does not represent an ideal configuration.

Virtual Disk Assignment :


Virtual Disk Assignment from the VIOS Server to Client Logical Partitions :

• Disks are presented to VIOC by creating a mapping between a physical disk or storage pool volume and the vhost adapter that is associated with the VIOC.
• Best practices configuration suggests that the connecting VIOS vhost adapter and the VIOC vscsi adapter should use the same slot number. This makes the typically complex array of virtual SCSI connections in the system much easier to comprehend.
• The mkvdev command is used to create a mapping between a physical disk and the vhost adapter.

Create a mapping of hdisk3 to the virtual host adapter vhost2.
mkvdev -vdev hdisk3 -vadapter vhost2 -dev vhd3

›››   It is called vhd3 for "WholeDisk_Client3_HDisk3". The intent of this naming convention is to relay the type of disk, where from, and who to.

Delete the virtual target device vhd3
rmvdev -vtd vhd3

Delete the above mapping by specifying the backing device hdisk3
rmvdev -vdev hdisk3

VIOS Unix Subsystems:


VIOS Unix Subsystems :

• The current VIOS runs on an AIX subsystem. (VIOS functionality is available for Linux; this document only deals with the AIX-based versions.)

• The padmin account logs in with a restricted shell. A root shell can be obtained by the oem_setup_env command.

• The root shell is designed for installation of OEM applications and drivers only. It may be required for a small subset of commands. (The purpose of this document is to provide a listing of most frequent tasks and the proper VIOS commands so that access to a root shell is not required.)

• The restricted shell has access to common Unix utilities such as awk, grep, sed, and vi. The syntax and usage of these commands has not been changed in VIOS. (Use "ls /usr/ios/utils" to get a listing of available Unix commands.)

• Redirection to a file is not allowed using the standard ">" character, but can be accomplished with the "tee" command.

Redirect the output of ls to a file
ls | tee ls.out

Determine the underlying (AIX) OS version (for driver install)
oem_platform_level

Exit the restricted shell to a root shell
oem_setup_env

Mirror the rootvg in VIOS to hdisk1
extendvg rootvg hdisk1 
mirrorios hdisk1
›››    The VIOS will reboot when finished
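A hedged verification sketch (run after the reboot; hdisk0/hdisk1 are only example names): each rootvg logical volume should now show two physical partitions per logical partition, and both disks should appear in the boot list.
lsvg -lv rootvg
bootlist -mode normal -ls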

Monday, 24 September 2012

TOPAS Command Description

Topas Command :



Command line options :

To control the variable (left) side of the default topas screen
  -d x Number of disks (x) to list in top disk section (Default: 20)
  -c x Number of CPU lines (x) to list in top CPU section (Default: 20). Note: This section displays the CPU graph in the default display/mode and you must toggle to the top-CPU mode using the 'c' runtime option.
  -n x Number of network interfaces (x) to list in top network section (Default: 20)
  -p x Number of processes (x) to list in the top process section (Default: 20)

To control the content of the initial (non-default) topas screen
  -P Display only process listing. (similar to default "top" behavior.)
  -U username Used with the -P option to limit process listing to only those owned by username
  -D Display only disk listing (similar to iostat)
  -L Logical partition display
  -C Display cross-system (multiple LPAR) statistics

Other
  -h Display (a more complete list of) command line options
  -i x Number of seconds (x) in each screen refresh interval (Default: 2)
Runtime options

  a Return to the default topas screen
  c Toggles the CPU section between default (graph), off, and top CPU list
  C Changes to the cross-LPAR display (same as starting with -C)
  d Toggles the Disk section between top disks, no disk section, and summary disk statistics.
  D Changes to the disk statistics display (same as starting with -D)
  f Toggles the filesystem statistics section between summary, top-3 filesystems, and off.
  h Changes to the help screen (includes additional help information / runtime keys)
  p Toggles the top process section on and off
  P Changes to the process view display (same as starting with -P)
  n Toggles the network section between top network interfaces, summary only, and off
  L Changes to LPAR view (same as starting with -L)
  q Quit topas
topas Tips

The right hand side of the default topas screen is hard-set and cannot be toggled. It will only display as many lines as fit in a default terminal size. The left hand side items can be toggled with keys for each section and (frequently) be expanded to use the entire screen with additional data not available in the default screen.
Columns on the left hand side of the screen (and the full screen modes) can be sorted by moving the cursor between the section headings. The current sort order can be determined by the highlighted section heading.
Alias the string top to topas -P (alias top='topas -P') to have a top-like command. Another alternative that gives more room to processes but still uses the default topas screen is topas -d 0 -n 0 (alias top='topas -d 0 -n 0'); see the sketch below.
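A minimal sketch of making this permanent (assuming a ksh-style ~/.profile; pick one of the two):
alias top='topas -P'            # full screen, top-like process list
# alias top='topas -d 0 -n 0'   # default screen with the disk and network sections suppressed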
Where to go from here

CPU:

topas provides a fairly deep analysis of system CPU load. It will allow you to identify the offending process that is consuming CPU on a system. Once a system is identified as CPU bound from the main screen of topas, you only need to look at the process list to see the process that is driving the usage. topas -P will display a more complete list of top processes that are by default sorted by CPU utilization.

Finding an offending process is the easy part of the battle when diagnosing CPU issues. The next step is to determine how the CPU is consumed. Application tools (such as those found in databases) can be helpful in this area. AIX provides a number of profiling tools that tell what an application is doing: truss, ProbeVue (AIX 6), and a number of trace-based utilities (curt, pprof, locktrace).
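As a minimal illustration (the PID is a placeholder), truss can attach to a running process and count the system calls it makes; interrupt it with Ctrl-C to print the summary:
  truss -c -p 12345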

When looking at total CPU utilization, one should watch for natural plateaus that form when a single threaded process becomes CPU bound on a multiple processor system. Symptoms of this show up as a consistent CPU utilization number that is at or near a fraction of the total CPU capacity of the system. For example, a single threaded CPU bound process will show up as 50% utilization on a 2 processor system, 33% on a 3 processor system, 25% on a four processor system, etc...

An additional issue associated with many larger / partitioned systems is the concept of processor affinity. While not directly a CPU related performance measurement, poor processor affinity on an LPARed system will cause additional CPU time as memory is accessed from remote cache or memory locations. Processor affinity can be monitored with the lparstat and mpstat tools, for example by using the -d option to mpstat and looking for higher numbers in the SXrd columns, where an increasing value of X represents poorer processor affinity.
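For example (the interval and count values are arbitrary), the dispatch/affinity statistics can be sampled with:
  mpstat -d 2 5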

Finally, it is essential to understand the underlying nature of a virtualized system. The number of (or fraction of) physical CPUs that back a virtual processor is key in understanding how loaded the system is. From a system point of view it may appear that a single CPU has been consumed, but in reality it may only be 1/10th of a processor in a capped micro-partition. topas will tell you the Physc (number of physical CPUs consumed) and %Entc (percentage of entitled capacity) when the system is running in a micro-partition.
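A quick way to watch this (interval and count are arbitrary) is lparstat, whose interval output includes the physc and %entc columns on a shared-processor partition:
  lparstat 2 5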

Memory:

High memory utilization can manifest itself as a CPU-bound system, but over-consumption or over-subscription of memory takes the form of paging. This is when processes have allocated and used more memory than the system physically has. To maintain the increased memory footprint of each process the system must write portions of memory to disk in a paging space.

The standard method of measuring memory is by looking for paging. Most healthy systems will page to some degree to favor more active applications and files (to cache). So when looking at paging statistics it is key to note not only how much paging space is in use, but how much paging activity is happening at the time.

The amount of paging space in use is visible on the main screen of topas. Slightly more detailed information is available from the lsps -a and svmon commands. It should be noted that a higher value in paging space utilization may not necessarily represent memory stress. It is possible that a one time event may have pushed many unused pages to disk. It is necessary to determine if the paging is ongoing and / or if the event that is causing it is one that is persistent and repeatable.
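For example (interval and count are arbitrary), lsps shows how much paging space is defined and in use, while the pi/po columns of vmstat show whether paging activity is happening right now:
  lsps -a
  vmstat 2 5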

The primary method to look for bloated processes is to start topas with the -P switch and sort the list by "PAGE SPACE" (by moving the cursor over that field). This does not tell how much of the application is paged out but how much of the application memory is backed by paging space.

It is not necessarily useful to see what application is paged out as this is not as relevant as the fact that the system is paging and how much the system is paging. (It is possible to see individual process memory usage using svmon -P <PID>) The key pieces of information are the amount of paging, the rate of paging, and who is pushing the others out (who is using more than expected).

Disk:

Extended disk statistics are viewable using the D runtime or -D command line option to topas. In both the default screen and the extended disk statistics screen you can sort the disk results based upon various fields by moving the cursor over that field.

The most important fields to look for on a disk are the transfer rate and the % Busy. Neither of these items alone will tell the story of disk I/O. A low transfer rate may be a disk operating at full capacity handling less optimal I/O requests as opposed to the relatively quiet disk that it appears to be. A poorly performing disk, such as a disk array that is rebuilding, will go to 100% busy but may not be transferring much data at all.

The transfer rate is an indication of how much data is moving between the disk and the system. This number can vary based upon the kind of I/O that the system is doing. Random I/O will generate less of a transfer rate because the disk will be forced to seek to new locations between I/Os. This time spent seeking will subtract from the amount of data that can be transferred. If the system is processing larger, more sequential I/O then it will tend to have a larger transfer rate.

Reading the % Busy rate tells how busy the system considers the disk. It is a measurement of how much time the system spent waiting for I/O requests to that disk. It is not a measurement of how much the disk is actually transferring or how efficient the disk is. When used with the transfer rate it can be used to determine what the maximum transfer rate is. A disk may not go 100% busy if the system lacks the processing power to support the I/O.

When looking for data beyond what topas gives, the next place to look is the iostat command. This is a rather comprehensive tool in terms of the data that it provides for disk statistics. Once a hot disk has been found, filemon can be used to watch what files are being accessed and fileplace can be used to look for fragmentation in individual files.
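A hedged sketch of that progression (hdisk0, the output file name, and the file path are placeholders):
  iostat -D hdisk0 2 5
  filemon -o fmon.out -O all     # starts tracing file, LV, and disk activity
  trcstop                        # stop the trace once the workload has run; the report is written to fmon.out
  fileplace -v /path/to/hotfile  # check placement/fragmentation of a suspect file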

Network :

topas does not display more networking information than what is available on the default screen. Additional information is available from the netstat or XXXstat commands (where XXX is the interface type, such as entstat, tokstat, etc.).

entstat (and its variants) work on the device layer (ent0) while netstat works on the upper layer (en0). The following command will display a screen full of detailed device statistics every two seconds:
  while [ 1 ] ; do clear ; entstat ent0 ; sleep 2 ; done
A similar command for en0 is:
  netstat -I en0 2

There are a number of application specific tools, diagnostic aids, and trace based tools for network analysis. These include, but are not limited to: nfsstat, netpmon, iptrace, ipreport, ipfilter.

Sunday, 23 September 2012

VIOS User Management :

VIOS User Related Questions :

• padmin is the only user for most configurations. It is possible to configure additional users, such as operational users for monitoring purposes.
 
List attributes of the padmin user 
lsuser padmin

List all users on the system 
lsuser
›››   The optional parameter "ALL" is implied with no parameter.
Change the password for the current user 
passwd

VIOS Network Related Commands :


VIOS Network Commands :

Enable jumbo frames on the ent0 device 
chdev -dev ent0 -attr jumbo_frames=yes

View settings on ent0 device 
lsdev -dev ent0 -attr

List TCP and UDP sockets listening and in use 
lstcpip -sockets -family inet

List all (virtual and physical) ethernet adapters in the VIOS 
lstcpip -adapters

Equivalent of no -L command 
optimizenet -list

Set up initial TCP/IP config (en10 is the interface for the SEA ent10)
mktcpip -hostname vios1 \
        -inetaddr 10.143.181.207 \
        -interface en10 \
        -start -netmask 255.255.252.0 \
        -gateway 10.143.180.1

Find the default gateway and routing info on the VIOS 
netstat -routinfo

List open (TCP) ports on the VIOS IP stack 
lstcpip -sockets | grep LISTEN

Show interface traffic statistics on 2 second intervals 
netstat -state 2

Show verbose statistics for all interfaces 
netstat -cdlistats

Show the default gateway and route table 
netstat -routtable

Change the default route on en0 (fix a typo from mktcpip) 
chtcpip -interface en0 \
        -gateway \
        -add 192.168.1.1 \
        -remove 168.192.1.1

Change the IP address on en0 to 192.168.1.2 
chtcpip -interface en0 \
        -inetaddr 192.168.1.2 \
        -netmask 255.255.255.0

VIOS Useful Commands :


VIOS Commands :

Accept all VIOS license agreements
license -accept

(Re)Start the (initial) configuration assistant
cfgassist

Shutdown the server
shutdown
›››    Optionally include -restart

List the version of the VIOS system software 
ioslevel

List the boot devices for this lpar 
bootlist -mode normal -ls

List LPAR name and ID
lslparinfo

Display firmware level of all devices on this VIOS LPAR
lsfware -all

Display the MOTD
motd

Change the MOTD to an appropriate message 
motd "*****    Unauthorized access is prohibited!    *****"

List all (AIX) packages installed on the system 
lssw
›››   Equivalent to lslpp -L in AIX

Display a timestamped list of all commands run on the system 
lsgcl

Display the current date and time of the VIOS 
chdate

Change the current time and date to 3:03 PM August 4, 2012
chdate -hour 15 \
       -minute 3 \
       -month 8 \
       -day 4 \
       -year 2012

Change just the timezone to AST 
chdate -timezone AST
›››   The timezone change is visible on the next login.
›››   The date command is available and works the same as in Unix.
Brief dump of the system error log
errlog

Detailed dump of the system error log
errlog -ls | more

Remove error log events older than 30 days
errlog -rm 30
›››   The errlog command allows you to view errors by sequence, but does not give the sequence in the default format.

• errbr works on VIOS provided that the errpt command is in padmin's PATH.


CPU Related Topics :



CPU Related Concepts:

• Shared (virtual) processor partitions (Micro-Partitions) can utilize additional resources from the shared processor pool when available. Dedicated processor partitions can only use the "desired" amount of CPU, and can only go above that amount if another CPU is (dynamically) added to the LPAR.

• An uncapped partition can only consume up to the number of virtual processors that it has. (i.e.: An LPAR with 5 virtual CPUs that is backed by a minimum of 0.5 physical CPUs can consume up to 5 whole physical CPUs.) A capped partition can only consume up to its entitled CPU value. Allocations are in increments of 1/100th of a CPU; the minimum allocation is 1/10th of a CPU for each virtual CPU.

• All Micro-Partitions are guaranteed to have at least the entitled CPU value. Uncapped partitions can consume beyond that value, capped cannot. Both capped and uncapped relinquish unused CPU to a shared pool. Dedicated CPU partitions are guaranteed their capacity, cannot consume beyond their capacity, and on Power 6 systems, can relinquish CPU capacity to a shared pool.

• All uncapped micro-partitions using the shared processor pool compete for the remaining resources in the pool. When there is no contention for unused resources, a micro-partition can consume up to the number of virtual processors it has or the amount of CPU resources available to the pool.

• The physical CPU entitlement is set with the "processing units" values during the LPAR setup in the HMC. The values are defined as:
Minimum: The minimum physical CPU resource required for this partition to start.
Desired: The desired physical CPU resource for this LPAR. In most situations this will be the CPU entitlement. The CPU entitlement can be higher if resources were DLPARed in, or lower if the LPAR started closer to the minimum value.
Maximum: This is the maximum amount of physical CPU resources that can be DLPARed into the partition. This value does not have a direct bearing on capped or uncapped CPU utilization.

• The virtual CPU entitlement is set in the LPAR configuration much like the physical CPU allocation. Virtual CPUs are allocated in whole integer values. The difference with virtual CPUs (from physical entitlements) is that they are not a potentially constrained resource and the desired number is always received upon startup. The minimum and maximum numbers are effectively limits on DLPAR operations.

• Processor folding is an AIX CPU affinity method that ensures an AIX partition uses only as many CPUs as it requires. This is achieved by ensuring that the LPAR uses a minimal set of physical CPUs and idles those it does not need. The benefit is that the system will see a reduced impact from configuring additional virtual CPUs. Processor folding was introduced in AIX 5.3 TL3.
• When multiple uncapped micro-partitions compete for remaining CPU resources, the uncapped weight is used to calculate the CPU available to each partition. The uncapped weight is a value from 0 to 255. The uncapped weights of all partitions requesting additional resources are added together and then used to divide the available resources. The total amount of CPU received by a competing micro-partition is determined by the ratio of the partition's weight to the total of the requesting partitions. (The weight is not a nice value like in Unix.) The default weight is 128. A partition with a weight of 0 is effectively a capped partition. See the worked example below.
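A worked example (numbers chosen purely for illustration): if two uncapped partitions with weights 128 and 64 both want more CPU and 1.5 processing units are left in the pool, the first receives an extra 1.5 x 128/(128+64) = 1.0 processing units and the second 1.5 x 64/(128+64) = 0.5, on top of their guaranteed entitlements. The entitlement, virtual CPU, and weight settings of a running AIX partition can be checked from the OS, for example with:
lparstat -i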



Figure: virtualized and dedicated CPUs in a four-CPU system with a single SPP.


• Dedicated CPU partitions do not have a setting for virtual processors. LPAR 3 in the figure has a single dedicated CPU.
• LPAR 1 and LPAR 2 in the figure are Micro-Partitions with a total of five virtual CPUs backed by three physical CPUs. On a Power 6 system, LPAR 3 can be configured to relinquish unused CPU cycles to the shared pool, where they will be available to LPAR 1 and 2 (provided they are uncapped).

PowerVM Definitions:


PowerVM Definitions:


CoD - Capacity on Demand. The ability to add compute capacity in the form of CPU or memory to a running system by simply activating it. The resources must be pre-staged in the system prior to use and are (typically) turned on with an activation key. There are several different pricing models for CoD.



DLPAR - Dynamic Logical Partition. This was used originally as a further clarification on the concept of an LPAR as one that can have resources dynamically added or removed. The most popular usage is as a verb; ie: to DLPAR (add) resources to a partition.



HEA - Host Ethernet Adapter. The physical port of the IVE interface on some of the Power 6 systems. A HEA port can be added to a port group and shared amongst LPARs or placed in promiscuous mode and used by a single LPAR (typically a VIOS LPAR).



HMC - Hardware Management Console. An "appliance" server that is used to manage Power 4, 5, and 6 hardware. The primary purpose is to enable / control the virtualization technologies as well as provide call-home functionality, remote console access, and gather operational data.

IVE - Integrated Virtual Ethernet. The capability to provide virtualized Ethernet services to LPARs without the need of VIOS. This functionality was introduced on several Power 6 systems.


IVM - Integrated Virtualization Manager. This is a management interface that installs on top of the VIOS software that provides much of the HMC functionality. It can be used instead of a HMC for some systems. It is the only option for virtualization management on the blades as they cannot have HMC connectivity.


LHEA - Logical Host Ethernet Adapter. The virtual interface of an IVE in a client LPAR. These communicate via a HEA to the outside / physical world.

LPAR - Logical Partition. This is a collection of system resources that can host an operating system. To the operating system this collection of resources appears to be a complete physical system. Some or all of the resources on a LPAR may be shared with other LPARs in the physical system.


Lx86 - Additional software that allows x86 Linux binaries to run on Power Linux without recompilation.

MES - Miscellaneous Equipment Specification. This is a change order to a system, typically in the form of an upgrade. A RPO MES is for Record Purposes Only. Both specify to IBM changes that are made to a system.


MSPP - Multiple Shared Processor Pools. This is a Power 6 capability that allows for more than one SPP.



SEA - Shared Ethernet Adapter. This is a VIOS mapping of a physical to a virtual Ethernet adapter. A SEA is used to extend the physical network (from a physical Ethernet switch) into the virtual environment where multiple LPARs can access that network.



SPP - Shared Processor Pool. This is an organizational grouping of CPU resources that allows caps and guaranteed allocations to be set for an entire group of LPARs. Power 5 systems have a single SPP, Power 6 systems can have multiple.



VIOC - Virtual I/O Client. Any LPAR that utilizes VIOS for resources such as disk and network.

VIOS - Virtual I/O Server. The LPAR that owns physical resources and maps them to virtual adapters so VIOC can share those resources.


RAID Levels:

RAID (Redundant Array of Independent Disks): 




RAID 0:

Technology: Striping Data with No Data Protection.
Performance: Highest
Overhead: None
Minimum Number of Drives: 2 since striping
Data Loss: Upon one drive failure
Example: 5TB of usable space can be achieved through 5 x 1TB of disk.
Advantages: High Performance
Disadvantages: Guaranteed data loss upon a drive failure
Hot Spare: Upon a drive failure, a hot spare can be invoked, but there will be no data to copy over. Hot Spare is not a good option for this RAID type.
Supported: Clariion, Symmetrix, Symmetrix DMX (Meta BCV’s or DRV’s)
In RAID 0, the data is striped across all of the disks. This is great for performance, but if one disk fails, the data will be lost because there is no protection of that data.


RAID 1:

Technology: Mirroring and Duplexing
Performance: Highest
Overhead: 50%
Minimum Number of Drives: 2
Data Loss: A single drive failure causes no data loss; if both drives fail, all the data is lost.
Example: 5TB of usable space can be achieved through 10 x 1TB of disk.
Advantages: Highest Performance, One of the safest.
Disadvantages: High Overhead, Additional overhead on the storage subsystem. Upon a drive failure it becomes RAID 0.

Hot Spare: A Hot Spare can be invoked and data can be copied over from the surviving paired drive using Disk copy.
Supported: Clariion, Symmetrix, Symmetrix DMX
The exact data is written to two disks at the same time. Upon a single drive failure no data is lost and there are no degradation, performance, or data integrity issues. One of the safest forms of RAID, but with high overhead. In the old days, all Symmetrix arrays supported RAID 1 and RAID S. Highly recommended for high end business critical applications.
The controller must be able to perform two concurrent separate Reads per mirrored pair or two duplicate Writes per mirrored pair. One Write or two Reads are possible per mirrored pair. Upon a drive failure only the failed disk needs to be replaced.


RAID 1+0 :

Technology: Mirroring and Striping Data
Performance: High
Overhead: 50%
Minimum Number of Drives: 4
Data Loss: Upon a single drive failure (an M1 device), no issues. With multiple drive failures confined to one side of the stripe (the M1 devices), no issues. With the failure of both the M1 and M2 of the same pair, data loss is certain.
Example: 5TB of usable space can be achieved through 10 x 1TB of disk.
Advantages: Similar fault tolerance to RAID 5; because of striping, high I/O rates are achievable.
Disadvantages: Upon a drive failure, it becomes RAID 0.
Hot Spare: Hot Spare is a good option with this RAID type, since with a failure the data can be copied over from the surviving paired device.
Supported: Clariion, Symmetrix, Symmetrix DMX
RAID 1+0 is implemented as a mirrored array whose segments are RAID 0 arrays.


RAID 3 :

Technology: Striping Data with dedicated Parity Drive.
Performance: High
Overhead: 33% with parity (in the example below); more drives in a RAID 3 configuration will bring the overhead down.
Minimum Number of Drives: 3
Data Loss: Upon 1 drive failure, Parity will be used to rebuild data. Two drive failures in the same Raid group will cause data loss.
Example: 5TB of usable space would be achieved through 9 x 1TB disks.
Advantages: Very high Read data transfer rate. Very high Write data transfer rate. Disk failure has an insignificant impact on throughput. Low ratio of ECC (Parity) disks to data disks which converts to high efficiency.
Disadvantages: Transaction rate is limited to the speed of a single spindle
Hot Spare: A Hot Spare can be configured and invoked upon a drive failure which can be built from parity device. Upon drive replacement, hot spare can be used to rebuild the replaced drive.
Supported: Clariion


RAID 5 :

Technology: Striping Data with Distributed Parity, Block Interleaved Distributed Parity
Performance: Medium
Overhead: Roughly 17% in our example (1 parity drive out of 6); with additional drives in the RAID group you can bring the overhead down further.
Minimum Number of Drives: 3
Data Loss: With one drive failure, no data loss; with multiple drive failures in the RAID group, data loss will occur.
Example: For 5TB of usable space, we might need 6 x 1 TB drives
Advantages: It has the highest read data transaction rate and a medium write data transaction rate. A low ratio of ECC (parity) disks to data disks converts to high efficiency, along with a good aggregate transfer rate.
Disadvantages: Disk failure has a medium impact on throughput. It also has the most complex controller design. It is often difficult to rebuild in the event of a disk failure (as compared to RAID level 1), and the individual block data transfer rate is the same as a single disk. Ask the PSEs about RAID 5 issues and data loss.
Hot Spare: Similar to RAID 3, where a Hot Spare can be configured and invoked upon a drive failure which can be built from parity device. Upon drive replacement, hot spare can be used to rebuild the replaced drive.
Supported: Clariion, Symmetrix DMX code 71
RAID Level 5 also relies on parity information to provide redundancy and fault tolerance using independent data disks with distributed parity blocks. Each entire data block is written onto a data disk; parity for blocks in the same rank is generated on Writes, recorded in a distributed location and checked on Reads.
This would classify as the most popular RAID technology in use today.
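As a quick capacity check (assuming equal-sized drives in a single parity group): an N-drive RAID 5 group yields N-1 drives worth of usable space, so the parity overhead is 1/N of the raw capacity. With the 6 x 1TB drives in the example above that is 5TB usable and roughly 17% overhead; a wider group lowers the percentage further.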



RAID 6 :

Technology: Striping Data with Double Parity, Independent Data Disk with Double Parity
Performance: Medium
Overhead: 28% in our example (2 parity drives out of 7); with additional drives you can bring down the overhead.
Minimum Number of Drives: 4
Data Loss: No data loss with one or even two drive failures in the same RAID group. Very reliable.
Example: For 5 TB of usable space, we might need 7 x 1TB drives
Advantages: RAID 6 is essentially an extension of RAID level 5 which allows for additional fault tolerance by using a second independent distributed parity scheme (two-dimensional parity). Data is striped on a block level across a set of drives, just like in RAID 5, and a second set of parity is calculated and written across all the drives; RAID 6 provides for an extremely high data fault tolerance and can sustain multiple simultaneous drive failures which typically makes it a perfect solution for mission critical applications.
Disadvantages: Very poor Write performance in addition to requiring N+2 drives to implement because of two-dimensional parity scheme.
Hot Spare: A hot spare can be invoked against a drive failure and rebuilt from the parity or data drives; upon drive replacement, that hot spare can be used to rebuild the replaced drive.
Supported: Clariion Flare 26, 28, Symmetrix DMX Code 72, 73
Clariion Flare Code 26 supports RAID 6. It is also being implemented with the 72 code on the Symmetrix DMX. The simplest explanation of RAID 6 is double the parity. This allows a RAID 6 RAID group to sustain two drive failures in the RAID group while maintaining access to the data.

RAID S (3+1) :

Technology: RAID Symmetrix
Performance: High
Overhead: 25%
Minimum Number of Drives: 4
Data Loss: Upon two drive failures in the same Raid Group
Example: For 5 TB of usable space, 8 x 1 TB drives
Advantages: High Performance on Symmetrix Environment
Disadvantages: Proprietary to EMC. RAID S can be implemented on Symmetrix 8000, 5000 and 3000 Series. Known to have backend issues with director replacements, SCSI Chip replacements and backend DA replacements causing DU or offline procedures.
Hot Spare: Hot Spare can be invoked against a failed drive, data can be built from the parity or the data drives and upon a successful drive replacement, the hot spare can be used to rebuild the replaced drive.
Supported: Symmetrix 8000, 5000, 3000. With the DMX platform it is just called RAID (3+1)
EMC Symmetrix / DMX disk arrays use an alternate, proprietary method for parity RAID that they call RAID-S. Three Data Drives (X) along with One Parity device. RAID-S is proprietary to EMC but seems to be similar to RAID-5 with some performance enhancements as well as the enhancements that come from having a high-speed disk cache on the disk array.
The data protection feature is based on a Parity RAID (3+1) volume configuration (three data volumes to one parity volume).

RAID (7+1):

Technology: RAID Symmetrix
Performance: High
Overhead: 12.5%
Minimum Number of Drives: 8
Data Loss: Upon two drive failures in the same Raid Group
Example: For 5 TB of usable space, 8 x 1 TB drives (which actually yields 7 TB of usable space)
Advantages: High Performance on Symmetrix Environment
Disadvantages: Proprietary to EMC. Available only on Symmetrix DMX Series. Known to have a lot of backend issues with director replacements, backend DA replacements since you have to verify the spindle locations. Cause of concern with DU.
Hot Spare: Hot Spare can be invoked against a failed drive, data can be built from the parity or the data drives and upon a successful drive replacement, the hot spare can be used to rebuild the replaced drive.
Supported: With the DMX platform it is just called RAID (7+1). Not supported on the Symms.
EMC DMX disk arrays use an alternate, proprietary method for parity RAID that is called RAID (7+1): seven data drives along with one parity device. It is proprietary to EMC but seems to be similar to RAID-S or RAID 5, with some performance enhancements as well as the enhancements that come from having a high-speed disk cache on the disk array.
The data protection feature is based on a Parity RAID (7+1) volume configuration (seven data volumes to one parity volume).