Tuesday, December 20, 2005

Still Around

Work and now World of Warcraft have taken most of my spare time.

Please check the links to the right.

Sunday, August 21, 2005

An irony

I put together this page to help me by having all of my reference material in one place.

Then I go to work for an employer that blocks pretty much any and all internet sites (including Sun's BigAdmin).

hmmmmm

Maybe it shall morph to include other things...

It'll be a bit as I'm still getting used to the new shop, new area, and moving house.

Later.

Monday, July 25, 2005

Point-In-Time Copies: Enterprise

This uses FlashSnap, which requires a separate license to use.

Veritas Storage Foundation with FlashSnap provides three types of PITC solutions:
* Volume-Level PITC Solutions:
- Full-Sized Instant Volume Snapshots
- Space-Optimized Instant Volume Snapshots
* Filesystem-Level Solution:
- Storage Checkpoints


Preparing to Create a Full-Sized Instant Volume Snapshot: CLI

Enable FastResync:
vxsnap –g diskgroup [-b] prepare origvol

Allocate the storage using ONE of these methods:
1) Add a mirror to use as a full-sized instant snapshot:
# vxsnap –g diskgroup addmir volume
2) Use an existing ACTIVE plex in the volume
3) Create an empty volume for use as the snapshot volume:
# LEN=`vxprint –g diskgroup –F%len volume`
# DCONAME=`vxprint –g diskgroup –F%sco_name volume`
# RSZ=`vxprint –g diskgroup –F%regionsz $DCONAME`
# vxassist –g diskgroup make snap-origvol $LEN
# vxsnap –g diskgroup prepare snap-origvol regionsize=$RSZ
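
For example, putting the prepare and mirror steps together with made-up names (disk group datadg, volume datavol); the vxprint check at the end is just how I verify FastResync is on, not something from the course notes:
# vxsnap -g datadg prepare datavol
# vxsnap -g datadg addmir datavol
# vxprint -g datadg -F%fastresync datavol
The last command should print "on" once the volume is prepared.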


Creating and Managing Full-Sized Instant Volume Snapshots: CLI

Create the snapshot volume using ONE of these methods:
1) Break off an existing plex to create the new snapshot
# vxsnap –g diskgroup make source=origvol newvol=snap-origvol plex=name

2) Specify an empty volume to be used as the snapshot:
# vxsnap –g diskgroup make source=origvol snapvol=snap-origvol

Update:
# vxsnap –g diskgroup refresh snap-origvol source=origvol
# vxsnap –g diskgroup reattach snap-origvol source=origvol
# vxsnap –g diskgroup restore origvol source=snap-origvol
# vxsnap –g diskgroup dis snap-origvol

Remove:
# vxedit –g diskgroup –r rm snapvol
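
For instance, if snap-datavol already exists as a full-sized instant snapshot of datavol in datadg (all names invented), a nightly backup cycle might look roughly like this:
# vxsnap -g datadg refresh snap-datavol source=datavol
# fsck -F vxfs /dev/vx/rdsk/datadg/snap-datavol
# mount -F vxfs /dev/vx/dsk/datadg/snap-datavol /backup
... run the backup against /backup ...
# umount /backup
The refresh resynchronizes the snapshot to the current contents of datavol; the fsck is there because the snapshot of a mounted file system looks like an unclean file system.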


Note: There are a few more items that I have not transcribed, because they require additional licensing and are useful only in specific setups.

Performance Monitoring

Performance Analysis Process
1) You must understand your application workload and your performance objectives for each application workload
2) You must identify all components of the data transfer model of your storage architecture, that is, the complete I/O path of your data from application to disk
3) For each of the hardware components in your architecture, determine the theoretical performance characteristics of each component
4) Use performance monitoring and workload generation tools to measure performance for each of the components in your configuration.

Tools:
* vxstat
* vxtrace

Note: I was lazy on this one. There is too much system specific info to put in. Besides the VxVM tools, everything else is more system performance related
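
Still, for my own reference, one vxstat invocation I tend to use (datadg and datavol are placeholders; check the man page for the flags on your release):
# vxstat -g datadg -i 5 -c 10 datavol
That samples I/O statistics for datavol every 5 seconds, 10 times.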

Volume Maintenance

Topics:
Changing the Volume Layout
Managing Volume Tasks
Analyzing Configurations with Storage Expert


Changing the Volume Layout

Online relayout: Change the volume layout or layout characteristics while the volume is online.

By using online relayout, you can change the layout of an entire volume or a specific plex. Use online relayout to change the column or plex layout to or from:
* Concatenated
* Striped
* RAID-5
* Striped mirrored
* Concatenated mirrored


Online Relayout Notes
* You can reverse online relayout at any time
* Some layout transformations can cause a slight increase or decrease in the volume length due to subdisk alignment policies. If volume length increases during relayout, VxVM resizes the file system using vxresize.
* Relayout does not change log plexes
* You cannot:
- Create a snapshot during relayout
- Change the number of mirrors during relayout
- Perform multiple relayouts at the same time
- Perform relayout on a volume with a sparse plex


Changing the Layout: VEA

Highlight a volume and select Actions -> Change Layout


Changing the Layout: CLI

# vxassist relayout
- Used for nonlayered relayout operations
- Used for changing layout characteristics, such as stripe width and number of columns

# vxassist convert
- Changes nonlayered volumes to layered volumes, and vice versa

Note: vxassist relayout cannot create a nonlayered mirrored volume in a single step. The command always creates a layered mirrored volume even if you specify a nonlayered mirrored layout. Use vxassist convert to convert the resulting layered volume into a nonlayered volume.


vxassist –g diskgroup relayout volume|plex layout=layout ncol=[+|-]n stripeunit=size

To change to a striped layout:
# vxassist –g datadg relayout datavol layout=stripe ncol=2

To add a column to striped volume datavol:
# vxassist –g datadg relayout datavol ncol=+1

To remove a column from datavol:
# vxassist –g datadg relayout datavol ncol=-1

To change stripe unit size and number of columns:
# vxassist –g datadg relayout datavol stripeunit=32k ncol=5

To change mirrored layouts to RAID-5, specify the plex to be converted (instead of the volume):
# vxassist –g datadg relayout datavol01-01 layout=raid5 stripeunit=32k ncol=3

To convert the striped mirrored volume datavol to a layered stripe-mirror layout:
# vxassist –g datadg convert datavol layout=stripe-mirror


Managing Volume Tasks: CLI

Use the vxtask command to:
- Display task information
- Pause, continue, and abort tasks
- Modify the progress rate of a task
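
A few vxtask invocations worth remembering (taskid comes from vxtask list; the set slow= form is how I recall throttling works, so verify it against the man page for your version):
# vxtask list
# vxtask pause taskid
# vxtask resume taskid
# vxtask abort taskid
# vxtask set slow=100 taskid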


The vxrelayout command can be used to display the status of, reverse, or start a relayout operation:
# vxrelayout –g diskgroup status|reverse|start volume


What is Storage Expert?
Veritas Storage Expert (VxSE) is a CLI utility that provides volume configuration analysis.

Storage Expert:
* Analyzes configurations based on a set of “rules” or VxVM “best practices”
* Produces a report of results in ASCII format
* Provides recommendations, but does not launch any administrative operations


Running Storage Expert Rules
* VxVM and VEA must be installed
* Rules are located in /opt/VRTS/vxse/vxvm
* Syntax:
rule_name options info|list|check|run
* In the syntax:
- info
Displays rule description
- list
Displays attributes of rule
- check
Displays default values
- run
Runs the rule
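
For example, to read about and then run one rule against a disk group (vxse_redundancy is one of the shipped rules, as I recall; datadg is a placeholder):
# /opt/VRTS/vxse/vxvm/vxse_redundancy -g datadg info
# /opt/VRTS/vxse/vxvm/vxse_redundancy -g datadg run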

Troubleshooting the Boot Process

Topics:
Operating System Boot Processes
Troubleshooting the Boot Process
Recovering the Boot Disk Group


Files Used in the Boot Process

* /etc/system (Sun only)
Contains VxVM entries
* /etc/vfstab (Sun), /etc/fstab (HP-UX and Linux)
Maps mount points to devices
* /etc/vx/volboot
Contains disk ownership data
* /etc/vx/licenses/lic, /etc/vx/elm
Contains license files
* /var/vxvm/tempdb
Stores data about diskgroups
* /etc/vx/reconfig.d/state.d/install-db
Indicates VxVM is not initialized
* /VXVM#.#.#-UPGRADE/.start_runed
Indicates that the VxVM upgrade is not complete


Troubleshooting: The Boot Device Cannot be Opened

Possible causes:
* Boot disk is not powered on
* Boot disk has failed
* SCSI bus is not terminated
* Controller failure has occurred
* Disk is failing and locking the bus

To resolve:
* Check SCSI bus connections:
- On Sun, use probe-scsi-all
- On Linux, use non-fast or verbose boot in the BIOS
* Boot from an alternate boot disk


Troubleshooting: Startup Scripts Exit Without Initialization

Possible causes:
Either one of the following files is present:

* /etc/vx/reconfig.d/state.d/install-db
This file indicates that VxVM software packages have been added, but VxVM has not been initialized with vxinstall. Therefore, vxconfigd is not started.

* /VXVM#.#.#-UPGRADE/.start_runed
This file indicates that a VxVM upgrade has been started but not completed. Therefore, vxconfigd is not started.


Troubleshooting: Conflicting Host ID in volboot

The volboot file contains the host ID that was on the system when you installed VxVM.

If you manually edit this file, VxVM does not function.

* To change the hostname in the volboot file:
vxdctl hostid newhostname

* To re-create the volboot file:
vxdctl init [hostname]


Troubleshooting: License Problems (keys corrupted, missing, or expired)

Save /etc/vx/licenses/lic/* to a backup device. If the license files are removed or corrupted, you can copy the files back.

License problems can occur if:
* The /etc/vx/licenses/lic files become corrupted
* An evaluation license was installed and not updated to a full license.

To resolve license issues:
* vxlicinst (installs a new license)
* vxiod set 10 (starts the I/O daemons)
* vxconfigd (starts the configuration daemon)


Troubleshooting: Missing /var/vxvm/tempdb (missing, misnamed, or corrupted)

This directory stores configuration information about imported diskgroups. The contents are recreated after a reboot. If this directory is missing, misnamed, or corrupted, vxconfigd does not start.

To remove and recreate this directory:
# vxconfigd –k –x cleartempdir


Troubleshooting: Debugging with vxconfigd

Running vxconfigd in debug mode:
# vxconfigd –k –m enable –x debug_level
* debug_level = 0 – No debugging (default)
* debug_level = 9 – Highest debug level

Some debugging options:
* -x log
Logs all console output to the /var/vxvm/vxconfigd.log file
* -x logfile=name
Use the specified log file instead
* -x syslog
Direct all console output through the syslog() interface
* -x timestamp
Attach a timestamp to all messages
* -x tracefile=name
Log all possible tracing information in the given file.
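
For example, to restart vxconfigd at a moderate debug level with timestamped output sent to a file (my own sketch of how the options above combine; the path is arbitrary):
# vxconfigd -k -m enable -x 4 -x timestamp -x logfile=/var/tmp/vxconfigd.debug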


Troubleshooting: Invalid or Missing /etc/system File (Sun only)

The /etc/system file is used in the kernel initialization and /sbin/init phases of the boot process.

This file is a standard Sun system file to which VxVM adds entries to:
* Specify drivers to be loaded
* Specify root encapsulation

If the file or these entries are missing, you encounter problems in the boot process.

When booting from an alternate system file, do not go past maintenance mode. Boot up on the alternate system file, fix the VxVM problem, and then reboot with the original system file.

ok> boot –a

When prompted for the name of the system file, specify /etc/hosts. You will get many errors, but the boot gets far enough for you to fix the original system file.


Temporarily Importing the Boot Diskgroup

Through a temporary import, you can bring the boot diskgroup to a working system and repair it there:
1) Obtain the diskgroup ID (dgid) of the boot diskgroup:
# vxdisk –s list
2) On the importing host, import and temporarily rename the diskgroup:
# vxdg –tC –n tmpdg import dgid
3) Fix and replace the files and volumes as necessary.
4) Deport the diskgroup back to the original host:
# vxdg –h orig_hostname deport tmpdg

Encapsulation and Rootability

Topics:
Placing the Boot Disk Under VxVM Control
Creating an Alternate Boot Disk
Removing the Boot Disk from VxVM Control
Upgrading to a New VxVM Version


What is Encapsulation?
Encapsulation is a method of placing a disk under VxVM control in which the data that exists on the disk is preserved. Encapsulation converts existing partitions into volumes, which provides continued access to the data on the disk after a reboot. After a disk has been encapsulated, the disk is handled in the same way as an initialized disk.

Requirements:
One free partition (for public and private region)
s2 slice that represents the full disk
2048 sectors free at beginning or end of disk for the private region


What is Rootability?
Rootability, or root encapsulation, is the process of placing the root file system, swap device, and other file systems on the boot disk under VxVM control. VxVM converts existing partitions of the boot disk into VxVM volumes. The system can then mount the standard boot disk file systems (that is, /, /usr, and so on) from volumes instead of disk partitions.

Requirements are the same as for data disk encapsulation, but the private region can be created from swap space.


Why Encapsulate the Boot Disk?
You should encapsulate the boot disk only if you plan to mirror the boot disk.

Benefits of mirroring the boot disk:
1) Enables high availability
2) Fixes bad blocks automatically (for reads)
3) Improves performance (ed. I don’t buy this point)

There is no benefit to boot disk encapsulation for its own sake. You should not encapsulate the boot disk if you do not plan to mirror the boot disk.


Limitations of Boot Disk Encapsulation
Encapsulating the boot disk adds steps to OS upgrades.

A system cannot boot from a boot disk that spans multiple devices

You should never expand or change the layout of boot volumes. No volume associated with an encapsulated boot disk (rootvol, usr, var, opt, swapvol, and so on) should be expanded or shrunk, because these volumes map to a physical underlying partition on the disk and must be contiguous.

If you attempt to expand these volumes, the system can become unbootable if it becomes necessary to revert back to slices in order to boot the system. Expanding these volumes can also prevent a successful OS upgrade, and a fresh install can be required.

Note: regarding Solaris, the upgrade_start script may fail.


Solaris File System Requirements
For root, usr, var, and opt volumes:
1) Use UFS file systems (VxFS is not available until later in the boot process)
2) Use contiguous disk space. (Volumes cannot use striped, RAID-5, concatenated mirrored, or striped mirrored layouts)
3) Do not use dirty region logging on the system volumes. (You can use DRL for the opt and var volumes)

For swap volumes:
1) The first swap volume must be contiguous, and, therefore, cannot use striped or layered layouts.
2) Other swap volumes can be noncontiguous and can use any layout. However, there is an implied 2Gb limit of usable swap space per device for 32-bit operating systems.


Before Encapsulating the Boot Disk
Plan your rootability configuration. bootdg is a system-wide reserved disk group name that is an alias for the disk group that contains the volumes that are used to boot the system. When you place the boot disk under VxVM control, VxVM sets bootdg to the appropriate disk group. You should never attempt to change the assigned value of bootdg; doing so may render your system unbootable. An example configuration would be to place the boot disk into a disk group named sysdg, and add at least two more disks to the disk group: one for a boot disk mirror and one as a spare disk. VxVM will set bootdg to sysdg.

For Solaris, enable boot disk aliases: eeprom “use-nvramrc?=true”

Record the layout of the partitions on the unencapsulated boot disk to save for future use.
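
For example, on Solaris I save the VTOC and the eeprom settings somewhere safe before encapsulating (c0t0d0 is a placeholder for the boot device; the output paths are just my habit):
# prtvtoc /dev/rdsk/c0t0d0s2 > /var/tmp/bootdisk.vtoc
# eeprom > /var/tmp/eeprom.out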


Encapsulating the Boot Disk

vxdiskadm:
“Encapsulate one or more disks”

Follow the prompts by specifying:
1) Name of the device to add
2) Name of the disk group to which the disk will be added
3) Sliced disk format (The boot disk cannot be a CDS disk)

vxencap:
/etc/vx/bin/vxencap –g diskgroup accessname
/etc/init.d/vxvm-reconfig accessname


After Boot Disk Encapsulation
You can view operating system-specific files to better understand the encapsulation process.

Solaris:
1) VTOC (prtvtoc device)
2) /etc/system
3) /etc/vfstab

Linux:
/etc/fstab


Alternate Boot Disk: Requirements
An alternate boot disk is a mirror of the entire boot disk. It preserves the boot block in case the primary boot disk fails

Creating an alternate boot disk requires that:
1) The boot disk be encapsulated by VxVM
2) Another disk be available with enough space to contain all of the boot disk partitions
3) All disks be in the boot disk group

The root mirror places the private region at the beginning of the disk. The remaining partitions are placed after the private region.


Creating an Alternate Boot Disk

VEA:
1) Highlight the boot disk, and select Actions -> Mirror Disk
2) Specify the target disk to use as the alternate boot disk.

vxdiskadm:
“Mirror volumes on a disk”

CLI:
To mirror the root volume only:
vxrootmir alternate_disk

To mirror all other unmirrored, concatenated volumes on the boot disk to the alternate disk:
vxmirror –g diskgroup boot_disk alternate_disk

To mirror other volumes to the boot disk or other disks:
vxassist –g diskgroup mirror homevol alternate_disk

On Solaris, to set up system boot information on a VxVM disk:
vxbootsetup
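
Putting the CLI steps together with made-up media names (the boot disk is rootdisk in bootdg, and rootmir is the disk I want as the alternate); this is a sketch, not a captured session:
# vxrootmir rootmir
# vxmirror -g bootdg rootdisk rootmir
# vxbootsetup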


Booting from an Alternate Mirror (Solaris)
1) Set the eeprom variable use-nvramrc? to true:
ok> setenv use-nvramrc? true
ok> reset
This variable must be set to true to enable the use of alternate boot disks.

2) Check for available boot disk aliases:
ok> devalias
Output displays the name of the boot disk and available mirrors.

3) Boot from an available boot disk alias:
ok> boot vx-diskname


Unencapsulating a Boot Disk
To unencapsulate a boot disk, use vxunroot

Requirements: Remove all but one plex of rootvol, swapvol, usr, var, opt, and home.

Use vxunroot when you need to:
* Boot from physical system partitions
* Change the size or location of the private region on the boot disk.
* Upgrade both the OS and VxVM

Do not use vxunroot if you are only upgrading VxVM packages, including the VEA package.


The vxunroot Command
1) Ensure that the boot disk volumes only have one plex each:
vxprint –ht rootvol swapvol usr var

2) If boot disk volumes have more than one plex each, remove the unnecessary plexes:
vxplex –g diskgroup –o rm dis plex_name

3) Run the vxunroot utility:
vxunroot


Notes on Upgrading Storage Foundation
* Determine what you are upgrading: Storage Foundation, VxVM only, both VxVM and the OS, or the OS only.
* Follow documentation for Storage Foundation and the OS
* Install appropriate patches
* A license is not required to upgrade VxVM only
* Your existing VxVM configuration is retained
* Upgrading VxVM does not upgrade existing disk group or file system versions. You may need to manually upgrade each after a VxVM upgrade.
* Get the latest upgrade information from the support.veritas.com website
* Backup data before upgrading (Note: copy /kernel/drv/sd.conf to a safe location)


Upgrading Storage Foundation
1) Unmount any mounted VxFS file systems
2) Reboot the system to single-user mode
3) When the system comes up, mount the /opt and /var filesystems
4) Mount the Veritas cdrom
5) Invoke the common installer, run the install command:
cd /cdrom/cdrom0
./installer
6) Answer the prompts appropriately


Upgrading VxVM Only

Methods:
* VxVM installation script (installvm)
* Manual package upgrade
* VxVM upgrade scripts (Solaris only)
- upgrade_start
- upgrade_finish

Note: on Sun, the upgrade_finish script changes /etc/vfstab to point to /dev/vx/bootdg/… even if bootdg doesn’t exist. Remember to change it back by hand before reboot. Unless you like booting off of CD in order to change the vfstab by hand.


Upgrading VxVM Only: installvm
* Invoke the installvm script and follow the instructions when prompted
* If you are performing a multihost installation, you can avoid copying packages to each system. For example, to ensure that packages are not copied remotely when using the NFS mountable filesystem $NFS_FS:
# cd /cdrom/CD_NAME
# cp –r * $NFS_FS
# cd volume_manager
# ./installvm –pkgpath $NFS_FS/volume_manager/pkgs –patchpath $NFS_FS/volume_manager/patches
* This copies the files to an NFS mounted file system that is connected to all of the systems on which you want to install the software.


Upgrading VxVM Only: Manual Packages Upgrade
1) Bring the system to single-user mode
2) Stop the vxconfigd and vxiod daemons:
# vxdctl stop
# vxiod –f set 0
3) Remove the VMSA software packages VRTSvmsa (optional)
4) Add the new VxVM packages using OS specific package installation commands
5) Perform a reconfiguration reboot (i.e. on Sun: reboot -- -r)
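
On Solaris, steps 3 and 4 above might look something like this (package names and the CD path are from the Storage Foundation media as I remember them, so verify against your release):
# pkgrm VRTSvmsa
# cd /cdrom/CD_NAME/volume_manager/pkgs
# pkgadd -d . VRTSvlic VRTSvxvm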


Scripts Used in Upgrades: Sun only
The upgrade_start and upgrade_finish scripts preserve your VxVM configuration

To check for potential problems before an upgrade, run:
# upgrade_start –check

Note: on Sun: save off a copy of your /etc/vfstab and /kernel/drv/sd.conf. The upgrade_finish will screw both up. Your /etc/vfstab will point to bootdg even if you don’t use that diskgroup name. Also, your sd.conf will be messed up if you use SAN and you’ll not see all of your disks. The vfstab can be corrected by hand, but you’ll need to copy the sd.conf back to your system to correct the “fix”.
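
In practice that means something like this before touching the upgrade scripts (the destination names are just my habit; anywhere outside the areas the scripts touch is fine):
# cp -p /etc/vfstab /var/tmp/vfstab.pre-upgrade
# cp -p /kernel/drv/sd.conf /var/tmp/sd.conf.pre-upgrade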


Upgrading VxVM Only: Upgrade Scripts: Sun Only
1) Mount the Veritas cdrom
2) Run upgrade_start –check
3) Run upgrade_start
4) Reboot to single-user
5) mount /opt if not part of the root filesystem
6) Remove the VxVM package and other related VxVM packages with pkgrm
7) Reboot the system to multiuser mode
8) Verify that /opt is mounted, and then install the new VxVM packages with pkgadd
9) Run the upgrade_finish script


Upgrading Solaris Only
To prepare:
1) Detach any boot disk mirrors
2) Check alignment of boot disk volumes
3) Ensure that /opt is not a symbolic link

To upgrade:
1) Bring the system to single-user mode
2) Mount the Veritas cdrom
3) Run upgrade_start -check
4) Run upgrade_start
5) Reboot to single-user mode
6) Upgrade the OS
7) Reboot to single-user mode
8) Mount the Veritas cdrom
9) Run upgrade_finish
10) Reboot to multiuser mode


Upgrading VxVM and Solaris
To prepare:
1) Install license keys if needed
2) Detach any boot disk mirrors
3) Check alignment of boot disk volumes
4) Ensure that /opt is not a symbolic link

To remove old version:
1) Bring system to single-user mode
2) Mount the Veritas cdrom
3) Run upgrade_start –check
4) Run upgrade_start
5) Reboot to single-user mode
6) Remove VxVM packages

To install new version:
1) Reboot to single-user mode
2) Upgrade OS
3) Reboot to single-user mode
4) Mount Veritas cdrom
5) Add new licensing and VxVM packages
6) Run upgrade_finish
7) Reboot to multiuser mode
8) Add additional packages


After Upgrading
1) Confirm that key VxVM processes (vxconfigd, vxnotify, vxcache, vxrelocd, vxconfigbackupd, and vxesd) are running:
# ps -ef | grep vx
2) Verify the existence of the boot disk’s volumes:
# vxprint –ht


Upgrading VxFS
1) Unmount any mounted Veritas file systems
2) Remove old VxFS packages
3) Comment out VxFS filesystems in /etc/vfstab, then reboot
4) Upgrade the OS if necessary for VxFS version compatibility.
5) Add the new VxFS packages
6) Undo any changes made to /etc/vfstab
7) Reboot

Sunday, July 10, 2005

Jobs and NC

Looks like I'll be starting the first of August down there.

I'm getting too old for this job hopping stuff.

Admiral James B. Stockdale - RIP

Godspeed, Admiral Stockdale

On London

A few round-ups on London

Terrorist Bombs in London

Terror in London

Thursday, July 07, 2005

Plex Problems and Solutions

Topics:
Displaying State Information for VxVM Objects
Interpreting Plex States
Interpreting Volume States
Interpreting Kernel States
Resolving Plex Problems
Analyzing Plex Problems


Identifying Plex Problems

To identify and solve plex problems, use the following information:
- Plex states
- Volume states
- Plex kernel states
- Volume kernel states
- Object condition flags

Commands to display plex, volume, and kernel states:
vxprint –g diskgroup –ht [volume_name]
vxinfo –p –g diskgroup [volume_name]


Plex States and Condition Flags

EMPTY: indicates that you have not yet defined which plex has the good data (CLEAN), and which plex does not have the good data (STALE).

CLEAN: is normal and indicates that the plex has a copy of the data that represents the volume. CLEAN also means that the volume is not started and is not currently able to handle I/O (by the admin’s control).

ACTIVE: is the same as CLEAN, but the volume is or was started, and the volume is or was able to perform I/O.

SNAPDONE: is the same as ACTIVE or CLEAN, but is a plex that has been synchronized with the volume as a result of a “vxassist snapstart” operation. After a reboot or a manual start of the volume, a plex in the SNAPDONE state is removed along with its subdisks.

STALE: indicates that VxVM has reason to believe that the data in the plex is not synchronized with the data in the CLEAN plexes. This state is usually caused by taking the plex offline or by a disk failure.

SNAPATT: indicates that the object is a snapshot that is currently being synchronized but does not yet have a complete copy of the data.

OFFLINE: indicates that the administrator has issued the “vxmend off” command on the plex. When the admin brings the plex back online using the “vxmend on” command, the plex changes to the STALE state.

TEMP: the TEMP state flags (TEMP, TEMPRM, TEMPRMSD) usually indicate that the data was never a copy of the volume’s data, and you should not use these plexes. These temporary states indicate that the plex is currently involved in a synchronization operation with the volume.

NODEVICE: indicates that the disk drive below the plex has failed.

REMOVED: has the same meaning as NODEVICE, but the system admin has requested that the device appear as failed.

IOFAIL: is similar to NODEVICE, but it indicates that an unrecoverable failure occurred on the device, and VxVM has not yet verified whether the disk is actually bad. Note: I/O to both the public and the private regions must fail to change the state from IOFAIL to NODEVICE.

RECOVER: is set on a plex when two conditions are met:
1) A failed disk has been fixed (by using vxreattach or the vxdiskadm option, “Replace a failed or removed disk”).
2) The plex was in the ACTIVE state prior to the failure.


Volume States

EMPTY, CLEAN, and ACTIVE: have the same meanings as they do for plexes.

NEEDSYNC: is the same as SYNC, but the internal read thread has not been started. This state exists so that volumes that use the same disk are not synchronized at the same time, and head thrashing is avoided.

SYNC: indicates that the plexes are involved in read-writeback or RAID-5 parity synchronization:

- Each time that a read occurs from a plex, it is written back to all the other plexes that are in the ACTIVE state.

- An internal read thread is started to read the entire volume (or, after a system crash, only the dirty regions if dirty region logging (DRL) is being used), forcing the data to be synchronized completely. On a RAID-5 volume, the presence of a RAID-5 log speeds up a SYNC operation.

NODEVICE: indicates that none of the plexes have currently accessible disk devices underneath the volume.


Kernel States
Kernel states represent VxVM’s ability to transfer I/O to the volume or plex.

ENABLED: The object can transfer both system I/O and user I/O
DETACHED: The object can transfer system I/O, but not user I/O (maintenance mode)
DISABLED: No I/O can be transferred.


Solving Plex Problems

Commands used to fix plex problems:
vxrecover
vxvol init
vxvol –f start
vxmend fix
vxmend off|on


The vxrecover Command

vxrecover –g diskgroup –s [volume_name]
- Recovers and resynchronizes all plexes in a started volume.
- Runs “vxvol start” and “vxplex att” commands (and sometimes “vxvol resync”)
- Works in normal situations
- Resynchronizes all volumes that need recovery if a volume name is not included.


Initializing a Volume’s Plexes

vxvol –g diskgroup init init_type volume_name [plexes]

init_type:
zero: sets all plexes to a value of 0, which means that all bytes are null
active: sets all plexes to active and enables the volume and its plexes
clean: If you know that one of the plexes has the correct data, you can select that particular plex to represent the data of the volume. In this case, all other plexes will copy their content from the clean plex when the volume is started.
enable: use this option to temporarily enable the volume so that data can be loaded onto it to make the plexes consistent.
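
For example, if I know plex datavol-01 holds the good data (names invented), I would declare it clean and let the other plexes resynchronize from it when the volume starts:
# vxvol -g datadg init clean datavol datavol-01
# vxrecover -g datadg -s datavol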


The “vxvol start” Command

vxvol –g diskgroup –f start volume_name

- This command ignores problems with the volume and starts the volume
- Only use this command on nonredundant volumes. If used on redundant volumes, data can be corrupted unless all mirrors have the same data.


The vxmend Command

vxmend –g diskgroup fix stale|clean|active|empty plex


vxmend fix stale

vxmend –g diskgroup fix stale plex
- This command changes a CLEAN or ACTIVE (RECOVER) state to STALE
- The volume that the plex is associated with must be in DISABLED mode.
- Use this command as an intermediate step to the final destination for the plex state.


vxmend fix clean

vxmend –g diskgroup fix clean plex
- This command changes a STALE plex to CLEAN
- Only run this command if:
1) the associated volume is in the DISABLED state
2) There is no other plex that has a state of clean
3) All of the plexes are in the STALE or OFFLINE states.
- After you change the state of a plex to clean, recover the volume by using:
vxrecover –s


vxmend fix active

vxmend –g diskgroup fix active plex
- This command changes a STALE plex to ACTIVE
- The volume that the plex is associated with must be in DISABLED mode
When you run “vxvol start”:
ACTIVE plexes are synchronized (SYNC) together
RECOVER plexes are set to STALE and are synchronized from the ACTIVE plexes.


vxmend fix empty

vxmend –g diskgroup fix empty volume_name
- Sets all plexes and the volume to the EMPTY state
- Requires the volume to be in DISABLED mode
- Runs on the volume, not on a plex
- Returns to the same state as bottom-up creation


vxmend off|on
When analyzing plexes, you can temporarily take plexes offline while validating the data in another plex.
- To take a plex offline, use the command:
vxmend –g diskgroup off plex
- To take the plex out of the offline state, use:
vxmend –g diskgroup on plex


Fixing Layered Volumes
- For layered volumes, vxmend functions the same as with nonlayered volumes.
- When starting the volume, use either:
1) “vxrecover –s” – starts both the top-level volume and the subvolumes
2) “vxvol start” with VxVM 4.0 and later, “vxvol start” completely starts (and stops) layered volumes.


Example: If the Good Plex Is Known
- For plex vol01-01, the disk was turned off and back on and still has data.
- Plex vol01-02 has been offline for several hours.

To recover:
1) Set all plexes to STALE (vxmend fix stale vol01-01)
2) Set the good plex to CLEAN (vxmend fix clean vol01-01)
3) Run “vxrecover –s vol01”


Example: If the Good Plex Is Not Known
The volume is disabled and not startable, and you do not know what happened. There are no CLEAN plexes.

To resolve:
1) Take all but one plex offline and set that plex to CLEAN (vxmend off vol01-02; vxmend fix clean vol01-01)
2) Run “vxrecover –s”
3) Verify data on the volume
4) Run “vxvol stop”
5) Repeat for each plex until you identify the plex with the good data

Wednesday, July 06, 2005

Disk Problems and Solutions

Topics:
Identifying I/O Failure
Disk Failure Types
Resolving Permanent Disk Failure
Resolving Temporary Disk Failure
Resolving Intermittent Disk Failure


Disk Failure Handling

Follow the path:
The OS detects an error and informs vxconfigd. Is the volume redundant?
* No: Display error messages, detach the disk from the disk group, and change the volume's kernel state.
* Yes: Is the private region accessible?
- No: Mark the disk as FAILED, detach the disk, mark the affected plex with NODEVICE, and relocate redundant volumes.
- Yes: Mark the disk as FAILING, mark the affected plex with IOFAIL, and relocate subdisks.


Permanent Disk Failure: Volume States After the Failure
“vxprint –htg diskgroup” will list the device as NODEVICE.


Permanent Disk Failure: Resolving
1) Replace the disk
2) Have VxVM scan the devices: vxdctl enable
3) Initialize the new drive: vxdisksetup –i device_name
4) Attach the disk media name (datadg02) to the new drive:
vxdg –g diskgroup –k adddisk datadg02=device_name
5) Recover the redundant volumes: vxrecover
6) Start any nonredundant volumes: vxvol –g diskgroup –f start volume_name
7) Restore data of any nonredundant volumes from backup.
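
With made-up names, replacing failed disk datadg02 with new device c1t2d0 in datadg goes roughly like this:
# vxdctl enable
# vxdisksetup -i c1t2d0
# vxdg -g datadg -k adddisk datadg02=c1t2d0
# vxrecover -g datadg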


Temporary Disk Failure: Volume States After Reattaching Disk
“vxprint –htg diskgroup” will list the device as DISABLED IOFAIL


Temporary Disk Failure: Resolving
1) Fix the failure
2) Ensure that the OS recognizes the device
3) Force VxVM to reread all drives: vxdctl enable
4) Reattach the disk media name to the disk access name: vxreattach
5) Recover the redundant volumes: vxrecover
6) Start any nonredundant volumes: vxvol –g diskgroup –f start volume_name
7) Check data for consistency, for example:
fsck /dev/vx/rdsk/diskgroup/volume_name


Intermittent Disk Failure: Resolving
1) If any volumes on the failing disk are not redundant, attempt to mirror those volumes:
- If you can mirror the volumes, continue with the procedure for redundant volumes.
- If you cannot mirror the volume, prepare for backup and restore.

2) If the volume is redundant
- Prevent read I/O from accessing the failing disk by changing the volume read policy
- Remove the failing disk
- Set the volume read policy back to the original policy
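
For the redundant case, the read policy change described above might look like this (datavol-02 being the plex on a healthy disk; all names invented):
# vxvol -g datadg rdpol prefer datavol datavol-02
... remove and replace the failing disk ...
# vxvol -g datadg rdpol select datavol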


Forced Removal
To forcibly remove a disk and not evacuate the data:
1) Use the vxdiskadm option, “Remove a disk for replacement.” VxVM handles the drive as if it has already failed.
2) Use the vxdiskadm option, “Replace a failed or removed disk.”

CLI:
vxdg –k –g diskgroup rmdisk disk_name
vxdisksetup –if new_device
vxdg –k –g diskgroup adddisk disk_name=new_device


Identifying a Degraded Plex of a RAID-5 Volume
“vxprint –htg diskgroup” will list the device as NODEVICE and the subdisk as NDEV.

The following commands will also indicate degradation:
vxprint –l volume_name
vxinfo –p –g diskgroup

Monday, July 04, 2005

Managing Devices Within the VxVM Architecture

Topics:
Managing Components in the VxVM Architecture
Discovering Disk Devices
Administering the Device Discovery Layer
Dynamic Multipathing
Preventing Multipathing for a Device
Managing DMP
Controlling Automatic Restore Processes


VxVM Daemons

vxconfigd – The VxVM configuration daemon maintains disk and group configurations, communicates configuration changes to the kernel, and modifies configuration information stored on disks. When a system is booted, the command “vxdctl enable” is automatically executed to start vxconfigd. VxVM reads the /etc/vx/volboot file to determine disk ownership and automatically imports disk groups owned by the host.

vxiod – The VxVM I/O daemon provides extended I/O operations without blocking calling processes. Several vxiod daemons are usually started at boot time, and they continue to run at all times.

vxrelocd – is the hot-relocation daemon that monitors events that affect data redundancy.


VxVM Configuration Database
- Contains all disk, volume, plex, and subdisk configuration records
- Is stored in the private region of a VxVM disk
- Is replicated to maintain a copy on multiple disks in a disk group
- Is updated by the vxconfigd process


Displaying VxVM Configuration Database Information
vxdg list diskgroup

Displaying Disk Header Information
vxdisk –g diskgroup list disk_name


VxVM Disk Types and Formats
- auto:cdsdisk
- auto:simple
- auto:sliced
- auto:none

simple – Public and private regions are contiguous on the same partition
sliced – Public and private regions are on separate partitions.
nopriv – No private region.


VxVM Configuration Daemon

vxconfigd:
- Maintains the configuration database
- Synchronizes changes between multiple requests, based on a database transaction model:
* All utilities make changes through vxconfigd
* Utilities identify resources needed at the start of the transaction.
* Transactions are serialized, as needed.
* Changes are reflected in all copies immediately
- Does not interfere with access to data on disk
- Must be running for changes to be made to the configuration database.

If vxconfigd is not running, VxVM operates, but configuration changes are not allowed and queries of the database are not possible.

- vxconfigd reads the kernel log to determine current states of VxVM components and updates the configuration database.
- Kernel logs are updated even if vxconfigd is not running. For example, upon startup, vxconfigd reads the kernel log and determines that a volume needs to be resynchronized.

- vxconfigd modes:
enabled – normal operating state
disabled – Most operations not allowed
booted – Part of the normal system startup while acquiring the boot disk group


The vxdctl Command
Use vxdctl to control vxconfigd

vxdctl mode – Displays vxconfigd status
vxdctl enable – Enables vxconfigd
vxdctl disable – Disables vxconfigd
vxdctl stop – Stops vxconfigd
vxdctl –k stop – Sends a kill -9
vxconfigd – Starts vxconfigd
vxdctl license – Checks licensing
vxdctl support – Displays version information


The volboot File

/etc/vx/volboot contains:
- The host ID (this is really the hostname) that is used by VxVM to establish ownership of physical disks
- The values of defaultdg and bootdg if these values were set by the user.

Caution: Do not edit volboot, or its checksum is invalidated.

To display the contents of volboot:
vxdctl list

To change the host ID in volboot:
vxdctl hostid new_hostid
vxdctl enable

To re-create volboot:
vxdctl init [hostid]


Device Discovery Layer (DDL)
Device discovery is the process of locating and identifying disks attached to a host

Prior to VxVM 3.2, device discovery occurred at boot time. With VxVM 3.2 and later, device discovery occurs automatically whenever you add a new disk array.


Adding Disk Array Support
To add support for a new type of disk array, add vendor-supplied libraries.

Then scan for new devices:
vxdctl enable

This invokes vxconfigd to scan for all disk devices, updates the device list, and reconfigures DMP


Partial Device Discovery

Discover newly added devices previously unknown to VxVM:
vxdisk scandisks new

Discover fabric devices:
vxdisk scandisks fabric

Scan for the specific devices:
vxdisk scandisks device=c1t1d0,c2t2d0

Scan for all devices except those that are listed:
vxdisk scandisks !device=c1t1d0,c2t2d0

Scan for devices that are connected to logical or physical controllers:
vxdisk scandisks ctlr=c1,c2

Discover devices that are connected to the specified physical controller:
vxdisk scandisks pctlr=/pci@1f,4000/scsi@3/


Administering DDL

To add/remove/list support for disk arrays:
vxddladm listsupport
vxddladm excludearray libname=library
vxddladm excludearray vid=ACME pid=X1
vxddladm includearray libname=library
vxddladm includearray vid=ACME pid=X1
vxddladm listexclude

To add/remove/list support for JBODs:
vxddladm listjbod
vxddladm addjbod vid=vendor_ID pid=prod_ID
vxddladm rmjbod vid=vendor_ID pid=prod_ID

To add a foreign device:
vxddladm addforeign blockdir=path chardir=path


Dynamic Multipathing (DMP)
Dynamic multipathing is a method that VxVM uses to manage two or more hardware paths directing I/O to a single drive. VxVM arbitrarily selects one of the two names and creates a single device entry, and then transfers data across both paths to spread the I/O.

VxVM detects multipath systems by using the Universal World-Wide-Device Identifiers (WWD IDs) and manages multipath targets, such as disk arrays, which define policies for using more than one path.


Types of Multiported Arrays
A multiported disk array is an array that can be connected to host systems through multiple paths. The two basic types of multiported disk arrays are:
1) active/active disk arrays
2) active/passive disk arrays


Preventing DMP for a Device
If an array cannot support DMP, you can prevent multipathing for the device by using vxdiskadm:
“Prevent multipathing/Suppress devices from VxVM’s view”

Warning: If you do not prevent DMP for unsupported arrays:
- Commands like “vxdisk list” show duplicate sets of disks as ONLINE, even though only one path is used for I/O.
- Disk failures can be represented incorrectly.

- The option “Suppress all paths through a controller from VxVM’s view” continues to allow the I/O to use both paths internally. After a reboot, “vxdisk list” does not show the suppressed disks.

- “Prevent multipathing of all disks on a controller by VxVM” does not allow the I/O to use internal multipathing. The “vxdisk list” command shows all disks as ONLINE. This option has no effect on arrays that are performing dynamic multipathing or that do not support VxVM DMP.


Listing Controllers
vxdmpadm listctlr all


Displaying Paths
vxdmpadm getsubpaths ctlr=controller_name

To display paths connected to a LUN:
vxdmpadm getsubpaths dmpnodename=node_name


Displaying DMP Nodes
vxdmpadm getdmpnode nodename=c3t2d1


Disabling I/O to a controller

VEA:
Select Actions -> Disable/Enable

CLI:
To disable I/O to a particular controller:
vxdmpadm disable ctlr=ctlr_name

To disable I/O to a particular enclosure:
vxdmpadm disable enclosure=enc0

To reenable I/O to a particular controller:
vxdmpadm enable ctlr=ctlr_name


Displaying I/O Statistics for Paths

Enable the gathering of statistics:
vxdmpadm iostat start [memory=size]

Reset the I/O counters to zero:
vxdmpadm iostat reset

Display the accumulated statistics for all paths:
vxdmpadm iostat show all


Setting I/O Policies and Path Attributes

To change the I/O policy for balancing the I/O load across multiple paths to a disk array or enclosure:
vxdmpadm setattr enclosure enc_name iopolicy=policy

Policies:
adaptive – automatically determines the paths that have the least delay
balanced – (default) takes the track cache into consideration when balancing I/O across paths.
minimumq – sends I/O on paths that have the minimum number of I/O requests in the queue.
priority – assigns the path with the highest load carrying capacity as the priority path.
round-robin – sets a simple round-robin policy for I/O
singleactive – channels I/O through the single active path

To set path attributes for a disk array or enclosure:
vxdmpadm setattr path path_name pathtype=type

Type:
active – changes a standby path to active
nomanual – restores the original primary or secondary attributes of a path
nopreferred – restores the normal priority of the path
preferred [priority=N] – specifies a preferred path and optionally assigns a priority to it.
primary – assigns a primary path for an Active/Passive disk array
secondary – assigns a secondary path for an Active/Passive disk array
standby – marks a path as not available for normal I/O scheduling.
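
For example, to set a round-robin policy on an enclosure and mark one path as standby (enc0 and the path name are placeholders):
# vxdmpadm setattr enclosure enc0 iopolicy=round-robin
# vxdmpadm setattr path c4t1d0s2 pathtype=standby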


Managing Enclosures

CLI:
To display attributes of all enclosures:
vxdmpadm listenclosure all

To change the name of an enclosure:
vxdmpadm setattr enclosure orig_name name=new_name

VEA:
Highlight an enclosure, and select Actions -> Rename Enclosure.


Controlling the Restore Daemon

The DMP restore daemon is an internal process that monitors DMP paths. To check status:
vxdmpadm stat restored

To change daemon properties:

Stop the DMP restore Daemon:
vxdmpadm stop restore

Restart the daemon with new attributes:
vxdmpadm start restore interval=400 policy=check_all

Friday, July 01, 2005

Recovery Essentials

Topics:
Maintaining Data Consistency
Hot Relocation
Managing Spare Disks
Replacing a Disk
Unrelocating a Disk
Recovering a Volume
Protecting the VxVM Configuration
Accessing the Technical Support Website


Atomic-copy resynchronization involves the sequential writing of all blocks of a volume to a plex.

This type of resynchronization is used in:
Adding a new plex (mirror)
Reattaching a detached plex (mirror) to a volume
Online reconfiguration operations:
- Moving a plex
- Copying a plex
- Creating a snapshot
- Moving a subdisk


Read-writeback resynchronization is used for volumes that were fully mirrored prior to a system failure.

In this type of resynchronization:
- Mirrors marked ACTIVE remain ACTIVE, and volume is placed in the SYNC state
- An internal read thread is started. Blocks are read from the plex specified in the read policy, and the data is written to the other plexes.
- Upon completion, the SYNC flag is turned off


Impact of Resynchronization

Resynchronization takes time and impacts performance.

To minimize this performance impact, VxVM provides the following solutions:
- Dirty region logging (DRL) for mirrored volumes
- RAID-5 logging for RAID-5 volumes
- FastResync for mirrored and snapshot volumes


Dirty Region Logging

For mirrored volumes with logging enabled, DRL speeds plex resynchronization. Only the regions marked dirty in the log (regions with writes in flight at the time of the crash) need to be resynchronized after a crash.

VxVM selects an appropriate log size based on volume size

If you resize a volume, the log size does not change. To resize the log, you must delete the log and add it back after resizing the volume.


RAID-5 Logging
- For RAID-5 volumes, logging helps to prevent data corruption during recovery
- RAID-5 logging records changes to data and parity on a persistent device (log disk) before committing the changes to the RAID-5 volume.
- Logs are associated with a RAID-5 volume by being attached as log plexes.


Hot Relocation
Hot relocation is a feature of VxVM that enables a system to automatically react to I/O failures on redundant (mirrored or RAID-5) VxVM objects and restore redundancy and access to those objects. VxVM detects I/O failures on objects and relocates the affected subdisks. The subdisks are relocated to disks designated as spare disks or to free space within the disk group. VxVM then reconstructs the objects that existed before the failure and makes them redundant and accessible again.


How is Space Selected?
- Hot relocation attempts to move all subdisks from a failing drive to a single spare destination disk
- If no disks have been designated as spares, VxVM uses any available free space in the disk group in which the failure occurs.
- If there is not enough spare disk space, a combination of spare disk space and free space is used.
- Free space that you exclude from hot relocation is not used.


Managing Spare Disks

VEA:
Actions -> Set Disk Usage

vxdiskadm:
- “Mark a disk as a spare for a disk group”
- “Turn off the spare flag on a disk”
- “Exclude a disk from hot-relocation use”
- “Make a disk available for hot-relocation use”

CLI:
To designate a disk as a spare:
vxedit –g diskgroup set spare=on|off diskname

To exclude/include a disk for hot relocation:
vxedit –g diskgroup set nohotuse=on|off diskname

To force hot relocation to only use spare disks:
Add spare=only to /etc/default/vxassist
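
For example, to mark datadg03 as a hot-relocation spare and keep datadg04 out of hot relocation entirely (names invented):
# vxedit -g datadg set spare=on datadg03
# vxedit -g datadg set nohotuse=on datadg04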


Disk Replacement Tasks
1) Replace the failed/failing disk
2) Logical Replacement
- Replace the disk in VxVM
- Start disabled volumes
- Resynchronize mirrors
- Resynchronize RAID-5 parity


Adding a New Disk
1) Connect the new disk
2) Get the OS to recognize the disk:
Sun –
devfsadm
prtvtoc /dev/dsk/device_name
HP-UX
ioscan –fC disk
insf –e
3) Get VxVM to recognize that a failed disk is now working again:
vxdctl enable
4) Verify that VxVM recognizes the disk:
vxdisk list

Note: In VEA, use the Actions -> Rescan option to run disk setup commands appropriate for the OS. This option ensures that VxVM recognizes newly attached hardware.


Unrelocating a Disk

VEA:
Select the disk to be unrelocated
Select Actions -> Undo Hot Relocation

vxdiskadm:
“Unrelocate subdisks back to a disk”

CLI:
vxunreloc [-f] [-g diskgroup] [-t tasktag] [-n disk_name] orig_disk_name

- orig_disk_name = Original disk before relocation
- -n disk_name = Unrelocates to a disk other than the original
- -f = Forces unrelocation if exact offsets are not possible


Viewing Relocated Subdisks: CLI

When a subdisk is hot-relocated, its original disk media name is stored in the sd_orig_dmname field of the subdisk record files. You can search this field to find all the subdisks that originated from a failed disk using the vxprint command:

vxprint –g diskgroup –se ‘sd_orig_dmname=”disk_name”’

For example, to display all the subdisks that were hot-relocated from datadg01 within the datadg disk group:

vxprint –g datadg –se ‘sd_orig_dmname=”datadg01”’


Recovering a Volume

VEA:
Select the volume to be recovered
Select Actions -> Recover Volume

CLI:
vxreattach [-bcr] [device_tag]
- Reattaches disks to a disk group if disk has a transient failure, such as when a drive is turned off and then turned back on, or if the Volume Manager starts with some disk drivers unloaded and unloadable.
- -r attempts to recover stale plexes using vxrecover

vxrecover [-bnpsvV] [-g diskgroup] [volume_name|disk_name]
i.e. vxrecover –b –g datadg datavol

Note: the vxrecover command only works on a started volume. A started volume displays an ENABLED state in vxprint –ht.
Note: use the –s argument to start a disabled volume


Configuration Backup and Restore

Backup:
vxconfigbackup [diskgroup]

Precommit:
vxconfigrestore –p diskgroup

Commit:
vxconfigrestore –c diskgroup

By default, VxVM configuration data is automatically backed up to the files:
/etc/vx/cbr/bk/diskgroup.dgid/dgid.dginfo
/etc/vx/cbr/bk/diskgroup.dgid/dgid.diskinfo
/etc/vx/cbr/bk/diskgroup.dgid/dgid.binconfig
/etc/vx/cbr/bk/diskgroup.dgid/dgid.cfgrec

Configuration data from a backup enables you to reinstall private region headers of VxVM disks in disk group, re-create a corrupted disk group configuration, or re-create a disk group and the VxVM objects within it.
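
A restore run with a concrete disk group name would look like this (datadg invented; I would review the precommitted configuration before committing):
# vxconfigbackup datadg
... later, after the configuration is lost or damaged ...
# vxconfigrestore -p datadg
(review the precommitted configuration, for example with vxprint -g datadg -ht)
# vxconfigrestore -c datadg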


And that is the end of the “Fundamentals” book.

Point-in-Time Copies: Standard

Topics:
What is a Point-In-Time Copy?
Traditional Volume Snapshots
File System Snapshots


A point-in-time copy (PITC) enables you to capture an image of data at a selected instant for use in applications, such as backups, decision support, reporting, and development testing.


Physical vs. Logical PITCs

Physical PITCs –
The physical PITC is a physically distinct copy of the data usually produced by breaking off a mirror of the storage container.

Advantages:
Complete copy of the primary data
Fully synchronized

Disadvantages:
Requires the same amount of storage space as the original
Requires time for synchronization of data


Logical PITCs –
This PITC identifies and maintains modified blocks, and in addition, there is a reference to the original data. The logical PITC is dependent on the primary copy of the data.

Advantages:
Available for use instantaneously

Disadvantages:
Dependent on the original.


Performance Issues with Physical PITCs
The primary impact for physical PITCs is the initial synchronization. This is especially important when large amounts of data need to be copied.

After this full synchronization is complete, there is very little, if any performance impact on the original volume or the PITC because they are separate objects.


Performance Issues with Logical PITCs
The logical PITC is connected to the primary data. Therefore, the I/O of a logical PITC is subject to the rate of change of the original data. The overall impact of the PITC is dependent on the read-to-write ratio of an application and the mixing of the I/O operations.

Note: Both the primary data and the logical PITC become faster as more data is copied out from the primary, because the PITC slowly becomes a complete physical copy over time.


Life Cycle of Point-in-Time Copies
1) Make PITC (Assign Resources) – vxassist snapstart & vxassist snapshot
2) Use PITC (Testing, Backup, etc.)
3) Update PITC (update PITC with new data from the primary or repopulate the primary from the PITC) – vxassist [-o resyncfromreplica] snapback
4) Destroy PITC (Release Resources) – vxassist remove


Traditional Volume Snapshots
The traditional type of volume snapshot that was originally provided in VxVM is the third-mirror break-off type.

When you create a traditional volume snapshot, you create a temporary mirror of an existing volume. After the contents of the third mirror (or snapshot plex) are synchronized from the original plexes of the volume, the snapshot plex can be detached as a snapshot volume for use in backup or decision support applications.


Creating and Managing Traditional Volume Snapshots

Create:
vxassist –g diskgroup [-b] snapstart origvol
(vxassist –g diskgroup snapwait origvol - use this command to force a wait for the snapshot mirror to finish synchronizing)
vxassist –g diskgroup snapshot [origvol] [snapvol]

Reassociate:
vxassist –g diskgroup snapback snapvol
or
vxassist –g diskgroup –o resyncfromreplica snapback snapvol

Dissociate:
vxassist –g diskgroup snapclear snapvol

Destroy:
vxassist –g diskgroup remove volume snapvol

Snapabort:
To remove a snapshot mirror that has not been detached and moved to a snapshot volume, you use the vxassist snapabort option.

vxassist –g diskgroup snapabort origvol


Displaying Traditional Volume Snapshot Information

vxprint –g diskgroup –ht (or vxprint –htg diskgroup)


Creating And Managing File System Snapshots

Create:
mount –F vxfs –o snapof=origfs[,snapsize=size] destination snap_mount_point

Refresh:
mount –F vxfs –o remount,snapof=origfs[,snapsize=size] destination snap_mount_point

Remove:
umount snap_mount_point
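
For example, to snapshot a mounted /data file system into a spare volume and mount the snapshot (snapshot file systems are read-only; all names invented):
# mkdir /datasnap
# mount -F vxfs -o snapof=/data /dev/vx/dsk/datadg/snapvol /datasnap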


Using a File System Snapshot
After creating a snapshot file system, you can back up the file system from the snapshot while the snapped file system remains online.

To back up a snapshot:
vxdump [options] [snap_mount_point]

To back up the snapshot to tape:
vxdump –cf /dev/rmt/0 /snapmount

To restore the file system from tape:
vxrestore –vx /mount_point

Thursday, June 30, 2005

Administrating File Systems

Topics:
Adding a File System to a Volume
Using Veritas File System Commands
Comparing the Allocation Policies of VxFS and Traditional File Systems
Upgrading the VxFS File System Layout
Controlling File System Fragmentation
Logging in VxFS

Adding a File System: VEA
Select Actions -> File System -> New File System

Mounting a File System: VEA
Select Action -> File System -> Mount File System

Unmounting a File System: VEA
Select Action -> File System -> Unmount File System


Adding a File System: CLI
To create and mount a VxFS file system:
mkfs –F vxfs /dev/vx/rdsk/diskgroup/volume_name
i.e. mkfs –F vxfs /dev/vx/rdsk/datadg/datavol

mkdir mount_point
i.e. mkdir /data

mount –F vxfs /dev/vx/dsk/diskgroup/volume_name mount_point
i.e. mount –F vxfs /dev/vx/dsk/datadg/datavol /data

To create and mount a ufs file system:
newfs /dev/vx/rdsk/diskgroup/volume_name
i.e. newfs /dev/vx/rdsk/datadg/datavol

mkdir mount_point
i.e. mkdir /data

mount /dev/vx/dsk/diskgroup/volume_name mount_point
i.e. mount /dev/vx/dsk/datadg/datavol /data


The vxupgrade Command
For better performance, use file system layout Version 6 for new file systems.

To upgrade the layout online, use vxupgrade:
vxupgrade [-n new_version] [-o noquota] [-r rawdev] mount_point

To display the current file system layout version number:
vxupgrade mount_point

Upgrading must be done in stages. For example, to upgrade the file system layout from Version 4 to Version 6:
vxupgrade –n 5 /mnt
vxupgrade –n 6 /mnt


Monitoring Fragmentation

To monitor directory fragmentation:
fsadm –D mount_point

A high total in the “Dirs to Reduce” column indicates fragmentation.

To monitor extent fragmentation:
fsadm –E mount_point

Free space in extents of less than 64 blocks in length:
less than 5% = unfragmented; greater than 50% = badly fragmented

Free space in extents of less than 8 blocks in length:
less than 1% = unfragmented; greater than 5% = badly fragmented

Total file system size in extents of length 64 blocks or greater:
greater than 5% = unfragmented; less than 5% = badly fragmented


Defragmenting a File System

CLI:
fsadm [-d] [-D] [-e] [-E] [-t time] [-p passes] mount_point
Note: the lowercase “d” and “e” options actually perform the defragmentation of directories and extents; the uppercase options only report.
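
For example, to actually defragment both directories and extents on /data while limiting the run time (the -t value is in seconds, as I recall):
# fsadm -F vxfs -d -e -t 3600 /data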

VEA:
Actions -> Defrag File System

Testing Performance Using vxbench

Obtained from: ftp://ftp.veritas.com/pub/support/vxbench.tar.Z

vxbench –w workload [options] filename


Intent Log
1) The intent log records pending file system changes before metadata is changed
2) After the intent log is written, other file system updates are made
3) If the system crashes, the intent log is replayed by VxFS fsck


Maintaining VxFS Consistency

To check file system consistency by using the intent log for VxFS on the volume datavol:
fsck [fs_type] /dev/vx/rdsk/datadg/datavol

To perform a full check without using the intent log:
fsck [fs_type] –o full,nolog /dev/vx/rdsk/datadg/datavol

To check two file systems in parallel using the intent log:
fsck [fs_type] –o p /dev/rdsk/c1t2d0s4 /dev/rdsk/c1t0d0s5

To perform a file system check using VEA:
Highlight an unmounted file system
Select Actions -> Check File System


Resizing the Intent Log
Larger log sizes may improve performance for intensive synchronous writes, but may increase recovery time, memory requirements, and log maintenance time.

Default log size depends on file system size (in the range of 256K to 64MB)
Maximum log size is 2Gb for version 6 and 16MB in versions 4 and 5.
Minimum log size is 256K

VEA:
Highlight a file system
Select Actions -> Set Intent Log Options

CLI:
fsadm [-F vxfs] –o log=size [,logdev=device] mount_point
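
For example, to grow the intent log on /data to 16 MB (the size suffix is how I remember it; check fsadm_vxfs(1M) on your release):
# fsadm -F vxfs -o log=16m /data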


Logging mount Options

mount –F vxfs [-o specific_options] …

-o log = Better integrity through logging all structural changes. If a system failure occurs, fsck replays recent changes so that they are not lost.

-o delaylog = (default) Improved performance due to some logging being delayed

-o tmplog = Best performance due to all logging being delayed, but some changes could be lost on a system failure.
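For example, to mount with full logging enabled (using the same placeholder volume and mount point as the earlier examples):
mount -F vxfs -o log /dev/vx/dsk/datadg/datavol /data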

Configuring Volumes

Topics:
Administrating Mirrors
Adding a Log to a Volume
Changing the Volume Read Policy
Allocating Storage for Volumes
Resizing a Volume


Adding a Mirror to a Volume
Only concatenated or striped volumes can be mirrored
By default, a mirror is created with the same plex layout as the original volume
Each mirror must reside on separate disks
All disks must be in the same disk group
A volume can have up to 32 plexes, or mirrors
Adding a mirror requires plex resynchronization

Adding a Mirror

VEA:
Select the volume to be mirrored
Select Actions -> Mirror -> Add

CLI:
vxassist –g diskgroup mirror volume_name [layout=layout_type] [disk_name]
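For example, to add a mirror of datavol on the disk datadg03 (the disk group, volume, and disk names are placeholders):
vxassist -g datadg mirror datavol datadg03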


Removing a Mirror

VEA:
Select Actions -> Mirror -> Remove
Remove by mirror name, quantity, or disk

CLI:
vxassist –g diskgroup remove mirror volume_name [!]disk_name
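For example, to remove the mirror of datavol that resides on datadg03 (placeholder names); prefixing the disk name with ! instead removes a mirror that is not on that disk:
vxassist -g datadg remove mirror datavol datadg03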


Adding a Log to a Volume

Dirty Region Logging (for mirrored volumes)
Log keeps track of changed regions
If the system fails, only the changed regions of the volume must be recovered
DRL is not enabled by default. When DRL is enabled, one log is created
You can create additional logs to mirror log data

RAID-5 Logging
Log keeps a copy of data and parity writes
If the system fails, the log is replayed to speed resynchronization
RAID-5 logging is enabled by default
RAID-5 logs can be mirrored
Store logs on disks separate from volume data and parity

VEA:
Actions -> Log -> Add
Actions -> Log -> Remove

CLI:
vxassist –g diskgroup addlog volume_name [logtype=drl] [nlog=n] [attributes]

Examples:

To add a dirty region log to an existing mirrored volume:
vxassist –g datadg addlog datavol logtype=drl

To add a RAID-5 log to a RAID-5 volume, no log type is needed:
vxassist –g datadg addlog datavol

To remove a log from a volume:
vxassist –g diskgroup remove log [nlog=n] volume_name


Volume Read Policies
Round robin – VxVM reads each plex in turn in a “round-robin” manner for each nonsequential I/O detected.

Preferred plex – VxVM reads first from the named plex and reads from another plex only if the first one fails.

Selected plex – (Default) VxVM uses round-robin unless the volume has exactly one enabled striped plex, in which case reads default to that plex.


Setting the Volume Read Policy

VEA:
Actions -> Set Volume Usage
Select from “Based on layouts”, “Round robin”, or “Preferred”

CLI:
vxvol –g diskgroup rdpol policy volume_name [plex]

Examples:
Round robin: vxvol –g datadg rdpol round datavol
Preferred: vxvol –g datadg rdpol prefer datavol datavol-02
Selected: vxvol –g datadg rdpol select datavol


Ordered Allocation
Ordered allocation enables you to control how columns and mirrors are laid out when creating a volume.

With ordered allocation, storage is allocated in a specific order:
First, VxVM concatenates subdisks in columns
Second, VxVM groups columns in striped plexes
Finally, VxVM forms mirrors

Note: When using ordered allocation, the number of disks specified must exactly match the number of disks needed for a given layout.


Ordered Allocation: Methods

VEA:
In the “New volume Wizard”, select “Manually select disks for use by this volume.” Select the disks and the storage allocation policy, and mark the “Ordered” check box.

CLI:
Use the “–o ordered” option:
vxassist [-g diskgroup] [-o ordered] make volume_name length [layout=layout]

Specifying the order of columns:
vxassist –g datadg –o ordered make datavol 2g layout=stripe ncol=3 datadg02 datadg04 datadg06

Specifying the order of mirrors:
vxassist –g datadg –o ordered make datavol 2g layout=mirror datadg02 datadg04


Resizing a volume: VEA
Highlight a volume, and select Actions -> Resize Volume

Resizing a volume: vxresize
vxresize [-b] [-F fs_type] –g diskgroup volume_name [+|-]new_length

Set size to: vxresize –g mydg myvol 50m
Grow by: vxresize –g mydg myvol +10m
Shrink by: vxresize –g mydg myvol -10m

Resizing a volume: vxassist
vxassist –g diskgroup growto|growby|shrinkto|shrinkby volume_name size

Grow to: vxassist –g datadg growto datavol 40m
Shrink to: vxassist –g datadg shrinkto datavol 30m
Grow by: vxassist –g datadg growby datavol 10m
Shrink by: vxassist –g datadg shrinkby datavol 10m

Resizing a volume: fsadm
fsadm [fs_type] [-b newsize] [-r rawdev] mount_point

Verify free space: vxdg –g datadg free
Expand the volume using vxassist: vxassist –g datadg growto datavol 1024000
Expand the file system using fsadm:
fsadm –F vxfs –b 1024000 –r /dev/vx/rdsk/datadg/datavol /datavol
Verify that the file system was resized by using df: df –k /datavol


Resizing a Dynamic LUN
If you resize a LUN in the hardware, you should resize the VxVM disk corresponding to that LUN.

VEA:
Select the disk that you want to expand
Select Actions -> Resize Disk

CLI:
vxdisk [-f] –g diskgroup resize [accessname|medianame] length=value
i.e. vxdisk –g datadg resize datadg01 length=8GB

Creating Volumes (More useful stuff)

Topics:
Selecting a Volume Layout
Creating a Volume
Displaying Volume Layout Information
Creating a Layered Volume
Removing a Volume

Concatenated Layout: A concatenated volume layout maps data in a linear manner onto one or more subdisks in a plex.

Striped Layout: A striped volume layout maps data so that the data is interleaved, or allocated in stripes, among two or more subdisks on two or more physical disks.

Mirrored Layout: By adding a mirror to a concatenated or striped volume, you create a mirrored layout. A mirrored volume layout consists of more than one plex that duplicates the information contained in a volume.

RAID-5 Layout: A RAID-5 layout has the same attributes as a striped plex, but includes one additional column of data that is used for parity. Parity provides redundancy.

RAID-5 requires a minimum of three disks for the data and parity. When implemented as recommended, an additional disk is required for the log. Note: RAID-5 cannot be mirrored.


Comparing Volume Layouts

Concatenation: Advantages
Removes disk size restrictions
Better utilization of free space
Simplified administration

Concatenation: Disadvantages
No protection against disk failure

Striping: Advantages
Improved performance through parallel data transfer
Load balancing

Striping: Disadvantages
No protection against disk failure


Mirroring: Advantages
Improved reliability and availability
Improved read performance

Mirroring: Disadvantages
Requires more disk space (duplicate data copy)
Slightly slower write performance

RAID-5: Advantages
Redundancy through parity
Requires less space than mirroring (not entirely true if set up as recommended (i.e. 3+ disks for the RAID-5 and mirrored log disks))
Improved read performance
Fast recovery through logging

RAID-5: Disadvantages
Slow write performance


Before Creating a Volume

Initialize disks and assign them to disk groups.
Striped: Requires at least 2 disks
Mirrored: Requires one disk for each mirror
RAID-5: Requires at least 3 disks plus one disk to contain the log

Creating a Volume: VEA
Step 1: Select disks to use for the new volume
Select Actions -> New Volume

Step 2: Specify volume attributes

Step 3: Create a file system on the volume (optional (i.e. can be done later))

Creating a Volume: CLI

vxassist –g diskgroup make volume_name length [attributes]

The above command creates your device files (i.e. /dev/vx/[r]dsk/diskgroup/volume_name)

To display volume attributes: vxassist –g diskgroup help showattrs

Concatenated Volume: CLI

vxassist –g diskgroup make volume_name length
i.e. vxassist –g datadg make datavol 10g

If the /etc/default/vxassist default layout is not concatenated, make the concatenated request explicit (i.e. vxassist –g datadg make datavol 10g layout=nostripe)

To specify which disks to use (rather than letting VxVM choose for you), explicitly list the disks (i.e. vxassist –g datadg make datavol 10g datadg02 datadg03).

Striped Volume: CLI
vxassist –g diskgroup make volume_name length layout=stripe [ncol=n] [stripeunit=size] [disks…]
i.e. vxassist –g acctdg make expvol 2g layout=stripe ncol=3 stripeunit=256k acctdg01 acctdg02 !acctdg03

layout=stripe => designates the striped layout
ncol=n => the number of stripes/columns (min 2, max 8)
stripeunit=size => the size of the stripe (default is 64K)
!acctdg03 => specifies that the indicated disk should not be used

RAID-5 Volume: CLI
vxassist –g diskgroup make volume_name length layout=raid5 [ncol=n] [stripeunit=size] [disks…]

Default ncol is 3
Default stripeunit is 16K
Log is created by default. Therefore, you need at least one more disk than the number of columns.
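For example, a 4-column RAID-5 volume with a 32k stripe unit (the volume and disk group names are placeholders; with the default log, at least five disks are needed in the disk group):
vxassist -g datadg make raidvol 4g layout=raid5 ncol=4 stripeunit=32k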

Mirrored Volume: CLI
vxassist –g diskgroup [-b] make volume_name length layout=mirror [nmirror=number]

The vxassist command normally waits for the mirrors to be synchronized before returning control, but if the –b argument is given, the sync will happen in the background.

Concatenated and mirrored:
vxassist –g datadg make datavol 5g layout=mirror

Specify three mirrors:
vxassist –g datadg make datavol 5g layout=stripe,mirror nmirror=3

Run process in background:
vxassist –g datadg –b make datavol 5g layout=stripe,mirror nmirror=3


Mirrored Volume with Log: CLI
vxassist –g diskgroup [-b] make volume_name length layout=mirror logtype=drl [nlog=n]

logtype=drl enables dirty region logging
nlog=n creates n logs and is used when you want more than one log plex to be created.
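For example, to create a 5 GB mirrored volume with two DRL log plexes and let the synchronization run in the background (names are placeholders):
vxassist -g datadg -b make datavol 5g layout=mirror logtype=drl nlog=2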


Estimating Volume Size: CLI

To determine largest possible size for a volume:
vxassist –g diskgroup maxsize attributes
i.e. vxassist –g datadg maxsize layout=raid5

To determine how much a volume can expand:
vxassist –g diskgroup maxgrow volume
i.e. vxassist –g datadg maxgrow datavol


Displaying Volume Information: CLI
vxprint –g diskgroup [options]

-v | -p | -s | -d => Select only volumes, plexes, subdisks, or disks, respectively
-h => List hierarchies below selected records
-r => Display related records of a volume containing subvolumes.
-t => Print single-line output records that depend upon the configuration record type
-l => Display all information from each selected record
-a => Display all information about each selected record, one record per line
-A => Select from all active disk groups
-e pattern => Show records that match an editor pattern
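For example, to show the full hierarchy of a volume in single-line record format (the disk group and volume names are placeholders):
vxprint -g datadg -ht datavol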


How Do Layered Volumes Work?

Volumes are constructed from subvolumes.
The top-level volume is accessible to applications.

Advantages
Improved redundancy
Faster recovery times

Disadvantages
Requires more VxVM objects.


The Four Types of Mirroring in VxVM:

mirror-concat (non-layered):
- The top-level volume contains more than one plex (mirror)
- Plexes are concatenated

mirror-stripe (non-layered):
- The top-level volume contains more than one plex (mirror)
- Plexes are striped

concat-mirror (layered)
- The top-level volume is a concatenated plex
- Subvolumes are mirrored

stripe-mirror (layered)
- The top-level volume is a striped plex
- Subvolumes are mirrored


Creating Layered Volumes

VEA:
In the New Volume Wizard, select Concatenated Mirrored or Striped Mirrored as the volume layout.

CLI:
vxassist –g diskgroup make volume_name size layout={stripe-mirror|concat-mirror}

To create simple mirrored volumes (nonlayered), you can use:
layout=mirror-concat
layout=mirror-stripe
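For example, a 3-column stripe-mirror volume with two mirrors per column (names are placeholders):
vxassist -g datadg make datavol 10g layout=stripe-mirror ncol=3 nmirror=2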


Viewing Layered Volumes
vxprint –rth volume_name


Remove a Volume

VEA:
Select the volume that you want to remove.
Select Actions -> Delete Volume

CLI:
vxassist –g diskgroup remove volume volume_name

or

vxedit –g diskgroup –rf rm volume_name
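For example, after unmounting any file system on the volume (assuming it is mounted at /data; all names are placeholders):
umount /data
vxedit -g datadg -rf rm datavol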

Tuesday, June 28, 2005

technorati tags

I've been using technorati tags in an effort to create categories.

But, it is very slow to update and it doesn't look to be giving me what I'm looking for.

I'll be looking for something else.

What I want is to be able to put topics together and be able to reference them at some later date. For example, I do some funky kernel thing and then six months later need to repeat the process. Instead of reinventing the wheel, I'll scan my category links first.

The technorati tags looked to be giving that to me at first, but I'm still waiting for it to update with some of my posts.

hmmm

Monday, June 27, 2005

Random Musings (i.e. whining and moaning)

I'm looking to relocate to NC from VA.

I've been focused on Raleigh, but things have been slow. I'm thinking this is for two reasons.

#1 Raleigh is a big AIX and Linux town (as is probably obvious, I'm an HP-UX guy who can work with Sun)

#2 I was/am an idiot that wasted about 7 weeks with my current head-hunter (contracting agency) thinking they could place me down there quicker than I could find a position.

I say idiot in #2 because I know my current agency is a bottom feeder that charges the client a crazy overhead on my rate and won't give raises out of that overhead nor offer decent health coverage (eHealthInsurance rocks for private insurance (cheaper than some employee health coverage I've had in the past)). Also, for some stupid reason I've been patient with the NC rep I'd been referred to, even though for the past seven weeks the dude has been at two out-of-state conferences (one for one week, the other for two). Stupid.

I sort of blame my stupidity on my constant state of exhaustion from my commute, which, as of 4 months ago, takes 4 to 6 hours a day. Yes, I purposely live in the boonies (approx. 66 miles from my current jobsite), but the current cost of housing in VA has sent such a horde of people out my way that my 1.25 hour one-way commute has grown to a 2-3 hour commute (and that is at off hours (like 5 FREAKING am)).

On the plus side, my house has nearly tripled in value (on a mountain and on a dirt road no less). One of the many reasons for living out here was to be able to pay off my house early (I've got over 80% paid for).

On the sorta minus side, my wife and I have to sort through 15 years of accumulated stuff and get that thrown out or put in storage. Then we need to get the place painted and a few other things.

Now, here's where the good part comes in...

You see, NC has a few tech centers, and from the headhunters I've come to learn that the pay rate is just about equal to northern VA, but with a cost of living at 85-95% of the town I live in (and 69-79% of the town I work in (Arlington)). Also, the houses are about one third the cost (more so if comparing housing around the DC metro area). So, I'm looking to sell in VA and buy outright in NC. I was hoping to take a little while longer to get my house in order, but my commute is just killing me.

Note: I'm not some rich dude. It's just that during the boom days of the dot com era, instead of getting a mortgage on some $500-700K house like many of my old co-workers (who subsequently had to get a second job or go bankrupt after the dot com crash), I chose a very rural sub-$100K house that I could pay off quickly (living beneath one's means is a wise thing to do).

Anyway, I'm starting to look in Winston-Salem and Charlotte. I'm not keen on Charlotte as it has a crime rate index of nearly three times the national average (as a point of reference, Washington DC is five times the national average).

All of this leads me to a phone interview I had with a prospective employer. It was a tech interview with a Brit. I've worked with Brits in the past but never interviewed with one. He was pleasant enough, but gave me no feedback as to what he was looking for in the interview. He would ask a question and I'd answer it and he would say something like "yes, thank you" and move on to the next question. Beyond answering the specific questions that he posed to me, I had no clue if I was the kind of Unix admin he was looking for. I was so flustered that after the interview I googled some of his questions to make sure that I did answer them correctly. If it had been an American (assuming that he wasn't a US citizen), I would have asked a few questions seeking to determine if I wasn't a fit and then thanked him for his time. I just didn't know what to make of the guy. To make matters worse, I had been suffering from a monster sinus headache that morning and I took a triple dose of decongestant. Well, that alleviated the pain, but left me all squirmy (glad it was a phone interview) and had me saying "uhh" or "err" after nearly every word during the interview.

It's been my experience that Indians conduct interviews in generally the same way (aggressive probing of specific command/scripting/programming syntax (almost a "gotcha" hunt) as opposed to the overall solutions to business needs the candidate brings to the table). Outside of specific ethnic groups, your average northern Virginia tech interview tends to be one of three: 1) cult of personality test, 2) a "gotcha" hunt, or 3) probing what solutions a candidate brings and/or bringing up common problems that the shop faces and asking the candidate how he would approach them.

This Brit guy, assuming that his was the way that most Brits conduct tech interviews, just rattled me with the lack of feedback.

Generic Unix links

Guide to cloning a SUN Blade 1000 drive running Solaris 8 on a second drive

Mirroring Disks with Solstice DiskSuite

CLI HP-UX Disk Mirroring

Including other volume groups in ignite

Kernel Parameters for Oracle 9.2.0 (64bit) on HPUX 11i (11.11)

HP-UX Listing Product and Serial Numbers (Also)

HP-UX Recurring ITRC questions

HP-UX Patch Assessment

HP-UX Performance Cookbook

HP-UX Memory issues (ninode)

HP-UX NFS Perf Tuning

HP-UX Host Intrusion Detection System Admins Guide

HP-UX Kernel Tuning and Perf Guide

Managing Systems and Workgroups: A Guide for HP-UX System Administrators

HP-UX Security Patch Check

Sunday, June 26, 2005

Managing Disks and Disk groups (getting into the useful stuff)

As of VxVM 3.2, you can use enclosure-based names to get away from OS-dependent pathing. So if you want to call a certain array “blue” and another one “red”, you can do so.

The following are reserved disk group names: bootdg, defaultdg, nodg.

If you’ve encapsulated your root disk, bootdg is an alias for the disk group containing the volumes that are used to boot the system.

“defaultdg” is an alias for the disk group that should be assumed if the –g option is not specified on a command.

By default, both bootdg and defaultdg are set to nodg.

A default disk group can be specified with: vxdctl defaultdg diskgroup


Disk Configuration Stages

1. Initialize the disk
2. Assign disk to a disk group
3. Assign disk space to volumes


Creating a Disk Group

You can add a single disk or multiple disks.
You cannot add a disk to more than one disk group
Disk media names must be unique within a disk group
Adding a disk to a disk group makes the disk space available for use in creating VM volumes.


Creating a Disk Group: VEA

Select Actions-> New Disk Group
Specify a name
Add at least one disk
Specify disk media names for the disks added
To add another disk: Actions-> Add Disk to Disk Group

Creating a Disk Group: vxdiskadm
“Add or initialize one or more disks”

Creating a Disk Group: CLI
Initialize disk(s):
vxdisksetup –i device_tag [attributes]
i.e. vxdisksetup –i c2t0d0

Initialize disk group by adding at least one disk:
vxdg init diskgroup disk_name=device_tag
i.e. vxdg init newdg newdg01=c2t0d0

Add more disks to the disk group:
vxdg –g diskgroup adddisk disk_name=device_tag
i.e. vxdg –g newdg adddisk newdg02=c2t1d0


Viewing All Disks: VEA

In VEA, disks are represented under the Disks node in the object tree, in the Disk View window, and in the grid for several object types, including controllers, disk groups, enclosures, and volumes.

The status of a disk can be:
Not Initialized: The disk is not under VxVM control
Free: The disk is in the free disk pool; it is initialized by VxVM but is not in a disk group
Foreign: The disk is under the control of another host
Imported: The disk is in an imported disk group
Deported: The disk is in a deported disk group
Disconnected: The disk contains subdisks that are not available because of hardware failure
External: The disk is in use by a foreign manager, such as Logical Volume Manager


Viewing Disk Information: CLI

vxdisk –o alldgs list

In the output:
Status of online – disk is under VxVM control and is available for creating volumes
Status of online invalid – disk is not under VxVM control

Viewing Detailed Information: CLI

vxdisk –g diskgroup list disk_name
i.e. vxdisk –g datadg list datadg01


Viewing Disk Groups: CLI

Display imported disk groups only - vxdg list
Display all disk groups, including deported disk groups – vxdg –o alldgs list
Display free space – vxdg free


Creating a Non-CDS Disk and Disk Group

To initialize a disk as a sliced disk:
vxdisksetup –i device_tag format=sliced

To initialize a non-CDS disk group:
vxdg init diskgroup disk_name=device_tag cds=off
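For example (the device name, disk group, and disk media names are placeholders):
vxdisksetup -i c2t3d0 format=sliced
vxdg init slicedg slicedg01=c2t3d0 cds=off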


Before Removing a Disk

Either move the disk to the free disk pool or return disk to an uninitialized state.
You cannot remove the last disk in a disk group, unless you destroy the disk group.
Before removing a disk, ensure that the disk does not contain needed data.


Evacuating a Disk

Before removing a disk you may need to evacuate data to another disk.

VEA:
Select disk to be evacuated
Select Actions -> Evacuate Disk

vxdiskadm:
“Move volumes from a disk”

CLI:
vxevac –g diskgroup from_disk [to_disk]
i.e. vxevac –g datadg datadg02 datadg03

If the “to” disk is not specified, VM will find the space for you.


Removing a Disk from VxVM

VEA:
Select disk to be removed
Select Actions -> Remove Disk from Dynamic Disk Group

vxdiskadm:
“Remove a disk”

CLI:
vxdg –g diskgroup rmdisk disk_name
i.e. vxdg –g newdg rmdisk newdg02

vxdiskunsetup [-C] device_tag
i.e. vxdiskunsetup c0t2d0


Renaming a Disk

VEA:
Select disk to be renamed
Select Actions -> Rename Disk
Specify the original disk name and the new name

CLI:
vxedit –g diskgroup rename old_name new_name
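For example, to rename the disk datadg02 to datadg05 within the datadg disk group (placeholder names):
vxedit -g datadg rename datadg02 datadg05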

Note:
The new disk name must be unique within the disk group
Renaming a disk does not automatically rename subdisks on the disk.


Deporting a Disk Group: VEA
Select Actions -> Deport Disk Group


Deporting a Disk Group: vxdiskadm
“Remove access to (deport) a disk group”


Deporting a Disk Group: CLI

Deport: vxdg deport diskgroup
Deport and rename: vxdg –n new_name deport old_name
Deport to a new host: vxdg –h hostname deport diskgroup
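For example, to deport datadg so that the host named hostb can import it (the disk group and hostname are placeholders):
vxdg -h hostb deport datadg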


Importing a Disk Group: VEA
Select Actions -> Import Disk Group


Importing a Disk Group: vxdiskadm
“Enable access to (import) a disk group”


Importing a Disk Group: CLI
Import: vxdg import diskgroup
After import, start all volumes: vxvol –g diskgroup startall

To import and rename a disk group: vxdg –n new_name import old_name
To import and rename temporarily: vxdg –t –n new_name import old_name
To clear import locks: vxdg –tC –n new_name import old_name


Renaming a Disk Group:
VEA: Actions -> Rename Disk Group
CLI: follow directions on deporting and importing a disk group to rename.


Destroying a Disk Group:
VEA: Actions -> Destroy Disk Group
CLI: vxdg destroy diskgroup

Veritas Install Stuff

To add a license key – vxlicinst

License keys are installed in /etc/vx/licenses/lic

To view installed license key info – vxlicrep


To start/stop the VEA server manually – /etc/init.d/isisd start/stop/restart

To confirm that the VEA server is running – vxsvc -m

GUI - Veritas Enterprise Administrator – vea &

For disk specific actions – vxdiskadm

Command log file - /var/adm/vx/veacmdlog

Volume Manager RAID Levels

RAID is an acronym for Redundant Array of Independent Disks

RAID-0 – Striping (disk space is striped across two or more disks)

RAID-1 – Mirroring (data from one plex is duplicated on another plex to provide redundancy)

RAID-5 – Parity (RAID-5 is a striped layout that also includes the calculation of parity information, and the striping of that parity information across the disks. If a disk fails, the parity is used to reconstruct the missing data.)

RAID-0+1 – Mirror-stripe (disks are first striped (or plain concat) and then mirrored)

RAID-1+0 – Stripe-mirror (disks are first mirrored and then striped (or plain concat))

Volume Manager Storage Objects

Disk Groups:
A disk group is a collection of VxVM disks that share a common configuration. Disk groups are configured by the system administrator and represent management and configuration boundaries. VM objects cannot span disk groups.

Disk groups ease the use of devices in a high availability environment, because a disk group and its components can be moved as a unit from one host machine to another. Disk drives can be shared by two or more hosts, but can be accessed by only one host at a time.

Volume Manager Disks:
A Volume Manager (VxVM) disk represents the public region of a physical disk that is under Volume Manager control. Each VxVM disk corresponds to one physical disk. Each VxVM disk has a unique virtual disk name called a disk media name. VM uses the disk media name when assigning space to volumes. A VxVM disk is given a disk media name when it is added to a disk group.

Subdisks:
A subdisk is a set of contiguous disk blocks that represent a specific portion of a VxVM disk, which is mapped to a specific region of a physical disk. A subdisk is a subsection of a disk’s public region. A subdisk is the smallest unit of storage in VM.

Plexes:
A plex is a structured or ordered collection of subdisks that represent one copy of the data in a volume. A plex consists of one or more subdisks located on one or more physical disks.

Plex types:
Complete plex – A complete plex holds a complete copy of a volume and therefore maps the entire address space of the volume.

Sparse plex – A sparse plex is a plex with a length that is less than the length of the volume, or one that maps to only part of the address space of a volume.

Log plex – A log plex is a plex that is dedicated to logging. A log plex is used to speed up data consistency checks and repairs after system failure. RAID-5 and mirrored volumes typically use a log plex.

A volume must have at least one complete plex that has a complete copy of the data in the volume with at least one associated subdisk. Other plexes in the volume can be complete, sparse, or log plexes. A volume can have up to 32 plexes; however, you should never use more than 31 plexes in a single volume. Volume manager requires one plex for automatic or temporary online operations.

Volumes:
A volume is a virtual storage device that is used by applications in a manner similar to a physical disk. A VxVM volume can be as large as the total of available, unreserved free physical disk space in the disk group. A volume is comprised of one or more plexes.

Volume Manager Control

When you place a disk under VM control, a CDS disk layout is used, which ensures that the disk is accessible on different platforms, regardless of the platform on which the disk was initialized. By default in VxVM 4.0 and later, VM uses a cross-platform data sharing (CDS) disk layout.

A CDS disk consists of:

OS-reserved areas: The first 128K and the last two cylinders on a disk are reserved for disk labels, platform blocks, and platform-coexistence labels.

Private region: The private region stores information, such as disk headers, configuration copies, and kernel logs, and other platform-specific management areas that VxVM uses to manage virtual objects.

Public region: Represents the available space that VM can use to assign to volumes and is where an application stores data.

Comparing CDS and Sliced Disks

The sliced disk layout is still available in VxVM 4.0 and later, and is used for bringing the boot disk under VxVM control.

On platforms that support bringing the boot disk under VxVM control, CDS disks cannot be used for boot disks

Virtual Data Storage

Virtual Storage Management
Veritas VM creates a virtual level of storage management above the physical device level by creating virtual storage objects. The virtual storage object that is visible to users and applications is called a “volume”.

What is a volume?
A volume is a virtual object, created by VM, that stores data. A volume is made up of space from one or more physical disks on which the data is physically stored.

How do you access a volume?
Volumes created by VM appear to the OS as physical disks, and applications that interact with the volumes work in the same way as with physical disks.

Physical Data Storage

Reads and writes on unmanaged physical disks can be a slow process.

Disk arrays and multipathed disk arrays can improve I/O speed and throughput.

Disk array: A collection of physical disks used to balance I/O across multiple disks

Multipathed disk array: Provides multiple ports to access disks to achieve performance and availability benefits

Note: Throughout this course, the term “disk” is used to mean either disk or LUN. Whatever the OS sees as a storage device, VxVM sees as a disk.

Physical Disk Naming

VxVM parses disk names to retrieve connectivity information for disks. Operating systems have different conventions:

Solaris
/dev/[r]dsk/c#t#d#s#

HP-UX
/dev/[r]dsk/c#t#d#

AIX
/dev/hdisk#

Linux
SCSI –
/dev/sda[1-4] – primary partitions
/dev/sda[5-16] – logical partitions
/dev/sdb# – on the second disk
/dev/sdc# – on the third disk
/dev/sdc# - on the third disk

IDE –
/dev/hda#, /dev/hdb#, /dev/hdc#