Thursday, June 30, 2005

Administering File Systems

Topics:
Adding a File System to a Volume
Using Veritas File System Commands
Comparing the Allocation Policies of VxFS and Traditional File Systems
Upgrading the VxFS File System Layout
Controlling File System Fragmentation
Logging in VxFS

Adding a File System: VEA
Select Actions -> File System -> New File System

Mounting a File System: VEA
Select Actions -> File System -> Mount File System

Unmounting a File System: VEA
Select Actions -> File System -> Unmount File System


Adding a File System: CLI
To create and mount a VxFS file system:
mkfs -F vxfs /dev/vx/rdsk/diskgroup/volume_name
e.g. mkfs -F vxfs /dev/vx/rdsk/datadg/datavol

mkdir mount_point
e.g. mkdir /data

mount -F vxfs /dev/vx/dsk/diskgroup/volume_name mount_point
e.g. mount -F vxfs /dev/vx/dsk/datadg/datavol /data

To create and mount a UFS file system:
newfs /dev/vx/rdsk/diskgroup/volume_name
e.g. newfs /dev/vx/rdsk/datadg/datavol

mkdir mount_point
e.g. mkdir /data

mount /dev/vx/dsk/diskgroup/volume_name mount_point
e.g. mount /dev/vx/dsk/datadg/datavol /data
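
To make either mount persist across reboots, add an entry to /etc/vfstab. A minimal sketch using the example VxFS volume above (devices and mount point are the illustrative names from this section; adjust for your setup):

# device to mount, device to fsck, mount point, FS type, fsck pass, mount at boot, options
/dev/vx/dsk/datadg/datavol /dev/vx/rdsk/datadg/datavol /data vxfs 2 yes -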


The vxupgrade Command
For better performance, use file system layout Version 6 for new file systems.

To upgrade the layout online, use vxupgrade:
vxupgrade [-n new_version] [-o noquota] [-r rawdev] mount_point

To display the current file system layout version number:
vxupgrade mount_point

Upgrading must be done in stages. For example, to upgrade the file system layout from Version 4 to Version 6:
vxupgrade -n 5 /mnt
vxupgrade -n 6 /mnt
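
Putting the display and upgrade steps together, a hedged sketch of the full sequence on a hypothetical /data mount point, assuming the file system starts at layout Version 4:

vxupgrade /data        # report the current layout version
vxupgrade -n 5 /data   # Version 4 -> Version 5
vxupgrade -n 6 /data   # Version 5 -> Version 6
vxupgrade /data        # confirm the new version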


Monitoring Fragmentation

To monitor directory fragmentation:
fsadm -D mount_point

A high total in the “Dirs to Reduce” column indicates fragmentation.

To monitor extent fragmentation:
fsadm -E mount_point

Free space in extents of less than 64 blocks in length:
less than 5% = unfragmented; greater than 50% = badly fragmented

Free space in extents of less than 8 blocks in length:
less than 1% = unfragmented; greater than 5% = badly fragmented

Total file system size in extents of 64 blocks or greater in length:
greater than 5% = unfragmented; less than 5% = badly fragmented


Defragmenting a File System

CLI:
fsadm [-d] [-D] [-e] [-E] [-t time] [-p passes] mount_point
Note: the uppercase -D and -E options only report fragmentation; the lowercase -d and -e options actually defragment directories and extents.

VEA:
Actions -> Defrag File System
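
For routine maintenance, the directory and extent passes can be combined and time-boxed so the job does not run indefinitely. A sketch of a cron-style invocation on a hypothetical /data mount point, assuming the -t option takes a limit in seconds:

# defragment directories (-d) and extents (-e), stopping after one hour
fsadm -F vxfs -d -e -t 3600 /data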

Testing Performance Using vxbench

Obtained from: ftp://ftp.veritas.com/pub/support/vxbench.tar.Z

vxbench -w workload [options] filename


Intent Log
1) The intent log records pending file system changes before metadata is changed
2) After the intent log is written, other file system updates are made
3) If the system crashes, the intent log is replayed by VxFS fsck


Maintaining VxFS Consistency

To check file system consistency by using the intent log for VxFS on the volume datavol:
fsck [fs_type] /dev/vx/rdsk/datadg/datavol

To perform a full check without using the intent log:
fsck [fs_type] -o full,nolog /dev/vx/rdsk/datadg/datavol

To check two file systems in parallel using the intent log:
fsck [fs_type] -o p /dev/rdsk/c1t2d0s4 /dev/rdsk/c1t0d0s5

To perform a file system check using VEA:
Highlight an unmounted file system
Select Actions -> Check File System


Resizing the Intent Log
Larger log sizes may improve performance for intensive synchronous writes, but may increase recovery time, memory requirements, and log maintenance time.

Default log size depends on file system size (in the range of 256 KB to 64 MB)
Maximum log size is 2 GB for Version 6 and 16 MB for Versions 4 and 5
Minimum log size is 256 KB

VEA:
Highlight a file system
Select Actions -> Set Intent Log Options

CLI:
fsadm [-F vxfs] -o log=size[,logdev=device] mount_point
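
For example, a sketch of growing the intent log of a hypothetical file system mounted at /data to 16 MB (assuming the usual k/m size suffixes are accepted):

fsadm -F vxfs -o log=16m /data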


Logging mount Options

mount -F vxfs [-o specific_options] …

-o log = Better integrity through logging of all structural changes. If a system failure occurs, fsck replays recent changes so that they are not lost.

-o delaylog = (default) Improved performance, because some logging is delayed.

-o tmplog = Best performance, because all logging is delayed; some changes could be lost on a system failure.
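
A sketch of mounting with full logging, reusing the example volume from earlier sections; the same keyword goes in the mount-options column of /etc/vfstab to make it permanent:

mount -F vxfs -o log /dev/vx/dsk/datadg/datavol /data

# matching /etc/vfstab entry (options in the last column):
/dev/vx/dsk/datadg/datavol /dev/vx/rdsk/datadg/datavol /data vxfs 2 yes log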

Configuring Volumes

Topics:
Administering Mirrors
Adding a Log to a Volume
Changing the Volume Read Policy
Allocating Storage for Volumes
Resizing a Volume


Adding a Mirror to a Volume
Only concatenated or striped volumes can be mirrored
By default, a mirror is created with the same plex layout as the original volume
Each mirror must reside on separate disks
All disks must be in the same disk group
A volume can have up to 32 plexes, or mirrors
Adding a mirror requires plex resynchronization

Adding a Mirror

VEA:
Select the volume to be mirrored
Select Actions -> Mirror -> Add

CLI:
vxassist -g diskgroup mirror volume_name [layout=layout_type] [disk_name]
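
For example, a sketch of adding a mirror to datavol and confining the new plex to a specific disk (datadg05 is an illustrative disk name):

vxassist -g datadg mirror datavol datadg05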


Removing a Mirror

VEA:
Select Actions -> Mirror -> Remove
Remove by mirror name, quantity, or disk

CLI:
vxassist -g diskgroup remove mirror volume_name [!]disk_name
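
For example, a sketch using the ! exclusion form: the plex on datadg02 is kept, and a mirror residing on some other disk is removed (disk names are illustrative):

vxassist -g datadg remove mirror datavol !datadg02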


Adding a Log to a Volume

Dirty Region Logging (for mirrored volumes)
Log keeps track of changed regions
If the system fails, only the changed regions of the volume must be recovered
DRL is not enabled by default. When DRL is enabled, one log is created
You can create additional logs to mirror log data

RAID-5 Logging
Log keeps a copy of data and parity writes
If the system fails, the log is replayed to speed resynchronization
RAID-5 logging is enabled by default
RAID-5 logs can be mirrored
Store logs on disks separate from volume data and parity

VEA:
Actions -> Log -> Add
Actions -> Log -> Remove

CLI:
vxassist -g diskgroup addlog volume_name [logtype=drl] [nlog=n] [attributes]

Examples:

To add a dirty region log to an existing mirrored volume:
vxassist -g datadg addlog datavol logtype=drl

To add a RAID-5 log to a RAID-5 volume, no log type is needed:
vxassist -g datadg addlog datavol

To remove a log from a volume:
vxassist -g diskgroup remove log [nlog=n] volume_name


Volume Read Policies
Round robin – VxVM reads each plex in turn in a round-robin manner for each nonsequential I/O detected.

Preferred plex – VxVM reads first from a named plex and reads from the next only if the first has failed.

Selected plex – (Default) Uses round-robin unless the volume has exactly one striped plex, in which case reads default to that plex.


Setting the Volume Read Policy

VEA:
Actions -> Set Volume Usage
Select from “Based on layouts”, “Round robin”, or “Preferred”

CLI:
vxvol -g diskgroup rdpol policy volume_name [plex]

Examples:
Round robin: vxvol -g datadg rdpol round datavol
Preferred: vxvol -g datadg rdpol prefer datavol datavol-02
Selected: vxvol -g datadg rdpol select datavol


Ordered Allocation
Ordered allocation enables you to control how columns and mirrors are laid out when creating a volume.

With ordered allocation, storage is allocated in a specific order:
First, VxVM concatenates subdisks in columns.
Second, VxVM groups columns into striped plexes.
Finally, VxVM forms the mirrors.

Note: When using ordered allocation, the number of disks specified must exactly match the number of disks needed for a given layout.


Ordered Allocation: Methods

VEA:
In the New Volume Wizard, select “Manually select disks for use by this volume.” Select the disks and the storage allocation policy, and mark the “Ordered” check box.

CLI:
Use the -o ordered option:
vxassist [-g diskgroup] [-o ordered] make volume_name length [layout=layout]

Specifying the order of columns:
vxassist -g datadg -o ordered make datavol 2g layout=stripe ncol=3 datadg02 datadg04 datadg06

Specifying the order of mirrors:
vxassist -g datadg -o ordered make datavol 2g layout=mirror datadg02 datadg04
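
Combining the two, a sketch of ordered allocation for a mirror-stripe volume; per the allocation order above, the first ncol disks form the columns of the first plex and the next ncol disks the second, so the disk count must equal ncol x nmirror (disk names are illustrative):

vxassist -g datadg -o ordered make datavol 2g layout=mirror-stripe ncol=2 nmirror=2 datadg01 datadg02 datadg03 datadg04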


Resizing a volume: VEA
Highlight a volume, and select Actions -> Resize Volume

Resizing a volume: vxresize
vxresize [-b] [-F fs_type] -g diskgroup volume_name [+|-]new_length

Set size to: vxresize -g mydg myvol 50m
Grow by: vxresize -g mydg myvol +10m
Shrink by: vxresize -g mydg myvol -10m

Resizing a volume: vxassist
vxassist -g diskgroup {growto|growby|shrinkto|shrinkby} volume_name size

Grow to: vxassist -g datadg growto datavol 40m
Shrink to: vxassist -g datadg shrinkto datavol 30m
Grow by: vxassist -g datadg growby datavol 10m
Shrink by: vxassist -g datadg shrinkby datavol 10m

Resizing a volume: fsadm
fsadm [fs_type] [-b newsize] [-r rawdev] mount_point

Verify free space: vxdg -g datadg free
Expand the volume using vxassist: vxassist -g datadg growto datavol 1024000
Expand the file system using fsadm:
fsadm -F vxfs -b 1024000 -r /dev/vx/rdsk/datadg/datavol /datavol
Verify that the file system was resized by using df: df -k /datavol
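
Because vxresize grows the volume and the file system together, the two-step vxassist/fsadm sequence above can usually be collapsed into a single command (same hypothetical names; the length is in sectors, as in the example above):

vxresize -g datadg datavol 1024000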


Resizing a Dynamic LUN
If you resize a LUN in the hardware, you should resize the VxVM disk corresponding to that LUN.

VEA:
Select the disk that you want to expand
Select Actions -> Resize Disk

CLI:
vxdisk [-f] -g diskgroup resize {accessname|medianame} length=value
e.g. vxdisk -g datadg resize datadg01 length=8GB

Creating Volumes (More useful stuff)

Topics:
Selecting a Volume Layout
Creating a Volume
Displaying Volume Layout Information
Creating a Layered Volume
Removing a Volume

Concatenated Layout: A concatenated volume layout maps data in a linear manner onto one or more subdisks in a plex.

Striped Layout: A striped volume layout maps data so that the data is interleaved, or allocated in stripes, among two or more subdisks on two or more physical disks.

Mirrored Layout: By adding a mirror to a concatenated or striped volume, you create a mirrored layout. A mirrored volume layout consists of more than one plex that duplicates the information contained in a volume.

RAID-5 Layout: A RAID-5 layout has the same attributes as a striped plex, but includes one additional column of data that is used for parity. Parity provides redundancy.

RAID-5 requires a minimum of three disks for the data and parity. When implemented as recommended, an additional disk is required for the log. Note: RAID-5 cannot be mirrored.


Comparing Volume Layouts

Concatenation: Advantages
Removes disk size restrictions
Better utilization of free space
Simplified administration

Concatenation: Disadvantages
No protection against disk failure

Striping: Advantages
Improved performance through parallel data transfer
Load balancing

Striping: Disadvantages
No protection against disk failure


Mirroring: Advantages
Improved reliability and availability
Improved read performance

Mirroring: Disadvantages
Requires more disk space (duplicate data copy)
Slightly slower write performance

RAID-5: Advantages
Redundancy through parity
Requires less space than mirroring (not entirely true if set up as recommended, i.e. with 3+ disks for the RAID-5 plus mirrored log disks)
Improved read performance
Fast recovery through logging

RAID-5: Disadvantages
Slow write performance


Before Creating a Volume

Initialize disks and assign them to disk groups.
Striped: Requires at least 2 disks
Mirrored: Requires one disk for each mirror
RAID-5: Requires at least 3 disks plus one disk to contain the log

Creating a Volume: VEA
Step 1: Select disks to use for the new volume
Select Actions -> New Volume

Step 2: Specify volume attributes

Step 3: Create a file system on the volume (optional (i.e. can be done later))

Creating a Volume: CLI

vxassist -g diskgroup make volume_name length [attributes]

The above command creates your device files (i.e. /dev/vx/[r]dsk/diskgroup/volume_name)

To display volume attributes: vxassist -g diskgroup help showattrs

Concatenated Volume: CLI

vxassist -g diskgroup make volume_name length
e.g. vxassist -g datadg make datavol 10g

If the /etc/default/vxassist default layout is not concatenated, make the concatenated request explicit (e.g. vxassist -g datadg make datavol 10g layout=nostripe)

To specify which disks to use (as opposed to letting VxVM decide for you), explicitly indicate the disks to use (e.g. vxassist -g datadg make datavol 10g datadg02 datadg03).

Striped Volume: CLI
vxassist -g diskgroup make volume_name length layout=stripe [ncol=n] [stripeunit=size] [disks…]
e.g. vxassist -g acctdg make expvol 2g layout=stripe ncol=3 stripeunit=256k acctdg01 acctdg02 !acctdg03

layout=stripe => designates the striped layout
ncol=n => the number of stripes/columns (min 2, max 8)
stripeunit=size => the size of the stripe unit (default is 64K)
!acctdg03 => specifies that the disk indicated should not be used

RAID-5 Volume: CLI
vxassist -g diskgroup make volume_name length layout=raid5 [ncol=n] [stripeunit=size] [disks…]

Default ncol is 3
Default stripeunit is 16K
Log is created by default. Therefore, you need at least one more disk than the number of columns.
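
For example, a sketch of a 4 GB RAID-5 volume with four columns; with the default log, this needs at least five disks, which VxVM is left to choose here:

vxassist -g datadg make datavol 4g layout=raid5 ncol=4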

Mirrored Volume: CLI
vxassist -g diskgroup [-b] make volume_name length layout=mirror [nmirror=number]

The vxassist command normally waits for the mirrors to be synchronized before returning control, but if the –b argument is given, the sync will happen in the background.

Concatenated and mirrored:
vxassist -g datadg make datavol 5g layout=mirror

Specify three mirrors:
vxassist -g datadg make datavol 5g layout=stripe,mirror nmirror=3

Run process in background:
vxassist -g datadg -b make datavol 5g layout=stripe,mirror nmirror=3


Mirrored Volume with Log: CLI
vxassist -g diskgroup [-b] make volume_name length layout=mirror logtype=drl [nlog=n]

logtype=drl enables dirty region logging
nlog=n creates n logs and is used when you want more than one log plex to be created.
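
For example, a sketch of a 5 GB mirrored volume with two DRL log plexes, synchronized in the background (names are illustrative):

vxassist -g datadg -b make datavol 5g layout=mirror logtype=drl nlog=2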


Estimating Volume Size: CLI

To determine the largest possible size for a volume:
vxassist -g diskgroup maxsize attributes
e.g. vxassist -g datadg maxsize layout=raid5

To determine how much a volume can expand:
vxassist -g diskgroup maxgrow volume
e.g. vxassist -g datadg maxgrow datavol


Displaying Volume Information: CLI
vxprint -g diskgroup [options]

-v|-p|-s|-d => Select only volumes, plexes, subdisks, or disks
-h => List hierarchies below selected records
-r => Display related records of a volume containing subvolumes
-t => Print single-line output records that depend upon the configuration record type
-l => Display all information from each selected record
-a => Display all information about each selected record, one record per line
-A => Select from all active disk groups
-e pattern => Show records that match an editor pattern
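
For example, a sketch combining the common options to show the full hierarchy of one volume in single-line record format:

vxprint -g datadg -ht datavol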


How Do Layered Volumes Work?

Volumes are constructed from subvolumes.
The top-level volume is accessible to applications.

Advantages
Improved redundancy
Faster recovery times

Disadvantages
Requires more VxVM objects.


The Four Types of Mirroring in VxVM:

mirror-concat (non-layered):
- The top-level volume contains more than one plex (mirror)
- Plexes are concatenated

mirror-stripe (non-layered):
- The top-level volume contains more than one plex (mirror)
- Plexes are striped

concat-mirror (layered):
- The top-level volume is a concatenated plex
- Subvolumes are mirrored

stripe-mirror (layered):
- The top-level volume is a striped plex
- Subvolumes are mirrored


Creating Layered Volumes

VEA:
In the New Volume Wizard, select Concatenated Mirrored or Striped Mirrored as the volume layout.

CLI:
vxassist -g diskgroup make volume_name size layout={stripe-mirror|concat-mirror}

To create simple mirrored volumes (nonlayered), you can use:
layout=mirror-concat
layout=mirror-stripe


Viewing Layered Volumes
vxprint -rth volume_name
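
Putting creation and inspection together, a sketch that builds a layered stripe-mirror volume and then examines it (sizes and names are illustrative):

vxassist -g datadg make datavol 10g layout=stripe-mirror ncol=3 nmirror=2
vxprint -g datadg -rth datavol   # subvolumes appear as sv records beneath the top-level plex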


Removing a Volume

VEA:
Select the volume that you want to remove.
Select Actions -> Delete Volume

CLI:
vxassist -g diskgroup remove volume volume_name

or

vxedit -g diskgroup -rf rm volume_name

Tuesday, June 28, 2005

technorati tags

I've been using technorati tags in an effort to create categories.

But, it is very slow to update and it doesn't look to be giving me what I'm looking for.

I'll be looking for something else.

What I want is to be able to put topics together and be able to reference them at some later date. For example, I do some funky kernel thing and then six months later need to repeat the process. Instead of reinventing the wheel, I'll scan my category links first.

The technorati tags looked to be giving that to me at first, but I'm still waiting for it to update with some of my posts.

hmmm

Monday, June 27, 2005

Random Musings (i.e. whining and moaning)

I'm looking to relocate to NC from VA.

I've been focused on Raleigh, but things have been slow. I'm thinking this is for two reasons.

#1 Raleigh is a big AIX and Linux town (as is probably obvious, I'm an HP-UX guy who can work with Sun)

#2 I was/am an idiot that wasted about 7 weeks with my current head-hunter (contracting agency) thinking they could place me down there quicker than I could find a position.

I say idiot in #2 because I know my current agency is a bottom feeder that charges the client a crazy overhead on my rate and won't give raises out of that overhead nor offer decent health coverage (eHealthInsurance rocks for private insurance (cheaper than some employee health coverage I've had in the past)). Also, for some stupid reason I've been patient with the NC rep I'd been referred to even though for the past seven weeks the dude has been to two out of state conferences (one was one week and the other two). Stupid.

I sort of blame my stupidity on my constant state of exhaustion that comes from my commute that, as of 4 months ago, takes 4 to 6 hours a day. Yes, I purposely live in the boonies (approx. 66 miles from my current jobsite), but the current cost of housing in VA has sent such a horde of people out my way that my 1.25 hour one-way commute has grown to a 2-3 hour commute (and that is at off hours (like 5 FREAKING am)).

On the plus side, my house has nearly tripled in value (on a mountain and on a dirt road no less). One of the many reasons for living out here was to be able to pay off my house early (I've got over 80% paid for).

On the sorta minus side, my wife and I have to sort through 15 years of accumulated stuff and get that thrown out or put in storage. Then we need to get the place painted and a few other things.

Now, here's where the good part comes in...

You see, NC has a few tech centers, and from the headhunters I've come to learn that the pay rate is just about equal to northern VA but with a cost of living at 85-95% the rate of the town I live in (and 69-79% of the town I work in (Arlington)). Also, the houses are about one third the cost (more so if comparing housing around the DC metro area). So, I'm looking to sell in VA and buy outright in NC. I was hoping to take a little while longer to get my house in order, but my commute is just killing me.

Note: I'm not some rich dude. It's just that during the boom days of the dot com era, instead of getting a mortgage on some $500-700K house like many of my old co-workers (who subsequently had to get a second job or go bankrupt after the dot com crash), I chose a very rural sub-$100K house that I could pay off quickly (living beneath one's means is a wise thing to do).

Anyway, I'm starting to look in Winston-Salem and Charlotte. I'm not keen on Charlotte as it has a crime rate index of nearly three times the national average (as a point of reference, Washington DC is five times the national average).

All of this leads me to a phone interview I had with a prospective employer. It was a tech interview with a Brit. I've worked with Brits in the past but never interviewed with one. He was pleasant enough, but gave me no feedback as to what he was looking for in the interview. He would ask a question and I'd answer it and he would say something like "yes, thank you" and move on to the next question. Beyond answering the specific questions that he posed to me, I had no clue if I was the kind of Unix admin he was looking for. I was so flustered that after the interview I googled some of his questions to make sure that I did answer them correctly. If it had been an American (assuming that he wasn't a US citizen), I would have asked a few questions seeking to determine if I wasn't a fit and then thanked him for his time. I just didn't know what to make of the guy. To make matters worse, I had been suffering from a monster sinus headache that morning and I took a triple dose of decongestant. Well, that alleviated the pain, but left me all squirmy (glad it was a phone interview) and had me saying "uhh" or "err" after nearly every word during the interview.

It's been my experience that Indians conduct interviews in generally the same way (aggressive probing of specific command/scripting/programming syntax (almost a "gotcha" hunt) as opposed to the overall solutions to business needs the candidate brings to the table). Outside of specific ethnic groups, your average northern Virginia tech interview tends to be one of three: 1) cult of personality test, 2) a "gotcha" hunt, or 3) probing what solutions a candidate brings and/or bringing up common problems that the shop faces and asking the candidate how he would approach them.

This Brit guy, assuming that his was the way that most Brits conduct tech interviews, just rattled me with the lack of feedback.

Generic Unix links

Guide to cloning a SUN Blade 1000 drive running Solaris 8 on a second drive

Mirroring Disks with Solstice DiskSuite

CLI HP-UX Disk Mirroring

Including other volume groups in ignite

Kernel Parameters for Oracle 9.2.0 (64bit) on HPUX 11i (11.11)

HP-UX Listing Product and Serial Numbers (Also)

HP-UX Recurring ITRC questions

HP-UX Patch Assessment

HP-UX Performance Cookbook

HP-UX Memory issues (ninode)

HP-UX NFS Perf Tuning

HP-UX Host Intrusion Detection System Admins Guide

HP-UX Kernel Tuning and Perf Guide

Managing Systems and Workgroups: A Guide for HP-UX System Administrators

HP-UX Security Patch Check

Sunday, June 26, 2005

Managing Disks and Disk groups (getting into the useful stuff)


Since VxVM 3.2, you can use enclosure-based names to get away from OS-dependent pathing. So if you want to call a certain array “blue” and another one “red”, you can do so.

The following are reserved disk group names: bootdg, defaultdg, nodg.

If you’ve encapsulated your root disk, bootdg is an alias for the disk group containing the volumes that are used to boot the system.

“defaultdg” is an alias for the disk group that should be assumed if the –g option is not specified on a command.

By default, both bootdg and defaultdg are set to nodg.

A default disk group can be specified with: vxdctl defaultdg diskgroup


Disk Configuration Stages

1. Initialize the disk
2. Assign disk to a disk group
3. Assign disk space to volumes


Creating a Disk Group

You can add a single disk or multiple disks.
You cannot add a disk to more than one disk group
Disk media names must be unique within a disk group
Adding a disk to a disk group makes the disk space available for use in creating VM volumes.


Creating a Disk Group: VEA

Select Actions -> New Disk Group
Specify a name
Add at least one disk
Specify disk media names for the disks added
To add another disk: Actions -> Add Disk to Disk Group

Creating a Disk Group: vxdiskadm
“Add or initialize one or more disks”

Creating a Disk Group: CLI
Initialize disk(s):
vxdisksetup -i device_tag [attributes]
e.g. vxdisksetup -i c2t0d0

Initialize the disk group by adding at least one disk:
vxdg init diskgroup disk_name=device_tag
e.g. vxdg init newdg newdg01=c2t0d0

Add more disks to the disk group:
vxdg -g diskgroup adddisk disk_name=device_tag
e.g. vxdg -g newdg adddisk newdg02=c2t1d0
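
To verify the result, list the disk groups and disks (continuing the hypothetical newdg example):

vxdg list               # newdg should appear as imported
vxdisk -o alldgs list   # shows which disks now belong to newdg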


Viewing All Disks: VEA

In VEA, disks are represented under the Disks node in the object tree, in the Disk View window, and in the grid for several object types, including controllers, disk groups, enclosures, and volumes.

The status of a disk can be:
Not Initialized: The disk is not under VxVM control
Free: The disk is in the free disk pool; it is initialized by VxVM but is not in a disk group
Foreign: The disk is under the control of another host
Imported: The disk is in an imported disk group
Deported: The disk is in a deported disk group
Disconnected: The disk contains subdisks that are not available because of hardware failure
External: The disk is in use by a foreign manager, such as Logical Volume Manager


Viewing Disk Information: CLI

vxdisk -o alldgs list

In the output:
Status of online – disk is under VxVM control and is available for creating volumes
Status of online invalid – disk is not under VxVM control

Viewing Detailed Information: CLI

vxdisk -g diskgroup list disk_name
e.g. vxdisk -g datadg list datadg01


Viewing Disk Groups: CLI

Display imported disk groups only - vxdg list
Display all disk groups, including deported disk groups – vxdg -o alldgs list
Display free space – vxdg free


Creating a Non-CDS Disk and Disk Group

To initialize a disk as a sliced disk:
vxdisksetup -i device_tag format=sliced

To initialize a non-CDS disk group:
vxdg init diskgroup disk_name=device_tag cds=off


Before Removing a Disk

Either move the disk to the free disk pool or return disk to an uninitialized state.
You cannot remove the last disk in a disk group, unless you destroy the disk group.
Before removing a disk, ensure that the disk does not contain needed data.


Evacuating a Disk

Before removing a disk, you may need to evacuate its data to another disk.

VEA:
Select disk to be evacuated
Select Actions -> Evacuate Disk

vxdiskadm:
“Move volumes from a disk”

CLI:
vxevac -g diskgroup from_disk [to_disk]
e.g. vxevac -g datadg datadg02 datadg03

If the “to disk” is not specified, VxVM finds the space for you.


Removing a Disk from VxVM

VEA:
Select disk to be removed
Select Actions -> Remove Disk from Dynamic Disk Group

vxdiskadm:
“Remove a disk”

CLI:
vxdg -g diskgroup rmdisk disk_name
e.g. vxdg -g newdg rmdisk newdg02

vxdiskunsetup [-C] device_tag
e.g. vxdiskunsetup c0t2d0


Renaming a Disk

VEA:
Select disk to be renamed
Select Actions -> Rename Disk
Specify the original disk name and the new name

CLI:
vxedit -g diskgroup rename old_name new_name

Note:
The new disk name must be unique within the disk group
Renaming a disk does not automatically rename subdisks on the disk.


Deporting a Disk Group: VEA
Select Actions -> Deport Disk Group


Deporting a Disk Group: vxdiskadm
“Remove access to (deport) a disk group”


Deporting a Disk Group: CLI

Deport: vxdg deport diskgroup
Deport and rename: vxdg -n new_name deport old_name
Deport to a new host: vxdg -h hostname deport diskgroup


Importing a Disk Group: VEA
Select Actions -> Import Disk Group


Importing a Disk Group: vxdiskadm
“Enable access to (import) a disk group”


Importing a Disk Group: CLI
Import: vxdg import diskgroup
After import, start all volumes: vxvol -g diskgroup startall

To import and rename a disk group: vxdg -n new_name import old_name
To import and rename temporarily: vxdg -t -n new_name import old_name
To clear import locks: vxdg -tC -n new_name import old_name
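
Putting deport and import together, a sketch of handing a disk group from one host to another (hostnames are illustrative; the disks must be physically visible to the receiving host):

# on the current host:
vxdg -h hostb deport datadg

# on hostb:
vxdg import datadg
vxvol -g datadg startall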


Renaming a Disk Group:
VEA: Actions -> Rename Disk Group
CLI: rename by deporting or importing the disk group with the -n option, as described above.


Destroying a Disk Group:
VEA: Actions -> Destroy Disk Group
CLI: vxdg destroy diskgroup

Veritas Install Stuff

To add a license key – vxlicinst

License keys are installed in /etc/vx/licenses/lic

To view installed license key info – vxlicrep


To start/stop the VEA server manually – /etc/init.d/isisd start/stop/restart

To confirm that the VEA server is running – vxsvc -m

GUI - Veritas Enterprise Administrator – vea &

For disk specific actions – vxdiskadm

Command log file - /var/adm/vx/veacmdlog

Volume Manager RAID Levels

RAID is an acronym for Redundant Array of Independent Disks

RAID-0 – Striping (disk space is striped across two or more disks)

RAID-1 – Mirroring (data from one plex is duplicated on another plex to provide redundancy)

RAID-5 – Parity (RAID-5 is a striped layout that also includes the calculation of parity information, and the striping of that parity information across the disks. If a disk fails, the parity is used to reconstruct the missing data.)

RAID-0+1 – Mirror-stripe (disks are first striped (or plain concat) and then mirrored)

RAID-1+0 – Stripe-mirror (disks are first mirrored and then striped (or plain concat))

Volume Manager Storage Objects

Disk Groups:
A disk group is a collection of VxVM disks that share a common configuration. Disk groups are configured by the system administrator and represent management and configuration boundaries. VM objects cannot span disk groups.

Disk groups ease the use of devices in a high availability environment, because a disk group and its components can be moved as a unit from one host machine to another. Disk drives can be shared by two or more hosts, but can be accessed by only one host at a time.

Volume Manager Disks:
A Volume Manager (VxVM) disk represents the public region of a physical disk that is under Volume Manager control. Each VxVM disk corresponds to one physical disk. Each VxVM disk has a unique virtual disk name called a disk media name. VM uses the disk media name when assigning space to volumes. A VxVM disk is given a disk media name when it is added to a disk group.

Subdisks:
A subdisk is a set of contiguous disk blocks that represent a specific portion of a VxVM disk, which is mapped to a specific region of a physical disk. A subdisk is a subsection of a disk’s public region. A subdisk is the smallest unit of storage in VM.

Plexes:
A plex is a structured or ordered collection of subdisks that represent one copy of the data in a volume. A plex consists of one or more subdisks located on one or more physical disks.

Plex types:
Complete plex – A complete plex holds a complete copy of a volume and therefore maps the entire address space of the volume.

Sparse plex – A sparse plex is a plex whose length is less than the length of the volume, or that maps to only part of the address space of the volume.

Log plex – A log plex is a plex that is dedicated to logging. A log plex is used to speed up data consistency checks and repairs after system failure. RAID-5 and mirrored volumes typically use a log plex.

A volume must have at least one complete plex that has a complete copy of the data in the volume, with at least one associated subdisk. Other plexes in the volume can be complete, sparse, or log plexes. A volume can have up to 32 plexes; however, you should never use more than 31 plexes in a single volume, because Volume Manager requires one plex for automatic or temporary online operations.

Volumes:
A volume is a virtual storage device that is used by applications in a manner similar to a physical disk. A VxVM volume can be as large as the total of available, unreserved free physical disk space in the disk group. A volume is composed of one or more plexes.

Volume Manager Control

By default in VxVM 4.0 and later, when you place a disk under VxVM control, a cross-platform data sharing (CDS) disk layout is used. The CDS layout ensures that the disk is accessible on different platforms, regardless of the platform on which the disk was initialized.

A CDS disk consists of:

OS-reserved areas: The first 128K and the last two cylinders on a disk are reserved for disk labels, platform blocks, and platform-coexistence labels.

Private region: The private region stores information, such as disk headers, configuration copies, and kernel logs, and other platform-specific management areas that VxVM uses to manage virtual objects.

Public region: Represents the available space that VM can use to assign to volumes and is where an application stores data.

Comparing CDS and Sliced Disks

The sliced disk layout is still available in VxVM 4.0 and later, and is used for bringing the boot disk under VxVM control.

On platforms that support bringing the boot disk under VxVM control, CDS disks cannot be used for boot disks

Virtual Data Storage

Virtual Storage Management
Veritas VM creates a virtual level of storage management above the physical device level by creating virtual storage objects. The virtual storage object that is visible to users and applications is called a “volume”.

What is a volume?
A volume is a virtual object, created by VM, that stores data. A volume is made up of space from one or more physical disks on which the data is physically stored.

How do you access a volume?
Volumes created by VM appear to the OS as physical disks, and applications that interact with the volumes work in the same way as with physical disks.

Physical Data Storage

Reads and writes on unmanaged physical disks can be a slow process.

Disk arrays and multipathed disk arrays can improve I/O speed and throughput.

Disk array: A collection of physical disks used to balance I/O across multiple disks

Multipathed disk array: Provides multiple ports to access disks to achieve performance and availability benefits

Note: Throughout this course, the term “disk” is used to mean either disk or LUN. Whatever the OS sees as a storage device, VxVM sees as a disk.

Physical Disk Naming

VxVM parses disk names to retrieve connectivity information for disks. Operating systems have different conventions:

Solaris
/dev/[r]dsk/c#t#d#s#

HP-UX
/dev/[r]dsk/c#t#d#

AIX
/dev/hdisk#

Linux
SCSI –
/dev/sda[1-4] – primary partitions
/dev/sda[5-16] – logical partitions
/dev/sdb# - on the second disk
/dev/sdc# - on the third disk

IDE –
/dev/hda#, /dev/hdb#, /dev/hdc#

Physical Disk Structure

Physical storage objects:

  • The basic physical storage device that ultimately stores your data is the hard disk.
  • When you install your operating system, hard disks are formatted as part of the installation program.
  • Partitioning is the basic method of organizing a disk to prepare for files to be written to and retrieved from the disk.
  • A partitioned disk has a prearranged storage pattern that is designed for the storage and retrieval of data.

(The book then goes into the basic disk layouts that Sun, HP-UX, AIX, and Linux use.)


Fundamentals: Virtual Objects

Topic 1: Physical Data Storage

  • Identify the structural characteristics of a disk that are affected by placing a disk under VxVM control.

Topic 2: Virtual Data Storage

  • Describe the structural characteristics of a disk after it is placed under VxVM control.

Topic 3: VM Storage Objects

  • Identify the virtual objects that are created by VxVM to manage data storage, including disk groups, VxVM disks, subdisks, plexes, and volumes.

Topic 4: VM RAID levels

  • Define VxVM RAID levels and identify virtual storage layout types used by VxVM to remap address space.

Veritas Storage Foundation 4.1 Overview

Just finished Veritas’ Storage Foundation 4.1 class and I want to put the info on here for two reasons: 1) Quick reference for issues and 2) the cert test is based on this class.

I’ll need to come back in order to emphasize the cert topics.

There are two books given in the class. “Veritas Storage Foundation 4.1 for Unix: Fundamentals” goes over basic terminology and commands. “Veritas Storage Foundation 4.1 for Unix: Maintenance” is geared more for working with an established setup.

Saturday, June 18, 2005

Sun irritates me at times

Tasked with upgrading another sysadmin's app and Oracle boxes (app on a 15K domain and database on a v880) on short notice. Thus had no time for proper research.

Due to my unfamiliarity with the other sysadmin's system, I chose to do an upgrade instead of installing from a flar. This added significantly to my pucker factor, as upgrades have a reputation (undeserved with current versions?) for being problematic. Further, /opt was not on the bootdg and had to be moved.

Well, Veritas' upgrade_start went without a hitch. I then upgraded the OSs (off CD on v880 and jumpstart on 15K). I then ran Veritas' upgrade_finish and upon reboot the fun began.

The box failed into maint mode as it couldn't stat what vfstab indicated were the root partitions. I was able to look at vfstab read-only and see that Veritas' script incorrectly modified my vfstab.

Joy.

I booted off of CD and was able to correct my vfstab. On reboot I was able to get to run level 3, but my Oracle and app partitions were unmountable. So after spending a few hours with a competent Brit Veritas tech, we found the following problems:

1) Sun 9's upgrade install (I was going from 8 to 9) blew away my sd.conf. That left me able to stat only seven of my disks. I had saved off the sd.conf before starting on the advice of a co-worker. The funny thing is that I ribbed him for being overly paranoid. If I hadn't had the file on hand, I would have had to restore from NetBackup.

2) Veritas' vxfs Sun 8 binaries are incompatible with Sun 9. So, I had to pkgrm VRTSvxfs and then pkgadd VRTSvxfs in order to make it happy.

Note: none of the above was even hinted at in either Sun's or Veritas' documentation. Even the Veritas tech indicated that the script was known to be buggy (!!!).

HP-UX is SOOOOOO much better. Jumpstart is a pathetic, broken thing when compared to HP's Ignite. Further, I'd have no need for Veritas if I was using HP-UX.

And yet Sun is all over the place and HP keeps losing market share.

????