Volume Manager RAID-1 maintenance procedures for X2100

Hi all,
we are using the new Sun Fire X2100 with Solaris 10 and a Volume Manager RAID-1 configuration.
Configuring Mirroring with Volume Manager isn't complicated at all (apart from some tricks necessary on x86 systems).
Does anybody know how to properly handle maintenance situations? We would like to replace a faulty disk without restarting the system, switching it off, or dropping to single-user mode. Because the X2100's two drives are hot-swappable, this should be possible in principle, shouldn't it?
Thanks in advance
Regards,
gb

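A minimal sketch of the usual SVM hot-swap flow on Solaris 10, assuming the failing disk is c1t1d0, state replicas live on slice 7, and d10 is one of the affected mirrors (all names, including the cfgadm attachment point, are hypothetical; check metastat and cfgadm -al on your own box first):

metastat | grep -i maint          # find submirrors in "Needs maintenance"
metadb -d c1t1d0s7                # drop the state replicas on the failing disk
cfgadm -c unconfigure sata0/1     # offline the bay (attachment point name varies)
# physically swap the drive, then:
cfgadm -c configure sata0/1
prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2   # clone the label (x86 may need fdisk first)
metadb -a -c 2 c1t1d0s7           # recreate the replicas
metareplace -e d10 c1t1d0s0       # re-enable and resync each affected submirror slice
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0   # x86 boot blocks on GRUB releases, if it is the boot disk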

Similar Messages

  • Volume Manager in Solaris 10

    Hi All -
    I loaded the Solaris 10 OS on two separate disks, to be managed by RAID. The idea was that if the OS on one disk crashes, RAID will switch me to the other disk and I should be able to boot from it without major loss of operational time on the node. For such a scenario, do I need Veritas Volume Manager, or does Solaris 10 come with a default volume manager that will do this for me?
    Your help is greatly appreciated.
    Regards

    RTFM :-)
    http://docs.sun.com/app/docs/doc/816-4520
    alan
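    For the record, Solaris 10 ships with Solaris Volume Manager (SVM) in the base OS, and the doc above covers it. A minimal sketch of mirroring a root slice with SVM, with hypothetical disk and metadevice names:

    metadb -a -f -c 3 c0t0d0s7 c0t1d0s7   # state database replicas on both disks
    metainit -f d11 1 1 c0t0d0s0          # submirror over the existing root slice
    metainit d12 1 1 c0t1d0s0             # submirror on the second disk
    metainit d10 -m d11                   # one-way mirror of root
    metaroot d10                          # updates /etc/vfstab and /etc/system
    lockfs -fa && init 6                  # flush and reboot onto the mirror
    metattach d10 d12                     # attach the second half; resync starts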

  • WebStart Flash and Veritas Volume Manager

    I would like to use WebStart Flash for backup of the system disk. The goal is to be able to recover the system disk rapidly. It works perfectly for systems without Veritas Volume Manager on the system disk.
    However, if Veritas Volume Manager is installed and used for mirroring the root disk, the system is not able to boot using WebStart Flash. This is probably because the "private region" of the disk is not included in the flash archive.
    Does anybody have a solution for this, or does any of you successfully combine WebStart Flash and Veritas Volume Manager? I use JumpStart and the install_type is configured to flash_install.
    The question was also asked in the newsgroup comp.unix.solaris.
    Rgds,
    Henrik

    For many reasons, today you cannot save the VxVM private region information as an implicit part of a flash archive. The procedure would likely be to unencapsulate the root drive, create the flash archive, then re-encapsulate the root drive. This is an ugly procedure and may cause more pain than it is worth.
    When a root disk is encapsulated, an entry is put into the /etc/system file which says to use the VxVM or SVM logical volume for the rootdev rather than the actual device from which the system was originally booted. When you create a flash archive, this modification to /etc/system is carried along. But when you install it on a new system which doesn't have VxVM properly installed already (a chicken-and-egg problem), the change of the rootdev to the logical volume will fail. The result is an unbootable system (without using 'boot -a' and pointing to a different /etc/system file like /dev/null).
    The current recommended process is to use a prototype system which does not have an encapsulated root to create the flash archive.
    VxVM also uses the ELM license manager, which ties the VxVM runtime license to the hostid. This makes moving flash archives with VxVM to other machines impractical or difficult.
    The long-term solution would be to add logical volume management support to the JumpStart infrastructure. I'm not counting on this anytime soon :-(
    -- richard
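    For reference, the /etc/system change described above looks roughly like this on a VxVM-encapsulated root (the exact lines may vary by VxVM version), and it is what gets carried into the flash archive:

    * vxvm_START (do not remove)
    rootdev:/pseudo/vxio@0:0
    set vxio:vol_rootdev_is_volume=1
    * vxvm_END (do not remove)

    The 'boot -a' escape hatch works by answering /dev/null at the prompt for the system file, so the kernel ignores the stale rootdev entry.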

  • Veritas Volume Manager errors

    I am running a Sun Enterprise 450 (UltraSPARC) server with the Solaris 8 operating system. This past weekend, I received the following errors relating to Volume Manager on our disks.
    I need help in understanding what happened and how to fix it. The only thing I found was that 2-3 filesystems were over 90% full; I was able to have files removed and thus reduce the usage by 20%. But I would still like to know what happened and how to prevent these errors from happening again. I am new to storage management.
    "Failures have been detected by the VERITAS Volume Manager:
    failed disks:
    disk04
    failed plexes:
    vol02-02
    vol04-02
    vol06-P04
    These volumes are still usable, but the redundancy of those volumes is reduced. Any RAID-5 volumes with storage on the failed disk may become unusable in the face of further failures.
    VERITAS Volume Manager is preparing to relocate for diskgroup rootdg. Saving the current configuration in:
    /etc/vx/saveconfig.d/rootdg.060312_032923.mpvsh
    Relocation was not successful for subdisks on disk disk04 in volume vol04 in disk group rootdg. No replacement was made and the disk is still unusable."
    Please advise. Thanks.

    Out of the 11 or 12 disk drives on the system in question, disk #8 (c3t1d0) was labeled "unformatted" when I ran the format, analyze, and read commands. Does this mean the disk can be re-formatted or replaced? I will dig into the documentation in the meantime.
    This is the result I get when I run the vxprint -g rootdg -thf command:
    DG NAME NCONFIG NLOG MINORS GROUP-ID
    DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
    RV NAME RLINK_CNT KSTATE STATE PRIMARY DATAVOLS SRL
    RL NAME RVG KSTATE STATE REM_HOST REM_DG REM_RLNK
    V NAME RVG KSTATE STATE LENGTH READPOL PREFPLEX UTYPE
    PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
    SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
    SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
    DC NAME PARENTVOL LOGVOL
    SP NAME SNAPVOL DCO
    dg rootdg default default 0 1043163330.1025.cstep2
    dm disk01 c2t2d0s2 sliced 4711 35358848 -
    dm disk02 c2t3d0s2 sliced 4711 35358848 -
    dm disk03 c3t0d0s2 sliced 4711 35358848 -
    dm disk04 - - - - NODEVICE
    dm disk05 c3t2d0s2 sliced 4711 35358848 -
    dm disk06 c3t3d0s2 sliced 4711 35358848 -
    v vol01 - ENABLED ACTIVE 24576000 SELECT - fsgen
    pl vol01-01 vol01 ENABLED ACTIVE 24577792 CONCAT - RW
    sd disk01-01 vol01-01 disk01 0 24577792 0 c2t2d0 ENA
    pl vol01-02 vol01 ENABLED ACTIVE 24577792 CONCAT - RW
    sd disk02-01 vol01-02 disk02 0 24577792 0 c2t3d0 ENA
    v vol02 - ENABLED ACTIVE 15360000 SELECT - fsgen
    pl vol02-01 vol02 ENABLED ACTIVE 15361120 CONCAT - RW
    sd disk03-01 vol02-01 disk03 0 15361120 0 c3t0d0 ENA
    pl vol02-02 vol02 DISABLED NODEVICE 15361120 CONCAT - RW
    sd disk04-01 vol02-02 disk04 0 15361120 0 - NDEV
    v vol03 - ENABLED ACTIVE 15360000 SELECT - fsgen
    pl vol03-01 vol03 ENABLED ACTIVE 15361120 CONCAT - RW
    sd disk05-01 vol03-01 disk05 0 15361120 0 c3t2d0 ENA
    pl vol03-02 vol03 ENABLED ACTIVE 15361120 CONCAT - RW
    sd disk06-01 vol03-02 disk06 0 15361120 0 c3t3d0 ENA
    v vol04 - ENABLED ACTIVE 15360000 SELECT - fsgen
    pl vol04-01 vol04 ENABLED ACTIVE 15361120 CONCAT - RW
    sd disk03-02 vol04-01 disk03 15361120 15361120 0 c3t0d0 ENA
    pl vol04-02 vol04 DISABLED NODEVICE 15361120 CONCAT - RW
    sd disk04-02 vol04-02 disk04 15361120 15361120 0 - RLOC
    v vol05 - ENABLED ACTIVE 15360000 SELECT - fsgen
    pl vol05-01 vol05 ENABLED ACTIVE 15361120 CONCAT - RW
    sd disk05-02 vol05-01 disk05 15361120 15361120 0 c3t2d0 ENA
    pl vol05-02 vol05 ENABLED ACTIVE 15361120 CONCAT - RW
    sd disk06-02 vol05-02 disk06 15361120 15361120 0 c3t3d0 ENA
    v vol06 - ENABLED ACTIVE 15360000 SELECT - fsgen
    pl vol06-03 vol06 ENABLED ACTIVE 15360000 CONCAT - RW
    sv vol06-S01 vol06-03 vol06-L01 1 10781056 0 2/2 ENA
    sv vol06-S02 vol06-03 vol06-L02 1 4578944 10781056 1/2 ENA
    v vol06-L01 - ENABLED ACTIVE 10781056 SELECT - fsgen
    pl vol06-P01 vol06-L01 ENABLED ACTIVE 10781056 CONCAT - RW
    sd disk01-03 vol06-P01 disk01 24577792 10781056 0 c2t2d0 ENA
    pl vol06-P02 vol06-L01 ENABLED ACTIVE 10781056 CONCAT - RW
    sd disk02-03 vol06-P02 disk02 24577792 10781056 0 c2t3d0 ENA
    v vol06-L02 - ENABLED ACTIVE 4578944 SELECT - fsgen
    pl vol06-P03 vol06-L02 ENABLED ACTIVE 4578944 CONCAT - RW
    sd disk03-04 vol06-P03 disk03 30722240 4578944 0 c3t0d0 ENA
    pl vol06-P04 vol06-L02 DISABLED RECOVER 4578944 CONCAT - RW
    sd disk05-03 vol06-P04 disk05 30722240 4578944 0 c3t2d0 ENA
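    Given the NODEVICE state of disk04 above, the usual VxVM replacement flow is roughly the following - a sketch only, so verify each step against the docs for your VxVM version:

    vxdisk list                  # confirm disk04 shows as failed / NODEVICE
    # physically replace the drive and label it with format, then:
    vxdctl enable                # have vxconfigd rescan the devices
    vxdiskadm                    # choose "Replace a failed or removed disk"
    vxrecover -g rootdg -b       # resync the affected plexes in the background
    vxprint -g rootdg -ht        # plexes should return to ENABLED ACTIVE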

  • Kde-volume-manager .. cause manual click on eject on cdrom..

    ...doesn't work anymore...
    Is there a kde-volume-manager anywhere? A graphical UI for KDE users?
    Because gnome-volume-manager has the worst support for automount...
    It's silly that I want to eject my CD-ROM and can't manually click the eject button on the drive, because it's locked... <confused>
    Even subfs [submount] or supermount has this implemented...
    Any help?
    P.S. The rest (HAL & DBUS) works fine.


  • SVM equivalent of the Veritas Volume Manager "vxevac" command

    Hi All
    I am working on a major migration project where the servers are heterogeneous: some run Veritas Volume Manager and the rest Solaris Volume Manager.
    Migration is quite easy on the Veritas servers; using the "vxevac" command I can easily move my data to new LUNs.
    But I need to know the equivalent procedure in SVM...
    All servers run the latest Solaris 10.
    Quick reply is highly appreciated.
    Rgds
    Md

    Hello,
    I'm not an expert on volume management, but maybe these considerations that come to mind can help you improve your performance:
    1. The interlace size of the striping. You should adjust the stripe size to match the I/O requests made by the operating system or by the database management software (is the data accessed in raw mode?). For example, if the data access is through normal ufs access, the stripe size should match the block size of the file system.
    2. Are those disks on different controllers? Saturation of a controller, of the bus, etc. could slow down your I/O reads/writes.
    Bye,
    jmiturbe
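    As for the original question: there is no single SVM command equivalent to vxevac, but for mirrors the usual pattern is attach-then-detach. A sketch with hypothetical names (d10 = existing mirror, d11 = submirror on the old LUN, c3t0d0s0 = slice on the new LUN):

    metainit d12 1 1 c3t0d0s0   # build a submirror on the new LUN
    metattach d10 d12           # attach it and wait for the resync (watch metastat d10)
    metadetach d10 d11          # drop the half on the old LUN
    metaclear d11               # remove the old submirror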

  • Solaris Volume Manager or Hardware RAID?

    Hi - before I build some new Solaris servers I'd like thoughts on the following, please. I've previously built our Sun servers using SVM to mirror disks; one reason is that when I patch the OS I always split the mirrors beforehand, so in the event of a failure I can just boot from the untouched mirror - this method has saved my bacon on numerous occasions. However, we have just got some T4-1 servers that have hardware RAID, and although I like moving away from SVM / software RAID to hardware RAID, I'm now thinking I will no longer have this "backout plan" in the event of issues with the OS updates or otherwise, however unlikely.
    Can anyone please tell me if I have any other options?
    Thanks - Julian.

    Thanks - just going through the 300 page ZFS admin guide now. I want to ditch SVM as it's clunky and not very friendly whenever we have a disk failure or need to O/S patch as mentioned. One thing I have just read from the ZFS admin guide is that:
    "As described in “ZFS Pooled Storage” on page 51, ZFS eliminates the need for a separate volume
    manager. ZFS operates on raw devices, so it is possible to create a storage pool comprised of
    logical volumes, either software or hardware. This configuration is not recommended, as ZFS
    works best when it uses raw physical devices. Using logical volumes might sacrifice
    performance, reliability, or both, and should be avoided."
    So it looks like I need to destroy my hardware RAID as well and just let ZFS manage it all. I'll try that, amend my JET template, kick off an install, and see what it looks like.
    Thanks again - Julian.
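    One note on the backout plan: with a ZFS root pool, the split-mirror trick is usually replaced by Live Upgrade boot environments. A sketch, with a hypothetical BE name, patch path, and patch ID (check the luupgrade man page for the exact patch-list syntax):

    lucreate -n patched                                     # clone the current boot environment
    luupgrade -t -n patched -s /var/tmp/patches 118833-36   # patch the clone, not the live BE
    luactivate patched && init 6                            # boot the patched BE
    # if it misbehaves, luactivate the original BE and reboot to back out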

  • Veritas volume manager for solaris 10

    Hi All
    Which version of Veritas Volume Manager supports Solaris 10 6/06?
    Can you post a link for reference?
    Regards
    RPS

    Hello,
    We are currently using Solaris 9 with Veritas Volume Manager 3.5, so I would like to know whether I can still use 3.5 if I upgrade to Solaris 10 6/06.
    Using the Veritas (Symantec) support site, I have found the following document
    VERITAS Storage Solutions 3.5 Maintenance Pack 4 for Solaris
    http://seer.support.veritas.com/docs/278582.htm
    The latest supported version listed for VxVM 3.5 with MP4 applied is Solaris 9. That means the answer is NO.
    I understand that searching the Veritas knowledge base might be tough and time consuming, but it's their product ...
    Michael

  • Linux LVM (Logical Volume Manager) for CentOS on Azure?

    Hi. I am trying out Azure and installed an OpenLogic CentOS 6 virtual machine. I note that it is not running LVM (Logical Volume Manager) by default. I would like to ask if it is possible to:
    1. have CentOS Linux installed with LVM by default when creating a Linux virtual machine on Azure
    2. switch to LVM after adding a new disk
    On the other hand, is it a good idea to use LVM at all? Will it affect performance or features on Azure?
    Thanks.

    Hi,
    Based on my experience, you can add disks to an Azure VM and install the Logical Volume Manager to manage the disks attached to it. There is no Linux VM image with LVM installed by default; if you want one, please submit your requirement
    in Azure feedback:
    http://feedback.azure.com/forums/34192--general-feedback
    In addition, since you can have only one OS disk per Azure VM, that limitation may make a multi-disk LVM setup for the OS disk unworkable.
    Best regards,
    Susie
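    For completeness, a minimal sketch of putting an attached Azure data disk under LVM (the device name is an assumption - it is often /dev/sdc, but check dmesg or lsblk):

    pvcreate /dev/sdc                       # initialize the new data disk for LVM
    vgcreate datavg /dev/sdc                # volume group on it
    lvcreate -n datalv -l 100%FREE datavg   # one LV over all the space
    mkfs.ext4 /dev/datavg/datalv            # filesystem
    mount /dev/datavg/datalv /data          # add to /etc/fstab to persist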

  • Suggestion for dbus, hal. new gnome-volume-manager-version.

    Heya,
    I just installed dbus and hal from CVS and gnome-volume-manager from a tarball, and have some suggestions:
    dbus:
    the default during configure seems to be to detect whether you have the necessary things installed and, based on that, enable a component.
    To disable the qt-bindings add:
    --disable-qt
    to configure, else it will try to compile the qt-bindings, and without a libGL.la it won't work... (and this isn't in the Mesa package, or wasn't anyway, as far as I can tell).
    hal:
    could you add the option "--enable-fstab-sync" to configure in the new versions? It seems to be useful. Maybe it wasn't available earlier...
    gnome-volume-manager:
    I just upgraded to 0.9.9 If anyone wants to have the binary just tell me where to upload.
    greetz,
    Michel

    Michel wrote:
    Heya,
    I just installed dbus and hal from CVS and gnome-volume-manager from a tarball, and have some suggestions:
    dbus:
    the default during configure seems to be to detect whether you have the necessary things installed and, based on that, enable a component.
    To disable the qt-bindings add:
    --disable-qt
    to configure, else it will try to compile the qt-bindings, and without a libGL.la it won't work... (and this isn't in the Mesa package, or wasn't anyway, as far as I can tell).
    There are solutions on this forum to fix the libGL.la issue. The file will be included with future builds of xorg/xfree86, and the NVIDIA drivers should also provide it; it is only an issue for building.
    File a feature request to add this build option to the package.
    hal:
    could you add the option "--enable-fstab-sync" to configure in the new versions? It seems to be useful. Maybe it wasn't available earlier...
    If this package is in one of the three official repos, file a feature request.
    gnome-volume-manager:
    I just upgraded to 0.9.9. If anyone wants the binary, just tell me where to upload.
    If a package has just fallen out of date, allow up to about two weeks for the maintainer to upgrade it. You can flag the package as out of date via the web page; this is far better than offering it to people or uploading it somewhere.
    The flag-out-of-date feature is always a better option than cluttering the list with update requests, and the bug tracker is the best way to convey the build changes you want. A lot of the developers do not frequent this forum, but all are members of the bug tracker notification system.
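    In short, the build options discussed above come down to something like this (prefixes and other options omitted):

    ./configure --disable-qt          # dbus: skip the Qt bindings (avoids the libGL.la problem)
    ./configure --enable-fstab-sync   # hal: keep /etc/fstab in sync with hotplugged volumes
    make && make install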

  • How do you change volume permissions with Solaris Volume Manager?

    (Previously posted in "Talk to the Sysop" - no replies)
    I'm trying to set up Solaris 9 to run Oracle on raw partitions. I have my design nailed down and I have built all the raw partitions I need as soft partitions on top of RAID 1 volumes. All this is built using Solaris Volume Manager (SVM).
    However, all the partitions are still owned by root. Before I can create my Oracle database, I need to change the owner of the Oracle partitions to oracle:oinstall. The only reference I found telling me how to do this was in a Sun Blueprint and it essentially said "You can't change volume permissions directly or permanently using SVM and chown will only remain effective until the next reboot. To make the changes permanent, you must modify /etc/minor_perm". Unfortunately, I can't find an example of how to do this anywhere and the online man pages are not particularly helpful (at least not to me).
    I'd appreciate a quick pointer, either to a good online resource or, even better, a simple example. For background, the volumes Oracle needs to own are:
    /dev/md/rdsk/d101-109
    /dev/md/rdsk/d201-203
    /dev/md/rdsk/d301-303
    /dev/md/rdsk/d401-403
    /dev/md/rdsk/d501-505
    I provide this information because I'd like to assign some, but not all, of the devices under /dev/md/rdsk to the oracle user, and I was hoping some smart person out there could illustrate an approach using simple regular expressions, at which I'm horribly poor.
    Thanks in advance,
    Adrian
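    For what it's worth, minor_perm(4) entries take the form driver:minor_name mode owner group. A hedged sketch for one of the units above - the md minor-node names are an assumption, so confirm them with ls -lL /devices/pseudo/md@0:* before relying on this:

    # /etc/minor_perm additions (hypothetical minor names)
    md:0,101,raw  0600 oracle oinstall
    md:0,101,blk  0600 oracle oinstall

    The minor_name field only supports the * wildcard, not full regular expressions, so each unit generally needs its own pair of entries; run devfsadm (or reboot) afterwards so the permissions are reapplied.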


  • Unable to Initialize Volume Manager from a Configuration File

    I'd like to reattach a D1000 to a rebuilt system. The array contains a raid 5 partition that was built with Solaris Volume Manager (Solaris 9). Since it is the same system all controller/target/slice ids have not changed. I was able to restore the Volume Manager configuration files (/etc/lvm) from a tape backup and followed the instructions provided in the Solaris Volume Manager Administration Guide: How to Initialize Solaris Volume Manager from a Configuration File <http://docs.sun.com/db/doc/816-4519/6manoju60?a=view>.
    All of the state database replicas for this partition are contained on the disks within the array so I began by creating new database replicas on a local disk.
    I then copied the /etc/md.cf file to /etc/md.tab
    # more /etc/lvm/md.tab
    # metadevice configuration file
    # do not hand edit
    d0 -r c1t10d0s0 c1t11d0s0 c1t12d0s0 c1t8d0s0 c1t9d0s0 c2t10d0s0 c2t11d0s0 c2t12d0s0 c2t8d0s0 c2t9d0s0 -k -i 32b -h hsp000
    hsp000 c1t13d0s0 c2t13d0s0
    I then tested the syntax of the md.tab file (this output is actually from my second attempt).
    # /sbin/metainit -n -a
    d0: RAID is setup
    metainit: <hostname>: /etc/lvm/md.tab line 4: hsp000: hotspare pool is already setup
    Not seeing any problems I then attempted to recreate the d0 volume, but it fails with the error below:
    # /sbin/metainit -a
    metainit: <hostname>: /etc/lvm/md.tab line 3: d0: devices were not RAIDed previously or are specified in the wrong order
    metainit: <hostname>: /etc/lvm/md.tab line 4: hsp000: hotspare pool is already setup
    Any suggestions on how to reinitialize this volume would be appreciated.
    Thanks, Doug
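    One hedged observation: the "hotspare pool is already setup" message suggests hsp000 survived an earlier attempt, so clearing it may let the full run start clean (only while the RAID-5 volume itself is still unconfigured):

    metaclear hsp000     # remove the half-created hotspare pool
    /sbin/metainit -a    # retry building d0 and hsp000 from md.tab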


  • Proper procedure for patching from Single User mode

    Typically when I install a patch cluster from Sun, I do a sanity reboot from the console of the server using:
    shutdown -y -g0 -i6
    When the system comes back online, I log into the console again and then do:
    shutdown -y -g0 -i0 (to go into OBP)
    then
    boot -s (to go into single user mode)
    The procedure above was given to me from a Sun technician.
    Then I install the patch cluster and reboot. It has come to my attention that Sun recommends breaking any mirrors between your disks before patching. I wanted to know the best way to do this for both Veritas Volume Manager and Solaris Volume Manager. For Veritas Volume Manager, I was thinking of going into the vxdiskadm menu-driven utility and choosing the option "Remove a disk for replacement" for the rootmirror disk; then, after a reboot to check that the patches did not cause a problem, going back into vxdiskadm, choosing "Replace a failed or removed disk", and selecting the rootmirror, which should then automatically resync itself to the primary rootdisk. Any comments on whether this is a proper way to do it, or a better method if someone has one, would be welcome. I am assuming a system with just two internal disks: c1t0d0s2 and c1t1d0s2.
    Also, if anyone can comment on how to do this with Solaris Volume Manager or if it is required would be great also.
    Thanks much for any advice.

    Typically when I install a patch cluster from Sun, I
    do a sanity reboot from the console of the server
    using:
    shutdown -y -g0 -i6
    When the system comes back online, I log into the
    console again and then do:
    shutdown -y -g0 -i0 (to go into OBP)
    then
    boot -s (to go into single user mode)
    The procedure above was given to me from a Sun
    technician.
    Not a bad thing to check a reboot before patching, but I don't think it's in any official documentation that I'm aware of.
    Then I install the patch cluster and reboot. It has
    come to my attention that Sun recommends breaking any
    mirrors between your disks before patching.
    Again, I don't know if it's "official", but if you have a backup copy that you could boot from, it does reduce the possibility of critical problems from a bad patch.
    I wanted
    to know what is the best way to do this for both
    Veritas Volume Manager and Solaris Volume Manager.
    For Veritas Volume Manager, I was thinking of going
    into the vxdiskadm menu driven utility and choosing
    the option to "Remove a disk for replacement" for the
    rootmirror disk and then after a reboot to check that
    the patches did not cause a problem, go back into
    vxdiskadm and choose the option "Replace a failed or
    removed disk" and select the rootmirror which should
    then begin to automatically resync itself to the
    primary rootdisk. Any comments on if this is a proper
    way to do this or if someone has a better method, I
    would love to hear it. I am assuming a system with
    just two internal disks: c1t0d0s2 and c1t1d0s2
    Also, if anyone can comment on how to do this with
    Solaris Volume Manager, or if it is required, would be
    great also.
    Well, it'll work as you've described, but what if the patches fail? The disconnected mirror is not bootable. You'd have to go through an unencapsulation and other things from a CD.
    I've often simply pulled one side of the mirror while the machine was shut down. Since the mirror was valid prior to pulling, it will boot. If there's a problem, I shut down, swap disks, and boot from the untouched mirror. If there's no problem, I re-insert the disk, reattach it to the diskgroup, then recover the volumes.
    I don't know of any nice supported method of booting from an offline VxVM mirror that doesn't involve a very long series of steps. My method isn't supported, but it does work. If you have both disks in the machine at the same time, though, it'll update the private regions. Don't do that until you're ready to sync up one way or the other. Test before doing it in production.
    Darren
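    For the SVM half of the question, the equivalent of this approach can be sketched as follows, with hypothetical metadevice names (d10 = root mirror, d12 = submirror on the second disk):

    metadetach d10 d12   # split off the backout copy before patching
    # apply the patch cluster, reboot, and verify
    metattach d10 d12    # happy? reattach; the resync catches it up
    # unhappy? the detached submirror's disk can be booted instead, but note
    # the caveat above: it needs /etc/vfstab and /etc/system adjustments first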

  • Procedures for implementing a snapshot scenario with custom DataSources

    Hi Gurus,
    I have checked the How To paper ([How to Handle Inventory Management Scenarios in BW (NW2004)|http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/f83be790-0201-0010-4fb0-98bd7c01e328]). However, only SAP standard BW objects are mentioned in the paper e.g. InfoCube (0IC_C03), Material Stock InfoSource (2LIS_03_BX), Material movements IS (2LIS_03_BF) and Revaluations IS (0LIS_03_UM).
    On the contrary, I need to handle custom DataSources for the Snapshot scenario. Are there any differences in the implementation methodology? Which additional aspects should I take into consideration? For example, the load sequence, delta type, etc.
    Could you please list out the step-by-step procedures for such an implementation?
    Thanks in advance!
    Regards,
    Meng

    Hi Meng,
    You can approach this in two ways.
    1) If the volume of data is not much, you can derive the balance at query level, as follows.
    The user enters a date; based on this, restrict your key figure to display all values less than that date.
    2) If the volume of data is high, you will have performance issues if you calculate the balance in the front end. In this case, you can model this with a 'non-cumulative' key figure. Again, there are two ways of approaching this back-end solution based on the volume of data (say, in one case you have 2 years of history in your DSO and in the second case 5 years).
    A) For example, If there are only 2 years of history
    Create a non-cumulative key figure 'ZBALANCE' with inflow and outflow, in a cube.
    Map this to your credit and debit as + and - respectively, and map the calendar day to the posting date.
    Just initialise the data load with data transfer and start loading the deltas as normal.
    You will be able to see the balances for each and every calday in your reporting.
    This approach is straightforward and simple.
    Compress the cube to get better performance.
    B) If there are 5 years of history and you are not interested in loading all 5 years of data to get the balance:
    here you want the initial balance, continuing deltas, and 2 years of history loaded.
    The cube and non cumulative KF are created as mentioned above.
    For generating the initial balance, you have to create another DSO, without calendar day, with ZBALANCE mapped to credits and debits in additive mode. Load your DSO data into this new DSO to generate the initial balance. This balance will be loaded into your cube as the initial balance (like 2LIS_03_BX).
    You have to compress this request with marker update (a must).
    Load your historical data for 2 years from the original DSO. Compress without marker update (a must).
    Initialise without data transfer from the DSO to the cube and load deltas normally.
    Compress the delta requests normally for performance reasons.
    Please read the 'Inventory document' in detail.
    Please let me know, if any of the information is still not clear.
    Thanks,
    Krishnan

  • How Solaris Volume Manager sync submirrors

    HI Gurus,
    I have a question on how Solaris Volume Manager (SVM) does re-synchronization (in RAID 1). In other words, if one submirror was modified during the boot process, how does SVM detect it, and how does SVM decide which submirror is the good copy?
    One scenario I ran into: we had software installed that updated /etc/name_to_sysnum in a way that conflicts with the new Solaris 10 release, so the system could not boot any more (not even to single-user mode) after the software was installed. This box had its root disk mirrored. To fix this, we booted from CD-ROM, mounted the first mirror drive's root partition (c0t0d0s0), and removed the bad entry in /etc/system (we did not break the mirror to make the changes). Then the box was able to boot up. After the server was up, it was found that /etc/system had been rolled back with the bad entries; apparently it was synced back from the second submirror. So now the question is: how does SVM decide which submirror is invalid and should be re-synced from the good submirror?
    A second scenario I saw: someone accidentally added a root file system submirror into a zone as a file system. During the zone installation the system panicked and rebooted, and during reboot the system kept crashing. We managed to boot from the network, break the mirror (updating /etc/vfstab and /etc/system), and finally boot the system. So in case one submirror is accidentally accessed, how does SVM protect the data, and will corrupted data written to the disk slice be synced to the good submirror?
    Please share your thoughts and point me with some good references. I could not find related info in SVM doc.
    Thanks,
    Wei

    SVM doesn't "sync" disks, i.e. copy data from one disk to another, except when you're first setting up a mirror or replacing a disk.
    Those are the circumstances when it realises the disks are out of sync.
    Once it has a mirrored pair, it will keep them in sync, since all writes go to both sides.
    And reads take alternate blocks from both disks.
    So if the two sides of a mirror have gotten out of sync, you will see strange results, as half your content will come from one disk and half from the other - even inside a single file, assuming the file is bigger than the stripe size.
    So anything writing to one side of the mirror outside of SVM's control will corrupt things, and SVM has no mechanism for detecting this and "fixing" things up.
    So it's vital to break the mirror if you're going to be writing to the disk outside of SVM's control.
    If you're brave and the amount of change is small, you can try to edit both sides of the mirror.
    But you have to remember that SVM works at the block level, not the filesystem level.
    So if you do anything to make the two sides even slightly different - even something as minor as updating two files in a different order -
    then the layout of blocks on the disks in the two halves could differ, and you're screwed.
    So don't do it on any system you care about. It's really easy to make a mistake and the consequences are usually catastrophic.
    When in doubt, break the mirrors. It's the only safe way.
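    If one side is known to be stale (e.g. after an out-of-band edit), the usual way to make the good side authoritative is a detach/reattach, which forces a full resync from the surviving half - hypothetical names again:

    metadetach -f d0 d2   # drop the stale submirror
    metattach d0 d2       # reattach; SVM recopies everything from the good half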
