Emcli set_metric_promotion for aggregate_service

I am trying to promote a metric from generic_service to an aggregate_service. One can do this through the browser interface; is this also supported by emcli? If so, what is the syntax?
Here is an example of a simple attempt, including the command that successfully promotes the metric for an underlying generic_service:
emcli set_metric_promotion -name="DBDEVOEMOSService" -type=generic_service -category=Usage -basedOn=system \
-aggFunction=MAX -promotedMetricKey='Max Aggregate CPU (%)' -column=cpuUtil -metricName=Load -depTargets="hostA;hostB" -mode=CREATE -depTargetType=host
I have tried to use this syntax to promote the metric to an aggregate service:
emcli set_metric_promotion -name="DBDEVOEM" -type=aggregate_service -category=Usage -basedOn=system \
-aggFunction=COPY -promotedMetricKey='Max Aggregate CPU (%)' -column=Usage -metricName=UsageValue \
-depTargets="DEVOEMOSService" -mode=CREATE -depTargetType=generic_service
This returns an error:
Internal Error: The EM CLI system has encountered an internal error. Details have been added to the EM CLI log files.
Unfortunately, there isn't anything in the EM CLI log file, despite the error message saying that there is.
Any tips appreciated.

Thanks, but I tried '%' and it does not work for promoted metrics.
I tried changing the promoted metric key values via the console, i.e.:
- Click on the service created
- Click on "Monitoring Configuration"
- Click "Performance Metrics" and edit the metric.
The GUI does not allow you to enter values!
You have to select key values that have been collected so far.
However, it would be nice to define the values before we start collecting metric data, and also to be able to define wildcard selections.
I then tried creating the promoted metric via the command line using the verb "set_metric_promotion", setting the key value to '%'.
For example,
emcli set_metric_promotion -name=MY_SERVICE -type=generic_service -category=Performance -basedOn=system -aggFunction=SUM -promotedMetricKey=MyCounter -metricName=MyMetric -column=MyCounter -depTargetType=MyCustomPlugin -depTargetKeyValues=monitor_bld02_08:%;monitor_bld02_09:% -threshold=20;15;GE -mode=CREATE
This executes successfully, but when I view the promoted metric via the console there is an error message:
"ORA-01465: invalid hex number"
The other problem with this verb is that it doesn't seem to handle composite keys. The documentation only has syntax and examples for single key values.

Similar Messages

  • EMC training for RAC DBA

    I am looking for someone who can teach EMC storage administration to an Oracle RAC DBA

    Now, I'm not saying starting from a blank piece of paper; now that Clusterware has matured a bit, with no other limitations, I would choose to have Veritas in the mix as well.
    However, there are situations where it makes sense, whether for corporate standards or maybe even legacy reasons (perhaps, though I don't think it's a good idea, some other application is running there clustered with Veritas).
    My point was that it could be done - hey, choice is good, right?
    But yes, Veritas/Symantec must have been well miffed when they heard that Oracle were going to write their own clusterware.
    regards,
    jason.
    http://jarneil.wordpress.com

  • Emcli runCollection for Metric Extension

    Hi
    I'm testing EM12cR2 (12.1.0.2.0) on SPARC Solaris. Is it possible to manually run a collection using emctl control agent runCollection?
    I'm getting an error:
    ./emctl control agent runCollection DB_S2:oracle_database ME$CNT_BUFFER
    Oracle Enterprise Manager Cloud Control 12c Release 2
    Copyright (c) 1996, 2012 Oracle Corporation. All rights reserved.
    EMD runCollection error:
    ME:No metric collection named ME for oracle_database,DB_S2
    ./emctl control agent runCollection DB_S2:oracle_database "ME$CNT_BUFFER"
    Oracle Enterprise Manager Cloud Control 12c Release 2
    Copyright (c) 1996, 2012 Oracle Corporation. All rights reserved.
    EMD runCollection error:
    ME:No metric collection named ME for oracle_database,DB_S2
    My metric exists in SYSMAN tables...

    The answer is easy:
    I have to use \ before $
    ./emctl control agent runCollection DB_S2:oracle_database ME\$CNT_BUFFER
    Oracle Enterprise Manager Cloud Control 12c Release 2
    Copyright (c) 1996, 2012 Oracle Corporation. All rights reserved.
    EMD runCollection completed successfully
    :)
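    The root cause is ordinary shell expansion, not emctl itself: unescaped, `$CNT_BUFFER` is expanded by the shell (usually to an empty string), so the agent only ever sees `ME`. A minimal sketch of the two behaviors, using the collection name from above:

```shell
#!/bin/sh
# Make sure the illustration variable is not set in the environment.
unset CNT_BUFFER

# Escaped (or single-quoted): the '$' reaches the command literally.
escaped=ME\$CNT_BUFFER
quoted='ME$CNT_BUFFER'

# Unescaped: the shell substitutes the (unset) variable CNT_BUFFER,
# leaving just "ME" -- which is why the agent reported
# "No metric collection named ME".
unescaped=ME$CNT_BUFFER

echo "$escaped"     # ME$CNT_BUFFER
echo "$quoted"      # ME$CNT_BUFFER
echo "$unescaped"   # ME
```

    So `ME\$CNT_BUFFER` and `'ME$CNT_BUFFER'` are equivalent on the emctl command line; double quotes alone (`"ME$CNT_BUFFER"`) would not help, since `$` still expands inside them.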

  • Would like to know if this is correct disk configuration for 11.2.0.3

    Hello, please see the procedure below that I used to allow the Grid Infrastructure 11.2.0.3 OUI to
    recognize my EMC SAN disks as candidate disks for use with ASM.
    We are using EMC PowerPath for our multipathing, as stated in the original problem description. I want to know if this is a fully supported method for
    configuring our SAN disks for use with Oracle ASM, because this is Red Hat 6 and we do not have the option to use the ASMLib driver. Please note that I have
    been able to successfully install the Grid Infrastructure for a 2-node RAC cluster using this method. Please let me know if there are
    any issues with configuring disks this way.
    We have the following EMC devices, which have been created in the /dev directory. I will be using device emcpowerd1 as the disk for the ASM diskgroup I will be
    creating for the OCR and voting device during the grid install.
    [root@qlndlnxraccl01 grid]# cd /dev
    [root@qlndlnxraccl01 dev]# ls -l emc*
    crw-r--r--. 1 root root 10, 56 Aug 1 18:18 emcpower
    brw-rw----. 1 root disk 120, 0 Aug 1 19:48 emcpowera
    brw-rw----. 1 root disk 120, 1 Aug 1 18:18 emcpowera1
    brw-rw----. 1 root disk 120, 16 Aug 1 19:48 emcpowerb
    brw-rw----. 1 root disk 120, 17 Aug 1 18:18 emcpowerb1
    brw-rw----. 1 root disk 120, 32 Aug 1 19:48 emcpowerc
    brw-rw----. 1 root disk 120, 33 Aug 1 18:18 emcpowerc1
    brw-rw----. 1 root disk 120, 48 Aug 1 19:48 emcpowerd
    brw-rw----. 1 root disk 120, 49 Aug 1 18:54 emcpowerd1
    brw-rw----. 1 root disk 120, 64 Aug 1 19:48 emcpowere
    brw-rw----. 1 root disk 120, 65 Aug 1 18:18 emcpowere1
    brw-rw----. 1 root disk 120, 80 Aug 1 19:48 emcpowerf
    brw-rw----. 1 root disk 120, 81 Aug 1 18:18 emcpowerf1
    brw-rw----. 1 root disk 120, 96 Aug 1 19:48 emcpowerg
    brw-rw----. 1 root disk 120, 97 Aug 1 18:18 emcpowerg1
    brw-rw----. 1 root disk 120, 112 Aug 1 19:48 emcpowerh
    brw-rw----. 1 root disk 120, 113 Aug 1 18:18 emcpowerh1
    As you can see, the permissions by default are root:disk and are set at boot time. These permissions do not allow the Grid Infrastructure to recognize
    the devices as candidates for use with ASM, so I have to add udev rules to assign new names and permissions at boot time.
    Step 1. Use the scsi_id command to get the unique scsi id for the device as follows.
    [root@qlndlnxraccl01 dev]# scsi_id --whitelisted --replace-whitespace --device=/dev/emcpowerd1
    360000970000192604642533030434143
    Step 2. Create the file /etc/udev/rules.d/99-oracle-asmdevices.rules
    Step 3. With the scsi_id that was obtained for the device in step 1, create a new rule for that device in the /etc/udev/rules.d/99-oracle-asmdevices.rules file. Here is what the rule for that one device looks like:
    KERNEL=="sd*1", SUBSYSTEM=="block", PROGRAM="/sbin/scsi_id --whitelisted --replace-whitespace /dev/$name", RESULT=="360000970000192604642533030434143", NAME="asmcrsd1", OWNER="grid", GROUP="asmadmin", MODE="0660"
    ( you will need to create a new rule for each device that you plan to use as a candidate disk for use with oracle ASM).
    Step 4. Reboot the host for the new udev rule to take effect, then verify that the new device entry has been added to the /dev directory with the
    specified name, ownership and permissions required for use with ASM once the host is back online.
    Note: You will need to copy the /etc/udev/rules.d/99-oracle-asmdevices.rules file to all nodes in the cluster and restart them for the changes to
    take effect, so that all nodes see the new udev device name in their /dev directory.
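    Since steps 1-3 have to be repeated for every candidate disk, the rule lines can be generated instead of typed; a sketch only, where the list of SCSI IDs (here just the one captured above) is filled in by hand and the output is appended to the rules file:

```shell
#!/bin/sh
# Emit one udev rule per SCSI ID; device names become asmcrsd1, asmcrsd2, ...
# Capture each ID first with:
#   scsi_id --whitelisted --replace-whitespace --device=/dev/<device>
i=1
for id in 360000970000192604642533030434143; do
    printf 'KERNEL=="sd*1", SUBSYSTEM=="block", PROGRAM="/sbin/scsi_id --whitelisted --replace-whitespace /dev/$name", RESULT=="%s", NAME="asmcrsd%d", OWNER="grid", GROUP="asmadmin", MODE="0660"\n' "$id" "$i"
    i=$((i+1))
done
```

    Redirect the output into /etc/udev/rules.d/99-oracle-asmdevices.rules (the `$name` in the PROGRAM string is expanded by udev, not by the shell, which is why the format string is single-quoted).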
    You should now see the following device on the host.
    [root@qlndlnxraccl01 rules.d]# cd /dev
    [root@qlndlnxraccl01 dev]# ls -l asm*
    brw-rw----. 1 grid asmadmin 65, 241 Aug 2 10:10 asmcrsd1
    Step 5. When you reach the step in the OUI grid installation where you define your ASM diskgroup, choose
    external redundancy, then click on "Change Disk Discovery Path" and change the disk discovery path as follows:
    /dev/asm*
    At this point you will see the new disk name asmcrsd1 showing as a candidate disk for use with ASM.
    Please let us know if this is a supported method for our shared disk configuration.
    Thank you.

    Hi,
    I've seen this solution in a lot of forums but I don't like it at all, even if we have 100 LUNs of 73GB each.
    The thing is, as on any other Unix flavor, we don't have ASMLib, just EMC PowerPath running on different Unix/Linux flavors, and we don't like udev rules, dm-path and the like either.
    Try this as root user
    ls -ltr emcpowerad1
    brw-r----- 1 root disk 120, 465 Jul 27 11:26 emcpowerad1
    # mkdir /dev/asmdisks
    # chown oragrid:asmadmin /dev/asmdisks
    # cd /dev/asmdisks
    # mknod VOL1 b 120 465
    # chmod 660 /dev/asmdisks/VOL*
    repeat above steps on second node
    asm_diskstring='/dev/asmdisks/*'
    Talk with your sysadmin and storage admin guys to guarantee naming and persistence on all nodes of your RAC when using EMC PowerPath (even after a reboot or SAN migration).

  • Restore to a different node with RMAN and EMC Networker

    Please, really need help :)
    I took a hot backup using an RMAN catalog, which went to tape. Here is the script for that:
    connect target sys/****@PROD;
    connect rcvcat rman/***@catalog;
    run {
    allocate channel t1 type 'SBT_TAPE';
    send 'NSR_ENV=(NSR_SERVER="**",NSR_DATA_VOLUME_POOL=Oracle)';
    backup database plus archivelog;
    release channel t1;
    }
    My settings include control file autobackup as follows:
    RMAN> show all;
    RMAN configuration parameters are:
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 10 DAYS;
    CONFIGURE BACKUP OPTIMIZATION OFF; # default
    CONFIGURE DEFAULT DEVICE TYPE TO DISK;
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE 'SBT_TAPE' TO '/NMO_%F/';
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
    CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/data/backups/PROD/ora_df%t_s%s_s%p';
    CONFIGURE MAXSETSIZE TO UNLIMITED; # default
    CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
    CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/data/backups/PROD/hotbkp/snapcf_PROD.f';
    I am trying to restore this backup that is on tape (EMC Networker) to a different node. I can do this outside of Networker without any problems. However, when I try to get the backup from tape I have this issue:
    1. initPROD.ora is created and modified accordingly on a new server
    2. Export sid, startup nomount on new server
    export ORACLE_SID=PROD (on this new server)
    rman target /
    startup nomount
    3. I then run the following and get this error:
    RMAN> run {
    2> allocate channel t1 type 'SBT_TAPE';
    3> send 'NSR_ENV=(NSR_SERVER="***",NSR_CLIENT="********")';
    4> restore controlfile from autobackup;
    5> sql 'alter database mount';
    6> sql 'alter database rename file "/data/dbf/PROD/redo01.log" to "/data/scratch/dbf/PROD/redo01.log"';
    7> sql 'alter database rename file "/data/dbf/PROD/redo02.log" to "/data/scratch/dbf/PROD/redo02.log"';
    8> sql 'alter database rename file "/data/dbf/PROD/redo03.log" to "/data/scratch/dbf/PROD/redo03.log"';
    9> set until sequence 22; (I get this from the archive logs)
    10> set newname for datafile 1 to '/data/scratch/dbf/PROD/system01.dbf';
    11> set newname for datafile 2 to '/data/scratch/dbf/PROD/undotbs01.dbf';
    12> set newname for datafile 3 to '/data/scratch/dbf/PROD/sysaux01.dbf';
    13> set newname for datafile 4 to '/data/scratch/dbf/PROD/users01.dbf';
    14> restore database;
    15> switch datafile all;
    16> recover database;
    17> alter database open resetlogs;
    18> }
    allocated channel: t1
    channel t1: sid=149 devtype=SBT_TAPE
    channel t1: NMO v4.5.0.0
    Starting restore at 27-FEB-09
    released channel: t1
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of restore command at 02/27/2009 22:10:43
    RMAN-06495: must explicitly specify DBID with SET DBID command
    I cannot seem to get past this error. It indicates that the database has to be either mounted or the DBID set before the restore of the control file. Mounting the database before the restore of the control file does not work. Setting the DBID does not make sense at first, because if the primary database is down then I cannot get it, right? My primary is up now, so I was able to get the DBID, but setting it made the restore hang with the following:
    RMAN> run {
    connect target
    set dbid=466048808
    2> allocate channel t1 type 'SBT_TAPE';
    3> send 'NSR_ENV=(NSR_SERVER="***",NSR_CLIENT="********")';
    4> restore controlfile from autobackup;
    5> sql 'alter database mount';
    6> sql 'alter database rename file "/data/dbf/PROD/redo01.log" to "/data/scratch/dbf/PROD/redo01.log"';
    7> sql 'alter database rename file "/data/dbf/PROD/redo02.log" to "/data/scratch/dbf/PROD/redo02.log"';
    8> sql 'alter database rename file "/data/dbf/PROD/redo03.log" to "/data/scratch/dbf/PROD/redo03.log"';
    9> set until sequence 22; (I get this from the archive logs)
    10> set newname for datafile 1 to '/data/scratch/dbf/PROD/system01.dbf';
    11> set newname for datafile 2 to '/data/scratch/dbf/PROD/undotbs01.dbf';
    12> set newname for datafile 3 to '/data/scratch/dbf/PROD/sysaux01.dbf';
    13> set newname for datafile 4 to '/data/scratch/dbf/PROD/users01.dbf';
    14> restore database;
    15> switch datafile all;
    16> recover database;
    17> alter database open resetlogs;
    18> }
    connected to target database: PROD (not mounted)
    executing command: SET DBID
    using target database control file instead of recovery catalog
    allocated channel: t1
    channel t1: sid=152 devtype=SBT_TAPE
    channel t1: NMO v4.5.0.0
    Starting restore at 27-FEB-09
    channel t1: looking for autobackup on day: 20090227
    channel t1: looking for autobackup on day: 20090226
    channel t1: looking for autobackup on day: 20090225
    It looks like it cannot find the autobackup of the control file. But when I list backups while connected to the target db and the RMAN catalog, I see that an autobackup is included in every hot backup that we do:
    BS Key Type LV Size Device Type Elapsed Time Completion Time
    4639 Full 7.00M SBT_TAPE 00:00:04 27-FEB-09
    BP Key: 4641 Status: AVAILABLE Compressed: NO Tag: TAG20090227T150141
    Handle: /NMO_c-466048808-20090227-01/ Media:
    Control File Included: Ckp SCN: 23352682865 Ckp time: 27-FEB-09
    SPFILE Included: Modification time: 26-FEB-09
    I am stuck. What am I doing wrong? Thank you!
    Edited by: rysalka on Feb 28, 2009 8:35 AM

    Do a list command to see if the controlfile backup is on tape.
    RMAN> list backup of controlfile;
    I suggest you read the EMC Networker manual or call EMC about how to restore the database to another node. I am using Veritas NetBackup; it may be similar to EMC Networker, for your reference. There are a few things that need to be followed when you restore a database to another node:
    1) Set the DBID, which you have done.
    2) Set NB_CLIENT (in your case NSR_CLIENT) to the primary node from which the backup was taken.
    3) Configure the backup server to allow a redirected restore between the two nodes.
    Hope this helps.
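    Putting the reply's points together with the error stack above: SET DBID is a standalone command issued before the run block, so the control-file restore can locate the autobackup piece. A sketch only, not verified against Networker (the DBID is taken from the poster's backup listing; the masked Networker names stay placeholders). Note that because the autobackup format was configured to the non-default '/NMO_%F/' at backup time, it may need to be restated here as well:

```
# Run against the unmounted instance: rman target /
SET DBID 466048808;
RUN {
  ALLOCATE CHANNEL t1 TYPE 'SBT_TAPE';
  # NSR_CLIENT must name the node the backup was taken from.
  SEND 'NSR_ENV=(NSR_SERVER="***",NSR_CLIENT="********")';
  # Restate the non-default autobackup format used at backup time.
  SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE 'SBT_TAPE' TO '/NMO_%F/';
  RESTORE CONTROLFILE FROM AUTOBACKUP;
  RELEASE CHANNEL t1;
}
```

    Once the control file is restored and the database mounted, the rename/restore/recover steps from the poster's script would follow in a second run block.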

  • Messaging Server and Calendar Server Mount points for SAN

    Hi! Jay,
    We are planning to configure "JES 05Q4" Messaging and Calendar Servers on 2 v490 Servers running Solaris 9.0, Sun Cluster, Sun Volume Manager and UFS. The Servers will be connected to the SAN (EMC Symmetrix) for storage.
    I have the following questions:
    1. What are the SAN mount points to be setup for Messaging Server?
    I was planning to have the following on SAN:
    - /opt/SUNWmsgsr
    - /var/opt/SUNWmsgsr
    - Sun Cluster (Global Devices)
    Are there any other mount points that need to be on SAN for Messaging to be configured on Sun Cluster?
    2. What are the SAN mount points to be setup for Calendar Server?
    I was planning to have the following on SAN:
    - /opt/SUNWics5
    - /var/opt/SUNWics5
    - /etc/opt/SUNWics5
    3. What are the SAN mount points to be setup for Web Server (v 6.0) for Delegated Admin 1.2?
    - /opt/ES60 (Planned location for Web Server)
    Delegated Admin will be installed under /opt/ES60/ida12
    Directory Server will be on its own cluster. Are there any other storage needs to be considered?
    Also, is there a good document that walks through, step by step, how to install Messaging, Calendar and Web Server on a 2-node Sun Cluster?
    The installation document doesn't do a good job, or at least I am seeing a lot of gaps.
    Thanks

    Hi,
    There are basically two choices:
    a) Have local binaries on the cluster nodes (e.g. 2 nodes), which means there will be two sets of binaries, one on each node in your case.
    Then when you configure the software, you will have to point the data directory to a cluster filesystem, which need not necessarily be global but must be mountable on both nodes.
    The advantage of this method is that during patching and similar system maintenance activities the downtime is minimal.
    The disadvantage is that you have to maintain two sets of binaries, i.e. patch twice.
    The suggested filesystems can be e.g.
    /opt for local binaries
    /SJE/SUNWmsgr for data (used during configure option)
    This will mean installing the binaries twice.
    b) Have a single copy of the binaries on a clustered filesystem.
    This was the norm in the iMS 5.2 era, and Sun would recommend it, though I have seen type a) for iMS 5.2 as well.
    This means there should be no configuration files on the local fs; everything related to iPlanet goes on the clustered filesystem.
    I have not come across type b) post SUN ONE, i.e. 6.x. It seems 6.x has to keep some files on the local filesystem anyway, so b) is either not possible or needs some special configuration.
    So maybe you should try a).
    The sequence would be as follows.
    After the cluster framework is ready:
    1) Install the binaries on both sides
    2) Install the agent on one side
    3) Switch the filesystem resource to either node
    4) Configure the software with the clustered FS
    5) Switch the filesystem resource to the other node and useconfig of the first node.
    Cheers--

  • Any 10.2.0.3 ASM experience for dwh?

    We want to test and use ASM for a single-instance 10.2.0.3 database with Solaris 10 and EMC storage for our data warehouse environment. If you have any relevant experience, please advise on the questions below:
    1. any bugs you might have experienced on this release?
    2. any test experience you might advise checking, for both performance and availability of ASM?
    3. any init.ora parameters which may be critical for a dwh on ASM?
    4. how can the ASM instance be backed up? Neither export nor RMAN seems to be available.
    5. what is the best method to migrate xx TB of data from a Tru64 filesystem EMC source to a Solaris EMC ASM target environment?
    Thank you.

    1) We had bugs but with the DB and not with ASM. Others may have experienced bugs specific to ASM but I have not.
    2) I think you would want to look at your waits. See what your disk seek time is. Again, that goes back to doing what you would normally do whether it is ASM or not. The difference is that with ASM you need to run queries, or even use Grid Control, to see how it is performing. If you really want to do ASM, buy the new Oracle ASM book from Oracle Press.
    3) It's in the documentation. ASM has its own init.ora file.
    4) ASM is basically disk groups of disks. You want your DB to be backed up. Obviously there are things you can do if you lose a disk, but that's just as if a disk went bad. Worst-case scenario, you recreate your ASM instance, re-add your disks, and then restore your DB.
    5) For 5 you may want to look at Transportable Tablespaces and RMAN. GC gives you a GUI, but I don't think it will give you all the detail you need.
    I would recommend you read through the docs and buy that ASM book.

  • Can we execute multiple tasks using emcli

    Hi,
    I know we can execute any task on a target using emcli, but I'm wondering if we can execute multiple commands in one emcli execution.
    Also, does anyone have any idea how to pass target variables in an emcli command?
    Thanks,
    Ritu

    Hi Salman,
    Thanks for your reply, but I do not want to use deployment procedures, as those are meant for patching- and provisioning-related tasks.
    I want to automate health checks using EMCLI, and for that I can use the submit job verb of emcli.
    I just wanted to know: if I have multiple commands to pass to emcli, how do I do that?
    Regards,
    Ritu
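    For what it's worth, the usual workaround is a wrapper script that invokes emcli once per verb/target pair; a dry-run sketch (the target names are made up, and `get_targets` merely stands in for whatever health-check verb is actually used):

```shell
#!/bin/sh
# Dry run: print each emcli invocation instead of executing it.
# Swap 'echo emcli' for 'emcli' (after an 'emcli login') to run for real.
run() { echo emcli "$@"; }

for t in host_a host_b; do
    run get_targets -targets="$t:host"
done
```

    This keeps each verb a separate emcli execution, which is how the tool is designed to be driven; the script is the place where "multiple commands" live.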

  • Error mirroring across sites - Cluster 3.1, EMC Clariion

    Hi,
    Sun are here helping us build a 2-node cluster on V245s: Solaris 10, Solstice Volume Manager, Cluster 3.1, using EMC CX700s for disk storage. Everything appeared to be OK until we tried to sync the disk sets across sites. Some disk sets worked OK, others never got past 0% and gave the error: [Command Timeout on path /pci@1e,600000/pci@0/pci@2/emlx@0/fp@0,0 (fp2).
    I have been told it is a Clariion problem, but when I checked SunSolve I noticed there was a document ID 6524209 with the same error. Trouble is I cannot find a suggested fix.
    Can anyone help as this is our first cluster project?
    Thanks

    Just to let you know, I resolved the problem by trespassing the LUNs which were not resyncing to the other SP on the Clariion at the DR site.
    Some of the disk sets were OK, so I checked their config on the Clariion and noticed that the failed LUNs were on a different SP from the LUNs which were OK. I will need to speak to Dell/EMC to find out why they time out on SPB.

  • Geographic Cluster and zones

    Hello,
    I am attempting to set up a geographic cluster to fail over an Informix application to our disaster recovery (BCP) site. I have a Sun Fire V440 in each location running the Solaris 10 08/07 update. The application is currently running on a Solaris 8 02/04 server and must continue to do so. The catch is that the server at the BCP site is also used as a QA server. My thought was to create two Solaris 8 containers on the Solaris 10 server, one for failover from the home office and the other used for QA. At the home office site, the server would run one Solaris 8 container. We are using EMC SRDF for replication and storage of the Informix database. The container on the home office server would fail over to the BCP container on the server at the BCP site. My questions are: 1) Is this scenario possible? and 2) How would I configure the clustering on the servers? Should I be using the data services for Containers and for Informix? I have so far created one-node clusters at each site and was in the process of configuring the resource groups but was unsure how to proceed. Thanks for any help anyone can give.

    I work for the Sun Cluster Geographic Edition (SCGE) team so I hope I can give some definitive answers...
    First, I'm slightly confused as to whether this is a single cluster with geographically split nodes or two single-node clusters joined together with SCGE. From re-reading your posting it looks like it is the latter, which, although far from optimal, is possible. The point to make is that any failure on the primary site is probably going to have to be treated as a disaster. You will need to decide whether the primary site node will be back up any time soon and, if not, take over the service on the remote (DR) site. Once you've taken over the service, the reverse re-sync is going to be quite expensive. If you'd had a local cluster at the primary site, then only site failures would have forced this decision.
    Back to the configuration. You'll need to install single-node Solaris Clusters (3.2 01/09) at each site. You would then create an HA-container agent resource group/resource to hold your Solaris 8 brand zone, and put your Informix database in this container. You'd do the same at the remote site. Your options for storage are raw DID devices or a file system. You can't use SVM with SRDF yet, and I don't think there is a supported way to map VxVM volumes into the HA-container (though I may be wrong). Personally, I'd use UFS on a raw DID (or VxVM) in the global zone, mounted with forcedirectio, and map that into the HA-container. (http://docs.sun.com/app/docs/doc/820-5025/ciagbcbg?l=en&a=view)
    I don't know off-hand whether the Informix agent will run with an HA-container agent with a Solaris 8 brand container. I'll ask a colleague.
    If you need any more information, it might be more helpful to contact me directly at Sun. (First.Last)
    Regards,
    Tim
    ---

  • Failover Cluster Hyper-V Storage Choice

    I am trying to deploy a 2-node Hyper-V failover cluster in a closed environment. My current setup is 2 servers as hypervisors and 1 server as AD DC + storage server. All 3 are running Windows Server 2012 R2.
    Since everything is running on Ethernet, my choice of storage is between iSCSI and SMB 3.0.
    I am more inclined to use SMB 3.0, and I did find some instructions online about setting up a Hyper-V cluster connecting to an SMB 3.0 file server cluster. However, I only have budget for 1 storage server. Is it a good idea to choose SMB over iSCSI
    in this scenario (where there is only 1 storage server for the Hyper-V cluster)?
    What do I need to pay attention to in this setup, apart from some unavoidable single points of failure?
    In the SMB 3.0 file server cluster scenario that I mentioned above, they had to use SAS drives for the file server cluster (CSV). I am guessing in my scenario SATA drives should be fine, right?

    "I suspect that Starwind solution achieves FT by running shadow copies of VMs on the partner Hypervisor"
    No, it does not run shadow VMs on the partner hypervisor. StarWind is a product in a family known as 'software defined storage'. There are a number of solutions on the market. They all provide a similar service in that they allow for the
    use of local storage, also known as Direct Attached Storage (DAS), instead of external shared storage for clustering. Each of these products provides some method to mirror or 'RAID' the storage among the nodes of the software defined storage.
    So, yes, there is some overhead to ensure data redundancy, but none of this class of product will 'shadow' VMs on another node. Products like StarWind, DataCore, and others are nice entry points to HA without the expense of purchasing an external storage
    shelf/array of some sort, because DAS is used instead.
    1) "Software Defined Storage" is a VERY wide term. Many companies use it for solutions that DO require actual hardware to run on. Say, Nexenta claim they do SDS, yet they need separate physical servers running Solaris and their (Nexenta) storage app. Microsoft,
    whom we all love so much because they give us the infrastructure we use to make our living, also has Clustered Storage Spaces, which MSFT tells us is "Software Defined Storage", but it needs physical SAS JBODs, SAS controllers and fabric to operate. These are hybrid software-hardware
    solutions. More pure ones don't need any hardware, but they still share actual server hardware with the hypervisor (HP VSA, VMware Virtual SAN; oh, BTW, it does require flash to operate, so it's again not a pure software thing).
    2) Yes, there are a number of solutions, but the devil is in the details. Technically, the whole virtualization world is sliding away from the ancient approach of VM-run storage virtualization stacks to ones that are part of the hypervisor (VMware Virtual Storage Appliance replaced
    with VMware Virtual SAN is an excellent example). So, talking about Hyper-V, there are not many companies who have implemented VM-less solutions. Apart from the ones you've named, there is also SteelEye, and that's probably all (Double-Take cannot replicate running
    VMs effectively, so it cannot be counted). Running a storage virtualization stack as part of Hyper-V has many benefits compared to VM-run stuff:
    - Performance. Obviously, kernel-space DMA engines (StarWind) and a polling driver model (DataCore) are faster in terms of latency and IOPS compared to VM-run I/O all routed over VMBus and emulated storage and network hardware.
    - Simplicity. With native apps it's click and install. With VMs it's a UNIX management burden (BTW, who will update the forked-out Solaris the VSA is running on top of? Sun? Out of business. Oracle? You did not get your ZFS VSA from Oracle. Who?) and there is always a "chicken and
    egg" issue: the cluster starts, it needs access to shared storage to spawn VMs, but the VMs are inside a VSA VM that itself needs to be spawned. So first you start the storage VMs, then make them sync (a few terabytes, maybe a couple of hours to check access bitmaps for the volumes),
    and only after that can you start your other production VMs. Very nice!
    - Scenario limitations. You want to implement a CSV for Scale-Out File Servers? You cannot use HP VSA or StorMagic, because the SoFS and Hyper-V roles cannot mix on the same hardware. To surf the SMB 3.0 tide you need native apps or physical hardware behind it.
    That's why the current virtualization leader, VMware, has clearly pointed out where these types of things need to run: side by side with the hypervisor kernel.
    3) DAS is not only cheaper but also faster than SAN and NAS (obviously). Sure, there's no "one size fits all", but unless somebody needs a) very high LUN density (Oracle or a huge SQL database, or maybe SAP) and b) very strict SLAs (a friendly telecom company
    we provide Tier 2 infrastructure for runs cell phone stats on EMC, $1M for a few terabytes; the reason is that the EMC people keep FOUR units like that marked as spares and are required to replace a failed one in less than 15 minutes), there's no point deploying a hardware
    SAN / NAS for shared storage. SAN / NAS is a sustaining innovation and Virtual SAN is a disruptive one. The disruptive comes to replace the sustaining for 80-90% of business cases, leaving the sustaining technology to niche deployments. Clayton Christensen's "Innovator's Dilemma".
    Classic. More here:
    Disruptive Innovation
    http://en.wikipedia.org/wiki/Disruptive_innovation
    So I would not consider Software-Defined Storage a poor man's HA, or usable for Test & Development only. The technology has been ready for prime time for a long time. Talk to hardware SAN VARs if you have connections: how many stand-alone units did they sell to SMB
    & ROBO deployments last year?
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • ORACLEASM_SCANEXCLUDE parameter in ASMLib configuration

    Oracle grid version: 11.2.0.4
    Platform : Oracle Linux 6.4
    We are using EMC PowerPath for LUNs. We want to prevent oracleasm from scanning Linux's native multipath devices, whose names start with /dev/sd*.
    So, should we be setting ORACLEASM_SCANEXCLUDE="sd" or ORACLEASM_SCANEXCLUDE="/dev/sd" ?
    # /usr/sbin/oracleasm configure
    ORACLEASM_ENABLED=true
    ORACLEASM_UID=grid
    ORACLEASM_GID=asmadmin
    ORACLEASM_SCANBOOT=true
    ORACLEASM_SCANORDER="/dev/emcpower"   ------------> Scan only EMC Power devices
    ORACLEASM_SCANEXCLUDE="sd"   ------------------> this is to avoid scanning /dev/sd* paths generated by Linux's native multipaths
    ORACLEASM_USE_LOGICAL_BLOCK_SIZE="false"
    If we set ORACLEASM_SCANEXCLUDE="sd", then I am worried that when we add more LUNs and reach /dev/emcpowersd*, oracleasm might mistakenly exclude those PowerPath LUNs because of the ORACLEASM_SCANEXCLUDE="sd" setting. When I googled this, though, I always see it set as ORACLEASM_SCANEXCLUDE="sd", as in the link below.
    http://surachartopun.com/2009/06/check-device-asmlib-on-multi-path.html
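    For what it's worth, here's a minimal sketch of why "sd" should be safe. It is not the actual oracleasm code; the prefix-matching behaviour is my assumption based on how the init script globs device names. Each SCANEXCLUDE entry is matched against the beginning of the device name, so "sd" covers sda, sdb, and so on, but not a hypothetical /dev/emcpowersd, whose name starts with "emcpower", not "sd":

    ```shell
    # Illustrative only: mimics prefix matching of a SCANEXCLUDE entry
    # against a device basename.
    matches_prefix() {
      case "$1" in
        "$2"*) echo yes ;;   # device name starts with the pattern
        *)     echo no ;;    # no match
      esac
    }

    matches_prefix sda        sd    # -> yes (sda would be excluded)
    matches_prefix emcpowersd sd    # -> no  (emcpowersd would NOT be excluded)
    ```

    Under that assumption, "sd" only ever matches devices whose names begin with those two characters, so future emcpower* devices would not be caught.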

    Thank you Dude.
    BTW. I didn't get to choose ORACLEASM_SCANEXCLUDE when running the configure command as shown below.
    /etc/init.d/oracleasm configure -i
    Configuring the Oracle ASM library driver.
    This will configure the on-boot properties of the Oracle ASM library
    driver.  The following questions will determine whether the driver is
    loaded on boot and what permissions it will have.  The current values
    will be shown in brackets ('[]').  Hitting <ENTER> without typing an
    answer will keep that current value.  Ctrl-C will abort.
    Default user to own the driver interface []: grid
    Default group to own the driver interface []: asmadmin
    Start Oracle ASM library driver on boot (y/n) [n]: y
    Scan for Oracle ASM disks on boot (y/n) [y]: y
    Writing Oracle ASM library driver configuration: done
    1. How can I add the ORACLEASM_SCANEXCLUDE parameter for oracleasm?
    2. If I modify oracleasm parameters, will that affect a running Cluster or RDBMS? Do I need downtime?
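    Regarding question 1: `oracleasm configure -i` only prompts for the four values shown above. On the ASMLib installs I've seen, the scan parameters are set by editing the configuration file directly (the path below is an assumption; verify it on your system), e.g.:

    ```shell
    # Sketch; the exact path and restart behaviour should be checked
    # against your ASMLib version.
    # /etc/sysconfig/oracleasm
    ORACLEASM_SCANORDER="/dev/emcpower"
    ORACLEASM_SCANEXCLUDE="sd"
    ```

    After editing, the new values are typically picked up when the oracleasm service is restarted (e.g. `/etc/init.d/oracleasm restart`), which is worth factoring into question 2 about downtime.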

  • What is the best? NAS or SAN??

    Hi all,
    What do you think is best, NAS or SAN?
    Regards,
    Paulo Portugal.

    Effectively ... most of the configurations I've seen boil down to:
    - SAN provides LUNs on a closed network loop (using fibre)
    - NAS provides a file system on an open network loop (using GB Ethernet)
    Documentation on the differences and similarities is available from SAN and NAS vendors ... look at the EMC site for SAN and NetApp (Network Appliance) for NAS.
    Personal preference is NAS for SME and workgroup RAC, and SAN for MLE (medium to large enterprise which can afford trained storage professionals)

  • So, what are we missing?

    I'm aware of the fact that Lenovo is new to the server market, but there are still some points I would really like to see from Lenovo.
    1)
    The company I work for also distributes IBM System x. The 1U dual-socket rack server is one of the best-selling System x servers. I would really like to see Lenovo announce a similar server (RD110?).
    2)
    Virtualisation is a hot topic at the moment. I'm noticing higher demand for VMware from SMB customers. One of the biggest costs of a server room is power and cooling. Customers are looking for more power-efficient solutions, and running several virtual servers on one (or more) physical server(s) is a solution to this issue. Lenovo should offer VMware OEM options as Lenovo options, like all other server vendors do.
    3)
    The demand for external storage is rising rapidly. Offices are going more digital and new laws are pushing end users to use more storage. And it's not only affecting larger companies but also the SMB market.
    Lenovo is the only server vendor who can't offer external storage devices. A customer will most likely choose a vendor who can offer both servers and storage.
    IBM uses LSI OEM (DS3000 series) and DELL uses EMC OEM for their low-end storage offerings. (By the way: it's very likely that DELL's contract with EMC will not be continued.)
    In my opinion, Lenovo should find an OEM storage partner, so we can offer Lenovo-branded low-end external disk storage.
    As mentioned at the beginning of this message, I'm aware that Lenovo is new to the server market and that it's not possible to announce a fully lined-up series of servers and options right at the start. But if Lenovo really wants a portion of the server market, the issues mentioned should be addressed.
    What's your opinion about this, or what would you like to see from Lenovo?
    Met vriendelijke groet, kind regards,
    Rob Vermelis @ Copaco Nederland - Dutch Lenovo Distributor

    ericjmail,
    you make some great points!
    if you don't mind, i'd like to address a few of them below...
    2.  all of the ThinkServer models support 2048x1536 @ 32-bit color over VGA.   it only takes 12.58 MB of video RAM to drive this resolution.   all ThinkServers have at least 16 MB of on-board video memory.   short of installing a secondary video card to drive 2560x1600 over DVI-D, you should have plenty of resolution in a native system -- especially for a server.
    4.  server 2008 standard and enterprise editions are available options.   they're listed in all of the ThinkServer datasheets found here.   option part numbers can be found here.
    7.  the rails are available, albeit in a slightly obscure way.   i've posted information about obtaining them here.
    a few questions i have...
    3.  have you checked your currently-installed drivers in device manager to see if they match the versions posted to the website?   it's possible that these drivers are already installed, hence the message you're seeing.
    5.  how many drives would you like to have?   10?   how is your current RAID topology set up?   if given more drives, how would you like it to be set up?
    6.  would you prefer 2.5" drives over 3.5" or would you prefer the option to install 2.5" drives in a 3.5" bay?
    thank you for your candid feedback.
    ThinkStation C20
    ThinkPad X1C · X220 · X60T · s30 · 600

  • Database Crash - Media Recovery Required - [SOLVED]

    I am trying to understand why media recovery was required when our database crashed (this was 2 months ago). This is just for my understanding and I would appreciate it if you could explain it to me.
    We have a 10g database on Solaris 10 with an EMC SAN for storage. A technician came in to look at some electrical issue and accidentally turned off the power supply to the SAN.
    This caused the database instance to crash.
    When the DBA tried to start up the database, he got errors saying that some datafiles were corrupted and concluded that media recovery was required.
    He restored the backup taken 12 hours earlier and applied the archived redo logs. Everything was fine except for the downtime (the whole process took 6 hours).
    I have several questions:
    1. why would the datafiles get corrupted? was the disk in the process of writing and could not complete the task?
    2. why did the database not do crash recovery i.e use redo logs? why did we have to use backup?
    Thanks.
    Message was edited by:
    sayeed

    1. why would the datafiles get corrupted? was the disk in the process of writing and could not complete the task?
    This can happen when a datafile is being written to at the moment of the power failure; the incomplete write leaves the file corrupted. It can happen to any file during a power failure; in this case it was a datafile, which is what brought the database down.
    2. why did the database not do crash recovery i.e use redo logs? why did we have to use backup?
    Crash recovery using the online redo logs assumes the datafiles themselves are intact. Since a file was corrupted, it had to be replaced from the last backup, and the archived logs applied to bring it to a consistent state.
    Again, this is what I have understood so far.
    Correct me if I am wrong.
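    A rough sketch of that recovery path in SQL*Plus terms (illustrative only; the file number is hypothetical and would come from v$recover_file, and the actual restore step depends on your backup tool):

    ```sql
    -- After the crash, the instance mounts but won't open:
    STARTUP MOUNT;
    SELECT file#, error FROM v$recover_file;  -- lists datafiles needing recovery
    -- Restore the affected datafile(s) from the 12-hour-old backup, then:
    RECOVER DATAFILE 4;      -- applies archived redo to roll the file forward
    ALTER DATABASE OPEN;
    ```

    This is why the process took hours: restoring the files and replaying 12 hours of archived redo both scale with the amount of data involved.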
