Shell scripts to monitor Data Guard

Hi All,
Please help me find shell scripts for monitoring Data Guard.
Thanks,
Mahi

Here is the shell script we use to monitor Data Guard; it sends mail if there is a gap of more than 20 archive logs.
#set Oracle environment for Sql*Plus
#ORACLE_BASE=/oracle/app/oracle ; export ORACLE_BASE
ORACLE_HOME=/oracle/app/oracle/product/10.2.0 ; export ORACLE_HOME
ORACLE_SID=usagedb ; export ORACLE_SID
PATH=$PATH:/oracle/app/oracle/product/10.2.0/bin
#set working directory. script is located here..
cd /oracle/scripts
#Problem statement is constructed in the MESSAGE variable
MESSAGE=""
#hostname of the primary DB.. used in messages..
HOST_NAME=`/usr/bin/hostname`
#who will receive problem messages.. DBAs' e-mail addresses separated with spaces
DBA_GROUP='[email protected] '
#SQL statements to extract Data Guard info from DB
LOCAL_ARC_SQL='select archived_seq# from V$ARCHIVE_DEST_STATUS where dest_id=1;\nexit'
STBY_ARC_SQL='select archived_seq# from V$ARCHIVE_DEST_STATUS where dest_id=2;\nexit'
STBY_APPLY_SQL='select applied_seq# from V$ARCHIVE_DEST_STATUS where dest_id=2;\nexit'
#Get Data Guard information into Unix shell variables.
#printf expands the \n, so sqlplus sees the query and the exit on separate lines.
LOCAL_ARC=`printf "$LOCAL_ARC_SQL\n" | sqlplus -S "/ as sysdba" | tail -2 | head -1`
STBY_ARC=`printf "$STBY_ARC_SQL\n" | sqlplus -S "/ as sysdba" | tail -2 | head -1`
STBY_APPLY=`printf "$STBY_APPLY_SQL\n" | sqlplus -S "/ as sysdba" | tail -2 | head -1`
#Allow 20 archive logs for transport and Apply latencies...
let "STBY_ARC_MARK=${STBY_ARC}+20"
let "STBY_APPLY_MARK=${STBY_APPLY}+20"
if [ $LOCAL_ARC -gt $STBY_ARC_MARK ] ; then
MESSAGE=${MESSAGE}"$HOST_NAME Standby -log TRANSPORT- error! \n local_Arc_No=$LOCAL_ARC but stby_Arc_No=$STBY_ARC \n"
fi
if [ $STBY_ARC -gt $STBY_APPLY_MARK ] ; then
MESSAGE=${MESSAGE}"$HOST_NAME Standby -log APPLY- error! \n stby_Arc_No=$STBY_ARC but stby_Apply_no=$STBY_APPLY \n"
fi
if [ -n "$MESSAGE" ] ; then
MESSAGE=${MESSAGE}"\nWarning: Data Guard error!\n"
#echo -e expands the \n escapes so the mail body has real line breaks
echo -e "$MESSAGE" | mailx -s "$HOST_NAME DataGuard error" $DBA_GROUP
fi
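A check like this is usually driven from cron; an illustrative crontab entry (the script path, log path and 30-minute schedule are assumptions):

```
# Run the Data Guard gap check every 30 minutes, keeping a log for troubleshooting
*/30 * * * * /oracle/scripts/dg_check.sh >> /oracle/scripts/dg_check.log 2>&1
```

Install it with crontab -e as the Oracle software owner so the script runs with the right environment.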

Similar Messages

  • Shell script to monitor the data guard

    Hi,
    Can anybody please provide shell scripts to monitor Data Guard in all scenarios, and to get a mail when a problem occurs in Data Guard.
    Thanks,
    Mahipal

    Sorry Mahi. Looks like all of the scripts I've got are for logical standbys and not physical. Have a look at the link ualual posted - easy enough to knock up a script from one or more of those data dictionary views. I just had a look on Metalink and there's what looks to be a good script in note 241438.1. It's definitely a good starting point.
    regards,
    Mark
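    One of those data-dictionary-view scripts can be knocked up in a few lines. A sketch of the shape: the sqlplus call is shown only as a comment, and the two sequence numbers are stubbed with sample values so the alert logic itself is runnable:

```shell
#!/bin/sh
# Sketch of a physical-standby gap check built on dictionary views.
# In a real script the numbers would come from sqlplus, e.g.:
#   echo "select max(sequence#) from v\$archived_log where dest_id=2;" | sqlplus -S "/ as sysdba"
LAST_SENT=120        # stub: highest sequence# shipped to the standby
LAST_APPLIED=95      # stub: highest sequence# applied on the standby
THRESHOLD=20         # allowable transport/apply latency, in logs
GAP=`expr $LAST_SENT - $LAST_APPLIED`
if [ $GAP -ge $THRESHOLD ]; then
    echo "ALERT: standby is $GAP logs behind the primary"
else
    echo "OK: standby is $GAP logs behind the primary"
fi
```

    Swap the stubs for the real sqlplus calls and pipe the ALERT branch into mailx, as in the script at the top of this thread.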

  • Monitoring Data Guard with SNMP?

    I have configured Data Guard within two Oracle environments and have written a small Perl script which monitors the applied log service and sends an email if something fails to be applied.
    I am assuming this is not the most efficient way of monitoring the systems and would like to use SNMP.
    Can anyone tell me if it is possible to monitor Data Guard using SNMP (traps)? If so would you know what documents are available?
    Cheers!
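    On the script side, Net-SNMP's snmptrap CLI is the usual way to emit a trap from a shell or Perl monitor. A dry-run sketch - the manager host, community string, and enterprise OID are all placeholder assumptions:

```shell
#!/bin/sh
# Build (but do not send) a Net-SNMP v2c trap command that a Data Guard
# check could fire on failure. All values below are placeholders.
TRAP_HOST="nms.example.com"
COMMUNITY="public"
OID="1.3.6.1.4.1.99999.1"      # hypothetical enterprise OID for DG alerts
MSG="standby apply is lagging"
CMD="snmptrap -v 2c -c $COMMUNITY $TRAP_HOST '' $OID $OID.1 s \"$MSG\""
# Echo instead of executing, so the sketch runs without an SNMP manager:
echo "$CMD"
```

    Dropping the echo fires the real trap; the NMS then decides about paging and e-mail, which tends to scale better than mailing from every script.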

    Some of the parameters that you need to have with a physical standby database are:
    *.background_dump_dest='/ford/app/oracle/admin/xchbot1/bdump'
    *.compatible='9.2.0.7'
    *.control_files='/home30/oradata/xchange/xchbot1/control01.ctl','/home30/oradata/xchange/xchbot1/control02.ctl','/home30/oradata/xchange/xchbot1/control03.ctl'
    *.core_dump_dest='/ford/app/oracle/admin/xchbot1/cdump'
    *.db_block_buffers=1024
    *.db_block_size=8192
    *.db_file_multiblock_read_count=8# SMALL
    *.db_files=1000# SMALL
    *.db_name='xchbot1'
    *.global_names=TRUE
    *.log_archive_dest_1='LOCATION=/home30/oradata/xchange/xchbot1/archivelog'
    *.log_archive_dest_2='SERVICE=standby'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_format='arch_%t_%s.arc'
    *.log_archive_start=true
    *.log_buffer=16384000# SMALL
    *.log_checkpoint_interval=10000
    *.max_dump_file_size='10240'# limit trace file size to 5 Meg each
    *.parallel_max_servers=5
    *.parallel_min_servers=1
    *.processes=50# SMALL
    *.rollback_segments='rbs01','rbs02','rbs03','rbs04','rbs05','rbs06','rbs07','rbs08','rbs09','rbs10'
    *.shared_pool_size=67108864
    *.sort_area_retained_size=2048
    *.sort_area_size=10240
    *.user_dump_dest='/ford/app/oracle/admin/xchbot1/udump'

  • Using OEM to monitor Data Guard database

    Can someone please send me the link or docs on how to use OEM to monitor Data Guard? Specifically, I would like to use OEM to monitor and make sure logs are being applied to the standby site.
    Any ideas?

    Hello,
    I will extend the document on the Fast Failover feature one of these days.
    What you need to do is:
    For using Fast Failover, you need to configure the Data Guard observer process.
    The configuration of this process can be done by selecting the Fast-Start Failover Disabled link on the Data Guard page of the database (Primary or Standby).
    You will be automatically redirected to the Fast-Start Failover: Configure Page.
    From this page you can do the configuration of the DG observer process, that then will enable you to activate the Fast Failover feature.
    Regards
    Rob
    http://oemgc.wordpress.com
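    For reference, the same observer / fast-start failover setup can also be done outside OEM with the Data Guard broker command line. A sketch assuming a broker configuration already exists; the connect string is a placeholder:

```
DGMGRL> CONNECT sys@primary_db
DGMGRL> SHOW CONFIGURATION;
DGMGRL> ENABLE FAST_START FAILOVER;
DGMGRL> START OBSERVER;
DGMGRL> SHOW FAST_START FAILOVER;
```

    START OBSERVER should be run from a third host, not the primary or the standby - which is exactly the process the OEM Fast-Start Failover: Configure page sets up for you.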

  • Shell scripts to read data from a text file and to load it into a table

    Hi All,
    I have a text file consisting of rows and columns as follows,
    GEF001 000093625 MKL002510 000001 000000 000000 000000 000000 000000 000001
    GEF001 000093625 MKL003604 000001 000000 000000 000000 000000 000000 000001
    GEF001 000093625 MKL005675 000001 000000 000000 000000 000000 000000 000001
    My requirement is that I should read the first 3 columns of this file using a shell script and then insert the data into a table with 3 columns in Oracle.
    The whole application is deployed on Unix and the text file comes from a mainframe. I am working on the Unix side of the application and I can't access the data directly from the mainframe, so I am required to write a script which reads the data from a text file placed in a certain location and loads it into the Oracle database.
    So I can't use SQL*Loader.
    Please help me something with this...
    Thanks in advance.

    1. Create a directory object in Oracle and point it to the folder where your file resides
    2. Write a little procedure which opens the file in the newly created directory object using UTL_FILE and, inside a FOR LOOP, does INSERTs into the table you want
    3. Create a shell script that calls the procedure
    You can use the post in my Blog for such issues
    [Using Oracle UTL_FILE, UTL_SMTP packages and Linux Shell Scripting and Cron utility together|http://kamranagayev.wordpress.com/2009/02/23/using-oracle-utl_file-utl_smtp-packages-and-linux-shell-scripting-and-cron-utility-together-2/]
    Kamran Agayev A. (10g OCP)
    http://kamranagayev.wordpress.com
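    The same load can also be done entirely from the shell: cut the first three columns out with awk, generate INSERT statements, and feed the file to sqlplus. A runnable sketch; the table name MY_TABLE, its columns, and the sqlplus login are assumptions:

```shell
#!/bin/sh
# Build INSERT statements from the first 3 whitespace-separated columns.
# MY_TABLE and col1..col3 are placeholder names for the real target table.
cat > data.txt <<'EOF'
GEF001 000093625 MKL002510 000001 000000 000000 000000 000000 000000 000001
GEF001 000093625 MKL003604 000001 000000 000000 000000 000000 000000 000001
EOF
awk -v q="'" '{ printf "insert into MY_TABLE (col1, col2, col3) values (%s%s%s, %s%s%s, %s%s%s);\n", q, $1, q, q, $2, q, q, $3, q }' data.txt > inserts.sql
cat inserts.sql
# To actually load, append a commit and run the file through sqlplus:
#   echo "commit;" >> inserts.sql
#   sqlplus -S scott/tiger @inserts.sql
```

    This avoids both SQL*Loader and UTL_FILE, at the cost of one round trip per file.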

  • Shell Script Programming -- Loading data into table

    Hello Gurus
    I am using Oracle's SQL*Loader utility to load data into a table. Lately, I got an unusual scenario wherein I need to process the data file before loading it into the table, and that is where I need help from you guys.
    Consider the following data line
    "Emp", DOB, Gender, Subject
    "1",01/01/1980,"M","Physics:01/05/2010"
    "2",01/01/1981,"M","Chemistry:02/05/2010|Maths:02/06/2011"
    "3",01/01/1982,"M","Maths:03/05/2010|Physics:06/07/2010|Chemistry:08/09/2011"
    "4",01/01/1983,"M","Biology:09/09/2010|English:10/10/2010"
    Employee - 1 will get loaded as a single record in the table. But I need to put the Subject value into two separate fields in the table, i.e. Physics into one column and the date - 01/05/2010 - into a separate column.
    Here is where the big problem starts.
    Employee - 2 Should get loaded as 2 records into the table. The first record should have Chemistry as subject and date as 02/05/2010 and the next record should have all other fields same except the subject should be Maths and date as 02/06/2011. The subjects are separated by a pipe "|" in the data file.
    Similarly, Employee 3 should get loaded as 3 records. One as Maths, second as Physics and third as Chemistry along with their respective dates.
    I hope I have made my problem clear to everyone.
    I am looking to do something in shell scripting such that before finally running the sql*loader script, the above 4 employees have their records repeated as many times as their subject changes.
    In summary 2 problems are described above.
    1. To load subject and date into 2 separate fields in Oracle table at the time of load.
    2. If there exist multiple subjects, then the record is to be loaded as many times as there are subjects for that employee.
    Any help would be much appreciated.
    Thanks.

    Here are some comments. Perl can be a little cryptic but once you get used to it, it can be pretty powerful.
    #!/usr/bin/perl -w
    my $line_count = 0;
    open FILE, "test_file" or die $!;
    # Read each line from the file.
    while (my $line = <FILE>) {
        # Print the header if it is the first line.
        if ($line_count == 0) {
            chomp($line);
            print $line . ", Date\n";
            ++$line_count;
            next;
        }
        # Get all the columns (as separated by ',') into an array.
        my @columns = split(',', $line);
        # Remove the newline from the fourth column.
        chomp($columns[3]);
        # Read the fields (separated by pipe) from the fourth column into an array.
        my @subject_and_date = split('\|', $columns[3]);
        # Loop for each subject and date.
        foreach my $sub_and_date (@subject_and_date) {
            # Print value of Emp, DOB, and Gender first.
            print $columns[0] . "," . $columns[1] . "," . $columns[2] . ",";
            # Remove all double quotes from the subject and date string.
            $sub_and_date =~ s/"//g;
            # Replace ':' with '","' so subject and date become two quoted fields.
            $sub_and_date =~ s/:/","/;
            print '"' . $sub_and_date . '"' . "\n";
        }
        ++$line_count;
    }
    close FILE;

  • Shell script to monitor the application health deployed on weblogic

    Hi All,
    Is it possible to monitor the health of an application deployed on WebLogic and send a mail if it is not running? I have the JasperServer reporting application deployed on WebLogic. The shell script should check if the application is running; if it is down, a mail should be sent.
    Thanks

    You can use WLST to get information on the state of your deployments; for example,
    the following Jython script can be used:
    class DeploymentInfo:
         def __init__(self,name,target):
              self.name = name;
              self.target = target;
         def getName(self):
              return self.name;
         def getTarget(self):
              return self.target;
    print 'CONNECT TO ADMIN SERVER';
    connect('weblogic', 'transfer11g');
    print 'OBTAINING DEPLOYMENT INFORMATION';
    deploymentsInfo = [];
    applications = cmo.getAppDeployments();
    for application in applications:
         name = application.getName();
         target = application.getTargets()[0].getName();
         deploymentsInfo.append(DeploymentInfo(name, target));
    print 'CHANGE TO DOMAIN RUNTIME ENVIRONMENT';
    domainRuntime();
    print 'APPLICATION LIFE CYCLE INFORMATION';
    applicationRuntime = cmo.getAppRuntimeStateRuntime();
    for deploymentInfo in deploymentsInfo:
         state = applicationRuntime.getCurrentState(deploymentInfo.getName(), deploymentInfo.getTarget())
         print 'Application: ' + deploymentInfo.getName() + ', State: ' + state;
         if (state != 'STATE_ACTIVE'):
              startApplication(deploymentInfo.getName());
    In the example above the application gets started when it is not running. You can also
    send an e-mail by using the smtplib package, for example
    import smtplib;
    server = smtplib.SMTP('email-server-host');
    server.set_debuglevel(1);
    server.sendmail(fromaddress, toaddress, message);
    server.quit();
    More information on the package smtplib can be found here:
    http://www.jython.org/docs/library/smtplib.html

  • Data Guard monitoring Scripts.

    I am looking for scripts to monitor Data Guard. Can someone help me with this, please?
    Can someone help me with this please?
    Thanks in advance.

    These scripts are Unix specific:
    ## THIS ONE IS CALLED BY THE NEXT
    #!/bin/ksh
    # last_log_applied.ksh <oracle_sid> [connect string]
    if [ $# -lt 1 ]
    then
         echo "$0: <oracle_sid> [connect string]"
         exit -1
    fi
    oracle_sid=$1
    connect_string=$2
    ORACLE_HOME=`grep $oracle_sid /var/opt/oracle/oratab | awk -F":" '{print $2}'`
    export ORACLE_HOME
    LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
    export LD_LIBRARY_PATH
    PATH=$PATH:$ORACLE_HOME/bin
    export PATH
    ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
    export ORA_NLS33
    ORACLE_SID=$oracle_sid
    export ORACLE_SID
    ofile="/tmp/${oracle_sid}_last_log_seq.log"
    #### STANDBY SERVER NEEDS TO CONNECT VIA SYSDBA
    if [ ${connect_string:="NULL"} = "NULL" ]
    then
         $ORACLE_HOME/bin/sqlplus -s /nolog << EOF >tmpfile 2>&1
         set pagesize 0;
         set echo off;
         set feedback off;
         set head off;
         spool $ofile;
         connect / as sysdba;
         select max(sequence#) from v\$log_history;
    EOF
    #### PASS CONNECT STRING IN FOR PRIMARY SERVER
    else
         $ORACLE_HOME/bin/sqlplus -s $connect_string << EOF >tmpfile 2>&1
         set pagesize 0;
         set echo off;
         set feedback off;
         set head off;
         spool $ofile;
    select max(sequence#) from v\$log_history;
    EOF
    fi
    tmp=`grep -v '[^0-9 ]' $ofile | tr -d ' '`
    rm $ofile tmpfile
    echo "$tmp"
    # standby_check.ksh
    #!/bin/ksh
    export STANDBY_DIR="/opt/oracle/admin/standby"
    if [ $# -ne 1 ]
    then
         echo "Usage: $0: <ORACLE_SID>"
         exit -1
    fi
    oracle_sid=$1
    # Max allowable logs to be out of sync on standby
    machine_name=`uname -a | awk '{print $2}'`
    . $STANDBY_DIR/CONFIG/params.$oracle_sid.$machine_name
    user_pass=`cat /opt/oracle/admin/.opass`
    echo "Running standby check on $oracle_sid..."
    standby_log_num=`$STANDBY_DIR/last_log_applied.ksh $oracle_sid`
    primary_log_num=`$STANDBY_DIR/last_log_applied.ksh $oracle_sid ${user_pass}@${oracle_sid}`
    echo "standby_log_num = $standby_log_num"
    echo "primary_log_num = $primary_log_num"
    log_difference=`expr $primary_log_num - $standby_log_num`
    if [ $log_difference -ge $ALARM_DIFF ]
    then
         /bin/mailx -s "$oracle_sid: Standby is $log_difference behind primary." -r $FROM_EMAIL $EMAIL_LIST < $STANDBY_DIR/standby_warning_mail
         # Page the DBA's if we're falling way behind
         if [ $log_difference -ge $PAGE_DIFF ]
         then
              /bin/mailx -s "$oracle_sid: Standby is $log_difference behind primary." -r $FROM_EMAIL $PAGE_LIST < $STANDBY_DIR/standby_warning_mail
         fi
    else
         echo "Standby is keeping up ok ($log_difference logs behind)"
    fi
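    The standby_check.ksh above sources a params.$oracle_sid.$machine_name file that is not shown. An illustrative version, reconstructed from the variables the script actually uses (every value here is an assumption):

```shell
# CONFIG/params.usagedb.myhost - sourced by standby_check.ksh (illustrative)
ALARM_DIFF=10                      # e-mail the DBAs at this many logs behind
PAGE_DIFF=30                       # page the DBAs at this many logs behind
FROM_EMAIL="[email protected]"
EMAIL_LIST="[email protected]"
PAGE_LIST="[email protected]"
```

    A standby_warning_mail text file with the alert body is also expected in $STANDBY_DIR.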

  • Shell script directed to one Dynamic Dashboard to monitor status of all DB

    Hi team,
    straight to scenario now..
    I have 15 databases to manage. I wrote shell scripts for monitoring each database's status: ping, listener, VNC server, concurrent server, forms server, metric server, workflow, filesystem usage, alert log, etc.
    Now I get 15 mails, one per database, every half an hour, and my inbox is sure to fill up if many alerts fire at once. I thought of having a dashboard where my scripts' output is displayed as alerts on 3-D pie charts, bar charts, etc.
    Imagine all database statuses on one dashboard, with colours showing peaks and lows, and the data refreshing every 30 minutes.
    Charts would let me fix issues more easily; then I wouldn't care how many mails reach me - I'd look at the dashboard and see what went wrong.
    Please let me know of any third-party software, or Oracle or Linux tools, that can do this.
    I hope someone can give me a suitable solution.
    thanks
    dkoracle

    AFAIK Grid Control is completely free*, you just have to be careful to not go into the pages that require Management Pack licensing if you haven't purchased them for the DBs you're monitoring.
    Personally I have always used a combination of GC and shell script alerts. You don't want the GC environment to be your SPoF.
    *other than the associated server costs

  • Error reading data from Infocube using shell script.

    Dear all ,
    I am facing a problem while reading data from an infocube using a shell script.
    The details are as follows.
    One of the shell script reads the data from the infocube to extract files with the values.
    The tables used for extraction by the shell script are :
    from   SAPR3."/BIC/F&PAR_CUBE.COPA"     FCOPA,
           SAPR3."/BIC/D&PAR_CUBE.COPAU"    COPAU,
           SAPR3."/BIC/D&PAR_CUBE.COPAP"    COPAP,
           SAPR3."/BIC/D&PAR_CUBE.COPA1"    CCPROD,
           SAPR3."/BIC/D&PAR_CUBE.COPA2"    CCCUST,
           SAPR3."/BIC/D&PAR_CUBE.COPA3"    COPA3,
           SAPR3."/BIC/D&PAR_CUBE.COPA4"    COPA4,
           SAPR3."/BIC/D&PAR_CUBE.COPA5"    COPA5,
           SAPR3."/BIC/MCCPROD"      MCCPROD,
           SAPR3."/BIC/SCCPROD"      SCCPROD,
           SAPR3."/BIC/MCCCUSTOM"    MCCCUSTOM,
           SAPR3."/BIC/SCCCUSTOM"    SCCCUSTOM,
           SAPR3."/BIC/SORGUNIT"     SORGUNIT,
           SAPR3."/BIC/SUNIMOYEAR"   SUNIMOYEAR,
    /*     SAPR3."/BI0/SFISCPER"     SFISCPER, */
           SAPR3."/BI0/SREQUID"      SREQUID,
           SAPR3."/BI0/SCURRENCY"    SCURRENCY,
           SAPR3."/BIC/SSCENARIO"    SSCENARIO,
           SAPR3."/BIC/SSOURCE"      SSOURCE
    The problem is that the file generation by this script (after reading the data from the infocube) is taking an unexpected 2 hours, when it should take at most 10 minutes.
    I used RSRV to get the info about these tables for the infocube:
    Entry '00046174', SID = 37 in SID table is missing in master data table /BIC/MCUSLEVEL2
    Entry '00081450', SID = 38 in SID table is missing in master data table /BIC/MCUSLEVEL2
    and so on for SID = 39  and SID = 35 .
    Checking of SID table /BIC/SCUSLEVEL2 produced errors
    Checking of SID table /BIC/SCUSLEVEL3 produced errors
    Can you please let me know if this can be a reason of delay in file generation (or reading of data from the infocube).
    Also , Please let me know how to proceed with this issue.
    Kindly let me know for more information, if required.
    Thanks in advance for your help.
    -Shalabh

    Hi ,
    While continuing to search for a solution to the problem, I managed to note a difference in the partitioning of the fact table of the infocube.
    Using SE14 -> Storage Parameters, I could find the partition done for the fact table as :
    PARTITION BY: RANGE
    COLUMN_LIST: KEY_ABACOPA
    and subsequently there are partitions with data in it.
    I need to understand the details of these partitions.
    Do they correspond to the requests in the infocube? (That seems unlikely, as there are 13 requests in the infocube and many more partitions.)
    Most importantly, since this partitioning is observed for this infocube only and not for the others, could it be a reason for the SLOW RETRIEVAL of data from this infocube? (I am not sure, since partitioning is normally used to help fast retrieval of data from infocubes.)
    Kindly help.
    Thanks for your co-operation in advance.
    -Shalabh

  • Monitoring Shell script

    I want shell scripts for monitoring purposes (database, server, space monitoring) on Linux. How can I get the scripts?
    Do you know a link?

    Ramkrishna wrote:
    I want shell scripts for monitoring purposes (database, server, space monitoring) on Linux. How can I get the scripts? Do you know a link?
    If you talk about each part, there are many monitoring events. If you go for space monitoring, is it for trace files? Archive files? The FRA? ASM space? And so on.
    Your requirement should be clear, and according to that you have to design... or you can get sample scripts from Google.
    Thanks
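    As a concrete starting point for the space-monitoring part, a minimal sketch; the threshold and the mail address are assumptions, and the mailx call is left commented out:

```shell
#!/bin/sh
# Report any mounted filesystem above THRESHOLD percent full.
THRESHOLD=90
DBA_MAIL="[email protected]"      # placeholder address
ALERTS=`df -P | awk -v t=$THRESHOLD 'NR > 1 { use = $5; sub("%", "", use); if (use + 0 >= t) print $6 " is " $5 " full" }'`
if [ -n "$ALERTS" ]; then
    echo "$ALERTS"
    # echo "$ALERTS" | mailx -s "`hostname`: filesystem space alert" $DBA_MAIL
else
    echo "All filesystems below ${THRESHOLD}%"
fi
```

    df -P gives the portable (POSIX) output format, so the awk column positions are stable across Unix flavours.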

  • Dgutil.pl line 99 ERROR when creating 11.2.0.3 data guard via GRID CONTROL

    Oracle Grid Control 11g Release 1 (11.1.0.1.0)
    (64bit)
    Oracle Enterprise Linux Server release 5.4 x86_64
    Oracle Database 11.2.0.3 (64bit)
    1. Grid Control>Targets>Databases
    2.Click on pc01prmy (Primary)
    3.Availability > Data Guard > Setup and Manage
    4.Add Standby Database
    5.Create a new physical standby database.
    Fails with message:
    38706 at /u01/app/oracle/agent11g/sysman/admin/scripts/db/dg/dgutil.pl line 99.
    I can create a standby with a rman script without problems, but via the Grid Control page fails.
    Have you seen this before?

    I contacted Oracle Support on this issue; here are the details.
    Generic Note
    Hi Marcelo, the note you cited, Creating Standby Database With Enterprise Manager Failing [ID 1400482.1],
    says something a little different from what you tried.
    You did RECOVER DATABASE for manual recover and the error says use backup control file.
    For a standby to do manual recovery it's
    *alter database recover standby database ;*
    It may behave better.
    But the note implies to just apply enough redo to have the standby consistent enough to turn on flashback.
    This can be done by starting managed recovery and applying some redo log sequences.
    So the flashback being turned on is too soon, but should have worked anyway. I think it would turn on managed recovery though.
    This also happens since the job doesn't use dorecover since recovery can be done with managed recovery.
    So most likely there isn't enough activity on the primary and the online redo has not been archived yet.
    So some log switches on the primary will send enough redo to get consistent so you can turn on flashback and finish it.
    *So why did it happen?*
    Possibly
    Bug 13250486 - ADD STANDBY DATABASE FOR TARGET WITH FLASHBACK ON FAILS WITH ERROR
    Base bug 12923814 FLASHBACK AND ARL DELETION OPTIONS IGNORED IN ADD STANDBY DATABASE WIZARD
    fixed in Grid Control 12.1c
    There is currently no patch available for 11.1.0.1 Grid Control. We would have to open a bug to confirm you hit this in 11.1.0.1 and get a patch.
    *A workaround would be to turn off flashback at the primary then try to create the standby.*
    *Once the standby is created you can turn flashback back on for the primary and if required, the standby.*
    You can try again to see, or finish the standby manually as I stated above.
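    The workaround described above boils down to a handful of statements; a sketch of the sequence run from sqlplus as SYSDBA, not Grid Control's exact internal steps:

```sql
-- On the primary, before re-running the Add Standby Database wizard:
ALTER DATABASE FLASHBACK OFF;

-- After the standby is created, force some log switches so enough redo
-- reaches the standby to make it consistent:
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM SWITCH LOGFILE;

-- Then turn flashback back on for the primary (and the standby, if required):
ALTER DATABASE FLASHBACK ON;
```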
    I will still need the diagnostic information below.
    Action Plan
    =========
    The product version in this SR is 12.1.0.1 Grid Control. I assume it should be 11.1, since you stated it is.
    Please provide the Grid Control job log that shows the errors,
    and the alert logs from the primary and standby.
    Please run these two diagnostic scripts and upload the output.
    Script to Collect Data Guard Primary Site Diagnostic (Doc ID 241374.1)
    Script to Collect Data Guard Physical Standby Diagnostic (Doc ID 241438.1)
    Edited by: Marcelo Marques - ESRI on Mar 24, 2012 9:51 AM

  • Data Guard - insufficient privileges

    Spec:
    Windows 2008
    Oracle 11.1.0.2
    I continue to get an error when running my RMAN script below to create a Data Guard standby for a very small database.
    The error states that I don't have the privilege to write to the remote server - here are the script and the error:
    RMAN> run {
    2> allocate channel prmy3 type disk;
    3> allocate channel prmy4 type disk;
    4> allocate auxiliary channel stby1 type disk;
    5> duplicate target database for standby from active database
    6> spfile
    7> parameter_value_convert 'test1', 'test1'
    8> set 'db_unique_name'='test1_coop'
    9> set control_files='D:\oradata\test1\control01.ctl'
    10> set db_create_file_dest='D:\oradata\test1'
    11> set audit_file_dest='C:\app\diag\rdbms\test1'
    12> set diagnostic_dest='C:\app\diag\rdbms\test1'
    13> set db_create_online_log_dest_1='D:\oradata\test1'
    14> set db_recovery_file_dest='D:\Flash_Recovery_Area'
    15> set db_recovery_file_dest_size='5G'
    16> nofilenamecheck;
    17> }
    using target database control file instead of recovery catalog
    allocated channel: prmy3
    channel prmy3: SID=149 device type=DISK
    allocated channel: prmy4
    channel prmy4: SID=148 device type=DISK
    allocated channel: stby1
    channel stby1: SID=94 device type=DISK
    Starting Duplicate Db at 23-FEB-12
    contents of Memory Script:
    backup as copy reuse
    file 'C:\app\product\11.1.0\db_1\DATABASE\PWDtest1.ORA' auxiliary format
    'C:\app\product\11.1.0\db_1\DATABASE\PWDtest1.ORA' file
    'C:\APP\PRODUCT\11.1.0\DB_1\DATABASE\SPFILETEST1.ORA' auxiliary format
    'C:\APP\PRODUCT\11.1.0\DB_1\DATABASE\SPFILETEST1.ORA' ;
    sql clone "alter system set spfile= ''C:\APP\PRODUCT\11.1.0\DB_1\DATABASE\SPF
    ILETEST1.ORA''";
    executing Memory Script
    Starting backup at 23-FEB-12
    RMAN-03009: failure of backup command on prmy3 channel at 02/23/2012 18:43:29
    ORA-17629: Cannot connect to the remote database server
    ORA-17627: ORA-01031: insufficient privileges
    ORA-17629: Cannot connect to the remote database server
    continuing other job steps, job failed will not be re-run
    released channel: prmy3
    released channel: prmy4
    released channel: stby1
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of Duplicate Db command at 02/23/2012 18:43:29
    RMAN-03015: error occurred in stored script Memory Script
    RMAN-03009: failure of backup command on prmy4 channel at 02/23/2012 18:43:29
    ORA-17629: Cannot connect to the remote database server
    ORA-17627: ORA-01031: insufficient privileges
    ORA-17629: Cannot connect to the remote database server

    The parameter should be in the INIT file:
    remote_login_passwordfile='EXCLUSIVE'
    You mean copy the password file from the primary database? Yes! And rename it.
    I have an example - give me a second
    If my Primary password file is 'orapwRECOVER2'
    I would copy that to the Standby server and rename to orapwSTANDBY ( STANDBY would be your database name )
    My path for this is
    /u01/app/oracle/product/11.2.0.2/dbs
    It will be different on Windows.
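    A small sketch of that copy, assuming the 11.2 dbs path above and a placeholder standby hostname; the helper just builds the conventional orapw<SID> file name, and the actual scp line is shown commented:

    ```shell
    # Sketch: orapw_name builds the conventional password file name
    # orapw<SID> kept under $ORACLE_HOME/dbs. The path and the hostname
    # "standby-host" are placeholders; Windows paths differ.
    orapw_name() {
        echo "orapw$1"
    }
    DBS=/u01/app/oracle/product/11.2.0.2/dbs
    # Run from the primary server, renaming for the standby's SID:
    # scp "$DBS/$(orapw_name RECOVER2)" standby-host:"$DBS/$(orapw_name STANDBY)"
    ```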
    Are you doing an Active Duplicate?
    mseberg
    Edited by: mseberg on Feb 24, 2012 11:53 AM
    Overview RMAN DUPLICATE FOR STANDBY
    1. The Standby database needs a small INIT file for the Duplicate in most cases.
    2. You can either create a backup on the Primary and move it, or do an Active Duplication.
    3. The tnsnames.ora should hold entries for both the Primary and Standby databases on BOTH servers.
    4. Having a static listener entry for the Standby (which does not exist yet) is important; restart the listener afterwards.
    Tnsnames.ora Example
    Tnsnames.ora
    RECOVER2 =
        (DESCRIPTION =
          (ADDRESS =
             (PROTOCOL = TCP)
             (HOST = hostname)
             (PORT = 1521)
          )
          (CONNECT_DATA =
             (SERVICE_NAME = RECOVER2.hostname)
             (UR = A)
          )
        )
    RECLONE =
        (DESCRIPTION =
          (ADDRESS =
             (PROTOCOL = TCP)
             (HOST = hostname)
             (PORT = 1521)
          )
          (CONNECT_DATA =
             (SERVICE_NAME = RECLONE.hostname)
             (UR = A)
          )
        )
    Listener.ora Example
    BEFORE
    SID_LIST_LISTENER =
       (SID_LIST =
           (SID_DESC =
           (SID_NAME = PLSExtProc)
           (ORACLE_HOME = /u01/app/oracle/product/11.2.0.2)
           (PROGRAM = extproc)
           )
       )
    AFTER
    SID_LIST_LISTENER =
       (SID_LIST =
           (SID_DESC =
           (SID_NAME = PLSExtProc)
           (ORACLE_HOME = /u01/app/oracle/product/11.2.0.2)
           (PROGRAM = extproc)
           )
           (SID_DESC =
           (GLOBAL_DBNAME = RECLONE.hostname)
           (ORACLE_HOME = /u01/app/oracle/product/11.2.0.2)
           (SID_NAME = RECLONE)
           )
       )
    So the extra entry for the clone database is needed because otherwise RMAN has nothing to connect to.
    Prevent Timeouts
    Add these to both servers
    To listener.ora
    INBOUND_CONNECT_TIMEOUT_LISTENER = 120 (the suffix is your listener's name; LISTENER here)
    To sqlnet.ora
    SQLNET.INBOUND_CONNECT_TIMEOUT = 120
    Then stop and start the listener.
    So on the Primary you could make a backup like this:
    RUN {
    allocate channel d1 type disk;
    backup format 'c:\backups\PRIMARY\df_t%t_s%s_p%p' database;
    sql 'alter system archive log current';
    backup format 'c:\backups\PRIMARY\al_t%t_s%s_p%p' archivelog all;
    backup current controlfile for standby format 'c:\backups\PRIMARY\sb_t%t_s%s_p%p';
    release channel d1;
    }
    And then, after moving it to the Standby server, duplicate like this:
    run {
    allocate channel C1 device type disk;
    allocate auxiliary channel C2 device type disk;
    duplicate target database for standby nofilenamecheck;
    }
    Edited by: mseberg on Feb 24, 2012 12:24 PM

  • Shell script and plsql

    Please guide me with parameter passing at three levels. Here is the scenario:
    a. A PL/SQL script, generateMaster.plsql, invokes the stored procedure genMDetails(param1, query).
    b. A shell script, genM.sh, invokes generateMaster.plsql.
    c. A date range (two dates) needs to be passed as parameters to the shell script.
    d. The shell script accepts the date-range parameters and passes them on to generateMaster.plsql.
    e. generateMaster.plsql uses the two date parameters as part of the query when invoking the stored procedure genMDetails(param1, query).
    In short: shell script -> PL/SQL -> stored procedure.
    Platform is Sun Unix 8, Oracle DB 9i R2.
    Thanks.

    This script shows how to pass parameters to a PL/SQL anonymous block.
    sqlplus "mob/mob" << EOF
    set serveroutput on
    begin
    dbms_output.put_line('Parameter one: ' || '$1');
    dbms_output.put_line('Parameter two: ' || '$2');
    -- Here you can invoke a procedure: proc1('$1', '$2');
    end;
    EOF
    You can just invoke it like this:
    ./passParameters.sh 12-13-1981 03-04-2006
    Best Regards
    Krystian Zieja / mob

  • Shell script to get the mail whenever the primary or standby DBs down

    Hi,
    Can anyone please provide a shell script to monitor the primary and standby databases, so that whenever the primary or standby database is down we get a mail.
    Thanks,
    Mahi

    Hi Mahi,
    in 10g you can configure this through the EM dbconsole, which is very easy.
    (I am not sure if this exists in 9i EM.)
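    Outside EM, a minimal cron-able sketch in the style of the gap-monitoring script earlier in the thread, assuming ORACLE_SID/ORACLE_HOME are set and mailx is configured (the address is a placeholder). The probe parsing is split into check_status; the sqlplus and mailx calls are shown commented, with a sample failure string standing in for real probe output:

    ```shell
    # Sketch: classify the output of a "select 'ALIVE' from dual" probe.
    check_status() {
        case "$1" in
            *ALIVE*) echo UP ;;
            *)       echo DOWN ;;
        esac
    }
    # Real probe would be:
    # OUT=`echo "select 'ALIVE' from dual;" | sqlplus -S / as sysdba 2>&1`
    OUT="ORA-01034: ORACLE not available"   # sample failure output
    STATUS=`check_status "$OUT"`
    if [ "$STATUS" = "DOWN" ]; then
        :  # echo "$OUT" | mailx -s "`hostname` database DOWN" dba@example.com
    fi
    ```

    Run it from cron on each server (primary and standby) so either side going down produces a mail.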
