Data Guard monitoring shell script

uname -a
Linux DG1 2.6.18-164.el5 #1 SMP Thu Sep 3 03:28:30 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
SQL> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
Hi guys,
I am looking for a shell script that I can run from cron which monitors a Data Guard environment (10g and 11g) and sends email alerts if the DR site goes out of sync by, say, 10 or 15 logs.
I found a couple on the net, but they are not working for some reason:
http://emrebaransel.blogspot.com/2009/07/shell-script-to-check-dataguard-status.html
If you have some, please share.

You are using a recent version of Oracle and want to plug an obsolete script into it?
Why not just monitor Data Guard with EM or Grid Control and set up email notifications there? It is far more reliable than anything else.

Similar Messages

  • Data Guard monitoring Scripts.

I am looking for scripts to monitor Data Guard.
Can someone help me with this, please?
Thanks in advance.

    These scripts are Unix specific:
    #!/bin/ksh
    ## THIS ONE IS CALLED BY THE NEXT SCRIPT
    # last_log_applied.ksh <oracle_sid> [connect string]
    # Prints the highest log sequence recorded in v$log_history.
    if [ $# -lt 1 ]
    then
         echo "Usage: $0 <oracle_sid> [connect string]"
         exit 1
    fi
    oracle_sid=$1
    connect_string=$2
    # Derive the Oracle environment for this SID from oratab
    ORACLE_HOME=`grep "^${oracle_sid}:" /var/opt/oracle/oratab | awk -F: '{print $2}'`
    export ORACLE_HOME
    LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
    export LD_LIBRARY_PATH
    PATH=$PATH:$ORACLE_HOME/bin
    export PATH
    ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
    export ORA_NLS33
    ORACLE_SID=$oracle_sid
    export ORACLE_SID
    ofile="/tmp/${oracle_sid}_last_log_seq.log"
    #### STANDBY SERVER NEEDS TO CONNECT VIA SYSDBA
    if [ ${connect_string:="NULL"} = "NULL" ]
    then
         $ORACLE_HOME/bin/sqlplus -s /nolog << EOF >tmpfile 2>&1
         set pagesize 0;
         set echo off;
         set feedback off;
         set head off;
         spool $ofile;
         connect / as sysdba;
         select max(sequence#) from v\$log_history;
    EOF
    #### PASS CONNECT STRING IN FOR PRIMARY SERVER
    else
         $ORACLE_HOME/bin/sqlplus -s $connect_string << EOF >tmpfile 2>&1
         set pagesize 0;
         set echo off;
         set feedback off;
         set head off;
         spool $ofile;
         select max(sequence#) from v\$log_history;
    EOF
    fi
    # Keep only the purely numeric result line from the spool file
    tmp=`grep -v '[^0-9, ]' $ofile | tr -d ' '`
    rm $ofile tmpfile
    echo "$tmp"
    #!/bin/ksh
    # standby_check.ksh -- compare the last log applied on the standby with
    # the last log archived on the primary, and mail if the gap is too big.
    export STANDBY_DIR="/opt/oracle/admin/standby"
    if [ $# -ne 1 ]
    then
         echo "Usage: $0 <ORACLE_SID>"
         exit 1
    fi
    oracle_sid=$1
    machine_name=`uname -n`
    # Thresholds (ALARM_DIFF, PAGE_DIFF) and mail settings come from the params file
    . $STANDBY_DIR/CONFIG/params.$oracle_sid.$machine_name
    user_pass=`cat /opt/oracle/admin/.opass`
    echo "Running standby check on $oracle_sid..."
    standby_log_num=`$STANDBY_DIR/last_log_applied.ksh $oracle_sid`
    primary_log_num=`$STANDBY_DIR/last_log_applied.ksh $oracle_sid ${user_pass}@${oracle_sid}`
    echo "standby_log_num = $standby_log_num"
    echo "primary_log_num = $primary_log_num"
    log_difference=`expr $primary_log_num - $standby_log_num`
    if [ $log_difference -ge $ALARM_DIFF ]
    then
         /bin/mailx -s "$oracle_sid: Standby is $log_difference logs behind primary." -r $FROM_EMAIL $EMAIL_LIST < $STANDBY_DIR/standby_warning_mail
         # Page the DBAs if we're falling way behind
         if [ $log_difference -ge $PAGE_DIFF ]
         then
              /bin/mailx -s "$oracle_sid: Standby is $log_difference logs behind primary." -r $FROM_EMAIL $PAGE_LIST < $STANDBY_DIR/standby_warning_mail
         fi
    else
         echo "Standby is keeping up OK ($log_difference logs behind)"
    fi
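    For reference, the sourced params file only needs to define the thresholds and mail settings the checker reads; a hypothetical params.<oracle_sid>.<host> (all values are examples):
    # Hypothetical example values -- adjust thresholds and addresses to taste
    ALARM_DIFF=10                  # mail the DBA list when the standby lags by 10 logs
    PAGE_DIFF=20                   # page the on-call DBA when it lags by 20
    FROM_EMAIL="dba@example.com"
    EMAIL_LIST="dba-team@example.com"
    PAGE_LIST="oncall-pager@example.com"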

  • Data guard monitoring

    Hi All,
    I want to know whether the difference between the outputs of the two queries below would give me the log gap between the primary and the standby database on which Data Guard is configured:
    SELECT MAX(SEQUENCE#) LOG_ARCHIVED FROM V$ARCHIVED_LOG WHERE DEST_ID=1 AND ARCHIVED='YES';
    SELECT MAX(SEQUENCE#) LOG_APPLIED FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND APPLIED='YES';
    I am trying to write a Linux script to monitor Data Guard. Please suggest.
    Thanks

    What I would do is create an SQL file and call it from a Linux script. Here's a start:
    SPOOL /tmp/quickcheck.lst
    PROMPT
    PROMPT Checking Size and usage in GB of Flash Recovery Area
    PROMPT
    SELECT
      ROUND((A.SPACE_LIMIT / 1024 / 1024 / 1024), 2) AS FLASH_IN_GB,
      ROUND((A.SPACE_USED / 1024 / 1024 / 1024), 2) AS FLASH_USED_IN_GB,
      ROUND((A.SPACE_RECLAIMABLE / 1024 / 1024 / 1024), 2) AS FLASH_RECLAIMABLE_GB,
      SUM(B.PERCENT_SPACE_USED)  AS PERCENT_OF_SPACE_USED
    FROM
      V$RECOVERY_FILE_DEST A,
      V$FLASH_RECOVERY_AREA_USAGE B
    GROUP BY
      SPACE_LIMIT,
      SPACE_USED ,
      SPACE_RECLAIMABLE ;
    PROMPT
    PROMPT Checking free space In Flash Recovery Area
    PROMPT
    column FILE_TYPE format a20
    select * from v$flash_recovery_area_usage;
    PROMPT
    PROMPT Checking last sequence in v$archived_log
    PROMPT
    clear screen
    set linesize 100
    column STANDBY format a20
    column applied format a10
    SELECT  name as STANDBY, SEQUENCE#, applied, completion_time from v$archived_log WHERE  DEST_ID = 2 AND NEXT_TIME > SYSDATE -1;
    prompt
    prompt Checking Last log on Primary
    prompt
    select max(sequence#) from v$archived_log where NEXT_TIME > sysdate -1;
    SPOOL OFF
    Run this on your primary.
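    To cron this, a thin wrapper around SQL*Plus is enough; a rough sketch (paths, SID, and address are placeholders):
    #!/bin/sh
    # Hypothetical wrapper: run the check SQL and mail the spooled report
    ORACLE_HOME=/u01/app/oracle/product/11.2.0 ; export ORACLE_HOME
    ORACLE_SID=PRIMARY ; export ORACLE_SID
    PATH=$ORACLE_HOME/bin:$PATH ; export PATH
    sqlplus -s "/ as sysdba" @/home/oracle/scripts/quickcheck.sql
    mailx -s "$ORACLE_SID Data Guard quick check" dba@example.com < /tmp/quickcheck.lst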
    If you find this helpful please mark it so.
    Best Regards
    mseberg

  • Monitoring Shell script

    I want a shell script for monitoring purposes (database, server, and space monitoring) on Linux. How can I get such a script?
    Do you know a link?

    Ramkrishna wrote:
    I want a shell script for monitoring purposes (database, server, and space monitoring) on Linux. How can I get such a script? Do you know a link?
    If you break it down, there are many possible monitoring events for each part. If you go for space monitoring, is it for trace files? Archive files? The FRA? ASM space? And so on.
    Your requirement should be clear; design according to it, or you can get sample scripts from Google.
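    As a concrete starting point for the space part, a df-based alert can be very small; a hypothetical sketch (threshold and address are placeholders):
    #!/bin/sh
    # Mail a warning when any mounted filesystem passes 90% used
    THRESHOLD=90
    FULL=`df -hP | awk -v t=$THRESHOLD 'NR>1 { p=$5; sub("%","",p); if (p+0 > t) print }'`
    if [ -n "$FULL" ]; then
        echo "$FULL" | mailx -s "`hostname`: filesystem over ${THRESHOLD}% used" dba@example.com
    fi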
    Thanks

  • Sql Loader by using shell script, not able to insert data

    Hi,
    I am trying to load data using a shell script (the shell script runs the sqlldr command; it is a host-executable concurrent program).
    When I load the data with my files (.ctl, .prog, .csv, and the symbolic link for the .prog) placed in $CUSTOM_TOP/bin, it loads correctly: 17000 records are inserted.
    But if I load the data with my files placed in $CUSTOM_TOP/custom_folders, it is unable to insert all the data; only 43 records are inserted.
    Can anyone help me, please?
    Thanks in advance.
    Rama.

    Srini, thanks a lot for your reply.
    Oracle Apps version: R12
    Microsoft Windows XP Professional, Version 2002, Service Pack 3
    My control file script is:
    load data
    infile '$XADP_TOP/data/CPIU/in/XXOKS_Price_Increase.csv'
    append
    into table XXOKS_CONTRACT_PRICE_INCR_DTLS
    fields terminated BY ',' optionally enclosed by '"'
    TRAILING NULLCOLS
    (EXCLUSION_FLAG,
    LEGACY_NUMBER,
    CUSTOMER_NUMBER,
    CUSTOMER_NAME,
    REQUEST_ID,
    CONTRACT_NUMBER,
    CONTRACT_START_DATE,
    CONTRACT_END,
    REQUEST_LINE_ID,
    LINE_START_DATE,
    LINE_END_DATE,
    ITEM_NUMBER,
    ITEM_DESCRIPTION,
    UNIT_PRICE,
    QTY,
    NEW_UNIT_PRICE,
    LINE_AMOUNT,
    NEW_LINE_AMOUNT,
    PRICE_INCREASED_DATE,
    PERCENTAGE_INCREASED,
    ORIGINAL_CONTRACT_AMOUNT,
    NEW_CONTRACT_AMOUNT,
    PRICE_INCREASE_AMOUNT)
    My .prog file is below. Please note that I also created a symbolic link for the .prog file.
    if [ -z "$XADP_TOP" ]; then
    echo "XADP_TOP environment variable is not set!"
    exit 1
    fi
    cd $XADP_TOP/data/CPIU/in
    DATE=`date +%y%m%d:%H%M`
    i_program_name="$0"
    i_ora_pwd="$1"
    i_user_id="$2"
    i_user_name="$3"
    i_request_id="$4"
    i_ftp_host_name="$5"
    i_ftp_user_name="$6"
    i_ftp_user_password="$7"
    ftp_prog() {
    # FTP Function to reuse the FTP Commands
    if [ $# -ne 6 ];then
    echo "Usage : ftp_prog <Hostname> <User name> <Password> <Remote Directory> <command> <filename>"
    exit 2
    fi
    l_ftp_host_name="$1"
    l_ftp_user_name="$2"
    l_ftp_user_password="$3"
    l_ftpdir="$4"
    l_ftp_command="$5"
    l_ftp_filename="$6"
    ftp -v -n ${l_ftp_host_name} <<EOF
    user ${l_ftp_user_name} ${l_ftp_user_password}
    ascii
    cd ${l_ftpdir}
    ${l_ftp_command} ${l_ftp_filename}
    quit
    EOF
    #exit $?
    }
    # setting the ftp directory
    #ftpdir="/`echo ${TWO_TASK:-$ORACLE_SID}|tr "[A-Z]" "[a-z]"`/CPIU"
    ##ftpdir="/FinTEST/quoting/PS/ar"
    ftpdir="$XADP_TOP/data/CPIU/in"
    # setting the in directory and out directory
    indir="$XADP_TOP/data/CPIU/in"
    outdir="$XADP_TOP/data/CPIU/out"
    ftp_prog ${i_ftp_host_name} ${i_ftp_user_name} ${i_ftp_user_password} ${ftpdir} get XXOKS_Price_Increase.csv
    echo $ftpdir
    echo "Converting the data file into unix mode"
    dos2unix XXOKS_Price_Increase.csv XXOKS_Price_Increase.csv
    chmod 777 XXOKS_Price_Increase.csv
    cd $XADP_TOP/bin
    echo "Trying to excute sqlldr and entering into the into control file"
    $ORACLE_HOME/bin/sqlldr userid=$i_ora_pwd control=XXOKS_PRICE_INCR_LOAD log=$XADP_TOP/log/XXOKS_PRICE_INCR_LOAD_${DATE}.log;
    exit_status=$?
    echo "Checking the status and giving permissions to the data file which in in dir"
    if [ $exit_status -eq 0 ]; then
    cd $XADP_TOP/data/CPIU/in
         chmod 777 XXOKS_Price_Increase.csv
    echo "try to move data file into out dir"
    # Moving the file to out directory
    mv XXOKS_Price_Increase.csv ${outdir}/XXOKS_Price_Increase.csv_${DATE}
    #echo "ready to zip file in out dir step6"
    # Zipping the file
    #gzip -f ${outdir}/XXOKS_Price_Increase.csv_${DATE}
    echo "deleting the file which is in dir"
    # Deleting the file from in directory
    /bin/rm -f ${indir}/XXOKS_Price_Increase.csv
    # Deleting from the remote directory
    ftp_prog ${i_ftp_host_name} ${i_ftp_user_name} ${i_ftp_user_password} ${ftpdir} delete XXOKS_Price_Increase.csv
    echo "sqlloader finished successfully."
    else
    echo "Error in loader"
    ##echo "Loader error in Price Increase Detials File ${i_file}"
    fi
    exit $exit_status
    And my log file contents are:
    SQL*Loader: Release 10.1.0.5.0 - Production on Thu Dec 3 01:32:08 2009
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    Control File: XXOKS_PRICE_INCR_LOAD.ctl
    Data File: /oesapp/applmgr/GIS11/apps/apps_st/appl/xadp/12.0.0/data/CPIU/in/XXOKS_Price_Increase.csv
    Bad File: XXOKS_Price_Increase.bad
    Discard File: none specified
    (Allow all discards)
    Number to load: ALL
    Number to skip: 0
    Errors allowed: 50
    Bind array: 64 rows, maximum of 256000 bytes
    Continuation: none specified
    Path used: Conventional
    Table XXOKS_CONTRACT_PRICE_INCR_DTLS, loaded from every logical record.
    Insert option in effect for this table: APPEND
    TRAILING NULLCOLS option in effect
    Column Name Position Len Term Encl Datatype
    EXCLUSION_FLAG FIRST * , O(") CHARACTER
    LEGACY_NUMBER NEXT * , O(") CHARACTER
    CUSTOMER_NUMBER NEXT * , O(") CHARACTER
    CUSTOMER_NAME NEXT * , O(") CHARACTER
    REQUEST_ID NEXT * , O(") CHARACTER
    CONTRACT_NUMBER NEXT * , O(") CHARACTER
    CONTRACT_START_DATE NEXT * , O(") CHARACTER
    CONTRACT_END NEXT * , O(") CHARACTER
    REQUEST_LINE_ID NEXT * , O(") CHARACTER
    LINE_START_DATE NEXT * , O(") CHARACTER
    LINE_END_DATE NEXT * , O(") CHARACTER
    ITEM_NUMBER NEXT * , O(") CHARACTER
    ITEM_DESCRIPTION NEXT * , O(") CHARACTER
    UNIT_PRICE NEXT * , O(") CHARACTER
    QTY NEXT * , O(") CHARACTER
    NEW_UNIT_PRICE NEXT * , O(") CHARACTER
    LINE_AMOUNT NEXT * , O(") CHARACTER
    NEW_LINE_AMOUNT NEXT * , O(") CHARACTER
    PRICE_INCREASED_DATE NEXT * , O(") CHARACTER
    PERCENTAGE_INCREASED NEXT * , O(") CHARACTER
    ORIGINAL_CONTRACT_AMOUNT NEXT * , O(") CHARACTER
    NEW_CONTRACT_AMOUNT NEXT * , O(") CHARACTER
    PRICE_INCREASE_AMOUNT NEXT * , O(") CHARACTER
    value used for ROWS parameter changed from 64 to 43
    Table XXOKS_CONTRACT_PRICE_INCR_DTLS:
    43 Rows successfully loaded.
    0 Rows not loaded due to data errors.
    0 Rows not loaded because all WHEN clauses were failed.
    0 Rows not loaded because all fields were null.
    Space allocated for bind array: 255162 bytes(43 rows)
    Read buffer bytes: 1048576
    Total logical records skipped: 0
    Total logical records read: 43
    Total logical records rejected: 0
    Total logical records discarded: 0
    Run began on Thu Dec 03 01:32:08 2009
    Run ended on Thu Dec 03 01:32:08 2009
    Elapsed time was: 00:00:00.19
    CPU time was: 00:00:00.04
    Please help me, Srini.
    Thanks in advance,
    Rama..
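    One thing worth ruling out from the log above: only 43 logical records were read, so sqlldr saw a 43-line file; it is worth checking what the FTP step actually fetched in each case. Since the two launch locations differ in working directory, passing explicit absolute paths to sqlldr (the DATA parameter overrides the control file's INFILE) removes that variable; a hypothetical invocation:
    $ORACLE_HOME/bin/sqlldr userid=$i_ora_pwd \
        control=$XADP_TOP/bin/XXOKS_PRICE_INCR_LOAD.ctl \
        data=$XADP_TOP/data/CPIU/in/XXOKS_Price_Increase.csv \
        log=$XADP_TOP/log/XXOKS_PRICE_INCR_LOAD_${DATE}.log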

  • How to parse a text file and produce a dynamic shell script for linking?

    I have some mapping files; one example is like this:
    $ cat CON_xfrm_contract_to_20080302.map
    (object mfile_c_type
    (path "file:OBSOLETE")
    (fs "file://amanos/s01/abinitio/data/prod/mfs/mfs_16way")
    (local_paths 16
      "file://amanos/s01/abinitio/data/prod/mfs/parts/mfs_16way_001/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat"
      "file://amanos/s01/abinitio/data/prod/mfs/parts/mfs_16way_002/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat"
      "file://amanos/s01/abinitio/data/prod/mfs/parts/mfs_16way_003/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat"
      "file://amanos/s01/abinitio/data/prod/mfs/parts/mfs_16way_004/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat"
      "file://amanos/s01/abinitio/data/prod/mfs/parts/mfs_16way_005/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat"
      "file://amanos/s01/abinitio/data/prod/mfs/parts/mfs_16way_006/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat"
      "file://amanos/s01/abinitio/data/prod/mfs/parts/mfs_16way_007/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat"
      "file://amanos/s01/abinitio/data/prod/mfs/parts/mfs_16way_008/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat"
      "file://amanos/s01/abinitio/data/prod/mfs/parts/mfs_16way_009/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat"
      "file://amanos/s01/abinitio/data/prod/mfs/parts/mfs_16way_010/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat"
      "file://amanos/s01/abinitio/data/prod/mfs/parts/mfs_16way_011/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat"
      "file://amanos/s01/abinitio/data/prod/mfs/parts/mfs_16way_012/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat"
      "file://amanos/s01/abinitio/data/prod/mfs/parts/mfs_16way_013/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat"
      "file://amanos/s01/abinitio/data/prod/mfs/parts/mfs_16way_014/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat"
      "file://amanos/s01/abinitio/data/prod/mfs/parts/mfs_16way_015/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat"
      "file://amanos/s01/abinitio/data/prod/mfs/parts/mfs_16way_016/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat"))In this file's content I have some exracted text files with same names under different folders;
    $ ls -lt /s01/abinitio/data/prod/mfs/parts/mfs_16way_*/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat
    -rw-rw-rw-   1 ab_live  abinitio 438652105 Mar  3 01:42 /s01/abinitio/data/prod/mfs/parts/mfs_16way_010/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat
    -rw-rw-rw-   1 ab_live  abinitio 438490410 Mar  3 01:42 /s01/abinitio/data/prod/mfs/parts/mfs_16way_016/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat
    -rw-rw-rw-   1 ab_live  abinitio 438219252 Mar  3 01:42 /s01/abinitio/data/prod/mfs/parts/mfs_16way_007/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat
    -rw-rw-rw-   1 ab_live  abinitio 438521432 Mar  3 01:42 /s01/abinitio/data/prod/mfs/parts/mfs_16way_014/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat
    -rw-rw-rw-   1 ab_live  abinitio 438488130 Mar  3 01:42 /s01/abinitio/data/prod/mfs/parts/mfs_16way_003/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat
    -rw-rw-rw-   1 ab_live  abinitio 438249547 Mar  3 01:42 /s01/abinitio/data/prod/mfs/parts/mfs_16way_002/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat
    -rw-rw-rw-   1 ab_live  abinitio 438312177 Mar  3 01:42 /s01/abinitio/data/prod/mfs/parts/mfs_16way_012/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat
    -rw-rw-rw-   1 ab_live  abinitio 439074566 Mar  3 01:42 /s01/abinitio/data/prod/mfs/parts/mfs_16way_015/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat
    -rw-rw-rw-   1 ab_live  abinitio 438722261 Mar  3 01:42 /s01/abinitio/data/prod/mfs/parts/mfs_16way_004/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat
    -rw-rw-rw-   1 ab_live  abinitio 438742477 Mar  3 01:42 /s01/abinitio/data/prod/mfs/parts/mfs_16way_001/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat
    -rw-rw-rw-   1 ab_live  abinitio 438517268 Mar  3 01:42 /s01/abinitio/data/prod/mfs/parts/mfs_16way_008/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat
    -rw-rw-rw-   1 ab_live  abinitio 438645835 Mar  3 01:42 /s01/abinitio/data/prod/mfs/parts/mfs_16way_011/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat
    -rw-rw-rw-   1 ab_live  abinitio 438334994 Mar  3 01:42 /s01/abinitio/data/prod/mfs/parts/mfs_16way_006/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat
    -rw-rw-rw-   1 ab_live  abinitio 438470743 Mar  3 01:42 /s01/abinitio/data/prod/mfs/parts/mfs_16way_005/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat
    -rw-rw-rw-   1 ab_live  abinitio 438095853 Mar  3 01:42 /s01/abinitio/data/prod/mfs/parts/mfs_16way_009/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat
    -rw-rw-rw-   1 ab_live  abinitio 438434204 Mar  3 01:42 /s01/abinitio/data/prod/mfs/parts/mfs_16way_013/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat
    I need a shell script which will produce a shell script from the content of the mapping file, so that I can symbolically link these files with different names under the same folder, like:
    ln -s /s01/abinitio/data/prod/mfs/parts/mfs_16way_001/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat ./mfs_16way_001.CON_xfrm_contract_to_20080302.dat
    ln -s /s01/abinitio/data/prod/mfs/parts/mfs_16way_016/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat ./mfs_16way_016.CON_xfrm_contract_to_20080302.dat
    I am a newbie at shell scripting; I tried several awk and sed operations but couldn't get close to this output. :(
    If you guide me I will be so glad, thank you.
    ps: amanos is the name of this server.

    This is the point where I am stuck; I cannot add the destination symlink name to the end of each line:
    $ grep "  \"file://amanos" CON_xfrm_contract_to_20080302.map | cut -c17- | sed  's/"//;s/)//g' | sed 's/\/s01/ln -s \/s01/g'
    ln -s /s01/abinitio/data/prod/mfs/parts/mfs_16way_001/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat
    ln -s /s01/abinitio/data/prod/mfs/parts/mfs_16way_002/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat
    ln -s /s01/abinitio/data/prod/mfs/parts/mfs_16way_003/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat
    ln -s /s01/abinitio/data/prod/mfs/parts/mfs_16way_004/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat
    ln -s /s01/abinitio/data/prod/mfs/parts/mfs_16way_005/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat
    ln -s /s01/abinitio/data/prod/mfs/parts/mfs_16way_006/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat
    ln -s /s01/abinitio/data/prod/mfs/parts/mfs_16way_007/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat
    ln -s /s01/abinitio/data/prod/mfs/parts/mfs_16way_008/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat
    ln -s /s01/abinitio/data/prod/mfs/parts/mfs_16way_009/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat
    ln -s /s01/abinitio/data/prod/mfs/parts/mfs_16way_010/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat
    ln -s /s01/abinitio/data/prod/mfs/parts/mfs_16way_011/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat
    ln -s /s01/abinitio/data/prod/mfs/parts/mfs_16way_012/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat
    ln -s /s01/abinitio/data/prod/mfs/parts/mfs_16way_013/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat
    ln -s /s01/abinitio/data/prod/mfs/parts/mfs_16way_014/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat
    ln -s /s01/abinitio/data/prod/mfs/parts/mfs_16way_015/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat
    ln -s /s01/abinitio/data/prod/mfs/parts/mfs_16way_016/mfs_16way/Applications/RDS/CON_PUB/main/CON_xfrm_contract_to_20080302.dat
    $ grep "  \"file://amanos" CON_xfrm_contract_to_20080302.map | cut -c17- | sed 's/"//;s/)//g' | sed 's/\/s01/ln -s \/s01/g' | awk -F\/ '{print $8"."$14}'
    mfs_16way_001.CON_xfrm_contract_to_20080302.dat
    mfs_16way_002.CON_xfrm_contract_to_20080302.dat
    mfs_16way_003.CON_xfrm_contract_to_20080302.dat
    mfs_16way_004.CON_xfrm_contract_to_20080302.dat
    mfs_16way_005.CON_xfrm_contract_to_20080302.dat
    mfs_16way_006.CON_xfrm_contract_to_20080302.dat
    mfs_16way_007.CON_xfrm_contract_to_20080302.dat
    mfs_16way_008.CON_xfrm_contract_to_20080302.dat
    mfs_16way_009.CON_xfrm_contract_to_20080302.dat
    mfs_16way_010.CON_xfrm_contract_to_20080302.dat
    mfs_16way_011.CON_xfrm_contract_to_20080302.dat
    mfs_16way_012.CON_xfrm_contract_to_20080302.dat
    mfs_16way_013.CON_xfrm_contract_to_20080302.dat
    mfs_16way_014.CON_xfrm_contract_to_20080302.dat
    mfs_16way_015.CON_xfrm_contract_to_20080302.dat
    mfs_16way_016.CON_xfrm_contract_to_20080302.dat
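    For what it's worth, a single awk pass over the map file can append the destination name in the same step; a sketch, assuming the path layout shown above (field 8 of the path is the partition directory):
    # Build and run the link script straight from the .map file
    awk -F'"' '/file:\/\/amanos.*\/parts\// {
        path = $2
        sub(/^file:\/\/amanos/, "", path)    # keep only the filesystem path
        n = split(path, p, "/")              # p[8] = mfs_16way_NNN, p[n] = file name
        printf "ln -s %s ./%s.%s\n", path, p[8], p[n]
    }' CON_xfrm_contract_to_20080302.map > make_links.sh
    sh make_links.sh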

  • Data Guard Broker - platform requirements

    Hi there,
    I've been checking the docs and haven't been able to find a definitive answer - my question is, if you have your primary and standby databases on a 64-bit architecture (HP-UX64 for example), can you have the broker that manages that configuration on a 32-bit architecture (Windows, Linux etc)?
    Any advice would be greatly appreciated. If anyone has a setup like what I've described, please let me know.
    Many thanks,
    IM

    Hi again, I've managed to answer my own question (from AskTom):
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:4111318776437#60461502943814
    "observer"? you mean the data guard monitor. Yes, the standby must be on the same platform as the
    primary - but the monitoring software (enterprise manager stuff) may be elsewhere.
    Cheers,
    IM

  • Exports (from shell script) without password given explicitly in script

    Hi All,
    I have Oracle 10gR2 on SLES 10 64-bit. I would like to export the database using Data Pump from shell scripts. Is there any method to hide the password in the script file?
    Currently I do it with the command:
    expdp system/password@database ....
    So any user who sees the script will know the password.
    Regards
    Groxy

    Hi,
    Have you considered the "Secure External Password Store" feature that was added in 10gR2? It uses the Oracle Wallet to store a database_alias with username/password credentials in encrypted format. Then you can enter your expdp command like this:
    expdp /@database_alias ...
    Take a look at the 10gR2 Security Guide. There's a chapter there on how to set this up.
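    For reference, the wallet setup is only a few commands; a rough outline (wallet directory and alias are placeholders):
    # Create the wallet and store credentials for the alias (prompts for passwords)
    mkstore -wrl /etc/oracle/wallet -create
    mkstore -wrl /etc/oracle/wallet -createCredential database_alias system
    # sqlnet.ora on the client must then point at the wallet:
    #   WALLET_LOCATION = (SOURCE = (METHOD = FILE)
    #                      (METHOD_DATA = (DIRECTORY = /etc/oracle/wallet)))
    #   SQLNET.WALLET_OVERRIDE = TRUE
    # After that the script needs no embedded password:
    expdp /@database_alias full=y dumpfile=full.dmp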
    John

  • Shell script to monitor the data guard

    Hi,
    Can anybody please provide shell scripts to monitor Data Guard in all scenarios and send mail when a problem occurs?
    Thanks,
    Mahipal

    Sorry Mahi. It looks like all of the scripts I've got are for logical standbys, not physical. Have a look at the link ualual posted - it is easy enough to knock up a script from one or more of those data dictionary views. I just had a look on Metalink, and there is what looks to be a good script in note 241438.1. It's a good starting point, definitely.
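    In the meantime, a quick check against v$archive_gap on a physical standby is only a few lines; a hedged sketch (the address is a placeholder):
    #!/bin/sh
    # Mail if the physical standby reports an archive gap (run on the standby)
    GAP=$(sqlplus -s "/ as sysdba" <<'EOF'
    set pagesize 0 feedback off heading off
    select count(*) from v$archive_gap;
    exit
    EOF
    )
    if [ "$GAP" -gt 0 ]; then
        echo "Archive gap detected on $(hostname)" | mailx -s "Data Guard gap" dba@example.com
    fi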
    regards,
    Mark

  • Shell scripts to monitor data guard

    Hi All,
    Please help me find shell scripts for monitoring Data Guard.
    Thanks,
    Mahi

    Here is the shell script we use to monitor Data Guard; it sends mail if there is a gap of more than 20 archive logs.
    #!/bin/ksh
    # set Oracle environment for SQL*Plus
    #ORACLE_BASE=/oracle/app/oracle ; export ORACLE_BASE
    ORACLE_HOME=/oracle/app/oracle/product/10.2.0 ; export ORACLE_HOME
    ORACLE_SID=usagedb ; export ORACLE_SID
    PATH=$PATH:/oracle/app/oracle/product/10.2.0/bin ; export PATH
    # set the working directory; the script is located here
    cd /oracle/scripts
    # the problem statement is built up in the MESSAGE variable
    MESSAGE=""
    # hostname of the primary DB, used in messages
    HOST_NAME=`/usr/bin/hostname`
    # who will receive problem messages: DBAs' e-mail addresses separated with spaces
    DBA_GROUP='[email protected] '
    # SQL statements to extract Data Guard info from the DB
    LOCAL_ARC_SQL='select archived_seq# from V$ARCHIVE_DEST_STATUS where dest_id=1; \n exit \n'
    STBY_ARC_SQL='select archived_seq# from V$ARCHIVE_DEST_STATUS where dest_id=2; \n exit \n'
    STBY_APPLY_SQL='select applied_seq# from V$ARCHIVE_DEST_STATUS where dest_id=2; \n exit \n'
    # get the Data Guard information into shell variables
    LOCAL_ARC=`echo $LOCAL_ARC_SQL | sqlplus -S / as sysdba | tail -2 | head -1`
    STBY_ARC=`echo $STBY_ARC_SQL | sqlplus -S / as sysdba | tail -2 | head -1`
    STBY_APPLY=`echo $STBY_APPLY_SQL | sqlplus -S / as sysdba | tail -2 | head -1`
    # allow 20 archive logs for transport and apply latencies
    let "STBY_ARC_MARK=${STBY_ARC}+20"
    let "STBY_APPLY_MARK=${STBY_APPLY}+20"
    if [ $LOCAL_ARC -gt $STBY_ARC_MARK ] ; then
         MESSAGE=${MESSAGE}"$HOST_NAME Standby -log TRANSPORT- error! \n local_Arc_No=$LOCAL_ARC but stby_Arc_No=$STBY_ARC \n"
    fi
    if [ $STBY_ARC -gt $STBY_APPLY_MARK ] ; then
         MESSAGE=${MESSAGE}"$HOST_NAME Standby -log APPLY- error! \n stby_Arc_No=$STBY_ARC but stby_Apply_no=$STBY_APPLY \n"
    fi
    if [ -n "$MESSAGE" ] ; then
         MESSAGE=${MESSAGE}"\nWarning: Data Guard error!!! \n .\n "
         echo $MESSAGE | mailx -s "$HOST_NAME DataGuard error" $DBA_GROUP
    fi
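    A crontab entry to run a check like this every half hour might look like (the path is a placeholder):
    # Hypothetical crontab entry: run the Data Guard check every 30 minutes
    0,30 * * * * /oracle/scripts/dg_check.sh >/dev/null 2>&1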

  • Data Guard Gap Monitoring script

    Hello,
    Can anyone please provide a Data Guard gap monitoring script for databases (primary and standby) on RAC?
    Oracle RDBMS 11.2.0.2 (4-node RAC) on RHEL 5.6.
    Thanks

    951368 wrote:
    Can anyone please provide a Data Guard gap monitoring script for databases (primary and standby) on RAC?
    Oracle RDBMS 11.2.0.2 (4-node RAC) on RHEL 5.6.
    Use the script from mseberg above, and for RAC modify the v$ views to their gv$ equivalents (for example, gv$instance instead of v$instance).
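    For RAC the same idea applies per redo thread; a hedged sketch of the query side (run on the primary):
    sqlplus -s "/ as sysdba" <<'EOF'
    set pagesize 100
    select thread#,
           max(sequence#) - max(case when applied = 'YES' then sequence# end) as gap
    from   v$archived_log
    where  dest_id = 2
    group  by thread#;
    exit
    EOF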

  • Monitoring Data Guard with SNMP?

    I have configured Data Guard within two Oracle environments and have written a small Perl script which monitors the applied log service and sends an email if something fails to be applied.
    I am assuming this is not the most efficient way of monitoring the systems and would like to use SNMP.
    Can anyone tell me if it is possible to monitor Data Guard using SNMP (traps)? If so, do you know what documents are available?
    Cheers!

    Some of the parameters that you need with a physical standby database are:
    *.background_dump_dest='/ford/app/oracle/admin/xchbot1/bdump'
    *.compatible='9.2.0.7'
    *.control_files='/home30/oradata/xchange/xchbot1/control01.ctl','/home30/oradata/xchange/xchbot1/control02.ctl','/home30/oradata/xchange/xchbot1/control03.ctl'
    *.core_dump_dest='/ford/app/oracle/admin/xchbot1/cdump'
    *.db_block_buffers=1024
    *.db_block_size=8192
    *.db_file_multiblock_read_count=8# SMALL
    *.db_files=1000# SMALL
    *.db_name='xchbot1'
    *.global_names=TRUE
    *.log_archive_dest_1='LOCATION=/home30/oradata/xchange/xchbot1/archivelog'
    *.log_archive_dest_2='SERVICE=standby'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_format='arch_%t_%s.arc'
    *.log_archive_start=true
    *.log_buffer=16384000# SMALL
    *.log_checkpoint_interval=10000
    *.max_dump_file_size='10240'# limit trace file size to 5 Meg each
    *.parallel_max_servers=5
    *.parallel_min_servers=1
    *.processes=50# SMALL
    *.rollback_segments='rbs01','rbs02','rbs03','rbs04','rbs05','rbs06','rbs07','rbs08','rbs09','rbs10'
    *.shared_pool_size=67108864
    *.sort_area_retained_size=2048
    *.sort_area_size=10240
    *.user_dump_dest='/ford/app/oracle/admin/xchbot1/udump'
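    As for the SNMP part of the question: a monitoring script can push a trap itself with net-snmp's snmptrap instead of (or as well as) sending mail; a rough sketch (the enterprise OID, community, and manager host are all placeholders):
    # Emit an SNMPv2c trap when the check detects apply lag
    snmptrap -v 2c -c public nms.example.com '' \
        1.3.6.1.4.1.99999.1.1 \
        1.3.6.1.4.1.99999.1.1.1 s "Data Guard apply lag detected on `hostname`"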

  • Shell script directed to one Dynamic Dashboard to monitor status of all DB

    Hi team,
    Straight to the scenario.
    I have 15 databases to manage. I wrote shell scripts for monitoring each database's status: ping, listener, VNC server, concurrent server, forms server, metric server, workflow, filesystem usage, alert log, etc.
    Now I get 15 mails (one per database) every half hour, and my inbox is sure to fill up if many alerts fire at once. I thought of having a dashboard where my scripts' output would display alerts on 3-D pie charts, bar charts, etc.
    Imagine all database statuses on one dashboard, with colours showing peaks and lows, and the data refreshed every 30 minutes.
    Charts would let me spot and fix issues more easily; then, no matter how many mails reach me, I could look at the dashboard and see what went wrong.
    Please let me know of any third-party software, or Oracle or Linux tools, available for this.
    I hope someone can give me a suitable solution.
    Thanks,
    dkoracle

    AFAIK Grid Control is completely free*; you just have to be careful not to go into the pages that require Management Pack licensing if you haven't purchased those packs for the DBs you're monitoring.
    Personally, I have always used a combination of GC and shell script alerts. You don't want the GC environment to be your single point of failure.
    *other than the associated server costs

  • Shell scripts to read data from a text file and to load it into a table

    Hi All,
    I have a text file consisting of rows and columns as follows:
    GEF001 000093625 MKL002510 000001 000000 000000 000000 000000 000000 000001
    GEF001 000093625 MKL003604 000001 000000 000000 000000 000000 000000 000001
    GEF001 000093625 MKL005675 000001 000000 000000 000000 000000 000000 000001
    My requirement is to read the first 3 columns of this file using a shell script and then insert the data into an Oracle table with 3 columns.
    The whole application is deployed on Unix and the text file comes from a mainframe. I am working on the Unix side of the application and cannot access the data directly from the mainframe, so I am required to write a script which reads the data from the text file placed in a certain location and loads it into the Oracle database.
    So I can't use SQL*Loader.
    Please help me with this.
    Thanks in advance.

    1. Create a directory object in Oracle pointing to the folder where your file resides
    2. Write a little procedure which opens the file in the newly created directory object using UTL_FILE and, inside a FOR LOOP, does INSERTs into the table you want
    3. Create a shell script and call that procedure
    You can use the post on my blog for such issues:
    [Using Oracle UTL_FILE, UTL_SMTP packages and Linux Shell Scripting and Cron utility together|http://kamranagayev.wordpress.com/2009/02/23/using-oracle-utl_file-utl_smtp-packages-and-linux-shell-scripting-and-cron-utility-together-2/]
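    For the pure shell-script route the original poster asked about, generating the INSERTs with awk and running them through SQL*Plus also works; a minimal sketch (table name and credentials are placeholders; 39 is the ASCII code awk prints as a single quote):
    #!/bin/sh
    IN=/path/to/datafile.txt
    OUT=/tmp/load_$$.sql
    # Turn the first three whitespace-separated columns into INSERT statements
    awk '{ printf "insert into my_table values (%c%s%c, %c%s%c, %c%s%c);\n", \
           39, $1, 39, 39, $2, 39, 39, $3, 39 }' "$IN" > "$OUT"
    echo "commit;" >> "$OUT"
    sqlplus -s user/password@db @"$OUT"
    rm -f "$OUT"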
    Kamran Agayev A. (10g OCP)
    http://kamranagayev.wordpress.com

  • Error reading data from Infocube using shell script.

    Dear all,
    I am facing a problem while reading data from an InfoCube using a shell script.
    The details are as follows.
    One of the shell scripts reads the data from the InfoCube to extract files with the values.
    The tables used for extraction by the shell script are:
    from   SAPR3."/BIC/F&PAR_CUBE.COPA"     FCOPA,
           SAPR3."/BIC/D&PAR_CUBE.COPAU"    COPAU,
           SAPR3."/BIC/D&PAR_CUBE.COPAP"    COPAP,
           SAPR3."/BIC/D&PAR_CUBE.COPA1"    CCPROD,
           SAPR3."/BIC/D&PAR_CUBE.COPA2"    CCCUST,
           SAPR3."/BIC/D&PAR_CUBE.COPA3"    COPA3,
           SAPR3."/BIC/D&PAR_CUBE.COPA4"    COPA4,
           SAPR3."/BIC/D&PAR_CUBE.COPA5"    COPA5,
           SAPR3."/BIC/MCCPROD"      MCCPROD,
           SAPR3."/BIC/SCCPROD"      SCCPROD,
           SAPR3."/BIC/MCCCUSTOM"    MCCCUSTOM,
           SAPR3."/BIC/SCCCUSTOM"    SCCCUSTOM,
           SAPR3."/BIC/SORGUNIT"     SORGUNIT,
           SAPR3."/BIC/SUNIMOYEAR"   SUNIMOYEAR,
    /*     SAPR3."/BI0/SFISCPER"     SFISCPER, */
           SAPR3."/BI0/SREQUID"      SREQUID,
           SAPR3."/BI0/SCURRENCY"    SCURRENCY,
           SAPR3."/BIC/SSCENARIO"    SSCENARIO,
           SAPR3."/BIC/SSOURCE"      SSOURCE
    The problem is that the file generation by this script (after reading the data from the InfoCube) is taking an unexpectedly long 2 hours, when it should take at most 10 minutes.
    I used RSRV to get the info about these tables for the infocube:
    Entry '00046174', SID = 37 in SID table is missing in master data table /BIC/MCUSLEVEL2
    Entry '00081450', SID = 38 in SID table is missing in master data table /BIC/MCUSLEVEL2
    and so on for SID = 39  and SID = 35 .
    Checking of SID table /BIC/SCUSLEVEL2 produced errors
    Checking of SID table /BIC/SCUSLEVEL3 produced errors
    Can you please let me know if this can be a reason for the delay in file generation (or in reading data from the InfoCube)?
    Also, please let me know how to proceed with this issue.
    Kindly let me know if more information is required.
    Thanks in advance for your help.
    -Shalabh

    Hi,
    While continuing to search for a solution to the problem, I managed to notice a difference in the partitioning of the fact table of the InfoCube.
    Using SE14 -> Storage Parameters, I found the partitioning of the fact table to be:
    PARTITION BY: RANGE
    COLUMN_LIST: KEY_ABACOPA
    and subsequently there are partitions with data in them.
    I need to understand the details of these partitions.
    Do they correspond to individual requests in the InfoCube? (That seems unlikely, as there are 13 requests in the InfoCube and many more partitions.)
    Most importantly, since this partitioning is observed for this InfoCube only and not for the other InfoCubes, could it be a reason for the SLOW RETRIEVAL of data from this InfoCube? (I am not sure, since partitioning is normally used to speed up retrieval from InfoCubes.)
    Kindly help.
    Thanks for your co-operation in advance.
    -Shalabh
