RMAN: Mechanism to implement

Hi Team,
We have a few Exadata customers for which RMAN is the method for database backups.
If a backup fails, the cit-backups team gets an email and fixes it. But if backups have not started or run for days, weeks, or months, there is no mechanism to identify that.
The GIT script should extend its functionality to check that the DB is up and that the crontab level0 and other entries are uncommented, yet RMAN is still not starting or running on time, and to report such issues. We have seen databases where RMAN did not run for a month and no one was aware of it; there was no SR or email.
This mechanism needs to be implemented. I have included Raja R & Nitin V.
Please let us know if someone knows of such a mechanism already in place.
Regards
Praveen

As suggested,
Create the logfile using RMAN, and if you do not want too many mails, simply do this:
generate the logfile
run a shell script to see if the logfile exists; if it does not, send an email.
after that, mv the logfile to another location (this ensures that if the backup did not run, the old logfile is not still in place).
Occasionally check the logfiles to ensure the backup is going fine and not failing; don't delete the archived logs before checking.
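For example, something along these lines (the paths, log name, and address are placeholders to adapt):

#!/bin/bash
# Run from cron some time after the backup window closes.
# Placeholder paths and address -- adjust for your environment.
LOG=/backup/logs/rman_level0.log
ARCHIVE_DIR=/backup/logs/old
RECIPIENT="[email protected]"

if [ ! -f "$LOG" ]; then
    # No logfile means RMAN never started in this window.
    echo "RMAN logfile $LOG not found - backup did not run" \
        | mailx -s "RMAN backup missing on $(hostname)" "$RECIPIENT"
else
    # Move the log aside so a stale one cannot mask the next missed run.
    mv "$LOG" "$ARCHIVE_DIR/rman_level0.$(date +%Y%m%d).log"
fi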
regards
Karan
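
To catch the "not running for weeks" case Praveen describes, the check script can also ask the database itself when the last successful backup completed. A hedged sketch; the SID, threshold, and address are placeholders, and it assumes a release recent enough (10g or later) to have V$RMAN_BACKUP_JOB_DETAILS:

#!/bin/bash
# Alert if the last completed RMAN backup is older than MAX_AGE_DAYS.
export ORACLE_SID=ORCL          # placeholder SID
MAX_AGE_DAYS=2                  # placeholder threshold
RECIPIENT="[email protected]"   # placeholder address

age=$(sqlplus -s "/ as sysdba" << 'EOF'
set heading off feedback off pages 0
select trunc(sysdate - max(end_time))
from   v$rman_backup_job_details
where  status = 'COMPLETED';
EOF
)
age=$(echo $age)   # trim whitespace

if [ -z "$age" ] || [ "$age" -gt "$MAX_AGE_DAYS" ]; then
  echo "Last completed RMAN backup on $ORACLE_SID is ${age:-unknown} day(s) old" \
    | mailx -s "RMAN backup overdue on $ORACLE_SID" "$RECIPIENT"
fi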

Similar Messages

  • How to implement a time watch for an object

    Greetings!
    I have a container called TupleSpace which consists of elements of type Tuple. Among its attributes, a Tuple has a "time to live" (the maximum amount of time its presence is allowed in the TupleSpace). After this amount of time passes, it should be automatically deleted from the TupleSpace. My question is: what is the best mechanism to implement this?
    My solution so far looks like this:
    public void removeExpiredTuple() {
        // ts = instance of the TupleSpace class; iterate backwards so that
        // removing an element does not skip the one shifted into its place
        for (int i = ts.size() - 1; i >= 0; i--) {
            ITuple currentTuple = (ITuple) ts.elementAt(i);
            long tupleAge = System.currentTimeMillis() - currentTuple.getTimeSubmitted();
            // currentTuple.getTtl() retrieves the Tuple's "time to live"
            if (tupleAge > currentTuple.getTtl())
                ts.remove(i);
        }
    }
    but I am not at all satisfied with it.
    Thanks in advance for your answers
    Edited by: OctavianS on Jan 18, 2008 12:10 PM

    ok, I'll give it a try and come back with a reply
    LE: problem solved. thx again
    Edited by: OctavianS on Jan 22, 2008 3:56 PM

  • Retry Mechanism for BPM in PI 7.1

    Hi Experts,
    Is it possible to have a retry mechanism when implementing a BPM solution? Here's my scenario:
    With BPM, initial scenario:
    The sending system sends a message to PI, which is routed via BPM to another system (let's say a Master Data System) to get specific data that is added to the initial data sent from the sending system. The BPM then handles the mapping of that specific data, and the message is sent to a Receiving System.
    Additional scenario (if it is possible):
    What if the Master Data System is down?
    - an additional BPM will be added to the initial BPM solely for the retry mechanism. The BPM will then retry, let's say 5 attempts or 5 hours; if successful, the initial scenario proceeds, but if the system is still down and the retry mechanism has reached its limit, it will throw an error or an alert.
    If this is possible, how can this be handled?
    Appreciate your response with this. Thanks!
    Cheers,
    R-jay

    The BPM will then retry, let's say 5 attempts or 5 hours
    Your scenario is: SYS1 --> BPM <--> SYS2 ... and the rest of the processing.
    Your question is what happens if the BPM <--> SYS2 communication fails on the first attempt.
    The above step will be SYNC. Include a BLOCK with an Exception Branch in your BPM; if the SYNC step fails, the Exception Branch will be triggered. In the exception branch, include a WAIT step (set the time accordingly; avoid setting a big interval).
    After the WAIT step, include a SYNC Send step which sends the same message again (BPM <--> SYS2). Check whether you get the response or it is still in error; if still in error, implement proper error handling.
    Just ensure that the WAIT step does not make the BPM wait for too long.
    Regards,
    Abhishek.

  • GG Implementation

    Hi Experts,
    There is a GG requirement. I have to install Oracle GoldenGate on Unix servers, on both source and target. The source is currently an OLTP system; the DB version is Oracle 11gR1. I would like to know the suitable initial load mechanism for implementing this.
    1. Only one schema has to be replicated to the target server, basic uni-directional replication only.
    2. That schema has only 13 tables; the maximum number of rows is 5 million.
    3. DB size is only 30 GB.
    What is the recommended solution for the initial loading process? Can we go with expdp/impdp with the help of an SCN number, or use GG initial load and change sync simultaneously? Please suggest the recommendation for this requirement. The source DB is a production database. Please advise.

    Hi,
    Thanks for your suggestion. Can you please provide the steps for the initial load? When do we have to start the initial-load extract and the change-data extract? When starting the change-capture extract, do we need to start it from an SCN or some other point in time? How exactly does GG resume from the point after the initial load? Which process needs to be started first? Please guide me.
    Edited by: ASP on Feb 24, 2012 12:02 PM
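
    The usual expdp/SCN sequence looks roughly like this (a hedged sketch, not GG documentation: process names, schema, directory, and the SCN value are placeholders, and details vary by GoldenGate version). The change-capture Extract is started first so nothing is missed while the export runs, and the Replicat is told to apply only changes after the export SCN so nothing is applied twice:
    # 1. Start the change-capture Extract on the source (in GGSCI):
    #      START EXTRACT ext1
    # 2. Note the current SCN on the source database:
    #      SELECT current_scn FROM v$database;
    # 3. Export the schema consistent as of that SCN (placeholder names):
    expdp system@srcdb schemas=APP directory=DP_DIR dumpfile=app.dmp flashback_scn=123456789
    # 4. Import it on the target:
    impdp system@tgtdb schemas=APP directory=DP_DIR dumpfile=app.dmp
    # 5. Start the Replicat, applying only changes after that SCN (in GGSCI):
    #      START REPLICAT rep1, AFTERCSN 123456789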

  • A semiautomatic alternative to /etc/fstab

    This is probably highly redundant... the chances are, someone will likely say "XYZ does that for you and you can configure it in 5 minutes", but here goes anyways.
    I wanted a simple way to mount the disks in my computer to the same location regardless of where they were in the system (thus via UUID) but what I *didn't* want was to have to copy/type the UUID myself. The following possibly shaky bash script is the result.
    First, however, a (very real-world) demonstration of its functionality!
    /disks/ + ./domount
    Using scriptdir "/disks/.mountscripts".
    Running mount... [ok]
    [Disk ST3250620A_5QE4M336]
    group0-root -> /disks/250gb: [ok]
    38067a33-0556-4cab-a5c5-c96b313bd174 -> /disks/250gb/boot: [ok]
    21D4-2E62 -> /disks/250gb/data: [ok]
    group0-home -> /disks/250gb/home:
    == mount error ==
    mount: wrong fs type, bad option, bad superblock on /dev/mapper/group0-home,
    missing codepage or helper program, or other error
    In some cases useful info is found in syslog - try
    dmesg | tail or so
    =================
    [R]etry/Skip [P]artition/Skip [D]isk/[Q]uit? q
    /disks/ + fsck.jfs /dev/mapper/group0-home
    fsck.jfs version 1.1.15, 04-Mar-2011
    processing started: 11/10/2011 20:28:10
    Using default parameter: -p
    The current device is: /dev/mapper/group0-home
    Block size in bytes: 4096
    Filesystem size in blocks: 52099072
    **Phase 0 - Replay Journal Log
    Filesystem is clean.
    /disks/ + ./domount
    Using scriptdir "/disks/.mountscripts".
    Running mount... [ok]
    [Disk ST3250620A_5QE4M336]
    group0-root -> /disks/250gb: (already mounted)
    38067a33-0556-4cab-a5c5-c96b313bd174 -> /disks/250gb/boot: (already mounted)
    21D4-2E62 -> /disks/250gb/data: (already mounted)
    group0-home -> /disks/250gb/home: [ok]
    group0-var -> /disks/250gb/var: [ok]
    partition1.vfat -> /disks/250gb/home/backup/80gb/mnt/partition1.vfat: [ok]
    partition2.vfat -> /disks/250gb/home/backup/80gb/mnt/partition2.vfat: [ok]
    partition3.vfat -> /disks/250gb/home/backup/80gb/mnt/partition3.vfat: [ok]
    partition4.ext3 -> /disks/250gb/home/backup/80gb/mnt/partition4.ext3: [ok]
    data2 -> /disks/250gb/home/backup/32gb-2/mnt/data2: [ok]
    [Disk ST340014A_5MQ4HB90]
    0854-08DE -> /disks/20gb-1/data-1: [ok]
    4846-D7E2 -> /disks/20gb-1/data-2: [ok]
    3070DB1E70DAE99C -> /disks/20gb-1/winnt: [ok]
    38BB-158D -> /disks/20gb-1/pool: [ok]
    [Disk WDC_WD800BB-22J_WD-WCAM9H677098]
    e336c404-fca8-4f2b-9c75-81c22f339741 -> /disks/80gb: [ok]
    4738-E723 -> /disks/80gb/vfat: [ok]
    a827cfa1-08cf-4a24-a989-aae94ea0801b -> /disks/80gb/boot: [ok]
    7bb5df89-3a90-4c92-8aa7-a94271806087 -> /disks/80gb/var: [ok]
    09b652b7-4f5e-4895-8464-6f972a44fdd6 -> /disks/80gb/home: [ok]
    a2534aa6-b70f-442d-805e-365ee626d4be -> /disks/80gb/tmpspace: [ok]
    4871-993D -> /disks/80gb/tmpspace2: [ok]
    386a2a83-22e2-425c-bd48-cb0a1fad8a87 -> /disks/80gb/pool: [ok]
    /disks/ +
    Here's the script! (I can pastebin it if necessary)
    #!/bin/bash
    # ohai from i336 :P <[email protected]>
    # Oct-Nov 2011
    # Public domain, no warranty. Be sure to use the "t" flag on the first run!
    # This program has two modes: scan mode and run mode.
    # Configuration
    # =============
    # You first need to create/go into the directory you want to mount your disks
    # in, such as /mnt (I use /disks), and create the subdirectory ".mountscripts", or
    # alternatively "programname-mountscripts" (the second directory bearing the
    # name of the program/symlink, a simple mechanism to implement some flexibility).
    # You can substitute any created symlinks wherever "./domount" is mentioned.
    # The existence of this directory indicates that this is the work directory.
    # (For added flexibility, the program will look for the second directory, the
    # one bearing its name, first, then fall back on ".mountscripts" if this is not
    # found.)
    # Scan Mode
    # =========
    # After creating this directory for the first time you will want to run
    # "./domount s" to generate the mountscripts into the mountscript directory
    # (which is selected as specified above).
    # Run Mode
    # ========
    # At this point, go into the mountscript directory, open all the files you find
    # there in a text editor, and add in the mountpoints you want to use after the
    # UUID parameter to 'partop' (an internal function defined in this file for the
    # scripts).
    # ** The first time you simply MUST run "./domount t" in order to see that the
    # 'mount' commands are correct! **
    # After this is done, run "./domount" and it will go ahead and mount the disks.
    # Run "./domount u" and it will unmount everything. (No options exist for
    # individual partitions as yet).
    # Limitations
    # ===========
    # * If you use domount to mount loopback images inside real partitions and the
    # real partitions are also mounted by domount, well, domount will try to
    # unmount them in the same order as when it mounts... and it will break.
    # Simple solution: skip however many real [p]artitions you have, then
    # re-run domount again. :)
    # * If you change a disk (eg add a partition), well, you'll have to delete the
    # file for that disk, re-scan (domount will not touch the other scripts) then
    # re-add your partitions back in. This program wasn't really designed to deal
    # with that kind of situation :)
    # * This program does not support LVM partitions - quite frankly, it doesn't
    # even realize such things exist. Thus you will not find any LVM partitions
    # listed in the generated scripts, or any "LVM partitions ignored"
    # messages - indeed, if you only have LVM partitions on a given disk, the
    # resulting syntactically incorrect script will contain an 'if' block with
    # no content and the shell will produce an error.
    toollist=
    needtool=0
    for tool in find lsblk blkid cfdisk xargs grep tail mountpoint; do
    type -P $tool > /dev/null 2>&1
    if [ $? -eq 0 ]; then
    toollist="${toollist} ${tool}"
    else
    toollist="${toollist} [${tool}]"
    needtool=1
    fi
    done
    if [ $needtool -eq 1 ]; then
    echo "This program requires the following tools in order to run. Those marked with"
    echo "brackets cannot be found (using \`type') and their containing packages"
    echo "likely need to be installed."
    echo $toollist
    exit 1
    fi
    sizes=(bytes KB MB GB TB)
    progname=$(basename $0)
    if [ -d ".mountscripts" ]; then
    scriptdir="$(pwd)/.mountscripts"
    elif [ -d ".${progname}-mountscripts" ]; then
    scriptdir="$(pwd)/.${progname}-mountscripts"
    fi
    if ([[ ! -d "${scriptdir}" ]] && [[ "$1" != "s" ]]) || [[ "$1" == "h" ]]; then
    cat << EOF
    usage: $0 [s] [t]
    s = scan
    t = test run (USE THIS THE FIRST TIME AFTER YOU HAVE DONE A SCAN)
    EOF
    exit 1
    fi
    if [[ "$1" = "s" ]]; then
    echo -n "Scanning disk tables... (by name)"
    parttable=(); while IFS= read -r line; do parttable+=("$line"); done < \
    <(find /dev/disk/by-id/ -name "scsi-SATA*" -name "*-part*" -type l | xargs stat -L -c "%t-%T %n")
    echo -n ", (by UUID)"
    uuidtable=(); while IFS= read -r line; do uuidtable+=("$line"); done < \
    <(find /dev/disk/by-uuid/ -type l | xargs stat -L -c "%t-%T %n")
    echo -ne " [ok]\nRunning blkid..."
    blkidtable=(); while IFS= read -r line; do blkidtable+=("$line"); done < \
    <(blkid)
    echo -ne " [ok]\nRunning lsblk (uno momento)..."
    lsblktable=(); while IFS= read -r line; do lsblktable+=("$line"); done < \
    <(lsblk -bro name,size,fstype,model | grep -v group | tail -n +2)
    echo -e " [ok]\n"
    if [ ${#parttable[@]} -ne ${#uuidtable[@]} ]; then
    echo 'Something is very wrong with either this program'
    echo 'or your disk configuration. O.o'
    exit 1
    fi
    echo -e "Using scriptdir \"${scriptdir}\".\n"
    echo -ne "\e[1GCompiling mapping table... [ ]\e[?25l"
    max=$[${#parttable[@]}*${#parttable[@]}]
    runindex=0
    for ((i = 0; i < "${#parttable[@]}"; i++)); do
    partsplit=(${parttable[$i]})
    devok=0
    devname="$(readlink -f ${partsplit[1]})"
    partsize=
    for uuid in "${uuidtable[@]}"; do
    uuidsplit=($uuid)
    c=$[((runindex*43)/$[max-1])]
    echo -ne "\e[29G"
    if [ $c -gt 0 ]; then eval \printf "%.s#" {0..$c}; else echo -n '.'; fi
    if [ $c -lt 43 ]; then eval \printf "%.s." {$[c+1]..43}; fi
    ((runindex++))
    if [[ "${partsplit[0]}" = "${uuidsplit[0]}" ]]; then
    partlabel=
    devok=1
    for entry in "${blkidtable[@]}"; do
    if [[ "${entry:0:$[${#devname}+9]}" != "${devname}: LABEL=\"" ]]; then continue; fi
    partlabel="${entry:$[${#devname}+9]}"
    partlabel=$(echo -n $(echo $partlabel | cut -d'"' -f1))
    done
    for entry in "${lsblktable[@]}"; do
    entry=($entry)
    if [[ "/dev/${entry[0]}" != "$devname" ]]; then continue; fi
    partsize=${entry[1]}
    parttype=${entry[2]}
    done
    if [ -z "$partsize" ]; then
    echo "$0: error: cannot determine partition size for $devname"
    exit 1
    fi
    devline="${partsplit[1]:26} ${uuidsplit[1]:18} ${partsize} ${parttype}${partlabel:+ $partlabel}"
    map[${#map[@]}]="$devline"
    fi
    done
    if [ $devok -eq 0 ]; then
    checkparttable[${#checkparttable[@]}]="${parttable[$i]#* }"
    fi
    done
    echo -e "\e[?25h\e[75Gdone.\n"
    if [[ ${#checkparttable[@]} -gt 0 ]]; then
    cat << EOF
    Warning: The following partitions do not have matching UUID entries
    in /dev/disk/by-uuid/.
    Linux seems to be quite smart, and won't list UUIDs for LVM
    members, partitions \`mount' cannot mount without the -t flag,
    or extended partition headers, but /dev/disk/by-id/ will still
    list them. So these are probably not a problem but may still
    warrant a double-check; if these contain valid filesystems you
    will need to insert them manually since their UUIDs cannot be
    calculated.
    EOF
    for partition in "${checkparttable[@]}"; do
    echo " >> $(readlink -f $partition) (/dev..by-id/${partition:26})";
    done
    echo
    fi
    find /dev/disk/by-id/ -name "scsi-SATA*" -not -name "*-part*" -type l | while read disk; do
    scriptfile="${scriptdir}/${disk:26}.mount.sh"
    rm -f "${scriptfile}"
    if [ ! -f "${scriptfile}" ]; then
    echo -ne "No mountscript found for disk ID \"${disk:26}\", creating one...\nRunning cfdisk... "
    cfdtable=(); while IFS= read -r line; do cfdtable+=("$line"); done < \
    <(cfdisk -Ps $disk | grep -v "Free Space" | grep -v "Unusable" | tail -n +6)
    echo -ne "[ok]\nRunning smartctl... "
    smartctlinfo="$(smartctl -i $disk)"
    diskdevname="$(readlink -f ${disk})"
    diskdevname=${diskdevname:5}
    disk="${disk:26}"
    disktable[${#disktable[@]}]="${disk}"
    tmp=
    diskparttable=
    for entry in "${lsblktable[@]}"; do
    entry=($entry)
    if [[ "${diskdevname}" != "${entry[0]}" ]]; then continue; fi
    devicename=$(echo -n $(echo "${entry[@]}" | cut -d' ' -f3-))
    done
    echo -e "# Script generated by domount at $(date +'%T on %D (MM/DD/YY)') for disk \"${devicename}\"\n" > "${scriptfile}"
    echo '# '$(echo "$smartctlinfo" | grep '^Model Family:') >> "${scriptfile}"
    echo '# '$(echo "$smartctlinfo" | grep '^Device Model:') >> "${scriptfile}"
    echo -e '# '$(echo "$smartctlinfo" | grep '^User Capacity:')"\n" >> "${scriptfile}"
    for part in "${map[@]}"; do
    if [[ "${part:0:$[${#disk}+1]}" != "${disk}-" ]]; then continue; fi
    diskparttable="${diskparttable}${part}\n";
    done
    mapfile -t diskparttable < <(echo -ne "${diskparttable%%\\n}" | sort -n -k1.$[${#disk}+6]n)
    echo -ne "if diskexists ${disk}; then\n\t\n" >> "${scriptfile}"
    for part in "${diskparttable[@]}"; do
    partsplit=($part)
    parttype=
    for line in "${cfdtable[@]}"; do
    line=($line)
    if [[ "X${partsplit[0]:${#disk}+5}X" != "X${line[0]}X" ]]; then continue; fi
    parttype="${line[1]}"
    done
    if [[ "X${parttype}X" = "XX" ]]; then
    echo "$0: error: Cannot parse cfdisk output"
    rm -f "${scriptfile}"
    exit 1
    fi
    echo -ne "\t# Partition: #${partsplit[0]:${#disk}+5} (${parttype}, ${partsplit[3]}" >> "${scriptfile}"
    if [[ "${partsplit[3]}" = "swap" ]]; then
    echo -n " - Skipping" >> "${scriptfile}"
    fi
    echo -n "); Size: " >> "${scriptfile}"
    sizeidx=0
    size=${partsplit[2]}
    while [ $size -gt 0 ]; do
    sizetext="${size}${sizes[$sizeidx]} ${sizetext}"
    size=$(($size/1024))
    ((sizeidx++))
    done
    sizetext=($sizetext)
    for ((i = 0; i < 2; i++)); do
    if [ $i -eq 1 ]; then echo -n ' (' >> "${scriptfile}"; fi
    if [[ "${sizetext[$i]: -1:1}" = "s" ]]; then
    echo -n "${sizetext[$i]:0:-5} bytes" >> "${scriptfile}"
    else
    echo -n "${sizetext[$i]:0:-2} ${sizetext[$i]: -2:2}" >> "${scriptfile}"
    fi
    if [ $i -eq 1 ]; then echo -n ')' >> "${scriptfile}"; fi
    done
    if [[ "X${partsplit[4]}X" != "XX" ]]; then
    echo -n "; Label: \"" >> "${scriptfile}"
    echo $(echo -n "${part}" | cut -d' ' -f5-)"\"" >> "${scriptfile}"
    else
    echo >> "${scriptfile}"
    fi
    if [[ "${partsplit[3]}" != "swap" ]]; then
    echo -e "\tmountpart /dev/disk/by-uuid/${partsplit[1]} \n\t" >> "${scriptfile}"
    else
    echo -e "\t" >> "${scriptfile}"
    fi
    done
    echo "fi" >> "${scriptfile}"
    echo -e "[ok]\nSuccess!\n"
    #echo ---; cat $scriptfile; echo ---
    else
    echo "Script found for disk ID ${disk}"
    fi
    done
    exit
    fi
    trap 'echo; exit' SIGINT
    echo -ne "Using scriptdir \"${scriptdir}\".\nRunning mount..."
    mapfile -t mounttable < <(mount)
    echo -e " [ok]"
    function spin() {
    trap 'echo -e "\e[?25h"' SIGINT SIGQUIT SIGKILL
    echo -ne "\e[?25l"
    if [[ $unicode -eq 1 ]]; then s=$(printf \\u2580\\u259C\\u2590\\u259F\\u2584\\u2599\\u258C\\u259B); m=8; d=0.03; else s='/-\|'; m=4; d=0.07; fi
    ("$@" & pid=$! ; c=1; while ps -p $pid > /dev/null 2>&1; do echo -ne "\e[s${s:c:1} \e[u"; c=$[c+1]; test $c -eq $m && c=0; sleep $d; done)
    echo -ne "\e[?25h"
    trap SIGINT SIGQUIT SIGKILL
    }
    function diskexists {
    disk=/dev/disk/by-id/scsi-SATA_${@}
    if [[ ! -L $disk ]]; then
    echo "(Disk ${1} is not installed)"
    return 1
    else
    echo "[Disk ${1}]"
    fi
    }
    function partop {
    if [[ $mode -eq 1 ]]; then
    while true; do
    echo -n "Unmounting ${1##*/}... "
    if ! mountpoint > /dev/null 2>&1 $2; then
    echo "(Not mounted, or not a mountpoint)"
    break;
    fi
    if [ ! -d $2 ]; then
    echo "error: Not a directory!"
    break
    fi
    cmd="umount $1"
    if [[ ! $testmode ]]; then
    output="$(${cmd} 2>&1)"
    err=$?
    else
    echo "{would run: ${cmd}} "
    fi
    if [[ $err = 0 ]]; then
    if [[ ! $testmode ]]; then echo "[ok]"; fi
    return
    else
    echo -e "\n== umount error =="
    echo -n "${output}"
    echo -e "\n=================\n"
    c=X;
    while [[ ! $c =~ (R|r|P|p|D|d|Q|q) ]]; do read -sn1 -p"[R]etry/Skip [P]artition/Skip [D]isk/[Q]uit? " c; echo $c; done
    echo
    case $c in
    D|d) skipdisk=1; break ;;
    P|p) break ;;
    Q|q) exit ;;
    esac
    fi
    done
    else
    if [[ $skipdisk = 1 ]] && [[ $newdisk = 0 ]]; then return; fi
    err=0
    skipdisk=0
    newdisk=0
    while true; do
    echo -n "${1##*/} -> $2: "
    if mountpoint > /dev/null 2>&1 $2; then
    echo "(already mounted)"
    break;
    fi
    if [[ $testmode == 0 ]]; then echo -n "Mounting"; fi
    if [ ! -d $2 ]; then
    echo -n " (creating dir $2"
    cmd="mkdir -p $2 2>&1"
    if [[ ! $testmode ]]; then
    output="$(eval $cmd)"
    err=$?
    else
    echo -n " {would run: $cmd}"
    fi
    echo -n ') '
    fi
    if [[ $err = 0 ]]; then
    if [[ $testmode == 0 ]]; then echo -n '... '; fi
    cmd="mount $@"
    if [[ ! $testmode ]]; then
    output="$(${cmd} 2>&1)"
    err=$?
    else
    echo "{would run: ${cmd}} "
    fi
    else
    echo
    fi
    if [[ $err = 0 ]]; then
    if [[ ! $testmode ]]; then echo "[ok]"; fi
    return
    else
    echo -e "\n== mount error =="
    echo -n "${output}"
    echo -e "\n=================\n"
    c=X;
    while [[ ! $c =~ (R|r|P|p|D|d|Q|q) ]]; do read -sn1 -p"[R]etry/Skip [P]artition/Skip [D]isk/[Q]uit? " c; echo $c; done
    echo
    case $c in
    D|d) skipdisk=1; break ;;
    P|p) break ;;
    Q|q) exit ;;
    esac
    fi
    done
    fi
    }
    if [[ $1 = "u" ]]; then mode=1; else mode=0; fi
    if [[ $1 = "t" ]]; then testmode=1; fi
    scripts=(${scriptdir}/*.mount.sh)
    for ((i = 0; i < ${#scripts[@]}; i++)); do
    newdisk=1
    . ${scripts[$i]}
    if (($i < ${#scripts[@]} - 1)); then echo; fi
    done
    echo -ne "\e[?25h"
    Hopefully someone else finds this helpful. I am aware of udev/automount; that was overkill, since the disks are always installed, and I don't need a system whose focus is on-the-fly detection of newly inserted media of whatever kind.
    -i336
    Last edited by i336 (2011-11-10 04:42:08)

    Thanks. I might use it soon...
    Does it automatically make folders named after the volume labels? And does it handle the conversion of spaces and non-alphanumeric characters to octal codes?
    I could read the script, but it would be faster for everyone reading if you leave the answer as a reply.
    I also think that there should be some major work done on modernizing the fstab, either by replacing it with a better implementation of file system mounting or changing the file structure and adding in better handling of non-alphanumerics. I don't want to have to look up a stupid octal table every time I type in my labels.

  • Performance problem

    Hi, I'm having a performance problem I cannot solve, so I'm hoping anyone here has some advice.
    Our system consists of a central database server, and many applications (on various servers) that connect to this database.
    Most of these applications are older, legacy applications.
    They use a special concurrency mechanism that was also designed a long time ago.
    This mechanism requires the applications to lock a specific table with a TABLOCKX before making any changes to the database (in any table).
    This is of course quite nasty as it essentially turns the database into single user mode, but we are planning to phase this mechanism out.
    However, there is too much legacy code to simply migrate everything right away, so for the time being we're stuck with it.
    Secondly, this central database must not become too big, because the legacy applications cannot cope with large amounts of data.
    So, a 'cleaning' mechanism was implemented to move older data from the central database to a 'history' database.
    This cleaning mechanism was created in newer technology; C# .NET.
    It is actually a CLR stored procedure that is called from a SQL job that runs every night on the history server.
    But, even though it is new technology, the CLR proc *must* use the same lock as the legacy applications because they all depend on the correct use of this lock.
    The cleaning functionality is not a primary process, so it does not have high priority.
    This means that any other application should be able to interrupt the cleaning and get handled by SQL first.
    Therefore, we designed the cleaning so that it cleans as little data as possible per transaction, so that any other process can get the lock right after each cleaning transaction commits.
    On the other hand, this means that we do have a LOT of small transactions, and because each transaction is distributed, they are expensive and so it takes a long time for the cleaning to complete.
    Now, the problem: when we start the cleaning process, we notice that the CPU shoots up to 100%, and other applications are slowed down excessively.
    It even caused one of our customer's factories to shut down!
    So, I'm wondering why the legacy applications cannot interrupt the cleaning.
    Every cleaning transaction takes about 0.6 seconds, which may seem long for a single transaction, but this also means any other application should be able to interrupt within 0.6 seconds.
    This is obviously not the case; it seems the cleaning session 'steals' all of the CPU, or for some reason other SQL sessions are not handled anymore, or at least with serious delays.
    I was thinking about one of the following approaches to solve this problem:
    1) Add waits within the CLR procedure, after each transaction. This should lower the CPU, but I have no idea whether this will allow the other applications to interrupt sooner. (And I really don't want the factory to shut down twice!)
    2) Try to look within SQL Server (using DMV's or something) and see whether I can determine why the cleaning session gets more CPU. But I don't know whether this information is available, and I have no experience with DMV's.
    So, to round up:
    - The total cleaning process may take a long time, as long as it can be interrupted ASAP.
    - The cleaning statements themselves are already as simple as possible so I don't think more query tuning is possible.
    - I don't think it's possible to tell SQL Server that specific SQL statements have low priority.
    Can anyone offer some advice on how to handle this?
    Regards,
    Stefan
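
    On approach 2, a possible starting point (a sketch assuming sqlcmd access; the server name is a placeholder): the CPU and wait columns of sys.dm_exec_requests usually show whether the cleaning session is burning CPU or whether the other sessions are stuck waiting on the lock.
    # Run while the cleaning job is active; CENTRALSRV is a placeholder.
    sqlcmd -S CENTRALSRV -E -Q "
    SELECT r.session_id,
           r.status,
           r.cpu_time,             -- ms of CPU consumed by the request
           r.wait_type,            -- non-NULL means waiting, not running
           r.blocking_session_id   -- who holds the resource it waits on
    FROM   sys.dm_exec_requests AS r
    WHERE  r.session_id > 50      -- skip most system sessions
    ORDER  BY r.cpu_time DESC;"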

    Hi Stefan,
    Have you solved this issue after following up on Bob's suggestion? I'm writing to follow up with you on this post; please let us know how things go.
    If you have any feedback on our support, please click here.
    Elvis Long
    TechNet Community Support

  • New to Oracle - Ques. about globals.jsa

    Hi,
    I'm trying to put together a simple web application (JSPs only) to run under Oracle 9i. For this application, I need to maintain a persistent variable among all users and sessions (we would set the variable initially, once, via our app, and the variable would be available to all of our JSPs).
    I guess with other containers, I would use ServletContext, but I understand this is not supported in Oracle, and I may need to use globals.jsa instead.
    This is a simple requirement I think, and I think I actually have it working, but just want to check if I'm doing things correctly.
    What I have so far is a globals.jsa in my app's home directory:
    <%-----------------------------------------------------------
    Copyright © 1999, Oracle Corporation. All rights reserved.
    ------------------------------------------------------------%>
    <event:application_OnStart>
         <jsp:useBean id="myvar" class="oracle.jsp.jml.JmlString" scope = "application" />
    </event:application_OnStart>
    <event:application_OnEnd>
         <jsp:useBean id="myvar" class="oracle.jsp.jml.JmlString" scope = "application" />
    </event:application_OnEnd>
    <event:session_OnStart>
         <%-- Acquire beans --%>
    </event:session_OnStart>
    <event:session_OnEnd>
         <%-- Acquire beans --%>
    </event:session_OnEnd>
    And, in my JSPs, I add:
    <jsp:useBean id="myvar" class="oracle.jsp.jml.JmlString" scope = "application" />
    Then, I access the 'global' variable:
    To retrieve: String s = myvar.getValue();
    or
    To update: myvar.setValue("whatever");
    Like I said, this seems to be working, but I'm just checking. I'm kind of unclear about what I have in the globals.jsa itself. Is what I have above correct?
    Thanks,
    Jim

    Jim, I am not sure if I have ever used globals.jsa or not. However, if you got it working, then it is working. Please take a look at the documentation "chapter 9, Oracle JSP in Apache JServ" in
    Oracle9i Support for JavaServer Pages Reference
    Release 2 (9.2)
    Part Number A96657-01
    Current url is http://download-west.oracle.com/docs/cd/B10501_01/java.920/a96657/apjsvsup.htm#1015160
    Any particular reason you are trying to use globals.jsa now? Please note that the J2EE side of the Oracle Java product has moved forward enormously. For one minor example, globals.jsa is no longer supported in later releases. It was only a mechanism for implementing the JSP specification in a servlet 2.0 environment, where Web applications and servlet contexts were not fully defined. Now that we are past 2002, there is no reason to keep supporting globals.jsa.
    Please have fun trying oc4j 10.1.2 production or oc4j 10.1.3 developer preview.

  • SOAP receiver adapter with wsc SecurityContextToken

    Is there any possibility, as with the UsernameToken profile, of using the Axis framework SOAP adapter (or any other configuration) based on WS-Security and WS-SecureConversation (wsc) per the OASIS spec, and also passing the identifier dynamically?
    The header should be extended, e.g. by:
    <soapenv:Header>
    <wsse:Security>
    <wsc:SecurityContextToken>
    <wsc:Identifier>some input here</wsc:Identifier>
    </wsc:SecurityContextToken>
    </wsse:Security>
    </soapenv:Header>

    Hi Mika,
    Thanks for taking the time to answer this, but it seems the thread you mentioned talks about how to implement UsernameTokens. I want to understand whether we can create a SecurityContextToken using the AXIS adapter or any other way.
    The below field needs to be sent in the SOAP header
          <c:SecurityContextToken u:Id="uuid-xxxxxxxxxxxxx1" xmlns:c="http://schemas.xmlsoap.org/ws/2005/02/sc">
            <c:Identifier>urn:uuid:xxxxxxxxxxxxx</c:Identifier>
          </c:SecurityContextToken>
    This is a different mechanism of implementing security.

  • DB buffer cache vs. SQL query & PL/SQL function result cache

    Hi all,
    Started preparing for OCA cert. just myself using McGraw Hill's exam guide. Have a question about memory structures.
    As I understand it, the DB buffer cache is used to hold copies of the data blocks read by e.g. SELECT queries, which can be reused by another session (server process).
    There is also an additional option, the SQL query & PL/SQL function result cache (from 11g), where the results of such queries are also stored.
    Do they do the same thing, or is there some difference, a different purpose?
    thanks in advance...

    There is also an additional option, the SQL query & PL/SQL function result cache (from 11g), where the results of such queries are also stored.
    The result cache is located in the shared pool; it is one component of it. When a server process executes a query (and you have configured the result cache), the result is stored in the shared pool. On the next execution, the run-time mechanism detects this and uses the cached result without re-executing the query (provided the underlying data has not changed, which is checked at that time).
    Do they do the same thing, or is there some difference, a different purpose?
    The buffer cache and the result cache are different things with different purposes. The result cache was introduced in 11g to improve query response time (a similar mechanism was already implemented in 10g for subquery execution within complex queries). The buffer cache holds data blocks, not query results.
    Edited by: Chinar on Nov 4, 2011 4:40 AM
    (Removing lots of "But" word from sentences :-) )
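
    A quick way to see the result cache in action (a sketch; scott/tiger is a placeholder login, and it assumes RESULT_CACHE_MODE is at its default of MANUAL, so the hint is required):
    sqlplus -s scott/tiger << 'EOF'
    -- First run executes the query and stores the result in the result cache
    SELECT /*+ RESULT_CACHE */ deptno, COUNT(*) FROM emp GROUP BY deptno;
    -- Second run is answered straight from the result cache
    SELECT /*+ RESULT_CACHE */ deptno, COUNT(*) FROM emp GROUP BY deptno;
    -- The cached result is visible in the result cache view
    SELECT name, status FROM v$result_cache_objects WHERE type = 'Result';
    EOF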

  • Query on Active Sync

    Hi,
    I have a query on Active Sync.
    I am making some changes to a couple of attributes. Is there a way, or a variable, through which I can get only the changed values?
    activeSync.changes gives me the entire value of the changed attributes.
    for eg:
    Original value of member of: Group1,Group2
    New value: Group1,Group2,Group3
    I am modifying the memberOf attribute. The user already has Group1 and Group2 under the memberOf attribute, and I want to add Group3. Is there a way in which I can get the value of only Group3?
    Thanks,
    pdeep

    Hi,
    Please do not confuse what is displayed on the screen with what actually happens on the resource. The screen is showing that prior to the change the user will be a member of Group1,Group2. After the change the user will be a member of Group1,Group2,Group3.
    The actual mechanism to implement the change on the resource will depend upon the implementation of the resource adaptor.
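
    Whatever the adaptor does internally, the delta itself is just a set difference between the old and new value lists. A hypothetical shell illustration of that computation (not IdM code):
    # Entries present in the new list but not in the old one -> Group3
    comm -13 <(printf '%s\n' Group1 Group2 | sort) \
             <(printf '%s\n' Group1 Group2 Group3 | sort)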

  • Best way to detect failure in Metro ethernet networks

    Hello ,
    I am working for a well-known provider, and I am currently migrating one of my clients from a Frame Relay to a Metro Ethernet link.
    I am looking for advice on what sort of mechanism to implement to detect a failure in the ME path.
    As you probably know, a failure on one of the links might leave the CE-SWITCH-PE interfaces up/up, and the network will not necessarily start converging.
    So far I have implemented BFD along with IP SLA route tracking. I am happy with BFD, but the IP SLA is acting "weird":
    - IP SLA ICMP tracking relies on ICMP packets and was too sensitive to packet loss
    - We switched to IP SLA route tracking, but I am still unsure about the best way to use or implement this.
    Is there some sort of best practice available somewhere for this?
    thanks,
    T

    Hello Thomas,
    From what I have seen, BFD is the best bet, as it allows you to relax the L3 protocol timers (BGP or any other protocol used between CE and PE). Another option is to have a GRE tunnel across the PE-CE link and track the tunnel interface.
    Regards,
    Shreeram

  • Monitoring file changes in real-time

    Hi,
    Is it possible to monitor which files are being modified in Solaris within an application?
    For example, Windows provides a hooking mechanism: once you call the required system calls, you can capture file system events in real time. In fact, real-time virus protection applications rely entirely on this feature. Another way is to write virtual device drivers, as far as I know. Does Solaris 9/10 provide an API, signaling, or a callback mechanism to implement this kind of application? Or is it possible to write virtual device drivers in Solaris to achieve this functionality?
    Your help will be appreciated.
    Best regards.
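
    There is no direct equivalent of the Windows hooks, but as a sketch of what is observable without writing a driver: on Solaris 10, DTrace can watch file-modifying system calls machine-wide (run as root; the probe below is illustrative, not an exhaustive monitor):
    # Print every process that opens a file for writing (O_WRONLY=1, O_RDWR=2)
    dtrace -n 'syscall::open*:entry /arg1 & 3/ { printf("%s opened %s", execname, copyinstr(arg0)); }'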

    Hi,
    I'm not spamming the question across all the subject titles of the forum, and I don't expect the information in pieces.
    I know that multiple posting is not nice, and my aim is not to flood every subject link I see with the same question in order to go on an information hunt, regardless of whether the titles are relevant.
    I just posted the question to the subject titles I thought relevant, as is the way it should be.
    Best regards.

  • How to transfer parameters between two iViews?

    Hi all,
    I have two WebDynpro based iViews (StartView and SecondView). When I press the button in the StartView, the EP navigates to the SecondView. During this process, I want to send a parameter value from the StartView to the SecondView. In the StartView, I use the following method to navigate:
    WDPortalNavigation.navigateAbsolute(
                  "ROLES:portal_content/com.sap.itsamtest/com.sap.secondpage",
                     WDPortalNavigationMode.SHOW_INPLACE,
                     (String) null,
                     (String) null,
                     WDPortalNavigationHistoryMode.NO_DUPLICATIONS,
                     (String) null,
                     (String) null,
                     "value=From Start Page");
    In the SecondView, I use the WDWebContextAdapter to get parameter's value.
    String Name = WDWebContextAdapter.getWebContextAdapter().getRequestParameter("value");
    It works fine. But I found there is a warning message:
    The method getRequestParameter(String) from the type IWDWebContextAdapter is deprecated     
    So is there any other good mechanism to implement the data transfer between two iViews?
    Thanks and Best regards
    Deyang

    Hi,
      Try the following:
    Subscribing to a Portal Event:
    http://help.sap.com/saphelp_nw04/helpdata/en/0c/8eee31e383cd408bcb07e80b887463/frameset.htm
    Code Example for Programming Portal Eventing:
    http://help.sap.com/saphelp_nw04/helpdata/en/0c/8eee31e383cd408bcb07e80b887463/frameset.htm
    Regards,
    Sai Krishna.
    PS: Plz do assign points if it helps. ;-)

  • Multi processor Solaris 2.6 and mutex locks

    hi,
    Is anyone aware of any documented issues with Solaris 2.6 running
    on dual SPARC processors (a multi-processor environment), where
    programs using "mutex locks" (multi-threaded applications) require
    some special handling, in terms of compiling and linking against
    special libraries?
    As far as I remember, some OS book, maybe Peterson's, said that
    the mechanism for implementing mutex locks on multi-processor
    systems is to use low-level spin locks. This hurts performance
    on a single-processor system, making the processor busy-wait,
    but it happens to be the only way of mutex locking on a multi-processor
    system. If this is so, where is such behaviour documented in the case
    of Solaris 2.6?
    I have had problems with my application crashing (rather, hanging)
    in a vfork() on such a system, but the same application works fine, with
    100% reproducibility, on a single-processor system.
    thanks for any inputs or suggestions and/or information.
    regards,
    banibrata dutta

    I am also facing a similar problem, with an application written multi-threaded using POSIX mutexes. When I run it on a SINGLE processor machine, i.e.
    SunOS sund4 5.7 Generic_106541-11 sun4u sparc SUNW,Ultra-5_10
    it works perfectly.
    But when I try to run it on a dual processor machine, i.e.
    SunOS sund2 5.7 Generic_106541-11 sun4u sparc SUNW,Ultra-250
    it blocks in one of the mutexes.
    Please tell us what the problem is. Mr. B. Datta, if you come to know
    anything through any channel, please inform me too at [email protected]
    Thanx & regards,
    -venkat
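
    For what it's worth, the usual first thing to check on Solaris is whether the code was compiled and linked as multi-threaded at all. A hedged sketch of the conventional compile/link lines for the Sun compiler (myapp.c is a placeholder):
    # -mt defines _REENTRANT and links the threads library (Solaris threads)
    cc -mt -o myapp myapp.c
    # For POSIX threads, also request POSIX semantics and link libpthread
    cc -mt -D_POSIX_C_SOURCE=199506L -o myapp myapp.c -lpthread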

  • Request Response : Direct bindings

    Hi all,
    I want to expose a schema as a web service in BizTalk, but my orchestration,
    instead of having a "specify later" port bound to my receive location, is bound
    directly to the message box. How can I handle that, consuming the request message
    and sending my response message back to the same receive location?
    Is it possible in BizTalk to use a request-response port in direct binding mode?

    Thanks for your reply, but you didn't explain how to route the response message to the same receive location. If you try, you will get an error, because no subscribers are identified to consume the response message.
    There must be a mechanism to implement this, but I do not know how.
    And without adding a filter: just configure the request part to the exposed message type, and it is done.
