ASM and Bad Performance

Hi ASM experts,
Could you please give me some hints on testing the speed of ASM?
On our current ASM 11gR1 (running on AIX 6.1 in a RAC cluster) I had to drop a schema of only 900 MB and it took a really long time.
Is there anything I can check?
Thanks.

Hello Haggy,
What about using the following script? It is an equivalent of the Unix iostat, but this time for ASM.
Let me know if this helps.
#!/bin/ksh
#
# NAME
#   asmiostat.sh
#
# DESCRIPTION
#   iostat-like output for ASM
#   $ asmiostat.sh [-s ASM ORACLE_SID] [-h ASM ORACLE_HOME] [-g Diskgroup]
#                  [-f disk path filter] [<interval>] [<count>]
#
# NOTES
#   Creates a persistent SQL*Plus connection to the +ASM instance implemented
#   as a ksh co-process
#
# AUTHOR
#   Doug Utzig
#
# MODIFIED
#   dutzig 08/18/05 - original version

ORACLE_SID=+ASM
NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS'
# export so the sqlplus and asmcmd child processes inherit the environment
export ORACLE_SID ORACLE_HOME LD_LIBRARY_PATH NLS_LANG NLS_DATE_FORMAT

endOfOutput="_EOP$$"

typeset -u diskgroup
typeset diskgroup_string="Disk Group: All diskgroups"

typeset usage="
$0 [-s ASM ORACLE_SID] [-h ASM ORACLE_HOME] [-g diskgroup] [<interval>] [<count>]

Output:
  DiskPath - Path to ASM disk
  DiskName - ASM disk name
  Gr     - ASM disk group number
  Dsk    - ASM disk number
  Reads  - Reads
  Writes - Writes
  AvRdTm - Average read time (in msec)
  AvWrTm - Average write time (in msec)
  KBRd   - Kilobytes read
  KBWr   - Kilobytes written
  AvRdSz - Average read size (in bytes)
  AvWrSz - Average write size (in bytes)
  RdEr   - Read errors
  WrEr   - Write errors
"

while getopts ":s:h:g:f" option; do
  case $option in
    s)  ORACLE_SID="$OPTARG" ;;
    h)  ORACLE_HOME="$OPTARG"
        LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH ;;
    g)  diskgroup="$OPTARG"
        diskgroup_string="Disk Group: $diskgroup" ;;
    f)  print '-f option not implemented' ;;
    :)  print "Option $OPTARG needs a value"
        print "$usage"
        exit 1 ;;
    \?) print "Invalid option $OPTARG"
        print "$usage"
        exit 1 ;;
  esac
done
shift OPTIND-1

typeset -i interval=${1:-10} count=${2:-0} index=0

# Verify interval and count arguments are valid
(( interval<=0 || count<0 )) && {
  print 'Invalid parameter: <interval> must be > 0; <count> must be >= 0'
  print "$usage"
  exit 1
}

# Query to run against v$asm_disk_stat
if [[ -z $diskgroup ]]; then
  query="select group_number, disk_number, name, path, reads, writes, read_errs, write_errs, read_time, write_time, bytes_read, bytes_written from v\$asm_disk_stat where group_number>0 order by group_number, disk_number;"
else
  query="select group_number, disk_number, name, path, reads, writes, read_errs, write_errs, read_time, write_time, bytes_read, bytes_written from v\$asm_disk_stat where group_number=(select group_number from v\$asm_diskgroup_stat where name=regexp_replace('$diskgroup','^\+','')) order by group_number, disk_number;"
fi

# Check for version 10.2 or later
typeset version minversion=10.2
version=$($ORACLE_HOME/bin/exp </dev/null 2>&1 | grep "Export: " | sed -e 's/^Export: Release \([0-9][0-9]*\.[0-9][0-9]*\).*/\1/')
if [[ $(print "$version < $minversion" | bc 2>/dev/null) == 1 ]]; then
  print "$0 requires Oracle Database Release $minversion or later"
  exit 1
fi

# Print an error message and exit
function fatalError {
  print -u2 -- "Error: $1"
  exit 1
}

# Drain all of the sqlplus output - stop when we see our well-known string
function drainOutput {
  typeset dispose=${1:-'dispose'} output
  while :; do
    read -p output || fatalError "Read from co-process failed [$0]"
    if [[ $QUERYDEBUG == ON ]]; then print $output; fi
    if [[ $output == $endOfOutput* ]]; then break; fi
    [[ $dispose != 'dispose' ]] && print -- $output
  done
}

# Ensure the instance is running and it is of type ASM
function verifyASMinstance {
  typeset asmcmdPath=$ORACLE_HOME/bin/asmcmd
  [[ ! -x $asmcmdPath ]] && fatalError "Invalid ORACLE_HOME $ORACLE_HOME: $asmcmdPath does not exist"
  $asmcmdPath pwd 2>/dev/null | grep -q '^\+$' || fatalError "$ORACLE_SID is not an ASM instance"
}

# Start the sqlplus co-process and initialize the session
function startSqlplus {
  $ORACLE_HOME/bin/sqlplus -s '/ as sysdba' |&
  print -p 'whenever sqlerror exit failure' \
    && print -p "set pagesize 9999 linesize 9999 feedback off heading off" \
    && print -p "prompt $endOfOutput" \
    || fatalError 'Write to co-process failed (startSqlplus)'
  drainOutput dispose
}

# MAIN
verifyASMinstance
startSqlplus

# Loop as many times as requested or forever
while :; do
  print -p "$query" \
    && print -p "prompt $endOfOutput" \
    || fatalError 'Write to co-process failed (collectData)'
  stats=$(drainOutput keep)
  print -- "$stats\nEOL"
  index=index+1
  (( count<index && count>0 )) && break
  sleep $interval
done | \
awk '
BEGIN { firstSample=1 }

# EOL marks the end of one sample set
/^EOL$/ {
  firstSample=0; firstLine=1
  next
}

{
  path=$4
  if (path ~ /^ *$/) next
  group[path]=$1; disk[path]=$2; name[path]=$3
  reads[path]=$5; writes[path]=$6
  readErrors[path]=$7; writeErrors[path]=$8
  readTime[path]=$9; writeTime[path]=$10
  readBytes[path]=$11; writeBytes[path]=$12

  # reads and writes
  readsDiff[path]=reads[path]-readsPrev[path]
  writesDiff[path]=writes[path]-writesPrev[path]

  # read errors and write errors
  readErrorsDiff[path]=readErrors[path]-readErrorsPrev[path]
  writeErrorsDiff[path]=writeErrors[path]-writeErrorsPrev[path]

  # read time and write time
  readTimeDiff[path]=readTime[path]-readTimePrev[path]
  writeTimeDiff[path]=writeTime[path]-writeTimePrev[path]

  # average read time and average write time in msec
  avgReadTime[path]=0; avgWriteTime[path]=0
  if ( readsDiff[path] ) avgReadTime[path]=(readTimeDiff[path]/readsDiff[path])*1000
  if ( writesDiff[path] ) avgWriteTime[path]=(writeTimeDiff[path]/writesDiff[path])*1000

  # bytes and KB read and bytes and KB written
  readBytesDiff[path]=readBytes[path]-readBytesPrev[path]
  writeBytesDiff[path]=writeBytes[path]-writeBytesPrev[path]
  readKb[path]=readBytesDiff[path]/1024
  writeKb[path]=writeBytesDiff[path]/1024

  # average read size and average write size
  avgReadSize[path]=0; avgWriteSize[path]=0
  if ( readsDiff[path] ) avgReadSize[path]=readBytesDiff[path]/readsDiff[path]
  if ( writesDiff[path] ) avgWriteSize[path]=writeBytesDiff[path]/writesDiff[path]

  # the first sample holds totals since instance startup - skip it
  if (!firstSample) {
    if (firstLine) {
      "date" | getline now; close("date")
      printf "\n"
      printf "Date: %s   Interval: %d secs   %s\n\n", now, '"$interval"', "'"$diskgroup_string"'"
      printf "%-40s %2s %3s %8s %8s %6s %6s %8s %8s %7s %7s %4s %4s\n", \
        "DiskPath - DiskName","Gr","Dsk","Reads","Writes","AvRdTm",\
        "AvWrTm","KBRd","KBWr","AvRdSz","AvWrSz","RdEr","WrEr"
      firstLine=0
    }
    printf "%-40s %2s %3s %8d %8d %6.1f %6.1f %8d %8d %7d %7d %4d %4d\n", \
      path " - " name[path], group[path], disk[path], \
      readsDiff[path], writesDiff[path], \
      avgReadTime[path], avgWriteTime[path], \
      readKb[path], writeKb[path], \
      avgReadSize[path], avgWriteSize[path], \
      readErrorsDiff[path], writeErrorsDiff[path]
  }

  # remember current values for the next interval
  readsPrev[path]=reads[path]; writesPrev[path]=writes[path]
  readErrorsPrev[path]=readErrors[path]; writeErrorsPrev[path]=writeErrors[path]
  readTimePrev[path]=readTime[path]; writeTimePrev[path]=writeTime[path]
  readBytesPrev[path]=readBytes[path]; writeBytesPrev[path]=writeBytes[path]
}

END { exit 0 }'
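You would run it like this (DATA is just an example diskgroup name, substitute your own). Note that the first sample holds the totals since instance startup, so nothing is printed until the second sample:

./asmiostat.sh                 # all diskgroups, every 10 seconds, until interrupted
./asmiostat.sh -g DATA 5 12    # diskgroup DATA only, every 5 seconds, 12 samples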

Similar Messages

  • Reporting on master data customer and bad performance: any workaround?

    Hello,
    I've been asked to investigate bad performance encountered when reporting on the specific master data zcustomer.
    Basically this master data has a design quite similar to 0customer; there are 96,000 entries in the master data table.
    A simple query has been developed: the reporting is done on the master data zcustomer and its attributes: no key figure, no calculation, no restriction ...
    Nevertheless, the query cannot be executed: it runs for around 10 minutes in rsrt, then the private memory is exhausted and a short dump is generated.
    I tried to build a very simple query on 0customer, this time without the attributes ... and it took more than 30 sec before I got the results.
    I checked the query statistics:
    3.x Analyzer Server: 10 sec
    OLAP: Read Texts: 20 sec
    How is it that reporting on this master data takes so long, while at the same time, if I try to display the content in SAP by choosing "maintain master data", I get an immediate answer?
    Is there any workaround?
    Any help would be really appreciated.
    Thank you.
    Raoul

    Hi.
    How much data have you got in the cube?
    If you make no restrictions, you are asking the system to return data for all 96,000 customers. That is one thing that might take some time.
    Also, using the attributes of this customer object, e.g. making selections on or displaying several of them, means that the system has to run through the 96,000 records in master data to know what goes where in the report.
    When you display the master data, you are by default displaying just the first 250 or so hits, and you are not joining against any cube or sorting the result set, so that is fast.
    You should make some kind of restriction on other things than zcustomer (time, org. unit, version, etc.) to limit the dataset from the cube, but also a restriction on one of the zcustomer attributes, with an index for that maybe, and performance should improve.
    br
    Jacob

  • GT70 0NC GTX 670MX driver crashes and BAD performance

    I purchased a GT70 0NC-494US and at first I updated all drivers and software on Win 8 x64. I tried running multiple games off of Steam and none of them ran smoothly, some of them not at all (Borderlands 2, Arma II, Fallout 3...). I installed an SSD and a fresh version of Win 7 64 Ultimate, used only drivers from MSI, and still had crashes and couldn't even run a benchmark.
    The only solution I have found to run a select few games is uninstalling ALL Nvidia software and drivers and using the default Windows VGA drivers for the 670MX; however, this only works for a few games, and some others won't start because they don't find a capable GPU.
    I have read MANY forums; most of them go through the typical advice: use the Nvidia control panel to change the power settings and global settings, and roll back drivers to the originals. NONE of this made any difference.
    I have read that there is a firmware beta EC flash? But even on the MSI tech forum the link didn't go anywhere. I know this is a common problem, especially with Steam games. Yes, the power light turns orange, and I turn on turbo mode. I also turned on the indicator showing the Nvidia GPU is being used, but games will not get past the title screen if they start at all, and the Nvidia driver crashes.
    Help?

    Quote from: t.s.girdwood;111417
    Currently I am trying the new 340.52; before this I had reset to the FACTORY image with the original OEM drivers, 306.14 (I believe). For the Intel, again I started with the OEM HD 4000 drivers; currently I have the May 2014 drivers, 10.18.10.3621. Beforehand, no, there was no error, just a frozen game I had to kill with Task Manager; now I am getting the error that the VRAM is full. Using GPU-Z I see that the dedicated VRAM isn't even being used. Ideas?
    For the BIOS and EC firmware updates, you can find them under the MSI download page.
    Win8 BIOS:http://www.msi.com/support/nb/GT70_0NC.html#down-bios&Win8 64
    Win7 BIOS:http://www.msi.com/support/nb/GT70_0NC.html#down-bios&Win7 64
    update guide:http://www.msi.com/files/pdf/Flash_BIOS_by_UEFI_BIOS_Setup_Utility_en.pdf
    Win8 EC:http://www.msi.com/support/nb/GT70_0NC.html#down-firmware&Win8 64
    Win7 EC:http://www.msi.com/support/nb/GT70_0NC.html#down-firmware&Win7 64
    update guide:http://www.msi.com/files/pdf/Win8_EC_Update_Step_by_Step_Guide.pdf
    As for the game settings GeForce Experience provides, I wouldn't call them optimized settings; they are only for your reference.
    Since the tested game environment and system configurations might be different, I'd go with my own settings in order to find the best gaming experience.
    Like BlueAlexFPS said, the GTX670M has similar performance to the GTX765M; it would be ridiculous for it to have the same performance as a GTX780M. If what you said were true, then why are they putting the 780M out on the market?
    I'd keep the graphics driver from the MSI website, since a generic Windows driver gives you only basic-level graphics performance, not to mention that without Optimus you wouldn't know which GPU is running.
    *Make sure you have both the battery and the AC adapter plugged in while running games.

  • SB Live! 5.1 Digital causing problems and bad performance, maybe wrong driver

    Hi everyone. My soundcard, as printed on the bill, is a Creative (B) SB Live! Player 5.1; printed on the card: Creative Labs SoundBlaster Live! 5.1 Digital with EMU10K1-JFF chip and model number SB0220, which I have owned since 2003. My system is WinXP Media Center Edition, MSI KT4 Ultra mainboard, an Athlon XP 2000+, 1 GB DDR RAM and a Radeon 9800 Pro, with USB keyboard & mouse. I've searched this forum and the threads pinned at the top of the thread list, but it didn't help me out. I am concerned about the recent performance of my PC, especially when it comes to sounds in games. In Counter-Strike 1.5 my PC hangs for about half a second or less if I use those "radio commands" like Go-Go-Go or Need-Backup, and sometimes when other players use them. Sometimes it does the stupid things described below.
    In "Mercedes-Benz World Racing" I noticed that if I hit a sign or something else that won't move, it plays a loud sound, as everyone would expect when hitting those things. From then on I can't control my car any more; well, in fact I am able to, but the controls I give with my keyboard are, I would say, "buffered" and "doubled": hit the sign, then immediately or shortly afterwards press back to reverse, and the car drives about 100-200 meters back; other controls I give are added after that, each with a delay of about 2-3 seconds, and can't be stopped. I turned down the volume of those sounds and it worked for a while, but I am not allowed to increase the volume of my PC, or it will act like that again. It results in minimal to zero overall sound in this game. My Windows install is about 4 months old and I can't find drivers for my soundcard. I've installed the drivers from the Creative CD that came with my SoundBlaster. I have downloaded a few driver packages from the Creative homepage in the past, but it is always try-and-cry to get the proper drivers installed, and I can't remember which ones I had installed years before. My current driver file is c:\windows\system32\drivers\ctaud2k.sys (5.2.0.0252-.3.020, 817,92 KB (837.548 bytes), 27.09.2007). I am not sure which driver I have installed, but I don't want to do any more testing, nor search the Creative website like I did several times before; sorry, but it makes me really, really mad and angry. One driver doesn't provide the real bass and another lets my CTHelper use 99% CPU; all others are not compatible with my card. These problems first occurred about a year ago. At the moment there are no IRQ conflicts. Please tell me what's wrong here or help me find the right driver.
    /Edit: I have to add that I am not using the digital port, because I do not own digital speakers; it is just the X-230 2.1 system from Logitech.

    I have been having the same trouble since I upgraded to XP about three years ago.
    Now I have a new system, and like you said, I want to dedicate this older machine to being a media PC, so I started looking into things again.
    I wanted to update drivers and see that I missed an update in July of '03, or I could use a new patch, but my drivers are from March of '03 and I can't find the other update.
    Come on SB, where is the support we pay a premium to get?

  • Bad performance with Quadro 4800 and Premiere Pro

    Could someone please help me understand if something's wrong with my settings or if my 4800 is defective?
    I run OS X Lion 10.7.2 and have the Quadro FX 4800 installed in a:
    Mac Pro 4,1
    2 x 2.93GHz quad-core Intel Xeon
    12GB of RAM
    with two additional GeForce GT 120s installed too.
    The latest drivers for the 4800:
    GPU driver version: 7.12.9 270.05.10f03
    CUDA driver version: 4.0.50
    I run Premiere 5.5.2 and have tested the performance with two different settings:
    Mercury Playback Engine GPU Acceleration
    Mercury Playback Engine Software Only
    I did this test because I didn't feel my 4800 did the work I've read everywhere it should. Adding simple text titles to AVCHD footage made my playback drop frames.
    Anyhow, I tested my machine with 1920x1080 AVCHD and added video layers until I started to see stutter during playback. First with "Mercury Playback Engine GPU Acceleration": with 14 video layers sized down so you could see them all beside one another, playback started to drop frames. The line at the top of the timeline was still yellow. Shouldn't it turn red if the footage needs rendering?
    I then switched to "Mercury Playback Engine Software Only" and the yellow line turned red. The strange thing is that when I played back the same 14 layers of video, the dropped frames were gone!! Isn't this beyond strange??? Shouldn't everything run more smoothly with "Mercury Playback Engine GPU Acceleration"?
    Has it got anything to do with the GeForce GT 120s installed? Should I get rid of those? My two 24-inch Apple displays are both connected to the 4800.
    PLEASE help me or redirect me to some good forums!

    After testing Final Cut (which I love) and Premiere back and forth with the exact same media, I notice that editing ProRes in Final Cut is the best when it comes to just editing the film. Sure, Premiere takes native MTS files, but what good is that when it doesn't run fluidly? Comparing playback and editing with ProRes in Final Cut and Premiere makes me realize that on my computer (don't ask me why) a rendered sequence with ProRes material in Premiere is far more jerky than a sequence with ProRes that "does not need render" in Final Cut. Shouldn't they be the same?? Movements like camera pans that were originally shot really smoothly aren't as smooth as they should be when playing back in Premiere (and yes, I know I have the right sequence settings and all). Final Cut however gives me what I want.
    To me it all comes down to how well the editing app presents what you are currently editing. Adobe has many advantages with Dynamic Link and so on, but playback is such an important part of editing!! Has anyone got the same problem of "not as smooth playback as you would really want" even though the sequence is rendered and displaying a green line??
    Thanks for the links, lasvideo, but I have practically read every article there is about CUDA, Premiere and the Mercury Playback Engine already.
    Does anyone know anything about Adobe CS6 release dates?
    On 12 Dec 2011 at 22:03, lasvideo wrote:
    Re: bad performance quadro 4800 and premiere pro
    created by lasvideo in Premiere Pro CS5 & CS5.5
    Some answers for you right from the horse's mouth:
    http://blogs.adobe.com/premiereprotraining/2011/02/cuda-mercury-playback-engine-and-adobe-premiere-pro.html
    http://forums.adobe.com/message/3804386#3804386
    http://forums.adobe.com/community/premiere/faq_list

  • CMP 6.1 Entity bad performance.

    I'm using entity 1.1 EJBs on WL 6.1 and facing very bad performance:
    around 150 ms for an insert (I have 20 columns).
    When accessing an order interface to read 2 fields in a session bean method: around 90 ms.
    I'm very disappointed and confused. What should I look at
    to increase the performance? Any important tuning or parameters? Should I use EJB
    2.0 to get significantly better performance?
    Thanks for any advice, because we are thinking of switching the whole application to stored
    procedures. A solution without entity beans and fewer stateless session beans.
    My config:
    WL: 6.1 on Sun SPARC
    DBMS: Sybase
    Entity: WebLogic 6.0.0 EJB 1.1 RDBMS (weblogic-rdbms11-persistence-600.dtd)
    Thanks

    Historically it's hard to get good performance & scalability out of Sybase
    without using stored procs. Dynamic SQL on Sybase just doesn't do as
    well as procs. Oracle, on the other hand, can get very close to stored-proc
    speed out of well-written dynamic SQL.
    As far as WebLogic goes, my experience is that the focus of their testing for DB-related
    stuff is Oracle, then DB2, then MSSQLServer. Sybase is usually last
    on the list.
    As for the 6.1 CMP, I haven't used it much, but because of these other
    things I would be cautious about using it with Sybase.
    Joel
    "Antoine Bas" <[email protected],> wrote in message
    news:3cc7cdcf$[email protected]..
    >
    I'am using entity 1.1 EJB on WL 6.1 and facing very bad performances:
    around 150ms for an insert (i have 20 columns).
    When accessing an order interface to read 2 fields in a session beanmethod: around
    90 ms.
    I'am very disapointed and confused. What should I look up for
    to increase the performance ? Any important tuning or parameters ? ShouldI use EJB
    2.0 to have significant perf ?
    Thanks for any advice because we are thinking to switch all theapplication on stored
    procedures. A solution without Entity and fewer stateless session beans.
    My config:
    WL: 6.1 on Sun sparc
    SGBD: Sybase
    Entity: WebLogic 6.0.0 EJB 1.1 RDBMS(weblogic-rdbms11-persistence-600.dtd)
    >
    Thanks

  • Bad performance when opening a BI Publisher report in Excel

    We use BI Publisher (XML Publisher) to create a customized report. For a small report, users like it very much. But for a bigger report, users complain about the performance when they open the file.
    I know it is not a native Excel file, which may cause the bad performance. So I asked my user to save it to a new file in native Excel format. The new file is still worse than a normal Excel file when we open it.
    I did a test. When we save a BI Publisher report to Excel format, the size shrinks to 4 MB. But if we "Copy All" and "Paste Special" values only into a new Excel file, the size is only 1 MB.
    Is there any way to improve this? Users are complaining every day. Thanks!
    I did another test today.
    I created a test report.
    Test 1: The original file from BIP in EBS is 10 MB. We save it on my local disk; when we open the file, it takes 43 sec.
    Test 2: We save the file in native Excel format; the file size is 2.28 MB and it takes 7 sec to open.
    Test 3: We copy all cells and "Paste Special" into a new Excel file with values only. The file size is 1.66 MB and it takes only 1 sec to open.

    EBS or standalone BIP?
    If EBS, see this thread for suggestions on performance tuning and hints and tips:
    EBS BIP Performance Tuning - Definitive Guide?
    Note also that I did end up rewriting my report as PL/SQL producing a CSV file, and have done so with several large reports in BIP on EBS.
    Cheers,
    Dave
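    For what it's worth, the PL/SQL-to-CSV route Dave mentions can be as simple as spooling from SQL*Plus inside a small shell script. A minimal sketch; the connect string, view and column names below are placeholders, not from the original report:
    #!/bin/ksh
    # Sketch: spool a large report straight to CSV (all names are examples)
    print "
      set pagesize 0 linesize 4000 feedback off heading off trimspool on
      spool big_report.csv
      select order_id ||','|| customer_name ||','|| amount from my_report_view order by order_id;
      spool off
      exit" | sqlplus -s scott/tiger@EBSDB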

  • The Script root.sh problem - ora.asm and ASM and Clusterware Stack failed

    Folks,
    Hello. I am installing Oracle 11gR2 RAC using 2 VMs (rac1 and rac2), whose OS is Oracle Linux 5.6, in VMPlayer, according to the website http://appsdbaworkshop.blogspot.com/2011/10/11gr2-rac-on-linux-56-using-vmware.html
    I am installing Grid Infrastructure. On step 9 of 10 I execute the script /u01/app/grid/root.sh on the 2 VMs rac1 and rac2.
    After running root.sh on rac1 successfully, I run root.sh on rac2 and get an error as below:
    [root@rac2 grid]# ./root.sh
    Running Oracle 11g root.sh script...
    The following environment variables are set as:
    ORACLE_OWNER= ora11g
    ORACLE_HOME= /u01/app/grid
    Enter the full pathname of the local bin directory: [usr/local/bin]: /usr/local/bin
    Copying dbhome to /usr/local/bin ...
    Copying oraenv to /usr/local/bin ...
    Copying coraenv to /usr/local/bin ...
    Creating /etc/oratab file...
    Entries will be added to the /etc/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root.sh script.
    Now product-specific root actions will be performed.
    2012-03-05 16:32:52: Parsing the host name
    2012-03-05 16:32:52: Checking for super user privileges
    2012-03-05 16:32:52: User has super user privileges
    Using configuration parameter file: /u01/app/grid/crs/install/crsconfig_params
    Creating trace directory
    LOCAL ADD MODE
    Creating OCR keys for user 'root', privgrp 'root'..
    Operation successful.
    Adding daemon to inittab
    CRS-4123: Oracle High Availability Services has been started.
    ohasd is starting
    CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
    An active cluster was found during exclusive startup, restarting to join the cluster
    CRS-2672: Attempting to start 'ora.mdnsd' on 'rac2'
    CRS-2676: Start of 'ora.mdnsd' on 'rac2' succeeded
    CRS-2672: Attempting to start 'ora.gipcd' on 'rac2'
    CRS-2676: Start of 'ora.gipcd' on 'rac2' succeeded
    CRS-2672: Attempting to start 'ora.gpnpd' on 'rac2'
    CRS-2676: Start of 'ora.gpnpd' on 'rac2' succeeded
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
    CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
    CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
    CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
    CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
    CRS-2672: Attempting to start 'ora.ctssd' on 'rac2'
    Start action for octssd aborted
    CRS-2676: Start of 'ora.ctssd' on 'rac2' succeeded
    CRS-2672: Attempting to start 'ora.drivers.acfs' on 'rac2'
    CRS-2672: Attempting to start 'ora.asm' on 'rac2'
    CRS-2676: Start of 'ora.drivers.acfs' on 'rac2' succeeded
    CRS-2676: Start of 'ora.asm' on 'rac2' succeeded
    CRS-2664: Resource 'ora.ctssd' is already running on 'rac2'
    CRS-4000: Command Start failed, or completed with errors.
    Command return code of 1 (256) from command: /u01/app/grid/bin/crsctl start resource ora.asm -init
    Start of resource "ora.asm -init" failed
    Failed to start ASM
    Failed to start Oracle Clusterware stack
    [root@rac2 grid]#
    As we can see at the end of the output above:
    1) Start of resource "ora.asm -init" failed
    2) Failed to start ASM
    3) Failed to start Oracle Clusterware stack
    The runInstaller is on the first VM, rac1. My question is:
    Does anyone understand how to solve this root.sh problem on rac2 (the 3 failures above: ora.asm, ASM and the Clusterware stack)?
    Thanks.

    Please check that no firewall exists on the private network.
    Try this thread:
    root.sh fails on second node
    MOS notes:
    11gR2 Grid: root.sh Fails to Start the Clusterware on the Second Node Due to Firewall on Private Network [ID 981357.1]
    Grid Infrastructure 11.2.0.2 Installation or Upgrade may fail due to Multicasting Requirement [ID 1212703.1] (most probably this issue)
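    On Oracle Linux 5.6 a quick way to rule the firewall out is to disable iptables on both nodes for the duration of the install (run as root; this assumes the stock iptables service is what is filtering the interconnect):
    service iptables status
    service iptables stop
    chkconfig iptables off
    Re-enable it afterwards with rules that leave the private network open.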

  • HR infotype log in PCL4 and overall performance

    Hi there,
    There have been a few threads about PCL4 performance with regard to reading, but I have a slightly different question:
    We are working on an export program for HR master data and are considering using logging in PCL4 to be able to export only changed fields in the infotypes. To achieve this we need to add quite a lot of extra fields and infotypes to the configuration in the IMG.
    Does anyone have any experience of how additional fields and infotypes affect the runtime and database performance of the system? How optimized is the system with regard to writing to this cluster?
    It will obviously cause more data to be logged, and the database will grow slightly faster, but does it decrease responsiveness of PA30/40 for the end users? Is it possible to archive old data from this cluster? I'm guessing that it won't be a big problem, but any feedback is greatly appreciated.
    Best regards,
    Lars G. Gudbrandsen

    Hi Lars,
    You would probably get a better response in the HCM section as opposed to ABAP.
    Maybe you can use change pointers, and BAdIs rather, to achieve what you want, but I am not 100% sure of the requirement.
    Additional fields and infotypes don't impact the system negatively in my opinion. It wouldn't affect PA30 unless the specific infotype is selected, and then, provided it has been correctly created in PM01, it should be fine, also depending on how many fields you are talking about, of course. PA40 would only be impacted for those transactions in which the infotype is included.
    As for archiving, I am not sure, but once again I think the HCM forum is your best bet.

  • Bad performance updating purchase order (ME22N)

    Hello!
    Recently, we have faced bad performance updating purchase orders using transaction ME22N. The problem has occurred since we implemented change documents for a custom table T. T is used to store additional data for purchase order items, using BAdIs ME_PROCESS_PO_CUST and ME_GUI_PO_CUST.
    I've created a change document C_T for T using transaction SCDO. The update module of the change document is triggered in the method POST of BAdI ME_PROCESS_PO_CUST.
    Checking transaction SM13, I noticed that the update requests of ME22N have status INIT for several minutes before they are processed. I also tried excluding the call of the update module for change document C_T (in method POST) - the performance problem still occurs!
    The problem only occurs with transaction ME22N, thus I assume that the reason is the new change document C_T.
    Thanks for your help!
    Greetings,
    Wolfgang

    I agree with Vikram: we don't have enough information, not even a small hint on the usage of this field, so what answer do you expect? (The quality of an answer depends on the quality of the question.) This analysis must be executed on your system...
    From a technical point of view, BAPI_PO_CHANGE has an EXTENSIONIN table parameter; fill it using structures BAPI_TE_MEPOITEM[X], already containing CI_EKPODB (*) and CI_EKPODBX (**).
    Regards,
    Raymond
    (*) I guess you have used this include
    (**) I guess you forgot this one (same field names but data element always BAPIUPDATE)

  • Bad performance in a query on table BKPF

    Hi forum, I have a real problem with the second query, on the table
    BKPF. Can somebody help me, please?
    *THIS IS THE QUERY UNDER MSEG
      SELECT tmseg~mblnr tmkpf~budat tmseg~belnr tmseg~bukrs tmseg~matnr
             tmseg~ebelp tmseg~dmbtr tmseg~waers tmseg~werks tmseg~lgort
             tmseg~menge tmseg~kostl
      FROM mseg AS tmseg JOIN mkpf AS tmkpf ON tmseg~mblnr = tmkpf~mblnr
      INTO CORRESPONDING FIELDS OF TABLE it_docs
      WHERE
        tmseg~bukrs IN se_bukrs AND
        tmkpf~budat IN se_budat AND
        tmseg~mjahr = d_gjahr AND
        ( tmseg~bwart IN se_bwart AND tmseg~bwart IN ('201','261') ).
      IF sy-dbcnt > 0.
    *I CREATE AWKEY FOR CONSULTING BKPF
        LOOP AT it_docs.
          CONCATENATE it_docs-mblnr d_gjahr INTO it_docs-d_awkey.
          MODIFY it_docs.
        ENDLOOP.
    *THIS IS THE QUERY WITH BAD BAD PERFORMANCE
    *I NEED TO KNOW "BELNR" TO GO TO THE BSEG TABLE
        SELECT belnr awkey
        FROM bkpf
        INTO CORRESPONDING FIELDS OF TABLE it_tmp
        FOR ALL ENTRIES IN it_docs
        WHERE
          bukrs = it_docs-bukrs AND
          awkey = it_docs-d_awkey AND
          gjahr = d_gjahr AND
          bstat = space.
    Thanks

    Hi Josue,
    The bad performance is because you're not specifying the primary keys of the table BKPF in your WHERE condition; BKPF is usually a big table.
    What you really need is to create a new index on the database for table BKPF via the ABAP Dictionary, on fields BUKRS, AWKEY, GJAHR & BSTAT. You'll find the performance of the program will significantly increase after the new index is activated. But I would talk to Basis first to confirm they have no issues with you creating a new index for BKPF on the database system.
    Hope this helps.
    Cheers,
    Sougata.

  • Bad performance in web intelligence reports

    Hi,
    We use BusinessObjects with Web Intelligence documents and Crystal Reports.
    We are experiencing bad performance when we use the reports, especially when we need to change the drill options.
    Can someone tell me if there are any best practices to improve performance? What features should I look at?
    Best Regards
    João Fernandes

    Hi,
    Thank you for your interest. I know that this is an issue with many variables; because of that, I need information about anything that could cause bad performance.
    By bad performance I mean the time that we take running and refreshing report data.
    We have reports with many lines, but the performance is bad even when only a few users are in the system.
    Best Regards
    João Fernandes

  • Help: Bad performance in marketing documents!

    Hello,
    When creating an AR delivery note which has about 10 lines, we have really noticed that the creation of lines becomes slower and slower. This especially happens when tabbing out of the system field "Quantity". In fact, instead of going quickly to the next field, it stays in the Quantity field for about 5 seconds!
    The number of formatted searches in the AR delivery note is only 5, and only one is automatic. The number of user fields is about 5.
    We have heard about bad performance when the number of lines increases in documents that have formatted searches, but it is odd for this to happen with only about 10 lines in the document.
    We are using PL16, and this issue seems to have been solved already in PL10.
    Could you throw some light on this?
    Thanks in advance,

    It is solved now.
    It had to do with the automatic formatted search in 2 header fields.
    If the automatic search is removed, the performance is OK.
    Hope it helps you,

  • Bad performance on system, export/import buffer many swaps

    Hello,
    I have an ECC 6.0 system on AIX with 6 application servers. There seems to be a performance problem on the system; this issue is noticed most clearly when people are trying to save a sales order, for example - this operation takes about 10 minutes.
    Sometimes we get short dumps TSV_TNEW_PAGE_ALLOC_FAILED or MEMORY_NO_MORE_PAGING, but not very often.
    I am not very good at studying performance issues, but from what I could see there are many swaps in the export/import, program and generic key buffers. Also the hit ratio is 88% for the export/import buffer, which I think is pretty low.
    I know that the maximum acceptable value is 10,000 swaps per day, is that right?
    Can you please advise me what needs to be done in order for these swaps to decrease and the hit ratio to increase? And also what else I should do in order to analyse and root-cause the bad performance of the system?
    Many thanks,
    manoliv

    Hi,
    sappfpar determines the minimum and maximum (worst-case) swap space requirements of an R/3 application server. It also checks shared memory requirements and that the em/initial_size_MB and abap/heap_area_total parameters are correctly set, with the following procedure:
    /usr/sap/<SYSTEMNAME>/SYS/exe/run/sappfpar check pf=/usr/sap/<SYSTEMNAME>/SYS/profile/<Profile name>
    At the end of the list, the program reports the minimum swap space, maximum heap space, and worst-case swap space requirements:
    Additional Swap Space Requirements:
    You will probably need to increase the size of the swap space on hosts on which R/3 application servers run.
    As a rule of thumb, swap space should equal
    3 x size of main storage or at least 1 GB, whichever is larger.
    SAP recommends a swap space of 2-3 GB for optimal performance.
    Determining Current Swap Space Availability: memlimits
    You can find out how much swap space is currently available on your host system with R/3's memlimits program.
    Here's how to run memlimits:
    From the UNIX command prompt, run the R/3 memlimits program to check the size of the available swap space on the host system on which an R/3 application server is to run.
    The application server must be stopped, not running.
    /usr/sap/<SYSTEMNAME>/SYS/exe/run/memlimits | more
    The available swap space is reported in the output line "Total available swap space:" at the end of the program output. The program also indicates whether this amount of swap space will be adequate and determines the size of the data segments in the system.
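    If it helps, both checks can be wrapped in one small script; the SID and the instance profile name below are placeholders you must adjust, and the application server must be stopped before the memlimits step:
    #!/bin/ksh
    # Run both swap-space checks described above (names are examples)
    SID=PRD
    PROFILE=/usr/sap/$SID/SYS/profile/${SID}_DVEBMGS00_host1
    # 1. Worst-case swap and shared-memory requirements from the profile
    /usr/sap/$SID/SYS/exe/run/sappfpar check pf=$PROFILE
    # 2. Currently available swap space (application server must be stopped)
    /usr/sap/$SID/SYS/exe/run/memlimits | more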

  • About ASM and SAN...

    Hello Guys,
    I have to implement a 3-node RAC 10gR2 on the CentOS 4 operating system. I have studied many documents about RAC installation and configuration. I have learned how to set up the network requirements with private, public and virtual IPs and all the other stuff. I have learned the installation of Clusterware and the database with cluster-enabled functionality.
    BUT the storage options are still not clear to me. We have purchased a SAN and we are planning to implement ASM for the storage. Now I want to know:
    How many disks and disk partitions will a 3-node structure require on the SAN?
    How will ASM access the SAN; or, put another way, how will the OS access this shared storage?
    The voting disk and OCR cannot be stored on shared storage and need to be stored on raw devices... what can these raw devices be? How can they be accessed by all nodes?
    These three questions are disturbing me a lot. If they become clear to me, the whole storage concept will be clear and I can implement RAC.
    Please help me by answering the above 3 questions. I will be very grateful to you.
    Regards,
    Imran

    How many disks and disk partitions will a 3-node structure require on the SAN?
    There's no real answer to that! With Oracle generally, RAC or no RAC, the answer to how many disks you should have is "as many as possible". Partitioning is really up to you, too, depending on what you find easiest to manage. If you have a single SAN array, for example, comprised of 15 disks that you choose to partition into three or four logical volumes so that you can call one 'data', one 'redo', one 'OS', and one 'other' - that's entirely up to you, since Oracle couldn't care less how you partition, what you call them or how many of them there are. Moreover, everything on every partition is being striped across those 15 disks anyway, so who cares?
    I think, however, you might be thinking of the RAC-specific issues of the voting disk and the Oracle Cluster Registry. If you were using a cluster file system, they could be just two files on the file system, about 120 MB in size between them. Since you are going to use ASM, and these two elements can't be stored inside an ASM array, you'll have to create two raw partitions for this purpose. The rest you then chop up for ASM's use.
    It is NOT true, incidentally, that the "voting disk and OCR cannot be stored on shared storage". By definition, the voting disk and OCR must be on shared storage! Indeed, raw partitions, ASM arrays and cluster file systems are ALL shared storage technologies. It just so happens that those two files can't use ASM... but raw or CFS are fine.
    A raw partition is not, of course, intrinsically 'shared storage'... but if it's a raw partition on your SAN, to which all three of your nodes are physically attached, then it is shareable. It's shareable simply because three nodes can see it. And because there's no file system there with exclusive and blocking file locks, what one node does to a raw partition doesn't stop another node accessing it simultaneously (which is the definition of shared storage, of course).
    How will ASM access the SAN? By you partitioning the SAN into a number of logical volumes, each of which will be kept raw, and you then declaring each such volume a candidate disk. You'll wrap all candidate disks up into an ASM disk group... and then Oracle will write to that disk group and hence through to the underlying logical volumes. Which comes back to the original question: how many logical volumes should you create out of, say, a 15-disk LUN on a SAN?
    It depends, as I said, on a lot of things, but for example RAID 5 runs best when there are either 5 or 9 disks in the array (or did when I last looked at an EMC CLARiiON SAN!). So if your underlying RAID technology was going to be RAID 5, you might well create three 5-disk logical volumes on the one LUN. To let ASM use all 15 disks, you'd then create a 3-disk diskgroup (where 1 ASM disk = 1 SAN logical volume). On the other hand, you might want to keep some disks back for future storage, in which case a 1-disk ASM diskgroup representing a single 9-disk logical volume might be the way to go, with the remaining 6 disks on the LUN available for future expansion.
    It's a complicated topic, unfortunately. You're dealing with physical storage which is already abstracted into logical volumes and then abstracted even further by wrapping those logical volumes up into ASM disk groups. You balance performance, expandability, management convenience, your SAN vendor's optimisation tricks and so on... and hopefully come out with something that works for you!
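    To make that last step concrete, here is a minimal sketch of wrapping raw volumes into a disk group; the device paths and the diskgroup name are examples only, not a recommendation. Run it against the ASM instance:
    # Wrap three raw SAN volumes into one ASM disk group (10gR2 syntax)
    export ORACLE_SID=+ASM
    print "
      create diskgroup DATA external redundancy
        disk '/dev/raw/raw3', '/dev/raw/raw4', '/dev/raw/raw5';
      exit" | sqlplus -s '/ as sysdba'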
