Performance problems after EHP4

Hello,
We have a small problem.
After installing EHP4 for ERP and EHP1 for NetWeaver 7.0 on an AIX/Oracle installation, an increase in dialog response time is being reported.
We upgraded in September 2009, and since then the average dialog response time has gone from about 800 ms to 1200 ms.
When we analyze transactions, no single transaction stands out as the cause; it looks more like an overall trend that the system in general has become slower.
When we look at, for example, EarlyWatch reports, it is quite obvious that the "problems" started just after implementing the EHP.
We are attacking this on several fronts, and I am looking at the SAP Basis part.
I was not part of the upgrade team, so what I am looking into is whether some notes have been forgotten or something else was missed. I am especially interested in notes and steps that should have been performed on Oracle in connection with the EHP upgrade.
Has anybody here experienced similar problems after applying EHPs?
Best Regards,
Kenneth

Hi Kenneth,
In which component of the total response time do you see the biggest increase? If most of the increase is in DB time, it may be caused by Oracle bugs; check all your Oracle patches (mainly those related to performance).
Also check the database version and the list of patches you currently have installed.
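To get a quick overview of what is actually installed on the Oracle side, a query along these lines can be run as a DBA user (a minimal sketch; DBA_REGISTRY_HISTORY exists as of Oracle 10g and its columns vary slightly by release):

    -- Database release and component versions
    SELECT * FROM v$version;

    -- Patches / patch sets recorded in the registry history (10g and later)
    SELECT action_time, action, namespace, version, comments
    FROM   dba_registry_history
    ORDER  BY action_time;

On the database server itself, opatch lsinventory run from the Oracle home lists the one-off patches as well.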
Regards,
Nick Loy

Similar Messages

  • Performance problems after installing XP Service Pack 3

    Hello,
    After upgrading my client XP-machine to Service Pack 3, I have serious performance issues working with ApEx. Firstly, establishing the connection to the ApEx Workspace and then logging in both are very slow. Navigating through the workspace works fine, however when I then open any page for editing and/or try to run it, again it is very slow. As we work with PL/sql Embedded gateway I have changed the parameter SHARED_SERVERS to 10, but this did not make any difference. Remotely connecting to this server in other ways such as ping, connecting to xmlpserver, mapping network drives and connecting to the database all work fine.
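    For reference, the shared server configuration mentioned above can be checked (and, if appropriate, changed) with a couple of statements run as a privileged user; this is only a sketch of the check, not a recommendation for a specific value:
        SELECT name, value FROM v$parameter WHERE name = 'shared_servers';
        SELECT * FROM v$shared_server;
        -- ALTER SYSTEM SET shared_servers = 10 SCOPE = BOTH;  -- the change described above, shown for reference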
    My colleagues also working with Service Pack 3 have no performance issues. However, when I performed a rollback of XP Service Pack 3, the performance issue was gone!
    Does anyone know of any issues similar to this problem?
    Regards,
    Ben van Dort

    This issue is probably related to an update of Internet Explorer by Service Pack 3. Working with Firefox instead of IE gives no performance problems whatsoever.
    Version of IE after Service Pack 3: 6.0.2900.5512.xpsp_sp3_gdr.080814-1236CO
    Ben

  • Serious performance problems after updating from 10.5.6 to 10.5.7

    I've got a Macbook Pro C2D 15" (late 2006). After updating the system from 10.5.6 to 10.5.7 and quicktime to 7.6.2, I experience serious performance problems which I have never seen before:
    -From time to time, kernel_task has peaks in Activity Monitor (it consumes a lot of CPU). At that moment, the mouse hangs for a short while and audio playback is distorted/has dropouts.
    -Recording audio consumes a lot of CPU (independent of the audio application).
    -Video playback consumes a lot of CPU, regardless of whether I use VLC or the QuickTime player.
    Any idea what could be done? Thanks in advance.

    After a few days of using CoolBook I report that my 1st generation MacBook Air is now reliable and never overheating, regardless of environment conditions.
    I must point out that I do believe that it really is something new in 10.5.7 that is causing this problem as my environmental conditions have not changed and even Summer in Dublin still sees temperatures no more than 72F/22C. I am now in Madrid, where the environment is quite different, and I am still having no problem with kernel_task spinning.
    I plan on reporting my problem to Apple anyway and being aggressive about the issue as MacLovinRanga did in this thread (http://discussions.apple.com/thread.jspa?messageID=9647353#9647353). Perhaps many/all of those of us having this problem will obtain new MacBook Airs.
    Joe Kiniry

  • 3D performance problems after upgrading memory

    I recently purchased an additional 2GB of memory to try and extend the life of my aging computer.  I installed the memory yesterday and Windows seems to recognize it (reporting now 3.3GB) but when I dropped into WoW (pretty much the only game I have) the 3D performance was down from the usual 60FPS @ 1600x1080 to a bleak 20 (at best) and the CPU utilization went to about 80% on both cores (with ~20% kernel usages).  Basically WoW was being software 3D rendered!!!
    I went through the usual reinstall drivers, reboot, etc... and couldn't find a fix.  I powered down, pulled out 2 of the memory sticks, booted up, and dropped into WoW - it ran at the full 60FPS and CPU utilization was very low (i.e. back to GPU Hardware 3D rendering).  I powered down again, swapped the 2 sticks for the other 2 sticks, booted up, and dropped into WoW - again it ran 100% fine.  So I powered down, put all four sticks in, booted back up, and when I dropped into WoW it was running in the software 3D rendering mode (20FPS at best and High CPU/Kernel usage).
    I've tried the /PAE option in boot.ini - no joy.  I've tried /MAXMEM = to 3300, 3072, 3000, and even 2048 - no joy in any of those cases.  Has anyone seen anything like this before?  Or have suggestions to fix (other than going to Win7-64)?
    All info in signature is up to date.
    Thanks in advance for any help!

    Quote
    Well his last post was a little over 6 hours ago so he was up pretty late.
    Looks like nothing one does in here goes completely unnoticed.   
    Anyway, I am done sleeping now.
    Quote
    his 2 Pfennig's worth.  I know, I know it's Euro's now.
    Yeah, and what used to be "Pfennige" is now also called "Cents" and here are mine:
    Quote
    I've tried the /PAE option in boot.ini - no joy.  I've tried /MAXMEM = to 3300, 3072, 3000, and even 2048 - no joy in any of those cases.  Has anyone seen anything like this before?  Or have suggestions to fix (other than going to Win7-64)?
    PAE, or Physical Address Extension, will not do anything, as Microsoft has castrated this feature to such an extent that it no longer has anything to do with memory addressing when it comes to Windows XP:
    http://en.wikipedia.org/wiki/Physical_Address_Extension#Microsoft_Windows
    Quote
    Windows XP Service Pack 2 and later, by default, on processors with the no-execute (NX) or execute-disable (XD) feature, runs in PAE mode in order to allow NX. The NX (or XD) bit resides in bit 63 of the page table entry and, without PAE, page table entries only have 32 bits; therefore PAE mode is required if the NX feature is to be exploited. However, desktop versions of Windows (Windows XP, Windows Vista) limit physical address space to 4 GiB for driver compatibility reasons.
    The feature is already automatically enabled.  But since its original function (address extension) no longer exists in the desktop versions of Windows XP, it won't really do anything you would ever notice.
    About the /MAXMEM switch: it simply limits the amount of physical memory Windows will use; the switch that extends the per-process limit is /3GB.  In 32-bit Windows, every process is limited to 2GB of user address space, and /3GB allows certain applications (or their run-time processes) to occupy up to 3GB.  However, the culprit here is that only applications that have been programmed (or compiled) accordingly can take advantage of this; a special flag ("large address aware") has to be set.  Otherwise these applications remain restricted to 2GB even though the switch has been set.  Most 32-bit applications come without that flag, which is why setting these switches usually changes nothing.
    In any case, it is unlikely that /PAE (even if it were not castrated), /3GB or /MAXMEM would have an impact on your actual issue, because I doubt it has much to do with either memory addressing or the memory limit of an individual Windows process.
    Quote
    the 3D performance was down from the usual 60FPS @ 1600x1080 to a bleak 20 (at best) and the CPU utilization went to about 80% on both cores (with ~20% kernel usages).
    There are a couple of hardware based explanations to consider here.  Let's start with the most obvious one:
    1. 975X Memory Controller
    The main reason that the system chooses to automatically set the Memory Speed to DDR2-667 even though DDR2-800 modules are installed, is that by design the memory controller of the Intel 975X Chipset does not natively support DDR2-800 modules, but
    >>Intel® 975X Express Chipset Datasheet - For the Intel® 82975X Memory Controller Hub (MCH)<< [Page 20]
    This means, that from the point of view of the memory controller, operating the memory @DDR2-800 actually means overclocking it (with all potential side effects).
    Basically, if your initial problem disappears as soon as you reduce the memory speed to DDR2-667, the design limitation of the memory controller may explain your findings.
    2. Different memory modules
    If I read your signature correctly, you are actually mixing two different kits/models of RAM (CM2X1024-6400C4DHX and CM2X1024-6400C4).  This can work of course, but in practice it does not necessarily do so under all circumstances.
    This list (-> http://ramlist.i4memory.com/ddr2/) indicates that there are at least 14 different module types/revisions of Corsair DDR2-800 / CL4 modules that use a wide range of different memory chips (Elpida, ProMOS, Micron, Infineon, Powerchip, Qimonda, Samsung etc.).  Even though the superficial specifications for these chips appear to be pretty similar (DDR2-800 / CL5 / CL4), this does not necessarily mean that the modules will respond to the same operating conditions in the same way. There may be small differences in sub-timings/sub-latencies and/or the general responsiveness of the ICs, which may affect the operating behaviour of the memory controller (which, by the way, also includes the PCI-Express interface your video card is hooked up to).
    And again:  If running the system @DDR2-667 solves your issue, the possible explanation is that higher clock speeds may amplify (or trigger) potential performance problems that could have to do with the use of non-identical memory modules.
    Furthermore: It is also possible that the memory controller's design limitations and the potential compatibility problems that may be attributed to mixing different modules types may reinforce each other in terms of reduced system performance.
    3. The BIOS may have an impact as well
    There has been a known issue with the use of certain video cards in conjunction with 4GB of system memory on this mainboard:
    https://forum-en.msi.com/index.php?topic=107301.0
    https://forum-en.msi.com/index.php?topic=105955.0
    https://forum-en.msi.com/index.php?topic=99818.msg798951#msg798951
    What may have come out as graphics/display corruption in earlier BIOS Releases may come out as reduced system performance when using the latest BIOS Release.  Of course, this is hard to prove, but I thought I'd mention it anyway.  May I ask what amount of video memory your card has onboard?
    Fortunately, there is a BIOS version that you could consider trying in this matter.  It is not only the last BIOS release that can be used to avoid the corruption issue, it is (in my opinion) the best BIOS version ever released for the 975X Platinum PUE mainboard:  W7246IMS.716 [v7.1b6].  I have been using this mainboard for almost two years, have tested almost every BIOS release that ever came out, and I always went back to v7.1b6 as "ground zero".
    It will properly support your E6600 (so you don't have to worry about that) and as far as I remember, there are no known compatibility issues with other components.  So maybe, you want to give this a shot.
    The bottom line is that in a worst case scenario, the problem you describe could be caused by all of the above things at the same time.  You cannot really do anything about the 975X Chipset Specifications and the only way to rule out explanation #2 is to test modules that are actually identical (same model number, revision and memory chips).  A test of the 7.1b6 BIOS Release is something you should consider.  It may be the only way to test the BIOS Hypothesis.
    This post turned out to be longer than I intended, but then again, I am well-rested after a good sleep and the wake-up coffee is kicking in pretty good.

  • Performance problems after Update from 7.6.00.37 to 7.6.03.15

    Hi,
    after the update from 7.6.00.37 to 7.6.03.15 in our live system we noticed a lot of performance problems. We had tested the new version before on our test system, but did not notice effects like this there.
    We have 2 identical systems (Opteron 64-bit, openSUSE 10.2) with log shipping between them. We updated the standby system first, switched from online to standby (with a copy of the cold log) and started the new server as the online system. After that we ran a complete backup (runtime: 1 hour) to start a new backup history and to activate autolog.
    With the update we changed USE_OPEN_DIRECT to YES, but the performance of the system was very slow afterwards. After the backup it remained at a high load average (> 10, the previous version had about 2-4), with nearly 100% CPU usage for the db kernel process.
    The next day we switched USE_OPEN_DIRECT back to NO. The system at first ran better, but it periodically rises to a load average of 6 and slows down various applications (reportedly about 10 times slower). Here we also noticed high usage (now 200-300%) of the db kernel process.
    Our questions are:
    1. Has something basically changed from 7.6.00.37 to 7.6.03.15, so that our various applications (JDBC, ODBC and Perl/SQLDBC partially on old linux systems with drivers from 7.5.00.23) don't reach same performance as before?
    2. Are there any other (new) parameters which can help? Maybe reducing MAXCPU from 4 to 3 to reserve capacity for the system (there is only one MaxDB instance running)?
    3. Is there a possibility to switch back to 7.6.00.37 (only for the worst case)?
    I have made some first steps with x_cons, but don't see any anomalies at first glance.
    Regards,
    Thomas

    Thomas Schulz wrote:
    > > > Next day we switched USE_OPEN_DIRECT back to NO. The system first runs better, but
    > >
    > > What is it about this parameter that lets you think it may be the cause for your problems?
    > After changing it back to NO, the system runs better (lower load average) than with YES (but much slower than with old version!)
    Hmm... that is really odd. When using USE_OPEN_DIRECT there is actually less work to do for the operating system.
    > > > Our questions are
    > > >
    > > > 1. Has something basically changed from 7.6.00.37 to 7.6.03.15, so that our various applications (JDBC, ODBC and Perl/SQLDBC partially on old linux systems with drivers from 7.5.00.23) don't reach same performance as before?
    > >
    > > Yes - of course. Changes are what Patches are all about!

    > Are there any known problems with updating from 7.6.00.37 to 7.6.03.15?
    Well, of course there are bugs that have been found in between the releases of both versions, but I am not aware of anything like a general performance killer.
    We will have to check this in detail here.
    > > > I have made some first steps with x_cons, but don't see any anomalies on the first look.
    > >
    > > Ok, looking into what the system does when it uses CPU is a first good step.
    > > But what would be "anomalies" to you?
    >
    > Good question! I don't really know.
    Well - then I guess 'looking at the system' won't bring you far...
    > > Do you use DBAnalyzer? If not -> activate it!
    > > Does it give you any warnings?
    > > What about TIME_MEASUREMENT? Is it activated on the system? If not -> activate it!
    >
    > OK, that will be our next steps.
    Great - let's see the warnings you get.
    Let us also see the DB Parameters you set.
    > > What parameters have changed due  to the patch installations (check the parameter history file)?
    > >
    > > What queries take longer now? What is the execution plan of them?
    >
    > It seems to happen for all selects on tables with a lot of rows (>10,000, partially without indexes because they are generated automatically). With the old version we had no problems with missing indexes or with general performance. Unfortunately it is very difficult to extract the SQL statements out of the JBoss applications. But even simple queries (without any join) run more slowly when the load average rises above 4-5.
    Hmm... the question here is still, if the execution plans are good enough to meet your expectations.
    E.g. for tables that you access via the primary key it actually doesn't matter how many rows a table has (not for MaxDB at least).
    > > BTW: how exactly do the tests look like that you've done on the testsystem?
    > Usage over 6 weeks with our JDBC development environment (JBoss), backup and restore with various combinations of USE_OPEN_DIRECT and USE_OPEN_DIRECT_FOR_BACKUP.
    Sounds like the I/O is your most suspect aspect for overall system performance...
    > > Was the testsystem a 1:1 copy of the productive machine before the upgrade test?
    > No - smaller hardware (32 bit), only 20% of data of the live system, few db users and applications.
    >
    > > How did you test the system performance with multiple parallel users?
    > Only while permanent development with the 2-3 developers and some parallel tests of backup/restore. Unfortunately no tests with many users/applications.
    Ok - so this is next to no testing at all when it comes to performance.
    > An UPDATE STATISTICS over all db users seems to change nothing. At the moment the system remains markedly slow and we are searching for reasons and solutions. Another attempt will be the change of MAXCPU from 4 to 3.
    Why do you want to do that? Have you observed any threads that don't get a CPU because all 4 cores are used by the MaxDB kernel?
    regards,
    Lars

  • Performance Problem After Upgrade

    Hi Gurus,
    We have successfully upgraded our system from R/3 4.6C to ECC 6.0, with an Oracle 10g database on AIX.
    Now we are facing a performance problem: when we check, tables are taking a huge amount of time to update data.
    Regards,
    Darshan...

    Thanks! AC,
    The performance problem is resolved.
    Now we are facing a problem with the RFC destination SAPXPG_DBDEST_ECCDB:
    Logon     Connection Error
    Error Details     Error when opening an RFC connection
    Error Details     ERROR: SAP gateway connection failed. Is SAP gateway started?
    Error Details     LOCATION: SAP-Server eccci_LRP_00 on host eccci (wp 14)
    Error Details     COMPONENT: CPIC
    Error Details     COUNTER: 18
    Error Details     MODULE:
    Error Details     LINE:
    Error Details     RETURN CODE: 236
    Error Details     SUBRC: 0
    Error Details     RELEASE: 700
    Error Details     TIME: Tue Mar 17 11:57:01 2009
    Error Details     VERSION:
    Regards,
    Darshan..

  • Performance Problem After upgrade to oracle 10g

    Hi
    I have upgraded one of my data warehouse databases from Oracle 9.2.0.8 to Oracle 10.2.0.4, running on Solaris 9.
    After the upgrade, the jobs running in the database are taking a very long time.
    The jobs access the views that are used to get the monthly report data from the database.
    What could be the solution, and where should I start the root cause analysis to resolve this performance issue?
    Please let me know if you require any other information.
    The database is currently running in automatic shared memory management mode, i.e. the SGA_MAX and SGA_TARGET parameters are defined.

    There are a lot of differences between 10g and 9i in this regard, among these are:
    - There is a default job that gathers statistics every night, which is not there in 9i. You might have totally different statistics than in 9i because of that job, depending on how, and whether at all, you used to collect statistics in 9i
    - The 10g DBMS_STATS package collects histograms on some columns by default (parameter METHOD_OPT=>'FOR ALL COLUMNS SIZE AUTO' default in 10g whereas 'FOR ALL COLUMNS SIZE 1' in 9i) which can have a significant effect on the execution plans
    - The 10g optimizer has CPU costing enabled by default which can make significant changes to your execution plans due to different costing of table scans and order of predicate evaluation. In addition it uses NOWORKLOAD system statistics if system statistics have not been gathered explicitly
    - 10g checks the min and max values stored for columns in the data dictionary. If your predicates are way off compared to these values then 10g begins to adjust the calculated selectivity of the predicate which can again significantly affect your execution plans
    - 10g introduces the "Cost Based Query Transformation (CBQT)" feature which means that rather than applying heuristic transformation rules transformations are costed and potentially discarded whereas 9i applies transformations unconditionally whenever possible
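    Regarding the METHOD_OPT point above: if you suspect the new histogram default is behind a plan change, one way to test it is to re-gather statistics for a suspect table with the 9i-style setting and compare the plans again. A minimal sketch (schema and table names are placeholders):
        BEGIN
          DBMS_STATS.GATHER_TABLE_STATS(
            ownname    => 'MY_SCHEMA',               -- placeholder schema
            tabname    => 'MY_FACT_TABLE',           -- placeholder table
            method_opt => 'FOR ALL COLUMNS SIZE 1',  -- 9i-like default: no histograms
            cascade    => TRUE);                     -- refresh index statistics as well
        END;
        /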
    Check also the following note and white paper:
    http://optimizermagic.blogspot.com/2008/02/upgrading-from-oracle-database-9i-to.html
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Performance problem after upgrade from Web version to Basic version

    A couple of days ago I 'upgraded' my SQL Azure database from the Web service tier to Basic. Since doing that, the performance of many of my queries has dropped noticeably. I've made no schema changes, and the amount of data in my database has changed very little since the upgrade. Has anyone had this same problem?
    Randy Minder

    Randy,
    See this blog about how to upgrade to the new tiers and some advice on finding the right fit.
    http://azure.microsoft.com/en-us/documentation/articles/sql-database-upgrade-new-service-tiers/
    Basic is not equivalent to Web in the new edition tiers. While you do 'upgrade' to the new editions, gaining access to new features and the tiers themselves, you can now choose an experience with less performance than you previously had in Web/Business.
    Try upgrading your server to a new tier based on the advice in the article above. If you read the article below:
    https://msdn.microsoft.com/en-us/library/azure/dn369873.aspx
    You can see that Standard is actually targeted at many of the production use cases of Web, whereas Basic is targeted at developers who previously used Web to test out their applications. Basic isn't intended for scenarios with multiple concurrent users.
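    For reference, checking the current tier and moving the database can also be done in T-SQL; a minimal sketch (the database name and target service objective are placeholders, run against the server's master database):
        -- Check the current edition / service objective
        SELECT DATABASEPROPERTYEX('MyDatabase', 'Edition')          AS edition,
               DATABASEPROPERTYEX('MyDatabase', 'ServiceObjective') AS service_objective;
        -- Move the database to a higher tier, e.g. Standard S1
        ALTER DATABASE [MyDatabase] MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S1');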
    Hopefully that helped. If you have any other questions, feel free to reply.
    Regards,
    Chris

  • Performance problems after CS5 Trial install

    I installed the CS5 Trial, performed updates to get everything current, and InDesign and Illustrator have slowed to a crawl. For example, click on a text block, shift-click to select a second block and wait 2-3 seconds. Or, single-click an object to select, wait a few seconds and it jumps to another place on the page.
    Also, PDFs opened in Acrobat Professional 8 are being affected. Flipping through pages takes 10-ish seconds per page.
    And last, when I go back to using CS3, I'm having similar problems there, which I didn't have before. Not quite as slow, but still a problem.
    I have restarted many times, and also uninstalled CS5 and reinstalled.
    Has anyone experienced this? Suggestions?

  • Performance problem after deploying bundle files

    Hello,
    I undeployed a km.resource.bundle PAR and made some changes to update some property label values. After deploying it again, KM performance decreased significantly and the system became very slow; we had to remove the PAR and restart the portal to restore normal functioning.
    Could you help me diagnose this?
    some more info:
    1 - The only changes I made were to the *.properties files.
    2 - I had an error when generating the PAR again, which was solved by adding the external JAR bc.crt_api.jar as a reference.
    3 - In SAP note 817876 the example project comes with this JAR in the /lib folder; my project just references it, so my /lib folder is empty.
    regards,
    Rafael

    Hi BP, thanks for the quick reply. I will try restarting it next time and check whether that solves the performance issue.
    I was wondering if it was caused by the bc.crt_api.jar library. Does it have to be in the project structure, or do I only need it as a local reference? My guess was that the application running on the portal was accessing the lib on my machine and thus causing the slowdown.
    kind regards,
    Rafael

  • IPhone 5s Performance Problem after updating to 7.1.1

    Hi everyone,
    I was using iOS 7, which came originally with the iPhone 5s; it was great, fast and adorable.
    Yesterday I updated my iPhone 5s to iOS 7.1.1.
    Unfortunately I have faced some problems: the iPhone 5s is now slower, and apps open more slowly than on iOS 7.
    I have an iPhone 5 with iOS 7.0.4, and it's much faster and smoother than the iPhone 5s on iOS 7.1.1.
    I was really disappointed about that.
    What do you think the problem is?
    Regards

    thanks for your help fred and aszter
    my wife's iphone 5s is having the same problem as mine.
    I also compared them to my daughter's iPad mini that runs 7.0.4 (A5 chip), and that thing runs properly.
    FASTER than both our iphone 5s!!!!
    wait it gets worse.
    I dug up my old 4s and installed 7.1.1. Guess what?
    it opens apps with the same speed as my 5s (as slow)
    it seems that apple has on purpose implemented an artificial lag on all devices so that the new iphone will appear snappier than the older ones.
    not sure about you guys but this is definitely not making me want to go out and stand in line for their new kit and to think that I was actually looking forward to buy an iphone with a bigger screen. not so much any more..
    As you can tell, I own a lot of Apple products. Maybe I will come around - it's not like Samsung is god's gift to humanity. But this is definitely making me question Apple. Maybe HTC deserves a chance from me.
    I also have a developer id so I am downloading the correct 7.1 ipsw as well and I will try again right now to put it on. first attempt didn't work out, I just upgraded itunes as well to see if that helps, I am putting it on recovery mode.
    Just for the record, turning off Wi-Fi seems to randomly improve performance in some cases, but not all the time.
    The biggest problem is in Photos.

  • Query Performance problem after upgrade from 8i to 10g

    The following query takes much longer in 10g.
    SELECT LIC_ID, FSCL_YR, KEY_NME, CRTE_TME_STMP, REMT_AMT, UNASGN_AMT, BAD_CK_IND, CSH_RCPT_PARTY_ID,
           CSH_RCPT_ID, REC_TYP, XENT_ID, CLNT_CDE, BTCH_CSH_STA, FILE_NBR, LIC_NBR, TAX_NBR, ASGN_AMT
    FROM (
        SELECT /*+ FIRST_ROWS */
               cpty.lic_id,
               cpty.clnt_cde,
               cpty.csh_rcpt_party_id,
               cpty.csh_rcpt_id,
               cpty.rec_typ,
               cpty.xent_id,
               cr.fscl_yr,
               cbh.btch_csh_sta,
               nam.key_nme,
               lic.file_nbr,
               lic.lic_nbr,
               cr.crte_tme_stmp,
               cr.remt_amt,
               cr.unasgn_amt,
               ee.tax_nbr,
               cr.asgn_amt,
               cr.bad_ck_ind
        FROM   lic lic,
               csh_rcpt_party cpty,
               name nam,
               xent ee,
               csh_rcpt cr,
               csh_btch_hdr cbh
        WHERE  1 = 1
        AND    ee.xent_id = nam.xent_id
        AND    cbh.btch_id = cr.btch_id
        AND    cr.csh_rcpt_id = cpty.csh_rcpt_id
        AND    ee.xent_id = cpty.xent_id
        AND    cpty.lic_id = lic.lic_id(+)
        AND    (cpty.clnt_cde IN (SELECT clnt_cde
                                  FROM   clnt
                                  START WITH clnt_cde = '4006'
                                  CONNECT BY PRIOR clnt_cde_prnt = clnt_cde)
                OR cpty.clnt_cde IS NULL)
        AND    nam.cur_nme_ind = 'Y'
        AND    nam.ent_nme_typ = 'P'
        AND    nam.key_nme LIKE 'WHITE%')
    ORDER BY lic_id
    Explain Plan in 8i
    0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=17 Card=1 Bytes=107)
    1   0    FILTER
    2   1      NESTED LOOPS (Cost=17 Card=1 Bytes=107)
    3   2        NESTED LOOPS (Cost=15 Card=1 Bytes=101)
    4   3          NESTED LOOPS (OUTER) (Cost=13 Card=1 Bytes=73)
    5   4            NESTED LOOPS (Cost=11 Card=1 Bytes=60)
    6   5              NESTED LOOPS (Cost=6 Card=1 Bytes=35)
    7   6                INDEX (RANGE SCAN) OF 'NAME_WBSRCH1_I' (NON-UNIQUE) (Cost=4 Card=1 Bytes=26)
    8   6                TABLE ACCESS (BY INDEX ROWID) OF 'XENT' (Cost=2 Card=4649627 Bytes=41846643)
    9   8                  INDEX (UNIQUE SCAN) OF 'EE_PK' (UNIQUE) (Cost=1 Card=4649627)
    10  5              TABLE ACCESS (BY INDEX ROWID) OF 'CSH_RCPT_PARTY' (Cost=5 Card=442076 Bytes=11051900)
    11  10               INDEX (RANGE SCAN) OF 'CPTY_EE_FK_I' (NON-UNIQUE) (Cost=2 Card=442076)
    12  4            TABLE ACCESS (BY INDEX ROWID) OF 'LIC' (Cost=2 Card=3254422 Bytes=42307486)
    13  12             INDEX (UNIQUE SCAN) OF 'LIC_PK' (UNIQUE) (Cost=1 Card=3254422)
    14  3          TABLE ACCESS (BY INDEX ROWID) OF 'CSH_RCPT' (Cost=2 Card=6811443 Bytes=190720404)
    15  14           INDEX (UNIQUE SCAN) OF 'CR_PK' (UNIQUE) (Cost=1 Card=6811443)
    16  2        TABLE ACCESS (BY INDEX ROWID) OF 'CSH_BTCH_HDR' (Cost=2 Card=454314 Bytes=2725884)
    17  16         INDEX (UNIQUE SCAN) OF 'CBH_PK' (UNIQUE) (Cost=1 Card=454314)
    18  1    FILTER
    19  18     CONNECT BY
    20  19       INDEX (UNIQUE SCAN) OF 'CLNT_PK' (UNIQUE) (Cost=1 Card=1 Bytes=4)
    21  19       TABLE ACCESS (BY USER ROWID) OF 'CLNT'
    22  19       TABLE ACCESS (BY INDEX ROWID) OF 'CLNT' (Cost=2 Card=1 Bytes=7)
    23  22         INDEX (UNIQUE SCAN) OF 'CLNT_PK' (UNIQUE) (Cost=1 Card=1)
    Explain Plan in 10g
    0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=19 Card=1 Bytes=112)
    1   0    SORT (ORDER BY) (Cost=19 Card=1 Bytes=112)
    2   1      FILTER
    3   2        NESTED LOOPS (Cost=18 Card=1 Bytes=112)
    4   3          NESTED LOOPS (Cost=16 Card=1 Bytes=106)
    5   4            NESTED LOOPS (OUTER) (Cost=14 Card=1 Bytes=78)
    6   5              NESTED LOOPS (Cost=12 Card=1 Bytes=65)
    7   6                NESTED LOOPS (Cost=6 Card=1 Bytes=34)
    8   7                  INDEX (RANGE SCAN) OF 'NAME_WBSRCH1_I' (INDEX) (Cost=4 Card=1 Bytes=25)
    9   7                  TABLE ACCESS (BY INDEX ROWID) OF 'XENT' (TABLE) (Cost=2 Card=1 Bytes=9)
    10  9                    INDEX (UNIQUE SCAN) OF 'EE_PK' (INDEX (UNIQUE)) (Cost=1 Card=1)
    11  6                TABLE ACCESS (BY INDEX ROWID) OF 'CSH_RCPT_PARTY' (TABLE) (Cost=6 Card=1 Bytes=31)
    12  11                 INDEX (RANGE SCAN) OF 'CPTY_EE_FK_I' (INDEX) (Cost=2 Card=4)
    13  5              TABLE ACCESS (BY INDEX ROWID) OF 'LIC' (TABLE) (Cost=2 Card=1 Bytes=13)
    14  13               INDEX (UNIQUE SCAN) OF 'LIC_PK' (INDEX (UNIQUE)) (Cost=1 Card=1)
    15  4            TABLE ACCESS (BY INDEX ROWID) OF 'CSH_RCPT' (TABLE) (Cost=2 Card=1 Bytes=28)
    16  15             INDEX (UNIQUE SCAN) OF 'CR_PK' (INDEX (UNIQUE)) (Cost=1 Card=1)
    17  3          TABLE ACCESS (BY INDEX ROWID) OF 'CSH_BTCH_HDR' (TABLE) (Cost=2 Card=1 Bytes=6)
    18  17           INDEX (UNIQUE SCAN) OF 'CBH_PK' (INDEX (UNIQUE)) (Cost=1 Card=1)
    19  2      FILTER
    20  19       CONNECT BY (WITH FILTERING)
    21  20         TABLE ACCESS (BY INDEX ROWID) OF 'CLNT' (TABLE) (Cost=2 Card=1 Bytes=15)
    22  21           INDEX (UNIQUE SCAN) OF 'CLNT_PK' (INDEX (UNIQUE)) (Cost=1 Card=1)
    23  20         NESTED LOOPS
    24  23           BUFFER (SORT)
    25  24             CONNECT BY PUMP
    26  23           TABLE ACCESS (BY INDEX ROWID) OF 'CLNT' (TABLE) (Cost=2 Card=1 Bytes=7)
    27  26             INDEX (UNIQUE SCAN) OF 'CLNT_PK' (INDEX (UNIQUE)) (Cost=1 Card=1)
    28  20         TABLE ACCESS (FULL) OF 'CLNT' (TABLE) (Cost=5 Card=541 Bytes=5951)
    The explain plan looks different in steps 19 to 28. I am not sure why 10g has more steps.

    Hi
    I have no experience with 8i. I do know that 10g does costing differently from 8i, so I think the other plan might have been eliminated.
    Normally, when I see differences, I just collect statistics on the tables and the indexes and remove the hints. Hints are not good. This has helped me solve a few problems.
    Thanks
    CT

  • Spatial query performance problem after upgrade to 10G

    I am in the process of converting my database from a 9i box to a new 10G 64-bit box. But I have found a problem which is causing some reports to be slower on the new box. I have simplified the queries down to having the user_sdo_geom_metadata table joined to use the diminfo in the queries (I know that I am not using them in these queries, but I simplified for testing purposes...)
    If I run the following query and look at the explain plan, I get full table scans for both spatial tables and index lookups for the user_sdo_geom_metadata queries, and it runs for about 14 seconds.
    SELECT ROWNUM
    from COUNTIES s,
    NOMINATIONS O,
    (select diminfo from user_sdo_geom_metadata where table_name='COUNTIES') S_DIM,
    (select diminfo from user_sdo_geom_metadata where table_name='NOMINATIONS') O_DIM
    where sdo_filter(S.GEOM,o.geom, 'querytype=WINDOW')='TRUE'
    and sdo_geom.within_distance(o.geom,0,S.GEOM,.5)='TRUE';
    If I just remove the two user_sdo_geom_metadata joins, I get spatial index usage on COUNTIES and the whole thing runs in less that a second.
    SELECT ROWNUM
    from COUNTIES s,
    NOMINATIONS O
    where sdo_filter(S.GEOM,o.geom, 'querytype=WINDOW')='TRUE'
    and sdo_geom.within_distance(o.geom,0,S.GEOM,.5)='TRUE';
    I have rebuilt the indexes, gathered stats, and tried hints to force the first query to use the spatial index. None of which made any change.
    Has anyone else seen this?
    Gerard Vidrine

    Hi Gerard,
    When the query window comes from a table Oracle always recommends:
    1) Use the /*+ ordered */ hint
    2) Put the table the query window comes from (geometry-2 in the query) first in the from clause
    However, your query is also written very strangely. Do you know about SDO_WITHIN_DISTANCE? Or are you trying to do SDO_ANYINTERACT (since the distance is 0)?
    So I would write the query you have as:
    SELECT ROWNUM
    from NOMINATIONS O, COUNTIES s
    where sdo_relate(S.GEOM,o.geom, 'querytype=WINDOW mask=anyinteract')='TRUE';
    or in Oracle10g:
    SELECT ROWNUM
    from NOMINATIONS O, COUNTIES s
    where sdo_anyinteract(S.GEOM,o.geom)='TRUE';
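    Combining both recommendations above (the ordered hint plus putting the query-window table first in the FROM clause), a sketch of the 10g version would look like this:
        SELECT /*+ ordered */ ROWNUM
        FROM   NOMINATIONS o,   -- query-window geometries listed first
               COUNTIES s
        WHERE  sdo_anyinteract(s.geom, o.geom) = 'TRUE';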

  • Performance Problems after migration to unicode

    Hello,
    we have migrated our CRM 4.0 SP6/MSA system to Unicode. But now the system is very slow, and the bigger problem is that when we replicate via modem and ConnTrans I often get connection errors while receiving data. And when I do receive some data, the MSA client is very slow to import it. I often do not get all the data, so I have to replicate repeatedly.
    Before Unicode I didn't have these problems.
    Does anyone have the same experince with crm/msa-unicode ?
    Best reagards
    Gerd

    Hi Gerd,
      You have mentioned that the system was migrated to CRM 4.0 Unicode. What do you mean by Unicode? Are you able to see several languages on the same screen in the MSA client in CRM 4.0?
    I heard that CRM 4.0 is a Unicode system, but I still don't think you can display different languages on the same screen, e.g. Japanese, Chinese and English together.
    For your information, search the OSS notes; you will find a note for Unicode which needs to be applied. Also, in general the Unicode solution is comparatively slow when data is transferred from the Middleware to the mobile client via ConnTrans, as the data is transferred in XML format, which contains more information than the regular RFC format.
    Best regards
    Silivia

  • Performance Problems after Database Upgrade to 10.1.0.5

    Has anyone else encountered Discoverer query performance suffering drastically after upgrading their database from 10.1.0.4 to 10.1.0.5? It is almost 5 times as slow.

    Ask your DBA to gather statistics for the schema concerned. This should eliminate one possibility.
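    A minimal sketch of what that could look like (the schema name is a placeholder; the exact options depend on your version and site standards):
        BEGIN
          DBMS_STATS.GATHER_SCHEMA_STATS(
            ownname          => 'REPORTING_OWNER',              -- placeholder: the schema the Discoverer reports query
            estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
            cascade          => TRUE);                          -- include index statistics
        END;
        /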
    Nilesh
    http://www.infocaptor.com
