Performance Problem - "OR" condition

All,
We have a query that does a full outer join, and the join condition contains an "OR".
DB Version: 10.2.0.5
select /*+ opt_param('_optimizer_native_full_outer_join', 'force') */ *
from
tableA a
full outer join
tableB b
on
(a.col1=b.col1) or (a.col2=b.col2) or (a.col3=b.col3)
The above query does not use a native hash full outer join even when we force it with the opt_param('_optimizer_native_full_outer_join', 'force') hint; the full outer join is still executed as a union of a left outer join and an anti-join.
The query takes 5 hours to complete, but when I remove the "OR" conditions it finishes within 10 minutes (a drastic change).
I used the following "OR" expansion:
select * from
( select *
  from
  tableA a
  full outer join
  tableB b
  on
  (a.col1=b.col1)
  UNION
  select *
  from
  tableA a
  full outer join
  tableB b
  on
  (a.col2=b.col2)
  UNION
  select *
  from
  tableA a
  full outer join
  tableB b
  on
  (a.col3=b.col3)
)
With this rewrite it does use a hash full outer join and the query ran in just 10 minutes, but I am getting more rows than the expected number of records.
So please advise me: how can I make this work?
I will post the explain plan and the actual query if anyone needs them.
Thanks

PLAN_TABLE_OUTPUT
Plan hash value: 3022803566
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 51095 | 8732K| | 15092 (1)| 00:04:32 |
| 1 | SORT UNIQUE | | 51095 | 8732K| 19M| 14311 (1)| 00:04:18 |
| 2 | VIEW | | 51095 | 8732K| | 13530 (1)| 00:04:04 |
| 3 | SORT UNIQUE | | 51095 | 93M| 199M| 13530 (1)| 00:04:04 |
| 4 | UNION-ALL | | | | | | |
| 5 | VIEW | | 51093 | 93M| | 5440 (2)| 00:01:38 |
| 6 | UNION-ALL | | | | | | |
|* 7 | FILTER | | | | | | |
| 8 | NESTED LOOPS OUTER | | 51090 | 10M| | 5303 (2)| 00:01:36 |
| 9 | TABLE ACCESS FULL | TABLE1 | 1310 | 134K| | 4 (0)| 00:00:01 |
| 10 | VIEW | | 39 | 4368 | | 4 (0)| 00:00:01 |
|* 11 | TABLE ACCESS FULL| TABLE2 | 39 | 4368 | | 4 (0)| 00:00:01 |
|* 12 | FILTER | | | | | | |
|* 13 | TABLE ACCESS FULL | TABLE2 | 66 | 7392 | | 4 (0)| 00:00:01 |
|* 14 | TABLE ACCESS FULL | TABLE1 | 39 | 4095 | | 4 (0)| 00:00:01 |
| 15 | FAST DUAL | | 1 | | | 2 (0)| 00:00:01 |
| 16 | FAST DUAL | | 1 | | | 2 (0)| 00:00:01 |
Even with the hints in place it is doing a nested loop outer join, but when I remove the "OR" from the join condition it uses a hash outer join.
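For reference, the plain UNION rewrite over-counts because each full-outer-join branch NULL-pads every row that has no partner on that branch's single column, even when the row matches on one of the other columns, and those NULL-padded rows all survive the UNION. Below is a sketch of an expansion that preserves the original semantics while giving each branch a single equality condition that can be hash joined (the column lists are placeholders for the real projections):
select a.col1, a.col2, a.col3, b.col1, b.col2, b.col3
from tableA a inner join tableB b on a.col1 = b.col1
union
select a.col1, a.col2, a.col3, b.col1, b.col2, b.col3
from tableA a inner join tableB b on a.col2 = b.col2
union
select a.col1, a.col2, a.col3, b.col1, b.col2, b.col3
from tableA a inner join tableB b on a.col3 = b.col3
union all
select a.col1, a.col2, a.col3, null, null, null  -- tableA rows with no match at all
from tableA a
where not exists (select null from tableB b
                  where a.col1 = b.col1 or a.col2 = b.col2 or a.col3 = b.col3)
union all
select null, null, null, b.col1, b.col2, b.col3  -- tableB rows with no match at all
from tableB b
where not exists (select null from tableA a
                  where a.col1 = b.col1 or a.col2 = b.col2 or a.col3 = b.col3);
One caveat: the UNION between the matched branches also collapses genuinely duplicate matched pairs; if those must be preserved, replace UNION with UNION ALL and give each later branch a predicate that excludes pairs already matched on an earlier column.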

Similar Messages

  • Performance problem in ABAP code

    Hi guys,
    I created a report (a tax report) using tables like BSIS, T001, etc., and I have a performance problem with it.
    Could you please tell me how to analyse the report and find out where the processing takes the most time and memory?
    I did an ABAP trace and runtime analysis but could not find the exact point.
    How do I do this? I want to analyse each subroutine, internal table and query.
    Could you please give me some ideas?
    ambichan

    There is an excellent tool available in SAP - Code Inspector.
    The transaction is SCII.
    Try the following link and I am sure you will find a bunch of useful documents on ABAP performance:
    http://www.google.co.in/search?hl=en&safe=off&q=site%3Asdn.sap.comfiletype%3ApdfCode+Inspector&btnG=Search&meta=
    I use the Code Inspector to search for
    a) All the select statements which are present within the loop
    b) Nested Loops
    c) Select query without providing criteria for primary keys, depending upon situation
    d) Can the search be narrowed with extra conditions
    e) Using READ .. BINARY SEARCH if internal table has lots of records.
    The list is actually endless, but this is something to start with.
    You can actually build a checklist and go through your code against it. The more you adhere to the checklist, the more you will find that performance improves dramatically.
    Also use transaction ST05 for an SQL trace, to find out which select query takes the maximum time to respond.
    Regards,
    Subramanian V.

  • LR3 "Extra Processing in Develop" Performance Problem

    I have been investigating a specific LR3 performance problem.  It may explain a small subset of the problems people have reported in the "Why is LR3 So Slow?" thread.   I'm starting this thread to focus on this particular problem.  I hope others will confirm/refute/refine my findings.
    The Problem
    In Develop, when I make an adjustment, normally the following happens: The CPU usage (as shown in Activity Monitor's bar graph) jumps to between 50 and 75% for all four cores, the updated image appears, and the CPU usage settles back down.  This all happens in less than half a second.  Note: this is with the image at the Fit size.  However, sometimes I instead get the following after an adjustment: the CPU usage jumps to 50 to 75% for all four cores and the updated image appears as usual, however, instead of settling back down, the CPU usage jumps up to 90 to 100% for all cores and stays there for 3 to 5 seconds before settling down. Thus it appears that LR is doing some kind of "extra processing" since a lot of computation is happening AFTER the updated image has already appeared.  I will refer to this problem as "EP".  Obviously, when you are getting EP, editing in Develop becomes very balky.
    Dependency on ratio between image size and displayed size
    It appears that EP only happens when the displayed size of the image (in Fit zoom level and perhaps also Fill zoom level) is above a certain percentage of the actual image size (as currently cropped).  Evidence: When editing full 21MP 5D2 images, I don't experience EP.  If I crop the 5D2 image fairly significantly, then I can get EP.  When editing 10MP images from my Canon S90, I usually get EP for landscape orientation pictures but not for portrait orientation pictures (since in Fit mode, landscape images display at a higher zoom level than portrait images).  If I am getting EP, I can eliminate it by sufficiently reducing the size that LR is displaying the image by resizing the LR window smaller, opening additional panels (I normally edit with only the right panel open), displaying the toolbar, etc.  It appears that EP is enabled when the displayed image is about 50% or larger w.r.t. the actual image (as currently cropped).  For example, EP becomes enabled when a 3648 pixel wide S90 image is displayed at least 17 and 7/8 inches wide on my 100 ppi monitor (i.e. about 1787 pixels).
    Dependency on HOW an adjustment is invoked
    Even when the displayed image size is large enough w.r.t. the actual image size to enable EP, whether you get it on a given adjustment depends on how you invoke it:
    - If you CLICK (i.e. press the mouse button down and quickly release it) on the track of one of the sliders (a technique I use often to make big jumps), EP will happen.
    - If you press the mouse button down on a slider handle, drag it to a new position, and quickly release the mouse button, EP will happen.
    - If you press the mouse button down on a slider handle, drag it to a new position, but continue to hold the mouse button down until the displayed image is updated, EP does NOT happen (either before or after you then release the mouse button).
    - If you highlight the numeric field at the end of a slider and use the arrow keys (possibly along with Shift) to increment or decrement the value, EP does NOT happen.
    - EP will happen if you resize the LR window such that the displayed image size is above the threshold.  (In fact, I determined the threshold by making a series of window width increases until I saw EP indicated by the CPU bar graphs.)
    - EP can happen with local adjustment brush applications, but as with the sliders, it depends on HOW you perform the brush stroke.  Single click and drags with immediate mouse release cause EP, drags with delayed mouse button release don't.
    - Clicking an earlier History state causes EP
    - More exploration could be done.  For example, I haven't looked at Graduated Filter and Spot Removal adjustments.
    My theory of what's happening
    With LR2, my understanding is that in Develop mode when the displayed image is below 1:1 zoom level, after an adjustment is invoked, LR calculates the new version of the image to display using a fast, simplified algorithm that doesn't include the more computationally intensive adjustments like Sharpening and Noise Reduction (and perhaps works on a lower rez version of the image with multiple sensels binned together?).  It appears that in conditions described above, LR3 calculates the initial, fast image update and then goes on to do the full update of the image, including the computationally intensive adjustments.  Evidence:   setting Sharpening Amount and Luminance and Color Noise Reduction to zero eliminates EP (or reduces the amount of time it takes to be barely noticeable).  I'm not sure whether the displayed image is updated with the results of the extra processing.  I think the answer is Yes since when I tried an adjustment of changing Sharpening Amount from 0 to 90, the initial update of the displayed image showed sharpening but after the EP, the displayed image was updated again to show somewhat different sharpening. Perhaps Adobe felt that it would be useful to see the more accurate version of the image when it is at or above 50% zoom.  Maybe the UI is supposed to cancel the EP if you start to make another adjustment before it has completed but the canceling doesn't happen unless you invoke the adjustment in one of the ways described above that doesn't cause EP.  
    Misc
    - EP doesn't seem to happen for Process 2003
    - As others have mentioned, I'm surprised that LR (both version 2 and 3) in 64bit mode doesn't use more available RAM.  I don't think I've seen LR go above 4GB of virtual memory or above 3GB "Real Memory" (as reported by Activity Monitor) even though I have several GB free.
    - It should be obvious from the above that if you experience EP, there are workarounds: reduce the size of the displayed image (e.g. by window resizing), invoke adjustments in ways that don't cause EP, turn off Sharpening and Noise Reduction until the end of editing an image.
    System specs
    First generation Intel Mac Pro with two dual-core CPUs at 2.66 Ghz
    OS 10.5.8
    21GB RAM
    ACR cache on volume striped across 3 internal SATA drives
    LR catalog and RAWs on an internal SATA drive
    30" HP LP3065 monitor (2560 pixels wide)
    NVIDIA GeForce 7300 GT

    I'm impressed by your thorough analysis.
    Clearly, the programmers haven't figured out the best way to do intelligent caching and/or parallel rendering at a reduced size yet.
    In my experience reducing the settings in the "Details" panel doesn't help.
    What really bugs me is that the lag (or increasing lack of interactiveness) depends on the number of adjustments one has made.
    This shouldn't be the case. If a cache is produced then every further adjustment should only cost the effort for that latest adjustment and not include the adjustments before it. There are two things that stand in the way of straightforward edit application:
    1. If you work below 1:1 preview, adjustments have to be shown in a reduced form. If you don't have a way to faithfully mimic the adjustments on the reduced size, you have to do them on the original image and then scale down. That's expensive.
    2. To the best of my knowledge LR uses a fixed image pipeline. Hence, independently of the order in which you apply edits, they are always performed in the same fixed order. Say all spot removal operations are done first. If you have a lot of adjustment brush edits and then add a spot removal operation, it means that all the adjustment brush operations have to be replayed each time you do a little adjustment on your spot removal edit.
    I believe what you are seeing is mostly related to 1.
    I also believe that the way LR currently handles a moderate number of edits is unacceptable and incompatible with the notion that it is usable in a commercial setting for more than trivial edits. I suspect there is something else going on. If everyone saw the deterioration in performance after a number of edits that I see, I don't think LR would be as accepted as it is. Having said that, I've read that the problem of repeated applications of the adjustment brush slowing LR down has existed for a long time. I truly hope that this doesn't mean we'll have to live with it for the foreseeable future.
    There are two ways I can see how point 2 should be addressed:
    - Combine the effects of a set of operations into one bitmap operation. Instead of replaying all adjustment brush strokes one after the other (speed-wise it feels like this is happening), compute a single bitmap operation that combines all effects.
    - Give up the idea that there is an image pipeline with a fixed execution order.
    Some might argue that the second point is at odds with the whole idea of parametric editing, but I dispute that. Either edit operations are commutable in which case the order is immaterial, or they are not. If they are not, the user applies the edits in a way as he/she sees fit and will thus compensate for any effect of a changed ordering.
    N.B., currently the doctrine of "fixed ordering of edit applications" results in the effect that even if you convert an image into B&W all your adjustment brush edits that applied colour tints will still show through. Reasoning: The user should be able to locally tint a B&W image. I agree with the latter but this could be achieved by only applying those tinting brush strokes that were created after the B&W conversion. All the ones that happened before should only be used to obtain the correct luminance values for the B&W conversion but obviously they shouldn't cause tinted areas.
    The above example demonstrates to me that users naturally expect operations to occur in the order they were introduced, not in a fixed predefined order. If that principle were followed, I see no reason why the speed of a single edit should depend on the number of edits that were done to the image before.
    I hope the programmers can (and the management wants to) address the performance issues. While I find LR usable for pretty modest edits, in no way does the performance on my system approach what I would expect from an industrial-strength application.
    P.S.: Your message reminded me of the following: When I experience serious lag with LR showing the strokes I make with an adjustment brush, it helps to pause a moment after the first click before one starts moving. This allows LR to catch up and then one can see the effect of the application pretty much interactively. Otherwise, there is terrible lag and the feedback where you have brushed an effect comes way too late.

  • Serious performance problems after updating from 10.5.6 to 10.5.7

    I've got a Macbook Pro C2D 15" (late 2006). After updating the system from 10.5.6 to 10.5.7 and quicktime to 7.6.2, I experience serious performance problems which I have never seen before:
    -From time to time, kernel_task has peaks in activity monitor (consumes a lot of CPU). In that moment, the mouse hangs for a short moment and audio playback is distorted/has drop outs.
    -Recording audio consumes a lot of CPU (independent of the audio application).
    -Video playback consumes a lot of CPU, regardless if I use VLC or the quicktime player.
    Any idea what could be done? Thanks in advance.

    After a few days of using CoolBook I report that my 1st generation MacBook Air is now reliable and never overheating, regardless of environment conditions.
    I must point out that I do believe that it really is something new in 10.5.7 that is causing this problem as my environmental conditions have not changed and even Summer in Dublin still sees temperatures no more than 72F/22C. I am now in Madrid, where the environment is quite different, and I am still having no problem with kernel_task spinning.
    I plan on reporting my problem to Apple anyway and being aggressive about the issue as MacLovinRanga did in this thread (http://discussions.apple.com/thread.jspa?messageID=9647353#9647353). Perhaps many/all of those of us having this problem will obtain new MacBook Airs.
    Joe Kiniry

  • 3D performance problems after upgrading memory

    I recently purchased an additional 2GB of memory to try and extend the life of my aging computer.  I installed the memory yesterday and Windows seems to recognize it (reporting now 3.3GB) but when I dropped into WoW (pretty much the only game I have) the 3D performance was down from the usual 60FPS @ 1600x1080 to a bleak 20 (at best) and the CPU utilization went to about 80% on both cores (with ~20% kernel usages).  Basically WoW was being software 3D rendered!!!
    I went through the usual reinstall drivers, reboot, etc... and couldn't find a fix.  I powered down, pulled out 2 of the memory sticks, booted up, and dropped into WoW - it ran at the full 60FPS and CPU utilization was very low (i.e. back to GPU Hardware 3D rendering).  I powered down again, swapped the 2 sticks for the other 2 sticks, booted up, and dropped into WoW - again it ran 100% fine.  So I powered down, put all four sticks in, booted back up, and when I dropped into WoW it was running in the software 3D rendering mode (20FPS at best and High CPU/Kernel usage).
    I've tried the /PAE option in boot.ini - no joy.  I've tried /MAXMEM = to 3300, 3072, 3000, and even 2048 - no joy in any of those cases.  Has anyone seen anything like this before?  Or have suggestions to fix (other than going to Win7-64)?
    All info in signature is up to date.
    Thanks in advance for any help!

    Quote
    Well his last post was a little over 6 hours ago so he was up pretty late.
    Looks like nothing one does in here goes completely unnoticed.   
    Anyway, I am done sleeping now.
    Quote
    his 2 Pfennig's worth.  I know, I know it's Euro's now.
    Yeah, and what used to be "Pfennige" is now also called "Cents" and here are mine:
    Quote
    I've tried the /PAE option in boot.ini - no joy.  I've tried /MAXMEM = to 3300, 3072, 3000, and even 2048 - no joy in any of those cases.  Has anyone seen anything like this before?  Or have suggestions to fix (other than going to Win7-64)?
    PAE or Physical Address Extension will not do anything, as Microsoft has castrated this feature to such an extent that it has nothing to do with memory addressing anymore when it comes to Windows XP:
    http://en.wikipedia.org/wiki/Physical_Address_Extension#Microsoft_Windows
    Quote
    Windows XP Service Pack 2 and later, by default, on processors with the no-execute (NX) or execute-disable (XD) feature, runs in PAE mode in order to allow NX. The NX (or XD) bit resides in bit 63 of the page table entry and, without PAE, page table entries only have 32 bits; therefore PAE mode is required if the NX feature is to be exploited. However, desktop versions of Windows (Windows XP, Windows Vista) limit physical address space to 4 GiB for driver compatibility reasons.
    The feature is already automatically enabled. But since its original function (address extension) no longer exists in the desktop versions of Windows XP, it won't really do anything you would ever notice.
    About the /MAXMEM switch: in 32bit Windows, every process is limited to 2GB of virtual address space, and /MAXMEM does not change that; it actually restricts how much physical memory Windows will use. The switch that extends the per-process limit from 2GB to 3GB is /3GB, and even then only applications that were compiled with the "large address aware" flag can occupy the extra space. Most 32bit applications come without that flag, which is why setting the switch usually changes nothing.
    In any case, it is unlikely that /PAE (even if it were not castrated) and /MAXMEM would have an impact on your actual issue, because I doubt that it has much to do with either memory addressing or the memory limit of an individual Windows process.
    Quote
    the 3D performance was down from the usual 60FPS @ 1600x1080 to a bleak 20 (at best) and the CPU utilization went to about 80% on both cores (with ~20% kernel usages).
    There are a couple of hardware based explanations to consider here.  Let's start with the most obvious one:
    1. 975X Memory Controller
    The main reason that the system chooses to automatically set the memory speed to DDR2-667 even though DDR2-800 modules are installed is that, by design, the memory controller of the Intel 975X Chipset does not natively support DDR2-800 modules; see
    >>Intel® 975X Express Chipset Datasheet - For the Intel® 82975X Memory Controller Hub (MCH)<< [Page 20]
    This means, that from the point of view of the memory controller, operating the memory @DDR2-800 actually means overclocking it (with all potential side effects).
    Basically, if your initial problem disappears as soon as you reduce the memory speed to DDR2-667, the design limitation of the memory controller may explain your findings.
    2. Different memory modules
    If I read your signature correctly, you are actually mixing two different kits/models of RAM (CM2X1024-6400C4DHX and CM2X1024-6400C4). This can work, of course, but in practice it does not necessarily work under all circumstances.
    This list (-> http://ramlist.i4memory.com/ddr2/) indicates that there are at least 14 different module types/revisions of Corsair DDR2-800 / CL4 modules that utilize a wide range of different memory chips (Elpida, ProMOS, Micron, Infineon, Powerchip, Qimonda, Samsung etc.). Even though the superficial specifications for these chips appear to be pretty similar (DDR2-800 / CL5 / CL4), this does not necessarily mean that the modules will respond to the same operating conditions in the same way. There may be small differences in sub-timings/sub-latencies and/or the general responsiveness of the ICs which may affect the operating behaviour of the memory controller (which by the way also includes the PCI-Express interface which your video card is hooked up to).
    And again:  If running the system @DDR2-667 solves your issue, the possible explanation is that higher clock speeds may amplify (or trigger) potential performance problems that could have to do with the use of non-identical memory modules.
    Furthermore: It is also possible that the memory controller's design limitations and the potential compatibility problems that may be attributed to mixing different modules types may reinforce each other in terms of reduced system performance.
    3. The BIOS may have an impact as well
    There has been a known issue with the use of certain video cards in conjunction with 4GB of system memory on this mainboard:
    https://forum-en.msi.com/index.php?topic=107301.0
    https://forum-en.msi.com/index.php?topic=105955.0
    https://forum-en.msi.com/index.php?topic=99818.msg798951#msg798951
    What may have come out as graphics/display corruption in earlier BIOS Releases may come out as reduced system performance when using the latest BIOS Release.  Of course, this is hard to prove, but I thought I'd mention it anyway.  May I ask what amount of video memory your card has onboard?
    Fortunately, there is a BIOS version that you could consider trying in this matter. It is not only the last BIOS release that can be used to avoid the corruption issue, but it is (in my opinion) the best BIOS version ever released for the 975X Platinum PUE mainboard: W7246IMS.716 [v7.1b6]. I have been using this mainboard for almost two years, have tested almost every BIOS release that ever came out, and I always went back to v7.1b6 as "ground zero".
    It will properly support your E6600 (so you don't have to worry about that) and as far as I remember, there are no known compatibility issues with other components.  So maybe, you want to give this a shot.
    The bottom line is that in a worst case scenario, the problem you describe could be caused by all of the above things at the same time.  You cannot really do anything about the 975X Chipset Specifications and the only way to rule out explanation #2 is to test modules that are actually identical (same model number, revision and memory chips).  A test of the 7.1b6 BIOS Release is something you should consider.  It may be the only way to test the BIOS Hypothesis.
    This post turned out to be longer than I intended, but then again, I am well-rested after a good sleep and the wake-up coffee is kicking in pretty good.

  • How to avoid performance problems in PL/SQL?

    As per my knowledge, below are some points for avoiding performance problems in PL/SQL.
    Are there other points to avoid performance problems?
    1. Use FORALL instead of FOR, and use BULK COLLECT to avoid looping many times.
    2. EXECUTE IMMEDIATE is faster than DBMS_SQL
    3. Use NOCOPY for OUT and IN OUT if the original value need not be retained. Overhead of keeping a copy of OUT is avoided.
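    As a minimal sketch of point 1 (this assumes the standard HR sample table EMPLOYEES; adapt the names):
    DECLARE
      TYPE t_ids IS TABLE OF employees.employee_id%TYPE;
      l_ids t_ids;
    BEGIN
      -- one round trip to fetch the ids, instead of a row-by-row cursor loop
      SELECT employee_id BULK COLLECT INTO l_ids
        FROM employees
       WHERE department_id = 50;
      -- one bulk DML statement, instead of one UPDATE per loop iteration
      FORALL i IN 1 .. l_ids.COUNT
        UPDATE employees
           SET salary = salary * 1.1
         WHERE employee_id = l_ids(i);
    END;
    /
    For very large result sets, BULK COLLECT with a LIMIT clause inside a loop keeps memory consumption bounded.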

    Susil Kumar Nagarajan wrote:
    > 1. Group a number of functions or procedures into a PACKAGE
    Putting related functions and procedures into packages is useful from a code organization standpoint. It has nothing whatsoever to do with performance.
    > 2. Good to use collections in place of cursors that do DML operations on a large set of records
    But using SQL is more efficient than using PL/SQL with bulk collects.
    > 4. Optimize SQL statements if they need it
    > -> Avoid using IN, NOT IN conditions or those that cause full table scans in queries
    That is not true.
    > -> See that queries use indexes properly; sometimes the leading index column is missed out, which causes performance overhead
    Assuming "properly" implies that it is entirely possible that a table scan is more efficient than using an index.
    > 5. Use Oracle HINTS if a query can't be further tuned and hints can considerably help you
    Hints should be used only as a last resort. It is almost certainly the case that if you can use a hint that forces a particular plan to improve performance, then there is some problem in the underlying statistics that should be fixed in order to resolve issues with many queries rather than just the one you're looking at.
    Justin

  • Performance problem in 7.6.6.10

    We have a performance problem after updating from MaxDB 7.6.6.3 to 7.6.6.10.
    The symptom is that queries with the "<>" operator in the WHERE clause on an indexed Integer/Smallint column slow down extremely, e.g. "WHERE FIELDNAME <> 1".
    On large tables the query is very, very slow.
    The dbanalyser shows "DIFFERENT STRATEGIES FOR OR-TERMS".
    A way to reproduce the problem:
    Create a table with 2 columns:
    CREATE TABLE "ADMIN"."TEST"
    (
         "INTID"  Integer  NOT NULL,
         "FLAG"   Smallint,
         PRIMARY KEY ("INTID")
    )
    Create an index on column FLAG:
    CREATE INDEX "IDX_TEST" ON "ADMIN"."TEST"("FLAG" ASC)
    Insert about 1000 rows into TEST:
    INSERT INTO TEST (SELECT ROWNO, 1 FROM LARGETABLE WHERE ROWNO <= 1000)
    (The easiest way for me to fill the table.)
    Call the dbanalyser
    EXPLAIN SELECT * FROM TEST WHERE FLAG <> 1
    OWNER  TABLENAME  COLUMN_OR_INDEX  STRATEGY                                PAGECOUNT
    ADMIN  TEST                        DIFFERENT STRATEGIES FOR OR-TERMS                8
                      IDX_TEST         RANGE CONDITION FOR INDEX              
                                       ONLY INDEX ACCESSED                    
                      FLAG                  (USED INDEX COLUMN)               
                      IDX_TEST         RANGE CONDITION FOR INDEX              
                                       ONLY INDEX ACCESSED                    
                      FLAG                  (USED INDEX COLUMN)               
                                            RESULT IS COPIED   , COSTVALUE IS           6
                                       QUERYREWRITE - APPLIED RULES:          
                                          DistinctPullUp                                1
    The statement is fast because of the small table, but I think the strategy is wrong.

    > We have a performance problem after doing the update from MaxDB 7.6.6.3 to 7.6.6.10.
    > The symptom is that queries with the "<>" operator in the WHERE clause on an indexed Integer/Smallint column slow down extremely, e.g. "WHERE FIELDNAME <> 1".
    > On large tables the query is very, very slow.
    > Index on column FLAG:
    > CREATE INDEX "IDX_TEST" ON "ADMIN"."TEST"("FLAG" ASC)
    > The statement is fast because of the small table, but I think the strategy is wrong.
    Hmm.. what other strategy would you propose?
    The single table optimizer tries to estimate how many pages would need to be read to find the data required.
    It figures that for your statement there won't be many pages required, so an index access might be beneficial.
    And to use the index efficiently it transforms your inequality into a "larger than" OR "smaller than" condition.
    So you get "DIFFERENT STRATEGIES FOR OR-TERMS".
    If you look closely you'll find that both strategies actually are "RANGE CONDITION FOR INDEX" on the IDX_TEST index.
    The difference between them both is the range (the start/stop-key combination) used for the index reading.
    Anyhow - inequality conditions are always problematic for a DBMS, which is designed to be quick at finding data that is equal or like some condition.
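    As a rough illustration (the idea of the strategy, not MaxDB's literal internal rewrite), the predicate is evaluated as two index ranges:
    SELECT * FROM TEST WHERE FLAG < 1
    UNION ALL
    SELECT * FROM TEST WHERE FLAG > 1
    Each branch is a "RANGE CONDITION FOR INDEX" on IDX_TEST with its own start/stop key, which is exactly the pair of strategies the EXPLAIN output above shows.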
    regards,
    Lars

  • Performance problem with table COSS...

    Hi
    Has anyone encountered performance problems with these tables: COSS, COSB, COEP?
    Best Regards

    >
    gsana sana wrote:
    > Hi Guru's
    >
    > this is the select query which is taking much time in Production, so please help me improve the performance with BSEG.
    >
    > this is my select query:
    >
    > select  bukrs
    >               belnr
    >               gjahr
    >               bschl
    >               koart
    >               umskz
    >               shkzg
    >               dmbtr
    >               ktosl
    >               zuonr
    >               sgtxt
    >               kunnr
    >         from  bseg
    >         into  table gt_bseg1
    >          for  all entries in gt_bkpf
    >        where  bukrs eq p_bukrs
    >          and  belnr eq gt_bkpf-belnr
    >          and  gjahr eq p_gjahr
    >          and  buzei in gr_buzei
    >          and  bschl eq  '40'
    >          and  ktosl  ne  'BSP'.
    >
    > UR's
    > GSANA
    Hi,
    This is what I know, and if any expert thinks it's incorrect, please do correct me.
    BSEG is a cluster table with BUKRS, BELNR, GJAHR and BUZEI as its key, whereas the other fields are stored in the database as raw data, so SAP has to unpack that raw data first if we use other fields in the WHERE condition. Hence, I suggest restricting the WHERE condition to fields up to BUZEI and filtering the other conditions at the internal table level, for example with a DELETE statement. Hope this helps.
    Regards,
    Abraham

  • Performance problem(ANEA/ANEP table) in Select statement

    Hi
    I am using the select statements below to fetch data.
    Do the WHERE conditions below have performance issues? Can you please suggest?
    1) In the select on the ANEP table I am not using all the key fields in the WHERE condition. Will that cause a performance problem?
    2) Does the order of the WHERE conditions have to match the field order in the table? Will changing the order of even one field affect performance?
    SELECT bukrs                           
             anln1                          
             anln2                          
             afabe                          
             gjahr                        
             peraf                         
             lnran                         
             bzdat                          
             bwasl                        
             belnr                         
             buzei                         
             anbtr                       
             lnsan                         
        FROM anep
        INTO TABLE o_anep
        FOR ALL ENTRIES IN i_anla
       WHERE bukrs = i_anla-bukrs          
         AND anln1 = i_anla-anln1          
         AND anln2 = i_anla-anln2          
         AND afabe IN s_afabe              
      AND bzdat <= p_date
         AND bwasl IN s_bwasl.              
      SELECT bukrs      
             anln1      
             anln2      
             gjahr      
             lnran       
             afabe      
             aufwv       
             nafal   
             safal       
             aafal      
             erlbt    
             aufwl      
             nafav    
             aafav     
             invzv   
             invzl      
        FROM anea
        INTO TABLE o_anea
        FOR ALL ENTRIES IN o_anep
       WHERE bukrs = o_anep-bukrs    
         AND anln1 = o_anep-anln1    
         AND anln2 = o_anep-anln2    
         AND gjahr = o_anep-gjahr    
         AND lnran = o_anep-lnran   
         AND afabe = o_anep-afabe.
    Moderator message: Please Read before Posting in the Performance and Tuning Forum

    1. Yes. If you have only a few of the primary key fields in your WHERE condition, that does affect performance. But sometimes the requirement itself is that way; we may not know all the primary key values to give in the WHERE condition. If you know the values, then provide them without fail.
    2. Yes. It's better to always follow the table's field sequence in the WHERE condition and even in the fields being fetched.
    One important point: whenever you use FOR ALL ENTRIES IN, please make sure that the itab IS NOT INITIAL, i.e. the itab must have been filled. So place the same condition before both SELECT queries, like:
    IF i_anla[] IS NOT INITIAL.
    SELECT bukrs                           
             anln1                          
             anln2                          
             afabe                          
             gjahr                        
             peraf                         
             lnran                         
             bzdat                          
             bwasl                        
             belnr                         
             buzei                         
             anbtr                       
             lnsan                         
        FROM anep
        INTO TABLE o_anep
        FOR ALL ENTRIES IN i_anla
       WHERE bukrs = i_anla-bukrs          
         AND anln1 = i_anla-anln1          
         AND anln2 = i_anla-anln2          
         AND afabe IN s_afabe              
      AND bzdat <= p_date
         AND bwasl IN s_bwasl.              
    ENDIF.
    IF o_anep[] IS NOT INITIAL.
      SELECT bukrs      
             anln1      
             anln2      
             gjahr      
             lnran       
             afabe      
             aufwv       
             nafal   
             safal       
             aafal      
             erlbt    
             aufwl      
             nafav    
             aafav     
             invzv   
             invzl      
        FROM anea
        INTO TABLE o_anea
        FOR ALL ENTRIES IN o_anep
       WHERE bukrs = o_anep-bukrs    
         AND anln1 = o_anep-anln1    
         AND anln2 = o_anep-anln2    
         AND gjahr = o_anep-gjahr    
         AND lnran = o_anep-lnran   
         AND afabe = o_anep-afabe.
    ENDIF.

  • A017 performance Problem

    Hi all
    I am facing a performance problem. While selecting field DATAB (validity start date of the condition record) from pooled table A017, I am getting a timed-out error because the select runs so long. Is there an efficient way to retrieve the data?
    Regards
    Sreenivasa Reddy

    Hi,
    Follow the points below to improve performance:
    1) Remove * from SELECT; list the fields you need.
    2) Select fields in the sequence in which they are defined in the database.
    3) Avoid unnecessary selects, i.e. check that the driving internal table is not initial.
    4) Use FOR ALL ENTRIES and sort the table by its key fields.
    5) Move selects out of loops and use binary search.
    6) Try to use a secondary index when you don't have the full key.
    7) Modify internal tables using the TRANSPORTING option.
    8) Avoid nested loops; use READ TABLE and then LOOP AT itab FROM sy-tabix.
    9) Free internal table memory when the table is not required for further processing.
    10) Follow the logic below.
    FORM SUB_SELECTION_AUFKTAB.
      if not it_plant[] is initial.
        it_plant1[] = it_plant[].
        sort it_plant1 by werks.
        delete adjacent duplicates from it_plant1 comparing werks.
        SELECT AUFNR KTEXT USER4 OBJNR
          INTO CORRESPONDING FIELDS OF TABLE I_AUFKTAB
          FROM AUFK
          FOR ALL ENTRIES IN it_plant1
          WHERE AUFNR IN S_AUFNR AND
                KTEXT IN S_KTEXT AND
                WERKS IN S_WERKS AND
                AUART IN S_AUART AND
                USER4 IN S_USER4 AND
                WERKS EQ it_plant1-werks.
        free it_plant1.
      endif.
    ENDFORM. "SUB_SELECTION_AUFKTAB
    Regards
    Amole

  • Performance Problem While Data updating In Customize TMS System

    Dear guys
    I have developed a customized time management system, but there is a performance problem while updating machine data in the monthly roster table; there is an if/else condition which checks whether the record status is late or present on time, etc.
    Any clue how to improve the performance?
    Thanks in advance
    --regards
    furqan

    Furqan wrote:
    > I have developed a customized time management system, but there is a performance problem while updating machine data in the monthly roster table; there is an if/else condition which checks whether the record status is late or present on time, etc.
    > Any clue how to improve the performance?
    From that description, and without any database version, code, explain plans, execution traces or anything else that would be remotely useful... erm... no.
    Hint:
    How to Post an SQL statement tuning request...
    HOW TO: Post a SQL statement tuning request - template posting

  • PL/SQL Performance problem

    I am facing a performance problem with my current application (a PL/SQL packaged procedure).
    My application takes data from 4 temporary tables, does a lot of validation, and puts the data into permanent tables (updates if present, else inserts).
    One of the temporary tables is the parent table, and each of its rows can have 0 or more rows in the other tables.
    I have analyzed all my tables and indexes and checked all my SQLs; they all seem to be using the indexes correctly.
    There are 1.6 million records combined in all 4 tables.
    I am using Oracle 8i.
    How do I determine what is causing the problem and which part is taking the time?
    Please help.
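    One way to see exactly which part is taking the time is DBMS_PROFILER, which is available on 8i once the profiler tables have been created with proftab.sql; the package and procedure names below are hypothetical:
    BEGIN
      DBMS_PROFILER.START_PROFILER('validation load, run 1');
      my_pkg.load_from_temp_tables;  -- hypothetical: the packaged procedure being tuned
      DBMS_PROFILER.STOP_PROFILER;
    END;
    /
    -- then list the lines with the most accumulated time
    SELECT u.unit_name, d.line#, d.total_occur, d.total_time
      FROM plsql_profiler_units u, plsql_profiler_data d
     WHERE u.runid = d.runid
       AND u.unit_number = d.unit_number
     ORDER BY d.total_time DESC;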
    The skeleton of the code we have written looks like this:
    MAIN LOOP ( 255308 records)-- Parent temporary table
    -----lots of validation-----
    update permanent_table1
    if sql%rowcount = 0 then
    insert into permanent_table1
    Loop2 (0-5 records)-- child temporary table1
    -----lots of validation-----
    update permanent_table2
    if sql%rowcount = 0 then
    insert into permanent_table2
    end loop2
    Loop3 (0-5 records)-- child temporary table2
    -----lots of validation-----
    update permanent_table3
    if sql%rowcount = 0 then
    insert into permanent_table3
    end loop3
    Loop4 (0-5 records)-- child temporary table3
    -----lots of validation-----
    update permanent_table4
    if sql%rowcount = 0 then
    insert into permanent_table4
    end loop4
    -- COMMIT after every 3000 records
    END MAIN LOOP
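    For what it's worth, each update-then-insert pair above is the classic upsert pattern, and if the validations can be expressed in SQL, two set-based statements per table remove most of the row-by-row overhead. A sketch that works even on 8i, with made-up table and column names:
    -- update the rows that already exist in the permanent table
    UPDATE permanent_table1 p
       SET (col_a, col_b) = (SELECT t.col_a, t.col_b
                               FROM temp_table1 t
                              WHERE t.id = p.id)
     WHERE EXISTS (SELECT NULL FROM temp_table1 t WHERE t.id = p.id);
    -- insert the rows that do not exist yet
    INSERT INTO permanent_table1 (id, col_a, col_b)
      SELECT t.id, t.col_a, t.col_b
        FROM temp_table1 t
       WHERE NOT EXISTS (SELECT NULL FROM permanent_table1 p WHERE p.id = t.id);
    (From 9i onwards, a single MERGE statement does both steps in one pass.)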
    Thanks
    Ashwin N.

    Do this instead of ditching the PL/SQL.
    DECLARE
      TYPE NumTab  IS TABLE OF NUMBER(4) INDEX BY BINARY_INTEGER;
      TYPE NameTab IS TABLE OF CHAR(15)  INDEX BY BINARY_INTEGER;
      pnums  NumTab;
      pnames NameTab;
      t1 NUMBER;  -- plain NUMBER: get_time values can overflow NUMBER(5)
      t2 NUMBER;
      t3 NUMBER;
    BEGIN
      FOR j IN 1..5000 LOOP  -- load the index-by tables
        pnums(j)  := j;
        pnames(j) := 'Part No. ' || TO_CHAR(j);
      END LOOP;
      t1 := dbms_utility.get_time;
      FOR i IN 1..5000 LOOP  -- row-by-row FOR loop
        INSERT INTO parts VALUES (pnums(i), pnames(i));
      END LOOP;
      t2 := dbms_utility.get_time;
      FORALL i IN 1..5000    -- single bulk FORALL statement
        INSERT INTO parts VALUES (pnums(i), pnames(i));
      t3 := dbms_utility.get_time;
      dbms_output.put_line('Execution Time (secs)');
      dbms_output.put_line('---------------------');
      dbms_output.put_line('FOR loop: ' || TO_CHAR((t2 - t1) / 100));  -- get_time ticks are 1/100 s
      dbms_output.put_line('FORALL:   ' || TO_CHAR((t3 - t2) / 100));
    END;
    Try this link, http://download-west.oracle.com/docs/cd/B10501_01/appdev.920/a96624/05_colls.htm#23723

  • Performance problem in Zstick report...

    Hi Experts,
    I am facing a performance problem in a custom stock report in Materials Management.
    In this report I fetch all materials with their batches to get the desired output; at a time this report processes 36,000+ unique combinations of material and batch.
    The report takes around 30 minutes to execute, and it has to be viewed regularly every 2 hours.
    To read the batch characteristic values I am using FM '/SAPMP/CE1_BATCH_GET_DETAIL'.
    Is there any way to increase the performance of this report? The output of the report is an ALV.
    Can I have a refresh button in the report so that the data is refreshed automatically without executing it again, or is there any cache memory concept?
    Note: I have declared all the itabs with TYPE SORTED, and all the select queries fetch with key and index.
    Thanks
    Rohit Gharwar

    Hello,
    SE30 is old. Switch on a trace in ST12 while running this program and identify where exactly most of the time is being spent. If you see high CPU time, the problem is in the ABAP code; you can figure out the exact program/function module from the ST12 trace. If you see high database time in ST12, the problem is a database-related issue, so you basically have to analyze the SQL statements from the performance traces in ST12. This should resolve your issue.
    Yours Sincerely
    Dileep

  • SQL report performance problem

    I have a SQL classic report in Apex 4.0.2 and database 11.2.0.2.0 with a performance problem.
    The report is based on a PL/SQL function returning a query. The query is based on a view and pl/sql functions. The Apex parsing schema has select grant on the view only, not the underlying objects.
    The generated query runs in 1-2 seconds in sqlplus (logged in as the Apex parsing schema user), but takes many minutes in Apex. I have found, by monitoring the database sessions via TOAD, that the explain plans in the Apex and sqlplus sessions are very different.
    The summary:
    In sqlplus SELECT STATEMENT ALL_ROWS Cost: 3,695                                                                            
    In Apex SELECT STATEMENT ALL_ROWS Cost: 3,108,551                                                        
    What could be the cause of this?
    I found a blog and a Metalink note about different explain plans for different users. They suggested setting optimizer_secure_view_merging='FALSE', but that didn't help.
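    One way to pin down where the plans diverge (a sketch; it assumes access to the V$ views): capture the statement's child cursors from both the Apex and the sqlplus session and compare their plans and optimizer environments.
    -- locate the statement (the text filter is illustrative)
    SELECT sql_id, child_number, plan_hash_value
      FROM v$sql
     WHERE sql_text LIKE '%my_report_view%';
    -- display the plan of each child cursor
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'TYPICAL'));
    -- diff the optimizer parameters seen by each child
    SELECT child_number, name, value
      FROM v$sql_optimizer_env
     WHERE sql_id = '&sql_id'
     ORDER BY name, child_number;
    Two children with different plan_hash_value values and different v$sql_optimizer_env rows usually point straight at the setting that differs between the two environments.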

    Hmmm, it runs fast again in SQL Workshop. I didn't expect that, because both the application and SQL Workshop use SYS.DBMS_SYS_SQL to parse the query.
    Only the explain plan doesn't show anything.
    To add: I changed the report source to the query the pl/sql function would generate, so the selects are the same in SQL Workshop and in the application. Still in the application it's horribly slow.
    So, Apex does do something different in the application compared to SQL Workshop.

  • Performance problem with WPF Viewer CRVS2010

    Hi,
    We are using Crystal Reports 2010 and the new WPF Viewer. Last week when we set up a test machine to run our integration tests (several hundred) all report tests failed (about 30 tests) with a timeout exception.
    The testmachine setup:
    HP DL 580 G5
    VMware ESXi 4.0
    Guest OS: Windows 7 Enterprise 64-bit
    Memory (guest OS): 3GB
    CPU: 1
    Visual Studio 2010
    Crystal Reports for Visual Studio 2010 with 64 bit runtime installed
    Visual Studio 2008 installed
    Microsoft Office 2010 installed
    McAfee antivirus
    There are about 10 other virtual machines on the same HW.
    I think the performance problem is related to text objects on a report document viewed in the WPF Viewer. I made a simple WPF GUI with 2 buttons: the first button executes a very simple report that only has a text object with a few words in it, and the other button is also a simple report with only 1 text object with approx. 100 words (about 800 characters).
    The first report executes and displays almost instantly, and the second report executes instantly but displays after approx. 1 min 30 sec.
    And "execute" in this context means that all the VB.Net code runs without any exception or performance problem. The performance problem seems to come after viewer.Show() (in the code below) has executed.
    I did another test on the second report and replaced the text object with a formula field containing the same text, and this test executed and displayed the report instantly.
    So the performance problem seems to have something to do with the rendering of text objects in the WPF Viewer on a virtual machine with the above setup.
    I've made several tests on local machines with Windows XP (32-bit) or Windows 7 (64-bit) installed and none of them have this performance problem. It's not a critical issue for us because our users will run this application on their local PCs with Windows 7 64-bit, but it's a bit problematic for our project not being able to run all of our integration tests. I will probably solve this by using a local PC instead.
    Here is the VB.Net code I'm using to view the reports:
    Private Sub LightWeight_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs)
        Dim lightWeightReport As New CrystalDecisions.CrystalReports.Engine.ReportDocument
        lightWeightReport.Load(Environment.CurrentDirectory & "\LightWeight.rpt")
        ' Initialize the viewer
        Dim viewer As LF.LIV.PEAAT.Crystal.Views.ReportViewer = New LF.LIV.PEAAT.Crystal.Views.ReportViewer()
        viewer.Owner = Me
        viewer.reportViewer.ViewerCore.ReportSource = lightWeightReport
        viewer.Show()
    End Sub
    Private Sub LightWeightSlow_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs)
        Dim lightWeightReport As New CrystalDecisions.CrystalReports.Engine.ReportDocument
        lightWeightReport.Load(Environment.CurrentDirectory & "\LightWeightSlow.rpt")
        ' Initialize the viewer
        Dim viewer As LF.LIV.PEAAT.Crystal.Views.ReportViewer = New LF.LIV.PEAAT.Crystal.Views.ReportViewer()
        viewer.Owner = Me
        viewer.reportViewer.ViewerCore.ReportSource = lightWeightReport
        viewer.Show()
    End Sub
    The reports are 2 empty default reports, each with only 1 text object in the details section.
    // Thomas

    See if the KB [1448013 - Connecting to Oracle database. Error; Failed to load database information|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333433343338333033313333%7D.do] helps.
    Also the following may not hurt to have a look at (if only for ideas):
    [1217021 - Err Msg: "Unable to connect invalid log on parameters" using Oracle in VS .NET|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333233313337333033323331%7D.do]
    [1471508 - Logon error when connecting to Oracle database in a VS .NET application|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333433373331333533303338%7D.do]
    [1196712 - Error: "Failed to load the oci.dll" in ASP.NET application against an Oracle database|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333133393336333733313332%7D.do]
    Ludek
    Follow us on Twitter http://twitter.com/SAPCRNetSup
