Tuning questions

Hi all,
I want to ask about some basics of performance tuning, since I am just a beginner
and am still getting the concepts clear.
My question is: suppose that after collecting statistics on a particular table the cost is very high, and we find that the table has no index on one leading column.
If I go and create an index, what will the impact be? Will the cost decrease?
Somebody is saying that even though we create the index, the optimizer will not start using it until statistics are collected on that table again,
because the previous statistics were gathered when the index did not exist.
Is that right? Or will the optimizer start using the index as soon as it is created?
Please let me know.
Thanks in advance.

Once statistics are available, the optimizer will be able to evaluate the cost of using the index, as well as a number of other methods of getting the data.
If it thinks using the index will result in a lower cost, it will use the index. However, there are a lot of situations in which using the index will actually have a higher cost than other paths.
Note that very complex, or very poorly written, SQL may confuse the optimizer to the point that it does not even realize that an index is available, or that other costs are better or worse. And there are ways of making some costs totally hidden. (For example, function calls that include their own expensive SQL will probably not be visible to the optimizer.)
The exact interpretation, set of operations, calculation of cost, and so on depend on the exact version AND the patches you have applied.
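To make the point above concrete, here is a minimal sketch (the table ORDERS and column CUSTOMER_ID are hypothetical; exact behaviour varies by version):

    -- create the missing index on the leading filter column
    CREATE INDEX orders_cust_idx ON orders (customer_id);

    -- re-gather statistics so the optimizer can cost the new index
    -- (recent releases gather index statistics automatically at creation)
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname => USER,
        tabname => 'ORDERS',
        cascade => TRUE   -- include statistics on the table's indexes
      );
    END;
    /

With statistics in place, compare the plan cost before and after; the optimizer may still legitimately prefer a full scan if the index would not reduce the cost.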

Similar Messages

  • Tuning question --- please help

    Hi,
I am new to tuning SQL statements and I have a question. Whenever I execute my SQL in TOAD, I watch the explain plan. In most of my explain plans I come across a "full table scan", marked in red. Why does it happen, and what is the better solution for it? My colleagues say a full table scan shouldn't appear in the explain plan.
    Regards

Whenever I execute my SQL in TOAD, I watch the explain plan. In most of my explain plans I come across a "full table scan", marked in red. Why does it happen?
Because apparently the people who sell TOAD mistakenly believe that a full table scan is always bad, and so highlight them, to confuse new users and propagate the myth further.
    In sqlplus the full table scan is the same color as all other access paths, and rightly so.
    If you learn sqlplus you will learn faster, and be more productive than if you rely on user interfaces that hide the workings of the database and display a lot of confusing and misleading information.
    (See that time at the bottom left of the status bar? That shows you the time the query takes? Well no it doesn't)
What is the better solution for it, since my colleagues say a full table scan shouldn't appear in the explain plan?
Probably because they use TOAD too much and never learn anything.
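By way of illustration, a minimal sketch of getting a plan in SQL*Plus (the query is a placeholder; DBMS_XPLAN ships with 9.2 and later):

    -- explain the statement, then display the plan, all from SQL*Plus
    EXPLAIN PLAN FOR
      SELECT * FROM emp WHERE deptno = 10;

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

No colours, no red flags: just the plan, for you to read and cost for yourself.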

  • Please Help!!  Easy Application Tuning Question

    Ok - this should be an easy one for seasoned vets (but not for a newbie like me)
I've been asked by the boss-man to "find and tune the SQL that's causing the slowdown . . . " I'm on Oracle 8.1.7 running a Peoplesoft application. I've located the main problems with the application and have some solid ideas about how to improve the SQL. So, my question: how the heck do I get into the Peoplesoft (or any other application, for that matter) SQL code to change it? Everything I read says, " . . . and then you can go in and tune the SQL . . . " but never explains how exactly to 'get in'. I can't imagine there is a File-->Open-->SQL Code feature; it must be more complex than that. So what am I missing here, and how do I get to the place where I can change the application's SQL for tuning?
    Thanks in advance for any help you give.

Peoplesoft owns the application code, so you can't change it. However, you can examine the code and see if there are some indexes that you can create to improve the query. Also, you will need to rebuild your statistics. As a matter of maintenance, you should rebuild your indexes and statistics on a regular schedule, especially if your OPTIMIZER_MODE is CHOOSE and your data is highly volatile.
    Alternatively, you could ask Peoplesoft to provide you with a patch.
    Maybe someone else has another idea?
    LM
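As a starting point for finding the expensive statements without touching the application code, a minimal sketch (assuming you have access to the v$ views) that lists the top SQL by physical I/O:

    -- top 10 statements by disk reads; adjust the ORDER BY to taste
    SELECT *
    FROM  (SELECT sql_text, executions, disk_reads, buffer_gets
           FROM   v$sqlarea
           ORDER  BY disk_reads DESC)
    WHERE rownum <= 10;

Once you know which statements hurt, you can evaluate candidate indexes against them.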

  • Garbage collection tuning questions

    I apologize if these questions have been answered elsewhere.
    We are using java 1.4.2_08
    SunOS pswebp10 5.9 Generic_118558-06 sun4u sparc SUNW,Sun-Fire-V440
    2 CPUs
    weblogic 8.1 SP4
    Java options:
    -server
    -Xms2304m
    -Xmx2304m
    -XX:+UseAdaptiveSizePolicy
    -XX:PermSize=192m
    -XX:MaxPermSize=192m
    -Xloggc:/www/tmsproddomain/logs//gc.log
    -XX:+PrintGCDetails
    -XX:+PrintGCTimeStamps
    -XX:+DisableExplicitGC
    -XX:+PrintTenuringDistribution
    -XX:+UseParallelGC
    -Djava.awt.headless=true
The JVM spaces get created as:
Eden: 768M
S0: 85M
S1: 85M
Old: 1.5G
Perm: 192M
    The survivor spaces increase in size over time, so
    the S[01] sizes reported above are obviously larger than
    at JVM start.
*) We are using the UseParallelGC garbage collector. Our system has 2 CPUs and we were wondering if adding 2 more CPUs would improve GC times at all. If so, any ballpark on how much?
    *) We once attempted to use the ParNewGC collector as well
    as UseConcMarkSweepGC with
    a large allocation as in the above specs, and found that it
    made strange decisions on the sizes of the various memory
    spaces that led to very frequent GCs. This was with an older
    1.4.2 JVM though. It was strange enough that we chose to go
    with the Parallel collector. Is there information on how
    the ParNewGC collector sizes the memory spaces?
*) Is there any documentation on these two collectors, i.e. how objects are promoted from eden to old, etc.? The docs I have seen are all about the other, older collectors.
    *) Does the UseAdaptiveSizePolicy work with these 2 collectors?
    *) If the AggressiveHeap option is used, does this
    have any issues with either of these 2 collectors?
*) The -Xloggc option seems to return less information
than the -verbose:gc option - is this documented anywhere?
    Can the two options be used together to have all the output of
    verbosegc, but in the designated log file?
    Testing the ParNewGC and ConcSweep,
    I get:
    Eden Space: 7.8M
    S0: 64K
    S1: 64K
    Old: 1.45G
Perm: 192M
    And this collector seems to use the tenuring system,
    with 31 slots.
It seems that to use this collector, one might want to
speed up the migration of objects from eden to old
by tuning the tenuring?
    Any advice here would be greatly appreciated.

    (1) The basic stuff:
    http://java.sun.com/docs/hotspot/
    (2) The presentations from JavaOne conference might be of use:
    http://developers.sun.com/learning/javaoneonline/
    (3) Something I've stumbled across at some point:
    http://ua.sun.com/java/Performance_SRitter.pdf
I think that you should leave the sizing and ratios up to the GC; with 1.5 it seems to totally ignore your input, and there adaptive sizing is on by default (it might be for 1.4 as well).
Also, when using CMS, some GC work is done concurrently, so having more GC events in the output doesn't necessarily mean that your app is running slower; ParallelGC is meant to maximize throughput, while CMS is for when you need lower GC pause times and don't mind sharing the CPU with the GC threads.
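For reference, if you revisit CMS, an illustrative starting point (these sizes are assumptions to validate against your own gc.log, not recommendations) is to pin the young generation explicitly so the collector cannot make the surprising sizing decisions you saw:

    -server
    -Xms2304m -Xmx2304m
    -XX:+UseConcMarkSweepGC
    -XX:+UseParNewGC
    -XX:NewSize=768m
    -XX:MaxNewSize=768m
    -XX:SurvivorRatio=6
    -XX:MaxTenuringThreshold=8
    -XX:PermSize=192m
    -XX:MaxPermSize=192m

MaxTenuringThreshold caps how many collections an object survives in the survivor spaces before promotion, which is the "tuning the tenuring" you mention.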

  • Performance Tuning Question

    Greetings,
    I did a few searches for any topics related to this and
    haven't found anything relevant - if I'm missing something obvious
    I apologize.
    We are doing some performance tuning of a CFMX 7.0.2 system
    running on Solaris. Of the many things we're doing one is to run
    truss on the cfmx processes to find out what in fact it's doing.
    The following is an excerpt from one of the truss outputs:
stat64("/opt/coldfusionmx7/runtime/../lib/macromedia/jdbc/sqlserver/SQLServerURLParser.class\0",
0x254FB860, 0x254FB9BC) = -1 Err#2
stat64("/opt/coldfusionmx7/runtime/../gateway/lib/macromedia/jdbc/oracle/OracleURLParser.class\0",
0x254FB860, 0x254FB9BC) = -1 Err#2
Err#2 (ENOENT) means "no such file or directory", in essence.
    These lines show up quite a bit in the output - and we're
    curious as to why it's trying to find those particular classes at
    all and why in those locations - there are other entries where it
    looks like it's going through a series of paths that it knows about
    trying to find these entries. As we use Oracle as our database what
    could be the reason it's looking for SQLServer? Finally does anyone
    know of a way to stop the attempt to find these classes and save
    the system processing time to give us back those cycles for real
    work?
    Regards,
    Scott

    Please try:
Create View View3
As
SELECT a.Col1, a.Col2
FROM dbo.TableA a
WHERE NOT EXISTS (SELECT 1 FROM dbo.TableB b WITH (NOLOCK) WHERE a.Col1 = b.Col1)
UNION
SELECT Col1, Col2 FROM dbo.TableB
Also, please make sure that the INDEXes on Col1 on both tables are NOT fragmented and that your STATISTICS are up to date.
    Best Wishes, Arbi; Please vote if you find this posting was helpful or Mark it as answered.

  • Database performance tuning Question

    Please,
The top wait event we see all the time when tuning our Oracle database is the following:
enq: TX - row lock contention
What could be the best way to pinpoint the cause of this?
    I'm on Oracle 10g, windows server 2003.
    Thanks

    The first thing to do is check the underlying reason. If you query v$session_wait_history or v$lock when you spot a lock appearing, you can check if the request is for mode 4 or mode 6.
    Mode 6 means you are colliding on an attempt to update the data in a heap table. Mode 4 can mean many things - including a few "internal" problems, which is probably why the comment about initrans appeared - but often means you have a collision on indexes, (e.g. pending duplicates in unique indexes, conflicts in parent/child referential integrity) or colliding updates to the non-key part of IOTs.
    If you are allowed to use v$active_session_history, you can query that to find the sql_id of a session that was waiting for a lock and then be able to see the SQL that the waiting session was trying to operate - otherwise you may have to catch it waiting and check v$session for the sql_id.
    Once you have the SQL, you may have a better idea of why the lock was happening.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
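As a concrete sketch of the v$active_session_history approach described above (10g onwards, and only if you are licensed to query ASH):

    SELECT   sql_id, COUNT(*) AS samples
    FROM     v$active_session_history
    WHERE    event = 'enq: TX - row lock contention'
    GROUP BY sql_id
    ORDER BY samples DESC;

Each sampled second a session spent on the wait counts once, so the sql_id at the top is the statement most often caught waiting.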

  • Answers to these performance tuning questions 9i

What are the answers to the questions below?
    367. You have used the ALTER command to set the DB_CACHE_ADVICE parameter to READY. What is now happening in the instance?
    A. Memory has been allocated in the shared pool for cache advice buffers.
    B. CPU utilization will significantly increase because Oracle is collecting more detailed statistics about the buffer cache.
    C. The buffer cache hit ratio will now increase.
    D. The V$DB_CACHE_ADVICE view is now populated.
    184. Which two statements are true regarding the use of DB_CACHE_ADVICE init.ora parameter? (Choose two)
    A. Setting the parameter to READY reserves space in the buffer cache to store information about different buffer cache sizes, but no CPU overhead is incurred.
    B. Setting the parameter to READY reserves space in the shared pool to store information about different buffer cache sizes, but no CPU overhead is incurred.
    C. Setting the parameter to ON reserves space in the buffer cache to store information about different buffer cache sizes, and CPU overhead is incurred as statistics are collected.
    D. The V$DB_CACHE_ADVICE view contains information that predicts the estimated number of physical reads for different cache sizes for each buffer cache setup in the SGA.
185. Which three statements are true with respect to hash clusters? (Choose three)
    A. Full table scans are generally faster on cluster tables than on non-clustered tables.
B. Hash clusters may be desirable when the number of key values is predictable and key values are evenly distributed.
    C. If tables in a hash cluster require more space than the initial allocation for the cluster, performance degradation can be substantial because overflow blocks are required.
    D. Storing a single table in a hash cluster can be useful regardless of whether the table is joined frequently with other tables or not, provided other criteria for choosing a hash cluster are met.
    346. You have been seeing poor performance for inserts into a new table.
    You queried V$WAITSTAT and V$SYSTEM_EVENT and determined that there is free list
    contention in your database.
    Next, you identified the segments by joining the DBA_SEGMENTS table and V$SESSION_WAIT
    view to identify the file and block where waits are occurring.
    What is the next step you should take?
    A. If the file and block identify a data block within a table, consider increasing the number of free lists on the table using the ALTER TABLE command and specifying the FREELISTS clause.
    B. If the file and block identify a data block within a table, consider increasing the number of free lists on the table using the ALTER TABLESPACE command and specifying the FREELISTS clause in the default storage clause.
C. If the file and block identify a segment header for a table, consider increasing the number of free lists on the table using the ALTER TABLE command and specifying the FREELISTS keyword in the storage clause.
    D. If the file and block identify a segment header for a table, consider increasing the number of free lists on the table using the ALTER TABLESPACE command and specifying the FREELIST clause in the default storage clause.
    235. To provide more free lists for a number of your database segments, what is one of your options?
    A. Modify them with the INSERT_ _FREELIST command.
    B. Drop and re-create them with the required FREELIST value.
    C. Change the default storage parameter of the tablespace(s) where they are stored.
    D. Modify the FREELIST_LIMIT parameter in your installation file and restart the instance.
337. How should the OPTIMIZER_MODE be set to minimize total response time?
    A. RULE
    B. ALL_ROWS
    C. FIRST_ROWS_n
    D. FIRST_ROWS
347. You want to enhance performance of the database optimizer to minimize total response time, thereby increasing the overall throughput of a batch process.
    Which choice identifies the best optimizer mode setting for meeting this requirement?
    A. RULE
    B. ALL_ROWS
    C. FIRST_ROWS
    D. FIRST_ROWS_n
    Message was edited by:
    Akshay

    Let's turn this around ...
    what do you think are the correct answers? and what is your logic?
    All the answers are in the Concepts manual or the DB Administrator's Guide at http://docs.oracle.com ... use http://tahiti.oracle.com and you can search the docs very easily.
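For the DB_CACHE_ADVICE questions specifically, you can check the behaviour yourself: set the parameter to ON and query the advisory view once the workload has run for a while. A minimal sketch:

    SELECT size_for_estimate,
           buffers_for_estimate,
           estd_physical_read_factor,
           estd_physical_reads
    FROM   v$db_cache_advice
    WHERE  name = 'DEFAULT'
    ORDER  BY size_for_estimate;

estd_physical_read_factor shows, for each candidate cache size, the predicted physical reads relative to the current size, which is exactly what several of the answer options describe.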

  • Project Tuning Questions

I would like to create a new project and set the project tuning to a tuning that I heard about recently, so I can experiment with it and try to create compositions using these frequencies. - I'm just a piddler and I am not trying to break any boundaries or anything like that; it is more a curiosity. - I read an article recently and saw a couple of YouTube videos about something called "Sacred Geometry" and I would like to see if there is something to these claims. - Can anyone help me by instructing me in how I can go about setting the Project Tuning settings to these frequencies, so that my virtual instruments will play them? -
Here is what is said about these frequencies. - You can find a little information about this by searching 528Hz/Sacred Solfeggio online. - I don't know much about the history of this stuff and cannot vouch for its validity. -
    74 Hz 285 Hz 396 Hz 417 Hz 528 Hz 639 Hz 741 Hz 852 Hz 963 Hz
    UT - 396 Hz -associated with releasing emotional patterns after: see RE-417Hz below.
    RE - 417 Hz -associated with breaking up crystallized emotional patterns
    MI - 528 Hz - relates to crown chakra; Dr. Puleo suggests an association with "DNA integrity" Transformation and Miracles
    FA - 639 Hz - associated with whole brain quadrant interconnectedness. Connecting Relationships
    SOL - 741 Hz - associated with intuitive states, non linear knowing. Awakening Intuition
    LA - 852 Hz - associated with a pure love frequency: unconditional love and returning to Spiritual Order
    P.S. I am not a religious nut or anything. - But the claims do intrigue me. Thank you

    +"P.S. I am not a religious nut or anything. - But the claims do intrigue me. Thank you "+
    — Whew! For a moment you had me worried. But scientific experimentation is never wrong. You can in fact tune your instruments to a scale which you can define — but you'll have to do a bit of maths. Ask again if you need help with that. Here's the gen:
    p948, Chapter 40:
    _Alternate Tuning Scales_
    Activation of alternate tuning scales is achieved by clicking the appropriate radio button in the Software Instrument Scale section. The selected tuning scale is saved with the project when saved, and reloaded the next time the project is opened.
    User
Each semitone can be detuned (moved away) from equal temperament in steps. To do so, just click and drag vertically on each semitone box until the desired value is reached.
Alternatively, you can double-click in each semitone box and type in a value. Press Enter or click on another box to exit the text entry mode.

  • Tuning question. Self join vs Analytical function

Hi all,
I am a bit confused about this query's cost.
First, here is the original query (before my rewrite):
    SELECT /*+ parallel (d 8) parallel(h 8) parallel(c 8) */
         DISTINCT
    d.customer_node_id AS root_customer_node_id,
    d.customer_node_id AS customer_node_id,
    nvl(h.account_id,c.account_id) AS account_id,
    nvl(h.account_name,c.account_name) AS account_name,
    d.service_id AS service_id,
    nvl((SELECT /*+ parallel(x 8) */ max(x.service_name) FROM delta_service_history x
    WHERE x.service_id=d.service_id AND v_upperbound_upd_dt BETWEEN x.effective_start_date AND x.effective_end_date GROUP BY x.service_id),d.service_name) AS service_name
    FROM
    delta_service_history d,
    delta_account c,
    stg_hierarchy h
    WHERE
    d.customer_node_id=c.customer_node_id(+) AND
    d.customer_node_id=h.customer_node_id(+)
......and below is the new one (I decided to use an analytic function to calculate max(service_name) for each service_id, instead of the self join on "delta_service_history").
I thought that the self join was very heavy....
Anyway, my two questions are:
1. Why is the second one heavier than the first? I reduced the number of joins.....
2. How can the first query be rewritten? In particular, I don't like that self join..... :)
Select Distinct
       root_customer_node_id,
       customer_node_id,
       account_id,
       account_name,
       service_id,
       service_name
From (
       SELECT /*+ parallel (d 8) parallel(h 8) parallel(c 8) */
              d.customer_node_id AS root_customer_node_id,
              d.customer_node_id AS customer_node_id,
              nvl(h.account_id,c.account_id) AS account_id,
              nvl(h.account_name,c.account_name) AS account_name,
              d.service_id AS service_id,
              d.service_name,
              row_number() over (partition by d.service_id order by d.service_name desc) r1
       FROM
              delta_service_history d,
              delta_account c,
              stg_hierarchy_new h
       WHERE
              d.customer_node_id=c.customer_node_id(+) AND
              d.customer_node_id=h.customer_node_id(+) AND
              v_upperbound_upd_dt BETWEEN d.effective_start_date AND d.effective_end_date
     ) a
Where a.r1 = 1
    Thank you all.

I post the query plans.
First query (the original):
    Plan
    MERGE STATEMENT ALL_ROWSCost: 2.691.669 Bytes: 784.141.119.324 Cardinality: 1.754.230.692                                                                            
         27 MERGE STGADMIN.STG_HIERARCHY                                                                       
              26 PX COORDINATOR                                                                  
                   25 PX SEND QC (RANDOM) PARALLEL_TO_SERIAL SYS.:TQ10005 :Q1005Cost: 2.691.669 Bytes: 475.396.517.532 Cardinality: 1.754.230.692                                                             
                        24 VIEW PARALLEL_COMBINED_WITH_PARENT STGADMIN. :Q1005                                                       
                             23 HASH JOIN RIGHT OUTER BUFFERED PARALLEL_COMBINED_WITH_PARENT :Q1005Cost: 2.691.669 Bytes: 475.396.517.532 Cardinality: 1.754.230.692                                                   
                                  4 BUFFER SORT PARALLEL_COMBINED_WITH_CHILD :Q1005                                             
                                       3 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1005Cost: 20,828 Bytes: 860.278.720 Cardinality: 15.362.120                                         
                                            2 PX SEND HASH PARALLEL_FROM_SERIAL SYS.:TQ10000 Cost: 20,828 Bytes: 860.278.720 Cardinality: 15.362.120                                    
                                                 1 TABLE ACCESS FULL TABLE STGADMIN.STG_HIERARCHY Cost: 20,828 Bytes: 860.278.720 Cardinality: 15.362.120                               
                                  22 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1005Cost: 2.669.426 Bytes: 376.698.378.630 Cardinality: 1.752.085.482                                              
                                       21 PX SEND HASH PARALLEL_TO_PARALLEL SYS.:TQ10004 :Q1004Cost: 2.669.426 Bytes: 376.698.378.630 Cardinality: 1.752.085.482                                         
                                            20 VIEW PARALLEL_COMBINED_WITH_PARENT STGADMIN. :Q1004Cost: 2.669.426 Bytes: 376.698.378.630 Cardinality: 1.752.085.482                                    
                                                 19 SORT UNIQUE PARALLEL_COMBINED_WITH_PARENT :Q1004Cost: 2.669.426 Bytes: 127.902.240.186 Cardinality: 1.752.085.482                               
                                                      18 HASH JOIN OUTER PARALLEL_COMBINED_WITH_PARENT :Q1004Cost: 35,386 Bytes: 127.902.240.186 Cardinality: 1.752.085.482                          
                                                           13 HASH JOIN OUTER PARALLEL_COMBINED_WITH_PARENT :Q1004Cost: 4,86 Bytes: 647.395.154 Cardinality: 13.212.146                     
                                                                8 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1004Cost: 1,961 Bytes: 158.611.600 Cardinality: 6.344.464                
                                                                     7 PX SEND HASH PARALLEL_TO_PARALLEL SYS.:TQ10001 :Q1001Cost: 1,961 Bytes: 158.611.600 Cardinality: 6.344.464           
                                                                          6 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1001Cost: 1,961 Bytes: 158.611.600 Cardinality: 6.344.464      
                                                                               5 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT STGADMIN.DELTA_SERVICE_HISTORY :Q1001Cost: 1,961 Bytes: 158.611.600 Cardinality: 6.344.464
                                                                12 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1004Cost: 2,879 Bytes: 368.690.880 Cardinality: 15.362.120                
                                                                     11 PX SEND HASH PARALLEL_TO_PARALLEL SYS.:TQ10002 :Q1002Cost: 2,879 Bytes: 368.690.880 Cardinality: 15.362.120           
                                                                          10 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1002Cost: 2,879 Bytes: 368.690.880 Cardinality: 15.362.120      
                                                                               9 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT STGADMIN.STG_HIERARCHY :Q1002Cost: 2,879 Bytes: 368.690.880 Cardinality: 15.362.120
                                                           17 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1004Cost: 28,982 Bytes: 5.838.854.928 Cardinality: 243.285.622                     
                                                                16 PX SEND HASH PARALLEL_TO_PARALLEL SYS.:TQ10003 :Q1003Cost: 28,982 Bytes: 5.838.854.928 Cardinality: 243.285.622                
                                                                     15 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1003Cost: 28,982 Bytes: 5.838.854.928 Cardinality: 243.285.622           
                                                                          14 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT STGADMIN.DELTA_ACCOUNT :Q1003Cost: 28,982 Bytes: 5.838.854.928 Cardinality: 243.285.622      
...and the second query:
    Plan
    MERGE STATEMENT ALL_ROWSCost: 3.521.711 Bytes: 291.687.979.305 Cardinality: 652.545.815                                                                                                     
         32 MERGE STGADMIN.STG_HIERARCHY                                                                                                
              31 PX COORDINATOR                                                                                           
                   30 PX SEND QC (RANDOM) PARALLEL_TO_SERIAL SYS.:TQ10006 :Q1006Cost: 3.521.711 Bytes: 176.839.915.865 Cardinality: 652.545.815                                                                                      
                        29 VIEW PARALLEL_COMBINED_WITH_PARENT STGADMIN. :Q1006                                                                                
                             28 HASH JOIN RIGHT OUTER BUFFERED PARALLEL_COMBINED_WITH_PARENT :Q1006Cost: 3.521.711 Bytes: 176.839.915.865 Cardinality: 652.545.815                                                                            
                                  4 BUFFER SORT PARALLEL_COMBINED_WITH_CHILD :Q1006                                                                      
                                       3 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1006Cost: 20,828 Bytes: 860.278.720 Cardinality: 15.362.120                                                                  
                                            2 PX SEND HASH PARALLEL_FROM_SERIAL SYS.:TQ10000 Cost: 20,828 Bytes: 860.278.720 Cardinality: 15.362.120                                                             
                                                 1 TABLE ACCESS FULL TABLE STGADMIN.STG_HIERARCHY Cost: 20,828 Bytes: 860.278.720 Cardinality: 15.362.120                                                        
                                  27 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1006Cost: 3.500.345 Bytes: 140.125.783.665 Cardinality: 651.747.831                                                                       
                                       26 PX SEND HASH PARALLEL_TO_PARALLEL SYS.:TQ10005 :Q1005Cost: 3.500.345 Bytes: 140.125.783.665 Cardinality: 651.747.831                                                                  
                                            25 VIEW PARALLEL_COMBINED_WITH_PARENT STGADMIN. :Q1005Cost: 3.500.345 Bytes: 140.125.783.665 Cardinality: 651.747.831                                                             
                                                 24 SORT UNIQUE PARALLEL_COMBINED_WITH_PARENT :Q1005Cost: 3.500.345 Bytes: 121.225.096.566 Cardinality: 651.747.831                                                        
                                                      23 VIEW PARALLEL_COMBINED_WITH_PARENT STGADMIN. :Q1005Cost: 1.195.554 Bytes: 121.225.096.566 Cardinality: 651.747.831                                                   
                                                           22 WINDOW SORT PUSHED RANK PARALLEL_COMBINED_WITH_PARENT :Q1005Cost: 1.195.554 Bytes: 58.005.556.959 Cardinality: 651.747.831                                              
                                                                21 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1005Cost: 1.195.554 Bytes: 58.005.556.959 Cardinality: 651.747.831                                         
                                                                     20 PX SEND HASH PARALLEL_TO_PARALLEL SYS.:TQ10004 :Q1004Cost: 1.195.554 Bytes: 58.005.556.959 Cardinality: 651.747.831                                    
                                                                          19 WINDOW CHILD PUSHED RANK PARALLEL_COMBINED_WITH_PARENT :Q1004Cost: 1.195.554 Bytes: 58.005.556.959 Cardinality: 651.747.831                               
                                                                               18 HASH JOIN OUTER PARALLEL_COMBINED_WITH_PARENT :Q1004Cost: 34,402 Bytes: 58.005.556.959 Cardinality: 651.747.831                          
                                                                                    13 HASH JOIN OUTER PARALLEL_COMBINED_WITH_PARENT :Q1004Cost: 4,859 Bytes: 319.455.955 Cardinality: 4.914.707                     
                                                                                         8 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1004Cost: 1,963 Bytes: 152.576.580 Cardinality: 3.721.380                
                                                                                              7 PX SEND HASH PARALLEL_TO_PARALLEL SYS.:TQ10001 :Q1001Cost: 1,963 Bytes: 152.576.580 Cardinality: 3.721.380           
                                                                                                   6 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1001Cost: 1,963 Bytes: 152.576.580 Cardinality: 3.721.380      
                                                                                                        5 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT STGADMIN.DELTA_SERVICE_HISTORY :Q1001Cost: 1,963 Bytes: 152.576.580 Cardinality: 3.721.380
                                                                                         12 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1004Cost: 2,879 Bytes: 368.690.880 Cardinality: 15.362.120                
                                                                                              11 PX SEND HASH PARALLEL_TO_PARALLEL SYS.:TQ10002 :Q1002Cost: 2,879 Bytes: 368.690.880 Cardinality: 15.362.120           
                                                                                                   10 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1002Cost: 2,879 Bytes: 368.690.880 Cardinality: 15.362.120      
                                                                                                        9 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT STGADMIN.STG_HIERARCHY :Q1002Cost: 2,879 Bytes: 368.690.880 Cardinality: 15.362.120
                                                                                    17 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1004Cost: 28,982 Bytes: 5.838.854.928 Cardinality: 243.285.622                     
                                                                                         16 PX SEND HASH PARALLEL_TO_PARALLEL SYS.:TQ10003 :Q1003Cost: 28,982 Bytes: 5.838.854.928 Cardinality: 243.285.622                
                                                                                              15 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1003Cost: 28,982 Bytes: 5.838.854.928 Cardinality: 243.285.622           
                                                                                                   14 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT STGADMIN.DELTA_ACCOUNT :Q1003Cost: 28,982 Bytes: 5.838.854.928 Cardinality: 243.285.622
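If it helps, one more direction to experiment with (an untested sketch, to be verified against your data): since only the maximum service_name per service_id is needed, an analytic MAX expresses that directly, avoiding the scalar-subquery self join of the original without the rank-and-filter step of the rewrite. For the delta_service_history side of the query, with the outer joins left as in your rewrite:

    SELECT /*+ parallel(d 8) */
           d.customer_node_id,
           d.service_id,
           MAX(d.service_name) OVER (PARTITION BY d.service_id) AS service_name
    FROM   delta_service_history d
    WHERE  v_upperbound_upd_dt BETWEEN d.effective_start_date
                                   AND d.effective_end_date;

Check the results carefully: the ROW_NUMBER rewrite keeps one row per service_id, while this version (like the original) keeps them all, so the two are not interchangeable.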

  • Tuning Question

    Hi,
Can someone guide me on how we can tune the query below:
    select glinv.segment1 "legal",
    glinv.segment2 "business_unit",
    glinv.segment3 "site",
    glinv.segment4 "department",
    glinv.segment5 "account",
    glinv.segment6 "product",
    glinv.segment7 "client",
    ind.accounting_date,
    ind.amount "invoice_distribution_amount",
    inv.source "invoice_source",
    ab.batch_name,
    inv.invoice_date,
    inv.creation_date,
    inv.invoice_num,
    pov.vendor_name,
    ind.base_amount "invoice_dist_functional_amount",
    ind.exchange_rate_variance "curr_exchamge_rate_variance",
    ppa.segment1 project_number,
    hou.NAME project_org_name
    from ap_invoices inv,
    ap_invoice_distributions ind,
    gl.gl_code_combinations glinv,
    po.po_vendors pov,
    apps.po_vendor_sites pvs,
    apps.ap_batches ab,
    pa_projects ppa,
    hr_organization_units hou
    where inv.invoice_id = ind.invoice_id
    and ind.dist_code_combination_id = glinv.code_combination_id
    and pov.vendor_id = inv.vendor_id
    and inv.vendor_site_id = pvs.vendor_site_id
    and pov.vendor_id = pvs.vendor_id
    and inv.batch_id = ab.batch_id
    and ind.project_id = ppa.project_id(+)
    and ind.expenditure_organization_id = hou.organization_id(+)
    and ind.accounting_date between to_date('03/31/2009','MM/DD/RRRR') and to_date('04/27/2009','MM/DD/RRRR')
    and glinv.segment1 between '4000' and '4999'
    The Explain Plan for this query is :
    Plan                                             
    SELECT STATEMENT CHOOSE Cost: 5,402 Bytes: 291 Cardinality: 1                                              
         26 NESTED LOOPS Cost: 5,402 Bytes: 291 Cardinality: 1                                         
              23 NESTED LOOPS OUTER Cost: 5,400 Bytes: 247 Cardinality: 1                                    
                   20 NESTED LOOPS OUTER Cost: 5,399 Bytes: 225 Cardinality: 1                               
                        13 NESTED LOOPS Cost: 5,398 Bytes: 162 Cardinality: 1                          
                             10 NESTED LOOPS Cost: 5,393 Bytes: 126 Cardinality: 1                     
                                  7 NESTED LOOPS Cost: 5,392 Bytes: 102 Cardinality: 1                
                                       4 NESTED LOOPS Cost: 1,407 Bytes: 31,083 Cardinality: 797           
                                            1 TABLE ACCESS FULL PO.PO_VENDOR_SITES_ALL Cost: 610 Bytes: 11,158 Cardinality: 797      
                                            3 TABLE ACCESS BY INDEX ROWID PO.PO_VENDORS Cost: 1 Bytes: 25 Cardinality: 1      
                                                 2 INDEX UNIQUE SCAN UNIQUE PO.PO_VENDORS_U1 Cardinality: 1
                                       6 TABLE ACCESS BY INDEX ROWID AP.AP_INVOICES_ALL Cost: 5 Bytes: 63 Cardinality: 1           
                                            5 INDEX RANGE SCAN UNIQUE AP.AP_INVOICES_U2 Cost: 3 Cardinality: 2      
                                  9 TABLE ACCESS BY INDEX ROWID AP.AP_BATCHES_ALL Cost: 1 Bytes: 24 Cardinality: 1                
                                       8 INDEX UNIQUE SCAN UNIQUE AP.AP_BATCHES_U1 Cardinality: 1           
                             12 TABLE ACCESS BY INDEX ROWID AP.AP_INVOICE_DISTRIBUTIONS_ALL Cost: 5 Bytes: 36 Cardinality: 1                     
                                  11 INDEX RANGE SCAN UNIQUE AP.AP_INVOICE_DISTRIBUTIONS_U1 Cost: 2 Cardinality: 22                
                        19 VIEW PUSHED PREDICATE APPS.HR_ORGANIZATION_UNITS Cost: 1 Bytes: 63 Cardinality: 1                          
                             18 NESTED LOOPS Cost: 3 Bytes: 73 Cardinality: 1                     
                                  15 TABLE ACCESS BY INDEX ROWID HR.HR_ALL_ORGANIZATION_UNITS Cost: 2 Bytes: 7 Cardinality: 1                
                                       14 INDEX UNIQUE SCAN UNIQUE HR.HR_ORGANIZATION_UNITS_PK Cost: 1 Cardinality: 1           
                                  17 TABLE ACCESS BY INDEX ROWID HR.HR_ALL_ORGANIZATION_UNITS_TL Cost: 1 Bytes: 66 Cardinality: 1                
                                       16 INDEX UNIQUE SCAN UNIQUE HR.HR_ALL_ORGANIZATION_UNTS_TL_PK Cardinality: 1           
                   22 TABLE ACCESS BY INDEX ROWID PA.PA_PROJECTS_ALL Cost: 1 Bytes: 22 Cardinality: 1                               
                        21 INDEX UNIQUE SCAN UNIQUE PA.PA_PROJECTS_U1 Cardinality: 1                          
              25 TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS Cost: 2 Bytes: 44 Cardinality: 1                                    
                   24 INDEX UNIQUE SCAN UNIQUE GL.GL_CODE_COMBINATIONS_U1 Cost: 1 Cardinality: 1      
As you can see from the above plan, there is a full table scan on the PO_VENDOR_SITES_ALL table. This query runs for 19 minutes. It would be great if you could give some suggestions on how we can tune it.
    Thanks
    Sandeep

    John,
Thanks for your inputs. I understand that I did not provide much of the information. Please find below the information you requested:
The following SQL statement has been identified as performing poorly. It currently takes up to 21 minutes to execute, but it's supposed to take a few seconds, or minutes at most.
    This is the statement:
select glinv.segment1 "legal",
       glinv.segment2 "business_unit",
       glinv.segment3 "site",
       glinv.segment4 "department",
       glinv.segment5 "account",
       glinv.segment6 "product",
       glinv.segment7 "client",
       ind.accounting_date,
       ind.amount "invoice_distribution_amount",
       inv.source "invoice_source",
       ab.batch_name,
       inv.invoice_date,
       inv.creation_date,
       inv.invoice_num,
       pov.vendor_name,
       ind.base_amount "invoice_dist_functional_amount",
       ind.exchange_rate_variance "curr_exchamge_rate_variance",
       ppa.segment1 project_number,
       hou.NAME project_org_name
from   ap_invoices inv,
       ap_invoice_distributions ind,
       apps.ap_batches ab,
       pa_projects ppa,
       hr_all_organization_units_tl hou,
       po.po_vendors pov,
       gl.gl_code_combinations glinv
where  inv.invoice_id = ind.invoice_id
and    ind.project_id = ppa.project_id(+)
and    inv.vendor_id = pov.vendor_id
and    ind.dist_code_combination_id = glinv.code_combination_id
and    inv.batch_id = ab.batch_id
and    ind.expenditure_organization_id = hou.organization_id(+)
and    ind.accounting_date between to_date('03/31/2009','MM/DD/RRRR') and to_date('04/27/2009','MM/DD/RRRR')
-- and ind.accounting_date between '31-MAR-2009' and '27-APR-2009'
and    glinv.segment1 between to_number('4000') and to_number('4999');
    The version of the database is 9.2.0.7.0.
    We are using E-Business Suite 11i(11.5.10) Version of Oracle applications and all the tables involved in the above query are Oracle applications standard Base Tables.
I am also including the log file which your script generates:
    SQL>
    SQL> show parameter user_dump_dest
    unknown SHOW option "parameter"
    USER is "APPS"
    unknown SHOW option "_dump_dest"
    SQL>
    SQL> show parameter optimizer
    unknown SHOW option "parameter"
    unknown SHOW option "optimizer"
    SQL>
    SQL> show parameter db_file_multi
    unknown SHOW option "parameter"
    unknown SHOW option beginning "db_file_mu..."
    SQL>
    SQL> show parameter db_block_size
    unknown SHOW option "parameter"
    unknown SHOW option beginning "db_block_s..."
    SQL>
    SQL> column sname format a20
    SQL> column pname format a20
    SQL> column pval2 format a20
    SQL>
    SQL> select
    2 sname
    3 , pname
    4 , pval1
    5 , pval2
    6 from
    7 sys.aux_stats$;
    no rows selected
    real: 109
    SQL>
    SQL> explain plan for
    2 -- put your statement here
    3 select glinv.segment1 "legal",
    4      glinv.segment2 "business_unit",
    5      glinv.segment3 "site",
    6      glinv.segment4 "department",
    7      glinv.segment5 "account",
    8      glinv.segment6 "product",
    9      glinv.segment7 "client",
    10      ind.accounting_date,
    11      ind.amount "invoice_distribution_amount",
    12      inv.source "invoice_source",
    13      ab.batch_name,
    14      inv.invoice_date,
    15      inv.creation_date,
    16      inv.invoice_num,
    17      pov.vendor_name,
    18      ind.base_amount "invoice_dist_functional_amount",
    19      ind.exchange_rate_variance "curr_exchamge_rate_variance",
    20      ppa.segment1 project_number,
    21      hou.NAME project_org_name
    22 from ap_invoices          inv,
    23      ap_invoice_distributions ind,
    24      apps.ap_batches     ab,
    25      pa_projects               ppa,
    26      hr_all_organization_units_tl     hou,
    27      po.po_vendors           pov,
    28      gl.gl_code_combinations     glinv
    29 where inv.invoice_id     = ind.invoice_id
    30      and ind.project_id = ppa.project_id(+)
    31      and inv.vendor_id = pov.vendor_id
    32      and ind.dist_code_combination_id =     glinv.code_combination_id
    33      and inv.batch_id = ab.batch_id
    34      and ind.expenditure_organization_id = hou.organization_id(+)
    35      and ind.accounting_date between to_date('03/31/2009','MM/DD/RRRR') and to_date('04/27/2009','MM/DD/RRRR')
    36      -- and ind.accounting_date between '31-MAR-2009' and '27-APR-2009'
    37      and glinv.segment1     between to_number('4000') and to_number('4999');
    Explained.
    real: 188
    SQL>
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | 70 | 19040 | 23927 |
    | 1 | NESTED LOOPS | | 70 | 19040 | 23927 |
    | 2 | NESTED LOOPS OUTER | | 70 | 17290 | 23857 |
    | 3 | NESTED LOOPS | | 70 | 15750 | 23787 |
    | 4 | HASH JOIN OUTER | | 97 | 17557 | 23593 |
    | 5 | TABLE ACCESS BY INDEX ROWID | AP_INVOICE_DISTRIBUTIONS_ALL | 1 | 36 | 5 |
    | 6 | NESTED LOOPS | | 97 | 11446 | 23577 |
    | 7 | NESTED LOOPS | | 737 | 60434 | 19892 |
    | 8 | TABLE ACCESS FULL | AP_BATCHES_ALL | 228 | 5472 | 56 |
    | 9 | TABLE ACCESS BY INDEX ROWID| AP_INVOICES_ALL | 3 | 174 | 87 |
    | 10 | INDEX RANGE SCAN | AP_INVOICES_N1 | 309 | | 6 |
    | 11 | INDEX RANGE SCAN | AP_INVOICE_DISTRIBUTIONS_U1 | 22 | | 2 |
    | 12 | TABLE ACCESS FULL | HR_ALL_ORGANIZATION_UNITS_TL | 5766 | 354K| 15 |
    | 13 | TABLE ACCESS BY INDEX ROWID | GL_CODE_COMBINATIONS | 1 | 44 | 2 |
    | 14 | INDEX UNIQUE SCAN | GL_CODE_COMBINATIONS_U1 | 1 | | 1 |
    | 15 | TABLE ACCESS BY INDEX ROWID | PA_PROJECTS_ALL | 1 | 22 | 1 |
    | 16 | INDEX UNIQUE SCAN | PA_PROJECTS_U1 | 1 | | |
    | 17 | TABLE ACCESS BY INDEX ROWID | PO_VENDORS | 1 | 25 | 1 |
    | 18 | INDEX UNIQUE SCAN | PO_VENDORS_U1 | 1 | | |
Note: cpu costing is off, 'PLAN_TABLE' is old version
    26 rows selected.
    real: 562
    SQL>
    SQL> rollback;
    Rollback complete.
    real: 62
    SQL>
    SQL> rem Set the ARRAYSIZE according to your application
    SQL> set autotrace traceonly arraysize 100
    SQL>
    SQL> alter session set tracefile_identifier = 'mytrace1';
    Session altered.
    real: 47
    SQL>
    SQL> rem if you're using bind variables
    SQL> rem define them here
    SQL>
    SQL> rem variable b_var1 number
    SQL> rem variable b_var2 varchar2(20)
    SQL>
    SQL> rem and initialize them
    SQL>
    SQL> rem exec :b_var1 := 1
    SQL> rem exec :b_var2 := 'DIAG'
    SQL>
    SQL> alter session set events '10046 trace name context forever, level 8';
    Session altered.
    real: 47
    SQL>
    SQL> -- put your statement here
    SQL> select glinv.segment1 "legal",
    2      glinv.segment2 "business_unit",
    3      glinv.segment3 "site",
    4      glinv.segment4 "department",
    5      glinv.segment5 "account",
    6      glinv.segment6 "product",
    7      glinv.segment7 "client",
    8      ind.accounting_date,
    9      ind.amount "invoice_distribution_amount",
    10      inv.source "invoice_source",
    11      ab.batch_name,
    12      inv.invoice_date,
    13      inv.creation_date,
    14      inv.invoice_num,
    15      pov.vendor_name,
    16      ind.base_amount "invoice_dist_functional_amount",
    17      ind.exchange_rate_variance "curr_exchamge_rate_variance",
    18      ppa.segment1 project_number,
    19      hou.NAME project_org_name
    20 from ap_invoices          inv,
    21      ap_invoice_distributions ind,
    22      apps.ap_batches     ab,
    23      pa_projects               ppa,
    24      hr_all_organization_units_tl     hou,
    25      po.po_vendors           pov,
    26      gl.gl_code_combinations     glinv
    27 where inv.invoice_id     = ind.invoice_id
    28      and ind.project_id = ppa.project_id(+)
    29      and inv.vendor_id = pov.vendor_id
    30      and ind.dist_code_combination_id =     glinv.code_combination_id
    31      and inv.batch_id = ab.batch_id
    32      and ind.expenditure_organization_id = hou.organization_id(+)
    33      and ind.accounting_date between to_date('03/31/2009','MM/DD/RRRR') and to_date('04/27/2009','MM/DD/RRRR')
    34      -- and ind.accounting_date between '31-MAR-2009' and '27-APR-2009'
    35      and glinv.segment1     between to_number('4000') and to_number('4999');
    no rows selected
    real: 3968
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=23927 Card=70 Bytes=
    19040)
    1 0 NESTED LOOPS (Cost=23927 Card=70 Bytes=19040)
    2 1 NESTED LOOPS (OUTER) (Cost=23857 Card=70 Bytes=17290)
    3 2 NESTED LOOPS (Cost=23787 Card=70 Bytes=15750)
    4 3 HASH JOIN (OUTER) (Cost=23593 Card=97 Bytes=17557)
    5 4 TABLE ACCESS (BY INDEX ROWID) OF 'AP_INVOICE_DISTR
    IBUTIONS_ALL' (Cost=5 Card=1 Bytes=36)
    6 5 NESTED LOOPS (Cost=23577 Card=97 Bytes=11446)
    7 6 NESTED LOOPS (Cost=19892 Card=737 Bytes=60434)
    8 7 TABLE ACCESS (FULL) OF 'AP_BATCHES_ALL' (Cos
    t=56 Card=228 Bytes=5472)
    9 7 TABLE ACCESS (BY INDEX ROWID) OF 'AP_INVOICE
    S_ALL' (Cost=87 Card=3 Bytes=174)
    10 9 INDEX (RANGE SCAN) OF 'AP_INVOICES_N1' (NO
    N-UNIQUE) (Cost=6 Card=309)
    11 6 INDEX (RANGE SCAN) OF 'AP_INVOICE_DISTRIBUTION
    S_U1' (UNIQUE) (Cost=2 Card=22)
    12 4 TABLE ACCESS (FULL) OF 'HR_ALL_ORGANIZATION_UNITS_
    TL' (Cost=15 Card=5766 Bytes=363258)
    13 3 TABLE ACCESS (BY INDEX ROWID) OF 'GL_CODE_COMBINATIO
    NS' (Cost=2 Card=1 Bytes=44)
    14 13 INDEX (UNIQUE SCAN) OF 'GL_CODE_COMBINATIONS_U1' (
    UNIQUE) (Cost=1 Card=1)
    15 2 TABLE ACCESS (BY INDEX ROWID) OF 'PA_PROJECTS_ALL' (Co
    st=1 Card=1 Bytes=22)
    16 15 INDEX (UNIQUE SCAN) OF 'PA_PROJECTS_U1' (UNIQUE)
    17 1 TABLE ACCESS (BY INDEX ROWID) OF 'PO_VENDORS' (Cost=1 Ca
    rd=1 Bytes=25)
    18 17 INDEX (UNIQUE SCAN) OF 'PO_VENDORS_U1' (UNIQUE)
    Statistics
    35 recursive calls
    0 db block gets
    370 consistent gets
    351 physical reads
    0 redo size
    728 bytes sent via SQL*Net to client
    216 bytes received via SQL*Net from client
    1 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    0 rows processed
    SQL>
    SQL> disconnect
    SQL>
    SQL> spool off
    SQL>
    SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
    select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'))
    ERROR at line 1:
    ORA-00904: "DBMS_XPLAN"."DISPLAY_CURSOR": invalid identifier
    real: 79
    SQL>
    SQL> spool off
As you can see, dbms_xplan.display_cursor did not work. I also want to mention that creating indexes on Oracle Apps 11i base tables is not supported by Oracle; we cannot alter the table structure. Our query should be written in such a way that it uses the existing indexes.
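(Note: dbms_xplan.display_cursor only exists from 10g onwards, which is why it raises ORA-00904 on 9.2.0.7. The 10046 trace enabled above can still be formatted with tkprof; a minimal sketch, where the trace file name is a placeholder to be found under user_dump_dest, tagged 'mytrace1':

    tkprof <sid>_ora_<spid>_mytrace1.trc mytrace1.prf sys=no sort=prsela,exeela,fchela

The sorted output lists the statements by elapsed time, with row counts and wait events for each step, which usually shows exactly where the 21 minutes go.)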
    Any suggestions are greatly appreciated.
    Thanks
    Sandeep
    Edited by: user340 on May 12, 2009 4:35 PM

  • Performance tuning questions

I am a fresher in SAP ABAP, so I have some doubts.
How do I increase performance in a report? What are the minimum steps to follow?
I am expecting a clear answer.

    Hi Josh,
    Here is a detailed document for performance check:
    5.1     SQL 
     In order to enhance the performance of your ABAP code, a number of useful guidelines can be followed.
    •     Use SELECT SINGLE wherever possible to retrieve up to one row of information.  It is important to specify all the key fields to ensure a unique record.
•     Be careful using the FOR ALL ENTRIES addition, since it performs very badly on very large datasets (10,000+ records).
•     Joins and subqueries are good alternatives.
    •     Do not use SELECT * statement unless the program needs ALL columns from the table. Instead, only specify the fields you require. This will also avoid unnecessary network transports.  The addition INTO CORRESPONDING FIELDS of the INTO clause of the SELECT statement is worthwhile to use only for large amounts of data where the external table and destination fields have the same names. Consider the use of the DISTINCT option in the case of many duplicate entries.
         The following example compares selecting all fields to selecting only the document number, the item number and the material.
     Avoid:   select *
                from vbap
                where vbeln in s_docno.
              endselect.
     Use:     select vbeln posnr matnr
                into (wa_vbap-vbeln, wa_vbap-posnr, wa_vbap-matnr)
                from vbap
                where vbeln in s_docno.
              endselect.
    Important Points:
         The order of the fields retrieved must match the order of the destination fields in the field list.
    •     Use the SELECT...WHERE clause to restrict data rather than retrieve all rows and use a CHECK or IF statements to filter data.
     Avoid:   select vbeln posnr matnr
                into (wa_vbap-vbeln, wa_vbap-posnr, wa_vbap-matnr)
                from vbap.
                check s_docno.
              endselect.
     Use:     select vbeln posnr matnr
                into (wa_vbap-vbeln, wa_vbap-posnr, wa_vbap-matnr)
                from vbap
                where vbeln in s_docno.
              endselect.
    Important Points:
     Order the columns in the where clause of a select in the same order as the key or index of the table.
    •     WHERE Clause Tips
    o     Exploit the indexes of  the database tables for an efficient use of the WHERE clause. To do so check all index fields with the equality operator (EQ, =) and concatenate these checks by AND. The primary key of a database table makes up its primary index automatically. Secondary indexes for a database table can be created in the ABAP Dictionary.
    o     If possible, include all columns of the key or an index in the where clause. Use a default or appropriate value. If the column(s) is not included, the database may not be able to fully utilise the index.
    o     Avoid complex WHERE clauses. The system must split up those into single statements for the database system.
o     Do not use logical NOT in WHERE clauses; use inverted operators instead. Logical NOT is not supported by the database indexes.
    •     Try to avoid the select … endselect  programming construct. Rather select all the required records from the database directly into an internal table and loop at the table to process the entries.  This is usually faster than the select … endselect code, and also allows easier debugging of the code.
     Avoid:   select vbeln posnr matnr
                into (wa_vbap-vbeln, wa_vbap-posnr, wa_vbap-matnr)
                from vbap
                where vbeln in s_docno.
                write:/ wa_vbap-vbeln, wa_vbap-posnr, wa_vbap-matnr.
              endselect.
     Use:     select vbeln posnr matnr into table ts_vbap
                from vbap
                where vbeln in s_docno.
              loop at ts_vbap into wa_vbap.
                write:/ wa_vbap-vbeln, wa_vbap-posnr, wa_vbap-matnr.
              endloop.
•     Avoid nested select statements if possible, as they generally have poor performance. It is preferable to select all the entries for each table directly into an internal table and use nested internal table loops to process all the entries.
•     Check the Runtime Analysis Tips & Tricks (transaction SE30) for detailed performance comparisons of SELECT statements.
•     Use aggregate expressions in the SELECT clause to perform calculations instead of transporting large amounts of data and calculating afterwards. This distributes the processing load and minimises the network data transfer. Valid aggregate functions include MAX, MIN, AVG, SUM and COUNT (see the sketch after this list).
    •     The storage of database tables in local buffers can lead to significant time savings. Use the buffering of database tables whenever possible. Use the addition BYPASSING BUFFER only if it is really necessary.
    If DISTINCT, SINGLE FOR UPDATE, and aggregate expressions are used in the SELECT clause, buffering should be turned off.
    •     Provide the appropriate selection criteria to limit the number of data base reads. Force users to provide selection criteria by evaluating the selection criteria entered on the selection screen during the AT SELECTION-SCREEN event.
    •     Create indices where needed to enhance query performance. This should be used in large table lookups to increase efficiency. For example, SELECT…WHERE FieldA = ‘001’. In this case FieldA is not a key field, therefore an index should be created to improve the efficiency of the select statement.  Beware that there is always an additional processing system overhead for indices.  Therefore, only create indices if a major performance benefit will be realised, especially if the program concerned is executed many times throughout the day and is business critical.
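As referenced in the aggregate-expressions point above, a minimal sketch of pushing a calculation into the database (lv_cnt and lv_max are hypothetical variables; VBAP and the s_docno select-option follow the earlier examples):

     data: lv_cnt type i,
           lv_max like vbap-posnr.

     * let the database count the items and find the highest item
     * number, instead of transporting every row and computing in ABAP
     select count( * ) max( posnr )
       into (lv_cnt, lv_max)
       from vbap
       where vbeln in s_docno.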
    5.1.1     SQL Checklist
    •     Keep the selected dataset small
    •     Keep the transferred data small
    •     Keep the number of database accesses small
    •     Use database buffers
    •     Create views for table joins instead of using multiple selects.
    •     Select data only one time where possible (i.e., don’t have multiple selects against the same table - get the data one time and store it in an internal table).
    •     Remove unused indexes from tables.
    5.2     General Programming Techniques
         Detailed below are a number of additional programming techniques that should be borne in mind when implementing ABAP code.
•     When testing fields "equal to" something, one can use either the nested IF or the CASE statement. CASE is better for two reasons: it is easier to read, and it performs more efficiently.
    •     Do not use MOVE CORRESPONDING unless the data is contiguous.
•     When records a and b have the exact same structure, it is more efficient to MOVE a TO b than to MOVE-CORRESPONDING a TO b.
               MOVE BSEG TO *BSEG.  is better than
               MOVE-CORRESPONDING BSEG TO *BSEG.
    •     Do not use the COLLECT statement with large internal tables as this can be very CPU intensive.
•     When reading a single record in an internal table, READ TABLE WITH KEY is not a direct read on a sorted table. Therefore, SORT the table and use READ TABLE WITH KEY BINARY SEARCH (see the sketch after this list).
    •     Use the SORT...BY when sorting internal tables.
              SORT ITAB BY FLD1 FLD2.  is more efficient than
                   SORT ITAB.
    •     Avoid hard-coding and use of literals in ABAP code. Use reference tables to drive processing to support business change flexibility and reduce ongoing maintenance costs.  If hard-coding and literals are required, be sure to include these as constants.
    •     The Tips & Tricks function is very useful in comparing different methods of coding without going to the trouble of coding both and then performing your own run-time analysis. System > Utilities > Runtime Analysis > Tips & Tricks.
    5.3     Logical Databases
     Use logical databases and ‘GET’ events wherever reads on parent/child segments need to be performed, e.g. if you require data from both the MARA and MARD tables, use GET MARA then GET MARD. (Note that you do not need an LDB if data from only the MARA or the MARD table is required.)
     Where an LDB is used, provide defaults or checks for the standard selection-options/parameters wherever possible.
     In general, however, avoid logical databases where a direct SELECT will do - use SELECT statements instead. (Logical databases are good as a reference tool to look up database hierarchies.) A sketch of the GET-event pattern is shown below.
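     A minimal sketch of the GET-event pattern (the logical database is set in the program attributes; any LDB that delivers the MARA/MARD hierarchy is assumed here):
               NODES: mara, mard.

               GET mara.
                 " Processed once per material header.
                 WRITE: / mara-matnr.

               GET mard.
                 " Processed once per storage-location record of the
                 " current material.
                 WRITE: / mard-werks, mard-lgort.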
          •     Logical databases should only be used when there are no other options.
     •     Use of a logical database in an ABAP program must be approved by a DEVELOPMENT TEAM LEAD.
     •     Due to the strategic importance of LDBs and their potential impact on system performance, careful consideration should be made before creating new LDBs. Consequently, no LDBs should be created without approval from the Development Team Lead.
    5.4     Debugging
    When testing ABAP, use of the debugging tool plays an essential role in checking the value of variables during the execution of the program.  This tool should be used during the unit testing to ensure programs are executing as desired.
    You can use the debugging tool by selecting Program > Debugging from the ABAP program Development Initial screen.
    In addition to the static programming of breakpoints, ABAP’s on-line debugging tools also allow you to set breakpoints and interrupt conditions dynamically.  This makes the whole process of debugging reports much more flexible and the consequent advantage is that you do not have to change your code. Watchpoints can now be set based on the value a field takes (like R/2).
    Once you have stopped the report processing, you can view the contents of all the fields (up to 8), internal tables and database tables referenced in the report.  The system fields SY-TABIX and SY-DBCNT are now displayed at the bottom of the screen along with SY-SUBRC.
    Finally, you can change the contents of fields for debugging purposes and then resume report processing, with the changed data.  To set breakpoints select Breakpoints > Set from the ABAP: Editor screen.  Then execute the program.
     Beware that in order to debug SAPscript print programs, hard-coded breakpoints are often required. Be sure to remove these once testing is complete and the program has been transported. Use the syntax BREAK username rather than BREAK-POINT, as this ensures the code only stops when running under the specified user ID (see the sketch below).
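     For example (JBLOGGS is an illustrative user ID):
               " Stops only when user JBLOGGS runs the program:
               BREAK jbloggs.
               " Stops for every user - never leave this in transported code:
               " BREAK-POINT.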
    6     SAPscript Techniques
    6.1     Good Practice
    •     Always copy the SAP standard print programs where available and, in most instances, the layout set. Never start a complex SAPscript (e.g. Invoice, Purchase Order) from the beginning, as this will require far more development time to complete.
    •     When creating a new layout set by copying a SAP standard, always change the original language from D to E and then activate.
    6.2     Standards
     •     Naming convention for layout sets – this follows the same convention as the program name, except that the version number is prefixed with L. For example, a purchase order layout set would be:
    ZMM_DESC where
    Z          First character of the program
    MM          SAP R/3 module/component
    DESC          Meaningful description i.e. PO printing, INVOICE.
    •     When copying SAP standard print programs ensure they have a standard header block as defined earlier. Also ensure that any code that is added, removed or changed is commented in the standard fashion.
    6.3     Standard Texts
    •     These should be of the following format:
         ZXX_DESC
    Where:
    Z          First character of the program
         XX          Module name
         DESC          Meaningful description.
    6.4     Tips
    •     Text elements must be maintained individually for each layout set language. Any other changes to the layout set i.e. window size or paragraphs, will be copied from the original language to the other languages.
          •     As layout sets are client-dependent they must be ‘transported’ between clients on the same box. To do this use the ‘Copy from Client’ function from within the ‘target’ client. This is language specific.
     •     Always ACTIVATE the layout set after each change and in each language before transporting.
     •     The SAPscript debugger can be very useful; it is turned on from the initial screen of transaction SE71.
     •     Standard text used within a layout set must be assigned to a transport request using the program RSTXTRAN. Once assigned, the transport request can be released in the usual manner via SE10.
    7     Changing the SAP Standard
    •     You can adjust the R/3 System to meet your needs in the following ways:
    o     Customizing: This means setting up specific business processes and functions for your system according to an implementation guide. The need for these changes has already been foreseen by SAP and an implementation procedure has been developed.
    o     Personalization: This means making changes to certain fields' global display attributes (setting default values or fading fields out altogether), as well as creating user-specific menu sequences.
    o     Modifications: These are changes to SAP Repository objects made at the customer site. If SAP delivers a changed version of the object, the customer's system must be adjusted to reflect these changes. Prior to Release 4.0B these adjustments had to be made manually using upgrade utilities. From Release 4.5A, this procedure has been automated with the Modification Assistant.
    o     Enhancements: This means creating Repository objects for individual customers that refer to objects that already exist in the SAP Repository.
    o     Customer Developments: This means creating Repository objects unique to individual customers in a specific namespace reserved for new customer objects.
    •     If your requirements cannot be met by Customizing or personalization, you may either start a development project or try using a CSP solution (= Complementary Software Product).
     •     A development project falls into the customer development category if the SAP standard does not already contain functions similar to the one you are trying to develop. If, however, a similar SAP function exists, try to assimilate it into your development project by either enhancing or modifying it, by using a user exit, or simply by making a copy of the appropriate SAP program.
     •     Modifications can create problems: after an upgrade, the new versions of SAP objects must be adjusted to coincide with the modified versions you have created (see the note on the Modification Assistant above).
    •     Thus, you should only make modifications if:
    •     Customizing or personalizing cannot satisfy your requirements
    •     Enhancements or user exits are not planned
    •     It would not make sense to copy the SAP object to the customer namespace.
    7.1     Originals and copies
    •     An object is original in only one system.  In the case of objects delivered by SAP, the original system is at SAP itself. These objects are only copies in customer systems.  This applies to your development system and all other systems that come after it.
    •     If you write your own applications, the objects that you create are original in your development system.  You assign your developments to a change request, which has the type Development/Correction.
    This request ensures that the objects are transported from the development system into the subsequent systems
    7.2     Corrections and repairs
    •     Changes to an original are called corrections.  They are recorded in a change request whose tasks have the type "Development/correction". 
    •     If, on the other hand, you change a copy (an object outside its own original system), the change is recorded in a task with the type "Repair". Repairs to SAP objects are called modifications. 
     •     When you repair your own objects (for example, if something goes wrong in your production system), you can correct the original in your development system straight away. Whenever you change a copy, correct the original immediately so that the two do not diverge.
    •     However, you cannot do this with SAP objects, because they are not original in any of your systems.
    •     You should only modify the SAP standard if the modifications you want to make are absolutely necessary for optimizing workflow in your company. Be aware that good background knowledge of application structure and flow are important prerequisites for deciding what kind of modifications to make and how these modifications should be designed.
    7.3     Modifications and upgrades
    During an upgrade or an import of R/3 Support Packages, new objects delivered overwrite existing objects of the SAP standard. In order to help customers keep those objects that have been modified in a previous release, SAP now offers upgrade adjustment for all objects being upgraded in the form of transactions SPAU and SPDD. These transactions allow customers to enter their modifications into the corresponding new objects being delivered at upgrade. The Modification Assistant supports this process of adopting customer modifications. In general, objects altered using the Modification Assistant can now be automatically accepted into the upgraded system if the modifications undertaken in the original version do not directly overlap those made in the customer version. If collisions occur between the two versions at upgrade (naming collisions, or if SAP has deleted an object modified by a customer), the system offers semi-automatic adjustment support. In some cases, however, you may still have to manually adjust objects using ABAP Workbench tools.
    •     Whenever you upgrade your system, apply a support package, or import a transport request, conflicts can occur with modified objects.
    •     Conflicts occur when you have changed an SAP object and SAP has also delivered a new version of it.  The new object delivered by SAP becomes an active object in the Repository of your system.
    •     If you want to save your changes, you must perform a modification adjustment for the objects.  If you have a lot of modified SAP objects, your upgrade can be slowed down considerably.
    •     To ensure consistency between your development system and subsequent systems, you should only perform modification adjustments in your development system.  The objects from the adjustment can then be transported into other systems.
    7.4     Modifications Procedures
    •     A registered  developer must register changes to SAP objects. Exceptions to this registration are matchcodes, database indexes, buffer settings, customer objects, patches, and objects whose changes are based on automatic generation (for example, in Customizing). If the object is changed again at a later time, no new query is made for the registration key. Once an object is registered, the related key is stored locally and automatically copied for later changes, regardless of which registered developer is making the change. For the time being, these keys remain valid even after a release upgrade.
    •     How do you benefit from SSCR (SAP Software Change Registration)?
    o     Quick error resolution and high availability of modified systems
    All objects that have been changed are logged by SAP. Based on this information, SAP's First Level Customer Service can quickly locate and fix problems. This increases the availability of your R/3 system.
    o     Dependable operation
    Having to register your modifications helps prevent unintended modification. This in turn ensures that your R/3 software runs more reliably.
    o     Simplification of upgrades
    Upgrades and release upgrades become considerably easier due to the smaller number of modifications.
    7.5     Modifications Assistant
     The aim of the Modification Assistant is to make modification adjustments easier. This is possible because (among other reasons) modifications are registered in a separate layer.
    •     If you want to change an SAP object, you must provide the following information:
    o     SSCR key
    o     Change request
    •     The system informs you that the object is under the control of the Modification Assistant. Only restricted functions are available in the editor.
    •     You can switch the Modification Assistant on or off for the entire system by changing the R/3 profile parameter eu/controlled_modification. SAP recommends that you always work with the Modification Assistant.
    •     You can switch off the Modification Assistant for single Repository Objects.  Once you have done so, the system no longer uses the fine granularity of the Modification Assistant.
    •     In modification mode, you have access to a subset of the normal editor tools.  You can access these using the appropriate pushbuttons.  For example, in the ABAP Editor, you can:
    o     Insert
    The system generates a framework of comment lines between which you can enter your source code.
    o     Replace
    Position the cursor on a line and choose Replace.  The corresponding line is commented out, and another line appears in which you can enter coding.  If you want to replace several lines, mark them as a block first.
    o     Delete
    Select a line or a block and choose Delete.  The lines are commented out.
    o     Undo modifications
    This undoes all of the modifications you have made to this object.
    o     Display modification overview
    Choose this function to display an overview of all modifications belonging to this object.
    7.6     Restoring the original
    You can reset all of the modifications that you have made to the current object using the Modification Assistant by choosing this function.  The record of the modifications is also deleted.
    Remember that you cannot selectively undo modifications to an object.  You can only undo modifications based on the "all or nothing" principle.
          •     Any modification to a standard SAP object requires proper justification and must be documented. The right to modify standard SAP objects is restricted to the PROJECT MANAGER only.
    Cheers,
    ashish.

  • Some Tuning Questions

    hi
    I have two doubts.
     Say there are two tables:
     tab1 (contains 1200 rows, primary key pk_col1 on column col1) and tab2 (contains 12000 rows, primary key pk_col2 on column col2).
     Now in the where clause I write:
     tab1.col1 = tab2.col2
     Execution Plan
     ----------------------------------------------------------
     0        SELECT STATEMENT Optimizer=CHOOSE
     1    0     NESTED LOOPS
     2    1       TABLE ACCESS (FULL) OF ‘tab1’
     3    1       TABLE ACCESS (BY INDEX ROWID) OF ‘tab2’
     4    3         INDEX (UNIQUE SCAN) OF ‘pk_col2’ (UNIQUE)
     Now my doubt is: I read in a document that for this type of join we should try to do a full table scan on tab2, because in my case it is searching more than 40% of the rows of tab2, so doing it with an index scan will be time consuming.
     And for tab1 we should try to do an index scan, because it contains a small amount of data, so searching by index will be faster.
     But according to the execution plan the opposite is happening, so what should I do, and is the idea I read correct?
     Another doubt I have:
     say there is another table tab3 (contains 1000 rows, primary key pk_col3 on column col3)
     and adding to the previous where clause:
     tab1.col1 = tab2.col2
     and col3 = col2
     I read that though in the CBO the join order is not decisive, we should try to build the join order in such a way that the maximum data will be fetched by the first join condition (here tab1.col1 = tab2.col2) and then write the other joins.
     Is it true?
    my database version is 10g r2
    Thanks in advance
    Edited by: bp on Sep 11, 2008 9:28 PM

     I apologize if the following sounds too much like a teacher.
    You have to retrieve some information from several tables according to some conditions
     The moment the query starts, the tables are effectively frozen (read consistency) and there is a clear distinction between the rows (all the rows from all the tables mentioned in the query) fulfilling the query conditions and those that do not.
    In order to return a correct answer the query has to access all the rows for which all the conditions hold and it usually takes several steps to achieve that.
    These steps are usually planned in a way to have each next step dealing with the least possible number of rows (so trying to get rid of the maximum number of rows for which the combined query conditions return false)
    Taking your first example
    You have table A and table B where card(B) = 10 * card(A) -> card() standing for cardinality i.e. number of rows
    Your condition is pk(A) = pk(B) -> pk() standing for primary key
    The task is: for each row of one table find the corresponding row in the other table (as pk is unique and not null there can be just one or none corresponding row)
    If your driving table is table A you'll do card(A) seeks to table B
    If your driving table is table B you'll do card(B) seeks to table A i.e. 10 times the seeks of the above
     So I don't understand your concept of maximum/minimum data, as the data to be retrieved is constant (the data fulfilling the query conditions).
    Regards
    Etbin
    Edited by: Etbin on Sep 12, 2008 10:02 AM
    Forgot to refresh before posting (better leaving to native speakers the task of explaining details)

  • Performance tuning question

    Hi,
     I'm using Oracle 10g and Solaris.
     From the AWR report, the Top 5 wait events section shows:
     Event       Time(s)    % Total Call Time
     CPU time      4,477                 86.8
     Can anyone explain how to reduce it? Please help.
    Thanks

    user627729, did you read through the last two posts in the thread recommended by yingkuan:
    CPU Time Wait Event
    To Sybrand's point, something has to be the top wait. If you do not like the CPU time being the top wait, do something that causes the IO system to be the top wait. Subtle humor...
     Event       Time(s)    % Total Call Time
     CPU time      4,477                 86.8
     4,477 CPU seconds may be good, bad, or insignificant. 4,477 CPU seconds ~ 74.61 CPU minutes.
    As you did not indicate the scope (duration) of the time period, let's assume that the time period is 60 minutes.
    As you did not indicate the number of CPUs in the server, let's assume 32.
    74.61 / (60 * 32) = 0.0388 = 3.88% of the available CPU time consumed - you would probably want to look for the problem somewhere other than the CPU, if the server has 32 CPUs. If the server has 1 CPU, the formula changes a bit: 74.61 / (60 * 1) = 1.2435 = 124.35%, the server is CPU bottlenecked, sessions are waiting on the CPU.
    If the time interval is 24 hours, then the formula changes again. Proper scope is important.
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • Urgent Tuning Question....

    Each,
    I have the query shown below which is running slowly.
    journal_subscribers table has around 400,000 rows and a non-unique index on jsu_journal_code
    rate_types table has 40 rows and a unique index on rty_code
    payment_methods has 7 rows and a unique index on payment_method
    select     js.jsu_type,
         js.jsu_code,
         js.jsu_charge_type,
         rt.organisation,
         js.jsu_copies
    from     journal_subscribers js,
         rate_types rt,
         payment_methods pm
    where     js.jsu_journal_code = v_journal_code
    and      js.jsu_start_date      < p_start
     and      js.jsu_rate      = rt.rty_code(+)
    and          jsu_payment_code = pm.payment_code
    and pm.real_payment_code = '1';
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | 8 | 352 | 582 |
    | 1 | HASH JOIN OUTER | | 8 | 352 | 582 |
    | 2 | HASH JOIN | | 8 | 312 | 579 |
    | 3 | TABLE ACCESS FULL | PAYMENT_METHODS | 2 | 6 | 2 |
    | 4 | TABLE ACCESS FULL | JOURNAL_SUBSCRIBERS | 28 | 1008 | 576 |
    | 5 | TABLE ACCESS FULL | RATE_TYPES | 38 | 190 | 2 |
     It seems the optimizer is choosing full table scans and a hash join rather than using the indexes. Any ideas why it would do this instead of using the indexes with a nested loop, or how I can improve the speed of the query?
    Thanks

    If you believe that predicate
    where js.jsu_journal_code = v_journal_code
    and js.jsu_start_date < p_start
    has good selectivity (26000+ rows for js.jsu_journal_code = v_journal_code
     is not very good), then you can try the ORDERED hint:
    SELECT --+ ORDERED
    js.jsu_type,
    js.jsu_code,
    js.jsu_charge_type,
    rt.organisation,
    js.jsu_copies
    from journal_subscribers js,
    rate_types rt,
    payment_methods pm
    where js.jsu_journal_code = v_journal_code
    and js.jsu_start_date < p_start
    and js.jsu_rate = rt.rty_code(+)
    and jsu_payment_code = pm.payment_code
    and pm.real_payment_code = '1';
    or even
    SELECT
    js.jsu_type,
    js.jsu_code,
    js.jsu_charge_type,
    rt.organisation,
    js.jsu_copies
     from (
     select *
    from journal_subscribers
    where jsu_journal_code = v_journal_code
    and jsu_start_date < p_start
    and rownum < 100000000000
    ) js,
    rate_types rt,
    payment_methods pm
    where js.jsu_rate = rt.rty_code(+)
    and jsu_payment_code = pm.payment_code
    and pm.real_payment_code = '1';
     this will force Oracle to execute the inline view ‘JS’ first and only then join it to the rest of the tables

  • Oracle tuning question or any alternate solution

     All Oracle Gurus – I need your help in solving this problem. I am simplifying it to make it easy to understand. Let's start. For example, we have two tables.
     Table One, which stores data whenever a user accesses any dir on a Unix server, has the following columns:
     Table One
     Request Date              Server Name     Dir Accessed     User ID
     8/1/2005 12:00:00 PM      COBRA-65        root             conny
     8/2/2005 13:00:00 PM      PANTHER-98      opt              conny
     8/9/2005 23:00:00 PM      COBRA-965       opt              conny
     6/4/2005 22:00:00 PM      COBRA-73        shared           conny
     10/7/2005 23:00:00 PM     PANTHER-98      bin              conny
     10/7/2005 23:00:00 PM     PANTHER-98      bin              conny
     This table gets around 6 to 8 million records every day and we keep data for around 3 months: about 650M records in total. Now our users want to access this data and find out who is accessing what on a monthly basis. The requirement is that a user will define an application and tell us the server name and dir he wants to associate with that application name. So we created another table, Table Two; here is the column list for this table.
     Table Two
     Application Name     Server Name     Dir Accessed
     COBRA-ROOT           COBRA-73        shared
     PANTHER-OPT          PANTHER-98      opt
     PANTHER-BIN          PANTHER-98      bin
     Now we cannot let users run ad hoc queries against a table with 650M records, so we decided to create an aggregation table and store monthly data in it, where users can query it.
     AGGR_TABLE
     Application Name     Request Month     User ID     Total Hits
     COBRA-ROOT           Jun-05            conny       1
     PANTHER-OPT          Aug-05            conny       1
     PANTHER-BIN          Oct-05            conny       2
     Now to fill this aggregation table we need to run a monthly aggregation query, and here is that query:
     SELECT   B.application_name APPLICATION_NAME,
              TO_DATE(TO_CHAR(A.request_date,'MON-YYYY'),'MON-YYYY') REQUEST_DATE,
              A.user_id,
              COUNT(*) TOTAL_NUMBER_OF_HITS
     FROM     table_two B, table_one A
     WHERE    A.request_date >= TRUNC(SYSDATE,'DD')
     AND      A.request_date <  TRUNC(ADD_MONTHS(SYSDATE,1),'DD')
     AND      A.server_name  = B.server_name
     AND      A.dir_accessed = B.dir_accessed
     GROUP BY B.application_name,
              TO_DATE(TO_CHAR(A.request_date,'MON-YYYY'),'MON-YYYY'),
              A.user_id
     ORDER BY B.application_name
     Now the problem is that this query takes forever to return, sometimes more than 72 hours. We cannot do incremental updates, as the minimum granularity is a month, and if we did increments we would not be able to collect historical data. This data will grow and we are planning a 12-month retention.
     Any help on query optimization or any alternate direction will be a real help. Once again, because of company confidentiality I had to change the real table names and query, and it is a little more complicated than I showed in the above example, but the concept is the same. I would appreciate it if anyone can help us in any way.
    Thanks
    TK

     You'll need a pk/fk relationship between your tables for a fast refresh (i.e. of a materialized view) to be possible.
