Table prefix improving performance

I read in the Oracle manual that when using joins it is good practice to use table prefixes, as this increases performance. How?

f7218ad2-7d9f-4e71-ba26-0d6e4b38f87e wrote:
I read in the Oracle manual that when using joins it is good practice to use table prefixes, as this increases performance. How?
Uh, maybe so the parser doesn't have as many hoops to jump through to resolve non-specific object references.
You could cite your reference ...
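As a rough illustration (hypothetical EMP/DEPT tables, not taken from the manual), prefixes tell the parser exactly which table each column belongs to, so it doesn't have to search every table in the FROM clause:
-- Unqualified: the parser must probe both tables to resolve each column.
SELECT ename, dname, loc
FROM emp e, dept d
WHERE e.deptno = d.deptno;
-- Qualified: every reference resolves directly against one table.
SELECT e.ename, d.dname, d.loc
FROM emp e, dept d
WHERE e.deptno = d.deptno;
The saving is in parse time only; once the references are resolved, the execution plan is normally identical. Prefixes also keep the query from breaking if a column of the same name is later added to another table.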
============================================================================
BTW, it would be really helpful if you would go to your profile and give yourself a recognizable name.  It doesn't have to be your real name, just something that looks like a real name.  Who says my name is really Ed Stevens?  But at least when people see that on a message they have a recognizable identity.  Unlike the system generated name of 'ed0f625b-6857-4956-9b66-da280b7cf3a2', which is no better than posting as "Anonymous".
All you ed0f625b-6857-4956-9b66-da280b7cf3a2's look alike . . .
============================================================================

Similar Messages

  • Using TABLE() to improve performance... Am I on the right track?

    I have a situation where I read data into a set of collections (let's assume 10,000 records and an emp_no collection).
    I then process each record in a for loop. Based on conditions, a subsequent query is issued to one of two tables:
    For i in emp_no.first .. emp_no.last loop
    <<processing>>
    if <<some condition>> then
    select emp_age into age from tab_a where employee_number=emp_no(i);
    else
    select spouse_age into age from tab_b
    where employee_number=emp_no(i) and {other conditions};
    end if;
    age_array(i) := age;
    <<processing>>
    end loop;
    after the additional fields are retrieved, processing continues using the retrieved data.
    <<additional processing>>
    At the end of the processing I want to update a table's records given the values calculated during processing
    ForAll i in emp_no.first .. emp_no.last
    Update retirement Set age = age_array(i) ......
    where employee_number = emp_no(i);
    I imagine the single select queries in the loop structure will cause a lot of context switches between PL/SQL and SQL which will significantly decrease performance.
    After some review of the Oracle website I found the TABLE function. It appears I can use this to change my routine to a more efficient bulk processing structure. Something like:
    -- In the loop build a collection of emp_no's associated to each query
    For i in emp_no.first .. emp_no.last loop
    <<processing>>
    if emp_no(i) is even then
    tab_a_emp_no_array.extend;
    tab_a_emp_no_array(tab_a_emp_no_array.last) := emp_no(i);
    else
    tab_b_emp_no_array.extend;
    tab_b_emp_no_array(tab_b_emp_no_array.last) := emp_no(i);
    end if;
    <<processing>>
    end loop;
    --After the loop use a Select... Bulk Collect Into statement with a where condition that references the collection values
    Select emp_no, emp_age
    bulk collect into emp_no_a, age_a
    from tab_a
    where employee_number in (select column_value from table(tab_a_emp_no_array));
    Select emp_no, spouse_age
    bulk collect into emp_no_b, age_b
    from tab_b
    where employee_number in (select column_value from table(tab_b_emp_no_array));
    Using the emp_no_a and emp_no_b the age values can be reassociated with the correct employee for further processing.
    I HAVE THREE CONCERNS:
    1. Am I understanding and using the TABLE function correctly? I don't think "pipelined processing" would help in this situation, correct?
    2. I may end up with an IN clause that has thousands of elements. Will this perform poorly and eliminate any performance gains obtained from the bulk collect? Would "where exists (select 1 from table(tab_a_emp_no_array) where column_value=employee_number)" work any better?
    3. Is there a better way to solve this issue of optimizing performance when various tables are conditionally queried during a loop?
    I hope my issue is clear (obviously the code isn't accurate) and I thank you in advance for any insights!
    Peace,
    Larry

    No.
    I will repeat one of Tom Kyte's mantras here:
    1 When you can do it in 1 SQL statement, you should do it in SQL
    2 When you can not do it in SQL, you should do it in PL/SQL
    3 When you can not do it in PL/SQL, you should do it in Java
    Which means: you should do things non-procedurally as often as possible. Quite often people resort too early to 3GL strategies.
    An update inside a loop raises a red flag, especially if there would have been a commit inside this loop. That is not only slow-by-slow programming, it also increases the possibility of ORA-01555 errors.
    Sybrand Bakker
    Senior Oracle DBA
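    To make that mantra concrete (a rough sketch only, built from Larry's hypothetical table and column names; the MOD() test stands in for his even/odd placeholder condition, and :emp_nos is assumed to be a bind of a SQL collection type), the collect-then-update structure can often collapse into a single statement:
    -- One UPDATE instead of the loop, the bulk collects and the FORALL.
    UPDATE retirement r
       SET r.age = CASE
                     WHEN MOD(r.employee_number, 2) = 0 THEN
                       (SELECT a.emp_age FROM tab_a a
                         WHERE a.employee_number = r.employee_number)
                     ELSE
                       (SELECT b.spouse_age FROM tab_b b
                         WHERE b.employee_number = r.employee_number
                           /* plus the {other conditions} from the original */)
                   END
     WHERE r.employee_number IN (SELECT column_value FROM TABLE(:emp_nos));
    Whether this works as-is depends on the <<processing>> steps the pseudocode hides; the point is to push the work into one SQL statement before reaching for collections at all.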

  • Modify a SELECT Query on ISU DB tables to improve performance

    Hi Experts,
    I have a SELECT query in a Program which is hitting 6 DB tables by means of 5 inner joins.
    The outcome is that the program takes an exceptionally long time to execute, the SELECT statement being the main time consumer.
    Need your expertise on how to split the Query without affecting functionality -
    The Query :
    SELECT fkkvkp~gpart eabl~ablbelnr eabl~adat eabl~istablart
      FROM eabl
      INNER JOIN eablg  ON eablg~ablbelnr = eabl~ablbelnr
      INNER JOIN egerh  ON egerh~equnr    = eabl~equnr
      INNER JOIN eastl  ON eastl~logiknr  = egerh~logiknr
      INNER JOIN ever   ON ever~anlage    = eastl~anlage
      INNER JOIN fkkvkp ON fkkvkp~vkont   = ever~vkonto
      INTO TABLE itab
    WHERE eabl~adat GT [date which is (sy-datum - 3 years)]
    Thanks in advance,
    PD

    Hi Prajakt
    There are a couple of issues with the code provided by Aviansh:
    1) Higher Memory consumption by extensive use of internal tables (possible shortdump TSV_NEW_PAGE_ALLOC_FAILED)
    2) In many instances multiple SELECT ... FOR ALL ENTRIES... are not faster than a single JOIN statement
    3) In the given code the timeslice tables are limited to records active today, which is not the same as your select (taking into account that you select the last three years, you probably want historical meter/installation relationships as well)
    4) Use of sorted/hashed internal tables instead of standard ones could also improve the runtime (in case you stick to all the internal tables)
    Did you create an index on EABL including columns MANDT, ADAT?
    Did you check the execution plan of your original JOIN Select statement?
    Yep
    Jürgen
    You should review your selection, because you probably want the business partner that was linked to the meter reading at the time of ADAT, while your select doesn't take the specific contract / device installation at the time of ADAT into account.
    Example: your meter reading is from 16.02.2010.
    Meter 00001 was in Installation 3000001 between 01.02.2010 and 23.08.2010
    Meter 00002 was in Installation 3000001 between 24.08.2010 and 31.12.9999
    Installation 3000001 was linked to Account 4000001 between 01.01.2010 and 23.01.2011
    Installation 3000001 was linked to Account 4000002 between 24.01.2011 and 31.12.9999
    This means your select returns four lines where you probably want only one.
    To achieve that you have to limit all timeslices to the date of EABL-ADAT (selects from EGERH, EASTL, EVER).
    Update:
    Coming back to point one and the memory consumption:
    What are you planning to do with the output of the select statement?
    Did you get a shortdump TSV_NEW_PAGE_ALLOC_FAILED with three years meter reading history?
    Or have you never run it on production-like volumes yet?
    Dependent on this you might want to redesign your program anyway.
    Edited by: sattlerj on Jun 24, 2011 10:38 AM
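    For reference, the index asked about above would look something like this in plain SQL (the index name is made up; on an SAP system you would create it through the ABAP Dictionary, transaction SE11, rather than with raw DDL):
    -- Hypothetical secondary index to support the MANDT + ADAT range condition.
    CREATE INDEX eabl_z01 ON eabl (mandt, adat);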

  • Multi table inheritance and performance

    I really like the idea of multi-table inheritance, since I have a main
    class and three subclasses which just add one integer to the main class.
    It would be a waste to spend 4 tables on this, so I decided to put them
    all into one.
    My problem now is, that when I query for a specific class, kodo will build
    SQL like:
    select ... from table where
    JDOCLASSX='de.mycompany.myprojectname.mysubpack.classname'
    this is pretty slow when the table grows, because string comparisons are awful - and even worse: the database has to compare nearly the whole string because it differs only in the last letters.
    Indexing would help a bit, but wouldn't outperform integer comparisons.
    Is it possible to get kodo to do one more step of normalization ?
    Having an extra table containing all class names and IDs for them (and
    references in the original table) would improve performance of
    multi-tables quite a lot !
    Even with standard classes it would save a lot of memory not to have the full class name in each row.

    Stefan-
    Thanks for the feedback. Note that 3.0 does make this simpler: we have
    extensions that allow you to define the mechanism for subclass
    identification purely in the metadata file(s). See:
    http://solarmetric.com/Software/Documentation/3.0.0RC1/docs/manual.html#ref_guide_mapping_classind
    The idea for having a separate table mapping numbers to class names is
    good, but we prefer to have as few Kodo-managed tables as possible. It
    is just as easy to do this in the metadata file.
    In article <[email protected]>, Stefan wrote:
    First of all: thx for the fast help, this one (IntegerProvider) helped and
    solves my problem.
    Kodo is really amazing with all its places where customization can be done!
    Anyway, as a wish for future releases: exactly this technique - using integers as class identifiers rather than the full class names - is what I meant by "normalization".
    The only thing missing is a table containing information on how classIDs are mapped to class names (which is now contained as an explicit statement in the .jdo file). This table is not mapped to the primary key of the main table (as you suggested), but to the classID integer, which acts as a foreign key.
    A query for a specific class would be solved with a query like:
    select * from classValues, classMapping where
    classValues.JDOCLASSX=classmapping.IDX and
    classmapping.CLASSNAMEX='de.company.whatever'
    This table should be managed by kodo of course !
    Imagine a table with 300,000 rows containing only 3 different derived classes.
    You would have an extra table with 4 rows (base class + 3 derived types).
    Searching for the classID is done in that 4-row table, while searching the actual class instances would then be done over an indexed integer classID field.
    This is much faster than having the database do 300,000 string comparisons (even when indexed).
    (By the way, it would save a lot of memory as well, even on classes which are not derived.)
    If this technique were done by Kodo transparently, maybe turned on with an extra option ... that would be great, since you wouldn't need to take care of different "subclass-indicator values", could carry on as usual, and would have far better performance ...
    Stephen Kim wrote:
    You could push off fields to separate tables (as long as the pk column is the same); however, I doubt that would add much performance benefit in this case, since we'd simply add a join (e.g. select data.name, info.jdoclassx, info.jdoidx where data.jdoidx = info.jdoidx and info.jdoclassx = 'foo'). One could turn off the default fetch group for fields stored in data, but now you're adding a second select to load one "row" of data.
    However, we DO provide an integer subclass provider which can speed
    these sorts of queries a lot if you need to constrain your queries by
    class, esp. with indexing, at the expense of simple legibility:
    http://solarmetric.com/Software/Documentation/2.5.3/docs/ref_guide_meta_class.html#meta-class-subclass-provider
    Stefan wrote:
    I really like the idea of multi-table inheritance, since I have a main
    class and three subclasses which just add one integer to the main class.
    It would be a waste to spend 4 tables on this, so I decided to put them
    all into one.
    My problem now is, that when I query for a specific class, kodo will build
    SQL like:
    select ... from table where
    JDOCLASSX='de.mycompany.myprojectname.mysubpack.classname'
    this is pretty slow when the table grows, because string comparisons are awful - and even worse: the database has to compare nearly the whole string because it differs only in the last letters.
    Indexing would help a bit, but wouldn't outperform integer comparisons.
    Is it possible to get kodo to do one more step of normalization ?
    Having an extra table containing all class names and IDs for them (and
    references in the original table) would improve performance of
    multi-tables quite a lot !
    Even with standard classes it would save a lot of memory not to have the full class name in each row.
    Steve Kim
    [email protected]
    SolarMetric Inc.
    http://www.solarmetric.com
    Marc Prud'hommeaux [email protected]
    SolarMetric Inc. http://www.solarmetric.com

  • Improve Performance of Dimension and Fact table

    Hi All,
    Can any one explain me the steps how to improve performance of Dimension and Fact table.
    Thanks in advance....
    redd

    Hi!
    There is much to be said about performance in general, but I will try to answer your specific question regarding fact and dimension tables.
    First of all, try to compress as many requests as possible in the fact table, and do that regularly.
    Partition your compressed fact table physically based on for example 0CALMONTH. In the infocube maintenance, in the Extras menu, choose partitioning.
    Partition your cube logically into several smaller cubes based on for example 0CALYEAR. Combine the cubes with a multiprovider.
    Use constants on infocube level (Extras->Structure Specific Infoobject properties) and/or restrictions on specific cubes in your multiprovider queries if needed.
    Create aggregates of subsets of your characteristics based on your query design. Use the debug option in RSRT to investigate which objects you need to include.
    To investigate the size of the dimension tables, first use the test in transaction RSRV (Database Information about InfoProvider Tables). It will tell you the relative sizes of your dimensions in comparison to your fact table. Then go to transaction DB02 and conduct a detailed analysis on the large dimension tables. You can choose "table columns" in the detailed analysis screen to see the number of distinct values in each column (characteristic). You also need to understand the "business logic" behind these objects: the ones that have low cardinality, that is, those that relate closely to each other, should be located together. With this information at hand you can understand which objects contribute the most to the size of the dimension and separate the dimension.
    Use line item dimension where applicable, but use the "high cardinality" option with extreme care.
    Generate database statistics regularly using process chains, or (if you use Oracle) schedule BRCONNECT runs using transaction DB13.
    Good luck!
    Kind Regards
    Andreas

  • How to improve performance by pulling data instead of BSEG table?

    Hi,
    We are facing an issue in which we have to pull the material number for some non-COPA postings.
    But if we use the BSEG table, serious performance issues come up.
    So are there any other tables / combinations of tables that we can look at instead of BSEG?

    Hi,
    BSEG is a cluster table; you can only select efficiently with key fields.
    if you have a select:
    select belnr budat wrbtr from bseg
              into table it_bseg
               where bukrs = bukrs
                   and belnr = belnr
                   and gjahr = gjahr
                   and bschl = 31.
    it's much better to select this way:
    select belnr budat wrbtr from bseg
              into table it_bseg
               where bukrs = bukrs
                   and belnr = belnr
                   and gjahr = gjahr.
    delete it_bseg where bschl ne '31'.
    Regards,
    Fernando

  • FI-CA events to improve performance

    Hello experts,
    Does anybody use the FI-CA events to improve the extraction performance for datasources 0FC_OP_01 and 0FC_CI_01 (open and cleared items)?
    It seems that these specific exits associated with BW events have been developed especially to improve performance.
    Any documentation or guide would be appreciated.
    Thanks.
    Thibaud.

    Thanks to all for the replies
    @Sybrand
    Please answer first whether the column is stored in a separate lobsegment.
    No. Table,Index,LOB,LOB index uses the same TS. I missed adding this point( moving to separate TS) as part of table modifications.
    @Hemant
    There's a famous paper / blog post about CLOBs and Database Flashback. If I find it, I'll post the URL.
    Is this the one you are referring to
    http://laimisnd.wordpress.com/2011/03/25/lobs-and-flashback-database-performance/
    By moving the CLOB column to a different block size, I will test the performance improvement it gives and will share the results.
    We don't need any data from this table. The XML file contains details about fingerprints, and once the application server completes the job, the XML data is deleted from this table.
    So no need of backup/recovery operations for this table. Client will be able to replay the transactions if any problem occurs.
    @Billy
    We are not performing XML parsing on the DB side. We get the XML data from the client -> insert into table -> client selects from table -> upon successful completion of the job from the client, the XML data gets deleted.
    Regarding binding of LOB from client side, will check on that side also to reduce round trips.
    By changing the block size, I can keep db_32k_cache_size=2G and keep this table in CACHE. If I directly put my table in the CACHE, it will age out everything else from the buffer, which makes things worse for us.
    This insert is part of a transaction (registration of a fingerprint), and this is the only statement taking time as of now compared to the other statements in the transaction.
    Thanks,
    Arun
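    As a sketch of the change Arun describes (object names, file path and sizes are placeholders, not details from the thread):
    -- Hypothetical: a dedicated 32K-block tablespace for the CLOB,
    -- backed by a separate 32K buffer cache, with LOB caching enabled.
    ALTER SYSTEM SET db_32k_cache_size = 2G;
    CREATE TABLESPACE lob32k_ts
      DATAFILE '/u01/oradata/lob32k_01.dbf' SIZE 10G
      BLOCKSIZE 32K;
    ALTER TABLE fingerprint_jobs
      MOVE LOB (xml_data)
      STORE AS (TABLESPACE lob32k_ts CACHE);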

  • Improving Performance

    Hi Experts,
    How can we improve the performance of a SELECT without creating a secondary index?
    In my select query I am not using key fields in the WHERE condition,
    so I want to know how we can improve the performance.
    One more thing: if we create a secondary index, what are the disadvantages of that?
    Thanks & Regards,
    Amit.

    If you select from a table without using an appropriate index or key, then the database will perform a table scan to get the required data.  If you accept that this will be slow but must be used, then the key to improving the performance of the program is to minimise the number of times it does the scan of the table.
    Often the way to do this is not what would normally be counted as good programming.
    For example, if you SELECT inside a loop or SELECT using FOR ALL ENTRIES, the system can end up doing the table scan a lot of times, because the SQL is broken up into lots of individual/small selects passed to the database one after the other.  So it may be quicker to SELECT from the table into an internal table without specifying any WHERE conditions, and then delete the rows from the internal table that are not wanted.  This way you do only a single table scan on the database to get all records.  Of course, this uses a lot of memory - which is often the trade-off.  If you have a partial key and are then selecting based on non-indexed fields, you can get all records matching the partial key and then throw away those where the remaining fields don't meet requirements.
    Andrew

  • How to improve performance of the attached query

    Hi,
    How can I improve the performance of the below query? Please help. The explain plan is also attached -
    SELECT Camp.Id,
    rCam.AccountKey,
    Camp.Id,
    CamBilling.Cpm,
    CamBilling.Cpc,
    CamBilling.FlatRate,
    Camp.CampaignKey,
    Camp.AccountKey,
    CamBilling.billoncontractedamount,
    (SUM(rCam.Impressions) * 0.001 + SUM(rCam.Clickthrus)) AS GR,
    rCam.AccountKey as AccountKey
    FROM Campaign Camp, rCamSit rCam, CamBilling, Site xSite
    WHERE Camp.AccountKey = rCam.AccountKey
    AND Camp.AvCampaignKey = rCam.AvCampaignKey
    AND Camp.AccountKey = CamBilling.AccountKey
    AND Camp.CampaignKey = CamBilling.CampaignKey
    AND rCam.AccountKey = xSite.AccountKey
    AND rCam.AvSiteKey = xSite.AvSiteKey
    AND rCam.RmWhen BETWEEN to_date('01-01-2009', 'DD-MM-YYYY') and
    to_date('01-01-2011', 'DD-MM-YYYY')
    GROUP By rCam.AccountKey,
    Camp.Id,
    CamBilling.Cpm,
    CamBilling.Cpc,
    CamBilling.FlatRate,
    Camp.CampaignKey,
    Camp.AccountKey,
    CamBilling.billoncontractedamount
    Explain Plan :-
    Description Object_owner Object_name Cost Cardinality Bytes
    SELECT STATEMENT, GOAL = ALL_ROWS 14 1 13
    SORT AGGREGATE 1 13
    VIEW GEMINI_REPORTING 14 1 13
    HASH GROUP BY 14 1 103
    NESTED LOOPS 13 1 103
    HASH JOIN 12 1 85
    TABLE ACCESS BY INDEX ROWID GEMINI_REPORTING RCAMSIT 2 4 100
    NESTED LOOPS 9 5 325
    HASH JOIN 7 1 40
    SORT UNIQUE 2 1 18
    TABLE ACCESS BY INDEX ROWID GEMINI_PRIMARY SITE 2 1 18
    INDEX RANGE SCAN GEMINI_PRIMARY SITE_I0 1 1
    TABLE ACCESS FULL GEMINI_PRIMARY SITE 3 27 594
    INDEX RANGE SCAN GEMINI_REPORTING RCAMSIT_I 1 1 5
    TABLE ACCESS FULL GEMINI_PRIMARY CAMPAIGN 3 127 2540
    TABLE ACCESS BY INDEX ROWID GEMINI_PRIMARY CAMBILLING 1 1 18
    INDEX UNIQUE SCAN GEMINI_PRIMARY CAMBILLING_U1 0 1

    duplicate thread..
    How to improve performance of attached query

  • Need to improve Performance of select...endselect query

    Hi experts,
    I have a query in my program, shown below, with an inner join of 3 tables.
    The program uses SELECT...ENDSELECT, and inside that loop further SELECT...ENDSELECT statements are used.
    When executed in production it takes a lot of time to fetch records. Can anyone suggest urgently how to improve the performance of the below query?
    Your help is greatly appreciated.
    SELECT MVKE~DWERK MVKE~MATNR MVKE~VKORG MVKE~VTWEG MARA~MATNR
           MARA~MTART ZM012~MTART ZM012~ZLIND ZM012~ZPRICEREF
    INTO (MVKE-DWERK , MVKE-MATNR , MVKE-VKORG , MVKE-VTWEG , MARA-MATNR
         , MARA-MTART , ZM012-MTART , ZM012-ZLIND , ZM012-ZPRICEREF )
    FROM ( MVKE
           INNER JOIN MARA
           ON MARA~MATNR = MVKE~MATNR
           INNER JOIN ZM012
           ON ZM012~MTART = MARA~MTART )
           WHERE MVKE~DWERK IN SP$00004
             AND MVKE~MATNR IN SP$00001
             AND MVKE~VKORG IN SP$00002
             AND MVKE~VTWEG IN SP$00003
             AND MARA~MTART IN SP$00005
             AND ZM012~ZLIND IN SP$00006
             AND ZM012~ZPRICEREF IN SP$00007.
      %DBACC = %DBACC - 1.
      IF %DBACC = 0.
        STOP.
      ENDIF.
      CHECK SP$00005.
      CHECK SP$00004.
      CHECK SP$00001.
      CHECK SP$00002.
      CHECK SP$00003.
      CHECK SP$00006.
      CHECK SP$00007.
      clear Check_PR00.
      select * from A004
      where kappl = 'V'
      and kschl = 'PR00'
      and vkorg = mvke-vkorg
      and vtweg = mvke-vtweg
      and matnr = mvke-matnr
      and DATAB le sy-datum
      and DATBI ge sy-datum.
      if sy-subrc = 0.
      select * from konp
      where knumh = a004-knumh.
      if sy-subrc = 0.
      Check_PR00 = konp-kbetr.
      endif.
      endselect.
      endif.
      endselect.
      CHECK SP$00008.
      clear Check_ZPR0.
      select * from A004
      where kappl = 'V'
      and kschl = 'ZPR0'
      and vkorg = mvke-vkorg
      and vtweg = mvke-vtweg
      and matnr = mvke-matnr
      and DATAB le sy-datum
      and DATBI ge sy-datum.
      if sy-subrc = 0.
      select * from konp
      where knumh = a004-knumh.
      if sy-subrc = 0.
      Check_ZPR0 = konp-kbetr.
      endif.
      endselect.
      endif.
      endselect.
      CHECK SP$00009.
      clear ZFMP.
      select * from A004
      where kappl = 'V'
      and kschl = 'ZFMP'
      and vkorg = mvke-vkorg
      and vtweg = mvke-vtweg
      and matnr = mvke-matnr
      and DATAB le sy-datum
      and DATBI ge sy-datum.
      if sy-subrc = 0.
      select * from konp
      where knumh = a004-knumh.
      if sy-subrc = 0.
      ZFMP = konp-kbetr.
      endif.
      endselect.
      endif.
      endselect.
      CHECK SP$00010.
      clear mastercost.
      clear ZDCF.
      select * from A004
      where kappl = 'V'
      and kschl = 'ZDCF'
      and vkorg = mvke-vkorg
      and vtweg = mvke-vtweg
      and matnr = mvke-matnr
      and DATAB le sy-datum
      and DATBI ge sy-datum.
      if sy-subrc = 0.
      select * from konp
      where knumh = a004-knumh.
      if sy-subrc = 0.
      ZDCF = konp-kbetr.
      endif.
      endselect.
      endif.
      endselect.
      CHECK SP$00011.
      clear masterprice.
      clear Standardcost.
      select * from mbew
      where matnr = mvke-matnr
      and bwkey = mvke-dwerk.
      Standardcost = mbew-stprs.
      mastercost = MBEW-BWPRH.
      masterprice = mBEW-BWPH1.
      endselect.
      ADD 1 TO %COUNT-MVKE.
      %LINR-MVKE = '01'.
      EXTRACT %FG01.
      %EXT-MVKE01 = 'X'.
        EXTRACT %FGWRMVKE01.
    ENDSELECT.
    best rgds..
    hari..

    Hi there.
    Some advice:
    - Why go to MVKE first and MARA then? You will find n rows in MVKE for 1 matnr, and then go n times to the same record in MARA. Do the opposite, i.e., go first to MARA (1 time per matnr) and then to MVKE.
    - Avoid SELECT *, you will save time.
    - Use a trace or measure performance in tcodes ST05 and SE30 (runtime analysis).
    -  replace:
    select * from konp
    where knumh = a004-knumh.
    if sy-subrc = 0.
    Check_ZPR0 = konp-kbetr.
    endif.
    endselect.
    by
    select * from konp
    where knumh = a004-knumh.
    Check_ZPR0 = konp-kbetr.
    exit.
    endselect.    
    Here, if I understood correctly, you only need to assign the kbetr value to Check_ZPR0 if anything is selected. You don't need the IF (because if it enters the SELECT loop, sy-subrc is always 0), and you also don't need to do it several times for the same a004-knumh - the reason for the EXIT.
    Hope this helps.
    Regards.
    Valter Oliveira.
    Edited by: Valter Oliveira on Jun 5, 2008 3:16 PM

  • How to improve performance of attached query

    Hi,
    How can I improve the performance of the below query? Please help. The explain plan is also attached -
    SELECT Camp.Id,
    rCam.AccountKey,
    Camp.Id,
    CamBilling.Cpm,
    CamBilling.Cpc,
    CamBilling.FlatRate,
    Camp.CampaignKey,
    Camp.AccountKey,
    CamBilling.billoncontractedamount,
    (SUM(rCam.Impressions) * 0.001 + SUM(rCam.Clickthrus)) AS GR,
    rCam.AccountKey as AccountKey
    FROM Campaign Camp, rCamSit rCam, CamBilling, Site xSite
    WHERE Camp.AccountKey = rCam.AccountKey
    AND Camp.AvCampaignKey = rCam.AvCampaignKey
    AND Camp.AccountKey = CamBilling.AccountKey
    AND Camp.CampaignKey = CamBilling.CampaignKey
    AND rCam.AccountKey = xSite.AccountKey
    AND rCam.AvSiteKey = xSite.AvSiteKey
    AND rCam.RmWhen BETWEEN to_date('01-01-2009', 'DD-MM-YYYY') and
    to_date('01-01-2011', 'DD-MM-YYYY')
    GROUP By rCam.AccountKey,
    Camp.Id,
    CamBilling.Cpm,
    CamBilling.Cpc,
    CamBilling.FlatRate,
    Camp.CampaignKey,
    Camp.AccountKey,
    CamBilling.billoncontractedamount
    Explain Plan :-
    Description Object_owner Object_name Cost Cardinality Bytes
    SELECT STATEMENT, GOAL = ALL_ROWS 14 1 13
    SORT AGGREGATE 1 13
    VIEW GEMINI_REPORTING 14 1 13
    HASH GROUP BY 14 1 103
    NESTED LOOPS 13 1 103
    HASH JOIN 12 1 85
    TABLE ACCESS BY INDEX ROWID GEMINI_REPORTING RCAMSIT 2 4 100
    NESTED LOOPS 9 5 325
    HASH JOIN 7 1 40
    SORT UNIQUE 2 1 18
    TABLE ACCESS BY INDEX ROWID GEMINI_PRIMARY SITE 2 1 18
    INDEX RANGE SCAN GEMINI_PRIMARY SITE_I0 1 1
    TABLE ACCESS FULL GEMINI_PRIMARY SITE 3 27 594
    INDEX RANGE SCAN GEMINI_REPORTING RCAMSIT_I 1 1 5
    TABLE ACCESS FULL GEMINI_PRIMARY CAMPAIGN 3 127 2540
    TABLE ACCESS BY INDEX ROWID GEMINI_PRIMARY CAMBILLING 1 1 18
    INDEX UNIQUE SCAN GEMINI_PRIMARY CAMBILLING_U1 0 1

    duplicate thread..
    How to improve performance of attached query

  • Rewrite SQL query to improve performance

    Hello,
    The below queries are very time consuming. Could you please suggest how to improve performance for these 2 queries:
    QUERY1:
    SELECT a~vbeln a~posnr a~auart a~vkorg a~vtweg a~spart
    a~vkbur a~kunnr b~matnr b~kwmeng b~vrkme
    b~netwr b~werks b~lgort b~vstel b~abgru b~erdat b~ernam
    c~faksk c~ktext c~vdatu c~zzbrsch c~kvgr1 c~augru
    INTO CORRESPONDING FIELDS OF TABLE g_t_sodata
                FROM vapma AS a INNER JOIN vbap AS b ON
                a~vbeln = b~vbeln AND a~posnr = b~posnr
                INNER JOIN vbak AS c ON a~vbeln = c~vbeln
                  WHERE a~vkorg IN so_vkorg
                  AND a~vtweg IN so_vtweg
                  AND a~vkbur IN so_vkbur
                  AND a~auart IN so_auart
                  AND a~vbeln IN so_vbeln
                  AND b~abgru IN so_abgru
                  AND c~faksk IN so_faksk
                  AND ( b~erdat GT g_f_zenkai_date OR
            ( b~erdat EQ g_f_zenkai_date AND b~erzet GE g_f_zenkai_time ) )
            AND ( b~erdat LT g_f_kaisi_date  OR
             ( b~erdat EQ g_f_kaisi_date  AND b~erzet LT g_f_kaisi_time ) ).
    QUERY2:
    SELECT a~vbeln a~posnr a~auart a~vkorg a~vtweg a~spart
            a~vkbur a~kunnr b~matnr b~kwmeng  b~vrkme
            b~netwr
            b~werks b~lgort b~vstel b~abgru b~erdat b~ernam
            c~faksk c~ktext c~vdatu c~zzbrsch c~kvgr1 c~augru
            INTO CORRESPONDING FIELDS OF TABLE g_t_sodata
           FROM vapma AS a INNER JOIN vbap AS b ON
           a~vbeln = b~vbeln AND a~posnr = b~posnr
           INNER JOIN vbak AS c ON a~vbeln = c~vbeln
             WHERE a~vkorg IN so_vkorg
             AND a~vtweg IN so_vtweg
             AND a~vkbur IN so_vkbur
             AND a~auart IN so_auart
             AND a~vbeln IN so_vbeln
             AND b~abgru IN so_abgru
             AND c~faksk IN so_faksk.

    Questions like this are among the favorites here in this forum.
    I guess that the statements themselves are o.k.
    The problem is the usage! There are so many ranges; if they are filled, then some index should support them. If no range is filled, then it is usually slow and you can not do anything.
    You must find out which ranges are actually used, and under which conditions it is slow and when it is o.k.
    Ask again when you can provide the actual cases.
    Use SQL trace to check the performance:
    SQL trace:
    The SQL Trace (ST05) – Quick and Easy
    I know the probability is high that you will ignore this recommendation and point to the
    'Use FOR ALL ENTRIES' recommendations ... but then I can not help you.
    Siegfried

  • To improve performance for report

    Hi Expert,
    I have generated an open sales orders report which fetches data from VBAK; it takes a long time executing even in the foreground.
    It goes into a dump in the foreground, and I have executed it in the background as well, but it also goes into a dump.
    SELECT vbeln
               auart
               submi
               vkorg
               vtweg
               spart
               knumv
               vdatu
               vprgr
               ihrez
               bname
               kunnr
        FROM vbak
        APPENDING TABLE itab_vbak_vbap
        FOR ALL ENTRIES IN l_itab_temp
    *BEGIN OF change 17/Oct/2008.
        WHERE erdat IN s_erdat              AND
             submi = l_itab_temp-submi     AND
    *End of Changes 17/Oct/2008.
              auart = l_itab_temp-auart     AND
    *BEGIN OF change 17/Oct/2008.
              submi = l_itab_temp-submi     AND
    *End of Changes 17/Oct/2008.
              vkorg = l_itab_temp-vkorg     AND
              vtweg = l_itab_temp-vtweg     AND
              spart = l_itab_temp-spart     AND
              vdatu = l_itab_temp-vdatu     AND
              vprgr = l_itab_temp-vprgr     AND
              ihrez = l_itab_temp-ihrez     AND
              bname = l_itab_temp-bname     AND
              kunnr = l_itab_temp-sap_kunnr.
        DELETE itab_temp FROM l_v_from_rec TO l_v_to_rec.
      ENDDO.
    Please give me suggestions for improving the performance of the program.

    Hi,
    try it like this:
    DATA:BEGIN OF itab1 OCCURS 0,
         vbeln LIKE vbak-vbeln,
         END OF itab1.
    DATA: BEGIN OF itab2 OCCURS 0,
          vbeln LIKE vbap-vbeln,
          posnr LIKE vbap-posnr,
          matnr LIKE vbap-matnr,
          END OF itab2.
    DATA: BEGIN OF itab3 OCCURS 0,
          vbeln TYPE vbeln_va,
          posnr TYPE posnr_va,
          matnr TYPE matnr,
          END OF itab3.
    SELECT-OPTIONS: s_vbeln FOR vbak-vbeln.
    START-OF-SELECTION.
      SELECT vbeln FROM vbak INTO TABLE itab1
      WHERE vbeln IN s_vbeln.
      IF itab1[] IS NOT INITIAL.
        SELECT vbeln posnr matnr FROM vbap INTO TABLE itab2
        FOR ALL ENTRIES IN itab1
        WHERE vbeln = itab1-vbeln.
      ENDIF.

  • Alternate for inner join to improve performance

    Hi all,
    I have used an inner join query to fetch data from five different tables into an internal table with where clause conditions.
    The execution time is almost 5-6 min for this particular query (I have a lot of data in all five DB tables - more than 10 million records in every table).
    Is there any alternative to the inner join to improve performance?
    TIA.
    Regards,
    Karthik

    Hi All,
    Thanks for all your interest.
    SELECT  a~object_id a~description a~descr_language
                a~guid AS object_guid a~process_type
                a~changed_at
                a~created_at AS created_timestamp
                a~zzorderadm_h0207 AS cpid
                a~zzorderadm_h0208 AS submitter
                a~zzorderadm_h0303 AS cust_ref
                a~zzorderadm_h1001 AS summary
                a~zzorderadm_h1005 AS summary_uc
                a~zzclose_date     AS clsd_date
                d~stat AS status
                f~priority
                FROM crmd_orderadm_h AS a INNER JOIN crmd_link AS b ON  a~guid = b~guid_hi
                INNER JOIN crmd_partner AS c ON b~guid_set = c~guid
                INNER JOIN crm_jest AS d ON objnr  = a~guid
                INNER JOIN crmd_activity_h AS f ON f~guid = a~guid
                INTO CORRESPONDING FIELDS OF TABLE et_service_request_list
                WHERE process_type IN lt_processtyperange
                AND   a~created_at IN lt_daterange
                AND   partner_no IN lr_partner_no
                AND   stat IN lt_statusrange
                AND   object_id IN lt_requestnumberrange
                AND   zzorderadm_h0207 IN r_cpid
                AND   zzorderadm_h0208 IN r_submitter
                AND   zzorderadm_h0303 IN r_cust_ref
                AND   zzorderadm_h1005 IN r_trans_desc
                AND   d~inact = ' '
                AND   b~objtype_hi = '05'
                AND   b~objtype_set = '07'.

  • How to improve performance of insert statement

    Hi all,
    How can I improve the performance of an insert statement?
    I am inserting 1 lakh (100,000) records into a table, and it takes around 20 min.
    Please help.
    Thanks in advance.

    I tried :
    SQL> create table test as select * from dba_objects;
    Table created.
    SQL> delete from test;
    3635 rows deleted.
    SQL> commit;
    Commit complete.
    SQL> select count(*) from dba_extents where segment_name='TEST';
      COUNT(*)
    ----------
             4
    SQL> insert /*+ APPEND */ into test select * from dba_objects;
    3635 rows created.
    SQL> commit;
    Commit complete.
    SQL> select count(*) from dba_extents where segment_name='TEST';
      COUNT(*)
    ----------
             6
    The extent count growing from 4 to 6 shows that the APPEND hint did a direct-path insert: the rows were written above the high-water mark into fresh extents instead of reusing the space freed by the DELETE. That is what makes it fast - it bypasses the buffer cache and generates minimal undo - at the cost of not reusing free space and holding an exclusive lock on the table until commit.
    Cheers, Bhupinder
