Performance issues with million-record tables

I have a scenario wherein some 20 tables each hold a million or more records. [ Historical ]
On average I add 1,500 - 2,500 records a day, i.e. roughly a million records a year.
I am looking for archival solutions for these master tables.
Operations on the archival tables would be limited to reads.
Expected usage:
The user base would be around 2,500 users in total, with 300 - 500 concurrent users at the peak.
Very limited usage of historical data, compared to operations on current data.
Performance of operations on current data is more important than performance on historical data.
Environment: Oracle 9i - should be migrating to Oracle 10g soon.
Some solutions I could think of...
[ 1 ] Put every archived record into an archival table and fetch it from there,
i.e. clearly distinguish searches as current or archival prior to searching.
The impact, I feel, is that the archival tables again keep growing by approximately a million rows a year.
[ 2 ] Put records into separate archival tables, one per year.
For instance, every year I replicate the set of tables and that year's data goes into those tables.
But how do I do a fetch?
Note - I do have a unique way of identifying each record in my master tables: the primary key is based on a YYYYMMXXXXXXXXXX format, e.g. 2008070000562330. Will the year part help me in any way to pick the correct table?
The major concern is that I currently get very good response times thanks to indexing and other common measures, but I would not want this to degrade in a year or more; I expect to improve on the current response times and to ensure they hold over a period of time.
Also, I don't want to change every query in my app - unless there is no way out...

Hi,
Read the Oracle documentation about Partitioning.
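To make the suggestion concrete, here is a minimal sketch of range partitioning (table and column names are hypothetical, not from this thread) that exploits the YYYYMM prefix of the numeric primary key described above. The application keeps querying one table name, so no query needs to change, and partition pruning confines current-data work to the current partition:

-- Hypothetical master table, range-partitioned on the existing
-- YYYYMMXXXXXXXXXX-style numeric primary key so that each year's
-- rows land in their own partition.
CREATE TABLE master_orders (
  order_id    NUMBER(16)    NOT NULL,   -- e.g. 2008070000562330
  order_data  VARCHAR2(200),
  CONSTRAINT master_orders_pk PRIMARY KEY (order_id)
)
PARTITION BY RANGE (order_id) (
  PARTITION p2007    VALUES LESS THAN (2008000000000000),
  PARTITION p2008    VALUES LESS THAN (2009000000000000),
  PARTITION p_future VALUES LESS THAN (MAXVALUE)
);

-- A query that carries the key (or a key range) is pruned to the
-- matching partition automatically - no current-vs-archive logic
-- in the application:
SELECT order_data
FROM   master_orders
WHERE  order_id = 2008070000562330;

Older partitions can also be moved to their own tablespaces and made read-only, which matches the read-only archival requirement above.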
Best Regards,
Alex

Similar Messages

  • Help with querying a 200 million record table

    Hi,
    I need to query a 200 million record table which is partitioned by monthly activity.
    My problem is that I need to see how many activities occurred on one account in a time frame.
    If there are 200 partitions, I need to go into all the partitions, get the activities of the account in each partition, and at the end total the number of activities.
    Fortunately, only one activity is expected for an account in each partition, and it may be present or absent.
    If this table had 100 records, I would use this:
    select account_no, count(*)
    from Acct_actvy
    group by account_no;

    I must stress that it is critical that you not write code (SQL or PL/SQL) that uses hardcoded partition names to find data.
    That approach is very risky, prone to runtime errors, difficult to maintain, and does not scale. It is not worth it.
    From the developer's side, there should be total ignorance of the fact that a table is partitioned: a developer must treat a partitioned table no differently from any other table.
    To give you an idea... this is a copy-and-paste from a SQL*Plus session doing what you want to do, against a partitioned table at least 3x bigger than yours. It covers about a 12-month period. There's a partition per day - and empty daily partitions for the next 2 years. The SQL aggregation is monthly. I selected a random network address to illustrate.
    SQL> select count(*) from x25_calls;
      COUNT(*)
    619491919
    Elapsed: 00:00:19.68
    SQL>
    SQL>  select TRUNC(callendtime,'MM') AS MONTH, sourcenetworkaddress, count(*) from x25_calls where sourcenetworkaddress = '3103165962'
      2  group by TRUNC(callendtime,'MM'), sourcenetworkaddress;
    MONTH               SOURCENETWORKADDRESS   COUNT(*)
    2005/09/01 00:00:00 3103165962                 3599
    2005/10/01 00:00:00 3103165962                 1184
    2005/12/01 00:00:00 3103165962                    4
    2005/06/01 00:00:00 3103165962                    1
    2005/04/01 00:00:00 3103165962                  560
    2005/08/01 00:00:00 3103165962                  101
    2005/03/01 00:00:00 3103165962                 3330
    7 rows selected.
    Elapsed: 00:00:19.72
    As you can see - not a single reference to any partitioning. Excellent performance, despite running on an old K-class HP server.
    The reason for the performance is simple: a correctly designed and implemented partitioning scheme that caters for most of the queries against the table, plus correctly designed and implemented indexes - especially local bitmap indexes. Without any hacks like partition names and the like...
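    To illustrate the kind of scheme being described (hypothetical names, not the actual x25_calls DDL): one range partition per day on the call-end time, plus a LOCAL bitmap index on the filter column, lets the optimizer prune partitions and scan only small per-partition index segments.

    -- Hypothetical sketch of the setup described above.
    CREATE TABLE calls (
      callendtime          DATE         NOT NULL,
      sourcenetworkaddress VARCHAR2(20) NOT NULL,
      duration_secs        NUMBER
    )
    PARTITION BY RANGE (callendtime) (
      PARTITION p20050901 VALUES LESS THAN (TO_DATE('2005-09-02','YYYY-MM-DD')),
      PARTITION p20050902 VALUES LESS THAN (TO_DATE('2005-09-03','YYYY-MM-DD')),
      PARTITION p_max     VALUES LESS THAN (MAXVALUE)
    );

    -- LOCAL = one index segment per table partition; bitmap indexes
    -- suit low-cardinality filter columns in read-mostly tables.
    CREATE BITMAP INDEX calls_src_bix
      ON calls (sourcenetworkaddress) LOCAL;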

  • Selecting Records from 125 million record table to insert into smaller table

    Oracle 11g
    I have a large table of 125 million records - t3_universe.  This table never gets updated or altered once loaded,  but holds data that we receive from a lead company.
    I need to select records from this large table that fit certain demographic criteria and insert those into a smaller table - T3_Leads -  that will be updated with regard to when the lead is mailed and for other relevant information.
    My question is what is the best (fastest) approach to select records from this 125 million record table to insert into the smaller table.  I have tried a variety of things - views, materialized views, direct insert into smaller table...I think I am probably missing other approaches.
    My current attempt has been to create a View using the query that selects the records as shown below.  Then use a second query that inserts into T3_Leads from this View V_Market.  This is very slow. Can I just use an Insert Into T3_Leads with this query - it did not seem to work with the WITH clause?    My Index on the large table is t3_universe_composite and includes zip_code, address_key, household_key. 
    CREATE VIEW V_Market AS
    WITH got_pairs AS (
         SELECT /*+ INDEX_FFS(t3_universe t3_universe_composite) */
                l.zip_code, l.zip_plus_4, l.p1_givenname, l.surname, l.address, l.city, l.state,
                l.household_key, l.hh_type AS l_hh_type, l.address_key, l.narrowband_income,
                l.p1_ms, l.p1_gender, l.p1_exact_age, l.p1_personkey, e.hh_type AS filler_data,
                l.p1_seq_no, l.p2_seq_no
         ,      ROW_NUMBER () OVER ( PARTITION BY l.address_key
                                     ORDER BY     l.hh_verification_date DESC
                                   ) AS r_num
         FROM   t3_universe  e
         JOIN   t3_universe  l  ON
                l.address_key  = e.address_key
                AND l.zip_code = e.zip_code
                AND l.p1_gender != e.p1_gender
                AND l.household_key != e.household_key
                AND l.hh_verification_date >= e.hh_verification_date
    )
    SELECT  *
    FROM  got_pairs
    WHERE l_hh_type != 1 AND l_hh_type != 2
      AND filler_data != 1 AND filler_data != 2
      AND zip_code in (select * from M_mansfield_02048)
      AND p1_exact_age BETWEEN 25 AND 70
      AND narrowband_income >= '8'
      AND r_num = 1
    Then
    INSERT INTO T3_leads(zip, zip4, firstname, lastname, address, city, state, household_key, hh_type, address_key, income, relationship_status, gender, age, person_key, filler_data, p1_seq_no, p2_seq_no)
    select zip_code, zip_plus_4, p1_givenname, surname, address, city, state, household_key, l_hh_type, address_key, narrowband_income, p1_ms, p1_gender, p1_exact_age, p1_personkey, filler_data, p1_seq_no, p2_seq_no
    from V_Market;

    I had no trouble creating the view exactly as you posted it.  However, be careful here:
    and zip_code in (select * from M_mansfield_02048)
    You should name the column explicitly rather than select *. (Do you really have separate tables for different zip codes?)
    About the performance, it's hard to tell because you haven't posted anything we can use, like explain plans or traces, but simply encapsulating your query in a view is not likely to make it any faster.
    Depending on the size of the subset of rows you're selecting, the /*+ INDEX_FFS */ hint may be doing you more harm than good.
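    On the WITH-clause question: the subquery of an INSERT ... SELECT can itself begin with a WITH clause, so the intermediate view is not needed at all. A condensed, hedged sketch (columns abbreviated; the APPEND hint assumes a direct-path load is acceptable, i.e. no concurrent DML on T3_Leads):

    INSERT /*+ APPEND */ INTO T3_Leads (zip, zip4, household_key)
    WITH got_pairs AS (
      SELECT l.zip_code, l.zip_plus_4, l.household_key,
             ROW_NUMBER() OVER (PARTITION BY l.address_key
                                ORDER BY l.hh_verification_date DESC) AS r_num
      FROM   t3_universe e
      JOIN   t3_universe l ON  l.address_key = e.address_key
                           AND l.zip_code    = e.zip_code
    )
    SELECT zip_code, zip_plus_4, household_key
    FROM   got_pairs
    WHERE  r_num = 1;
    -- a direct-path insert must be committed before the session re-reads the table
    COMMIT;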

  • Update performance on a 38 million record table

    Hi all,
    I'm trying to create a script to update a table that has around 38 million records. The table isn't partitioned and I just have to update one CHAR(1 byte) field and set it to 'N'.
    The database is 10g R2 running on Unix Tru64.
    The script I created loops over a cursor, bulk-collecting 200,000 rowids per pass and doing a FORALL to update the table by ROWID.
    The problem is, in the performance tests that method took about 20 minutes to update 1 million rows, so it should take about 13 hours to update the whole table.
    My question is: is there any way to improve the performance?
    The Script:
    DECLARE
      CURSOR c1 IS
        SELECT ROWID
        FROM RTG.TCLIENTE_RTG;
      TYPE rowidtab IS TABLE OF ROWID;
      d_rowid rowidtab;
      v_char  CHAR(1) := 'N';
    BEGIN
      OPEN c1;
      LOOP
        FETCH c1 BULK COLLECT INTO d_rowid LIMIT 200000;
        EXIT WHEN d_rowid.COUNT = 0;  -- guard: FORALL fails on an empty collection
        FORALL i IN d_rowid.FIRST..d_rowid.LAST
          UPDATE RTG.TCLIENTE_RTG
          SET CLI_VALID_IND = v_char
          WHERE ROWID = d_rowid(i);
        COMMIT;  -- note: committing across fetches risks ORA-01555 on the open cursor
        EXIT WHEN c1%NOTFOUND;
      END LOOP;
      CLOSE c1;
    END;
    Kind Regards,
    Fabio

    I'm just curious... Is this a new varchar2(1) column that has been added to the table? If so, will the value of this column remain 'N' into the future for the majority of the rows?
    Has this column been introduced specifically to support one of the business functions in your application, i.e. will it not be used everywhere the table is currently in use?
    If your answers to the above questions contain many yeses, then why did you choose to add a column that needs to be initialized to 'N' for all existing rows?
    Why not add a new single-column table for this requirement: the single column being the pk-column(s) of the existing table. The meaning: if a pk is present in this new table, then the "CLI_VALID_IND" for that client is 'yes'; if it is not present, then the "CLI_VALID_IND" for that client is 'no'.
    That way you only have to add the new table, and do nothing more. Of course the SQL statements supporting the business logic of this new function will have to use, and maybe join, this new table. But is that really a huge disadvantage?
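    A minimal sketch of that suggestion (the key column is assumed, since the real key of TCLIENTE_RTG isn't shown in the thread):

    -- Hypothetical companion table: only "valid" clients are recorded,
    -- so nothing needs to be updated on the 38-million-row table at all.
    CREATE TABLE tcliente_valid (
      cli_id  NUMBER NOT NULL,    -- assumed primary key of TCLIENTE_RTG
      CONSTRAINT tcliente_valid_pk PRIMARY KEY (cli_id)
    );

    -- "Is this client valid?" becomes an existence test:
    SELECT c.*
    FROM   rtg.tcliente_rtg c
    WHERE  EXISTS (SELECT 1 FROM tcliente_valid v WHERE v.cli_id = c.cli_id);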

  • How can I update a particular column in a 7 million record table that has many conditions to satisfy?

    I am designing a table which I load from different tables using joins. It has a Status column for which there are about 16 different statuses coming from different tables; each has a condition, and if the condition is satisfied the corresponding status should appear in the Status column, so I need to write the query with 16 different cases.
    Now, my question is: what is the best way to write these cases so that all the conditions are satisfied and the data still gets to the table quickly? The data mostly comes from big tables of about 7 million records, and if I write the logic as a CASE expression it will scan the table once per case, about 16 times. How can I do this faster? Can anyone help me out?

    Here is the code I have written to get the data from temp tables, which take records from the 7 million record table filtered to year 2013. It takes more than an hour to run. I am posting the part of the code which runs slow, mainly
    the Status column.
    SELECT
    z.SYSTEMNAME
    --,Case when ZXC.[Subsystem Name] <> 'NULL' Then zxc.[SubSystem Name]
    --else NULL
    --End AS SubSystemName
    , CASE
    WHEN z.TAX_ID IN
    (SELECT DISTINCT zxc.TIN
    FROM .dbo.SQS_Provider_Tracking zxc
    WHERE zxc.[SubSystem Name] <> 'NULL')
    THEN
    (SELECT DISTINCT [Subsystem Name]
    FROM .dbo.SQS_Provider_Tracking zxc
    WHERE z.TAX_ID = zxc.TIN)
    End As SubSYSTEMNAME
    ,z.PROVIDERNAME
    ,z.STATECODE
    ,z.TAX_ID
    ,z.SRC_PAR_CD
    ,SUM(z.SEQUEST_AMT) Actual_Sequestered_Amt
    , CASE
    WHEN z.SRC_PAR_CD IN ('E','O','S','W')
    THEN 'Nonpar Waiver'
    -- --Is Puerto Rico of Lifesynch
    WHEN z.TAX_ID IN
    (SELECT DISTINCT a.TAX_ID
    FROM .dbo.SQS_NonPar_PR_LS_TINs a
    WHERE a.Bucket <> 'Nonpar')
    THEN
    (SELECT DISTINCT a.Bucket
    FROM .dbo.SQS_NonPar_PR_LS_TINs a
    WHERE a.TAX_ID = z.TAX_ID)
    --**Amendment Mailed**
    WHEN z.TAX_ID IN
    (SELECT DISTINCT b.PROV_TIN
    FROM .dbo.SQS_Mailed_TINs_010614 b WITH (NOLOCK )
    where not exists (select * from dbo.sqs_objector_TINs t where b.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN
    (SELECT DISTINCT b.Mailing
    FROM .dbo.SQS_Mailed_TINs_010614 b
    WHERE z.TAX_ID = b.PROV_TIN)
    -- --**Amendment Mailed Wave 3-5**
    WHEN z.TAX_ID In
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Amendment Mailed (3rd Wave)'
    and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN 'Amendment Mailed (3rd Wave)'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Amendment Mailed (4th Wave)'
    and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN 'Amendment Mailed (4th Wave)'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Amendment Mailed (5th Wave)'
    and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN 'Amendment Mailed (5th Wave)'
    -- --**Top Objecting Systems**
    WHEN z.SYSTEMNAME IN
    ('ADVENTIST HEALTH SYSTEM','ASCENSION HEALTH ALLIANCE','AULTMAN HEALTH FOUNDATION','BANNER HEALTH SYSTEM')
    THEN 'Top Objecting Systems'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    INNER JOIN .dbo.SQS_Provider_Tracking obj
    ON h.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Top Objector'
    WHERE z.TAX_ID = h.TAX_ID
    OR h.SMG_ID IS NOT NULL
    )and z.Hosp_Ind = 'H'
    THEN 'Top Objecting Systems'
    -- --**Other Objecting Hospitals**
    WHEN (z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    INNER JOIN .dbo.SQS_Provider_Tracking obj
    ON h.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Objector'
    WHERE z.TAX_ID = h.TAX_ID
    OR h.SMG_ID IS NOT NULL
    )and z.Hosp_Ind = 'H')
    THEN 'Other Objecting Hospitals'
    -- --**Objecting Physicians**
    WHEN (z.TAX_ID IN
    (SELECT DISTINCT
    obj.TIN
    FROM .dbo.SQS_Provider_Tracking obj
    WHERE obj.[Objector?] in ('Objector','Top Objector')
    and z.TAX_ID = obj.TIN)
    and z.Hosp_Ind = 'P')
    THEN 'Objecting Physicians'
    --****Rejecting Hospitals****
    WHEN (z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    INNER JOIN .dbo.SQS_Provider_Tracking obj
    ON h.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Rejector'
    WHERE z.TAX_ID = h.TAX_ID
    OR h.SMG_ID IS NOT NULL
    )and z.Hosp_Ind = 'H')
    THEN 'Rejecting Hospitals'
    --****Rejecting Physciains****
    WHEN
    (z.TAX_ID IN
    (SELECT DISTINCT
    obj.TIN
    FROM .dbo.SQS_Provider_Tracking obj
    WHERE z.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Rejector')
    and z.Hosp_Ind = 'P')
    THEN 'Rejecting Physicians'
    ----**********ALL OBJECTORS SHOULD HAVE BEEN BUCKETED AT THIS POINT IN THE QUERY**********
    -- --**Non-Objecting Hospitals**
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    WHERE
    (z.TAX_ID = h.TAX_ID)
    OR h.SMG_ID IS NOT NULL)
    and z.Hosp_Ind = 'H'
    THEN 'Non-Objecting Hospitals'
    -- **Outstanding Contracts for Review**
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Non-Objecting Bilateral Physicians'
    AND z.TAX_ID = qz.PROV_TIN)
    Then 'Non-Objecting Bilateral Physicians'
    When z.TAX_ID in
    (select distinct
    p.TAX_ID
    from dbo.SQS_CoC_Potential_Mail_List p
    where p.amendmentrights <> 'Unilateral'
    AND z.TAX_ID = p.TAX_ID)
    THEN 'Non-Objecting Bilateral Physicians'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'More Research Needed'
    AND qz.PROV_TIN = z.TAX_ID)
    THEN 'More Research Needed'
    WHEN z.TAX_ID IN (SELECT DISTINCT qz.PROV_TIN FROM [SQS_Mailed_TINs] qz where qz.Mailing = 'Objector' AND qz.PROV_TIN = z.TAX_ID)
    THEN 'ERROR'
    else 'Market Review/Preparing to Mail'
    END AS [STATUS Column]
    Please suggest how this can be improved.
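    One common way to avoid re-scanning the lookup tables once per CASE branch is to resolve each TIN to a bucket once, in priority order, into an indexed temp table and then join to it. Below is a condensed, hypothetical sketch in the same style as the query above (only two of the sixteen buckets are shown, and BigTable stands in for the real 7-million-row source):

    -- Resolve each TAX_ID to its highest-priority bucket once.
    SELECT b.PROV_TIN AS tax_id,
           MIN(b.priority) AS priority
    INTO   #tin_status_src
    FROM (
        SELECT qz.PROV_TIN, 1 AS priority      -- Amendment Mailed (3rd Wave)
        FROM   SQS_Mailed_TINs qz
        WHERE  qz.Mailing = 'Amendment Mailed (3rd Wave)'
        UNION ALL
        SELECT obj.TIN, 2                      -- Objecting Physicians
        FROM   dbo.SQS_Provider_Tracking obj
        WHERE  obj.[Objector?] IN ('Objector', 'Top Objector')
    ) b
    GROUP BY b.PROV_TIN;

    CREATE UNIQUE CLUSTERED INDEX ix_tin ON #tin_status_src (tax_id);

    -- The big table is then scanned once; the status is resolved by a join.
    SELECT z.TAX_ID,
           CASE s.priority
             WHEN 1 THEN 'Amendment Mailed (3rd Wave)'
             WHEN 2 THEN 'Objecting Physicians'
             ELSE 'Market Review/Preparing to Mail'
           END AS [STATUS Column]
    FROM   BigTable z                          -- hypothetical name for the 7M-row source
    LEFT JOIN #tin_status_src s ON s.tax_id = z.TAX_ID;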

  • Performance issue in a custom table

    Hi All,
    I have a ztable used in a program wherein I suspect a performance issue in the selection. It's like:
        SELECT ship_no invoice_no
          INTO TABLE it_ship_no_hist
          FROM zco_cust_hist
          FOR ALL ENTRIES IN it_freight
          WHERE ship_no = it_freight-tknum.
    There are 7 key fields in this table, out of which one ( tknum ) is used in the where condition. The table is without any secondary index.
    For performance purposes, should I create an index with just the field 'tknum'? Can I do that, or should an index be created only along with non-key fields?

    Hi,
    a table has - besides a few exceptions - always one index: the primary key. Its fields are the key fields, in the same order as in the table.
    The primary key is always there and is therefore not displayed under the button 'Indexes'.
    Is tknum a key field? What are the key fields, in the correct order? If it is in the key, and maybe the first one, then it does not make sense to create an index.
    Siegfried

  • Reg update of a 10 million record table from 1 million record table

    I have 2 tables.
    Table 1: 10 million records,
    21 indexes --> 1) acct_id, acct_seq_no -- index_1
    2) c1, c2, c3 -- index_2
    Table 2: 1.5 million records,
    1 index on ( acct_id, acct_seq_no ) - idx_1
    The common keys are acct_id and acct_seq_no.
    I'm updating table 1 from table 2.
    I need to use index_1 from table 1 and idx_1 from table 2.
    How can I make my query use only these particular indexes?
    My query is as follows:
    UPDATE csban_&1 csb
    SET (
    duns_no,
    hdqtrs_duns_no,
    us_ultmt_duns_no,
    sci_id,
    blg_cl_id,
    cl_id
    ) =
    ( SELECT duns_no,
    hdqtrs_duns_no,
    us_ultmt_duns_no,
    sci_id,
    blg_cl_id,
    cl_id
    FROM csban_abi_temp temp
    WHERE csb.acct_id = temp.acct_id
    AND csb.acct_seq_no = temp.acct_seq_no
    AND rownum < 2 )
    WHERE EXISTS
    ( SELECT 1
    FROM csban_abi_temp temp1
    WHERE csb.acct_id = temp1.acct_id
    AND csb.acct_seq_no = temp1.acct_seq_no )
    Do I need to put an index hint here?
    UPDATE csban_&1 csb --???????? /*+ index (csb index_1) */
    Thanks in advance

    Thanks a lot david and rob for sharing the info.
    Please find the details
    SQL> EXPLAIN PLAN FOR
    UPDATE csban_2 csb
    SET (
    duns_no,
    hdqtrs_duns_no,
    us_ultmt_duns_no,
    sci_id,
    blg_cl_id,
    cl_id
    ) =
    ( SELECT duns_no,
    hdqtrs_duns_no,
    us_ultmt_duns_no,
    sci_id,
    blg_cl_id,
    cl_id
    FROM csban_abi_temp temp
    WHERE csb.acct_id = temp.acct_id
    AND csb.acct_seq_no = temp.acct_seq_no
    AND rownum < 2 )
    WHERE EXISTS
    ( SELECT 1
    FROM csban_abi_temp temp1
    WHERE csb.acct_id = temp1.acct_id
    AND csb.acct_seq_no = temp1.acct_seq_no );
    Explained.
    SQL>
    SQL>
    SQL> SELECT * FROM TABLE(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 584770029
    | Id  | Operation                       | Name            | Rows | Bytes | Cost |    TQ |IN-OUT| PQ Distrib |
    |   0 | UPDATE STATEMENT                |                 | 530K |   19M | 8213 |       |      |            |
    |   1 |  UPDATE                         | CSBAN_2         |      |       |      |       |      |            |
    |*  2 |   FILTER                        |                 |      |       |      |       |      |            |
    |   3 |    PX COORDINATOR               |                 |      |       |      |       |      |            |
    |   4 |     PX SEND QC (RANDOM)         | :TQ10000        | 530K |   19M | 8213 | Q1,00 | P->S | QC (RAND)  |
    |   5 |      PX BLOCK ITERATOR          |                 | 530K |   19M | 8213 | Q1,00 | PCWC |            |
    |   6 |       TABLE ACCESS FULL         | CSBAN_2         | 530K |   19M | 8213 | Q1,00 | PCWP |            |
    |*  7 |    INDEX RANGE SCAN             | IDX_CSB_ABI_TMP |    1 |    10 |    3 |       |      |            |
    |*  8 |    COUNT STOPKEY                |                 |      |       |      |       |      |            |
    |   9 |     TABLE ACCESS BY INDEX ROWID | CSBAN_ABI_TEMP  |    1 |    38 |    4 |       |      |            |
    |* 10 |      INDEX RANGE SCAN           | IDX_CSB_ABI_TMP |    1 |       |    3 |       |      |            |
    Predicate Information (identified by operation id):
       2 - filter( EXISTS (SELECT 0 FROM "CSBAN_ABI_TEMP" "TEMP1" WHERE "TEMP1"."ACCT_SEQ_NO"=:B1 AND "TEMP1"."ACCT_ID"=:B2))
       7 - access("TEMP1"."ACCT_ID"=:B1 AND "TEMP1"."ACCT_SEQ_NO"=:B2)
       8 - filter(ROWNUM<2)
      10 - access("TEMP"."ACCT_ID"=:B1 AND "TEMP"."ACCT_SEQ_NO"=:B2)
    Note
    - cpu costing is off (consider enabling it)
    30 rows selected.
    The query completed; it took 1 hr 47 min.
    SQL> SQL> SQL> Updating CSBAN from TEMP table
    old 1: UPDATE /*+ INDEX(acct_id,acct_seq_no) */ csban_&1 csb
    new 1: UPDATE /*+ INDEX(acct_id,acct_seq_no) */ csban_1 csb
    1611807 rows updated.
    Elapsed: 01:47:16.40
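    For what it's worth, an update-where-exists like this can often be rewritten as a single MERGE, which reads the temp table once and joins it to the target instead of probing the temp-table index once per target row. A hedged sketch (assumes (acct_id, acct_seq_no) is unique in the temp table, making the ROWNUM < 2 de-duplication unnecessary; note that update-only MERGE needs 10g, since 9i also demands a WHEN NOT MATCHED branch):

    MERGE INTO csban_2 csb
    USING csban_abi_temp temp
    ON (    csb.acct_id     = temp.acct_id
        AND csb.acct_seq_no = temp.acct_seq_no )
    WHEN MATCHED THEN UPDATE SET
      csb.duns_no          = temp.duns_no,
      csb.hdqtrs_duns_no   = temp.hdqtrs_duns_no,
      csb.us_ultmt_duns_no = temp.us_ultmt_duns_no,
      csb.sci_id           = temp.sci_id,
      csb.blg_cl_id        = temp.blg_cl_id,
      csb.cl_id            = temp.cl_id;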

  • SCD 2 load performance with 60 million records

    Hey guys!
    I'm wondering what the load performance would be for a type 2 SCD mapping based on the framework presented in the transformation guide (pages A1-A20). The dimension has the following characteristics:
    60 million records
    50 columns (including 17 tracked for changes)
    Has anyone come across a similar case?
    Mark or Igor- Is there any benchmark available on SCD 2 for large dimensions?
    Any help would be greatly appreciated.
    Thanks,
    Rene

    Rene,
    It's really very difficult to guesstimate the loading time for a similar configuration. Too many parameters are missing, especially hardware. We are in the process of setting up some real benchmarks later this year - maybe you can give us some interesting scenarios.
    On the other hand, 50-60 million records is not that many these days... so I would personally consider anything more than several hours (on half-decent hardware) too long.
    Regards:
    Igor

  • Performance issue with joins on tables VBAK, VBEP, VBKD and VBAP

    Hi all,
    I have a report with a join on all 4 tables: VBAK, VBEP, VBKD and VBAP.
    The report has performance issues because of this join.
    All the key fields are used for joining the tables, but some non-key fields like vbap-vstel, vbap-abgru and vbep-wadat are also part of the select query and are getting filled.
    Because of these there is a performance issue.
    Is there any way I can improve the performance of the join select query?
    I am trying the "for all entries" clause...
    Kindly provide any alternative if possible.
    Thanks.

    Hi,
    Please perform some of the below steps, as applicable, for performance improvement:
    a) Remove the join on all four tables and join only header and item (VBAK & VBAP).
    b) The code should have separate selects for VBEP and VBKD.
    c) Remove the non-key fields from the where clause. Once you retrieve the data from the database into the internal table, sort the table and delete the entries which do not match the non-key fields like vstel, abgru and wadat.
    d) The last option is to create an index on the VBAP & VBEP tables with respect to the fields vstel, abgru & wadat (not advisable).
    e) The buffering option on database tables is also possible.
    f) Select only the fields into the internal table that are needed for the processing logic, and the select query should contain the field names in the same order as in the database table.
    Hope this helps.
    Regards
    JLN

  • View object performance issue with Oracle seeded tables

    While writing a view object on Oracle seeded tables like MTL_PARAMETERS, it takes a long time to show in the OAF page. I am trying to display all the view object's columns in the detail disclosure of an advanced table. My application takes more than two minutes to display the view columns of a query which returns just 200 rows. Please help me improve performance when my query uses seeded tables.
    This issue happens only with R12 view objects and advanced tables.
    Edited by: vlsn on Jun 24, 2012 11:36 PM

    Hi All,
    Here is the architecture of my application:
    A Java application creates XML from the screen values and then inserts that XML
    into a framework (separate DB schema) table. Java then calls a stored procedure in the same framework DB, and in the SP we have the following steps:
    1. It fetches the XML from the XMLType table and inserts it into a screen-specific XMLType table in the framework DB schema. This table has a trigger which parses the XML and then inserts its values into GTTs which are created in separate product schemas.
    2. It calls the product SP, which contains the business logic. The product SP does the execution and then inserts the response into a response GTT.
    3. The response XML is created using an XML generation function and the response GTT.
    I hope you understand my architecture this time. Now let me know whether GTTs are good in this scenario or not. Also please note that I need the data in the GTTs only during execution and not after; I don't want to do the explicit deletes which I would have to do if I were using normal tables.
    Regards,
    Vikas Kumar

  • Performance issue when selecting from the LIPS table in a program

    Hi expert,
    I have created a pending sales order report, and I am facing a performance problem with the selection from the LIPS table.
    I have tried to use the VLPMA table but performance has not improved. So, is there any need to create a secondary index, and
    if yes, which fields of the LIPS table should I include in the index?
    Please reply.
    Regards,
    Jyotsna

    >
    UmaDave wrote:
    > Hi,
    > 1. Please make use of PACKAGE in your select query; it will definitely improve the performance.
    > 2. Please use the primary index by passing the fields in the where clause in the order in which they appear in the LIPS table.
    > 3. You can also create a secondary index with the fields which you are using in the where clause of the select query, and maintain the fields in the same sequence (where clause and secondary index).
    > 4. If there are any inner joins (more than 3) then reduce them, have a few more select queries instead, and make use of for all entries.
    >
    > This will definitely improve the performance to a great extent.
    >
    > Hope this is helpful.
    > Regards,
    > Uma
    Please do some more research before offering advice:
    PACKAGE SIZE is for memory management, not performance.
    Creating a secondary index is using a hammer to swat a fly and the order in the SELECT is not relevant.
    FAE does not improve performance over a JOIN.
    Rob

  • Performance issue related to BSIS table: please help

    There's a select statement which fetches data from the BSIS table.
    As the only key field used in the where clause is BUKRS, it's consuming a lot of time. Below is the code.
    Could you please tell me how to improve this piece of code?
    I tried to fetch first from the BKPF table based on the selection screen parameter t001-bukrs, and then fetch from BSIS for all entries in BKPF, but it didn't work.
    Your help would be very much appreciated. Thanks in advance.
      SELECT bukrs waers ktopl periv
             FROM t001
             INTO TABLE i_ccode
             WHERE bukrs IN s_bukrs.
    SELECT bukrs hkont gjahr belnr buzei bldat waers blart monat bschl
    shkzg mwskz dmbtr wrbtr wmwst prctr kostl
               FROM bsis
               INTO TABLE i_bsis
               FOR ALL ENTRIES IN i_ccode
               WHERE bukrs EQ i_ccode-bukrs
               AND   budat IN i_date.
    Regards
    Akmal
    Moved by moderator to the correct forum
    Edited by: Matt on Nov 6, 2008 4:10 PM

    Don't go for FOR ALL ENTRIES - it will not help in this case. Do it like below; you will see a lot of performance improvement.
    SELECT bukrs waers ktopl periv
             FROM t001
             INTO TABLE i_ccode
             WHERE bukrs IN s_bukrs.
    sort i_ccode by bukrs.
    LOOP AT i_ccode.
       SELECT bukrs hkont gjahr belnr buzei bldat waers blart         monat bschl shkzg mwskz dmbtr wrbtr wmwst prctr kostl
             FROM bsis
            APPENDING TABLE i_bsis
            WHERE bukrs EQ i_ccode-bukrs
            AND   budat IN i_date.
      ENDLOOP.
    I don't know why performance is better for the above query than for "bukrs IN s_bukrs". This will help, I'm sure; this approach helped me.
    Edited by: Karthik Arunachalam on Nov 6, 2008 8:52 PM

  • Performance issue loading 4000 records from XML

    Hello, I'm trying to upload records into a table, with the SQL statements below, from an XML having content of this type:
    <?xml version="1.0" encoding="UTF-8"?>
    <custom-objects xmlns="http://www.mysite.com/xml/impex/customobject/2006-10-31">
        <custom-object type-id="NEWSLETTER_SUBSCRIBER" object-id="[email protected]">
      <object-attribute attribute-id="customer-no"><value>BLY00000001</value></object-attribute>
      <object-attribute attribute-id="customer_type"><value>registered</value></object-attribute>
            <object-attribute attribute-id="title"><value>Mr.</value></object-attribute>
            <object-attribute attribute-id="first_name"><value>Jean paul</value></object-attribute>
            <object-attribute attribute-id="is_subscribed"><value>true</value></object-attribute>
            <object-attribute attribute-id="last_name"><value>Pennati Swiss</value></object-attribute>
            <object-attribute attribute-id="address_line_1"><value>newsletter ADDRESS LINE 1 data</value></object-attribute>
            <object-attribute attribute-id="address_line_2"><value>newsletter ADDRESS LINE 2 data</value></object-attribute>
            <object-attribute attribute-id="address_line_3"><value>newsletter ADDRESS LINE 3 data</value></object-attribute>
            <object-attribute attribute-id="housenumber"><value>newsletter HOUSENUMBER data</value></object-attribute>
            <object-attribute attribute-id="city"><value>newsletter DD</value></object-attribute>
            <object-attribute attribute-id="post_code"><value>6987</value></object-attribute>
            <object-attribute attribute-id="state"><value>ASD</value></object-attribute>
            <object-attribute attribute-id="country"><value>ES</value></object-attribute>
            <object-attribute attribute-id="phone_home"><value>0044 1234567 newsletter phone_home</value></object-attribute>
            <object-attribute attribute-id="preferred_locale"><value>fr_CH</value></object-attribute>
            <object-attribute attribute-id="exported"><value>true</value></object-attribute>
            <object-attribute attribute-id="profiling"><value>true</value></object-attribute>
            <object-attribute attribute-id="promotions"><value>true</value></object-attribute>
            <object-attribute attribute-id="source"><value>https://www.mysite.com</value></object-attribute>
            <object-attribute attribute-id="source_ip"><value>10.10.1.1</value></object-attribute>
            <object-attribute attribute-id="pr_product_serial_number"><value>000123345678 product serial no.</value></object-attribute>
            <object-attribute attribute-id="pr_purchased_from"><value>Store where product to be registered was purchased</value></object-attribute>
            <object-attribute attribute-id="pr_date_of_purchase"><value></value></object-attribute>
            <object-attribute attribute-id="locale"><value>fr_CH</value></object-attribute> 
        </custom-object>
        <custom-object type-id="NEWSLETTER_SUBSCRIBER" object-id="[email protected]">
       <object-attribute attribute-id="customer-no"><value></value></object-attribute>
       <object-attribute attribute-id="customer_type"><value>unregistered</value></object-attribute>
            <object-attribute attribute-id="title"><value>Mr.</value></object-attribute>
            <object-attribute attribute-id="first_name"><value>Jean paul</value></object-attribute>
            <object-attribute attribute-id="is_subscribed"><value>true</value></object-attribute>
            <object-attribute attribute-id="last_name"><value>Pennati Swiss</value></object-attribute>
            <object-attribute attribute-id="address_line_1"><value>newsletter ADDRESS LINE 1 data</value></object-attribute>
            <object-attribute attribute-id="address_line_2"><value>newsletter ADDRESS LINE 2 data</value></object-attribute>
            <object-attribute attribute-id="address_line_3"><value>newsletter ADDRESS LINE 3 data</value></object-attribute>
            <object-attribute attribute-id="housenumber"><value>newsletter HOUSENUMBER data</value></object-attribute>
            <object-attribute attribute-id="city"><value>newsletter CASLANO</value></object-attribute>
            <object-attribute attribute-id="post_code"><value>6987</value></object-attribute>
            <object-attribute attribute-id="state"><value>TICINO</value></object-attribute>
            <object-attribute attribute-id="country"><value>CH</value></object-attribute>
            <object-attribute attribute-id="phone_home"><value>0044 1234567 newsletter phone_home</value></object-attribute>
            <object-attribute attribute-id="preferred_locale"><value>fr_CH</value></object-attribute>
            <object-attribute attribute-id="exported"><value>true</value></object-attribute>
            <object-attribute attribute-id="profiling"><value>true</value></object-attribute>
            <object-attribute attribute-id="promotions"><value>true</value></object-attribute>
            <object-attribute attribute-id="source"><value>https://www.mysite.com</value></object-attribute>
            <object-attribute attribute-id="source_ip"><value>85.219.17.170</value></object-attribute>
            <object-attribute attribute-id="pr_product_serial_number"><value>000123345678 product serial no.</value></object-attribute>
            <object-attribute attribute-id="pr_purchased_from"><value>Store where product to be registered was purchased</value></object-attribute>
            <object-attribute attribute-id="pr_date_of_purchase"><value></value></object-attribute>
            <object-attribute attribute-id="locale"><value>fr_CH</value></object-attribute> 
        </custom-object>
    </custom-objects>
    I use the following sequence of queries below to do the insert (XML_FILE is passed to the procedure as XMLType) 
    INSERT INTO DW_CUSTOMER.NEWSLETTERS (
       BRANDID,
       CUSTOMER_EMAIL,
       DW_WEBSITE_TAG )
    Select
    p_brandid as BRANDID,
    CUSTOMER_EMAIL,
    p_website
    FROM
    (select XML_FILE from dual) p,
    XMLTable(
    xmlnamespaces(default 'http://www.mysite.com/xml/impex/customobject/2006-10-31'),
    '/custom-objects/custom-object' PASSING p.XML_FILE
    COLUMNS
    customer_email PATH '@object-id'
    ) CUSTOMER_LEVEL1;
    INSERT INTO DW_CUSTOMER.NEWSLETTERS_C_ATT (
       BRANDID, 
       CUSTOMER_EMAIL,
       CUSTOMER_NO, 
       CUSTOMER_TYPE,
       TITLE,
       FIRST_NAME,
       LAST_NAME,
       PHONE_HOME,
       BIRTHDAY,
       ADDRESS1,
       ADDRESS2,
       ADDRESS3,
       HOUSENUMBER,
       CITY,
       POSTAL_CODE,
       STATE,
       COUNTRY,
       IS_SUBSCRIBED,
       PREFERRED_LOCALE,
       PROFILING,
       PROMOTIONS,
       EXPORTED,
       SOURCE,
       SOURCE_IP,
       PR_PRODUCT_SERIAL_NO,
       PR_PURCHASED_FROM,
       PR_PURCHASE_DATE,
       LOCALE,
       DW_WEBSITE_TAG)
        with mainq as (
            SELECT
            CUST_LEVEL1.customer_email as CUSTOMER_EMAIL,
            CUST_LEVEL2.*
            FROM
            (select XML_FILE from dual) p,
            XMLTable(
            xmlnamespaces(default 'http://www.mysite.com/xml/impex/customobject/2006-10-31'),
            '/custom-objects/custom-object' PASSING p.XML_FILE
            COLUMNS
            customer_email PATH '@object-id',
            NEWSLETTERS_C_ATT XMLType PATH 'object-attribute'
            ) CUST_LEVEL1,
            XMLTable(
            xmlnamespaces(default 'http://www.mysite.com/xml/impex/customobject/2006-10-31'),
            '/object-attribute' PASSING CUST_LEVEL1.NEWSLETTERS_C_ATT
            COLUMNS
            attribute_id PATH '@attribute-id',
            thevalue PATH 'value'
            ) CUST_LEVEL2
        )
        select
        p_brandid
        ,customer_email
        ,nvl(max(decode(attribute_id,'customer-no',thevalue)),SET_NEWSL_CUST_ID) customer_no
        ,max(decode(attribute_id,'customer_type',thevalue)) customer_type
        ,max(decode(attribute_id,'title',thevalue)) title
        ,substr(max(decode(attribute_id,'first_name',thevalue)) ,1,64)first_name
        ,substr(max(decode(attribute_id,'last_name',thevalue)) ,1,64) last_name
        ,substr(max(decode(attribute_id,'phone_home',thevalue)) ,1,64) phone_home
        ,max(decode(attribute_id,'birthday',thevalue)) birthday
        ,substr(max(decode(attribute_id,'address_line_1',thevalue)) ,1,100) address_line1
        ,substr(max(decode(attribute_id,'address_line_2',thevalue)) ,1,100) address_line2
        ,substr(max(decode(attribute_id,'address_line_3',thevalue)) ,1,100) address_line3
        ,substr(max(decode(attribute_id,'housenumber',thevalue)) ,1,64) housenumber
        ,substr(max(decode(attribute_id,'city',thevalue)) ,1,128) city
        ,substr(max(decode(attribute_id,'post_code',thevalue)) ,1,64) postal_code
        ,substr(max(decode(attribute_id,'state',thevalue)),1,256) state
        ,substr(max(decode(attribute_id,'country',thevalue)),1,32) country
        ,max(decode(attribute_id,'is_subscribed',thevalue)) is_subscribed
        ,max(decode(attribute_id,'preferred_locale',thevalue)) preferred_locale
        ,max(decode(attribute_id,'profiling',thevalue)) profiling
        ,max(decode(attribute_id,'promotions',thevalue)) promotions
        ,max(decode(attribute_id,'exported',thevalue)) exported   
        ,substr(max(decode(attribute_id,'source',thevalue)),1,256) source   
        ,max(decode(attribute_id,'source_ip',thevalue)) source_ip       
        ,substr(max(decode(attribute_id,'pr_product_serial_number',thevalue)),1,64) pr_product_serial_number
        ,substr(max(decode(attribute_id,'pr_purchased_from',thevalue)),1,64) pr_purchased_from   
        ,substr(max(decode(attribute_id,'pr_date_of_purchase',thevalue)),1,32) pr_date_of_purchase
        ,max(decode(attribute_id,'locale',thevalue)) locale
        ,p_website   
        from
        mainq
        group by customer_email, p_website
    I CANNOT MANAGE TO INSERT 4000 records in less than 30 minutes!
    Can you help or advise how to reduce this to reasonable timings?
    Thanks

    Simplified example on a few attributes :
    -- INSERT INTO tmp_xml VALUES ( xml_file );
    INSERT ALL
      INTO newsletters (brandid, customer_email, dw_website_tag)
      VALUES (p_brandid, customer_email, p_website)
      INTO newsletters_c_att (brandid, customer_email, customer_no, customer_type, title, first_name, last_name)
      VALUES (p_brandid, customer_email, customer_no, customer_type, title, first_name, last_name)
    SELECT o.*
    FROM tmp_xml t
       , XMLTable(
           xmlnamespaces(default 'http://www.mysite.com/xml/impex/customobject/2006-10-31')
         , '/custom-objects/custom-object'
           passing t.object_value
           columns customer_email varchar2(256) path '@object-id'
                 , customer_no    varchar2(256) path 'object-attribute[@attribute-id="customer-no"]/value'
                 , customer_type  varchar2(256) path 'object-attribute[@attribute-id="customer_type"]/value'
                 , title          varchar2(256) path 'object-attribute[@attribute-id="title"]/value'
                 , first_name     varchar2(64)  path 'object-attribute[@attribute-id="first_name"]/value'
                 , last_name      varchar2(64)  path 'object-attribute[@attribute-id="last_name"]/value'
         ) o

  • Performance issue with sys.user_history$ table

    Hi,
    I am investigating performance for one of my client's databases (which is at 9.2.0.8) as they are experiencing intermittent poor response. In the Statspack report (Top SQL section) I can see that a catalog table called SYS.USER_HISTORY$ is being accessed very frequently. Now I understand that this would be a result of password limits being set in users' profile but each time the table is accessed, it would appear to incur a full table scan resulting in over 7,500 gets each time. The total buffer gets (and physical blocks read) from this table account for a high percentage of the total and since the highest waits are buffer and I/O related this must be a major factor. Here is an extract from the Statspack report:
    Buffer Gets   Executions   Gets per Exec   %Total   CPU Time (s)   Elapsd Time (s)   Hash Value
    2,327,138     316          7,364.4         7.8      190.22         4889.61           3236020785
    select password_date from user_history$ where user# = :1 order by password_date desc
    2,320,524     313          7,413.8         7.8      199.41         4278.44           3584552880
    delete from user_history$ where password_date < :1 and user# = :2
    2,272,260     308          7,377.5         7.6      169.36         3453.12           822812381
    select 1 from dual where exists (select password from user_history$ where password = :1 and user# = :2)
    Physical Reads   Executions   Reads per Exec   %Total   CPU Time (s)   Elapsd Time (s)   Hash Value
    1,448,689        316          4,584.5          20.6     190.22         4889.61           3236020785
    select password_date from user_history$ where user# = :1 order by password_date desc
    1,269,172        313          4,054.9          18.1     199.41         4278.44           3584552880
    delete from user_history$ where password_date < :1 and user# = :2
    1,206,906        308          3,918.5          17.2     169.36         3453.12           822812381
    select 1 from dual where exists (select password from user_history$ where password = :1 and user# = :2)
    Is there any way to improve access to this table? Since it's a catalog table, I presume it would not be acceptable to add an index to it, but, for example, would it be acceptable to assign it to a suitably sized KEEP buffer pool, which should at least reduce the amount of physical I/O incurred?
    Any ideas would be appreciated.
    Regards,
    Ian Brennan

    Hi,
    Here is the remaining information which I have now gathered:-
    select count(*) from dba_users;
    24681
    select count(*) from sys.user_history$;
    1258133
    select profile, limit from dba_profiles where resource_name = 'PASSWORD_REUSE_TIME';
    PROFILE LIMIT
    DEFAULT UNLIMITED
    PRS2_DEFAULT_PROFILE 365
    select bytes from dba_segments where SEGMENT_NAME='USER_HISTORY$';
    61865984
    explain plan for
    select password_date from user_history$ where user# = :1 order by password_date desc;
    SELECT STATEMENT CHOOSE  Cost: 647  Bytes: 913  Cardinality: 83
      2 SORT ORDER BY  Cost: 647  Bytes: 913  Cardinality: 83
        1 TABLE ACCESS FULL SYS.USER_HISTORY$  Cost: 638  Bytes: 913  Cardinality: 83
    Any further thoughts?
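    On the KEEP-pool idea from the original post: assuming a KEEP pool is configured (buffer_pool_keep / db_keep_cache_size) and sized above the ~60 MB segment, the assignment itself is a one-liner; since this is a SYS-owned object, it would be prudent to clear the change with Oracle Support first:

    -- Hedged sketch: cache the segment in the KEEP pool so the repeated
    -- full scans hit memory instead of disk.
    ALTER TABLE sys.user_history$ STORAGE (BUFFER_POOL KEEP);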

  • Create batches for million record table

    I have taken this out for some reasons...
    Thanks for your support guys...
    Edited by: Srichan on Jun 17, 2009 2:19 PM

    Well, OK then:
    SQL> select * from t
      2  /
           COL
             1
            20
            56
            78
            80
            82
            88
            91
            95
            99
           100
           110
    12 rows selected.
    Elapsed: 00:00:00.03
    SQL> select min(col) stkey
      2  ,      max(col) endkey
      3  ,      count(*) recs
      4  ,      grpid
      5  from ( select col
      6         ,      ntile(5) over (order by col) grpid
      7         from t
      8       )
      9  group by grpid
    10  order by stkey;
         STKEY     ENDKEY       RECS      GRPID
             1         56          3          1
            78         82          3          2
            88         91          2          3
            95         99          2          4
           100        110          2          5
    5 rows selected.
    Elapsed: 00:00:00.01
    SQL> select min(col) stkey
      2   ,      max(col) endkey
      3   ,      count(*) recs
      4   ,      grpid
      5   from ( select col
      6  ,      ntile(3) over (order by col) grpid
      7          from t
      8        )
      9   group by grpid
    10   order by stkey;
         STKEY     ENDKEY       RECS      GRPID
             1         78          4          1
            80         91          4          2
            95        110          4          3
    3 rows selected.
    Elapsed: 00:00:00.06
    edit
    All credits to Salim here, I like the 'rownum-thing' to get the sets into 5-5-2, very nice and just had to run it ;) :
    SQL> select min(col) stkey
      2  ,      max(col) endkey
      3  ,      count(*) recs
      4  ,      rn
      5  from ( select col
      6        ,       trunc((rownum -1)/5)+1 rn
      7         from t
      8       )
      9  group by rn
    10  order by stkey;
         STKEY     ENDKEY       RECS         RN
             1         80          5          1
            82         99          5          2
           100        110          2          3
    Edited by: hoek on Jun 16, 2009 6:24 PM
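    To show how such ranges are typically consumed (a hypothetical sketch, since the original question was removed): each (stkey, endkey) pair bounds one pass over the big table, so work can be committed batch by batch:

    BEGIN
      FOR b IN (SELECT MIN(col) stkey, MAX(col) endkey
                FROM (SELECT col, NTILE(5) OVER (ORDER BY col) grpid FROM t)
                GROUP BY grpid)
      LOOP
        -- placeholder batch action; a real job would update/copy/archive here
        UPDATE t SET col = col WHERE col BETWEEN b.stkey AND b.endkey;
        COMMIT;
      END LOOP;
    END;
    /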
