Performance when joining large tables

Hi,
I have to maintain a report which gets data from many large tables, as below. Currently it uses a single JOIN statement across all 8 tables, which causes very slow performance.
SELECT
    INTO CORRESPONDING FIELDS OF TABLE equip
    FROM caufv
    INNER JOIN afih   ON afih~aufnr   = caufv~aufnr
    INNER JOIN iloa   ON iloa~iloan   = afih~iloan
    INNER JOIN iflos  ON iflos~tplnr  = iloa~tplnr
    INNER JOIN iflotx ON iflotx~tplnr = iflos~tplnr
    INNER JOIN vbak   ON vbak~aufnr   = caufv~aufnr
    INNER JOIN equz   ON equz~equnr   = afih~equnr
    INNER JOIN equi   ON equi~equnr   = equz~equnr
    INNER JOIN vbap   ON vbap~vbeln   = vbak~vbeln
    WHERE ...
Please suggest another approach; I'm a newbie in ABAP. I tried using FOR ALL ENTRIES IN but it did not work. I would appreciate it if you could leave me some sample lines of code.
Thanks,

Hi,
I would suggest not using an inner join across that many tables (8), especially when they are huge. Use FOR ALL ENTRIES wherever possible, but always check first that the base table is not initial. If the inner join cannot be avoided entirely, try to minimise it: use the inner join only between header and item tables. A sketch follows below.
Hope this helps you solve your problem. Feel free to ask if you have any doubts.
Regards,
Vijay
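
(A minimal sketch of this pattern, keeping the inner join to the header/item pair CAUFV/AFIH and fetching the next table with FOR ALL ENTRIES. The field lists, the internal tables lt_orders/lt_iloa and the select-option s_aufnr are illustrative assumptions, not the original report's names.)

TYPES: BEGIN OF ty_order,
         aufnr TYPE caufv-aufnr,
         iloan TYPE afih-iloan,
         equnr TYPE afih-equnr,
       END OF ty_order,
       BEGIN OF ty_iloa,
         iloan TYPE iloa-iloan,
         tplnr TYPE iloa-tplnr,
       END OF ty_iloa.

DATA: lt_orders TYPE STANDARD TABLE OF ty_order,
      lt_iloa   TYPE STANDARD TABLE OF ty_iloa.

* 1. Keep the inner join small: header and item only.
SELECT caufv~aufnr afih~iloan afih~equnr
  INTO CORRESPONDING FIELDS OF TABLE lt_orders
  FROM caufv INNER JOIN afih ON afih~aufnr = caufv~aufnr
  WHERE caufv~aufnr IN s_aufnr.    "s_aufnr: hypothetical select-option

* 2. Run FOR ALL ENTRIES only against a non-empty base table;
*    with an empty table the WHERE clause is dropped and the
*    whole database table is read.
IF lt_orders IS NOT INITIAL.
  SELECT iloan tplnr
    INTO CORRESPONDING FIELDS OF TABLE lt_iloa
    FROM iloa
    FOR ALL ENTRIES IN lt_orders
    WHERE iloan = lt_orders-iloan.
ENDIF.

* 3. Repeat step 2 for IFLOS/IFLOTX, EQUZ/EQUI and VBAK/VBAP,
*    then merge the internal tables in memory (SORT plus
*    READ TABLE ... BINARY SEARCH).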

Similar Messages

  • Why oh why, weird performance on joining large tables

    Hello.
    I have a large table containing dates and customer data, organised as:
    DATE CUSTOMER_ID INFOCOLUMN1 INFOCOLUMN2 etc...
    Rows per date are a couple of million.
    What I'm trying to do is to make a comparison between date a and date b and track changes in the database.
    When I do a:
    SELECT stuff
    FROM table t1
    INNER JOIN table t2
      ON t1.CUSTOMER_ID = t2.CUSTOMER_ID
    WHERE t1.date = TO_DATE(SOME_DATE)
    AND t2.date = TO_DATE(SOME_OTHER_DATE)
    I get a result in about 40 seconds, which is acceptable.
    Then I try doing:
    SELECT stuff
    FROM (SELECT TO_DATE(LAST_DAY(ADD_MONTHS(SYSDATE, 0 - r.l))) AS DATE FROM dual INNER JOIN (SELECT level l FROM dual CONNECT BY LEVEL <= 1) r ON 1 = 1) time
    INNER JOIN table t1
      ON t1.date = time.date
    INNER JOIN table t2
      ON t1.CUSTOMER_ID = t2.CUSTOMER_ID
    WHERE t2.date = ADD_MONTHS(time.date, -1)
    I.e., I generate a date field in a subselect, which I then use to join the tables.
    When I try that the query takes an hour or two to complete with the same resultset as the first example.
    The only difference is that in the first case I give the dates literally, while in the other case I generate them in the subselect. It's the same dates and they are formatted as dates in both cases.
    Any ideas?
    Thanks

    "When I try that the query takes an hour or two to complete with the same resultset as the first example."
    If you get the same results, then why change the query to the second one?
    "The only difference is that in the first case I give the dates literally but in the other case I generate them in the subselect. It's the same dates and they are formatted as dates in both cases."
    Dates are dates... the formatting is just "pretty".
    This:
    select to_date(last_day(add_months(sysdate
                                      ,0 - r.l)))
      from dual
    inner join (select level l from dual connect by level <= 1) r on 1 = 1
    doesn't make much sense... what is it supposed to do?
    (by the way: you are doing a TO_DATE on a DATE...)
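    (For what it's worth: CONNECT BY LEVEL <= 1 produces exactly one row with l = 1, so the whole generator collapses to a plain expression; and since LAST_DAY already returns a DATE, no TO_DATE is needed. A sketch:)
    select last_day(add_months(sysdate, -1)) as snapshot_date
      from dual;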

  • MKPF & MSEG Performance issue : Join on table  MKPF & MSEG  taking too much time ..

    Hello Experts,
    I have an issue: we are executing a custom report in which I used an inner join on tables MKPF and MSEG. Sometimes the join statement takes 9-10 minutes to execute, and sometimes it executes within 1-2 minutes with the same test data.
    I am not able to understand what is actually happening.
    Please help.
    code :
       SELECT f~mblnr f~mjahr f~usnam f~bktxt  p~bukrs
        INTO TABLE itab
        FROM mkpf AS f INNER JOIN mseg AS p
            ON f~mblnr = p~mblnr AND f~mjahr = p~mjahr
         WHERE f~vgart = 'WE'
           AND f~budat IN p_budat
           AND f~usnam IN p_sgtxt
           AND p~bwart IN ('101','105')
           AND p~werks IN p_werks
           AND p~lgort IN p_lgort.
    Regards,
    Dipendra Panwar.

    Hi Dipendra,
    if you run a report twice in a row with the same selection data, the second run should be faster, because some of the data remain in memory and need not be fetched from the database again. The same holds for the third and further runs, until the data in the SAP memory are displaced by other programs.
    For performance traces you should therefore always measure a first run.
    Regards,
    Klaus

  • Performing multiple joins on table

    Hi all,
       I would like to know whether performing joins on more than 10 tables causes performance issues. If so, is there a limit on the number of joins that can be done?
    Is there another way out for this problem?

    Rob Burbank wrote:
    > Thomas Zloch wrote:
    > > If I had to bet, I'd say no, not included... do you have time to try it out with a little test program?
    > Probably not so little, but maybe sometime. I know there are standard SAP views with upwards of 13 or 14 tables, but I doubt if they would be compatible for joining.
    > Rob
    How about a self join?

  • Slow performance of query on large table - how to optimize for performance

    Hi Friends,
    I am an Oracle DBA and have recently been asked to administer an Oracle HRMS database as a substitute for the HRMS DBA, who has gone on vacation.
    It has been reported to me that a few queries are taking a long time to refresh and populate the forms. After some investigation I found that the table HR.PAY_ELEMENT_ENTRY_VALUES_F has more than 15 million rows in it. The storage parameters specified for the table are the Oracle defaults. The table has grown very large, and even a COUNT(*) takes more than 7 minutes to respond.
    My question is: is there any way it can be tuned for better performance without an overhaul? Is it normal for this table to grow this big for 6,000 employees' data over 4 years?
    Any response/help in this regard will be appreciated. You may please answer me at [email protected]
    Thanks in advance.
    Rajeev.

    That was a good suggestion by Karthick_Arp, but there is a chance that it is not logically identical depending on the data (I believe that is the reason for his warning).
    Try this rewrite, which moves T6 to an inline view and uses the DECODE function to determine if the one row returned from T6 should be used:
    SELECT
      ASSOC.NAME_FIRST || ' ' || ASSOC.NAME_LAST AS CLIENT_MANAGER
    FROM
      T1 ASSOC,
      T2 CE,
      T3 AA,
      T4 ACT,
      T5 CC,
      (SELECT
        CA.ASSOC_ID
      FROM
        T6 CA
      WHERE
        CA.COMP_ID = :P_ENT_ID
        AND CA.CD_CODE IN ('CMG','RCM','BCM','CCM','BAE')
      GROUP BY
        CA.ASSOC_ID) CA
    WHERE
      CE.ENT_ID = ACT.PRIMARY_ENT_ID(+)
      AND CE.ENT_ID = :P_ENT_ID
      AND ASSOC.ID = DECODE(AA.ASSOC_ID, NULL, CA.ASSOC_ID, AA.ASSOC_ID)
      AND NVL(ACT.ACTIVITY_ID, 0) = NVL(AA.ACTIVITY_ID, 0)
      AND ASSOC.BK_CODE = CC.CPY_NO
      AND ASSOC.CENTER = CC.CCT_NO
      AND AA.ROLE_CODE IN ('CMG', 'RCM', 'BCM', 'CCM', 'BAE');
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • Performance for joining 9 custom tables with native SQL?

    Hi Expert,
    I need your opinion regarding the performance of joining 9 tables with native SQL. Recently I had to tune a customized extraction cost report. This report extracts about 10 million material cost records every day.
    The current program populates the condition data into a customized table and then joins all of the tables with native SQL to get the data.
    SELECT /*+ ordered use_hash(mst,pg,rg,ps,rs,dpg,drg,dps,drs) */
                mst.werks, ....................................
    FROM
                sapsr3.zab_info mst,
                sapsr3.zab_pc pg,
                sapsr3.zab_rc rg,
                sapsr3.zab_pc ps,
                sapsr3.zab_rc rs,
                sapsr3.zab_g_pc dpg,
                sapsr3.zab_g_rc drg,
                sapsr3.zab_s_pc dps,
                sapsr3.zab_s_rc drs
            WHERE mst.zseq_no = :p_rep_run_id
            AND mst.werks = :p_werks
            AND mst.mandt = rg.mandt(+)
            AND mst.ekorg = rg.ekorg(+)
            AND mst.lifnr = rg.lifnr(+)
            AND mst.matnr = rg.matnr(+)
            ...............................................   until all tables are joined (9 tables)
            AND ps.mandt = dps.mandt(+)
            AND ps.knumh = dps.knumh(+)
            AND ps.zseq_no = dps.zseq_no(+)
            AND COALESCE (dps.kbetr, drs.kbetr, dpg.kbetr, drg.kbetr) <> 0
    It seems the query asks the database to use hash joins. Would that burden the database and impact other SAP processes?
    Please advise
    Thank You and Best Regards

    You can only argue from measurements, and that is not the case here.
    Coming from the code, I can only see that you do not understand it at all, so better leave it as it is. It is not a hash table, but a hash join on these tables.
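    (For context: the USE_HASH hint merely nominates the join method; a hash join builds an in-memory hash table on the smaller input and probes it while scanning the larger one. A generic sketch with hypothetical tables small_table/big_table:)
    select /*+ use_hash(b) */ a.id, b.amount
      from small_table a
      join big_table   b on b.id = a.id;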

  • Join multiple tables across separate databases

    I needed to perform a join on multiple tables in two separate databases. I am using Sybase ASE 12.5 and jConnect 5.5 JDBC driver.
    Table A is in DB1 and Table B is in DB2. Both DB1 and DB2 reside on the same database server.
    I have set up JNDI bound datasources DS1 for DB1 and DS2 for DB2.
    If the queries involved single tables or multiple tables within the same database, it would be simple. But I am not seeing any way to perform a join between tables A and B, which reside in separate databases. The datasources DS1 and DS2 are associated with DB1 and DB2 respectively, so I am not sure whether there is any way to perform joins using two data sources.
    One alternative is a stored procedure in one database that refers to the table in the other database for the join. But I may not be allowed to modify the database; currently I am allowed to perform READ-ONLY queries only.
    So I am looking for any and all options. Please advise.
    Thanks!

    Two choices...
    One: find the syntax in DB2 that allows you to do what you want there. That would often be something like:
    select a.myfield
    from database1..table1 a, database2..table2 b
    where a.id = b.id
    (In Sybase ASE the qualifier is database.owner.table; the owner can be omitted, giving the double-dot form.)
    Obviously the database itself must support this. If it doesn't then you have choice two.
    You extract the data from database1 using java for table1.
    You extract the data from database2 using java for table2.
    You write java code that merges the data from both sources.

  • Simultaneous hash joins of the same large table with many small ones?

    Hello
    I've got a typical data warehousing scenario where a HUGE_FACT table is to be joined with numerous very small lookup/dimension tables for data enrichment. The joins with these small lookup tables are mutually independent, which means that the result of any of these joins is not needed to perform another join.
    So this is a typical scenario for a hash join: the lookup table is converted into a hash map in RAM, fits there without drama because it's small, and a single pass over HUGE_FACT suffices to get the results.
    The problem is that, as far as I can see in the query plan, these hash joins are not executed simultaneously but one after another, which would force Oracle to do a full scan of HUGE_FACT (or of some intermediate enriched form of it) as many times as there are joins.
    Questions:
    - is my interpretation correct that the mentioned joins are sequential, not simultaneous?
    - if this is the case, is there any possibility to force Oracle to perform these joins simultaneously (building more than one hashed map in memory and doing the single pass over the HUGE_FACT while looking up in all of these hashed maps for matches)? If so, how to do it?
    Please note that the parallel execution of a single join at a time is not the matter of the question.
    Database version is 10.2.
    Thank you very much in advance for any response.

    user13176880 wrote:
    "Questions:
    - is my interpretation correct that the mentioned joins are sequential, not simultaneous?"
    Correct. But why do you think this is an issue? Because of this:
    "which renders Oracle to do the full scan of the HUGE_FACT (or any intermediary enriched form of it) as many times as there are joins."
    That is not (or at least should not be) true. Oracle does one pass of the big table, and then sequentially joins to each of the hash maps (of each of the smaller tables).
    If you show us the execution plan, we can be sure of this.
    "- if this is the case, is there any possibility to force Oracle to perform these joins simultaneously (building more than one hashed map in memory and doing the single pass over the HUGE_FACT while looking up in all of these hashed maps for matches)? If so, how to do it?"
    Yes there is. But again, you should not need to resort to such a solution. What you can do is use subquery factoring (the WITH clause) in conjunction with the MATERIALIZE hint to first construct the cartesian join of all of the smaller (dimension) tables, and then join the big table to that.
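    (A sketch of that approach with hypothetical names huge_fact, dim1, dim2; the MATERIALIZE hint asks Oracle to build the dimension product once, after which the fact table is passed over a single time:)
    with dims as (
      select /*+ materialize */
             d1.d1_key, d1.d1_attr, d2.d2_key, d2.d2_attr
        from dim1 d1
       cross join dim2 d2
    )
    select f.sales_amt, d.d1_attr, d.d2_attr
      from huge_fact f
      join dims d
        on f.d1_key = d.d1_key
       and f.d2_key = d.d2_key;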

  • How to improve Query performance on large table in MS SQL Server 2008 R2

    I have a table with 20 million records. What is the best option to improve query performance on this table: partitioning the table across filegroups, or splitting it into multiple smaller tables?

    Hi bala197164,
    First, I want to point out that both partitioning the table across filegroups and splitting the table into multiple smaller tables can improve query performance; they fit different situations.
    For example, suppose our table has one hundred columns and some columns are not directly related to the table's subject (say, a userinfo table with address_street, address_zip and address_province columns). In that case we can create a new table named Address and add a foreign key in the userinfo table referencing it. By splitting a large table into smaller, individual tables like this, queries that access only a fraction of the data can run faster because there is less data to scan.
    The other situation is when the table's records can be grouped easily, for example by a year column that stores the product release date; then we can partition the table across filegroups to improve query performance. Usually we apply both methods together. Additionally, we can add indexes to the table to improve query performance. For more details, please refer to the following documents:
    Partitioning:
    http://msdn.microsoft.com/en-us/library/ms178148.aspx
    CREATE INDEX (Transact-SQL):
    http://msdn.microsoft.com/en-us/library/ms188783.aspx
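    (A minimal T-SQL sketch of the partitioning approach, with hypothetical names pfYear/psYear/dbo.Product; note that table partitioning in SQL Server 2008 R2 requires Enterprise edition:)
    CREATE PARTITION FUNCTION pfYear (int)
        AS RANGE RIGHT FOR VALUES (2010, 2011, 2012);
    GO
    CREATE PARTITION SCHEME psYear
        AS PARTITION pfYear ALL TO ([PRIMARY]);  -- or map each range to its own filegroup
    GO
    CREATE TABLE dbo.Product (
        ProductID   int           NOT NULL,
        ReleaseYear int           NOT NULL,
        Name        nvarchar(100) NULL,
        -- the partitioning column must be part of the clustered key
        CONSTRAINT PK_Product PRIMARY KEY CLUSTERED (ReleaseYear, ProductID)
    ) ON psYear (ReleaseYear);
    GO
    -- Indexes created on the table are partition-aligned automatically:
    CREATE NONCLUSTERED INDEX IX_Product_Name ON dbo.Product (Name);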
    Allen Li
    TechNet Community Support

  • Performance Tuning Query on Large Tables

    Hi All,
    I am new to the forums and have a very specific use case which requires performance tuning, but there are some limitations on what changes I am actually able to make to the underlying data. Essentially I have two tables which contain what should be identical data, but for reasons of a less than optimal operational nature, the datasets differ in a number of ways.
    Essentially I am querying call detail record data. Table 1 (referred to in my test code as TIME_TEST) is what I want to consider the master data, or the "ultimate truth" if you will. Table 1 contains the CALLED_NUMBER, which is always in a consistent format. It also contains the CALLED_DATE_TIME and DURATION (in seconds).
    Table 2 (TIME_TEST_COMPARE) is a reconciliation table taken from a different source, but there are no consistent unique identifiers or PK-FK relations. This table contains a wide array of differing CALLED_NUMBER formats, hugely different from those in the master table. There is also scope for the time stamp to be out by up to 30 seconds - crazy, I know, but that's just the way it is and I have no control over the source of this data. Finally, the duration (in seconds) can be out by up to 5 seconds +/-.
    I want to create a join returning all of the master data, matching the master table to the reconciliation table on CALLED_NUMBER / CALLED_DATE_TIME / DURATION. I have written a query which works from a logic perspective but performs very badly (master table = 200,000 records, rec table = 6,000,000+ records). I am able to add partitions (currently the tables are partitioned by month of CALLED_DATE_TIME) and can also apply indexes. I cannot make any changes at this time to the ETL process loading the data into these tables.
    I paste below the create table and insert scripts to recreate my scenario, plus the query that I am using. Any practical suggestions for query/table optimisation would be greatly appreciated.
    Kind regards
    Mike
    -------------- NOTE: ALL DATA HAS BEEN DE-SENSITISED
    /* --- CODE TO CREATE AND POPULATE TEST TABLES ---- */
    --CREATE MAIN "TIME_TEST" TABLE: THIS TABLE HOLDS CALLED NUMBERS IN A SPECIFIED/PRE-DEFINED FORMAT
    CREATE TABLE TIME_TEST ( CALLED_NUMBER VARCHAR2(50 BYTE),
                                            CALLED_DATE_TIME DATE, DURATION NUMBER );
    COMMIT;
    -- CREATE THE COMPARISON TABLE "TIME_TEST_COMPARE": THIS TABLE HOLDS WHAT SHOULD BE (BUT ISN'T) IDENTICAL CALL DATA.
    -- THE DATA CONTAINS DIFFERING NUMBER FORMATS, SLIGHTLY DIFFERENT CALL TIMES (ALLOW +/-60 SECONDS - THIS IS FOR A GOOD, ALBEIT UNHELPFUL, REASON)
    -- AND DURATIONS (ALLOW +/- 5 SECS)                                        
    CREATE TABLE TIME_TEST_COMPARE ( CALLED_NUMBER VARCHAR2(50 BYTE),
                                       CALLED_DATE_TIME DATE, DURATION NUMBER );
    COMMIT;
    --CREATE INSERT DATA FOR THE MAIN TEST TIME TABLE
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 06:10:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 202);
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 08:10:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 19);
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 07:10:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 35);
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 09:10:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 30);
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 06:18:47 AM', 'MM/DD/YYYY HH:MI:SS AM'), 6);
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 06:20:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 20);
    COMMIT;
    -- CREATE INSERT DATA FOR THE TABLE WHICH NEEDS TO BE COMPARED:
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 06:10:51 AM', 'MM/DD/YYYY HH:MI:SS AM'), 200);
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '00447721345675', TO_DATE( '11/09/2011 08:10:59 AM', 'MM/DD/YYYY HH:MI:SS AM'), 21);
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '07721345675', TO_DATE( '11/09/2011 07:11:20 AM', 'MM/DD/YYYY HH:MI:SS AM'), 33);
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '+447721345675', TO_DATE( '11/09/2011 09:10:01 AM', 'MM/DD/YYYY HH:MI:SS AM'), 33);
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '+447721345675#181345', TO_DATE( '11/09/2011 06:18:35 AM', 'MM/DD/YYYY HH:MI:SS AM')
    , 6);
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '004477213456759777799', TO_DATE( '11/09/2011 06:19:58 AM', 'MM/DD/YYYY HH:MI:SS AM')
    , 17);
    COMMIT;
    /* --- QUERY TO UNDERTAKE MATCHING WHICH REQUIRES OPTIMISATION --------- */
    SELECT MAIN.CALLED_NUMBER AS MAIN_CALLED_NUMBER, MAIN.CALLED_DATE_TIME AS MAIN_CALL_DATE_TIME, MAIN.DURATION AS MAIN_DURATION,
         COMPARE.CALLED_NUMBER AS COMPARE_CALLED_NUMBER, COMPARE.CALLED_DATE_TIME AS COMPARE_CALLED_DATE_TIME,
         COMPARE.DURATION AS COMPARE_DURATION
    FROM
    (
    SELECT CALLED_NUMBER, CALLED_DATE_TIME, DURATION
    FROM TIME_TEST
    ) MAIN
    LEFT JOIN
    (
    SELECT CALLED_NUMBER, CALLED_DATE_TIME, DURATION
    FROM TIME_TEST_COMPARE
    ) COMPARE
    ON INSTR(COMPARE.CALLED_NUMBER, MAIN.CALLED_NUMBER) <> 0
    AND MAIN.CALLED_DATE_TIME BETWEEN COMPARE.CALLED_DATE_TIME-(60/86400) AND COMPARE.CALLED_DATE_TIME+(60/86400)
    AND MAIN.DURATION BETWEEN COMPARE.DURATION-5 AND COMPARE.DURATION+5;

    What does your execution plan look like?
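    (On Oracle it can be captured with DBMS_XPLAN; a minimal sketch using the thread's own TIME_TEST table - paste the full matching query in place of the placeholder:)
    EXPLAIN PLAN FOR
    SELECT COUNT(*) FROM TIME_TEST;  -- substitute the full matching query here
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);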

  • Joining two large tables breaks connections?

    I am doing an inner join on two large tables (172,818 and 146,215 rows) and it breaks the connection. Using Oracle 8.1.7.0.0.
    ERROR at line 1:
    ORA-03113: end-of-file on communication channel
    Originally I was trying an ALTER TABLE to add constraints, and it gave the error as well:
    ALTER TABLE a ADD (CONSTRAINT a_FK
    FOREIGN KEY (a_ID, a_VERSION)
    REFERENCES b(b_ID, b_VERSION)
    DEFERRABLE INITIALLY IMMEDIATE)
    This also gives the same error. The trace file does not make sense to me.

    Thanks for the reply, no luck yet.
    SQL> show parameter optimizer_max_permutations
    NAME                           TYPE     VALUE
    ------------------------------ -------- -----
    optimizer_max_permutations     integer  80000
    SQL> show parameter resource_limit
    NAME                           TYPE     VALUE
    ------------------------------ -------- -----
    resource_limit                 boolean  FALSE
    SQL>

  • Join between 2 large tables

    I've got 2 tables: pay_run_results (+/- 35,000,000 records) and XX_PAY_COSTS (25,000,000 records).
    When I join those tables I get an error: ORA-01652: unable to extend temp segment by 128 in tablespace TEMP1.
    So I thought the temp space would be too small, but a DBA told me the temp space is 4.4 GB.
    To reduce the total number of records I join another table; see below.
    select
    from
    pay_run_results,
    XX_PAY_COSTS,
    PAY_PAYROLL_ACTIONS
    where
    PAY_RUN_RESULTS.RUN_RESULT_ID = XX_PAY_COSTS.run_result_id
    and PAY_PAYROLL_ACTIONS.PAYROLL_ACTION_ID = XX_PAY_COSTS.PAYROLL_ACTION_ID
    and PAY_PAYROLL_ACTIONS.ACTION_TYPE ='C'
    and XX_PAY_COSTS.DEBIT_OR_CREDIT = 'C'
    When running the above query it took 44 minutes to complete, but I did not get the ORA-01652 :)
    So I have 2 questions:
    1) Why do you get an ORA-01652 when there is no sort in the query? Is the temp space also used to temporarily store the result of a query? The result must be +/- 25,000,000 records.
    2) The query above returns +/- 3,000,000 records but still runs for 44 minutes. How do you know what's normal? I think 44 minutes is quite long.
    Thanks for helping....

    You'll need to provide more information, such as the database version and the execution plan (or even better: a tkprof/trace report with wait events).
    It is explained here:
    http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
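    (A minimal sketch of producing such a trace; the trace file name is environment-specific:)
    alter session set events '10046 trace name context forever, level 8';
    -- run the slow query here, then:
    alter session set events '10046 trace name context off';
    -- on the database server, format the raw trace file:
    -- tkprof <your_trace_file>.trc report.txt sys=no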

  • Performance issue with joins on tables VBAK, VBEP, VBKD and VBAP

    Hi all,
    I have a report with a join on all 4 tables: VBAK, VBEP, VBKD and VBAP.
    The report has performance problems because of this join.
    All the key fields are used in the join conditions, but some non-key fields such as vbap-vstel, vbap-abgru and vbep-wadat are also part of the select query and are being filled.
    Because of these the performance suffers.
    Is there any way I can improve the performance of the join select query?
    I am trying the "for all entries" clause...
    Kindly provide any alternative if possible.
    Thanks.

    Hi,
    Please perform some of the below steps, as applicable, for the performance improvement:
    a) Remove the join on all four tables and join only header and item (VBAK & VBAP).
    b) The code should have separate selects for VBEP and VBKD (see the sketch below).
    c) Remove the non-key fields from the WHERE clause. Once you have retrieved the data from the database into the internal table, sort it and delete the entries that do not match the non-key fields vstel, abgru and wadat.
    d) As a last option you can create indexes on the VBAP & VBEP tables for the fields vstel, abgru & wadat (not advisable).
    e) Buffering on the database tables is also possible.
    f) Select only the fields into the internal table that are needed for the processing logic, and list the fields in the select query in the same order as they appear in the database table.
    Hope this helps.
    Regards
    JLN
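
    (A minimal sketch of points a)-c); the field lists, the internal tables and the select-option s_vbeln are illustrative assumptions, not the original report's names:)

    TYPES: BEGIN OF ty_item,
             vbeln TYPE vbak-vbeln,
             posnr TYPE vbap-posnr,
             vstel TYPE vbap-vstel,
             abgru TYPE vbap-abgru,
           END OF ty_item,
           BEGIN OF ty_sched,
             vbeln TYPE vbep-vbeln,
             posnr TYPE vbep-posnr,
             wadat TYPE vbep-wadat,
           END OF ty_sched.

    DATA: lt_items TYPE STANDARD TABLE OF ty_item,
          lt_sched TYPE STANDARD TABLE OF ty_sched.

    * a) join only header and item
    SELECT vbak~vbeln vbap~posnr vbap~vstel vbap~abgru
      INTO CORRESPONDING FIELDS OF TABLE lt_items
      FROM vbak INNER JOIN vbap ON vbap~vbeln = vbak~vbeln
      WHERE vbak~vbeln IN s_vbeln.

    * c) filter the non-key fields in memory instead of in the WHERE clause
    DELETE lt_items WHERE abgru <> space.

    * b) separate select for VBEP, guarded against an empty base table
    IF lt_items IS NOT INITIAL.
      SELECT vbeln posnr wadat
        INTO CORRESPONDING FIELDS OF TABLE lt_sched
        FROM vbep
        FOR ALL ENTRIES IN lt_items
        WHERE vbeln = lt_items-vbeln
          AND posnr = lt_items-posnr.
    ENDIF.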

  • Join on Large Table

    I have some queries that use an inner join between a table with a few hundred rows and a table that will eventually have many millions of rows. The join is on an integer value that is part of the primary key on the larger table. The primary key on that table consists of the integer and another field, a BigInt (representing date/time to the millisecond). The query also has a predicate (WHERE clause) with an exact match on the BigInt.
    The queries take about a second to execute at the moment, but I was wondering whether I should expect a large increase in execution time as the years go by.
    Is an inner join on the large table advisable?
    By the way, the first field in the primary key is the integer followed by the BigInt, so selecting on the BigInt into a temp table before attempting the join probably won't help.
    R Campbell

    Just in case anyone wants to see the full picture (which I am not actually expecting), this is a script for all the SQL objects involved.
    The numbers of rows in the tables are:
    Tags            5,000
    NumericSamples  millions (over time)
    TagGroups       50
    GroupTags       500
    CREATE TABLE [dbo].[Tags](
    [ID] [int] NOT NULL,
    [TagName] [nvarchar](110) NOT NULL,
    [Address] [nvarchar](80) NULL,
    [DataTypeID] [smallint] NOT NULL,
    [DatasourceID] [smallint] NOT NULL,
    [Location] [nvarchar](4000) NULL,
    [Properties] [nvarchar](4000) NULL,
    [LastReadSampleTime] [bigint] NOT NULL,
    [Archived] [bit] NOT NULL,
    CONSTRAINT [Tags_ID_PK] PRIMARY KEY CLUSTERED
    (
    [ID] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    ALTER TABLE [dbo].[Tags] WITH NOCHECK ADD CONSTRAINT [Tags_DatasourceID_Datasources_ID_FK] FOREIGN KEY([DatasourceID])
    REFERENCES [dbo].[Datasources] ([ID])
    GO
    ALTER TABLE [dbo].[Tags] CHECK CONSTRAINT [Tags_DatasourceID_Datasources_ID_FK]
    GO
    ALTER TABLE [dbo].[Tags] WITH NOCHECK ADD CONSTRAINT [Tags_DataTypeID_DataTypes_ID_FK] FOREIGN KEY([DataTypeID])
    REFERENCES [dbo].[DataTypes] ([ID])
    GO
    ALTER TABLE [dbo].[Tags] CHECK CONSTRAINT [Tags_DataTypeID_DataTypes_ID_FK]
    GO
    ALTER TABLE [dbo].[Tags] ADD CONSTRAINT [DF_Tags_LastReadSampleTime] DEFAULT ((552877956000000000.)) FOR [LastReadSampleTime]
    GO
    ALTER TABLE [dbo].[Tags] ADD DEFAULT ((0)) FOR [Archived]
    GO
    CREATE TABLE [dbo].[NumericSamples](
    [TagID] [int] NOT NULL,
    [SampleDateTime] [bigint] NOT NULL,
    [SampleValue] [float] NULL,
    [QualityID] [smallint] NOT NULL,
    CONSTRAINT [NumericSamples_TagIDSampleDateTime_PK] PRIMARY KEY CLUSTERED
    (
    [TagID] ASC,
    [SampleDateTime] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    ALTER TABLE [dbo].[NumericSamples] WITH NOCHECK ADD CONSTRAINT [NumericSamples_QualityID_Qualities_ID_FK] FOREIGN KEY([QualityID])
    REFERENCES [dbo].[Qualities] ([ID])
    GO
    ALTER TABLE [dbo].[NumericSamples] CHECK CONSTRAINT [NumericSamples_QualityID_Qualities_ID_FK]
    GO
    ALTER TABLE [dbo].[NumericSamples] WITH NOCHECK ADD CONSTRAINT [NumericSamples_TagID_Tags_ID_FK] FOREIGN KEY([TagID])
    REFERENCES [dbo].[Tags] ([ID])
    GO
    ALTER TABLE [dbo].[NumericSamples] CHECK CONSTRAINT [NumericSamples_TagID_Tags_ID_FK]
    GO
    CREATE TABLE [dbo].[TagGroups](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [TagGroup] [varchar](50) NULL,
    [Aggregates] [varchar](250) NULL,
    [NumericData] [bit] NULL,
    CONSTRAINT [PK_TagGroups] PRIMARY KEY CLUSTERED
    (
    [ID] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    ALTER TABLE [dbo].[TagGroups] ADD CONSTRAINT [DF_Tag_Groups_Aggregates] DEFAULT ('First') FOR [Aggregates]
    GO
    ALTER TABLE [dbo].[TagGroups] ADD CONSTRAINT [DF_TagGroups_NumericData] DEFAULT ((1)) FOR [NumericData]
    GO
    CREATE TABLE [dbo].[GroupTags](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [TagGroupID] [int] NULL,
    [TagName] [varchar](150) NULL,
    [ColumnName] [varchar](50) NULL,
    [SortOrder] [int] NULL,
    [TotalFactor] [float] NULL,
    CONSTRAINT [PK_GroupTags] PRIMARY KEY CLUSTERED
    (
    [ID] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    ALTER TABLE [dbo].[GroupTags] WITH CHECK ADD CONSTRAINT [FK_GroupTags_TagGroups] FOREIGN KEY([TagGroupID])
    REFERENCES [dbo].[TagGroups] ([ID])
    ON UPDATE CASCADE
    ON DELETE CASCADE
    GO
    ALTER TABLE [dbo].[GroupTags] CHECK CONSTRAINT [FK_GroupTags_TagGroups]
    GO
    ALTER TABLE [dbo].[GroupTags] ADD CONSTRAINT [DF_GroupTags_TotalFactor] DEFAULT ((1)) FOR [TotalFactor]
    GO
    CREATE VIEW [dbo].[vw_GroupTags]
    AS
    SELECT TOP (10000) dbo.TagGroups.TagGroup AS TableName, dbo.TagGroups.Aggregates AS SortOrder, dbo.GroupTags.SortOrder AS TagIndex, dbo.GroupTags.TagName,
    dbo.Tags.ID AS TagId, dbo.TagGroups.NumericData, dbo.GroupTags.TotalFactor, dbo.GroupTags.ColumnName
    FROM dbo.TagGroups INNER JOIN
    dbo.GroupTags ON dbo.TagGroups.ID = dbo.GroupTags.TagGroupID INNER JOIN
    dbo.Tags ON dbo.GroupTags.TagName = dbo.Tags.TagName
    ORDER BY SortOrder, TagIndex
    CREATE procedure [dbo].[GetTagTableValues]
    @SampleDateTime bigint,
    @TableName varchar(50),
    @PadRows int = 0
    as
    BEGIN
    DECLARE @i int
    DECLARE @ResultSet table(TagName varchar(150), SampleValue float, ColumnName varchar(50), SortOrder int, TagIndex int)
    set @i = 0
    INSERT INTO @ResultSet
    SELECT vw_GroupTags.TagName, NumericSamples.SampleValue, vw_GroupTags.ColumnName, vw_GroupTags.SortOrder, vw_GroupTags.TagIndex
    FROM vw_GroupTags INNER JOIN NumericSamples ON vw_GroupTags.TagId = NumericSamples.TagID
    WHERE (vw_GroupTags.TableName = @TableName) AND (NumericSamples.SampleDateTime = @SampleDateTime)
    set @i = @@ROWCOUNT
    if @i < @PadRows
    BEGIN
    WHILE @i < @PadRows
    BEGIN
    INSERT @ResultSet (TagName, SampleValue, ColumnName, SortOrder, TagIndex) VALUES ('', NULL, '', 0, 0)
    set @i = @i + 1
    END
    END
    select TagName, SampleValue, ColumnName, SortOrder, TagIndex
    from @ResultSet
    END
    R Campbell

  • Performance of large tables with ADT columns

    Hello,
    We are planning to build a large table ( 1 billion+ rows) with one of the columns being an Advanced Data Type (ADT) column. The ADT column will be based on a TYPE that will have approximately (250 attributes).
    We are using Oracle 10g R2
    Can you please tell me the following:
    1. How will Oracle store the data in the ADT column?
    2. Will the entire ADT record fit in one block?
    3. Is it still possible to partition a table on an attribute that is part of the ADT?
    4. How will performance be affected if Oracle does a full table scan of such a table?
    5. How much space will Oracle take, if any, for storing a NULL in an ADT?
    I think we can create indexes on the attribute of the ADT column. Please let me know if this is not true.
    Thanks for your help.

    I agree with D.Morgan that an object type with 250 attributes is doubtful.
    I don't like object tables (tables with "row objects") either.
    But, your table is a relational table with object column ("column object").
    C.J.Date in An introduction to Database Systems (2004, page 885) says:
    "... object/relational systems ... are, or should be, basically just relational systems
    that support the relational domain concept (i.e., types) properly - in other words, true relational systems,
    meaning in particular systems that allow users to define their own types."
    1. How will Oracle store the data in the ADT column?...
    For some answers see:
    “OR(DBMS) or R(DBMS), That is the Question”
    http://www.quest-pipelines.com/pipelines/plsql/tips.htm#OCTOBER
    and (of course):
    "Oracle® Database Application Developer's Guide - Object-Relational Features" 10g Release 2 (10.2)
    http://download-uk.oracle.com/docs/cd/B19306_01/appdev.102/b14260/adobjadv.htm#i1006903
    Regards,
    Zlatko
