Are index-organized tables better in performance compared to normal tables?

Hi,
I am using Oracle 10g and my domain is telecom.
My requirement is: when an 'A' party calls a 'B' party, we have to find, based on the 'B' party number, which area the call lands in, and based on that the tariff is applied.
But the data configured in the area table is not the complete number, just the CC+NDC (4 or 5 digits in length),
so I have to find which entry matches the 'B' party number most closely.
I use the following query:
select max(area_code)
from ZONE_AREA
where '9888123456' like AREA_CODE||'%'
and network_id=1;
This is the structure of the table:
create table ZONE_AREA(
AREA_CODE VARCHAR2(20),
AREA_NAME VARCHAR2(30) not null,
ZONE_CODE VARCHAR2(10) not null,
CALL_TYPE VARCHAR2(1) not null,
NETWORK_ID NUMBER(2),
primary key (NETWORK_ID, AREA_CODE));
The table contains around 200,000 rows.
The data in the table looks like:
AREA_CODE
98812
90020
900
9732
The above query runs extremely often, since it fires for every call, and my DBA is complaining that it utilizes too much CPU and needs to be tuned.
I thought of trying index-organized tables. I have never used them, but I want to give them a try to see whether there is any improvement.
Hence I created an index-organized table (IOT) with the same structure as above in my development environment,
with 60,000 rows in it.
create table ZONE_AREA_IOT(
AREA_CODE VARCHAR2(20),
AREA_NAME VARCHAR2(30) not null,
ZONE_CODE VARCHAR2(10) not null,
CALL_TYPE VARCHAR2(1) not null,
NETWORK_ID NUMBER(2),
CONSTRAINT pk_admin_docindex1 PRIMARY KEY (NETWORK_ID, AREA_CODE))
ORGANIZATION INDEX;
The plain table (ZONE_AREA) also has 60,000 rows on my development server.
Now I fired the query on my plain table:
select max(area_code)
from ZONE_AREA
where '9888123456' like AREA_CODE||'%'
and network_id=1;
The following is the execution plan:
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 11 | 3 (34)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 11 | | |
| 2 | FIRST ROW | | 1 | 11 | 3 (34)| 00:00:01 |
|* 3 | INDEX RANGE SCAN (MIN/MAX)| SYS_C007738 | 1 | 11 | 3 (34)| 00:00:01 |
Then I fired the query on the newly created IOT table:
select max(area_code)
from ZONE_AREA_IOT
where '9888123456' like AREA_CODE||'%'
and network_id=1;
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 25 | 2 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 25 | | |
| 2 | FIRST ROW | | 21 | 525 | 2 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN (MIN/MAX)| PK_ADMIN_DOCINDEX1 | 21 | 525 | 2 (0)| 00:00:01 |
Both tables have a similar record count, but the plans differ, and I don't understand the Rows and Bytes columns in the above plans: for the normal table it shows 1 row / 11 bytes,
but for the IOT it shows 21 rows / 525 bytes. Why this difference?
Also, the cost shows 3 for the normal table and 2 for the IOT.
For the above scenario, is an IOT advisable? Will it cut down the CPU cost? Are there any overheads in using an IOT?
Please respond.
regards
naveen

I think you are deviating from the real problem.
I'm also in the telecom domain, and what you are talking about is what is called the "most matching" algorithm.
Practically, suppose that party A calls party B and you want to know which tariff to apply.
Usually tariff tables are based on this "most matching" criterion, where the correct tariff is the one that most closely matches the called number.
Let me give you an example.
Suppose that I have a tariff table like this:
WITH mytariff AS (
   SELECT '90123' destination, 1.5 tariff_per_min FROM DUAL UNION ALL
   SELECT '9012'  destination, 1.6 tariff_per_min FROM DUAL UNION ALL
   SELECT '901'   destination, 1.7 tariff_per_min FROM DUAL UNION ALL
   SELECT '90'    destination, 1.8 tariff_per_min FROM DUAL UNION ALL
   SELECT '55123' destination, 1.0 tariff_per_min FROM DUAL UNION ALL
   SELECT '5512'  destination, 1.1 tariff_per_min FROM DUAL UNION ALL
   SELECT '551'   destination, 1.2 tariff_per_min FROM DUAL UNION ALL
   SELECT '55'    destination, 1.3 tariff_per_min FROM DUAL
)
SELECT * FROM mytariff;
DESTINATION          TARIFF_PER_MIN
90123                           1.5
9012                            1.6
901                             1.7
90                              1.8
55123                             1
5512                            1.1
551                             1.2
55                              1.3
Correct me if I'm wrong:
if party A dials 901234567 then it will match destination 90123 and tariff_per_min 1.5
if party A dials 901244567 then it will match destination 9012  and tariff_per_min 1.6
if party A dials 901344567 then it will match destination 901   and tariff_per_min 1.7
if party A dials 551244567 then it will match destination 5512  and tariff_per_min 1.1
etc.
Confirm whether this is your criterion for finding the tariff.
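If that is the criterion, the usual SQL formulation is a longest-prefix match. Below is a minimal sketch against the mytariff data above (the dialed number is hard-coded purely for illustration; in practice it would be a bind variable). Note that the original MAX(area_code) trick also works, because among codes that are all prefixes of the same number, the longest one is also the lexicographic maximum.
SELECT destination, tariff_per_min
FROM  (SELECT destination, tariff_per_min
       FROM   mytariff
       WHERE  '901344567' LIKE destination || '%'
       ORDER BY LENGTH(destination) DESC)  -- longest configured prefix first
WHERE ROWNUM = 1;                          -- keep only the best match
This should return destination 901 with tariff_per_min 1.7.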
The billing/rating systems I know usually store this information in database tables, but the rating engine (generally a C++ program on Unix) normally reads this information once, puts it in memory, and rates the calls by reading the information from memory.
I'm not saying that this is the only approach, but it seems the most used.
In your case, it looks like you are using SQL or PL/SQL to do the same thing, and I certainly understand that applying this algorithm by reading the tariff table for each call record is going to affect your performance heavily.
I have a couple of questions for you:
1) Are you using a SQL statement or a PL/SQL procedure to rate your calls?
2) Could you show us how you assign the tariff to your calls?
I don't think using an IOT will solve your problem. An IOT has the advantage of reading the data together with the index, and it is suitable especially if you always read your data by a certain key.
If your tariff data is static, or doesn't change often, which I suppose is the case, you could consider a different approach, like loading it into a PL/SQL collection and then retrieving it from the collection, as in the sketch below. It might not be the optimal solution, but it is worth considering.
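For illustration only, here is a minimal sketch of that idea against the ZONE_AREA table from the original post. The package name, function name, and key format are all made up; treat it as a sketch of the caching technique, not a drop-in solution.
CREATE OR REPLACE PACKAGE zone_cache AS
  FUNCTION best_area_code (p_number     IN VARCHAR2,
                           p_network_id IN NUMBER) RETURN VARCHAR2;
END zone_cache;
/
CREATE OR REPLACE PACKAGE BODY zone_cache AS
  -- Session-private cache: one entry per NETWORK_ID:AREA_CODE combination.
  TYPE t_map IS TABLE OF VARCHAR2(1) INDEX BY VARCHAR2(40);
  g_map    t_map;
  g_loaded BOOLEAN := FALSE;

  PROCEDURE load IS
  BEGIN
    -- ZONE_AREA is read once per session instead of once per call.
    FOR r IN (SELECT network_id, area_code FROM zone_area) LOOP
      g_map(r.network_id || ':' || r.area_code) := 'x';
    END LOOP;
    g_loaded := TRUE;
  END load;

  FUNCTION best_area_code (p_number     IN VARCHAR2,
                           p_network_id IN NUMBER) RETURN VARCHAR2 IS
    l_prefix VARCHAR2(20);
  BEGIN
    IF NOT g_loaded THEN
      load;
    END IF;
    -- Probe the longest possible prefix first, then shorter ones.
    FOR len IN REVERSE 1 .. LEAST(LENGTH(p_number), 20) LOOP
      l_prefix := SUBSTR(p_number, 1, len);
      IF g_map.EXISTS(p_network_id || ':' || l_prefix) THEN
        RETURN l_prefix;
      END IF;
    END LOOP;
    RETURN NULL;  -- no configured prefix matches
  END best_area_code;
END zone_cache;
/
A call would then look like zone_cache.best_area_code('9888123456', 1), and each lookup costs at most 20 in-memory probes instead of an index scan per call.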
In order to evaluate your problem, please give the details mentioned above.
Regards.
Al

Similar Messages

  • PL/SQL Pipelined Function to Compare *ANY*  2 tables

    I am trying to create a pipelined function in 10g R1 that will take the names of two tables, compare the tables using dynamic SQL, and pipe out the resulting rows using the appropriate row type. The pipelined function will be used in a DML insert statement.
    For example:
    create table a (f1 number, f2 date, f3 varchar2(30));
    create table b (f1 number, f2 date, f3 varchar2(30));
    create table c (f1 number, f2 date, f3 varchar2(30));
    create or replace TYPE AnyCollTyp IS TABLE OF ANYTYPE;
    create or replace TYPE CRowType IS c%ROWTYPE;
    create or replace TYPE CRowTabType IS table of CRowType;
    CREATE OR REPLACE FUNCTION compareTables (p_source IN VARCHAR2, p_dest IN VARCHAR2)
    RETURN AnyCollTyp PIPELINED
    IS
    CURSOR columnCur (p_tableName IN user_tab_columns.table_name%TYPE)
    IS
    SELECT column_name, column_id
    FROM user_tab_columns
    WHERE table_name = p_tableName
         ORDER BY column_id;
    l_cur sys_refcursor;
    l_rec ANYTYPE;
    l_stmt VARCHAR2 (32767);
    BEGIN
    l_stmt := 'select ';
    FOR columnRec IN columnCur (p_dest)
    LOOP
    l_stmt := l_stmt || CASE
    WHEN columnRec.column_id > 1
    THEN ','
    ELSE ''
    END || columnRec.column_name;
    END LOOP;
    l_stmt := l_stmt || ' from ' || p_source;
    l_stmt := l_stmt || ' minus ';
    l_stmt := l_stmt || ' select ';
    FOR columnRec IN columnCur (p_dest)
    LOOP
    l_stmt := l_stmt || CASE
    WHEN columnRec.column_id > 1
    THEN ','
    ELSE ''
    END || columnRec.column_name;
    END LOOP;
    l_stmt := l_stmt || ' from ' || p_dest;
    OPEN l_cur FOR l_stmt;
    LOOP
    FETCH l_cur
    INTO l_rec;
    PIPE ROW (l_rec);
    EXIT WHEN l_cur%NOTFOUND;
    END LOOP;
    CLOSE l_cur;
    RETURN;
    END compareTables;
    The pipelined function gets created without error. However, the testCompare procedure gets an error:
    SQL> create or replace procedure testCompare is
    begin
    insert into c
    select *
    from (TABLE(CAST(compareTables('a','b') as cRowTabType)));
    dbms_output.put_line(SQL%ROWCOUNT || ' rows inserted into c.');
    end;
    Warning: Procedure created with compilation errors.
    SQL> show errors
    Errors for PROCEDURE TESTCOMPARE:
    LINE/COL ERROR
    3/4 PL/SQL: SQL Statement ignored
    5/47 PL/SQL: ORA-22800: invalid user-defined type
    Does anyone know what I am doing wrong? Is there a better way to compare any two tables and get the resulting rows?

    904640 wrote:
    Hi All,
    Is it possible to post messages to weblogic JMS queue from pl/sql procedure/function?
    From this Queue, message will be read by OSB interface.
    Any help will be highly appreciated.
    http://www.lmgtfy.com/?q=oracle+pl/sql+weblogic+jms+queue

  • Why using workarea for internal table is better in performance

    Please tell me
    why using a work area for an internal table is better in performance.

    Hi Vineet,
    Why would we choose to use an internal table without a header line when it is easier to code one with a header line? It has the following reasons.
    1) Separate internal table work area:
    The work area (staging area) defined for the internal table is not limited to use with just one internal table.
    Take an example:
    Suppose you want two internal tables for EMPLOYEE – one to contain all records and one to contain only those records where country = 'USA'. You could create both of these internal tables without header lines and use only one work area to load data into both of them. You would append all records from the work area into the first internal table, and you would conditionally append the 'USA' records from the same work area into the second internal table.
    2) Performance issues: using an internal table without a header line is more efficient than using one with a header line.
    3) Nested internal tables: if you want to include an internal table within a structure or another internal table, you must use one without a header line.
    If this is helpful, then reward me.
    Regards
    Shambhu

  • When will i use index organization table.

    When will I use an index-organized table?
    What is the advantage of these?

    See the site:
    http://www.dba-oracle.com/t_index_organized_tables.htm
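    For a quick feel for the syntax, an index-organized table is just a table stored inside its primary key index; a minimal sketch (table and column names invented):
    CREATE TABLE lookup_iot (
      code  VARCHAR2(10),
      descr VARCHAR2(40),
      CONSTRAINT lookup_iot_pk PRIMARY KEY (code)
    ) ORGANIZATION INDEX;
    Because the rows live in the index itself, a lookup by the primary key (or its leading columns) avoids the extra table access of a heap table; the trade-offs are more expensive DML and, for wide rows, the need for an overflow segment.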

  • Multi table inheritance and performance

    I really like the idea of multi-table inheritance, since I have a main
    class and three subclasses which just add one integer to the main class.
    It would be a waste to spend 4 tables on this, so I decided to put them
    all into one.
    My problem now is that when I query for a specific class, Kodo will build
    SQL like:
    select ... from table where
    JDOCLASSX='de.mycompany.myprojectname.mysubpack.classname'
    This is pretty slow when the table grows, because string comparisons are
    awful. Even worse, the database has to compare nearly the whole
    string because it differs only in the last letters.
    Indexing would help a bit but wouldn't outperform integer comparisons.
    Is it possible to get Kodo to do one more step of normalization?
    Having an extra table containing all class names and IDs for them (and
    references in the original table) would improve the performance of
    multi-table inheritance quite a lot!
    Even with standard classes it would save a lot of memory not to have the full
    class name in each row.

    Stefan-
    Thanks for the feedback. Note that 3.0 does make this simpler: we have
    extensions that allow you to define the mechanism for subclass
    identification purely in the metadata file(s). See:
    http://solarmetric.com/Software/Documentation/3.0.0RC1/docs/manual.html#ref_guide_mapping_classind
    The idea for having a separate table mapping numbers to class names is
    good, but we prefer to have as few Kodo-managed tables as possible. It
    is just as easy to do this in the metadata file.
    In article <[email protected]>, Stefan wrote:
    First of all: thanks for the fast help; this one (IntegerProvider) helped and
    solves my problem.
    Kodo is really amazing with all its places where customization can be
    done!
    Anyway, as a wish for future releases: exactly this technique, using
    integers as class identifiers rather than the full class names, is what I
    meant by "normalization".
    The only thing missing is a table containing information on how class IDs
    are mapped to class names (which is now contained as an explicit statement
    in the .jdo file). This table is not mapped to the primary key of the main
    table (as you suggested), but to the classID integer, which acts as a
    foreign key.
    A query for a specific class would be solved with a query like:
    select * from classValues, classMapping where
    classValues.JDOCLASSX=classmapping.IDX and
    classmapping.CLASSNAMEX='de.company.whatever'
    This table should be managed by Kodo, of course!
    Imagine a table with 300,000 rows containing only 3 different derived
    classes.
    You would have an extra table with 4 rows (base class + 3 derived types).
    Searching for the classID is done in that 4-row table, while searching the
    actual class instances would then be done over an indexed integer classID
    field.
    This is much faster than having the database do 300,000 string
    comparisons (even when indexed).
    (By the way, it would save a lot of memory as well, even on classes which
    are not derived.)
    If this technique were done by Kodo transparently, maybe turned on with an
    extra option, that would be great, since you wouldn't need to take care
    of different "subclass-indicator-values", could go on as usual, and would have
    far better performance.
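    In plain SQL, the normalization described here would look something like the sketch below; the table and column names follow the earlier query and are otherwise invented.
    -- Small lookup of class names to integer IDs (one row per class).
    CREATE TABLE classMapping (
      idx        NUMBER PRIMARY KEY,
      classnamex VARCHAR2(255) NOT NULL UNIQUE
    );
    -- The data table stores only the integer, indexed for fast filtering.
    CREATE TABLE classValues (
      jdoidx    NUMBER PRIMARY KEY,
      jdoclassx NUMBER NOT NULL REFERENCES classMapping (idx)
      -- ... the actual instance fields ...
    );
    CREATE INDEX classValues_cls ON classValues (jdoclassx);
    -- Query for one concrete class: resolve the name once, scan by integer.
    SELECT v.*
    FROM   classValues v, classMapping m
    WHERE  v.jdoclassx = m.idx
    AND    m.classnamex = 'de.company.whatever';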
    Stephen Kim wrote:
    You could push off fields to separate tables (as long as the pk column
    is the same); however, I doubt that would add much performance benefit
    in this case, since we'd simply add a join (e.g. select data.name,
    info.jdoclassx, info.jdoidx from data, info where data.jdoidx = info.jdoidx
    and info.jdoclassx = 'foo'). One could turn off the default fetch group for
    fields stored in data, but now you're adding a second select to load one
    "row" of data.
    However, we DO provide an integer subclass provider which can speed up
    these sorts of queries a lot if you need to constrain your queries by
    class, esp. with indexing, at the expense of simple legibility:
    http://solarmetric.com/Software/Documentation/2.5.3/docs/ref_guide_meta_class.html#meta-class-subclass-provider
    Steve Kim
    [email protected]
    SolarMetric Inc.
    http://www.solarmetric.com
    Marc Prud'hommeaux [email protected]
    SolarMetric Inc. http://www.solarmetric.com

  • A better way than a global temp table to reuse a distinct select?

    I get the impression from other threads that global temp tables are frowned upon so I'm wondering if there is a better way to simplify what I need to do. I have some values scattered about a table with a relatively large number of records. I need to distinct them out and delete from 21 other tables where those values also occur. The values have a really low cardinality to the number of rows. Out of 500K+ rows there might be a dozen distinct values.
    I thought that rather than 21 cases of:
    DELETE FROM x1..21 WHERE value IN (SELECT DISTINCT value FROM Y)
    It would be better for performance to populate a global temp table with the distinct first:
    INSERT INTO gtt SELECT DISTINCT value FROM Y
    DELETE FROM x1..21 WHERE value IN (SELECT value FROM GTT)
    People asking questions about GTT's seem to get blasted so is this another case where there's a better way to do this? Should I just have the system bite the bullet on the DISTINCT 21 times? The big table truncates and reloads and needs to do so quickly so I was hoping not to have to index it and meddle with disable/rebuild index but if that's better than a temp table, I'll have to make do.
    As far as I understand WITH ... USING can't be used to delete from multiple tables or can it?

    Almost, but not quite, as efficient as using a temporary table would be to use a PL/SQL collection and FORALL statements (and/or to reference the collection in your subsequent statements). Something like
    DECLARE
      TYPE value_nt IS TABLE OF y.value%type;
      l_values value_nt;
    BEGIN
      SELECT distinct value
        BULK COLLECT INTO l_values
        FROM y;
      FORALL i IN 1 .. l_values.count
        DELETE FROM x1
         WHERE value = l_values(i);
      FORALL i IN 1 .. l_values.count
        DELETE FROM x2
         WHERE value = l_values(i);
    END;
    or
    CREATE TYPE value_nt
      IS TABLE OF varchar2(100); -- Guessing at the type of y.value
    DECLARE
      l_values value_nt;
    BEGIN
      SELECT distinct value
        BULK COLLECT INTO l_values
        FROM y;
      DELETE FROM x1
       WHERE value IN (SELECT /*+ cardinality(v 10) */ column_value FROM TABLE( l_values ) v);
      DELETE FROM x2
       WHERE value IN (SELECT /*+ cardinality(v 10) */ column_value FROM TABLE( l_values ) v);
    END;
    Justin

  • Problem while comparing two internal tables

    I have to modify the work start date, which is initial in the database table, for the records in the flat file.
    My code is:
    REPORT ZAUFK_WORKSTARTDATE_UPDATE.
    * Tables declaration
    TABLES: AUFK.
    * Type pools declaration
    TYPE-POOLS: SLIS.
    * Internal table declaration
    DATA: I_AUFK LIKE AUFK OCCURS 0 WITH HEADER LINE,
          IT_AUFK LIKE AUFK OCCURS 0 WITH HEADER LINE,
          ITAB1 LIKE ALSMEX_TABLINE OCCURS 0 WITH HEADER LINE.
    DATA: BEGIN OF I_AUFK1 OCCURS 0,
            AUFNR(12),
            AUART(4),
          END OF I_AUFK1.
    * Data declaration
    DATA I_FIELDCAT TYPE SLIS_FIELDCAT_ALV OCCURS 0.
    DATA WA_FCAT LIKE LINE OF I_FIELDCAT.
    DATA: B1 TYPE I VALUE 1,
          C1 TYPE I VALUE 1,
          B2 TYPE I VALUE 256,
          C2 TYPE I VALUE 65536.
    * Selection screen declaration
    SELECTION-SCREEN BEGIN OF BLOCK B1 WITH FRAME TITLE TEXT-001.
      PARAMETERS: P_FILE LIKE RLGRAP-FILENAME.
      SELECT-OPTIONS: S_AUFNR FOR AUFK-AUFNR,
                      S_AUART FOR AUFK-AUART.
      PARAMETERS: P_USER7 LIKE AUFK-USER7 DEFAULT '20070101' OBLIGATORY.
    SELECTION-SCREEN END OF BLOCK B1.
    SELECTION-SCREEN BEGIN OF BLOCK B2 WITH FRAME TITLE TEXT-002.
    PARAMETERS: R1 RADIOBUTTON GROUP G1 DEFAULT 'X' USER-COMMAND UCOMM1, " Upload using file path
                R2 RADIOBUTTON GROUP G1. " Upload using particular IOs
    SELECTION-SCREEN END OF BLOCK B2.
    * To get F4 help for the file path on the selection screen
    AT SELECTION-SCREEN ON VALUE-REQUEST FOR P_FILE.
      CALL FUNCTION 'F4_FILENAME'
        EXPORTING
          PROGRAM_NAME  = SYST-CPROG
          DYNPRO_NUMBER = SYST-DYNNR
        IMPORTING
          FILE_NAME     = P_FILE.
    * To disable the file path when the R2 radio button is selected
    AT SELECTION-SCREEN OUTPUT.
    LOOP AT SCREEN.
    * Check if R1 is checked
    IF R1 = 'X'.
        IF SCREEN-NAME = 'S_AUFNR-LOW' OR
          SCREEN-NAME = 'S_AUFNR-HIGH' OR
          SCREEN-NAME = 'S_AUART-LOW' OR
          SCREEN-NAME = 'S_AUART-HIGH'.
    * Make the internal order number and order type fields non-editable on the selection screen
            SCREEN-INPUT = '0'.
            MODIFY SCREEN.
        ENDIF.
    ELSEIF R2 = 'X' AND SCREEN-NAME = 'P_FILE'.
    * Make the file path field non-editable on the selection screen
            SCREEN-INPUT = 0.
            MODIFY SCREEN.
        ENDIF.
    ENDLOOP.
    * Start of executable code
    START-OF-SELECTION.
    * To get the relevant IO data from the order master data table
    SELECT *
           FROM AUFK
           INTO TABLE I_AUFK
           WHERE AUFNR IN S_AUFNR
           AND AUART IN S_AUART
           AND ( AUART = '5200' OR AUART = '5500'
           OR AUART = '5700' OR AUART = '8500'
           OR AUART = '8700' ).
    * The table must be updated using the flat file
      IF R1 = 'X'.
    * Upload the Excel sheet data into an internal table
        CALL FUNCTION 'ALSM_EXCEL_TO_INTERNAL_TABLE'
          EXPORTING
            FILENAME                = P_FILE
            I_BEGIN_COL             = B1
            I_BEGIN_ROW             = C1
            I_END_COL               = B2
            I_END_ROW               = C2
          TABLES
            INTERN                  = ITAB1
          EXCEPTIONS
            INCONSISTENT_PARAMETERS = 1
            UPLOAD_OLE              = 2
            OTHERS                  = 3.
        IF SY-SUBRC <> 0.
          MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
          WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
        ENDIF.
    * Organize the data in the internal table (Excel data) as per our requirement
        PERFORM ORGANIZE_UPLOADED_DATA.
        <b>LOOP AT I_AUFK1.
    * Compare the Excel data and the database table data
           READ TABLE I_AUFK WITH KEY AUFNR = I_AUFK1-AUFNR.
             IF SY-SUBRC EQ 0.
    * Check if the work start date is initial (blank)
                IF I_AUFK-USER7 IS INITIAL.
    * If the work start date is initial, move the value in P_USER7 to the work start date field in I_AUFK
                 MOVE P_USER7 TO I_AUFK-USER7.
                 MODIFY I_AUFK.
    * Move the changed data into the IT_AUFK internal table (this table is for displaying the updated records)
                 MOVE-CORRESPONDING I_AUFK TO IT_AUFK.
                 APPEND IT_AUFK.
              ENDIF.
          ENDIF.
        ENDLOOP.
    ENDIF.</b>
    WA_FCAT-FIELDNAME = 'AUFNR'.
      WA_FCAT-TABNAME = 'I_AUFK'.
      WA_FCAT-SELTEXT_M = 'Internal Order Number'.
      WA_FCAT-OUTPUTLEN = 12.
      APPEND WA_FCAT TO i_fieldcat.
      WA_FCAT-FIELDNAME = 'AUART'.
      WA_FCAT-TABNAME = 'I_AUFK'.
      WA_FCAT-SELTEXT_M = 'Order Type'.
      WA_FCAT-OUTPUTLEN = 4.
      APPEND WA_FCAT TO I_FIELDCAT.
      WA_FCAT-FIELDNAME = 'KTEXT'.
      WA_FCAT-TABNAME = 'I_AUFK'.
      WA_FCAT-SELTEXT_M = 'Description'.
      WA_FCAT-OUTPUTLEN = 40.
      APPEND WA_FCAT TO I_FIELDCAT.
      WA_FCAT-FIELDNAME = 'USER7'.
      WA_FCAT-TABNAME = 'I_AUFK'.
      WA_FCAT-SELTEXT_M = 'Work Start Date'.
      WA_FCAT-OUTPUTLEN = 10.
      APPEND WA_FCAT TO I_FIELDCAT.
    * Update AUFK (the internal order data table) using I_AUFK
    *MODIFY AUFK FROM TABLE I_AUFK.
    * Check if the database table was modified
    *IF SY-SUBRC = 0.
    * Display the modified data
    CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
      EXPORTING
        I_CALLBACK_PROGRAM = SY-REPID
        I_GRID_TITLE       = 'List of updated Records'
        IT_FIELDCAT        = I_FIELDCAT[]
      TABLES
        T_OUTTAB           = IT_AUFK[]
      EXCEPTIONS
        PROGRAM_ERROR      = 1
        OTHERS             = 2.
    IF SY-SUBRC <> 0.
      MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
               WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
    *ENDIF.
    ENDIF.
    *&      Form  ORGANIZE_UPLOADED_DATA
    *       text
    FORM ORGANIZE_UPLOADED_DATA .
      SORT ITAB1 BY ROW COL.
      LOOP AT ITAB1.
        CASE ITAB1-COL.
          WHEN 1.
            I_AUFK1-AUFNR = ITAB1-VALUE.
          WHEN 2.
            I_AUFK1-AUART = ITAB1-VALUE.
        ENDCASE.
        AT END OF ROW.
          APPEND I_AUFK1.
        ENDAT.
      ENDLOOP.
      LOOP AT I_AUFK1.
      CONCATENATE '000000' I_AUFK1-AUFNR INTO I_AUFK1-AUFNR.
      MODIFY I_AUFK1.
      ENDLOOP.
    ENDFORM. " ORGANIZE_UPLOADED_DATA
    The bold area is where I'm facing the problem. Please help me, it's urgent.

    Hi, try using the below code...
    LOOP AT I_AUFK1.
    * Compare the Excel data and the database table data
    READ TABLE I_AUFK WITH KEY AUFNR = I_AUFK1-AUFNR.
    IF SY-SUBRC EQ 0.
    * Check if the work start date is initial (blank)
    IF I_AUFK-USER7 IS INITIAL.
    * If the work start date is initial, move the value in P_USER7 to the work start date field in I_AUFK
    MOVE P_USER7 TO I_AUFK-USER7.
    <b><u>MODIFY I_AUFK FROM I_AUFK TRANSPORTING USER7.</u></b>
    * Move the changed data into the IT_AUFK internal table (this table is for displaying the updated records)
    MOVE-CORRESPONDING I_AUFK TO IT_AUFK.
    APPEND IT_AUFK.
    ENDIF.
    ENDIF.
    ENDLOOP.
    endif.
    If you still can't solve it, some more questions:
    What is the sy-subrc of the READ?
    What does the header line contain after the READ?
    And what happens after the MODIFY? Do the contents change?

  • No of columns in a table and SQL performance

    How does the table size affect SQL performance?
    I am comparing 2 tables with the same number of rows (54 million rows):
    table1 (columns a,b,c,d,e,f...) has 40 columns
    table2 (columns a,b,c,d) has 4 columns
    The SQL uses columns a,b.
    The SQL using table2 runs in 1 sec.
    The SQL using table1 runs in 30 min.
    Can anyone please let me know how the table size and the number of columns in a table affect the performance of SQL?
    Thanks
    jeevan.

    user600431 wrote:
    This is a general question. I just want to compare a table with more columns and a table with fewer columns, with the same number of rows.
    I am finding that the table with fewer columns performs better than the table with more columns.
    Assuming there are no row chains, will there be any difference in performance with the number of columns in a table?
    Jeevan,
    the question is not how many columns your table has, but how large your table segment is. If your query runs a full table scan, it has to read through the whole table segment, so in that case the size of the table matters.
    A table having more columns potentially has a larger row size than a table with fewer columns, but this is not a general rule. Think of large columns, e.g. varchar2 columns, think of blank (NULL) columns, and you can easily end up with a table consisting of a single column taking up more space per row than a table with 200 columns consisting only of varchar2(1) columns.
    Check the DBA/ALL/USER_SEGMENTS view to determine the size of your two table segments. If you gather statistics on the tables then the dictionary will contain information about the average row size.
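    For example, something along these lines (TABLE1 and TABLE2 are placeholders for your two table names):
    -- segment sizes of the two tables
    SELECT segment_name, blocks, ROUND(bytes/1024/1024) AS mb
    FROM   user_segments
    WHERE  segment_name IN ('TABLE1', 'TABLE2');
    -- after gathering statistics, the average row length is in the dictionary too
    SELECT table_name, num_rows, avg_row_len
    FROM   user_tables
    WHERE  table_name IN ('TABLE1', 'TABLE2');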
    If your query is using indexes then the size of the table won't affect the query performance significantly in many cases.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Performance - DIM and FACT tables

    Experts,
    I have performance issues where my DIM tables are bigger than the FACT tables by 50%.
    Can anyone please let me know what should be done in order to solve this issue.
    I have already done the below steps:
    1) Kept the small dimensions together
    2) Kept the line item dimensions wherever needed
    3) Grouped related characteristics into one dimension only
    4) Removed high cardinality dimensions
    Pls help
    thanks in advance

    Often, not much thought is given to the dissemination of characteristics to dimensions. Dimension tables, however, have a huge impact on InfoCube performance. The star schema design works best when the database can assume minimal records in the dimension tables and larger volumes in the fact table.
    Rationale : -
    Each dimension should be of approximately equal size and that the file size of each dimension should not make up more than 10 percent of the associated fact table. The dimensions must also support growth.
    You should make every attempt to split up the most dynamic characteristics so they do not exist in the same dimension. This ensures that the system does not create too many entries in a dimension table.
    Example : Order data is loaded into BW with the dynamic characteristics customer and material. If these InfoObjects were to be placed together in the same dimension, it poses a problem for the system because a new dimension record would be created each time the combination of customer or material changed. This would make the dimension very large in relation to the fact table.
    When one dimension grows very large in relation to the fact table, it makes it difficult for the database optimizer to choose an efficient path to the data, because the guideline of each dimension having less than 10 percent of the fact table’s records has been violated. This condition of having a large volume of data growth in a dimension is known as “degenerative dimension.”
    In the data modeling phase, it is very important to determine if a dimension table will be degenerated, and then explicitly set it as a line item dimension
    The best way to fix a degenerative dimension is to move the offending characteristics to different dimensions
    Line-item dimensions arise in nearly every case where the granularity of the fact table represents an actual working document like an order number, invoice number, or sequence number. Flagging a dimension as line item can only be done if no data is in the InfoCube. If data is present, a dump and reload is required. This underscores the point that the data modeling decisions need to be well thought out during the initial implementation to avoid a dump and reload of data.
    Because it is far better to have many smaller dimensions than a few large dimensions, I suggest you identify the most dynamic characteristics and place them in separate dimensions. The current size of your dimensions can be monitored in relation to the fact table by running report SAP_INFOCUBE_DESIGNS in transaction SE38 for live InfoCubes
    This shows the size of the fact table and its associated dimension tables. It also shows the ratio percentage of fact to dimension size.
    Recommendation: -
    Try to limit the number of records in the dimension tables. Use the following guidelines:
    1. If an InfoObject has almost as many distinct values as there are entries in the fact tables, the dimension this InfoObject belongs to should be defined as a line item dimension. If the dimension is defined in this manner, the system will write the data directly to the fact table instead of creating a dimension table that has almost as many entries as the fact table.
    On the other hand, if there are several dimension tables with very few entries (for example, less than 10), these smaller dimensions should be combined into one dimension.
    2. Group related characteristics into one dimension only. Unrelated characteristics can use too much disk space and cause performance problems (for example, 10,000 customers and 10,000 materials may result in 100,000,000 records).
    3. Avoid characteristics with a high granularity, that is, many distinct entries compared with the number of entries in the fact table.
    4. Remove all "High-Cardinality" indicators from the InfoCube definition,generally, a dimension has a high cardinality if the number of dimension entries is 20% (or more) of the number of fact table entries. When in doubt, do not set a dimension with high cardinality
    5. As noted above, it is far better to have many smaller dimensions than a few large dimensions, so identify the most dynamic characteristics, place them in separate dimensions, and monitor the size of your dimensions in relation to the fact table with report SAP_INFOCUBE_DESIGNS.
    Hope it Helps
    Chetan
    @CP..

  • How can I Improve the Performance using Global Temo Tables ??

    Hi,
    Can anyone tell me how I can make use of global temporary tables to improve performance?
    I have a few sample scripts.
    Say I have a view based on some complex query, like:
    CREATE OR REPLACE VIEW Profile_values_view AS
    SELECT d.Profile_option_name, d.Profile_option_id, Profile_option_value,
    u.User_name, Level_id, Level_code
    FROM Profile_definitions d, Profile_values v, Profile_users u
    WHERE d.Profile_option_id = v.Profile_option_id
    AND ((Level_code = 'USER' AND Level_id = U.User_id) OR
    (Level_code = 'DEPARTMENT' AND Level_id = U.Department_id) OR
    (Level_code = 'SITE'))
    AND NOT EXISTS (SELECT 1 FROM PROFILE_VALUES P
    WHERE P.PROFILE_OPTION_ID = V.PROFILE_OPTION_ID
    AND ((Level_code = 'USER' AND
    level_id = u.User_id) OR
    (Level_code = 'DEPARTMENT' AND
    level_id = u.Department_id) OR
    (Level_code = 'SITE'))
    AND INSTR('USERDEPARTMENTSITE', v.Level_code) >
    INSTR('USERDEPARTMENTSITE', p.Level_code));
    Now I have created the global temporary table as:
    CREATE GLOBAL TEMPORARY TABLE Profile_values_temp (
    Profile_option_name VARCHAR(60) NOT NULL,
    Profile_option_id NUMBER(4) NOT NULL,
    Profile_option_value VARCHAR2(20) NOT NULL,
    Level_code VARCHAR2(10) ,
    Level_id NUMBER(4) ,
    CONSTRAINT Profile_values_temp_pk
    PRIMARY KEY (Profile_option_id)
    ) ON COMMIT PRESERVE ROWS ORGANIZATION INDEX;
    Now I am inserting the records into the temp table as:
    INSERT INTO Profile_values_temp
    (Profile_option_name, Profile_option_id, Profile_option_value,
    Level_code, Level_id)
    SELECT Profile_option_name, Profile_option_id, Profile_option_value,
    Level_code, Level_id
    FROM Profile_values_view;
    COMMIT;
    Now my doubt is: when do I need to execute the insert statement?
    Say the view returns a few million records; then loading such data into the global temporary table takes a lot of time.
    Then what is the use of global temporary tables, and how can I improve performance using them?
    Raj

    Thanks for the response.
    There are 2 to 3 complex views in our database, there will always be more than 5000 users working on the application, and it is an OLTP application. Those complex views are killing the application performance.
    What I felt was: if I create global temporary tables for those views, I will be able to load the one-third million records returned by the views into them and improve the application performance.
    I have created the global temporary tables for 2 views with the option ON COMMIT PRESERVE ROWS. But after I insert the records into the temp table and issue the commit statement, the temp table is getting cleared.
    I was really surprised by this behaviour, as I know that with the option ON COMMIT PRESERVE ROWS the rows should be retained in the temp table; instead, it is getting cleared.
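    For reference, a minimal sketch of how the two ON COMMIT variants are supposed to behave (gtt_preserve and gtt_delete are made-up names); if a PRESERVE table seems to lose rows at COMMIT, it is worth checking DBA_TABLES.DURATION to see which definition is actually in place:
    -- rows survive COMMIT and live until the session ends
    CREATE GLOBAL TEMPORARY TABLE gtt_preserve (x NUMBER)
      ON COMMIT PRESERVE ROWS;
    -- rows are removed at every COMMIT
    CREATE GLOBAL TEMPORARY TABLE gtt_delete (x NUMBER)
      ON COMMIT DELETE ROWS;
    INSERT INTO gtt_preserve VALUES (1);
    INSERT INTO gtt_delete VALUES (1);
    COMMIT;
    SELECT COUNT(*) FROM gtt_preserve;  -- returns 1
    SELECT COUNT(*) FROM gtt_delete;    -- returns 0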
    Please suggest what to do.
    Raj

  • Compare the int table with Z-table

    Hi,
    I want to compare one internal table with one Z DB table.
    Help me with the coding.
    Thank you

    Hi Sunny,
    Check this code.
    DATA: BEGIN OF LINE,
    COL1 TYPE I,
    COL2 TYPE I,
    END OF LINE.
    DATA: ITAB LIKE TABLE OF LINE,
    JTAB LIKE TABLE OF LINE.
    DO 3 TIMES.
    LINE-COL1 = SY-INDEX.
    LINE-COL2 = SY-INDEX ** 2.
      APPEND LINE TO ITAB.
    ENDDO.
    MOVE ITAB TO JTAB.
    LINE-COL1 = 10. LINE-COL2 = 20.
    APPEND LINE TO ITAB.
    IF ITAB GT JTAB.
    WRITE / 'ITAB GT JTAB'.
    ENDIF.
    APPEND LINE TO JTAB.
    IF ITAB EQ JTAB.
    WRITE / 'ITAB EQ JTAB'.
    ENDIF.
    LINE-COL1 = 30. LINE-COL2 = 80.
    APPEND LINE TO ITAB.
    IF JTAB LE ITAB.
    WRITE / 'JTAB LE ITAB'.
    ENDIF.
    LINE-COL1 = 50. LINE-COL2 = 60.
    APPEND LINE TO JTAB.
    IF ITAB NE JTAB.
    WRITE / 'ITAB NE JTAB'.
    ENDIF.
    IF ITAB LT JTAB.
    WRITE / 'ITAB LT JTAB'.
    ENDIF.
    And my suggestion is that you go to the website where the above code is present; it is helpful for the basics:
    http://help.sap.com/saphelp_nw04/helpdata/en/fc/eb3841358411d1829f0000e829fbfe/content.htm
    Cheers!!
    Venkat

  • Regarding Internal table and access performance

    Hey guys,
    In my report, I somehow reduced the query time by selecting minimum key fields and moving the selected records to an internal table.
    Now from this internal table I am restricting the loop
    as per my requirements using WHERE statements (believing that internal table retrieval is faster than database access using a query).
    But still my performance goes down.
    Could you please suggest how to reduce the execution time
    in ABAP programming?
    I used the below commands:
    READ using BINARY SEARCH.
    LOOP ... WHERE statement.
    PERFORM statements.
    COLLECT statements.
    DELETE itab (delete duplicates statements too).
    SORT itab (sorting).
    For each of the above statements, do we have any faster way to retrieve records?
    If I look at my bottleneck in SE30, it shows
    ABAP programming at 70 percent,
    database access at 20 percent,
    R/3 system at 10 percent.
    Now how do I reduce this ABAP processing?
    Could you please reply.
    ambichan.

    Hello Ambichan,
    It is difficult to suggest the improvements without looking at the actual code that you are running. However, I can give you some general information.
    1. READ using the BINARY SEARCH addition.
    This is indeed a good way of doing a READ. But have you made sure that the internal table is <i>sorted by the required fields</i> before you use this statement?
    2. LOOP...WHERE statement.
    This is also a good way to avoid looping through unnecessary entries. But further improvement can certainly be achieved if you use FIELD-SYMBOLS (here <FS> is a field symbol with the same line type as ITAB):
    LOOP AT ITAB ASSIGNING <FS>.
    ENDLOOP.
    3. PERFORM statements.
    A PERFORM statement cannot be optimized by itself; what matters is the code that you write inside the FORM (or subroutine).
    4. COLLECT statements.
    I trust you have used the COLLECT statement to simplify the logic. Let that be as it is; the code is more readable and elegant.
    The COLLECT statement is somewhat performance-intensive. It takes more time with a normal (STANDARD) internal table. See if you can use an internal table of type SORTED. Even better, you can use a HASHED internal table.
    5. DELETE itab (delete duplicates statements too).
    If you are deleting several entries based on a condition, then this should be okay. You cannot avoid using the DELETE statement if your functionality requires you to do so.
    Also, before deleting the DUPLICATES, ensure that the internal table is sorted.
    6. SORT statement.
    It depends on how many entries there are in the internal table. If you are using most of the above points on the same internal table, then it is better to define your internal table as type SORTED. That way, inserting entries will take a little more time (to ensure that the table is always sorted), but all the other operations are going to be much faster.
    Get back to me if you need further assistance.
    Regards,
    Anand Mandalika.

  • Query to compare 2 different tables from 2 different database

    Is it possible to write a SQL query to compare 2 tables from 2 different Oracle databases? Also, I need a query to do the same when the database is the same.
    Thanks in advance

    OK, well "compare" can mean one of two things: Compare structure or compare contents. Here is a quick script to compare column structures of two tables on one database:
    (select COLUMN_NAME,
    DATA_TYPE,
    DATA_LENGTH,
    DATA_PRECISION,
    DATA_SCALE,
    NULLABLE
    from dba_Tab_columns
    where owner=:OWNR1
    and table_name = :tablename
    minus
    select COLUMN_NAME,
    DATA_TYPE,
    DATA_LENGTH,
    DATA_PRECISION,
    DATA_SCALE,
    NULLABLE
    from dba_Tab_columns
    where owner=:ownr2
    and table_name = :tablename)
    union all
    (select COLUMN_NAME,
    DATA_TYPE,
    DATA_LENGTH,
    DATA_PRECISION,
    DATA_SCALE,
    NULLABLE
    from dba_Tab_columns
    where owner=:ownr2
    and table_name = :tablename
    minus
    select COLUMN_NAME,
    DATA_TYPE,
    DATA_LENGTH,
    DATA_PRECISION,
    DATA_SCALE,
    NULLABLE
    from dba_Tab_columns
    where owner=:ownr1
    and table_name = :tablename)
    If this query returns any rows, then these indicate that there are structural differences between the tables. We do a minus in both directions to ensure that an additional column in either schema will be returned in the query.
    If you are going across dblinks to remote tables then you have to amend the "dba_tab_columns" to "sys.dba_tab_columns@yourdblink"
    If you also want to compare indexes, triggers, etc then do the same sort of thing for the associated dba_ views for those objects.
    And if you want to compare table contents, then often the fastest way is also to check minuses in both directions, if the tables are not too big:
    e.g.
    (select * from schema1.table@dblink1
    minus
    select * from schema2.table@dblink2)
    union all
    (select * from schema2.table@dblink2
    minus
    select * from schema1.table@dblink1)
    For local tables, of course, simply omit the "@dblink" part.
    There are tools to help in such things. TOAD, for example, has a pretty good schema comparison tool, and there are plenty of other options out there. But if you need to script this yourself then the logic I've shown is a good starting point.
    Cheers,
    Mike

  • Different Performance for a view/table

    Hi,
    I have a view called "Myview" which has poor performance on one database (DBTEST) but good performance on another database (DBDEV).
    I checked the indexes on both, and all of them were in place on both databases.
    DBTEST and DBDEV are both installed on the same Unix machine (they share the same resources).
    Since both databases are configured similarly, I'm wondering why querying the Myview view takes twice as long to return records on DBTEST.
    How can I identify where the problem is? The "consistent gets" and "physical reads" figures are about 2 times higher on DBTEST when I query the view. I believe this is why I have poor performance on DBTEST.
    Could someone give me advice on what DB parameters I should verify to identify the problem?
    DBTEST> select status from Myview where id = 100;
    elapsed time: 40 seconds
    DBDEV> select status from Myview where id = 100;
    elapsed time: 22 seconds
    DBTEST> select count(*) from Myview;
    5123 rows selected
    DBDEV> select count(*) from Myview;
    4022 rows selected
    Thanks,
    Amir

    There are 13 tables plus one view underlying Myview.
    The tables which are not listed are lookup tables and contain an equal number of rows on both DBs:
    DBDEV
    TableName      No of Rows
    user_role      3023
    project        2059
    project_year   647
    doc_tab        3091
    user           3155
    org            2639
    region         125
    application    3353
    DBTEST
    TableName      No of Rows
    user_role      6362
    project        5058
    project_year   1516
    doc_tab        8659
    user           6936
    org            6320
    region         176
    application    7325
    Since Myview uses a UNION clause, I picked part of the execution plan:
    DBDEV:
    11 rows selected.
    Elapsed: 00:00:16.01
    Execution Plan
    SELECT STATEMENT Optimizer=CHOOSE (Cost=525 Card=3 Bytes=111)
    VIEW OF 'Myview' (Cost=525 Card=3 Bytes=111)
    SORT (UNIQUE) (Cost=560 Card=3 Bytes=1103)
    UNION-ALL
    HASH JOIN (ANTI) (Cost=138 Card=1 Bytes=369)
    HASH JOIN (Cost=135 Card=1 Bytes=356)
    NESTED LOOPS (Cost=132 Card=1 Bytes=348)
    NESTED LOOPS (OUTER) (Cost=131 Card=1 Bytes=330)
    NESTED LOOPS (OUTER) (Cost=130 Card=1 ytes=308)
    NESTED LOOPS (OUTER) (Cost=129 Card=1 Bytes=295)
    FILTER
    NESTED LOOPS (OUTER)
    HASH JOIN (Cost=128 Card=1 Bytes=175)
    VIEW OF 'Myview_PROJ_ALL_YEAR'
    (Cost=123 Card=15 Bytes=2295)
    MERGE JOIN (Cost=123 Card=15 Bytes=1935)
    SORT (JOIN) (Cost=119 Card=529 Bytes=61893)
    HASH JOIN (Cost=107 Card=529 Bytes=61893)
    VIEW OF 'Myview_PROJECT' (Cost=100 Card=529 Bytes=44436)
    SORT (UNIQUE) (Cost=100 Card=529 Bytes=40998)
    UNION-ALL
    HASH JOIN (Cost=9 Card=51 Bytes=2703)
    TABLE ACCESS (FULL) OF 'GRANT_PROGRAM' (Cost=2 Card=15 Bytes=135)
    TABLE ACCESS (FULL) OF 'PROJECT' (Cost=6 Card=51 Bytes=2244)
    HASH JOIN (Cost=48 Card=129 Bytes=11610)
    HASH JOIN (Cost=41 Card=127 Bytes=9779)
    HASH JOIN (Cost=29 Card=94 Bytes=5922)
    HASH JOIN (Cost=9 Card=51 Bytes=2703)
    TABLE ACCESS (FULL) OF 'GRANT_PROGRAM' (Cost=2 Card=15 Bytes=135)
    TABLE ACCESS (FULL) OF 'PROJECT' (Cost=6 Card=51 Bytes=2244)
    TABLE ACCESS (FULL) OF 'APPLICATION' (Cost=19 Card=3353 Bytes=33530)
    INDEX (FAST FULL SCAN) OF 'UK_user_INVOLVE' (UNIQUE) (Cost=11 Card=4527 Bytes=63378)
    TABLE ACCESS (FULL) OF 'user_role' (Cost=6 Card=3023 Bytes=39299)
    HASH JOIN (Cost=12 Card=298 Bytes=22350)
    HASH JOIN (Cost=9 Card=51 Bytes=2907)
    TABLE ACCESS (FULL) OF 'GRANT_PROGRAM' (Cost=2 Card=15 Bytes=135)
    TABLE ACCESS (FULL) OF 'PROJECT' (Cost=6 Card=51 Bytes=2448)
    TABLE ACCESS (FULL) OF 'region' (Cost=2 Card=69 Bytes=1242)
    HASH JOIN (Cost=19 Card=51 Bytes=4335)
    HASH JOIN (Cost=12 Card=51 Bytes=3366)
    HASH JOIN (Cost=9 Card=51 Bytes=2907)
    DBTEST:
    9 rows selected.
    Elapsed: 00:00:34.03
    Execution Plan
    SELECT STATEMENT Optimizer=CHOOSE (Cost=941 Card=3 Bytes=111)
    VIEW OF 'Myview' (Cost=941 Card=3 Bytes=111)
    SORT (UNIQUE) (Cost=976 Card=3 Bytes=1106)
    UNION-ALL
    HASH JOIN (ANTI) (Cost=253 Card=1 Bytes=370)
    NESTED LOOPS (OUTER) (Cost=250 Card=1 Bytes=357)
    NESTED LOOPS (OUTER) (Cost=250 Card=1 Bytes=341)
    NESTED LOOPS (OUTER) (Cost=249 Card=1 Bytes=318)
    HASH JOIN (Cost=248 Card=1 Bytes=304)
    NESTED LOOPS (Cost=245 Card=1 Bytes=296)
    HASH JOIN (Cost=243 Card=2 Bytes=556)
    FILTER
    HASH JOIN (OUTER)
    VIEW OF 'Myview_PROJ_ALL_YEAR' (Cost=229 Card=35 Bytes=5355)
    MERGE JOIN (Cost=229 Card=35 Bytes=4550)
    SORT (JOIN) (Cost=226 Card=1262 Bytes=148916)
    HASH JOIN (Cost=198 Card=1262 Bytes=148916)
    VIEW OF 'Myview_PROJECT' (Cost=183 Card=1262 Bytes=106008)
    SORT (UNIQUE) (Cost=183 Card=1262 Bytes=100528)
    UNION-ALL
    HASH JOIN (Cost=15 Card=126 Bytes=6678)
    TABLE ACCESS (FULL) OF 'GRANT_PROGRAM' (Cost=2 Card=28 Bytes=252)
    TABLE ACCESS (FULL) OF 'PROJECT' (Cost=12 Card=126 Bytes=5544)
    HASH JOIN (Cost=98 Card=454 Bytes=41314)
    HASH JOIN (Cost=88 Card=448 Bytes=34496)
    HASH JOIN (Cost=48 Card=206 Bytes=12978)
    HASH JOIN (Cost=15 Card=126 Bytes=6678)
    TABLE ACCESS (FULL) OF 'GRANT_PROGRAM' (Cost=2 Card=28 Bytes=252)
    TABLE ACCESS (FULL) OF 'PROJECT' (Cost=12 Card=126 Bytes=5544)
    TABLE ACCESS (FULL) OF 'APPLICATION' (Cost=32 Card=7325 Bytes=73250)
    INDEX (FAST FULL SCAN) OF 'UK_user_INVOLVE' (UNIQUE) (Cost=39 Card=15889
    Bytes=222446)
    TABLE ACCESS (FULL) OF 'user_role' (Cost=9 Card=6362 Bytes=89068)
    HASH JOIN (Cost=18 Card=556 Bytes=41700)
    TABLE ACCESS (FULL) OF 'region' (Cost=2 Card=88 Bytes=1584)
    HASH JOIN (Cost=15 Card=126 Bytes=7182)
    TABLE ACCESS (FULL) OF 'GRANT_PROGRAM' (Cost=2 Card=28 Bytes=252)
    TABLE ACCESS (FULL) OF 'PROJECT' (Cost=12 Card=126 Bytes=6048)
    HASH JOIN (Cost=28 Card=126 Bytes=10836)
    HASH JOIN (Cost=18 Card=126 Bytes=8316)
    You can see that the elapsed time for querying DBTEST is sometimes 2 times more than DBDEV. BTW, I checked that all indexes are in place on both databases.
    Based on the information provided, can you tell me what the problem is?
    Thanks,

  • Regarding comparing of 2 tables in report

    Hi,
    I am making a report in which I have to display the comparison between two transparent tables. One is already present
    (in my case ZSTOCKSUM), and the other table (ZST1) is the same as ZSTOCKSUM, as it will store the data from a BDC.
    In the report I have to compare whether the data stored in SAP and the data from the BDC match.
    Can anybody provide me some examples of reports comparing these kinds of tables? Help will definitely be rewarded.

    Hi,
    The following code tells you, how to compare two internal tables:
    DATA: BEGIN OF LINE,
    COL1 TYPE I,
    COL2 TYPE I,
    END OF LINE.
    DATA: ITAB LIKE TABLE OF LINE,
    JTAB LIKE TABLE OF LINE.
    DO 3 TIMES.
    LINE-COL1 = SY-INDEX.
    LINE-COL2 = SY-INDEX ** 2.
    APPEND LINE TO ITAB.
    ENDDO.
    MOVE ITAB TO JTAB.
    LINE-COL1 = 10. LINE-COL2 = 20.
    APPEND LINE TO ITAB.
    IF ITAB GT JTAB.
    WRITE / 'ITAB GT JTAB'.
    ENDIF.
    APPEND LINE TO JTAB.
    IF ITAB EQ JTAB.
    WRITE / 'ITAB EQ JTAB'.
    ENDIF.
    LINE-COL1 = 30. LINE-COL2 = 80.
    APPEND LINE TO ITAB.
    IF JTAB LE ITAB.
    WRITE / 'JTAB LE ITAB'.
    ENDIF.
    LINE-COL1 = 50. LINE-COL2 = 60.
    APPEND LINE TO JTAB.
    IF ITAB NE JTAB.
    WRITE / 'ITAB NE JTAB'.
    ENDIF.
    IF ITAB LT JTAB.
    WRITE / 'ITAB LT JTAB'.
    ENDIF.
    This example creates two standard tables, ITAB and JTAB. ITAB is filled with 3 lines and copied to JTAB. Then, another line is appended to ITAB and the first logical expression tests whether ITAB is greater than JTAB. After appending the same line to JTAB, the second logical expression tests whether both tables are equal. Then, another line is appended to ITAB and the third logical expressions tests whether JTAB is less than or equal to ITAB. Next, another line is appended to JTAB. Its contents are unequal to the contents of the last line of ITAB. The next logical expressions test whether ITAB is not equal to JTAB. The first table field whose contents are different in ITAB and JTAB is COL1 in the last line of the table: 30 in
    ITAB and 50 in JTAB. Therefore, in the last logical expression, ITAB is less than JTAB.
    Regards,
    Bhaskar
