Issue with single table hash cluster

I'm still learning lots about HTML DB, so please do bear with me...
I have a table that is part of a single table hash cluster. The cluster and table are created like so:
create cluster webmon.owm_system (
  db_id number(3),
  env_id number(3)
)
size 171
single table hashkeys 1009
pctfree 2
pctused 96;
create table webmon.owm_system (
  tier_id number(3) not null,
  host_id number(3) not null,
  port number(4) not null,
  sid varchar2(8) not null,
  admin_user varchar2(32) not null,
  admin_auth varchar2(48) not null,
  primary_dba_id number(3) not null,
  secondary_dba_id number(3) not null,
  primary_bac_id number(3) not null,
  secondary_bac_id number(3) not null,
  version_id number(3) not null,
  log_mode_id number(3) not null,
  db_id number(3) not null,
  env_id number(3) not null,
  admin_conn varchar2(32) null,
  constraint owm_system_pk primary key (db_id, env_id)
)
cluster webmon.owm_system (
  db_id,
  env_id
);
I have created a form (and report) to view and edit the data in the table. When I click the "Apply Changes" button on the form after editing the data, I get an error: the SQL sent to the database is incorrect.
Here is the (reformatted) incorrect SQL that is sent (captured in a database trace file):
update
  "WEBMON"."OWM_SYSTEM"
set
  "SECONDARY_DBA_ID" = :DML_BV0001,
  "DB_ID" = :DML_BV0002,
  "DB_ID" = :DML_BV0003,  /* This should not be here */
  "ENV_ID" = :DML_BV0004,
  "ENV_ID" = :DML_BV0005,  /* This should not be here */
  "HOST_ID" = :DML_BV0006,
  "PORT" = :DML_BV0007,
  "SID" = replace(:DML_BV0008,'%null%',null),
  "ADMIN_USER" = replace(:DML_BV0009,'%null%',null),
  "ADMIN_AUTH" = replace(:DML_BV0010,'%null%',null),
  "ADMIN_CONN" = replace(:DML_BV0011,'%null%',null),
  "VERSION_ID" = :DML_BV0012,
  "LOG_MODE_ID" = :DML_BV0013,
  "TIER_ID" = :DML_BV0014,
  "PRIMARY_DBA_ID" = :DML_BV0015,
  "PRIMARY_BAC_ID" = :DML_BV0016,
  "SECONDARY_BAC_ID" = :DML_BV0017
where
  "DB_ID" = :p_rowid
and
  "ENV_ID" = :p_rowid2Why are the DB_ID and ENV_ID columns in the statement twice? That is the error.
Now, if I create the table as just a basic, non-clustered table like this:
create table webmon.owm_system (
  tier_id number(3) not null,
  host_id number(3) not null,
  port number(4) not null,
  sid varchar2(8) not null,
  admin_user varchar2(32) not null,
  admin_auth varchar2(48) not null,
  primary_dba_id number(3) not null,
  secondary_dba_id number(3) not null,
  primary_bac_id number(3) not null,
  secondary_bac_id number(3) not null,
  version_id number(3) not null,
  log_mode_id number(3) not null,
  db_id number(3) not null,
  env_id number(3) not null,
  admin_conn varchar2(32) null,
  constraint owm_system_pk primary key (db_id, env_id)
);
Everything works as expected. The (correct) SQL sent in this case is:
update
  "WEBMON"."OWM_SYSTEM"
set
  "SECONDARY_DBA_ID" = :DML_BV0001,
  "DB_ID" = :DML_BV0002,
  "ENV_ID" = :DML_BV0003,
  "HOST_ID" = :DML_BV0004,
  "PORT" = :DML_BV0005,
  "SID" = replace(:DML_BV0006,'%null%',null),
  "ADMIN_USER" = replace(:DML_BV0007,'%null%',null),
  "ADMIN_AUTH" = replace(:DML_BV0008,'%null%',null),
  "ADMIN_CONN" = replace(:DML_BV0009,'%null%',null),
  "VERSION_ID" = :DML_BV0010,
  "LOG_MODE_ID" = :DML_BV0011,
  "TIER_ID" = :DML_BV0012,
  "PRIMARY_DBA_ID" = :DML_BV0013,
  "PRIMARY_BAC_ID" = :DML_BV0014,
  "SECONDARY_BAC_ID" = :DML_BV0015
where
  "DB_ID" = :p_rowid
and
  "ENV_ID" = :p_rowid2The "real" system in question has a fairly large number of single table hash clusters. I'm leary of continuing my explorations if there is a problem with using single table hash clusters. On the other hand, I'm quite open to the idea that I've done something wrong!
Any pointers? Is there any way to change the SQL that is generated when using the single table hash cluster? More information needed?
Thanks,
Mark

Mark,
Can't help with the hash cluster problem, but as a way of dealing with it you could just make a procedure to manage the table.
Pass in the value of :REQUEST (it matches the value of the button) and you'll be able to modify the table.
procedure manage_owm_system (p_request varchar2, p_dbid number, ...) is
begin
  case p_request
    when 'CREATE' then
      insert into ...
    when 'SAVE' then
      update ...
    else
      raise_application_error(-20001, 'Unknown request');
  end case;
end;
Put that in a PL/SQL region after submit:
begin
  manage_owm_system(p_request => :REQUEST, p_dbid => :P1_DBID, ...);
end;
You need to create something similar, with OUT parameters, to populate the items when the page loads; see the sketch below.
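For example, a minimal sketch of such a loader (the parameter names are hypothetical, and it fetches only two of the columns shown above for brevity; extend the parameter list to cover the rest):
procedure load_owm_system (
  p_db_id   in  webmon.owm_system.db_id%type,
  p_env_id  in  webmon.owm_system.env_id%type,
  p_tier_id out webmon.owm_system.tier_id%type,
  p_host_id out webmon.owm_system.host_id%type
) is
begin
  -- fetch the row identified by the primary key and hand the values back
  select tier_id, host_id
    into p_tier_id, p_host_id
    from webmon.owm_system
   where db_id = p_db_id
     and env_id = p_env_id;
end load_owm_system;
Call it from a page-load process and assign the OUT values to the corresponding page items.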
Chris

Similar Messages

  • Single table hash clusters

    I created a single table hash cluster like this :
    create tablespace mssm datafile 'c:\app\mssm01.dbf' size 100m
    segment space management manual;
    create cluster hash_cluster_4k
    ( id number(2) )
    size 8192 single table hash is id hashkeys 4 tablespace mssm;
-- Created a table in the cluster with a row size such that only one record fits in one block, and inserted 5 records, each with a distinct key value
CREATE TABLE hash_cluster_tab_8k
( id number(2),
  txt1 char(2000),
  txt2 char(2000),
  txt3 char(2000)
)
CLUSTER hash_cluster_8k( id );
begin
  for i in 1..5 loop
    insert into hash_cluster_tab_8k values (i, 'x', 'x', 'x');
  end loop;
end;
/
exec dbms_stats.gather_table_stats(USER, 'HASH_CLUSTER_TAB_8K', cascade => true);
    Now, If I try to access record with id = 1 - It shows 2 I/O's (cr = 2) instead of single I/O as is expected in a hash cluster.
    Rows Row Source Operation
    1 TABLE ACCESS HASH HASH_CLUSTER_TAB_8K (cr=2 pr=0 pw=0 time=0 us)
If I issue the same query after creating a unique index on hash_cluster_tab(id), the execution plan shows hash access and a single I/O (cr = 1).
Does it mean that to have a single I/O in a single table hash cluster, we have to create a unique index? Won't it create the additional overhead of maintaining an index?
What is the second I/O needed for when the unique index is absent?
I would be extremely thankful if gurus could explain this behaviour.
Thanks in advance.

    >
    Now, If I try to access record with id = 1 - It shows 2 I/O's (cr = 2) instead of single I/O as is expected in a hash cluster.
    1 TABLE ACCESS HASH HASH_CLUSTER_TAB_8K (cr=2 pr=0 pw=0 time=0 us)
    >
    As expected? Have you considered that your 'expectation' is wrong?
    >
If I issue the same query after creating a unique index on hash_cluster_tab(id), the execution plan shows hash access and a single I/O (cr = 1).
Does it mean that to have a single I/O in a single table hash cluster, we have to create a unique index? Won't it create the additional overhead of maintaining an index?
What is the second I/O needed for when the unique index is absent?
    >
My hypothesis would be that you are seeing the effects of having a 'hash collision'; a collision that you caused yourself by the way you defined the table.
    Remember when you said this?
    >
    create cluster hash_cluster_4k
    ( id number(2) )
    size 8192 single table hash is id hashkeys 4 tablespace mssm;
    >
    You told Oracle there will only be FOUR different IDs used.
    And then you said this
    >
    -- Created a table in cluster with row size such that only one record fits one block and inserted 5 records each with a distinct key value
    >
    You used FIVE different IDs and only ONE record will fit into each block.
    So that record with 'ID=5' is guaranteed to HASH to one of the existing four hash values. And that means you have a 'hash collision'.
    The docs explain what happens when you have a 'hash collision'. See the 'Hash Cluster Storage' section in the Database Concepts doc
    http://docs.oracle.com/cd/E11882_01/server.112/e25789/tablecls.htm#sthref258
    >
    Hash Cluster Storage
    Oracle Database allocates space for a hash cluster differently from an indexed cluster. In Example 2-9, HASHKEYS specifies the number of departments likely to exist, whereas SIZE specifies the size of the data associated with each department. The database computes a storage space value based on the following formula:
    HASHKEYS * SIZE / database_block_size
    Thus, if the block size is 4096 bytes in Example 2-9, then the database allocates at least 200 blocks to the hash cluster.
    Oracle Database does not limit the number of hash key values that you can insert into the cluster. For example, even though HASHKEYS is 100, nothing prevents you from inserting 200 unique departments in the departments table. However, the efficiency of the hash cluster retrieval diminishes when the number of hash values exceeds the number of hash keys.
    >
    Using that formula above with HASHKEYS=4, SIZE=8192 and block size=8192 Oracle allocates at least 4 blocks.
    The next two paragraphs tell you what happens for a use case like yours: HASH COLLISION
    >
    To illustrate the retrieval issues, assume that block 100 in Figure 2-7 is completely full with rows for department 20. A user inserts a new department with department_id 43 into the departments table. The number of departments exceeds the HASHKEYS value, so the database hashes department_id 43 to hash value 77, which is the same hash value used for department_id 20. Hashing multiple input values to the same output value is called a hash collision.
    When users insert rows into the cluster for department 43, the database cannot store these rows in block 100, which is full. The database links block 100 to a new overflow block, say block 200, and stores the inserted rows in the new block. Both block 100 and 200 are now eligible to store data for either department. As shown in Figure 2-8, a query of either department 20 or 43 now requires two I/Os to retrieve the data: block 100 and its associated block 200. You can solve this problem by re-creating the cluster with a different HASHKEYS value.
    >
    Note the next to last sentence:
    >
    As shown in Figure 2-8, a query of either department 20 or 43 now requires two I/Os to retrieve the data: block 100 and its associated block 200.
    >
Hmmmm - sounds suspiciously like your use case, don't you think?
    Try what the doc says in that last sentence and see if it solves your problem:
    >
    You can solve this problem by re-creating the cluster with a different HASHKEYS value.
    >
    The parameters you provided and the table example you are using GUARANTEE that if more than FOUR ids are used there will be hash collisions and the result MUST BE what the doc describes. There will NEVER be space in an existing block for a second row so a new block has to be used and that means 'chaining' the blocks to find the one you need: one I/O for each block in the chain.
    Jonathan said he could not reproduce your problem but the 'hash' algorithm for his instance might have hashed 'ID=5' to a different value; his 'hash collision' might only occur for ID=2 (or 3 or 4).
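For illustration, a re-creation along those lines might look like this (the HASHKEYS value of 11 is an assumption; anything comfortably above the number of distinct keys works, and Oracle rounds HASHKEYS up to the next prime anyway):
-- the table must be dropped before the cluster it lives in
drop table hash_cluster_tab_8k;
drop cluster hash_cluster_4k;
create cluster hash_cluster_4k
( id number(2) )
size 8192 single table hash is id hashkeys 11 tablespace mssm;
With at least one block available per distinct key, each lookup should be back to a single I/O (cr = 1).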

  • Hello Gurus..... ISSUE with child Table update

I have an issue with a child table update.
I have created a GTC with one parent table and two child tables. I'm able to update the parent table and the values are found in the DB, but the issue is that the child table values are not updating in the DB.
Please give me a solution.
    regards
    Srikanth

If you are keeping referential integrity in the database, not in the application, it is easy to find the child and parent tables. Here is a quick and dirty query. You can join this to dba_cons_columns to find out on which columns the referential constraints are defined; a sketch of that join follows below. This lists all child-parent tables, including those owned by SYS and SYSTEM. You can run this for specific users of course.
select cons1.owner child_owner, cons1.table_name child_table,
       cons2.owner parent_owner, cons2.table_name parent_table
from   dba_constraints cons1, dba_constraints cons2
where  cons1.constraint_type = 'R'
and    cons1.r_constraint_name = cons2.constraint_name
and    cons1.r_owner = cons2.owner;  -- also match the owner, in case constraint names repeat across schemas
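For the column-level detail mentioned above, the dba_cons_columns join could look like this (a sketch; the dictionary views and columns used are standard):
select cons.owner child_owner, cons.table_name child_table,
       cols.column_name, cols.position
from   dba_constraints cons, dba_cons_columns cols
where  cons.constraint_type = 'R'
and    cols.owner = cons.owner
and    cols.constraint_name = cons.constraint_name
order  by cons.owner, cons.table_name, cols.position;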

  • Issues with Advance Table Add Row New Row not work in some scenarios.

    Hi,
Wondering if there's any issue with Advanced Tables where they do not create any rows. I don't know if anyone has tried this or not. I have one OA page with an Advanced Table and a button that, when clicked, opens a new OA page in a pop-up window. The pop-up page contains one text box where you enter data, and this gets saved in one of the VO's transient attributes. Now, on the base page, if you don't click the button to open the pop-up page, you can add new rows in the Advanced Table by clicking the Add Row button. But as soon as you open a pop-up window and close it, the Add New Rows button doesn't work and no new rows are created. Basically the page stops working. Both the pop-up and the base page share the same AM but have different controllers.
The pop-up page is a custom page that I open by giving the Destination URI value in the button item, with target frame _blank.
I even tried creating rows programmatically for the Advanced Table, but this too doesn't work once you open a pop-up. Also, I have used pageContext.putTransactionValue in the pop-up page and am checking and removing this in the base page.
    Any help is appreciated.
    Thanks

    anyone

  • SQL Query : Order By issue with HUGE Table

    Hello friends,
I have been through a terrible issue with ORDER BY. I would appreciate your help. Please let me know your input for my case:
=> If I run the select query, it returns results quickly, in milliseconds (SQL Developer fetches 50 rows at a time).
=> If I run the select query with a where condition, the column (say A) in the where condition is indexed, and the ORDER BY column (say B) is also indexed.
Now, here is the issue:
1. If the where condition filters down to a small result set, then ORDER BY works fine: 1-5 sec, which is good.
2. *If the where condition yields a large result set, say more than 50,000 rows, then with ORDER BY the wait time grows dramatically; I have even waited 10+ minutes to get the result back for 120,000 records.*
Does ORDER BY take that long for 100K records? I think something else is wrong... your pointers will really be helpful. I am very new to SQL and even newer to the large-table case.
    I am using SQL Developer Version 2.1.1.64
    and Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    Thank you so much.
    Edited by: 896719 on Jan 11, 2013 8:38 AM

Yes, you are correct, but my concentration was on the ORDER BY, since it will do a full scan of the table, which is why I put it that way... and I was also wondering whether millions of records in a table should be an issue...???
Anyway, for the explain plan: when just a value in the where clause changes there is a huge difference, which I want to point out too, as below:
SELECT *
FROM EES_EVT EES_EVT
WHERE APLC_EVT_CD = 'ABC'
ORDER BY CRE_DTTM DESC
    execution time : 0.047 sec
    Plan hash value: 290548126
    | Id  | Operation                    | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT             |             |    27 | 14688 |    25   (4)| 00:00:01 |
    |   1 |  SORT ORDER BY               |             |    27 | 14688 |    25   (4)| 00:00:01 |
    |   2 |   TABLE ACCESS BY INDEX ROWID| EES_EVT     |    27 | 14688 |    24   (0)| 00:00:01 |
    |*  3 |    INDEX RANGE SCAN          | XIE1EES_EVT |    27 |       |     4   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       3 - access("APLC_EVT_CD"='ABC')
    Note
       - SQL plan baseline "SYS_SQL_PLAN_6d41e6b91925c463" used for this statement
    =============================================================================================
SELECT *
FROM EES_EVT EES_EVT
WHERE APLC_EVT_CD = 'XYZ'
ORDER BY CRE_DTTM DESC
execution time : 898.672 sec.
    Plan hash value: 290548126
    | Id  | Operation                    | Name        | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT             |             |   121K|    62M|       |   102K  (1)| 00:11:02 |
    |   1 |  SORT ORDER BY               |             |   121K|    62M|    72M|   102K  (1)| 00:11:02 |
    |   2 |   TABLE ACCESS BY INDEX ROWID| EES_EVT     |   121K|    62M|       | 88028   (1)| 00:09:27 |
    |*  3 |    INDEX RANGE SCAN          | XIE1EES_EVT |   121K|       |       |   689   (1)| 00:00:05 |
    Predicate Information (identified by operation id):
       3 - access("APLC_EVT_CD"='XYZ')
    Note
   - SQL plan baseline "SYS_SQL_PLAN_ef5709641925c463" used for this statement
Also note this table contains 74328 MB of data.
    Thanks
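For what it's worth (this is not suggested in the thread, just a common approach): a composite index matching both the filter column and the sort column can let the optimizer walk the index in sorted order and skip the SORT ORDER BY step entirely. A hedged sketch, with a hypothetical index name:
create index xie9ees_evt on ees_evt (aplc_evt_cd, cre_dttm);
With such an index, WHERE APLC_EVT_CD = 'XYZ' ORDER BY CRE_DTTM DESC can be served by a descending index range scan, avoiding the 72M temp-space sort shown in the second plan.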

  • Insert performance issue with Partitioned Table.....

    Hi All,
I have a performance issue with an insert into a table which is partitioned. Without the table being partitioned
it ran in less time, but after partitioning it took more than double.
1) The table was created initially without any partition, and the below insert took only 27 minutes.
Total Rec Inserted :- 2424233
PL/SQL procedure successfully completed.
Elapsed: 00:27:35.20
2) Now I re-created the table with partitioning (range, yearly; see below) and the same insert took 59 minutes.
Is there any way I can achieve better performance during the insert on this partitioned table?
[ Similarly, I have another table with 50 million records; the insert took 10 hours without partitioning,
and with the table partitioned it took 18 hours... ]
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 4195045590
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 643K| 34M| | 12917 (3)| 00:02:36 |
    |* 1 | HASH JOIN | | 643K| 34M| 2112K| 12917 (3)| 00:02:36 |
    | 2 | VIEW | index$_join$_001 | 69534 | 1290K| | 529 (3)| 00:00:07 |
    |* 3 | HASH JOIN | | | | | | |
    | 4 | INDEX FAST FULL SCAN| PK_ACCOUNT_MASTER_BASE | 69534 | 1290K| | 181 (3)| 00:00
    | 5 | INDEX FAST FULL SCAN| ACCOUNT_MASTER_BASE_IDX2 | 69534 | 1290K| | 474 (2)| 00:00
    PLAN_TABLE_OUTPUT
    | 6 | TABLE ACCESS FULL | TB_SISADMIN_BALANCE | 2424K| 87M| | 6413 (4)| 00:01:17 |
    Predicate Information (identified by operation id):
    1 - access("A"."VENDOR_ACCT_NBR"=SUBSTR("B"."ACCOUNT_NO",1,8) AND
    "A"."VENDOR_CD"="B"."COMPANY_NO")
    3 - access(ROWID=ROWID)
Open C1;
Loop
  Fetch C1 Bulk Collect Into C_Rectype Limit 10000;
  Forall I In 1 .. C_Rectype.Count
    Insert Into test
      (col1, col2, col3)
    Values
      (C_Rectype(I).val1, C_Rectype(I).val2, C_Rectype(I).val3);
  V_Rec := V_Rec + Nvl(C_Rectype.Count, 0);
  Commit;
  Exit When C_Rectype.Count = 0;
  C_Rectype.delete;
End Loop;
Close C1;
End;
    Total Rec Inserted :- 2424233
    PL/SQL procedure successfully completed.
    Elapsed: 00:51:01.22
    Edited by: user520824 on Jul 16, 2010 9:16 AM

    I'm concerned about the view in step 2 and the index join in step 3. A composite index with both columns might eliminate the index join and result in fewer read operations.
    If you know which partition the data is going into beforehand you can save a little bit of processing by specifying the partition (which may not be a scalable long-term solution) in the insert - I'm not 100% sure you can do this on inserts but I know you can on selects.
    The APPEND hint won't help the way you are using it - the VALUES clause in an insert makes it be ignored. Where it is effective and should help you is if you can do the insert in one query - insert into/select from. If you are using the loop to avoid filling up undo/rollback you can use a bulk collect to batch the selects and commit accordingly - but don't commit more often than you have to because more frequent commits slow transactions down.
    I don't think there is a nologging hint :)
    So, try something like
insert /*+ hints */ into ...
select A.Ing_Acct_Nbr, currency_Symbol,
       Balance_Date, Company_No,
       Substr(Account_No,1,8) Account_No,
       Substr(Account_No,9,1) Typ_Cd,
       Substr(Account_No,10,1) Chk_Cd,
       Td_Balance, Sd_Balance,
       Sysdate, 'Sisadmin'
from   Ideaal_Cons.Tb_Account_Master_Base A,
       Ideaal_Staging.Tb_Sisadmin_Balance B
where  A.Vendor_Acct_Nbr = Substr(B.Account_No,1,8)
and    A.Vendor_Cd = B.company_no;
Edited by: riedelme on Jul 16, 2010 7:42 AM

  • Creation of SAP Query in SQ02 with Single Table With Condition

    Hi All,
I want to create an SAP Query in SQ02 using the single table MCHA.
ii) I don't want all entries of the MCHA table; I mean, I have to apply some condition on this table.
i.e. Suppose the actual data in the MCHA table is like this for material M1:
    Plant    Material   Batch   BatchCreationdate
    P1          M1         B1       20.06.2007
    P2          M1         B1       04.05.2009
    P3          M1         B1       04.05.2009
    But I want the Output of SAP Query is like this:
       Material   Batch   BatchCreationdate
          M1         B1       20.06.2007
That is, irrespective of plant, if material & batch are equal, the first record with the lowest date should appear in the output.
Please help me with how to write the code on a single table in the SAP Query.
    Thanks,
    Kiran Manyam

    Hi,
    Your query should be like this:
SELECT matnr charg hsdat
  FROM mcha
  INTO TABLE t_mcha
  WHERE matnr = p_matnr.  "p_matnr: the material number from the selection screen (hypothetical parameter name)
The structure of t_mcha should contain the fields that you select.
Then sort the table by date ascending, and keep only the first (lowest-date) row per material and batch:
SORT t_mcha BY matnr charg hsdat ASCENDING.
DELETE ADJACENT DUPLICATES FROM t_mcha COMPARING matnr charg.
    Hope this solves your problem.
    Thanks,
    Sowmya

  • Performance issues with pipelined table functions

I am testing pipelined table functions to be able to re-use the base_query function. Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something? The processor function is from "Improving Performance with Pipelined Table Functions" (http://www.oracle-developer.net/display.php?id=429).
Edit: The underlying query returns 500,000 rows in about 3 minutes, so there are no performance issues with the query itself.
    Many thanks in advance.
    CREATE OR REPLACE PACKAGE pipeline_example
    IS
       TYPE resultset_typ IS REF CURSOR;
       TYPE row_typ IS RECORD (colC VARCHAR2(200), colD VARCHAR2(200), colE VARCHAR2(200));
       TYPE table_typ IS TABLE OF row_typ;
       FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
          RETURN resultset_typ;
       c_default_limit   CONSTANT PLS_INTEGER := 100;  
       FUNCTION processor (
          p_source_data   IN resultset_typ,
          p_limit_size    IN PLS_INTEGER DEFAULT c_default_limit)
          RETURN table_typ
          PIPELINED
          PARALLEL_ENABLE(PARTITION p_source_data BY ANY);
       PROCEDURE with_pipeline (argA          IN     VARCHAR2,
                                argB          IN     VARCHAR2,
                                o_resultset      OUT resultset_typ);
       PROCEDURE no_pipeline (argA          IN     VARCHAR2,
                              argB          IN     VARCHAR2,
                              o_resultset      OUT resultset_typ);
    END pipeline_example;
    CREATE OR REPLACE PACKAGE BODY pipeline_example
    IS
       FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
          RETURN resultset_typ
       IS
          o_resultset   resultset_typ;
       BEGIN
          OPEN o_resultset FOR
             SELECT colC, colD, colE
               FROM some_table
              WHERE colA = ArgA AND colB = argB;
          RETURN o_resultset;
       END base_query;
       FUNCTION processor (
          p_source_data   IN resultset_typ,
          p_limit_size    IN PLS_INTEGER DEFAULT c_default_limit)
          RETURN table_typ
          PIPELINED
          PARALLEL_ENABLE(PARTITION p_source_data BY ANY)
       IS
          aa_source_data   table_typ;-- := table_typ ();
       BEGIN
          LOOP
             FETCH p_source_data
             BULK COLLECT INTO aa_source_data
             LIMIT p_limit_size;
             EXIT WHEN aa_source_data.COUNT = 0;
             /* Process the batch of (p_limit_size) records... */
             FOR i IN 1 .. aa_source_data.COUNT
             LOOP
                PIPE ROW (aa_source_data (i));
             END LOOP;
          END LOOP;
          CLOSE p_source_data;
          RETURN;
       END processor;
       PROCEDURE with_pipeline (argA          IN     VARCHAR2,
                                argB          IN     VARCHAR2,
                                o_resultset      OUT resultset_typ)
       IS
       BEGIN
          OPEN o_resultset FOR
             SELECT /*+ PARALLEL(t, 5) */ colC,
                    SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END) de,
                    SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END) ed,
                    SUM (CASE WHEN colD = colE AND colD != '0' THEN '1' END) de_one,
                    SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
               FROM TABLE (processor (base_query (argA, argB), 100)) t
           GROUP BY colC
           ORDER BY colC;
       END with_pipeline;
       PROCEDURE no_pipeline (argA          IN     VARCHAR2,
                              argB          IN     VARCHAR2,
                              o_resultset      OUT resultset_typ)
       IS
       BEGIN
          OPEN o_resultset FOR
               SELECT colC,
                      SUM (CASE WHEN colD > colE AND colE  != '0' THEN colD / ColE END)de,
                      SUM (CASE WHEN colE > colD AND colD  != '0' THEN colE / ColD END)ed,
                      SUM (CASE WHEN colD = colE AND colD  != '0' THEN 1 END) de_one,
                      SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
                 FROM (SELECT colC, colD, colE
                         FROM some_table
                        WHERE colA = ArgA AND colB = argB)
             GROUP BY colC
             ORDER BY colC;
       END no_pipeline;
    END pipeline_example;
ALTER PACKAGE pipeline_example COMPILE;
Edited by: Earthlink on Nov 14, 2010 9:47 AM
    Edited by: Earthlink on Nov 14, 2010 11:31 AM
    Edited by: Earthlink on Nov 14, 2010 11:32 AM
    Edited by: Earthlink on Nov 20, 2010 12:04 PM
    Edited by: Earthlink on Nov 20, 2010 12:54 PM

Earthlink wrote:
> Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something?
Well, we're missing a lot here.
    Like:
    - a database version
    - how did you test
    - what data do you have, how is it distributed, indexed
    and so on.
    If you want to find out what's going on then use a TRACE with wait events.
All necessary steps are explained in these threads:
    HOW TO: Post a SQL statement tuning request - template posting
    http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
    Another nice one is RUNSTATS:
    http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701
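For reference, one way to capture such a trace with wait events from SQL*Plus (a sketch; DBMS_MONITOR has been the standard interface since 10g):
alter session set tracefile_identifier = 'pipeline_test';
exec dbms_monitor.session_trace_enable(waits => true, binds => false)
-- run the with_pipeline and no_pipeline variants here
exec dbms_monitor.session_trace_disable
Then format the raw trace file with tkprof and compare the two runs side by side.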

  • Issue with Multiple Tables in Report

    Post Author: dwessell
    CA Forum: General
    Hi,
    I'm using Crystal Reports 2k8.
I'm doing a report with three tables: CQ_HEADER, SO_HEADER and SALESPERSON. Both the CQ_HEADER and the SO_HEADER tables link to the SALESPERSON table via a SPN_AUTO_KEY field.
However, I always receive duplicates in my result set, due to the joins made, and I don't receive results that are valid in one table but empty in the other (such that it only counts a CQ if there is an SO associated with it). Here's the query that's produced by CR:
      SELECT "CQ_HEADER"."CQ_NUMBER", "CQ_HEADER"."ENTRY_DATE", "CQ_HEADER"."TOTAL_PRICE", "SALESPERSON"."SALESPERSON_NAME", "SO_HEADER"."ENTRY_DATE", "SO_HEADER"."TOTAL_PRICE"
    FROM   "CQ_HEADER" "CQ_HEADER" INNER JOIN ("SO_HEADER" "SO_HEADER" INNER JOIN "SALESPERSON" "SALESPERSON" ON "SO_HEADER"."SPN_AUTO_KEY"="SALESPERSON"."SPN_AUTO_KEY") ON "CQ_HEADER"."SPN_AUTO_KEY"="SALESPERSON"."SPN_AUTO_KEY"
    WHERE  ("CQ_HEADER"."ENTRY_DATE">={ts '2007-12-01 00:00:00'} AND "CQ_HEADER"."ENTRY_DATE"<{ts '2007-12-18 00:00:00'}) AND ("SO_HEADER"."ENTRY_DATE">={ts '2007-12-01 00:00:00'} AND "SO_HEADER"."ENTRY_DATE"<{ts '2007-12-18 00:00:00'})
    ORDER BY "SALESPERSON"."SALESPERSON_NAME"
There is no link between the SO_HEADER and the CQ_HEADER. Can anyone make a suggestion as to how I could go about structuring this so that it doesn't return duplicate values?
    Thanks
    David     
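One way to restructure it (a sketch, not from the thread: aggregate each header table per salesperson first, then join the aggregates, so neither side multiplies the other's rows):
SELECT s."SALESPERSON_NAME", cq.cq_total, so.so_total
FROM   "SALESPERSON" s
       LEFT JOIN (SELECT "SPN_AUTO_KEY", SUM("TOTAL_PRICE") cq_total
                  FROM   "CQ_HEADER"
                  WHERE  "ENTRY_DATE" >= {ts '2007-12-01 00:00:00'}
                    AND  "ENTRY_DATE" <  {ts '2007-12-18 00:00:00'}
                  GROUP  BY "SPN_AUTO_KEY") cq
              ON cq."SPN_AUTO_KEY" = s."SPN_AUTO_KEY"
       LEFT JOIN (SELECT "SPN_AUTO_KEY", SUM("TOTAL_PRICE") so_total
                  FROM   "SO_HEADER"
                  WHERE  "ENTRY_DATE" >= {ts '2007-12-01 00:00:00'}
                    AND  "ENTRY_DATE" <  {ts '2007-12-18 00:00:00'}
                  GROUP  BY "SPN_AUTO_KEY") so
              ON so."SPN_AUTO_KEY" = s."SPN_AUTO_KEY"
ORDER  BY s."SALESPERSON_NAME";
Switch the LEFT JOINs to INNER JOINs if a salesperson should only appear when both a CQ and an SO exist in the period.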


  • Issue with Temp tables in SSIS 2012 with RetainSameConnection=true

    Hello,
We have a few packages written in 2008 that are being upgraded to 2012. Our packages mostly use temp tables during processing. During the initial migration, we faced an issue with handling temp tables in the OLE DB destination provider and found a solution for it under
"Usage of Temp tables in SSIS 2012".
Most of our packages execute fine now.
We came across a different issue recently. One of our packages, which merges 3 feeds into a temp table and then executes a stored procedure for processing, fails intermittently.
Below are the properties of SSIS and its components which you might be interested in:
* RetainSameConnection for the OLE DB connection manager set to True
* properties of the OLE DB destination:
AccessMode : SQL Command
CommandTimeOut : 0
SQLCommand : Select * from #tmp
* using SSIS 2012 and SQL Server Native Client 11 (Provider=SQLNCLI11.1)
* one of the feeds is 10MB
During investigation using Profiler, I found that though I use RetainSameConnection, more than one SPID is often used during the scope of the SSIS execution, and whenever this happens the package fails with the below error message:
An OLE DB record is available. Source: "Microsoft SQL Server Native Client 11.0" Hresult: 0x80040E14 Description: "Statement(s) could not be prepared.".
An OLE DB record is available. Source: "Microsoft SQL Server Native Client 11.0" Hresult: 0x80040E14 Description: "Invalid object name '#tmp'."
Now, why does SSIS use a different SPID during its execution when RetainSameConnection is set to True (note: I have only one OLE DB connection in that package)?
To simulate the issue, instead of the 10MB file I used a 500KB file, executed the package twice, and all went fine.
Is it because the 10MB file takes a long time to process, timing out that OLE DB destination and forcing SSIS to go for another connection? But remember, CommandTimeout is set to infinite (0) for that OLE DB destination.
Your response is much appreciated.

Hey,
I understand you set the RetainSameConnection property to true for all the OLE DB connections used in the package; if not, make sure it is set for all the connections, including the file connection as well.
Additionally, you can try setting the DelayValidation property to true for all the data flows and control flows in the package, and try running the package with the 10MB file.
    I hope this will fix the intermittent failure issue you are facing with SSIS.
    (Please mark solved if I've answered your question, vote for it as helpful to help other user's find a solution quicker)
    Thanks,
    Atul Gaikwad.

  • Rebate related issue with database table VKDFS & VBAK

    Hi everybody,
I am facing a problem with the tables VKDFS and VBAK.
In my program, the report has to display the details of the agreement numbers concerning the sales or billing documents; later on it has to create a credit memo for that particular customer.
At the very beginning, the program fetches all sales documents from VKDFS as per the selections, like the following:
      select        * from  vkdfs into table ivkdfs
             where  fktyp  in r_fktyp
             and    vkorg  in s_vkorg
             and    fkdat  in s_fkdat
             and    kunnr  in s_kunnr
             and    fkart  in s_fkart
             and    vbeln  in s_vbeln
             and    faksk  in s_faksk
             and    vtweg  in s_vtweg
             and    spart  in s_spart
             and    netwr  in s_netwr
             and    waerk  in s_waerk.
After this, for all the sales orders fetched here, it again fetches from the VBAK table as follows:
    SVBAK[] = IVKDFS[]
    select * from vbak into table ivbak
      for all entries in svbak
      where vbeln = svbak-vbeln
      and   knuma in s_knuma
      and   auart in s_auart
      and   submi in s_submi
      and  (vbak_wtab).
So, it filters from VBAK.
But the exact issue is that there is one sales order which is available in VBAK but not available in the VKDFS table.
So, my program fails to display the report for that agreement number.
As per my analysis, there are no entries in the VKDFS table for the sales orders in VBAK concerning agreement numbers.
VKDFS is the SD index: billing initiator table.
I want to know how this VKDFS table gets updated from the VBAK table, and, if possible, how to make this entry in that table for the values in VBAK, without affecting other tables.
Please let me know the solution if you have any.
It's an urgent, sev 1 ticket;
eagerly waiting for a solution or some information.
Thanks & Regards,
    J.


  • Strange issue with ADF table in chrome browser

I have an ADF table which should display 23 rows, but only 20 rows are visible in the Chrome browser; other browsers like IE and Firefox display the 23 rows correctly. I have used a default ADF table with drag & drop behaviour. All 23 rows export correctly to Excel with the export-to-Excel behaviour, and inspecting the page source also shows all the rows in Chrome, but the display in ADF is the only problem in the Chrome browser. We're having a production issue with this; any ideas are appreciated.
    Thanks,
    Surya

    Hi All,
Is this issue fixed yet? There are a couple of threads reporting this issue and the original thread has been archived. It is a real issue, and it remains an issue. The Chrome browser cuts off the last row of a table in the display. IE displays the row correctly.
I am working with JDev 12.1.2 and I am building an application using ADF tables. Without exception, on every page that has one, the last row of the table is cut off from display in a very ugly way and you cannot scroll down to display the full row.
I have tried wrapping the table in a Panel Collection - same result. I have tried setting the height of the table - same result. I have tried surrounding the table with a PanelGroupLayout component (layout set to scroll) - same result. I have even tried surrounding the table with a PanelHeader component, Type set to both default and Stretch - yes, you guessed it, same result! I've even put the table in the middle of a PanelStretchLayout component - but the last row is always cut off.
    This should be easy for you to reproduce, just drop a data control on a ADF page and select a table. When you view it in the Chrome browser and you will see what I'm talking about. I'm using Google Chrome version 31.0.1650.63 m.
    I have experimented with AFStretchWidth and AutoHeightRows (as suggested by previous threads), nothing seems to work.
    Here's another suggestion, if the forum would allow you to insert an image, I could actually show you what I'm talking about. Food for thought perhaps?
    Best regards,
    Nigel
    "Life's too short not to use ADF"

  • Performance issue with COEP table in ECC 6

Hi,
Any idea how to resolve a performance issue on the COEP table in ECC 6.0?
We are not using the COEP table right now; this table occupies 100 GB of the 900 GB in the PRD system.
Can I directly archive/delete the table?
    Regards
    Siva

    Hi Siva,
You cannot archive the COEP table alone. It should be archived along with the respective archive object. Just deleting the table is not at all a good idea.
To find out the appropriate archive objects contributing to the entries in COEP, you need to perform a CO table analysis using programs RARCCOA1 and RARCCOA2. For further information refer to SAP Note 138688.
    Hope this helps,
    Naveen

  • Performance issue with MSEG table

    Hi all,
I need to fetch materials (MATNR) based on the service order number (AUFNR) in the selection screen, but there is a performance issue with this. How can I overcome it?
    Regards ,
    Amit

    Hi,
There could be various reasons for a performance issue with MSEG:
1) Database statistics of the tables and indexes are not up to date; because of this, the wrong index is chosen during execution.
2) Improper indexes: there is no index with the fields mentioned in the WHERE clause of the statement. Because of this, the CBO would have chosen a wrong index and done a range scan.
3) An optimizer bug in Oracle.
4) The size of the table is very large; consider archiving.
Better to switch on an ST05 trace before you run these statements; it will give more detailed information on where exactly the time is being spent during execution.
    Hope this helps
    dileep

  • AP Tax Calculation issue with SINGLE TAX vs TAX GROUP

    Hi Gurus,
I need your help on the below; please advise!!
I have to calculate AP VAT tax on an AP invoice
(rounding = nearest, precision = 2, tax calculation = Include tax).
If the tax rate is 5%, then the tax amount is 0.4761, i.e. 48 cents after rounding; this is the case for a single tax calculation.
Here my requirement is that I need to calculate 2 taxes (TAX A and TAX B; rates are 5% and 5%).
E.g.:
Invoice base amount = 10 dollars
In case of a single tax: 5/105 * 10 = 0.4761 (this is 48 cents in apps with rounding nearest and precision 2),
tax mode = Include tax.
In case of a tax group: TAX A and TAX B = 5 + 5 = 10%; when I calculate this in apps it shows 45 cents and 45 cents as tax A and tax B.
Why is the tax calculation different with a single tax versus a tax group?
tax code     actual amount   tax amount             remaining amount
single tax   10              0.4761 -> 0.48         10 - 0.48 = 9.52
tax group    10              0.45 + 0.45 = 0.90     10 - 0.90 = 9.10
Please help!!!
    Thanks,
    Satish
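For what it's worth, the figures are consistent with the inclusive-tax formula using the combined group rate as the divisor (an inference from the numbers above, not confirmed in the thread):
single tax:  5 / 105 * 10.00 = 0.476   -> rounds to 0.48
tax group:   5 / 110 * 10.00 = 0.4545  -> rounds to 0.45 per component
             0.45 + 0.45 = 0.90 total tax; 10.00 - 0.90 = 9.10 remaining
In a group, each component is backed out of the amount using the combined 10%, which is why the two components total 90 cents rather than 2 x 48 cents.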

    Hi Vineeth,
This is Kathy from BSI Support. I wanted to make sure that you understood that the TF80 Like Reciprocal flag was made available in TF90 for testing purposes only. This was meant as a tool for customers to be able to compare their TF90 results to their TF80 output, to ensure a successful upgrade. This functionality, however, was never intended to be utilized going forward. There have been significant changes implemented in BSI TaxFactory 9.0 regarding multi-state withholding (also known as reciprocity). There is information available on our website that explains these changes. If you log onto our website, please look under the "What's New" section for an explanation of reciprocal functionality in BSI TaxFactory 9.0.
    If you have specific scenarios that you need help with, please contact us and we will be happy to assist you.
    Regards,
    BSI Support - Kathy
