Statistics on temporary table issue

We have a temporary table that sometimes causes a performance issue because the table statistics are not correct. A possible solution is to gather the object statistics just before the several SELECT statements that use the temporary table are executed (this is currently not done). I think this approach can still go wrong:
step 1.) Job 1 fills temporary table and gathers statistics on the table
step 2.) Job 1 executes first sql on temporary table
step 3.) Job 2 fills temporary table and gathers statistics on the table
step 4.) Job 2 executes first sql on temporary table using the new statistics
step 5.) Job 1 executes its second sql on the table - but now uses the new (= wrong) table statistics gathered by job 2
Job 1 runs for mandant (client) 1, job 2 for mandant 2, and so on. Some of the heap-organized tables are already partitioned by mandant.
How can we solve this problem? We are considering partitioning the temporary table by mandant as well. Are there other or better solutions?
(Oracle 10.2.0.4)

Hello,
If you don't have statistics on your temporary table, Oracle will use dynamic sampling instead.
So you may try to delete the statistics on this table and then lock them, as follows:
execute dbms_stats.delete_table_stats('schema','temporary_table');
execute dbms_stats.lock_table_stats('schema','temporary_table');
You should then check the performance, as dynamic sampling is also resource-consuming.
Hope this helps.
Best regards,
Jean-Valentin
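A minimal sketch of this approach (schema, table, and hint level are placeholders, not from the original thread): with the shared statistics deleted and locked, no job can publish its numbers for other sessions, and each query can rely on dynamic sampling at parse time.

```sql
-- One-time setup: remove the shared statistics and lock them so that
-- DBMS_STATS.GATHER_TABLE_STATS from any job fails instead of
-- overwriting what every other session sees.
BEGIN
  DBMS_STATS.DELETE_TABLE_STATS(ownname => 'SCOTT', tabname => 'MY_GTT');
  DBMS_STATS.LOCK_TABLE_STATS (ownname => 'SCOTT', tabname => 'MY_GTT');
END;
/

-- Each job's queries can then request dynamic sampling explicitly
-- (level 4 is an assumption; tune it per workload):
SELECT /*+ dynamic_sampling(t 4) */ *
FROM   my_gtt t
WHERE  mandant = :mandant;
```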

Similar Messages

  • Temporary Table issue

    Hi All
    I am using ODI 10.1.3.5
I have a requirement that the temporary tables created during interface execution should not be dropped.
Instead they should be truncated and kept for the next execution.
I have set the option DELETE_TEMP_OBJECT to No in the flow tab of the interface.
But it is keeping the objects as they are (i.e. with data), which will again take up space.
Is there any other option in ODI, or do I need to update the KMs to implement the above?
Any feedback will be appreciated.
    regards
    Gourisankar

    Hi,
Add a step (or steps) to the KM to delete/truncate the temporary table(s). You may want to add an option to control this, similar to the drop-temp-objects option, so you can toggle either behaviour via the interface.
    Thanks
    Bos

  • Direct Path Loading Issues with Global Temporary Tables - OCI & OCILib

    I am writing some code to import data into a warehouse from a CPU grid which computes risk data. Due to the fact a computing grid is used there will be many clients which can load the data concurrently and at any point in time.
    Currently the import uses Binding in OCCI and chunking with a prepared statement to import the data into a global temporary table in a staging area after which a stored procedure is called within the same session which will process the data and load the data into a star schema.
    The GTT has the advantage that if any clients have issues no dirty data will be left and each client only sees their own instance of the data.
I have been looking at using direct path loading to increase the performance of the load and have written some OCI code to perform the same task. I have managed to import the data into a regular heap-based table using the OCI direct path APIs. However, when I try to use the same code to import into a Global Temporary Table I get an OCI error (ORA-00600: internal error code, arguments: [6979], [16], [1], [1318528], [], [], [], [], [], [], [], []).
I get the error when the function OCIDirPathPrepare is executed. The same issue occurs in both OCI and OCILib.
Is it not possible to use direct path loading against a Global Temporary Table? You can use the /*+ APPEND */ hint to load global temporary tables this way from tools like SQL Developer / Toad, which surely tells the SQL engine to use direct path?
Looking at the view USER_OBJECTS I can see that for a Global Temporary Table the DATA_OBJECT_ID is null. Does this mean that it is impossible to use a direct path load into Global Temporary Tables?
Any ideas / suggestions would be really appreciated. If this means redesigning the application, then I would appreciate suggestions which would allow many clients to write quickly in parallel. If that means creating a new partition in a heap table for each writer and direct path loading into it, then so be it.
    Thanks
    H
    Edited by: 813640 on 19-Nov-2010 11:08

    Replying to my own message in case anyone else is interested.
    I have now managed to successfully load data using direct path into a global temporary table with OCI. There appears to be no reason why this approach will not work.
    I loaded data into the temporary table and then issued a select count(*) on the table from within the session and from a new session. The results were as expected.
The reason for the ORA-00600 error was that I had enabled table-level parallel loading,
i.e.
OCIAttrSet((dvoid *) context, (ub4) OCI_HTYPE_DIRPATH_CTX, (ub1 *) 1, (ub4) 0, (ub4) OCI_ATTR_DIRPATH_PARALLEL, errhp)
When loading a Global Temporary Table, the OCI_ATTR_DIRPATH_PARALLEL attribute needs to be zero.
This makes sense, since the temp table does not have any partitions, so it would not be possible to write in parallel to multiple partitions.
    Edited by: 813640 on 22-Nov-2010 08:42

  • Scalability issue with global temporary table.

    Hi All,
Does CREATE GLOBAL TEMPORARY TABLE lock the data dictionary the way CREATE TABLE does? If yes, wouldn't that be a scalability issue in a multi-user environment?
    Thanks and Regards,
    Rudra

    Billy  Verreynne  wrote:
    acadet wrote:
am I correct in interpreting your response that we should be using GTTs in favour of bulk operations and collections and in-memory operations?
No. I said collections cannot scale. Because collections reside in expensive PGA memory, you cannot stuff large data volumes into them. Thus they do not make an ideal storage bin for temporary data (e.g. data loaded from a file or a web service). GTTs, on the other hand, do not suffer from the same restrictions, can be indexed, offer vastly better scalability, and so on.
Multiple passes are often needed over such a data structure, or filtering to find specific data. As a GTT is SQL-native, it offers a lot more flexibility and performance in this regard.
And this makes sense - for where do we put our persistent data? Also in tables, but ones of a persistent and not temporary kind like a GTT.
Collections are pretty useful - but limited in size and capability.
Rudra states:
I want to pull out a few metrics from different tables and process it.
If this can't be achieved in a SQL statement, then (unless Rudra is a master of understatement) I would see GTTs as a waste of IO and programming effort.
I agree. My comments however were about choices for a temporary data storage bin in PL/SQL.
I agree with your general comments regarding temporary storage bins in Oracle, but to say that collections don't scale puts too narrow a definition on scaling. True, collections can be resource-intensive in terms of memory and CPU requirements, but their persistence will generally be much shorter than that of other types of temporary storage. Given the right characteristics collections will scale, and given the wrong characteristics GTTs won't scale.
    As you say it is all about choice. Getting back to the theme of this thread though, the original poster should be made aware that well designed and well coded applications are most likely to scale. Creating tables on the fly is generally considered bad practice and letting the database do what it does best, join tables in queries at the SQL level is considered good practice. The rest lies somewhere in between and knowing when to do which is why we get paid the big bucks (not). ;-)
    Regards
    Andre
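The trade-off discussed above can be sketched as follows (table and column names are illustrative, not from the thread): a large intermediate result goes into a pre-created GTT, which lives in temp space and can be indexed, rather than into a PGA-resident collection.

```sql
-- Created once as a schema object, not on the fly:
CREATE GLOBAL TEMPORARY TABLE staging_gtt (
  id      NUMBER,
  payload VARCHAR2(128)
) ON COMMIT DELETE ROWS;

-- GTTs can be indexed; a collection cannot.
CREATE INDEX staging_gtt_ix ON staging_gtt (id);

-- Load the temporary data with set-based SQL, then make several
-- passes over it; a collection would hold all of this in PGA memory.
INSERT INTO staging_gtt (id, payload)
SELECT object_id, object_name FROM all_objects;

SELECT COUNT(*) FROM staging_gtt WHERE id > 1000;
```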

  • TABLESPACE issue with temporary table

    I am getting the error ORA-00922: missing or invalid option when compiling the following code. What seems to be causing the trouble is the line when I specified "TABLESPACE MISC". For some reason, CREATE GLOBAL TEMPORARY TABLE does not allow me to choose the tablespace "MISC" for the table. What did I do wrong here?
    My code
CREATE OR REPLACE PROCEDURE Nmh_Sp_Chf_Patients(
r_to_report      IN OUT      Nmh_Pkg_Chf_Patients.TYP_PKG_CHF_PATIENTS)
AS
    e_table_not_exist EXCEPTION;
    PRAGMA EXCEPTION_INIT (e_table_not_exist, -942);
    dACTIVE_CD NUMBER := Nmh_Get_Code_Value_By('DISPLAY_KEY', 48, 'ACTIVE');
    dENCNTRACTIVE_CD NUMBER := Nmh_Get_Code_Value_By('DISPLAY_KEY', 261, 'ACTIVE');
    dATTENDDOCCd NUMBER := Nmh_Get_Code_Value_By('DISPLAY_KEY', 333, 'ATTENDINGPHYSPROV');
    dCHFCd NUMBER := Nmh_Get_Code_Value_By('DISPLAY_KEY', 200, 'HEARTFAILUREHANDOUTS');
    dORDERCOMPLETECd NUMBER := Nmh_Get_Code_Value_By('DISPLAY', 6004, 'Order Complete');
    dORDEREDCd NUMBER := Nmh_Get_Code_Value_By('DISPLAY', 6004, 'Ordered');
/* Populating active inpatients */
/* NOTE: This approach IS proven TO be more efficient than */
/* joining the ENCNTR_DOMAIN TABLE */
    /* Define the cursor variable type */
    TYPE t_ActiveInpatient IS REF CURSOR;
    /* and the variable itself. */
    r_ActiveInpatient t_ActiveInpatient;
    /* Variables to hold the output of r_ActiveInpatient. */
    v_patient_full_name VARCHAR2(100);
    v_encntr_id NUMBER;
    v_person_id NUMBER;
    v_mrn VARCHAR2(200);
    v_fin VARCHAR2(200);
    v_reg_dt_tm DATE;
    v_nurse_unit VARCHAR2(40);
    v_med_service VARCHAR2(40);
    v_attending_physician VARCHAR2(100);
    v_min_updt_cnt ENCNTR_PRSNL_RELTN.updt_cnt%TYPE;
    v_tmp_sql VARCHAR2(2000);
    v_tmp_char VARCHAR2(11);
    BEGIN
    BEGIN
         EXECUTE IMMEDIATE 'DROP TABLE NMH_SP_CHF_PATIENTS_1';
    EXCEPTION
         WHEN e_table_not_exist THEN NULL;
         END;     
    EXECUTE IMMEDIATE 'CREATE GLOBAL TEMPORARY TABLE NMH_SP_CHF_PATIENTS_1'
    || '(patient_full_name VARCHAR2(100), encntr_id NUMBER,'
    || ' person_id NUMBER, mrn VARCHAR2(200), fin VARCHAR2(200),'
    || ' reg_dt_tm DATE, nurse_unit VARCHAR2(40),'
    || ' med_service VARCHAR2(40), attending_physician VARCHAR2(100))'
    || ' TABLESPACE MISC ' -- If I commented out this line, it would have worked fine.
    || ' ON COMMIT PRESERVE ROWS';
    OPEN r_ActiveInpatient FOR
    SELECT nai.patient_full_name,
    nai.encntr_id,
    nai.person_id,
    nai.mrn,
    nai.fin,
    nai.reg_dt_tm,
    Nmh_Get_Code_Display(nai.loc_nurse_unit_cd) NURSE_UNIT,
    Nmh_Get_Code_Display(nai.med_service_cd) med_service
    FROM NMH_ACTIVE_INPATIENTS nai,
              ORDERS o
    WHERE o.encntr_id = nai.encntr_id
    AND o.catalog_cd = dCHFCd
    AND (o.order_status_cd = dORDERCOMPLETECd
    OR o.order_status_cd = dORDEREDCd)
    AND o.template_order_id = 0;
    LOOP
    FETCH r_ActiveInpatient INTO
    v_patient_full_name,
    v_encntr_id,
    v_person_id,
    v_mrn,
    v_fin,
    v_reg_dt_tm,
    v_nurse_unit,
    v_med_service;
    EXIT WHEN r_ActiveInpatient%NOTFOUND;
    BEGIN
    SELECT MIN(epr.updt_cnt)
    INTO v_min_updt_cnt
    FROM ENCNTR_PRSNL_RELTN epr
         WHERE epr.encntr_id = v_encntr_id
              AND epr.encntr_prsnl_r_cd = dATTENDDOCCd;
    EXCEPTION
              WHEN NO_DATA_FOUND THEN NULL;
              END;     
    BEGIN
         SELECT pr.name_full_formatted
         INTO v_attending_physician
         FROM ENCNTR_PRSNL_RELTN epr,
                   PRSNL pr
         WHERE epr.encntr_id = v_encntr_id
                   AND epr.encntr_prsnl_r_cd = dATTENDDOCCd
         AND epr.updt_cnt = v_min_updt_cnt
         AND pr.person_id = epr.prsnl_person_id;
    EXCEPTION
              WHEN NO_DATA_FOUND THEN NULL;
              END;     
                   v_tmp_sql := 'INSERT INTO NMH_SP_CHF_PATIENTS_1 ';
                   v_tmp_sql := v_tmp_sql || '(patient_full_name, encntr_id, person_id, ';
                   v_tmp_sql := v_tmp_sql || 'mrn, fin, reg_dt_tm, nurse_unit, ';
                   v_tmp_sql := v_tmp_sql || 'med_service, attending_physician) ';
                   v_tmp_sql := v_tmp_sql || 'VALUES (:1,:2,:3,:4,:5,:6,:7,:8,:9)';
    EXECUTE IMMEDIATE v_tmp_sql
                   USING               
                   v_patient_full_name, v_encntr_id, v_person_id, v_mrn,
                   v_fin, v_reg_dt_tm, v_nurse_unit,
                   v_med_service, v_attending_physician;
    END LOOP;
    v_tmp_sql := 'SELECT * FROM NMH_SP_CHF_PATIENTS_1 t';
    OPEN r_to_report FOR
    v_tmp_sql;
    END;
    /

In looking through your code a bit more, it seems that you have a fundamental misunderstanding of how temporary tables work in Oracle (if you have a SQL Server background, Oracle temp tables and SQL Server temp tables are quite different).
The definition of a temporary table is visible to every session, so you do not want to create a temporary table dynamically. You create the temporary table once, just like any other table. The data inserted into a temporary table is local to the session, so multiple sessions can insert data and each will see only its own rows. By creating the temporary table outside of the stored procedure, you can avoid all the dynamic SQL you are currently doing, which will make the code a lot clearer, and you also avoid the implicit commit of a DDL operation inside the procedure.
    As to the error you're getting, is there an ORA-xxxxx error message being returned? If your front end is hiding that error, try running the procedure from SQL*Plus and post the error. Of course, first try moving the table declaration outside the procedure.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
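Justin's point can be sketched like this (column list taken from the original code): create the GTT once, outside the procedure, and without a TABLESPACE clause, since a GTT's rows always go to the session's temporary tablespace (which is why TABLESPACE MISC raised ORA-00922).

```sql
-- Run once as a schema object, not inside the procedure:
CREATE GLOBAL TEMPORARY TABLE nmh_sp_chf_patients_1 (
  patient_full_name   VARCHAR2(100),
  encntr_id           NUMBER,
  person_id           NUMBER,
  mrn                 VARCHAR2(200),
  fin                 VARCHAR2(200),
  reg_dt_tm           DATE,
  nurse_unit          VARCHAR2(40),
  med_service         VARCHAR2(40),
  attending_physician VARCHAR2(100)
) ON COMMIT PRESERVE ROWS;  -- no TABLESPACE clause: rows live in temp space

-- The procedure body can then use plain static SQL instead of
-- EXECUTE IMMEDIATE, e.g.:
-- INSERT INTO nmh_sp_chf_patients_1 (patient_full_name, ...) VALUES (...);
```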

  • Performance issue with temporary table

    Hello oracle community,
    Oracle 11.1
I have a problem with a global temporary table (IMPO.REPCUSTOMERSLUCK24). I insert about 600,000 records into the table, run some UPDATE statements on it, and at the end a MERGE statement to fill another table. I think the problem is that the optimizer doesn't know how many records are in the temp table (cardinality 1), but I cannot use DBMS_STATS.GATHER_TABLE_STATS to analyze the temp table (I would lose the records if I did). Maybe I could analyze it with the "preserve on commit" option, but I would like to avoid that. Here is the plan:
    UPDATE STATEMENT ALL_ROWSCost: 1 Bytes: 1.171 Cardinality: 1                                              
         15 UPDATE IMPO.REPCUSTOMERSLUCK24                                         
              14 FILTER                                    
                   2 TABLE ACCESS BY INDEX ROWID TABLE (TEMP) IMPO.REPCUSTOMERSLUCK24 Cost: 1 Bytes: 1.171 Cardinality: 1                               
                        1 INDEX RANGE SCAN INDEX IMPO.FK_1883_REPCUSTOMERSLUCK24 Cost: 1 Cardinality: 1                          
                   13 FILTER                               
                        12 SORT GROUP BY NOSORT Cost: 0 Bytes: 2.212 Cardinality: 1                          
                             11 NESTED LOOPS                     
                                  9 NESTED LOOPS Cost: 0 Bytes: 2.212 Cardinality: 1                
                                       7 NESTED LOOPS Cost: 0 Bytes: 1.685 Cardinality: 1           
                                            4 TABLE ACCESS BY INDEX ROWID TABLE (TEMP) IMPO.REPCONTRACTSLUCK24 Cost: 0 Bytes: 1.158 Cardinality: 1      
                                                 3 INDEX FULL SCAN INDEX IMPO.FK_1875_REPCONTRACTSLUCK24 Cost: 0 Cardinality: 1
                                            6 TABLE ACCESS BY INDEX ROWID TABLE CRM2.MEDIACODE Cost: 0 Bytes: 527 Cardinality: 1      
                                                 5 INDEX UNIQUE SCAN INDEX (UNIQUE) CRM2.AK_1970_MEDIACODE Cost: 0 Cardinality: 1
                                       8 INDEX UNIQUE SCAN INDEX (UNIQUE) CRM2.PK_1955_PARTNER Cost: 0 Cardinality: 1           
                                  10 TABLE ACCESS BY INDEX ROWID TABLE CRM2.PARTNER Cost: 0 Bytes: 527 Cardinality: 1                
    any suggestions to my problem ?
    Ikrischer

hi,
dynamic sampling reads only a part of the table to make an estimation (generally counting the number of rows, or computing an average if the sample is large enough for the result to be reliable, etc.).
So in your case you could evaluate the number of rows like this (the explain plans show that the estimated cost is proportional to the size of the sample read, expressed either in number of rows or in blocks).
    SQL*Plus: Release 10.2.0.2.0 - Production on Thu Jun 17 15:32:43 2010
    Copyright (c) 1982, 2005, Oracle.  All Rights Reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    With the Partitioning, Oracle Label Security, OLAP and Data Mining options
    SQL> CREATE GLOBAL TEMPORARY TABLE XTEST
      2  (
      3    NUM1  NUMBER                                  NOT NULL
      4  )
      5  ON COMMIT PRESERVE ROWS
      6  NOCACHE
      7  /
    Table created.
    SQL> INSERT INTO xtest
      2     SELECT     ROWNUM
      3     FROM       DUAL
      4     CONNECT BY ROWNUM <= 100000;
    100000 rows created.
    SQL> commit;
    Commit complete.
    SQL> EXEC dbms_stats.gather_table_stats(ownname=>user,tabname=>'XTEST');
    PL/SQL procedure successfully completed.
    SQL> EXPLAIN PLAN SET STATEMENT_ID = 'st1' FOR SELECT COUNT(*)*10 FROM xtest SAMPLE(10);
    Explained.
    SQL> EXPLAIN PLAN SET STATEMENT_ID = 'st2' FOR SELECT COUNT(*)*1.1 FROM xtest SAMPLE(90);
    Explained.
    SQL> set linesize 120;
    SQL> SELECT PLAN_TABLE_OUTPUT FROM   TABLE(DBMS_XPLAN.DISPLAY(NULL,'st1','TYPICAL'));
    PLAN_TABLE_OUTPUT
    Plan hash value: 2221487120
    | Id  | Operation            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT     |       |     1 |     4 |    31  (26)| 00:00:01 |
    |   1 |  SORT AGGREGATE      |       |     1 |     4 |            |          |
    |   2 |   TABLE ACCESS SAMPLE| XTEST | 10077 | 40308 |    31  (26)| 00:00:01 |
    9 rows selected.
    SQL> SELECT PLAN_TABLE_OUTPUT FROM   TABLE(DBMS_XPLAN.DISPLAY(NULL,'st2','TYPICAL'));
    PLAN_TABLE_OUTPUT
    Plan hash value: 2221487120
    | Id  | Operation            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT     |       |     1 |     4 |    32  (29)| 00:00:01 |
    |   1 |  SORT AGGREGATE      |       |     1 |     4 |            |          |
    |   2 |   TABLE ACCESS SAMPLE| XTEST | 90693 |   354K|    32  (29)| 00:00:01 |
    9 rows selected.
SQL>
Note the difference in rows/bytes between the two samples, but be careful, because the explain plan only gives you an estimation ...
REM: If you sample by blocks, you'll get less 'IO' (physical or not): select count(*)*1.5 from mytable sample block (50) costs less than select count(*)*1.5 from mytable sample (50) ...
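An alternative to sampling, if the row count after the load is roughly known, is to set the statistics manually; the numbers below are illustrative for the ~600,000-row load described in the question.

```sql
-- Run once at setup time. Before 12c a GTT's statistics are shared by
-- all sessions, so from then on the optimizer assumes ~600,000 rows
-- instead of cardinality 1, with no need to gather statistics after
-- every load.
BEGIN
  DBMS_STATS.SET_TABLE_STATS(
    ownname => 'IMPO',
    tabname => 'REPCUSTOMERSLUCK24',
    numrows => 600000
  );
END;
/
```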

  • Weird issue: Partial data inserted when reading from Global temporary table

I have a complex sql query that fetches 88k records. This query uses a global temporary table which is a replica of one of our permanent tables. When I do a Create table ... select ... using this query, it inserts fewer records (66k or fewer). But when I point the query at the permanent table, it inserts all 88k records.
    1. I tried running the select query separately using temp and perm table. Both retrieves 88k records.
    2. From debugging I found that this problem occurred when we were trying to perform a left outer join on an inline view.
    However this problem got resolved when I used the /*+ FIRST_ROWS */ hint.
    From my limited oracle knowledge I assume that it is the problem with the query and how it is processed in the memory.
    Can someone clarify what is happening behind the scenes and if there is a better solution?
    Thanks

    user3437160 wrote:
    I have a complex sql query that fetches 88k records. This query uses a global temporary table which is the replica of one of our permanent tables. When I do Create table..select... using this query it inserts only fewer records (66k or lesser). But when I make the query point to the permanent table it inserts all 88k records.
    1. I tried running the select query separately using temp and perm table. Both retrieves 88k records.
    2. From debugging I found that this problem occurred when we were trying to perform a left outer join on an inline view.
    However this problem got resolved when I used the /*+ FIRST_ROWS */ hint.
    From my limited oracle knowledge I assume that it is the problem with the query and how it is processed in the memory.
    Can someone clarify what is happening behind the scenes and if there is a better solution?
Thanks
Might the specifics be OS & Oracle version dependent?
    How to ask question
    SQL and PL/SQL FAQ

  • Performance issue with Oracle Global Temporary table

    Hi
    Oracle version : 10.2.0.3.0 - Production
We have an application in Java / Oracle. User requests come in as XML; an Oracle parser parses each one and inserts it into global temporary tables, and then a business stored procedure picks the data from these GTTs and does the required processing.
At the end, the required response data is inserted into response GTTs, from which the response XML is generated.
Question: Does the use of global temporary tables in Oracle degrade performance? We have a large number of GTTs in our application, approx. 500-600 such tables.
    Regards,
    Vikas Kumar

    Hi All,
    Here is architecture of my application:
The Java application creates XML from the screen values and then inserts that XML
into a framework (separate DB schema) table. Then Java calls a stored procedure from the same framework DB, and in the SP we have the following steps.
1. It fetches the XML from the XML type table and inserts it into a screen-specific XMLTYPE table in the framework DB schema. This table has a trigger which parses the XML and then inserts the XML values into GTTs which are created in separate product schemas.
2. It calls the product SP, and in the product SP we have the business logic. The product SP
does the execution and then inserts the response into a response GTT.
3. The response XML is created by using an XML generation function and the response GTT.
I hope you will understand my architecture this time; now let me know if GTTs are good in this scenario or not. Also please note that I need the data in the GTTs only during execution and not after that; I don't want to do the explicit deletes which I would have to do if I were using normal tables.
    Regards,
    Vikas Kumar

  • Is there a way to create "temporary" tables in SAP database?

    Hello,
    Is there a way to create temporary tables in ABAP?
    Here is our scenario:
    1. Invoke a custom RFC that creates a temporary table and returns the name of the table.
    2. Invoke another custom RFC and pass this table name as parameter. This RFC internally does some INNER JOIN with the temporary table.
    3. Invoke the third RFC to delete the temporary table.
    Note that the name of the table cannot be static. We have many users using our application simultaneously and connecting to the SAP server.
    I would appreciate it if you could point me in the right direction.
    Thank you in advance for your help.
    Peter

    I just ran into a similar issue.  While only calling the select statement 2 times, each time had so many entries in the 'for all entries' list, that the compiler converted this into about 700 calls to the select.  Now since the select joined three real tables on the database, the trace shows this one select as being the slowest item in this application.
    I think that happened because 'for all entries' gets converted to an 'IN' clause, and then the total number of characters in any SQL statement has an upper limit.   So the compiler must make the select statement over and over until it covers all entries in the 'for all entries' list.  Is that correct?
    Since every database I ever saw has the concept of db temporary tables, I have used db temp tables many times for this sort of thing.
The ABAP compiler could determine that more than one IN statement will be needed, and then use an alternative: write all the FOR ALL ENTRIES rows to a db temp table, join on the db temp table, then drop the db temp table. Since the compiler would do this itself, no application code would need to change to get the speed boost.

  • Transaction with Global Temporary Table

    Problem:
A transaction starts with a few DML statements; in the middle we call a Java method which creates a dynamic global temporary table and then proceeds with a few more DML statements. If one of the statements fails, the exception clause tries to roll back, but only the statements after the DDL (CREATE GLOBAL TEMPORARY TABLE) are rolled back; the statements before the DDL have already been committed.
We cannot pre-create the global temporary table, since we do not know the number of temp tables to be pre-created.
    How we can resolve this issue? The same concept works for SQL server.
    Example of our issue:
    --drop table table1 purge;
    --drop table t_id purge;
    Create table table1 (col1 number, col2 varchar2(20));
    Insert into table1 values (1, 'Test1');
    Create global temporary table t_id (id number);
    Insert into table1 values (2, 'Test2');
    Rollback;
    After the rollback you can see only the 'Test1' record.

    > We cannot pre-create the global temporary table, since we do not know the number of temp table to be pre-created.
I don't see how one procedure could need to create an unknown number of temporary tables. Do they all have different (and unknown) column lists? Couldn't you combine them into a single table with a key to distinguish between the different sets of rows?
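The suggestion above can be sketched as follows (the set_id column and values are illustrative): one pre-created GTT replaces the N dynamically created ones, so the transaction contains no DDL and therefore no implicit commit.

```sql
-- Pre-created once; a set_id column distinguishes the logical tables
-- that were previously created dynamically.
CREATE GLOBAL TEMPORARY TABLE t_id (
  set_id NUMBER,
  id     NUMBER
) ON COMMIT DELETE ROWS;

-- Inside the transaction: pure DML, so no implicit commit,
-- and a ROLLBACK undoes everything, including these rows.
INSERT INTO t_id (set_id, id) VALUES (1, 42);
INSERT INTO t_id (set_id, id) VALUES (2, 99);
ROLLBACK;  -- the whole transaction is undone, as in the SQL Server behaviour
```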

  • Global temporary table in PL/SQL called from APEX page

    I have a global temporary table in a PL/SQL procedure that is called from an APEX page.
The global temp table is populated with data as the procedure runs, and then at the end of the procedure I call create_collection_from_query_b to populate a collection with the data from the temp table. (I do this because it is much faster than creating the collection and doing an add_member for each row.)
The problem is that there are no commits in my procedure, yet I cannot get the bulk insert to work unless I define the temp table as ON COMMIT PRESERVE ROWS.
Can anyone shed any light on this issue?
    Thanks,
    Andrew

alamantia wrote:
My PL/SQL procedure is called from an after-submit page process. Does that imply that there is a commit happening after that process is successful?
Ultimately, yes.
If the process calls the PL/SQL procedure and the temp table is in the procedure, wouldn't the commit fire after all the PL/SQL code is complete, which would be after the bulk insert from the temp table to my collection?
Yes, but at any point in the procedure containing code like
:APEX_ITEM := ...
or
select ... into :APEX_ITEM from ...
or
my_procedure(p_in => ..., p_out => :APEX_ITEM, ...);
or
apex_util.set_session_state(...);
APEX will commit whilst maintaining session state.
If you don't have any of these events in the procedure, then test to see if the commit is occurring in <tt>apex_collection.create_collection_from_query_b</tt> prior to creation of the collection.

  • Unable to read data from Temporary table

    Hello
I am calling a stored procedure in Java which populates data into a temporary table. This temporary table is reset for each session. The issue is that the procedure executes successfully, but when I run a select query on the temp table, it shows 0 rows.
    When i execute the same procedure from TOAD or MSSQL, the temp table is populated successfully.
    Any Suggestion on what is the possible error
    tnx
    -S-

    Temp table exists for duration of session.
    Make sure you are using the same session.

  • Doubt with Global Temporary table

    hi,
I have created a global temporary table with the ON COMMIT DELETE ROWS option. In my function, in a loop, I insert values into this table; after the loop closes I select some other values from the DB, and at the end I return a ref cursor which selects values from the temporary table I created.
The thing is, I am not getting any values in the cursor.
Later I created the table with the ON COMMIT PRESERVE ROWS option, and in this case the cursor returns values.
Can anyone explain this behaviour? As far as I know, global temporary table values are session-specific, so why am I not getting the values in the first case, when I used ON COMMIT DELETE ROWS (same session)?
    Thanks
    Piyush

    Ok, here's a simple example, like we'd like to see from you not working....
    First create a GTT with ON COMMIT DELETE ROWS...
    SQL> ed
    Wrote file afiedt.buf
      1* create global temporary table mytable (x number) on commit delete rows
    SQL> /
Table created.
Now a simple function that populates the GTT and returns a ref cursor to the data without doing any commits (hence the data should be there!)
    SQL> ed
    Wrote file afiedt.buf
      1  create or replace function pop_table return sys_refcursor is
      2    v_rc sys_refcursor;
      3  begin
      4    insert into mytable
      5    select rownum from dual connect by rownum <= 10;
      6    OPEN v_rc FOR SELECT x FROM mytable;
      7    RETURN v_rc;
      8* end;
    SQL> /
Function created.
So now we call the function and get a reference to our ref cursor...
    SQL> var v_a refcursor;
    SQL> exec :v_a := pop_table();
PL/SQL procedure successfully completed.
So, in principle, because no commits have been issued, the ref cursor should return data...
    SQL> print v_a;
             X
             1
             2
             3
             4
             5
             6
             7
             8
             9
            10
10 rows selected.
... which it does.
Now, what happens if we do that again...
    SQL> commit;
    Commit complete.
    SQL> exec :v_a := pop_table();
PL/SQL procedure successfully completed.
... but this time we commit before retrieving the data...
    SQL> commit;
    Commit complete.
    SQL> print v_a;
    ERROR:
    ORA-00600: internal error code, arguments: [kcbz_check_objd_typ_1], [0], [0], [1], [], [], [], []
    no rows selected
SQL>
Oracle has (correctly) lost its reference to the data because of the commit.
    So show us what yours is doing.

  • Does Global Temporary Table help in performance?

I have a large database table that is growing daily. My application has a page for the past day's data and another for a chosen period of time. Since I'm looking at a very large amount of data for each page (~100k rows) and building charts based on time, I have performance issues. I tried collections for each of these and found that they make everything slower, I think because the collections are large and not indexed.
    Since I don't need the data to be maintained for the session and in fact for each time that I submit a page I need to get the updated data at least for the past day page, I wonder if Global Temporary Table is a good solution for me.
    The only reason I want to store the data in a table is to avoid running similar queries for different charts and reports. Is this a valid reason at all?
    If this is a good solution, can someone give me a hint on how to do this?
    Any help is appreciated.

It all depends on how efficient your query is. You can have a billion-row table and still get a fraction-of-a-second response if the data is indexed and the number of data blocks to be visited to retrieve the data is small. It's all about reducing the number of I/Os needed to find and retrieve your data with the query. Many aspects of the data, stats, table/index structure etc. can influence the efficiency of your query.
The SQL forum would be a better place to get into query tuning, but if the test below is fast, you can probably focus elsewhere for now. It resolves your full resultset and then just counts the result (to avoid sending 100k rows back to the client); we are trying to get an idea of how long it takes to resolve your resultset. Using literals rather than item names in your SQL should be fine for this test. Avoid using V() around item names in your SQL.
    select count(*) from ( <your-query-goes-here> );

  • Query hint to force SQL to use a temporary table in a CTE query?

    Hi,
    is it possible to tell SQL Server to create a temporary table by itself when I'm using a CTE in my query?
    I have a query starting with a CTE where I group by my record, then another recursive CTE use the first CTE and finally my select statement like:
with cte as (select a,b,c,row_number() ...  from mytable group by a,b,c)
, cte2 as (select .... from cte A where rownum = 1
union all select ... from cte B inner join cte2 C on ......)
select * from cte2
This query is very, very slow, but if I store the first CTE into a temporary table and have cte2 consume the temp table rather than the CTE, the query is very fast:
creating the temp table took 10 sec and the select took 20 sec,
while the initial query didn't return anything after 2 minutes!
So what can I try in order to have the query run in less than 30 sec without creating the temp table first?
Is there a query hint which can be used to tell SQL Server to convert the CTE into a temp table?
As I have a lot of queries to manage, I want to simplify my model without relying on temporary tables every time I hit this issue...
    thanks.

    What is your SQL Server version?
    There is no hint to materialize results of cte into a temp table, so the solution you tried is the best you can have.
    I think the idea of materializing CTE into a temp table was already proposed on Connect. Try searching for this and vote.
    For every expert, there is an equal and opposite expert. - Becker's Law
    My blog
    My TechNet articles
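Since there is no hint to materialize a CTE, the manual rewrite looks roughly like this (it mirrors the pseudo-query in the question; the ORDER BY inside ROW_NUMBER and the index-free temp table are assumptions):

```sql
-- Materialize the first CTE once into a local temp table
-- (column names follow the pseudo-query above):
SELECT a, b, c,
       ROW_NUMBER() OVER (ORDER BY a) AS rn   -- ordering column is an assumption
INTO   #cte1
FROM   mytable
GROUP  BY a, b, c;

-- The recursive part then reads the temp table instead of
-- re-evaluating the grouped CTE on every recursion:
WITH cte2 AS (
    SELECT ... FROM #cte1 WHERE rn = 1
    UNION ALL
    SELECT ... FROM #cte1 B INNER JOIN cte2 C ON ...
)
SELECT * FROM cte2;

DROP TABLE #cte1;
```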
