Will Partitioning improve performance on Global Temporary Table

Dear Guru,
In one complicated module I am using a Global Temporary Table (GTT) for intermediate processing, i.e. it stores the required data, but the row count grows to 1,000,000 - 2,000,000 rows.
Can partitioning or indexing a Global Temporary Table improve the performance?
Thanks in advance,
Sanjeev

Sounds like an odd use of a GTT to me, but I'm sure there are valid reasons...
Presumably you are going to be processing all of these rows in some way? In which case I can't see how partitioning, even if it were possible (and I don't think it is), would help you.
Indexes - sure, they might help, but again, if you are reading all or most of these rows anyway, they might not help, or might not even get used.
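For what it's worth, Oracle will happily index a GTT (the index is as temporary as the table), but it will not partition one. A minimal sketch, using hypothetical names:
CREATE GLOBAL TEMPORARY TABLE gtt_stage (
  id      NUMBER        NOT NULL,
  status  VARCHAR2(10),
  payload VARCHAR2(200)
) ON COMMIT PRESERVE ROWS;
-- Indexing a GTT is allowed; the index lives in temp segments too
CREATE INDEX gtt_stage_ix ON gtt_stage (status);
-- Partitioning is not: adding a PARTITION BY clause to a global
-- temporary table is rejected as an unsupported feature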
Can you give a bit more detail about exactly what you are doing?

Similar Messages

  • How can I Improve the Performance using Global Temp Tables ??

    Hi,
    Can anyone tell me how I can make use of Global Temporary Tables to improve performance?
    I have a few sample scripts.
    Say I have a view based on some complex query, like:
    CREATE OR REPLACE VIEW Profile_values_view AS
    SELECT d.Profile_option_name, d.Profile_option_id, Profile_option_value,
    u.User_name, Level_id, Level_code
    FROM Profile_definitions d, Profile_values v, Profile_users u
    WHERE d.Profile_option_id = v.Profile_option_id
    AND ((Level_code = 'USER' AND Level_id = U.User_id) OR
    (Level_code = 'DEPARTMENT' AND Level_id = U.Department_id) OR
    (Level_code = 'SITE'))
    AND NOT EXISTS (SELECT 1 FROM PROFILE_VALUES P
    WHERE P.PROFILE_OPTION_ID = V.PROFILE_OPTION_ID
    AND ((Level_code = 'USER' AND
    level_id = u.User_id) OR
    (Level_code = 'DEPARTMENT' AND
    level_id = u.Department_id) OR
    (Level_code = 'SITE'))
    AND INSTR('USERDEPARTMENTSITE', v.Level_code) >
    INSTR('USERDEPARTMENTSITE', p.Level_code));
    Now i have created the Global temp Table as ,
    CREATE GLOBAL TEMPORARY TABLE Profile_values_temp (
    Profile_option_name VARCHAR2(60) NOT NULL,
    Profile_option_id NUMBER(4) NOT NULL,
    Profile_option_value VARCHAR2(20) NOT NULL,
    Level_code VARCHAR2(10),
    Level_id NUMBER(4),
    CONSTRAINT Profile_values_temp_pk
    PRIMARY KEY (Profile_option_id)
    ) ON COMMIT PRESERVE ROWS ORGANIZATION INDEX;
    Now I am Inserting the Records into Temp table as
    INSERT INTO Profile_values_temp
    (Profile_option_name, Profile_option_id, Profile_option_value,
    Level_code, Level_id)
    SELECT Profile_option_name, Profile_option_id, Profile_option_value,
    Level_code, Level_id
    FROM Profile_values_view;
    COMMIT;
    Now what my doubt is: when do I need to execute the insert statement?
    Say the view returns a few million records; then loading such data into the Global Temporary table takes a lot of time.
    Then what is the use of Global Temporary tables, and how can I improve the performance using them?
    Raj

    Thanks for the response.
    There are 2 to 3 complex views in our database, there will always be more than 5000 users working on the application, and it is an OLTP application. Those complex views are killing the application performance.
    What I felt was that if I create Global Temporary tables for those views, I will be able to load the roughly one third of a million records returned by the views into them and improve the application performance.
    I have created the Global Temporary tables for 2 views with the option ON COMMIT PRESERVE ROWS, but after I insert the records into the temp table and issue the COMMIT statement, the temp table is getting cleared.
    I was really surprised by this behaviour, as I know that with ON COMMIT PRESERVE ROWS the rows should be retained in the temp table; instead, it is getting cleared.
    Please suggest what to do??
    Raj
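    One thing worth double-checking: ON COMMIT DELETE ROWS is the default, so the PRESERVE clause has to actually appear in the DDL. A minimal sketch of the two behaviours, with hypothetical table names:
    -- Default behaviour: rows vanish at COMMIT
    CREATE GLOBAL TEMPORARY TABLE gtt_delete_demo (n NUMBER)
    ON COMMIT DELETE ROWS;
    -- Rows survive COMMIT for the rest of the session
    CREATE GLOBAL TEMPORARY TABLE gtt_preserve_demo (n NUMBER)
    ON COMMIT PRESERVE ROWS;
    INSERT INTO gtt_delete_demo VALUES (1);
    INSERT INTO gtt_preserve_demo VALUES (1);
    COMMIT;
    SELECT COUNT(*) FROM gtt_delete_demo;   -- 0
    SELECT COUNT(*) FROM gtt_preserve_demo; -- 1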

  • Does Global Temporary Table help in performance?

    I have a large database table that is growing daily. The application I have has a page for the past day's data and another for a chosen period of time. Since I'm looking at a very large amount of data for each page (~100k rows) and building time-based charts, I have performance issues. I tried collections for each of these and found that it makes everything slower, I think because the collection is large and not indexed.
    Since I don't need the data to be maintained for the session - in fact, each time I submit a page I need to get updated data, at least for the past-day page - I wonder if a Global Temporary Table is a good solution for me.
    The only reason I want to store the data in a table is to avoid running similar queries for different charts and reports. Is this a valid reason at all?
    If this is a good solution, can someone give me a hint on how to do this?
    Any help is appreciated.

    It all depends on how efficient your query is. You can have a billion-row table and still get a fraction-of-a-second response if the data is indexed and the number of data blocks to be visited to retrieve the data is small. It's all about reducing the number of I/Os to find and retrieve your data with the query. Many aspects of the data, stats, and table/index structure can influence the efficiency of your query.
    The SQL forum would be a better place to get into query tuning, but if this test is fast, you can probably focus elsewhere for now. It will resolve your full result set and then just do a count of the result (to avoid sending 100k rows back to the client); we are trying to get an idea of how long it takes to resolve your result set. Using literals rather than item names in your SQL should be fine for this test. Avoid using V() around item names in your SQL.
    select count(*) from ( <your-query-goes-here> );

  • Performance issue with Oracle Global Temporary table

    Hi
    Oracle version : 10.2.0.3.0 - Production
    We have an application in Java / Oracle. A user request comes in as XML; the Oracle parser parses it and inserts it into Global Temporary Tables, and then a business stored procedure picks the data from these GTTs and does the required processing.
    In the end, the required response data is inserted into response GTTs, from which the response XML is generated.
    Question: does the use of Global Temporary Tables in Oracle degrade performance, given that we have a large number of GTTs in our application, approx. 500-600 such tables?
    Regards,
    Vikas Kumar

    Hi All,
    Here is the architecture of my application:
    The Java application creates XML from the screen values and then inserts that XML
    into a framework (separate DB schema) table. Then Java calls a stored procedure in the same framework DB, and in the SP we have the following steps.
    1. It fetches the XML from the XML type table and inserts it into a screen-specific XMLTYPE table in the framework DB schema. This table has a trigger which parses the XML and then inserts the XML values into GTTs which are created in separate product schemas.
    2. It calls the product SP, and in the product SP we have the business logic. The product SP
    does the execution and then inserts the response into a response GTT.
    3. The response XML is created using an XML generation function and the response GTT.
    I hope you will understand my architecture this time; now let me know whether GTTs are good in this scenario or not. Also please note that I need the data in the GTTs only during execution and not after that; I don't want to do the specific deletes which I would have to do if I were using normal tables.
    Regards,
    Vikas Kumar

  • Performance slow on DELETE command on global temporary table!

    Hi,
    I have a DELETE on a global temporary table that is taking a long time!
    Does anyone have a clue about how to improve DELETE commands against a global temporary table??
    Tks,
    Paulo Portugal

    Same problem here!
    <QUOTE>
    SELECT DISTINCT PDT_CHILD.SUP_ID, PDT_CHILD.SUB_ID,
    PDT_CHILD.SUB_LEAF_FLAG_ID
    FROM PJI_FP_AGGR_RBS_T PDT_CHILD
    WHERE 1=1 AND PDT_CHILD.SUP_ID = :B2
    AND PDT_CHILD.SUP_ID <> PDT_CHILD.SUB_ID AND PDT_CHILD.WORKER_ID = :B1
    call        count      cpu    elapsed    disk      query  current    rows
    -------  --------  -------  ---------  ------  ---------  -------  ------
    Parse           1     0.00       0.00       0          0        0       0
    Execute     88561    20.71      20.23       0          0        0       0
    Fetch       90269   926.19     906.80      45   45164134        0  176545
    -------  --------  -------  ---------  ------  ---------  -------  ------
    total      178831   946.91     927.03      45   45164134        0  176545
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173 (APPS) (recursive depth: 1)
    Rows  Execution Plan
       0  SELECT STATEMENT MODE: ALL_ROWS
       0   HASH (UNIQUE)
       0    TABLE ACCESS (FULL) OF 'PJI_FP_AGGR_RBS_T' (TABLE (TEMP))
    Elapsed times include waiting on following events:
    Event waited on                          Times Waited  Max. Wait  Total Waited
    ---------------------------------------  ------------  ---------  ------------
    latch: row cache objects                            1       0.00          0.00
    direct path write temp                              3       0.00          0.00
    direct path read temp                               3       0.00          0.00
    </QUOTE>
    The fetch count is far too high for a TEMP table... Any help would be much appreciated!
    Note: please teach me how to format the above in my future posts on the OTN forums.

  • Direct Path Loading Issues with Global Temporary Tables - OCI & OCILib

    I am writing some code to import data into a warehouse from a CPU grid which computes risk data. Because a computing grid is used, there will be many clients which can load the data concurrently, at any point in time.
    Currently the import uses binding in OCCI and chunking with a prepared statement to import the data into a global temporary table in a staging area, after which a stored procedure is called within the same session to process the data and load it into a star schema.
    The GTT has the advantage that if any client has issues no dirty data is left behind, and each client only sees its own instance of the data.
    I have been looking at using direct path loading to increase the performance of the load and have written some OCI code to perform the same task. I have managed to import the data into a regular heap-based table using the OCI direct path APIs. However, when I try to use the same code to import into a Global Temporary Table I get an OCI error (ORA-00600: internal error code, arguments: [6979], [16], [1], [1318528], [], [], [], [], [], [], [], [])
    I get the error when the function OCIDirPathPrepare is executed. The same issue occurs in both OCI and OCILib.
    Is it not possible to use direct path loading against a Global Temporary Table? You can use the /*+ APPEND */ hint to load global temporary tables this way from tools like SQL Developer / Toad, which is surely telling the SQL engine to use direct path?
    Looking at the view USER_OBJECTS I can see that for a Global Temporary Table the DATA_OBJECT_ID is null. Does this mean that it is impossible to use a direct path load into Global Temporary Tables?
    Any ideas / suggestions would be really appreciated. If this means redesigning the application then I would appreciate suggestions which would allow many clients to write quickly in parallel. If that means creating a new partition in a heap table for each writer and direct path loading into it, then so be it.
    Thanks
    H

    Replying to my own message in case anyone else is interested.
    I have now managed to successfully load data using direct path into a global temporary table with OCI. There appears to be no reason why this approach will not work.
    I loaded data into the temporary table and then issued a select count(*) on the table from within the session and from a new session. The results were as expected.
    The reason for the ORA-00600 error was that I had enabled table-level parallel loading,
    i.e.
    OCIAttrSet((dvoid *) context, (ub4) OCI_HTYPE_DIRPATH_CTX, (ub1) 1, (ub4) 0, (ub4) OCI_ATTR_DIRPATH_PARALLEL, errhp)
    When loading a Global Temporary Table, the OCI_ATTR_DIRPATH_PARALLEL attribute needs to be zero.
    This makes sense, since the temp table does not have any partitions, so it would not be possible to write in parallel to multiple partitions.
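    For reference, the SQL-level equivalent of a serial direct-path load into a GTT - a minimal sketch, with a hypothetical table name - is simply:
    -- The APPEND hint requests a direct-path insert, writing above the
    -- high-water mark of the session's temp segment
    INSERT /*+ APPEND */ INTO my_gtt (id, payload)
    SELECT object_id, object_name
    FROM all_objects;
    -- A direct-path loaded table cannot be queried in the same
    -- transaction until you commit (ORA-12838)
    COMMIT;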

  • Use of global temporary tables in Procedures

    Hi
    I am using global temporary tables in my procedures, loading data into the same table through many procedures. I am fetching the data from the global temporary table in Pro*C via a cursor. Will this degrade performance?
    Please help me..
    Thanks in Advance...

    Will this degrade performance?
    That depends... in comparison to what?
    Loading data into temporary tables will generally be more efficient than loading data into permanent tables, because Oracle needs to do less to protect this data since it is inherently transient. On the other hand, loading the data into a table in the first place tends to be more expensive than alternatives like using a single SQL statement, a pipelined table function, or an in-memory collection.
    Justin
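    To illustrate the pipelined table function alternative Justin mentions - a minimal sketch, with hypothetical types and generated data standing in for real logic:
    CREATE TYPE num_row AS OBJECT (id NUMBER, val NUMBER);
    /
    CREATE TYPE num_tab AS TABLE OF num_row;
    /
    -- Rows are streamed to the caller as they are produced, avoiding
    -- both an intermediate table and a large PGA collection
    CREATE OR REPLACE FUNCTION gen_rows (p_limit IN NUMBER)
      RETURN num_tab PIPELINED
    IS
    BEGIN
      FOR i IN 1 .. p_limit LOOP
        PIPE ROW (num_row(i, i * 2));
      END LOOP;
      RETURN;
    END;
    /
    -- Consumed directly in SQL, like a table:
    SELECT * FROM TABLE(gen_rows(100));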

  • Life time of data in a Global Temporary Table.

    Dear Friends,
    I have a global temporary table into which I insert some values via a backend package when the form starts up, and which I access via the same package when the user makes changes - storing the values and, on exit, saving them to the master table. My problem is that the data is not accessible while processing. I'm using Oracle9i Enterprise Edition Release 9.2.0.1.0 and Forms [32 Bit] Version 6.0.8.8.0. Here is the script with which I created the temporary table.
    CREATE GLOBAL TEMPORARY TABLE GTT_PRA (
    A1 VARCHAR2(10 BYTE) NOT NULL,
    A2 VARCHAR2(15 BYTE) NOT NULL,
    A3 VARCHAR2(10 BYTE) NOT NULL
    ) ON COMMIT DELETE ROWS;
    Why is that so? Please help me.
    With Regards,
    Senthil .A. Perumal.

    Dear Arun,
    Thank you for your script. But I'm accessing a large table, so with each and every process the table gets populated and grows very large, giving space problems; that is why I'm deleting rows on commit. I would appreciate your help.
    Dear Yogesh,
    From the same form I'm calling the backend package - will that be a different session? I call once to populate the table, next to store the user-modified data, and finally to store the data in the master table. I think these are all in the same session. Please reply.
    Thank you, dear friends, for your immediate response. I really appreciate it.
    Regards,
    Senthil .A. Perumal.

  • How we can avoid the sequence in remote tables through global temporary table

    Hi,
    We have the table xx_interface_qualifiers in the remote DB and we are inserting the data like this, inside a loop:
    INSERT INTO xx_interface_qua
    (interface_id,
    list_line_interface_id, excluder_flag,
    qualifier_context, qualifier_attribute,
    qualifier_attr_value, qualifier_precedence,
    comparison_operator_code, start_date_active,
    end_date_active, list_header_name, list_line_no,
    creation_date, created_by, last_update_date,
    last_updated_by, interface_attribute1)
    VALUES (xx_interface_qua_s.NEXTVAL,
    ttt, 'Y',
    xxx, xxx2,
    xxx3, xxx4,
    '=', SYSDATE,
    NULL, xxx4, -1,
    SYSDATE, '-1', SYSDATE,
    '-1', 44);
    We are trying to avoid hitting the database every time for a sequence, and want to use global temporary tables; I mean to say, first we insert the data into a TEMP table and then from the temp table we insert all the data into xx_interface_qual in a single shot to improve the performance.
    But how can we avoid the sequence in this case, as we do not know the sequence values on the remote side?
    Please suggest any other way to improve the performance.
    Regards
    Das

    797846 wrote:
    We have the table xx_interface_qualifiers in the remote db and we are inserting the data like this, inside a loop ...
    We are trying to avoid hitting the database every time for a sequence, and want to use global temporary tables ... first we insert the data into a TEMP table and then from the temp table we insert all the data into xx_interface_qual in a single shot to improve the performance.
    But how can we avoid the sequence in this case, as we do not know the sequence values on the remote side?

    Does not make sense. I/O is the slowest database operation.
    You have an unknown performance problem (that you claim is due to a sequence, but failed to provide any evidence for). Now you want to create more I/O by writing the data twice: once into a temp table and then again into the destination table. And do that in order to increase performance?
    I do not see how this can solve the underlying, and unknown, performance issue that you claim exists.
    Any problem solution needs to start with correctly and comprehensively identifying the problem.
    You cannot solve a problem without first knowing WHAT the problem is.
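    If the row-by-row loop itself turns out to be the bottleneck, one set-based alternative - a sketch only, with hypothetical db link and source names, and assuming the sequence lives at the same remote site as the table (remote sequences can be referenced across a database link) - is a single INSERT ... SELECT:
    -- One statement instead of one insert per loop iteration; the
    -- remote sequence is still the source of the keys
    INSERT INTO xx_interface_qua@remote_db
      (interface_id, excluder_flag, creation_date, created_by)
    SELECT xx_interface_qua_s.NEXTVAL@remote_db, 'Y', SYSDATE, '-1'
    FROM source_rows; -- hypothetical local table holding the looped values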

  • Scalability issue with global temporary table.

    Hi All,
    Does CREATE GLOBAL TEMPORARY TABLE lock the data dictionary like CREATE TABLE does? If so, wouldn't that be a scalability issue in a multi-user environment?
    Thanks and Regards,
    Rudra

    Billy Verreynne wrote:
    acadet wrote:
    am I correct in interpreting your response that we should be using GTT's in favour of bulk operations and collections and in memory operations?
    No. I said collections cannot scale. This means that, because collections reside in expensive PGA memory, you cannot stuff large data volumes into them. Thus they do not make an ideal storage bin for temporary data (e.g. data loaded from a file or a web service). GTTs otoh do not suffer from the same restrictions, can be indexed, and offer vastly better scalability and so on.
    Multiple passes are often needed using such a data structure. Or filtering to find specific data. As a GTT is a SQL native, it offers a lot more flexibility and performance in this regard.
    And this makes sense - as where do we put our persistent data? Also in tables, but ones of a persistent and not a temporary kind like a GTT.
    Collections are pretty useful - but limited in size and capability.
    Rudra states:
    I want to pull out a few metrics from different tables and process them
    If this can't be achieved in a SQL statement, unless Rudra is a master of understatement, then I would see GTT's as a waste of IO and programming effort. I agree.
    My comments however were about choices for a temporary data storage bin in PL/SQL.

    I agree with your general comments regarding temporary storage bins in Oracle, but to say that collections don't scale is putting too narrow a definition on scaling. True, collections can be resource intensive in terms of memory and CPU requirements, but their persistence will generally be much shorter than other types of temporary storage. Given the right characteristics collections will scale, and given the wrong characteristics GTT's won't scale.
    As you say it is all about choice. Getting back to the theme of this thread though, the original poster should be made aware that well designed and well coded applications are most likely to scale. Creating tables on the fly is generally considered bad practice and letting the database do what it does best, join tables in queries at the SQL level is considered good practice. The rest lies somewhere in between and knowing when to do which is why we get paid the big bucks (not). ;-)
    Regards
    Andre
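    The practical upshot for GTTs in particular: create them once, at installation time, and let every session share the one definition - no runtime DDL, and no per-session dictionary churn. A minimal sketch, with hypothetical names:
    -- One-time DDL, run at install time, not at runtime
    CREATE GLOBAL TEMPORARY TABLE gtt_work (
      id  NUMBER,
      val VARCHAR2(100)
    ) ON COMMIT DELETE ROWS;
    -- Each session then gets its own private instantiation of the data
    INSERT INTO gtt_work
    SELECT object_id, object_name FROM user_objects;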

  • Performance issue with temporary table

    Hello oracle community,
    Oracle 11.1
    I have a problem with a global temp table (IMPO.REPCUSTOMERSLUCK24). I insert about 600,000 records into the table, do some UPDATE statements on it, and at the end run a MERGE statement to fill another table. I think the problem is that the optimizer doesn't know how many records are in the temp table (Cardinality 1), but I cannot use DBMS_STATS.GATHER_TABLE_STATS to analyze the temp table (I will lose the records if I do). Maybe I could analyze it with the ON COMMIT PRESERVE ROWS option, but I would like to avoid that. Here is the plan:
    UPDATE STATEMENT ALL_ROWS  Cost: 1  Bytes: 1,171  Cardinality: 1
      15 UPDATE IMPO.REPCUSTOMERSLUCK24
        14 FILTER
          2 TABLE ACCESS BY INDEX ROWID TABLE (TEMP) IMPO.REPCUSTOMERSLUCK24  Cost: 1  Bytes: 1,171  Cardinality: 1
            1 INDEX RANGE SCAN INDEX IMPO.FK_1883_REPCUSTOMERSLUCK24  Cost: 1  Cardinality: 1
          13 FILTER
            12 SORT GROUP BY NOSORT  Cost: 0  Bytes: 2,212  Cardinality: 1
              11 NESTED LOOPS
                9 NESTED LOOPS  Cost: 0  Bytes: 2,212  Cardinality: 1
                  7 NESTED LOOPS  Cost: 0  Bytes: 1,685  Cardinality: 1
                    4 TABLE ACCESS BY INDEX ROWID TABLE (TEMP) IMPO.REPCONTRACTSLUCK24  Cost: 0  Bytes: 1,158  Cardinality: 1
                      3 INDEX FULL SCAN INDEX IMPO.FK_1875_REPCONTRACTSLUCK24  Cost: 0  Cardinality: 1
                    6 TABLE ACCESS BY INDEX ROWID TABLE CRM2.MEDIACODE  Cost: 0  Bytes: 527  Cardinality: 1
                      5 INDEX UNIQUE SCAN INDEX (UNIQUE) CRM2.AK_1970_MEDIACODE  Cost: 0  Cardinality: 1
                  8 INDEX UNIQUE SCAN INDEX (UNIQUE) CRM2.PK_1955_PARTNER  Cost: 0  Cardinality: 1
                10 TABLE ACCESS BY INDEX ROWID TABLE CRM2.PARTNER  Cost: 0  Bytes: 527  Cardinality: 1
    Any suggestions for my problem?
    Ikrischer

    hi,
    dynamic sampling reads only a part of the table to make an estimation (generally to count the number of rows, or to get an average, if the sample is 'large' enough for the result to be reliable, etc.).
    So in your case you could evaluate the number of rows like this (the explain plans show you that the estimated cost is proportional to the size of the sample read, whether expressed in number of rows or blocks).
    SQL*Plus: Release 10.2.0.2.0 - Production on Thu Jun 17 15:32:43 2010
    Copyright (c) 1982, 2005, Oracle.  All Rights Reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    With the Partitioning, Oracle Label Security, OLAP and Data Mining options
    SQL> CREATE GLOBAL TEMPORARY TABLE XTEST
      2  (
      3    NUM1  NUMBER                                  NOT NULL
      4  )
      5  ON COMMIT PRESERVE ROWS
      6  NOCACHE
      7  /
    Table created.
    SQL> INSERT INTO xtest
      2     SELECT     ROWNUM
      3     FROM       DUAL
      4     CONNECT BY ROWNUM <= 100000;
    100000 rows created.
    SQL> commit;
    Commit complete.
    SQL> EXEC dbms_stats.gather_table_stats(ownname=>user,tabname=>'XTEST');
    PL/SQL procedure successfully completed.
    SQL> EXPLAIN PLAN SET STATEMENT_ID = 'st1' FOR SELECT COUNT(*)*10 FROM xtest SAMPLE(10);
    Explained.
    SQL> EXPLAIN PLAN SET STATEMENT_ID = 'st2' FOR SELECT COUNT(*)*1.1 FROM xtest SAMPLE(90);
    Explained.
    SQL> set linesize 120;
    SQL> SELECT PLAN_TABLE_OUTPUT FROM   TABLE(DBMS_XPLAN.DISPLAY(NULL,'st1','TYPICAL'));
    PLAN_TABLE_OUTPUT
    Plan hash value: 2221487120
    | Id  | Operation            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT     |       |     1 |     4 |    31  (26)| 00:00:01 |
    |   1 |  SORT AGGREGATE      |       |     1 |     4 |            |          |
    |   2 |   TABLE ACCESS SAMPLE| XTEST | 10077 | 40308 |    31  (26)| 00:00:01 |
    9 rows selected.
    SQL> SELECT PLAN_TABLE_OUTPUT FROM   TABLE(DBMS_XPLAN.DISPLAY(NULL,'st2','TYPICAL'));
    PLAN_TABLE_OUTPUT
    Plan hash value: 2221487120
    | Id  | Operation            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT     |       |     1 |     4 |    32  (29)| 00:00:01 |
    |   1 |  SORT AGGREGATE      |       |     1 |     4 |            |          |
    |   2 |   TABLE ACCESS SAMPLE| XTEST | 90693 |   354K|    32  (29)| 00:00:01 |
    9 rows selected.
    SQL>
    Note the difference in rows/bytes between the two samples, but be careful, because the explain plan only gives you an estimation ...
    REM: if you sample by blocks, you'll get less 'IO' (physical or not): select count(*)*1.5 from mytable sample block (50) costs less than select count(*)*1.5 from mytable sample (50) ...
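    A common way to apply this to the original problem - the optimizer assuming cardinality 1 for the GTT - is the DYNAMIC_SAMPLING hint, which makes the optimizer sample the temp table at hard parse time. A minimal sketch, with hypothetical names:
    -- Level 2 sampling: the optimizer peeks at the GTT's real contents
    -- when the statement is parsed, instead of guessing one row
    UPDATE /*+ dynamic_sampling(t 2) */ my_gtt t
    SET t.status = 'PROCESSED'
    WHERE t.batch_id = :b1;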

  • Need Advice on Global Temporary tables or Materialized views or Views

    Need advice on a PL/SQL procedure I am working on.
    I have 6 tables having 200,000 rows in total initially, but a maximum of 20,000 rows will be added daily by a batch process.
    I am writing PL/SQL code that takes an input, for example customer_id, is required to get all the data for that customer_id, and
    has to do some complex calculation that includes stepwise validations before giving the output. While doing the logic it has to get the data for that customer_id from all the tables.
    There may be 100 records for that particular customer_id.
    I need advice on the options below.
    1. Use global temporary tables: get those 100 records and do the calculation part on the Global Temporary table.
    2. Use views or materialized views.
    3. Use record structures (like table types for those records) and then do the logic on them.
    As performance is the key point here, I would like to pull all the data into memory at once and then do the calculations instead of hitting the database many times; this is my main idea (correct me if I am wrong). Also please advise if there are any other options.
    I am using ORACLE 10G.
    Thanks
    Rede

    The approach that many advocate for here (including myself) is to do as much in SQL as possible. So, copying to GTTs or using record structures is probably not the solution you should be after.
    If you can provide the following details we may be able to steer you down the right path
    1. Oracle version (SELECT * FROM V$VERSION)
    2. Sample data in the form of CREATE / INSERT statements.
    3. Expected output
    4. Explanation of expected output (A.K.A. "business logic")
    5. Use code tags for #2 and #3. See the FAQ (link on the top right side) for details.
    Ideally try and re-create the problem, simplifying it as much as possible, without losing context. Use #1-#5 above as a base for posting your simplified problem here. Then we may be able to give you a solution specific to your problem.

  • Global Temporary Table (and more)

    Hi every one,
    here's the scenario:
    My database version is Oracle 9i.
    I have two Global Temporary Tables (GTT). I want to insert into those two tables (using a SELECT statement for each table) and then use a SELECT statement to select from the two tables, with the result sent to Report Builder 6i.
    Now, I guess I could use a Stored Procedure (SP) to insert into those tables and then a SYS_REFCURSOR to return from this SP. The problem with that is that Report Builder 6i does not recognise SYS_REFCURSOR types; it requires actual rows to be returned.
    So, my question is:
    Is there any way to insert into the two GTTs first (using SELECT statements with INSERT) and then select from the two tables, all around a single SELECT statement? (In any case, a statement must be present that returns actual rows.)
    Additionally, one may run the report more than once (not necessarily after issuing a COMMIT or logging out), which means the two GTTs will be filled again and again, so I will have to truncate the GTTs every time before inserting.
    Efficiency of the query/solution is very important too, as the data involved can consist of up to 200,000 records.
    Any suggestions will be greatly appreciated.
    Thank u.

    Here is some more detail:
    Q1 //// This statement handles the INSERT for one GTT. The data inserted consists of multiple values selected from tables other than POLICY, which is used by the stored proc MANPOWER (in Q2).
    INSERT INTO TEMP1_NewBusiness
    SELECT ORG1, ORG2, (SELECT NAME FROM ORGANISER WHERE CODE=ORG1), ...
    FROM POLICY
    WHERE DATCOM = '2007';
    Q2 //// This handles the INSERT for the second GTT
    INSERT INTO TEMP2_MANPOWER
    SELECT ORG1, MANPOWER(ORG1)
    FROM TEMP1_NewBusiness;
    /////Table POLICY is a normal table.
    MANPOWER is a stored proc which performs string aggregation, using a cursor, by selecting from TEMP1.NBS. Because of the volume of data involved and the number of selects, I'm using the GTT TEMP1.NBS as the source for MANPOWER's data.
    So, first i need these two statements to be executed so that my GTTs are filled.
    Next, I want the result of the query below sent to Report Builder.
    Q3.
    SELECT A.ORG1, A.ORG2, ..., B.MANPOWER
    FROM TEMP1.NewBusiness A, TEMP2.MANPOWER B
    WHERE A.ORG1 = B.ORG1;
    Now, I could place Q3 in the report, but how do I get Q1 and Q2 to be executed first?
    Hope the situation is a little clearer now.
    I understand where you are coming from, DAMORGAN, and duplication is something I want to avoid myself.
    Thank u.
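    One possible shape for this - a sketch only, with a hypothetical procedure name, and assuming the report's Before Report trigger is used to run PL/SQL before the report query executes:
    CREATE OR REPLACE PROCEDURE fill_report_gtts IS
    BEGIN
      -- Clear out any previous run in this session, then re-populate
      DELETE FROM TEMP1_NewBusiness;
      DELETE FROM TEMP2_MANPOWER;
      INSERT INTO TEMP1_NewBusiness
      SELECT ORG1, ORG2, (SELECT NAME FROM ORGANISER WHERE CODE = ORG1)
      FROM POLICY
      WHERE DATCOM = '2007';
      INSERT INTO TEMP2_MANPOWER
      SELECT ORG1, MANPOWER(ORG1)
      FROM TEMP1_NewBusiness;
    END;
    /
    -- The Before Report trigger then just calls fill_report_gtts,
    -- and Q3 runs against the freshly populated GTTs.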

  • Creating a Global Temporary Table on non-default TEMP tablespace.

    Hello ,
    I am using Oracle 11g.
    I have a procedure which creates global temporary tables for its functionality. The data going into the global temporary table - meaning the data going into the default TEMP tablespace - is huge... billions of rows...
    So what I want to do is create the global temporary table in another tablespace, TEMP2 (which is not the default one), so the load of billions of rows of data is shifted to TEMP2. The default TEMP tablespace will not be affected and can be used for other transactions.
    Is this possible? Can I shift the global temporary table from TEMP (the default temp tablespace) to TEMP2 (the non-default temp tablespace)?
    Please guide me with proper solutions and examples.
    Thanks in advance.

    DBA4 wrote:
    Is this possible? Can I shift the global temporary table from TEMP (the default temp tablespace) to TEMP2 (the non-default temp tablespace)?
    Global temporary tables are instantiated in the temporary tablespace of the schema that inserts the data - not into "the default" temporary tablespace.
    Assume Schema1 creates a GTT and grants all on that table to schema2
    Assume schema1 also creates a procedure (authid owner, the default) to insert data into the GTT and grants execute on the procedure to schema2
    If schema2 executes: insert into schema1.gtt, the data will appear in the temporary tablespace of schema2
    If schema2 executes: execute schema1.procedure, the data will appear in the temporary tablespace of schema1
    So if you want to protect the "normal" temporary tablespace, you could just create a special temporary tablespace for the owner of the procedure.
    Regards
    Jonathan Lewis
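    Following that suggestion, the setup might look like this - a sketch, with hypothetical tablespace, file, and user names:
    -- A dedicated temporary tablespace for the owner of the procedure
    CREATE TEMPORARY TABLESPACE temp2
      TEMPFILE '/u01/oradata/ORCL/temp2_01.dbf' SIZE 10G AUTOEXTEND ON;
    ALTER USER gtt_owner TEMPORARY TABLESPACE temp2;
    -- GTT data inserted via gtt_owner's definer-rights procedure is now
    -- instantiated in TEMP2, leaving the default TEMP untouched.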

  • Global temporary table in PL/SQL called from APEX page

    I have a global temporary table in a PL/SQL procedure that is called from an APEX page.
    The global temp table is populated with data as the procedure runs and then at the end of the procedure I do a create_collection_from_query_b to populate a collection with the data from the temp table. (I do this b/c it is much faster than creating the collection and doing an add_member for each row.)
    The problem is that there are no commits in my procedure but I cannot get the bulk insert to work unless I define the temp table as on commit preserve rows.
    Can anyone shed any light on this issue.
    Thanks,
    Andrew

    alamantia wrote:
    My PL/SQL procedure is called from an after submit page process. Does that imply that there is a commit happening after that process is successful?
    Ultimately, yes.
    If the process calls the PL/SQL procedure and the temp table is in the procedure, wouldn't the commit fire after all the PL/SQL code is complete, which would be after the bulk insert from the temp table to my collection?
    Yes, but at any point where the procedure contains code like
    :APEX_ITEM := ...
    or
    select ... into :APEX_ITEM from ...
    or
    my_procedure(p_in => ..., p_out => :APEX_ITEM, ...);
    or
    apex_util.set_session_state(...);
    APEX will commit whilst maintaining session state.
    If you don't have any of these events in the procedure, then test to see if the commit is occurring in apex_collection.create_collection_from_query_b prior to creation of the collection.
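    For reference, the combination under discussion might look like this - a sketch only, with hypothetical table and collection names; ON COMMIT PRESERVE ROWS keeps the GTT rows alive across the commits APEX issues:
    CREATE GLOBAL TEMPORARY TABLE gtt_report_rows (
      id  NUMBER,
      val VARCHAR2(100)
    ) ON COMMIT PRESERVE ROWS;
    -- In the page process, after the procedure has populated the GTT:
    BEGIN
      apex_collection.create_collection_from_query_b(
        p_collection_name => 'REPORT_ROWS',
        p_query           => 'select id, val from gtt_report_rows');
    END;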
