About global temporary table

Hi all,

forms_ddl('truncate table SUB_BF reuse storage');
insert /*+ APPEND */ into SUB_BF select * from SUB_TEMP_BF;

SUB_BF is a regular table and SUB_TEMP_BF is a GTT (global temporary table), and a report is populated based on the SUB_BF table. When I use a DELETE statement instead of TRUNCATE the report shows data, but when I use TRUNCATE the report doesn't show any values.
I can't use a DELETE statement because it is slow, and my users generate the report concurrently, so the SUB_BF table gets locked and the reports hang.
BR//
Sanjay

When I use a DELETE statement instead of TRUNCATE the report shows data, but when I use TRUNCATE the report doesn't show any values.
TRUNCATE is DDL and results in an automatic commit of all pending transactions in that session, while DELETE requires an explicit COMMIT or ROLLBACK.
Please check the CREATE TABLE specification for the GTT SUB_TEMP_BF: if it has an ON COMMIT DELETE ROWS clause, then the TRUNCATE is probably why all rows in SUB_TEMP_BF are deleted before they are inserted into SUB_BF.
Vivek L
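
To make this concrete, here is a minimal sketch of the failure mode, assuming SUB_TEMP_BF was created with ON COMMIT DELETE ROWS (the table definitions are illustrative):

-- Assumed definitions, for illustration only
CREATE GLOBAL TEMPORARY TABLE sub_temp_bf (id NUMBER) ON COMMIT DELETE ROWS;
CREATE TABLE sub_bf (id NUMBER);

INSERT INTO sub_temp_bf VALUES (1);          -- stage a row in the GTT

-- TRUNCATE is DDL, so it issues an implicit COMMIT, which fires the
-- ON COMMIT DELETE ROWS clause and empties SUB_TEMP_BF
TRUNCATE TABLE sub_bf REUSE STORAGE;

INSERT /*+ APPEND */ INTO sub_bf SELECT * FROM sub_temp_bf;   -- copies 0 rows

SELECT COUNT(*) FROM sub_bf;                 -- 0: the report sees no data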

Similar Messages

  • Global Temporary Table Space

    Hi
Can anyone give me some information about GLOBAL TEMPORARY TABLES?
    Thnx & Regards
    Sunil

    http://download-uk.oracle.com/docs/cd/B10501_01/appdev.920/a96590/adg03sch.htm#7794

  • Does Global Temporary Table help in performance?

I have a large database table that is growing daily. My application has a page for the past day's data and another for a chosen period of time. Since I'm looking at a very large amount of data for each page (~100k rows) and building time-based charts, I have performance issues. I tried collections for each of these and found they make everything slower, I think because the collection is large and not indexed.
    Since I don't need the data to be maintained for the session and in fact for each time that I submit a page I need to get the updated data at least for the past day page, I wonder if Global Temporary Table is a good solution for me.
    The only reason I want to store the data in a table is to avoid running similar queries for different charts and reports. Is this a valid reason at all?
    If this is a good solution, can someone give me a hint on how to do this?
    Any help is appreciated.

It all depends on how efficient your query is. You can have a billion-row table and still get a fraction-of-a-second response if the data is indexed and the number of data blocks to be visited to retrieve the data is small. It's all about reducing the number of I/Os needed to find and retrieve your data with the query. Many aspects of the data, the stats, and the table/index structure can influence the efficiency of your query. The SQL forum would be a better place to get into query tuning, but if the test below is fast, you can probably focus elsewhere for now. It resolves your full result set and then just counts the result (to avoid sending 100k rows back to the client); we are trying to get an idea of how long it takes to resolve your result set. Using literals rather than item names in your SQL is fine for this test. Avoid using V() around item names in your SQL.
    select count(*) from ( <your-query-goes-here> );

  • What to fill in "temp table scope" for global temporary tables?

    Hi,
    I'm using Data Modeler 4.0.1.836 and whatever I put in the "temp table scope" box for a global temporary table doesn't seem to affect the DDL script regarding the ON COMMIT PRESERVE/DELETE ROWS option. The script always shows ON COMMIT PRESERVE ROWS no matter what.
    Yet, some of my temporary tables must be created as ON COMMIT DELETE ROWS.
    The Data Modeler help says the following about this :
    Temp Table Scope:
    For a table classified as Temporary, you can specify a scope, such as Session or Dimension.
    Not sure what "Dimension" has to do with the scope here, but it doesn't make any difference.
    I tried putting "Session", "Dimension", "Transaction", but no luck. So what's the text to put for the script to generate ON COMMIT DELETE ROWS?
    Thanks

    Hi,
    The Temporary Table Scope property (on the Classification Types page of the Table Properties dialog) is purely documentary.
    To set ON COMMIT DELETE ROWS you should expand the Browser node for the Relational Model and look for the node for the relevant Oracle Physical Model.  If you expand this you will find an entry there for your Table. Double-click on this to get the Physical Model properties dialog for your table, and you will find a "Temporary"  property which has options YES (Preserve Rows), YES (Delete Rows) or NO.
    David
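
For reference, these are the two DDL outcomes that the physical model "Temporary" property controls, shown as alternatives (a sketch; the table and column names are placeholders):

-- Temporary = YES (Preserve Rows)
CREATE GLOBAL TEMPORARY TABLE stage_rows (id NUMBER) ON COMMIT PRESERVE ROWS;

-- Temporary = YES (Delete Rows)
CREATE GLOBAL TEMPORARY TABLE stage_rows (id NUMBER) ON COMMIT DELETE ROWS;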

  • Are global temporary tables in Oracle 10.2 behaving differently?

    My procedure is creating and inserting data into a Global Temporary Table (on commit data is preserved). I am running this procedure in two different environments.
The first environment runs under Oracle 10.2, and after the procedure runs successfully I cannot see any data inserted in the GTT.
    When I run the very same procedure in the second environment which runs under Oracle 9.2, it works fine, runs successfully and I can see all rows inserted in the GTT.
    Can someone explain this? What should I do differently in the Oracle 10.2? Any thoughts?
    Thank you very much.

    Hi,
    The following TEMPORARY table is created with the attribute, ON COMMIT DELETE ROWS. This allows an application to load registration entries into the global temporary table and manipulate that data with SQL statements. The data is deleted upon each commit with the clause ON COMMIT DELETE ROWS.
CREATE GLOBAL TEMPORARY TABLE student_reg_entries (
  student_name VARCHAR2(30),
  reg_entries  reg_entry_varray_type
) ON COMMIT DELETE ROWS;
    Replace ON COMMIT DELETE ROWS with ON COMMIT PRESERVE ROWS to retain temporary table data throughout the database session, regardless of any commits made during that session.
    The following illustrates a PL/SQL block that populates this temporary table with two classes each for two students. It makes no difference how many other applications are currently using this temporary global table for similar purposes. For the code below, two rows are inserted, then a COMMIT, after which a SELECT shows that the table is empty.
DECLARE
  classes_to_take reg_entry_varray_type := reg_entry_varray_type();
BEGIN
  classes_to_take := reg_entry_varray_type(
    reg_entry('CS101', 'C'),
    reg_entry('HST310', 'C'));
  INSERT INTO student_reg_entries VALUES ('John', classes_to_take);
  classes_to_take := reg_entry_varray_type(
    reg_entry('ENG102', 'C'),
    reg_entry('BIO201', 'C'));
  INSERT INTO student_reg_entries VALUES ('Mary', classes_to_take);
END;
/
    This next SELECT returns two rows:
    SQL> SELECT * FROM student_reg_entries;
    Row 1:
    John
    REG_ENTRY_VARRAY_TYPE
    (REG_ENTRY('CS101','C'), REG_ENTRY('HST310','C'))
    Row 2:
    Mary
    REG_ENTRY_VARRAY_TYPE
    (REG_ENTRY('ENG102','C'), REG_ENTRY('BIO201','C'))
A COMMIT automatically deletes the rows, making the table available for the next transaction in the session.
    SQL> COMMIT;
    SQL> SELECT * from student_reg_entries;
    no rows selected
The full information is in the link I posted before; I'm just showing this extract of the article because some people don't like to follow links...
    Cheers,
    Francisco Munoz Alvarez
    http://www.oraclenz.com

  • Scalability issue with global temporary table.

    Hi All,
Does CREATE GLOBAL TEMPORARY TABLE lock the data dictionary like CREATE TABLE? If yes, wouldn't that be a scalability issue in a multi-user environment?
    Thanks and Regards,
    Rudra

Billy Verreynne wrote:
acadet wrote:
Am I correct in interpreting your response that we should be using GTTs in favour of bulk operations and collections and in-memory operations?
No. I said collections cannot scale. This is due to the fact that collections reside in expensive PGA memory, so you cannot stuff large data volumes into them. Thus they do not make an ideal storage bin for temporary data (e.g. data loaded from a file or a web service). GTTs, on the other hand, do not suffer from the same restrictions, can be indexed, offer vastly better scalability, and so on.
Multiple passes are often needed using such a data structure, or filtering to find specific data. As a GTT is a SQL native, it offers a lot more flexibility and performance in this regard.
And this makes sense - for where do we put our persistent data? Also in tables, but ones of a persistent rather than temporary kind like a GTT.
Collections are pretty useful - but limited in size and capability.
Rudra states:
I want to pull out a few metrics from different tables and process it
If this can't be achieved in a SQL statement, then unless Rudra is a master of understatement I would see GTTs as a waste of I/O and programming effort. I agree. My comments, however, were about choices for a temporary data storage bin in PL/SQL.
I agree with your general comments regarding temporary storage bins in Oracle, but to say that collections don't scale puts too narrow a definition on scaling. True, collections can be resource intensive in terms of memory and CPU requirements, but their persistence will generally be much shorter than that of other types of temporary storage. Given the right characteristics collections will scale, and given the wrong characteristics GTTs won't scale.
As you say, it is all about choice. Getting back to the theme of this thread though, the original poster should be made aware that well designed and well coded applications are the most likely to scale. Creating tables on the fly is generally considered bad practice, and letting the database do what it does best - joining tables in queries at the SQL level - is considered good practice. The rest lies somewhere in between, and knowing when to do which is why we get paid the big bucks (not). ;-)
    Regards
    Andre
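
A minimal sketch of the two temporary storage bins being compared (names are illustrative; ALL_OBJECTS stands in for any data source):

-- Collection: the data lives in session PGA memory
DECLARE
  TYPE t_ids IS TABLE OF NUMBER;
  v_ids t_ids;
BEGIN
  SELECT object_id BULK COLLECT INTO v_ids FROM all_objects;
  DBMS_OUTPUT.PUT_LINE(v_ids.COUNT || ' rows held in PGA');
END;
/

-- GTT: the data spills to temp segments, can be indexed,
-- and is queryable with plain SQL
CREATE GLOBAL TEMPORARY TABLE stage_ids (id NUMBER) ON COMMIT PRESERVE ROWS;
INSERT INTO stage_ids SELECT object_id FROM all_objects;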

  • Performance slow on DELETE command on global temporary table!

    Hi,
I have a DELETE on a global temporary table that is taking a long time!
Does anyone have a clue about how to improve DELETE commands against a global temporary table?
    Tks,
    Paulo Portugal

    Same problem here!
    <QUOTE>
    SELECT DISTINCT PDT_CHILD.SUP_ID, PDT_CHILD.SUB_ID,
    PDT_CHILD.SUB_LEAF_FLAG_ID
    FROM
    PJI_FP_AGGR_RBS_T PDT_CHILD WHERE 1=1 AND PDT_CHILD.SUP_ID = :B2 AND
    PDT_CHILD.SUP_ID <> PDT_CHILD.SUB_ID AND PDT_CHILD.WORKER_ID = :B1
call         count      cpu    elapsed    disk      query    current      rows
-------  ---------  -------  ---------  ------  ---------  ---------  --------
Parse            1     0.00       0.00       0          0          0         0
Execute      88561    20.71      20.23       0          0          0         0
Fetch        90269   926.19     906.80      45   45164134          0    176545
-------  ---------  -------  ---------  ------  ---------  ---------  --------
total       178831   946.91     927.03      45   45164134          0    176545
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173 (APPS) (recursive depth: 1)
    Rows Execution Plan
    0 SELECT STATEMENT MODE: ALL_ROWS
    0 HASH (UNIQUE)
    0 TABLE ACCESS (FULL) OF 'PJI_FP_AGGR_RBS_T' (TABLE (TEMP))
    Elapsed times include waiting on following events:
Event waited on                             Times Waited  Max. Wait  Total Waited
------------------------------------------  ------------  ---------  ------------
latch: row cache objects                               1       0.00          0.00
direct path write temp                                 3       0.00          0.00
direct path read temp                                  3       0.00          0.00
    </QUOTE>
The fetch count is far too high for a TEMP table... Any help would be much appreciated!
Note: please teach me how to format the above in my future posts in the OTN forums.

  • Insert slow into Global Temporary Table...

I am working with a stored procedure that does a select into a global temporary table, and it is really slow. I have read up on the APPEND hint and know that it is not a solution, since GTTs are in the temporary tablespace and are thus always appended and never logged.
Is there something else that I need to know about performance for GTTs? I find it hard to believe that Oracle would find it acceptable to take 50 seconds to insert 3,300 rows.

    My apologies in advance as my skill level with Oracle is not as high as I would like for this type of analysis and remediation.
I had thought it was the select as well, but if I run it by itself it takes about 1 second. The interesting part is that when I explain plan on it with the INSERT, the SQL plan changes.
    Here is the Non-insert explain plan:
    Plan hash value: 3474166068
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 51 | 8 (38)| 00:00:01 |
    | 1 | HASH GROUP BY | | 1 | 51 | 8 (38)| 00:00:01 |
    | 2 | VIEW | VM_NWVW_1 | 1 | 51 | 7 (29)| 00:00:01 |
    | 3 | HASH UNIQUE | | 1 | 115 | 7 (29)| 00:00:01 |
    | 4 | NESTED LOOPS | | | | | |
    | 5 | NESTED LOOPS | | 1 | 115 | 6 (17)| 00:00:01 |
    | 6 | NESTED LOOPS | | 1 | 82 | 5 (20)| 00:00:01 |
    | 7 | SORT UNIQUE | | 1 | 23 | 2 (0)| 00:00:01 |
    | 8 | TABLE ACCESS BY INDEX ROWID| PEAKSPEAKDAYSEG$METERMASTER | 1 | 23 | 2 (0)| 00:00:01 |
    |* 9 | INDEX RANGE SCAN | IDX_PDSEG$MTR_SEGID | 1 | | 1 (0)| 00:00:01 |
    |* 10 | TABLE ACCESS BY INDEX ROWID | FC_FFMTR_DAILY | 1 | 59 | 2 (0)| 00:00:01 |
    |* 11 | INDEX RANGE SCAN | FC_FFMTRDLY_IDX10 | 2461 | | 2 (0)| 00:00:01 |
    |* 12 | INDEX UNIQUE SCAN | FC_METER_PK | 1 | | 0 (0)| 00:00:01 |
    | 13 | TABLE ACCESS BY INDEX ROWID | FC_METER | 1 | 33 | 1 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    9 - access("SM"."SEGID"=584)
    10 - filter(TO_DATE(TO_CHAR("V"."MEASUREMENT_DAY"),'YYYYMMDD')>=TO_DATE(' 2002-01-01 00:00:00',
    'syyyy-mm-dd hh24:mi:ss') AND TO_DATE(TO_CHAR("V"."MEASUREMENT_DAY"),'YYYYMMDD')<TO_DATE(' 2003-01-01
    00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND ("V"."ADJUSTED_TOTAL_VOLUME"<>0.0 OR
    ROUND("V"."ADJUSTED_TOTAL_ENERGY",3)<>0.0))
    11 - access("V"."METER_NUMBER"="SM"."METER_ID")
    12 - access("M"."METER_NUMBER"="V"."METER_NUMBER")
    Here is the Insert explain plan:
    Plan hash value: 4282493455
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | INSERT STATEMENT | | 39 | 2886 | | 7810 (1)| 00:01:34 |
    | 1 | LOAD TABLE CONVENTIONAL | PEAKDAY_TEMP_CONSECUTIVEVALUES | | | | | |
    | 2 | HASH GROUP BY | | 39 | 2886 | | 7810 (1)| 00:01:34 |
    |* 3 | HASH JOIN RIGHT SEMI | | 39 | 2886 | | 7809 (1)| 00:01:34 |
    | 4 | VIEW | VW_NSO_1 | 1 | 10 | | 2 (0)| 00:00:01 |
    | 5 | NESTED LOOPS | | 1 | 27 | | 2 (0)| 00:00:01 |
    |* 6 | INDEX UNIQUE SCAN | PK_PEAKSPEAKDAYSEG | 1 | 4 | | 0 (0)| 00:00:01 |
    | 7 | TABLE ACCESS BY INDEX ROWID | PEAKSPEAKDAYSEG$METERMASTER | 1 | 23 | | 2 (0)| 00:00:01 |
    |* 8 | INDEX RANGE SCAN | IDX_PDSEG$MTR_SEGID | 1 | | | 1 (0)| 00:00:01 |
    | 9 | VIEW | PEAKS_RP_PEAKDAYMETER | 3894 | 243K| | 7807 (1)| 00:01:34 |
    | 10 | SORT UNIQUE | | 3894 | 349K| 448K| 7807 (1)| 00:01:34 |
    | 11 | NESTED LOOPS | | | | | | |
    | 12 | NESTED LOOPS | | 3894 | 349K| | 7722 (1)| 00:01:33 |
    | 13 | TABLE ACCESS FULL | FC_METER | 637 | 21021 | | 18 (0)| 00:00:01 |
    |* 14 | INDEX RANGE SCAN | FC_FFMTRDLY_IDX11 | 6 | | | 10 (0)| 00:00:01 |
    |* 15 | TABLE ACCESS BY INDEX ROWID| FC_FFMTR_DAILY | 6 | 354 | | 12 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    3 - access("METER_ID"="METER_ID")
    6 - access("GS"."SEGID"=584)
    8 - access("SM"."SEGID"=584)
    14 - access("M"."METER_NUMBER"="V"."METER_NUMBER")
    filter(TO_DATE(TO_CHAR("V"."MEASUREMENT_DAY"),'YYYYMMDD')>=TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd
    hh24:mi:ss') AND TO_DATE(TO_CHAR("V"."MEASUREMENT_DAY"),'YYYYMMDD')<TO_DATE(' 2003-01-01 00:00:00', 'syyyy-mm-dd
    hh24:mi:ss'))
    15 - filter("V"."ADJUSTED_TOTAL_VOLUME"<>0.0 OR ROUND("V"."ADJUSTED_TOTAL_ENERGY",3)<>0.0)
As you can see there is a real spike in the cost, yet the only change was the addition of the INSERT into the GTT. From what I can ascertain, the solution may be an alternate SQL statement or finding some way to push Oracle into running the query as it does for the non-INSERT execution.
I tried creating a simple view out of the SELECT statement to see if that would precompile it, but in the end it ran exactly the same way.
    The next thing that I am going to try is removing the PEAKS_RP_PEAKDAYMETER view by going more direct.
    I have not done the trace file analysis yet. Should I still do that?

  • Can global temporary  table work for me?

Hi, I recently found out about GTTs. I first created a regular table and inserted values into my new_table, but it was
I have a report that needs to display a time range based on the time given in the report, and each page of the report shows a different time range. I created a formula that inserts into new_table based on the value of the time range, and I then pull the values from the table into the report in a repeating frame. This causes the problem: the data in the table overwrites the previous page's values before they are displayed on the screen (there is a DELETE FROM the table in the formula). This is my problem: how can I pull from the table before it is deleted? Will a GTT work, and if so, how?

    I have no idea if it will work as you neglected to provide any code and any information about how you are producing the reports and with what tool.
    There are two types of global temporary tables. Do you know the difference?
    The docs are at http://tahiti.oracle.com
    Demos of both types in Morgan's Library at www.psoug.org
    After you've read the Oracle docs and understand how they work.
    And after you've run the Library demos.
    If you still have questions post them here along with sufficient information for someone not sitting next to you to know what you are doing.
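
For reference, the two types the reply alludes to differ only in the ON COMMIT clause (a sketch):

-- Transaction-scoped: rows vanish at every COMMIT
CREATE GLOBAL TEMPORARY TABLE gtt_txn (x NUMBER) ON COMMIT DELETE ROWS;

-- Session-scoped: rows survive COMMIT and vanish when the session ends
CREATE GLOBAL TEMPORARY TABLE gtt_session (x NUMBER) ON COMMIT PRESERVE ROWS;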

  • Inserts into Global Temporary Table

    I'm working on using a global temporary table in one of my apps. I have a small test run here to isolate the problem. It simply creates the global temporary table, inserts a row, commits and then does a select to see if the insert worked. No data shows in the table when running this. I don't know much about global temp tables, so any help would be appreciated.
CREATE GLOBAL TEMPORARY TABLE AGENT_SILO.AS_TEMP_VALIDATE (
  SBI_EMPLOYEE_ID NUMBER,
  CURRENT_FLAG    CHAR(1),
  EFFECTIVE_START DATE,
  EFFECTIVE_END   DATE
) ON COMMIT DELETE ROWS;

INSERT INTO AGENT_SILO.AS_TEMP_VALIDATE (SBI_EMPLOYEE_ID, CURRENT_FLAG, EFFECTIVE_START, EFFECTIVE_END)
VALUES (0, '', SYSDATE, SYSDATE);

commit;  -- ON COMMIT DELETE ROWS purges the table here, so the SELECT below finds nothing

SELECT * FROM AGENT_SILO.AS_TEMP_VALIDATE;

So I wonder what else I'm doing wrong that's really obvious. Here's what I'm trying to accomplish, and maybe there's a better way of going about it.
I have a trigger that is supposed to do some validation before the insert is allowed to go through. So here's my approach: a trigger fires when there's an insert into the AS_EMPLOYEE_HISTORY table, and it passes some of the fields from the insert into a proc (the id, a flag, and a couple of dates). Within the proc I create a global temp table and insert the passed values into it. Then I use a cursor to copy the rows from AS_EMPLOYEE_HISTORY that have the same id, so I can run selects against the temp table to see if the data passes validation.
I have outputs throughout for debugging, and it gets to just after the inserts into the temp table; the rest of the code doesn't appear to be executed. So it looks like it's failing at the execution of the select statements on the temp table. Anything else obvious that I'm missing here?
    Here's my proc.
    PROCEDURE "PAS_VALIDATE" (STATUS OUT VARCHAR2, v_status OUT BOOLEAN, NEW_SBI_EMPLOYEE_ID IN NUMBER,
    NEW_CURRENT_FLAG IN CHAR, NEW_EFFECTIVE_START IN DATE,
    NEW_EFFECTIVE_END IN DATE)
    IS
    v_prev_effective_end date;
    v_flag_count number;
    v_flag_count_date number;
    --variables to store dynamic sql returns
    v_sql_flag_count_date varchar2(255);
    v_sql_flag_count varchar2(255);
    v_sql_prev_eff_end varchar2(255);
    cursor c_row is
    select * from AGENT_SILO.AS_EMPLOYEE_HISTORY EMP
    where (EMP.SBI_EMPLOYEE_ID = NEW_SBI_EMPLOYEE_ID);
    r_row c_row%ROWTYPE;
    BEGIN
    Status := 'Started';
    v_status := true;
    DBMS_OUTPUT.PUT_LINE('Creating temporary table...');
    execute immediate 'CREATE GLOBAL TEMPORARY TABLE AGENT_SILO.AS_TEMP_VALIDATE (
    SBI_EMPLOYEE_ID NUMBER,
    CURRENT_FLAG char(1),
    EFFECTIVE_START date,
    EFFECTIVE_END date
    ) ON COMMIT PRESERVE ROWS';
         DBMS_OUTPUT.PUT_LINE('Validating the data...');
         --DBMS_OUTPUT.PUT_LINE('Inserting submitted row into temp table');
    --Insert the new row being submitted from user into the temp table
    execute immediate 'INSERT INTO AGENT_SILO.AS_TEMP_VALIDATE(SBI_EMPLOYEE_ID, CURRENT_FLAG, EFFECTIVE_START, EFFECTIVE_END)
    VALUES(' || NEW_SBI_EMPLOYEE_ID || ',
    ''' || NEW_CURRENT_FLAG || ''',
    to_date(''' || to_char(NEW_EFFECTIVE_START, 'mm/dd/yyyy hh:mi:ss') || ''', ''mm/dd/yyyy hh:mi:ss''),
    to_date(''' || to_char(NEW_EFFECTIVE_END, 'mm/dd/yyyy hh:mi:ss') || ''', ''mm/dd/yyyy hh:mi:ss''))';
    --Insert the other rows to we end up with a subset of the employee history table
    --with only rows that match the sbi_employee_id of the submitted row
         --DBMS_OUTPUT.PUT_LINE('Inserting into temp table...');
    open c_row;
    loop
    fetch c_row into r_row;
    exit when c_row%NOTFOUND;
    execute immediate 'INSERT INTO AGENT_SILO.AS_TEMP_VALIDATE(SBI_EMPLOYEE_ID, CURRENT_FLAG, EFFECTIVE_START, EFFECTIVE_END)
    VALUES(' || r_row.SBI_EMPLOYEE_ID || ',
    ''' || r_row.CURRENT_FLAG || ''',
    to_date(''' || to_char(r_row.EFFECTIVE_START, 'mm/dd/yyyy hh:mi:ss') || ''', ''mm/dd/yyyy hh:mi:ss''),
    to_date(''' || to_char(r_row.EFFECTIVE_END, 'mm/dd/yyyy hh:mi:ss') || ''', ''mm/dd/yyyy hh:mi:ss''))';
    end loop;
    close c_row;
    DBMS_OUTPUT.PUT_LINE('After inserts');
    -----Store queries to determine values for validation--------------------------
    v_sql_prev_eff_end := 'SELECT to_char(max(effective_end), ''dd-mon-yy'')
    FROM AGENT_SILO.AS_TEMP_VALIDATE
    where to_char(EFFECTIVE_END, ''dd-mon-yy'') != ''31-dec-99'' AND
    SBI_EMPLOYEE_ID = NEW_SBI_EMPLOYEE_ID';
    --Find the largest effective_end, besides the 9999 value
    execute immediate v_sql_prev_eff_end into v_prev_effective_end;
    DBMS_OUTPUT.PUT_LINE('The highest previous end date: ' || v_prev_effective_end);
    --...........Validation testing...........
    execute immediate 'DROP TABLE AGENT_SILO.AS_TEMP_VALIDATE'; --Drop temp table
    DBMS_OUTPUT.PUT_LINE('Validation Procedure Complete');
    COMMIT;
    status:='Success';
    EXCEPTION
    When Others Then
    ROLLBACK;
    Status := SQLERRM;
    END;
    Thanks a bunch for helping a noob out.
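
One point worth checking in the code above: inside the dynamic SELECT assigned to v_sql_prev_eff_end, NEW_SBI_EMPLOYEE_ID sits inside the string literal, where the dynamic SQL cannot see the PL/SQL parameter, so the EXECUTE IMMEDIATE should raise ORA-00904 (which the WHEN OTHERS handler then swallows). A sketch of the same query using a bind variable instead:

-- Sketch: bind the parameter rather than naming it inside the SQL string
v_sql_prev_eff_end := 'SELECT max(effective_end)
                         FROM AGENT_SILO.AS_TEMP_VALIDATE
                        WHERE to_char(EFFECTIVE_END, ''dd-mon-yy'') != ''31-dec-99''
                          AND SBI_EMPLOYEE_ID = :id';
execute immediate v_sql_prev_eff_end INTO v_prev_effective_end USING NEW_SBI_EMPLOYEE_ID;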

  • Create global temporary table in delete trigger

Hi to all, I am trying to create a global temporary table in a trigger so I can hold all the deleted rows and do some work after the statement that uses the table that fires the trigger.
In this way I am trying to avoid the mutating table error, but the following trigger gives an error.
create or replace
TRIGGER TD_EKSINAVLAR
FOR DELETE ON DERSSECIMI_EKSINAVLAR
COMPOUND TRIGGER

  BEFORE STATEMENT IS
  BEGIN
    CREATE GLOBAL TEMPORARY TABLE DELETED_ROWS
      AS ( SELECT * FROM DERSSECIMI_EKSINAVLAR WHERE 1 = 2 )
      ON COMMIT DELETE ROWS;
  END BEFORE STATEMENT;

  BEFORE EACH ROW IS
  BEGIN
    NULL;
  END BEFORE EACH ROW;

  AFTER EACH ROW IS
  BEGIN
    NULL;
  END AFTER EACH ROW;

  AFTER STATEMENT IS
  BEGIN
    NULL;
  END AFTER STATEMENT;

END TD_EKSINAVLAR;
    the error is
    Error(12,5): PLS-00103: Encountered the symbol "CREATE" when expecting one of the following: ( begin case declare exit for goto if loop mod null pragma raise return select update while with <an identifier> <a double-quoted delimited-identifier> <a bind variable> << continue close current delete fetch lock insert open rollback savepoint set sql execute commit forall merge pipe purge
Please help me with this situation.
    Thanks in advance.
    Gokhan

Karthick, you are absolutely right.
Our main task is to migrate a SQL Server 2000 database to Oracle 11g, and I am stuck on the triggers that reference the table that fires the trigger itself.
Can you help me with how to overcome mutating table errors using compound triggers? Especially for the situation where one statement updates or deletes multiple rows in a table.
You can understand my logic from the code above: I want to hold all the affected rows in a table, and in the after-statement body, using a cursor on that table, make the required changes to the table. How can I do that, or how should I do it? (See the sketch below.)
    regards.
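
A sketch of the usual compound-trigger pattern for this: buffer the deleted rows in a PL/SQL collection declared in the trigger (DDL such as CREATE TABLE cannot run inside a trigger body), then process them in the AFTER STATEMENT section. The column SOME_COL is a placeholder for whichever columns you actually need:

CREATE OR REPLACE TRIGGER td_eksinavlar
FOR DELETE ON derssecimi_eksinavlar
COMPOUND TRIGGER
  TYPE t_rows IS TABLE OF derssecimi_eksinavlar%ROWTYPE;
  g_deleted t_rows := t_rows();

  AFTER EACH ROW IS
  BEGIN
    g_deleted.EXTEND;
    g_deleted(g_deleted.COUNT).some_col := :OLD.some_col;  -- copy the columns you need
  END AFTER EACH ROW;

  AFTER STATEMENT IS
  BEGIN
    -- The table is no longer mutating here, so it can be read or updated safely
    FOR i IN 1 .. g_deleted.COUNT LOOP
      NULL;  -- apply the required changes per deleted row
    END LOOP;
    g_deleted.DELETE;  -- reset the buffer for the next triggering statement
  END AFTER STATEMENT;
END td_eksinavlar;
/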

  • Global temporary table screwing up execution plan

    Hello
    I have a process I'm trying to optimise. The first part of the process opens a cursor, bulk collects, and then inserts the data into a table using FORALL. The next part uses another FORALL statement to insert into another table, using 4 of those collections populated by the bulk collect to control a select statement i.e.
    FORALL i IN 1..Collection1.COUNT
        INSERT
        INTO
             table
        SELECT
             t1.col1,
             t2.col2,
             t3.col3,
             Collection4(i)
        FROM
           table1 t1,
           table2 t2,
           table3 t3
        WHERE
           t1.pk = Collection1(i)
        AND
           t2.fk = t1.pk
        AND
           t3.fk = t1.pk;This statement is executed 50k+ times and takes about an hour. The execution plan for the select is fine, the time is just down to the number of itterations. Anyway, to try and make this quicker, I created a global temporary table to store the contents of the collections of interest, and then in a new cursor using the above select (replacing the collections with the columns in the temporary table), I bulk collect that into new collections and do the inserts. The problem is, that the optimiser assumes that the temporary table has 2 million+ rows in it for some strange reason, and so this completely changes the executions plan to give a cost of 180k! I can't analyze a temporary table so does anyone have an idea as to what I can do to make this work and bring the cost down to something more reasonable? Maybe an alternative strategy?
    Cheers

    <<I can't analyze a temporary table>>
    What version of Oracle do you use?
    SQL> create global temporary table mytable(col1 number,
      2  col2 date) on commit preserve rows;
    Table created.
    SQL> begin
      2  for i in 1..1000 loop
      3  insert into mytable values(i,sysdate);
      4  end loop;
      5  end;
      6  /
    PL/SQL procedure successfully completed.
    SQL> select count(1) from mytable;
      COUNT(1)
          1000
    SQL> set autot traceonly
    SQL> select count(1) from mytable;
    Execution Plan
       0      SELECT STATEMENT Optimizer=RULE
       1    0   SORT (AGGREGATE)
       2    1     TABLE ACCESS (FULL) OF 'MYTABLE'
    Statistics
              0  recursive calls
              0  db block gets
              5  consistent gets
              0  physical reads
              0  redo size
            199  bytes sent via SQL*Net to client
            277  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> ALTER SESSION SET OPTIMIZER_GOAL = ALL_ROWS;
    Session altered.
    SQL>  select count(1) from mytable;
    Execution Plan
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=11 Card=1)
       1    0   SORT (AGGREGATE)
       2    1     TABLE ACCESS (FULL) OF 'MYTABLE' (Cost=11 Card=8168)
    Statistics
              5  recursive calls
              0  db block gets
             10  consistent gets
              0  physical reads
              0  redo size
            211  bytes sent via SQL*Net to client
            277  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              2  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL>
    SQL> ANALYZE TABLE MYTABLE COMPUTE STATISTICS;
    Table analyzed.
    SQL> select count(1) from mytable;
    Execution Plan
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2 Card=1)
       1    0   SORT (AGGREGATE)
       2    1     TABLE ACCESS (FULL) OF 'MYTABLE' (Cost=2 Card=1000)
    Statistics
              0  recursive calls
              0  db block gets
              5  consistent gets
              0  physical reads
              0  redo size
            210  bytes sent via SQL*Net to client
            277  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> disconnect
    Disconnected from Oracle9i Enterprise Edition Release 9.2.0.3.0 - 64bit Production
    With the Partitioning option
    JServer Release 9.2.0.3.0 - Production

  • Problem with GLOBAL TEMPORARY TABLE  into SP

I'm trying to create a temporary table inside a stored procedure but it shows me an error. This is the stored procedure:
CREATE OR REPLACE PROCEDURE sp_sample IS
BEGIN
  CREATE GLOBAL TEMPORARY TABLE tmp_datos_entrada
    ON COMMIT DELETE ROWS AS
    SELECT * FROM Customer WHERE NAME = 'LOPEZ';
END sp_sample;

    Hi,
    You can't execute DDL directly in PL/SQL, it's not supported.
    You can, however, use Native Dynamic SQL to do this in the following manner:
begin
   execute immediate 'create table t(x int)';
end;
/
You will need to use the autonomous_transaction pragma to maintain transactional integrity.
But you may want to think about WHY you are doing this in a stored procedure. You should probably just create the global temporary table once as a permanent database object; it is emptied on commit anyway -- I think you're missing the point slightly.
    cheers,
    Anthony
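
A sketch of the suggested approach: create the GTT once as a permanent schema object, then let the procedure only populate it (the column list is hypothetical; mirror your CUSTOMER table):

-- One-time DDL, run outside any procedure
CREATE GLOBAL TEMPORARY TABLE tmp_datos_entrada (
  name VARCHAR2(100)          -- placeholder column
) ON COMMIT DELETE ROWS;

CREATE OR REPLACE PROCEDURE sp_sample IS
BEGIN
  INSERT INTO tmp_datos_entrada
    SELECT name FROM Customer WHERE name = 'LOPEZ';
END sp_sample;
/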

  • Will Partitioning improve performance on Global Temporary Table

    Dear Guru,
In one complicated module I am using a Global Temporary Table (GTT) for intermediate processing, i.e. it stores the required data, but the row count reaches 1,000,000 to 2,000,000.
Can partitioning or indexing on a global temporary table improve the performance?
    Thanking in Advance
    Sanjeev

    Sounds like an odd use of a GTT to me, but I'm sure there are valid reasons...
    Presumably you are going to be processing all of these rows in some way? In which case I can't see how partitioning, even if it's possible (And I don't think it is) would help you.
    Indexes - sure, that might help, but again, if you are reading all/most of these rows anyway they might not help or even get used.
    Can you give a bit more detail about exactly what you are doing?
    Edited by: Carlovski on Nov 24, 2010 12:51 PM
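
For what it's worth, indexing a GTT needs no special syntax; the index is temporary too, and each session sees only its own entries (a sketch with illustrative names):

CREATE GLOBAL TEMPORARY TABLE stage_big (
  id  NUMBER,
  val VARCHAR2(30)
) ON COMMIT PRESERVE ROWS;

CREATE INDEX stage_big_ix ON stage_big (id);  -- index entries are session-private, like the rows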

  • Use of global temporary tables in Procedures

    Hi
I am using global temporary tables in my procedures, loading data into the same table through many procedures. I am fetching the data from the global temporary table in Pro*C with a cursor. Will this degrade performance?
    Please help me..
    Thanks in Advance...

Will this degrade performance?
That depends... in comparison to what?
Loading data into temporary tables will generally be more efficient than loading data into permanent tables, because Oracle needs to do less to protect this data since it is inherently transient. On the other hand, loading the data into a table in the first place tends to be more expensive than alternatives like a single SQL statement, a pipelined table function, or an in-memory collection (see the sketch below).
    Justin
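
A minimal sketch of one alternative mentioned above, a pipelined table function (the type and logic are illustrative):

CREATE OR REPLACE TYPE num_tab AS TABLE OF NUMBER;
/
CREATE OR REPLACE FUNCTION gen_rows (p_n IN NUMBER) RETURN num_tab PIPELINED IS
BEGIN
  FOR i IN 1 .. p_n LOOP
    PIPE ROW (i);     -- rows stream straight to the caller, no staging table
  END LOOP;
  RETURN;
END;
/
-- Consumed directly in SQL, with no temporary table in between:
SELECT COUNT(*) FROM TABLE(gen_rows(1000));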
