Using GTT

Hi,
In my procedure I am using normal tables with INSERT and DELETE DML operations. My code sample is as below:
create or replace procedure proc_name(parameter1 ... param6) as
  -- variable declarations
  ran_num number := random();  /* random number generator function */
  cursor c1 is select * from t1 where rep_id = ran_num;
  cursor c2 is select * from t2 where rep_id = ran_num;
  cursor c3 is select * from t3 where rep_id = ran_num;
begin
  insert into t1 values (x, y, z, ran_num);
  insert /*+ append parallel(t2,10) */ into t2
    (select /*+ parallel(t1,10) */ x, y, z, rep_id from t1 where rep_id = ran_num);
  insert /*+ append parallel(t3,10) */ into t3
    (select /*+ parallel(t2,10) */ x, y, z, rep_id from t2 where rep_id = ran_num);
  -- finally, using the cursors, I insert the data into the final temp table
  insert into t4 values (x, y, z, rep_id);
  -- output ref_cursor; /* which returns the output to the report */
end;
This takes 3 hours to run because t1, from which the data is inserted, holds a huge volume of data.
My senior suggested using GTTs (global temporary tables) instead of normal tables, since their data is automatically deleted at the end of the session and each session's data is independent of the others.
Is it worthwhile to use GTTs?
I am not familiar with the GTT concept, and after reading about it on Google I still have some doubts.
Can anyone clarify the points below, along with a recommendation on using GTTs?
1. Can I completely avoid the random number generation, which is only used to keep each session's rows independent, once I use a GTT?
2. Can I use them in place of t1 and t2 in SELECT queries simply, like select x, y, z from t1;?
3. Somewhere I read that GTTs cannot be used in SELECT statements that contain parallel hints. Is that true?

Hi,
Statistics calculated at the table level are associated with the table, even for a temporary table. For these (temporary) tables the statistics are shared between the different sessions, so a session may use inaccurate statistics.
If you work with temporary tables, you sometimes need to hint your DML to avoid potential issues (use /*+ cardinality(yourtemptable 10000) */, for example, to specify the cardinality of your table; there are plenty of other hints that can be used).
If you call DBMS_STATS on a temporary table from 2 different sessions, the last call will override the previous stats (unless they are locked).
I hope that one day Oracle will provide a means to maintain statistics on temporary tables at session level.
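To illustrate the hinting approach above - a minimal sketch only, where yourtemptable is the GTT and big_table is an invented permanent table:
-- Hypothetical example: tell the optimizer to assume ~10000 rows in the GTT,
-- since shared or missing stats may otherwise mislead it.
SELECT /*+ cardinality(t 10000) */ t.id, b.payload
FROM   yourtemptable t
JOIN   big_table b ON b.id = t.id;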
Note about parallelization:
It's possible to use parallelization over temporary tables:
CREATE GLOBAL TEMPORARY TABLE TEMP_TEST
(
  COL1  NUMBER(8)
)
ON COMMIT PRESERVE ROWS
NOCACHE;

EXPLAIN PLAN FOR SELECT /*+ PARALLEL(R) */ * FROM TEMP_TEST R;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
PLAN_TABLE_OUTPUT
Plan hash value: 10615466

--------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name      | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |           |     1 |    13 |     2   (0)| 00:00:01 |        |      |            |
|   1 |  PX COORDINATOR      |           |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)| :TQ10000  |     1 |    13 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
|   3 |    PX BLOCK ITERATOR |           |     1 |    13 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
|   4 |     TABLE ACCESS FULL| TEMP_TEST |     1 |    13 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
--------------------------------------------------------------------------------------------------------------
Note
   - dynamic sampling used for this statement

Similar Messages

  • GTT helps performance - from one day down to several minutes

    I have a case like this:
    for rec in (select *
                from gl.interface
                where accounting_date between ...
                order by c1, c2)
    loop
       -- concatenate some columns of rec
       -- do update gl_interface
    end loop;
    It takes about a day to generate a notepad report from this query (about 6,000,000 records).
    And after a long day, I resolved it, and it takes only several minutes to generate the report.
    What I did was these steps:
    1. Create a global temporary table that deletes records after commit, and create some indexes, like rn, which is row_number() over (order by c1, c2).
    2. ITAS (insert-as-select) into the GTT with select *, row_number() over (order by c1, c2) from gl.interface where accounting_date between ...
    3. Change the cursor query above in my Oracle Form:
    for rec in (
       select *
       from gtt
       where accounting_date between ...
       order by rn)
    loop
       ...
    end loop;
    update gl_interface
    set ...
    where exists (
      select ..
      from gtt ...
      where col1 = ...
    );
    I still wonder how it works, and what are the drawbacks of using this GTT? tx.

    infotools wrote:
    No, I don't want to use bulk, as I don't want to create arrays. And for my case, I am satisfied with the results. But I still wonder how using the GTT decreased the time so much ... and that it can be ITAS ... and maybe the
    select a||b||c
    from ...
    instead of
    for ()
    loop
      s := cc.a || cc.b || cc.c ...;
    end loop;
    Is it because I put much of the PL/SQL work in my SELECT SQL?

    Likely because you have reduced context switching between the SQL and PL/SQL engines. By selecting rows in a loop and updating or inserting for each row selected, you are constantly switching back and forth between PL/SQL and SQL, and that is known to be poor for performance. Using a GTT will help improve things because you are reducing the amount of context switches, but if the data is already coming from database tables, you shouldn't need to use a GTT in the first place... just doing it directly from the database tables, using something like MERGE, will do it all in the SQL engine anyway.
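    To illustrate the MERGE suggestion - a minimal sketch only; interface_id and report_text are invented names, and the date range is a placeholder:
    -- Hypothetical sketch: push the per-row concatenation into one MERGE so the
    -- work stays in the SQL engine (no PL/SQL loop, no context switches).
    MERGE INTO gl_interface tgt
    USING (
            SELECT interface_id,
                   c1 || c2 || c3 AS concatenated
            FROM   gl_interface
            WHERE  accounting_date BETWEEN :start_date AND :end_date
          ) src
    ON   (tgt.interface_id = src.interface_id)
    WHEN MATCHED THEN
      UPDATE SET tgt.report_text = src.concatenated;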

  • GTT table getting Row Exclusive lock

    I have a procedure which loads a table which is used for reporting.
    Currently it is a single process which picks up most of the data by joining 2-3 big tables (around 70-80GB) and then loads the main table.
    The joins are on the PI, and partitions are also being used. Then a lot of updates are done on the main table to fill other columns, which are picked from different tables.
    As the processing happens on a huge amount of data (around 1M rows), it takes a lot of time: around 40-45 minutes.
    I am planning to use parallelism and run it in 75 sessions, so the initial big insert will be faster, and later on all the updates will be done on smaller chunks, which will benefit performance.
    I planned to use a GTT so that I don't have to create 75 temp tables, one for each session.
    But while using the GTT (ON COMMIT DELETE ROWS) in parallel sessions, I am seeing that the GTT gets a Row Exclusive lock and the time taken by the overall process increases a lot.
    So I tested with 75 temp tables, and there I saw that performance increased a lot, as assumed initially.
    Please let me know if there is any other way, or how to remove the locks on the GTT.

    First you should question why you think you need 75 GTTs.
    Also, your question contains no useful information (no four-digit version of Oracle, no OS, no number of processors), and this paragraph:
        Currently it is a single process which picks up most of the data after joining 2-3 big tables (around 70-80GB) and then loading the main table.
        The joins are on PI and also partitions are also being used. Then a lot of updates are done on the main table to update other columns which are being picked from different tables.
        As the processing is happening on huge amount of data (around 1M), so processing is taking a lot of time.
        This process is taking around 40-45 minutes.
    tells nothing, so basically your question boils down to:
    - Hey, I come from a sqlserver background (this is just an educated guess); how can I find a workaround to wreck Oracle to make it work like sqlserver?
    75 parallel sessions on, say, 4 to 8 processors is a very bad idea, as these sessions will simply be competing for resources.
    Also, a row exclusive lock is just that: a row-level lock. This isn't a problem, usually.
    Sybrand Bakker
    Senior Oracle DBA

  • Using Global Temporary Table in OBIEE 11

    Hi,
    The requirement is to run a database function before the report runs. This function accepts 5 parameters (presentation variables) and populates a GTT (Global Temporary Table).
    This GTT is configured in the Presentation Layer.
    Issue:
    1. Created a dummy report which returns only one row, and called the function in it using "Evaluate".
    2. Created a 2nd report using the GTT columns.
    Both reports are placed on a dashboard and the presentation variables are set in the prompt. The first one runs correctly and populates a "regular table" - that is, when I create a regular table instead of a GTT, I can see the correct record. But if I change it to a GTT, then the data does not get populated in the table.
    Any inputs?
    Regards

    Hi,
    If you are using a GTT, you need to run the function which populates the GTT before the OBI server executes the select query on that table.
    In your case, the OBI server is executing the select query on the table before the data is populated in it.
    In order to force the BI server to run the query after the GTT is populated, you need to specify the condition in the connection pool settings, under the Connection Scripts tab.
    In the "Execute before query" script, enter the physical query which runs the function to populate the GTT.
    Hope this helps!
    Thanks,
    Vineeth
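    For illustration, an "Execute before query" script of this kind might look like the following sketch; the schema, function name, and session variable names are all invented, not taken from this thread:
    -- Hypothetical connection-pool script: populate the GTT from the
    -- dashboard's session variables before each query runs.
    SELECT my_schema.populate_report_gtt(
             'VALUEOF(NQ_SESSION.P_REGION)',
             'VALUEOF(NQ_SESSION.P_PERIOD)')
    FROM dual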

  • When to use global temporary tables?

    What are the probable situations in which one might think of using global temporary tables?

    In my experience, GTTs are most often used by developers from other RDBMSs who have become used to needing 'intermediate tables' created on the fly to hold data while they 'loop and commit to free locks'.
    These developers may not have had enough exposure to Oracle's concurrency model and therefore use the 'other' concurrency model as the basis for an application port - usually followed up by a call to the forum for assistance in performance tuning. <g>
    That said, legitimate uses for GTTs include staging or scrubbing tables for data transport or preparation, e.g. ETL into a warehouse, when the intermediate data is
    a) reproducible (and therefore does not need to be logged); and
    b) the results of the intermediate operation, but not the source, are stored in permanent tables.
    Some developers also use GTTs to hold the equivalent of a 'temporary materialized view', specifically for complex reporting purposes.
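    A minimal sketch of such an ETL staging GTT (table and column names are invented for the example):
    -- Hypothetical staging GTT: rows are private to the session and the data
    -- is reproducible from the source, so no logging is needed.
    CREATE GLOBAL TEMPORARY TABLE stg_orders (
      order_id  NUMBER,
      cust_id   NUMBER,
      amount    NUMBER(12,2)
    ) ON COMMIT PRESERVE ROWS;

    -- Load and scrub, then publish only the results to a permanent table.
    INSERT INTO stg_orders
      SELECT order_id, cust_id, amount FROM ext_orders;
    INSERT INTO dw_orders
      SELECT order_id, cust_id, amount FROM stg_orders WHERE amount > 0;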

  • How to use other cols in GROUP BY which do not need grouping

    Hi,
    The saleshistory table has sales records for each employee, day-wise;
    that is, saleshistory has more than one record per empid.
    empcode is number(9,0), empname is varchar2(200), sales is number(10,5), and empid is number(10,0).
    select empid, sum(sales), max(empname), max(empcode) from saleshistory
    group by empid;
    or
    select empid, sum(sales), empname, empcode from saleshistory
    group by empid, empname, empcode;
    I want to find the total amount of sales done by each employee, with empid, empname, and empcode in the select list.
    1) Please tell me which method is good and which I should follow, or whether there is any other good way to get it.
    Yours sincerely

    Hi,
    This sounds like a job for the analytic SUM function.
    Since you didn't post CREATE TABLE and INSERT statements for your own table, I can't test with your table. I'll show how to do this using the scott.emp table instead. In scott.emp, there can be any number of rows with the same job, just as in your table there can be any number of rows for the same empid.
    SELECT    job
    ,         sal
    ,         SUM (sal) OVER (PARTITION BY job)  AS total_sal
    FROM      scott.emp
    ORDER BY  job
    ;
    Output:
    JOB              SAL  TOTAL_SAL
    ANALYST         3000       6000
    ANALYST         3000       6000
    CLERK           1300       4150
    CLERK            950       4150
    CLERK            800       4150
    CLERK           1100       4150
    MANAGER         2850       8275
    MANAGER         2975       8275
    MANAGER         2450       8275
    PRESIDENT       5000       5000
    SALESMAN        1500       5600
    SALESMAN        1250       5600
    SALESMAN        1250       5600
    SALESMAN        1600       5600
    Most aggregate functions (like SUM) have analytic counterparts which can get the same results without collapsing the result set down to one row per group. The PARTITION BY clause of analytic functions corresponds to the GROUP BY clause used with aggregate functions.
    944768 wrote:
    ... when we used a GTT in a stored procedure (SP), is it necessary to truncate it at the beginning of the SP? Can the truncate cause any harm, as the SP is used by many people at the same time?
    This seems to be a completely separate question. Most of your message involved grouping and the SUM function; it has nothing to do with Global Temporary Tables or stored procedures. You might have an application that uses a stored procedure and a global temporary table, and which also uses grouping and the SUM function, but that doesn't mean you have a problem that involves all of them. If the stored procedure problem has anything to do with the grouping problem, explain it. If it is a completely separate problem, then start a completely separate thread for it (and explain it).
    I hope this answers your question.
    If not, point out where the query above is producing the wrong results, and explain, using specific examples, how to get the correct results in those places.
    If you want to use your own table, post a little sample data (CREATE TABLE and INSERT statements, relevant columns only), and the results you want from that data.
    Always say what version of Oracle you're using (e.g. 11.2.0.2.0).
    See the forum FAQ {message:id=9360002}

  • General Reporting Issue

    Dear Experts,
    I need your advice on an issue regarding a reporting idea. I am working in a telecom company and, as you know, there are a lot of reports requested daily by end users. Some of these reports can be moved to Discoverer to let the users generate them from their side through Discoverer Viewer, but there are many reports still generated from my side, directly from the back end (database), because they are customized for specific data sent by the end users.
    For example, some users send me a list of customer numbers and they need some information for them, so I load the numbers into a temp table and join it with the query to get the needed data.
    I want to ask: is there a way to enable the end users to get this report from their side without coming back to me? That is, is there any tool that lets the users load some data into a temp table and generate the report for that data only?
    Please, I need ideas.

    Hi,
    Well, all of this is possible, since besides the database side (the function) all the other steps are configuration steps.
    As for the function, you can see the following as an example:
    create or replace function LOAD_DATA_XXXTABLE (P_DATE_V in date) RETURN VARCHAR2
    IS
      PRAGMA AUTONOMOUS_TRANSACTION;
      V_OUT    varchar2(50);
      p_date   date;
      v_sql    varchar2(30000);
      v_date   varchar2(50);
      v_period varchar2(50);
    BEGIN
      begin
        p_date   := p_date_v;
        v_date   := ''''||to_char(p_date_v,'DD-MON-YYYY')||'''';
        v_period := ''''||to_char(p_date_v,'MON-YY')||'''';
        DBMS_OUTPUT.PUT_LINE('*************** Process Start ***************');
        -- Truncate the table
        execute IMMEDIATE 'TRUNCATE TABLE XXX.XXXTABLE';
        -- Populate the table in the user partition
        v_sql := '
          insert into XXX.XXXTABLE
          select '||v_period||' period
          from dual';
        execute immediate v_sql;
        commit;
        DBMS_OUTPUT.PUT_LINE('End of first load: '||to_char(sysdate,'YYYY-Mon-DD HH:MI:SS'));
        DBMS_OUTPUT.PUT_LINE('Table XXXTABLE was populated on: '||to_char(sysdate,'YYYY-Mon-DD HH:MI:SS'));
        V_OUT := 'Finished OK';
        RETURN V_OUT;
      EXCEPTION
        WHEN OTHERS THEN
          dbms_output.put_line('Error: '||substr(SQLERRM,1,250));
          V_OUT := 'Finished ERROR';
          RETURN V_OUT;  -- a RAISE after this RETURN would never execute
      end;
    end LOAD_DATA_XXXTABLE;
    You have to build the table prior to running it!
    Insert your logic into the V_SQL variable.
    If you wish to use a GTT then you do not need the TRUNCATE statement.
    After compiling it, try to run it and see if you get data; run the following:
    declare
      v_out varchar2(50);
    begin
      -- Call the function
      v_out := load_data_xxxtable(p_date_v => '01-OCT-2011');
    end;
    Then select from the table to verify it is populated:
    select * from xxx.xxxtable;
    Now you should register it in Discoverer Administrator: after logging in, go to the toolbar "Tools -> Register PL/SQL Functions",
    enter the needed data (use upper case),
    and validate it.
    Then create a worksheet using Plus or Desktop; do not select any folder or any item.
    Create a new calculation, using the function you have just registered.
    Run the report and see the output; if you get OK then you can create a new worksheet that will select from your table.
    Of course, you need to add the table to the BA (Business Area) so that you will be able to use it for a report.
    Hope it helps,
    Tamir

  • Global Temp Table or Permanent Temp Tables

    I have been doing research for a few weeks, trying to confirm theories with bench tests concerning which is more performant: GTTs or permanent temp tables. I was curious as to what others felt on this topic.
    I used FOR loops to test the insert performance, and at times, with a high number of rows, the permanent temp table seemed to be much faster than the GTT - contrary to many white papers and case studies I have read claiming that GTTs are much faster.
    All I did was FOR loops which iterated INSERT/VALUES up to 10 million records. And for 10 million records, the permanent temp table was over 500k milliseconds faster...
    Does anyone have any useful tips or info that can help me determine which will be best in certain cases? The tables will be used as staging for ETL batch processing into a data warehouse. Rows within my fact and detail tables can reach into the millions before being moved to archives. Thanks so much in advance.
    -Tim

    > Do you have any specific experiences you would like to share?
    I use both - GTTs and plain normal tables. The problem dictates the tools. :-)
    I do have an exception, though, that does not use GTTs and still supports "restartability".
    I need to continuously roll up (aggregate) data. Raw data collected for an hour gets aggregated into an hourly partition. Hourly partitions get rolled up into a daily partition. Several billion rows are processed like this monthly.
    The eventual method I've implemented is a cross between materialised views and GTTs. Instead of dropping or truncating the source partition and running an insert to repopulate it with the latest aggregated data, I wrote an API that allows you to give it the name of the destination table, the name of the partition to "refresh", and a SQL statement (that does the aggregation - kind of like the SELECT part of an MV).
    It creates a brand new staging table using a CTAS, inspects the partitioned table, slaps the same indexes on the staging table, and then performs a partition exchange to replace the stale contents of the partition with that of the freshly built staging table.
    No expensive delete. No truncate that results in an empty and query-useless partition for several minutes while the data is refreshed.
    And any number of these partition refreshes can run in parallel.
    Why not use a GTT? Because they cannot be used in a partition exchange. And the cost of writing data into a GTT has to be weighed against the cost of using that data by writing it (or some of it) into permanent tables. Ideally one wants to plough through a data set once.
    Oracle has a fairly rich feature set - and these can be employed in all kinds of ways to get the job done.
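    A bare-bones sketch of that CTAS-plus-exchange pattern (all object and partition names are invented; the real API described above is more elaborate):
    -- 1. Hypothetical: build the replacement data set with a CTAS.
    CREATE TABLE sales_stage AS
      SELECT sale_hour, SUM(amount) AS total_amount
      FROM   raw_sales
      WHERE  sale_hour >= TIMESTAMP '2010-07-22 00:00:00'
      AND    sale_hour <  TIMESTAMP '2010-07-22 01:00:00'
      GROUP BY sale_hour;
    -- 2. Slap on the same indexes the partitioned table has (one shown).
    CREATE INDEX sales_stage_ix ON sales_stage (sale_hour);
    -- 3. Swap the staging table in for the stale partition.
    ALTER TABLE sales_hourly
      EXCHANGE PARTITION p_2010072200 WITH TABLE sales_stage
      INCLUDING INDEXES WITHOUT VALIDATION;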

  • How to create a dynamically sized array / collection

    Hi,
    Can someone point me to a tutorial on how to create a dynamically sized array? I have multiple cursors on different tables and want to loop through each cursor, get some values from each, and put them in an array. But I don't know how to create or initialize an array when I don't know the size. Is there any other way?
    Here is what I am doing: I have 6 cursors on different tables. I loop through each cursor and get some specific data that I need to gather in one place after looping through all the cursors; it then finally needs to be inserted into one table. But before the insert I need to validate each piece of data, so I want to have all the data in an array or some other temporary storage, from which it's easier to validate and insert, rather than doing it while looping through the cursors, as there may be duplicates which I am trying to remove.
    As this procedure will be called multiple times, I wanted to save the cursor data in a temporary array before inserting into the final table. I am looking for some faster and more efficient way.
    Any help is appreciated.
    Thanks

    guest0012 wrote:
        All 6 cursors are independent, with no relation, i.e. they cannot be joined into one SQL statement, as there is no relationship between the tables.
    If there is no relation, then what is your code doing combining unrelated rows into the same GTT/array?
        Now using a GTT, when I do an insert, how do I make sure the same data does not already exist? i.e. the GTT will only have one column.
    Then create a unique index or primary key on the GTT and use that to enforce uniqueness.
        So every time I iterate over a cursor I have to insert into the GTT, and then finally I have to iterate over the GTT again and do an insert into the final table, which may be a performance issue.
    Which is why using SQL will be faster and more scalable - and if PL/SQL code/logic can glue these "no relationship" tables together, why can that not be done in SQL?
        That's why I was wondering if I can use any kind of array or collection, as it will be a collection of numbers.
    And that will reside in expensive PGA memory. Which means a limit on the size of the collection/array you can safely create without impacting server performance, or even causing a server crash by exhausting all virtual memory and causing swap space to thrash.
        And finally I will just iterate over the array and use FORALL for the insert, but I don't know what the size of the array will be, as I only know the total number of records after looping through the cursors. So I am wondering how to do it with a collection/array: is there a way to initialize the array and keep populating it with data without defining the size beforehand?
    You cannot append the bulk collect of one cursor into the collection used for bulk collecting another cursor.
    Collections are automatically sized to the number of rows fetched. If you want to manually size a collection, the Extend() method needs to be used, where the method's argument specifies the number of cells/locations to add to the collection.
    From what you describe about the issue you have, collections are not the correct choice. If you are going to put the different tables' data into the same collection, then you can also combine those tables' data using a single SQL projection (via a UNION, for example).
    And doing the data crunching in SQL is always superior in scalability and performance to doing it in PL/SQL.
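    As a sketch of that single-SQL-projection idea (table and column names invented), the six cursor loops could collapse into one set-based insert:
    -- Hypothetical: combine the sources in one projection, de-duplicate,
    -- and insert - no arrays, no GTT, no row-by-row loop.
    INSERT INTO final_table (num_value)
    SELECT DISTINCT num_value
    FROM (
      SELECT num_col AS num_value FROM table1
      UNION ALL
      SELECT num_col FROM table2
      UNION ALL
      SELECT num_col FROM table3
      -- ... and the remaining sources
    );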

  • Creating a table/view or temporary table from within a stored procedure

    Hi Gurus,
    Can someone tell me if it is possible to create a table (or view) from within a stored procedure?
    PROBLEM:
    In fact, I need to create a report at the back end (without using Oracle Developer Forms or Reports). This report requires creating several tables to hold temporary report data. If I create a SQL*Plus script for this, it works fine, because it can run DDL and other SQL or PL/SQL statements sequentially. But this SQL*Plus script cannot be called from the application, so the application needs a stored procedure to do this task, which the application then calls. But within a stored procedure I am unable to create a table (or run any DDL statement). Can somebody help me with this?
    Thanks in advance.

    Denis,
        The problem with Nicholas' suggestion is related to the fact that now you have two components (a table and a stored procedure)
    I don't see any problem in having "two components" here. After all, what about all the other tables? This is only one more, and I don't understand why you would want to manage fewer objects when that implies more code, more maintenance, and more difficulty debugging. Needless to say about performance...
    Nicolas.

    The same reasons apply as if you were forced to declare all PL/SQL variables publicly (outside the stored procedure) rather than privately (inside the stored procedure). Naming conflicts, for one: if the name that you want to use for the GTT already exists, you need to find a new name. With SQL Server-style local/private declarations, you wouldn't have that problem.
    I can see how performance would be the same or better using GTTs. If the number of records involved is low, the difference is likely negligible.
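    For reference, the usual Oracle pattern is to create the GTT once, at install time, and have the procedure run only DML against it - a minimal sketch with invented names:
    -- Hypothetical: one-time DDL, run at install time, not inside the procedure.
    CREATE GLOBAL TEMPORARY TABLE report_scratch (
      line_no  NUMBER,
      line_txt VARCHAR2(400)
    ) ON COMMIT DELETE ROWS;

    CREATE OR REPLACE PROCEDURE build_report IS
    BEGIN
      -- Only DML here; each session sees its own private rows.
      INSERT INTO report_scratch (line_no, line_txt)
        SELECT ROWNUM, some_col FROM some_source_table;
      -- ... read report_scratch to produce the report ...
    END build_report;
    /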

  • List item problem in oracle form

    I have a list item that is populated from a table; e.g. the list item contains Issued, Received, Forward to Finance, Cancelled.
    If I insert 'Issued' into the table, that value should still be shown in the list item but be disabled (not selectable), meaning that this value has already been inserted into the table for that record.
    After that, when 'Received' is inserted for that particular record, both 'Issued' and 'Received' should be disabled in the list item...
    I want a dynamic list item which shows all values, but once a value has been inserted into the table it is disabled in the list item and cannot be selected by the user for that particular record...

    Hi,
    We can't disable an LOV field, but we can eliminate such values from the LOV by using a GTT:
    Re: How to update the LOV without commit

  • Table based on multiple views

    Hello all,
    Is it possible to populate a table with data from multiple view objects that are linked with a view link?
    The reason I want to do this is that if you update the reference ID in the master, the detail automatically follows.
    Now, I know I can create a view object and then include the detail info using expert mode, but this way the detail info doesn't change if I change the reference ID.
    To conclude, a brief explanation of what I'm trying to achieve:
    I have a table with addresses, with a reference ID to the ZIP table.
    Because the user doesn't know all the IDs by heart, I show the name of the city, and the user can change it by clicking on an LOV button. This all works fine, BUT when the city is changed the old one stays visible to the user, although committing changes the city to the new one.
    Regards,
    Johan

    Keeping in mind what Dave just said, you would be better off creating your GTT once and then reusing it over and over again. Using GTTs is advantageous in that their contents are visible only at the session level, and depending on how the GTT was created it will either automatically clear its rows on commit (the default, if not otherwise specified) or keep them until the end of your session regardless of commits.
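    The two creation options look like this (illustrative names only):
    -- Rows vanish at COMMIT (the default behaviour):
    CREATE GLOBAL TEMPORARY TABLE gtt_per_txn (id NUMBER)
      ON COMMIT DELETE ROWS;
    -- Rows survive commits and vanish when the session ends:
    CREATE GLOBAL TEMPORARY TABLE gtt_per_session (id NUMBER)
      ON COMMIT PRESERVE ROWS;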

  • FI-SL data package too large on delta

    Hi Guys,
    I'm loading data from the FI-SL totals extractor (3FI_SL_xx_TT).
    No problem with the delta init, but when we try to load a regular delta there is a problem with the size of the packages; the package sizes are totally irregular: 70,158; 52,398; 299,784; 299,982; 57,243; ... So I'm getting a TSV_TNEW_PAGE_ALLOC_FAILED error on transfer from PSA to the InfoCube.
    Unfortunately we have to load a heavy bunch of data in the delta because they want this data monthly (a business issue), so I'm not able to load it in small requests every day.
    To make things worse, I have the start routine to carry forward balances (sapnote_0000577644 - DataSource 0EC_PCA_3 and FI-SL line item DataSources).
    It's not taking the package size of the InfoPackage into consideration (1000 KB / 2 processes / 10 pkgs per IDoc Info).
    Config:
    - BI 7.0 SP 23/AIX/Oracle
    - ECC: AIX/Oracle PI_BASIS 2008_1_700 000 / SAP_ABA 700 015 / SAP_BASIS 700 015 / SAP_APPL 600 013
    I checked SAP note sapnote_0000917737 - "Reducing the request size for totals record DataSources" - but it does not apply to us...
    Does someone have any idea how to fix it?
    Thanks in advance,
    Alex

    Chubbyd4d wrote:
        Hi masters,
        I have a package of 11,000 lines of code with around 70 procedures and functions in my DB.
        I used arrays to handle the global data, and almost every procedure and function in the package uses the arrays.
        The package was creating problems in debugging - it gave me the "Program too large" error - so the idea came up to split the package into smaller packages.
        However, the problems that I would like to discuss are:
        what is the advantage of splitting the package into a few smaller packages, and what is the impact on memory?
        If I chunk the package into a few packages, will it be easier to use a GTT instead of an array for the global data? What is the impact on performance, since a GTT uses I/O?
    One of our larger packages is over 20,000 lines and around 500 procedures on a 10.2 database.
    No problem or complaints about it being too large.
    As for splitting packages, that's entirely up to you. Packages are designed to put all related functionality in one place. If you feel a need to split it out into separate packages, then perhaps consider grouping together related functionality at a smaller granularity than you previously had.
    In relation to memory, smaller packages will take up less space obviously, but if you are going to be calling all the packages anyway then they'll all get loaded into memory and still take up about the same amount of memory (give or take a little).
    GTTs are generally a better idea than arrays if you are dealing with large amounts of data or you need to run queries against that data. If you need to effectively query the data, you'll probably end up with worse performance processing an array in a query style than the I/O overhead of a GTT. Of course, you have to look at the individual processes and weigh up the pros and cons of each.
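    For contrast, a tiny sketch of the two query styles (names invented): a GTT can be queried like any table, while a collection must go through a SQL type and a TABLE() cast:
    -- GTT approach: ordinary SQL against session-private rows.
    SELECT g.id, COUNT(*) FROM work_gtt g GROUP BY g.id;

    -- Collection approach: needs a SQL-level type, e.g.
    -- CREATE TYPE num_tab AS TABLE OF NUMBER;
    SELECT COLUMN_VALUE AS id, COUNT(*)
    FROM   TABLE(num_tab(1, 2, 2, 3))
    GROUP  BY COLUMN_VALUE;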

  • Problem with global temporary table in Oracle 10g

    Hi All,
    I face a peculiar problem in Oracle 10g with respect to a global temporary table.
    We have Oracle 10g in Production and 11g in UAT.
    Table:
    create global temporary table TT_TEMPGPSMANUAL
    (
      Col_1    VARCHAR2(50),
      Col_2    VARCHAR2(500),
      Col_3    VARCHAR2(50),
      Col_4    VARCHAR2(50),
      Col_5    VARCHAR2(15),
      Col_6    VARCHAR2(20),
      Col_7    VARCHAR2(250),
      Col_8    VARCHAR2(20),
      Col_9    VARCHAR2(15),
      Col_10   VARCHAR2(20),
      Flag     NUMBER,
      Col_11   INTEGER,
      Col_12   VARCHAR2(50)
    )
    on commit preserve rows;
    So this should preserve the rows inserted into this table until the session ends.
    There is a webpage in the front end which, in turn, opens another page (the session is carried through), and a few rows are inserted into this table from the webpage (through a function) on submit, after which the current page is closed.
    From the parent page, if I open the sub-page, the data inserted into the temporary table is held and displayed (another function fetches the values from the global temp table).
    The problem in Oracle 10g (Production) is that this does not happen reliably. When I close and open the sub-page, I do not get the stored data every time; i.e. if I close and open the page 10 times, at least 4 times the data is missing from the page (I get no values from the temp table), at random.
    But this does not happen in UAT (which has Oracle 11g installed), where I get the data in the webpage consistently. After passing UAT, when we rolled out to Prod, we hit this issue and we cannot work out the reason.
    It is very hard to debug the GTT dynamically in Prod, and it takes time to get Oracle 11g installed in Prod.
    Can anyone suggest anything?
    Regards
    Deep

    935195 wrote:
        Also, I am opening the sub-page from the parent page (through a hyperlink). In this case, would the session change from parent to sub-page? (I don't know exactly, and have the impression that, as the second page is a child, it would use the same session.)
    I'm not sure what "sub-page" or "parent page" means to you. If you're just linking from one page to another, "parent" and "child" don't really make sense, since page A links to page B and B links to A quite frequently.
    Assuming that you have to log in to access the site, it is likely that the two pages share the same middle tier application session. It is unlikely that the middle tier would hold the database session from the first request open waiting to see if the user eventually requested the second page. It is theoretically possible that you could code your middle tier this way but it is extremely unlikely that you would want to do so for a variety of reasons. So, when you say "would [the] session ... be changed", it is likely that the application session would be the same for both calls but that the database session would be different.
    Justin

  • Need help in querying a table in a loop

    I have the following scenario and need some help:
    I have Table A, from which I need to select NAME in a loop, and query Table B with each name.
    Table B can give me more than one record per call, and then I have to select the best record and update Table A with the Table B values (best row).
    I have created a global temp table, insert the Table B results into it, and then loop through the temp table to decide which record is best. Then I update Table A.
    This solution is working but is very slow. I have created indexes on all the tables but it is still very slow. Is there any solution to make it fast?
    Thanks

    skas wrote:
        I have the following scenario and need some help:
        I have Table A, from which I need to select NAME in a loop, and query Table B with each name.
        Table B can give me more than one record per call, and then I have to select the best record and update Table A with the Table B values (best row).
        I have created a global temp table, insert the Table B results into it, and then loop through the temp table to decide which record is best. Then I update Table A.
        This solution is working but is very slow. I have created indexes on all the tables but it is still very slow. Is there any solution to make it fast?
        Thanks
    Don't use PL/SQL loops, don't use GTTs; do it in plain SQL. If you post the table descriptions and some sample data (preferably in the form of CREATE TABLE and INSERT statements) and explain how you determine "the best record" in Table B, I'm sure that someone here can help you with the requirements.
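    A rough sketch of that plain-SQL shape, assuming invented column names and that "best" means the highest score per name:
    -- Hypothetical single-statement version: rank Table B's rows per name,
    -- keep the best one, and update Table A from it - no loop, no GTT.
    MERGE INTO table_a a
    USING (
            SELECT name, col1, col2
            FROM  (SELECT b.name, b.col1, b.col2,
                          ROW_NUMBER() OVER (PARTITION BY b.name
                                             ORDER BY b.score DESC) AS rn
                   FROM   table_b b)
            WHERE  rn = 1
          ) best
    ON   (a.name = best.name)
    WHEN MATCHED THEN
      UPDATE SET a.col1 = best.col1,
                 a.col2 = best.col2;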
    John
