Temp tables and transaction log

Hi All,
I am on SQL 2000.
When I am inserting (or updating or deleting) data to/from temp tables (i.e. # tables), are those DML operations written to the transaction log?
The process is: we have a huge input dataset to process, so we insert subsets of the input data into a temp table, treat that as our input set, and do the processing in parts. Can I avoid transaction log generation for these intermediate steps?
Soon we will be moving to 2008 R2. Are there any features in 2008 R2 that can help me avoid this logging?
Thanks in advance

Every DML operation is logged in the log file. For # tables the logging happens in tempdb, which always runs in the SIMPLE recovery model, so its log space is reused after each checkpoint rather than accumulating. Is it possible to insert the data in small chunks?
http://www.dfarber.com/computer-consulting-blog/2011/1/14/processing-hundreds-of-millions-records-got-much-easier.aspx
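To illustrate the chunking idea, here is a minimal T-SQL sketch (dbo.SourceBig, KeyCol and the batch size are hypothetical; adjust to your schema). Because # tables live in tempdb, their log records are cleared at each checkpoint instead of piling up in your user database's log:

CREATE TABLE #work (KeyCol INT PRIMARY KEY, Payload VARCHAR(100));

DECLARE @lo INT, @hi INT;
SELECT @lo = 1, @hi = 100000;

WHILE @lo <= 1000000               -- hypothetical key range of the input set
BEGIN
    INSERT INTO #work (KeyCol, Payload)
    SELECT KeyCol, Payload
    FROM   dbo.SourceBig           -- hypothetical source table
    WHERE  KeyCol BETWEEN @lo AND @hi;

    -- ... process this subset here ...

    TRUNCATE TABLE #work;          -- cheap to log; frees the chunk for reuse
    SELECT @lo = @lo + 100000, @hi = @hi + 100000;
END;

DROP TABLE #work;

On 2008 R2 you can additionally get minimally logged INSERT ... SELECT into a heap with the TABLOCK hint, which greatly reduces (but does not eliminate) the log volume.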
Best Regards,
Uri Dimant, SQL Server MVP
http://sqlblog.com/blogs/uri_dimant/
MS SQL optimization: MS SQL Development and Optimization
MS SQL Blog: Large scale of database and data cleansing
Remote DBA Services: Improves MS SQL Database Performance

Similar Messages

  • Difference between temp table and table variable, and which one is better performance-wise?

    Hello,
    Could anyone explain the difference between a temp table (#, ##) and a table variable (DECLARE @V TABLE (EMP_ID INT))?
    Which one is recommended for better performance?
    Also, is it possible to create clustered and nonclustered indexes on a table variable?
    In my case, 1-2 days of transactional data is more than 3-4 million rows. I tried both a # table and a table variable and found the table variable faster.
    Does a table variable use memory or disk space?
    Thanks Shiven:) If Answer is Helpful, Please Vote

    Check the following link for the differences between temp tables and table variables: http://sqlwithmanoj.com/2010/05/15/temporary-tables-vs-table-variables/
    Temp tables and table variables both use memory and tempdb in a similar manner; check this blog post: http://sqlwithmanoj.com/2010/07/20/table-variables-are-not-stored-in-memory-but-in-tempdb/
    Performance-wise, if you are dealing with millions of records then a temp table is ideal, as you can create explicit indexes on top of it. But if there are fewer records, table variables are well suited.
    On table variables explicit indexes are not allowed; if you define a PK column, a clustered index will be created automatically.
    But it also depends on the specific scenario you are dealing with. Can you share it? As a quick sketch of the syntax difference, see below.
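    A minimal sketch (column name borrowed from the question above):

    -- Table variable: CREATE INDEX is not allowed; a PRIMARY KEY or UNIQUE
    -- constraint is the only way to get an index (pre-SQL 2014 behaviour).
    DECLARE @V TABLE (EMP_ID INT PRIMARY KEY);

    -- Temp table: explicit indexes and column statistics are available.
    CREATE TABLE #EMP (EMP_ID INT);
    CREATE NONCLUSTERED INDEX IX_EMP_ID ON #EMP (EMP_ID);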
    ~manoj | email: http://scr.im/m22g
    http://sqlwithmanoj.wordpress.com
    MCCA 2011 | My FB Page

  • PM Tables and Transactions

    Hi...
    Please provide the Plant Maintenance (PM) tables and transactions.
    And details about PMIS? What is it used for?
    Thanks..

    Hi,
    PM T-codes and PM tables:
    Equipment -- main table EQUI
    Functional location -- IFLOT
    Serial number -- SERI
    Material master -- MATNR
    IE01  Create Equipment
    IL01  Create Functional Location
    IB01  Create Equipment BOM
    IQ01  Create Material Serial Number
    MM01  Material Master
    IB01, IB11  Bills of Materials
    Notifications: IW21/IW24/IW25/IW26 Create Notification; IW22 Change Notification; IW23 Display Notification; IW28/IW29/IW30 Notification List Editing - Change/Display/Display Multi Level; IW64/IW65 Change/Display Notification List of Activities; IW66/IW67 Change/Display Notification List of Tasks; IW68/IW69 Change/Display Notification List of Items
    Work orders: IW31 Create Work Order (PM order); IW32 Change Work Order; IW33 Display Work Order; IW38/IW39/IW40 Work Order List Editing - Change/Display/Display Multi Level; IW42 Overall Completion Confirmation; KO88 Settle Order (Single)
    Equipment: IE01 Create Equipment; IE02 Change Equipment; IE03 Display Equipment; IE05/IE08 Equipment List Editing - Change/Display
    Functional locations: IL01 Create Functional Location; IL02 Change Functional Location; IL03 Display Functional Location; IL05/IL06 Functional Location List Editing - Change/Display; IH01 Display Functional Location Structure
    Task lists: IA01/IA02/IA03 Create/Change/Display Equipment Task List; IA05/IA06/IA07 Create/Change/Display General Task List; IA11/IA12/IA13 Create/Change/Display Functional Location Task List
    Work centres: IR01 Create Work Centre; IR02 Change Work Centre; IR03 Display Work Centre; CA85 Replace Work Centre
    Maintenance plans and items: IP01 Create Maintenance Plan; IP02 Change Maintenance Plan; IP03 Display Maintenance Plan; IP10 Schedule Maintenance Plan; IP30 Deadline Monitoring; IP41 Create Single Cycle Plan (R4 onwards); IP42 Create Strategy Maintenance Plan (R4 onwards); IP15/IP16 Maintenance Plan List Editing - Change/Display; IP04/IP05/IP06 Create/Change/Display Maintenance Item; IP17/IP18 Maintenance Item List Editing - Change/Display
    Maintenance strategies and scheduling: IP11 Change Maintenance Strategy; IP12 Display Maintenance Strategy; IP13/IP14 Strategy Package Sequence; IP19 Maintenance Scheduling Overview Graphic; IP24 Maintenance Scheduling Overview List
    Measurement points and documents: IK01/IK02/IK03 Create/Change/Display Measurement Point; IK07/IK08 Measurement Point List Editing - Display/Change; IK11/IK12/IK13 Create/Change/Display Measurement Documents; IK21/IK22 Measurement Documents List Editing - Create; IK17/IK18 Measurement Documents List Editing - Display/Change; IK41 Measurement Documents List Editing - Display Archive
    Serial numbers: IQ01 Create Serial Numbers; IQ02 Change Serial Numbers; IQ03 Display Serial Numbers; IQ04/IQ08/IQ09 Serial Numbers List Editing - Create/Change/Display
    Materials and BOMs: MM03 Display Material; CS03 Display Material BOM; IW13 Material Where-Used List
    Purchasing and goods movements: ME21 Create Purchase Order (pre R4.6); ME21N Create Purchase Order (R4.6 onwards); ML81 Create Service Entry Sheet; MB11 Goods Movement; MB31 Goods Receipt; IW8W Goods Receipt for Refurbishment (R4 onwards)
    Time confirmation: IW41 Time Confirmation - Individual Entry; IW44 Time Confirmation - Collective Entry no Selection; IW48 Time Confirmation - Collective Entry with Selection
    Other: QS42 Display Catalog
    And a few PM-related tables: TPST Functional Location - BOM Link; STAS BOMs - Item Selection; STKO BOM Header; STPO BOM Item; STPU BOM Subitem; STZU Permanent BOM Data; RESB Reservations (material number, requirement date, required quantity, quantity withdrawn, work order number); RSADD Reservation creation date and creating user; MAKT Material Description; AUFK Work Order Description; AFIH Revision (as a selection field); MBEW Total Valuated Stock (SOH); USER_ADDR User ID first and last name; RKPF Reservation Header information.
    Plant maintenance (PM): IHPA Plant Maintenance: Partners; OBJK Plant Maintenance Object List; ILOA PM Object Location and Account Assignment; AFIH Maintenance Order Header; AFKO Order Header Data (PP orders); AFPO Order Item (not used much); AFRU Order Completion Confirmations; AFVC Operation within an Order; AFVV Order Position Data; CRCO Assignment of Work Center to Cost Center; CRHD Work Center Header; PLAF Planned Order; HIKO Order Master Data History; HIVG PM Order History: Operations (logical database: CNJ); AUFK Order Master Data; JSTO Status Object Information; JEST Object Status (logical database: ODK)
    12.1 Routings/operations: MAPL Allocation of Task Lists to Materials; PLAS Task List - Selection of Operations/Activities; PLFH Task List - Production Resources/Tools; PLFL Task List - Sequences; PLKO Task List - Header; PLKZ Task List: Main Header; PLPH Phases/Suboperations; PLPO Task List Operation/Activity; PLPR Log Collector for Task Lists; PLMZ Allocation of BOM Items to Operations
    12.2 Bill of material: STKO BOM Header; STPO BOM Item; STAS BOMs - Item Selection; STPN BOMs - Follow-Up Control; STPU BOM Sub-Item; STZU Permanent BOM Data; PLMZ Allocation of BOM Items to Operations; MAST Material-to-BOM Link; KDST Sales Order-to-BOM Link
    12.3 PRTs for production orders: AFFH PRT Assignment Data for the Work Order; CRVD_A Link of PRT to Document; DRAW Document Info Record; TDWA Document Types; TDWD Data Carrier/Network Nodes; TDWE Data Carrier Type
    12.4 Work center: CRHH Work Center Hierarchy; CRHS Hierarchy Structure; CRHD Work Center Header; CRTX Text for the Work Center or Production Resource/Tool; CRCO Assignment of Work Center to Cost Center; KAKO Capacity Header Segment; CRCA Work Center Capacity Allocation; TC24 Person Responsible for the Work Center
    12.5 Classification: CABS Result of the Statistical Analysis of Table AUSP; CUFM Customizing Class/Config: Screendesigner Form; TCME Validity for Global Characteristics; KLAH Class Header Data; KLAT Classes: Long Texts; KSML Characteristics of a Class; AUSP Characteristic Values; SWOR Classification System: Catchwords; KSSK Allocation Table: Object to Class; TCLG Class Groups; TCLO Key Fields of Objects; TCLS Classes: Organizational Areas; TCLST Classes: Org. Areas (Texts); TCLU Class Status; COCC PP-PI Attributes for Characteristics; COFV Process Management - Process Instruction Characteristics in Control Recipe; COME Process Management - Message Characteristics; CORE Process Management - Display Characteristics of the Evaluation Version; PLFV PI Characteristics/Sub-Operation Parameter Values; TCLA Class Types; TCLAT Class Type Texts; TCLT Classifiable Objects; TCLTT Classifiable Objects: Texts; CABN Characteristic; CAWN Characteristic Values (matchcodes: CLAS, MERK and KLSW)
    12.6 Equipment: EQUI Equipment Master Data; EQKT Equipment Short Texts; EQUZ Equipment Time Segment; EAPL Allocation of Task Lists to Pieces of Equipment; JSTO Status Object Information; JEST Object Status; TJ30 User Status; TJ30T Texts for User Status; TJ02 System Status; TJ02T Texts for System Status
    12.7 Functional location: IFLOT Functional Location (Table); IFLOTX Functional Location: Short Texts; IRLOTX Reference Functional Location: Short Texts; TAPL Allocation of task lists to functional

  • Query is taking too much time for inserting into a temp table and for spooling

    Hi,
    I am working on a query optimization project where I have found a query that takes a very long time to execute.
    The temp table is defined as follows:
    DECLARE @CastSummary TABLE (
        CastID INT, SalesOrderID INT, ProductionOrderID INT, Actual FLOAT,
        ProductionOrderNo NVARCHAR(50), SalesOrderNo NVARCHAR(50),
        Customer NVARCHAR(MAX), Targets FLOAT)

    INSERT INTO @CastSummary
    SELECT C.CastID,
           SO.SalesOrderID,
           PO.ProductionOrderID,
           F.CalculatedWeight,
           PO.ProductionOrderNo,
           SO.SalesOrderNo,
           SC.Name,
           SO.OrderQty
    FROM CastCast C
         JOIN Sales.Production PO ON PO.ProductionOrderID = C.ProductionOrderID
         JOIN Sales.ProductionDetail d ON d.ProductionOrderID = PO.ProductionOrderID
         LEFT JOIN Sales.SalesOrder SO ON d.SalesOrderID = SO.SalesOrderID
         LEFT JOIN FinishedGoods.Equipment F ON F.CastID = C.CastID
         JOIN Sales.Customer SC ON SC.CustomerID = SO.CustomerID
    WHERE C.CreatedDate >= @StartDate AND C.CreatedDate < @EndDate
    The plan shows almost 33% for the Table Insert when I insert the data into the temp table, and then 67% for spooling. I changed 2 LEFT JOINs from the above query into inner JOINs and tried again; execution became a bit faster, but it still needs improvement.
    How can I improve it further? Would it be good enough to create indexes on the temp table columns, or what if I use derived tables? Please suggest.
    -Pep

    How can I improve it further? Would it be good enough to create indexes on the temp table columns, or what if I use derived tables?
    I suggest you start with index tuning. Specifically, make sure the columns specified in the WHERE and JOIN clauses are properly indexed (ideally clustered or covering, and unique when possible). Changing outer joins to inner joins is appropriate only if you don't need the outer joins in the first place.
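    As a sketch of that suggestion (using the column list from your declaration; test against your data before committing to it), you could switch the table variable to a temp table so it can be indexed and gets statistics:

    CREATE TABLE #CastSummary (
        CastID INT, SalesOrderID INT, ProductionOrderID INT, Actual FLOAT,
        ProductionOrderNo NVARCHAR(50), SalesOrderNo NVARCHAR(50),
        Customer NVARCHAR(MAX), Targets FLOAT);

    CREATE CLUSTERED INDEX IX_CastSummary ON #CastSummary (CastID);

    INSERT INTO #CastSummary (CastID, SalesOrderID, ProductionOrderID, Actual,
                              ProductionOrderNo, SalesOrderNo, Customer, Targets)
    SELECT ...   -- the SELECT from the question goes here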
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

  • Field sales related tables and Transaction code

    Dear SAPGurus,
    I have been working with field sales activities in mySAP CRM.
    Could I have a list of the tables and transaction codes related to field sales activities?
    I am grateful for your help.
    I would like to show my appreciation by rewarding points.
    Thanks a lot

    Hi Pratik Patel,
    Thank you very much for sending the table names. They are all very useful to me.
    Could I have more tables in this area?
    If you have the CRM data model and its relationships, please share them with me.
    I will appreciate your help by rewarding points.
    Regards,
    CRM Consultant

  • Temp table and gather table stats

    One of my developers is generating a report from Oracle. He loads a subset of the data he needs into a temp table, then creates an index on the temp table, and then runs his report from the temp table (which is a lot smaller than the original table).
    My question is: Is it necessary to gather table statistics for the temp table, and the index on the temp table, before querying it ?

    It depends. Yesterday I had a very bad experience with stats: one of my tables had NUM_ROWS = 300 while COUNT(*) returned 7 million, on database version 9.2.0.6 (a bad era for optimizer bugs). Queries started breaking, with lots of buffer busy waits and latch free waits. It took a while to figure out, but I deleted the stats and everything came back under control. My point is that statistics can be good and bad: once you start collecting them, you should keep an eye on them.
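    If you do decide to gather stats after each load, a minimal sketch (the table name here is hypothetical) is:

    BEGIN
       DBMS_STATS.GATHER_TABLE_STATS(
          ownname => USER,
          tabname => 'REPORT_TEMP',   -- hypothetical temp table name
          cascade => TRUE);           -- also gathers stats on its indexes
    END;
    /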
    Thanks.

  • Insert data from a tabular form into a temp table and fetch the columns

    Hi guys ,
    I am working in APEX 3.2. On one page I have data from various tables displayed in tabular form. I have to insert the tabular form data into a temp table, then fetch the data from the temp table and insert it into my main table. I think I have to use a cursor to fetch the data from the temp table and insert it into the main table, but I haven't found a good example of doing this. Can anyone help me sort it out?
    Thanks With regards
    Balaji

    Hi,
    Follow this scenario.
    Your query:
    SELECT t1.col1, t1.col2, t2.col1, t2.col2, t3.col1
    FROM table1 t1, table2 t2, table3 t3
    (where some join conditions);
    On the insert button click, call this process:
    DECLARE
    temp1 VARCHAR2(100);
    temp2 VARCHAR2(100);
    temp3 VARCHAR2(100);
    temp4 VARCHAR2(100);
    temp5 VARCHAR2(100);
    BEGIN
         FOR i IN 1..apex_application.g_f01.COUNT
         LOOP
              temp1    := apex_application.g_f01(i);
              temp2    := apex_application.g_f02(i);
              temp3    := apex_application.g_f03(i);
              temp4    := apex_application.g_f04(i);
              temp5    := apex_application.g_f05(i);
              INSERT INTO table1(col1, col2) VALUES(temp1, temp2);
              INSERT INTO table2(col1, col2) VALUES(temp3, temp4);
              INSERT INTO table3(col1) VALUES(temp5);
         END LOOP;
    END;
    You don't even need temp tables and a cursor to insert into different tables.
    Thanks,
    Ramesh P.
    *(If you know you got the correct answer or helpful answer, please mark as corresponding.)*

  • Let me know what are the new tables and transaction codes in ECC6?

    Hi
    As per the above title, I want to know the new tables and transaction codes introduced between R/3 4.6C and ECC6.
    Regards
    Sang lim.

    Hi Sang lim,
    Apart from the t-codes listed above, these transactions changed from Release 4.6C:
    Rel.  Old TCode  New TCode 
    46C  ME51  ME51N 
    46C  ME52  ME52N 
    46C  ME53  ME53N 
    470  FNBD  FNBT 
    470  ME54  ME54N 
    470  ME59  ME59N 
    46A  MR01  MIRO 
    46A  MR02  MRBR 
    46A  MR08  MR8M 
    46A  MR1G  MIRO 
    46A  MRHG  MIRO 
    46A  MRHR  MIRO 
    46A  MRRS  MRRL 
    46B  S_P99_41000327  S_ALR_87100205 
    46C  MR03  MIR4 
    46C  MR1B  MIR6 
    46C  MR2M  MIR4 
    46C  MR3M  MIR4 
    46C  MR41  MIR7 
    46C  MR42  MIR4 
    46C  MR43  MIR4 
    46C  MR44  MIR4 
    46C  MR5M  MIR4 
    46C  OAA2  AUFW 
    620  AFAB  AFABN 
    620  AL01  RZ20 
    620  AL02  RZ20 
    620  AL03  RZ20 
    620  AR11  AR11N 
    620  AR29  AR29N 
    620  ASKB  ASKBN 
    620  CA97  CA97N 
    620  DB02  DB02N 
    620  FM3S  FMCIA 
    620  FM3U  FMCIA 
    620  FMN3  FMN3N 
    620  FMN4  FMN4N 
    620  FMN5  FMN5N 
    620  O02E  BMBC 
    620  OACR  OAC0 
    620  RZ23  RZ23N 
    620  SCOM  SCOT 
    620  SM22  SM21 
    620  SWID  SWI2_DIAG 
    620  S_P9C_18000190  S_PL0_09000447 
    620  S_P9C_18000247  S_P6B_12000136 
    620  VOPA  VOPAN 
    620  VOTX  VOTXN 
    620  WE49  WE42 
    620  WE52  WE41 
    620  WE53  WE41 
    640  ABAW  ABAWN 
    640  AL04  RZ20 
    640  AL19  OS07 
    640  COHVOMAVAILCHECK  COMAC 
    640  COHVOMPI  COHVPI 
    640  COHVOMPP  COHV 
    640  KE1F  KE1FN 
    640  KE29  KE29N 
    640  MKH1  MKH1N 
    640  MKH2  MKH2N 
    640  RZ02  RZ20 
    640  RZ06  RZ20 
    640  RZ08  RZ20 
    640  STAT  STAD 
    640  STMP  SLPP 
    640  VL22  VL22N 
    700  AL05  RZ20 
    700  AL16  RZ20 
    700  AL17  OS07 
    700  OVXA  OVXAN 
    700  OVXG  OVXGN 
    700  OVXJ  OVXJN 
    700  OVXK  OVXKN 
    700  OVXM  OVXMN 
    700  OVX3  OVX3N 
    700  OVX6  OVX6N 
    700  OVX8  OVX8N 
    700  WLAM  WLAMN 
    700  WLMM  WLMMN 
    700  WLMV  WLMVN 
    700  WLWB  WLWBN 
    700  WPLG  WPLGN 
    Regards,
    Kiran

  • Need to get a list of PSA tables and change log tables existing in a PC

    Is there a standard table to look up all active DSOs and the change log tables associated with those DSOs?
    And also data sources and the PSA tables associated with each DS.
    I need to get a list of the PSA tables and change log tables existing in a process chain (which deletes the data in them from time to time). How do I do this quickly?
    Thanks in advance

    Hi Ramya
    Check the RSTSODS table with a filter of User App = CHANGELOG for the change log tables.

  • Master data tables and Transaction data Tables

    Hello Gurus,
    Please let me know how to tell which tables belong to master data and which belong to transaction data for the FICO module.
    Does anyone have material specifically about master data tables and transaction data tables?
    Thanks

    Hi Manu,
    Find attached a table relation diagram by Christopher Solomon. It is one of the most comprehensive charts on this topic.
    Warm regards,
    Murukan Arunachalam

  • IMDB Cache and transaction logs

    Hi,
    We have installed the IMDB Cache as part of a proof of concept. We want to cache a large Oracle table (approx 900 million rows) into a read-only local cache group, and we are finding that the amount of space taken by transaction logs during the initial cache load exceeds the available disk space. Is there a way to prevent transaction logging during the initial cache load? A failure during the initial load is acceptable for us, as we can always reload the cache from the base Oracle table. We are using a datastore with 60GB of memory; however, the filesystem available is 273GB, less the 120GB for the two datastore backing files, leaving approximately 150GB for transaction logs. To date we have only been able to load approximately 350 million rows before failing with:
    5056: The cache operation fails: error_type=<TimesTen Error>, error_code=<802>, error_message: [TimesTen]TT0802: Data store space exhausted
    The datastore attributes we are using are:
    [EntResPP]
    Driver=/app1/oracle/product/11.2.0/TimesTen/ER/lib/libtten.so
    DataStore=/prod100/oradata/EntResPP
    LogPurge=1
    PermSize=60000
    TempSize=2000
    PLSQL=1
    DatabaseCharacterSet=AL32UTF8
    OracleNetServiceName=TRAQPP.world
    The command we use to load the cache is:
    load cache group ro commit every 256 rows parallel 4
    Thanks
    Mark

    The replication agent is only involved if you have AWT cache groups or if you are using replication. If this is a standalone datastore with a readonly cache group then it is not necessary (or possible) to run the replication agent.
    The error message you mentioned is nothing to do with transaction log space. What has happened is that the memory allocated to the permanent data region within the datastore (where table data, indexes etc. reside) has become full (this corresponds to PermSize in your DSN attributes). This means you have not allocated enough memory in TimesTen to hold all the data. Be aware that there is typically significant storage space 'inflation' when caching data. This can range from 2x through to 5x or more. So, if the table data occupies a real 10 GB in Oracle, it will require between 20 and 50 GB in TimesTen.
    It is possible to suppress logging while loading the cache data (or at least it used to be prior to TT 11.2.1 - I haven't tried this in 11.2.1 myself). You'd do this as follows:
    1. Stop all application connections etc. to the datastore, stop the cache and replication agents, and make sure that the datastore is unloaded from memory.
    2. Change the value for 'Logging' in the DSN attributes to 0 and connect to the DSN using ttIsql as the instance administrator user.
    3. Start the cache agent. From the ttIsql session, issue the command:
    load cache group ro commit every 0 rows;
    You have to use 0 (load the entire group as a single 'transaction'), and you cannot use the 'parallel' clause.
    If this fails you may have to manually delete any rows that were loaded since TT cannot rollback.
    4. When the load has completed successfully, stop the cache agent and disconnect the ttIsql session.
    5. Change Logging back to 1 and reconnect as the instance administrator from ttIsql. Restart the cache agent.
    6. Start applications etc. as required.
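    Putting steps 2 to 5 together, the session would look roughly like this (a sketch of the procedure above, not verified on 11.2.1; the DSN comes from your attributes, everything else is your environment):

    -- first set Logging=0 for [EntResPP] in the DSN attributes, then in ttIsql:
    connect "DSN=EntResPP";                    -- as the instance administrator
    call ttCacheStart;                         -- start the cache agent
    LOAD CACHE GROUP ro COMMIT EVERY 0 ROWS;   -- single transaction, no PARALLEL
    call ttCacheStop;                          -- stop the cache agent
    disconnect;
    -- now set Logging=1 again and reconnect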
    Note that I would consider this at best a temporary workaround. Really, you need to ensure you have enough disk space to perform the load with logging. Of course, as I mentioned, the error you are getting right now has nothing to do with log disk space...
    Chris

  • Difference between temp table and CTE performance-wise?

    Hi Techies,
    Can anyone explain CTEs and temp tables performance-wise? Which is the better object to use when implementing DML operations?
    Thanks in advance.
    Regards
    Cham bee

    Welcome to the world of performance tuning in SQL Server! The standard answer to this kind of question is:
    It depends.
    A CTE is a logical construct, which specifies the logical computation order for the query. The optimizer is free to recast the computation order in such a way that the intermediate result from the CTE never exists during the calculation. Take for instance this query:
    WITH aggr AS (
        SELECT account_no, SUM(amt) AS amt
        FROM   transactions
        GROUP  BY account_no
    )
    SELECT account_no, amt
    FROM   aggr
    WHERE  account_no BETWEEN 199 AND 399
    Transactions is a big table, but there is an index on account_no. In this example, the optimizer will use that index and only compute the total amount for the accounts in the range. If you were to make a temp table of the CTE, SQL Server would have no choice but to scan the entire table.
    But there are also situations where it is better to use a temp table. This is often a good strategy when the CTE appears multiple times in the query. The optimizer is not able to pick a plan where the CTE is computed once, so it may compute the CTE multiple times. (To muddle the waters further, the optimizers in some competing products do have this capability.)
    Even if the CTE is only referred to once, it may help to materialise the CTE. The temp table has statistics, and those statistics may help the optimizer to compute a better plan for the rest of the query.
    For the case you have at hand, it's a little difficult to tell, because it is not clear to me whether the conditions are the same for points 1, 2 and 3 or whether they are different. But the second one, removing duplicates, can be quite difficult with a temp table, yet is fairly simple using a CTE with row_number().
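    For illustration, materialising the CTE above into a temp table would look like this (same table and columns as in the example):

    SELECT account_no, SUM(amt) AS amt
    INTO   #aggr
    FROM   transactions
    GROUP  BY account_no;

    CREATE UNIQUE CLUSTERED INDEX aggr_ix ON #aggr (account_no);

    SELECT account_no, amt
    FROM   #aggr
    WHERE  account_no BETWEEN 199 AND 399;

    #aggr now has its own statistics and an index, and it can be referenced several times without recomputing the aggregate.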
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Global temp table and edit

    Hi all,
    Can someone tell me why, when I create a GTT and insert data like the following, I get an "inserted 14 rows" message, but when I run a select statement from SQL Workshop, sometimes I get the data and sometimes I don't? My understanding is that the data is supposed to stay for my logon session and get cleaned out when I exit the session.
    I am developing a screen in APEX and will use this temp table for users to do some editing work. Once the editing is done, I save the data into a static table. Can this be done? So far, every attempt to update the temp table results in 0 rows updated, and the temp table reverts back to 0 rows. Can you help me?
    CREATE GLOBAL TEMPORARY TABLE "EMP_SESSION"
    (     "EMPNO" NUMBER NOT NULL ENABLE,
         "ENAME" VARCHAR2(10),
         "JOB" VARCHAR2(9),
         "MGR" NUMBER,
         "HIREDATE" DATE,
         "SAL" NUMBER,
         "COMM" NUMBER,
         "DEPTNO" NUMBER
    ) ON COMMIT PRESERVE ROWS;

    insert into emp_session (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO)
    select * from emp;

    select * from emp_session;
    -- sometimes I get 14 rows, sometimes 0 rows
    Thanks.
    Tai

    Tai,
    To say that Apex doesn't support GTTs is not quite correct. To understand why it is not working for you, and how GTTs may still be of use in an Apex application, you have to understand the concept of a session in Apex as opposed to a conventional database session.
    In a conventional database session, as when you are connected with SQL*Plus, you have what is known as a dedicated session, or a synchronous connection. Temporary objects such as GTTs and package variables can persist across calls to the database. A session in Apex, however, is asynchronous by nature: the connection to the database goes through a server such as the Oracle HTTP Server or the Apex Listener, which in effect maintains a pool of connections to the database, and calls by your application aren't guaranteed to get the same connection for each call.
    To get over this, the people who developed Apex came up with various methods to maintain session state and global objects that are persistent within the context of an Apex session. One of these is Apex collections, which are a device for maintaining collection-like (array-like) data that is persistent within an Apex session. These are Apex-session-specific objects, local to the session that creates and maintains them.
    With this knowledge, you can see why the GTT is not working for you, and also how a GTT may be of use in an Apex application, provided you don't expect the data to persist across a call, as in a PL/SQL procedure. Note, though, that unless you are dealing with very large datasets, a regular Oracle collection is preferable; see the sketch below.
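    For example, the EMP data could be staged in a collection instead of a GTT. This is a sketch using the documented APEX_COLLECTION API (the collection name is arbitrary):

    BEGIN
       APEX_COLLECTION.CREATE_COLLECTION_FROM_QUERY(
          p_collection_name => 'EMP_EDIT',
          p_query           => 'select empno, ename, job, mgr, hiredate,'
                            || ' sal, comm, deptno from emp');
    END;

    The rows then persist for the duration of the Apex session in the APEX_COLLECTIONS view (columns C001 ... C050), can be updated with APEX_COLLECTION.UPDATE_MEMBER while the user edits, and can finally be written to your static table in one insert.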
    I hope this explains your issue.
    Regards
    Andre

  • WAE 512 and transaction logs problem

    Hi guys,
    I have a WAE 512 with ACNS 5.5.1b7 and I'm not able to export archived logs correctly. I tried to configure the WAE as below:
    transaction-logs enable
    transaction-logs archive interval every-day at 23:00
    transaction-logs export enable
    transaction-logs export interval every-day at 23:30
    transaction-logs export ftp-server 10.253.8.125 cache **** .
    and the WAE exported only one file of about 9 MB, even though the files were stored on the WAE, as you can see from the output:
    Transaction log configuration:
    Logging is enabled.
    End user identity is visible.
    File markers are disabled.
    Archive interval: every-day at 23:00 local time
    Maximum size of archive file: 2000000 KB
    Log File format is squid.
    Windows domain is not logged with the authenticated username
    Exporting files to ftp servers is enabled.
    File compression is disabled.
    Export interval: every-day at 23:30 local time
    server type username directory
    10.253.8.125 ftp cache .
    HTTP Caching Proxy logging to remote syslog host is disabled.
    Remote syslog host is not configured.
    Facility is the default "*" which is "user".
    Log HTTP request authentication failures with auth server to remote syslog host.
    HTTP Caching Proxy Transaction Log File Info
    Working Log file - size : 96677381
    age: 44278
    Archive Log file - celog_213.175.3.19_20070420_210000.txt size: 125899771
    Archive Log file - celog_213.175.3.19_20070422_210000.txt size: 298115568
    Archive Log file - celog_213.175.3.19_20070421_210000.txt size: 111721404
    I made a test: I configured archiving every hour from 12:00 to 15:00 and the export at 15:10. The WAE transferred only three files (those of 12:00, 13:00 and 14:00); the 15:00 file was missed.
    What can I do?
    Thx
    davide

    Hi Davide,
    You seem to be missing the path on the FTP server, which goes on the export command.
    Disable transaction logs, then remove the export command and add it again like this: transaction-logs export ftp-server 10.253.8.125 cache **** / ; after that, enable transaction logs again and test it.
    Let me know how it goes. Thanks!
    Jose Quesada.

  • What are these DR$TEMP % tables and can they be deleted?

    We are generating PL/SQL using ODMr and then periodically running the model in batch mode. Out of 26,542 objects in the DMUSER1 schema, 25,574 are DR$TEMP% tables. Should the process be cleaning itself up, or is this supposed to be a manual process? Is the cleanup documented somewhere?
    Thanks

    Hi Doug,
    The only DR$ tables/indexes built are the ones generated by the Build, Apply and Test Activities. I confirmed that they are deleted in ODMr 10.2.0.3. As I noted earlier, there was a bug in ODMr 10.2.0.2 which could lead to leakage when deleting Activities. You will have DR$ tables around for existing Activities, so do not delete them without validating that they are no longer part of an existing Activity.
    You can track down the DR$ objects associated to an Activity by viewing the text step in the activity and finding the table generated for the text data. This table will have a text index created on it. The name of that text index is used as a base name for several tables which Oracle text utilizes.
    Again, all of these are deleted when you delete an Activity with ODMr 10.2.0.3.
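    To see how many of these you have before cleaning up, a simple inventory query (run while connected as DMUSER1) is:

    SELECT table_name
    FROM   user_tables
    WHERE  table_name LIKE 'DR$TEMP%';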
    Thanks, Mark
