Question about temporary tables and imports.

Hello everybody.
Interesting one, this. We have an application that uses global temporary tables. We refresh live to test using an import of the application schema. I noticed when doing this yesterday that the temporary tables were being recreated in test as permanent rather than temporary.
Is there a reason for this? Has anyone come across this before? Is there a way around it (apart from manual checking)?
Many thanks
Rup

Could you specify how you found out that it is coming in as permanent?
I believe exp/imp will export and import it as a temporary table. I have just done a simple test to check, and it works fine.
Here is my test log:
SQL> connect scott
Enter password:
Connected.
SQL> CREATE GLOBAL TEMPORARY TABLE test_global
  2          (startdate DATE,
  3           enddate DATE,
  4           class CHAR(20))
  5        ON COMMIT DELETE ROWS;
Table created.
SQL>  select table_name,temporary from user_tables where table_name='TEST_GLOBAL';
TABLE_NAME                     T
TEST_GLOBAL                    Y
SQL> select table_name,temporary from user_tables;
TABLE_NAME                     T
DEPT                           N
EMP                            N
BONUS                          N
SALGRADE                       N
TEST_GLOBAL                    Y
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining Scoring Engine options
C:\Documents\dbmsdirect\Testing\global>exp
Export: Release 10.2.0.2.0 - Production on Wed Feb 20 12:18:45 2008
Copyright (c) 1982, 2005, Oracle.  All rights reserved.
Username: scott
Password:
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining Scoring Engine options
Enter array fetch buffer size: 4096 >
Export file: EXPDAT.DMP > test_global
(2)U(sers), or (3)T(ables): (2)U > t
Export table data (yes/no): yes >
Compress extents (yes/no): yes >
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
server uses AL32UTF8 character set (possible charset conversion)
About to export specified tables via Conventional Path ...
Table(T) or Partition(T:P) to be exported: (RETURN to quit) > test_global
. . exporting table                    TEST_GLOBAL
Table(T) or Partition(T:P) to be exported: (RETURN to quit) >
Export terminated successfully without warnings.
C:\Documents\dbmsdirect\Testing\global>sqlplus /nolog
SQL*Plus: Release 10.2.0.2.0 - Production on Wed Feb 20 12:19:50 2008
Copyright (c) 1982, 2005, Oracle.  All Rights Reserved.
SQL> connect scott
Enter password:
Connected.
SQL> drop table test_global purge;
Table dropped.
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining Scoring Engine options
C:\Documents\dbmsdirect\Testing\global>imp
Import: Release 10.2.0.2.0 - Production on Wed Feb 20 12:20:19 2008
Copyright (c) 1982, 2005, Oracle.  All rights reserved.
Username: scott
Password:
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining Scoring Engine options
Import file: EXPDAT.DMP > test_global.DMP
Enter insert buffer size (minimum is 8192) 30720>
Export file created by EXPORT:V10.02.01 via conventional path
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
import server uses AL32UTF8 character set (possible charset conversion)
List contents of import file only (yes/no): no >
Ignore create error due to object existence (yes/no): no >
Import grants (yes/no): yes >
Import table data (yes/no): yes >
Import entire export file (yes/no): no > yes
. importing SCOTT's objects into SCOTT
. importing SCOTT's objects into SCOTT
Import terminated successfully without warnings.
C:\Documents\dbmsdirect\Testing\global>sqlplus /nolog
SQL*Plus: Release 10.2.0.2.0 - Production on Wed Feb 20 12:22:44 2008
Copyright (c) 1982, 2005, Oracle.  All Rights Reserved.
SQL> connect scott
Enter password:
Connected.
SQL>
SQL> select table_name,temporary from user_tables;
TABLE_NAME                     T
DEPT                           N
EMP                            N
BONUS                          N
SALGRADE                       N
TEST_GLOBAL                    Y
SQL> insert into TEST_GLOBAL values (sysdate,sysdate,'test1');
1 row created.
SQL> select * from TEST_GLOBAL;
STARTDATE ENDDATE   CLASS
20-FEB-08 20-FEB-08 test1
SQL> commit;
Commit complete.
SQL> select * from TEST_GLOBAL;
no rows selected
SQL>  select table_name,temporary from user_tables where table_name='TEST_GLOBAL';
TABLE_NAME                     T
TEST_GLOBAL                    Y
SQL>
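If you want to see exactly what DDL was captured for the table (rather than inferring it from USER_TABLES), DBMS_METADATA is a quick check after the import. A minimal sketch, assuming the TEST_GLOBAL table from the log above:

```sql
-- Confirm the table is temporary and see which kind it is:
-- DURATION is SYS$TRANSACTION for ON COMMIT DELETE ROWS,
-- SYS$SESSION for ON COMMIT PRESERVE ROWS, NULL for permanent tables.
SELECT table_name, temporary, duration
  FROM user_tables
 WHERE table_name = 'TEST_GLOBAL';

-- Show the DDL Oracle would use to recreate the table
-- (SET LONG 10000 first in SQL*Plus so the CLOB is not cut off).
SELECT DBMS_METADATA.GET_DDL('TABLE', 'TEST_GLOBAL') FROM dual;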

Similar Messages

  • Question about temporary table population

    Hi,
    I wrote a stored procedure to populate a temp table inside a package.
    I call that package from Visual Studio.
    As it's a temporary table, the rows are deleted as soon as the session is over.
    Is there a way to see the table contents?
    PROCEDURE edr_class_by_hour_use_per_veh
       (in_report_parameter_id   IN      report_tasks.report_task_id%TYPE)
    IS
    BEGIN
      DELETE FROM edr_lane_by_class_report_data;
      INSERT INTO edr_lane_by_class_report_data
          (site_id,
           site_lane_id,
           bin_start_date_time,
           v_class,
           v_lane,
           lane_count)
      SELECT
              site_id,
              site_lane_id,
              bin_start_date_time,
              v_class,
              v_lane,
              COUNT(1)
        FROM (SELECT site_id,
                     site_lane_id,
                     ( SELECT NVL(MAX(interval_start_date_time), date_time)
                         FROM edr_rpt_tmp_grouping_table
                        WHERE interval_start_date_time <= date_time) bin_start_date_time,
                     NVL(traffic_class.v_class, 0) v_class,
                     NVL(traffic_record.lane, 0)   v_lane
                FROM edr_lane_by_class_per_veh_data
                LEFT OUTER JOIN traffic_class
                             ON traffic_class.record_id = edr_lane_by_class_per_veh_data.record_id
                LEFT OUTER JOIN traffic_record
                             ON traffic_record.record_id = edr_lane_by_class_per_veh_data.record_id
              ) vehicles
        GROUP BY site_id,
                 site_lane_id,
                 bin_start_date_time,
                 v_class,
                 v_lane;
    END edr_class_by_hour_use_per_veh;
    Edited by: Indhu Ram on Nov 25, 2009 12:29 PM

    Hi,
    You need to create the temporary table with ON COMMIT PRESERVE ROWS if you want the rows to survive a commit and remain visible until the session ends.
    Refer to doc
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96521/tables.htm#7832
    - Pavan Kumar N
    Oracle 9i/10g - OCP
    http://oracleinternals.blogspot.com/
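    A minimal sketch of the difference (table and column names are illustrative):

    ```sql
    -- Rows vanish at COMMIT (the default, ON COMMIT DELETE ROWS)
    CREATE GLOBAL TEMPORARY TABLE tmp_per_txn (id NUMBER)
    ON COMMIT DELETE ROWS;

    -- Rows survive COMMIT and remain until the session ends
    CREATE GLOBAL TEMPORARY TABLE tmp_per_session (id NUMBER)
    ON COMMIT PRESERVE ROWS;
    ```

    Note that even with PRESERVE ROWS, another session (e.g. a second connection from Visual Studio) will never see your rows; GTT data is always private to the session that inserted it.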

  • Question about multiple tables and formulas in numbers

    I was wondering about having a formula that affects one table based on data in another table. I have a spreadsheet that I created for work. Column A is a list of addresses; columns B-ZZ are a list of "billing codes" that get applied to the service work done for that address. It's kind of difficult to look at this large spreadsheet on my iPhone and edit it on the go. Is there a way to have another column, possibly on sheet 2, where I type in these "billing codes" and they would fill an x or some other character in the cell that corresponds with that billing code on that address line? To be as plain as possible: I want to enter data in one cell (on another sheet if possible) and have it "mark" another cell in response to the specific data in that entry cell. I'm thinking this way because instead of scrolling back and forth on the sheet on my iPhone to mark the boxes, I could just navigate to sheet 2 and enter the billing codes, and they would put an "X" in the cell that matches up with the code column and address row. Each address row would have a separate entry field in sheet 2, to make the formula easier. Thanks for any help.

    Tom,
    That's a lot of columns for just a table of marks that reflect an input. Sure, you could have each cell in that giant table check whether some corresponding cell contains certain text, but that document is going to be slower than you might be able to tolerate.
    Jerry

  • Question about Update Tables

    Hello Gurus,
    I have a question about "update table" entries. I read somewhere that an entry in an update table is made at the time of the OLTP transaction. Is this correct? If so, does this happen on a V1 update or a V2 update? Please clarify. Similarly, an entry in the "extraction queue" (in case you are using "queued delta") will happen on a V1 update. I just want to get clarification on both these methods. Any help in this matter is highly appreciated.
    Thanks,
    Sreekanth

    Hi
    Update tables are temporary tables that are refreshed by the V3 job. An update table is like a buffer: it gets filled after the update of the application tables and statistical tables through the V1/V2 update. We can bypass this process with the delta methods.
    M Kalpana

  • Doubts about Temporary Table.

    Hi,
    I am using Temporary Table.
    But the insert command takes too much time compared to an insert into a normal table.
    One more doubt about temporary tables:
    Suppose there are two different users. They connect and each first inserts rows for their own use. Now they both select.
    Does the select of one user also see the rows of the second user, or does the temporary table treat the two users' data as if it were inserted into two different tables?
    Help!!!

    Nested structure (not deep - deep means there's a string or a table as a component)
    TYPES: BEGIN OF tp_header_type,
             BEGIN OF d,
               empresa TYPE ...
               num_docsap TYPE ...
            END OF d,
            awkey TYPE ...
          END OF tp_header_type.
    matt
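    To the original question in this thread: each session sees only its own rows in a global temporary table, so the two users effectively get private copies of the data. A minimal sketch (table name is illustrative):

    ```sql
    CREATE GLOBAL TEMPORARY TABLE tmp_work (id NUMBER)
    ON COMMIT PRESERVE ROWS;

    -- Session 1:
    INSERT INTO tmp_work VALUES (1);
    SELECT COUNT(*) FROM tmp_work;   -- sees its own row

    -- Session 2 (a different connection):
    SELECT COUNT(*) FROM tmp_work;   -- 0 rows: session 1's data is invisible here
    ```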

  • A question about item "type and release" of  source system creation

    Hello expert,
    I have a question about item "type and release" of  source system creation.
    As we know, when we create a web service source system, a pop-up is displayed which includes three items: "logical system", "source system" and "type and release".
    About the item "type and release",when we push "F4" button,there will be three default selections as below:
    "ORA 115     Oracle Applications 11i
    TLF 205     Tealeaf 2.05B
    XPD 020     SAP xPD".
    Can anyone tell me when and how I should use these three selections?
    I also attempted to fill in the item with arbitrary letters instead of the three default selections, and it seems I can input it freely.
    Thank you and Best Regards,
    Maggie

    Hello DMK,
    Thank you very much for your answer. It is very helpful for me.
    Can I ask you further about it?
    I understand that it is a semantic description item.
    You said the default selections are set by our Basis people. Would you tell me how we should create a new value, besides the default ones, for the item "type and release"? Only by inputting the value in the item directly? But we cannot see the new value we created ourselves when we push the "F4" button next time - is that OK? Or do we have to ask the Basis people to define a new value, just like the default selections, before we use it?
    Also, if possible, would you describe in more detail what you meant by "This becomes important when you are troubleshooting certain issues, especially RFC connection problems"?
    Thank you and Best Regards,
    Maggie
    Message was edited by: Maggie

  • Question About Color's and Gradients

    Hi all,
    I have a question about color swatches and gradients.
    I am curious to know: if I have two color swatches that I make into a gradient, is it possible to change the tint of each individual color in that gradient and have that applied to the gradient, without having to adjust the gradient's opacity?
    The reason I'm asking is that in creating a project I found that the colors I chose from my swatches to make my gradient were too dark. While I can adjust each one's tint to my liking (if the object they were applied to were going to be a solid color), that doesn't seem to apply to the overall gradient.
    I hope that makes sense. I know this was something that could be accomplished in Quark, and I was wondering if I can do something similar.

    If you double click your gradient swatch (after adding it to the swatches)
    Then click a colour stop in the gradient, and then change the drop down menu to CMYK (or rgb)
    And you can alter the percentages there. It's not much use for spot colours but it's a start.
    But making tint swatches would be a good start anyway.
    At least then when you double click the gradient (in the swatches) to edit it you can choose from CMYK, RGB, LAB, or Swatches and adjust each colour stop to your liking.

  • Question about clear page and reset pagination

    Hi,
    I have a question about clear pages and the reset pagination in an URL. What is the reason why a clear page doesn't also trigger a reset pagination on the pages which are cleared?
    I can't really imagine a business case where it makes sense to clear all page items on a page but not reset the pagination of reports on that page, which probably use a page item in their WHERE clause...
    The drawback of this behavior is that a developer always has to check the reset-pagination checkbox when he clears the target page, and the even bigger drawback is that if you specify other pages to clear, you can't reset pagination for them, because reset pagination only works for the target page.
    Thanks for your input.
    Patrick
    *** New *** Oracle APEX Essentials *** http://essentials.oracleapex.info/
    My Blog, APEX Builder Plugin, ApexLib Framework: http://www.oracleapex.info/

    Enhancement request filed, thanks,
    Scott

  • The question about portlet customization and synchronization

    I have a question about portlet customization and synchronization.
    When I call
    NameValuePersonalizationObject data = (NameValuePersonalizationObject) PortletRendererUtil.getEditData(portletRenderRequest);
    portletRenderRequest.setPortletTitle(str);
    portletRenderRequest.putString(aKey, aValue);
    PortletRendererUtil.submitEditData(portletRenderRequest, data);
    Should I do any synchronization myself (use "synchronized" blocks or something else), or is this procedure made thread-safe at the level of the Portal API?

    HI Dimitry,
    I don't think you have to synchronize the block. I guess the code is synchronized internally.
    regards,
    Harsha

  • Difference between global temporary table and with clause

    What is the difference between a global temporary table and the WITH clause? (The WITH clause is used as a table in a select query.)

    >what is the difference between global temporary table and with clause
    There is a big difference between the two.
    A global temporary table exists for a session or a transaction, while a WITH clause lives only for the duration of a query.
    A GTT is a named, permanently defined table whose data is flushed away on session exit or at the end of a transaction, while a WITH clause just provides a name for reference data within a single query (which is as good as having a union subquery in the FROM clause).
    eg
    SQL> with c as
      2  (
      3  select 1 id1, 2 id2 from dual union all
      4  select 2,4 from dual union all
      5  select 3,5 from dual)
      6  select * from c
      7  /
           ID1        ID2
             1          2
             2          4
             3          5
    SQL> ed
    Wrote file afiedt.buf
      1  select *
      2  from
      3  (
      4  select 1 id1, 2 id2 from dual union all
      5  select 2,4 from dual union all
      6* select 3,5 from dual)
      7  /
           ID1        ID2
             1          2
             2          4
             3          5
    But a GTT serves its purpose very well in the case of session-specific transactions and the scenario of multiple users working on the same data.

  • A few questions about the ka790gx and dka790gx

    I have a few questions about the KA790GX and DKA790GX. How much better is the DKA790GX compared to the KA790GX? How much difference does the ACC function make to overclocking, etc.? I plan on getting a Phenom II 940BE or 720BE. I already have the KA790GX, so would it be worth building another system using the DKA790GX mobo, or should I keep what I already have and just change the CPU?

    It's largely irrelevant what other boards had VRM issues other than the KA790GX - the fact is it died at stock settings. Since there is little cost difference between the more robust DKA790GX (or Platinum if you really need 1394) why bother with the proven weakling? There are other examples around of the KA not having a robust power section.  There's no way I would use even a 95W TDP CPU in the KA and absolutely not O/C.....!
    As for the credentials of Custom PC, I have generally found their reviews accurate and balanced, and echo my own findings where applicable. If a little too infrequent.
    The fact that the KA has such a huge VRM heatsink leads me to my other comments on the Forum, particularly regarding the "fudge" aspect:
    """Henry is spot on - the notion that adding a heatsink to the top of the D2PAK or whatever MOSFETS is effective is virtually worthless. The device's die thermal junction is the tab on the device back - which is always against the PCB pad. The majority of heat is therefore dissipated in to the board, and the fact that the epoxy plastic encapsulation gets hot is simply due to the inability of the heat to be conducted away from the device die via the tab. Not sure when Epoxy become an effective conductor of heat.... Good practice is to increase the size of the PCB pad (or "land" in American) such that the enlarged PCB copper area acts as an adequate heatsink. This is still not as effective as clamping a power device tab to an actual piece of ali or copper, but since the devices used are SMD devices, this is not possible. However, the surface area required to provide sufficient PCB copper area to act as a heatsink for several devices isn't available in the current motherboard layouts. Where industrial SBC designs differ in this respect is to place the VRM MOSFETs on the back of the PCB on very enlarged PCB pads - where real estate for components is not an issue.
    Gigabyte's UD3 2oz copper mainboards sound like a good idea, on the face of it. However, without knowing how they have connected the device tabs to where and what remains a mystery. I suspect it is more hype than solution, although there will be some positive effect. From an electrical perspective, having lower resistance connecting whatever to whatever (probably just a 0V plane) is no bad thing.
    The way the likes of ASUS sort of get round the problem is to increase the sheer number of MOSFET devices and effectively spread the heat dissipation over a larger physical area. This works to a degree, there is the same amount of heat being dissipated, but over several more square inches. The other advantage of this is that each leg of the VRM circuit passes less current and therefore localised heat is reduced. Remember that as well as absolute peak operating temperature causing reduced component life, thermal cycling stresses the mechanical aspects of components (die wire bonds for example) as well as the solder joints on the board. Keeping components at a relatively constant temperature, even if this is high (but within operating temperature limits), is a means of promoting longevity.
    For myself, the first thing I do with a separate VRM heatsink is take it off and use a quiet fan to blow air onto the VRM area of the PCB - this is where the heat is. This has the added benefit of actively cooling the inductors and capacitors too....
    Cooling the epoxy component body is a fudge. If the epoxy (and thus any heatsink plonked on top of it) is running at 60C, the component die is way above that.....
    It's better than nothing, but only just."""

  • Question about Global index and Table Partitions

    I have created a global index for a partitioned table; in the future, partitions will be dropped from the table. Do I need to do anything to the global index? Does it need to be rebuilt, or will it be OK when partitions are dropped?

    >
    I have created a global index for a partitioned table now in the future the partitions will be dropped in the table. Do I need to do anything to the global index? Does it need to be rebuilt or would it be ok if partitions get dropped in the table?
    >
    You can use the UPDATE INDEXES clause. That allows users to keep using the table and Oracle will keep the global indexes updated.
    Otherwise, as already stated all global indexes will be marked UNUSABLE.
    See 'Dropping Partitions' in the VLDB and Partitioning Guide
    http://docs.oracle.com/cd/E11882_01/server.112/e25523/part_admin002.htm#i1007479
    >
    If local indexes are defined for the table, then this statement also drops the matching partition or subpartitions from the local index. All global indexes, or all partitions of partitioned global indexes, are marked UNUSABLE unless either of the following is true:
    You specify UPDATE INDEXES (Cannot be specified for index-organized tables. Use UPDATE GLOBAL INDEXES instead.)
    The partition being dropped or its subpartitions are empty
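    A minimal sketch of the clause in action (the table, partition, and index names are illustrative):

    ```sql
    -- Global indexes stay USABLE; Oracle maintains them during the drop
    ALTER TABLE sales DROP PARTITION sales_q1_2007 UPDATE GLOBAL INDEXES;

    -- Without the clause, check for (and rebuild) unusable global indexes:
    SELECT index_name, status FROM user_indexes WHERE status = 'UNUSABLE';
    ALTER INDEX sales_gidx REBUILD;
    ```

    Note that maintaining the global index during the drop makes the DROP PARTITION itself slower; the trade-off is that the table stays fully queryable throughout.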

  • Question about extending dimension and fact tables

    We have our data warehouse completely loaded with several years of historical data. We now want to extend one of our dimension tables by performing a type 1 customization to add an additional column to that dimension table.
    Is there a way for the ETL to update all of the historical records in the dimension table by filling in the data for the new column? Assume that we made the necessary changes in the database and updated the ETL mapping accordingly. We want to avoid having to truncate the table or doing a full load again -- even just for that table and the corresponding facts.

    Remove the last refresh date for the table in the DAC repository... that will force an update of the whole table without you needing to reload everything.
    Oh yeah and you're in the wrong forum: Business Intelligence Applications
    C.

  • Temporary tables in import

    Hi,
    I am following the Metalink note below, which suggests exporting and importing first without rows, changing nls_length_semantics from BYTE to CHAR, and then importing with data.
    But I can see around 72 temporary tables that were created and not deleted. Does import create temporary tables? We did not create them - any idea about this?
    313175.1     SCRIPT: Changing columns to CHAR length semantics

    Of course, it is 10.2.0.4 on Windows.
    They are not recycle-bin tables; these are actual tables suffixed with TMP. Say I have a table called EMPLOYEES - then I have two tables, EMPLOYEES and EMPLOYEESTMP.
    So I have the actual table AHILFANALYSEZEIT2, but also AHILFANALYSEZEIT2TMP (the syntax shown below is from TOAD):
    CREATE GLOBAL TEMPORARY TABLE AHILFANALYSEZEIT2TMP
    (... column definitions ...)
    ON COMMIT PRESERVE ROWS
    NOCACHE
    /
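    To see which of those leftover tables really are global temporary tables (and to generate DROP statements to review), something like this sketch may help; the LIKE pattern is an assumption based on the naming described above:

    ```sql
    -- List global temporary tables whose names end in TMP
    SELECT table_name, duration
      FROM user_tables
     WHERE temporary = 'Y'
       AND table_name LIKE '%TMP';

    -- Generate DROP statements; review the output before running it
    SELECT 'DROP TABLE ' || table_name || ';' AS drop_stmt
      FROM user_tables
     WHERE temporary = 'Y'
       AND table_name LIKE '%TMP';
    ```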

  • Global Temporary Table (and more)

    Hi every one,
    here's the scenario:
    My database version is Oracle 9i.
    I have two Global Temporary Tables (GTT). I want to insert into those two tables (using a SELECT statement for each table) and then to use a SELECT statement to select from the two tables and the result sent to Report Builder 6i.
    Now, i guess i could use a Stored Procedure (SP) to insert into those tables and then use a SYS_REFCURSOR to return from this SP. The problem with that is Report Builder 6i does not recognise SYS_REFCURSOR types; it requires actual rows to be returned.
    So, my question is:
    Is there any way to insert into the two GTTs first (using SELECT statements with INSERT) and then select from the two tables, all driven from a single SELECT statement? (In any case, there must be a statement that returns actual rows.)
    Additionally, one may run the report more than once (not necessarily after issuing a COMMIT or logging out), which means the two GTTs will be filled again and again. So I will have to clear the GTTs every time before inserting.
    Efficiency of the query/solution is very important too, as the data involved can consist of up to 200,000 records.
    Any suggestions will be greatly appreciated.
    Thank u.

    Here is some more detail:
    Q1//// This statement handles the INSERT for one GTT. The data inserted consists of multiple columns selected from tables other than POLICY, which is used by the stored proc MANPOWER (in Q2).
    INSERT INTO TEMP1_NewBusiness
    SELECT ORG1, ORG2, (SELECT NAME FROM ORGANISER WHERE CODE=ORG1), ...,
    FROM POLICY
    WHERE DATCOM = '2007';
    Q2////This handles INSERT for second GTT
    INSERT INTO TEMP2_MANPOWER
    SELECT ORG1, MANPOWER(ORG1)
    FROM TEMP1_NewBusiness;
    /////Table POLICY is a normal table.
    MANPOWER is a stored proc which performs a string aggregation, using a cursor, by selecting from TEMP1.NBS. Because of the volume of data involved and the number of Selects, i'm using the GTT TEMP1.NBS as source for Manpower's data.
    So, first i need these two statements to be executed so that my GTTs are filled.
    Next, i want the result of query below sent to Report Builder.
    Q3.
    SELECT A.ORG1, A.ORG2, ..., B.MANPOWER
    FROM TEMP1.NewBusiness A, TEMP2.MANPOWER B
    WHERE A.ORG1 = B.ORG1;
    Now, I could place Q3 in the report, but how do I get Q1 and Q2 executed first?
    Hope the situation is a little more clear now.
    I understand where you are coming from DAMORGAN and duplication is something i want to avoid myself.
    Thank u.
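    One common pattern (a sketch, not the only answer): wrap Q1 and Q2 in a procedure that clears the GTTs first, and call it before the report's main query runs, so that Q3 always sees freshly loaded data. The names below follow the post; the procedure name and the elided column list are assumptions:

    ```sql
    CREATE OR REPLACE PROCEDURE load_report_gtts IS
    BEGIN
      -- Clear any rows left over from a previous run in this session
      -- (DELETE rather than TRUNCATE, since TRUNCATE is DDL in PL/SQL)
      DELETE FROM TEMP2_MANPOWER;
      DELETE FROM TEMP1_NewBusiness;

      -- Q1: fill the first GTT
      INSERT INTO TEMP1_NewBusiness
      SELECT ORG1, ORG2,
             (SELECT NAME FROM ORGANISER WHERE CODE = ORG1)
             /* ... remaining columns as in Q1 ... */
        FROM POLICY
       WHERE DATCOM = '2007';

      -- Q2: fill the second GTT from the first
      INSERT INTO TEMP2_MANPOWER
      SELECT ORG1, MANPOWER(ORG1)
        FROM TEMP1_NewBusiness;
    END load_report_gtts;
    /
    ```

    In Report Builder 6i you could call load_report_gtts from the Before Report trigger; the report's data model then contains only Q3, which returns actual rows as required.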
