Speed up Oracle imp

Hi Folks,
We are importing table data from Oracle 9i (9.2.0.8) into Oracle 11g (11.2.0.3) using the old Oracle imp utility, and we're hoping you can help us speed up the process, please.
We have exported one table's data: roughly 35 million rows, producing an export dump of 7.5GB. Please note that this is a subset of the data (one month's worth, to be exact). We realize there will be a lot of duplicates within this subset and that the import will reject the duplicated rows through PK constraints, giving the following error:
IMP-00019: row rejected due to ORACLE error 1
IMP-00003: ORACLE error 1 encountered
ORA-00001: unique constraint (RG_SCHEMA.SSTG010P) violated
Column 1 243113850256342640
The import has been running for over 3 hours now and is taking quite a while, even though we're running it with the following parameters (the assembled command line is shown after the list):
statistics=NONE
buffer=12000000
resumable=Y
ignore=Y
constraints=N
indexes=N
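Assembled on one line, the command we're running looks like this (the dump file name is a placeholder):
imp rg_schema/*** file=month01.dmp fromuser=rg_schema touser=rg_schema statistics=none buffer=12000000 resumable=y ignore=y constraints=n indexes=n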
We do realize that rejecting more than half the rows slows down the process, as there is overhead associated with that; having said that, is there anything else we can try to speed up this dreaded process, please?
Appreciate the help.

IMPORT:
Create an indexfile so that you can create indexes AFTER you have imported data. Do this by setting INDEXFILE to a filename and then import. No data will be imported but a file containing index definitions will be created. You must edit this file afterwards and supply the passwords for the schemas on all CONNECT statements.
Place the file to be imported on a separate physical disk from the Oracle data files.
Set the LOG_BUFFER to a big value and restart Oracle.
Stop redo log archiving if it is running (ALTER DATABASE NOARCHIVELOG;).
Create a BIG tablespace with a BIG rollback segment inside. Set all other rollback segments offline (except the SYSTEM rollback segment, of course). The rollback segment must be as big as your biggest table (I think?).
Use COMMIT=N in the import parameter file if you can afford it.
Use STATISTICS=NONE in the import parameter file to avoid the time-consuming import of statistics.
Remember to run the indexfile created earlier; a sketch of the whole workflow follows below.
Import Export FAQ - Oracle FAQ
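A sketch of that three-pass workflow with imp (dump file, user, and script names are placeholders):
imp rg_schema/*** file=bigtab.dmp full=y indexfile=bigtab_indexes.sql
imp rg_schema/*** file=bigtab.dmp full=y ignore=y indexes=n constraints=n commit=n buffer=12000000 statistics=none
sqlplus rg_schema/*** @bigtab_indexes.sql
The first pass writes only the index DDL and loads no rows; the second loads the data without index or constraint maintenance; the third rebuilds the indexes from the (edited) script.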

Similar Messages

  • Poor SSD disk IO speed in Oracle Linux 6.3 (Windows migration)

    Hello,
    I am trying to migrate from Windows to Oracle Linux, but I'm seeing very poor disk IO speeds. It's probably a tuning thing, but I'm relatively new to Oracle Linux and could use some detailed advice.
I took one physical server and migrated it from Windows 2008R2 to Oracle Linux 6.3 while maintaining the same Oracle version (11.2.0.3 Enterprise with ASM) and the same hardware (quad-CPU 48-core HP DL585 G7 with 128GB RAM, 7 LSI 9200-8e HBAs, 28 Samsung SSD drives). Disk IO performance, as measured using Oracle IO Calibration, was ~7,800MB/second and 440K IOPS under Windows but fell to ~2,400MB/second and 250K IOPS under Linux.
    Oracle Linux and the DB were installed using default values. The Oracle tools seem to have done a great job setting all of the obvious IO tuning parameters like the scheduler, but I figure that there are other important IO-related OS or DB parameters and that I have failed to configure the system properly.
    My goal for the migration is sequential read IO speed and I would have bet money that Linux would provide better performance than Windows. I still think that it should. What basic IO tuning should I do for Oracle Linux using ASM and SSD drives?
    Thank you!
    Some details:
    Oracle DB 11.2.0.3 enterprise installed via the GUI with the "Data Warehousing" template
    ASM - single disk group, 28 SSD disks, AU=4MB
    Oracle memory: Automatic memory management, 64GB allocated
    Non-default Oracle params: filesystemio_options=setall, disk_asynch_io=true

    Thanks "dude" for the advice. Unfortunately, I am still seeing low IO speeds.
The default scheduler for OEL 6.3 with the DB pre-install package is deadline, which seemed like a far better choice than CFQ. Based on your advice, I tried noop this morning and got the same results. I also tested with and without hugepages and saw only a small difference, at least in IO speed; I did not test overall DB performance. Lastly, I understand the /dev/shm issue, but even with the default configuration I'm getting 64GB allocated to Oracle, which is far more than is needed to test sequential IO; in fact I can get better results by using less RAM.
To answer your questions, I am testing using Oracle IO Calibration, which is an IO testing feature of the Oracle DB similar to the standalone Oracle Orion tool. I also performed a few tests using IOMeter, but found that the Linux version of that product was not giving me consistent data. The overall trend was the same, however: IO on the Linux version is far lower than on the same hardware running Windows. The system is functioning very well, so I assume that everything has been installed correctly, but I do not think that it was installed optimally - thus my cry for help.
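For anyone wanting to reproduce the measurement, IO Calibration is driven by DBMS_RESOURCE_MANAGER.CALIBRATE_IO; a minimal sketch (the disk count matches the setup above, the 10ms latency target is an assumption):
SET SERVEROUTPUT ON
DECLARE
  l_max_iops PLS_INTEGER;
  l_max_mbps PLS_INTEGER;
  l_latency  PLS_INTEGER;
BEGIN
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
    num_physical_disks => 28,
    max_latency        => 10,
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_latency);
  DBMS_OUTPUT.PUT_LINE('max_iops='||l_max_iops||' max_mbps='||l_max_mbps||' latency='||l_latency);
END;
/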
    I am so surprised that Linux is showing slow IO!

  • Data Retrieval Speed in Oracle Spatial vs. ESRI ArcSDE

I would appreciate any opinions regarding data retrieval performance between Oracle Spatial and ESRI ArcSDE. Would an end-user (using ESRI software) experience significant differences in data retrieval speed depending on how the data were stored in Oracle (MDSYS.SDO_GEOMETRY versus ESRI Binary/Blob formats)? Knowing that the ESRI binary formats are tailored to their software front-end apps (ArcGIS, ArcMap, ArcCatalog, and ArcInfo), wouldn't this be a "non-issue" until the spatial dataset gets "large", and even then, wouldn't performance be (almost) equal if the spatial indexes were created properly?
Thanks for your inputs,
Bruce

John,
You can't do that type of query in SQL from SQL*Plus using SDEBINARY. However, you can perform spatial queries in ArcMap if you are using SDEBINARY. You can use the query builder to perform point-in-polygon type queries.
Hope that helps.
For my two cents, I think SDO_GEOMETRY gives you a more robust database to work with, because you have the added power of Oracle Spatial functions. If you are using SDEBINARY you are limited to only what you can do through ArcGIS. If you are concerned more about performance than accessibility, especially with a large number of users, then SDEBINARY might be the better choice.
I love Oracle Spatial and am hoping that the performance issue will not be a serious one when we start putting ArcIMS-developed apps into production.
Dave
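For reference, a point-in-polygon query against SDO_GEOMETRY in plain SQL looks something like this (table, column, and key names are invented; SDO_RELATE requires a spatial index on the first geometry):
SELECT w.well_id
FROM   wells w, parcels p
WHERE  p.parcel_id = 123
AND    SDO_RELATE(p.geom, w.pt, 'mask=CONTAINS') = 'TRUE';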

  • Oracle Imp/exp

    hi,
I am trying to import emp.dmp, which I created with the exp utility (exp vtprod/vtprod file=emp.dmp tables=(EMP) log=exp-emp.log), using the following command:
imp vtprod/vtprod fromuser=vtprod touser=vtprod file=emp.dmp tables=(EMP) ignore=y commit=y log=imp-emp.log
Question: the data is getting appended to the table EMP. I want to truncate the table and then insert; I don't want to append the data. What needs to be done during the import?
Thanks in advance
    Thanks in Advance

Hi,
The conventional Import utility has no option that will first truncate the table and then reinsert the data. You need to do it manually before invoking Import.
You can check this with Import's help:
    You can let Import prompt you for parameters by entering the IMP
    command followed by your username/password:
         Example: IMP SCOTT/TIGER
    Or, you can control how Import runs by entering the IMP command followed
    by various arguments. To specify parameters, you use keywords:
         Format:  IMP KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
         Example: IMP SCOTT/TIGER IGNORE=Y TABLES=(EMP,DEPT) FULL=N
                   or TABLES=(T1:P1,T1:P2), if T1 is partitioned table
    USERID must be the first parameter on the command line.
    Keyword  Description (Default)       Keyword      Description (Default)
    USERID   username/password           FULL         import entire file (N)
    BUFFER   size of data buffer         FROMUSER     list of owner usernames
    FILE     input files (EXPDAT.DMP)    TOUSER       list of usernames
    SHOW     just list file contents (N) TABLES       list of table names
    IGNORE   ignore create errors (N)    RECORDLENGTH length of IO record
    GRANTS   import grants (Y)           INCTYPE      incremental import type
    INDEXES  import indexes (Y)          COMMIT       commit array insert (N)
    ROWS     import data rows (Y)        PARFILE      parameter filename
    LOG      log file of screen output   CONSTRAINTS  import constraints (Y)
    DESTROY                overwrite tablespace data file (N)
    INDEXFILE              write table/index info to specified file
    SKIP_UNUSABLE_INDEXES  skip maintenance of unusable indexes (N)
    FEEDBACK               display progress every x rows(0)
    TOID_NOVALIDATE        skip validation of specified type ids
    FILESIZE               maximum size of each dump file
    STATISTICS             import precomputed statistics (always)
    RESUMABLE              suspend when a space related error is encountered(N)
    RESUMABLE_NAME         text string used to identify resumable statement
    RESUMABLE_TIMEOUT      wait time for RESUMABLE
    COMPILE                compile procedures, packages, and functions (Y)
    STREAMS_CONFIGURATION  import streams general metadata (Y)
    STREAMS_INSTANTIATION  import streams instantiation metadata (N)
    The following keywords only apply to transportable tablespaces
    TRANSPORT_TABLESPACE import transportable tablespace metadata (N)
    TABLESPACES tablespaces to be transported into database
    DATAFILES datafiles to be transported into database
    TTS_OWNERS users that own data in the transportable tablespace set
Import terminated successfully without warnings.
Note: DESTROY=Y is not for this purpose; it is basically for the datafile reuse option.
This feature (truncating the table before loading) was introduced with Data Pump import in Oracle 10g.
You can go through this:
    http://www.oracle-base.com/articles/10g/OracleDataPump10g.php
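For reference, the two approaches side by side (directory and file names are placeholders):
-- conventional imp: truncate manually first
SQL> TRUNCATE TABLE emp;
$ imp vtprod/vtprod file=emp.dmp tables=(EMP) ignore=y commit=y
-- 10g Data Pump: impdp can truncate for you
$ impdp vtprod/vtprod directory=DATA_PUMP_DIR dumpfile=emp.dmp tables=EMP table_exists_action=truncate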
    Regards,
    Navneet

  • Oracle Imp running so slow

    Hello!
We are currently migrating an Oracle 10g database from an IBM P5 machine with AIX 5.3 to an IBM P730 machine with AIX 6.1. We are already in the process of importing the data onto the new machine, but imp is very slow, unlike the exp process, which took less than 3 hours.
As of now, we still can't finish importing our data.
    Does anybody know what we should do to resolve this?
    Thanks so much!

Since both the source and target Oracle versions are 10 or greater, it should be faster to use Data Pump expdp/impdp. Try PARALLEL set to at most 2x the number of CPUs you want to use for data movement. Make sure you specify at least that many dump files in your expdp command, or use a wildcard.
    I haven't used exp/imp for quite some time, so I can't help much there.
    Dean
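A sketch of what Dean describes (schema name, directory, and parallel degree are placeholders; the %U wildcard generates one dump file per parallel worker):
expdp system/manager schemas=app dumpfile=app_%U.dmp directory=DATA_PUMP_DIR parallel=8 logfile=expdp_app.log
impdp system/manager schemas=app dumpfile=app_%U.dmp directory=DATA_PUMP_DIR parallel=8 logfile=impdp_app.log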

  • Speed of oracle blob access

I have a simple table with an id, name, and blob field.
If I load a thousand records into this table and then query for all thousand records, it takes about two hundred seconds. Why so slow?
If I drop the blob field from the table, the time is only a few seconds.

    1. How big is the blob?
2. Talk to someone who tunes Oracle databases. There are many ways to optimize table organization in Oracle.
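A quick way to answer question 1 yourself, assuming the blob column is named blob_col:
SELECT MIN(DBMS_LOB.GETLENGTH(blob_col)),
       AVG(DBMS_LOB.GETLENGTH(blob_col)),
       MAX(DBMS_LOB.GETLENGTH(blob_col))
FROM   my_table;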
    Peace,
    Cameron Purdy
    Tangosol Inc.
    Tangosol Coherence: Clustered Coherent Cache for J2EE
    Information at http://www.tangosol.com/
    "Calvin R. Smith" <[email protected]> wrote in message
    news:3c3681f8$[email protected]..
    I have a simple table with an id, name and blob field.
    If I load a thousand records into this table and then query for all thousand
    records it takes about two hundred seconds
    Why so slow?
    if I drop the blob field from the table the time is only a few seconds

  • How to exclude create oracle job when during oracle imp

    Hi Expert,
I would like to know how to exclude the creation of Oracle jobs during an Oracle import. It is a schema export. Thanks.
Regards
    Liang

Oracle attempts to reimport job definitions as well. However, if you have an existing job with the same JOB_ID, the job creation fails (as there is a unique constraint on it).
So, one "workaround" is to precreate dummy jobs before the import (which also means that the database account must be created in advance). To ensure that the JOB_ID is the same, you may have to keep incrementing the JOB_ID sequence; see the sketch below.
    Hemant K Chitale
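A sketch of that sequence-incrementing idea (the target id 99 is invented; DBMS_JOB draws ids from an internal sequence, so submitting and removing dummy jobs advances it):
DECLARE
  l_job BINARY_INTEGER;
BEGIN
  LOOP
    DBMS_JOB.SUBMIT(l_job, 'NULL;', SYSDATE, NULL);  -- burn one job id
    DBMS_JOB.REMOVE(l_job);                          -- discard the dummy
    COMMIT;
    EXIT WHEN l_job >= 99;  -- highest JOB id in the source database
  END LOOP;
END;
/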

  • ORACLE imp

    1.How do you import into a preferred tablespace?
    or
    2.How do you move tables to a different tablespace?
    Scenario:
    I exported Scott's schema with..
    "E:\>EXP system/manager owner=scott file=e:\expscott.dmp"
    I imported into Peter's schema with..
    "E:\>imp system/manager file=e:\expdat.dmp fromuser=scott touser=peter"
Peter's default tablespace is USERS, but all the imported tables went to SYSTEM.
What parameter in the imp string is used to force the tables into the default USERS tablespace?
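(For reference: imp has no parameter to force a target tablespace. The usual workaround is to remove the user's quota everywhere else, so the CREATE TABLE statements fall back to the default tablespace; a sketch:
ALTER USER peter QUOTA 0 ON system;
REVOKE UNLIMITED TABLESPACE FROM peter;
ALTER USER peter QUOTA UNLIMITED ON users;
Then drop the misplaced tables and re-run the import.)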

When exporting the data, it displays error messages:
ORA-09004
EXP-00008
ORA-01004
But there are no problems with Pentium III computers.

  • What are the difference between Oracle IMP, SQL*Loader and Data Pump

It's hard to decide which should be used for a flat-file import. Can anybody give some suggestions, or are there guidelines for choosing among these?
    Thanks

You might want to take a look at this. It should answer all of your questions.
    Tom
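For flat files specifically, only SQL*Loader reads them directly; imp and impdp read their own dump formats. A minimal sqlldr sketch (file, table, and column names are invented):
-- emp.ctl
LOAD DATA
INFILE 'emp.dat'
APPEND INTO TABLE emp
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(empno, ename, sal)
-- then run:
sqlldr vtprod/vtprod control=emp.ctl log=emp.log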

  • Speed Oracle forms (web based) versus client server

Two years ago we tested Forms 9i.
Everything went very smoothly:
We had no problems converting our old forms to 9i.
We installed a new database on a new server and an application server on a second server (both servers had 2 CPUs and 2GB of memory).
In short, everything worked perfectly ... except the speed.
Oracle Forms (web-based) is a lot slower than client/server forms on a LAN, so we kept developing client/server.
We just use forms on our own local network (100Mbit), but we use a lot of triggers in our forms (perhaps the reason for the poor performance?).
Has anything changed?
Is it now possible to use web-based forms that are at least as fast as client/server forms?

OK, I agree: changing the form (more views, less post-query, more PL/SQL in the database) will improve performance.
But then I would have to recreate every form (> 1000), so converting c/s to web is not just a recompile.
We don't use images in our forms, and the database is connected to the application server with a gigabit line. We ran tests where even the client was on gigabit, but c/s remains faster. We did the test with just 1 client, and we have never experienced a bottleneck on the network using c/s (even with post-query on millions of records).
So 1 database server (2 Xeon CPUs, 2GB RAM), 1 app server (2 Xeon CPUs, 2GB RAM), and 1 client (Xeon workstation, 512MB RAM) using web forms on a gigabit network is slower than the same client using forms (c/s) without an app server. => A lot more hardware for less performance?
In short, my questions: is the latest version of Forms faster than the previous version?
Can web forms have the same performance as client/server?

  • Suppress Redo in an imp Process in a 9i database

    Hi,
I want to import a 1 terabyte table. Oracle imp looks like it will take 12 days, so I want to suppress the generation of redo to increase speed.
How can I achieve this in a 9i database?
    thanks a lot
    Wolle

Change the table to NOLOGGING, which reduces the amount of redo generated. Follow the other options provided already.
Create indexes in multiple sessions with NOLOGGING and PARALLEL to speed up the process; a sketch follows.
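A sketch of those two suggestions (table, index, and column names and the parallel degree are placeholders):
ALTER TABLE big_tab NOLOGGING;
CREATE INDEX big_tab_ix1 ON big_tab (col1) NOLOGGING PARALLEL 8;
ALTER INDEX big_tab_ix1 LOGGING NOPARALLEL;  -- reset attributes after the build
Note that conventional-path inserts (which imp uses) still generate redo even on a NOLOGGING table; the attribute mainly pays off for direct-path operations such as the index builds.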

  • Is ORACLE slower on Windows then on Linux ?

    Hi,
I'm supposed to choose the platform for Oracle ... Linux or Windows 2008 Server ...
Once we installed Oracle on Windows 2003 Server and it was really slow, like 200-300% slower ... we did a simple default install.
Does anyone know if it is a fact that Oracle runs slower on the Windows platform than on any other? Any experience with that? If we are talking about 5-10%, then this is OK and I'll stick with Windows ...
Because I know how to deal with Windows much better than with Linux administration, installation, and other stuff ...
Thank you...
    Kris

burleson wrote:
Hi Charles,
"Installing a virus scanner on the server, especially if it is permitted to scan program and data files used by Oracle."
Ha, that's a good one! That's the FIRST thing I check!
Also, I've seen high-demand screensavers (fractals) clobber Oracle performance . . .
"Oracle on Windows uses a thread model"
That's actually one of the few "positives" about running Oracle on Windows . . .
"rather than providing a performance comparison between Linux and Windows."
Fine, try here:
www.dba-oracle.com/oracle_tips_linux_oracle.htm
    "Roby Sherman performed an exhaustive study of the speed of Oracle on Linux and Microsoft Windows using identical hardware. Sherman currently works for Qwest Communications in the data technologies group of IT architecture and transversal services. He is a recognized expert in designing, delivering, tuning, and troubleshooting n-tier systems and technology architecture components of various size and complexity based on Oracle RDBMS technology.
    Sherman concludes that Linux was over 30% faster:
    "From perspective of performance, RedHat Linux 7.2 demonstrated an average performance advantage of 38.4 percent higher RDBMS throughput than a similarly configured Windows 2000 Server in a variety of operational scenarios." Sherman also notes: "Another point of contention was Window's lack of consistency between many database administrative functions (e.g., automated startup, shutdown, service creation, scripting) compared to what DBAs are already used to in many mainstream UNIX environments (e.g., Solaris and HP-UX)."
"Mr. Burleson's comments seem to be out of line."
No, I'm right on the money.
    *I've seen enough companies get burned by Windows (unplanned outages, data corruption) to speak with confidence.*
    Bottom line, it's malfeasence to recommend any OS platform for a production database application that has an unsavory history.
    It does not take a genius to figure out that an OS with this kind of history should not be used with any data that you care about:
    - "blue screen of death"
    - legandary security vulnerabilities
    - memory leaks you could drive a truck through
    - Patching weekly
    - The world's most incompetant tech support
    I'm not alone in this opinion:
    thetendjee.wordpress.com/2007/01/22/oracle-10202-sucks-on-windows/
www.google.com/search?&q=%22windows+sucks
Mr. Burleson,
    Good point on the screen saver.
    Comparing Windows 2000, released in March 2000, with Red Hat 7.2 which was released in ... sorry, forgot the date, but I put in a couple servers running that release of Linux into service in 2001. I was kind of hoping that you would have an article which pits Windows 3.1 against the original Linux release. :-)
    You might be happy to know that things have changed significantly since Windows 3.1, and also significantly since Windows 2000. There were many improvements in Windows 2003 over Windows 2000 (I happened to read a couple large books on the subject of Windows 2003 Server a couple years ago). This page contains a couple links that you may want to browse:
    http://www.microsoft.com/windowsserver2003/evaluation/performance/default.mspx
    Windows sucks... I have heard that the Microsoft Windows operating system is on barcode scanners, phones, and even car navigation systems. I had no idea that vacuums also utilized Windows, thanks for the heads-up:
    http://www.patentstorm.us/patents/6289552/description.html
    http://advertising.microsoft.com/BestVacuum
    Interesting Google search of the day: define:malfeasence
    http://www.google.com/search?hl=en&q=define%3Amalfeasence&aq=f&oq=&aqi=
    "Did you mean: define:malfeasance
    No definitions were found for malfeasence"
    define:malfeasance
    http://www.google.com/search?hl=en&q=define%3Amalfeasance&spell=1
    "Definitions of malfeasance on the Web:
    •wrongful conduct by a public official
    wordnetweb.princeton.edu/perl/webwn
    •The expressions misfeasance and nonfeasance, and occasionally malfeasance, are used in English law with reference to the discharge of public obligations existing by common law, custom or statute.
    en.wikipedia.org/wiki/Malfeasance
    •wrongdoing; Misconduct or wrongdoing, especially by a public official that causes damage
    en.wiktionary.org/wiki/malfeasance"
    define:unsavory
    http://www.google.com/search?hl=en&q=define%3Aunsavory
    "Definitions of unsavory on the Web:
    •morally offensive; ‘an unsavory reputation’; ‘an unsavory scandal’
    •distasteful: not pleasing in odor or taste
    wordnetweb.princeton.edu/perl/webwn
    •Disreputable, not respectable, of questionable moral character
    en.wiktionary.org/wiki/unsavory"
    Regarding blue screen of death, those are rather rare with versions of Windows since the release of Windows 2000. The last blue screen that I saw on a server happened when an ECC memory module started experiencing multiple hardware bit errors which could not be corrected by the ECC memory logic. The server hardware forced the blue screen to prevent data corruption. The previous blue screen? A Windows NT 4.0 Server (circa 1996) when a new RAID controller was added to the server.
    Regarding legandary security vulnerabilities, well I don't think that it quite qualifies as legendary. However, given the wide usage of Windows (particularly by people just starting to learn to use computers), there will very definitely be more security issues to contend with - Windows often offers a larger attack surface than other operating systems. Yes, there have been many security problems over the years.
    Regarding memory leaks you could drive a truck through, I have to say that in a server environment I have not experienced that problem. On a desktop environment, I would say that it is typically the fault of poorly written applications which cause the memory leaks. Windows will often clean up after the poorly written applications when they are closed.
    Regarding patching weekly, yes there are typically frequent security and bug fixes released for the Windows platform, but I suggest that if someone is patching weekly on a server, there is probably a larger problem to be addressed.
    Regarding the world's most incompetant tech support, I am not sure that I follow your logic:
    define:incompetant
    http://www.google.com/search?hl=en&q=define%3Aincompetant
    "Did you mean: define:incompetent
    No definitions were found for incompetant.
    Suggestions:
    - Make sure all words are spelled correctly.
    - Search the Web for documents that contain 'incompetant'"
    As previously stated, consideration should be given to the operating system which is most familiar to the OP.
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • Search in japanese using Oracle text

Hello,
I have a lot of Japanese text documents (about 500) that I want to be able to search. How long does building the document search in Oracle 9i take? How effective is it? Where do I get started, e.g. in Enterprise Manager, Application Document?
Which version of Oracle supports this effectively?
Is Oracle Text built into Oracle, or does it have to be coded in Oracle as SQL?
Thanking in advance,
vimal

Oracle Text is part of the database. No extra installer is needed.
    Let me give you some quick links to get up to speed with Oracle Text.
    - Quick start: http://otn.oracle.com/products/text/x/Samples/Quick_Start/index.html
    - Sample code for a simple search application: http://otn.oracle.com/docs/products/oracle9i/doc_library/release2/text.920/a96517/acase.htm#620714
    - Example of multilingual search: check for the Unicode presentation from otn.oracle.com/products/text
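A minimal sketch of a Japanese-aware Text index and query (table and column names are invented; JAPANESE_VGRAM_LEXER is one of the Japanese lexers shipped with 9i):
BEGIN
  CTX_DDL.CREATE_PREFERENCE('my_jp_lexer', 'JAPANESE_VGRAM_LEXER');
END;
/
CREATE INDEX docs_text_idx ON docs (text)
  INDEXTYPE IS CTXSYS.CONTEXT
  PARAMETERS ('LEXER my_jp_lexer');
SELECT id FROM docs WHERE CONTAINS(text, :search_term) > 0;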

  • Select 10000 rows from Oracle in shortest way

Looking for a good solution for transferring more than 10000 rows from Oracle to MySQL using only JDBC. The connection is mostly slow, about 1 row per second; my rights on Oracle are just for SELECT, UPDATE, and INSERT operations. I am thinking I could divide the 10000 rows into 10 parts and transfer them in concurrent threads and connections. Any suggestion that can help me solve this problem is welcome.

"See java.sql.Statement.setFetchSize(), e.g."
I'm not sure that setFetchSize will have any impact on your performance. It is mostly a suggestion to the JDBC driver, and while in theory the suggestion may help, in practice I have not seen any significant benefit when using Oracle.
    To test the speed of your fetch, try issuing your select, then loop through the resultset, getting all the columns, just don't do anything with them. You could try changing the fetchSize to see if it does have an impact in your specific architecture.
Now that you know how fast you can 'get' all the rows, how different is it from the 1 second per row that you are seeing when selecting and inserting? (I'm assuming you haven't done this yet; sorry if you already have.)
Assuming that you can select all rows in less than the 1 second per row of your original test, there are several things you can do to increase the speed of inserts. Try using a PreparedStatement if you are not already using one. Try using .addBatch() and .executeBatch(). You can try changing the count of rows that are inserted on each executeBatch() command; a sketch follows at the end of this reply.
If all your inserts go into a single table, then I don't think multiple threads will help you, and they may in fact hurt you because you could run into locking issues in the MySQL database. If you are inserting into multiple tables, then it is possible that multi-threading, done correctly, may provide some increase in overall speed.
Oracle can be accessed with 3 different JDBC drivers. If you are using Oracle 8i or 9i, try using the OCI8 driver instead of the thin driver. The OCI8 driver may provide some performance benefits when doing mass inserts in the older versions of Oracle. Do not use the JDBC-ODBC bridge, as that will give the worst performance.
If I were going to move data between two different databases and the vendor did not provide a utility to do that specifically, I would use the first DB vendor's unload utility to unload into a text file, and the second DB vendor's load utility to load from that text file. These utilities have been optimized for speed far beyond anything that will ever be available to you as a Java programmer using JDBC. That isn't always possible given the architecture of your solution, but it is always preferable.
    Best of luck to you.
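A sketch of the batched-insert suggestion above (table and column names and the batch size are invented; mysqlConn is the MySQL connection, rs the ResultSet of the Oracle SELECT):
mysqlConn.setAutoCommit(false);
PreparedStatement ps = mysqlConn.prepareStatement(
    "INSERT INTO target_tab (id, name) VALUES (?, ?)");
int n = 0;
while (rs.next()) {
    ps.setLong(1, rs.getLong(1));
    ps.setString(2, rs.getString(2));
    ps.addBatch();
    if (++n % 500 == 0) ps.executeBatch();  // flush every 500 rows
}
ps.executeBatch();  // flush the remainder
mysqlConn.commit();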

  • To run a piece of PL/SQL code,  in TT  is much slower than   in ORACLE.

A piece of PL/SQL code, about 1500 lines; the package is named rtmon_event and the function in it is named rtmon_SHOLD_CUS_RPT.
The PL/SQL code runs in Oracle.
Now I want to get faster speed, so I am considering TimesTen (TT).
I rewrote the PL/SQL code using TT grammar.
But the speed in TT is much slower than the speed in Oracle:
in Oracle, running the PL/SQL code needs 80 seconds; in TT, it needs 183 seconds.
How can I resolve the problem?
BTW: there are some joins of 2 or 3 tables in rtmon_event.rtmon_SHOLD_CUS_RPT, and some complex DML in it.
The run method is:
declare
  a number;
begin
  a := rtmon_event.rtmon_SHOLD_CUS_RPT;
end;
/
    Thanks a lot.

    The easiest way to view a plan is to use ttIsql and issue the command:
    explain SQL-statement;
    For example:
explain select a.col1, b.col2 from taba a, tabb b where a.key = b.key;
See the documentation that 'hitgon' pointed you to for help interpreting the plans.
    Chris
