EXP/IMP with NCLOB/BLOB

Hello all,
I have to move a schema from one database to another that has been recreated to match the first (same schema name, same rights, same tablespaces, same everything). Both are Oracle Database 11.2.0.2 Enterprise Edition, 64-bit, on 64-bit Linux. Is there anything I should take into account, as there are objects with NCLOB and BLOB types? I've tried the export, and it completed without any problem, but will there be any issue when I import it into the second database?
All this is to move the database from the physical server it is on now to a virtual one (OVM).
Thanks for your help!!

It would probably be better if you used Data Pump (expdp/impdp) instead of original exp/imp. It supports more of the object types and features you may have used.
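For example, a minimal Data Pump round trip for a single schema might look like this (a sketch; the directory object, schema name, and file names are placeholders, and the directory object must exist on the server of each database):

CREATE DIRECTORY dpump_dir1 AS '/u01/app/oracle/dpump';

expdp system/password SCHEMAS=myschema DIRECTORY=dpump_dir1 DUMPFILE=myschema.dmp LOGFILE=myschema_exp.log
impdp system/password SCHEMAS=myschema DIRECTORY=dpump_dir1 DUMPFILE=myschema.dmp LOGFILE=myschema_imp.log

Both utilities handle NCLOB and BLOB columns as ordinary table data; the usual caveat is to make sure the database and national character sets match between source and target.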
Dean

Similar Messages

  • Exp/Imp with tablespace autocreate?

    Hello dear community,
    I've got a question about backup with exp/imp. It's not about RMAN, but nevertheless I hope it's the right board.
    Is there a command to tell imp to automatically create a specified tablespace from a dumpfile?
    For exp I use:
    exp.exe user/pass tablespaces=test file=FILE.DMP
    For imp I use:
    imp.exe user/pass tablespaces=test file=FILE.DMP
    Now, if I drop the tablespace between the exp and the imp, imp can't restore it; I have to create the specified tablespace manually first. It would be nice if there were a way for imp to create the tablespace when it is not present.
    Are there maybe parameters like "autocreate" and "use datafile=..." and so on?
    Thanks for help,
    best regards,
    Ronny

    Transportable Tablespaces is a separate feature whereby you take a physical
    copy of the tablespace datafiles and "plug" them into the target database.
    That is different from taking a logical export dump.
    With export dumps, the only way to get the tablespace created is with a FULL
    export and import. The CREATE TABLESPACE commands are written into
    the dmp file only when a FULL export is done. At import time, you can choose
    to import a specific schema, in which case the CREATE TABLESPACE
    commands are not executed by import (the tablespaces must be precreated). So
    CREATE TABLESPACE is executed only when you do a FULL import as well.
    Note that this (obviously) creates the same datafile names (including physical
    paths) and sizes as exist in the source database. That may or may not
    meet your requirements on occasion (e.g. a different filesystem structure, or
    different sizes planned in the target database).
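    If a FULL dump isn't an option, you can precreate the tablespace yourself before running imp; a minimal sketch (the datafile path and sizes are placeholders):

    CREATE TABLESPACE test
      DATAFILE 'C:\ORADATA\TESTDB\TEST01.DBF' SIZE 100M
      AUTOEXTEND ON NEXT 10M MAXSIZE 1G;

    After that, the imp.exe command above will find the tablespace in place.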

  • Tablespace exp imp 11G

    I have a problem.
    I want to imp file.dmp into a schema user.
    The schema user has the default tablespace USERS.
    I exported the schema user from my PC and named the file file.dmp.
    Then, when I imp it from another PC,
    I have prepared a new schema user whose default tablespace is tbspace.
    When I imp the data, I am surprised that my default tablespace tbspace is not used;
    the import instead puts everything into USERS.
    So my question is: how do I imp automatically into my default tablespace tbspace?
    I don't want to ALTER each table (and each index) into the new tablespace one by one.
    Thanks for your support.

    Exp/Imp with tablespace autocreate?
    imp user/pass tablespaces=dbtothis FULL=Y
    Why does it still not go into tablespace dbtothis?
    It still goes into tablespace USERS.
    Thanks for helping me.
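    Original imp has no remap option; a common workaround (a sketch, assuming the importing user is schemausernew and does not need UNLIMITED TABLESPACE) is to deny the user quota on USERS so that the objects fall back into the user's default tablespace, or to switch to Data Pump and use REMAP_TABLESPACE:

    REVOKE UNLIMITED TABLESPACE FROM schemausernew;
    ALTER USER schemausernew QUOTA 0 ON users QUOTA UNLIMITED ON tbspace;

    impdp system/password SCHEMAS=schemausernew REMAP_TABLESPACE=users:tbspace DIRECTORY=dpump_dir1 DUMPFILE=file.dmp

    Note that REMAP_TABLESPACE only works on Data Pump dumps; impdp cannot read a dump file written by original exp.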

  • Exp/Imp alternatives for large amounts of data (30GB)

    Hi,
    I've come into a new role where various test databases are to be 'refreshed' each night with cleansed copies of production data. They have been using the imp/exp utilities with 10g R2. The export process is OK, but what's killing us is the time it takes to transfer, unzip, and import 32GB .dmp files. I'm looking for suggestions on what we can do to reduce these times. Currently the import takes 4 to 5 hours.
    I haven't used datapump, but I've heard it doesn't offer much benefit when it comes to saving time over the old imp/exp utilities. Are 'Transportable Tablespaces' the next logical solution? I've been reading up on them and could start prototyping/testing the process next week. What else is in Oracle's toolbox I should be considering?
    Thanks
    brian

    Hi,
    > I haven't used datapump, but I've heard it doesn't offer much benefit when it comes to saving time over the old imp/exp utilities
    Data Pump will be faster for a couple of reasons. It uses direct path to unload the data. Data Pump also supports parallel processes, so while one process is exporting metadata, the other processes can be exporting the data. In 11g you can also compress the dumpfiles as you export (both data and metadata compression are available in 11g; I think metadata compression is available in 10.2). This removes your zip step.
    As far as transportable tablespaces go, yes, they are an option. There are some requirements, but if they work for you, all you will be exporting is the metadata and no data; the data is copied from the source to the target by way of the datafiles. One of the biggest requirements is that the tablespaces need to be read-only while the export job is running. This is true for both exp/imp and expdp/impdp.
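    An 11g export using parallelism and compression might look like this (a sketch; names are placeholders, and COMPRESSION=ALL requires the Advanced Compression option):

    expdp system/password SCHEMAS=app_user DIRECTORY=dpump_dir1 DUMPFILE=app_%U.dmp PARALLEL=4 COMPRESSION=ALL LOGFILE=app_exp.log
    impdp system/password SCHEMAS=app_user DIRECTORY=dpump_dir1 DUMPFILE=app_%U.dmp PARALLEL=4 LOGFILE=app_imp.log

    The %U substitution variable lets each parallel worker write to its own dump file.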

  • Export / import  exp / imp commands Oracle 10gXE on Ubuntu

    I have Oracle 10gXE installed on Linux 2.6.32-28-generic #55-Ubuntu, and I need some help on how to export / import the database with the exp / imp commands. The commands seem to be installed in the */usr/lib/oracle/xe/app/oracle/product/10.2.0/server/bin* directory, but I cannot execute them.
    The error message I got:
    No command 'exp' found, did you mean:
    Command 'xep' from package 'pvm-examples' (universe)
    Command 'ex' from package 'vim' (main)
    Command 'ex' from package 'nvi' (universe)
    Command 'ex' from package 'vim-nox' (universe)
    Command 'ex' from package 'vim-gnome' (main)
    Command 'ex' from package 'vim-tiny' (main)
    Command 'ex' from package 'vim-gtk' (universe)
    Command 'axp' from package 'axp' (universe)
    Command 'expr' from package 'coreutils' (main)
    Command 'expn' from package 'sendmail-base' (universe)
    Command 'epp' from package 'e16' (universe)
    exp: command not found
    Is there something I have to do ?

    Hi,
    You have not set the environment variables correctly.
    http://download.oracle.com/docs/cd/B25329_01/doc/install.102/b25144/toc.htm#BABDGCHH
    And of course that script has a small hiccup, so see
    http://ubuntuforums.org/showpost.php?p=7838671&postcount=4
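    In practice, for 10g XE on Linux, that usually means sourcing the environment script shipped with XE before calling exp (a sketch, assuming the default install location from the post above):

    . /usr/lib/oracle/xe/app/oracle/product/10.2.0/server/bin/oracle_env.sh
    exp user/pass file=mydump.dmp

    The script sets ORACLE_HOME and ORACLE_SID and puts the bin directory on the PATH, after which the shell can find exp.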
    Regards,
    Jari

  • Exp/Imp In Oracle 10g client

    Hi All,
    I want to take an export of one schema using the Oracle 10g client. I am new to Oracle 10g.
    In Oracle 9i, we could use the exp/imp commands for export and import from the client itself.
    I heard that in Oracle 10g, expdp/impdp can be run on the server only. Is there any possibility to take an exp/imp from the client as well?
    Please help me...
    Cheers,
    Moorthy.GS

    To add to the above, expdp can be used from an Oracle client via the NETWORK_LINK parameter:
    NETWORK_LINK
    Default: none
    Purpose
    Enables an export from a (source) database identified by a valid database link. The data from the source database instance is written to a dump file set on the connected database instance.
    Syntax and Description
    NETWORK_LINK=source_database_link
    The NETWORK_LINK parameter initiates an export using a database link. This means that the system to which the expdp client is connected contacts the source database referenced by the source_database_link, retrieves data from it, and writes the data to a dump file set back on the connected system.
    The source_database_link provided must be the name of a database link to an available database. If the database on that instance does not already have a database link, you or your DBA must create one. For more information about the CREATE DATABASE LINK statement, see Oracle Database SQL Reference.
    If the source database is read-only, then the user on the source database must have a locally managed tablespace assigned as the default temporary tablespace. Otherwise, the job will fail. For further details about this, see the information about creating locally managed temporary tablespaces in the Oracle Database Administrator's Guide.
    Restrictions
    When the NETWORK_LINK parameter is used in conjunction with the TABLES parameter, only whole tables can be exported (not partitions of tables).
    The only types of database links supported by Data Pump Export are: public, fixed-user, and connected-user. Current-user database links are not supported.
    Example
    The following is an example of using the NETWORK_LINK parameter. The source_database_link would be replaced with the name of a valid database link that must already exist.
    expdp hr/hr DIRECTORY=dpump_dir1 NETWORK_LINK=source_database_link DUMPFILE=network_export.dmp LOGFILE=network_export.log
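    The database link referenced above could be created on the connected database with something like this (a sketch; the link name, credentials, and TNS alias are placeholders):

    CREATE DATABASE LINK source_database_link
      CONNECT TO hr IDENTIFIED BY hr
      USING 'source_db_tns_alias';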

  • Database upgrade from 8i to 10g using exp/imp

    Dear Friends,
    Please provide the steps to upgrade from 8i to 10g using exp/imp.
    Thanks,
    Rathinavel

    Hi;
    Please also see cold backup option
    How to migrate from 8i to 10g to new server using cold backup [ID 742108.1]
    Also see:
    Upgrading from 8i to 10g with import utility
    http://searchoracle.techtarget.com/answer/Upgrading-from-8i-to-10g-with-import-utility
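    In outline, the exp/imp route is (a sketch; the usual rule is to export with the exp of the source version and import with the imp of the target version):

    exp system/password FULL=Y FILE=full8i.dmp LOG=full8i_exp.log

    run with the 8i utilities against the 8i database; then, after creating the empty 10g database and precreating its tablespaces:

    imp system/password FULL=Y FILE=full8i.dmp LOG=full8i_imp.log IGNORE=Y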
    Regards,
    Helios

  • Full Database Exp & Imp

    Hi,
    I am trying to Exp & Imp a full database. I am working on Oracle 9i & 10g on Solaris 9. I am not using Data Pump. Can anyone please help me with the following:
    1. I am performing the full export using SYSTEM user.
    1.a Does a Full export include (or backups up) the DATA DICTIONARY of the database?
    1.b Does a Full export include the backup of SYS and SYSTEM objects?
    I am using the following command to export
    exp system/system@testdb file=$HOME/testdbfullexp.dmp full=y statistics=none
    I have tried importing the FULL export into another database, and I did see that SYS and SYSTEM objects were also being imported (I got some errors regarding constraints and inconsistencies).
    I would like to ask what the ideal steps are to copy a database from DB1 to DB2 using exp and imp.
    Any information will be of a great help
    Thanks,
    Harris.

    1 a) No, as the data dictionary will be automagically recreated by implicit SQL when the target database is created.
    This means any non-dictionary objects under SYS will be lost.
    1 b) As above. SYSTEM, however, is a normal user.
    Any %SYS user will NOT be exported (CTXSYS, MDSYS, etc.).
    On import there will always be errors for SYSTEM, as SYSTEM is non-empty after initial database creation.
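    Given that, a typical DB1-to-DB2 copy with the original utilities looks roughly like this (a sketch; create the target database and precreate its non-default tablespaces first, and expect benign 'already exists' errors on SYSTEM's objects):

    exp system/password@db1 FULL=Y FILE=db1full.dmp LOG=db1_exp.log STATISTICS=NONE
    imp system/password@db2 FULL=Y FILE=db1full.dmp LOG=db1_imp.log IGNORE=Y

    IGNORE=Y tells imp to carry on loading rows into objects that already exist instead of stopping at the creation error.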
    Sybrand Bakker
    Senior Oracle DBA

  • Full database exp/imp  between RAC  and single database

    Hi Experts,
    we have a RAC database, Oracle 10gR2 with 4 nodes, on Linux. I am trying to duplicate the RAC database into a single-instance Windows database.
    Both databases are the same version. During the import, I needed to create 4 undo tablespaces to keep imp processing.
    How can I keep just one undo tablespace in the single-instance database?
    Does anyone have experience of exp/imp from a RAC database into a single-instance database to share with me?
    Thanks
    Jim
    Edited by: user589812 on Nov 13, 2009 10:35 AM

    Jim,
    > I also want to know: can we add exclude=tablespace on the impdp command for a full database exp/imp?
    You can't use exclude=tablespace with original exp/imp. It is for Data Pump expdp/impdp only.
    > I am very interested in your recommendation. But for a full database impdp, how do I exclude a table during a full database import? May I have an example for this case? I used expdp for a full database export, but I got an error in the expdp log: ORA-31679: Table data object "SALE"."TOAD_PLAN_TABLE" has long columns, and longs can not be loaded/unloaded using a network link
    Having LONG columns in a table means that it can't be exported/imported over a network link. To exclude this table, you can use the exclude expression:
    expdp user/password exclude=TABLE:"= 'SALES'" ...
    This will exclude all tables named sales. If you have that table in schema scott and then in schema blake, it will exclude both of them. The error that you are getting is not a fatal error, but that table will not be exported/imported.
    > The final message was:
    > Master table "SYSTEM"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
    > Dump file set for SYSTEM.SYS_EXPORT_FULL_01 is:
    > F:\ORACLEBACKUP\SALEFULL091113.DMP
    > Job "SYSTEM"."SYS_EXPORT_FULL_01" completed with 1 error(s) at 16:50:26
    Yes, the fact that it did not export one table does not make the job fail; it will continue exporting all the other objects.
    > I dropped the database that generated the expdp dump file,
    > then recreated a blank database and ran impdp again.
    > But I got lots of errors, such as:
    > ORA-39151: Table "SYSMAN"."MGMT_ARU_OUI_COMPONENTS" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
    > ORA-39151: Table "SYSMAN"."MGMT_BUG_ADVISORY" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
    > ...
    > ORA-31684: Object type TYPE_BODY:"SYSMAN"."MGMT_THRESHOLD" already exists
    > ORA-39111: Dependent object type TRIGGER:"SYSMAN"."SEV_ANNOTATION_INSERT_TR" skipped, base object type VIEW:"SYSMAN"."MGMT_SEVERITY_ANNOTATION" already exists
    > and the last line:
    > Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 2581 error(s) at 11:54:57
    Yes, even though you think you have an empty database, if you have installed any apps or anything, they may create tables that also exist in your dumpfile. If you know that you want the tables from the dumpfile and not the existing ones in the database, then you can use this on the impdp command:
    impdp user/password table_exists_action=replace ...
    If a table that is being imported already exists, Data Pump will detect this, drop the table, then create it; all of the dependent objects will then be created. If you don't use it, the table and all of its dependent objects will be skipped (which is the default).
    There are 4 options with table_exists_action
    replace - I described above
    skip - default, means skip the table and dependent objects like indexes, index statistics, table statistics, etc
    append - keep the existing table and append the data to it, but skip dependent objects
    truncate - truncate the existing table and add the data from the dumpfile, but skip dependent objects.
    Hope this helps.
    Dean

  • Running OMBPlus and EXP/IMP in mixed version environment

    OWB Mixed Environment Gurus
    Current environment:
    OWB Client: 10.1.0.2.0 on Windows XP Professional
    OWB Server side: 10.1.0.2.0 on UNIX (AIX 5.2)
    Repository: Oracle 9.2.0.4 on UNIX (AIX 5.2)
    UNIX Listener: 9.2.0.4 on UNIX (AIX 5.2)
    Runtime Repository: Oracle 9.2.0.4 on UNIX (AIX 5.2)
    I call this a mixed environment since my OWB stuff is 10g and my database stuff is 9.2.
    Issues:
    1- I can't get the command line exp.sh script to connect to the repository; it returns the famous 'ORA-12154, TNS:listener does not currently know of service requested in connect descriptor'. It looks like the 'owbsetenv.sh' script is changing the value of $ORACLE_HOME to point to the 10g areas. Could that then be causing the system to look for a 10g LISTENER, which doesn't exist since all my databases are 9.2.0.4?
    2- I have the same issue trying to run OMBPlus.sh.
    I am ultimately trying to set up a promotion process using the UNIX command line programs (exp/imp and OMBPlus) to get objects from the TEST environment into the PRODUCTION environment which is a separate repository and target schema on a different machine.
    Any advice on how to successfully operate in this 'mixed' environment is most welcomed.
    Many thanks!
    Gary

    Well it looks like I did it again!
    Total brain fart.
    The problem turned out to be that I wasn't specifying the entire SERVICE_NAME for the repository database. I had been leaving off the domain information. It must be a habit from not having to use it in the TNSNAMES.ORA files.
    I was able to complete my test export and connect to OMBPlus, and will now try my test import.
    Sorry to clutter the forum but if it helps anyone else with the same affliction I seem to have frequently, I guess that's a small reward.
    Until next time.
    Gary

  • Exp/imp of tablespace

    hi all
    could we use exp/imp of a tablespace on the same database?
    Say we have a list of objects in a tablespace and we want to shrink the tablespace size (as there are not many transactions). Can we take an export of this tablespace only, drop the tablespace, create a new tablespace with the same name but a smaller size, and import the objects back from the export dump file into this tablespace?
    thanks
    kedar

    So basically you want to shrink the tablespace but you've got objects scattered around in there and it won't coalesce.
    There are several ways to do this, not least the one you've mentioned.
    An alternative would be to rebuild the objects into a new (smaller) tablespace. If you haven't got the disk space to accommodate both a new tablespace and the existing one then, yes, export/import will do the job.
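    Per object, the rebuild alternative would look something like this (a sketch, assuming a new, smaller tablespace named small_ts; note that moving a table invalidates its indexes, so they need a rebuild in any case):

    ALTER TABLE scott.emp MOVE TABLESPACE small_ts;
    ALTER INDEX scott.emp_pk REBUILD TABLESPACE small_ts;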

  • Migrating from 9i to 10g through exp/imp

    Hi,
    We need to migrate a database on 9i on HP-UX to 10g on IBM AIX. Can you please tell me whether I can export the data in 9i with the 9i export utility and import it into 10g using the 10g Data Pump utility for a faster import?
    We don't have the 10g software installed on the HP-UX platform.
    And what other alternatives do we have to reduce the amount of time for the migration, as it's a critical production database?
    Thanks,
    Jayanta

    I believe that Justin is correct that if you use exp to unload the data you will need to use imp and not impdp to reload the data.
    To speed up the cross-platform exp/imp process I suggest you consider running multiple concurrent exp/imp jobs. You can generate a TABLES= list into a spool file using SQL. You can give each large table its own exp job and batch the small tables together.
    Depending on how your database objects are organized, you might also be able to export by owner, or by a combination of owner and TABLES= exports.
    You can use a FULL=Y export with the ROWS=N option to grab the public synonyms, non-owning users, packages, etc. not brought in by the prior exp/imp.
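    Generating the table list in SQL*Plus could be as simple as this (a sketch; the owner name is a placeholder):

    SET PAGESIZE 0 FEEDBACK OFF HEADING OFF
    SPOOL tables.lst
    SELECT table_name FROM dba_tables WHERE owner = 'APPOWNER' ORDER BY table_name;
    SPOOL OFF

    You can then split tables.lst into several TABLES= lists, one per concurrent exp job.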
    HTH -- Mark D Powell --

  • Exp/imp

    Hi
    Regarding schema-level export/import:
    If I take an export of one schema and import it with standard exp/imp, I find that the user is not created and I have to create the user prior to the import.
    However, if I take an export of one schema with Data Pump and import it with impdp,
    I find that the user is created and I don't need to create it prior to the import.
    What is the reason for this?

    If you don't want to create the user prior to the import, then drop only the objects of that schema (rather than the user itself) before the refresh starts.
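    The underlying reason: a Data Pump schema-mode export taken by a privileged user includes the user definition, while an original exp owner-level dump does not. If you stay with exp/imp, pre-creating the user is just (a sketch; name, password, and tablespaces are placeholders):

    CREATE USER appuser IDENTIFIED BY secret
      DEFAULT TABLESPACE users TEMPORARY TABLESPACE temp
      QUOTA UNLIMITED ON users;
    GRANT CONNECT, RESOURCE TO appuser;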

  • Exp imp of full db

    I want to take a full export and import it into a new database. I tried one import but ended up with errors...
    What would be the command for a full database exp and imp, with data, that finishes without warnings?

    . exporting bitmap, functional and extensible indexes
    . exporting posttables actions
    . exporting triggers
    EXP-00056: ORACLE error 1422 encountered
    ORA-01422: exact fetch returns more than requested number of rows
    ORA-06512: at "XDB.DBMS_XDBUTIL_INT", line 55
    ORA-06512: at line 1
    EXP-00056: ORACLE error 1422 encountered
    ORA-01422: exact fetch returns more than requested number of rows
    ORA-06512: at "XDB.DBMS_XDBUTIL_INT", line 55
    ORA-06512: at line 1
    EXP-00000: Export terminated unsuccessfully

  • Exp/Import database schema vs Ch 10 Exp/Imp Content

    Hi, I'm using Portal 10.1.2.02. I'm a DBA tasked with migrating a Dev portal into Test, then Production.
    There seem to be a number of ways to achieve this, and I was hoping to clarify which is best for me.
    I've run through note 330391.1 (copying the Portal schema), which details running a perl script to export and import onto a newly installed server. This process worked well.
    Chapter 10 of the Portal guide details a fairly complex process of creating transport sets, migrating these over, and importing.
    My question is: if I want the entire Portal copied, is there any difference between these processes? i.e. do you end up with the same result?
    thanks in advance of any advice :-)

    The cloning model is not a rerunnable model; it is not granular, and conditional migration is not possible.
    You can use cloning only if you want to take a copy of the entire setup and rewire it to a new midtier.
    The Portal exp/imp model lets you move Portal objects in a granular, conditional, and rerunnable way.
    Apart from that, it comes with readily available prechecks to tell you what is going wrong during the process.
