2 questions - exp/imp and coltype

I have two questions - both unrelated (Oracle 9.2.0.5 running on AIX 5.3):
1. I did a schema export in database DB1 of a user USR1 that has a bunch of tables with data. Some of the tables are in tablespace TS1 whereas some are in TS2. Export is successful.
Now, I want to use this .dmp file to import the objects into another oracle database DB2. Having created the user USR1 in DB2, I want to import all the objects into USR1. But DB2 has only one tablespace TS1.
So, how do I successfully import all the objects into TS1 (even though some of them existed in TS2 of the source database)?
2. Based on the data in a column of a table, I want to determine whether each value contains alphabetic characters or is numeric only. How do I do this?
For example
SELECT ENAME FROM EMP;
SCOTT
BRIAN2
MATHEWS
GEORGE3
5555
I am looking for an output something like :
SCOTT ALPHA
BRIAN2 ALPHANUMERIC
MATHEWS ALPHA
GEORGE3 ALPHANUMERIC
5555 NUMERIC

Hi,
1) No problem. Give USR1 in DB2 a default tablespace of TS1 and no quota on any other tablespace; imp will then create the objects that lived in TS2 in the default tablespace instead (as long as nothing such as LOBs or partitions ties them to the original tablespace). See the sketch below.
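A minimal sketch of that setup (run as a DBA in DB2; the user and tablespace names are from your question, the file names are hypothetical):
SQL> ALTER USER usr1 DEFAULT TABLESPACE ts1;
SQL> ALTER USER usr1 QUOTA UNLIMITED ON ts1;
SQL> REVOKE UNLIMITED TABLESPACE FROM usr1;
Then run the import as usual:
imp system/manager file=usr1.dmp log=imp_usr1.log fromuser=usr1 touser=usr1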
2) Maybe the code below helps you a little ...
SQL> create table emp (ename varchar2(20));
Table created.
SQL> insert into emp values ('SCOTT');
1 row created.
SQL> insert into emp values ('BRIAN2');
1 row created.
SQL> insert into emp values ('MATHEWS');
1 row created.
SQL> insert into emp values ('GEORGE3');
1 row created.
SQL> insert into emp values ('5555');
1 row created.
SQL> commit;
Commit complete.
SQL> SELECT ENAME,NVL2(LENGTH(TRIM(TRANSLATE(ENAME,'+-.0123456789',' '))),'ALPHA','NUMERIC') FROM EMP;
ENAME                NVL2(LE
SCOTT                ALPHA
BRIAN2               ALPHA
MATHEWS              ALPHA
GEORGE3              ALPHA
5555                 NUMERIC
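The query above only separates ALPHA from NUMERIC; to also get the ALPHANUMERIC label asked for, a CASE expression built on the same TRANSLATE idea works (a sketch; the leading 'A' is just a placeholder so the to-string is never empty):
SQL> SELECT ename,
  2         CASE
  3           WHEN TRANSLATE(ename, 'A0123456789', 'A') IS NULL THEN 'NUMERIC'
  4           WHEN TRANSLATE(ename, 'A0123456789', 'A') = ename THEN 'ALPHA'
  5           ELSE 'ALPHANUMERIC'
  6         END AS coltype
  7    FROM emp;
TRANSLATE strips the digits: if nothing is left the value was purely numeric, if nothing was removed it contains no digits, and anything in between is alphanumeric.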
Cheers

Similar Messages

  • Exp/imp and storage clause

    Is it possible to make an export of the database with the exp tool, but without the storage clauses for tables and indexes?
    My development database is 2 GB, and when I make an export without data and import it into a clean production environment, the datafiles are also 2 GB.
    The export parameter compress=y/n doesn't seem to make a difference.
    Thanks

    gert-jan, your post is unclear: by 2 GB do you mean the physical datafile size or the amount of space allocated within the tablespaces?
    If you are allowing the import to create the tablespaces then they are going to have the same size as defined in test.
    The compress=y/n option should make a difference, in that the default compress=y causes a larger value to be written to the export file for the initial extent.
    But an important question is how were the import database target tablespaces defined: dictionary or locally managed, uniform or auto-allocate.
    If the tables were created with too-large allocations, I would generate ALTER TABLE MOVE commands with storage clauses to resize the objects on the target, then rebuild the indexes with storage clauses to do the same. You can change the initial extent size in a move/rebuild operation, as sketched below.
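    For example (a sketch; the object names and extent sizes are hypothetical):
    SQL> ALTER TABLE big_table MOVE STORAGE (INITIAL 64K NEXT 64K);
    SQL> ALTER INDEX big_table_pk REBUILD STORAGE (INITIAL 64K NEXT 64K);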
    The version of Oracle is always important because Oracle has been known to change feature behavior and parameter default values with release changes.
    HTH -- Mark D Powell --

  • Running OMBPlus and EXP/IMP in mixed version environment

    OWB Mixed Environment Guru's
    Current environment:
    OWB Client: 10.1.0.2.0 on Windows XP Professional
    OWB Server side: 10.1.0.2.0 on UNIX (AIX 5.2)
    Repository: Oracle 9.2.0.4 on UNIX (AIX 5.2)
    UNIX Listener: 9.2.0.4 on UNIX (AIX 5.2)
    Runtime Repository: Oracle 9.2.0.4 on UNIX (AIX 5.2)
    I call this a mixed environment since my OWB stuff is 10g and my database stuff is 9.2.
    Issues:
    1- I can't get the command line exp.sh script to connect to the repository; it returns the famous ORA-12514, 'TNS:listener does not currently know of service requested in connect descriptor'. It looks like the 'owbsetenv.sh' script is changing the value of $ORACLE_HOME to point to the 10g areas. Could that then be causing the system to look for a 10g listener, which doesn't exist since all my databases are 9.2.0.4?
    2- I have the same issue trying to run OMBPlus.sh.
    I am ultimately trying to set up a promotion process using the UNIX command line programs (exp/imp and OMBPlus) to get objects from the TEST environment into the PRODUCTION environment which is a separate repository and target schema on a different machine.
    Any advice on how to successfully operate in this 'mixed' environment is most welcomed.
    Many thanks!
    Gary

    Well it looks like I did it again!
    Total brain fart.
    The problem turned out that I wasn't specifying the entire SERVICE_NAME for the repository database. I had been leaving off the domain information. Must be a habit from not having to use it in the TNSNAMES.ORA files.
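    In other words, a connect identifier whose SERVICE_NAME carries the full domain; a hypothetical tnsnames.ora entry:
    OWBREP.US.EXAMPLE.COM =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = aixhost)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = owbrep.us.example.com))
      )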
    I was able to complete my test export and connect to OMBPlus, and will now try my test import.
    Sorry to clutter the forum but if it helps anyone else with the same affliction I seem to have frequently, I guess that's a small reward.
    Until next time.
    Gary

  • Exp/Imp alternatives for large amounts of data (30GB)

    Hi,
    I've come into a new role where various test databases are to be 'refreshed' each night with cleansed copies of production data. They have been using the imp/exp utilities with 10g R2. The export process is OK, but what's killing us is the time it takes to transfer, unzip, and import 32 GB .dmp files. I'm looking for suggestions on what we can do to reduce these times. Currently the import takes 4 to 5 hours.
    I haven't used datapump, but I've heard it doesn't offer much benefit when it comes to saving time over the old imp/exp utilities. Are 'Transportable Tablespaces' the next logical solution? I've been reading up on them and could start prototyping/testing the process next week. What else is in Oracle's toolbox I should be considering?
    Thanks
    brian

    Hi,
    I haven't used datapump, but I've heard it doesn't offer much benefit when it comes to saving time over the old imp/exp utilities
    Data Pump will be faster for a couple of reasons. It uses direct path to unload the data. Data Pump also supports parallel processes, so while one process is exporting metadata, the other processes can be exporting the data. In 11g you can also compress the dumpfiles as you are exporting (both data and metadata compression are available in 11g; I think metadata compression is available in 10.2). This will remove your zip step.
    As far as transportable tablespaces go, yes, they are an option. There are some requirements, but if it works for you, all you will be exporting is the metadata, no data. The data is copied from the source to the target by way of the datafiles. One of the biggest requirements is that the tablespaces need to be read-only while the export job is running. This is true for both exp/imp and expdp/impdp.
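    A sketch of what the parallel Data Pump export could look like (10g syntax; the directory object and file names are hypothetical):
    expdp system/password directory=dp_dir dumpfile=full_%U.dmp logfile=full_exp.log full=y parallel=4
    The %U wildcard lets each parallel worker write to its own dump file.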

  • Help with exp schema and related objects

    I need to export a schema from one database to another. I must also be able to export the tablespaces, roles, constraints, etc. that are associated with it but aren't necessarily part of the schema. Is there an easy way to do this? I had already tried simply doing a schema exp/imp, and it threw up a bunch of errors, mostly grants, which isn't that big a deal, but some of the tables were not created because their tablespaces were missing.

    exp system/manager file=exp.dmp log=exp.log full=y at the source database.
    imp system/manager file=exp.dmp log=imp_show.log full=y show=y - create the log file without importing the data
    Edit imp_show.log and extract the statements needed to re-create the users, roles, alter-user statements, and grants.
    Pre-create the tablespaces using SQL*Plus in the target database.
    Execute the modified imp_show.log (all DDLs).
    imp system/manager file=exp.dmp log=imp.log fromuser=A touser=B
    Re-compile all the Invalid Objects.
    HTH
    -Anantha

  • Imp and Exp Command

    Hi,
    Pls explain me the concept of Imp/Exp command and the parameters associated with it? If u can tell me online practicals for Import/Export command?
    Regards
    sudhir

    At 8:03, you "wanna learn" about tablespaces and datafiles.
    At 8:11 you want the concept of Imp/Exp explained to you.
    At 8:10 you "wanna learn" the concept of Roll Back Segmenst [sic]
    And at 8:12 you "wanna learn" all about SQL*Loader
    Not bad for nine minutes of complete thoughtlessness, I suppose.
    If you "wanna learn" all about Oracle, download the software, install it, start playing with it, and come back with some serious questions that show a modicum of thought, effort, persistence and care. Paying some attention to the usual rules of grammar and spelling might be an idea, too.
    You'll find installations documented at http://www.dizwell.com/prod/node/695#installation, and then you can work your way through some concepts at http://www.dizwell.com/prod/node/263 and then you can start doing some basic fiddling at http://www.dizwell.com/prod/node/695#basics and once you've worked your way through that lot, you should be in a position to make sensible use of the stuff over at http://tahiti.oracle.com.
    And after you've had a good read of the official Concepts book there, then you can come back here to ask specific questions and reasonably expect to get meaningful replies.

  • Full database exp/imp between RAC and single database

    Hi Experts,
    we have a four-node Oracle 10gR2 RAC database on Linux. I am trying to duplicate the RAC database into a single-instance Windows database.
    Both databases are the same version. During the import, I needed to create 4 undo tablespaces to keep the imp processing going.
    How can I keep just one undo tablespace in the single-instance database?
    Does anyone have experience of exp/imp from a RAC database into a single-instance database to share with me?
    Thanks
    Jim
    Edited by: user589812 on Nov 13, 2009 10:35 AM

    Jim,
    I also want to know: can we add exclude=tablespace on the impdp command for a full database exp/imp?
    You can't use exclude=tablespace on exp/imp. It is for Data Pump expdp/impdp only.
    I am very interested in your recommendation.
    But for a full database impdp, how do I exclude a table during the full import? May I have an example for this case?
    I used expdp for a full database export, but I got an error in the expdp log: ORA-31679: Table data object "SALE"."TOAD_PLAN_TABLE" has long columns, and longs can not be loaded/unloaded using a network link
    Having long columns in a table means that the table can't be exported/imported over a network link. To exclude it, you can use the exclude expression:
    expdp user/password exclude=TABLE:"= 'SALES'" ...
    This will exclude all tables named SALES. If you have that table in schema scott and also in schema blake, it will exclude both of them. The error that you are getting is not a fatal error, but that table will not be exported/imported.
    The final message was:
    Master table "SYSTEM"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
    Dump file set for SYSTEM.SYS_EXPORT_FULL_01 is:
    F:\ORACLEBACKUP\SALEFULL091113.DMP
    Job "SYSTEM"."SYS_EXPORT_FULL_01" completed with 1 error(s) at 16:50:26Yes, the fact that it did not export one table does not make the job fail, it will continue on exporting all other objects.
    I dropped the database that generated the expdp dump file,
    then recreated a blank database and ran impdp again.
    But I got lots of errors such as:
    ORA-39151: Table "SYSMAN"."MGMT_ARU_OUI_COMPONENTS" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
    ORA-39151: Table "SYSMAN"."MGMT_BUG_ADVISORY" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
    ... ORA-31684: Object type TYPE_BODY:"SYSMAN"."MGMT_THRESHOLD" already exists
    ORA-39111: Dependent object type TRIGGER:"SYSMAN"."SEV_ANNOTATION_INSERT_TR" skipped, base object type VIEW:"SYSMAN"."MGMT_SEVERITY_ANNOTATION" already exists
    and the last line was:
    Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 2581 error(s) at 11:54:57Yes, even though you think you have an empty database, if you have installed any apps or anything, it may create tables that could exist in your dumpfile. If you know that you want the tables from the dumpfile and not the existing ones in the database, then you can use this on the impdp command:
    impdp user/password table_exists_action=replace ...
    If a table that is being imported already exists, Data Pump will detect this, drop the table, then create it. Then all of the dependent objects will be created. If you don't, the table and all of its dependent objects will be skipped (which is the default).
    There are 4 options with table_exists_action:
    replace - described above
    skip - the default; skip the table and dependent objects such as indexes, index statistics, table statistics, etc.
    append - keep the existing table and append the data to it, but skip dependent objects
    truncate - truncate the existing table and add the data from the dumpfile, but skip dependent objects
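    Putting it together, a full import that overwrites pre-existing tables could look like this (a sketch; the directory object and log name are hypothetical, the dump file name is from your log):
    impdp system/password directory=dp_dir dumpfile=SALEFULL091113.DMP full=y table_exists_action=replace logfile=full_imp.log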
    Hope this helps.
    Dean

  • How to exp/imp tables with different character sets between DB1 and DB2?

    On Solaris 2.7, the Oracle 8i database DB1 has NLS_CHARACTERSET ZHS16CGB231280 and NLS_NCHAR_CHARACTERSET ZHS16CGB231280.
    On another system (Linux 7.2), the Oracle 8i database DB2 was installed with NLS_CHARACTERSET US7ASCII and NLS_NCHAR_CHARACTERSET US7ASCII.
    The tables in DB1 contain some Chinese text. I want to exp/imp tables from DB1 into DB2, but the Chinese characters do not display correctly in the SQL worksheet tool. How should I do the exp/imp operation? Please help me. Thanks.

    The supported way to store GB231280-encoded characters is to use a ZHS16CGB231280 database, or a database created with a superset of GB231280 such as UTF8. Can you not upgrade your target database from US7ASCII to ZHS16CGB231280?
    With US7ASCII and NLS_LANG set to US7ASCII, you are using the garbage in, garbage out (GIGO) approach. This may seem to work, but there are many hidden problems:
    1. Invalid SQL string function behaviour - LENGTH(), SUBSTR(), INSTR()
    2. Data can be corrupted when it is loaded into another database, e.g. EXP/IMP, dblinks
    3. Communication with other clients will generate incorrect results, e.g. other Oracle products - Oracle Text, Forms, Java, HTML, etc.
    4. Linguistic sorts are not available
    5. Queries using the standard WHERE clause may return incorrect results
    6. Extra coding overhead in handling character conversions manually
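    As a first check, you can confirm what each database actually uses from the standard data dictionary view (available in 8i):
    SQL> SELECT parameter, value
      2    FROM nls_database_parameters
      3   WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');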
    I recommend that you check out the FAQ and the DB Character Set migration guide on the Globalization Support forum on OTN.
    Nat.

  • Info about imp and exp

    Hi everyone
    I need you to check whether what I am going to do is right or not.
    in general:-
    I want to take a full export from a 10g database
    and import it into an Oracle 7.3.2 database.
    in details:-
    I will execute catexp7 on the 10g database, then take a full export
    and import the exported file into the 7.3.2 database.
    If this is not right, please correct me.
    bye.

    Hi,
    Try with exp/imp. Are you facing any problem? Study the link below:
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96652/ch01.htm
    Best regards,
    Rafi.
    http://rafioracledba.blogspot.com/

  • Another EXP/IMP question

    Moving data from 5 databases that are 9i over to 10g. If each database has 1 user with tables in the associated schema, what needs to be done on the 10g side as far as users are concerned? The only way I've been able to exp/imp is by cloning the SYS account using Toad, then doing a DBA import from user SYS into the duplicated account. What privileges are needed on the 10g side to imp that data without having to clone the SYS account?
    thanks.

    Charles is correct: if the export on the original machine is not a full export, anyone can import it. However, if it is a full export, then you must have the IMP_FULL_DATABASE privilege. SYS and SYSTEM have that privilege, so why not start imp as the SYS or SYSTEM user and do a user import into your new schema on the 10g system?
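    For example, a straight user-to-user import run as SYSTEM (a sketch; the file and schema names are hypothetical):
    imp system/password file=db1_usr.dmp log=imp_usr1.log fromuser=usr1 touser=usr1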

  • Some questions on RMAN and others

    hi,
    I have some doubts and need some clarification......thanks in advance
    Can data be copied from one db, say A, to another db, say B, if A is running on a Windows 32-bit OS and db B is on Solaris 64-bit?
    Can I have a primary db on 10.2.0.4 and a physical standby for this db on 11g?
    I know RMAN can exclude tablespaces, but can we exclude tables like %table_name% in dataguard... I know we can't, just wanted to confirm.
    Can I restore one specific tablespace from PROD to TEST?
    I have an out-of-date TEST db and have added additional datafiles to PROD; how can I update TEST without recreating the entire db?

    Can data be copied from one db, say A, to another db, say B, if A is running on a Windows 32-bit OS and db B is on Solaris 64-bit?
    Yes, you can do it either through transportable tablespaces or using exp/imp (expdp/impdp in 10g).
    Can I have a primary db on 10.2.0.4 and a physical standby for this db on 11g?
    No, this is not possible.
    I know RMAN can exclude tablespaces, but can we exclude tables like %table_name% in dataguard... I know we can't, just wanted to confirm.
    I didn't understand the question; please elaborate.
    Can I restore one specific tablespace from PROD to TEST?
    You cannot restore it directly, but you can move it using the transportable tablespace feature.
    I have an out-of-date TEST db and have added additional datafiles to PROD; how can I update TEST without recreating the entire db?
    You can add those datafiles in TEST.
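    A minimal sketch of the transportable tablespace flow mentioned above (10g Data Pump; the directory object, file, and tablespace names are hypothetical):
    SQL> ALTER TABLESPACE users READ ONLY;
    expdp system/password directory=dp_dir dumpfile=tts.dmp transport_tablespaces=users
    Copy the tablespace datafiles to the target (a move between platforms with different endian formats also needs RMAN CONVERT), then:
    impdp system/password directory=dp_dir dumpfile=tts.dmp transport_datafiles='/u01/oradata/test/users01.dbf'
    SQL> ALTER TABLESPACE users READ WRITE;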

  • Best way of using exp/imp

    Dear all,
    I want to migrate a database from 8i to 11g (8.1.5 to 11.1.0) and am going with the exp/imp method. Which is the best way of doing this: a full export and import, or schema-wise export and import? Is there any chance of missing objects or rows while doing this? If yes, how do I avoid it? Please help me make the best decision; I don't want any problems after the migration.
    The approach is: take an exp of 8.1.5 and imp into 9.2.0, then exp 9.2.0 and imp into 11.1.0.6.
    OS is HP Unix
    Nishant Santhan

    Have you not yet completed this task? We already answered your question a couple of days back.
    Take a look at the duplicate thread you created:
    Re: Migrating from 8i to 11g
    Regards,
    Sabdar Syed.

  • Exp/Import database schema vs Ch 10 Exp/Imp Content

    Hi, I'm using Portal 10.1.2.0.2. I'm a DBA tasked with migrating a Dev portal to Test, then Production.
    There seems to be a number of ways to achieve this and I was hoping to clarify which is best for me.
    I've run through note 330391.1 (copying the Portal schema), which details running a perl script to export and import onto a newly installed server. This process worked well.
    Chapter 10 of the Portal guide details a fairly complex process of creating transport sets, migrating them over, and importing them.
    My question is: if I want the entire Portal copied, is there any difference between these processes? I.e., do you end up with the same result?
    thanks in advance for any advice :-)

    The cloning model is not a rerunnable model; it's not granular, and conditional migration is not possible.
    You can use cloning only if you want to take a copy of the entire setup and rewire it to a new midtier.
    The Portal exp/imp model gives you a granular, conditional, and rerunnable method of moving portal objects.
    Apart from that, it comes with readily available prechecks to flag what's going wrong during the process.

  • Copy Page/region (incl. elements...) to application (using exp/imp page)?

    A question and an enhancement request(!) -
    Some customers/prospects ask for teamworking / reusable components. The goal is to use a page (including its components such as regions, processes, and items) in another application - this could be realized by exp/imp if we could do it.
    This would improve reusability and also teamworking.
    I work together with a colleague on an application at htmldb.oracle.com. We exported the application and work on our local databases on different tasks on different pages.
    But how can we re-import our results?

    Lutz,
    It's kind of difficult, but Raj described a procedure here: Synonyms
    We are aware of the potential value of being able to copy pages between applications and are looking at ways to provide that in future versions of HTML DB.
    Scott

  • Exp/Imp with tablespace autocreate?

    Hello dear community,
    I've got a question about backup with exp/imp. It's not about RMAN, but I hope this is still the right board.
    Is there a command to tell imp to automatically create a specified tablespace from a dumpfile?
    For exp I use:
    exp.exe user/pass tablespaces=test file=FILE.DMP
    For imp I use:
    imp.exe user/pass tablespaces=test file=FILE.DMP
    Now, if I drop the tablespace between the exp and the imp, imp can't restore it; I have to create the specified tablespace manually first. It would be nice if there were a way for imp to create the tablespace when it is not present.
    Is there maybe some parameter like "autocreate" or "use datafile=..." and so on?
    Thanks for help,
    best regards,
    Ronny

    Transportable Tablespaces is a separate feature whereby you take a physical copy of the tablespace datafiles and "plug" them into the target database. That is different from taking a logical export dump.
    With export dumps, the only way to get the tablespace created is in a FULL export and import. The CREATE TABLESPACE commands are written into the dmp file only when a FULL export is done. At import time, you can choose to import a specific schema, in which case the CREATE TABLESPACE commands are not executed by import (the tablespaces must be precreated). So CREATE TABLESPACE is executed only when you do a FULL import as well.
    Note that this {obviously} creates the same datafile names (including physical paths) and sizes as exist in the source database. That may or may not meet your requirements on occasion (e.g. a different filesystem structure, or different sizes planned in the target database).
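    So for the scenario above, the practical answer is to pre-create the tablespace before running imp; a sketch (the datafile path and size are hypothetical):
    SQL> CREATE TABLESPACE test DATAFILE 'D:\oradata\db\test01.dbf' SIZE 100M;
    imp.exe user/pass tablespaces=test file=FILE.DMP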
