Charset in expdp and impdp

Hello friends,
I have an expdp dump file of a schema from a 10g database whose character set is AR8MSWIN1256.
Using this dump file, I ran impdp on another Linux server whose database is also 10g with character set AR8MSWIN1256.
The schema was imported successfully, but some rows in the source schema contain Arabic characters, whereas in the target database those rows show '???' in place of the Arabic characters.
Is there any parameter available for the import so that the exact characters are preserved?

Please refer to ML doc id 227332.1, paragraph 7.
Sybrand Bakker
Senior Oracle DBA
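
As a first check, it can help to confirm the character sets on both sides and to look at the bytes actually stored in the target, independent of how any client displays them. A minimal sketch (your_table/your_column are placeholders for one of the affected tables):
-- run on both source and target
SELECT parameter, value
FROM   nls_database_parameters
WHERE  parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
-- run on the target: show the stored byte values in hex
SELECT your_column, DUMP(your_column, 1016)
FROM   your_table
WHERE  ROWNUM <= 5;
If DUMP shows genuine AR8MSWIN1256 Arabic byte values, the '???' is only a display issue on the querying client (its NLS_LANG or font), not a problem with the imported data.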

Similar Messages

  • Using expdp and impdp instead of exp and imp

    Hi All,
    I am trying to use expdp and impdp instead of exp and imp.
    I am facing a few issues while using expdp. I have a job which exports data from one DB server and is then imported to another DB server. Both DB servers run on separate machines. The job runs on various client machines and not on any DB server.
    To use expdp we have to create a DIRECTORY and, as I understand it, it has to be created on the DB server. The problem here is that the job cannot access the DB server or files on the DB server. Also, the dump file created is moved by the job to other machines based on requirements (usually it goes to multiple DB servers).
    I need a way to create dump files on the machine where the job runs.
    If I am not using expdp correctly, please guide me. I am new to expdp/impdp and exp/imp.
    Regards,

    Thanks for the quick reply.
    The job executing expdp/impdp runs on Red Hat Enterprise Linux Server release 5.6 (Tikanga).
    Oracle server release: 11.2.0.2.0
    The job cannot access the Oracle server as it does not have privileges (in fact there is no user/password to access the Oracle server machines). Creating the dump on the Oracle server and then moving it is not an option for this job; it has to keep the dump with itself.
    Regards,
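
    Data Pump can only write through a DIRECTORY object, which always refers to a path on the database server, so expdp cannot create the dump file on the client machine where the job runs. If a client-side dump file is really required, one fallback is the original exp utility, which writes its file on the machine where it is invoked and only needs a SQL*Net connection. A rough sketch (credentials, connect string, schema and paths are placeholders only):
    exp scott/tiger@remote_db owner=SCOTT file=/local/jobdir/scott.dmp log=/local/jobdir/scott_exp.log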

  • Scripts for exp and imp in linux

    Please give me some links for a periodic batch script to exp and imp tables in Oracle 10g R2 on RHEL 4.5.

    user13653962 wrote:
    Please give me some links for a periodic batch script to exp and imp tables in Oracle 10g R2 on RHEL 4.5.
    Try this; change the script for your environment:
    $ crontab -l
    00 22 * * * sh /u01/db/scripts/testing_user1.sh
    $ cat /u01/db/scripts/testing_user1.sh
    . /home/oracle/TEST1.env
    export ORACLE_SID=TEST1
    export ORA_USER=dbdump
    export ORA_PASSWORD=dbdump
    export TNS_ALIAS=TEST1
    expdp $ORA_USER/$ORA_PASSWORD@$TNS_ALIAS tables=YOUR_TABLE_NAMES dumpfile=TEST_`date +'%Y-%m-%d'`.dmp directory=YOUR_DUMPDIR logfile=TEST_`date +'%Y-%m-%d'`.log
    If you need the original export instead, change it to the exp command:
    exp username/password tables=table_name1,table_name2,etc... file=your_file_name log=your_log_file_name

  • How to set default encoding and charsets for jsp and servlets.

    Hi,
    Is there any possibility to set a default encoding or charset for JSPs and servlets (for both request and response)?
    For example in Weblogic such parameters can be set in weblogic specific configuration files (weblogic.xml).
    Thanks in advance.

    Hi,
    I created one request with a logo in the header and a page in the footer etc. and called it StyleSheet. Afterwards you can import these formats into each request.
    You can do this in the compound layout.
    Regards,
    Stefan

  • [svn:fx-trunk] 7661: Change from charset="iso-8859-1" to charset="utf-8" and save file with utf-8 encoding.

    Revision: 7661
    Author:   [email protected]
    Date:     2009-06-08 17:50:12 -0700 (Mon, 08 Jun 2009)
    Log Message:
    Change from charset="iso-8859-1" to charset="utf-8" and save file with utf-8 encoding.
    QA Notes:
    Doc Notes:
    Bugs: SDK-21636
    Reviewers: Corey
    Ticket Links:
        http://bugs.adobe.com/jira/browse/iso-8859
        http://bugs.adobe.com/jira/browse/utf-8
        http://bugs.adobe.com/jira/browse/SDK-21636
    Modified Paths:
        flex/sdk/trunk/templates/swfobject/index.template.html

    Same problem here with WL 8.1.
    Have you solved it, and if yes, how?
    Thanks

  • Full expdp and impdp: one db to another

    Hi! Good day!
    I would like to ask for help regarding my problem.
    I would like to create a full export of one database and import it into another database. These two databases are on separate machines.
    I am trying to use the expdp and impdp tools for this task. However, I have been experiencing some problems during the import.
    Here are the details of my problem:
    When I try to impdp the dump file, it seems that I was not able to import the users' data and metadata.
    Here are the exact commands I used during the export and import:
    export (server #1)
    expdp user01/******** directory=ora3_dir full=y dumpfile=db_full%U.dmp filesize=2G parallel=4 logfile=db_full.log
    import (server #2)
    impdp user01/******** directory=ora3_dir dumpfile=db_full%U.dmp full=y log=db_full.log sqlfile=db_full.sql estimate=blocks parallel=4
    Here is the log that was generated while impdp was running:
    Import: Release 10.2.0.1.0 - 64bit Production on Friday, 27 November, 2009 17:41:07
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    Master table "SSMP"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
    Starting "SSMP"."SYS_SQL_FILE_FULL_01": ssmp/******** directory=ora3_dir dumpfile=ssmpdb_full%U.dmp full=y logfile=ssmpdb_full.log sqlfile=ssmpdb_full.sql
    Processing object type DATABASE_EXPORT/TABLESPACE
    Processing object type DATABASE_EXPORT/PROFILE
    Processing object type DATABASE_EXPORT/SYS_USER/USER
    Processing object type DATABASE_EXPORT/SCHEMA/USER
    Processing object type DATABASE_EXPORT/ROLE
    Processing object type DATABASE_EXPORT/GRANT/SYSTEM_GRANT/PROC_SYSTEM_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/DEFAULT_ROLE
    Processing object type DATABASE_EXPORT/SCHEMA/TABLESPACE_QUOTA
    Processing object type DATABASE_EXPORT/RESOURCE_COST
    Processing object type DATABASE_EXPORT/SCHEMA/DB_LINK
    Processing object type DATABASE_EXPORT/TRUSTED_DB_LINK
    Processing object type DATABASE_EXPORT/SCHEMA/SEQUENCE/SEQUENCE
    Processing object type DATABASE_EXPORT/SCHEMA/SEQUENCE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/DIRECTORY/DIRECTORY
    Processing object type DATABASE_EXPORT/DIRECTORY/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/DIRECTORY/GRANT/CROSS_SCHEMA/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/CONTEXT
    Processing object type DATABASE_EXPORT/SCHEMA/PUBLIC_SYNONYM/SYNONYM
    Processing object type DATABASE_EXPORT/SCHEMA/SYNONYM
    Processing object type DATABASE_EXPORT/SCHEMA/TYPE/TYPE_SPEC
    Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PRE_SYSTEM_ACTIONS/PROCACT_SYSTEM
    Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/POST_SYSTEM_ACTIONS/PROCACT_SYSTEM
    Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/PRE_TABLE_ACTION
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/GRANT/CROSS_SCHEMA/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/INDEX
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/COMMENT
    Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/PACKAGE_SPEC
    Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/FUNCTION
    Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/PROCEDURE
    Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/ALTER_FUNCTION
    Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/ALTER_PROCEDURE
    Processing object type DATABASE_EXPORT/SCHEMA/VIEW/VIEW
    Processing object type DATABASE_EXPORT/SCHEMA/VIEW/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/VIEW/GRANT/CROSS_SCHEMA/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/VIEW/COMMENT
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/REF_CONSTRAINT
    Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE_BODIES/PACKAGE/PACKAGE_BODY
    Processing object type DATABASE_EXPORT/SCHEMA/TYPE/TYPE_BODY
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/STATISTICS/TABLE_STATISTICS
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_TABLE_ACTION
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TRIGGER
    Processing object type DATABASE_EXPORT/SCHEMA/VIEW/TRIGGER
    Processing object type DATABASE_EXPORT/SCHEMA/JOB
    Processing object type DATABASE_EXPORT/SCHEMA/DIMENSION
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_INSTANCE/PROCACT_INSTANCE
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_INSTANCE/PROCDEPOBJ
    Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCOBJ
    Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCACT_SCHEMA
    Job "SSMP"."SYS_SQL_FILE_FULL_01" successfully completed at 17:43:09
    Thank you in advance.

    I believe there are lines like that in the export log.
    Here's an extract from the log file:
    Processing object type DATABASE_EXPORT/SCHEMA/VIEW/TRIGGER
    Processing object type DATABASE_EXPORT/SCHEMA/JOB
    Processing object type DATABASE_EXPORT/SCHEMA/DIMENSION
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_INSTANCE/PROCACT_INSTANCE
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_INSTANCE/PROCDEPOBJ
    Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCOBJ
    Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCACT_SCHEMA
    . . exported "SSMP"."SMALLBOX" 450.2 MB 2489790 rows
    . . exported "SSMP"."LTCONFSB" 397.9 MB 3122899 rows
    . . exported "SSMP"."INFRJCT" 317.3 MB 3615810 rows
    . . exported "SSMP"."ICTEST1" 202.5 MB 789618 rows
    . . exported "SSMP"."EQUIPERF" 315.3 MB 3008636 rows
    . . exported "SSMP"."INFOPE2" 190.7 MB 569039 rows
    . . exported "SSMP"."SMALLBSB" 189.8 MB 2659468 rows
    . . exported "PRS1"."STKDTLOLD" 193.4 MB 983322 rows
    . . exported "SSMP"."ICTEST11" 185.2 MB 553916 rows
    . . exported "PRS1"."PURCHASE" 166.5 MB 122787 rows
    . . exported "SSMP"."JIGREC" 183.5 MB 2664488 rows
    . . exported "SSMP"."SALES" 112.4 MB 294134 rows
    . . exported "SSMP"."P_INFOPE" 160.4 MB 485773 rows
    . . exported "SSMP"."CIOP" 128.1 MB 385160 rows
    . . exported "SSMP"."SMALLBOX2" 126.0 MB 589964 rows
    . . exported "SSMP"."SHIPOUT" 123.4 MB 281778 rows
    . . exported "SSMP"."INFHISTORY" 129.9 MB 786844 rows
    . . exported "SYSMAN"."MGMT_TARGETS" 14.90 KB 11 rows
    . . exported "SSMP"."LOGCNT" 4.937 KB 1 rows
    . . exported "SSMP"."WEEKINF" 45.07 KB 700 rows
    . . exported "SSMP"."DEPTNM" 12.33 KB 81 rows
    . . exported "SSMP"."WFRID" 5.953 KB 2 rows
    . . exported "SSMP"."MATDTLS" 85.53 KB 199 rows
    . . exported "SSMP"."ICRATTACH" 9.257 KB 11 rows
    . . exported "SSMP"."WIP2" 20.34 KB 67 rows
    . . exported "SSMP"."MATINFO" 17.83 KB 43 rows
    . . exported "SSMP"."ICRFAPP" 11.48 KB 11 rows
    . . exported "SSMP"."WMXINF" 43.60 KB 500 rows
    . . exported "SSMP"."MLDMACHINE" 32.30 KB 228 rows
    . . exported "SSMP"."ICRNAME" 10.48 KB 23 rows
    . . exported "SSMP"."XLODCNT" 4.937 KB 1 rows
    . . exported "SSMP"."IJPLODCNT" 5.234 KB 1 rows
    . . exported "SSMP"."MONTHINF" 14.67 KB 152 rows
    . . exported "SSMP"."YYSTMP" 15.57 KB 59 rows
    . . exported "SSMP"."MPRICE" 39.95 KB 365 rows
    . . exported "SYSMAN"."ESM_COLLECTION" 8.585 KB 91 rows
    . . exported "SSMP"."INFCHKSHT" 6.406 KB 4 rows
    . . exported "SSMP"."MRKLNO" 4.929 KB 1 rows
    . . exported "SSMP"."JPNMARK" 5.890 KB 2 rows
    . . exported "SSMP"."LODCNT" 4.921 KB 1 rows
    . . exported "SSMP"."MRPNAME" 29.73 KB 310 rows
    . . exported "SYSMAN"."EXPORT000381" 49.68 KB 244 row

  • I am trying the expdp and impdp utility getting error

    I am trying the expdp and impdp utilities. As per the documentation, I logged in as SYS and created a directory DATAPUMPT pointing to the location 'C:\oracle\odp', and I granted the read and write privileges to my user id.
    When I tried the expdp command at the prompt, it threw these errors:
    ORA-39002: invalid operation
    ORA-39070: unable to open the log file
    ORA-29283: invalid file operation
    ORA-06512: at SYS.UTL_FILE, line 475
    ORA-29283: invalid file operation
    Operating system: Windows / Oracle version: 10.1.0.3.0
    Operating system: Linux AS4
    Please help me with this.

    I'm afraid there is a big problem...
    C:\oracle\odp is a Windows path.
    If the Oracle server is running under Linux, your directory is invalid.
    A directory is an "alias" for a folder on the server, not on the client!
    expdp/utl_file/... won't access folders on the client!
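
    For a Linux server, the directory object has to point to a path that exists on the Linux machine and is writable by the Oracle software owner. A minimal sketch (directory name, path and grantee are placeholders):
    CREATE OR REPLACE DIRECTORY datapump_dir AS '/u01/app/oracle/dpdump';
    GRANT READ, WRITE ON DIRECTORY datapump_dir TO scott;
    -- then, for example:
    -- expdp scott/tiger DIRECTORY=datapump_dir DUMPFILE=scott.dmp LOGFILE=scott_exp.log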

  • Move all data from one database to other using oracle expdp and impdp tool

    I need to move all of the data from one database to another. For that I am using the Oracle expdp and impdp tools.
    I have a full database export dump file. I moved that dump file to my other database.
    Now when I try to use impdp, it gives lots of errors about path mismatches, users that do not exist, and roles that do not exist.
    The datafile paths of the source database do not match the target database paths, and the users are not there in the target database. I don't want to create all the tablespaces that exist in my source database, as there are 82 tablespaces. Is there any way I can move all my data without using the remap options for tablespaces, users and grants?

    The FULL parameter indicates that a complete database export is required. The following is an example of the full database export and import syntax.
    The user must have the EXP_FULL_DATABASE privilege.
    expdp system/password@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=expdpDB10G.log
    impdp system/password@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=impdpDB10G.log
    Go through the link below for more details...
    http://www.oracle-base.com/articles/10g/OracleDataPump10g.php
    Regards
    Umi
    Edited by: Umi on Feb 4, 2011 2:27 AM
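
    As noted above, the full export/import needs the corresponding roles; a quick sketch of granting them (the user name is a placeholder):
    GRANT EXP_FULL_DATABASE, IMP_FULL_DATABASE TO dp_admin;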

  • Expdp and user_sdo_geom_metadata

    Dear all,
    I am trying to export data with expdp, and import it into another database. I can however see problems with my geometry indexes, which fail on import. user_sdo_geom_metadata also seems to be empty after import. Is there an extra option I should use in expdp to support geometries?
    Regards

    enhancement request 2419788 has been added to Oracle's database.

  • Datapumps and Imp/Exp

    What is the primary difference between the Data Pump and imp/exp utilities?
    Thank you

    The 10g Utilities manual lists the new capabilities that Data Pump provides over imp/exp:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_overview.htm#i1010248
    That is a pretty complete list. The original exp/imp chapter mentions two things that should be fairly obvious:
    You need to use the original imp to import dump files generated by the original exp.
    You need to use exp to generate files that can be read by imp on earlier (pre-10) versions of Oracle.
    Last but not least, many of us have become accustomed to writing to and reading from named pipes with exp and imp. This is a popular way to compress or pipe across rsh/ssh on the fly. Unfortunately, this is impossible with Data Pump! I would say that is among the most strenuous complaints I hear about Data Pump.
    Regards,
    Jeremiah Wilton
    ORA-600 Consulting
    http://www.ora-600.net
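
    For reference, the named-pipe trick mentioned above typically looks something like this with the original exp (credentials and paths are placeholders); it does not work with expdp because Data Pump writes only through server-side DIRECTORY objects:
    mkfifo /tmp/exp_pipe
    gzip < /tmp/exp_pipe > /backup/scott.dmp.gz &
    exp scott/tiger owner=SCOTT file=/tmp/exp_pipe log=/backup/scott_exp.log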

  • Missing job schedulers after complete db exp and imp

    We are using Oracle Apps and the database version is 10.2.0.4. In order to delink the database and the application, we ran the export (using expdp) and ran the import on a different server (using impdp). Now the database is up and the application has been linked to the new database. The problem is that I don't see any of the jobs (which were available in the source database).
    I don't see any rows in dba_scheduler_jobs for the custom jobs created by the users.
    How do we migrate the jobs?
    Regards,
    ARS

    Hi,
    Check this thread : Jobs in schema export
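
    If the custom jobs did not come across with the import, one low-tech fallback is to recreate them with DBMS_SCHEDULER in the new database. A minimal sketch (the job name, schedule and called procedure are placeholders):
    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'NIGHTLY_CLEANUP',
        job_type        => 'STORED_PROCEDURE',
        job_action      => 'APPS.PURGE_OLD_ROWS',
        repeat_interval => 'FREQ=DAILY;BYHOUR=22',
        enabled         => TRUE);
    END;
    /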

  • Conversions between character sets when using exp and imp utilities

    I use the EE8ISO8859P2 character set on my server. When exporting the database with NLS_LANG not set, conversion should be done between the EE8ISO8859P2 and US7ASCII charsets, so some characters not present in US7ASCII should not be successfully converted.
    But when I import such a dump, all characters not present in the US7ASCII charset are imported into the database.
    I thought that some characters should be lost when doing such conversions; can someone tell me why this is not so?

    Not exactly. If the import is done into a database with the same character set, then it does not matter how it was exported. Conversion (corruption) may happen if the destination DB has a different character set. See this example:
    [ora102 work db102]$ echo $NLS_LANG
    AMERICAN_AMERICA.WE8ISO8859P15
    [ora102 work db102]$ sqlplus test/test
    SQL*Plus: Release 10.2.0.1.0 - Production on Tue Jul 25 14:47:01 2006
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    TEST@db102 SQL> create table test(col1 varchar2(1));
    Table created.
    TEST@db102 SQL> insert into test values(chr(166));
    1 row created.
    TEST@db102 SQL> select * from test;
    C
    ¦
    TEST@db102 SQL> exit
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    [ora102 work db102]$ export NLS_LANG=AMERICAN_AMERICA.EE8ISO8859P2
    [ora102 work db102]$ sqlplus test/test
    SQL*Plus: Release 10.2.0.1.0 - Production on Tue Jul 25 14:47:55 2006
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    TEST@db102 SQL> select col1, dump(col1) from test;
    C
    DUMP(COL1)
    ©
    Typ=1 Len=1: 166
    TEST@db102 SQL> exit
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    [ora102 work db102]$ echo $NLS_LANG
    AMERICAN_AMERICA.EE8ISO8859P2
    [ora102 work db102]$ exp test/test file=test.dmp tables=test
    Export: Release 10.2.0.1.0 - Production on Tue Jul 25 14:48:47 2006
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Export done in EE8ISO8859P2 character set and AL16UTF16 NCHAR character set
    server uses WE8ISO8859P15 character set (possible charset conversion)
    About to export specified tables via Conventional Path ...
    . . exporting table                           TEST          1 rows exported
    Export terminated successfully without warnings.
    [ora102 work db102]$ sqlplus test/test
    SQL*Plus: Release 10.2.0.1.0 - Production on Tue Jul 25 14:48:56 2006
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    TEST@db102 SQL> drop table test purge;
    Table dropped.
    TEST@db102 SQL> exit
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    [ora102 work db102]$ imp test/test file=test.dmp
    Import: Release 10.2.0.1.0 - Production on Tue Jul 25 14:49:15 2006
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Export file created by EXPORT:V10.02.01 via conventional path
    import done in EE8ISO8859P2 character set and AL16UTF16 NCHAR character set
    import server uses WE8ISO8859P15 character set (possible charset conversion)
    . importing TEST's objects into TEST
    . importing TEST's objects into TEST
    . . importing table                         "TEST"          1 rows imported
    Import terminated successfully without warnings.
    [ora102 work db102]$ export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P15
    [ora102 work db102]$ sqlplus test/test
    SQL*Plus: Release 10.2.0.1.0 - Production on Tue Jul 25 14:49:34 2006
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    TEST@db102 SQL> select col1, dump(col1) from test;
    C
    DUMP(COL1)
    ¦
    Typ=1 Len=1: 166
    TEST@db102 SQL>

  • Database merge with the exp and imp...

    Hi All,
    I am very new to these DBA activities. I have two databases with the same schema and some data in both of them. Now I would like to merge both databases and come up with one database. In this process I don't want to lose any data.
    Does Oracle exp/imp help in this scenario? If not, are there any other tools that help us in doing this?
    What are the best practices to follow when we are doing this kind of work?
    What kind of verifications do we need to do pre/post merge?
    Any help would really be appreciated... Thank you in advance...
    K.

    NewBPELUser wrote:
    Hi All,
    I am very new to these DBA activities. I have two databases with the same schema and some data in both of them. Now I would like to merge both databases and come up with one database. In this process I don't want to lose any data.
    Does Oracle exp/imp help in this scenario? If not, are there any other tools that help us in doing this?
    What are the best practices to follow when we are doing this kind of work?
    What kind of verifications do we need to do pre/post merge?
    Any help would really be appreciated... Thank you in advance...
    K.
    What do you mean by "merging" the data?
    How many objects are in both schemas?
    Do both schemas need to be merged mutually, or only in one direction?
    Do you have the option to use the MERGE command for each object in the schema?
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_9016.htm#SQLRF01606
    P.S. If the version of the database is greater than 9i, then use Data Pump (expdp/impdp) instead of the deprecated exp/imp utilities.
    Kamran Agayev A.
    Oracle ACE
    My Oracle Video Tutorials - http://kamranagayev.wordpress.com/oracle-video-tutorials/
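
    For reference, the MERGE statement mentioned above works roughly like this per table, assuming the other database's rows have first been brought into a staging schema (schema, table and column names are placeholders): rows are updated when the key already exists and inserted when it does not.
    MERGE INTO app.emp t
    USING staging.emp s
    ON (t.empno = s.empno)
    WHEN MATCHED THEN
      UPDATE SET t.ename = s.ename, t.sal = s.sal
    WHEN NOT MATCHED THEN
      INSERT (empno, ename, sal) VALUES (s.empno, s.ename, s.sal);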

  • Data Pump - expdp and slow performance on specific tables

    Hi there
    I have a Data Pump export of a schema. Most of the 700 tables are exported very quickly (direct path) but a couple of them seem to be extremely slow.
    I have checked:
    - no lobs
    - no long/raw
    - no VPD
    - no partitions
    - no bitmapped index
    - just date, number, varchar2's
    I'm running with trace 400300.
    But I'm having trouble reading the output from it. It seems that some of the slow-performing tables are running with method 4??? Can anyone explain the methods in the trace:
    1 > direct path (I think)
    2 > external table (I think)
    4 > ?
    others?
    I have done some stats using v$filestat/v$session_wait (history), and it seems that we always wait for db file sequential read, doing lots and lots of SINGLEBLKRDS. No undo is read.
    I have one table of 2.5 GB -> 3 minutes,
    and then this (in my eyes) similar table of 2.4 GB -> 1.5 hrs.
    There are 367,000 blocks (8 KB) and avg row length = 71.
    I'm on Oracle 11.2 on a Linux box with plenty of RAM and CPU power.
    Trace file /opt/oracle112/diag/rdbms/prod/prod/trace/prod_dw00_24268.trc
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    ORACLE_HOME = /opt/oracle112/product/11.2.0.2/dbhome_1
    System name:  Linux
    Node name:  tiaprod.thi.somethingamt.dk
    Release:  2.6.18-194.el5
    Version:  #1 SMP Mon Mar 29 22:10:29 EDT 2010
    Machine:  x86_64
    VM name:  Xen Version: 3.4 (HVM)
    Instance name: prod
    Redo thread mounted by this instance: 1
    Oracle process number: 222
    Unix process pid: 24268, image: [email protected] (DW00)
    *** 2011-09-20 09:39:39.671
    *** SESSION ID:(401.8395) 2011-09-20 09:39:39.671
    *** CLIENT ID:() 2011-09-20 09:39:39.671
    *** SERVICE NAME:(SYS$BACKGROUND) 2011-09-20 09:39:39.671
    *** MODULE NAME:() 2011-09-20 09:39:39.671
    *** ACTION NAME:() 2011-09-20 09:39:39.671
    KUPP:09:39:39.670: Current trace/debug flags: 00400300 = 4195072
    *** MODULE NAME:(Data Pump Worker) 2011-09-20 09:39:39.672
    *** ACTION NAME:(SYS_EXPORT_SCHEMA_09) 2011-09-20 09:39:39.672
    KUPW:09:39:39.672: 0: ALTER SESSION ENABLE PARALLEL DML called.
    KUPW:09:39:39.672: 0: ALTER SESSION ENABLE PARALLEL DML returned.
    KUPC:09:39:39.693: Setting remote flag for this process to FALSE
    prvtaqis - Enter
    prvtaqis subtab_name upd
    prvtaqis sys table upd
    KUPW:09:39:39.819: 0: KUPP$PROC.WHATS_MY_ID called.
    KUPW:09:39:39.819: 1: KUPP$PROC.WHATS_MY_ID returned.
    KUPW:09:39:39.820: 1: worker max message number: 1000
    KUPW:09:39:39.822: 1: Full cluster access allowed
    KUPW:09:39:39.823: 1: Original job start time: 11-SEP-20 09:39:38 AM
    KUPW:09:39:39.862: 1: KUPP$PROC.WHATS_MY_NAME called.
    KUPW:09:39:39.862: 1: KUPP$PROC.WHATS_MY_NAME returned. Process name: DW00
    KUPW:09:39:39.862: 1: KUPV$FT_INT.GET_INSTANCE_ID called.
    KUPW:09:39:39.866: 1: KUPV$FT_INT.GET_INSTANCE_ID returned. Instance name: prod
    KUPW:09:39:39.870: 1: ALTER SESSION ENABLE RESUMABLE called.
    KUPW:09:39:39.870: 1: ALTER SESSION ENABLE RESUMABLE returned.
    KUPW:09:39:39.871: 1: KUPF$FILE.INIT called.
    KUPW:09:39:39.996: 1: KUPF$FILE.INIT returned.
    KUPW:09:39:39.998: 1: KUPF$FILE.GET_MAX_CSWIDTH called.
    KUPW:09:39:39.998: 1: KUPF$FILE.GET_MAX_CSWIDTH returned.
    KUPW:09:39:39.998: 1: Max character width: 1
    KUPW:09:39:39.998: 1: Max clob fetch: 32757
    KUPW:09:39:39.998: 1: Max varchar2a size: 32757
    KUPW:09:39:39.998: 1: Max varchar2 size: 7990
    KUPW:09:39:39.998: 1: In procedure GET_PARAMETERS
    KUPW:09:39:40.000: 1: In procedure GET_METADATA_FILTERS
    KUPW:09:39:40.001: 1: In procedure GET_METADATA_TRANSFORMS
    KUPW:09:39:40.002: 1: In procedure GET_DATA_FILTERS
    KUPW:09:39:40.004: 1: In procedure GET_DATA_REMAPS
    KUPW:09:39:40.005: 1: In procedure PRINT_MT_PARAMS
    KUPW:09:39:40.005: 1: Master table             : "SYSTEM"."SYS_EXPORT_SCHEMA_09"
    KUPW:09:39:40.005: 1: Metadata job mode        : SCHEMA_EXPORT
    KUPW:09:39:40.005: 1: Debug enable             : TRUE
    KUPW:09:39:40.005: 1: Profile enable           : FALSE
    KUPW:09:39:40.005: 1: Transportable enable     : FALSE
    KUPW:09:39:40.005: 1: Metrics enable           : FALSE
    KUPW:09:39:40.005: 1: db version               : 11.2.0.2.0
    KUPW:09:39:40.005: 1: job version              : 11.2.0.0.0
    KUPW:09:39:40.005: 1: service name             :
    KUPW:09:39:40.005: 1: Current Edition          : ORA$BASE
    KUPW:09:39:40.005: 1: Job Edition              :
    KUPW:09:39:40.005: 1: Abort Step               : 0
    KUPW:09:39:40.005: 1: Access Method            : AUTOMATIC
    KUPW:09:39:40.005: 1: Data Options             : 0
    KUPW:09:39:40.006: 1: Dumper directory         :
    KUPW:09:39:40.006: 1: Master only              : FALSE
    KUPW:09:39:40.006: 1: Data Only                : FALSE
    KUPW:09:39:40.006: 1: Metadata Only            : FALSE
    KUPW:09:39:40.006: 1: Estimate                 : BLOCKS
    KUPW:09:39:40.006: 1: Data error logging table :
    KUPW:09:39:40.006: 1: Remote Link              :
    KUPW:09:39:40.006: 1: Dumpfile present         : TRUE
    KUPW:09:39:40.006: 1: Table Exists Action      :
    KUPW:09:39:40.006: 1: Partition Options        : NONE
    KUPW:09:39:40.006: 1: Tablespace Datafile Count: 0
    KUPW:09:39:40.006: 1: Metadata Filter    Index : 1    Count : 10
    KUPW:09:39:40.006: 1:         1           Name - INCLUDE_USER
    KUPW:09:39:40.006: 1:                    Value - TRUE
    KUPW:09:39:40.006: 1:              Object Name - SCHEMA_EXPORT
    KUPW:09:39:40.006: 1:         2           Name - SCHEMA_EXPR
    KUPW:09:39:40.006: 1:                    Value -  IN ('TIA')
    KUPW:09:39:40.006: 1:         3           Name - NAME_EXPR
    KUPW:09:39:40.006: 1:                    Value -  ='ACC_PAYMENT_SPECIFICATION'
    KUPW:09:39:40.006: 1:                   Object - TABLE
    KUPW:09:39:40.006: 1:         4           Name - INCLUDE_PATH_EXPR
    KUPW:09:39:40.006: 1:                    Value -  IN ('TABLE')
    KUPW:09:39:40.006: 1:         5           Name - ORDERED
    KUPW:09:39:40.006: 1:                    Value - FALSE
    KUPW:09:39:40.006: 1:                   Object - TABLE_DATA
    KUPW:09:39:40.006: 1:         6           Name - NO_XML
    KUPW:09:39:40.006: 1:                    Value - TRUE
    KUPW:09:39:40.006: 1:                   Object - XMLSCHEMA/EXP_XMLSCHEMA
    KUPW:09:39:40.006: 1:         7           Name - XML_OUTOFLINE
    KUPW:09:39:40.006: 1:                    Value - FALSE
    KUPW:09:39:40.006: 1:                   Object - TABLE/TABLE_DATA
    KUPW:09:39:40.006: 1:         8           Name - XDB_GENERATED
    KUPW:09:39:40.006: 1:                    Value - FALSE
    KUPW:09:39:40.006: 1:                   Object - TABLE/TRIGGER
    KUPW:09:39:40.007: 1:         9           Name - XDB_GENERATED
    KUPW:09:39:40.007: 1:                    Value - FALSE
    KUPW:09:39:40.007: 1:                   Object - TABLE/RLS_POLICY
    KUPW:09:39:40.007: 1:         10           Name - PRIVILEGED_USER
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1: MD remap schema    Index : 4    Count : 0
    KUPW:09:39:40.007: 1: MD remap other     Index : 5    Count : 0
    KUPW:09:39:40.007: 1: MD Transform ddl   Index : 2    Count : 11
    KUPW:09:39:40.007: 1:         1           Name - DBA
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1:                   Object - JOB
    KUPW:09:39:40.007: 1:         2           Name - EXPORT
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1:         3           Name - PRETTY
    KUPW:09:39:40.007: 1:                    Value - FALSE
    KUPW:09:39:40.007: 1:         4           Name - SQLTERMINATOR
    KUPW:09:39:40.007: 1:                    Value - FALSE
    KUPW:09:39:40.007: 1:         5           Name - CONSTRAINTS
    KUPW:09:39:40.007: 1:                    Value - FALSE
    KUPW:09:39:40.007: 1:                   Object - TABLE
    KUPW:09:39:40.007: 1:         6           Name - REF_CONSTRAINTS
    KUPW:09:39:40.007: 1:                    Value - FALSE
    KUPW:09:39:40.007: 1:                   Object - TABLE
    KUPW:09:39:40.007: 1:         7           Name - OID
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1:                   Object - TABLE
    KUPW:09:39:40.007: 1:         8           Name - RESET_PARALLEL
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1:                   Object - INDEX
    KUPW:09:39:40.007: 1:         9           Name - OID
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1:                   Object - TYPE
    KUPW:09:39:40.007: 1:         10           Name - OID
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1:                   Object - INC_TYPE
    KUPW:09:39:40.007: 1:         11           Name - REVOKE_FROM
    KUPW:09:39:40.008: 1:                    Value - SYSTEM
    KUPW:09:39:40.008: 1:                   Object - ROLE
    KUPW:09:39:40.008: 1: Data Filter        Index : 6    Count : 0
    KUPW:09:39:40.008: 1: Data Remap         Index : 7    Count : 0
    KUPW:09:39:40.008: 1: MD remap name      Index : 8    Count : 0
    KUPW:09:39:40.008: 1: In procedure DISPATCH_WORK_ITEMS
    KUPW:09:39:40.009: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:39:40.009: 1: KUPC$QUEUE.TRANSCEIVE called.
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    kwqberlst rqan->lascn_kwqiia > 0 block
    kwqberlst rqan->lascn_kwqiia  7
    kwqberlst ascn -90145310 lascn 22
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    KUPW:09:39:40.036: 1: KUPC$QUEUE.TRANSCEIVE returned. Received 2011
    KUPW:09:39:40.036: 1: DBMS_LOB.CREATETEMPORARY called.
    KUPW:09:39:40.037: 1: DBMS_LOB.CREATETEMPORARY returned.
    KUPW:09:39:40.038: 1: Flags: 18
    KUPW:09:39:40.038: 1: Start sequence number:
    KUPW:09:39:40.038: 1: End sequence number:
    KUPW:09:39:40.038: 1: Metadata Parallel: 1
    KUPW:09:39:40.038: 1: Primary worker id: 1
    KUPW:09:39:40.041: 1: In procedure GET_TABLE_DATA_OBJECTS
    KUPW:09:39:40.041: 1: In procedure CREATE_MSG
    KUPW:09:39:40.041: 1: KUPV$FT.MESSAGE_TEXT called.
    KUPW:09:39:40.041: 1: KUPV$FT.MESSAGE_TEXT returned.
    KUPW:09:39:40.041: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:39:40.041: 1: KUPC$QUEUE_INT.SEND called.
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    kwqberlst rqan->lascn_kwqiia > 0 block
    kwqberlst rqan->lascn_kwqiia  7
    kwqberlst ascn -90145310 lascn 22
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    KUPW:09:39:40.044: 1: KUPC$QUEUE_INT.SEND returned.
    KUPW:09:39:40.044: 1: Estimate in progress using BLOCKS method...
    KUPW:09:39:40.044: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
    KUPW:09:39:40.044: 1: Old Seqno: 0 New Path: SCHEMA_EXPORT/TABLE/TABLE_DATA PO Num: -5 New Seqno: 62
    KUPW:09:39:40.046: 1: Created type completion for duplicate 62
    KUPW:09:39:40.046: 1: In procedure CREATE_MSG
    KUPW:09:39:40.046: 1: KUPV$FT.MESSAGE_TEXT called.
    KUPW:09:39:40.046: 1: KUPV$FT.MESSAGE_TEXT returned.
    KUPW:09:39:40.046: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:39:40.046: 1: KUPC$QUEUE_INT.SEND called.
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    kwqberlst rqan->lascn_kwqiia > 0 block
    kwqberlst rqan->lascn_kwqiia  7
    kwqberlst ascn -90145310 lascn 22
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    KUPW:09:39:40.047: 1: KUPC$QUEUE_INT.SEND returned.
    KUPW:09:39:40.047: 1: Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    KUPW:09:39:40.048: 1: In procedure CONFIGURE_METADATA_UNLOAD
    KUPW:09:39:40.048: 1: Phase: ESTIMATE_PHASE Filter Name:  Filter Value:
    KUPW:09:39:40.048: 1: DBMS_METADATA.OPEN11.2.0.0.0 called.
    KUPW:09:39:40.182: 1: DBMS_METADATA.OPEN11.2.0.0.0 returned. Source handle: 100001
    KUPW:09:39:40.182: 1: DBMS_METADATA.SET_FILTER called. metadata_phase: ESTIMATE_PHASE
    KUPW:09:39:40.182: 1: DBMS_METADATA.SET_FILTER returned. In function GET_NOEXP_TABLE
    KUPW:09:39:40.194: 1: DBMS_METADATA.SET_PARSE_ITEM called.
    *** 2011-09-20 09:39:40.325
    KUPW:09:39:40.325: 1: DBMS_METADATA.SET_PARSE_ITEM returned.
    KUPW:09:39:40.325: 1: DBMS_METADATA.SET_COUNT called.
    KUPW:09:39:40.328: 1: DBMS_METADATA.SET_COUNT returned.
    KUPW:09:39:40.328: 1: DBMS_METADATA.FETCH_XML_CLOB called.
    *** 2011-09-20 09:39:42.603
    KUPW:09:39:42.603: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
    KUPW:09:39:42.603: 1: In procedure CREATE_TABLE_DATA_OBJECT_ROWS
    KUPW:09:39:42.603: 1: In function GATHER_PARSE_ITEMS
    KUPW:09:39:42.603: 1: In function CHECK_FOR_REMAP_NETWORK
    KUPW:09:39:42.603: 1: Nothing to remap
    KUPW:09:39:42.603: 1: In procedure BUILD_OBJECT_STRINGS
    KUPW:09:39:42.604: 1: In procedure LOCATE_DATA_FILTERS
    KUPW:09:39:42.604: 1: In function NEXT_PO_NUMBER
    KUPW:09:39:42.620: 1: In procedure DETERMINE_METHOD_PARALLEL
    KUPW:09:39:42.620: 1: flags mask: 0
    KUPW:09:39:42.620: 1: dapi_possible_meth: 1
    KUPW:09:39:42.620: 1: data_size: 3019898880
    KUPW:09:39:42.620: 1: et_parallel: TRUE
    KUPW:09:39:42.620: 1: object: TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION"                               <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
    KUPW:09:39:42.648: 1: l_dapi_bit_mask: 7
    KUPW:09:39:42.648: 1: l_client_bit_mask: 7
    KUPW:09:39:42.648: 1: TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION" either, parallel: 12               <<<<< Here it says either (I thought that was the method?)  <<<<<<<<<<<<<<<<
    KUPW:09:39:42.648: 1: FORALL BULK INSERT called.
    KUPW:09:39:42.658: 1: FORALL BULK INSERT returned.
    KUPW:09:39:42.660: 1: DBMS_LOB.TRIM called. v_md_xml_clob
    KUPW:09:39:42.660: 1: DBMS_LOB.TRIM returned.
    KUPW:09:39:42.660: 1: DBMS_METADATA.FETCH_XML_CLOB called.
    KUPW:09:39:42.678: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
    KUPW:09:39:42.678: 1: DBMS_LOB.CREATETEMPORARY called.
    KUPW:09:39:42.678: 1: DBMS_LOB.CREATETEMPORARY returned.
    KUPW:09:39:42.678: 1: In procedure UPDATE_TD_ROW_EXP with seqno: 62
    KUPW:09:39:42.680: 1: 1 rows fetched
    KUPW:09:39:42.680: 1: In function NEXT_PO_NUMBER
    KUPW:09:39:42.680: 1: Next table data array entry: 1 Parallel: 12 Size: 3019898880 Method: 4Creation_level: 0               <<<<<<<<<<<<<<<<  HERE IT SAYS METHOD = 4  and PARALLEL=12 (I'm not using the parallel parameter ???)  <<<<<<<<<<<<<<<<<<
    KUPW:09:39:42.681: 1: In procedure UPDATE_TD_BASE_PO_INFO
    KUPW:09:39:42.683: 1: Updated 1 td objects with bpo between 1 and 1
    KUPW:09:39:42.684: 1: Send table_data_varray called.  Count: 1
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    kwqberlst rqan->lascn_kwqiia > 0 block
    kwqberlst rqan->lascn_kwqiia  7
    kwqberlst ascn -90145310 lascn 22
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    KUPW:09:39:42.695: 1: Send table_data_varray returned.
    KUPW:09:39:42.695: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:39:42.695: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
    KUPW:09:39:42.695: 1: Old Seqno: 62 New Path:  PO Num: -5 New Seqno: 0
    KUPW:09:39:42.695: 1: Object count: 1
    KUPW:09:39:42.697: 1: 1 completed for 62
    KUPW:09:39:42.697: 1: DBMS_METADATA.CLOSE called. Handle: 100001
    KUPW:09:39:42.697: 1: DBMS_METADATA.CLOSE returned.
    KUPW:09:39:42.697: 1: In procedure CREATE_MSG
    KUPW:09:39:42.697: 1: KUPV$FT.MESSAGE_TEXT called.
    KUPW:09:39:42.698: 1: KUPV$FT.MESSAGE_TEXT returned.
    KUPW:09:39:42.698: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:39:42.698: 1: KUPC$QUEUE_INT.SEND called.
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    kwqberlst rqan->lascn_kwqiia > 0 block
    kwqberlst rqan->lascn_kwqiia  7
    kwqberlst ascn -90145310 lascn 22
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    KUPW:09:39:42.699: 1: KUPC$QUEUE_INT.SEND returned.
    KUPW:09:39:42.699: 1: Total estimation using BLOCKS method: 2.812 GB
    KUPW:09:39:42.699: 1: In procedure CONFIGURE_METADATA_UNLOAD
    KUPW:09:39:42.699: 1: Phase: WORK_PHASE Filter Name: BEGIN_WITH Filter Value:
    KUPW:09:39:42.699: 1: DBMS_METADATA.OPEN11.2.0.0.0 called.
    KUPW:09:39:42.837: 1: DBMS_METADATA.OPEN11.2.0.0.0 returned. Source handle: 200001
    KUPW:09:39:42.837: 1: DBMS_METADATA.SET_FILTER called. metadata_phase: WORK_PHASE
    KUPW:09:39:42.837: 1: DBMS_METADATA.SET_FILTER returned. In function GET_NOEXP_TABLE
    KUPW:09:39:42.847: 1: DBMS_METADATA.SET_PARSE_ITEM called.
    KUPW:09:39:42.964: 1: DBMS_METADATA.SET_PARSE_ITEM returned.
    KUPW:09:39:42.964: 1: DBMS_METADATA.SET_COUNT called.
    KUPW:09:39:42.967: 1: DBMS_METADATA.SET_COUNT returned.
    KUPW:09:39:42.967: 1: KUPF$FILE.OPEN_CONTEXT called.
    KUPW:09:39:42.967: 1: KUPF$FILE.OPEN_CONTEXT returned.
    KUPW:09:39:42.968: 1: DBMS_METADATA.FETCH_XML_CLOB called. Handle: 200001
    *** 2011-09-20 09:40:01.798
    KUPW:09:40:01.798: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
    KUPW:09:40:01.798: 1: Object seqno fetched:
    KUPW:09:40:01.799: 1: Object path fetched:
    KUPW:09:40:01.799: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:40:01.799: 1: In procedure COMPLETE_EXP_OBJECT
    KUPW:09:40:01.799: 1: KUPF$FILE.FLUSH_LOB called.
    KUPW:09:40:01.815: 1: KUPF$FILE.FLUSH_LOB returned.
    KUPW:09:40:01.815: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
    KUPW:09:40:01.815: 1: Old Seqno: 226 New Path:  PO Num: -5 New Seqno: 0
    KUPW:09:40:01.815: 1: Object count: 1
    KUPW:09:40:01.815: 1: 1 completed for 226
    KUPW:09:40:01.815: 1: DBMS_METADATA.CLOSE called.  Handle: 200001
    KUPW:09:40:01.816: 1: DBMS_METADATA.CLOSE returned.
    KUPW:09:40:01.816: 1: KUPF$FILE.CLOSE_CONTEXT called.
    KUPW:09:40:01.820: 1: KUPF$FILE.CLOSE_CONTEXT returned.
    KUPW:09:40:01.821: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:40:01.821: 1: KUPC$QUEUE.TRANSCEIVE called.
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    kwqberlst rqan->lascn_kwqiia > 0 block
    kwqberlst rqan->lascn_kwqiia  7
    kwqberlst ascn -90145310 lascn 22
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    KUPW:09:40:01.827: 1: KUPC$QUEUE.TRANSCEIVE returned. Received 2012
    KUPW:09:40:01.827: 1: DBMS_LOB.CREATETEMPORARY called.
    KUPW:09:40:01.828: 1: DBMS_LOB.CREATETEMPORARY returned.
    KUPW:09:40:01.828: 1: Process order range: 1..1
    KUPW:09:40:01.828: 1: Method: 1
    KUPW:09:40:01.828: 1: Parallel: 1
    KUPW:09:40:01.828: 1: Creation level: 0
    KUPW:09:40:01.830: 1: BULK COLLECT called.
    KUPW:09:40:01.830: 1: BULK COLLECT returned.
    KUPW:09:40:01.830: 1: In procedure BUILD_OBJECT_STRINGS
    KUPW:09:40:01.836: 1: In procedure MOVE_DATA UNLOADing process_order 1 TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION"            <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
    KUPW:09:40:01.839: 1: KUPD$DATA.OPEN called.
    KUPW:09:40:01.840: 1: KUPD$DATA.OPEN returned.
    KUPW:09:40:01.840: 1: KUPD$DATA.SET_PARAMETER - common called.
    KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - common returned.
    KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - flags called.
    KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - flags returned.
    KUPW:09:40:01.843: 1: KUPD$DATA.START_JOB called.
    KUPW:09:40:01.918: 1: KUPD$DATA.START_JOB returned. In procedure GET_JOB_VERSION
    This is how I called expdp:
    expdp system/xxxxxxxxx schemas=tia directory=expdp INCLUDE=TABLE:\" =\'ACC_PAYMENT_SPECIFICATION\'\" REUSE_DUMPFILES=Y LOGFILE=expdp:$LOGFILE TRACE=400300

    Hi there ...
    I have read the note - that's where I found the link to the trace note 286496.1 - on how to set up a trace.
    But I still need an explanation of the methods (1, 2, 4, etc.).
    Regards,
    Mette
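
    One way to see what the Data Pump worker session is actually waiting on is to look at its cumulative wait events; a small sketch (the worker sets its module to 'Data Pump Worker', as visible in the trace above):
    SELECT se.sid, se.event, se.total_waits, se.time_waited
    FROM   v$session_event se, v$session s
    WHERE  s.sid = se.sid
    AND    s.module = 'Data Pump Worker'
    ORDER  BY se.time_waited DESC;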

  • STOPPED JOBS with expdp and dbms_scheduler

    Hello.
    I am working with 10g Release 2 in a RAC environment, and I am trying to put an export job in the scheduler.
    To launch the export I have made a shell script, which first runs the export process and afterwards launches a bzip2 command to compress the resulting dmp file.
    The problem is that the export process finishes OK, but it doesn't compress the file, because the scheduler marks the job as STOPPED.
    The log says:
    REASON="Stop job with force called by user: 'SYS'"
    and the expdp OS process launched by the external job stays running forever, as if it were waiting for the expdp to exit and it can't, so the script never arrives at the part that compresses the file.
    Here is the script that I made to export the schema:
    #!/bin/bash
    export ORACLE_HOME=/opt/oracle/product/10.2.0/db
    export PATH=$PATH:$ORACLE_HOME/bin
    export DIRBACK=/ORACLE/BACKUPS/BMR/Dumps
    # timestamp used in the dump and log file names
    export dia=`date +%d_%m_%Y_%H_%M_%S`
    export LOG=dump_backup_bmr_$dia.log
    cd $DIRBACK
    # export the BMR schema with Data Pump
    $ORACLE_HOME/bin/expdp userid=oracle_backup/orabck@BMR dumpfile="BMR_BMR_$dia.dmp" schemas=BMR directory=Dumps logfile=$LOG
    cd $DIRBACK
    # compress the resulting dump file
    /usr/bin/bzip2 -f --best ./BMR_BMR_$dia.dmp
    cd $DIRBACK
    # mail the Data Pump log
    /bin/mail -s "DUMP BACKUP BMR DIARIO [$dia]" [email protected] < ./dump_backup_bmr_$dia.log
    I have put in several cd $DIRBACK commands to see if it fails because the script can't find the dmp file.
    Any idea why it gets marked STOPPED before the script finishes?
    PS: sorry for my poor English.
    Regards

    Hi,
    A stop is only done in two cases - if the user calls dbms_scheduler.stop_job or if the database is shut down while a job is running. Make sure the database is not being shut down while the job is running or inside of the job.
    If expdp is still running then this suggests that it is hanging. One possibility for that is that expdp is generating a lot of standard error messages and hanging the job (this is a known issue in 10gR2). You can try redirecting standard output and error to files to see if this helps.
    e.g.
    $ORACLE_HOME/bin/expdp > /tmp/output 2> /tmp/errors
    Hope this helps,
    Ravi.
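
    Applied to the script above, the redirection suggestion would look roughly like this (the /tmp paths are only examples):
    $ORACLE_HOME/bin/expdp userid=oracle_backup/orabck@BMR dumpfile="BMR_BMR_$dia.dmp" schemas=BMR directory=Dumps logfile=$LOG > /tmp/expdp_bmr.out 2> /tmp/expdp_bmr.err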
