Dump import

Hi,
I am getting the following error while importing the dumpfile using impdp (oracle10gR2)
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
ORA-39083: Object type INDEX failed to create with error:
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
Failing sql is:
CREATE UNIQUE INDEX "TSOWN"."EBF_TERMINAL_PK" ON "TSOWN"."EBF_TERMINAL" ("RECID"
) PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 NEXT 1048576 MINEXTE
NTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_P
OOL DEFAULT) TABLESPACE "T24INDEX" PARALLEL 1
ORA-39083: Object type INDEX failed to create with error:
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
Thanks in advance
KGS

KSG wrote:
Hi,
I am getting the following error while importing the dumpfile using impdp (oracle10gR2)
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
ORA-39083: Object type INDEX failed to create with error:
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
Failing sql is:
CREATE UNIQUE INDEX "TSOWN"."EBF_TERMINAL_PK" ON "TSOWN"."EBF_TERMINAL" ("RECID"
) PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 NEXT 1048576 MINEXTE
NTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_P
OOL DEFAULT) TABLESPACE "T24INDEX" PARALLEL 1
ORA-39083: Object type INDEX failed to create with error:
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found

Well, http://lmgtfy.com/?q=ORA-01452 gave:
> ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
> Cause: A CREATE UNIQUE INDEX statement specified one or more columns that currently contain duplicate values. All values in the indexed columns must be unique by row to create a UNIQUE INDEX.
> Action: If the entries need not be unique, remove the keyword UNIQUE from the CREATE INDEX statement, then re-execute the statement. If the entries must be unique, as in a primary key, then remove duplicate values before creating the UNIQUE index.
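If the duplicates may be discarded, the cleanup can be sketched directly against the table and column named in the failing SQL above (verify which copy of each row to keep before deleting anything):

```sql
-- List the RECID values that occur more than once
SELECT recid, COUNT(*)
FROM   tsown.ebf_terminal
GROUP  BY recid
HAVING COUNT(*) > 1;

-- Keep one arbitrary row per RECID and remove the rest
DELETE FROM tsown.ebf_terminal
WHERE  rowid NOT IN (SELECT MIN(rowid)
                     FROM   tsown.ebf_terminal
                     GROUP  BY recid);
```

After the cleanup, the CREATE UNIQUE INDEX statement from the log can be re-run by hand, or the index portion of the import repeated.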
HTH
Aman....

Similar Messages

  • Dump import seems hanging/not progressive

    Aloha!
    I have a 48GB Data Pump dump to import. The import has been running for the past 48 hours and is still at this point:
    . . imported "SCHEMA_NAME"."T_NAME" 0 KB 0 rows
    Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
    The import was started with these settings:
    system/******** schemas=s_name TRANSFORM=SEGMENT_ATTRIBUTES:N TABLE_EXISTS_ACTION=REPLACE directory=BW_TEMP dumpfile=DPname.dmp logfile=imp1.log
    Is this normal for a dump file of this size? If not, what should I check and change?
    Thanks in advance for the forthcoming help.
    Regards

    The first set of scripts returned no result.
    Running this script:
    select sid, serial#, sofar, totalwork from v$session_longops;
    does the output below look good?
    SID  SERIAL#  SOFAR   TOTALWORK
    135  176      46355   46356
    69   8286     2       2
    69   8286     1       1
    (... about 120 further rows for SID 69, all small operations with SOFAR between 1 and 5 and SOFAR = TOTALWORK ...)
    72   338      128357  128357
    72   338      115370  115370
    72   338      12675   12675
    72   338      113819  113819
    (... about 130 further rows for SID 72, all with SOFAR = TOTALWORK and values ranging from roughly 4877 to 132673, plus one row for SID 73: serial# 39508, SOFAR 7, TOTALWORK 7 ...)
    72   338      119992  119992
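    In v$session_longops alone it is hard to tell which rows belong to the Data Pump job. A couple of queries that narrow it down (a sketch; the job-name pattern assumes the default SYS_IMPORT... naming seen in the log):

```sql
-- Which Data Pump jobs exist and what state they are in
SELECT owner_name, job_name, state
FROM   dba_datapump_jobs;

-- Long operations belonging to an import job that are still in flight
SELECT sid, serial#, opname, sofar, totalwork
FROM   v$session_longops
WHERE  opname LIKE 'SYS_IMPORT%'
AND    sofar <> totalwork;
```

    It is also possible to attach to the running job with impdp system/******** attach=<job_name> and issue STATUS at the interactive prompt.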

  • DB Dumps import/export in ODI

    I have Master and Work repository DB dumps. I created two new schemas and imported these dumps into them.
    I then created a Master repository with the new schema name and logged in to it, but I did not see any of the topology that was in the imported dump. Moreover, when I tried to explicitly create a Work repository, it was rejected because the password did not match the existing one. When I edited the login for the new Work repository, I saw it takes workrep1 by default, and when I tried to remove that, there was no Work repository for the given Master details.
    What can I do?

    Zareen Shaikh wrote:
    I am moving from Dev to Test, so I have taken expdp and impdp exports. I got no errors while doing so. But when I created the Master, I assumed it should have all the connections and the Work repository.
    If this is the case, you don't need to be 'creating' a master - as you've just imported it from Dev, you should just be logging in. If the schema passwords have changed, you need to first change your ODI login profile for the master repository, log in to Topology, and change the Work repository connection (under Work Repositories) to your new password. You should then have a full clone of your Dev objects in your Test environment.
    However, I would warn you that you've done two questionable things here that might trip you up in future:
    1) You have cloned Master and Work repositories, with identical IDs, into another environment that is part of your development lifecycle. From this point on you won't be able to incrementally release new or changed code from Dev to Test - you are looking at full database (schema) exports and the same list of configuration changes you are doing now every time you want to release a bit of code (unless you replicate the developments in Test, but there is scope for error there).
    2) You have created a 'development' type Work repository in your Test environment; there is nothing stopping people changing the ODI code in interfaces, procedures, knowledge modules etc. How will you guarantee that Test is a true release from Dev (and, preceding that, Production) when they can both go in different directions? I would suggest you create a brand new 'Execution'-only repository and only ever release scenarios from Dev to Test - this forces changes to be made in the right place (Dev) and the code is protected where it should be (Test).
    But since it was not there, I manually tried to create a Work repository. In doing so, it asked for a password; the connection test was successful, but when it asked for password confirmation and I gave the same password, it would not accept it.
    Where is it asking for password confirmation exactly? The Work repository data server connection, or your ODI login profile when trying to log into Studio?

  • Dump import Error

    Hi All,
    I'm getting the below error while importing a dump file.
    ORA-39126: Worker unexpected fatal error in KUPW$WORKER.PUT_DDLS [TABLE:"T24"."STUBFILES"]
    ORA-06502: PL/SQL: numeric or value error
    LPX-00007: unexpected end-of-file encountered
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
    ORA-06512: at "SYS.KUPW$WORKER", line 7709
    ----- PL/SQL Call Stack -----
    object line object
    handle number name
    3d17cfd48 18051 package body SYS.KUPW$WORKER
    3d17cfd48 7736 package body SYS.KUPW$WORKER
    3d17cfd48 15263 package body SYS.KUPW$WORKER
    3d17cfd48 3766 package body SYS.KUPW$WORKER
    3d17cfd48 8370 package body SYS.KUPW$WORKER
    3d174a9f0 1 anonymous block
    3d1d214e8 1501 package body SYS.DBMS_SQL
    3d17cfd48 8201 package body SYS.KUPW$WORKER
    3d17cfd48 1477 package body SYS.KUPW$WORKER
    3d17d3078 2 anonymous block
    ORA-39126: Worker unexpected fatal error in KUPW$WORKER.PUT_DDLS []
    ORA-06502: PL/SQL: numeric or value error
    LPX-00007: unexpected end-of-file encountered
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
    ORA-06512: at "SYS.KUPW$WORKER", line 7704
    ----- PL/SQL Call Stack -----
    object line object
    handle number name
    3d17cfd48 18051 package body SYS.KUPW$WORKER
    3d17cfd48 7736 package body SYS.KUPW$WORKER
    3d17cfd48 15263 package body SYS.KUPW$WORKER
    3d17cfd48 3766 package body SYS.KUPW$WORKER
    3d17cfd48 8370 package body SYS.KUPW$WORKER
    3d17ca8f0 1 anonymous block
    3d1d214e8 1501 package body SYS.DBMS_SQL
    3d17cfd48 8201 package body SYS.KUPW$WORKER
    3d17cfd48 1477 package body SYS.KUPW$WORKER
    3d17d3078 2 anonymous block
    Job "abz"."SYS_IMPORT_FULL_01" stopped due to fatal error at 21:05:07
    please suggest me.
    Thanks
    KSG

    Hi,
    Now I get the following error while trying to import, after running the scripts given in the Metalink note:
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ORA-31626: job does not exist
    ORA-31637: cannot create job SYS_IMPORT_FULL_05 for user T24
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
    ORA-06512: at "SYS.KUPV$FT", line 1481
    ORA-39062: error creating master process DM00
    ORA-31613: Master process DM00 failed during startup.
    Thanks
    KSG

  • Problem regarding database dump import

    Hi all, good afternoon.
    Can you please tell me how to import a dump from a higher version of the database into a lower version? I will explain the scenario below.
    I have a schema named "ABCD" in an Oracle 11g database. I have taken a dump of it (ABCD.dmp) and was trying to import the dump into another schema named "PQRS", which is in Oracle 10g. The 11g server is down, and that is why I have to do this to continue the job.
    I transferred the dump ABCD.dmp to the 10g server using sftp.
    Now when I give the following command:
    imp file=ABCD.dmp fromuser=ABCD touser=PQRS buffer=200000
    it asks for the username and password, but when I provide them, it throws the error: "not a valid export file, header failed verification".
    Please help. Thanks.

    Ohhh, I am very sorry about that.
    From next time it will be done without fail.
    Agustin, thanks a lot for your helpful reply, but my doubt is still not totally clear. As I am new to impdp and expdp, I want to know the method of creating a directory object, as suggested by Agustin UN.
    Previously I never created a directory to import or export.
    Thanks a lot in advance.
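    For reference, impdp and expdp read and write files only through a directory object, created once by a privileged user — a sketch, with the path, directory name, and grantee as placeholders:

```sql
-- As SYSTEM (or another privileged user): map a name to an existing OS directory
CREATE OR REPLACE DIRECTORY dump_dir AS '/u01/app/oracle/dumps';
GRANT READ, WRITE ON DIRECTORY dump_dir TO pqrs;
```

    Two caveats worth noting here: a file written by expdp cannot be read by the classic imp utility at all, and a dump taken by a newer release generally cannot be read by an older release's tools. Moving 11g data into 10g normally requires taking the export with expdp ... VERSION=10.2, which needs the 11g database to be available.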

  • ABAP for Super Dumps: Import- & Export-Parameter for a Table in a FM

    Hello ABAP Profs,
    sorry I am BW.
    <b>Import and export parameters for a table in and out of a function module.</b>
    I want to pass a table into a function module, change it, and return it.
    How do I have to define the import and export parameters in the FM?
    The table is defined like this:
    DATA: zvpshub_tab TYPE SORTED TABLE OF /bic/pzvpshub WITH UNIQUE KEY
    /bic/zvpshub objvers /bic/zvpsoursy INITIAL SIZE 0.
    Thanks a lot
    Martin Sautter

    Hi Clemens,
    <u>in SE11</u> I defined a data type of type Structure: ZVPSHUB_ROW.
    <u>in SE11</u> I defined a data type of type Table Type: ZVPSHUB_TAB, based on row type ZVPSHUB_ROW.
    <u>in SE80</u> I created an FM with a CHANGING parameter referencing ZVPSHUB_TAB:
    FUNCTION ZVP_SHUB_TAB_LOAD.
    ""Lokale Schnittstelle:
    *"  CHANGING
    *"     VALUE(SHUB_TAB) TYPE  ZVPSHUB_TAB
    <u>in RSA1</u> in BW, in the start routine of the upload rules, I defined the table:
    DATA: shub_tab TYPE zvpshub_tab.
    <u>in RSA1</u> in BW, in the start routine, I called the FM:
    CALL FUNCTION 'ZVP_SHUB_TAB_LOAD'
        CHANGING
          shub_tab = shub_tab.
    and it works ..
    Thank You
    Martin Sautter

  • Oracle 10g Data Dump---Import to different users

    Hi everybody,
    I exported data in Oracle 10g using expdp from user schema abc1.
    Now I want to import that data into a different user schema (abc2) and another user schema (abc3). Both schemas (abc2 and abc3) are in the same tablespace.
    How do I do this? Do I need to export the data as SYSTEM in order to import it for different users?
    Thanks in advance
    Pal

    Shall i need to export that data in SYSTEM mode to import for different users..?
    No need. You can use the REMAP_SCHEMA option on import, e.g.
    impdp system/<password> directory=<directory> dumpfile=<dumpfile> remap_schema=abc1:abc2
    See http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_import.htm#sthref339
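    Since the same dump can be imported more than once, loading the source schema into both abc2 and abc3 is simply two runs with different mappings — a sketch, with placeholder directory and dump file names:

```sh
# First run: load abc1's objects into abc2
impdp system/<password> directory=dp_dir dumpfile=abc1.dmp remap_schema=abc1:abc2

# Second run: same dump file, now into abc3
impdp system/<password> directory=dp_dir dumpfile=abc1.dmp remap_schema=abc1:abc3
```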

  • Estimate the Import Time by viewing the DUMP File Size

    It's a very generic question and I know it can vary from DB to DB.
    But is there any standard way to estimate the Oracle dump import time from the size of the export dump file?

    Well, it's going to be vaguely linear, in that it probably takes about twice as long to load a 2 GB dump file as to load a 1 GB dump file, all else being equal. Of course, all things rarely are equal.
    For example,
    - Since only index DDL is stored in the dump file, dumps from systems with a lot of indexes will take longer to load than dumps from systems with few indexes, all else being equal.
    - Doing a direct path load will generally be more efficient than doing a conventional path load
    Justin

  • DUMP CONNE_ILLEGAL_TRANSPORT_HEADER

    Hi SDNers,
    I have the following DUMP: CONNE_ILLEGAL_TRANSPORT_HEADER, when I execute several transactions in ECC 6.0:
    For example:
    - S_ALR_87012284
    - FSE2
    In both situations:
    Program: SAPLFAGL_FSV
    Screen: SAPMSSY0 1000
    Screen Line: 6
    Include: LFAGL_FSVF01
    Line where program dumps
    -> import x011p to lt_x011p
           i011z to lt_x011z
           x011v to lt_x011v
           x011s to lt_x011s
           x011f to lt_x011f
      from database rfdt(bs) client p_mandt
                                 id p_versn
                          accepting padding
                           ignoring conversion errors.
    I have been looking for a note, but I didn't find one.
    Has anyone had the same problem?
    Thanks in advance for your help.
    Best Regards
    Leonardo

    Check this thread.
    Re: Short dump in BI

  • EXPORT/IMPORT Q & A

    Product: ORACLE SERVER
    Date written: 2002-04-08
    EXPORT/IMPORT Q & A
    ===================
    PURPOSE
    The following are key questions and answers about EXPORT and IMPORT.
    Explanation
    [Question 1] How do EXPORT and IMPORT relate to the RDBMS (what is catexp.sql)?
    During export and import, information about already-created objects is queried from the data dictionary, and this object information is spread across several data dictionary tables. To make the required data dictionary information convenient to use, a number of views are scripted in catexp.sql. This script file lives in $ORACLE_HOME/rdbms/admin and is run at install time.
    [Question 2] Is there a fixed order in which objects are backed up during export?
    There is an order in which objects are exported, and it can change when new object types are introduced, for example by an Oracle version upgrade. The export order of objects is as follows:
    1. Tablespaces
    2. Profiles
    3. Users
    4. Roles
    5. System Privilege Grants
    6. Role Grants
    7. Default Roles
    8. Tablespace Quotas
    9. Resource Costs
    10. Rollback Segments
    11. Database Links
    12. Sequences (includes grants)
    13. Snapshots (includes grants, auditing)
    14. Snapshot logs
    15. Job Queues
    16. Refresh Groups
    17. Cluster Definitions
    18. Tables (includes grants, column grants, comments, indexes, constraints, auditing)
    19. Referential Integrity
    20. POSTTABLES actions
    21. Synonyms
    22. Views
    23. Stored Procedures
    24. Triggers
    25. Default and System Auditing
    [Question 3] What are BUFFER and RECORDLENGTH in export?
    - BUFFER
    During export, several rows of an object are fetched at once. Information fetched from disk passes through memory before being written to the file, and the amount of memory allocated for this is the value of the BUFFER parameter.
    - RECORDLENGTH
    This parameter determines how much of the export data held in memory is transferred to the file in a single write.
    [Note] It is most efficient to make BUFFER and RECORDLENGTH multiples of the OS block size.
    [Question 4] When exporting or importing a large number of rows, can you tell how many rows have been processed?
    Yes. Up to V7.1 there was no way to see progress when processing many rows, so you could not tell whether work was in progress or the system was hung; from V7.2 you can check progress using the FEEDBACK option.
    [Question 5] How many rows are fetched at a time during export?
    The number of rows fetched at a time is tied to the buffer size. The space one row occupies during export is (sum of the column sizes) + 4 * (number of columns). The number of rows fetched at once is then buffer size / export size of one row. From this, the size of the export output file is roughly the export size of one row * the number of rows.
    [Question 6] How compatible are export and import across versions?
    Export/import compatibility is directly tied to the Oracle version. It can be described in four cases, using the following setup: suppose machine A runs Oracle V7.0 and machine B runs Oracle V7.1; call Oracle V7.0 "X" and Oracle V7.1 "Y".
    - Base compatibility: export an X DB with X's exp and import it into an X DB with X's imp. This is of course supported.
    - Upward compatibility: export an X DB with X's exp and import it into a Y DB with Y's imp. Oracle supports this too.
    - Downward compatibility: export a Y DB with Y's exp and import it into an X DB with X's imp. This may or may not be supported.
    - Cross compatibility: export a Y DB using X's exp (over SQL*Net) and import it into an X or Y DB (using whichever imp is appropriate). This may or may not be supported.
    [Question 7] In which cases does downward compatibility fail?
    V7.2 has an option called hash cluster expressions. If a cluster is created with it, used, and exported, importing downward into V7.0 or V7.1 fails because the options in the CREATE CLUSTER statement do not match.
    [Question 8] What causes the EXP-37 error (export views not compatible with database version)?
    This error arises in cross compatibility: the views export relies on (created by catexp.sql) do not match the Oracle version. To resolve it, install views that exp can use.
    [Question 9] Can only users with DBA privileges run a full export?
    In Version 6, only users with the DBA privilege can run a full export; in V7, even a non-DBA user can run a full export once granted the EXP_FULL_DATABASE role.
    [Question 10] Why does a table import sometimes land in a tablespace other than the user's DEFAULT tablespace?
    For example, suppose scott's default tablespace is USERS, yet after an import the tables end up in the TOOLS tablespace. The reason is that the imported table originally lived in the TOOLS tablespace, and scott currently has a quota on TOOLS or holds the UNLIMITED TABLESPACE privilege (included in the RESOURCE role). To create the tables in the default tablespace during import, set the quota on every tablespace other than the default to 0, revoke UNLIMITED TABLESPACE, and then run the import, leaving only the default tablespace quota unlimited. For example:
    $ sqlplus system/manager
    SQL> alter user scott
    quota 0 on system
    quota 0 on tools
    quota 0 on data
    quota unlimited on users;
    SQL> revoke unlimited tablespace from scott;
    After this, run the import. Tablespaces on which the user was never given a quota are unaffected, and if UNLIMITED TABLESPACE (or the RESOURCE role) was never granted, the REVOKE command is not needed either.
    [Question 11] When a core dump / segmentation fault occurs during import
    Oracle has character sets; in Korea, US7ASCII or KO16KSC5601 are most commonly used. If the character set where the export was taken differs from the one where the import runs, the import can core dump or abort with unexplained errors. In that case, either convert the export dump file to the importing side's character set with a convert program and then import, or change one DB's character set so the two match before export/import. Of the two, the convert program is the simpler route; it can be compiled with cc on Unix and used.
    Reference Documents
    --------------------

    I'm talking about the wsusutil export\import process.
    Oh! That's NOT what you asked. What you asked is:
    my question is, do I need to export all files again?
    As for the WSUSUTIL functionality, that is an ALL or NOTHING operation. You have no choice in the matter.
    Lawrence Garvin, M.S., MCSA, MCITP:EA, MCDBA
    SolarWinds Head Geek
    Microsoft MVP - Software Packaging, Deployment & Servicing (2005-2014)
    My MVP Profile: http://mvp.microsoft.com/en-us/mvp/Lawrence%20R%20Garvin-32101
    http://www.solarwinds.com/gotmicrosoft
    The views expressed on this post are mine and do not necessarily reflect the views of SolarWinds.

  • Oracle DMP not importing

    Hi Gurus,
    I have been trying to import a dump, and it is not working for me.
    This is how I have created the user,
    DROP USER de_data CASCADE;
    CREATE USER "DE_DATA" PROFILE "DEFAULT"
    IDENTIFIED BY "dedata" DEFAULT TABLESPACE "WDEL"
    TEMPORARY TABLESPACE "TEMP"
    QUOTA UNLIMITED
    ON "WDEL"
    ACCOUNT UNLOCK;
    GRANT CONNECT, RESOURCE, ALTER ANY TABLE, ALTER ANY TRIGGER, CREATE TABLE, CREATE TRIGGER, DROP ANY TRIGGER TO dedata;
    Now when I try to import the dump, I get this message:
    IMP-00017: following statement failed with ORACLE error 1659
    IMP-00003: ORACLE error 1659 encountered
    ORA-01659: unable to allocate MINEXTENTS beyond 1 in tablespace WDEL
    How can I fix the problem?
    Thanks
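    For what it is worth, ORA-01659 means the WDEL tablespace has no room for the segments being imported. A common remedy is to let the datafile grow, or to add another one — a sketch, with placeholder file paths and sizes:

```sql
-- Allow the existing datafile to grow as needed
ALTER DATABASE DATAFILE '/u01/oradata/db/wdel01.dbf'
  AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED;

-- Or add a second datafile to the tablespace
ALTER TABLESPACE wdel ADD DATAFILE '/u01/oradata/db/wdel02.dbf' SIZE 500M;
```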

    Hi,
    I do not have an answer for your first question, sorry.
    But second one, see installation document.
    http://download.oracle.com/docs/cd/E17556_01/doc/install.40/e15513/otn_install.htm#CBHBABCC
    If you are sure you are not going downgrade to Apex 3.1, you can drop schema.
    Of course you can take database backup first.
    Regards,
    Jari

  • ORA-07445: exception encountered: core dump [hshget1()+199] [ACCESS_VIOLATION] [ADDR:0x18] [PC:0x8C4341F] [UNABLE_TO_READ] []

    Hello all,
    I am working on windows 64 bit (local system, testing)
    Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    My database was working just fine. I was copying an export dump (exp/imp) from external hard disk 'A' to 'B'; as it was urgent, I started an import (imp) of a table from hard disk 'A', which held the original dump.
    The import of the table stopped with ORA-03113: end-of-file on communication channel.
    I thought it might be because I ran the import while the copy was going on, but later, when the copy was over, I ran the import of the same table again and got the same error.
    When I try to log in as user "test" it gives me an ORA-01033 error.
    Enter user-name: test/test
    ERROR:
    ORA-01033: ORACLE initialization or shutdown in progress
    Process ID: 0
    Session ID: 0 Serial number: 0
    Enter user-name: test/test@prod
    ERROR:
    ORA-01033: ORACLE initialization or shutdown in progress
    Process ID: 0
    Session ID: 0 Serial number: 0
    Enter user-name: test/test@prod as sysdba
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> show user
    USER is "SYS"
    my alert log shows ORA 07445 ERROR. below is part of alertlog which is repeating.
    Exception [type: ACCESS_VIOLATION, UNABLE_TO_READ] [ADDR:0x18] [PC:0x8C4341F, hshget1()+199]
    Errors in file c:\app\administrator\diag\rdbms\prod\prod\trace\prod_mmon_7848.trc  (incident=94924):
    ORA-07445: exception encountered: core dump [hshget1()+199] [ACCESS_VIOLATION] [ADDR:0x18] [PC:0x8C4341F] [UNABLE_TO_READ] []
    Incident details in: c:\app\administrator\diag\rdbms\prod\prod\incident\incdir_94924\prod_mmon_7848_i94924.trc
    Fri Aug 23 13:00:10 2013
    Sweep Incident[94923]: completed
    Fri Aug 23 13:00:10 2013
    Trace dumping is performing id=[cdmp_20130823130010]
    Fri Aug 23 13:02:11 2013
    Restarting dead background process MMON
    Fri Aug 23 13:02:11 2013
    MMON started with pid=14, OS id=8512
    To add more: I was studying/practicing "Auditing in Oracle". Yesterday the SYSTEM tablespace was getting full, so I exported "IDL_UB1$" and truncated it.
    Please let me know if I need to provide any more information, and please help me solve this issue.

    Yes, I have a backup of the truncated table with me (I am practicing "auditing" on my local system, hence I exported the table and truncated it, as the SYSTEM tablespace was getting full).
    I also checked the trace file c:\app\administrator\diag\rdbms\prod\prod\incident\incdir_94924\prod_mmon_7848_i94924.trc, but I am not sure what to check; it gives the same
    "ORA-07445: exception encountered: core dump [hshget1()+199] [ACCESS_VIOLATION] [ADDR:0x18] [PC:0x8C4341F] [UNABLE_TO_READ] []" error at the start.
    I am not sure how to read this trace file or how to find the relevant error part; kindly let me know.

  • How to import  user  for SSO (i.e ssoimp.cmd)

    Hi,
    I installed Portal 3.0.7 and tried to transport the application from the development server to the production server. Both use the same DB, the same Portal version, and the same Apache server version. This is the order mentioned for importing, and I did it the same way:
    Import sso dump (ssoimp.cmd)
    Import sec dump (secimp.cmd)
    Import Database dump (imp)
    Import App dump (appimp.cmd)
    Import cont area dump (contimp.cmd)
    But at the very first time, and at the very first step, when I started the SSO dump import, it said the following:
    "Note: Import user has to log in to the login server by entering his/her password and must rename his/her password."
    Of course the SSO import also failed.
    For your info, I didn't create any user for the import. The system automatically created the 'Portal30_SSO' user, with its default password, during installation itself.
    I also tried changing the password for portal30_sso, but all in vain.
    So can anybody guide me on this import issue? Or is there any other workaround for bringing the application developed on the development server to the prod server (is there any way, like taking a datafile backup and replacing it on the PROD server, to reflect the same changes?), or what procedure do you follow to transport a developed application into production in your organization?
    Thanks in advance.
    cheers!!!
    [email protected]

    Wenjun,
    As a general rule, you can use a DX EXPORT request to get the structure of the XML script for DX IMPORT:
    <DX_REQUEST>
      <EXPORT>
        <USER_GROUP_OBJECT>
          <SITE>SK</SITE>
          <USER_GROUP>ADMINISTRATORS</USER_GROUP>
        </USER_GROUP_OBJECT>
      </EXPORT>
    </DX_REQUEST>
    In turn this gives me:
    <USER_GROUP_OBJECT>
      <USER_GROUP>ADMINISTRATORS</USER_GROUP>
      <MODIFIED_DATE_TIME_ISO>2010-01-25T16:03:33Z</MODIFIED_DATE_TIME_ISO>
      <DESCRIPTION>Administrators</DESCRIPTION>
      <SITE>SK</SITE>
      <USER_GROUP_MEMBER>
        <USER_OR_GROUP_GBO>
          <USR>
            <USER_ID>SITE_ADMIN</USER_ID>
            <SITE>SK</SITE>
          </USR>
        </USER_OR_GROUP_GBO>
      </USER_GROUP_MEMBER>
      <CREATED_DATE_TIME_ISO>2010-01-25T16:03:33Z</CREATED_DATE_TIME_ISO>
    </USER_GROUP_OBJECT>
    This structure, with the needed changes, should be inserted into a <DX_REQUEST><IMPORT>...</IMPORT></DX_REQUEST> script.
    Please note that each time you send such a script, the user group will be overwritten rather than updated with new users.
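    For illustration, the import request might look like the sketch below. This is only a guess based on the export output above; which child elements are required on import, and whether the timestamp elements should be dropped, are assumptions you should verify against the DX documentation:

```xml
<DX_REQUEST>
  <IMPORT>
    <USER_GROUP_OBJECT>
      <USER_GROUP>ADMINISTRATORS</USER_GROUP>
      <DESCRIPTION>Administrators</DESCRIPTION>
      <SITE>SK</SITE>
      <USER_GROUP_MEMBER>
        <USER_OR_GROUP_GBO>
          <USR>
            <USER_ID>SITE_ADMIN</USER_ID>
            <SITE>SK</SITE>
          </USR>
        </USER_OR_GROUP_GBO>
      </USER_GROUP_MEMBER>
    </USER_GROUP_OBJECT>
  </IMPORT>
</DX_REQUEST>
```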
    Regards,
    Sergiy

  • Normal Import - Exclude tables from import

    Hi,
    Is it possible to exclude all tables from a database dump import? I have a very large dump, and I don't want to import the tables; I just want to check some procedures/packages/triggers.
    Is there any way to do this?
    Thanks in advance,
    SSN

    You have two options:
    1) As suggested above, use the SHOW=Y option of import; nothing is actually imported, and you can see everything in the generated log file.
    2) Create a database and run the import with ROWS=N; the objects are created without table data, so you would be able to access all the structures.
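    For example, with the classic imp utility the two approaches might look roughly like this (credentials and file names are placeholders; use FROMUSER/TOUSER instead of FULL=Y if the dump is a single-schema export):

```
# Option 1: write the dump's contents (DDL) to a log without importing
imp system/password FILE=mydump.dmp FULL=Y SHOW=Y LOG=contents.log

# Option 2: import the structures only, skipping all table rows
imp system/password FILE=mydump.dmp FULL=Y ROWS=N LOG=structures.log
```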

  • Help fetching files from an FTP server into a database or local folder

    Hello Gurus!
    I need help to automate (or design/write an app to replace a manual process for) reading log files in an FTP server folder. These log files are called CDRs (Call Detail Records) and are named like cdr.txt.20020221010000, meaning that the log file was created on 2002/02/21 at 01:00:00.
    These CDR log files are generated every hour by a Lucent telephone switch using a program called EXS ExchangePlus, in ASCII format (fixed-length delimited, CHAR data type). EXS ExchangePlus writes a record to the CDR/log file after each call is completed, i.e. Calling_Number, Called_Number, Date_connect (yyyymmdd), Time_connect (hhmmss), Charged_time (hhmmss, duration of the call), and so on.
    The manual process I am using now is:
    1. Log in to the FTP server, go to the folder where the CDR log files are, locate the latest generated CDR log files, and download them into a local folder/directory. Normally I download the CDRs for a full day, i.e. 24 CDR log files.
    1a. Optionally, I sometimes use a program called VEdit to combine all 24 CDRs into a single file.
    2. Once I have downloaded the CDRs I want to process (normally a full day's 24 CDRs), I import them into a temp table in the database, where I then convert data types (e.g. date and time fields from CHAR to DATE) and do calculations for billing purposes.
    So if somebody can help with this, I would really appreciate ideas or any kind of information on how to automate this process.
    I'm familiar with the Java 2 platform (using JDeveloper), so if somebody has done something like this before using Java, that would be very helpful. Maybe using Visual Basic 6.0 would help too.
    Thanks Gurus, and I look forward to hearing from you soon!
    Alf Pathros!
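    Since the file names encode the timestamp, a full day's 24 hourly names can be generated rather than typed by hand, which makes step 1 scriptable. Below is a minimal Python sketch of that idea; the host, credentials, and directory are placeholders, the naming pattern (hours 00-23, minutes and seconds at 00) is an assumption based on the example name, and the same logic maps directly onto a Java FTP client:

```python
from datetime import datetime, timedelta
from ftplib import FTP  # stdlib FTP client


def cdr_filenames(day):
    """Return the 24 hourly CDR file names for a day given as 'YYYYMMDD'.

    Assumes one file per hour named cdr.txt.YYYYMMDDHHMMSS with minutes
    and seconds at 00, as in the example cdr.txt.20020221010000.
    """
    base = datetime.strptime(day, "%Y%m%d")
    return ["cdr.txt." + (base + timedelta(hours=h)).strftime("%Y%m%d%H%M%S")
            for h in range(24)]


def fetch_day(host, user, password, remote_dir, day, local_dir="."):
    """Download one full day's CDR files from the FTP server."""
    ftp = FTP(host)
    try:
        ftp.login(user, password)
        ftp.cwd(remote_dir)
        for name in cdr_filenames(day):
            # Binary transfer so the records arrive byte-for-byte intact
            with open(local_dir + "/" + name, "wb") as out:
                ftp.retrbinary("RETR " + name, out.write)
    finally:
        ftp.quit()
```

    A function like fetch_day could then be run once a day from a scheduler (cron or Windows Task Scheduler) to replace the manual download step.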

    Thanks for the idea, though an example would help me more.
    I already found the FTPClient class.
    I would also like to know if there is a way I can append/merge the various CDR files into a single one and then dump/import it into the database.
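    On the append/merge question: since the CDRs are plain ASCII files of records, merging is just concatenating the files in chronological order. A minimal sketch (shown in Python for brevity; the path names are placeholders, and the same approach works in Java by copying each input stream into one output stream):

```python
def merge_files(paths, merged_path):
    """Concatenate the hourly CDR files into one file, in the order given."""
    with open(merged_path, "wb") as out:
        for path in paths:
            with open(path, "rb") as src:
                out.write(src.read())
```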
