Export import NT - Unix

What is the best way to export the complete Oracle Portal, with about 1000 PL/SQL portlets, from NT to Unix (HP-UX)?
1) What are the restrictions?
2) Do you know of any problems?
3) How long will it take (1 day, 2 days, or more)?

Before exporting/importing, be CERTAIN that the versions of Oracle Portal are the SAME between your NT and HP/UX Portal installations. For instance, one installation cannot be at 3.0.6.x vs. another at 3.0.7.x.
Also, there is a portal export/import white paper which provides a good overview of this process at: http://technet.oracle.com/products/iportal/pdf/export_import.pdf
This should get you started in the right direction.
Best Regards,
Harry

Similar Messages

  • System copy with Export/Import, distributed system: UNIX database, Windows CI

    Hi,
    we have a distributed system of which I want to make a copy:
    Oracle Database on HP-UX
    CI + SCS on Windows
    Dual-stack system ABAP/Java
    1) I have made a system copy with database copy tools, and that works fine.
    2) I have made a system copy export/import of a "central system".
    If I try to make an export/import system copy of a distributed system, it won't work (this works fine on a central system). I don't get any error messages, all green check marks!
    It seems the problem is that it does not find the profiles of the CI; it finds the profiles of the SAP gateway.
    If someone is interested, I have more documentation with screenshots in a Word document.
    I have used Installation Master SR3 and tried some older versions as well.
    I have read SAP Note 970518 - Hom./Het. System Copy SAP NetWeaver 7.0 (2004s) SR2
    SAPINST   
    Using virtual hosts for central and/or DB instances
    SYMPTOM:
    You are using virtual host names for your central instance and/or the database instance (e.g. in a HA environment). Although SAPinst finishes successfully, some export steps are not being performed.
    SOLUTION:
    Start the export with the property SAPINST_USE_HOSTNAME=<virtual host>.
    If the central instance and the database instance run on different hosts, run the database export with SAPINST_USE_HOSTNAME=<virtual host name of the DB server> and - if you run a Java or ABAP+Java system - run the central instance export with SAPINST_USE_HOSTNAME=<virtual host name of the central instance host>.
    If you run a central system with different host names in the profiles, handle the export as a distributed export, calling SAPinst with the parameter SAPINST_USE_HOSTNAME. Perform the export of the database instance on the DB server, calling SAPinst with SAPINST_USE_HOSTNAME=<virtual host of the DB server>.
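    For reference, the calls could look like this (a minimal sketch; "dbvhost" and "civhost" are hypothetical virtual host names):
    # on the database server
    ./sapinst SAPINST_USE_HOSTNAME=dbvhost
    # on the central instance host (for Java or ABAP+Java systems)
    ./sapinst SAPINST_USE_HOSTNAME=civhost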

    Hi, I have received the following answer from SAP:
    You're doing everything correctly - besides one thing.
    When you specify the profiles during the database export, it cannot be correct to specify the profiles of the gateway instance located on the same host.
    As a consequence it picks up the wrong instance, G01, and sets the properties 'hasABAP' and 'hasJava' to 'false'... - thus the export is empty. You can see this in the corresponding 'sapinst.log' file.
    The correct procedure would be to specify the profiles of the central instance on the Windows machine. If you cannot access them, you should make them available via Samba or any type of software that makes it possible to export filesystems or directories from Windows to Unix.
    Best regards
    Bent

  • Exporting from HP-UX and importing on Solaris

    Hi all,
    Can anyone give me the procedure and constraints for exporting Oracle 8i on HP-UX and importing into Oracle 9i on a Solaris machine? Any docs or Metalink notes would also help me.
    I need steps such as how to disable the constraints and triggers and how to take the export, what precautions to take while exporting and importing, and which parameters to use for a faster export and import.
    The export is for the full database.
    If anyone can give me a quick response, I would appreciate it, because I have only 2 days in hand...

    Hi,
    I don't think that cross-platform will work with exp/imp, but you can give it a try anyway.
    Take the export in direct path; that will be faster. If you were using 10g you would use Data Pump (expdp/impdp) instead.
    Before you use exp or expdp / imp or impdp, log in as the oracle user and export the variable ORACLE_SID for the target database.
    Run $ exp help=y (for export) or imp help=y (for import) to see all the options.
    E.g. export syntax:
    EXPORT
    exp parfile=exp.par
    parfile (exp.par)
    userid="/ as sysdba"
    file=/path/filename.dmp
    log=/path/filename.log
    direct=y
    constraints=n
    triggers=n
    full=y (for full export)
    owner=user1,user2 (for schema-level export)
    IMPORT
    imp parfile=imp.par
    parfile (imp.par)
    userid="/ as sysdba"
    file=/path/filename.dmp
    log=/path/impfilename.log
    full=y (for full import)
    fromuser=username (for schema level import)
    touser=username
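    Putting the pieces together, a complete session might look like this (a minimal sketch; the SIDs and paths are hypothetical):
    # on the source (HP-UX) machine, as the oracle user
    $ export ORACLE_SID=SRCDB
    $ exp parfile=exp.par
    # copy /path/filename.dmp to the Solaris machine, then there:
    $ export ORACLE_SID=TGTDB
    $ imp parfile=imp.par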
    Try a Google search or the OTN site for good documents on the syntax.
    Cheers,
    Kamalesh JK

  • APEX Application Export/Import

    Hi,
    I have my application built in APEX on a 10.2.0.4 database, which I now want to migrate to 11.2. The name of the schema behind my application is DBADMIN. What I have done is export the application from my current APEX on 10.2.0.4 and try to import it into 11.2, but it gives me the following error message:
    ORA-20001: GET_BLOCK Error. ORA-20001: Execution of the statement was unsuccessful. ORA-06550: line 80, column 32: PLS-00103: Encountered the symbol "!" when expecting one of the following: ( - + case mod new not null <an identifier> <a double-quoted delimited-identifier> <a bind variable> continue avg count current exists max min prior sql stddev sum variance execute forall merge time timestamp interval date <a string lit
    This is the first time I am migrating my application, so please help me find out what the issue is and how to resolve it. The following details how I performed the operation:
    Source APEX
    Workspace name = Monitoring
    User who logged in to this workspace and created the application = SALMAN
    Application name = Monitoring
    Database schema name behind the application = DBADMIN
    I exported the application in UNIX and also in DOS format.
    Destination APEX
    Workspace name = Monitoring
    User who logged in to import the application = UTACADMIN
    Database schema name behind the application = DBADMIN; I have already exported the schema from my source database and imported all the objects with data.
    I tried importing the application from the UNIX as well as the DOS script, but got the same error message as above.
    Please guide me on how I can successfully migrate my application from my source APEX to this new APEX.
    Thanks

    Now I have exported the workspace from my source APEX and imported it into the destination, and everything is fine; now apparently everything is the same, but I still get the same error during import of the application. Please help to resolve this.
    I tried to execute the application export script in SQL*Plus, but the following error was returned:
    EMREP.UTAC.COM.SG$APEX_030200> @d:\f100.sql
    APPLICATION 100 - Monitoring
    Set Credentials...
    Check Compatibility...
    API Last Extended:20090112
    Your Current Version:20090112
    This import is compatible with version: 20090112
    COMPATIBLE (You should be able to run this import without issues.)
    Set Application ID...
    begin
    ERROR at line 1:
    ORA-02291: integrity constraint (APEX_030200.WWV_FLOWS_FK) violated - parent
    key not found
    ORA-06512: at "APEX_030200.WWV_FLOW_API", line 555
    ORA-06512: at line 3
    Thanks
    Salman

  • Regarding Distribution Monitor for export/import

    Hi,
    We are planning to migrate a 1.2TB database from Oracle 10.2 to MaxDB 7.7, and we are currently testing the migration on a test system with 1.2TB of data.
    First we tried a simple export/import, i.e. without Distribution Monitor: we were able to export the database in 16 hrs, but the import had been running for more than 88 hrs, so we aborted the import process. Later we found that we can use Distribution Monitor to distribute the export/import load onto multiple systems so that the import completes within a reasonable time. We used 2 application servers for the export/import; the export completed within 14 hrs, but here again the import was running for more than 80 hrs, so we aborted it. We also did table splitting for the big tables, but no luck. 8 parallel processes were running on each server, i.e. one CI and 2 app servers. We followed the DistributionMonitorUserGuide document from SAP.
    I observed that on the central system, CPU and memory utilization was above 94%, but on the 2 application servers that we added, CPU and memory utilization was very low, i.e. 10%. Please find the system configuration below:
    Central Instance - 8CPU (550Mhz) 32GB RAM
    App Server1 - 8CPU (550Mhz) 16GB RAM
    App Server2 - 8CPU (550Mhz) 16GB RAM
    Also, when I used the top Unix command on the app servers, I was able to see only one R3load process in the run state and all the other 7 R3load processes in the sleep state, while on the central instance all 8 R3load processes were in the run state. I think that, since not all 8 R3load processes on the app servers were running at a time, that could be the reason for the very slow import.
    Please can someone let me know how to improve the import time? Also, if someone has done a database migration from Oracle 10.2 to MaxDB, it would be helpful if they could tell how they did it, and any specific document available for migrating from Oracle to MaxDB would also help.
    Thanks,
    Narendra

    > Also, when I used the top Unix command on the app servers, I was able to see only one R3load process in the run state and all the other 7 R3load processes in the sleep state, while on the central instance all 8 R3load processes were in the run state. I think that, since not all 8 R3load processes on the app servers were running at a time, that could be the reason for the very slow import.
    > Please can someone let me know how to improve the import time?
    R3load connects directly to the database and loads the data. The question here is: how is your database configured (in the sense of caches and memory)?
    > Also, if someone has done a database migration from Oracle 10.2 to MaxDB, it would be helpful if they could tell how they did it, and any specific document available for migrating from Oracle to MaxDB would also help.
    There are no such documents available, since the process of migrating to another database is called a "heterogeneous system copy". This process requires a certified migration consultant to be on-site to do or assist the migration. Those consultants are specially trained for certain databases and know the tips and tricks for improving the migration time.
    See
    http://service.sap.com/osdbmigration
    --> FAQ
    For MaxDB there's a special service available, see
    Note 715701 - Migration to SAP DB/MaxDB
    Markus

  • Extents in Export/Import files

    I work on Oracle 8.1.7 with HP-UX. I need to create a schema in QA with exactly the same tables as in Prod (the data in QA is a subset of Prod).
    Tables in Prod have very big initial extents.
    I created a dump file by exporting the Prod environment.
    I imported the dump file (from Prod) with the INDEXFILE option and got the SQL file. However, each CREATE TABLE statement has the initial extent size included in it. I would have to manually change 600 tables so that the initial extent is smaller in the QA environment. I would like to use the default initial extent size for each table.
    I tried using the COMPRESS=Y and N options and could not spot any difference.
    Is there a way to completely avoid the extent and storage parameters in the export/import process?
    Appreciate any help on this
    Thanks
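    One hedged workaround (a sketch, not from this thread): generate the DDL with the INDEXFILE option, then strip the STORAGE clauses with a text filter before running the script, so tables pick up the tablespace defaults. This assumes each STORAGE clause sits on a single line in the generated file:
    $ imp system/manager file=prod.dmp indexfile=create.sql full=y
    # remove one-line STORAGE (...) clauses; verify the result before running it
    $ sed 's/STORAGE *([^)]*)//g' create.sql > create_nostorage.sql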

    Hi Tushar,
    1. How do I pass this internal table to the function module?
    I assume you are creating your own Y/Z FM. Pass it through a TABLES parameter.
    2. When I am creating the function module in SE37, where do I define this internal table type?
    Define it in the TABLES interface, using the same type that was defined when it was passed in the user-exit function module. If you look at the FM of the user-exit, you will see which one.
    3. Where do I define the error structure type (which is returned by the function module to the main program)? Is it in the EXPORT or TABLES parameters during function module creation?
    Define it in the TABLES interface (not in EXPORT/IMPORT), since what you are going to return is an internal table. You can take BDCMSGCOLL as an example, or create your own Y/Z structure for the same purpose (or use the structure type T100).
    I hope it helps.
    Regards,
    Amit M.

  • EXPORT/IMPORT Q & A

    Product: ORACLE SERVER
    Date written: 2002-04-08
    EXPORT/IMPORT Q & A
    ===================
    PURPOSE
    The following are the main questions and answers about EXPORT and IMPORT.
    Explanation
    [Question 1] How do EXPORT and IMPORT relate to the RDBMS (what is catexp.sql)?
    During export and import, information about already-created objects is queried from the data dictionary, and this object information is spread across several data dictionary tables. To make the required data dictionary information convenient to use, a number of views are scripted in catexp.sql. This script file lives in $ORACLE_HOME/rdbms/admin and is run at installation time.
    [Question 2] Is there a fixed order in which objects are exported?
    There is a defined order in which objects are exported, and it can change when new objects are introduced, for example by an Oracle version upgrade.
    The export order of objects is as follows:
    1. Tablespaces
    2. Profiles
    3. Users
    4. Roles
    5. System Privilege Grants
    6. Role Grants
    7. Default Roles
    8. Tablespace Quotas
    9. Resource Costs
    10. Rollback Segments
    11. Database Links
    12. Sequences (includes grants)
    13. Snapshots (includes grants, auditing)
    14. Snapshot logs
    15. Job Queues
    16. Refresh Groups
    17. Cluster Definitions
    18. Tables (includes grants, column grants, comments, indexes, constraints, auditing)
    19. Referential Integrity
    20. POSTTABLES actions
    21. Synonyms
    22. Views
    23. Stored Procedures
    24. Triggers
    25. Default and System Auditing
    [Question 3] What are BUFFER and RECORDLENGTH in export?
    - BUFFER
    During export, several rows of an object are fetched at once. Data fetched from disk passes through memory before being written to the file, and the amount of memory allocated for this is the value of the BUFFER parameter.
    - RECORDLENGTH
    The parameter that determines how much of the export data held in memory is written to the file in one operation.
    [Note] It is most efficient to make BUFFER and RECORDLENGTH multiples of the O/S block size.
    [Question 4] When exporting or importing a large number of rows, can you tell how many rows have been processed?
    Yes. Up to V7.1 there was no way to see the progress of a large export or import, so you could not tell whether it was still working or the system was hanging; from V7.2 you can track progress with the FEEDBACK option.
    [Question 5] How many rows are fetched at a time during export?
    The number of rows fetched at once is related to the buffer size. The space one row occupies during export is (sum of the column sizes) + 4 * (number of columns). The number of rows fetched at once is then buffer size / export size of one row. From this, the size of the export output file is roughly the export size of one row * the number of rows.
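    For example (hypothetical numbers): a table with 10 columns whose sizes add up to 100 bytes has a per-row export size of 100 + 4 * 10 = 140 bytes, so BUFFER=140000 fetches about 1,000 rows at a time, and exporting 1,000,000 rows yields a dump file of roughly 140 MB.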
    [Question 6] How compatible are export and import across versions?
    Export/import compatibility is tied directly to the Oracle version. There are four kinds of compatibility, explained here under the following assumption: machine A runs Oracle V7.0 and machine B runs Oracle V7.1; call Oracle V7.0 X and Oracle V7.1 Y.
    - Base Compatibility: exporting an X DB with X's exp and importing it into an X DB with X's imp. Naturally, this is supported.
    - Upward Compatibility: exporting an X DB with X's exp and importing it into a Y DB with Y's imp. Oracle supports this as well.
    - Downward Compatibility: exporting a Y DB with Y's exp and importing it into an X DB with X's imp. This may or may not be supported.
    - Cross Compatibility: exporting a Y DB with X's exp (over SQL*Net) and importing it into an X or Y DB (using the appropriate imp). This may or may not be supported.
    [Question 7] In which cases does Downward Compatibility fail?
    V7.2 has a hash cluster expressions option. If a cluster is created with this option and exported, a downward import into V7.0 or V7.1 fails because the options of the CREATE CLUSTER statement do not match.
    [Question 8] What causes the EXP-37 error (export views not compatible with database version)?
    This error occurs with Cross Compatibility: the views used by export (created by catexp.sql) do not match the Oracle version. To resolve it, install views that the exp being used can work with.
    [Question 9] Can only users with the DBA privilege run a full export?
    In Version 6, only users with the DBA privilege could run a full export; in V7, a user without DBA can run a full export once the EXP_FULL_DATABASE role has been granted.
    [Question 10] Why, on import, are tables sometimes created in a tablespace other than the user's DEFAULT tablespace?
    For example, suppose scott's DEFAULT TABLESPACE is users, yet an import creates the tables in the tools TABLESPACE. The reason is that the imported table originally lived in the tools TABLESPACE, and scott currently has a quota on tools or has been granted the UNLIMITED TABLESPACE privilege (included in the RESOURCE role).
    To have tables created in the DEFAULT TABLESPACE on import, set the quota on every TABLESPACE other than the DEFAULT TABLESPACE to 0, revoke the UNLIMITED TABLESPACE privilege, and then run the import; leave only the quota on the DEFAULT TABLESPACE unlimited.
    For example:
    $ sqlplus system/manager
    SQL> alter user scott
    quota 0 on system
    quota 0 on tools
    quota 0 on data
    quota unlimited on users;
    SQL> revoke unlimited tablespace from scott;
    After this, run the import. Of course, tablespaces on which the user was never given a quota are not affected, and if the UNLIMITED TABLESPACE privilege (or the RESOURCE role) was never granted, the revoke command is not needed either.
    [Question 11] Import aborts with a core dump/segmentation fault
    Oracle databases have a character set. In Korea, US7ASCII and KO16KSC5601 are the most commonly used. If the character set of the database from which the export was taken differs from that of the database being imported into, the import may abort with a core dump or unexplained errors.
    In that case, one method is to convert the export dump file to the importing side's character set with a convert program and then import; the other is to change the character set of one of the two databases so they match, and then export/import. Of these, using the convert program is simpler; the program can be compiled with cc on Unix and then used.
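    To compare the character sets on both sides before an export/import, a standard dictionary query can help (an addition, not part of the original note):
    $ sqlplus system/manager
    SQL> select value from nls_database_parameters
         where parameter = 'NLS_CHARACTERSET';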

    I'm talking about the wsusutil export/import process.
    Oh! That's NOT what you asked. What you asked is:
    my question is, do I need to export all files again?
    As for the WSUSUTIL functionality, that is an ALL or NOTHING operation. You have no choice in the matter.
    Lawrence Garvin, M.S., MCSA, MCITP:EA, MCDBA
    SolarWinds Head Geek
    Microsoft MVP - Software Packaging, Deployment & Servicing (2005-2014)
    My MVP Profile: http://mvp.microsoft.com/en-us/mvp/Lawrence%20R%20Garvin-32101
    http://www.solarwinds.com/gotmicrosoft
    The views expressed on this post are mine and do not necessarily reflect the views of SolarWinds.

  • Bad performance on system, export/import buffer many swaps

    Hello,
    I have an ECC 6.0 system on AIX with 6 application servers. There seems to be a performance problem on the system; it is noticed most clearly when people try to save a sales order, for example, an operation that takes about 10 minutes.
    Sometimes we get short dumps TSV_TNEW_PAGE_ALLOC_FAILED or MEMORY_NO_MORE_PAGING, but not very often.
    I am not very good at studying performance issues, but from what I can see there are many swaps in the export/import, program, and generic key buffers. Also, the hit ratio of the export/import buffer is 88%, which I think is pretty low.
    I know that the maximum accepted value is 10,000 swaps per day, is that right?
    Can you please advise what needs to be done for these swaps to decrease and the hit ratio to increase? And what else should I do to analyze and root-cause the bad performance of the system?
    Many thanks,
    manoliv

    Hi,
    sappfpar determines the minimum and maximum (worst-case) swap space requirements of an R/3 application server. It also checks the shared memory requirements and verifies that the em/initial_size_MB and abap/heap_area_total parameters are set correctly, via the following call:
    /usr/sap/<SYSTEMNAME>/SYS/exe/run/sappfpar check pf=/usr/sap/<SYSTEMNAME>/SYS/profile/<profile name>
    At the end of the list, the program reports the minimum swap space, maximum heap space, and worst-case swap space requirements:
    Additional Swap Space Requirements:
    You will probably need to increase the size of the swap space on hosts on which R/3 application servers run.
    As a rule of thumb, swap space should equal 3 x the size of main storage, or at least 1 GB, whichever is larger.
    SAP recommends a swap space of 2-3 GB for optimal performance.
    Determining Current Swap Space Availability: memlimits
    You can find out how much swap space is currently available in your host system with R/3’s memlimits program.
    Here’s how to run memlimits:
    From the UNIX command prompt, run the R/3 memlimits program to check on the size of the available swap space on the host system on which an R/3 application server is to run.
    The application server must be stopped, not running.
    /usr/sap/<SYSTEMNAME>/SYS/exe/run/memlimits | more
    The available swap space is reported in the output line Total available swap space: at the end of the program output. The program also indicates whether this amount of swap space will be adequate and determines the size of the data segments in the system.

  • Export/Import Corruption ?

    We have just moved an application from Oracle 9i to 10g via the Export/Import options.
    One of our pages now displays the Oracle error "failed to parse SQL query: ORA-00907: missing right parenthesis" in an SQL Query region after the import. It works fine in the Oracle 9i set up.
    I found the error is caused by an extra carriage return/paragraph mark being placed in the middle of a function name used in the query.
    If we cut/paste between the two environments by running two web browsers the problem is fixed, as the carriage return/paragraph mark is not inserted.
    The problem occurs in both Unix and DOS exports.
    My concern is that other errors may have been introduced as a result but not yet found.

    Geoff,
    Have you verified that:
    1) you are importing the file with the same character set setting as the one it was exported with?
    2) the DAD character set settings are properly configured?
    See this thread as well:
    9iAS DAD file upload
    Sergio

  • Fast EXPORT/IMPORT Question

    I have a 200+ GB Oracle 9i DB on UNIX; many tables have primary keys/indexes/constraints. Assuming nobody is accessing/using the DB, what is the best procedure to speed up export/import? Both of these will happen on the same machine, OS, and RDBMS version. No RMAN please. Thanks.

    include the following -
    direct=y
    recordlength=65535
    export file on a separate disk from the datafiles.
    import -
    recordlength=65535
    buffer=64M (or higher)
    dump file on a separate disk from the datafiles.
    Not many people use recordlength; it is definitely a quicker way to export/import.
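    Collected into parameter files, the suggestions above might look like this (a sketch; the paths are hypothetical, and BUFFER is given in bytes because exp/imp do not accept the "64M" shorthand):
    exp.par:
    userid="/ as sysdba"
    file=/dumpdisk/full.dmp
    log=/dumpdisk/exp.log
    full=y
    direct=y
    recordlength=65535
    imp.par:
    userid="/ as sysdba"
    file=/dumpdisk/full.dmp
    log=/dumpdisk/imp.log
    full=y
    recordlength=65535
    buffer=67108864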

  • Exporting/importing Responsibility-level personalizations

    Hello,
    Suppose on one instance I have done some personalizations for responsibility A, and I want to apply these personalizations on another instance where there is no responsibility A; instead its name is B.
    Does the export/import mechanism support this kind of requirement?
    - Yora

    Yora,
    > Does the export/import mechanism support this kind of requirement?
    No, there is no functionality like this.
    For much more information on this, refer to http://apps2fusion.com/apps/14-fwk/215-move-oa-framework-personalizations-from-one-environment-to-another
    // Note taken from the above link -- Anil Passi
    You have done responsibility-level personalization of OAF pages for this responsibility.
    Issue: when you extract the personalizations, the directory path of the extracted personalizations contains the responsibility_id.
    However, the same responsibility, when created on the TEST system, might be allocated a responsibility_id of 1088.
    Hence, referring to ***** in the notes above, you will have to rename the directory from 1032 to 1088. This can be scripted in Unix, as sketched below.
    To overcome this issue: if you wish to perform responsibility-level personalizations against a custom responsibility, first create this responsibility on production and get it cloned to the other environments. Alas, such forward planning rarely happens in projects.
    Blame this on Oracle's design. They could easily have used RESPONSIBILITY_KEY/APPLICATION_ID in the path instead of RESPONSIBILITY_ID.
    Regards,
    Gyan
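    The rename itself is easy to script; a hedged sketch (the extract directory and both IDs are hypothetical placeholders):
    # EXTRACT_DIR is wherever the personalizations were extracted to
    EXTRACT_DIR=/tmp/personalizations/customizations/responsibility
    # rename the source responsibility_id directory to the target's ID
    mv "$EXTRACT_DIR/1032" "$EXTRACT_DIR/1088"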

  • Reduce export import time

    Hi All,
    I want to reduce the total time spent during export-import.
    Restrictions:
    =========
    1) Cross platform
    2) exp and imp from a lower version to a higher version
    3) No 10g database
    Basically, I want to do exp and imp in parallel so the total time spent on this activity is reduced. I thought of doing schema-level exp-imp in parallel, but I am afraid of the dependencies.
    Is there any other way to achieve the same, or, if I go with the approach specified above, can anyone provide some valuable inputs on it?
    I am trying to automate this so that it becomes a one-time effort and afterwards the script runs on its own.
    Thanks and regards
    Neeraj

    Hi All,
    The data volume is not less than 60GB and not more than 150GB.
    If I use a pipe on Unix between exp and imp, what happens if my exp is slower than the import at some point (for any reason)? Is the import going to wait for content to arrive in the pipe through the export, or will the import fail? (See the sketch below.)
    I can consider creating the indexes using the flat file.
    Is there any way to get only the indexes in the flat file? I mean, if I use the indexfile option for import, it gives me the "CREATE TABLE..." statements too, which means the import utility reads the full dump file; I want only the "CREATE INDEX..." statements in the flat file.
    What about schema-level export and import?
    Any valuable inputs or proper steps from anyone out there?
    Any restrictions while importing the schemas?
    Thanks and Regards
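    On the pipe question: a reader of a named pipe blocks until data arrives, so imp simply waits whenever exp falls behind; it fails only if exp aborts and closes the pipe early. A minimal sketch (the parameter files are hypothetical placeholders):
    # create a named pipe and run exp and imp against it concurrently
    mkfifo /tmp/exp_pipe
    exp parfile=exp.par file=/tmp/exp_pipe &
    imp parfile=imp.par file=/tmp/exp_pipe
    rm /tmp/exp_pipe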

  • Page Export/Import problems

    I have been trying to migrate my Portal 3.0.7 development to another Portal 3.0.8 development database and have been having problems with the page export and import routines. I have been following the Oracle Export / Import document and first came across the problem when importing the content area, as it tries to import its content area page.
    I have then tried to export / import pages separately and consistently get the following error.
    pageimp.csh -s portal30 -p portal30 -m reuse -d pobpage.dmp -c Kenny.ogg -security
    Start Portal Page Import
    Please Wait...
    declare
    ERROR at line 1:
    ORA-06510: PL/SQL: unhandled user-defined exception
    ORA-06512: at "PORTAL30.WWUTL_POB_TRANSPORT", line 1879
    ORA-01403: no data found
    ORA-06512: at line 6
    Disconnected from Oracle8i Enterprise Edition Release 8.1.7.0.0 - Production
    With the Partitioning option
    JServer Release 8.1.7.0.0 - Production
    Import of Portal Page Complete
    The Export command used was as follows:
    pageexp.csh -s portal30 -p portal30 -n PAGE0 -d pobpage.dmp -c tools.ogg -security
    As I am migrating from 3.0.7 to 3.0.8, I use the 3.0.7 export script and the 3.0.8 import script. For completeness I have also tried using only the 3.0.7 and only the 3.0.8 import and export scripts, and get the same error.
    The version of Portal I am using is the Solaris version.
    When I look at the new site and click on the Pages tab, no new pages have been loaded as expected. Does anyone have a similar problem, or am I overlooking something?
    Any suggestions?
    Thanks
    Garry

    Portal 3.0.8 is now available on both NT and UNIX. So you need to upgrade both your platforms and then use export/import between the same versions. That's easier said than done, as I have yet to get the export/import scripts to work correctly and fully when exporting from NT to UNIX Tru64. Originally I ran the scripts on NT and connected to the UNIX DB; however, somewhere I saw that it is more reliable to run the scripts on UNIX, so I am trying that now.
    Good luck!
    (By the way, I notice that Larry declared war on complexity. The laughable irony of this is revealed by a look at these discussion boards!)

  • Memory Limitation on EXPORT & IMPORT Internal Tables?

    Hi All,
    I have a need to export and import internal tables to memory. I do not want to export them to any database tables. Is there a limitation on the amount of memory that can be used for EXPORT and IMPORT? I will free the memory once I import it. The maximum I expect would be 13,000,000 lines.
    Thanks,
    Alex (Arthur Samson)

    You don't have limitations, but try to keep your table as small as possible.
    Otherwise, if you are familiar with the ABAP OO context, try using Shared Objects instead of EXPORT/IMPORT.
    SAP Help on Shared Objects: http://help.sap.com/saphelp_erp2004/helpdata/en/13/dc853f11ed0617e10000000a114084/frameset.htm
    Hope this helps,
    Roby.

  • Using set/get parameters or export/import in BSP.

    Hi All,
    Is it possible to use set/get parameters or export/import in BSP?
    We need to set/export some variables from a BADI and get/import them in the BSP application.
    A code snippet would be of great help.
    Thanks,
    Anubhav

    Hi Anubhav,
    You can use the EXPORT / IMPORT statements for your requirement: from the BADI, use EXPORT to send the variable data to a unique memory location with an ID, e.g.
    * data declaration required for background processing
    DATA: WA_INDX TYPE INDX.
    * here CNAME is the variable you want to export
    EXPORT PNAME = CNAME TO DATABASE INDX(XY) FROM WA_INDX CLIENT SY-MANDT ID 'ZVAR1'.
    In the BSP application, use the IMPORT statement to fetch back the values set with the ID above:
    IMPORT PNAME = LV_CNAME FROM DATABASE INDX(XY) TO WA_INDX CLIENT SY-MANDT ID 'ZVAR1'.
    Afterwards, delete the stored data to avoid wasting memory:
    DELETE FROM DATABASE INDX(XY) CLIENT SY-MANDT ID 'ZVAR1'.
    Regards,
    Samson Rodrigues
