Steps to Perform before schema level import

Hi,
We are planning to migrate an Oracle 8i database to an Oracle 10g database.
The approach we have decided on is export/import.
Can anyone tell me what steps we have to perform before importing the dump file into the new database?
We are planning to go for schema level export/import.
Thanks in Advance
AT

1. Get a list of users to be exported
select distinct owner from dba_segments where owner NOT LIKE '%SYS%'
2. exp parfile=gen8i.prf
vi gen8i.prf
userid=system/sys
owner=(<list generated above>)
plus the rest of the exp parameters such as statistics, consistent, etc. (a full sketch follows the list)
3. imp the dump file
4. recompile the packages
5. done.
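
A minimal sketch of what steps 2-5 might look like in practice (the owner list, passwords, and file names below are placeholders, not values from the original post):

gen8i.prf, read by the 8i exp utility:
userid=system/<password>
owner=(SCOTT,HR)
file=schemas_8i.dmp
log=schemas_8i_exp.log
statistics=none
consistent=y

Then export with the 8i exp and import with the 10g imp:
exp parfile=gen8i.prf
imp system/<password> file=schemas_8i.dmp log=imp_10g.log full=y

Afterwards recompile invalid objects, for example by running @?/rdbms/admin/utlrp.sql as SYSDBA.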

Similar Messages

  • Impdp Performing a Schema-Mode Import Example

    The Oracle docs (http://docs.oracle.com/cd/B12037_01/server.101/b10825/dp_import.htm#i1006564) have an example of performing a schema-mode import as follows:
    impdp hr/hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=expschema.dmp
    EXCLUDE=CONSTRAINT, REF_CONSTRAINT, INDEX TABLE_EXISTS_ACTION=REPLACE
    Looking carefully at this example what does " INDEX TABLE_EXISTS_ACTION=REPLACE" mean, specifically with respect to "INDEX"?
    Does it mean it will drop the table and associated index if it already exists and then re-create and load it using the dump file contents?

    INDEX is an argument to EXCLUDE, and TABLE_EXISTS_ACTION is a separate option. In this example, indexes are excluded during the import; it has nothing to do with TABLE_EXISTS_ACTION=REPLACE (see the parfile sketch below).
    Thanks
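    One way to see the grouping (a hypothetical parfile version of the same documentation example; the schema, directory, and dump file names come from the snippet above):
    SCHEMAS=hr
    DIRECTORY=dpump_dir1
    DUMPFILE=expschema.dmp
    EXCLUDE=CONSTRAINT,REF_CONSTRAINT,INDEX
    TABLE_EXISTS_ACTION=REPLACE
    EXCLUDE takes the comma-separated list of object types (constraints, referential constraints, and indexes), while TABLE_EXISTS_ACTION=REPLACE is an independent parameter that tells impdp to drop and re-create any table that already exists before loading its data.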

  • Schema level Import issue

    Hi,
    Recently I faced one issue:
    A schema backup from one database was created for <SCHEMA1>, whose default tablespace is <TABS1>, and I am trying to import it into <SCHEMA1> of a different database, whose default tablespace is <TABS2>, but the import still looks for the <TABS1> tablespace. I used the fromuser/touser clause during import.
    So, how can I perform this task without creating a <TABS1> tablespace and assigning it as the default tablespace for <SCHEMA1>, or renaming the <TABS2> tablespace to <TABS1>, which is a tedious task in Oracle 9i?

    1 set up a default tablespace for the target user
    2 Make sure the target user doesn't have the RESOURCE role and/or UNLIMITED TABLESPACE privilege.
    3 Make sure the target user has QUOTA on the default tablespace ONLY
    ALTER USER <user> QUOTA UNLIMITED ON <target tablespace> QUOTA 0 ON <the rest>
    4 Import without importing indexes (indexes=n); those won't be relocated automatically.
    5 Run imp with indexfile=<any file> to generate a file containing the CREATE INDEX statements.
    6 Edit this file, adjusting the tablespaces.
    7 Run it (a full sketch follows below).
    Tablespaces can't be renamed in 9i.
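    A minimal sketch of the sequence above, keeping the thread's placeholders (<SCHEMA1>, <TABS1>, <TABS2>) and a placeholder dump file name:
    ALTER USER <SCHEMA1> DEFAULT TABLESPACE <TABS2>;
    REVOKE UNLIMITED TABLESPACE FROM <SCHEMA1>;  -- and REVOKE RESOURCE if it was granted
    ALTER USER <SCHEMA1> QUOTA UNLIMITED ON <TABS2> QUOTA 0 ON <each other tablespace>;
    imp system/<password> file=<schema1>.dmp fromuser=<SCHEMA1> touser=<SCHEMA1> indexes=n
    imp system/<password> file=<schema1>.dmp fromuser=<SCHEMA1> touser=<SCHEMA1> indexfile=crt_idx.sql
    Then edit crt_idx.sql, changing TABLESPACE <TABS1> to <TABS2>, and run it in SQL*Plus.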
    Sybrand Bakker
    Senior Oracle DBA

  • Error while doing schema level import using datapump

    Hi, I get the following errors while importing a schema from prod to a dev database... can anyone help? Thanks!
    impdp system DIRECTORY=DATA_PUMP_DIR DUMPFILE=abcdprod.DMP LOGFILE=abcdprod.log REMAP_SCHEMA=abcdprod:abcddev
    ORA-39002: invalid operation
    ORA-31694: master table "SYSTEM"."SYS_IMPORT_FULL_01" failed to load/unload
    ORA-31644: unable to position to block number 170452 in dump file "/ots/oracle/echo/wxyz/datapump/abcdprod.DMP

    877410 wrote:
    Hi, I get the following errors while importing a schema from prod to a dev database... can anyone help? Thanks!
    impdp system DIRECTORY=DATA_PUMP_DIR DUMPFILE=abcdprod.DMP LOGFILE=abcdprod.log REMAP_SCHEMA=abcdprod:abcddev
    ORA-39002: invalid operation
    ORA-31694: master table "SYSTEM"."SYS_IMPORT_FULL_01" failed to load/unload
    ORA-31644: unable to position to block number 170452 in dump file "/ots/oracle/echo/wxyz/datapump/abcdprod.DMP
    Post the complete command line for the expdp that made the abcdprod.DMP file.

  • Steps to Follow before import

    Hi,
    We are planning to migrate an Oracle 8i database to an Oracle 10g database.
    The approach we have decided on is export/import.
    Can anyone tell me what steps we have to perform before importing the dump file into the new database?
    We are planning to go for schema level export/import.
    Thanks in Advance
    AT

    Hi,
    Do you want to perform the import schema by schema?
    You can perform a full export or schema level export. Remember that to move data UP a version, you export using the EXP of the database that contains the data and you import using the IMP that ships with the TARGET database.
    Then, to move data from 8i to 10g:
    o exp over sqlnet to the 8i database.
    o imp natively into the 10g database.
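    For example (a rough sketch; ORA8I is a placeholder TNS alias for the 8i database, and per the rule above the exp used should be the 8i one):
    exp system/<password>@ORA8I owner=(SCOTT,HR) file=schemas_8i.dmp log=exp_8i.log consistent=y
    imp system/<password> file=schemas_8i.dmp log=imp_10g.log full=y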
    I suggest you create all user data tablespaces in the target database before running the import.
    For more information, you can access this link below:
    http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14238/expimp.htm#CHDDCIHD
    Cheers

  • What are the steps to perform in SAP PI after importing E-filing XI content

    What are the steps to perform in SAP PI after importing the E-filing XI content?

    Hi,
    The government agency processing the e-filing might have a system that is capable of receiving HTTP requests.
    That is a possible explanation for your "why" question.
    Regarding what has to be done in PI, you just need to configure it in the Integration Directory (ID):
    1) Create a business service (communication component) for the receiver.
    2) Create an HTTP receiver adapter.
    3) Import the business system for your SAP system.
    4) Create a receiver determination.
    5) Create an interface determination, and give your mapping name.
    6) Create a receiver agreement using the HTTP communication channel you have created.
    best Regards,
    Ravi

  • Steps required to be performed before registering on SAP ISA B2C website.

    Hello,
    What are the steps required to be performed before registering on SAP ISA B2C
    website? Whenever I am trying to create a new account from the WEB, I am getting a null pointer exception.
    I created a B2C reference user. But what is next?
    Please help.
    Harsha


  • Sequence nextval after a schema level export/import

    If I export a schema that has some sequences, then truncate the tables, and then import the schema, do I get the old sequence values?
    I guess my question is do sequences get stored at the schema level or the database level.
    I noticed that sequences are exported at the schema level when I do an export so that may be the answer but your confirmation would be appreciated.

    Hi,
    Nothing to worry about: exp/imp does not change the value of NEXTVAL. You can truncate the tables after the export and then import.
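    If you want to confirm this, you can check the sequences and their current values in the owning schema before the export and after the import (HR below is a placeholder owner):
    SELECT sequence_owner, sequence_name, last_number
      FROM dba_sequences
     WHERE sequence_owner = 'HR';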
    Regards
    Vinay Agarwal
    OCP

  • How to add new tables in Streams for Schema level replication ( 10.2.0.3 )

    Hi,
    I am in the process of setting up Oracle Streams schema-level replication on version 10.2.0.3. I was able to set up replication for one table properly. Now I want to add 10 more new tables to the schema-level replication. A few questions regarding this:
    1. If I create the new tables in the source, do I have to create the tables in the target database manually, or should I do an export with STREAMS_INSTANTIATION=Y?
    2. Can you tell me the Metalink note ID to read more on this topic?
    thanks & regards
    parag

    The same capture and apply process can be used to replicate other tables. The following steps should meet your need:
    Say table NEW is the new table to be added with owner SANTU
    downstr_cap is the capture process which is already running
    downstr_apply is the apply process which is already there
    1. Now stop the apply process
    2. Stop the capture process
    3. Add the new table to the capture process using a positive rule
    BEGIN
      DBMS_STREAMS_ADM.ADD_TABLE_RULES(
        table_name      => 'SANTU.NEW',
        streams_type    => 'capture',
        streams_name    => 'downstr_cap',
        queue_name      => 'strmadmin.DOWNSTREAM_Q',
        include_dml     => true,
        include_ddl     => true,
        source_database => '<name of the source database>',
        inclusion_rule  => true);
    END;
    /
    4. Take an export of the new table with the OBJECT_CONSISTENT=Y option
    5. Import the table at the destination with the STREAMS_INSTANTIATION=Y option (a command sketch follows this list)
    6. Start the apply process
    7. Start the capture process
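    A hypothetical command pair for steps 4 and 5 (the dump file name and credentials are placeholders; OBJECT_CONSISTENT and STREAMS_INSTANTIATION are the options mentioned above, and the imp runs against the destination database):
    exp system/<password> tables=SANTU.NEW file=new_tab.dmp object_consistent=y
    imp system/<password> file=new_tab.dmp fromuser=SANTU touser=SANTU streams_instantiation=y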

  • Steps to be followed in Export/Import

    Can anyone help me out defining the exact steps to be followed in exporting and importing the ODI objects?
    I have exported the master and work repository. I created a new master repository by importing the zip file. I can see the architecture neatly imported into the topology. The problem starts when I try to import the work repository: it keeps throwing an integrity constraint error. I export/import folder-wise, as the current work repository is pretty huge!
    It will be of great help if some one could explain the steps to be followed while export/import in ODI

    Hi there,
    It is typical of ODI to throw those errors, as the documentation is not quite comprehensive about how to do it.
    Here are a few guidelines:
    (i) The topology (master repository) should be exported and imported first, then the models, and finally the projects and the variables. Also, a work repository export may not work if you try to export the whole work repo in one go using File---->Export---->Work Repository!
    So the best thing is File--->Export---->Multiple Export----> then drag all your objects into the box and check the "zip" checkbox.
    Create separate zips for Models, separate ones for Projects, and so on.
    And import the Model zip before the Project zip in your new repository.
    OR
    (ii) Use database schema export and import. This has worked the best for us.
    And I think it is the most flawless way of doing the export/import, because you don't lose any objects and don't get any integrity constraint errors, etc.
    Just export the master repository and work repository schemas and then import them into your new database.
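    A rough sketch of option (ii) with classic exp/imp (ODIM and ODIW are placeholder names for the master and work repository schemas; adjust the connect strings and file names to your environment):
    exp system/<password>@SOURCE_DB owner=(ODIM,ODIW) file=odi_repos.dmp log=odi_exp.log
    imp system/<password>@TARGET_DB file=odi_repos.dmp fromuser=(ODIM,ODIW) touser=(ODIM,ODIW) log=odi_imp.log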

  • Schema level and table level supplemental logging

    Hello,
    I'm setting up bi-directional DML replication between two Oracle databases. I have enabled supplemental logging at the database level by running this command:
    SQL>alter database add supplemental log data (primary key) columns;
    Database altered.
    SQL> select SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI from v$database;
    SUPPLEME SUP SUP
    IMPLICIT YES NO
    My question is: should I also enable supplemental logging at the table level (for DML replication only)? Should I also run the command below?
    GGSCI (db1) 1> DBLOGIN USERID ggs_admin, PASSWORD ggs_admin
    Successfully logged into database.
    GGSCI (db1) 2> ADD TRANDATA schema.<table-name>
    What is the difference between schema-level and table-level supplemental logging?

    For Oracle, ADD TRANDATA by default enables table-level supplemental logging. The supplemental log group includes one of the following sets of columns, in the listed order of priority, depending on what is defined on the table:
    1. Primary key
    2. First unique key alphanumerically with no virtual columns, no UDTs, no function-based columns, and no nullable columns
    3. First unique key alphanumerically with no virtual columns, no UDTs, or no function-based columns, but can include nullable columns
    4. If none of the preceding key types exist (even though there might be other types of keys
    defined on the table) Oracle GoldenGate constructs a pseudo key of all columns that
    the database allows to be used in a unique key, excluding virtual columns, UDTs,
    function-based columns, and any columns that are explicitly excluded from the Oracle
    GoldenGate configuration.
    The command issues an ALTER TABLE command with an ADD SUPPLEMENTAL LOG DATA clause that
    is appropriate for the type of unique constraint (or lack of one) that is defined for the table.
    When to use ADD TRANDATA for an Oracle source database
    Use ADD TRANDATA only if you are not using the Oracle GoldenGate DDL replication feature.
    If you are using the Oracle GoldenGate DDL replication feature, use the ADD SCHEMATRANDATA command to log the required supplemental data. It is possible to use ADD
    TRANDATA when DDL support is enabled, but only if you can guarantee one of the following:
    ● You can stop DML activity on any and all tables before users or applications perform DDL on them.
    ● You cannot stop DML activity before the DDL occurs, but you can guarantee that:
    ❍ There is no possibility that users or applications will issue DDL that adds new tables whose names satisfy an explicit or wildcarded specification in a TABLE or MAP
    statement.
    ❍ There is no possibility that users or applications will issue DDL that changes the key definitions of any tables that are already in the Oracle GoldenGate configuration.
    ADD SCHEMATRANDATA ensures replication continuity should DML ever occur on an object for which DDL has just been performed.
    You can use ADD TRANDATA even when using ADD SCHEMATRANDATA if you need to use the COLS option to log any non-key columns, such as those needed for FILTER statements and KEYCOLS clauses in the TABLE and MAP parameters.
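    For reference, a hypothetical GGSCI session for the schema-level variant (the schema name and the credentials are placeholders):
    GGSCI> DBLOGIN USERID ggs_admin, PASSWORD ggs_admin
    GGSCI> ADD SCHEMATRANDATA scott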
    Additional requirements when using ADD TRANDATA
    Besides table-level logging, minimal supplemental logging must be enabled at the database level in order for Oracle GoldenGate to process updates to primary keys and
    chained rows. This must be done through the database interface, not through Oracle GoldenGate. You can enable minimal supplemental logging by issuing the following DDL
    statement:
    SQL> alter database add supplemental log data;
    To verify that supplemental logging is enabled at the database level, issue the following statement:
    SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
    The output of the query must be YES or IMPLICIT. LOG_DATA_MIN must be explicitly set, because it is not enabled automatically when other LOG_DATA options are set.
    If you require more details, refer to the Oracle® GoldenGate Windows and UNIX Reference Guide 11g Release 2 (11.2.1.0.0).

  • Activities to be performed before implementation

    Hi guys
    Could anyone please tell me the prior activities to be performed before the implementation? As per my understanding, we need to upload master data.
    At present the activities run through an AS400 application. I need to understand the activities to be performed, what the relevant master data to be uploaded is, and the cutover activities to be performed.
    It will be of great help.
    Thanks

    Hi Satya,
    Here are some important information links, with detailed information about FI project preparation:
    http://www.geocities.com/rmtiwari/main.html?http://www.geocities.com/rmtiwari/Resources/Management/ASAP_Links.html
    http://iris.tennessee.edu/sap_project.htm
    Yes, you have to upload the master data through LSMW.
    Regarding the Cut over activities...
    Cut over activity is an important and crucial activity during implementation / upgrade of SAP or any other ERP package. This needs to be meticulously planned such that all the steps / activities to be performed are listed and executed in the required sequence. Some of the main activities are as under:
    1. Transport all the Configuration settings related to Entreprise Structure and Master Data related transports.
    2. Create Master Data - GL Accounts / Cost Elements / Cost Centers / Profit Centers/ Customer Master / Vendor Master / Asset Master / Material Master etc.
    3. Upload all open Sales Orders / Purchase Orders.
    4. Upload Closing GL Balances as on the date of cutover. In case of Open item managed GL Accounts - line items of the GL Accounts need to be uploaded.
    5. Upload all open items of Customers and Vendors.
    6. Material Quantity / value to be uploaded and this should tie with GL Balance.
    7. For conversion activity create four to six Conversion GL Accounts - one each for Assets, Liabilities, Expense Accounts, Revenue, Materials etc. The sum of all conversion accounts should be zero. Once the conversion activity is completed Block those GL Accounts.
    8. Get a sign off from Business for all the conversion activity. The financial ststements should tie with the legacy system.
    Transactions should be posted only after obtaining the sign off.
    Assign the points if helpful.
    Ranjit

  • Performance before and after

    I would like to run the following commands to increase the efficiency of the database. However, I would like to know how to determine whether these commands really make a difference to database performance.
    analyse schema
    dbms...gather index statistics
    table
    Is there a way that I could test whether the performance after is better than before, or whether it made no difference at all?
    Thanks

    If you just had massive updates on your tables or you never collected statistics, collecting them will make some difference to query performance. However, if you have current statistics for your tables and indexes and there aren't many changes to the data, recollecting them won't make any difference to performance.
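    One simple way to measure before and after (a sketch; HR and the sample query are placeholders, and this assumes SQL*Plus): time a representative query with autotrace, gather the statistics, then run the query again and compare the elapsed time and consistent gets.
    SET AUTOTRACE TRACEONLY STATISTICS
    SET TIMING ON
    SELECT COUNT(*) FROM hr.employees WHERE department_id = 50;
    EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'HR', cascade => TRUE);
    SELECT COUNT(*) FROM hr.employees WHERE department_id = 50;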

  • Oracle 12c migration (performance before/after checks) - suggestion needed

    Hello Experts;
    We are going to upgrade our database to 12c from 10.2.0.4 via manual migration (direct migration is from 10.2.0.5), as per the documentation:
    http://www.oracle.com/technetwork/database/upgrade/upgrading-oracle-database-wp-12c-1896123.pdf
    I cannot find any guidance or tutorial for testing: how should we check our processes (performance, resource usage) before and after?
    Could you please suggest a way for testing before/after for our production processes?
    Thanks in advance for your reply.
    Regards,
    Bolo

    What to test from a performance perspective is a very generic question; the answer lies in what is important in your application and what the performance expectations are around that important functionality.
    Applications can be of different natures, e.g. OLTP or DSS, so your requirements can differ accordingly. In short, the business will be looking at a no-impact situation from this upgrade. There are many ways this can be ensured; more accurate ways mean more money and effort. For example, you can use the Real Application Testing option to test a production-like workload on 12c and see the impact. This requires effort around setting up and using the option, plus the license fee. You can also use other available tools such as LoadRunner.
    As another option, you can take a performance baseline on the existing 10.2.0.4 database and then compare it with a performance baseline on the 12c database. Doing it directly in prod makes the most sense, since you are then using real data volume and workload, but it is the most risky in situations where performance degrades for something very important.
    Hence it is recommended to run the performance baselining on a non-production environment that is production-like. By production-like, I mean having the same data as production (a database refresh is very much recommended to get this), a similar workload to production (this criterion becomes more important in OLTP systems), and similar hardware, OS, and database configuration as production. If you can't do so, then your approach is not risk-free.
    In any case, you will have option of quick database tuning using ADDM, SQL tuning advisor and AWR etc.
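    For the baseline comparison itself, one lightweight approach (a sketch; it assumes the Diagnostics Pack license for AWR) is to bracket the same representative workload with AWR snapshots on 10.2.0.4 and again on 12c, then compare the two reports:
    EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
    -- run the representative workload here
    EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
    @?/rdbms/admin/awrrpt.sql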
    Hope it helps.
    Thanks,
    Abhi

  • AWR REPORT AT SCHEMA LEVEL

    Hello,
    Can anybody guide me on how to generate an AWR (Automatic Workload Repository) report at the schema level? I have created one user named xyz and imported some 1000 objects (tables, views, etc.).
    I have a small doubt here: when we create a user, is a schema for that user created automatically? If so, then I need to generate an AWR report for that user or schema.

    I don't think this is possible: AWR only works at database/instance level and not at schema level.
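    You can still generate the normal instance-level report (a sketch; this assumes SQL*Plus and the Diagnostics Pack license) and read it with that schema's objects in mind:
    @?/rdbms/admin/awrrpt.sql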
