GoldenGate Initial Load Replicat

Initial Load REPLICAT Parameter File
REPLICAT 1_IRMRI
SPECIALRUN
ASSUMETARGETDEFS
HANDLECOLLISIONS
DBOPTIONS USEODBC
SOURCEDB XXX, USERID XXX, PASSWORD XXX
EXTFILE ./dirdat/INITMRI000000
EXTFILE ./dirdat/INITMRI000001
EXTFILE ./dirdat/INITMRI000002
EXTFILE ./dirdat/INITMRI000003
EXTFILE ./dirdat/INITMRI000004
EXTFILE ./dirdat/INITMRI000005
EXTFILE ./dirdat/INITMRI000006
EXTFILE ./dirdat/INITMRI000007
EXTFILE ./dirdat/INITMRI000008
EXTFILE ./dirdat/INITMRI000009
EXTFILE ./dirdat/INITMRI000010
EXTFILE ./dirdat/INITMRI000011
EXTFILE ./dirdat/INITMRI000012
EXTFILE ./dirdat/INITMRI000013
EXTFILE ./dirdat/INITMRI000014
EXTFILE ./dirdat/INITMRI000015
EXTFILE ./dirdat/INITMRI000016
EXTFILE ./dirdat/INITMRI000017
EXTFILE ./dirdat/INITMRI000018
EXTFILE ./dirdat/INITMRI000019
EXTFILE ./dirdat/INITMRI000020
EXTFILE ./dirdat/INITMRI000021
EXTFILE ./dirdat/INITMRI000022
EXTFILE ./dirdat/INITMRI000023
EXTFILE ./dirdat/INITMRI000024
EXTFILE ./dirdat/INITMRI000025
EXTFILE ./dirdat/INITMRI000026
EXTFILE ./dirdat/INITMRI000027
EXTFILE ./dirdat/INITMRI000028
EXTFILE ./dirdat/INITMRI000029
EXTFILE ./dirdat/INITMRI000030
EXTFILE ./dirdat/INITMRI000031
EXTFILE ./dirdat/INITMRI000032
EXTFILE ./dirdat/INITMRI000033
EXTFILE ./dirdat/INITMRI000034
DISCARDFILE ./dirrpt/1_IRMRI.dsc, PURGE
MAP dbo.*, TARGET dbo.*;
END RUNTIME
The above parameter file does not pick up all the EXTFILE entries; only the last one, EXTFILE ./dirdat/INITMRI000034, is used by the REPLICAT process. How can I make the REPLICAT read all of the EXTFILE files?
The syntax to add a REPLICAT with multiple EXTFILE entries:
GGSCI> ADD REPLICAT 1_IRMRI, EXTFILE ./dirdat/INITMRI*
Can I use a wildcard character in the above syntax?
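For what it's worth, wildcards are not accepted in the EXTFILE clause. Since the files above already carry six-digit trail-style sequence numbers, a common workaround is to register the file set as a trail with EXTTRAIL so Replicat rolls through the whole sequence. This is only a sketch: it assumes your GoldenGate release accepts a trail prefix longer than the classic two characters, and the SPECIALRUN and EXTFILE lines must be removed from the parameter file.

```
-- Register the numbered file set as a trail instead of individual EXTFILEs.
-- Replicat then reads INITMRI000000 .. INITMRI000034 in sequence.
GGSCI> ADD REPLICAT 1_IRMRI, EXTTRAIL ./dirdat/INITMRI, NODBCHECKPOINT
GGSCI> START REPLICAT 1_IRMRI
```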

Have you tried the Metalink Doc regarding that error?
What Causes The "Bad Column Index(xxxx)" Error In Replicat? [ID 972954.1]
Applies to:
Oracle GoldenGate - Version: 4.0.0 - Release: 4.0.0
Information in this document applies to any platform.
Solution
The "Bad Column Index(xxxx)" error in Replicat is raised when a source column index "xxxx" is greater than the number of columns in the source-table definition file supplied to the Replicat process, or, if the ASSUMETARGETDEFS parameter is used and the source and target tables do not have the same structure, when the source table has more columns than the target table.
Example
GGS ERROR 160 Bad column index(129) specified for table {table name}, max columns = 127
Explanation
The source-table trail record contains an index and data for column number 129, but only 127 columns are defined in the source-table definition file (or, when the ASSUMETARGETDEFS parameter is used in the Replicat parameter file, the target table contains only 127 columns).
This is generally caused by changes to the source or target table (i.e. columns have been added or deleted and a new source definition file has not been created to reflect the source-table structure of the trail records that Replicat is trying to process).
To resolve this error, run DEFGEN on the source system for the table causing the Replicat abend and copy the resulting definition file to the target system. Add this SOURCEDEFS file to the Replicat parameter file and restart the Replicat process.
Note: This applies to all Open Systems platforms except z/OS (IBM mainframe).
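The resolution above can be sketched as follows; the user name, paths, and table name are placeholders, not taken from the note:

```
-- defgen parameter file on the SOURCE system (e.g. dirprm/defgen.prm)
DEFSFILE ./dirdef/source.def, PURGE
USERID gguser, PASSWORD ********
TABLE dbo.MYTABLE;

-- run it, then copy ./dirdef/source.def to the TARGET system:
--   shell> defgen paramfile dirprm/defgen.prm

-- in the Replicat parameter file on the TARGET, replace ASSUMETARGETDEFS with:
SOURCEDEFS ./dirdef/source.def
```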

Similar Messages

  • GoldenGate - Initial Load

    Experts,
    Do we need to create tables on the target side for the initial load method?
    In my case, I have 1660 tables under a single schema in the production database and I want to replicate this entire schema using GG.
    What will happen if I go with DDL & DML sync?
    Need your help..!!!
    -Riaz

    Hi Ravi,
    For first-time replication, if I go with the DDL sync method directly (leaving out the initial load), will the full structure/data of the tables not be replicated?
    What is the need for the initial load even though I am able to replicate the newly created tables?
    Help me....
    --Riaz

  • Initial load failed to transfer the clob data

    Hi Experts
    I am trying to move my huge database from 10g on Windows to 11g on Linux through a GoldenGate initial load. It has CLOB, LONG, and BLOB datatypes. When I try to move it with the parameters below, I get an error.
    Error:
    The trail file cannot be used with the SPECIALRUN parameter, and when I create a normal Replicat process to replicate the data, it displays an error for the log_csn, log_xid and log_cmplt_csnl columns under ggs_checkpointable (unable to populate these columns).
    --Loading data from file to Replicat (Transfer Method)
    Source Database Server:
    1. EDIT PARAMS load1
    2. Add below parameter into parameter files with name load1
    SOURCEISTABLE
    USERID gguser@orcl, PASSWORD test
    RMTHOST 10.8.18.189, MGRPORT 7810
    RMTFILE /ora01/initialload/pt, MAXFILES 10000, MEGABYTES 10
    TABLE test.*;
    3. EDIT PARAMS load2
    4. Add below parameter into parameter files with name load2
    SPECIALRUN
    USERID gguser@orcl1, PASSWORD test
    EXTTRAIL /ora01/initialload/pt
    ASSUMETARGETDEFS
    MAP test.*, TARGET test.*;
    END RUNTIME
    5. Start the Extract process on the source database server:
    cmd> ogg_directory> extract paramfile dirprm\load1.prm reportfile c:\load1.rpt
    6. Start the Replicat process on the target database server:
    $ ogg_directory> replicat paramfile dirprm/load2.prm reportfile /ora01/load1.rpt

    A checkpoint table is not needed for an initial-load Replicat. You could do the following:
    load2.prm
    REPLICAT LOAD2
    USERID gguser@orcl1, PASSWORD test
    -- EXTTRAIL /ora01/initialload/pt
    ASSUMETARGETDEFS
    MAP test.*, TARGET test.*;
    -- END RUNTIME
    ggsci> add rep load2, exttrail /ora01/initialload/pt, nodbcheckpoint
    ggsci> start rep load2
    Thanks,
    Rajesh

  • Initial load....start replicat in parallel

    Oracle 11gR2 GG 11.1.1.1.5 Linux
    All,
    I am doing the initial load (file-to-Replicat method) and wondering if it is possible to kick off the Replicat process in parallel while the initial Extract process is running. I ask because the Extract is taking really long to finish, as the tables are large, and it seems Replicat does not start inserting any rows into the target table until the Extract has completely finished. I followed note [1195705.1] to get past the 2GB file size limit issue.

    Hi Karthik,
      Start transaction GN_START to generate the missing objects. The status of the generation (generation errors) can be seen in transaction GENSTATUS.
    Regards.
    Manuel.

  • Initial load failing between identical tables. DEFGEN skewed and fixable?

    Initial load failing between identical tables. DEFGEN skewed and fixable?
    Error seen:
    2013-01-28 15:23:46 WARNING OGG-00869 [SQL error 0 (0x0)][HP][ODBC/MX Driver] DATETIME FIELD OVERFLOW. Incorrect Format or Data. Row: 1 Column: 11.
    Then compared the discard record against a select * on the key column.
    Mapping problem with insert record (target format)...
    **** Comparing Discard contents to Select * display
    ABCHID = 3431100001357760616974974003012 = 3431100001357760616974974003012
    !!! ABCHSTEPCD = 909129785 <> 9 ???
    ABCHCREATEDDATE = 2013-01-09 13:43:36 = 2013-01-09 13:43:36
    ABCHMODIFIEDDATE = 2013-01-09 13:43:36 = 2013-01-09 13:43:36
    ABCHNRTPUSHED = 0 = 0
    ABCHPRISMRESULTISEVALUATED = 0 = 0
    SABCHPSEUDOTERM = 005340 = 005340
    ABCHTERMID = TERM05 = TERM05
    ABCHTXNSEQNUM = 300911112224 = 300911112224
    ABCHTIMERQSTRECVFROMACQR = 1357799914310 = 1357799914310
    !!! ABCTHDATE = 1357-61-24 00:43:34 <> 2013-01-09 13:43:34
    ABCHABCDATETIME = 2013-01-09 13:43:34.310000 = 2013-01-09 13:43:34.310000
    ABCHACCOUNTABCBER =123ABC = 123ABC
    ABCHMESSAGETYPECODE = 1210 = 1210
    ABCHPROCCDETRANTYPE = 00 = 00
    ABCHPROCCDEFROMACCT = 00 = 00
    ABCHPROCCDETOACCT = 00 = 00
    ABCHRESPONSECODE = 00 = 00
    …. <snipped>
    Defgen comes out same when run against either table.
    Also have copied over and tried both outputs from DEFGEN.
    * Defgen version 2.0, Encoding ISO-8859-1
    * Definitions created/modified 2013-01-28 15:00
    * Field descriptions for each column entry:
    * 1 Name
    * 2 Data Type
    * 3 External Length
    * 4 Fetch Offset
    * 5 Scale
    * 6 Level
    * 7 Null
    * 8 Bump if Odd
    * 9 Internal Length
    * 10 Binary Length
    * 11 Table Length
    * 12 Most Significant DT
    * 13 Least Significant DT
    * 14 High Precision
    * 15 Low Precision
    * 16 Elementary Item
    * 17 Occurs
    * 18 Key Column
    * 19 Sub Data Type
    Database type: SQLMX
    Character set ID: ISO-8859-1
    National character set ID: UTF-16
    Locale: en_EN_US
    Case sensitivity: 14 14 14 14 14 14 14 14 14 14 14 14 11 14 14 14
    Definition for table RT.ABC
    Record length: 1311
    Syskey: 0
    Columns: 106
    ABCHID 64 34 0 0 0 0 0 34 34 34 0 0 32 32 1 0 1 3
    ABCHSTEPCD 132 4 39 0 0 0 0 4 4 4 0 0 0 0 1 0 0 0
    ABCHCREATEDDATE 192 19 46 0 0 0 0 19 19 19 0 5 0 0 1 0 0 0
    ABCHMODIFIEDDATE 192 19 68 0 0 0 0 19 19 19 0 5 0 0 1 0 0 0
    ABCHNRTPUSHED 130 2 90 0 0 0 0 2 2 2 0 0 0 0 1 0 0 0
    ABCHPRISMRESULTISEVALUATED 130 2 95 0 0 0 0 2 2 2 0 0 0 0 1 0 0 0
    ABCHPSEUDOTERM 0 8 100 0 0 0 0 8 8 8 0 0 0 0 1 0 0 0
    ABCTERMID 0 16 111 0 0 0 0 16 16 16 0 0 0 0 1 0 0 0
    ABCHTXNSEQNUM 0 12 130 0 0 0 0 12 12 12 0 0 0 0 1 0 0 0
    ABCHTIMERQSTRECVFROMACQR 64 24 145 0 0 0 0 24 24 24 0 0 22 22 1 0 0 3
    ABCTHDATE 192 19 174 0 0 0 0 19 19 19 0 5 0 0 1 0 0 0
    ABCHABCDATETIME 192 26 196 0 0 1 0 26 26 26 0 6 0 0 1 0 0 0
    ABCHACCOUNTABCER 0 19 225 0 0 1 0 19 19 19 0 0 0 0 1 0 0 0
    ABCHMESSAGETYPECODE 0 4 247 0 0 1 0 4 4 4 0 0 0 0 1 0 0 0
    ABCHPROCCDETRANTYPE 0 2 254 0 0 1 0 2 2 2 0 0 0 0 1 0 0 0
    ABCHPROCCDEFROMACCT 0 2 259 0 0 1 0 2 2 2 0 0 0 0 1 0 0 0
    ABCHPROCCDETOACCT 0 2 264 0 0 1 0 2 2 2 0 0 0 0 1 0 0 0
    ABCHRESPONSECODE 0 5 269 0 0 1 0 5 5 5 0 0 0 0 1 0 0 0
    … <snipped>
    The physical table shows a PACKED REC 1078
    And the table INVOKE output is:
    -- Definition of table ABC3.RT.ABC
    -- Definition current Mon Jan 28 18:20:02 2013
    ABCHID NUMERIC(32, 0) NO DEFAULT HEADING '' NOT
    NULL NOT DROPPABLE
    , ABCHSTEPCD INT NO DEFAULT HEADING '' NOT NULL NOT
    DROPPABLE
    , ABCHCREATEDDATE TIMESTAMP(0) NO DEFAULT HEADING '' NOT
    NULL NOT DROPPABLE
    , ABCHMODIFIEDDATE TIMESTAMP(0) NO DEFAULT HEADING '' NOT
    NULL NOT DROPPABLE
    , ABCHNRTPUSHED SMALLINT DEFAULT 0 HEADING '' NOT NULL NOT
    DROPPABLE
    , ABCHPRISMRESULTISEVALUATED SMALLINT DEFAULT 0 HEADING '' NOT NULL NOT
    DROPPABLE
    , ABCHPSEUDOTERM CHAR(8) CHARACTER SET ISO88591 COLLATE
    DEFAULT NO DEFAULT HEADING '' NOT NULL NOT DROPPABLE
    , ABCHTERMID CHAR(16) CHARACTER SET ISO88591 COLLATE
    DEFAULT NO DEFAULT HEADING '' NOT NULL NOT DROPPABLE
    , ABCHTXNSEQNUM CHAR(12) CHARACTER SET ISO88591 COLLATE
    DEFAULT NO DEFAULT HEADING '' NOT NULL NOT DROPPABLE
    , ABCHTIMERQSTRECVFROMACQR NUMERIC(22, 0) NO DEFAULT HEADING '' NOT
    NULL NOT DROPPABLE
    , ABCTHDATE TIMESTAMP(0) NO DEFAULT HEADING '' NOT
    NULL NOT DROPPABLE
    , ABCHABCDATETIME TIMESTAMP(6) DEFAULT NULL HEADING ''
    , ABCHACCOUNTNABCBER CHAR(19) CHARACTER SET ISO88591 COLLATE
    DEFAULT DEFAULT NULL HEADING ''
    , ABCHMESSAGETYPECODE CHAR(4) CHARACTER SET ISO88591 COLLATE
    DEFAULT DEFAULT NULL HEADING ''
    , ABCHPROCCDETRANTYPE CHAR(2) CHARACTER SET ISO88591 COLLATE
    DEFAULT DEFAULT NULL HEADING ''
    , ABCHPROCCDEFROMACCT CHAR(2) CHARACTER SET ISO88591 COLLATE
    DEFAULT DEFAULT NULL HEADING ''
    , ABCHPROCCDETOACCT CHAR(2) CHARACTER SET ISO88591 COLLATE
    DEFAULT DEFAULT NULL HEADING ''
    , ABCHRESPONSECODE CHAR(5) CHARACTER SET ISO88591 COLLATE
    DEFAULT DEFAULT NULL HEADING ''
    …. Snipped
    I suspect that the fields with subtype 3 just before the garbled columns are a clue, but I'm not sure what to replace them with or how to adjust.
    Any and all help mightily appreciated.

    Worthwhile suggestion, but I'm having difficulty applying it.
    I will tinker with it more, but am still open to more suggestions.
    =-=-=-=-
    Oracle GoldenGate Delivery for SQL/MX
    Version 11.2.1.0.1 14305084
    NonStop H06 on Jul 11 2012 14:11:30
    Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved.
    Starting at 2013-01-31 15:19:35
    Operating System Version:
    NONSTOP_KERNEL
    Version 12, Release J06
    Node: abc3
    Machine: NSE-AB
    Process id: 67895711
    Description:
    ** Running with the following parameters **
    2013-01-31 15:19:40 INFO OGG-03035 Operating system character set identified as ISO-8859-1. Locale: en_US_POSIX, LC_ALL:.
    Comment
    Comment
    REPLICAT lodrepx
    ASSUMETARGETDEFS
    Source Context :
    SourceModule : [er.init]
    SourceID : [home/ecloud/sqlmx_mlr14305084/src/app/er/init.cpp]
    SourceFunction : [get_infile_params]
    SourceLine : [2418]
    2013-01-31 15:19:40 ERROR OGG-00184 ASSUMETARGETDEFS is not supported for SQL/MX ODBC replicat.
    2013-01-31 15:19:45 ERROR OGG-01668 PROCESS ABENDING.
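    Given OGG-00184, one way forward (a sketch only; the definitions-file path is illustrative) is to drop ASSUMETARGETDEFS and feed the SQL/MX Replicat a DEFGEN definitions file instead:

    ```
    REPLICAT lodrepx
    -- replace ASSUMETARGETDEFS, which the SQL/MX ODBC Replicat rejects,
    -- with a definitions file generated by DEFGEN on the source system:
    SOURCEDEFS ./dirdef/rtabc.def
    MAP RT.ABC, TARGET RT.ABC;
    ```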

  • Initial load methodology

    hello
    My project is to replicate a JD Edwards database (Oracle 10.2.0.4, 1 TB) in real time with GoldenGate (11.1.1.1.2 on AIX 6.1).
    I want to be sure to validate my initial load setup. What I want to do is:
    1 - start extract + pump, begin now (apply stopped)
    2 - start export (expdp from source + impdp on target), get the date/time of the end of the import process: date_end_import
    3 - alter replicat begin (date_end_import + 10 min)
    4 - start replicat
    Is it OK?
    thank you

    I think I'm wrong; it seems the right operations are:
    1 - start extract + pump, begin now (apply stopped)
    2 - start export (expdp from source + impdp on target), get the date/time of the beginning of the export process: date_begin_export
    3 - AFTER IMPORT: alter replicat begin (date_begin_export)
    4 - AFTER IMPORT: start replicat
    But reading the forum gives the following setup:
    1 - start extract + pump, begin now  (apply stopped)
    2 - ON SOURCE : select dbms_flashback.get_system_change_number() from dual;  (ex : scn=123)
    3 - ON SOURCE expdp ... flashback_scn=123 ...
    4 - ON TARGET  impdp ...
    5 - AFTER IMPORT : start replicat AFTERCSN 123
    Thank you
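    The SCN-based setup in the last list can be sketched end to end; the group names, schema, and SCN value below are illustrative, not from the post:

    ```
    -- 1. Start change capture (apply stopped)
    GGSCI> START EXTRACT ext1
    GGSCI> START EXTRACT pump1

    -- 2. On the source, grab a consistent SCN
    SQL> SELECT dbms_flashback.get_system_change_number() FROM dual;
    --    (suppose it returns 123456)

    -- 3. Export consistent as of that SCN, then import on the target
    shell> expdp system schemas=JDE flashback_scn=123456 dumpfile=jde.dmp
    shell> impdp system schemas=JDE dumpfile=jde.dmp

    -- 4. Start Replicat, skipping transactions committed at or before that SCN
    GGSCI> START REPLICAT rep1, AFTERCSN 123456
    ```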

  • Initial load with LOBs

    Hi, I'm trying to do an initial load and I keep getting errors like these:
    ERROR OGG-01192 Oracle GoldenGate Capture for Oracle, ext1.prm: Trying to use RMTTASK on data types which may be written as LOB chunks (Table: 'TESTDB.BLOBTABLE').
    ERROR OGG-01668 Oracle GoldenGate Capture for Oracle, ext1.prm: PROCESS ABENDING.
    The table looks like this:
    COLUMN_NAME|DATA_TYPE|NULLABLE|DATA_DEFAULT|COLUMN_ID|COMMENTS
    UUID     VARCHAR2(32 BYTE)     No          1     
    DESCRIPTION     VARCHAR2(2000 BYTE)     Yes          2     
    CONTENT     BLOB     Yes          3     
    I've checked, and the source database does contain data in the BLOB table, and both databases have the same tables, so now I have no idea what can be wrong. =/

    For initial loads with LOBs, use an RMTFILE and a normal Replicat. There are a number of things that are not supported with RMTTASK. An RMTFILE is basically the same format as an RMTTRAIL file, but is specifically for initial loads or other captured data that is not a continuous stream. And make sure you have a newer build of GG (either v11 or the latest 10.4 from the support site).
    The 'extract' would look something like this:
    ggsci> add extract e1aa, sourceIsTable
    ggsci> edit param e1aa
    extract e1aa
    userid ggs, password ggs
    -- either local or remote
    -- extFile dirdat/aa, maxFiles 999999, megabytes 100
    rmtFile dirdat/aa, maxFiles 999999, megabytes 100
    Table myschema1.*;
    Table myschema2.*;
    Then on the target, use a normal 'replicat' to read the "files".
    Note that if the source and target are both oracle, this is not the most efficient way to instantiate the target. Using export/import or backup/restore (or any other mechanism) would usually be preferable.
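    To round out the sketch, the matching target side might look like this (group and schema names are illustrative), reading the rmtFile set as a trail with a checkpoint-free Replicat, as in the nodbcheckpoint example earlier on this page:

    ```
    ggsci> add replicat r1aa, extTrail dirdat/aa, nodbcheckpoint
    ggsci> edit param r1aa
    replicat r1aa
    userid ggs, password ggs
    assumeTargetDefs
    map myschema1.*, target myschema1.*;
    map myschema2.*, target myschema2.*;
    ggsci> start replicat r1aa
    ```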

  • Multiple Initial Load

    Good day!
    I have 27 schemas (each containing 1000+ tables) and I need to replicate all of them to our DR machine. My question: is it possible to create multiple initial-load Extracts so I can finish the initial load in a small amount of time?
    Thanks!!
    Regards,
    Mela

    I'm having this kind of problem.
    I need to migrate a database from Solaris with raw devices to AIX RAC with ASM.
    As it is a production server and it cannot spend too much time down to transfer huge data using expdp from one server to another, the only way I found to do it is using GoldenGate.
    Can somebody help me?
    Regards,
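    To the parallelism question: one common approach (a sketch; the names, ports, and paths are placeholders) is one SOURCEISTABLE Extract per schema, or per group of schemas, each writing its own file set so the loads run concurrently:

    ```
    -- initial-load Extract parameter file for the first schema
    -- (e.g. dirprm/iload01.prm); clone it per schema, changing
    -- the remote file prefix and the TABLE clause
    SOURCEISTABLE
    USERID gguser, PASSWORD ********
    RMTHOST drhost, MGRPORT 7809
    RMTFILE ./dirdat/s1, MAXFILES 999, MEGABYTES 10
    TABLE schema01.*;
    ```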

  • Initial Load Error

    Hi All
    I am trying to do the initial load but I am receiving the following error.
    However, the trail file has been created on the target in the dirdat folder, about 2 GB in size.
    Source and destination are Windows 2003 32-bit.
    GG version is 11.2.1.0.1 OGGCORE_11.2.1.0.1_PLATFORMS_120423.0230
    Do you have any idea?
    Source Context :
    SourceModule : [er.extrout]
    SourceID : [er/extrout.c]
    SourceFunction : [complete_tcp_msg]
    SourceLine : [1480]
    2013-01-09 18:00:57 ERROR OGG-01033 There is a problem in network communication, a remote file problem, encryption keys for target and source do not match (if using ENCRYPT) or an unknown error. (Remote file used is ./dirdat/INITLOAD01.DAT, reply received is Error 0 (The operation completed successfully.) getting position in ./dirdat/INITLOAD01.DAT).
    2013-01-09 18:00:57 ERROR OGG-01668 PROCESS ABENDING.
    ggserr.log on target
    OGG-01223 Oracle GoldenGate Collector for Oracle: Error 0 (The operation completed successfully.) getting position in ./dirdat/INITLOAD01.DAT.
    INFO OGG-01670 Oracle GoldenGate Collector for Oracle: Closing ./dirdat/INITLOAD01.DAT.
    INFO OGG-01676 Oracle GoldenGate Collector for Oracle: Terminating after client disconnect

    Why do you keep naming the file?
    Enter the parameters listed in Table 26 in the order shown, starting a new line for each parameter statement. The following is a sample initial-load Extract parameter file for
    loading data from file to Replicat.
    <your extract name>
    SOURCEISTABLE
    SOURCEDB mydb, USERID ogg, PASSWORD
    AACAAAAAAAAAAAJAUEUGODSCVGJEEIUGKJDJTFNDKEJFFFTC &
    AES128, ENCRYPTKEY securekey1
    RMTHOST ny4387, MGRPORT 7888, ENCRYPT AES 192 KEYNAME mykey
    ENCRYPTTRAIL AES192, KEYNAME mykey1
    RMTFILE /ggs/dirdat/initld, MEGABYTES 2, PURGE
    TABLE hr.*;
    TABLE sales.*;

  • Initial Load of DD* tables

    Hi everyone,
    Just a hopefully quick and simple question. We set up an SLT Replication Server recently and created our first SLT configuration into HANA. We have no need for real time, so we chose a scheduled time out of office hours. The configuration was successful, but the DD* tables appeared in a scheduled state. From what I understood, these tables should populate (load) initially regardless of the schedule. What appeared to happen was that they were waiting for the first schedule to run. Is this expected? Without these populating initially, we could not choose any real ERP tables to replicate.
    We also tried with a non-SAP source (Oracle) and the DD* tables for that configuration were populated instantly even though that config was scheduled to run "off-peak" as well.
    Thanks,
    Marcus.

    Hi Marcus,
    As far as I understand your question, please find my comments below.
    (SAP source system scenario)
    If the configuration is created with the "schedule by time" option, which I think was done in your case, then the replication server replicates database changes to the target system at the time you set. Here the metadata of the tables is copied from the source system tables [DD002L and DD02T].
    Yes, you are correct that ideally we should start ERP table replication after the DD* tables are replicated successfully. This is generally faster but depends upon the system.
    (non-SAP source system scenario)
    Here the DD* tables are just initially loaded and not automatically replicated.
    So you would find a difference in how the replication takes place in the two scenarios.
    Hope this answers your query to some extent.
    Regards,
    Saritha K

  • Initial load with CDC

    I am now able to replicate changes made on the source table and make the changes visible on the staging site. Now I have two questions:
    1.) How do I manage the initial load of the source table?
    The table object gets replicated as an empty table; after the first change on the source table the changes are being replicated, but not the whole contents of the table.
    2.) How do I manage to apply the changes to the target table?
    I have the changes in the change view, but with a.) all the additional columns ending with a $-sign and b.) only the changed values, if the modification was an update operation.
    rueisel

    Hi Rueisel,
    1. Before starting replication, initialize your destination tables with Data Pump. You can create a script based on impdp/expdp or write a PL/SQL package around the Data Pump API (not very difficult and well documented).
    2. To propagate the changes from the change tables to the destination tables, you have to write your own solution. The main principle is to create a job that reads the change tables and updates the destination tables (use MERGE statements). The job can be stored on the destination database or the staging one. A database link must be created between the destination and staging databases.
    Warning: if you have to propagate CLOB/BLOB objects, you have to create a specific solution, because you cannot access them through a database link.
    I hope it helps,
    Cyryl
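    The merge job described in point 2 could be sketched like this; the table, key, and column names are made up, and `operation$` stands in for whichever CDC metadata column marks the operation type. Deletes would need a separate pass (or a DELETE WHERE clause keyed on that column); this sketch covers inserts and updates only.

    ```sql
    -- Apply net changes from a change view into the destination table.
    MERGE INTO dest_table d
    USING (SELECT id, col1, col2 FROM change_view) c
    ON (d.id = c.id)
    WHEN MATCHED THEN
      UPDATE SET d.col1 = c.col1, d.col2 = c.col2
    WHEN NOT MATCHED THEN
      INSERT (id, col1, col2) VALUES (c.id, c.col1, c.col2);
    ```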

  • Golden Gate Initial load from 3 tb schema

    Hi
    My source database is 9i RDBMS on Solaris 5.10. I would like to build an 11gR2 database on Oracle Enterprise Linux.
    How can I do the initial load of a 3 TB schema from my source to target (which is a cross-platform and different-version RDBMS)?
    Thanks

    Couple of options.
    Use old export/import to do the initial load. While that is taking place, turn on change capture on the source so any transactions that take place during exp/imp timeframe are captured in the trails. Once the init load is done, you start replicat with the trails that have accumulated since exp started. Once source and target are fully synchronized, do your cutover to the target system.
    Do an in-place upgrade of your 9i source, to at least 10g. Reason: use transportable tablespaces (or, you can go with expdp/impdp). If you go the TTS route, you will also have to take into account endian/byte ordering of the datafiles (Solaris = big, Linux = little), and that will involve time to run RMAN convert. You can test this out ahead of time both ways. Plus, you can get to 10g on your source via TTS since you are on the same platform. When you do all of this for real, you'll also be starting change capture so trails can be applied to the target (not so much the case with TTS, but for sure with Data Pump).

  • Initial load gets slower and slower

    For a PoC I tried to use the internal GoldenGate mechanism for the initial load. The size of the table is about 500 MB in total, but over time the load slows down: starting with nearly 1000 rows per second, after one hour I was down to 50 rows per hour, and it kept decreasing to no more than 10 rows per hour. So the entire load took 15 hours!
    There is only a primary key on the target table and no other constraints.
    Any idea?

    Same thing happens performance-wise with imports: they start off pretty fast, then slow down. Can you rebuild/enable the PK index after the load is done? That should be a safe operation, given that your source has a PK. Are you sure there aren't any other constraints (or triggers) on the target table?
    Plus (assuming you are a DBA), what does AWR (or statspack, or tracing) show for wait events?
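    The rebuild/enable suggestion above could look like this in SQL*Plus; the table and constraint names are placeholders:

    ```sql
    -- before the load: stop index maintenance on the PK
    ALTER TABLE target_tab MODIFY CONSTRAINT target_pk DISABLE;
    -- ... run the initial load ...
    -- after the load: re-enabling validates the data and rebuilds the index
    ALTER TABLE target_tab MODIFY CONSTRAINT target_pk ENABLE;
    ```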

  • Golden Gate - Initial Load using parallel process group

    Dear all,
    I am new to GG and I was wondering whether GG can support an initial load with parallel process groups. I have managed to do an initial load using "Direct Bulk Load" and "File to Replicat", but I have several big tables and Replicat is not catching up. I am aware that GG is not ideal for initial loads, but it is complicated to explain why I am using it.
    Is it possible to use the @RANGE function while performing the initial load, regardless of which method is used (file to Replicat, direct bulk, ...)?
    Thanks in advance

    You may use Data Pump for the initial load of large tables.
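    On the @RANGE part: it is a row-filter function used in the MAP/TABLE FILTER clause, so it is independent of the load method. A sketch splitting one large table across three Replicats (the schema, table, and key column names are illustrative):

    ```
    -- Replicat #1 of 3: takes roughly a third of the rows, hashed on ID
    MAP src.bigtab, TARGET tgt.bigtab, FILTER (@RANGE (1, 3, ID));
    -- Replicat #2 uses @RANGE (2, 3, ID); Replicat #3 uses @RANGE (3, 3, ID).
    ```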

  • Golden Gate Initial Load - Performance Problem

    Hello,
    I'm using the fastest method of initial load, Direct Bulk Load, with the additional parameters:
    BULKLOAD NOLOGGING PARALLEL SKIPALLINDEXES
    Unfortunately, the load of a big table of 734 billion rows (around 30 GB) takes about 7 hours. The same table loaded with a normal INSERT statement in parallel via DB link takes 1 hour 20 minutes.
    Why does it take so long using GoldenGate? Am I missing something?
    I've also noticed that the load time with and without the PARALLEL parameter for BULKLOAD is almost the same.
    Regards
    Pawel

    Hi Bobby,
    It's Extract / Replicat using SQL*Loader.
    Created with the following commands:
    ADD EXTRACT initial-load_Extract, SOURCEISTABLE
    ADD REPLICAT initial-load_Replicat, SPECIALRUN
    The Extract parameter file:
    USERIDALIAS {:GGEXTADM}
    RMTHOST {:EXT_RMTHOST}, MGRPORT {:REP_MGR_PORT}
    RMTTASK replicat, GROUP {:REP_INIT_NAME}_0
    TABLE Schema.Table_name;
    The Replicat parameter file:
    REPLICAT {:REP_INIT_NAME}_0
    SETENV (ORACLE_SID='{:REPLICAT_SID}')
    USERIDALIAS {:GGREPADM}
    BULKLOAD NOLOGGING NOPARALLEL SKIPALLINDEXES
    ASSUMETARGETDEFS
    MAP Schema.Table_name, TARGET Schema.Table_tgt_name,
    COLMAP(USEDEFAULTS),
    KEYCOLS(PKEY),
    INSERTAPPEND;
    Regards,
    Pawel
