Golden Gate - DML statements are not replicated to target database

Hi,
Testing Environment
Source:
OS: RHEL 4.6, Database: 11gR2, Golden Gate 10.4, ASM
extract ext1
--connection to database
userid ggate, password qwerty
--hostname and port for trail
rmthost win2003, mgrport 7800
--path and name for trail
rmttrail C:\app\admin\GOLDENGATE\dirdat\lt
EXTTRAIL /u01/oracle/goldengate/dirdat/lt
--TRANLOGOPTIONS ASMUSER SYS@ASM, ASMPASSWORD sys ALTARCHIVELOGDEST /u03/app/arch/ORCL/archivelog
--DDL support
ddl include mapped objname sender.*;
--DML
table sender.*;
Target:
OS: Windows 2003, Database: 11gR2, Golden Gate 10.4
--replicat group
replicat rep1
--source and target definitions
ASSUMETARGETDEFS
--target database login
userid ggate, password ggate
--file for discarded transactions
discardfile C:\app\admin\GOLDENGATE\discard\rep1_disc.txt, append, megabytes 10
--ddl support
DDL
--specifying table mapping
map sender.*, target receiver.*;
I've successfully set up an Oracle GoldenGate test environment as above.
DDL statements are replicating to the target database, but DML statements are not.
Please help me figure out why.
Regards,
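A useful first check when DDL replicates but DML does not is whether the Extract is capturing DML at all. The GGSCI commands below are standard; the first shows per-table insert/update/delete counts captured, the second shows checkpoints, including which trail is being written:
GGSCI> STATS EXTRACT ext1, TOTAL
GGSCI> INFO EXTRACT ext1, SHOWCH
If STATS reports zero DML operations, the problem is on the capture side; if operations are counted, follow the data toward the trail and the Replicat.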

Almost OK, but how will you handle the overlap of transactions for the new table (transactions captured by expdp and also captured by the Extract)?
Metalink doc ID 1332674.1 has the complete steps. Follow the "without HANDLECOLLISIONS" approach.
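For reference, the general shape of that approach (a sketch only; the complete steps and caveats are in the note) is to export the new table consistently at a known SCN, import it on the target, and start the Replicat just past that SCN so the exported rows are not applied twice:
SQL> SELECT current_scn FROM v$database;
$ expdp ... TABLES=sender.<newtab> FLASHBACK_SCN=<scn_from_above>
GGSCI> START REPLICAT rep1, AFTERCSN <scn_from_above>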

Similar Messages

  • Data is not replicated on target database - oracle stream

    I have set up Streams replication on 2 databases running Oracle 10.1.0.2 on Windows, following the Metalink doc's steps for setting up one-way replication between two Oracle databases using Streams at the schema level.
    I entered a few records in the source db, but the data is not getting replicated to the destination db. Could you please guide me on how to analyse this problem and reach a solution?
    Steps for configuration (as followed from the Metalink doc):
    ==================
    Set up ARCHIVELOG mode.
    Set up the Streams administrator.
    Set initialization parameters.
    Create a database link.
    Set up source and destination queues.
    Set up supplemental logging at the source database.
    Configure the capture process at the source database.
    Configure the propagation process.
    Create the destination table.
    Grant object privileges.
    Set the instantiation system change number (SCN).
    Configure the apply process at the destination database.
    Start the capture and apply processes.
    Section 2 : Create user and grant privileges on both Source and Target
    2.1 Create Streams Administrator :
    connect SYS/password as SYSDBA
    create user STRMADMIN identified by STRMADMIN;
    2.2 Grant the necessary privileges to the Streams Administrator :
    GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE,DBA to STRMADMIN;
    In 10g :
    GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE,DBA to STRMADMIN;
    execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');
    2.3 Create streams queue :
    connect STRMADMIN/STRMADMIN
    BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'STREAMS_QUEUE_TABLE',
    queue_name => 'STREAMS_QUEUE',
    queue_user => 'STRMADMIN');
    END;
    /
    Section 3 : Steps to be carried out at the Destination Database PLUTO
    3.1 Add apply rules for the Schema at the destination database :
    BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name => 'SCOTT',
    streams_type => 'APPLY',
    streams_name => 'STRMADMIN_APPLY',
    queue_name => 'STRMADMIN.STREAMS_QUEUE',
    include_dml => true,
    include_ddl => true,
    source_database => 'REP2');
    END;
    /
    3.2 Specify an 'APPLY USER' at the destination database:
    This is the user who would apply all DML statements and DDL statements.
    The user specified in the APPLY_USER parameter must have the necessary
    privileges to perform DML and DDL changes on the apply objects.
    BEGIN
    DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name => 'STRMADMIN_APPLY',
    apply_user => 'SCOTT');
    END;
    /
    3.3 Start the Apply process :
    DECLARE
    v_started number;
    BEGIN
    SELECT decode(status, 'ENABLED', 1, 0) INTO v_started
    FROM DBA_APPLY WHERE APPLY_NAME = 'STRMADMIN_APPLY';
    if (v_started = 0) then
    DBMS_APPLY_ADM.START_APPLY(apply_name => 'STRMADMIN_APPLY');
    end if;
    END;
    /
    Section 4 :Steps to be carried out at the Source Database REP2
    4.1 Move LogMiner tables from SYSTEM tablespace:
    By default, all LogMiner tables are created in the SYSTEM tablespace.
    It is a good practice to create an alternate tablespace for the LogMiner
    tables.
    CREATE TABLESPACE LOGMNRTS DATAFILE 'logmnrts.dbf' SIZE 25M AUTOEXTEND ON
    MAXSIZE UNLIMITED;
    BEGIN
    DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS');
    END;
    /
    4.2 Turn on supplemental logging for the DEPT and EMPLOYEES tables:
    connect SYS/password as SYSDBA
    ALTER TABLE scott.dept ADD SUPPLEMENTAL LOG GROUP dept_pk(deptno) ALWAYS;
    ALTER TABLE scott.EMPLOYEES ADD SUPPLEMENTAL LOG GROUP dep_pk(empno) ALWAYS;
    Note: If there are many tables, supplemental logging can instead be set
    at the database level, as shown below.
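    For reference, the database-level alternative is a single statement (standard syntax; it enables minimal supplemental logging for every table):
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;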
    4.3 Create a database link to the destination database :
    connect STRMADMIN/STRMADMIN
    CREATE DATABASE LINK PLUTO connect to
    STRMADMIN identified by STRMADMIN using 'PLUTO';
    Test that the database link works by querying the destination database, e.g.:
    select * from global_name@PLUTO;
    4.4 Add capture rules for the schema SCOTT at the source database:
    BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name => 'SCOTT',
    streams_type => 'CAPTURE',
    streams_name => 'STREAM_CAPTURE',
    queue_name => 'STRMADMIN.STREAMS_QUEUE',
    include_dml => true,
    include_ddl => true,
    source_database => 'REP2');
    END;
    /
    4.5 Add propagation rules for the schema SCOTT at the source database.
    This step will also create a propagation job to the destination database.
    BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name => 'SCOTT',
    streams_name => 'STREAM_PROPAGATE',
    source_queue_name => 'STRMADMIN.STREAMS_QUEUE',
    destination_queue_name => 'STRMADMIN.STREAMS_QUEUE@PLUTO',
    include_dml => true,
    include_ddl => true,
    source_database => 'REP2');
    END;
    /
    Section 5 : Export, import and instantiation of tables from
    Source to Destination Database
    5.1 If the objects are not present in the destination database, perform
    an export of the objects from the source database and import them
    into the destination database
    Export from the Source Database:
    Specify the OBJECT_CONSISTENT=Y clause on the export command.
    By doing this, an export is performed that is consistent for each
    individual object at a particular system change number (SCN).
    exp USERID=SYSTEM/manager@rep2 OWNER=SCOTT FILE=scott.dmp
    LOG=exportTables.log OBJECT_CONSISTENT=Y STATISTICS = NONE
    Import into the Destination Database:
    Specify STREAMS_INSTANTIATION=Y clause in the import command.
    By doing this, the streams metadata is updated with the appropriate
    information in the destination database corresponding to the SCN that
    is recorded in the export file.
    imp USERID=SYSTEM@pluto FULL=Y CONSTRAINTS=Y FILE=scott.dmp IGNORE=Y
    COMMIT=Y LOG=importTables.log STREAMS_INSTANTIATION=Y
    5.2 If the objects are already present in the destination database, there
    are two ways of instantiating the objects at the destination site.
    1. By means of Metadata-only export/import :
    Specify ROWS=N during Export
    Specify IGNORE=Y during Import along with above import parameters.
    2. By manually instantiating the objects:
    Get the Instantiation SCN at the source database:
    connect STRMADMIN/STRMADMIN@source
    set serveroutput on
    DECLARE
    iscn NUMBER; -- Variable to hold instantiation SCN value
    BEGIN
    iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
    DBMS_OUTPUT.PUT_LINE ('Instantiation SCN is: ' || iscn);
    END;
    /
    Instantiate the objects at the destination database with
    this SCN value. The SET_TABLE_INSTANTIATION_SCN procedure
    controls which LCRs for a table are to be applied by the
    apply process. If the commit SCN of an LCR from the source
    database is less than or equal to this instantiation SCN,
    then the apply process discards the LCR. Else, the apply
    process applies the LCR.
    connect STRMADMIN/STRMADMIN@destination
    BEGIN
    DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
    SOURCE_SCHEMA_NAME => 'SCOTT',
    source_database_name => 'REP2',
    instantiation_scn => &iscn );
    END;
    /
    Enter value for iscn:
    <Provide the value of SCN that you got from the source database>
    Note: In 9i, you must instantiate each table individually.
    In 10g, the recursive => true parameter of DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN
    is used for instantiation, as sketched below.
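    A sketch of the 10g schema-level call with that parameter, reusing the names from this example:
    BEGIN
    DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
    source_schema_name => 'SCOTT',
    source_database_name => 'REP2',
    instantiation_scn => &iscn,
    recursive => true); -- also sets the instantiation SCN for every table in the schema
    END;
    /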
    Section 6 : Start the Capture process
    begin
    DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'STREAM_CAPTURE');
    end;
    /
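    Once both ends are running, a quick health check with the standard Streams dictionary views is:
    SELECT capture_name, status, error_message FROM dba_capture;
    SELECT apply_name, status, error_message FROM dba_apply;
    SELECT apply_name, local_transaction_id, error_message FROM dba_apply_error;
    A capture or apply process in ABORTED status, or rows in DBA_APPLY_ERROR, usually explains missing data on the destination.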

    You must have imported a JKM; after that, these are the steps:
    1. Go to source datastrore and click on CDC --> Add to CDC
    2. Click on CDC --> Start Journal
    3. Now go to the interface, choose the source table, select "Journalized data only", and click OK
    4. Now execute the interface
    If it still doesn't work: are you using transactions in your interface?

  • Some Sales Orders are not replicating into CRM

    Hi guys,
    When I check SMQ2 in CRM, I find that some sales orders are not replicating into CRM. When I check the error, it says to look at SMW01.
    SMW01 shows a validation error, and the BDoc status is F05 (information, no processing). I tried to find the sales order in the CRMD_ORDER transaction but could not.
    Error in SMW01:
         Processing of document with Guid E01D11C71856C4F1AEBF0024E84DD0CE is canceled     CRM_ORDER
         Validation error occurred: Module CRM_DOWNLOAD_BTMBDOC_VAL, BDoc type BUS_TRANS_MSG.     SMW3
    It doesn't give much of a clue what the error actually is.
    I tried to set up a request to replicate those sales orders; those requests end up with the same error.
    any help on this?
    thanks,
    Ken

    Hello Ken,
    When reprocessing a BDoc (via SMW01) you might need to set a flag.
    Go to transaction SMW01, start the debugger using the command '/h', and reprocess the BDoc. Once in debug mode, press SHIFT+F7 and set a breakpoint (2nd tab) in class CL_SMW_FLOW, method RESTART_PROCESSING. Keep debugging (F5 or F6 as appropriate) until you reach the flag L_RETRY_ALLOWED, and set it to 'X'. This will allow you to reprocess the BDoc.
    Thanks,
    Rohit

  • Limit PO price changes are not replicated to ECC backend

    Hi all,
    SRM 5.0 ECS SP13.
    When I change the value (increase/decrease) in a limit PO that has already been created (the PO is changed for the first time) and order the PO, the PO total value (at header level) on the Overview screen shows the correct new value, while at item level the net price still shows the old value. However, when I check the item details, the fields "VALUE" and "EXPECTED VALUE" have the changed/new value.
    Because of this issue, the PO changes are not replicated to the ECC backend system. When I tested the BAPI BAPI_PO_CHANGE1 with the test data, there was no error message in the ECC backend system.
    After this, when I change the PO a second time (again a value increase/decrease), the total value and net price are shown correctly in SRM and the changes are also replicated to the ECC backend system!
    I have also checked the configuration for the spool parameters under Set Control Parameters, and everything is set correctly as shown below:
    SPOOL_JOB_USER        User that executes the spool job            CUA_ADMIN
    SPOOL_LEAD_INTERVAL   Interval by which the retry time increases  60
    SPOOL_MAX_RETRY       Max. number of retries for writing BAPIs    10
    Can someone throw some pointers how to resolve the above issue.
    Thanks in advance.

    Hi
    Can you recreate the same issue? Are you saying that:
    1. You create a limit PO for 100 USD and it is replicated to ECC as a 100 USD total value.
    2. You then update the PO in SRM to 150 USD and it is not replicated to ECC?
    or
    1. Limit SC
    2. Limit PO
    3. The update does not reach ECC.
    Note 1284361 - Limits not transferred to backend purchase order
    Symptom
    Extended classic scenario.
    You have ordered a purchase order with more than one hierarchy item containing a limit. The purchase order is created or updated in the backend system without errors, but the limits are missing in the backend purchase order.

  • Script to know how many statements are not loaded in oracle CM

    Hi,
    Could someone help me with preparing a script that identifies which bank statements are not loaded into Oracle CM, and how many checks under each such statement are not loaded?
    I have the following info with me..
    1. The statements are loaded in a temporary table as soon as we run statement loader program.
    2. Then the records will get into ce_statement_headers_int_all and ce_statement_lines_interface.
    3. Then they will get loaded into ce_statement_headers_all and ce_statement_lines tables.
    Now, the only thing I am confused about is how to get the list of statements that were not loaded into Oracle, along with the error messages. Which tables give the exact information: ce_statement_headers_int_all and ce_statement_lines_interface, or ce_headers_interface_errors and ce_lines_interface_errors? Immediate help would be highly appreciated.
    Thanks.

    Hi Helios,
    I didn't get you. This is not something for which an SR can be raised. I need to know how a script can be written to identify the statements that are not loaded into Oracle Cash Management. I need to prepare a notification that reports, on a daily basis, how many statements loaded successfully and how many did not.
    Thanks.
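    A starting sketch for such a script, using the table names from this thread. The join on statement_number alone and the creation_date column are assumptions to verify against your environment; production code must also match on the bank account:
    SELECT TRUNC(hi.creation_date) AS load_day,
           COUNT(*) AS statements_not_loaded
      FROM ce_statement_headers_int_all hi
     WHERE NOT EXISTS (SELECT 1
                         FROM ce_statement_headers_all h
                        WHERE h.statement_number = hi.statement_number)
     GROUP BY TRUNC(hi.creation_date);
    The same NOT EXISTS pattern against ce_statement_lines_interface gives, per statement, the number of lines (checks) that were not loaded, and ce_headers_interface_errors / ce_lines_interface_errors hold the accompanying error messages.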

  • Customers Created in R3 are not replicating in connected CRM System.

    Dear All,
    I am facing an issue: customers created in SAP R/3 are not replicating to the connected CRM system.
    Please provide a checklist of possible causes (including any middleware settings) to solve this problem. This is an urgent issue with the production system.
    Thanks & Regards,
    Ramana Rao

    Hi,
    Check the following:
    SMQ1/SMQ2 - inbound/outbound queues in both the ERP and CRM systems. Look for "stopped" queues.
    SMW01 - BDoc messages in CRM, with detailed explanations of replication errors.
    Check that the "stop data" flag in table CRMRFCPAR in the ERP system is unchecked.
    Check your filter definitions in transaction R3AC1 for business object CUSTOMER_MAIN.
    Also check SAP Note 429423 - it has a detailed checklist for middleware problems.
    Rika

  • EMERGENCY: List of files in Recovery Area not managed by the database

    Hi,
    This is an emergency request.
    I have a Data Guard setup, and I accidentally dropped the online redo logs, standby logs, and archive logs, except for the current online redo log.
    I actually ran a cleanup of the whole directory, which included the controlfile and the datafiles (I was in the wrong location on ASM), but the controlfile and the datafiles did not get deleted; I can still see them in ASM.
    Now, when I try to restore the archive logs, I keep getting the error shown below:
    WARNING: A file of type ARCHIVED LOG may exist in
    db_recovery_file_dest that is not known to the database.
    Use the RMAN command CATALOG RECOVERY AREA to re-catalog
    any such files. This is most likely the result of a crash
    during file creation.
    So when I ran the command below, I got the following:
    RMAN> CATALOG RECOVERY AREA;
    using target database control file instead of recovery catalog
    searching for all files in the recovery area
    no files found to be unknown to the database
    List of files in Recovery Area not managed by the database
    ==========================================================
    File Name: +DATA/afcpdg/controlfile/controlfile01.ctl
    RMAN-07526: Reason: File is not an Oracle Managed File
    File Name: +DATA/afcpdg/onlinelog/group_1.304.718194815
    RMAN-07527: Reason: File was not created using DB_RECOVERY_FILE_DEST initialization parameter
    number of files not managed by recovery area is 82, totaling 168.10GB
    But I do see the data files that RMAN is complaining about, as shown above, in ASM.
    I hope I did not mess up the controlfile and the datafiles. Can someone help me fix this mistake?
    Right now Data Guard is running fine: the logs are getting shipped from the primary and applied at the standby.
    I can also see that all the redo logs I deleted exist again, but I am not sure how they got re-created in ASM (I did not create them manually).
    Thanks.
    Philip.

    Yes, this happened on the standby database.
    I had to restore the archive logs because I deleted all the archive logs that were shipped from the primary.
    I did re-create the standby logs. But when I checked back, the redo logs that I deleted had come back again. I have no idea how they came back. Is it a feature of ASM that a mirrored copy of the files is stored and re-created when files are accidentally dropped?
    Can you help me understand how the redo logs got created again automatically?
    Right now the data guard is running fine but i do see the error
    WARNING: A file of type ARCHIVED LOG may exist in
    db_recovery_file_dest that is not known to the database.
    Use the RMAN command CATALOG RECOVERY AREA to re-catalog
    any such files. This is most likely the result of a crash
    during file creation.
    And when I run the following:
    RMAN> CATALOG RECOVERY AREA;
    I see the errors below:
    List of files in Recovery Area not managed by the database
    ==========================================================
    File Name: +DATA/afcpdg/controlfile/controlfile01.ctl
    RMAN-07526: Reason: File is not an Oracle Managed File
    File Name: +DATA/afcpdg/onlinelog/group_1.304.718194815
    RMAN-07527: Reason: File was not created using DB_RECOVERY_FILE_DEST initialization parameter
    File Name: +DATA/afcpdg/onlinelog/group_2.372.718194833
    RMAN-07527: Reason: File was not created using DB_RECOVERY_FILE_DEST initialization parameter
    File Name: +DATA/afcpdg/datafile/system.596.715610261
    RMAN-07527: Reason: File was not created using DB_RECOVERY_FILE_DEST initialization parameter
    File Name: +DATA/afcpdg/datafile/undotbs1.571.715603525
    number of files not managed by recovery area is 82, totaling 168.10GB
    We do have db_create_file_dest and db_recovery_file_dest pointing to the same location, +DATA.
    How do I fix the above error?
    Thanks in advance.
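    For what it's worth, the RMAN-07526/07527 lines appear to be informational here: a controlfile, online logs, and datafiles legitimately live outside recovery-area management, so CATALOG RECOVERY AREA simply lists and skips them. If the goal is only to re-catalog restored archived logs, a narrower command can be used (the path below is an assumption based on the file names in this thread):
    RMAN> CATALOG START WITH '+DATA/afcpdg/archivelog/';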

  • In Journalization updates are not reflecting in target

    Hi,
    I am using Simple Journalization for SQL Server tables. Only the newly added rows are reflected in the target, not the updated ones: I inserted two rows and updated one row in the source, and only the inserted records appear in the target. When I check the intermediate table (I$_CDC_TARGET) and the JKM table (J$CDC_SOURCE), both the inserted and the updated rows are there. Why are the updated records not reflected in the target?
    Both Source and Target tables have Primary keys.
    Thanks,
    Naveen Suram

    Hi,
    I am using IKM SQL Incremental Update (row by row). Updates are coming into the I$ table, and I can see these records in the journal data as well.
    One doubt here: no extra columns are appearing in my source. I referred to John Goodwin's blog; according to it, there should be some extra columns related to the journal, right? Why are they not appearing?
    First I added the JKM to my model, then added the data source to CDC, then created a subscriber, then started a journal.
    Am I missing anything?
    Thanks,
    Naveen Suram
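    For reference, ODI's simple CDC does not add journal columns to the source table itself; the JRN_* columns live in the J$ table and in the journalized view used when "Journalized data only" is checked. A quick look at pending changes (table name from this thread; the column list is the usual ODI convention, so verify it in your repository version):
    SELECT jrn_subscriber, jrn_flag, jrn_date
    FROM j$cdc_source;
    JRN_FLAG is 'I' for inserts/updates and 'D' for deletes.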

  • Redo log files are not applying to standby database

    Hi everyone!!
    I have created a standby database on the same server (Windows XP), using Oracle 11g. I want to synchronize my standby database with the primary database, so I tried to apply redo logs from the primary to the standby as follows.
    My standby database is open and the primary database is not started (instance not started), because only one database can run in exclusive mode when DB_NAME is the same for both. I ran the following command on the standby database:
                SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
    It returns "Database altered" . But when I checked the last archive log on primary database, its sequence is 189 while on standby database it is 177. That mean archived redo logs are not applied on standby database.
    The tnsnames.ora file contains entry for both service primary & standby database and same service has been used to transmit and receive redo logs.
    1. How do I resolve this issue?
    2. Is it compulsory to have the primary database open?
    3. I created the standby control file using the command:
              SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS 'D:\APP\ORACLE\ORADATA\TESTCAT\CONTROLFILE\CONTROL_STAND1.CTL';
    So the database name in the standby control file is the same as the primary database name (PRIM), and hence the init.ora file of the standby also contains the DB_NAME = 'PRIM' parameter. I can't change it because a database-name mismatch raises an error on startup. Should the two databases have different names, or is the existing setup correct?
    Can anybody help me get out of this situation?
    Thanks & Regards
    Tushar Lapani

    Thank you, Girish. That solved my redo apply problem. I set the log_archive_dest parameter again and then checked the archived redo log sequence numbers; they were the same for the primary and the standby database. But the table on the standby database is still not being refreshed.
    I ran the following scenario:
    1. Inserted 200,000 rows into the emp table of the SCOTT user on the primary database and committed the changes.
    2. Then I synchronized the standby database using the ALTER DATABASE command, and verified that the archive log sequence number is the same for both databases, meaning the archived logs from the primary have been applied to the standby.
    3. But when I count the rows in the emp table of the SCOTT user on the standby database, it returns only 14 rows, even though the redo logs have been applied.
    So my question is: why are changes made to the primary database not reflected on the standby, although the redo logs have been applied?
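    One way to confirm what the standby has actually applied before counting rows (standard v$ views, queried on the standby):
    SQL> SELECT sequence#, applied FROM v$archived_log ORDER BY sequence#;
    SQL> SELECT process, status, sequence# FROM v$managed_standby;
    Also note that a physical standby only exposes applied changes to queries once it is opened read-only after recovery has run; querying a table before the relevant redo is applied simply shows the old data.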
    Thanks

  • Export and Import are not supported on 10g databases for users logged

    Hi
    I am getting the error below when I try to export or import my database. I have a standalone system running Windows XP.
    Role Error - Export and Import are not supported on 10g databases for users logged in with the SYSDBA role. Logout and login using a different role before trying again.

    Actually, I am new to EM.
    When I want to import anything, it asks for the HOST CREDENTIALS, and I can't understand what it is asking for:
    Host Credentials
    * Username          
    * Password          
              Save as Preferred Credential
    It generates the error below:
    Error - ERROR: Wrong password for user

  • GG configuration - Changes are not replicating from Source to Target

    Hi,
    I am new to GoldenGate configuration. I have set up GG between two Oracle databases to replicate changes from the source to the target database.
    But when I update or insert into the HR.JOBS table on the source, the change does not propagate to the target, and I didn't find any error in the error log.
    Could you please help me figure out what I configured incorrectly here?
    SOURCE Database information
    ===========================
    Database Name : SRISRI(10.2.0.1) , Hostname : ggs-1.cu.com(Linux 64 bit)
    Oracle GoldenGate Delivery for Oracle Version 11.1.1.1.1 OGGCORE_11.1.1.1.1_PLATFORMS_110729.1700
    GGSCI (ggs-1.cu.com) 2> info all
    Program Status Group Lag Time Since Chkpt
    MANAGER RUNNING
    EXTRACT RUNNING EXT_1 00:00:00 00:00:01
    EXTRACT RUNNING PUMP_1 00:00:00 00:00:08
    GGSCI (ggs-1.cu.com) 3>
    Error Log
    =====================
    2012-02-12 15:08:34 INFO OGG-00975 Oracle GoldenGate Manager for Oracle, mgr.prm: EXTRACT EXT_1 starting.
    2012-02-12 15:08:34 INFO OGG-00992 Oracle GoldenGate Capture for Oracle, ext_1.prm: EXTRACT EXT_1 starting.
    2012-02-12 15:08:35 INFO OGG-01513 Oracle GoldenGate Capture for Oracle, ext_1.prm: Positioning to Sequence 55, RBA 11345936.
    2012-02-12 15:08:35 INFO OGG-01516 Oracle GoldenGate Capture for Oracle, ext_1.prm: Positioned to Sequence 55, RBA 11345936, Feb 12, 2012 3:00:14 PM.
    2012-02-12 15:08:35 INFO OGG-00993 Oracle GoldenGate Capture for Oracle, ext_1.prm: EXTRACT EXT_1 started.
    2012-02-12 15:08:35 INFO OGG-01055 Oracle GoldenGate Capture for Oracle, ext_1.prm: Recovery initialization completed for target file /app/ggs/trail/local_trail_1/ta000001, at RBA 1307.
    2012-02-12 15:08:35 INFO OGG-01478 Oracle GoldenGate Capture for Oracle, ext_1.prm: Output file /app/ggs/trail/local_trail_1/ta is using format RELEASE 10.4/11.1.
    2012-02-12 15:08:35 INFO OGG-01026 Oracle GoldenGate Capture for Oracle, ext_1.prm: Rolling over remote file /app/ggs/trail/local_trail_1/ta000001.
    2012-02-12 15:08:35 INFO OGG-01053 Oracle GoldenGate Capture for Oracle, ext_1.prm: Recovery completed for target file /app/ggs/trail/local_trail_1/ta000002, at RBA 1025.
    2012-02-12 15:08:35 INFO OGG-01057 Oracle GoldenGate Capture for Oracle, ext_1.prm: Recovery completed for all targets.
    2012-02-12 15:08:35 INFO OGG-01517 Oracle GoldenGate Capture for Oracle, ext_1.prm: Position of first record processed Sequence 55, RBA 11345936, SCN 0.873188, Feb 12, 2012 3:00:14 PM.
    2012-02-12 15:09:07 INFO OGG-00987 Oracle GoldenGate Command Interpreter for Oracle: GGSCI command (oracle): start pump_1.
    2012-02-12 15:09:07 INFO OGG-00963 Oracle GoldenGate Manager for Oracle, mgr.prm: Command received from GGSCI on host 218.186.40.99 (START EXTRACT PUMP_1 ).
    2012-02-12 15:09:07 INFO OGG-00975 Oracle GoldenGate Manager for Oracle, mgr.prm: EXTRACT PUMP_1 starting.
    2012-02-12 15:09:07 INFO OGG-00992 Oracle GoldenGate Capture for Oracle, pump_1.prm: EXTRACT PUMP_1 starting.
    2012-02-12 15:09:07 INFO OGG-00993 Oracle GoldenGate Capture for Oracle, pump_1.prm: EXTRACT PUMP_1 started.
    2012-02-12 15:09:12 INFO OGG-01226 Oracle GoldenGate Capture for Oracle, pump_1.prm: Socket buffer size set to 27985 (flush size 27985).
    2012-02-12 15:09:12 INFO OGG-01055 Oracle GoldenGate Capture for Oracle, pump_1.prm: Recovery initialization completed for target file /app/ggs/trail/remote_trail_1/tm000001, at RBA 1435.
    2012-02-12 15:09:12 INFO OGG-01478 Oracle GoldenGate Capture for Oracle, pump_1.prm: Output file /app/ggs/trail/remote_trail_1/tm is using format RELEASE 10.4/11.1.
    2012-02-12 15:09:12 INFO OGG-01026 Oracle GoldenGate Capture for Oracle, pump_1.prm: Rolling over remote file /app/ggs/trail/remote_trail_1/tm000002.
    2012-02-12 15:09:12 INFO OGG-01053 Oracle GoldenGate Capture for Oracle, pump_1.prm: Recovery completed for target file /app/ggs/trail/remote_trail_1/tm000002, at RBA 1063.
    2012-02-12 15:09:12 INFO OGG-01057 Oracle GoldenGate Capture for Oracle, pump_1.prm: Recovery completed for all targets.
    GGSCI (ggs-1.cu.com) 3> view report ext_1
    ** Running with the following parameters **
    EXTRACT ext_1
    USERID ogg, PASSWORD ***
    setenv (NLS_LANG="AMERICAN_AMERICA.WE8ISO8859P1")
    Set environment variable (NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1)
    EXTTRAIL /app/ggs/trail/local_trail_1/ta
    discardfile /app/ggs/trail/local_trail_1/SRISRI_discard.txt
    GETUPDATEBEFORES
    TABLE HR.*;
    Bounded Recovery Parameter:
    BRINTERVAL = 4HOURS
    BRDIR = /app/ggs
    CACHEMGR virtual memory values (may have been adjusted)
    CACHEBUFFERSIZE: 64K
    CACHESIZE: 8G
    CACHEBUFFERSIZE (soft max): 4M
    CACHEPAGEOUTSIZE (normal): 4M
    PROCESS VM AVAIL FROM OS (min): 16G
    CACHESIZEMAX (strict force to disk): 13.99G
    Database Version:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bi
    PL/SQL Release 10.2.0.1.0 - Production
    CORE 10.2.0.1.0 Production
    TNS for Linux: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    Database Language and Character Set:
    NLS_LANG = "AMERICAN_AMERICA.WE8ISO8859P1"
    NLS_LANGUAGE = "AMERICAN"
    NLS_TERRITORY = "AMERICA"
    NLS_CHARACTERSET = "WE8ISO8859P1"
    2012-02-12 15:08:35 INFO OGG-01513 Positioning to Sequence 55, RBA 11345936.
    2012-02-12 15:08:35 INFO OGG-01516 Positioned to Sequence 55, RBA 11345936, Feb 12, 2012 3:00:14 PM.
    2012-02-12 15:08:35 INFO OGG-01055 Recovery initialization completed for target file /app/ggs/trail/local_trail_1/ta000001, at RBA 1307.
    2012-02-12 15:08:35 INFO OGG-01478 Output file /app/ggs/trail/local_trail_1/ta is using format RELEASE 10.4/11.1.
    2012-02-12 15:08:35 INFO OGG-01026 Rolling over remote file /app/ggs/trail/local_trail_1/ta000001.
    2012-02-12 15:08:35 INFO OGG-01053 Recovery completed for target file /app/ggs/trail/local_trail_1/ta000002, at RBA 1025.
    2012-02-12 15:08:35 INFO OGG-01057 Recovery completed for all targets.
    ** Run Time Messages **
    2012-02-12 15:08:35 INFO OGG-01517 Position of first record processed Sequence 55, RBA 11345936, SCN 0.873188, Feb 12, 2012 3:00:14 PM.
    Wildcard resolved (entry HR.*): TABLE HR.JOBS;
    Using the following key columns for source table HR.JOBS: JOB_ID.
    GGSCI (ggs-1.cu.com) 4>
    GGSCI (ggs-1.cu.com) 1> view report pump_1
    ** Running with the following parameters **
    EXTRACT pump_1
    USERID ogg, PASSWORD ***
    setenv (NLS_LANG="AMERICAN_AMERICA.WE8ISO8859P1")
    Set environment variable (NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1)
    RMTHOST ggs-2.cu.com, MGRPORT 7809
    RMTTRAIL /app/ggs/trail/remote_trail_1/tm
    PASSTHRU
    TABLE HR.*;
    CACHEMGR virtual memory values (may have been adjusted)
    CACHEBUFFERSIZE: 64K
    CACHESIZE: 8G
    CACHEBUFFERSIZE (soft max): 4M
    CACHEPAGEOUTSIZE (normal): 4M
    PROCESS VM AVAIL FROM OS (min): 16G
    CACHESIZEMAX (strict force to disk): 13.99G
    Database Version:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bi
    PL/SQL Release 10.2.0.1.0 - Production
    CORE 10.2.0.1.0 Production
    TNS for Linux: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    Database Language and Character Set:
    NLS_LANG = "AMERICAN_AMERICA.WE8ISO8859P1"
    NLS_LANGUAGE = "AMERICAN"
    NLS_TERRITORY = "AMERICA"
    NLS_CHARACTERSET = "WE8ISO8859P1"
    2012-02-12 15:09:12 INFO OGG-01226 Socket buffer size set to 27985 (flush size 27985).
    2012-02-12 15:09:12 INFO OGG-01055 Recovery initialization completed for target file /app/ggs/trail/remote_trail_1/tm000001, at RBA 1435.
    2012-02-12 15:09:12 INFO OGG-01478 Output file /app/ggs/trail/remote_trail_1/tm is using format RELEASE 10.4/11.1.
    2012-02-12 15:09:12 INFO OGG-01026 Rolling over remote file /app/ggs/trail/remote_trail_1/tm000002.
    2012-02-12 15:09:12 INFO OGG-01053 Recovery completed for target file /app/ggs/trail/remote_trail_1/tm000002, at RBA 1063.
    2012-02-12 15:09:12 INFO OGG-01057 Recovery completed for all targets.
    ** Run Time Messages **
    Opened trail file /app/ggs/trail/local_trail_1/ta000001 at 2012-02-12 15:09:12
    Switching to next trail file /app/ggs/trail/local_trail_1/ta000002 at 2012-02-12 15:09:12 due to EOF, with current RBA 1307
    Opened trail file /app/ggs/trail/local_trail_1/ta000002 at 2012-02-12 15:09:12
    Wildcard resolved (entry HR.*): TABLE HR.JOBS;
    PASSTHRU mapping resolved for source table HR.JOBS
    GGSCI (ggs-1.cu.com) >
    ######################### Target database information ##############################################
    Database Name : THAKUR(10.2.0.1) , Hostname : ggs-1.cu.com(Linux 64 bit)
    Oracle GoldenGate Delivery for Oracle Version 11.1.1.1.1 OGGCORE_11.1.1.1.1_PLATFORMS_110729.1700
    Error log
    ==========================================================
    2012-02-12 15:09:07 INFO OGG-01677 Oracle GoldenGate Collector: Waiting for connection (started dynamically).
    2012-02-12 15:09:07 INFO OGG-01228 Oracle GoldenGate Collector: Timeout in 300 seconds.
    2012-02-12 15:09:12 INFO OGG-01229 Oracle GoldenGate Collector: Connected to 218.186.40.99:47496.
    2012-02-12 15:09:12 INFO OGG-01669 Oracle GoldenGate Collector: Opening /app/ggs/trail/remote_trail_1/tm000001 (byte -1, current EOF 1435).
    2012-02-12 15:09:12 INFO OGG-01670 Oracle GoldenGate Collector: Closing /app/ggs/trail/remote_trail_1/tm000001.
    2012-02-12 15:09:12 INFO OGG-01669 Oracle GoldenGate Collector: Opening /app/ggs/trail/remote_trail_1/tm000001 (byte 1435, current EOF 1435).
    2012-02-12 15:09:12 INFO OGG-01735 Oracle GoldenGate Collector: Synchronizing /app/ggs/trail/remote_trail_1/tm000001 to disk.
    2012-02-12 15:09:12 INFO OGG-01735 Oracle GoldenGate Collector: Synchronizing /app/ggs/trail/remote_trail_1/tm000001 to disk.
    2012-02-12 15:09:12 INFO OGG-01670 Oracle GoldenGate Collector: Closing /app/ggs/trail/remote_trail_1/tm000001.
    2012-02-12 15:09:12 INFO OGG-01669 Oracle GoldenGate Collector: Opening /app/ggs/trail/remote_trail_1/tm000002 (byte -1, current EOF 0).
    [oracle@ggs-2 ggs]$
    GGSCI (ggs-2.cu.com) 1> view report rep_1
    ** Running with the following parameters **
    REPLICAT rep_1
    ASSUMETARGETDEFS
    setenv (NLS_LANG="AMERICAN_AMERICA.WE8ISO8859P1")
    Set environment variable (NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1)
    USERID ogg, PASSWORD ***
    MAP HR.*, TARGET HR.*;
    CACHEMGR virtual memory values (may have been adjusted)
    CACHEBUFFERSIZE: 64K
    CACHESIZE: 512M
    CACHEBUFFERSIZE (soft max): 4M
    CACHEPAGEOUTSIZE (normal): 4M
    PROCESS VM AVAIL FROM OS (min): 1G
    CACHESIZEMAX (strict force to disk): 881M
    Database Version:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bi
    PL/SQL Release 10.2.0.1.0 - Production
    CORE 10.2.0.1.0 Production
    TNS for Linux: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    Database Language and Character Set:
    NLS_LANG = "AMERICAN_AMERICA.WE8ISO8859P1"
    NLS_LANGUAGE = "AMERICAN"
    NLS_TERRITORY = "AMERICA"
    NLS_CHARACTERSET = "WE8ISO8859P1"
    For further information on character set settings, please refer to user manual.
    ** Run Time Messages **
    GGSCI (ggs-2.cu.com) 2>
    Thanks
    Tarun

    Hi,
    I have used the statements below for ADD EXTRACT and ADD REPLICAT.
    ------Extract
    ADD EXTRACT ext_1, TRANLOG, BEGIN NOW
    -------Data Pump
    ADD EXTRACT pump_1, EXTTRAILSOURCE /app/ggs/trail/local_trail_1/ta, BEGIN NOW
    -------Replicat
    ADD REPLICAT rep_1, EXTTRAIL /app/ggs/trail/remote_trail_1/tb, BEGIN NOW, CHECKPOINTTABLE ogg.tarun_chk
    Yes, I have tried the tutorial at the Oracle Learning Library.
    Thanks
    Tarun
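    One thing worth double-checking in the commands above (a general sanity check; the GGSCI commands below are standard): the pump parameter file writes RMTTRAIL /app/ggs/trail/remote_trail_1/tm, while ADD REPLICAT points at /app/ggs/trail/remote_trail_1/tb. A Replicat only processes data if it reads exactly the trail the pump writes. To confirm on each side:
    GGSCI> INFO RMTTRAIL *
    GGSCI> INFO REPLICAT rep_1, DETAIL
    The first (on the source) shows which trails the pump writes; the second (on the target) shows the trail file the Replicat is reading and its checkpoints.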

  • Write Statements are not getting Displayed

    Hi All,
    We are upgrading from 4.7 to ECC. In my program we have a few WRITE statements which are not getting displayed in the output. Those WRITE statements are flagged as errors in the extended check: "Char. strings w/o text elements will not be translated: 'Income Tax Worksheet'".
    Could anyone please help me overcome these errors and get the WRITE statements to display?
    Thanks
    Shashikanth

    Have you mapped the checked and unchecked glyphs to be used?
    Check the doc and blog for check boxes.
    Regards
    tim

  • Vendors are not replicated

    Hi Everyone,
    I am working on an SRM 7.0 - ECC 6.0 landscape with the classic scenario. When I try to replicate vendors with BBPGETVD, only 2 vendors out of 288 get replicated. When I try to replicate them again, I receive a message saying "All backend descriptions are already assigned in the system"; I had seen this message earlier when trying to replicate vendors that are already replicated. So I tried BBPUPDVD to see if I could update and pick up the missing vendors. However, on the second screen I see that only the 2 vendors that have been replicated are considered for update, and the remaining 286 vendors are listed as missing.
    Can you please help me understand why these vendors are not getting replicated? Can it be that they have missing information, so SRM does not consider them for replication?
    Thank you in advance,
    asli

    See Dave's important point in the thread "Error replicating vendors":
    Yes, just make sure that the external range in SRM starts with your lowest vendor number in ECC and ends with the highest possible number.
    There will only be 1 range in SRM but it must cover all 5 ranges in ECC.
    Just make sure your internal range in SRM does not overlap with any of your vendors in ECC or you will have major problems. I like to give SRM a very high range for internal business partners, then you can use most of the numbers for your external ranges to match ECC.
    br
    Muthu

  • Widget accordion states are not changing.

    Widget accordion states cannot be altered individually. HELP!

    Hello,
    I just tried it at my end and it seems to work fine. Please have a look at the video in the link below:
    http://trainingwebcom.worldsecuresystems.com/SachinFTP/2012-09-25_2349.swf
    Could you please try creating a new Accordion panel in a new test website and let me know whether you are facing the same issue.
    Regards,
    Sachin

  • Trigger changes are not committing to the database

    I have 9iAS and 9i DB both on my laptop.
    I am having a problem in which code run from a WHEN-BUTTON-PRESSED trigger is not committing its changes to the database. In the trigger I have:
    1 record insert into table A.
    1 record update to table B.
    1 record insert into table C.
    1 delete from table D.
    None of the data is related.
    I have tried various combinations of the below to get the changes to commit:
    POST;
    COMMIT_FORM;
    Exit_Form(NO_COMMIT, NO_ROLLBACK);
    MESSAGE('Got past COMMIT');
    COMMIT;
    CLEAR_FORM(NO_COMMIT);
    ENTER_QUERY;     
    I am getting varying numbers of "FRM-40508: Oracle Error: Unable to INSERT record" messages. Even so, the form often acts as if the changes had been properly applied, but a separate DB verification shows the changes are not being committed. Most of the time the changes are also reflected in the calling form's queries, but when I exit, all changes are rolled back no matter how many commit statements are in the trigger.
    I have finally gotten the form to do what I want (the 4 steps noted above), but I had to add a FORMS_DDL('COMMIT'); statement, and I still get FRM-40508, though at least the changes now appear in the db.
    Any ideas why it is so much trouble to get the changes to commit? I have spent a ton of hours trying "what ifs" to see what might work. This trigger is the only real "code" in the forms.
    Kim

    Brett -
    You're probably right about the intention, but this is a place where people can come and share styles, ideas, and coding tricks; I don't understand why someone would say that. Additionally, I had a professor, a complete momo, who said that all the time (consequently, his lax attitude toward teaching crippled the IS program where I graduated and will most likely cause it to no longer be available). It's a personal peeve of mine, just to let you know where I was coming from.
    Secondly, the problem I'm having may have to do with what you said, but I can't be sure. To give a better description of my scenario: I created a form that allows the user to load information about an employee by querying an SSN. Most of this information is display-only. Six fields can be updated, and I wrote a DML UPDATE statement inside a WHEN-BUTTON-PRESSED trigger. However, these changes won't be written to the DB because Forms attempts to post my entire data block instead of just executing the specified DML statements. I'm at a loss as to why this happens; for simplicity's sake, I'd welcome ideas on how to suppress this behavior so only my statements are used when updating the DB. If you can help, thank you; if not, thank you for your time.
    Steve
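    A common workaround, offered here only as a sketch: put the six editable fields in a control block (Database Data Block = No) so Forms has nothing of its own to post, and keep the DML explicit. The block, item, and table names below are hypothetical.
    -- WHEN-BUTTON-PRESSED (Forms PL/SQL); :CTRL is a non-database control block
    BEGIN
      UPDATE employees                -- hypothetical table
         SET salary = :ctrl.salary    -- hypothetical items
       WHERE ssn = :ctrl.ssn;
      FORMS_DDL('COMMIT');            -- commits the DML without posting the whole form
    END;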
