GG DML/DDL single-process synchronization: the Replicat process lag keeps waiting for a long time
As the title says, this means updates on the source database are not synchronized to the target in time.
Below is the status information for the target-side Replicat process.
GGSCI (db02) 26> info all
Program     Status      Group       Lag at Chkpt    Time Since Chkpt
MANAGER     RUNNING
EXTRACT     STOPPED     DPE2        00:00:00        27:43:07
EXTRACT     STOPPED     EXT22       00:00:00        27:43:33
REPLICAT    RUNNING     REP1        03:12:01        09:18:47
GGSCI (db02) 27> info rep1
REPLICAT REP1 Last Started 2012-10-18 12:10 Status RUNNING
Checkpoint Lag 03:12:01 (updated 09:18:51 ago)
Log Read Checkpoint File /ggs/dirdat/rt000000
2012-10-19 03:31:33.930735 RBA 95309202
GGSCI (db02) 28> info rep1,detail
REPLICAT REP1 Last Started 2012-10-18 12:10 Status RUNNING
Checkpoint Lag 03:12:01 (updated 09:19:01 ago)
Log Read Checkpoint File /ggs/dirdat/rt000000
2012-10-19 03:31:33.930735 RBA 95309202
Extract Source                    Begin             End
/ggs/dirdat/rt000000              * Initialized *   2012-10-19 03:31
/ggs/dirdat/rt000000              * Initialized *   First Record
Current directory /ggs
Report file /ggs/dirrpt/REP1.rpt
Parameter file /ggs/dirprm/rep1.prm
Checkpoint file /ggs/dirchk/REP1.cpr
Checkpoint table ggs.checkpoint
Process file /ggs/dirpcs/REP1.pcr
Stdout file /ggs/dirout/REP1.out
Error log /ggs/ggserr.log
GGSCI (db02) 29> send rep1,status
Sending STATUS request to REPLICAT REP1 ...
Current status: Processing data
Sequence #: 0
RBA: 99742942
12489 records in current transaction
GGSCI (db02) 30> info rep1,showch
REPLICAT REP1 Last Started 2012-10-18 12:10 Status RUNNING
Checkpoint Lag 03:12:01 (updated 09:19:25 ago)
Log Read Checkpoint File /ggs/dirdat/rt000000
2012-10-19 03:31:33.930735 RBA 95309202
Current Checkpoint Detail:
Read Checkpoint #1
GGS Log Trail
Startup Checkpoint (starting position in the data source):
Sequence #: 0
RBA: 0
Timestamp: Not Available
Extract Trail: /ggs/dirdat/rt
Current Checkpoint (position of last record read in the data source):
Sequence #: 0
RBA: 95309202
Timestamp: 2012-10-19 03:31:33.930735
Extract Trail: /ggs/dirdat/rt
CSN state information:
CRC: F9-3C-30-92
Latest CSN: 216472052
Latest TXN: 10.25.1410526
Latest CSN of finished TXNs: 216472052
Completed TXNs: 10.25.1410526
Header:
Version = 2
Record Source = A
Type = 1
# Input Checkpoints = 1
# Output Checkpoints = 0
File Information:
Block Size = 2048
Max Blocks = 100
Record Length = 2048
Current Offset = 0
Configuration:
Data Source = 0
Transaction Integrity = -1
Task Type = 0
Database Checkpoint:
Checkpoint table = ggs.checkpoint
Key = 3326595731 (0xc647d293)
Create Time = 2012-10-18 11:55:42
Status:
Start Time = 2012-10-18 12:10:20
Last Update Time = 2012-10-19 06:43:35
Stop Status = A
Last Result = 0
SQL> select ADDR,STATUS,START_TIME,NAME,USED_UBLK from v$transaction;
ADDR             STATUS   START_TIME         NAME   USED_UBLK
000000052F0F7800 ACTIVE   10/19/12 06:43:35         319
SQL> select * from v$lock where addr='000000052F0F7800';
ADDR             KADDR            SID   TYPE  ID1     ID2      LMODE  REQUEST  CTIME  BLOCK
000000052F0F7800 000000052F0F7838 1079  TX    655388  1286321  6      0        29768  0
SQL> select sid,sql_id,serial#,username from v$session;
SQL> select SQL_ID,SQL_TEXT from v$sqltext where sql_id='fwt1pvjtrqa56' order by piece;
SQL_ID SQL_TEXT
fwt1pvjtrqa56 UPDATE "TEST"."TEST_HISTORY" SET "COMMUNITY" = :a0,"S
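The queries above can be combined to map the long-running transaction straight to its session and current SQL in one step (a diagnostic sketch; V$SESSION.TADDR joins to V$TRANSACTION.ADDR):

```sql
-- One row per active transaction, with the owning session and its SQL
SELECT s.sid, s.serial#, s.username, s.sql_id,
       t.start_time, t.used_ublk
FROM   v$transaction t
JOIN   v$session s ON s.taddr = t.addr
WHERE  t.status = 'ACTIVE';
```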
============================================================================
GGSCI (db02) 31> info rep1
REPLICAT rep1 Last Started 2012-10-18 12:10 Status RUNNING
Checkpoint Lag 03:12:01 (updated 09:34:03 ago)
Log Read Checkpoint File /ggs/dirdat/rt000000
2012-10-19 03:31:33.930735 RBA 95309202
GGSCI (db02) 32> send rep1,status
Sending STATUS request to REPLICAT REP1 ...
Current status: Processing data
Sequence #: 0
RBA: 99859919
12817 records in current transaction
Logdump 233 >pos 99859919
Reading forward from RBA 99859919
Logdump 234 >
Logdump 234 >
Logdump 234 >
Logdump 234 >n
Hdr-Ind : E (x45) Partition : . (x04)
UndoFlag : . (x00) BeforeAfter: A (x41)
RecLength : 336 (x0150) IO Time : 2012/10/19 03:31:34.850.740
IOType : 115 (x73) OrigNode : 255 (xff)
TransInd : . (x01) FormatType : R (x52)
SyskeyLen : 0 (x00) Incomplete : . (x00)
AuditRBA : 0 AuditPos : 72232920
Continued : N (x00) RecCount : 1 (x01)
2012/10/19 03:31:34.850.740 GGSPKUpdate Len 336 RBA 99859919
Name: TEST.TEST_HISTORY
After Image: Partition 4 G m
00a7 0000 0004 ffff 0000 0001 000d 0000 0009 e6b0 | ....................
91e7 a791 e8b7 af00 0200 0700 0000 03e5 8f8c 0003 | ....................
0010 0000 000c 3e3d 327c 7c3c 3d31 3030 3030 0004 | ......>=2||<=10000..
000b 0000 0007 7368 676d 3034 3200 0500 1700 0000 | ......shgm042.......
13e5 9bbd e7be 8ee5 ae9d e5b1 b1e5 b882 e58c ba31 | ...................1
0006 0015 0000 3230 3132 2d31 302d 3139 3a30 333a | ......2012-10-19:03:
3332 3a30 3200 0700 0c00 0000 0832 3031 3231 3031 | 32:02........2012101
Before Image Len 169 (x000000a9)
Logdump 235 >n
Hdr-Ind : E (x45) Partition : . (x04)
UndoFlag : . (x00) BeforeAfter: B (x42)
RecLength : 172 (x00ac) IO Time : 2012/10/19 03:31:34.850.740
IOType : 15 (x0f) OrigNode : 255 (xff)
TransInd : . (x01) FormatType : R (x52)
SyskeyLen : 0 (x00) Incomplete : . (x00)
AuditRBA : 965 AuditPos : 72233360
Continued : N (x00) RecCount : 1 (x01)
2012/10/19 03:31:34.850.740 FieldComp Len 172 RBA 99860362
Name: TEST.TEST_HISTORY
Before Image: Partition 4 G m
0000 0004 ffff 0000 0001 0013 0000 000f e5ae 9de9 | ....................
92a2 e6b9 84e6 b5a6 e8b7 af00 0200 0700 0000 03e5 | ....................
8f8c 0003 0010 0000 000c 3e3d 327c 7c3c 3d31 3030 | ..........>=2||<=100
3030 0004 000b 0000 0007 7368 676d 3030 3200 0500 | 00........shgm002...
1600 0000 12e5 9bbd e7be 8ee5 ae9d e5b1 b1e8 bf91 | ....................
e983 8a00 0600 1500 0032 3031 322d 3130 2d31 393a | .........2012-10-19:
3033 3a33 323a 3032 0007 000c 0000 0008 3230 3132 | 03:32:02........2012
Column 0 (x0000), Len 4 (x0004)
SQL> select ADDR,STATUS,START_TIME,NAME,USED_UBLK from v$transaction;
ADDR             STATUS   START_TIME         NAME   USED_UBLK
000000052F0F7800 ACTIVE   10/19/12 06:43:35         319
Because the lag reported by INFO is checkpoint-based, a long-running transaction (LRT) may stay uncommitted for a long time. Such a transaction becomes the oldest open transaction, and because it never commits, its changes cannot be written out. As a result, the point at which Extract/Pump/Replicat actually finishes processing the large transaction falls far behind the point at which that transaction actually committed. For Replicat, the MAXTRANSOPS parameter can be used to split large transactions and reduce the lag.
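A minimal sketch of how MAXTRANSOPS might look in the Replicat parameter file (the group, login, and schema names here are illustrative, not taken from the output above):

```text
REPLICAT rep1
USERID ggs, PASSWORD ggs
ASSUMETARGETDEFS
-- Split any source transaction larger than 10,000 operations into
-- smaller target transactions, so Replicat can commit and advance
-- its checkpoint without waiting for the whole LRT to finish.
MAXTRANSOPS 10000
MAP test.*, TARGET test.*;
```

Note that MAXTRANSOPS trades target-side atomicity for reduced lag: the target commits in pieces that never existed as separate transactions on the source.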
Similar Messages
-
Hi,
I have a very large Replicat lag on the target DB, and the REPOA lag keeps growing...
GGSCI > info all
Program     Status      Group       Lag          Time Since Chkpt
MANAGER     RUNNING
REPLICAT    RUNNING     REPOA       34:35:33     00:45:30
REPLICAT    RUNNING     REPOB       00:00:00     00:00:09
REPLICAT RUNNING REPOB 00:00:00 00:00:09
In the source:
GGSCI > info all
Program     Status      Group       Lag          Time Since Chkpt
MANAGER     RUNNING
EXTRACT     RUNNING     DPEOA       00:00:00     00:00:08
EXTRACT     RUNNING     DPEOB       00:00:00     00:00:07
EXTRACT     RUNNING     EXTOA       00:00:00     00:00:02
EXTRACT     RUNNING     EXTOB       00:00:00     00:00:00
The Replicat REPOA's parameter file:
replicat repoa
SETENV (ORACLE_SID = "xzjbtjcx")
SETENV (ORACLE_HOME = "/oracle/app/oracle/product/10.2.0")
SETENV (NLS_LANG = "AMERICAN_AMERICA.ZHS16GBK")
userid ogg,password ogg
reportcount every 30 minutes,rate
reperror default,abend
transactiontimeout 5 s
assumetargetdefs
discardfile /oradata/ogg/dirrpt/repoa.dsc,append,megabytes 5000
gettruncates
allownoopupdates
DDL INCLUDE MAPPED
DDLOPTIONS REPORT
DDLERROR DEFAULT IGNORE RETRYOP
SQLEXEC "ALTER SESSION SET COMMIT_WRITE = NOWAIT"
BATCHSQL
map JSSIMIS_XZ.*, target JSSIMIS_XZ.*;
Please help me: how can I overcome the Replicat lag?
Thanks.
Edited by: user7418832 on 2012-2-21 7:18 PM -
Anyone know of a good OLAP DML/DDL tutorial? (and NOT reference guide)
In addition to Analytic Workspace, I am trying to learn how to build cubes using just the OLAP DML / DDL language.
Oracle and others have published a great deal of reference material on the OLAP DML/DDL language, but I can't find any real tutorials that guide me logically through the process of creating cubes and everything that goes with them (defining dimensions, hierarchies, measures, calculations, etc.). All the reference material is spread out in bits and pieces and is difficult to learn from efficiently. Does anyone have any good OLAP DML/DDL tutorial links they could recommend? Thanks.
This is a very difficult task. The OLAP DML/DDL only supports basic multidimensional variables, dimensions (very primitive), and relations. The externally visible cubes and dimensions are a construct of many component objects in the AW (a dimension has many primitive dimensions and relations put together to create the higher-level DIMENSION that goes with a cube; a cube has multiple variables and other pieces that get put together), plus a bunch of metadata objects in the Oracle dictionary.
You really don't want to try to create your own and expect them to interact with the OLAPI interfaces. It is possible to construct your own version of things, but you'd also have to do your own view generation, etc. There are applications out there that are created from scratch, but they are completely from scratch (including whatever interactions they have via SQL). Whatever you create will not work with any of the Oracle-provided interfaces, nor the recently-announced Simba MDX package.
Jim -
Any issue if we writing DML,DDL and TCL commands in stored functions.
Hi,
Is there any issue with writing DML, DDL, and TCL commands in a stored function with the help of PRAGMA AUTONOMOUS_TRANSACTION?
Hi,
Yes, of course. Using DML statements inside a function via PRAGMA AUTONOMOUS_TRANSACTION is not recommended. Autonomous transactions are best reserved for error-logging purposes only; when used in a disorganized way they can lead to deadlocks, and you will have problems when examining the trace files.
Thanks,
Shankar -
Reg. Extracting DML/ DDL Stmnts.
hi ,
I don't have access to the DB, so I apologise straight away for not trying out the code myself to interpret and scrutinise it.
The code is meant to extract DML/DDL statements from ALL_PROCEDURES, and I have tried a sample (SELECT statement) of it. Please let me know if it is wrong or working fine.
create or replace procedure sp_identify_stmnts (i_table in varchar2)
is
i_text varchar2(32767);
begin
select REGEXP_SUBSTR(replace(text,chr(10)),'select+;$',1,1000,'i') into i_text from ALL_PROCEDURES where
PROCEDURE_NAME=i_table ;
dbms_output.put_line('text is ' || i_text);
end;
/
Oracle 10g Release 2
It won't work, because ALL_PROCEDURES has no TEXT column containing code.
I think you have to use something like this:
select text
from all_source
where type='PROCEDURE'
and owner=:proc_owner
and name=:proc_name
order by line -- code is split across multiple lines
; -
I need to create DML/DDL handlers. I have searched a lot, but have yet to find such material to learn from. I need beginner-to-advanced material with examples. Can anyone help me? I have thousands of DEQUEUE MESSAGES to handle.
Thanks; no error, the code works fine.
Mohd.hamza :) -
DDL Replicate Oracle 10gR2 to SQL Server 2005
Dear Gurus,
I have configured OGG and it had succeed to have DML transaction from Oracle to SQL Server.
Now I'm configuring DDL replication, but I have a problem when I try to start the Replicat program on SQL Server 2005: the status is ABENDED.
The log shows the following:
** Running with the following parameters **
REPLICAT rora01
sourcedb dsn_ncli, userid ggs_user, password ********
2012-04-17 18:02:20 INFO OGG-01552 Connection String: provider=SQLNCLI;initial catalog=gate;data source=BT-70CF7B38342B;user id=ggs_user;password= ********
HANDLECOLLISIONS
ASSUMETARGETDEFS
DISCARDFILE ./dirrpt/rora01.dsc, PURGE
DDL INCLUDE MAPPED
DDLOPTIONS REPORT
MAP hr.*, TARGET ggs_schema.*;
2012-04-17 18:02:20 ERROR OGG-00899 Table GGS_SCHEMA.GGS_SETUP does not exist.
FYI: in the hr schema on the source database, GGS_SETUP was created after I installed the GoldenGate DDL objects in the source DB. GGS_SCHEMA is the schema that Replicat applies to in the target database.
Please help: what should I add to the Replicat parameters to make this DDL replication work?
Regards,
Andes
Edited by: 915856 on Apr 17, 2012 10:16 PM
Hi Steve,
In the installation guide there is no explicit line explaining that "DDL replication is not supported for SQL Server",
but now I know; thanks for that.
I have another question: so DDL replication is only supported between the same database types, like Oracle to Oracle, SQL Server to SQL Server, etc.?
Regards, -
Best parameters setting for DML & DDL?
hi,
I am implementing DDL & DML replication using OGG. I just wanted to know what the best parameters would be for the Extract and data pump (source) and the Replicat process (target),
given that my source and target have the same structure
and I have to replicate 277 tables.
thanks.
Regards,
AMSII
Edited by: AMSI on Apr 8, 2013 8:12 PM
To improve bulk DML operations via GoldenGate replication:
Using BATCHSQL
In default mode the Replicat process will apply SQL to the target database, one statement at a time. This often causes a performance bottleneck where the Replicat process cannot apply the changes quickly enough, compared to the rate at which the Extract process delivers the data. Despite configuring additional Replicat processes, performance may still be a problem.
Configuration Options
GoldenGate has addressed this issue through the use of the BATCHSQL Replicat configuration parameter. As the name implies, BATCHSQL organizes similar SQLs into batches and applies them all at once. The batches are assembled into arrays in a memory queue on the target database server, ready to be applied.
Similar SQL statements are those that perform a specific operation type (insert, update, or delete) against the same target table with the same column list. For example, multiple inserts into table A would be in a different batch from inserts into table B, as would updates on table A or B. Furthermore, referential constraints are honored by GoldenGate: dependent transactions in different batches are executed first, which dictates the order in which batches are executed. Despite the referential integrity constraints, BATCHSQL can increase Replicat performance significantly.
BATCHSQL BATCHESPERQUEUE 100, OPSPERBATCH 2000
Regards
Rajabaskar -
Tool to do syntax check on DML/DDL
Is there any tool in Toad, SQL*Plus, or any other IDE to do a syntax check on DDL/DML statements without executing them?
In my Toad version 7.6 there is a built-in formatter, Formatter Plus v4.8.0. It will format any given SQL statement or PL/SQL code.
If your statement is syntactically wrong, the formatter will also throw an error.
You just have to type the code and right-click -> Formatting Tools -> Format Code, and your code gets formatted neatly!
Thanks,
Jithendra -
Undo tablespace/redo logs/ DML /DDL/ truncate/ delete
1st scenario:Delete
10 rows in a table
Delete 5 rows
5 rows in the table
savepoint sp1
Delete 3 rows
2 rows in the table
rollback to savepoint sp1
5 rows in the table
So all values affected by DML are recorded in the undo tablespace and remain there until a commit is issued, and the redo logs also make note of DML statements. Am I right?
2nd scenario-truncate
10 rows in table
savepoint sp1
truncate
0 rows in table
rollback to savepoint sp1 gives this error
ORA-01086: savepoint 'SP2' never established
So is truncate (are all DDL statements) recorded in the undo tablespace, and is it recorded in the redo logs?
I know that DML does not autocommit and DDL does autocommit.
When you issue a delete, is the data really deleted?
When you issue a delete, there is a before image of the data recorded to the undo area. (And that undo information itself is forwarded to the redo.) Then the data is actually deleted from the 'current' block as represented in memory.
Therefore, the data is actually deleted, but can be recovered by rolling back until a commit occurs.
It can also be recovered using flashback techniques which simply rebuild from the undo.
When you issue a truncate, is the data really deleted, or is the high-water-mark pointer for a data block lost?
The data is not deleted row by row, therefore there is no undo record of the data 'removal' to be rolled back.
The high-water-mark pointer is reset. Its old value is in the undo, but TRUNCATE is a DDL command and is preceded and followed by an implicit commit, voiding any potential rollback request.
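The difference described above can be verified in a short SQL*Plus session (a sketch; the table name is arbitrary):

```sql
CREATE TABLE t (n NUMBER);
INSERT INTO t VALUES (1);
COMMIT;

DELETE FROM t;              -- before image written to undo
ROLLBACK;                   -- the row comes back
SELECT COUNT(*) FROM t;     -- returns 1

TRUNCATE TABLE t;           -- DDL: implicit commit before and after
ROLLBACK;                   -- nothing to roll back
SELECT COUNT(*) FROM t;     -- returns 0
```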
I mean, you can always roll back a delete but never roll back a truncate?
Correct - using standard techniques, deletes can be rolled back and truncates cannot. -
Hi,
Can I find the last DML or DDL time of a specific table?
desc dba_objects
Name                 Null?    Type
OWNER                         VARCHAR2(30)
OBJECT_NAME                   VARCHAR2(128)
SUBOBJECT_NAME                VARCHAR2(30)
OBJECT_ID                     NUMBER
DATA_OBJECT_ID                NUMBER
OBJECT_TYPE                   VARCHAR2(18)
CREATED                       DATE
---> LAST_DDL_TIME            DATE
TIMESTAMP                     VARCHAR2(19)
STATUS                        VARCHAR2(7)
TEMPORARY                     VARCHAR2(1)
GENERATED                     VARCHAR2(1)
SECONDARY                     VARCHAR2(1)
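The last DDL time of a specific table can be read straight from that column (the schema and table names below are examples). Note that DBA_OBJECTS tracks DDL only; finding the last DML time would need something like ORA_ROWSCN or auditing instead:

```sql
SELECT owner, object_name, last_ddl_time
FROM   dba_objects
WHERE  owner = 'SCOTT'          -- example schema
AND    object_name = 'EMP'      -- example table
AND    object_type = 'TABLE';
```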
Is there any chance you will ever resolve any doc question on your own by not misusing this forum structurally as a free documentation lookup service?
Why can't you do anything on your own?
Why are you so lazy?
Sybrand Bakker
Senior Oracle DBA -
Hello Expert,
In my application, when users press "Run" it executes a DML or DDL statement. What I am looking for is to display the message that comes back after executing that statement - the same message that appears when you run the DML or DDL statement from the SQL command prompt. For example:
SQL> Create table xyx (a varchar2(1))
Table created. --- {color:#ff0000}"I want to display this message on my application"{color}
SQL> Drop table xyx
Table dropped. --- {color:#ff0000}"I want to display this message on my application"{color}
SQL> update tb_HR set firstname= 'SAGAR' where AGSNO = 2461059
0 row(s) updated. --- {color:#ff0000}"I want to display this message on my application"{color}
Hopefully my example will explain what I am looking for.
Sagar
Sagar,
Your submit process can use the success message to display whatever you want. Create hidden items or application-level variables to hold the name of the object you are working with, and form the message using them.
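If the submit process is PL/SQL, the row count for such a message is available in SQL%ROWCOUNT (a sketch; the table, column, and bind names here are hypothetical):

```sql
BEGIN
   UPDATE tb_hr SET firstname = 'SAGAR' WHERE agsno = :P1_AGSNO;
   -- SQL%ROWCOUNT holds the number of rows touched by the last DML
   :P1_MESSAGE := SQL%ROWCOUNT || ' row(s) updated.';
END;
```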
Keep Smiling,
Bob R -
DML question about LAG function
Hello,
I am trying to get a month-to-date number on a value that is stored as YTD in the cube. That is, for today's month-to-date number, I want to subtract today's value, from last month's value. I am trying to do this with the following statement:
data - lag(chgdims(data limit lmt(time to parents using time_parentrel)),1,time)
I'm pretty new to DML, but I know that this is clearly not the correct formula. Does anyone have any ideas on how to do this?
Thanks
Dear Fred-san
Thank you very much for your support on this.
But, may I double check about what you mentioned above?
So, what you were saying is that if a user executes a query with
the function module (RFC_READ_TABLE) under the following conditions, he can access
the HR data even when he does not have the authorizations for HR transactions?
<Conditions>
1. the user has the authorization for HR database tables themselves
2. RFC_READ_TABLE is called to retrieve the data from HR database
<example>
Data: LF_HR_TABLE like DD02L-TABNAME value 'PA0000'.
CALL FUNCTION 'RFC_READ_TABLE'
EXPORTING
query_table = LF_HR_TABLE
TABLES
OPTIONS =
fields =
data = .
But then, as long as we call this function module for non-critical tables such as
VBAP (sales orders) or EKKO (purchase orders) within our query, it wouldn't seem to be
such a security risk to use RFC_READ_TABLE...
Besides, each query (infoset query) has got the concept of user groups, which limits
the access to the queries within the user group.
※If someone does not belong to the user group, he cannot execute the queries within that
user group, etc
So, my feeling is that even infoset queries do have an authorization concept...
Would you give me your thought on this?
I also thank you for your information for SCU0.
That is an interesting transaction
Kind regards,
Takashi -
1.Can any one write a select statement to generate sequencial number ?
2.How to print the duplicate records from a table ?
3.What is the difference between a WHERE Clause and HAVING Clause?
4.What is the difference between Key-Next Trigger and When-Validate Trigger?
5.What is a Transaction in Oracle?
6.What are Database triggers and Stored Procedures?
7. What is the difference between Function,Procedures,Packages?
Hope it is enough for the day... Will write more questions upon receiving answers to the above.
Guruprasad T.N.
A1: Check CREATE SEQUENCE.
A2: In a form, when we are in the detail section,
we can use function key F4 to copy the previous record as-is to the next record.
A3: The WHERE clause restricts individual rows,
while HAVING is used with group functions to restrict groups.
Q4: What is the difference between a Key-Next trigger and a When-Validate trigger?
Ans: Key-Next-Item can be used for developer-required routines, while When-Validate-Item can be used to check the validation of the record.
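For the record, questions 1 and 2 have standard SQL answers (a sketch; emp/ename are the classic demo names, not from this thread):

```sql
-- Q1: generate sequential numbers without a base table
SELECT LEVEL AS seq_no FROM dual CONNECT BY LEVEL <= 10;

-- Q2: print the duplicate values in a table
SELECT ename, COUNT(*) AS cnt
FROM   emp
GROUP  BY ename
HAVING COUNT(*) > 1;
```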
I provided you the answers, but... this is an important piece of work.
Remaining next time...
Regards.
M.Azam Rasheed
([email protected])
MCS,OCP
null -
How to replicate missing ddl / dml to target after intial loading
Hi GG Experts,
I'm new to GG. My test setup is Oracle-to-Oracle (same version, 10.2.0.4) replication. I did the initial load between source and target. There was a problem: my REPLICAT process kept getting abended, so any DML/DDL I fired was not transferred to the target. Then I put HANDLECOLLISIONS and APPLYNOOPUPDATES in the REPLICAT parameters, after which my replication started, but only from that moment on.
Now my question: how should I replicate to the target DB the missing DDL/DML that I fired on the source DB after the initial load and before my REPLICAT process successfully started?
Is there any special consideration I need to take for DDL replication?
Your help would be highly appreciated.
Regards,
Manish
Hi,
As both the source and target tables are currently out of sync, I would suggest re-synchronizing the target using the procedure below and then restarting online replication.
The export could be performed directly on the production system by using the FLASHBACK_SCN option. Then the FLASHBACK_SCN used for the export would then be the CSN value used for the Replicat. Note that you have to take the entire export using the same value for FLASHBACK_SCN for your entire export, even if you use multiple export files (e.g. you run multiple sessions in parallel, or in the case of data pump export, you use Oracle's parallelism).
Below are sample steps that you could use to instantiate an Oracle target system:
a.Switch the database to archivelog mode:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
b. Enable minimal supplemental logging:
SQL> alter database add supplemental log data;
c. Prepare the database to support ddl replication
--Turn off recyclebin for the database . . .
SQL> alter system set recyclebin=off scope=both;
d.Create schema for ddl support replication & create a OGG user and provide necesaary user privileges
GGSCI> edit params ./GLOBALS
GGSCHEMA ogg
e.Run scripts for creating all necessary objects for support ddl replication:
SQL> @$ogg/marker_setup.sql
SQL> @$ogg/ddl_setup.sql
SQL> @$ogg/role_setup.sql
SQL> grant GGS_GGSUSER_ROLE to ogg;
SQL> @$ogg/ddl_enable.sql
GGSCI> dblogin userid ogg password ogg@321!
GGSCI> add trandata schema1.*
f. Create the extract group on the source side:
GGSCI>ADD EXTRACT EXT1, TRANLOG, BEGIN NOW, THREADS 2
GGSCI>ADD EXTTRAIL ./dirdat/P1, EXTRACT EXT1, MEGABYTES 100
GGSCI> edit params EXT1
EXTRACT EXT1
SETENV (ORACLE_HOME = "/oracle/orabin/product/10.2.4")
SETENV (ORACLE_SID = "prd1")
SETENV (NLS_LANG = "AMERICAN_AMERICA.WE8ISO8859P1")
USERID ogg@prd1, PASSWORD ogg@321!
TRANLOGOPTIONS ASMUSER sys@+ASM1, ASMPASSWORD asm321
RMTHOST dr, MGRPORT 7810
RMTTRAIL ./dirdat/P1
DISCARDFILE ./dirrpt/EXT1.DSC, PURGE, MEGABYTES 100
DDL INCLUDE MAPPED OBJNAME "schema1.*"
DDLOPTIONS ADDTRANDATA RETRYOP RETRYDELAY 60 MAXRETRIES 10, REPORT
TABLE schema1.*;
g.Create replicat group on target:
GGSCI> dblogin userid ogg password ogg@321!
GGSCI> add checkpointtable ogg.checkpoint
GGSCI>ADD REPLICAT REP1, RMTTRAIL ./dirdat/P1, checkpointtable ogg.checkpoint
GGSCI> edit params REP1
REPLICAT REP1
SETENV (ORACLE_HOME = "/oracle/orabin/product/10.2.4")
SETENV (ORACLE_SID = "dr")
SETENV (NLS_LANG = "AMERICAN_AMERICA.WE8ISO8859P1")
ASSUMETARGETDEFS
USERID ogg@DR, PASSWORD ogg@321!
DISCARDFILE ./dirrpt/REP1.DSC, append, megabytes 100
DDL INCLUDE MAPPED OBJNAME "schema1.*"
MAP schema1.*, TARGET schema1.*;
h.Create a database directory:
SQLPLUS> create directory dumpdir as '<some directory>' ;
i.Get the current SCN on the source database:
SQLPLUS> select current_scn from v$database ;
28318029
j. Run the export using the flashback SCN you obtained in the previous step. The following example shows running the expdp utility at a Degree Of Parallelism (DOP) of 4. If you have sufficient system resources (CPU,memory and IO) then running at a higher DOP will decrease the amount of time it takes to take the export (up to 4x for a DOP of 4). Note that expdp uses Oracle Database parallel execution settings (e.g.parallel_max_servers) which have to be set appropriately in order to take advantage of parallelism. Other processes running in parallel may be competing for those resources. See the Oracle Documentation for more details.
expdp directory=dumpdir full=y parallel=4 dumpfile=ora102_%u.dmp flashback_scn=28318029
Username: sys as sysdba
Password:
Note: The export log needs to be checked for any errors.
k. Start an import using impdp into the target database when the export in step j is complete.
l. GGSCI> start replicat <name>, aftercsn <value returned from step i>
By following the above procedure, the resynchronisation of source and target can be done, and online change synchronization can be started without any data integrity issues. Hope this information helps.
Thanks & Regards
SK