Data Streams replication

Hi Gurus,
Just thought of checking with you all for a possible solution to a problem.
The problem is with Streams replication.
The client has come back stating that data is not replicating to the target database.
Could someone please suggest possible factors for this failure?
I have checked that the db links are working fine.
Apart from this, any suggestion will be highly appreciated.
Thanks in advance.

Not much information here, so let's gather whatever we can.
Can you post the results of the following queries?
(Please enclose the response in [code] and [/code] tags,
otherwise we simply cannot read it):
set lines 190 pages 66 feed off pause off verify off
col rsn format A28 head "Rule Set name"
col rn format A30 head "Rule name"
col rt format A64 head "Rule text"
COLUMN DICTIONARY_BEGIN HEADING 'Dictionary|Build|Begin' FORMAT A10
COLUMN DICTIONARY_END HEADING 'Dictionary|Build|End' FORMAT A10
col CHECKPOINT_RETENTION_TIME head "Checkpoint|Retention|time" justify c
col LAST_ENQUEUED_SCN for 999999999999 head "Last scn|enqueued" justify c
col las format 999999999999 head "Last remote|confirmed|scn Applied" justify c
col REQUIRED_CHECKPOINT_SCN for 999999999999 head "Checkpoint|Require scn" justify c
col nam format A31 head 'Queue Owner and Name'
col table_name format A30 head 'table Name'
col queue_owner format A20 head 'Queue owner'
col table_owner format A20 head 'table Owner'
col rsname format A34 head 'Rule set name'
col cap format A22 head 'Capture name'
col ti format A22 head 'Date'
col lct format A18 head 'Last|Capture time' justify c
col cmct format A18 head 'Capture|Create time' justify c
col emct format A18 head 'Last enqueued|Message creation|Time' justify c
col ltme format A18 head 'Last message|Enqueue time' justify c
col ect format 999999999 head 'Elapsed|capture|Time' justify c
col eet format 9999999 head 'Elapsed|Enqueue|Time' justify c
col elt format 9999999 head 'Elapsed|LCR|Time' justify c
col tme format 999999999999 head 'Total|Message|Enqueued' justify c
col tmc format 999999999999 head 'Total|Message|Captured' justify c
col scn format 999999999999 head 'Scn' justify c
col emn format 999999999999 head 'Enqueued|Message|Number' justify c
col cmn format 999999999999 head 'Captured|Message|Number' justify c
col lcs format 999999999999 head 'Last scn|Scanned' justify c
col AVAILABLE_MESSAGE_NUMBER format 999999999999 head 'Last system| scn'  justify c
col capture_user format A20 head 'Capture user'
col ncs format 999999999999 head 'Captured|Start scn' justify c
col capture_type format A10 head 'Capture |Type'
col RULE_SET_NAME format a15 head "Rule set Name"
col NEGATIVE_RULE_SET_NAME format a15 head "Neg rule set"
col status format A8 head 'Status'
-- For each table in APPLY site, given by this query
col SOURCE_DATABASE format a30
set linesize 150
select distinct SOURCE_DATABASE,
       source_object_owner||'.'||source_object_name own_obj,
       SOURCE_OBJECT_TYPE objt, instantiation_scn, IGNORE_SCN,
       apply_database_link lnk
from  DBA_APPLY_INSTANTIATED_OBJECTS order by 1,2;
-- then, for each table that needs re-instantiation, run:
execute DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(  source_object_name=> 'owner.table', -
           source_database_name => 'source_database' ,  instantiation_scn => NULL );
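-- (A sketch of the opposite case: to set a fresh instantiation SCN instead
-- of clearing it, one common pattern is to read the current SCN from the
-- source over a db link; the link name below is a placeholder.)
declare
  iscn number;
begin
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@source_db;
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name   => 'owner.table',
    source_database_name => 'source_database',
    instantiation_scn    => iscn);
end;
/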
-- List instantiation objects at source
select TABLE_OWNER, TABLE_NAME,SCN, to_char(TIMESTAMP,'DD-MM-YYYY HH24:MI:SS') ti
from dba_capture_prepared_tables  order by table_owner;
col LOGMINER_ID head 'Log|ID'  for 999
select  LOGMINER_ID, CAPTURE_USER,  start_scn ncs,  to_char(STATUS_CHANGE_TIME,'DD-MM HH24:MI:SS') change_time
    ,CAPTURE_TYPE,RULE_SET_NAME, negative_rule_set_name , status from dba_capture
order by logminer_id;
set lines 190
col rsname format a22 head 'Rule set name'
col delay_scn head 'Delay|Scanned' justify c
col delay2 head 'Delay|Enq-Applied' justify c
col state format a24
col process_name format a8 head 'Process|Name' justify c
col LATENCY_SECONDS head 'Lat(s)'
col total_messages_captured head 'total msg|Captured'
col total_messages_enqueued head 'total msg|Enqueue'
col ENQUEUE_MESG_TIME format a17 head 'Row creation|initial time'
col CAPTURE_TIME head 'Capture at'
select a.logminer_id , a.CAPTURE_NAME cap, queue_name , AVAILABLE_MESSAGE_NUMBER, CAPTURE_MESSAGE_NUMBER lcs,
      AVAILABLE_MESSAGE_NUMBER-CAPTURE_MESSAGE_NUMBER delay_scn,
      last_enqueued_scn , applied_scn las , last_enqueued_scn-applied_scn delay2
      from dba_capture a, v$streams_capture b where a.capture_name = b.capture_name (+)
order by logminer_id;
SELECT c.logminer_id,
         SUBSTR(s.program,INSTR(s.program,'(')+1,4) PROCESS_NAME,
         c.sid,
         c.serial#,
         c.state,
         to_char(c.capture_time, 'HH24:MI:SS MM/DD/YY') CAPTURE_TIME,
         to_char(c.enqueue_message_create_time,'HH24:MI:SS MM/DD/YY') ENQUEUE_MESG_TIME ,
        (SYSDATE-c.capture_message_create_time)*86400 LATENCY_SECONDS,
        c.total_messages_captured,
        c.total_messages_enqueued
   FROM V$STREAMS_CAPTURE c, V$SESSION s
   WHERE c.SID = s.SID
  AND c.SERIAL# = s.SERIAL#
order by logminer_id ;
-- Which is the lowest required archive?
-- This query assumes you have only one logminer_id; otherwise you must add ' and session# = <id_nn> '
set serveroutput on
DECLARE
hScn number := 0;
lScn number := 0;
sScn number;
ascn number;
alog varchar2(1000);
begin
  select min(start_scn), min(applied_scn) into sScn, ascn
    from dba_capture ;
  DBMS_OUTPUT.ENABLE(2000);
  for cr in (select distinct(a.ckpt_scn)
             from system.logmnr_restart_ckpt$ a
             where a.ckpt_scn <= ascn and a.valid = 1
               and exists (select * from system.logmnr_log$ l
                   where a.ckpt_scn between l.first_change# and
                     l.next_change#)
              order by a.ckpt_scn desc)
  loop
    if (hScn = 0) then
       hScn := cr.ckpt_scn;
    else
       lScn := cr.ckpt_scn;
       exit;
    end if;
  end loop;
  if lScn = 0 then
    lScn := sScn;
  end if;
   -- select min(name) into alog from v$archived_log where lScn between first_change# and next_change#;
  -- dbms_output.put_line('Capture will restart from SCN ' || lScn ||' in log '||alog);
    dbms_output.put_line('Capture will restart from SCN ' || lScn ||' in the following file:');
   for cr in (select name, first_time  , SEQUENCE#
               from DBA_REGISTERED_ARCHIVED_LOG
               where lScn between first_scn and next_scn order by thread#)
  loop
     dbms_output.put_line(to_char(cr.SEQUENCE#)|| ' ' ||cr.name||' ('||cr.first_time||')');
  end loop;
end;
/
-- List all archives
prompt If 'Name' is empty then the archive is not on disk anymore
prompt
set linesize 150 pagesize 0 heading on embedded on
col name          form A55 head 'Name' justify l
col st    form A14 head 'Start' justify l
col end    form A14 head 'End' justify l
col NEXT_CHANGE#   form 9999999999999 head 'Next Change' justify c
col FIRST_CHANGE#  form 9999999999999 head 'First Change' justify c
col SEQUENCE#     form 999999 head 'Logseq' justify c
select thread#, SEQUENCE# , to_char(FIRST_TIME,'MM-DD HH24:MI:SS') st,
       to_char(next_time,'MM-DD HH24:MI:SS') End,FIRST_CHANGE#,
       NEXT_CHANGE#, NAME name
        from ( select thread#, SEQUENCE# , FIRST_TIME, next_time,FIRST_CHANGE#,
                 NEXT_CHANGE#, NAME name
                 from v$archived_log  order by first_time desc  )
        where rownum <= 30;
-- APPLY status
col apply_tag format a8 head 'Apply| Tag'
col QUEUE_NAME format a24
col DDL_HANDLER format a20
col MESSAGE_HANDLER format a20
col NEGATIVE_RULE_SET_NAME format a20 head 'Negative|rule set'
col apply_user format a20
col uappn format A30 head "Apply name"
col queue_name format A30 head "Queue name"
col apply_captured format A14 head "Type of|Applied Events" justify c
col rsn format A24 head "Rule Set name"
col sts format A8 head "Apply|Process|Status"
set linesize 150
  select apply_name uappn, queue_owner, DECODE(APPLY_CAPTURED, 'YES', 'Captured', 'NO',  'User-Enqueued') APPLY_CAPTURED,
       RULE_SET_NAME rsn , apply_tag, STATUS sts  from dba_apply;
  select QUEUE_NAME,DDL_HANDLER,MESSAGE_HANDLER, NEGATIVE_RULE_SET_NAME, APPLY_USER, ERROR_NUMBER,
         to_char(STATUS_CHANGE_TIME,'DD-MM-YYYY HH24:MI:SS')STATUS_CHANGE_TIME
  from dba_apply ;
set head off
select  ERROR_MESSAGE from  dba_apply;
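-- A possible extra check: errored transactions sit in the apply error queue
-- and often explain why rows never reach the target tables (sketch; column
-- widths below are guesses).
set head on
col error_message format a60 head 'Apply error message'
select apply_name, source_database, local_transaction_id,
       error_number, message_count, error_message
from   dba_apply_error
order  by source_commit_scn;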
-- propagation status
set head on
col rsn format A28 head "Rule Set name"
col rn format A30 head "Rule name"
col rt format A64 head "Rule text"
col d_dblk format A40 head 'Destination dblink'
col nams format A41 head 'Source queue'
col namd format A66 head 'Remote queue'
col prop format A40 head 'Propagation name '
col rsname format A20 head 'Rule set name'
COLUMN TOTAL_TIME HEADING 'Total Time Executing|in Seconds' FORMAT 999999
COLUMN TOTAL_NUMBER HEADING 'Total Events Propagated' FORMAT 999999999
COLUMN TOTAL_BYTES HEADING 'Total MB| Propagated' FORMAT 9999999999
COL PROPAGATION_NAME format a26
COL SOURCE_QUEUE_NAME format a34 head "Source| queue name" justify c
COL DESTINATION_QUEUE_NAME format a24 head "Destination| queue name" justify c
col QUEUE_TO_QUEUE format a9 head "Queue to| Queue"
col RULE_SET_NAME format a18
set linesize 125
prompt
set lines 190
select PROPAGATION_NAME prop,  RULE_SET_NAME rsname , nvl(DESTINATION_DBLINK,'Local to db') d_dblk,NEGATIVE_RULE_SET_NAME
              from dba_propagation ;
  select SOURCE_QUEUE_OWNER||'.'|| SOURCE_QUEUE_NAME nams , DESTINATION_QUEUE_OWNER||'.'|| DESTINATION_QUEUE_NAME||
          decode( DESTINATION_DBLINK,null,'','@'|| DESTINATION_DBLINK) namd, status , QUEUE_TO_QUEUE
              from dba_propagation ;
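-- Another propagation check worth running (a sketch): AQ disables a
-- propagation schedule after repeated consecutive failures (16 by default),
-- which stops replication even when the db link itself tests fine.
col last_error_msg format a60 head 'Last propagation error'
select schema, qname, failures, schedule_disabled,
       to_char(last_error_date,'DD-MM-YYYY HH24:MI:SS') last_err,
       last_error_msg
from   dba_queue_schedules
where  failures > 0;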
-- Archive numbers
set linesize 150 pagesize 0 heading on embedded on
col name          form A55 head 'Name' justify l
col st    form A14 head 'Start' justify l
col end    form A14 head 'End' justify l
col NEXT_CHANGE#   form 9999999999999 head 'Next Change' justify c
col FIRST_CHANGE#  form 9999999999999 head 'First Change' justify c
col SEQUENCE#     form 999999 head 'Logseq' justify c
select thread#, SEQUENCE# , to_char(FIRST_TIME,'MM-DD HH24:MI:SS') st,
       to_char(next_time,'MM-DD HH24:MI:SS') End,FIRST_CHANGE#,
       NEXT_CHANGE#, NAME name
        from ( select thread#, SEQUENCE# , FIRST_TIME, next_time,FIRST_CHANGE#,
                 NEXT_CHANGE#, NAME name
                 from v$archived_log  order by first_time desc  )
        where rownum <= 30;
-- List queues
col queue_table format A26 head 'Queue Table'
col queue_name format A32 head 'Queue Name'
col primary_instance format 9999 head 'Prim|inst'
col secondary_instance format 9999 head 'Sec|inst'
col owner_instance format 99 head 'Own|inst'
COLUMN MEM_MSG HEADING 'Messages|in Memory' FORMAT 99999999
COLUMN SPILL_MSGS HEADING 'Messages|Spilled' FORMAT 99999999
COLUMN NUM_MSGS HEADING 'Total Messages|in Buffered Queue' FORMAT 99999999
select
     a.owner||'.'|| a.name nam, a.queue_table,
              decode(a.queue_type,'NORMAL_QUEUE','NORMAL', 'EXCEPTION_QUEUE','EXCEPTION',a.queue_type) qt,
              trim(a.enqueue_enabled) enq, trim(a.dequeue_enabled) deq, (NUM_MSGS - SPILL_MSGS) MEM_MSG, spill_msgs, x.num_msgs msg,
              x.INST_ID owner_instance
              from dba_queues a , sys.gv_$buffered_queues x
        where
               a.qid = x.queue_id (+) and a.owner not in ( 'SYS','SYSTEM','WMSYS','SYSMAN')  order by a.owner ,qt desc;
-- List instantiated objects
set linesize 150
select distinct SOURCE_DATABASE,
       source_object_owner||'.'||source_object_name own_obj,
       SOURCE_OBJECT_TYPE objt, instantiation_scn, IGNORE_SCN,
       apply_database_link lnk
from  DBA_APPLY_INSTANTIATED_OBJECTS
order by 1,2;
-- Propagation senders : to run on source site
prompt
prompt ++ EVENTS AND BYTES PROPAGATED FOR EACH PROPAGATION++
prompt
COLUMN Elapsed_propagation_TIME HEADING 'Elapsed |Propagation Time|(Seconds)' FORMAT 9999999999999999
COLUMN TOTAL_NUMBER HEADING 'Total Events|Propagated' FORMAT 9999999999999999
COLUMN SCHEDULE_STATUS HEADING 'Schedule|Status'
column elapsed_dequeue_time HEADING 'Total Dequeue|Time (Secs)'
column elapsed_propagation_time HEADING 'Total Propagation|Time (Secs)' justify c
column elapsed_pickle_time HEADING 'Total Pickle| Time(Secs)' justify c
column total_time HEADING 'Elapsed|Pickle Time|(Seconds)' justify c
column high_water_mark HEADING 'High|Water|Mark'
column acknowledgement HEADING 'Target |Ack'
prompt Pickle: pickling is the action of building the messages, i.e. wrapping the LCRs before enqueuing them
prompt
set linesize 150
SELECT p.propagation_name,q.message_delivery_mode queue_type, DECODE(p.STATUS,
                'DISABLED', 'Disabled', 'ENABLED', 'Enabled') SCHEDULE_STATUS, q.instance,
                q.total_number TOTAL_NUMBER, q.TOTAL_BYTES/1048576 total_bytes,
                q.elapsed_dequeue_time/100 elapsed_dequeue_time, q.elapsed_pickle_time/100 elapsed_pickle_time,
                q.total_time/100 elapsed_propagation_time
  FROM  DBA_PROPAGATION p, dba_queue_schedules q
        WHERE   p.DESTINATION_DBLINK = NVL(REGEXP_SUBSTR(q.destination, '[^@]+', 1, 2), q.destination)
  AND q.SCHEMA = p.SOURCE_QUEUE_OWNER
  AND q.QNAME = p.SOURCE_QUEUE_NAME
  order by q.message_delivery_mode, p.propagation_name;
-- propagation receiver : to run on apply site
COLUMN SRC_QUEUE_NAME HEADING 'Source|Queue|Name' FORMAT A20
COLUMN DST_QUEUE_NAME HEADING 'Target|Queue|Name' FORMAT A20
COLUMN SRC_DBNAME HEADING 'Source|Database' FORMAT A15
COLUMN ELAPSED_UNPICKLE_TIME HEADING 'Unpickle|Time' FORMAT 99999999.99
COLUMN ELAPSED_RULE_TIME HEADING 'Rule|Evaluation|Time' FORMAT 99999999.99
COLUMN ELAPSED_ENQUEUE_TIME HEADING 'Enqueue|Time' FORMAT 99999999.99
SELECT SRC_QUEUE_NAME,
       SRC_DBNAME,DST_QUEUE_NAME,
       (ELAPSED_UNPICKLE_TIME / 100) ELAPSED_UNPICKLE_TIME,
       (ELAPSED_RULE_TIME / 100) ELAPSED_RULE_TIME,
       (ELAPSED_ENQUEUE_TIME / 100) ELAPSED_ENQUEUE_TIME, TOTAL_MSGS,HIGH_WATER_MARK
  FROM V$PROPAGATION_RECEIVER;
-- Apply reader
col rsid format 99999 head "Reader|Sid" justify c
COLUMN CREATION HEADING 'Message|Creation time' FORMAT A17 justify c
COLUMN LATENCY HEADING 'Latency|in|Seconds' FORMAT 9999999
col deqt format A15 head "Last|Dequeue Time" justify c
SELECT APPLY_NAME, sid rsid , (DEQUEUE_TIME-DEQUEUED_MESSAGE_CREATE_TIME)*86400 LATENCY,
        TO_CHAR(DEQUEUED_MESSAGE_CREATE_TIME,'HH24:MI:SS MM/DD') CREATION, TO_CHAR(DEQUEUE_TIME,'HH24:MI:SS MM/DD') deqt,
        DEQUEUED_MESSAGE_NUMBER  FROM V$STREAMS_APPLY_READER;
-- Do we have any library cache locks or other unusual waits?
set head on pause off feed off linesize 150
column event format a48 head "Event type"
column wait_time format 999999 head "Total| waits"
column seconds_in_wait format 999999 head " Time waited "
column sid format 9999 head "Sid"
column state format A17 head "State"
col seq# format 999999
select
  w.sid, s.status,w.seq#, w.event
  , w.wait_time, w.seconds_in_wait , w.p1 , w.p2 , w.p3 , w.state
from
  v$session_wait w , v$session s
where
  s.sid = w.sid and
  w.event != 'pmon timer'                  and
  w.event != 'rdbms ipc message'           and
  w.event != 'PL/SQL lock timer'           and
   w.event != 'SQL*Net message from client' and
  w.event != 'client message'              and
  w.event != 'pipe get'                    and
  w.event != 'Null event'                  and
  w.event != 'wakeup time manager'         and
  w.event != 'slave wait'                  and
  w.event != 'smon timer'
  and w.event != 'class slave wait'
  and w.event != 'LogMiner: wakeup event for preparer'
  and w.event != 'Streams AQ: waiting for time management or cleanup tasks'
  and w.event != 'LogMiner: wakeup event for builder'
  and w.event != 'Streams AQ: waiting for messages in the queue'
  and w.event != 'ges remote message'
  and w.event != 'gcs remote message'
  and w.event != 'Streams AQ: qmn slave idle wait'
  and w.event != 'Streams AQ: qmn coordinator idle wait'
  and w.event != 'ASM background timer'
  and w.event != 'DIAG idle wait'
  and w.seconds_in_wait > 0
/

Similar Messages

  • Data Domain - Replication Acceleration vs Application Acceleration?

    http://www.datadomain.com/pdf/DataDomain-Cisco-SolutionBrief.pdf
    I recently read an article by Cisco detailing the WAAS appliance's capability to add additional deduplication to Data Domain replication traffic being forwarded across the WAN. After reading the article and speaking with my SE, he recommended a 674 in Application Acceleration mode vs the WAE 7341 in Replication Acceleration mode. The article also states Application Acceleration mode was used.
    Why is Application Acceleration mode recommended over Replication Acceleration mode for this traffic?
    I have a project requiring 15-20 GB/day of incremental backups to be replicated between 3 sites. The sites are 90~110 ms apart and the links are around 20 Mbps at each site. Would a 674 in Application Acceleration mode really make a difference for the Data Domain replication?

    Is there any possibility you could dig up the configuration that was used on the WAAS Application Accelerator during the Data Domain testing? I see the Data Domain replication service runs over TCP port 4126. My SE recommends disabling the WAE 674's DRE function for the Data Domain traffic and simply relying on LZ & TFO. How do you disable DRE but still use LZ and TFO?
    I see 3 common settings:
    action optimize full                         LZ+TFO+DRE
    action optimize DRE no compression none      TFO only
    action pass-through                          bypass
    Can you do LZ+TFO only?  None of the applications in the link below show this type of action setting. This leads me to believe my SE was really suggesting turning off DRE completely for the WAAS.
    This WAAS needs to optimize traffic for:
    Lotus-Notes, HTTP, CIFS, Directory Services, RDP, and Data Domain
    Can all applications above + Data Domain be optimized from a pair of WAE 674s in AA mode?
    http://cisco.biz/en/US/docs/app_ntwk_services/waas/waas/v407/configuration/guide/apx_apps.html
    policy-engine application
       name Data-Domain
       classifier Data-Domain
          match dst port eq 4126
    exit
       map basic
          name Data-Domain classifier Data-Domain action optimize DRE no compression LZ
    exit

  • Data type Replication and ODBC

    I want to convert a table that has a column with the LONG datatype to
    a datatype supported by replication. Currently the LONG column
    contains more than 4,000 bytes, so I can't convert it to VARCHAR2.
    If I convert to CLOB, the application that we are using connects
    through ODBC drivers, and CLOB is not supported by ODBC.
    Has anyone run into this situation? What are the recommended or
    practical solutions for this problem?
    Thanks.
    --Pravin
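    As an aside on the conversion step itself (this does not address the ODBC limitation): Oracle can migrate a LONG column to a CLOB in place, or into a copy via TO_LOB. A minimal sketch; the table and column names are made up:
    -- In-place migration of a LONG column to CLOB
    ALTER TABLE docs MODIFY (body CLOB);
    -- Or copy into a new table, converting with TO_LOB
    CREATE TABLE docs_clob AS SELECT id, TO_LOB(body) AS body FROM docs;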

    Thx,
    I used the data type java.sql.Timestamp and it works fine ;) This data type parses dates in the format yyyy-MM-dd hh:mm:ss, so basically it does not matter whether I use a string or this type :) but this is better ;)
    The problem is the timezone! :( With java.sql.Timestamp it is not possible to set a timezone! :(

  • Data Guard replication

    hi all
    I have a doubt about the replication method in Data Guard.
    I have configured Data Guard for my 11g database. Everything is working fine. I want to empty tables in the standby, or maybe drop the schema and recreate it with empty tables.
    And after that, is it possible to duplicate the primary again and sync it up with the primary?
    If I ponder on this, I could use Data Pump to load the data, but how would I sync it with the primary?
    thanks in advance...
    feel free to ask questions...

    >
    And after that is it possible to duplicate the primary again and sync it up with primary.
    That means you simply want to recreate your standby from the primary, unaffected by whatever changes you made earlier to the standby. If my understanding of your statement is correct, you are duplicating the primary again to create the standby, so the changes you made in the standby are no longer available because it is recreated.

  • Transfer bulk data using replication

    Hi,
    We are having transactional replication setup between two database where one is publisher and other is subscriber. We are using push subscription for this setup.
    The problem comes when we have bulk data updates on the publisher. On the publisher side the update command completes in 4 mins, while the same takes approx 30 mins to reach the subscriber side. We have tried customizing the different properties in the Agent
    Profile like MaxBatchSize, SubscriptionStreams etc., but none of these helped. I have tried breaking up the command and a lot of permutations and combinations, but no success.
    The data that we are dealing with is around 10 million rows, and our production environment is not able to handle this.
    Please help. Thanks in advance!
    Samagra

    How are the production publisher and subscriber servers configured? Are both the same? How about the network bandwidth? Have you tried the same task during working hours and off hours? I am thinking the problem may be with the network as well as with the configuration of both servers.
    If you are doing huge operations with replication this is always expected; either you should have that much capacity, or you should divide the workload on your publisher server to avoid all these issues. Can you split the transactions?
    Raju Rasagounder Sr MSSQL DBA

  • Data Source Replication

    Hello,
    We've just refreshed our BI Prod  system and the ECC Prod system has been refreshed and upgraded. We are connecting to the source system just fine.
    The problem we are having is that some, not all, of our datasources in the process chains are getting the error that the datasource needs to be replicated.
    Now I understand that the time stamps are different. Where do I find the timestamp in BI? I can find it on the extract structure in ECC 6.0.
    Also, we used the context menu on the source system and chose activate and replicate as well, so why were not all of the datasources replicated?
    Archana

    Archana
    You will find datasource timestamp in table RSDS in BI.
    You will find the generate export data source time stamps for BI system in ROOSGEN in BI.
    You will find the timestamps of R/3 data source in ROOSOURCE table in R/3 or in RSOLTPSOURCE in BW.
    Dianne - You need to look for the datasource in RSOLTPSOURCE in BW and ROOSOURCE in R/3.
    These tables have timestamps; if the timestamp in RSOLTPSOURCE is greater than the timestamp in
    ROOSOURCE then you don't need to replicate, else you need to replicate the data source.
    You can create an ABAP program which would help you find the data sources that need replication.
    Maybe the datasources were activated and you have to activate the transfer structures of the replicated data sources.
    A better way than finding each data source that needs replication would be to right-click on the source system and, in the context menu, choose "replicate data sources".
    Then go to program RS_TRANSTRU_ACTIVATE_ALL, mention the source system, and run it. This will activate the required transfer structures.
    Hope it helps,
    Regards,
    Santosh Nagaraj

  • Data source Replication error in BW Quality system

    hi experts,
    When I replicate the data source in the BW quality system, it gives the below error:
    Invalid call sequence for interfaces when recording changes.
    Please find the long text below:
    Message no. TK425
    Diagnosis
    The function you selected uses interfaces to record changes made to your objects in requests and tasks.
    Before you save the changes, a check needs to be carried out to establish whether the objects may be changed. This check must be performed before you begin to make changes.
    System Response
    The function terminates.
    Procedure
    The function you have used must correct the two Transport Organizer interfaces required. Contact your system administrator.
    After replication of the data source, the changes are not reflected in the BW quality system.
    REGARDS
    venuscm

    Hi,
    Hope you have checked whether the DS has been transported to R/3 Quality and got activated before replication in BW Quality.
    Check the RFC connections between the systems.
    Check the RSLOGSYSMAP table in both BW & R/3 to confirm the system mappings.
    Try replicating again.
    Regards,
    Suman

  • Oracle Streams replication issue

    Hi Gurus,
    Just thought of checking with you guys on an issue I have got: Oracle Streams replication between two databases, oradb1 and oradb2, is inconsistent.
    The data is inconsistent between the databases.
    I am not sure about this; any expert advice will be highly appreciated.

    Hi,
    What version? How do you know it is inconsistent? Is it small enough to rebuild the target site from a backup of the primary?
    If it's small enough, you can delete the erroneous transactions from the apply queue in the target database. In 11.2.0.1 and above, there is also a DBMS_COMPARISON package for this type of thing, which will identify and fix the data if you choose. I think under huge volumes of data, though, you'll have to fix it yourself.
    Also, inquiring minds want to know. Specifically, how did it get out of sync? We have found Streams to be quite robust (and cheaper than the $17K/CPU they want for GoldenGate) if you:
    * Properly test and document your configuration
    * Test failure/recovery scenarios for issues such as yours
    * Lather/rinse/repeat
    HTH,
    Steve
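    For completeness, a rough DBMS_COMPARISON sketch (11g; the comparison name, schema, table, and db link below are made-up placeholders, and the table needs a usable index for the comparison):
    DECLARE
      scan_info  DBMS_COMPARISON.COMPARISON_TYPE;
      consistent BOOLEAN;
    BEGIN
      -- Define the comparison: a local table vs the same table over a db link
      DBMS_COMPARISON.CREATE_COMPARISON(
        comparison_name => 'CMP_T1',
        schema_name     => 'APP',
        object_name     => 'T1',
        dblink_name     => 'ORADB2');
      -- Scan both sides; perform_row_dif => TRUE records row-level differences
      consistent := DBMS_COMPARISON.COMPARE(
        comparison_name => 'CMP_T1',
        scan_info       => scan_info,
        perform_row_dif => TRUE);
      IF NOT consistent THEN
        -- Make the remote rows match the local ones for this scan
        DBMS_COMPARISON.CONVERGE(
          comparison_name  => 'CMP_T1',
          scan_id          => scan_info.scan_id,
          scan_info        => scan_info,
          converge_options => DBMS_COMPARISON.CMP_CONVERGE_LOCAL_WINS);
      END IF;
    END;
    /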

  • Truncate data in replication

    Hi,
    I have a problem with the functions removeDatabase and truncateClass. I want to clear all data in a database at a small cost, and I found these two APIs can meet my needs, but it always gives me com.sleepycat.je.rep.DatabasePreemptedException when a request comes to a replicated node; the master node works fine. Below is my code:
    trans = env.beginTransaction(null, null);
    closeDatabase(); // close database and entity store
    env.removeDatabase(trans, "persist#EntityStore#xxx.BdbMember#mobile");
    env.removeDatabase(trans, "persist#EntityStore#xxx.BdbMember");
    env.removeDatabase(trans, "persist#EntityStore#xxx.Card#businessNo");
    env.removeDatabase(trans, "persist#EntityStore#xxx.Card");
    env.removeDatabase(trans, "persist#EntityStore#xxx.PointFreeze");
    env.removeDatabase(trans, "persist#EntityStore#xxx.RedMemberBlockPeriod");
    reOpenDatabase("persist#EntityStore#xxx.BdbMember");
    reOpenDatabase("persist#EntityStore#xxx.Card");
    reOpenDatabase("persist#EntityStore#xxx.PointFreeze");
    reOpenDatabase("persist#EntityStore#xxx.RedMemberBlockPeriod");
    trans.commit();

    private Database reOpenDatabase(String dbName) throws Exception {
        Transaction trans = null;
        Database db = null;
        try {
            trans = env.beginTransaction(null, null);
            DatabaseConfig dc = new DatabaseConfig();
            dc.setTransactional(true);
            dc.setAllowCreate(true);
            db = env.openDatabase(trans, dbName, dc);
            trans.commit();
        } catch (Exception e) {
            if (trans != null) {
                trans.abort();
            }
            throw e;
        }
        return db;
    }
    Please do help me!
    Thanks

    You are using Berkeley DB Java Edition, High Availability, which is a different product from Berkeley DB (the C version). That's why we suggested that you move your question to the Berkeley DB Java Edition forum. I know that the product names are confusing.
    Please read the javadoc for DatabasePreemptedException at http://download.oracle.com/docs/cd/E17277_02/html/java/com/sleepycat/je/rep/DatabasePreemptedException.html. There you will find out that this happens when a database has been truncated, renamed, or deleted on the master. You will get this exception on the replica node. It tells you that the database has had a major change, and you must close and reopen your cursors, database and environment handles. Please read the javadoc for more details.

  • Data Guard replication question

    I'm just curious about how things work when creating a physical standby database with Grid Control.
    I would like to know what happens with current changes to the database during the transfer of the database files. After the files are transferred, does the standby then need to sync with the primary database for all changes that happened during the copy process?
    For example: if it took an entire day to transfer the database files, would I then have a full day of log files to catch up after the transfer was complete?
    Thanks
    Luke

    Ouch, thanks for the bad news Larry, lol. I was hoping for a miracle, but I guess that would be too good to be true. If I were to wait for the database to replicate and then apply logs, I would be 6 days behind. I'm going to have to try an alternative means of getting the data there, maybe a backup and restore method. Thanks again for the information.
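    For what it's worth, once the standby is mounted and redo apply is running, you can watch it work through the backlog. A sketch, run on the standby (10g/11g views):
    -- How far behind is the standby?
    SELECT name, value, time_computed
      FROM v$dataguard_stats
     WHERE name IN ('apply lag', 'transport lag');
    -- What are the recovery and transport processes doing?
    SELECT process, status, thread#, sequence#, block#
      FROM v$managed_standby;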

  • Data Source Replication

    Dear Friends, Happy New Year.
    Please give me solution for the following problem.
    I have APO System and BW system.
    In BW, I have ODS and Cube, so Cube is getting data from ODS. I did some changes in (Transferrules and Update rules)Development, Now I want to transport it to Quality. Now tell me do I need to collect 0<ODS> datasource,this will act as a Datasource for Cube and do I need to Replicate it in Qualiy and Production?.
    Thanks

    Hi Ganga,
    no need to transport the data sources.all the data sources are available to every system landscape.so just transport the transfer and update rules and every routine including in that if any,and transport the cube and ods's and just assign the data sources in the production system itself check with your team or with your basis guys they will tell you if there is any change in the naming convention of the data source.normally its directly available to all the 3 systems.so just create the data package and load the data directly.
    Hope everything is clear.
    assign points if it helps...
    Regards,
    ashok

  • Data Guard: Replication on standby is not working.

    Experts,
    I've executed below command on standby database :
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
    I thought this command would bring over all recent changes immediately. However, after executing this statement, replication is not happening. I mean, if I update something on the primary, it's not getting replicated on the standby db.
    Later I realized this command is for switching the standby to primary.
    How do I roll back the above change on the standby?
    Thanks

    Please see below:
    An attempt was made to cancel a managed recovery session but no managed recovery session was active.
    If you restart the recovery does it error?
    I restarted recovery on the standby and it did not error out. See below:
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT ;
    Database altered.
    If not and you do a log switch on the Primary does the archive move and not apply, or not move?
    I have done a log switch on the primary with the below command and it did not give any error:
    SQL> alter system switch logfile;
    System altered.
    In the primary DB alert log file I can see the below errors:
    Mon Dec 12 10:59:52 2011
    Thread 1 advanced to log sequence 322 (LGWR switch)
    Current log# 2 seq# 322 mem# 0: +DBDATA/iam/onlinelog/group_2.262.764445763
    Current log# 2 seq# 322 mem# 1: +DBFLASH/iam/onlinelog/group_2.258.764445763
    Mon Dec 12 10:59:52 2011
    Deleted Oracle managed file +DBFLASH/iam/archivelog/2011_12_01/thread_1_seq_218.383.768756737
    Archived Log entry 804 added for thread 1 sequence 321 ID 0x41fac87c dest 1:
    Mon Dec 12 10:59:52 2011
    FAL[server, ARC3]: FAL archive failed, see trace file.
    ARCH: FAL archive failed. Archiver continuing
    Any light from above information ?
    Thanks
    Edited by: 859875 on Dec 12, 2011 8:09 AM
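    A few checks that may help with the FAL error above (standard views; where to run each is noted):
    -- On the standby: is there an archive gap, and which logs are applied?
    SELECT * FROM v$archive_gap;
    SELECT sequence#, applied FROM v$archived_log ORDER BY sequence#;
    -- On the primary: is the standby archive destination in error?
    SELECT dest_id, status, error FROM v$archive_dest WHERE status <> 'INACTIVE';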

  • Issue in Material Master data replication in SRM 7.13

    Hi All,
    We have implemented SRM 7.13 and are facing an issue with material master data delta replication. When we create a new material in ECC it does not get replicated to SRM, and gives the errors 'Specify the relevant unit of length' and 'Validation error occurred: Module COM_PRODUCT_MAT_VALIDATE, BDoc type PRODUCT_MAT.' in SMW01.
    But when changes are made to the same material in ECC, it gets replicated without any issues.
    Please provide your help to resolve this issue.
    Thanks,
    Ankur

    Hi Ankur
    Check your settings in the CUNI transaction. Is this happening for one UOM or for all?
    You should go there and check what conversion values are maintained, as seen below.
    In the CUNI transaction, go to Utilities > Adjustment, select your R/3 system, and click on the check mark.
    Also replicate the UOM from the backend: go to R3AS, fill in DNL_CUST_BASIS3, and execute.
    Lastly, go to tables T006 and T006A and click on Check table.
    Doing this should sync your UOM with that of ECC, and ideally the error should vanish; if not,
    share the screenshots.
    You can solve the errors mentioned above by implementing the following notes:
    Call transaction SE16, enter SMOFAPPL, and set entry EBP.
    Check the notes as below
    520227 - Set type CRMM_PR_TAX (implement manually!)
    720819 - Consumer entries
    428989 - Filtering irrelevant data (for example, Sales and Distribution) Example: see attachment mod.txt.
    675101 - Execute report: COM_CATEGORY_TRANSPORT
    Note 872533 - FAQ: Middleware
    Note 402591 - Download Customizing object DNL_CUST_BASIS3
    Note 1609476 - Error loading tables DNL_CUST_PRICE and DNL_CUST_BASIS3 from ERP.
    Note 393939 - Table TCURX is not downloaded from R/3 System
    Note 1038966 - Entries missing in TCURX after load of DNL_CUST_BASIS3
    Also refer to these who faced similar issue as yours
    http://scn.sap.com/thread/1769891
    http://scn.sap.com/thread/1355188

  • Need suggestion for Oracle data replication/integration/transfer strategies

    Client OLTP database (Oracle 10g on Solaris) connected to a web-based 'Point of Sale' system where some of their insurance-related transactions (around 2000 per day) are stored. Most of their insurance-related transactions are stored in a remote central system (accessible over the intranet).
    Based on the net worth of a transaction, there is a requirement to either transfer the transaction in real time or batch it and send it across to a remote Oracle 10g (on Solaris) staging area (part of the remote central system), and then fire some stored procedures on the remote Oracle staging area to process the records transferred. Some amount of configurable data massaging would also be needed before firing the stored procedures. The outcome of the whole process also needs to be tracked.
    The client is interested in automating this process to the extent possible using the various possible replication/data-transfer strategies (dblink, Data Pump, ETL, replication kit, MQ, xml/http etc., with pros & cons), taking into consideration the standard security and bandwidth constraints.

    Hi,
    There are lots of solutions available.
    You can consider which of the following fit your scenario:
    1.) Data Pump Network Exports/Imports
    http://www.oracle-base.com/articles/10g/oracle-data-pump-10g.php#NetworkExportsImports
    2.) Data replication using DBMS_COMPARISON
    http://nadvi.blogspot.in/2011/11/data-replication-using-dbmscomparison.html
    http://www.bash-dba.com/2012/02/data-synchronization-replication-using.html
    3.) Complete Schema Refresh using Data Pump
    http://www.in-oracle.com/Oracle-DBA/DBA-I/schema-refresh-Data-Pump.php
    4.) Oracle Golden Gate
    5.) Oracle Streams
    Let us know if you have any query on the same :)
    Thanks
    Hitgon
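    To make option 1 concrete, here is a minimal network-mode import sketch using the DBMS_DATAPUMP API (the db link, schema, and job names are placeholders):
    DECLARE
      h     NUMBER;
      state VARCHAR2(30);
    BEGIN
      -- Network-mode import: pulls the schema over the db link, no dump file
      h := DBMS_DATAPUMP.OPEN(operation   => 'IMPORT',
                              job_mode    => 'SCHEMA',
                              remote_link => 'SRC_DB',
                              job_name    => 'NET_IMP_DEMO');
      DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', 'IN (''APP_OWNER'')');
      DBMS_DATAPUMP.SET_PARAMETER(h, 'TABLE_EXISTS_ACTION', 'REPLACE');
      DBMS_DATAPUMP.START_JOB(h);
      DBMS_DATAPUMP.WAIT_FOR_JOB(h, state);
      DBMS_OUTPUT.PUT_LINE('Job finished with state: ' || state);
    END;
    /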

  • BP Replication with Sales Area Data

    Hi all,
    We are having an issue with BP replication from CRM to R3. The issue is that when we create a new BP and add sales area data to this Sold-To business partner, the BP is replicated to R3 all blanked out. There is no name, address or any data that is replicated. However, if we create a new Sold-To business partner and do not maintain any sales area data, the replication works as expected. Any ideas?
    Thanks in advance. Points will be awarded for helpful responses.
    Message was edited by:
            John S

    Yes, the sales area exists in both systems but I get errors for both scenarios...
    When I create a sold-to BP in CRM and maintain Sales Area Data, the error I get is "Business partner #### does not exist as customer, change not possible" (XR012). In the extended data part, there are two entries under Partner table. The first entry appears to be the Update to include sales area, and the second entry appears to be the Insert of a new customer. However, this new customer uses the Reference Customer information.
    When I create a sold-to BP in CRM without Sales Area Data, I get "Updating could not be completed" (S&150). In the Partner table there is one entry and it contains an Insert for the new customer with all of the new data (no reference customer info in there).
    Any ideas?
