Data Source Replication

Hello,
We've just refreshed our BI Prod system, and the ECC Prod system has been refreshed and upgraded. We are connecting to the source system just fine.
The problem we are having is that some, not all, of our datasources in the process chains are getting the error that the datasource needs to be replicated.
Now I understand that the timestamps are different. Where do I find the timestamp in BI? I can find it on the extract structure in ECC 6.0.
Also, we used the context menu on the source system and chose Activate as well as Replicate, so why weren't all of the datasources replicated?
Archana

Archana,
You will find the datasource timestamp in table RSDS in BI.
You will find the timestamps of the generated export DataSources for the BI system in table ROOSGEN in BI.
You will find the timestamps of the R/3 DataSources in table ROOSOURCE in R/3, or in RSOLTPSOURCE in BW.
Dianne - you need to look up the datasource in RSOLTPSOURCE in BW and in ROOSOURCE in R/3.
Both tables have timestamps: if the timestamp in RSOLTPSOURCE is greater than the timestamp in
ROOSOURCE, you don't need to replicate; otherwise you need to replicate the datasource.
You can create an ABAP program to find the datasources that need replication, along the lines of the sketch below.
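A minimal sketch, not a finished tool: it assumes both RSOLTPSOURCE and ROOSOURCE carry a timestamp field, called TIMESTMP below, and that the source system is reachable through an RFC destination. Verify the real field names in SE11 before using it.

REPORT z_find_ds_to_replicate.
* Lists DataSources whose BW copy (RSOLTPSOURCE) is older than the
* source-system version (ROOSOURCE, read remotely via RFC_READ_TABLE).
* Assumption: the timestamp field is named TIMESTMP in both tables.
PARAMETERS: p_rfc    TYPE rfcdest OBLIGATORY,  " RFC dest. of source system
            p_logsys TYPE logsys  OBLIGATORY.  " its logical system name

TYPES: BEGIN OF ty_ds,
         oltpsource TYPE c LENGTH 30,
         timestmp   TYPE c LENGTH 15,
       END OF ty_ds.

DATA: lt_bw  TYPE TABLE OF ty_ds,
      ls_bw  TYPE ty_ds,
      ls_r3  TYPE ty_ds,
      lt_fld TYPE TABLE OF rfc_db_fld,
      ls_fld TYPE rfc_db_fld,
      lt_opt TYPE TABLE OF rfc_db_opt,
      ls_opt TYPE rfc_db_opt,
      lt_raw TYPE TABLE OF tab512,
      ls_raw TYPE tab512.

* BW side: active versions replicated from this source system
SELECT oltpsource timestmp FROM rsoltpsource INTO TABLE lt_bw
       WHERE logsys = p_logsys AND objvers = 'A'.

* R/3 side: read ROOSOURCE remotely with the generic table reader
ls_fld-fieldname = 'OLTPSOURCE'. APPEND ls_fld TO lt_fld.
ls_fld-fieldname = 'TIMESTMP'.   APPEND ls_fld TO lt_fld.
ls_opt-text = 'OBJVERS = ''A'''. APPEND ls_opt TO lt_opt.
CALL FUNCTION 'RFC_READ_TABLE' DESTINATION p_rfc
  EXPORTING query_table = 'ROOSOURCE'
            delimiter   = '|'
  TABLES    options     = lt_opt
            fields      = lt_fld
            data        = lt_raw.

* Flag every datasource whose BW timestamp lags the source system's
LOOP AT lt_raw INTO ls_raw.
  SPLIT ls_raw-wa AT '|' INTO ls_r3-oltpsource ls_r3-timestmp.
  READ TABLE lt_bw INTO ls_bw WITH KEY oltpsource = ls_r3-oltpsource.
  IF sy-subrc <> 0 OR ls_bw-timestmp < ls_r3-timestmp.
    WRITE: / ls_r3-oltpsource, 'needs replication'.
  ENDIF.
ENDLOOP.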
Maybe the datasources were replicated, and you still have to activate the transfer structures of the replicated datasources.
A better way than hunting for the datasources that need replication would be to right-click on the source system and choose 'Replicate DataSources' in the context menu.
Then run program RS_TRANSTRU_ACTIVATE_ALL, specifying the source system. This will activate the required transfer structures.
Hope it helps,
Regards,
Santosh Nagaraj

Similar Messages

  • Data Domain - Replication Acceleration vs Application Acceleration?

    http://www.datadomain.com/pdf/DataDomain-Cisco-SolutionBrief.pdf
    I recently read an article by Cisco detailing the WAAS appliance's capability to add additional deduplication to Data Domain replication traffic being forwarded across the WAN. After reading the article and speaking with my SE, he recommended a WAE 674 in Application Acceleration mode vs the WAE 7341 in Replication Acceleration mode. The article also states Application Acceleration mode was used.
    Why is Application Acceleration mode recommended over Replication Acceleration mode for this traffic?
    I have a project requiring 15-20 GB/day of incremental backups to be replicated between 3 sites. The sites are 90-110 ms apart and the links are around 20 Mbps at each site. Would a 674 in Application Acceleration mode really make a difference for the Data Domain replications?

    Is there any possibility you could dig up the configuration that was used on the WAAS Application Accelerator during the Data Domain testing? I see the Data Domain replication service runs over port TCP 4126. My SE recommends disabling the WAE 674's DRE functions for the Data Domain traffic and simply relying on LZ & TFO. How do you disable DRE, but still use LZ and TFO?
    I see 3 common settings:
    action optimize full                     -> LZ+TFO+DRE
    action optimize DRE no compression none  -> TFO only
    action pass-through                      -> bypass
    Can you do LZ+TFO only? None of the applications in the link below show this type of action setting. This leads me to believe my SE was really suggesting turning DRE off completely on the WAAS.
    This WAAS needs to optimize traffic for:
    Lotus-Notes, HTTP, CIFS, Directory Services, RDP, and Data Domain
    Can all applications above + Data Domain be optimized from a pair of WAE 674s in AA mode?
    http://cisco.biz/en/US/docs/app_ntwk_services/waas/waas/v407/configuration/guide/apx_apps.html
    policy-engine application
       name Data-Domain
       classifier Data-Domain
          match dst port eq 4126
    exit
       map basic
          name Data-Domain classifier Data-Domain action optimize DRE no compression LZ
    exit

  • Not able to see PSA for the data source 2lis_13_vdkon after enhancing

    Dear Experts,
    I have enhanced the extract structure of 2lis_13_vdkon in R/3 with 5 fields, which now exist in the transfer structure.
    I have replicated the data source in BI, but unfortunately I have not deleted the data in the PSA.
    After replication, when I try to activate the data source the system gives a dump, and I'm not able to see the PSA either.
    Please help me out.
    When I choose Manage on the data source to see the PSA, it gives an error like this:
    Invalid DataStore object name 2LIS_13_VDKON_BB: Reason: No valid entry in table RSTS
    The Dump when activating the data source is :
    Runtime Errors SAPSQL_ARRAY_INSERT_DUPREC
    Exception CX_SY_OPEN_SQL_DB
    Date and Time 03.06.2009 09:46:55
    How to correct the error
    Use an ABAP/4 Open SQL array insert only if you are sure that none of
    the records passed already exists in the database.
    If the error occurs in a non-modified SAP program, you may be able to
    find an interim solution in an SAP Note.
    If you have access to SAP Notes, carry out a search with the following
    keywords:
    "SAPSQL_ARRAY_INSERT_DUPREC" "CX_SY_OPEN_SQL_DB"
    "CL_RSAR_PSA===================CP" or "CL_RSAR_PSA===================CM006"
    "_UPDATE_DIRECTORY_TABLES"
    If you cannot solve the problem yourself and want to send an error
    notification to SAP, include the following information:
    1. The description of the current problem (short dump)
    To save the description, choose "System->List->Save->Local File
    (Unconverted)".
    2. Corresponding system log
    Display the system log by calling transaction SM21.
    Restrict the time interval to 10 minutes before and five minutes
    after the short dump. Then choose "System->List->Save->Local File
    (Unconverted)".
    3. If the problem occurs in a problem of your own or a modified SAP
    program: The source code of the program
    Regards
    venu

    Hi Venu,
    The issue, I think, is that the PSA was not deleted cleanly. If you try to activate the PSA you will get the dump again.
    Probably if you go through the ABAP dump you will notice it is pointing to the following code:
       75 * Put the fields of PSA to database without check, otherwise error by
       76 * activation of table
       77
       78   IF p_psa_exists EQ rs_c_false OR
       79      i_new_version     EQ rs_c_true.
    >>>>>     INSERT rstsodsfield FROM TABLE l_t_odsfield.
       81   ELSE.
       82     DELETE FROM rstsodsfield
       83       WHERE odsname = l_s_odsfield-odsname
       84       AND  version = l_s_odsfield-version.
       85     MODIFY rstsodsfield FROM TABLE l_t_odsfield.
    Put a breakpoint in the code at line 78 and activate the DS. The program should stop at line 78; change the value of the variable rs_c_false so that line 82 gets executed. This will clean up the PSA entries in table rstsodsfield. Now activate the DS and delete the PSA entries in the PSA table.
    We faced the same issue and resolved it this way.
    Thanks
    -Saif

  • Data type Replication and ODBC

    I want to convert a table that has a column with the LONG datatype to a
    datatype supported by replication. Currently the LONG column
    contains more than 4,000 bytes, so I can't convert it to VARCHAR2.
    If I convert it to CLOB, then the application that we are using
    connects through ODBC drivers, and CLOB is not supported by
    ODBC.
    Has anyone run into this situation? What are the recommended or
    practical solutions for this problem?
    Thanks.
    --Pravin

    Thx,
    I used the data type java.sql.Timestamp and it works fine ;) This data type parses the date into the format yyyy-MM-dd hh:mm:ss, so basically it does not matter if I use a string or this type, but this is better.
    The problem is the timezone! With java.sql.Timestamp it is not possible to set a timezone.
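    For the original LONG question itself, a minimal SQL sketch of the two usual conversion routes (table and column names here are made up):
    -- In place, supported since Oracle 9i:
    ALTER TABLE my_table MODIFY (long_col CLOB);
    -- Or copy into a new table, converting with TO_LOB:
    CREATE TABLE my_table_new AS
      SELECT id, TO_LOB(long_col) AS long_col FROM my_table;
    Whether the ODBC driver then exposes the CLOB usefully depends on the driver version, so test the application side first.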

  • Data Guard replication

    Hi all,
    I have a doubt about the replication method in Data Guard.
    I have configured Data Guard for my 11g database. Everything is working fine. I want to empty tables in the standby, or maybe drop the schema and recreate it with empty tables.
    After that, is it possible to duplicate the primary again and sync it up with the primary?
    If I ponder on this, I can use Data Pump to load the data, but how will I sync it with the primary?
    Thanks in advance...
    Feel free to ask questions...

    >
    And after that is it possible to duplicate the primary again and sync it up with primary.
    That means you simply want to recreate your Standby from the Primary, unaffected by whatever changes you made earlier to the Standby. If my understanding of your statement is correct, you are duplicating the Primary again to create the standby, so the changes you made in the Standby are no longer available because it is recreated.
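    If that is the plan, one sketch of the 11g way to rebuild the standby without a manual backup/restore (the connect strings are assumptions):
    rman TARGET sys@primary_db AUXILIARY sys@standby_db
    DUPLICATE TARGET DATABASE FOR STANDBY
      FROM ACTIVE DATABASE
      DORECOVER NOFILENAMECHECK;
    Once the duplicate finishes, restarting managed recovery keeps the standby in sync again.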

  • Transfer bulk data using replication

    Hi,
    We have transactional replication set up between two databases, where one is the publisher and the other is the subscriber. We are using push subscription for this setup.
    The problem comes when we have bulk data updates on the publisher. On the publisher side the update command completes in 4 minutes, while the same data takes approximately 30 minutes to reach the subscriber side. We have tried customizing the different properties in the Agent
    Profile, like MaxBatchSize, SubscriptionStreams, etc., but none of this is of any help. I have tried breaking up the command and lots of permutations and combinations, but no success.
    The data that we are dealing with is around 10 million rows, and our production environment is not able to handle this.
    Please help. Thanks in advance!
    Samagra

    How are the production publisher server and subscriber server configured? Are both the same? How about the network bandwidth? Have you tried the same task during working hours and during off hours? I am thinking the problem may be with the network as well as with the configuration of both servers.
    If you are doing a huge operation with replication this is always expected; either you need hardware sized for it, or you should divide the workload on your publisher server to avoid all these issues. Why can't you split the transactions?
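    For example, a minimal T-SQL sketch of chunked updates (table, columns, and batch size are hypothetical); each chunk commits separately, so the log reader and distribution agent move many small transactions instead of one 10-million-row one:
    WHILE 1 = 1
    BEGIN
        UPDATE TOP (50000) dbo.BigTable
        SET    Price = Price * 1.10,
               IsAdjusted = 1
        WHERE  IsAdjusted = 0;
        IF @@ROWCOUNT = 0 BREAK;   -- nothing left to update
    END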
    Raju Rasagounder Sr MSSQL DBA

  • Data source Replication error in BW Quality system

    Hi experts,
    When I replicate the datasource in the BW quality system it gives the below error:
    Invalid call sequence for interfaces when recording
    changes.
    Please find the long text:
    Message no. TK425
    Diagnosis
    The function you selected uses interfaces to record changes made to your objects in requests and tasks.
    Before you save the changes, a check needs to be carried out to establish whether the objects may be changed. This check must be performed before you begin to make changes.
    System Response
    The function terminates.
    Procedure
    The function you have used must correct the two Transport Organizer interfaces required. Contact your system administrator.
    After replication of the datasource, the changes are not reflected in the BW quality system.
    Regards
    venuscm

    Hi,
    Hope you have checked whether the DS has been transported to R/3 Quality and activated before replication in BW Quality.
    Check the RFC connections between the systems.
    Check the RSLOGSYSMAP table in both BW & R/3 to confirm the system mappings.
    Then try replicating again.
    Regards,
    Suman

  • Data Streams replication

    Hi Gurus,
    Just thought of checking with you all for a possible solution to a problem.
    The problem is with Streams replication.
    The client has come back stating that data is not replicating to the target database.
    Could someone please come up with the possible factors for this failure?
    I checked that the DB links are working fine.
    Apart from this, any suggestion will be highly appreciated.
    Thanks in advance.

    Not much information, so let's grasp whatever we can.
    Can you post the results of the following queries?
    (Please enclose the response in the tags [ code] and [ /code],
    otherwise we simply cannot read the response.)
    set lines 190 pages 66 feed off pause off verify off
    col rsn format A28 head "Rule Set name"
    col rn format A30 head "Rule name"
    col rt format A64 head "Rule text"
    COLUMN DICTIONARY_BEGIN HEADING 'Dictionary|Build|Begin' FORMAT A10
    COLUMN DICTIONARY_END HEADING 'Dictionary|Build|End' FORMAT A10
    col CHECKPOINT_RETENTION_TIME head "Checkpoint|Retention|time" justify c
    col LAST_ENQUEUED_SCN for 999999999999 head "Last scn|enqueued" justify c
    col las format 999999999999 head "Last remote|confirmed|scn Applied" justify c
    col REQUIRED_CHECKPOINT_SCN for 999999999999 head "Checkpoint|Require scn" justify c
    col nam format A31 head 'Queue Owner and Name'
    col table_name format A30 head 'table Name'
    col queue_owner format A20 head 'Queue owner'
    col table_owner format A20 head 'table Owner'
    col rsname format A34 head 'Rule set name'
    col cap format A22 head 'Capture name'
    col ti format A22 head 'Date'
    col lct format A18 head 'Last|Capture time' justify c
    col cmct format A18 head 'Capture|Create time' justify c
    col emct format A18 head 'Last enqueued|Message creation|Time' justify c
    col ltme format A18 head 'Last message|Enqueue time' justify c
    col ect format 999999999 head 'Elapsed|capture|Time' justify c
    col eet format 9999999 head 'Elapsed|Enqueue|Time' justify c
    col elt format 9999999 head 'Elapsed|LCR|Time' justify c
    col tme format 999999999999 head 'Total|Message|Enqueued' justify c
    col tmc format 999999999999 head 'Total|Message|Captured' justify c
    col scn format 999999999999 head 'Scn' justify c
    col emn format 999999999999 head 'Enqueued|Message|Number' justify c
    col cmn format 999999999999 head 'Captured|Message|Number' justify c
    col lcs format 999999999999 head 'Last scn|Scanned' justify c
    col AVAILABLE_MESSAGE_NUMBER format 999999999999 head 'Last system| scn'  justify c
    col capture_user format A20 head 'Capture user'
    col ncs format 999999999999 head 'Captured|Start scn' justify c
    col capture_type format A10 head 'Capture |Type'
    col RULE_SET_NAME format a15 head "Rule set Name"
    col NEGATIVE_RULE_SET_NAME format a15 head "Neg rule set"
    col status format A8 head 'Status'
    -- For each table in APPLY site, given by this query
    col SOURCE_DATABASE format a30
    set linesize 150
    select distinct SOURCE_DATABASE,
           source_object_owner||'.'||source_object_name own_obj,
           SOURCE_OBJECT_TYPE objt, instantiation_scn, IGNORE_SCN,
           apply_database_link lnk
    from  DBA_APPLY_INSTANTIATED_OBJECTS order by 1,2;
    -- do
    execute DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(  source_object_name=> 'owner.table',
               source_database_name => 'source_database' ,  instantiation_scn => NULL );
    -- List instantiation objects at source
    select TABLE_OWNER, TABLE_NAME,SCN, to_char(TIMESTAMP,'DD-MM-YYYY HH24:MI:SS') ti
    from dba_capture_prepared_tables  order by table_owner;
    col LOGMINER_ID head 'Log|ID'  for 999
    select  LOGMINER_ID, CAPTURE_USER,  start_scn ncs,  to_char(STATUS_CHANGE_TIME,'DD-MM HH24:MI:SS') change_time
        ,CAPTURE_TYPE,RULE_SET_NAME, negative_rule_set_name , status from dba_capture
    order by logminer_id;
    set lines 190
    col rsname format a22 head 'Rule set name'
    col delay_scn head 'Delay|Scanned' justify c
    col delay2 head 'Delay|Enq-Applied' justify c
    col state format a24
    col process_name format a8 head 'Process|Name' justify c
    col LATENCY_SECONDS head 'Lat(s)'
    col total_messages_captured head 'total msg|Captured'
    col total_messages_enqueued head 'total msg|Enqueue'
    col ENQUEUE_MESG_TIME format a17 head 'Row creation|initial time'
    col CAPTURE_TIME head 'Capture at'
    select a.logminer_id , a.CAPTURE_NAME cap, queue_name , AVAILABLE_MESSAGE_NUMBER, CAPTURE_MESSAGE_NUMBER lcs,
          AVAILABLE_MESSAGE_NUMBER-CAPTURE_MESSAGE_NUMBER delay_scn,
          last_enqueued_scn , applied_scn las , last_enqueued_scn-applied_scn delay2
          from dba_capture a, v$streams_capture b where a.capture_name = b.capture_name (+)
    order by logminer_id;
    SELECT c.logminer_id,
             SUBSTR(s.program,INSTR(s.program,'(')+1,4) PROCESS_NAME,
             c.sid,
             c.serial#,
             c.state,
             to_char(c.capture_time, 'HH24:MI:SS MM/DD/YY') CAPTURE_TIME,
             to_char(c.enqueue_message_create_time,'HH24:MI:SS MM/DD/YY') ENQUEUE_MESG_TIME ,
            (SYSDATE-c.capture_message_create_time)*86400 LATENCY_SECONDS,
            c.total_messages_captured,
            c.total_messages_enqueued
       FROM V$STREAMS_CAPTURE c, V$SESSION s
       WHERE c.SID = s.SID
      AND c.SERIAL# = s.SERIAL#
    order by logminer_id ;
    -- Which is the lowest required archive:
    -- this query assume you have only one logminer_id or your must add ' and session# = <id_nn>  '
    set serveroutput on
    DECLARE
    hScn number := 0;
    lScn number := 0;
    sScn number;
    ascn number;
    alog varchar2(1000);
    begin
      select min(start_scn), min(applied_scn) into sScn, ascn
        from dba_capture ;
      DBMS_OUTPUT.ENABLE(2000);
      for cr in (select distinct(a.ckpt_scn)
                 from system.logmnr_restart_ckpt$ a
                 where a.ckpt_scn <= ascn and a.valid = 1
                   and exists (select * from system.logmnr_log$ l
                       where a.ckpt_scn between l.first_change# and
                         l.next_change#)
                  order by a.ckpt_scn desc)
      loop
        if (hScn = 0) then
           hScn := cr.ckpt_scn;
        else
           lScn := cr.ckpt_scn;
           exit;
        end if;
      end loop;
      if lScn = 0 then
        lScn := sScn;
      end if;
       -- select min(name) into alog from v\$archived_log where lScn between first_change# and next_change#;
      -- dbms_output.put_line('Capture will restart from SCN ' || lScn ||' in log '||alog);
        dbms_output.put_line('Capture will restart from SCN ' || lScn ||' in the following file:');
       for cr in (select name, first_time  , SEQUENCE#
                   from DBA_REGISTERED_ARCHIVED_LOG
                   where lScn between first_scn and next_scn order by thread#)
      loop
         dbms_output.put_line(to_char(cr.SEQUENCE#)|| ' ' ||cr.name||' ('||cr.first_time||')');
      end loop;
    end;
    /
    -- List all archives
    prompt If 'Name' is empty then the archive is not on disk anymore
    prompt
    set linesize 150 pagesize 0 heading on embedded on
    col name          form A55 head 'Name' justify l
    col st    form A14 head 'Start' justify l
    col end    form A14 head 'End' justify l
    col NEXT_CHANGE#   form 9999999999999 head 'Next Change' justify c
    col FIRST_CHANGE#  form 9999999999999 head 'First Change' justify c
    col SEQUENCE#     form 999999 head 'Logseq' justify c
    select thread#, SEQUENCE# , to_char(FIRST_TIME,'MM-DD HH24:MI:SS') st,
           to_char(next_time,'MM-DD HH24:MI:SS') End,FIRST_CHANGE#,
           NEXT_CHANGE#, NAME name
            from ( select thread#, SEQUENCE# , FIRST_TIME, next_time,FIRST_CHANGE#,
                     NEXT_CHANGE#, NAME name
                     from v$archived_log  order by first_time desc  )
            where rownum <= 30;
    -- APPLY status
    col apply_tag format a8 head 'Apply| Tag'
    col QUEUE_NAME format a24
    col DDL_HANDLER format a20
    col MESSAGE_HANDLER format a20
    col NEGATIVE_RULE_SET_NAME format a20 head 'Negative|rule set'
    col apply_user format a20
    col uappn format A30 head "Apply name"
    col queue_name format A30 head "Queue name"
    col apply_captured format A14 head "Type of|Applied Events" justify c
    col rsn format A24 head "Rule Set name"
    col sts format A8 head "Apply|Process|Status"
    col apply_tag format a8 head 'Apply| Tag'
    col QUEUE_NAME format a24
    col DDL_HANDLER format a20
    col MESSAGE_HANDLER format a20
    col NEGATIVE_RULE_SET_NAME format a20 head 'Negative|rule set'
    col apply_user format a20
    set linesize 150
      select apply_name uappn, queue_owner, DECODE(APPLY_CAPTURED, 'YES', 'Captured', 'NO',  'User-Enqueued') APPLY_CAPTURED,
           RULE_SET_NAME rsn , apply_tag, STATUS sts  from dba_apply;
      select QUEUE_NAME,DDL_HANDLER,MESSAGE_HANDLER, NEGATIVE_RULE_SET_NAME, APPLY_USER, ERROR_NUMBER,
             to_char(STATUS_CHANGE_TIME,'DD-MM-YYYY HH24:MI:SS')STATUS_CHANGE_TIME
      from dba_apply ;
    set head off
    select  ERROR_MESSAGE from  dba_apply;
    -- propagation status
    set head on
    col rsn format A28 head "Rule Set name"
    col rn format A30 head "Rule name"
    col rt format A64 head "Rule text"
    col d_dblk format A40 head 'Destination dblink'
    col nams format A41 head 'Source queue'
    col namd format A66 head 'Remote queue'
    col prop format A40 head 'Propagation name '
    col rsname format A20 head 'Rule set name'
    COLUMN TOTAL_TIME HEADING 'Total Time Executing|in Seconds' FORMAT 999999
    COLUMN TOTAL_NUMBER HEADING 'Total Events Propagated' FORMAT 999999999
    COLUMN TOTAL_BYTES HEADING 'Total mb| Propagated' FORMAT 9999999999
    COL PROPAGATION_NAME format a26
    COL SOURCE_QUEUE_NAME format a34 head "Source| queue name" justify c
    COL DESTINATION_QUEUE_NAME format a24 head "Destination| queue name" justify c
    col QUEUE_TO_QUEUE format a9 head "Queue to| Queue"
    col RULE_SET_NAME format a18
    set linesize 125
    prompt
    set lines 190
    select PROPAGATION_NAME prop,  RULE_SET_NAME rsname , nvl(DESTINATION_DBLINK,'Local to db') d_dblk,NEGATIVE_RULE_SET_NAME
                  from dba_propagation ;
      select SOURCE_QUEUE_OWNER||'.'|| SOURCE_QUEUE_NAME nams , DESTINATION_QUEUE_OWNER||'.'|| DESTINATION_QUEUE_NAME||
              decode( DESTINATION_DBLINK,null,'','@'|| DESTINATION_DBLINK) namd, status , QUEUE_TO_QUEUE
                  from dba_propagation ;
    -- Archive numbers
    set linesize 150 pagesize 0 heading on embedded on
    col name          form A55 head 'Name' justify l
    col st    form A14 head 'Start' justify l
    col end    form A14 head 'End' justify l
    col NEXT_CHANGE#   form 9999999999999 head 'Next Change' justify c
    col FIRST_CHANGE#  form 9999999999999 head 'First Change' justify c
    col SEQUENCE#     form 999999 head 'Logseq' justify c
    select thread#, SEQUENCE# , to_char(FIRST_TIME,'MM-DD HH24:MI:SS') st,
           to_char(next_time,'MM-DD HH24:MI:SS') End,FIRST_CHANGE#,
           NEXT_CHANGE#, NAME name
            from ( select thread#, SEQUENCE# , FIRST_TIME, next_time,FIRST_CHANGE#,
                     NEXT_CHANGE#, NAME name
                     from v$archived_log  order by first_time desc  )
            where rownum <= 30;
    -- List queues
    col queue_table format A26 head 'Queue Table'
    col queue_name format A32 head 'Queue Name'
    col primary_instance format 9999 head 'Prim|inst'
    col secondary_instance format 9999 head 'Sec|inst'
    col owner_instance format 99 head 'Own|inst'
    COLUMN MEM_MSG HEADING 'Messages|in Memory' FORMAT 99999999
    COLUMN SPILL_MSGS HEADING 'Messages|Spilled' FORMAT 99999999
    COLUMN NUM_MSGS HEADING 'Total Messages|in Buffered Queue' FORMAT 99999999
    select
         a.owner||'.'|| a.name nam, a.queue_table,
                  decode(a.queue_type,'NORMAL_QUEUE','NORMAL', 'EXCEPTION_QUEUE','EXCEPTION',a.queue_type) qt,
                  trim(a.enqueue_enabled) enq, trim(a.dequeue_enabled) deq, (NUM_MSGS - SPILL_MSGS) MEM_MSG, spill_msgs, x.num_msgs msg,
                  x.INST_ID owner_instance
                  from dba_queues a , sys.gv_$buffered_queues x
            where
           a.qid = x.queue_id (+) and a.owner not in ( 'SYS','SYSTEM','WMSYS','SYSMAN')  order by a.owner ,qt desc;
    -- List instantiated objects
    set linesize 150
    select distinct SOURCE_DATABASE,
           source_object_owner||'.'||source_object_name own_obj,
           SOURCE_OBJECT_TYPE objt, instantiation_scn, IGNORE_SCN,
           apply_database_link lnk
    from  DBA_APPLY_INSTANTIATED_OBJECTS
    order by 1,2;
    -- Propagation senders : to run on source site
    prompt
    prompt ++ EVENTS AND BYTES PROPAGATED FOR EACH PROPAGATION++
    prompt
    COLUMN Elapsed_propagation_TIME HEADING 'Elapsed |Propagation Time|(Seconds)' FORMAT 9999999999999999
    COLUMN TOTAL_NUMBER HEADING 'Total Events|Propagated' FORMAT 9999999999999999
    COLUMN SCHEDULE_STATUS HEADING 'Schedule|Status'
    column elapsed_dequeue_time HEADING 'Total Dequeue|Time (Secs)'
    column elapsed_propagation_time HEADING 'Total Propagation|Time (Secs)' justify c
    column elapsed_pickle_time HEADING 'Total Pickle| Time(Secs)' justify c
    column total_time HEADING 'Elapsed|Pickle Time|(Seconds)' justify c
    column high_water_mark HEADING 'High|Water|Mark'
    column acknowledgement HEADING 'Target |Ack'
    prompt pickle : Pickling is the action of building the messages, wrapping the LCR before enqueuing
    prompt
    set linesize 150
    SELECT p.propagation_name,q.message_delivery_mode queue_type, DECODE(p.STATUS,
                    'DISABLED', 'Disabled', 'ENABLED', 'Enabled') SCHEDULE_STATUS, q.instance,
                    q.total_number TOTAL_NUMBER, q.TOTAL_BYTES/1048576 total_bytes,
                    q.elapsed_dequeue_time/100 elapsed_dequeue_time, q.elapsed_pickle_time/100 elapsed_pickle_time,
                    q.total_time/100 elapsed_propagation_time
      FROM  DBA_PROPAGATION p, dba_queue_schedules q
            WHERE   p.DESTINATION_DBLINK = NVL(REGEXP_SUBSTR(q.destination, '[^@]+', 1, 2), q.destination)
      AND q.SCHEMA = p.SOURCE_QUEUE_OWNER
      AND q.QNAME = p.SOURCE_QUEUE_NAME
      order by q.message_delivery_mode, p.propagation_name;
    -- propagation receiver : to run on apply site
    COLUMN SRC_QUEUE_NAME HEADING 'Source|Queue|Name' FORMAT A20
    COLUMN DST_QUEUE_NAME HEADING 'Target|Queue|Name' FORMAT A20
    COLUMN SRC_DBNAME HEADING 'Source|Database' FORMAT A15
    COLUMN ELAPSED_UNPICKLE_TIME HEADING 'Unpickle|Time' FORMAT 99999999.99
    COLUMN ELAPSED_RULE_TIME HEADING 'Rule|Evaluation|Time' FORMAT 99999999.99
    COLUMN ELAPSED_ENQUEUE_TIME HEADING 'Enqueue|Time' FORMAT 99999999.99
    SELECT SRC_QUEUE_NAME,
           SRC_DBNAME,DST_QUEUE_NAME,
           (ELAPSED_UNPICKLE_TIME / 100) ELAPSED_UNPICKLE_TIME,
           (ELAPSED_RULE_TIME / 100) ELAPSED_RULE_TIME,
           (ELAPSED_ENQUEUE_TIME / 100) ELAPSED_ENQUEUE_TIME, TOTAL_MSGS,HIGH_WATER_MARK
      FROM V$PROPAGATION_RECEIVER;
    -- Apply reader
    col rsid format 99999 head "Reader|Sid" justify c
    COLUMN CREATION HEADING 'Message|Creation time' FORMAT A17 justify c
    COLUMN LATENCY HEADING 'Latency|in|Seconds' FORMAT 9999999
    col deqt format A15 head "Last|Dequeue Time" justify c
    SELECT APPLY_NAME, sid rsid , (DEQUEUE_TIME-DEQUEUED_MESSAGE_CREATE_TIME)*86400 LATENCY,
            TO_CHAR(DEQUEUED_MESSAGE_CREATE_TIME,'HH24:MI:SS MM/DD') CREATION, TO_CHAR(DEQUEUE_TIME,'HH24:MI:SS MM/DD') deqt,
            DEQUEUED_MESSAGE_NUMBER  FROM V$STREAMS_APPLY_READER;
    -- Do we have any library cache lock ?
    set head on pause off feed off linesize 150
    column event format a48 head "Event type"
    column wait_time format 999999 head "Total| waits"
    column seconds_in_wait format 999999 head " Time waited "
    column sid format 9999 head "Sid"
    column state format A17 head "State"
    col seq# format 999999
    select
      w.sid, s.status,w.seq#, w.event
      , w.wait_time, w.seconds_in_wait , w.p1 , w.p2 , w.p3 , w.state
    from
      v$session_wait w , v$session s
    where
      s.sid = w.sid and
      w.event != 'pmon timer'                  and
      w.event != 'rdbms ipc message'           and
      w.event != 'PL/SQL lock timer'           and
       w.event != 'SQL*Net message from client' and
      w.event != 'client message'              and
      w.event != 'pipe get'                    and
      w.event != 'Null event'                  and
      w.event != 'wakeup time manager'         and
      w.event != 'slave wait'                  and
      w.event != 'smon timer'
      and w.event != 'class slave wait'
      and w.event != 'LogMiner: wakeup event for preparer'
      and w.event != 'Streams AQ: waiting for time management or cleanup tasks'
      and w.event != 'LogMiner: wakeup event for builder'
      and w.event != 'Streams AQ: waiting for messages in the queue'
      and w.event != 'ges remote message'
      and w.event != 'gcs remote message'
      and w.event != 'Streams AQ: qmn slave idle wait'
      and w.event != 'Streams AQ: qmn coordinator idle wait'
      and w.event != 'ASM background timer'
      and w.event != 'DIAG idle wait'
      and w.seconds_in_wait > 0
    /

  • Setting Data Source dynamically in Crystal Enterprise

    A friend of mine sent me this:
    "Got a question for you.  I'm working with a vendor who is trying to deploy Crystal Reports in an Enterprise environment.  Can you tell me how to change the data source name dynamically for a RPT file?  They are telling me that the DNS name has to be in the RPT file and therefore changed manually for each environment.  This doesn't sound right to me."
    Doesn't sound right to me either. 
    Is there any reason why the data source can't be passed in code here ?
    I've changed the data source before in code and passed it to the report using VB.
    I mean, I'm not working in the Enterprise environment much, so I'm not sure.
    Is it any different ? How would I do it ?
    Thanks,
    The Panda

    Enterprise has the ability to enter logon info and change the database info when scheduling and using the report. Local Administrator should know how to do this.
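    On the code route the original post mentions, a minimal VB.NET sketch against the Crystal Reports .NET SDK (the path, credentials, and server name are made up; SetDatabaseLogon repoints the report at runtime):
    Dim rpt As New CrystalDecisions.CrystalReports.Engine.ReportDocument()
    rpt.Load("C:\Reports\Sales.rpt")
    ' Point the report at the current environment's server and database:
    rpt.SetDatabaseLogon("dbuser", "dbpassword", "PRODSERVER", "SalesDB")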

  • Master data delta update to data source not possible

    Hi,
    The 0MATERIAL_ATTR datasource delta update failed for a few days in the process chain, and when I tried to manually run the delta InfoPackage, I got this message:
    Last delta update is not yet completed
    Therefore, no new delta update is possible.
    You can start the request again
    if the last delta request is red or green in the monitor (QM activity)
    I did "manage" for this data source but none of the failed delta run showed in there. I also looked for cancelled jobs in sm37 but nothing came up either. So I ran a full upload without deleting the data as my user is waiting for this fix to be shown in the report. What can I do now to enable the delta run again without deleting the data?
    sharon

    Hi Sharon,
    Check in RSMO for that master data 0MATERIAL_ATTR whether there are any red/yellow requests. If so, change the QM status to red again, go to the InfoPackage, and trigger the load again (it will say that the earlier delta has failed and ask if you want to repeat the last delta); confirm with OK and execute the load again.
    In some cases the datasource will not support repeat deltas; in that case do a re-init for the master data load (no need to delete any data).
    Hope this helps. If you have any queries please ask.
    Thanks!
    Dharini

  • Truncate data in replication

    Hi,
    I have a problem with the functions removeDatabase and truncateClass. I want to clear all data in a database at a small cost, and I found these two APIs can meet my needs, but they always give me com.sleepycat.je.rep.DatabasePreemptedException when a request comes to a replica node; the master node works fine. Below is my code:
    trans = env.beginTransaction(null, null);
    closeDatabase(); // Close database and entity store
    env.removeDatabase(trans, "persist#EntityStore#xxx.BdbMember#mobile");
    env.removeDatabase(trans, "persist#EntityStore#xxx.BdbMember");
    env.removeDatabase(trans, "persist#EntityStore#xxx.Card#businessNo");
    env.removeDatabase(trans, "persist#EntityStore#xxx.Card");
    env.removeDatabase(trans, "persist#EntityStore#xxx.PointFreeze");
    env.removeDatabase(trans, "persist#EntityStore#xxx.RedMemberBlockPeriod");
    reOpenDatabase("persist#EntityStore#xxx.BdbMember");
    reOpenDatabase("persist#EntityStore#xxx.Card");
    reOpenDatabase("persist#EntityStore#xxx.PointFreeze");
    reOpenDatabase("persist#EntityStore#xxx.RedMemberBlockPeriod");
    trans.commit();
    private Database reOpenDatabase(String dbName) throws Exception {
        Transaction trans = null;
        Database db = null;
        try {
            trans = env.beginTransaction(null, null);
            DatabaseConfig dc = new DatabaseConfig();
            dc.setTransactional(true);
            dc.setAllowCreate(true);
            db = env.openDatabase(trans, dbName, dc);
            trans.commit();
        } catch (Exception e) {
            if (trans != null) {
                trans.abort();
            }
            throw e;
        }
        return db;
    }
    Please do help me!
    Thanks

    You are using Berkeley DB Java Edition High Availability, which is a different product from Berkeley DB (the C version). That's why we suggested that you move your question to the Berkeley DB Java Edition forum. I know that the product names are confusing.
    Please read the javadoc for DatabasePreemptedException at http://download.oracle.com/docs/cd/E17277_02/html/java/com/sleepycat/je/rep/DatabasePreemptedException.html. There you will find out that this happens when a database has been truncated, renamed, or deleted on the master. You will get this exception on the replica node. It tells you that the database has had a major change, and you must close and reopen your cursors, database and environment handles. Please read the javadoc for more details.
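    On the replica that usually boils down to something like this sketch (env, dbName, dbConfig, and readFromDatabase are assumed to exist in the surrounding class; the point is to discard and reopen the preempted handles):
    private void readSafely() {
        try {
            readFromDatabase(db);   // hypothetical read path using cursors
        } catch (DatabasePreemptedException e) {
            // The database was truncated/renamed/removed on the master:
            // close all cursors and this handle, then reopen a fresh one.
            db.close();
            db = env.openDatabase(null, dbName, dbConfig);
        }
    }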

  • Name not found exception: look up data source

    The following code is from a standalone client. It is used to look up a data source configured via the Admin Console.
    Hashtable p = new Hashtable();
    p.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.cosnaming.CNCtxFactory");
    p.put(Context.PROVIDER_URL, "iiop://localhost:3700");
    Context ctx = new InitialContext(p);
    Object ds = ctx.lookup("jdbc/vnexDS");
    it throws the following error:
    javax.naming.NameNotFoundException. Root exception is org.omg.CosNaming.NamingContextPackage.NotFound: IDL:omg.org/CosNaming/NamingContext/NotFound:1.0
    at org.omg.CosNaming.NamingContextPackage.NotFoundHelper.read(NotFoundHelper.java:72)
    at org.omg.CosNaming._NamingContextExtStub.resolve(_NamingContextExtStub.java:406)
    at com.sun.jndi.cosnaming.CNCtx.callResolve(CNCtx.java:440)
    at com.sun.jndi.cosnaming.CNCtx.lookup(CNCtx.java:492)
    at com.sun.jndi.cosnaming.CNCtx.lookup(CNCtx.java:470)
    at javax.naming.InitialContext.lookup(InitialContext.java:347)
    at com.test.TestSunONE.main(TestSunONE.java:22)
    But I can see jdbc/vnexDS by browsing the jndi tree.
    If the above code is used to look up the remote home, it works.
    I don't know why. I am new to Sun ONE; please tell me what's wrong with the above code.
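    One quick sanity check, as a sketch: list what the server actually bound under jdbc/ using the same context, and compare the names against what you are looking up.
    javax.naming.NamingEnumeration list = ctx.list("jdbc");
    while (list.hasMore()) {
        javax.naming.NameClassPair pair = (javax.naming.NameClassPair) list.next();
        System.out.println(pair.getName() + " -> " + pair.getClassName());
    }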

    Amirtharaj -- There is no known issue with chaining together calls to beans. A few pieces of information might help with troubleshooting the problem.
    - Are all the EJBs in one application?
    - Are all the EJBs in one jar?
    - Are any of the classes for the EJBs in a separate jar?
    - Are you explicitly setting JNDI properties, or are you using the defaults in the container? If you are, how are they set?
    - Can you post a stack trace and source around where the lookup fails?
    Thanks -- Jeff

  • Data Guard replication question

    I'm just curious about how things work when creating a physical standby database with grid control.
    I would like to know what happens with current changes to the database during the transfer of the database files. After the files are transferred, does the standby then need to sync with the primary database for all the changes that happened during the copy?
    For example: if it took an entire day to transfer the database files, would I then have a full day of log files to catch up after the transfer was complete?
    Thanks
    Luke

    Ouch, thanks for the bad news Larry lol. I was hoping for a miracle, but I guess that would be too good to be true. If I were to wait for the database to replicate and then apply logs I would be 6 days behind. I'm going to have to try an alternate means of getting the data there, maybe a backup-and-restore method. Thanks again for the information.
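    If you do let the standby catch up from logs instead, a small sketch for gauging the backlog (run against both databases and compare the sequences):
    -- highest archived sequence (primary) vs. highest applied (standby)
    SELECT thread#, MAX(sequence#) AS last_seq
    FROM   v$archived_log
    WHERE  applied = 'YES'   -- drop this predicate on the primary
    GROUP  BY thread#;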

  • Data Source Replication

    Dear Friends, Happy New Year.
    Please give me a solution for the following problem.
    I have an APO system and a BW system.
    In BW, I have an ODS and a Cube; the Cube gets its data from the ODS. I made some changes to the transfer rules and update rules in Development, and now I want to transport them to Quality. Do I need to collect the 0<ODS> datasource (this will act as a datasource for the Cube), and do I need to replicate it in Quality and Production?
    Thanks

    Hi Ganga,
    No need to transport the datasources; all the datasources are available in every system in the landscape. So just transport the transfer and update rules (and every routine included in them, if any), transport the Cube and ODS, and assign the datasources in the production system itself. Check with your team or with your Basis folks; they will tell you if there is any change in the naming convention of the datasource. Normally it is directly available in all 3 systems, so just create the data package and load the data directly.
    Hope everything is clear.
    assign points if it helps...
    Regards,
    ashok

  • Data Guard: Replication on standby is not working.

    Experts,
    I've executed below command on standby database :
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
    I thought this command would bring all recent changes immediately. However after executing this statement, replication is not happening. I mean if I update something on primary , it's not getting replicated on standby db.
    Later I realized this command is for switching standby to primary.
    How to roll back the above change on standby ?
    Thanks

    Please see below:
    An attempt was made to cancel a managed recovery session but no managed recovery session was active.
    If you restart the recovery does it error?
    * I restarted recovery on the Standby and it did not error out. See below. *
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT ;
    Database altered.
    If not and you do a log switch on the Primary does the archive move and not apply, or not move?
    I have done log switch on Primary with below command and it did not give any error:
    SQL> alter system switch logfile;
    System altered.
    In the primary DB alert log file I can see below errors :
    Mon Dec 12 10:59:52 2011
    Thread 1 advanced to log sequence 322 (LGWR switch)
    *Current log# 2 seq# 322 mem# 0: +DBDATA/iam/onlinelog/group_2.262.764445763*
    *Current log# 2 seq# 322 mem# 1: +DBFLASH/iam/onlinelog/group_2.258.764445763*
    Mon Dec 12 10:59:52 2011
    *Deleted Oracle managed file +DBFLASH/iam/archivelog/2011_12_01/thread_1_seq_218.383.768756737*
    Archived Log entry 804 added for thread 1 sequence 321 ID 0x41fac87c dest 1:
    Mon Dec 12 10:59:52 2011
    *FAL[server, ARC3]: FAL archive failed, see trace file.*
    ARCH: FAL archive failed. Archiver continuing
    Any light from above information ?
    Thanks
    Edited by: 859875 on Dec 12, 2011 8:09 AM
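    One way to chase that FAL archive error, as a sketch: ask each archive destination on the primary what it last complained about.
    SELECT dest_id, status, error
    FROM   v$archive_dest
    WHERE  dest_id <= 2;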
