Queue-to-Queue Propagation vs. Downstream Capture

Can someone please provide some insight into the advantages and disadvantages of using queue-to-queue propagation vs. downstream capture?
Thanks for your input.
-Reid

As far as my knowledge goes, queue-to-queue propagation is a way of messaging between different queues belonging to different stages of replication (such as staging and propagation), and it has its own job processes. Downstream capture, on the other hand, is a capture configuration in which changes are captured at a database other than the one where they actually occurred. The database where the changes occur is called the "local database", while the database where the changes are captured is called the downstream database, because from there the changes are passed downstream to the different nodes where the apply processes reside.
Kapil
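
To make the distinction concrete, here is a minimal, untested sketch (all object names, queue names, db links, and global names are placeholders, not from the thread). Queue-to-queue propagation moves messages between exactly one pair of queues; downstream capture is created on the remote database and mines redo shipped from the source:

BEGIN
  -- Queue-to-queue propagation: a dedicated job moves messages between
  -- exactly this pair of queues (placeholder names throughout).
  DBMS_PROPAGATION_ADM.CREATE_PROPAGATION(
    propagation_name   => 'src_to_dest_prop',
    source_queue       => 'strmadmin.src_queue',
    destination_queue  => 'strmadmin.dest_queue',
    destination_dblink => 'dest.example.com',
    queue_to_queue     => TRUE);
END;
/
BEGIN
  -- Downstream capture: created on the downstream database; changes made at
  -- the source are captured here from the redo the source ships over.
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name      => 'strmadmin.streams_queue',
    capture_name    => 'dstream_capture',
    source_database => 'src.example.com');
END;
/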

Similar Messages

  • On propagation from queue to queue on same instance

    Hi !
    I have a queue that I need to propagate into another queue (and later add a transformation during the process). I have created the queues for the moment with the same payload type in order to test easily, but the propagation does not work. The messages get into the first queue, state READY.
    I'm on 10.2.0.3 on AIX
    Here is my code:
    CREATE OR REPLACE TYPE mst_input_type AS OBJECT (
      sender_id VARCHAR2(30),
      subject   VARCHAR2(30),
      text      VARCHAR2(1000));
    /
    -- input queue: drop any previous version
    exec DBMS_AQADM.STOP_QUEUE ('MST_INPUT_QUEUE');
    exec DBMS_AQADM.DROP_QUEUE ('MST_INPUT_QUEUE');
    exec DBMS_AQADM.DROP_QUEUE_TABLE (queue_table => 'MST_INPUT_QUEUE_TABLE');
    -- output queue, same payload format: drop any previous version
    exec DBMS_AQADM.STOP_QUEUE ('MST_OUTPUT_QUEUE2');
    exec DBMS_AQADM.DROP_QUEUE ('MST_OUTPUT_QUEUE2');
    exec DBMS_AQADM.DROP_QUEUE_TABLE (queue_table => 'MST_OUTPUT_QUEUE_TABLE2');
    -- input queue/table
    BEGIN
      DBMS_AQADM.CREATE_QUEUE_TABLE (
        queue_table        => 'MST_INPUT_QUEUE_TABLE',
        queue_payload_type => 'mst_input_type',
        multiple_consumers => TRUE);
      DBMS_AQADM.CREATE_QUEUE (
        queue_name  => 'MST_input_QUEUE',
        queue_table => 'MST_input_QUEUE_TABLE');
      DBMS_AQADM.START_QUEUE (
        queue_name => 'MST_input_QUEUE');
    END;
    /
    -- output queue/table, same payload format
    BEGIN
      DBMS_AQADM.CREATE_QUEUE_TABLE (
        queue_table        => 'MST_output_QUEUE_TABLE2',
        queue_payload_type => 'mst_input_type');
      DBMS_AQADM.CREATE_QUEUE (
        queue_name  => 'MST_output_QUEUE2',
        queue_table => 'MST_output_QUEUE_TABLE2');
      DBMS_AQADM.START_QUEUE (
        queue_name => 'MST_output_QUEUE2');
    END;
    /
    -- delete subscriber
    DECLARE
      subscriber sys.aq$_agent;
    BEGIN
      subscriber := sys.aq$_agent('subscriber1', 'MST_output_QUEUE2', NULL);
      DBMS_AQADM.REMOVE_SUBSCRIBER(queue_name => 'MST_input_QUEUE', subscriber => subscriber);
    END;
    /
    -- create a subscriber
    DECLARE
      subscriber sys.aq$_agent;
    BEGIN
      subscriber := sys.aq$_agent('subscriber1', 'mst.MST_output_QUEUE2', NULL);
      DBMS_AQADM.ADD_SUBSCRIBER(queue_name => 'MST_input_QUEUE', subscriber => subscriber, queue_to_queue => TRUE);
    END;
    /
    BEGIN
      DBMS_AQADM.SCHEDULE_PROPAGATION(queue_name => 'MST_input_QUEUE', destination => NULL,
        destination_queue => 'mst.MST_output_QUEUE2',
        next_time => 'SYSDATE + 1/(24*60)');
    END;
    /
    -- write a message
    DECLARE
      message_handle     RAW(16);
      enqueue_options    dbms_aq.enqueue_options_t;
      message_properties dbms_aq.message_properties_t;
      payload            mst_input_type;
    BEGIN
      message_properties.priority := 5;
      message_properties.delay    := 1; -- one second delay before sending
      payload := mst_input_type(NULL, NULL, NULL);
      payload.sender_id := user;
      payload.subject   := 'Message fra ' || user;
      payload.text      := 'Dette er en message fra ' || user || ' sendt ca. kl. ' || systimestamp;
      DBMS_AQ.ENQUEUE(queue_name => 'MST_input_QUEUE',
        enqueue_options    => enqueue_options,
        message_properties => message_properties,
        payload            => payload,
        msgid              => message_handle);
      plog.info('Messageid=' || message_handle);
      COMMIT;
    END;
    /
    Can you give me a hint on how to get further? I have tried with and without the queue-to-queue options, but with little success.
    Regards
    Mette, DK

    By trying and trying I got it to work ... long hours.
    I removed the queue-to-queue option and the subscriber name from the subscription definition, and then I removed the destination_queue from the propagation schedule. Then it worked.
    Mette
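
    For reference, a minimal untested sketch of the configuration Mette describes as working (this is one plausible reading of "removed the queue-to-queue option and the subscriber name"; queue names are as in the thread):

    -- Subscriber: no queue_to_queue flag; the agent keeps the address but has no name.
    DECLARE
      subscriber sys.aq$_agent;
    BEGIN
      subscriber := sys.aq$_agent(NULL, 'mst.MST_output_QUEUE2', NULL);
      DBMS_AQADM.ADD_SUBSCRIBER(queue_name => 'MST_input_QUEUE',
                                subscriber => subscriber);
    END;
    /
    -- Schedule: queue-level propagation, no destination_queue parameter.
    BEGIN
      DBMS_AQADM.SCHEDULE_PROPAGATION(queue_name  => 'MST_input_QUEUE',
                                      destination => NULL,
                                      next_time   => 'SYSDATE + 1/(24*60)');
    END;
    /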

  • Restart downstream capture server, dequeue operation did not work

    Hi, everyone. I created a real-time downstream capture on SUSE Linux 10 with Oracle 10gR2, and it worked. But after I restarted the downstream DB, replication didn't work even though the capture, apply, and propagation processes were all in normal status. In V$BUFFERED_QUEUES I saw that messages that had not been dequeued had spilled to disk. Later I configured real-time downstream capture on a Windows XP platform and got the same situation. I searched this forum and found a thread about it. I changed AQ_TM_PROCESSES from 0 to 1 on the Windows XP platform and replication started to work. But when I changed this parameter on the SUSE Linux platform, replication still didn't work and messages were spilled to disk according to V$BUFFERED_QUEUES. I don't know why. I did all of this in VMware Workstation. I'm not a DBA and don't have a Metalink account. Any help with this is appreciated.
    thanks.
    JunWang

    Spilled to disk? Are they in the AQ$_<queue_table>_P table or in the queue table? Anything in DBA_APPLY_ERROR?
    Can you give the following output :
    set lines 190
    -- reader
    col DEQUEUED_MESSAGE_NUMBER for 999999999999
    SELECT ap.APPLY_NAME, DECODE(ap.APPLY_CAPTURED,'YES','Captured LCRS', 'NO','User-Enqueued','UNKNOWN') APPLY_CAPTURED,
           SUBSTR(s.PROGRAM,INSTR(S.PROGRAM,'(')+1,4) PROCESS_NAME, r.STATE, r.TOTAL_MESSAGES_DEQUEUED, r.sga_used
           FROM V$STREAMS_APPLY_READER r, V$SESSION s, DBA_APPLY ap
           WHERE r.SID = s.SID AND
                 r.SERIAL# = s.SERIAL# AND
                 r.APPLY_NAME = ap.APPLY_NAME;
    SELECT APPLY_NAME, sid rsid , (DEQUEUE_TIME-DEQUEUED_MESSAGE_CREATE_TIME)*86400 LATENCY,
            TO_CHAR(DEQUEUED_MESSAGE_CREATE_TIME,'HH24:MI:SS MM/DD') CREATION, TO_CHAR(DEQUEUE_TIME,'HH24:MI:SS MM/DD') deqt,
            DEQUEUED_MESSAGE_NUMBER  FROM V$STREAMS_APPLY_READER;
    -- coordinator : compare to reader to see if there is an effective apply problem
    col totr format 999999999 head "Total|Received"
    col tad format 9999999 head "Total|Admin"
    col appn format A22 head "Apply name"
    col terr format 9999999 head "Total|Errors"
    col twd format 9999999 head "Total|Wait"
    col TOTAL_ROLLBACKS format 9999999 head "Total|Rollback"
    col twc format 9999999 head "Total|Wait|Commits"
    select apply_name appn, apply#,sid,state, total_received totr, total_applied tap, total_wait_deps twd, TOTAL_ROLLBACKS,
          total_wait_commits twc, total_errors terr, to_char(hwm_time,'DD-MM HH24:MI:SS')hwt
    from v$streams_apply_coordinator order by apply_name;
    -- any errors?
    SELECT queue_name,source_commit_scn scn, message_count, source_database,LOCAL_TRANSACTION_ID, ERROR_MESSAGE FROM DBA_APPLY_ERROR order by message_count desc;
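
    As a quick check for the spillage symptom itself, a hedged sketch (10gR2 view and parameter names, to the best of my knowledge): how much is buffered vs. spilled per queue, plus the queue-monitor setting that fixed the Windows XP case in this thread:

    -- Buffered vs. spilled message counts per buffered queue.
    SELECT queue_schema, queue_name, num_msgs, spill_msgs
      FROM v$buffered_queues;
    -- The change reported to make replication resume on Windows XP.
    ALTER SYSTEM SET aq_tm_processes = 1 SCOPE = BOTH;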

  • Multiple sources with single downstream capture

    Is it possible to have multiple source machines all send their redo logs to a single downstream capture DB that will collect LCRs and queue the LCRs for all the source machines?

    Yes, that's what downstream replication is all about. Read the 11g manual if you want to know how to do that.
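
    A hedged sketch of what that looks like (service and database names are placeholders): each source ships its redo to the downstream database, and the downstream database runs one capture process per source.

    -- On EACH source database: point a redo destination at the downstream DB.
    ALTER SYSTEM SET log_archive_dest_2 =
      'SERVICE=dwnstrm ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dwnstrm';
    ALTER SYSTEM SET log_archive_dest_state_2 = ENABLE;
    -- On the downstream database: one capture process per source database.
    BEGIN
      DBMS_CAPTURE_ADM.CREATE_CAPTURE(
        queue_name      => 'strmadmin.streams_queue',
        capture_name    => 'capture_src1',
        source_database => 'src1.example.com');  -- repeat for src2, src3, ...
    END;
    /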

  • Error running Archived-Log Downstream Capture Process

    I have created an archived-log downstream capture process with reference to the following link:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28321/strms_ccap.htm#i1011654
    After starting the capture process, I get the following error in the trace:
    ============================================================================
    Trace file /home/oracle/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_cp01_13572.trc
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ORACLE_HOME = /home/oracle/app/oracle/product/11.2.0/dbhome_1
    System name: Linux
    Node name: localhost.localdomain
    Release: 2.6.18-194.el5
    Version: #1 SMP Fri Apr 2 14:58:14 EDT 2010
    Machine: x86_64
    Instance name: orcl
    Redo thread mounted by this instance: 1
    Oracle process number: 37
    Unix process pid: 13572, image: [email protected] (CP01)
    *** 2011-08-20 14:21:38.899
    *** SESSION ID:(146.2274) 2011-08-20 14:21:38.899
    *** CLIENT ID:() 2011-08-20 14:21:38.899
    *** SERVICE NAME:(SYS$USERS) 2011-08-20 14:21:38.899
    *** MODULE NAME:(STREAMS) 2011-08-20 14:21:38.899
    *** ACTION NAME:(STREAMS Capture) 2011-08-20 14:21:38.899
    knlcCopyPartialCapCtx(), setting default poll freq to 0
    knlcUpdateMetaData(), before copy IgnoreUnsuperrTable:
    source:
    Ignore Unsupported Error Table: 0 entries
    target:
    Ignore Unsupported Error Table: 0 entries
    knlcUpdateMetaData(), after copy IgnoreUnsuperrTable:
    source:
    Ignore Unsupported Error Table: 0 entries
    target:
    Ignore Unsupported Error Table: 0 entries
    knlcfrectx_Init: rs=STRMADMIN.RULESET$_66, nrs=., cuid=0, cuid_prv=0, flags=0x0
    knlcObtainRuleSetNullLock: rule set name "STRMADMIN"."RULESET$_66"
    knlcObtainRuleSetNullLock: rule set name
    knlcmaInitCapPrc+
    knlcmaGetSubsInfo+
    knlqgetsubinfo
    subscriber name EMP_DEQ
    subscriber dblinke name
    subscriber name APPLY_EMP
    subscriber dblinke name
    knlcmaTerm+
    knlcmaTermSrvs+
    knlcmaTermSrvs-
    knlcmaTerm-
    knlcCCAInit()+, err = 26802
    knlcnShouldAbort: examining error stack
    ORA-26802: Queue "STRMADMIN"."STREAMS_QUEUE" has messages.
    knlcnShouldAbort: examing error 26802
    knlcnShouldAbort: returning FALSE
    knlcCCAInit: no combined capture and apply optimization err = 26802
    knlzglr_GetLogonRoles: usr = 91,
    knlqqicbk - AQ access privilege checks:
    userid=91, username=STRMADMIN
    agent=STRM05_CAPTURE
    knlqeqi()
    knlcRecInit:
    Combined Capture and Apply Optimization is OFF
    Apply-state checkpoint mode is OFF
    last_enqueued, last_acked
    0x0000.00000000 [0] 0x0000.00000000 [0]
    captured_scn, applied_scn, logminer_start, enqueue_filter
    0x0000.0004688c [288908] 0x0000.0004688c [288908] 0x0000.0004688c [288908] 0x0000.0004688c [288908]
    flags=0
    Starting persistent Logminer Session : 13
    krvxats retval : 0
    CKPT_FREE event=FALSE CCA=FALSE Checkptfreq=1000 AV/CDC flags=0
    krvxssp retval : 0
    krvxsda retval : 0
    krvxcfi retval : 0
    #1: krvxcfi retval : 0
    #2: krvxcfi retval : 0
    About to call krvxpsr : startscn: 0x0000.0004688c
    state before krvxpsr: 0
    dbms_logrep_util.get_checkpoint_scns(): logminer sid = 13 applied_scn = 288908
    dbms_logrep_util.get_checkpoint_scns(): prev_ckpt_scn = 0 curr_ckpt_scn = 0
    *** 2011-08-20 14:21:41.810
    Begin knlcDumpCapCtx:*******************************************
    Error 1304 : ORA-01304: subordinate process error. Check alert and trace logs
    Capture Name: STRM05_CAPTURE : Instantiation#: 65
    *** 2011-08-20 14:21:41.810
    ++++ Begin KNST dump for Sid: 146 Serial#: 2274
    Init Time: 08/20/2011 14:21:38
    ++++Begin KNSTCAP dump for : STRM05_CAPTURE
    Capture#: 1 Logminer_Id: 13 State: DICTIONARY INITIALIZATION [ 08/20/2011 14:21:38]
    Capture_Message_Number: 0x0000.00000000 [0]
    Capture_Message_Create_Time: 01/01/1988 00:00:00
    Enqueue_Message_Number: 0x0000.00000000 [0]
    Enqueue_Message_Create_Time: 01/01/1988 00:00:00
    Total_Messages_Captured: 0
    Total_Messages_Created: 0 [ 01/01/1988 00:00:00]
    Total_Messages_Enqueued: 0 [ 01/01/1988 00:00:00]
    Total_Full_Evaluations: 0
    Elapsed_Capture_Time: 0 Elapsed_Rule_Time: 0
    Elapsed_Enqueue_Time: 0 Elapsed_Lcr_Time: 0
    Elapsed_Redo_Wait_Time: 0 Elapsed_Pause_Time: 0
    Apply_Name :
    Apply_DBLink :
    Apply_Messages_Sent: 0
    ++++End KNSTCAP dump
    ++++ End KNST DUMP
    +++ Begin DBA_CAPTURE dump for: STRM05_CAPTURE
    Capture_Type: DOWNSTREAM
    Version:
    Source_Database: ORCL2.LOCALDOMAIN
    Use_Database_Link: NO
    Logminer_Id: 13 Logfile_Assignment: EXPLICIT
    Status: ENABLED
    First_Scn: 0x0000.0004688c [288908]
    Start_Scn: 0x0000.0004688c [288908]
    Captured_Scn: 0x0000.0004688c [288908]
    Applied_Scn: 0x0000.0004688c [288908]
    Last_Enqueued_Scn: 0x0000.00000000 [0]
    Capture_User: STRMADMIN
    Queue: STRMADMIN.STREAMS_QUEUE
    Rule_Set_Name[+]: "STRMADMIN"."RULESET$_66"
    Checkpoint_Retention_Time: 60
    +++ End DBA_CAPTURE dump
    +++ Begin DBA_CAPTURE_PARAMETERS dump for: STRM05_CAPTURE
    PARALLELISM = 1 Set_by_User: NO
    STARTUP_SECONDS = 0 Set_by_User: NO
    TRACE_LEVEL = 7 Set_by_User: YES
    TIME_LIMIT = -1 Set_by_User: NO
    MESSAGE_LIMIT = -1 Set_by_User: NO
    MAXIMUM_SCN = 0xffff.ffffffff [281474976710655] Set_by_User: NO
    WRITE_ALERT_LOG = TRUE Set_by_User: NO
    DISABLE_ON_LIMIT = FALSE Set_by_User: NO
    DOWNSTREAM_REAL_TIME_MINE = FALSE Set_by_User: NO
    MESSAGE_TRACKING_FREQUENCY = 2000000 Set_by_User: NO
    SKIP_AUTOFILTERED_TABLE_DDL = TRUE Set_by_User: NO
    SPLIT_THRESHOLD = 1800 Set_by_User: NO
    MERGE_THRESHOLD = 60 Set_by_User: NO
    +++ End DBA_CAPTURE_PARAMETERS dump
    +++ Begin DBA_CAPTURE_EXTRA_ATTRIBUTES dump for: STRM05_CAPTURE
    USERNAME Include:YES Row_Attribute: YES DDL_Attribute: YES
    +++ End DBA_CAPTURE_EXTRA_ATTRIBUTES dump
    ++ LogMiner Session Dump Begin::
    SessionId: 13 SessionName: STRM05_CAPTURE
    Start SCN: 0x0000.00000000 [0]
    End SCN: 0x0000.00046c2d [289837]
    Processed SCN: 0x0000.0004689e [288926]
    Prepared SCN: 0x0000.000468d4 [288980]
    Read SCN: 0x0000.000468e2 [288994]
    Spill SCN: 0x0000.00000000 [0]
    Resume SCN: 0x0000.00000000 [0]
    Branch SCN: 0x0000.00000000 [0]
    Branch Time: 01/01/1988 00:00:00
    ResetLog SCN: 0x0000.00000001 [1]
    ResetLog Time: 08/18/2011 16:46:59
    DB ID: 740348291 Global DB Name: ORCL2.LOCALDOMAIN
    krvxvtm: Enabled threads: 1
    Current Thread Id: 1, Thread State 0x01
    Current Log Seqn: 107, Current Thrd Scn: 0x0000.000468e2 [288994]
    Current Session State: 0x20005, Current LM Compat: 0xb200000
    Flags: 0x3f2802d8, Real Time Apply is Off
    +++ Additional Capture Information:
    Capture Flags: 4425
    Logminer Start SCN: 0x0000.0004688c [288908]
    Enqueue Filter SCN: 0x0000.0004688c [288908]
    Low SCN: 0x0000.00000000 [0]
    Capture From Date: 01/01/1988 00:00:00
    Capture To Date: 01/01/1988 00:00:00
    Restart Capture Flag: NO
    Ping Pending: NO
    Buffered Txn Count: 0
    -- Xid Hash entry --
    -- LOB Hash entry --
    -- No TRIM LCR --
    Unsupported Reason: Unknown
    --- LCR Dump not possible ---
    End knlcDumpCapCtx:*********************************************
    *** 2011-08-20 14:21:41.810
    knluSetStatus()+{
    *** 2011-08-20 14:21:44.917
    knlcapUpdate()+{
    Updated streams$_capture_process
    finished knlcapUpdate()+ }
    finished knluSetStatus()+ }
    knluGetObjNum()+
    knlsmRaiseAlert: keltpost retval is 0
    kadso = 0 0
    KSV 1304 error in slave process
    *** 2011-08-20 14:21:44.923
    ORA-01304: subordinate process error. Check alert and trace logs
    knlz_UsrrolDes()
    knstdso: state object 0xb644b568, action 2
    knstdso: releasing so 0xb644b568 for session 146, type 0
    knldso: state object 0xa6d0dea0, action 2 memory 0x0
    kadso = 0 0
    knldso: releasing so 0xa6d0dea0
    OPIRIP: Uncaught error 447. Error stack:
    ORA-00447: fatal error in background process
    ORA-01304: subordinate process error. Check alert and trace logs
    Any suggestions?
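
    (The query referred to below was dropped when the thread was archived; judging from the output columns, it was presumably something like this:)

    SELECT capture_name, status, error_message
      FROM dba_capture
     WHERE capture_name = 'STRM05_CAPTURE';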

    Output of above query
    ==============================
    CAPTURE_NAME STATUS ERROR_MESSAGE
    STRM05_CAPTURE ABORTED ORA-01304: subordinate process error. Check alert and trace logs
    Alert log.xml
    =======================
    <msg time='2011-08-25T16:58:01.865+05:30' org_id='oracle' comp_id='rdbms'
    client_id='' type='UNKNOWN' level='16'
    host_id='localhost.localdomain' host_addr='127.0.0.1' module='STREAMS'
    pid='30921'>
    <txt>Errors in file /home/oracle/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_cp01_30921.trc:
    ORA-01304: subordinate process error. Check alert and trace logs
    </txt>
    </msg>
    The orcl_cp01_30921.trc file contains the same content posted in the first message.

  • Save Queue / Watch Queue For "Build Server"

    I'm trying to use Adobe Media Encoder CS6 as what we in the programming world call a "build server", i.e. watch for inputs that are new or changed and build output automatically when it finds them.
    Specifically, I want AME to automatically notice when there are new or changed PPro sequences, and automatically encode output for those sequences.
    At first, the "Watched Folder" feature seemed perfect: it will monitor a given folder and, when a new file pops up, automatically encode it.
    But there are 2 problems...
    It doesn't seem to support sequences. Seriously?
    Even if it did support sequences, it only seems to look for new items - not modified ones
    So then I started thinking about the "Save Queue" approach (queue up all my PPro sequences in AME, save the queue, back up the "batch.xml" file, then "restore" the "batch.xml" file later when I need to re-export).
    Doing that would give me a kind of "Watched Folder" functionality for sequences, except:
    It's actually the opposite of Watched Folder in that it would only re-export existing sequences and *not* new ones (of course, I guess I could always restore, add any new sequences, then re-back-up the "batch.xml" again...)
    It's not automatic - I'd have to manually do the restore and open AME
    A sequence would always be encoded regardless of whether it actually changed.
    The "Watched Folder" seems to be a no-go because it doesn't support sequences. The "batch.xml" seems pretty kludgy, but I guess it's better than having to manually re-enter the sequence information every time it changes (i.e. PPro > select sequence > File > Export > Select filename > Say yes to replace it > choose "entire sequence" > choose "Queue" > say "yes" to replace filename again > ugh).
    Does anybody have any ideas? I can't believe I'm the only person in the world who has this seemingly basic unmet need.

    Oh, one other thing I noticed about the "Save Queue" approach.
    If you look at the batch.xml file itself, you'll notice that the "parentprojectfile" is not the sequence you exported, but rather a copy of it.
    For example, I have a project "e:\video\home movies\dvd7-003.prproj", with a sequence called "master". I exported this sequence via Export > Media > Queue, then looked at the "batch.xml" that was created by AME > File > Save Queue.
    The parentprojectfile was not "e:\video\home movies\dvd7-003.prproj", but rather "c:\windows\temp\dvd-003_8.prproj". At first I thought it was just a "snapshot" copy of the project, but a file comparison showed they are very, very different. Actually, a cursory glance would have shown that: my e:\ project file is 7.7MB, the c:\ project file is only 5.5MB.
    So long story short, I would not trust a backup / restore of "batch.xml" to do what I want.
    It's not pointing to the same project, it's not even pointing to a copy of the project, and if my temp directory gets cleaned out, AME will fail when you try to encode (since it's referencing a file that no longer exists).

  • Message Status as "Scheduled" and Queue Status "Queue Stopped".

    Hi friends,
    My scenario is from Peoplesoft -> XI -> BI
    The message has reached BI, but in SXMB_MONI it's showing Message Status as "Scheduled" and Queue Status as "Queue Stopped".
    How to proceed further ? How can I start that queue ?
    Thanks in advance,
    Neena John

    Hi Neena,
    Go to SXMB_ADM -> Manage Queues -> Register Queues
    More on queues
    XI :  How to Re-Process failed XI Messages Automatically
    Run the report RSXMB_REGISTER_QUEUES and register the queues
    Run the report RSXMB_RESTART_MESSAGES to restart your messages
    Refer this:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/20bb9649-e86e-2910-7aa9-88ed4972a5f6
    Regards,
    Vinod.

  • Error listening to Queue carnot Queue

    I get the following error in my application.log and do not really know what to do with it:
    java.lang.InstantiationException: Error: com.evermind.server.jms.EvermindQueueSession
         at com.evermind.server.jms.OrionServerSessionPool.getServerSessionFull(OrionServerSessionPool.java:377)
         at com.evermind.server.ejb.MessageDrivenHome.run(MessageDrivenHome.java:769)
         at com.evermind.util.ThreadPoolThread.run(ThreadPoolThread.java:66)
    my jms.xml looks as follows:
    <?xml version="1.0"?>
    <!DOCTYPE jms-server PUBLIC "Orion JMS server" "http://xmlns.oracle.com/ias/dtds/jms-server.dtd">
    <jms-server port="9127" host="127.0.0.1">
    <!--Queue bindings, these queues will be bound to their respective JNDI path for later retrieval -->
    <queue-connection-factory location="jms/carnotQCF" password="carnot" port="9127" username="admin">
    </queue-connection-factory>
    <queue-connection-factory location="jms/appointmentQCF" password="carnot" port="9127" username="admin">
    </queue-connection-factory>
    <queue-connection-factory location="jms/objectMessageQCF" password="carnot" port="9127" username="admin">
    </queue-connection-factory>
    <queue-connection-factory location="jms/d0012QCF" password="carnot" port="9127" username="admin">
    </queue-connection-factory>
    <!-- path to the log-file where JMS-events/errors are stored-->
    <!-- log>
    <file path="../log/jms.log" />
    </log -->
    <queue name="carnot Queue" location="jms/carnotqueue" persistence-file="../persistence/MDB/carnot.queue" >
    <description>Appointment Queue</description>
    </queue>
    <queue name="Appointment Queue" location="jms/appointmentQ" persistence-file="../persistence/MDB/appointment.queue" >
    <description>Appointment Queue</description>
    </queue>
    <queue name="Object Message Queue" location="jms/objectMessageQ" >
    <description>Object Message Queue</description>
    </queue>
    <queue name="D0012 Queue" location="jms/d0012Q" persistence-file="../persistence/MDB/d012.queue" >
    <description>D012 queue</description>
    </queue>
    </jms-server>
    orion-ejb-jar.xml for the MDB:
    <message-driven-deployment name="MessageListener" destination-location="jms/carnotqueue" connection-factory-location="jms/carnotQCF" max-instances="10">
    Does anybody have an idea?
    cheers,
    Klaus

    Hi Klaus
    We had the same error as you.
    After a lot of trouble we are now able to run IBM's MQSeries with OC4J v. 9.0.3.0.0
    Here is an overview of our configuration (on Windows 2000)
    Added the following lines to C:\OC4J\j2ee\home\config\application.xml
    This is to connect to MQSeries with a file-based JNDI. We created the .bindings file in C:\JNDI with IBM's JMSAdmin tool.
    <resource-provider
    class="com.evermind.server.deployment.ContextScanningResourceProvider"
    name="MQSeries">
    <description> MQSeries </description>
    <property name="java.naming.factory.initial" value="com.sun.jndi.fscontext.RefFSContextFactory"> </property>
    <property name="java.naming.provider.url" value="file:/C:/JNDI"> </property>
    </resource-provider>
    We copied the following files from MQSeries to C:\OC4J\j2ee\home\lib
    com.ibm.mq.jar
    com.ibm.mqjms.jar
    com.ibm.mqbind.jar
    mqji.properties
    fscontext.jar
    providerutil.jar
    Added the following line to C:\OC4J\j2ee\home\config\server.xml
    <library path="../lib" />
    This points to C:\OC4J\j2ee\home\lib where our MQSeries .jar files are located.
    In line 176 in com.evermind.server.jms.OrionServerSessionPool.class (located in OC4J.jar) we changed the following lines:
    ((AQjmsSession)new_session).setCloseCheckInterval(2);
    new_consumer = ((QueueSession)new_session).createReceiver((Queue)destination, messageSelector);
    ((AQjmsQueueReceiver)new_consumer).setNavigationMode(1);
    new_connection.start();
    to this:
    new_consumer = ((QueueSession)new_session).createReceiver((Queue)destination, messageSelector);
    new_connection.start();
    The reason for this is that it looks like OC4J does not accept queue systems other than AQ. If we don't change those lines, we get the following error:
    java.lang.InstantiationException: Error: com.evermind.server.jms.EvermindQueueSession at com.evermind.server.jms.OrionServerSessionPool.getServerSessionFull(OrionServerSessionPool.java:377)
    at com.evermind.server.ejb.MessageDrivenHome.run(MessageDrivenHome.java:769)
    at com.evermind.util.ThreadPoolThread.run(ThreadPoolThread.java:66)
    In the MDB sample (C:\OC4J\j2ee\home\demo\mdb) we changed the following (ivtTCF, ivtQCF, ivtT and ivtQ are MQSeries factories and connections):
    Orion-ejb-jar.xml
    <?xml version="1.0" encoding="utf-8"?>
    <!DOCTYPE orion-ejb-jar PUBLIC "-//Evermind//DTD Enterprise JavaBeans 1.1 runtime//EN" "http://xmlns.oracle.com/ias/dtds/orion-ejb-jar.dtd">
    <orion-ejb-jar deployment-version="9.0.3.0.0" deployment-time="ec68ca361c">
    <enterprise-beans>
    <session-deployment
    name="MyCart"
    max-instances="10"
    location="MyCart">
    <resource-ref-mapping
    name="ivtTCF"
    location="java:comp/resource/MQSeries/ivtTCF">
    </resource-ref-mapping>
    <resource-env-ref-mapping
    name="ivtT"
    location="java:comp/resource/MQSeries/ivtT">
    </resource-env-ref-mapping>
    <resource-ref-mapping
    name="ivtQCF"
    location="java:comp/resource/MQSeries/ivtQCF">
    </resource-ref-mapping>
    <resource-env-ref-mapping
    name="ivtQ"
    location="java:comp/resource/MQSeries/ivtQ">
    </resource-env-ref-mapping>
    </session-deployment>
    <message-driven-deployment
    name="MessageBeanTpc"
    connection-factory-location="java:comp/resource/MQSeries/ivtTCF"
    destination-location="java:comp/resource/MQSeries/ivtT"
    subscription-name="MDBSUB">
    <resource-ref-mapping
    name="ivtTCF"
    location="java:comp/resource/MQSeries/ivtTCF">
    </resource-ref-mapping>
    <resource-env-ref-mapping
    name="ivtT"
    location="java:comp/resource/MQSeries/ivtT">
    </resource-env-ref-mapping>
    <resource-ref-mapping
    name="ivtQCF"
    location="java:comp/resource/MQSeries/ivtQCF">
    </resource-ref-mapping>
    <resource-env-ref-mapping
    name="ivtQ"
    location="java:comp/resource/MQSeries/ivtQ">
    </resource-env-ref-mapping>
    </message-driven-deployment>
    <message-driven-deployment
    connection-factory-location="java:comp/resource/MQSeries/ivtQCF"
    destination-location="java:comp/resource/MQSeries/ivtQ"
    name="MessageBeanQue">
    <resource-ref-mapping
    name="ivtTCF"
    location="java:comp/resource/MQSeries/ivtTCF">
    </resource-ref-mapping>
    <resource-env-ref-mapping
    name="ivtT"
    location="java:comp/resource/MQSeries/ivtT">
    </resource-env-ref-mapping>
    <resource-ref-mapping
    name="ivtQCF"
    location="java:comp/resource/MQSeries/ivtQCF">
    </resource-ref-mapping>
    <resource-env-ref-mapping
    name="ivtQ"
    location="java:comp/resource/MQSeries/ivtQ">
    </resource-env-ref-mapping>
    </message-driven-deployment>
    </enterprise-beans>
    <assembly-descriptor>
    <default-method-access>
    <security-role-mapping name="&lt;default-ejb-caller-role&gt;" impliesAll="true" />
    </default-method-access>
    </assembly-descriptor>
    </orion-ejb-jar>
    ejb-jar.xml
    <?xml version="1.0"?>
    <!DOCTYPE ejb-jar PUBLIC "-//Sun Microsystems, Inc.//DTD Enterprise JavaBeans 1.2//EN" "http://java.sun.com/j2ee/dtds/ejb-jar_1_2.dtd">
    <ejb-jar>
    <description>A demo cart bean package.</description>
    <display-name>A simple cart jar</display-name>
    <enterprise-beans>
    <session>
    <description>A simple shopping cart.</description>
    <display-name>Shopping Cart</display-name>
    <ejb-name>MyCart</ejb-name>
    <home>cart.ejb.CartHome</home>
    <remote>cart.ejb.Cart</remote>
    <ejb-class>cart.ejb.CartEJB</ejb-class>
    <session-type>Stateful</session-type>
    <transaction-type>Container</transaction-type>
    <resource-ref>
    <res-ref-name>ivtQCF</res-ref-name>
    <res-type>javax.jms.QueueConnectionFactory</res-type>
    <res-auth>Container</res-auth>
    <res-sharing-scope>Shareable</res-sharing-scope>
    </resource-ref>
    <resource-env-ref>
    <resource-env-ref-name>ivtQ</resource-env-ref-name>
    <resource-env-ref-type>javax.jms.Queue</resource-env-ref-type>
    </resource-env-ref>
    <resource-ref>
    <res-ref-name>ivtTCF</res-ref-name>
    <res-type>javax.jms.TopicConnectionFactory</res-type>
    <res-auth>Container</res-auth>
    <res-sharing-scope>Shareable</res-sharing-scope>
    </resource-ref>
    <resource-env-ref>
    <resource-env-ref-name>ivtT</resource-env-ref-name>
    <resource-env-ref-type>javax.jms.Topic</resource-env-ref-type>
    </resource-env-ref>
    </session>
    <message-driven>
    <description></description>
    <display-name>MessageBeanTpc</display-name>
    <ejb-name>MessageBeanTpc</ejb-name>
    <ejb-class>cart.ejb.MessageBean</ejb-class>
    <transaction-type>Container</transaction-type>
    <message-driven-destination>
    <destination-type>javax.jms.Topic</destination-type>
    <subscription-durability>Durable</subscription-durability>
    </message-driven-destination>
    <resource-ref>
    <res-ref-name>ivtQCF</res-ref-name>
    <res-type>javax.jms.QueueConnectionFactory</res-type>
    <res-auth>Container</res-auth>
    <res-sharing-scope>Shareable</res-sharing-scope>
    </resource-ref>
    <resource-env-ref>
    <resource-env-ref-name>ivtQ</resource-env-ref-name>
    <resource-env-ref-type>javax.jms.Queue</resource-env-ref-type>
    </resource-env-ref>
    <resource-ref>
    <res-ref-name>ivtTCF</res-ref-name>
    <res-type>javax.jms.TopicConnectionFactory</res-type>
    <res-auth>Container</res-auth>
    <res-sharing-scope>Shareable</res-sharing-scope>
    </resource-ref>
    <resource-env-ref>
    <resource-env-ref-name>ivtT</resource-env-ref-name>
    <resource-env-ref-type>javax.jms.Topic</resource-env-ref-type>
    </resource-env-ref>
    </message-driven>
    <message-driven>
    <description></description>
    <display-name>MessageBeanQue</display-name>
    <ejb-name>MessageBeanQueBMT</ejb-name>
    <ejb-class>cart.ejb.MessageBean</ejb-class>
    <transaction-type>Bean</transaction-type>
    <message-driven-destination>
    <destination-type>javax.jms.Queue</destination-type>
    </message-driven-destination>
    <resource-ref>
    <res-ref-name>ivtQCF</res-ref-name>
    <res-type>javax.jms.QueueConnectionFactory</res-type>
    <res-auth>Container</res-auth>
    <res-sharing-scope>Shareable</res-sharing-scope>
    </resource-ref>
    <resource-env-ref>
    <resource-env-ref-name>ivtQ</resource-env-ref-name>
    <resource-env-ref-type>javax.jms.Queue</resource-env-ref-type>
    </resource-env-ref>
    <resource-ref>
    <res-ref-name>ivtTCF</res-ref-name>
    <res-type>javax.jms.TopicConnectionFactory</res-type>
    <res-auth>Container</res-auth>
    <res-sharing-scope>Shareable</res-sharing-scope>
    </resource-ref>
    <resource-env-ref>
    <resource-env-ref-name>ivtT</resource-env-ref-name>
    <resource-env-ref-type>javax.jms.Topic</resource-env-ref-type>
    </resource-env-ref>
    </message-driven>
    </enterprise-beans>
    <assembly-descriptor>
    <container-transaction>
    <method>
    <ejb-name>MyCart</ejb-name>
    <method-name>*</method-name>
    </method>
    <trans-attribute>Required</trans-attribute>
    </container-transaction>
    <container-transaction>
    <method>
    <ejb-name>MessageBeanTpc</ejb-name>
    <method-name>onMessage</method-name>
    </method>
    <trans-attribute>Required</trans-attribute>
    </container-transaction>
    <container-transaction>
    <method>
    <ejb-name>MessageBeanQue</ejb-name>
    <method-name>onMessage</method-name>
    </method>
    <trans-attribute>Required</trans-attribute>
    </container-transaction>
    </assembly-descriptor>
    </ejb-jar>
    In MessageBean.java I removed the few lines of code related to AQ, and added the following imports to MessageBean.java and CartEJB.java:
    import com.ibm.*;
    import com.ibm.mq.*;
    import com.ibm.mq.jms.*;
    import com.ibm.jms.*;
    It works!!!
    Good luck
    Ole

  • Queuing - Action Blocks (Queue Get, Queue Put , Queue List, Queue Delete)

    Does anyone know how to use the Queue Get, Queue Put, Queue List, and Queue Delete
    action blocks?
    There is neither any help documentation nor any previous queries in the forum for this.
    Thanks and Regards
    Khaleel Badeghar

    Hi Khaleelurrehman,
    1. Put something in your Queue:
       Name: MyQueue
       ID:      4711
       Example:
       - Make a ForNextLoop and use the Link Editor to fill the Queue with 10 entries.
       - Use a Local XML Variable as Message and an Assignment to set the
         Message text. So your Message will be:
         "a Message with the ID " & For_Next_Loop_0.CurrentItem
       - Put the Message and the ID in your Queue using the Link Editor
         ID: 4700 + For_Next_Loop_0.CurrentItem
    2. Replace something in your Queue:
       Just refer to Queue-Name and Queue-ID to replace a Message with the
       Queue-Put-Action.
    3. Get one entry of your Queue:
       Just refer to Queue-Name and Queue-ID to get the Message out of the Queue
       with the Queue-Get-Action.
    4. Get a List of entries from your Queue:
       - Use the Queue-List-Action which will return a xMII-XML Structure with
         DATE and ID.
       - Use a Repeater to loop over the Output of Queue-List-Action.
       - Use a Queue-Get-Action and assign the ID of the Repeater-Output to get the
         Message for the ID.
    5. Delete one Message in your Queue:
       Use the Queue-Delete-Action to delete a Message with a specific ID from
       your Queue.
    6. Delete the whole Queue (or all Messages)
       Use Queue-List-Action + Repeater to loop + Queue-Delete-Action
    Hope this helps.
    Ciao
    Martin

  • Configure log file transfer to downstream capture database!

    Dear all,
    I am setting up bidirectional replication between two Linux-based database servers running Oracle 11gR1.
    I am following the Oracle Streams Administrator's Guide. I have completed all the pre-configuration tasks, but I am confused by the step where we have to configure log file transfer to the downstream capture database.
    I am unable to understand this from the documentation.
    I mean, how do I configure Oracle Net so that the source databases can communicate with each other in bidirectional replication?
    How do I configure authentication at both databases to support the transfer of redo data?
    The third thing is the parameter setting, which obviously I can do.
    Kindly help me through this step.
    Regards, Imran

    and what about this:
    Configure authentication at both databases to support the transfer of redo data?
    Thanks, Imran
    For communication between the two databases, you create a Streams administrator at both databases. The strmadmin users talk to each other.
    Regards,
    S.K.
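
    A hedged sketch of what those steps usually amount to (the alias dwn1, hostnames, and names below are placeholders; check the guide for your exact release):

    -- 1. Oracle Net: add a tnsnames.ora alias for the other database on each side, e.g.
    --    DWN1 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = dwnhost)(PORT = 1521))
    --           (CONNECT_DATA = (SERVICE_NAME = dwn1)))
    -- 2. Authentication: redo transport authenticates through the password file,
    --    so use the same SYS password on both sides (password files are created
    --    with the orapwd utility) or copy the source password file across.
    -- 3. Parameter settings on the source, as SYSDBA:
    ALTER SYSTEM SET log_archive_config = 'DG_CONFIG=(src1,dwn1)';
    ALTER SYSTEM SET log_archive_dest_2 =
      'SERVICE=dwn1 ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dwn1';
    ALTER SYSTEM SET log_archive_dest_state_2 = ENABLE;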

  • Downstream Capturing - Two Sources

    Hi
    We are planning to have two source databases and one consolidated/merge database. We are planning to use Streams. I just configured downstream real-time capture from one source database.
    As I understand it, I can have only one real-time downstream capture process and one archived-log downstream capture process.
    How do I configure this archived-log downstream capture process? Where in the code is the difference between real-time and archived-log?
    Thanks
    Sree

    You will find the steps for configuring downstream capture in the 11.2 Streams Replication Administrator's Guide towards the end of chapter 1. Here is the URL to the online doc section that gives examples of the source init.ora parameter log_archive_dest_2 for real-time mining and archived-log mining:
    http://docs.oracle.com/cd/E11882_01/server.112/e10705/prep_rep.htm#i1014093
    In addition to the different log_archive_dest_* parameter settings for real-time versus archived-log mining, real-time mining requires standby logfiles at the downstream mining database. The instructions for that are also in the Streams Replication Administrator's Guide.
    Finally, the capture parameter DOWNSTREAM_REAL_TIME_MINE must be set to Y for real-time mining. The default is N for archived-log mining.
    Chapter 2 in that same manual covers how to configure downstream capture using the MAINTAIN_SCHEMAS procedure.
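
    To make the last point concrete, a small sketch (the capture name is a placeholder): this capture parameter is the switch between the two downstream flavors.

    BEGIN
      -- 'Y' = real-time mining (needs standby redo logs at the downstream DB);
      -- 'N' = archived-log mining (the default).
      DBMS_CAPTURE_ADM.SET_PARAMETER(
        capture_name => 'arch_capture',
        parameter    => 'downstream_real_time_mine',
        value        => 'N');
    END;
    /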

  • RAC for downstreams capture process

    I have created a real-time downstream capture process in a RAC to protect the process from failures, but I have some doubts about this:
    1. Do I need to create standby redo log groups for each instance in the cluster, or are they shared by all?
    2. If one instance goes down and we send redo from the source via the following service defined in the source tnsnames.ora:
    RAC_STR =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = VIP-instance1)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = VIP-instance1)(PORT = 1521)))
        (CONNECT_DATA = (SERVICE_NAME = RAC-global_name)))
    will the configured process be able to continue capturing changes without redo data loss?
    Appreciate any explanation.

    >
    if one instance goes down and we send redo from the source via the following service defined in the source tnsnames.ora:
    RAC_STR =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = VIP-instance1)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = VIP-instance1)(PORT = 1521)))
        (CONNECT_DATA = (SERVICE_NAME = RAC-global_name)))
    will the configured process be able to continue capturing changes without redo data loss?
    You will not experience data loss if one of the RAC instances goes down - the next instance will take over your downstream capture process and continue to mine redo from the source database. But you definitely need to correct your tnsnames entry, because it points twice to the same RAC instance "VIP-instance1".
    Downstream capture on RAC unfortunately has other problems, which I have already experienced, but maybe they will not affect your configuration. The undocumented problems (or bugs which are open and not yet solved) are:
    1. If your RAC DB has a physical standby, it can happen that it stops registering redo from the upstream Streams database.
    2. If your RAC DB has both downstream and local capture, then when more than 2 RAC instances are running, the local capture can't continue with the current redo log (only after a log switch).
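
    Regarding question 1, a hedged sketch (the path and size are placeholders, and this is my understanding rather than a tested recipe): standby redo logs live on shared storage, so they are not per-instance files; the THREAD clause follows the redo threads of the database shipping the redo, and they should be sized like the source's online redo logs.

    -- On the downstream RAC database: standby redo logs on shared storage,
    -- sized to match the source database's online redo logs.
    ALTER DATABASE ADD STANDBY LOGFILE THREAD 1
      GROUP 5 ('+DATA/dwn/srl_t1_g5.log') SIZE 512M;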

  • What are the differences between extraction queue, delta queue and update queue?

    Hi gurus,
    What are the differences between the extraction queue, the delta queue, and the update queue? Can you describe them briefly?
    Thanks & Regards
    nandi

    Dear Prabha,
    Basically, when any document is posted in R/3, it is written to the update table, and from there it is taken to the delta queue to be sent to the BW side.
    When extraction starts, data is sent to BW from the delta queue; then the cycle starts again.
    When you post any document in the OLTP system (e.g. SAP R/3),
    say you create a sales order with VA01, the posting is made to the application tables (VBAK/VBAP) through V1 and also to some other tables through V2, and the communication structure is written to the update queue/extraction queue/delta queue (directly) according to the update mode selected. V3 always follows V2, and we are supposed to schedule it.
    From this delta queue, data is extracted by BW InfoPackages.
    There are various update methods that determine whether the extraction or delta queue is used: when document posting takes place, data is also written to the extraction queue (through the V1 update), and if we use the queued delta method this data is collected in the collection run and written to the delta queue, from which BW requests the data.
    There are lots of posts on SDN about this; please have a look at those.
    one for your reference...
    https://www.sdn.sap.com/irj/sdn/profile?userid=3507509
    Hope it helps...

  • DELTA QUEUE & EXTRACTION QUEUE

    Hi all,
    1. What is the difference between the delta queue and the extraction queue?
    2. What are the advantages and disadvantages of the delta modes? Where do we use these delta modes?

    Hi there, I am fine thank you.
    [you can call(write) me by my name i.e. Ashish.]
    I am sending you all links:
    1. SAP Network Blog: LOGISTIC COCKPIT DELTA MECHANISM - Episode one: V3 Update, the 'serializer'
    /people/sap.user72/blog/2004/12/16/logistic-cockpit-delta-mechanism--episode-one-v3-update-the-145serializer146
    2. SAP Network Blog: LOGISTIC COCKPIT DELTA MECHANISM - Episode two: V3 Update, when some problems can occur...
    /people/sap.user72/blog/2004/12/23/logistic-cockpit-delta-mechanism--episode-two-v3-update-when-some-problems-can-occur
    3. SAP Network Blog: LOGISTIC COCKPIT DELTA MECHANISM - Episode three: the new update methods
    /people/sap.user72/blog/2005/01/19/logistic-cockpit-delta-mechanism--episode-three-the-new-update-methods
    In case you are not able to find them this time either, you can do one thing: put "Roberto Negro" into the search at sdn.sap.com; you will surely get his weblogs on the first page. Then read all three episodes of LOGISTIC COCKPIT DELTA MECHANISM.
    Regards,
    Ashish
