Propagation Problem

Hi,
I am trying to propagate messages from a queue in an 8i database to a queue in a 9i database. The problem is that I can see the message being enqueued in the 8i database, but the 9i queue is always empty. Is there a problem with propagation? How can I check whether there are any errors during propagation?
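As a generic first check (not specific to this setup), the propagation schedule views on the source database record failures and the last error, and the AQ$<queue_table> view shows whether the messages are still sitting in the source queue:

-- Propagation schedule status and last recorded error (run on the 8i source).
SELECT qname, destination, failures, last_error_msg
  FROM dba_queue_schedules;
-- State of the messages still in the source queue table.
SELECT msg_state, COUNT(*)
  FROM aq$my_queue_table   -- placeholder: replace with your queue table's AQ$ view
 GROUP BY msg_state;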

I've done it between 8.1.7.4 and 9.2.0.6 in both directions,
with event-based scheduling (in the 8i database I implemented that myself, since it is a 9i feature),
and it has now been running in production at a customer's site for half a year!
So, what have you tried in detail?

Similar Messages

  • Data propagation problems w/ NIS+ to LDAP migration..

    Hello All,
    I'm running into an issue performing an NIS+ to LDAP migration with Solaris 9.
    It all happens like this: NIS+ successfully populates the directory through the 'initialUpdateAction=to_ldap' option; afterwards, however, no updates made directly to LDAP are ever pushed back into NIS+.
    My understanding (which might be incorrect) is that after the initial update, NIS+ should simply act as a cache for the data stored in LDAP. Do I need to perform an 'initialUpdateAction=from_ldap' after populating LDAP to force the direction of the data propagation to change?
    I'm experienced with LDAP, so I'm comfortable that everything is all right on that side; I'm not so sure about NIS+, though. Any assistance or advice from anyone who has gone through this migration would be greatly appreciated.
    Many thanks in advance..
    ..Sean.

    Well, you neglected to outline exactly how you accomplished your migration.
    Starting with Tiger Server using NetInfo as a standalone server, we created an Open Directory Master, as described in Apple's Open Directory Guide. By the time we'd finished that, we had an OD admin. From there, we did as I previously described -- exported with WGM from NetInfo, imported with WGM into LDAP, deleted with WGM from NetInfo.
    See http://support.apple.com/kb/TA23888?viewlocale=en_US
    This seems to be an article on how to re-create a password that's been lost. That's not really what we need, though. The OD admin account we created works fine for other services, just not for WGM. And other admin users we created work fine for other services, but not for WGM. The problem is that although admin users can log into many services, they can't log into WGM -- only root can.

  • Oracle Advanced Queuing - Propagation problem - 11g

    Hi,
    I have a problem when propagating messages between queues: after the message is propagated, it stays in the source queue in the READY state.
    I have created two queues on 11g with a propagation rule that any message from queue A is sent to queue B. My problem is that the message stays in the source queue even after propagation, which isn't what I was expecting. The problem doesn't occur if the queues are on different databases; it only happens when both queues are in the same database.
    The script I use is this:
    For USERB (which has the destination queue)
    create type EVENT_MESSAGE as object (
      eventsource VARCHAR2(30),
      eventname   VARCHAR2(255),
      eventid     NUMBER(19,0),
      message     CLOB
    );
    /
    DECLARE
      an_agent sys.aq$_agent;
    BEGIN
      -- create the publish/subscribe queue table
      dbms_aqadm.create_queue_table(
        queue_table        => 'DESTINATION_QUEUE_TABLE',
        queue_payload_type => 'EVENT_MESSAGE',
        sort_list          => 'ENQ_TIME',
        message_grouping   => DBMS_AQADM.NONE,
        multiple_consumers => true);
      -- create the queue
      dbms_aqadm.create_queue(
        queue_name  => 'DESTINATION',
        queue_table => 'DESTINATION_QUEUE_TABLE',
        queue_type  => DBMS_AQADM.NORMAL_QUEUE,
        max_retries => 5);
      -- create the dequeue agent and map it to the queue owner
      dbms_aqadm.create_aq_agent(agent_name => 'DEQUEUE_AGENT');
      an_agent := sys.aq$_agent('DEQUEUE_AGENT', null, null);
      dbms_aqadm.enable_db_access(
        agent_name  => 'DEQUEUE_AGENT',
        db_username => 'USERB');
      -- subscribe the agent to the destination queue
      dbms_aqadm.add_subscriber(
        queue_name     => 'DESTINATION',
        subscriber     => an_agent,
        queue_to_queue => FALSE,
        delivery_mode  => DBMS_AQADM.PERSISTENT);
      -- start the queues
      dbms_aqadm.start_queue('DESTINATION');
    END;
    /
    For USERA
    create type EVENT_MESSAGE as object (
      eventsource VARCHAR2(30),
      eventname   VARCHAR2(255),
      eventid     NUMBER(19,0),
      message     CLOB
    );
    /
    BEGIN
      -- create the publish/subscribe queue table
      dbms_aqadm.create_queue_table(
        queue_table        => 'SOURCE_QUEUE_TABLE',
        queue_payload_type => 'EVENT_MESSAGE',
        sort_list          => 'ENQ_TIME',
        message_grouping   => DBMS_AQADM.NONE,
        multiple_consumers => true);
      -- create the queue
      dbms_aqadm.create_queue(
        queue_name  => 'SOURCE',
        queue_table => 'SOURCE_QUEUE_TABLE',
        queue_type  => DBMS_AQADM.NORMAL_QUEUE,
        max_retries => 5);
      -- start the queues
      dbms_aqadm.start_queue('SOURCE');
      -- create the propagation: subscribe the destination agent and schedule propagation
      dbms_aqadm.add_subscriber(
        queue_name     => 'SOURCE',
        subscriber     => sys.aq$_agent('DEQUEUE_AGENT', 'USERB.DESTINATION', null),
        queue_to_queue => true);
      dbms_aqadm.schedule_propagation(
        queue_name        => 'SOURCE',
        start_time        => sysdate,
        latency           => 25,
        destination_queue => 'USERB.DESTINATION');
    END;
    /
    When I enqueue a message to the source on USERA with this:
    declare
      rc     binary_integer;
      nq_opt dbms_aq.enqueue_options_t;
      nq_pro dbms_aq.message_properties_t;
      datas  EVENT_MESSAGE;
      msgid  raw(16);
    begin
      nq_pro.expiration := dbms_aq.never;
      nq_pro.sender_id  := sys.aq$_agent('ENQUEUE_AGENT', null, null);
      datas := EVENT_MESSAGE('message', 'eventname', 1, null);
      dbms_aq.enqueue('SOURCE', nq_opt, nq_pro, datas, msgid);
    end;
    /
    The message is propagated to the destination queue, no problem, but the message state in the source queue remains READY. I would have expected it to be marked as processed and to disappear from the queue table.
    When I look at AQ$_SOURCE_QUEUE_TABLE_S I see these records:
    QUEUE_NAME | NAME          | ADDRESS                         | PROTOCOL | SUBSCRIBER_TYPE
    SOURCE     | (null)        | "USERB"."DESTINATION"@AQ$_LOCAL | 0        | 1736
    SOURCE     | DEQUEUE_AGENT | "USERB"."DESTINATION"           | 0        | 577
    Can anyone help?
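    One generic way to see what AQ still considers outstanding (a diagnostic sketch, assuming the default AQ$<queue_table> view generated for this queue table) is the per-consumer message state:
    -- Per-consumer state of each message in the source queue table;
    -- a row that stays READY belongs to a subscriber that has not yet consumed it.
    SELECT queue, msg_id, consumer_name, msg_state
      FROM aq$source_queue_table;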

    I was talking about the following Oracle documentation:
    Oracle Database 11g: Advanced Queuing (Technical Whitepaper)
    Streams Advanced Queuing: Best Practices (Technical Whitepaper)
    Oracle Streams Advanced Queuing and Real Application Clusters: Scalability and Performance Guidelines (Technical Whitepaper)
    They are available at http://www.oracle.com/technetwork/database/features/data-integration/default-159085.html

  • Urgent--Message Propagation problem.

    Hi all,
    I am trying out the following:
    1) Create a queue table.
    2) Set its property with setMultiConsumer(true).
    3) Create a queue using the AQ API.
    Then I call the following methods:
    4) queue.startEnqueue();
    5) queue.schedulePropagation(null, null, null, null, null);
    NOTE: I already have a destination queue in the same database to which the message should propagate. That's why I have passed all null parameters to schedulePropagation.
    In the client code I am using the JMS API to connect to the queues to put messages.
    But when I try to create the queue I get a "Resource Not Found" exception.
    When I use setMultiConsumer(false) instead, the code doesn't throw any exception, though message propagation is not done. But my intention is to use message propagation through the schedulePropagation method, and in order to use that feature I have to call setMultiConsumer(true), which is how I end up with this problem.
    Question 1) Can anyone let me know, step by step, how to do message propagation through queues (not topics)?
    Question 2) Can anyone tell me how to set up the dblink, and what to put in it, if I want my queue to propagate to a queue on a remote DB server?
    NOTE: I'm using the Oracle AQ APIs to create the queues and a JMS client to connect to them.
    Best Regards,
    Thanks in Advance
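    Regarding Question 2, here is a minimal PL/SQL sketch of propagation to a remote queue; all names (remote_db, REMOTE_USER.DEST_QUEUE, SRC_QUEUE, the TNS alias) are placeholders, not taken from this thread:
    -- Database link from the source database to the remote database.
    CREATE DATABASE LINK remote_db
      CONNECT TO remote_user IDENTIFIED BY remote_pwd
      USING 'REMOTE_TNS_ALIAS';
    BEGIN
      -- A subscriber whose address is 'owner.queue@dblink' marks messages for remote propagation;
      -- a NULL agent name propagates to all subscribers of the remote queue.
      dbms_aqadm.add_subscriber(
        queue_name => 'SRC_QUEUE',
        subscriber => sys.aq$_agent(null, 'REMOTE_USER.DEST_QUEUE@REMOTE_DB', null));
      -- Schedule propagation from the source queue over that database link.
      dbms_aqadm.schedule_propagation(
        queue_name  => 'SRC_QUEUE',
        destination => 'REMOTE_DB',
        latency     => 0);
    END;
    /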

    Sessions require session cookies.
    If the user has cookies off then the session won't be retained by the server automatically.
    The solution to this is to use URL rewriting.
    There is a method: HTTPResponse.encodeURL( String url )
    It automatically adds the JSESSIONID used to track the session onto the url if the browser is not accepting cookies.
    You need to do this on every link in the website so that session is maintained.
    Cheers,
    evnafets

  • Propagation problem - remote queue

    Hi all!
    NT4.0, O8i 8.1.7
    Propagation between local queues works fine, but for a remote queue the message stays in the queue in the READY state.
    The test message:
    SELECT queue, msg_id, msg_state, sender_name, sender_address, original_msgid,
           consumer_name, address, propagated_msgid
      FROM aq$bm_req;
    QUEUE: Q1
    MSG_ID: 183EC1C40A7647BC9301822CB9E020C7
    MSG_STATE: READY
    SENDER_NAME: RBM
    SENDER_ADDRESS: [email protected]
    ORIGINAL_MSGID, CONSUMER_NAME, ADDRESS, PROPAGATED_MSGID: (empty)
    The subscriber:
    SQL> SELECT * FROM aq$bm_req_s;
    QUEUE: Q1
    NAME: RBM
    ADDRESS: [email protected]
    PROTOCOL: 0
    The prop sched:
    SQL> SELECT * FROM user_queue_schedules;
    QNAME: Q1
    DESTINATION: DBL_BM.AMOS
    START_DATE / START_TIME: 04-JÚN-18 / 13:15:18
    LATENCY: 0, SCHEDULE_DISABLED: N, PROCESS_NAME: SNP5
    SESSION_ID: 12, 21459; INSTANCE: 1
    LAST_RUN_DATE / LAST_RUN_TIME: 04-JÚN-18 / 13:15:18
    TOTAL_TIME, TOTAL_NUMBER, TOTAL_BYTES, MAX_*, AVG_* and FAILURES are all 0; LAST_ERROR_MSG is empty.
    Test dblink:
    SQL> SELECT * FROM aq$axa_req_s@dbl_bm;
    QUEUE: Q1
    NAME: RBM
    ADDRESS: (empty)
    PROTOCOL: 0
    ... and no error message.
    Thanx for any idea!
    Laslo
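    Since no error is recorded, two generic checks that sometimes help on 8i (just a sketch, not specific to this system): confirm the schedule really has no failures, and confirm the propagation job is actually being run by a job-queue (SNP) process.
    -- Schedule-level failures, if any.
    SELECT qname, destination, failures, last_error_msg
      FROM user_queue_schedules;
    -- Propagation on 8i runs as a job-queue job; the LIKE filter below is only a rough guess at the job text.
    SELECT job, what, broken, failures
      FROM dba_jobs
     WHERE upper(what) LIKE '%AQ%';
    -- And make sure job_queue_processes is greater than zero.
    SHOW PARAMETER job_queue_processes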

    Hello Maria,
    I have the same problem... did you solve it? Please tell me!
    Thank you!
    Nuno Sénica.

  • Oracle 8i, propagation problem with Advanced Active Queues

    In our application we use Advanced Queuing to exchange data between servers.
    The job responsible for propagating the messages to another server suddenly stops without any obvious reason.
    We have already tried to modify the processes parameter in the initialisation file, but without success.

    What do you mean, you tried to change the init.ora parameters without success?
    Change the aq_tm_processes and job_queue_processes parameters and bounce the database, and there you go.
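    For reference, the change being suggested looks roughly like this in init.ora (example values only; after editing, bounce the instance so they take effect):
    # init.ora - example values, adjust to your environment
    aq_tm_processes     = 1
    job_queue_processes = 4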

  • Propagation problem in streams 11.1.0.7

    My propagation job AQ_JOBS_32 is not working any more, but it still shows as running in the OEM Streams monitor.
    In the DBA_SCHEDULER_RUNNING_JOBS view the SESSION_ID column value is null.
    I can't kill the job in any way.
    I get contradictory results from two procedures:
    SYS.DBMS_SCHEDULER.STOP_JOB
    and
    DBMS_SCHEDULER.DROP_JOB
    the execution of the procedure:
    SYS.DBMS_SCHEDULER.STOP_JOB
    (job_name => 'SYS.AQ_JOB$_32'
    ,force => TRUE);
    returns the error:
    ORA-27366: job "SYS.AQ_JOB$_32" is not running
    ORA-06512: at "SYS.DBMS_ISCHED", line 168
    ORA-06512: at "SYS.DBMS_SCHEDULER", line 515
    The execution of the procedure:
    DBMS_SCHEDULER.DROP_JOB
    (job_name => 'SYS.AQ_JOB$_32',force => TRUE);
    returns the error:
    ORA-27478: job "SYS.AQ_JOB$_32" is running
    ORA-06512: at "SYS.DBMS_ISCHED", line 182
    ORA-06512: at "SYS.DBMS_SCHEDULER", line 615
    What should I do?
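    A generic first step (not a guaranteed fix) is to check what the scheduler itself reports for the job before deciding between STOP_JOB and DROP_JOB:
    -- Overall job state as the scheduler sees it.
    SELECT owner, job_name, state
      FROM dba_scheduler_jobs
     WHERE job_name = 'AQ_JOB$_32';
    -- Session and instance, if the scheduler really considers it running.
    SELECT owner, job_name, session_id, running_instance
      FROM dba_scheduler_running_jobs
     WHERE job_name = 'AQ_JOB$_32';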

    You have posted this to three different groups. Please post to one and only one group, and drop the other two postings. Thank you.

  • ORA-25307: Enqueue rate too high, flow control enabled

    I am stuck. I have my Streams setup, and it was previously working well on two of my dev environments. Now when I get the streams set up, the CAPTURE process has a state of "CAPTURING CHANGES" for about 10 seconds and then changes to "PAUSED FOR FLOW CONTROL". I believe this is happening because the PROPAGATION process is showing the error "ORA-25307: Enqueue rate too high, flow control enabled".
    I don't know what to tweak to get rid of this error message. The two environments are dev databases and there is minimal activity on them, so I don't think it's a case of the APPLY process lagging behind the PROPAGATION process. Has anyone run into this issue? I've verified that my DB link works and that my Streams admin user has DBA access. Any help or advice would be greatly appreciated.
    thanks, dave
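    For what it's worth, a couple of generic status queries (not specific to this setup) show whether capture is genuinely paused for flow control and what the buffered queue publisher reports:
    -- Capture state on the source.
    SELECT capture_name, state FROM v$streams_capture;
    -- Publisher state of the buffered queue (flow control shows up here).
    SELECT queue_schema, queue_name, publisher_state FROM v$buffered_publishers;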

    As a rule of thumb, you don't need to set GLOBAL_NAME=TRUE as long as you are 100% global-name compliant.
    So setting GLOBAL_NAME=TRUE will not have any effect if your dblink is not global-name compliant,
    and if your installation is global-name compliant, you don't need to set GLOBAL_NAME=TRUE.
    The first thing to do when diagnosing is to get the exact facts.
    Please run the queries below on both source and target, so you can see what is in the queues and where.
    Run them multiple times to see whether the figures evolve.
    - If they are fixed, then your Streams is stuck in its last stage. As a cheap and good starting point, just stop/start the capture, propagation and target apply processes. Also check the alert.log on both sites; when you have a propagation problem, they do contain information. If you have bounced everything and there is no improvement, then the real diagnostic work must start here, but at least we know that the message is misleading and the problem lies elsewhere.
    - If they are not fixed, then your apply really is lagging behind for whatever reason, but that is usually easy to find.
    set termout off
    col version new_value version noprint
    col queue_table format A26 head 'Queue Table'
    col queue_name format A32 head 'Queue Name'
    select substr(version,1,instr(version,'.',1)-1) version from v$instance;
    col mysql new_value mysql noprint
    col primary_instance format 9999 head 'Prim|inst'
    col secondary_instance format 9999 head 'Sec|inst'
    col owner_instance format 99 head 'Own|inst'
    COLUMN MEM_MSG HEADING 'Messages|in Memory' FORMAT 99999999
    COLUMN SPILL_MSGS HEADING 'Messages|Spilled' FORMAT 99999999
    COLUMN NUM_MSGS HEADING 'Total Messages|in Buffered Queue' FORMAT 99999999
    set linesize 150
    select case
      when &version=9 then ' distinct a.QID, a.owner||''.''||a.name nam, a.queue_table,
                  decode(a.queue_type,''NORMAL_QUEUE'',''NORMAL'', ''EXCEPTION_QUEUE'',''EXCEPTION'',a.queue_type) qt,
                  trim(a.enqueue_enabled) enq, trim(a.dequeue_enabled) deq, x.bufqm_nmsg msg, b.recipients
                  from dba_queues a , sys.v_$bufqm x, dba_queue_tables b
            where
                   a.qid = x.bufqm_qid (+) and a.owner not like ''SYS%''
               and a.queue_table = b.queue_table (+)
               and a.name not like ''%_E'' '
       when &version=10 then ' a.owner||''.''|| a.name nam, a.queue_table,
                  decode(a.queue_type,''NORMAL_QUEUE'',''NORMAL'', ''EXCEPTION_QUEUE'',''EXCEPTION'',a.queue_type) qt,
                  trim(a.enqueue_enabled) enq, trim(a.dequeue_enabled) deq, (NUM_MSGS - SPILL_MSGS) MEM_MSG, spill_msgs, x.num_msgs msg,
                  x.INST_ID owner_instance
                  from dba_queues a , sys.gv_$buffered_queues x
            where
                   a.qid = x.queue_id (+) and a.owner not in ( ''SYS'',''SYSTEM'',''WMSYS'')  order by a.owner ,qt desc'
       end mysql
    from dual;
    set termout on
    select &mysql
    /
    B. Polarski

  • AQ Health Check

    Hi All,
    I am new to AQ.
    I have to upgrade an existing system that uses AQ.
    One of the requirements is to monitor the health of AQ, in the sense that:
    - The number of messages at an instance shouldn't exceed a threshold.
    - If an enqueue/dequeue fails, how do we handle the exception?
    - Is there any AQ package procedure which can be used to monitor the queues?

    I guess there's no dedicated procedure for monitoring AQ.
    I usually query dba_queues and dba_queue_schedules to see the AQ status:
    SELECT name, enqueue_enabled, dequeue_enabled FROM dba_queues;
    SELECT qname, schedule_disabled FROM dba_queue_schedules;
    We also check the following points to monitor AQ applications:
    - Monitor the alert.log (ORA-25253, 25226, 25207) to make sure there is no propagation problem.
    - Query whether any messages have been left unprocessed for a long time:
    SELECT count(*) FROM AQ$<qtable> WHERE queue = 'queue_name'
    AND msg_state = 'READY' AND enq_timestamp < current_timestamp - interval '<threshold-time>' second;
    fyi,
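    On handling enqueue/dequeue failures, a minimal sketch (the payload type, queue name and error handling below are placeholders, not from this thread) is simply to wrap the call and react to the exception:
    DECLARE
      enq_opt  dbms_aq.enqueue_options_t;
      msg_prop dbms_aq.message_properties_t;
      msg_id   RAW(16);
      payload  my_payload_type;                 -- placeholder payload type
    BEGIN
      payload := my_payload_type('example');    -- placeholder constructor
      dbms_aq.enqueue(queue_name         => 'my_queue',   -- placeholder queue
                      enqueue_options    => enq_opt,
                      message_properties => msg_prop,
                      payload            => payload,
                      msgid              => msg_id);
    EXCEPTION
      WHEN OTHERS THEN
        -- e.g. log SQLCODE/SQLERRM to a table or alert, then decide whether to retry or re-raise
        dbms_output.put_line('Enqueue failed: ' || SQLERRM);
        RAISE;
    END;
    /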

  • Pass EJB reference to RMI object

    Hi all,
    I have an RMI server outside the JVM where the app server runs, plus the app server JVM itself. In the app server I have an MDB and a stateful session bean (SB). Previously I created the SB from the MDB via calls from the RMI server, but then I lose transaction propagation between the MDB and the SB. Now I need to create the SB from the MDB because of the transaction propagation problem, but the RMI server must also have a reference to the SB EJB because of some special functions. I tried to pass a handle of the SB EJB to my RMI server, but when I want to call business methods on the SB EJB it doesn't work and I get an exception. My question is whether this is possible at all, and if yes, how to do it. Thanks.

    Could you tell us a bit more about the exception you get?
    Anyway, you shouldn't do that. For security reasons, most app servers prevent a handle from being used outside the client application that requested it. It seems to me that that is exactly what happens here.
    Your architecture seems a bit complex. Can't you simplify it a bit?
    /Stephane

  • ORA-25307: Enqueue rate too high

    From dba_queue_schedules:
    SCHEMA   |QNAME    |DESTINATION       |SCHEDULE_DISABLED|FAILURES|LAST_ERROR_MSG
    STRMADMIN|CAPT_SRC |target.domain.COM |N                |0       |ORA-25307: Enqueue rate too high, flow control enabled
    STRMADMIN|CAPT_SRC |target.domain.COM |N                |0       |ORA-25307: Enqueue rate too high, flow control enabled
    From V$BUFFERED_PUBLISHERS:
    QUEUE_SCHEMA|QUEUE_NAME|UNBROWSED_MSGS|OVERSPILLED_MSGS|MEMORY_USAGE|PUBLISHER_STATE
    STRMADMIN   |CAPT_SRC  |15043         |0               |30          |IN FLOW CONTROL: TOO MANY UNBROWSED MESSAGES
    From V$BUFFERED_SUBSCRIBERS:
    SUBSCRIBER_NAME|CNUM_MSGS|TOTAL_DEQUEUED_MSG|TOTAL_SPILLED_MSG
                   |20401    |0                 |0
    APP_SRC        |0        |0                 |0
    Anyone, please? What's wrong? Is it normal to have a blank subscriber name with CNUM_MSGS = 20401?

    Sounds like you have a propagation problem. Let's check it. I expect to see figures for the propagation sender but nothing in the receiver or reader.
    -- Propagation senders : to run on source site
    prompt
    prompt ++ EVENTS AND BYTES PROPAGATED FOR EACH PROPAGATION++
    prompt
    COLUMN Elapsed_propagation_TIME HEADING 'Elapsed |Propagation Time|(Seconds)' FORMAT 9999999999999999
    COLUMN TOTAL_NUMBER HEADING 'Total Events|Propagated' FORMAT 9999999999999999
    COLUMN SCHEDULE_STATUS HEADING 'Schedule|Status'
    column elapsed_dequeue_time HEADING 'Total Dequeue|Time (Secs)'
    column elapsed_propagation_time HEADING 'Total Propagation|Time (Secs)' justify c
    column elapsed_pickle_time HEADING 'Total Pickle| Time(Secs)' justify c
    column total_time HEADING 'Elapsed|Pickle Time|(Seconds)' justify c
    column high_water_mark HEADING 'High|Water|Mark'
    column acknowledgement HEADING 'Target |Ack'
    prompt pickle : Pickling is the action of building the messages, wrap the LCR before enqueuing
    prompt
    set linesize 150
    SELECT p.propagation_name,q.message_delivery_mode queue_type, DECODE(p.STATUS,
                    'DISABLED', 'Disabled', 'ENABLED', 'Enabled') SCHEDULE_STATUS, q.instance,
                    q.total_number TOTAL_NUMBER, q.TOTAL_BYTES/1048576 total_bytes,
                    q.elapsed_dequeue_time/100 elapsed_dequeue_time, q.elapsed_pickle_time/100 elapsed_pickle_time,
                    q.total_time/100 elapsed_propagation_time
      FROM  DBA_PROPAGATION p, dba_queue_schedules q
            WHERE   p.DESTINATION_DBLINK = NVL(REGEXP_SUBSTR(q.destination, '[^@]+', 1, 2), q.destination)
      AND q.SCHEMA = p.SOURCE_QUEUE_OWNER
      AND q.QNAME = p.SOURCE_QUEUE_NAME
      order by q.message_delivery_mode, p.propagation_name;
    -- propagation receiver : to run on target site
    COLUMN SRC_QUEUE_NAME HEADING 'Source|Queue|Name' FORMAT A20
    COLUMN DST_QUEUE_NAME HEADING 'Target|Queue|Name' FORMAT A20
    COLUMN SRC_DBNAME HEADING 'Source|Database' FORMAT A15
    COLUMN ELAPSED_UNPICKLE_TIME HEADING 'Unpickle|Time' FORMAT 99999999.99
    COLUMN ELAPSED_RULE_TIME HEADING 'Rule|Evaluation|Time' FORMAT 99999999.99
    COLUMN ELAPSED_ENQUEUE_TIME HEADING 'Enqueue|Time' FORMAT 99999999.99
    SELECT SRC_QUEUE_NAME,
           SRC_DBNAME,DST_QUEUE_NAME,
           (ELAPSED_UNPICKLE_TIME / 100) ELAPSED_UNPICKLE_TIME,
           (ELAPSED_RULE_TIME / 100) ELAPSED_RULE_TIME,
           (ELAPSED_ENQUEUE_TIME / 100) ELAPSED_ENQUEUE_TIME, TOTAL_MSGS,HIGH_WATER_MARK
      FROM V$PROPAGATION_RECEIVER;
    -- Apply reader  : run on target system
    col rsid format 99999 head "Reader|Sid" justify c
    COLUMN CREATION HEADING 'Message|Creation time' FORMAT A17 justify c
    COLUMN LATENCY HEADING 'Latency|in|Seconds' FORMAT 9999999
    col deqt format A15 head "Last|Dequeue Time" justify c
    COLUMN DEQUEUED_MESSAGE_NUMBER HEADING 'Last dequeued|scn ' FORMAT 999999999999 justify c
    SELECT APPLY_NAME, sid rsid , (DEQUEUE_TIME-DEQUEUED_MESSAGE_CREATE_TIME)*86400 LATENCY,
            TO_CHAR(DEQUEUED_MESSAGE_CREATE_TIME,'HH24:MI:SS MM/DD') CREATION, TO_CHAR(DEQUEUE_TIME,'HH24:MI:SS MM/DD') deqt,
            DEQUEUED_MESSAGE_NUMBER  FROM V$STREAMS_APPLY_READER;

  • Problem in propagation

    Hi everyone,
    Thanks in advance for reading this. :)
    We hit this problem when trying to obtain the propagation inventory. We are working with WLP 10.3.0.0 and Oracle DB 10.2. The error is as follows:
    The propagation servlet returned a failure response: The [Download] operation is halting due to the following failure: The security policy for resource [PortalSystemDelegator] with capability [delegate_further_manage] is missing in LDAP. If you have reset LDAP or configured a new WLP domain to use a pre-existing WLP RDBMS, then you must reset your RDBMS.
    The propagation servlet returned the following log information found in [C:\Users\FRANCI~1\AppData\Local\Temp\onlineDownload__D4_H15_M51_S39.log]:
    INFO (Nov 4, 2011 3:53:53 PM CLST): Verbose logging has been disabled on the server.
    INFO (Nov 4, 2011 3:53:53 PM CLST): The propagation servlet is starting the [Download] operation.
    INFO (Nov 4, 2011 3:53:53 PM CLST): The modifier [allowMaintenanceModeDisabled] with a value of [true] will be used for this operation.
    INFO (Nov 4, 2011 3:53:53 PM CLST): Validating that current user is in the Admin role...SUCCESS
    INFO (Nov 4, 2011 3:53:53 PM CLST): Validating that Maintenance Mode is enabled...SUCCESS
    WARNING (Nov 4, 2011 3:53:53 PM CLST): The temporary directory on the server used by propagation is [/bea/user_projects/domains/domain_mov_9/servers/AdminServer/tmp/_WL_user/PortalMovistarEAR/1ovfo2/public] with a length of [104] bytes. It is recommended that you shorten this path to avoid path length related failures. See the propagation documentation on how to specify the inventoryWorkingFolder context-param for the propagation servlet.
    ERROR (Nov 4, 2011 3:53:55 PM CLST): Validating that LDAP and RDBMS security resources are in sync...FAILURE
    ERROR (Nov 4, 2011 3:53:55 PM CLST): The security policy for resource [PortalSystemDelegator] with capability [delegate_further_manage] is missing in LDAP. If you have reset LDAP or configured a new WLP domain to use a pre-existing WLP RDBMS, then you must reset your RDBMS. Otherwise, insure all available patches have been applied to your installation.
    ERROR (Nov 4, 2011 3:53:55 PM CLST): The [Download] operation is halting due to the following failure: The security policy for resource [PortalSystemDelegator] with capability [delegate_further_manage] is missing in LDAP. If you have reset LDAP or configured a new WLP domain to use a pre-existing WLP RDBMS, then you must reset your RDBMS. Otherwise, insure all available patches have been applied to your installation.
    The situation is that there is a Portal already working (a given). We have created this new domain in our development environment and pointed all the datasources to a pre-existing database (in fact, this database is a loaded dump from production).
    Basically, our conclusion is that the role is in the DB (obviously, for it came within the dump from the production's DB) and it's not in the LDAP.
    ANY help will do. Really.
    Thanks in advance,
    Andres

    Thank you very, very much to both of you.
    We followed the procedure you pointed out and it proved to be the solution to our troubles. We just didn't follow the last 2 steps, because we didn't have any crucial data in the LDAP. We were really stuck on this and now we can move on.
    Again, thank you very much.
    Whenever you stop by Chile, we will invite you to a party / BBQ / whatever, to thank you for the help.
    Best regards,
    Andrés

  • Propagation tool problem

    I have successfully used the 8.1 sp3 propagation tool to move a portal from a dev database to a production database, and the production site comes up perfectly with all the correct settings. The problem is that if I then make a change to something like Portlet position or page layout in the dev server, and then re-run the propagation tool (with update and insert enabled) the changes do not go to production. Absolutely nothing changes. If I then delete the portal in production and re-run the propagation tool the new, modified portal shows up in production. It's like it only works with a clean db, and no updates are possible. Has anyone seen this, and are there serious issues with the propagation tool at this point?
    thanks,
    Chad

    Oh! Oh! My...I am very...
    Thanks for your help...

  • Problems when creating propagation session in Workshop

    Hi.
    I'm having some difficulties creating a propagation session in Workshop. After I've chosen the source inventory file and the destination inventory file and click Finish in the wizard, Workshop returns an error saying "Unable to merge source and destination inventories", and then crashes with the following stack trace:
    java.lang.IndexOutOfBoundsException: toIndex = 300
         at java.util.SubList.<init>(AbstractList.java:705)
         at java.util.AbstractList.subList(AbstractList.java:570)
         at com.bea.p13n.management.inventory.hierarchy.InventoryNodeRelationshipList$RelationshipPagedResults.getCurrentPage(InventoryNodeRelationshipList.java:178)
         at com.bea.p13n.pagination.internal.AbstractPagedResult.nextPage(AbstractPagedResult.java:165)
         at com.bea.p13n.management.inventory.hierarchy.nodes.common.BaseNodeRules.ensureDelete(BaseNodeRules.java:820)
         at com.bea.p13n.management.inventory.hierarchy.nodes.common.BaseNodeRules.getChangeRelationships(BaseNodeRules.java:590)
         at com.bea.p13n.management.inventory.tool.common.diff.ImpliedChangeComputer.deriveImpliedChanges(ImpliedChangeComputer.java:297)
         at com.bea.p13n.management.inventory.tool.common.diff.ImpliedChangeComputer.computeElection(ImpliedChangeComputer.java:219)
         at com.bea.p13n.management.inventory.tool.common.diff.ImpliedChangeComputer.processAllElections(ImpliedChangeComputer.java:166)
         at com.bea.p13n.management.inventory.tool.common.diff.ImpliedChangeComputer.processElections(ImpliedChangeComputer.java:115)
         at com.bea.p13n.management.inventory.tool.common.diff.PropagationDifferencer.generateDifferences(PropagationDifferencer.java:168)
         at com.bea.p13n.management.inventory.tool.common.diff.TreeCombiner.combineTrees(TreeCombiner.java:96)
         at com.bea.wlp.eclipse.proptool.ProptoolUtil.combineTrees(ProptoolUtil.java:251)
         at com.bea.wlp.eclipse.proptool.editor.PropSessionElement.doLoadMergedInventory(PropSessionElement.java:221)
         at com.bea.wlp.eclipse.proptool.editor.PropSessionElement.load(PropSessionElement.java:177)
         at com.bea.wlp.eclipse.proptool.editor.PropSessionDocument.load(PropSessionDocument.java:183)
         at com.bea.wlp.eclipse.proptool.editor.PropSessionDocument$1.run(PropSessionDocument.java:129)
         at org.eclipse.core.internal.jobs.Worker.run(Worker.java:76)
    I also get some errors in the server log when I import the destination inventory file, saying:
    <21.feb.2007 kl 16.19 CET> <Error> <InventoryServices> <000000> <Could not write Internett_7.cmn during export.
    java.lang.NullPointerException
    at com.bea.p13n.management.inventory.tool.common.io.TOCExporter.getOutputLocation(TOCExporter.java:694)
    at com.bea.p13n.management.inventory.tool.common.io.TOCExporter.exportAdditionalResources(TOCExporter.java:611)
    at com.bea.p13n.management.inventory.tool.common.io.TOCExporter.exportNodeXml(TOCExporter.java:581)
    at com.bea.p13n.management.inventory.tool.common.io.TOCExporter.handleNode(TOCExporter.java:163)
    at com.bea.p13n.management.inventory.tool.common.io.BaseInventoryFolderExport.handleNode(BaseInventoryFolderExpo
    rt.java:286)
    Truncated. see log file for complete stacktrace
    Complete stacktrace:
    ####<21.feb.2007 kl 16.19 CET> <Error> <InventoryServices> <WEBFORM11> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <weblogic> <> <> <1172071168301> <000000> <Could not write Ekstra_2.cmn during export.
    java.lang.NullPointerException
         at com.bea.p13n.management.inventory.tool.common.io.TOCExporter.getOutputLocation(TOCExporter.java:694)
         at com.bea.p13n.management.inventory.tool.common.io.TOCExporter.exportAdditionalResources(TOCExporter.java:611)
         at com.bea.p13n.management.inventory.tool.common.io.TOCExporter.exportNodeXml(TOCExporter.java:581)
         at com.bea.p13n.management.inventory.tool.common.io.TOCExporter.handleNode(TOCExporter.java:163)
         at com.bea.p13n.management.inventory.tool.common.io.BaseInventoryFolderExport.handleNode(BaseInventoryFolderExport.java:286)
         at com.bea.p13n.management.inventory.tool.common.io.InventoryTreeExport.handleNode(InventoryTreeExport.java:125)
         at com.bea.p13n.management.inventory.hierarchy.InventoryTreeWalker.walkDepthFirst_Recur(InventoryTreeWalker.java:205)
         at com.bea.p13n.management.inventory.hierarchy.InventoryTreeWalker.walkDepthFirst_Recur(InventoryTreeWalker.java:220)
         at com.bea.p13n.management.inventory.hierarchy.InventoryTreeWalker.walkDepthFirst_Recur(InventoryTreeWalker.java:220)
         at com.bea.p13n.management.inventory.hierarchy.InventoryTreeWalker.walkDepthFirst_Recur(InventoryTreeWalker.java:220)
         at com.bea.p13n.management.inventory.hierarchy.InventoryTreeWalker.walkDepthFirst_Recur(InventoryTreeWalker.java:220)
         at com.bea.p13n.management.inventory.hierarchy.InventoryTreeWalker.walkDepthFirst_Recur(InventoryTreeWalker.java:220)
         at com.bea.p13n.management.inventory.hierarchy.InventoryTreeWalker.walkDepthFirst_Recur(InventoryTreeWalker.java:220)
         at com.bea.p13n.management.inventory.hierarchy.InventoryTreeWalker.walkDepthFirst_Recur(InventoryTreeWalker.java:220)
         at com.bea.p13n.management.inventory.hierarchy.InventoryTreeWalker.walkDepthFirst(InventoryTreeWalker.java:154)
         at com.bea.p13n.management.inventory.hierarchy.InventoryTreeWalker.walkDepthFirst(InventoryTreeWalker.java:98)
         at com.bea.p13n.management.inventory.tool.common.io.InventoryTreeExport.walkDepthFirst(InventoryTreeExport.java:88)
         at com.bea.p13n.management.inventory.tool.common.io.InventoryFolderExport.walkDepthFirst(InventoryFolderExport.java:109)
         at com.bea.p13n.management.inventory.tool.common.io.InventoryArchiveExport.walkDepthFirst(InventoryArchiveExport.java:100)
         at com.bea.p13n.management.inventory.tool.appresident.servlet.InventoryManagementServlet.writeInventoryToLocalFile(InventoryManagementServlet.java:1110)
         at com.bea.p13n.management.inventory.tool.appresident.servlet.InventoryManagementServlet.writeInventoryToLocalFile(InventoryManagementServlet.java:1081)
         at com.bea.p13n.management.inventory.tool.appresident.servlet.InventoryManagementServlet.downloadOperation_Remote(InventoryManagementServlet.java:543)
         at com.bea.p13n.management.inventory.tool.appresident.servlet.InventoryManagementServlet.downloadOperation(InventoryManagementServlet.java:480)
         at com.bea.p13n.management.inventory.tool.appresident.servlet.InventoryManagementServlet.doService(InventoryManagementServlet.java:329)
         at com.bea.p13n.management.inventory.tool.appresident.servlet.InventoryManagementServlet.doPost(InventoryManagementServlet.java:203)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:763)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
         at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:225)
         at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:127)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:283)
         at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
    >
    Could this cause the problem I'm having in Workshop?
    Sincerely,
    Borge

    Hi Srujay,
    The problem may be due to a satellite system that was added manually in the Solution Manager system.
    Please check it, add it through LMDB, and synchronize it.
    Rg,
    Karthik

  • Problem with propagation

    I had successfully configured Streams in my environment. Initially, the DML operations I performed, such as inserts, replicated to my target database. Then I deleted one row from my source database and it was not deleted from my target database; since then, none of the DML operations I perform (insert, update, delete) replicate to my target database. Any advice would be greatly appreciated.
    At my source
    ============
    SQL> select * from dept;
    DEPTNO DNAME LOC EMPNO
    10 ACCOUNTING NEW YORK
    20 RESEARCH DALLAS
    30 SALES CHICAGO
    40 OPERATIONS BOSTON
    50 xxx lon
    60 yyy hyd
    75 BT LON
    80 BT1 Ban
    At my target
    SQL> select * from dept;
    DEPTNO DNAME LOC EMPNO
    10 ACCOUNTING NEW YORK
    20 RESEARCH DALLAS
    30 SALES CHICAGO
    40 OPERATIONS BOSTON
    50 xxx lon
    60 yyy hyd
    70 BT LON
    80 BT1 BAN
    Mohan

    Hi
    Thanks for the reply. I want any DML or DDL operation performed on either side to propagate. Please let me know where I made a mistake.
    Instance Setup
    In order to begin, the following parameters should be set in the spfiles of the participating databases:
    ALTER SYSTEM SET JOB_QUEUE_PROCESSES=1;
    ALTER SYSTEM SET AQ_TM_PROCESSES=1;
    ALTER SYSTEM SET GLOBAL_NAMES=TRUE;
    ALTER SYSTEM SET COMPATIBLE='9.2.0' SCOPE=SPFILE;
    ALTER SYSTEM SET LOG_PARALLELISM=1 SCOPE=SPFILE;
    SHUTDOWN IMMEDIATE;
    STARTUP;
    Stream Administrator Setup
    =============================
    Next we create a stream administrator, a stream queue table and a database link on the source database:
    CONN sys/password@DBA1 AS SYSDBA
    The steps below were done on both source and target, except for the DB link
    =============================================
    CREATE USER strmadmin IDENTIFIED BY strmadminpw
    DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
    GRANT CONNECT, RESOURCE, SELECT_CATALOG_ROLE TO strmadmin;
    GRANT EXECUTE ON DBMS_AQADM TO strmadmin;
    GRANT EXECUTE ON DBMS_CAPTURE_ADM TO strmadmin;
    GRANT EXECUTE ON DBMS_PROPAGATION_ADM TO strmadmin;
    GRANT EXECUTE ON DBMS_STREAMS_ADM TO strmadmin;
    GRANT EXECUTE ON DBMS_APPLY_ADM TO strmadmin;
    GRANT EXECUTE ON DBMS_FLASHBACK TO strmadmin;
    BEGIN
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
    grantee => 'strmadmin',
    grant_option => FALSE);
    END;
    BEGIN
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_RULE_OBJ,
    grantee => 'strmadmin',
    grant_option => FALSE);
    END;
    CONNECT strmadmin/strmadminpw@DBA1
    EXEC DBMS_STREAMS_ADM.SET_UP_QUEUE();
    CREATE DATABASE LINK dba2 CONNECT TO strmadmin IDENTIFIED BY strmadminpw USING 'DBA2';
    GRANT ALL ON scott.dept TO strmadmin;
    LogMiner Tablespace Setup
    Next we create a new tablespace to hold the LogMiner tables on the source database:
    CONN sys/password@DBA1 AS SYSDBA
    CREATE TABLESPACE logmnr_ts DATAFILE '/u01/app/oracle/oradata/DBA1/logmnr01.dbf'
    SIZE 25 M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
    EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('logmnr_ts');
    Supplemental Logging
    CONN sys/password@DBA1 AS SYSDBA
    ALTER TABLE scott.dept ADD SUPPLEMENTAL LOG GROUP log_group_dept_pk (deptno) ALWAYS;
    Configure the propagation process on DBA1:
    ========================================
    CONNECT strmadmin/strmadminpw@DBA1
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name => 'scott.dept',
    streams_name => 'dba1_to_dba2',
    source_queue_name => 'strmadmin.streams_queue',
    destination_queue_name => 'strmadmin.streams_queue@dba2',
    include_dml => true,
    include_ddl => true,
    source_database => 'smtp');
    END;
    Configure the capture process on DBA1:
    =====================================
    CONNECT strmadmin/strmadminpw@DBA1
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name => 'scott.dept',
    streams_type => 'capture',
    streams_name => 'capture_simp',
    queue_name => 'strmadmin.streams_queue',
    include_dml => true,
    include_ddl => true);
    END;
    Configure Instantiation SCN
    exp USERID=scott/tiger@dba1 TABLES=DEPT FILE=D:\tab1.dmp
    GRANTS=Y ROWS=Y LOG=exportTables.log OBJECT_CONSISTENT=Y INDEXES=Y
    imp USERID=scott/tiger@dba2 FULL=Y CONSTRAINTS=Y FILE=d:\tab1.dmp
    IGNORE=Y GRANTS=Y ROWS=Y COMMIT=Y LOG=importTables.log STREAMS_CONFIGURATION=Y STREAMS_INSTANTIATION=Y
    Alternatively the instantiation SCN can be set using the DBMS_APPLY_ADM package:
    CONNECT strmadmin/strmadminpw@dba1
    DECLARE
    v_scn NUMBER;
    BEGIN
    v_scn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
    DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN@DBA2(
    source_object_name => 'scott.dept',
    source_database_name => 'dba1',
    instantiation_scn => v_scn);
    END;
    Configure the apply process on the destination database (DBA2):
    CONNECT strmadmin/strmadminpw@DBA2
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name => 'scott.dept',
    streams_type => 'apply',
    streams_name => 'apply_simp',
    queue_name => 'strmadmin.streams_queue',
    include_dml => true,
    include_ddl => true,
    source_database => 'smtp');
    END;
    Start the apply process on destination database (DBA2) and prevent errors stopping the process:
    CONNECT strmadmin/strmadminpw@DBA2
    BEGIN
    DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'apply_simp',
    parameter => 'disable_on_error',
    value => 'n');
    DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'apply_simp');
    END;
    Start the capture process on the source database (DBA1):
    CONNECT strmadmin/strmadminpw@DBA1
    BEGIN
    DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'capture_simp');
    END;
    Regards,
    Mohan
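    After a setup like this, it can help to verify that each component is actually enabled and that no apply errors are piling up; these are generic status queries, not specific to this configuration:
    -- On the source (DBA1): capture status and propagation schedule health.
    SELECT capture_name, status FROM dba_capture;
    SELECT qname, destination, schedule_disabled, failures, last_error_msg
      FROM dba_queue_schedules;
    -- On the target (DBA2): apply status and any apply errors.
    SELECT apply_name, status FROM dba_apply;
    SELECT apply_name, error_message FROM dba_apply_error;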
