Streams Propagation Job --next_date := sys.dbms_aqadm.aq$_propaq(job)

I have implemented one-way replication from database "A" to database "B" using Oracle Streams. The configuration is running okay, but I see the elapsed time for this PL/SQL block always increasing. It is the propagation job, and its total time value is presently at 15121 seconds. I'm not sure if this will create any issues. I couldn't find anything on Metalink related to slowness except for "Streams Hangs After ORA-07445 [ksrconsume] Core Dump and Repeated Alert Log Messages "Pmon Failed To Acquire Latch, See Pmon Dump" [ID 787960.1]". Has anyone seen this behavior?
select log_user, priv_user, this_date, this_sec, next_date, next_sec, total_time
from dba_jobs;

LOG_USER  PRIV_USER  THIS_DATE       THIS_SEC  NEXT_DATE       NEXT_SEC  TOTAL_TIME
SYS       SYS        1/25/2010 7:36  7:36:14   1/25/2010 7:36  7:36:10   15121
I am on 10.2.0.4. Is this due to the Oracle bug
"Streams Hangs After ORA-07445 [ksrconsume] Core Dump and Repeated Alert Log Messages "Pmon Failed To Acquire Latch, See Pmon Dump" [ID 787960.1]"?
I haven't seen any ORA-07445s in the db yet.
Thank You
Kev

Hi, thank you for your (fast) answer! That is the solution to my problem!
Now I have a follow-up question:
I've got a procedure that runs for between one and two minutes. An unlimited number of concurrently
running lightweight jobs would freeze the db...
How can the number of lightweight jobs running in parallel be limited?
According to the documentation it is not possible:
There is no explicit limit to the number of lightweight jobs that can run simultaneously to process multiple instances of the event.
However, limitations may be imposed by available system resources.
Could you explain to me what "...available system resources..." means?
Ultimately, what I would like to have is a maximum of two lightweight jobs running in parallel...
Thank you in advance, bye, á
PS: Can I attach a file to the post anyway?

Similar Messages

  • Error in the propagation job while setting up streams

    Hi,
    I am trying to set up Streams using Enterprise Manager. The wizard completes successfully, and the capture and apply processes are created properly. But every time, the propagation job fails with the following error:
    ORA-02019: connection description for remote database not found.
    In the tnsnames.ora I have a connection description to the remote database.
    I can't find any details on why this error happens. I also set up a db link to the remote database using the net service name defined in the tnsnames.ora and that tests fine.
    Appreciate any help in identifying the problem.
    Thanks.

    You couldn't find anything on why this happens?
    I just went to metalink, clicked on the KnowledgeBase tab, typed in "ORA-02019" and instantly retrieved 144 documents.
    Google reports 29,200 documents listing this exception.
    Bing lists 5,360 documents.
    Try your search again.
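    For what it's worth, the usual cause of ORA-02019 in a propagation is that the database link named in the destination queue (the part after the @) doesn't exist for the Streams user or, with GLOBAL_NAMES=TRUE, doesn't match the remote database's global name. A quick check (the link name here is illustrative):
    -- Which links does the propagation user actually own?
    select owner, db_link, host from dba_db_links;
    -- With global_names = TRUE the link name must match the remote global name
    show parameter global_names
    select * from global_name@MY_DB_LINK;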

  • Error upon inserting data in SQL database using Stream Analytics job: data type conversion error

    I have data passed into Event Hubs and queried by a Stream Analytics job that inserts it into a SQL database. Upon running the job, it becomes idle a few seconds in because of this error:
    Message: Conversion from 0 to System.Boolean failed. 0 was of type - System.Int64.
    Conversion from 0 to System.Boolean failed. 0 was of type - System.Int64. Exception message at level [1], exception number [0], parent exception number [0]: Conversion from 0 to System.Boolean failed. 0 was of type - System.Int64.
    The data type of one of my fields (IsHistorical) is Boolean with a value of false. The data type of the column in the SQL table where this is to be inserted is bit. It seems that Stream Analytics could not convert the value "false" into a bit data type when inserting into the SQL table.
    I'm wondering if you have already encountered this problem. Could you help me resolve it?
    Thank you.

    Azure Stream Analytics does not have a Boolean type; on input, JSON Boolean values are converted to bigint.
    Here is the list of supported types and conversions:
    https://msdn.microsoft.com/en-us/library/azure/dn835065.aspx 
    You can fix this error by changing the column type from bit to int in the SQL table schema.
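    If changing the table is acceptable, that's a one-line change (the table name below is just an example based on the post; adjust to your schema):
    -- T-SQL: widen the bit column so the bigint 0/1 coming from Stream Analytics converts cleanly
    ALTER TABLE dbo.MyEvents ALTER COLUMN IsHistorical int;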

  • PowerBI & Azure Stream Analytics jobs login issue

    Hello team,
    We are working as an early-adopter partner for Azure Stream Analytics along with the Azure IoT suite. We recently got the PowerBI service enabled as an output connector for Stream Analytics jobs on our corporate subscription, and we use the same org ID to log in to Azure Stream Analytics and the PowerBI service.
    But, to our great surprise, after creating the SA job and configuring PowerBI as output, it redirects for authorization and we supply the PowerBI dataset and table names. Yet after logging into the app.powerbi.com portal, we are not able to see the Stream Analytics job's dataset and table.
    Note: we are using the same org ID to log in, to create SA jobs, and to log into the PowerBI preview portal.
    It would be great if there were specific instructions/a guide for connecting PowerBI with an ASA job apart from this. Any pointer will be appreciated.
    Thanks,
    Anindita Basak
    MAX451, Inc.

    Hello there,
    Thanks for the reply.
    No, we're not able to see any event with status 'Failed' in the SA operation logs. I've attached the relevant screenshot of the event logs.
    The jobs run fine: if we use SQL Azure tables as the output connector, the data is available. Only with the PowerBI output connector are the datasets not visible, even though we're using the same org id (i.e.
    [email protected]) for creating the ASA jobs and logging into the PowerBI subscription.
    Thanks for your help!
    Anindita Basak
    MAX451, Inc

  • AQ Propagation job hanging

    Hi,
    We have Oracle 11 Enterprise Edition installed in our central UNIX environment.
    On the other side we have PCs that run an Oracle 11 XE database on Windows.
    Between these databases, data is sent with AQ.
    Now, we noticed that when AQ is sending data and at that same moment the network connection between the databases is interrupted, the propagation job hangs.
    Restarting the propagation also hangs because of the hanging propagation job.
    The hanging propagation job then has to be killed on UNIX in the central environment.
    When this issue occurs while sending from the PC to the central server, we reboot the PC.
    Afterwards we can restart the propagation.
    Does anyone recognize this problem?
    Is this a known bug?

    Probably better off asking AQ questions in the AQ forum.
    Advanced Queueing
    Cheers,

  • Streams propagation error

    I have 2 Oracle 11.2 standard edition databases and I am attempting to perform synchronous capture replication from one to the other.
    I have a propagation rule created with:
    begin
      DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
        table_name             => 'duptest.test1',
        streams_name           => 'send_test1',
        source_queue_name      => 'strmadmin.streams_capt_q',
        destination_queue_name => 'strmadmin.streams_apply_q@orac11g.world',
        include_dml            => true,
        include_ddl            => false,
        inclusion_rule         => true);
    end;
    /
    As strmadmin on the source database I can query the streams_apply_q_table at orac11g.world (the destination database).
    However the propagation fails and produces a job trace file stating:
    kwqpdest: exception 24033
    kwqpdest: Error 24033 propagating to "STRMADMIN"."STREAMS_APPLY_Q"
    94272890B6B600FFE040007F01006D4C
    Can anyone suggest what is wrong and how to fix this?

    Paul,
    The capture process is:
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name => 'duptest.test1',
    streams_type => 'sync_capture',
    streams_name => 'sync_capture',
    queue_name => 'streams_capt_q');
    END;
    The apply process on the destination database is:
    begin
    dbms_apply_adm.create_apply(
    queue_name => 'streams_apply_q',
    apply_name => 'sync_apply',
    apply_captured => false,
    source_database => 'ORALIN1');
    end;
    /
    begin
    dbms_streams_adm.add_table_rules(
    table_name => 'duptest.test1',
    streams_type => 'apply',
    streams_name => 'sync_apply',
    queue_name => 'streams_apply_q',
    include_dml => true,
    include_ddl => false,
    include_tagged_lcr => false,
    source_database => 'ORALIN1',
    inclusion_rule => true);
    end;
    /
    There is an entry in the AQ$_STREAMS_CAPT_Q_TABLE_E table, and no entries in dba_apply_error or streams_apply_q_table so I am assuming that the message does not make it to the destination queue.
    I have been assuming that the propagation and apply steps are independent, i.e. that the apply queue separates the propagation activity from the apply activity. Is this wrong?
    Thanks
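    One thing worth verifying on the destination side: ORA-24033 is raised when a message is enqueued into a multi-consumer queue that has no eligible subscribers, so check what subscribers actually exist on the destination queue (owner and queue names below are taken from the posts above):
    -- Run on the destination database
    select queue_name, consumer_name, address
    from dba_queue_subscribers
    where owner = 'STRMADMIN'
    and queue_name = 'STREAMS_APPLY_Q';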

  • Streams propagation Health Check Script

    Hi,
    We are working in a local capture configuration.
    We have two servers (Server1: master, Server2: satellite). When we run the Streams HC script we get the following results for SCHEDULE FOR EACH PROPAGATION:
    http://subefotos.com/ver/?2e4660f0841f08271724853614f72d12o.png
    We have two propagations with the same name, one disabled (with 17 failures) and the other enabled. Is it possible to delete the disabled propagation?
    Version: 10.2.0.4 x64
    Thanks

    Yes, you can drop the disabled propagation using the DBMS_PROPAGATION_ADM.DROP_PROPAGATION procedure.
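    Something like this (the propagation name is whatever the HC script reported; drop_unused_rule_sets is optional):
    begin
      dbms_propagation_adm.drop_propagation(
        propagation_name      => 'MY_PROPAGATION',
        drop_unused_rule_sets => true);
    end;
    /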

  • AQ troubleshooting doc

    Hi All,
    I came across this good article on AQ troubleshooting and thought of sharing it with all, hence this post. Out of the things mentioned in the article, I have outlined a few steps that helped me in our environment while troubleshooting AQ issues.
    How to troubleshoot AQ issues?
    https://blogs.oracle.com/db/entry/oracle_support_master_note_for_troubleshooting_advanced_queuing_and_oracle_streams_propagation_issue
    The note gives a step-by-step methodology for troubleshooting and resolving problems with Advanced Queuing propagation in both Streams and basic Advanced Queuing environments.
    1. Check if queues are buffered (in-memory) or persistent (on disk in a queue_table).
    SQL> select * from  v$buffered_queues;
    no rows selected
    2. Check if queue_to_queue or queue_to_dblink is used.
    col destination for a15
    col session_id for a15
    set line 200
    select qname,destination,session_id,process_name,schedule_disabled,instance, current_start_time
    from dba_queue_schedules order by current_start_time desc,schedule_disabled desc ;
    3. Check if queues are propagating data at all or are slow?
    select TOTAL_NUMBER from DBA_QUEUE_SCHEDULES where QNAME='&queue_name';
    4. Check the job_queue_processes parameter. It should be more than 4.
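    For example:
    show parameter job_queue_processes
    -- or, without SQL*Plus:
    select name, value from v$parameter where name = 'job_queue_processes';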
    5. Identify job queue processes...
    For 10.2 and lower
    select p.SPID, p.PROGRAM
    from V$PROCESS p, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j
    where s.SID=jr.SID
    and s.PADDR=p.ADDR
    and jr.JOB=j.JOB
    and j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%';
    For 11.1 and higher
    col PROGRAM for a30
    select p.SPID, p.PROGRAM, j.JOB_name
    from v$PROCESS p, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j
    where s.SID=jr.SESSION_ID
    and s.PADDR=p.ADDR
    and jr.JOB_name=j.JOB_NAME
    and j.JOB_NAME like '%AQ_JOB$_%';
    6. Check Alert.log and tracefiles for more information.
    7. Check that the DB link works, connecting as the owner of the DB link.
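    A quick sanity check (the link name is illustrative):
    -- As the owner of the database link:
    select * from dual@my_db_link;
    -- If this hangs or errors, propagation over the same link will too.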
    8. Check queue errors and also find out associated Queue table
    set linesize 140;
    column destination format a25;
    column last_error_msg format a35;
    column schema format a15
    select schema,
           qname,
           destination,
           failures,
           last_error_date,
           last_error_time,
           last_error_msg
    from   dba_queue_schedules
    where  failures != 0;
    select QUEUE_TABLE from DBA_QUEUES where NAME ='&queue_name';
    Check what the queue is supposed to do:
    column qname format a40
    column user_comment format a40
    column last_error_msg format a40
    column destination format a25
    select distinct a.schema || '.' || a.qname qname
          ,a.destination
          ,a.schedule_disabled
          ,b.user_comment
    from dba_queue_schedules a, dba_queues b
    where a.qname=b.name;
    9. Check if Queues are disabled.
    select schema || '.' || qname,
           destination,
           schedule_disabled,
           last_error_msg
    from dba_queue_schedules
    where schedule_disabled='Y';
    10. If the queue is DISABLED, enable it using the following:
    select 'exec dbms_aqadm.enable_propagation_schedule(''' || schema || '.' || qname || ''', ''' || destination || ''');'
    from dba_queue_schedules
    where schedule_disabled='Y';
    10.1 Check if propagation has been set correctly
    Check that the propagation schedule has been created and that a job queue process has been assigned. Look for the entry in DBA_QUEUE_SCHEDULES and SYS.AQ$_SCHEDULES for your schedule. For 10g and below, check that it has a JOBNO entry in SYS.AQ$_SCHEDULES, and that there is an entry in DBA_JOBS with that JOBNO. For 11g and above, check that the schedule has a JOB_NAME entry in SYS.AQ$_SCHEDULES, and that there is an entry in DBA_SCHEDULER_JOBS with that JOB_NAME. Check the destination is as intended and spelled correctly.
    10.2 Check if a PROCESS_NAME has been assigned to the queue schedule; if no process_name is assigned, the schedule is not currently executing. You may need to execute this statement a number of times to verify whether a process is being allocated.
    10.3 If a process_name is assigned and the schedule is executing but failing, refer to step 8 for the errors.
    10.4 Check if the queue tables exist in SYS:
    SQL> select NAME, ENQUEUE_ENABLED, DEQUEUE_ENABLED
    from DBA_QUEUES where owner='SYS'
    and QUEUE_TABLE like '%PROP_TABLE%';
    If the %PROP_NOTIFY queue is not enabled for enqueue or dequeue, it should be so enabled using DBMS_AQADM.START_QUEUE. However, the exception queue AQ$_AQ$_PROP_TABLE_E should not be enabled for enqueue or dequeue.
    10.5 Check that the remote queue the propagation is transferring messages to exists and is enabled for enqueue
    If the remote queue is not enabled for enqueue or dequeue, enable it using DBMS_AQADM.START_QUEUE on the destination database. As before, the exception queue AQ$_AQ$_PROP_TABLE_E should not be enabled for enqueue or dequeue.
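    For illustration (adjust the owner/queue name to your environment, and never do this for the exception queue):
    begin
      dbms_aqadm.start_queue(
        queue_name => 'SYS.AQ$_PROP_NOTIFY',
        enqueue    => true,
        dequeue    => true);
    end;
    /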
    11. Check performance of each queue.
    col last_run_date for a40
    col qname for a25
    col NEXT_RUN_DATE for a40
    col seconds for 9999
    set line 200
    select qname,
           last_run_date,
           NEXT_RUN_DATE,
           total_number MESSAGES,
           total_bytes/1024 KBYTES,
           total_time SECONDS,
           round(total_bytes/(total_time+0.0000001)) BYTES_PER_SEC, process_name
    from dba_queue_schedules
    order by BYTES_PER_SEC;
    12. Check if there are locking issues. A high CTIME value (>1800) indicates a suspicious lock:
    select * from gv$transaction_enqueue order by ctime;
    12.1 Find out objects accessed by session
      select * from gv$access
    where sid = 176 and object like 'T_%'
    and owner = 'owner_name';
       INST_ID        SID OWNER                          OBJECT                         TYPE
             3        176 owner_name                    Some_queue_table_name         TABLE
             2        176 owner_name                    Some_queue_table_name         TABLE
             1        176 owner_name                    Some_queue_table_name         TABLE
    12.2
    select * from gv$lock
    where sid = 176 and inst_id = 3;
       INST_ID ADDR     KADDR           SID TY        ID1        ID2      LMODE   REQUEST      CTIME      BLOCK
             3 A404050C A4040520        176 TM      35580          0          3         0      82823          2
             3 A40407A0 A40407B4        176 TM      35578          0          3         0      82823          2
             3 A4040058 A404006C        176 TM      35591          0          3         0     
    12.3
    select object_name 
    from gv$lock l join dba_objects on id1 = object_id
    where sid = 176 and inst_id = 3
    and type = 'TM';
    OBJECT_NAME
    AQ$_queue_table_name_T
    AQ$_queue_table_name_H
    AQ$_queue_table_name_I
    Some_queue_table_name
    12.4 It could be that the session is stuck; mostly it will be a DBMS_JOB job trying to propagate a message (10.2 and below):
    select /*+rule*/ *
    from dba_jobs_running
    where sid = 176;
           SID        JOB   FAILURES LAST_DATE LAST_SEC                 THIS_DATE THIS_SEC                INSTANCE
           176    2867992                                               13-MAY-07 16:27:06                  3
    select job,what,this_date,next_date,broken
    from dba_jobs
    where job = 2867992;
           JOB WHAT                                               THIS_DATE         NEXT_DATE         B
       2867992 next_date := sys.dbms_aqadm.aq$_propaq(job);       13-MAY-2007 16:27 13-MAY-2007 16:27 N
    Check that the job has an associated propagation schedule. If it doesn't, then the locks being seen are a problem, because the job is holding them without doing anything.
    select sid,jobno
    from sys.aq$_schedules
    where jobno = 2867992;
    no rows selected
    Check the job still has a thread running within the Oracle executable:
    select sid,spid,p.program
    from gv$session s join gv$process p on paddr = addr
    where s.sid= 176 and s.inst_id = 3;
           SID SPID         PROGRAM
           176 4608         ORACLE.EXE (J044)
    v$session_wait shows it is still waiting for input even though the link has gone, confirming the issue:
    select event from gv$session_wait  where sid = 176  and inst_id = 3;
    EVENT
    SQL*Net message from dblink
    13. Tracing queues
    10.2 and below
    connect / as sysdba
    select p.SPID, p.PROGRAM
    from v$PROCESS p, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j
    where s.SID=jr.SID
    and s.PADDR=p.ADDR
    and jr.JOB=j.JOB
    and j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%';
    -- For the process id (SPID) attach to it via oradebug and generate the following trace
    oradebug setospid <SPID>
    oradebug unlimit
    oradebug Event 10046 trace name context forever, level 12
    oradebug Event 24040 trace name context forever, level 10
    -- Trace the process for 5 minutes
    oradebug Event 10046 trace name context off
    oradebug Event 24040 trace name context off
    -- The following command returns the pathname/filename to the file being written to
    oradebug tracefile_name
    11g
    connect / as sysdba
    col PROGRAM for a30
    select p.SPID, p.PROGRAM, j.JOB_NAME
    from v$PROCESS p, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j
    where s.SID=jr.SESSION_ID
    and s.PADDR=p.ADDR
    and jr.JOB_NAME=j.JOB_NAME
    and j.JOB_NAME like '%AQ_JOB$_%';
    -- For the process id (SPID) attach to it via oradebug and generate the following trace
    oradebug setospid <SPID>
    oradebug unlimit
    oradebug Event 10046 trace name context forever, level 12
    oradebug Event 24040 trace name context forever, level 10
    -- Trace the process for 5 minutes
    oradebug Event 10046 trace name context off
    oradebug Event 24040 trace name context off
    -- The following command returns the pathname/filename to the file being written to
    oradebug tracefile_name
    How to Enable/Disable a propagation schedule
    col destination for a25
    select QNAME,DESTINATION,SCHEDULE_DISABLED from dba_queue_Schedules where destination='DB_link';
    exec dbms_aqadm.DISABLE_PROPAGATION_SCHEDULE(QUEUE_NAME=>'&Enter_SchemaName_QueueName',DESTINATION=>'&Enter_Destination');
    exec dbms_aqadm.ENABLE_PROPAGATION_SCHEDULE(QUEUE_NAME=>'&Enter_SchemaName_QueueName',DESTINATION=>'&Enter_Destination');
    exec dbms_aqadm.unschedule_propagation(QUEUE_NAME=>'&Enter_SchemaName_QueueName',DESTINATION=>'&Enter_Destination');
    exec dbms_aqadm.schedule_propagation(QUEUE_NAME=>'&Enter_SchemaName_QueueName',DESTINATION=>'&Enter_Destination');


  • AQ Apply causes the ORA-12805: parallel query server died unexpectedly error

    Hi gurus,
    Please help me here, and thank you. How do I solve this ORA-12805: parallel query server died unexpectedly problem? I have posted this in the Streams forum but have not gotten any answer. Sorry to post it here, but I am desperate.
    I followed the samples and created two queue tables of sys.mgw_basic_msg_t and two queues (Source_Q and Dest_Q) on them. I started both of them. Then I added two subscribers of the Dest_Q to the Source_Q. Both queues are in the same database. I am on 10gR2 Enterprise Edition.
    DECLARE
    subscriber1 sys.aq$_agent;
    BEGIN
    subscriber1 := sys.aq$_agent('JW1', 'ben.Dest_Q', NULL);
    dbms_aqadm.add_subscriber(
    queue_name => 'ben.Source_Q',
    subscriber => subscriber1,
    queue_to_queue => true);
    END;
    /
    DECLARE
    subscriber1 sys.aq$_agent;
    BEGIN
    subscriber1 := sys.aq$_agent('JW2', 'ben.Dest_Q', NULL);
    dbms_aqadm.add_subscriber(
    queue_name => 'ben.Source_Q',
    subscriber => subscriber1,
    queue_to_queue => true);
    END;
    /
    Then I set up the schedule,
    BEGIN
    dbms_aqadm.schedule_propagation(
    queue_name => 'ben.Source_Q',
    start_time => sysdate,
    duration => 30,
    next_time => 'sysdate + 30/86400',
    latency => 10);
    END;
    /
    And then create and start the apply
    begin
    dbms_apply_adm.create_apply(
    queue_name => 'ben.Dest_Q',
    apply_name => 'mncis_apply',
    message_handler => 'ben.sprocMessageHandler',
    apply_user => 'ben');
    exception
    when others then
    dbms_output.put_line(SQLErrM);
    end;
    begin
    DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'mncis_apply');
    end;
    /
    Then I manually enqueued messages into the Source_Q. The enqueue is always successful, but then nothing happens: the propagation from Source_Q to Dest_Q never occurs. Querying dba_jobs shows this:
    select job, last_date, next_date, what from dba_jobs;
    JOB   LAST_DATE         NEXT_DATE         WHAT
    1     09/21/2006 14:36  09/21/2006 14:37  EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS();
    4001  09/21/2006 08:45  09/21/2006 16:45  wwv_flow_cache.purge_sessions(p_purge_sess_older_then_hrs => 24);
    4002  09/21/2006 14:27  09/21/2006 14:37  wwv_flow_mail.push_queue(wwv_flow_platform.get_preference('SMTP_HOST_ADDRESS'),wwv_flow_platform.get
    444                     09/21/2006 14:37  next_date := sys.dbms_aqadm.aq$_propaq(job);
    I noticed that there is never a LAST_DATE value for job 444 (this number changes all the time), sys.dbms_aqadm.aq$_propaq(job).
    Querying the dba_apply view:
    select apply_name, queue_name, status from dba_apply;
    APPLY_NAME   QUEUE_NAME  STATUS
    MNCIS_APPLY  DEST_Q      ENABLED
    Manual dequeuing is always successful.
    When I created an APPLY on the source_q with a user-defined message handler, it tried to do the apply, but the operation was aborted with the ORA-12805: parallel query server died unexpectedly error.
    Please tell me what I did wrong. Thank you!
    Ben

    Thanks again.
    Here is what I found from that table.
    I added three subscribers, JW1, JW2 and JW3. The first two were added like this,
    DECLARE
         subscriber1 sys.aq$_agent;
    BEGIN
         subscriber1 := sys.aq$_agent('JW1', 'ben.dest_q', NULL);
         dbms_aqadm.add_subscriber(
              queue_name => 'ben.source_q',
              subscriber => subscriber1,     
          queue_to_queue => true);
    END;
    /
    DECLARE
         subscriber2 sys.aq$_agent;
    BEGIN
         subscriber2 := sys.aq$_agent('JW2', 'ben.dest_q', NULL);
         dbms_aqadm.add_subscriber(
              queue_name => 'ben.source_q',
              subscriber => subscriber2,     
          queue_to_queue => true);
    END;
    /
    And the third one, JW3 was added like this,
    DECLARE
         subscriber3 sys.aq$_agent;
    BEGIN
         subscriber3 := sys.aq$_agent('JW3', null, NULL);
         dbms_aqadm.add_subscriber(
              queue_name => 'ben.source_q',
              subscriber => subscriber3,     
          queue_to_queue => false);
    END;
    /
    The msg_state shows PROCESSED for consumer JW3, and READY for consumers JW1 and JW2. But I don't know where the message was processed to for JW3. The docs say you can propagate messages from one queue to another in the same database. Also, when I use dbms_propagation_adm.create_propagation, I get these errors:
    ORA-01422: exact fetch returns more than requested number of rows
    ORA-06512: at "SYS.DBMS_PROPAGATION_INTERNAL", line 372
    ORA-06512: at "SYS.DBMS_PROPAGATION_ADM", line 39
    ORA-06512: at line 2
    Thanks a lot.
    Ben
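    Ben, one more thing worth checking (just a diagnostic sketch): the ORA-01422 raised inside DBMS_PROPAGATION_INTERNAL means an internal lookup returned more than one row where it expected exactly one. One plausible cause is that something is already registered between the same source queue and destination, so look at what already exists before creating the propagation:
    select propagation_name, source_queue_owner, source_queue_name,
           destination_queue_name, destination_dblink
    from dba_propagation;
    select qname, destination, schedule_disabled
    from dba_queue_schedules;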

  • Some AQ managed tables aren't cleared and contain loads of rows

    Hi,
    I have been running a system made up of two nodes (Oracle XE 10gR2) connected by queues for a couple of years without too much hassle to date.
    Yesterday I found that two of the tables created at the time the queue was created contained a few hundred thousand rows, whereas the same tables in a test environment that runs almost the same data were empty. I am talking about AQ$_queue_table_name_H and AQ$_queue_table_name_I.
    I stopped the propagation and, when the queue was empty, I truncated the content of the two tables without problems. But the question is: is there some internal Oracle process that is supposed to take care of this and for some reason is not working?
    I checked the documentation but I can't find many details about housekeeping jobs.
    Is the monitoring of these tables something that the DBA should add to his/her tasks?
    Thanks
    Flavio

    Hi,
    Sorry I missed the XE thing.
    So, when you say aq_tm_processes is set to zero as recommended, I'm assuming you mean it is defaulted and not explicitly set to zero? You can check using this:
    connect / as sysdba
    set serveroutput on
    declare
    mycheck number;
    begin
    select 1 into mycheck from v$parameter where name = 'aq_tm_processes' and value = '0'
    and (ismodified != 'FALSE' OR isdefault='FALSE');
    if mycheck = 1 then
    dbms_output.put_line('The parameter ''aq_tm_processes'' is explicitly set to 0!');
    end if;
    exception when no_data_found then
    dbms_output.put_line('The parameter ''aq_tm_processes'' is not explicitly set to 0.');
    end;
    /
    If it is explicitly zero then remove it:
    alter system reset aq_tm_processes scope=spfile sid='*';
    -- restart the database
    If it isn't, then check that the background processes are running:
    select sid,type,program,event from v$session where program like '%(Q%';
    sys.dbms_aqadm.aq$_propaq handles propagation, not queue cleanup activities. The cleanup process should be removing data if the retention is set to 0; it is a background process, so its frequency is defined internally.
    Can you also check for orphaned messages on your queue?
    select count(*)
    from AQ$_<queue_table_name>_I i
    where not exists (select t.msgid
                      FROM <queue_table_name> t where i.msgid = t.msgid);
    select count(*)
    from AQ$_<queue_table_name>_T i
    where not exists (select t.msgid
                      FROM <queue_table_name> t where i.msgid = t.msgid);
    select count(*)
    from AQ$_<queue_table_name>_H i
    where not exists (select t.msgid
                      FROM <queue_table_name> t where i.msgid = t.msgid);
    Thanks
    Paul

  • Propagation problem in streams 11.1.0.7

    My propagation job AQ_JOBS_32 is not working any more, but it still shows as running in the OEM Streams monitor.
    In the DBA_SCHEDULER_RUNNING_JOBS view the SESSION_ID column value is null.
    I can't kill the job in any way.
    I get contradictory results from these procedures:
    SYS.DBMS_SCHEDULER.STOP_JOB
    and
    DBMS_SCHEDULER.DROP_JOB
    Executing the procedure:
    SYS.DBMS_SCHEDULER.STOP_JOB
    (job_name => 'SYS.AQ_JOB$_32'
    ,force => TRUE);
    returns an error:
    ORA-27366: job "SYS.AQ_JOB$_32" is not running
    ORA-06512: at "SYS.DBMS_ISCHED", line 168
    ORA-06512: at "SYS.DBMS_SCHEDULER", line 515
    Executing the procedure:
    DBMS_SCHEDULER.DROP_JOB
    (job_name => 'SYS.AQ_JOB$_32', force => TRUE);
    returns an error:
    ORA-27478: job "SYS.AQ_JOB$_32" is running
    ORA-06512: at "SYS.DBMS_ISCHED", line 182
    ORA-06512: at "SYS.DBMS_SCHEDULER", line 615
    What should I do?

    You have posted this to three different groups. Please post to one and only one group. Please drop two of the postings. Thank you.

  • Needs tips for tuning Streams 10g propagation

    I setup bi-directional streams replication in Oracle 10.2.0.3 non-RAC between a pair of Solaris 10 servers on the same subnet with 100 Mbps fiber connection. Setup was done using MAINTAIN_SCHEMAS and it uses queue-to-queue propagation. Everything works well in both databases, but performance bottlenecks occur with propagation. Please advise on tuning options for Streams propagation.
    For example, while importing 100 million rows into the first DB we saw the count continually rise in both databases at a ratio of 5:1. That is, the local db grew five times faster than the remote db, so by the time the first database had 20 million rows loaded, the second db had only received 4 million rows through replication. As time went on the ratio got a bit worse, all the way down to 10:1. Both DBs have a 3 GB SGA and a 1 GB PGA. The AWR/ADDM reports show very good statistics.
    There are 4 capture processes in each db, and I raised the number of apply processes in each db to 6 with no impact detected after the change. All of these apply and capture processes seem to be starved for work. Capture continually goes into the state of "paused for flow control" while waiting for things to propagate out.
    From what I have read in the Oracle Streams documentation, propagation is not parallelized between a given source + destination pair. What options do I have?

    AQ_TM_PROCESSES sure makes life interesting. I agree with the recommendation to completely omit this parameter from your spfile and let Oracle auto-manage it, but that does not mean Streams will always work right. Sometimes flow control will get stuck in the on position, and the way you kick Oracle in the pants is to manually set AQ_TM_PROCESSES to some value and then remove the parameter later to get back to the recommended configuration.
    I have read a few things about creating more propagation jobs, that is, simulating parallelism through multiple jobs because propagation parallelism is not supported. Right now my database has only one schema that owns one dblink to a single remote DB, and there is one propagation job configured for q2q over that link. This was all set up by the Oracle scripts when I ran maintain_schemas to configure replication. I'd like more documentation from Oracle on how else it could be configured or modified to have more propagation jobs that simulate parallelism.
    Another gripe: the Oracle docs suggest, when importing large tables, setting commit=y so that if the process dies you can re-run the same import command and let Oracle ignore errors about existing rows. Yes, commit=y is great for importing into Streams because things will propagate and apply immediately with lower impact on flow / buffers / undo. But, wow, there is a massive penalty if you follow their advice completely. For example, I killed the import after 3 million rows (6 hours) and restarted it. During the second attempt, each of the 3 million already-loaded rows generated an error, but import kept going. It took 4 days to get past the errors! Recall it only took 6 hours to load the 3 million rows, but it took 96 hours to resume the import and get back to that same point. It would have been much better to truncate the table and let import reload things rather than generate errors.
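    If you do experiment with the multiple-propagation idea, a second, manually created queue-to-queue propagation might look something like the sketch below. All the queue and link names are placeholders, and you would still need extra queues plus rules that partition the workload between the propagations; treat this as an outline, not a tested recipe.
    begin
      dbms_propagation_adm.create_propagation(
        propagation_name   => 'PROP_2',
        source_queue       => 'STRMADMIN.CAPTURE_Q2',
        destination_queue  => 'STRMADMIN.APPLY_Q2',
        destination_dblink => 'REMOTE_DB',
        queue_to_queue     => true);
    end;
    /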

  • Oracle Streams 10gR2 Schedule Propagation or Application

    Hi all,
    Is there a way to schedule the propagation and apply processes when configuring Oracle Streams? I mean, I don't want to run the replication online, because I have other objects outside Oracle that need to be replicated along with the db to keep my (read-only) application in sync.
    So, where can I do that?
    BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name => 'shm',
    streams_type => 'apply',
    streams_name => 'apply_from_db1',
    queue_name => 'strmadmin.from_db1',
    include_dml => true,
    include_ddl => true,
    source_database => 'db1.world',
    inclusion_rule => true);
    END;
    BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name => 'shm',
    streams_name => 'db1_to_db2',
    source_queue_name => 'strmadmin.captured_db1',
    destination_queue_name => '[email protected]',
    include_dml => true,
    include_ddl => true,
    source_database => 'db1.world',
    inclusion_rule => true,
    queue_to_queue => true);
    END;
    /

    Hello, am I not being clear with my question? Or maybe I'm doing everything the wrong way regarding the Streams propagation and apply processes?
    I looked into all the documentation available and I can't find how to schedule the processes... What part am I misunderstanding?
    As long as I have other non-Oracle objects (file system objects, say some ECM file system objects) to replicate along with the data schema, I can't (automatically) replicate all the changes that occur in the source table's schema to the destination database schema. So I'm replicating the ECM file system objects with another external tool, which has Windows schedule integration... but how can I schedule the propagation and/or apply processes in a 2-way Streams environment?
    Please, I need a hint from somebody.
    Thanks.
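    One approach, sketched below and certainly not the only way: leave capture running, keep the propagation schedule disabled most of the time, and use DBMS_SCHEDULER jobs to open and close a replication window by enabling/disabling the propagation schedule. The queue names come from your script above; the destination link name and the times are placeholders.
    begin
      dbms_scheduler.create_job(
        job_name        => 'OPEN_PROP_WINDOW',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'begin dbms_aqadm.enable_propagation_schedule(
                              queue_name        => ''STRMADMIN.CAPTURED_DB1'',
                              destination       => ''DB2.WORLD'',
                              destination_queue => ''STRMADMIN.FROM_DB1''); end;',
        repeat_interval => 'FREQ=DAILY;BYHOUR=1',
        enabled         => true);
      dbms_scheduler.create_job(
        job_name        => 'CLOSE_PROP_WINDOW',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'begin dbms_aqadm.disable_propagation_schedule(
                              queue_name        => ''STRMADMIN.CAPTURED_DB1'',
                              destination       => ''DB2.WORLD'',
                              destination_queue => ''STRMADMIN.FROM_DB1''); end;',
        repeat_interval => 'FREQ=DAILY;BYHOUR=5',
        enabled         => true);
    end;
    /
    On the destination side you can open and close the same window with DBMS_APPLY_ADM.START_APPLY and STOP_APPLY in the same fashion.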

  • Sample scripts for streams setting source 9i-- destination10g

    I need to set up Streams from 9i to 10g (both on Windows).
    I tried it out successfully from 9i to 9i (using OEM, with the sample provided by Oracle) and
    from 10g to 10g (http://www.oracle.com/technology/obe/obe10gdb/integrate/streams/streams.htm#t6, which uses scripts).
    I need to implement Streams from 9i to 10g. The problem is:
    the packages used in the 10g demo are not available in 9i.
    Do we have a sample script to implement Streams from 9i to 10g?

    Thanks Arvind, that would be really great. I am trying to put together a demo, using the demo scripts on the DEPT table, and have been trying for a month. I moved my 9.2.0.1.0 source to 9.2.0.7, then applied patchset 3 for 9.2.0.7 to fix the bug, as I learned there was a bug with Streams between 9i and 10g:
    bug no: 4285404 - PROPAGATION FROM 9.2 AND 10.1 TO 10.2
    Note: I executed the same script with 4.2.2 and not 4.2.1 (it is optional), because when I tried to export, then import, and then delete the supplemental log group from the target, it said "trying to drop non existent group".
    Also, when I query the capture process it shows LCRs getting queued, and propagation shows data being propagated from the source; apply has no errors but shows 0 for transactions assigned as well as applied.
    It looks like the destination queue is not getting populated even though propagation at the source is successful.
    Please find
    1.scripts
    2.init parameters of 9i (source)
    3. init parameters of 10g (target)
    SCRIPT:
    2.1 Create Streams Administrator :
    connect SYS/password as SYSDBA
    create user STRMADMIN identified by STRMADMIN;
    2.2 Grant the necessary privileges to the Streams Administrator :
    GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE to STRMADMIN;
    GRANT SELECT ANY DICTIONARY TO STRMADMIN;
    GRANT EXECUTE ON DBMS_AQ TO STRMADMIN;
    GRANT EXECUTE ON DBMS_AQADM TO STRMADMIN;
    GRANT EXECUTE ON DBMS_FLASHBACK TO STRMADMIN;
    GRANT EXECUTE ON DBMS_STREAMS_ADM TO STRMADMIN;
    GRANT EXECUTE ON DBMS_CAPTURE_ADM TO STRMADMIN;
    GRANT EXECUTE ON DBMS_APPLY_ADM TO STRMADMIN;
    GRANT EXECUTE ON DBMS_RULE_ADM TO STRMADMIN;
    GRANT EXECUTE ON DBMS_PROPAGATION_ADM TO STRMADMIN;
    BEGIN
    DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => 'ENQUEUE_ANY',
    grantee => 'STRMADMIN',
    admin_option => FALSE);
    END;
    BEGIN
    DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => 'DEQUEUE_ANY',
    grantee => 'STRMADMIN',
    admin_option => FALSE);
    END;
    BEGIN
    DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => 'MANAGE_ANY',
    grantee => 'STRMADMIN',
    admin_option => TRUE);
    END;
    BEGIN
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT_OBJ,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_RULE_OBJ,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    END;
    BEGIN
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_ANY_RULE_SET,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.ALTER_ANY_RULE_SET,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.EXECUTE_ANY_RULE_SET,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_ANY_RULE,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.ALTER_ANY_RULE,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.EXECUTE_ANY_RULE,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    END;
    BEGIN
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.EXECUTE_ANY_EVALUATION_CONTEXT,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    END;
    2.3 Create streams queue :
    connect STRMADMIN/STRMADMIN
    BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'STREAMS_QUEUE_TABLE',
    queue_name => 'STREAMS_QUEUE',
    queue_user => 'STRMADMIN');
    END;
    2.4 Add apply rules for the table at the destination database :
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name => 'SCOTT.DEPT',
    streams_type => 'APPLY',
    streams_name => 'STRMADMIN_APPLY',
    queue_name => 'STRMADMIN.STREAMS_QUEUE',
    include_dml => true,
    include_ddl => true,
    source_database => 'str1');
    END;
    2.5 Specify an 'APPLY USER' at the destination database:
    This is the user who would apply all DML statements and DDL statements.
    The user specified in the APPLY_USER parameter must have the necessary
    privileges to perform DML and DDL changes on the apply objects.
    BEGIN
    DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name => 'STRMADMIN_APPLY',
    apply_user => 'SCOTT');
    END;
    2.6 If you do not wish the apply process to abort for every error that it
    encounters, you can set the below paramter.
    The default value is 'Y' which means that apply process would abort due to
    any error.
    When set to 'N', the apply process will not abort for any error that it
    encounters, but the error details would be logged in DBA_APPLY_ERROR.
    BEGIN
    DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'STRMADMIN_APPLY',
    parameter => 'DISABLE_ON_ERROR',
    value => 'N' );
    END;
    2.7 Start the Apply process :
    BEGIN
    DBMS_APPLY_ADM.START_APPLY(apply_name => 'STRMADMIN_APPLY');
    END;
    Section 3
    Steps to be carried out at the Source Database (V920.IDC.ORACLE.COM)
    3.1 Move LogMiner tables from SYSTEM tablespace:
    By default, all LogMiner tables are created in the SYSTEM tablespace.
    It is a good practice to create an alternate tablespace for the LogMiner
    tables.
    CREATE TABLESPACE LOGMNRTS DATAFILE 'logmnrts.dbf' SIZE 25M AUTOEXTEND ON
    MAXSIZE UNLIMITED;
    BEGIN
    DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS');
    END;
    3.2 Turn on supplemental logging for DEPT table :
    connect SYS/password as SYSDBA
    ALTER TABLE scott.dept ADD SUPPLEMENTAL LOG GROUP dept_pk
    (deptno) ALWAYS;
    3.3 Create Streams Administrator and Grant the necessary privileges :
    Repeat steps 2.1 and 2.2 for creating the user and granting the required
    privileges.
    3.4 Create a database link to the destination database :
    connect STRMADMIN/STRMADMIN
    CREATE DATABASE LINK str2 connect to
    STRMADMIN identified by STRMADMIN using 'str2' ;
    //db link working fine.I tested it
    3.5 Create streams queue:
    BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_name => 'STREAMS_QUEUE',
    queue_table =>'STREAMS_QUEUE_TABLE',
    queue_user => 'STRMADMIN');
    END;
    3.6 Add capture rules for the table at the source database:
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name => 'SCOTT.DEPT',
    streams_type => 'CAPTURE',
    streams_name => 'STRMADMIN_CAPTURE',
    queue_name => 'STRMADMIN.STREAMS_QUEUE',
    include_dml => true,
    include_ddl => true,
    source_database => 'str1');
    END;
    3.7 Add propagation rules for the table at the source database.
    This step will also create a propagation job to the destination database.
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name => 'SCOTT.DEPT',
    streams_name => 'STRMADMIN_PROPAGATE',
    source_queue_name => 'STRMADMIN.STREAMS_QUEUE',
    destination_queue_name => 'STRMADMIN.STREAMS_QUEUE@str2',
    include_dml => true,
    include_ddl => true,
    source_database => 'str1');
    END;
    Section 4
    Export, import and instantiation of tables from Source to Destination Database
    4.1 If the objects are not present in the destination database, perform an
    export of the objects from the source database and import them into the
    destination database
    Export from the Source Database:
    Specify the OBJECT_CONSISTENT=Y clause on the export command.
    By doing this, an export is performed that is consistent for each
    individual object at a particular system change number (SCN).
    exp USERID=SYSTEM@str1 TABLES=SCOTT.DEPT FILE=tables.dmp
    GRANTS=Y ROWS=Y LOG=exportTables.log OBJECT_CONSISTENT=Y
    INDEXES=Y STATISTICS = NONE
    Import into the Destination Database:
    Specify STREAMS_INSTANTIATION=Y clause in the import command.
    By doing this, the streams metadata is updated with the appropriate
    information in the destination database corresponding to the SCN that
    is recorded in the export file.
    imp USERID=SYSTEM@str2 FULL=Y CONSTRAINTS=Y
    FILE=tables.dmp IGNORE=Y GRANTS=Y ROWS=Y COMMIT=Y LOG=importTables.log
    STREAMS_INSTANTIATION=Y
    4.2 If the objects are already present in the destination database, there are
    2 ways of instantiating the objects at the destination site.
    1. By means of Metadata-only export/import :
    Export from the Source Database by specifying ROWS=N
    exp USERID=SYSTEM@str1 TABLES=SCOTT.DEPT FILE=tables.dmp
    ROWS=N LOG=exportTables.log OBJECT_CONSISTENT=Y
    Import into the destination database using IGNORE=Y
    imp USERID=SYSTEM@str2 FULL=Y FILE=tables.dmp IGNORE=Y
    LOG=importTables.log STREAMS_INSTANTIATION=Y
    2. By manually instantiating the objects
    Get the Instantiation SCN at the source database:
    connect STRMADMIN/STRMADMIN@source
    set serveroutput on
    DECLARE
    iscn NUMBER; -- Variable to hold instantiation SCN value
    BEGIN
    iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
    DBMS_OUTPUT.PUT_LINE ('Instantiation SCN is: ' || iscn);
    END;
    Instantiate the objects at the destination database with this SCN value.
    The SET_TABLE_INSTANTIATION_SCN procedure controls which LCRs for a table
    are to be applied by the apply process.
    If the commit SCN of an LCR from the source database is less than or
    equal to this instantiation SCN , then the apply process discards the LCR.
    Else, the apply process applies the LCR.
    connect STRMADMIN/STRMADMIN@destination
    BEGIN
    DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name => 'SCOTT.DEPT',
    source_database_name => 'str1',
    instantiation_scn => &iscn);
    END;
    Enter value for iscn:
    <Provide the value of SCN that you got from the source database>
    Finally start the Capture Process:
    connect STRMADMIN/STRMADMIN@source
    BEGIN
    DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'STRMADMIN_CAPTURE');
    END;
    INIT.ora at 9i
    # Copyright (c) 1991, 2001, 2002 by Oracle Corporation
    # Archive
    log_archive_dest_1='LOCATION=D:\oracle\oradata\str1\archive'
    log_archive_format=%t_%s.dbf
    log_archive_start=true
    # Cache and I/O
    db_block_size=8192
    db_cache_size=25165824
    db_file_multiblock_read_count=16
    # Cursors and Library Cache
    open_cursors=300
    # Database Identification
    db_domain=""
    db_name=str1
    # Diagnostics and Statistics
    background_dump_dest=D:\oracle\admin\str1\bdump
    core_dump_dest=D:\oracle\admin\str1\cdump
    timed_statistics=TRUE
    user_dump_dest=D:\oracle\admin\str1\udump
    # File Configuration
    control_files=("D:\oracle\oradata\str1\CONTROL01.CTL", "D:\oracle\oradata\str1\CONTROL02.CTL", "D:\oracle\oradata\str1\CONTROL03.CTL")
    # Instance Identification
    instance_name=str1
    # Job Queues
    job_queue_processes=10
    # MTS
    dispatchers="(PROTOCOL=TCP) (SERVICE=str1XDB)"
    # Miscellaneous
    aq_tm_processes=1
    compatible=9.2.0.0.0
    # Optimizer
    hash_join_enabled=TRUE
    query_rewrite_enabled=FALSE
    star_transformation_enabled=FALSE
    # Pools
    java_pool_size=33554432
    large_pool_size=8388608
    shared_pool_size=100663296
    # Processes and Sessions
    processes=150
    # Redo Log and Recovery
    fast_start_mttr_target=300
    # Security and Auditing
    remote_login_passwordfile=EXCLUSIVE
    # Sort, Hash Joins, Bitmap Indexes
    pga_aggregate_target=25165824
    sort_area_size=524288
    # System Managed Undo and Rollback Segments
    undo_management=AUTO
    undo_retention=10800
    undo_tablespace=UNDOTBS1
    firstspare_parameter=50
    jobqueue_interval=1
    aq_tm_processes=1
    transaction_auditing=TRUE
    global_names=TRUE
    logmnr_max_persistent_sessions=5
    log_parallelism=1
    parallel_max_servers=2
    open_links=5
    INIT.ora at 10g (target)
    # Copyright (c) 1991, 2001, 2002 by Oracle Corporation
    # Archive
    log_archive_format=ARC%S_%R.%T
    # Cache and I/O
    db_block_size=8192
    db_cache_size=25165824
    db_file_multiblock_read_count=16
    # Cursors and Library Cache
    open_cursors=300
    # Database Identification
    db_domain=""
    db_name=str2
    # Diagnostics and Statistics
    background_dump_dest=D:\oracle\product\10.1.0\admin\str2\bdump
    core_dump_dest=D:\oracle\product\10.1.0\admin\str2\cdump
    user_dump_dest=D:\oracle\product\10.1.0\admin\str2\udump
    # File Configuration
    control_files=("D:\oracle\product\10.1.0\oradata\str2\control01.ctl", "D:\oracle\product\10.1.0\oradata\str2\control02.ctl", "D:\oracle\product\10.1.0\oradata\str2\control03.ctl")
    db_recovery_file_dest=D:\oracle\product\10.1.0\flash_recovery_area
    db_recovery_file_dest_size=2147483648
    # Job Queues
    job_queue_processes=10
    # Miscellaneous
    compatible=10.1.0.2.0
    # Pools
    java_pool_size=50331648
    large_pool_size=8388608
    shared_pool_size=83886080
    # Processes and Sessions
    processes=150
    sessions=4
    # Security and Auditing
    remote_login_passwordfile=EXCLUSIVE
    # Shared Server
    dispatchers="(PROTOCOL=TCP) (SERVICE=str2XDB)"
    # Sort, Hash Joins, Bitmap Indexes
    pga_aggregate_target=25165824
    sort_area_size=65536
    # System Managed Undo and Rollback Segments
    undo_management=AUTO
    undo_tablespace=UNDOTBS1
    sga_target=600000000
    parallel_max_servers=2
    global_names=TRUE
    open_links=4
    logmnr_max_persistent_sessions=4
    REMOTE_ARCHIVE_ENABLE=TRUE
    streams_pool_size=300000000
    undo_retention=1000
    thanks a lot...

  • Applying subset rules in Oracle streams

    Hi All,
    I am working on configuring Streams. I am able to replicate a table both unidirectionally and bidirectionally. I am facing a problem with add_subset_rules: the capture, propagation and apply processes are not showing errors. The following is the script I am using to configure add_subset_rules. Please guide me on what is wrong and how to go about it.
    The Global Database Name of the Source Database is POCSRC. The Global Database Name of the Destination Database is POCDESTN. In the example setup, the DEPT table belonging to the SCOTT schema has been used for demonstration purposes.
    Section 1 - Initialization Parameters Relevant to Streams
    •     COMPATIBLE: 9.2.0.
    •     GLOBAL_NAMES: TRUE
    •     JOB_QUEUE_PROCESSES : 2
    •     AQ_TM_PROCESSES : 4
    •     LOGMNR_MAX_PERSISTENT_SESSIONS : 4
    •     LOG_PARALLELISM: 1
    •     PARALLEL_MAX_SERVERS:4
    •     SHARED_POOL_SIZE: 350 MB
    •     OPEN_LINKS : 4
    •     Database running in ARCHIVELOG mode.
    Steps to be carried out at the Destination Database (POCDESTN.)
    1. Create Streams Administrator :
    connect SYS/pocdestn@pocdestn as SYSDBA
    create user STRMADMIN identified by STRMADMIN default tablespace users;
    2. Grant the necessary privileges to the Streams Administrator :
    GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE to STRMADMIN;
    GRANT SELECT ANY DICTIONARY TO STRMADMIN;
    GRANT EXECUTE ON DBMS_AQ TO STRMADMIN;
    GRANT EXECUTE ON DBMS_AQADM TO STRMADMIN;
    GRANT EXECUTE ON DBMS_FLASHBACK TO STRMADMIN;
    GRANT EXECUTE ON DBMS_STREAMS_ADM TO STRMADMIN;
    GRANT EXECUTE ON DBMS_CAPTURE_ADM TO STRMADMIN;
    GRANT EXECUTE ON DBMS_APPLY_ADM TO STRMADMIN;
    GRANT EXECUTE ON DBMS_RULE_ADM TO STRMADMIN;
    GRANT EXECUTE ON DBMS_PROPAGATION_ADM TO STRMADMIN;
    BEGIN
    DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => 'ENQUEUE_ANY',
    grantee => 'STRMADMIN',
    admin_option => FALSE);
    END;
    BEGIN
    DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => 'DEQUEUE_ANY',
    grantee => 'STRMADMIN',
    admin_option => FALSE);
    END;
    BEGIN
    DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => 'MANAGE_ANY',
    grantee => 'STRMADMIN',
    admin_option => TRUE);
    END;
    BEGIN
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT_OBJ,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_RULE_OBJ,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    END;
    BEGIN
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_ANY_RULE_SET,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.ALTER_ANY_RULE_SET,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.EXECUTE_ANY_RULE_SET,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_ANY_RULE,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.ALTER_ANY_RULE,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.EXECUTE_ANY_RULE,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    END;
    BEGIN
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.EXECUTE_ANY_EVALUATION_CONTEXT,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    END;
    3. Create streams queue :
    connect STRMADMIN/STRMADMIN@POCDESTN
    BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'STREAMS_QUEUE_TABLE',
    queue_name => 'STREAMS_QUEUE',
    queue_user => 'STRMADMIN');
    END;
    4. Add apply rules for the table at the destination database :
    BEGIN
    DBMS_STREAMS_ADM.ADD_SUBSET_RULES(
    TABLE_NAME=>'SCOTT.EMP',
    STREAMS_TYPE=>'APPLY',
    STREAMS_NAME=>'STRMADMIN_APPLY',
    QUEUE_NAME=>'STRMADMIN.STREAMS_QUEUE',
    DML_CONDITION=>'empno =7521',
    INCLUDE_TAGGED_LCR=>FALSE,
    SOURCE_DATABASE=>'POCSRC');
    END;
    5. Specify an 'APPLY USER' at the destination database:
    BEGIN
    DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name => 'STRMADMIN_APPLY',
    apply_user => 'SCOTT');
    END;
    6. BEGIN
    DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'STRMADMIN_APPLY',
    parameter => 'DISABLE_ON_ERROR',
    value => 'N' );
    END;
    7. Start the Apply process :
    BEGIN
    DBMS_APPLY_ADM.START_APPLY(apply_name => 'STRMADMIN_APPLY');
    END;
    Section 3 - Steps to be carried out at the Source Database (POCSRC.)
    1. Move LogMiner tables from SYSTEM tablespace:
    By default, all LogMiner tables are created in the SYSTEM tablespace. It is a good practice to create an alternate tablespace for the LogMiner tables.
    CREATE TABLESPACE LOGMNRTS DATAFILE 'd:\oracle\oradata\POCSRC\logmnrts.dbf' SIZE 25M AUTOEXTEND ON MAXSIZE UNLIMITED;
    BEGIN
    DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS');
    END;
    2. Turn on supplemental logging for the EMP table :
    connect SYS/password as SYSDBA
    ALTER TABLE scott.emp ADD SUPPLEMENTAL LOG GROUP emp_pk
    (empno) ALWAYS;
    3. Create Streams Administrator and Grant the necessary privileges :
    3.1 Create Streams Administrator :
    connect SYS/password as SYSDBA
    create user STRMADMIN identified by STRMADMIN default tablespace users;
    3.2 Grant the necessary privileges to the Streams Administrator :
    GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE to STRMADMIN;
    GRANT SELECT ANY DICTIONARY TO STRMADMIN;
    GRANT EXECUTE ON DBMS_AQ TO STRMADMIN;
    GRANT EXECUTE ON DBMS_AQADM TO STRMADMIN;
    GRANT EXECUTE ON DBMS_FLASHBACK TO STRMADMIN;
    GRANT EXECUTE ON DBMS_STREAMS_ADM TO STRMADMIN;
    GRANT EXECUTE ON DBMS_CAPTURE_ADM TO STRMADMIN;
    GRANT EXECUTE ON DBMS_APPLY_ADM TO STRMADMIN;
    GRANT EXECUTE ON DBMS_RULE_ADM TO STRMADMIN;
    GRANT EXECUTE ON DBMS_PROPAGATION_ADM TO STRMADMIN;
    BEGIN
    DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => 'ENQUEUE_ANY',
    grantee => 'STRMADMIN',
    admin_option => FALSE);
    END;
    BEGIN
    DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => 'DEQUEUE_ANY',
    grantee => 'STRMADMIN',
    admin_option => FALSE);
    END;
    BEGIN
    DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => 'MANAGE_ANY',
    grantee => 'STRMADMIN',
    admin_option => TRUE);
    END;
    BEGIN
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT_OBJ,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_RULE_OBJ,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    END;
    BEGIN
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_ANY_RULE_SET,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.ALTER_ANY_RULE_SET,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.EXECUTE_ANY_RULE_SET,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_ANY_RULE,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.ALTER_ANY_RULE,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.EXECUTE_ANY_RULE,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    END;
    BEGIN
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.EXECUTE_ANY_EVALUATION_CONTEXT,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    END;
    4. Create a database link to the destination database :
    connect STRMADMIN/STRMADMIN@pocsrc
    CREATE DATABASE LINK POCDESTN connect to
    STRMADMIN identified by STRMADMIN using 'POCDESTN';
Verify that the database link works by querying the destination database, e.g.:
select * from global_name@POCDESTN;
    5. Create streams queue:
    BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_name => 'STREAMS_QUEUE',
    queue_table =>'STREAMS_QUEUE_TABLE',
    queue_user => 'STRMADMIN');
    END;
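The queue and its queue table can be checked afterwards with DBA_QUEUES. A minimal check, assuming the names used above:
SELECT owner, name, queue_table, enqueue_enabled, dequeue_enabled
FROM dba_queues
WHERE owner = 'STRMADMIN'
AND name = 'STREAMS_QUEUE';
-- Both ENQUEUE_ENABLED and DEQUEUE_ENABLED should show YES.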
    6. Add capture rules for the table at the source database:
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name => 'SCOTT.EMP',
    streams_type => 'CAPTURE',
    streams_name => 'STRMADMIN_CAPTURE',
    queue_name => 'STRMADMIN.STREAMS_QUEUE',
    include_dml => true,
    include_ddl => true,
    source_database => 'POCSRC');
    END;
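The rules that ADD_TABLE_RULES generated can be inspected via DBA_STREAMS_TABLE_RULES (a quick sketch; one DML and one DDL rule should show up for the capture process):
SELECT streams_name, streams_type, table_owner, table_name, rule_name
FROM dba_streams_table_rules
WHERE streams_name = 'STRMADMIN_CAPTURE';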
    7. Add propagation rules for the table at the source database.
    This step will also create a propagation job to the destination database.
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
table_name => 'SCOTT.EMP',
    streams_name => 'STRMADMIN_PROPAGATE',
    source_queue_name => 'STRMADMIN.STREAMS_QUEUE',
    destination_queue_name => 'STRMADMIN.STREAMS_QUEUE@POCDESTN',
    include_dml => true,
    include_ddl => true,
    source_database => 'POCSRC');
    END;
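The propagation created here is the job whose growing TOTAL_TIME in DBA_JOBS started this thread, so it is worth knowing where its schedule statistics live. A rough monitoring sketch using DBA_PROPAGATION and DBA_QUEUE_SCHEDULES:
SELECT propagation_name, destination_dblink
FROM dba_propagation;
SELECT qname, destination, total_time, total_number, failures, last_error_msg
FROM dba_queue_schedules
WHERE qname = 'STREAMS_QUEUE';
-- A steadily increasing TOTAL_TIME on its own is normal for a long-running
-- schedule; FAILURES > 0 or a populated LAST_ERROR_MSG is what indicates trouble.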
    Section 4 - Export, import and instantiation of tables from Source to Destination Database
1. If the objects are not present in the destination database, perform an export of the objects from the source database and import them into the destination database.
    Export from the Source Database:
    Specify the OBJECT_CONSISTENT=Y clause on the export command.
    By doing this, an export is performed that is consistent for each individual object at a particular system change number (SCN).
exp USERID=SYSTEM/POCSRC@POCSRC TABLES=SCOTT.TEST FILE=TEST.dmp GRANTS=Y ROWS=Y LOG=exportTEST.log OBJECT_CONSISTENT=Y INDEXES=Y STATISTICS=NONE
    Import into the Destination Database:
    Specify STREAMS_INSTANTIATION=Y clause in the import command.
    By doing this, the streams metadata is updated with the appropriate information in the destination database corresponding to the SCN that is recorded in the export file.
imp USERID=SYSTEM/POCDESTN@POCDESTN FULL=Y CONSTRAINTS=Y FILE=TEST.dmp IGNORE=Y GRANTS=Y ROWS=Y COMMIT=Y LOG=importTEST.log STREAMS_INSTANTIATION=Y
2. If the objects are already present in the destination database, check that they are also consistent at the data level; otherwise the apply process may fail with error ORA-1403 when applying DML to a row that is not consistent with the source (a sketch for inspecting and retrying such failed transactions follows the instantiation steps below). There are 2 ways of instantiating the objects at the destination site.
    1. By means of Metadata-only export/import :
Export from the Source Database by specifying ROWS=N. Note that each run of exp overwrites its FILE target, so export all three tables (DEPT, EMP and TEST) in a single command:
exp USERID=SYSTEM/POCSRC@POCSRC TABLES=(SCOTT.DEPT,SCOTT.EMP,SCOTT.TEST)
FILE=tables.dmp ROWS=N LOG=exportTables.log OBJECT_CONSISTENT=Y
    Import into the destination database using IGNORE=Y
    imp USERID=SYSTEM/POCDESTN@POCDESTN FULL=Y FILE=tables.dmp IGNORE=Y
    LOG=importTables.log STREAMS_INSTANTIATION=Y
2. By manually instantiating the objects
    Get the Instantiation SCN at the source database:
    connect STRMADMIN/STRMADMIN@POCSRC
    set serveroutput on
    DECLARE
    iscn NUMBER; -- Variable to hold instantiation SCN value
    BEGIN
    iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
    DBMS_OUTPUT.PUT_LINE ('Instantiation SCN is: ' || iscn);
    END;
    Instantiate the objects at the destination database with this SCN value.
The SET_TABLE_INSTANTIATION_SCN procedure controls which LCRs for a table are applied by the apply process. If the commit SCN of an LCR from the source database is less than or equal to this instantiation SCN, the apply process discards the LCR; otherwise, it applies the LCR.
    connect STRMADMIN/STRMADMIN@POCDESTN
    BEGIN
    DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name => 'SCOTT.DEPT',
    source_database_name => 'POCSRC',
    instantiation_scn => &iscn);
    END;
    connect STRMADMIN/STRMADMIN@POCDESTN
    BEGIN
    DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name => 'SCOTT.EMP',
    source_database_name => 'POCSRC',
    instantiation_scn => &iscn);
    END;
    Enter value for iscn:
    <Provide the value of SCN that you got from the source database>
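Before starting capture, the recorded instantiation SCNs can be verified, and later any ORA-1403 (or other) apply errors inspected and retried via DBA_APPLY_ERROR and DBMS_APPLY_ADM.EXECUTE_ERROR. A hedged sketch; the transaction id '1.2.345' is a hypothetical value taken from the error query:
connect STRMADMIN/STRMADMIN@POCDESTN
SELECT source_object_owner, source_object_name, instantiation_scn
FROM dba_apply_instantiated_objects
WHERE source_database = 'POCSRC';
SELECT apply_name, local_transaction_id, error_message
FROM dba_apply_error;
BEGIN
-- Retry a failed transaction after fixing the offending row data;
-- '1.2.345' is a hypothetical id from the query above.
DBMS_APPLY_ADM.EXECUTE_ERROR(local_transaction_id => '1.2.345');
END;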
    Finally start the Capture Process:
    connect STRMADMIN/STRMADMIN@POCSRC
    BEGIN
DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'STRMADMIN_CAPTURE');
    END;
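Once both ends are running, overall health can be watched from a few dictionary views. A minimal sketch; run the first query on POCSRC and the second on POCDESTN:
SELECT capture_name, status, captured_scn, applied_scn
FROM dba_capture;
SELECT apply_name, status
FROM dba_apply;
-- Both STATUS columns should read ENABLED; a growing gap between
-- CAPTURED_SCN and APPLIED_SCN suggests the apply side is lagging.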
    Please mail me [email protected]
    Thanks.
    Raghunath

What you are trying to do that you cannot do is unclear. You wrote:
"I am facing problem in add_subset rules as capture,propagation & apply process is not showing error."
Personally, I don't consider it a problem when my code doesn't raise an error. So what is not working the way you think it should? Also, what version are you on (to 4 decimal places)?
