DML triggers behaviour on secondary replica

Quick Q: Say the database on the primary replica has DML triggers on tables, and those triggers in turn write to other tables (e.g. to an audit-log history table) when a DML action is performed on the principal table. What is the behavior on the secondary replica database?
Mahesh

I understand that part about the primary replica and the transaction log, but what is the behavior of triggers on the secondary replica - are they triggered or disabled?
They aren't triggered; replaying the log doesn't cause anything like that to happen. There is nothing to cause them to fire: redo doesn't re-execute the statement that was originally run, it just replays the log. Individual statements are not captured in the log, only the changes they made are reflected.
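A quick way to see why on a readable secondary (a sketch; database and table names are hypothetical): every secondary database is read-only, so no user DML - and therefore no DML trigger - can run there, and redo applies changes below the statement level.

SELECT DB_NAME(drs.database_id) AS db_name,
       ars.role_desc                        -- shows SECONDARY on that instance
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.dm_hadr_availability_replica_states AS ars
    ON drs.replica_id = ars.replica_id
WHERE drs.is_local = 1;

-- Any direct DML fails before a trigger could even be considered:
-- INSERT INTO dbo.PrincipalTable (Col1) VALUES ('x');
-- Msg 3906: Failed to update database ... because the database is read-only.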
-Sean

Similar Messages

  • Help on DML Triggers On Schema

    All,
    We are in the process of implementing audit tables for a specific schema (user1, ~120 tables).
    I was able to create DDL triggers on the schema with ... DDL ON SCHEMA,
    but for tracking DML operations on each table by different users in the user1 schema, do I need to create an individual trigger for each table, i.e.
    BEFORE INSERT OR UPDATE OR DELETE ON <TABLE_NAME>?
    Is this the only way?
    Any ideas?
    Regards,
    ~Ora

    Hi,
    As you said in your first post, for DDL operations you can use a SCHEMA-level trigger, but for DML operations you will have to stick with one trigger per table.
    Here is a piece of code to generate the triggers for you.
    drop type line_tt;
    create or replace type line_t as object (x varchar2(4000));
    /
    create or replace type line_tt as table of line_t;
    /
    create or replace function generate_audit_triggers return line_tt pipelined
    is
           cursor my_tables is
                  select user as owner, table_name from user_tables where temporary = 'N';
           cursor my_table_cols(tablename in varchar2) is
                  select column_name from user_tab_columns where table_name = tablename order by column_name;
           sqlstatement varchar2(4000);      
           wherestatement varchar2(4000);
    begin
         for r_table in my_tables loop
             -- generate code for insert trigger
             pipe row(line_t('create or replace trigger ' || r_table.owner || '.' || substr('SPYI_' || r_table.table_name, 1, 30)));
             pipe row(line_t('before insert on ' || r_table.owner || '.' || r_table.table_name));
             pipe row(line_t('for each row'));
             pipe row(line_t('begin'));
             pipe row(line_t('insert into AUDIT_DATA(sqlstatement) values('));
             sqlstatement := '''insert into ' || r_table.owner || '.' || r_table.table_name || '(';
             for r_column in my_table_cols(r_table.table_name) loop
                 sqlstatement := sqlstatement || r_column.column_name;
                 sqlstatement := sqlstatement || ',';
             end loop;
             sqlstatement := substr(sqlstatement, 1, length(sqlstatement) - 1);
             sqlstatement := sqlstatement || ') values ('''''' || ';
             for r_column in my_table_cols(r_table.table_name) loop
                 sqlstatement := sqlstatement || ':new.' || r_column.column_name;
                 sqlstatement := sqlstatement || ' || '''''','''''' || ';
             end loop;
             sqlstatement := substr(sqlstatement, 1, length(sqlstatement) - 10);
             sqlstatement := sqlstatement || ''''');''';
             pipe row(line_t(sqlstatement));
             pipe row(line_t(');'));
             pipe row(line_t('end;'));
             pipe row(line_t('/'));
             -- generate code for update trigger
             pipe row(line_t('create or replace trigger ' || r_table.owner || '.' || substr('SPYU_' || r_table.table_name, 1, 30)));
             pipe row(line_t('before update on ' || r_table.owner || '.' || r_table.table_name));
             pipe row(line_t('for each row'));
             pipe row(line_t('begin'));
             sqlstatement := 'if (';
             for r_column in my_table_cols(r_table.table_name) loop
                 sqlstatement := sqlstatement || '''a''|| ' || ':old.' || r_column.column_name || ' <> ''a''|| :new.' || r_column.column_name || ' or ';
             end loop;
             sqlstatement := substr(sqlstatement, 1, length(sqlstatement) - 4);
             sqlstatement := sqlstatement || ') then';
             pipe row(line_t(sqlstatement));
             pipe row(line_t('insert into AUDIT_DATA(sqlstatement) values('));
             sqlstatement := '''update ' || r_table.owner || '.' || r_table.table_name || ' set ';           
             wherestatement := ' where ';
             for r_column in my_table_cols(r_table.table_name) loop
                 sqlstatement := sqlstatement || r_column.column_name || '=''''' || ''' || :new.' || r_column.column_name || ' || '''''',';
                 wherestatement := wherestatement || '''''a''''||' || r_column.column_name || '=''''a''''||''''' || ''' || :old.' || r_column.column_name || ' || '''''' and ';
             end loop;
             sqlstatement := substr(sqlstatement, 1, length(sqlstatement) - 1);
             wherestatement := substr(wherestatement, 1, length(wherestatement) - 5);
             sqlstatement := sqlstatement || wherestatement || ';''';
             pipe row(line_t(sqlstatement));
             pipe row(line_t(');'));
             pipe row(line_t('end if;'));
             pipe row(line_t('end;'));
             pipe row(line_t('/'));
         end loop;
         return;
    end;
    /
    show err
    drop table audit_data;
    create table audit_data (
           sqlstatement varchar2(4000)
    );
    set head off
    set linesize 500
    set echo off
    spool tmp.sql
    select x from table(generate_audit_triggers);
    spool off

    Hope this helps,
    Francois

  • DPM 2012 R2, Scale-out and secondary replicas

    Hi all,
    TechNet says scale-out protection for VMs and DPM chaining isn't supported - is this still the case for 2012 R2?
    Our scenario: We have numerous clusters with a ~dozen Hyper-V (2012/2012R2) nodes each, and lots of running VMs. We need to use DPM scale-out to have multiple DPM servers protect the
    clusters, but also need off-site replicas for the data sources.
    Cheers

    Hi,
    There is no change in DPM 2012 R2 with respect to secondary protection for scale-out Hyper-V. Consider backing up to Azure as an offsite solution.
    Backing Up DPM using Windows Azure Backup
    http://technet.microsoft.com/en-us/library/jj728752.aspx
    Regards, Mike J. [MSFT]

  • Help with triggering behaviours

    Hi, I've just started using Java3D to build a simple networked virtual world for a university project. At the moment, in the client, each person in the world has their own transform group, and when a message arrives from the server saying someone has moved, the TransformGroup methods getTransform() and setTransform() are used to move them.
    Obviously this is not ideal, as it looks jerky; I would prefer to use behaviours to move people so I could use the built-in interpolators and animation.
    My question is: how can I trigger the relevant behaviour to wake up when a message is received? All of the standard WakeupCriterion classes take events from the scene or the user, so they are no good to me. Will I have to write my own WakeupCriterion class, or is there an easier way? If I do have to write my own, could anyone help me, as I'm not sure what the methods allElements() and triggeredElements() are supposed to do.
    Thanks in advance :-)

    Actually, I am writing such a program too!
    How can you make all the clients view the same scene? Or does each client create its own separately?

  • DML operations on a replicated object when it is in a QUIESCED state

    Hi,
    How do we do DML operations on a replicated object (multi-master replication) when the object is in a QUIESCED state?
    My intention is to bring replication into a QUIESCED state, do some DML, then bring replication back up and allow it to sync the databases.

    Hi Anita,
    Thank you so much for patiently explaining the issues with replication to me.
    Now, let me explain my objective. We are about to implement Oracle 8.1.6 multi-master replication. As a fallback mechanism, management is concerned that if anything goes wrong in replication, we can keep a showstopper from blocking normal transactions until the issue gets settled.
    To achieve this objective, we are doing some investigation. As I am quite new to Oracle replication, I thought there was no way I could do this in a very short window.
    I tried to stop replication using the suggested method and it worked well, meaning I was able to do the DML transactions. The problem is when we decide to bring the system back into replication - how do we go about doing that? I thought the DBMS package that resumes replication would sync both sides, but that is not happening.
    I referred to the Oracle documentation on replication and was quite eager to use the DBMS_RECTIFIER_DIFF.DIFFERENCES and DBMS_RECTIFIER_DIFF.RECTIFY procedures.
    But I am yet to test this. If it works out for a specific table, my plan is to deploy a mechanism where, before I restart replication, I execute these packages for all the replicated tables.
    Hope you now understand what I intend to do, and please advise me if I am on the right track to reaching my goal.
    Thanks in advance.
    Senthil.
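    For reference, the quiesce / rectify / resume cycle described above might look like the sketch below (group, schema, table, and site names are hypothetical, and the DBMS_RECTIFIER_DIFF parameter names are as documented for Advanced Replication - verify against your release before use):

    begin
       -- Quiesce the master group so no new deferred transactions are queued:
       dbms_repcat.suspend_master_activity(gname => 'REP_GRP');
    end;
    /
    -- ... perform the out-of-band DML while quiesced ...
    begin
       -- Record differences between the two masters into missing-rows tables:
       dbms_rectifier_diff.differences(
         sname1 => 'SCOTT', oname1 => 'EMP', reference_site  => 'SITEA.WORLD',
         sname2 => 'SCOTT', oname2 => 'EMP', comparison_site => 'SITEB.WORLD',
         where_clause => null, column_list => '*',
         missing_rows_sname  => 'SCOTT',
         missing_rows_oname1 => 'MISSING_ROWS_EMP',
         missing_rows_oname2 => 'MISSING_ROWS_EMP_LOC',
         missing_rows_site   => 'SITEA.WORLD',
         max_missing => 500, commit_rows => 100);
       -- Make the comparison site converge on the reference site's data:
       dbms_rectifier_diff.rectify(
         sname1 => 'SCOTT', oname1 => 'EMP', reference_site  => 'SITEA.WORLD',
         sname2 => 'SCOTT', oname2 => 'EMP', comparison_site => 'SITEB.WORLD',
         column_list => '*',
         missing_rows_sname  => 'SCOTT',
         missing_rows_oname1 => 'MISSING_ROWS_EMP',
         missing_rows_oname2 => 'MISSING_ROWS_EMP_LOC',
         missing_rows_site   => 'SITEA.WORLD',
         commit_rows => 100);
       -- Resume normal replication for the group:
       dbms_repcat.resume_master_activity(gname => 'REP_GRP');
    end;
    /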

  • Golden Gate - DML statements are not replicated to target database

    Hi,
    Testing Environment
    Source:
    OS: RHEL 4.6, Database: 11gR2, Golden Gate 10.4, ASM
    extract ext1
    --connection to database
    userid ggate, password qwerty
    --hostname and port for trail
    rmthost win2003, mgrport 7800
    --path and name for trail
    rmttrail C:\app\admin\GOLDENGATE\dirdat\lt
    EXTTRAIL /u01/oracle/goldengate/dirdat/lt
    --TRANLOGOPTIONS ASMUSER SYS@ASM, ASMPASSWORD sys ALTARCHIVELOGDEST /u03/app/arch/ORCL/archivelog
    --DDL support
    ddl include mapped objname sender.*;
    --DML
    table sender.*;
    Target:
    OS: Windows 2003, Database: 11gR2, Golden Gate 10.4
    --replicat group
    replicat rep1
    --source and target definitions
    ASSUMETARGETDEFS
    --target database login
    userid ggate, password ggate
    --file for discarded transactions
    discardfile C:\app\admin\GOLDENGATE\discard\rep1_disc.txt, append, megabytes 10
    --ddl support
    DDL
    --specifying table mapping
    map sender.*, target receiver.*;
    I've successfully set up the Oracle GoldenGate test environment as above. DDL statements are replicating successfully to the target database, but DML statements are not being replicated. Please help me solve this problem.
    Regards,
    Edited by: Vihang Astik on Jul 2, 2010 2:33 PM

    Almost OK, but how will you handle the overlap of transactions (captured both by expdp and by Extract) for the new table?
    Metalink doc ID 1332674.1 has the complete steps. Follow the "without HANDLECOLLISIONS" approach.

  • Is it possible to see the DML generated by applying the LCR?

    We've got a fairly strange situation happening at one of our client sites. Our system includes triggers on the target/replica tables to track changes for incremental-refresh purposes. It has been working great (mostly) for years with Streams. However, recently some logic was added to the triggers to NOT record changes if a specific column on the table is being UPDATEd (due to a source application change). So in the UPDATE section of the trigger, the UPDATING('<COLUMN_NAME>') function is used to determine whether that column is actually being updated. This works because in the source application, this column only ever gets updated during one single process that we DO actually want to ignore. Again, this logic worked great internally as well as at almost all client sites. However, at one client this logic is NOT working, which leads to the theory that, for some reason, when applying the actual LCR to the table, Streams is firing off an UPDATE statement that updates ALL columns, regardless of whether the values changed or not - whereas in most situations the evidence suggests that Streams only puts the columns that are in fact different/changed into the SET clause of the UPDATE statement.
    So this begs the question: is there some way to "see" the UPDATE statement generated by Streams when applying the LCR? We've been able to look at the actual LCR contents (all the OLD and NEW values) and can see that this value is, in fact, NOT changing. Yet the UPDATING logic in the trigger seems to indicate that it is still being included in the SET clause for some reason.
    We have tried adding logic to the trigger itself to look at V$SQL, etc., to see the actual UPDATE statement, but that doesn't seem to be working. Any other tips for debugging this one?
    This is 11.2.0.2. Alternatively: are there any known Streams bugs with this release that we should look for patches for?
    Thanks for any/all ideas.
    Cheers,
    Jim
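    For reference, the trigger logic being described is roughly the following (a minimal sketch; table, column, and log names are hypothetical). UPDATING('COL') is true whenever the column appears in the SET clause, even if the new value equals the old one - which is exactly why an apply process that SETs every column would defeat the check:

    create or replace trigger trg_track_changes
    before update on target_table
    for each row
    begin
      -- Skip logging when the column that only the ignorable process touches
      -- is present in the SET clause:
      if not updating('SPECIAL_COL') then
        insert into change_log (pk_val, changed_at)
        values (:new.id, systimestamp);
      end if;
    end;
    /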

    Yes you can see the DML executed by the apply servers. This is documented here:
    http://docs.oracle.com/cd/E11882_01/server.112/e17069/strms_trapply.htm#STRMS1062
    Regards,

  • How to monitor DML activity in a database

    Besides LogMiner and fine-grained auditing, is there any other functionality in Oracle that can be used to capture DML activity for a given table in a given time window? What I am trying to accomplish is to create a daily matrix of all the DML operations that occurred in my database (for a given schema).
    Thank you.

    Hi Nigel,
    "capture DML activities for a given table in a given time?" - I use DML triggers:
    http://www.dba-oracle.com/art_builder_proper_oracle_design_for_auditing.htm
    Enabling auditing options may not always be sufficient to evaluate suspicious activity within your database. When you enable auditing, Oracle places records in the SYS.AUD$ table in accordance with the auditing options that you have specified. One limitation to this type of auditing is that SYS.AUD$ does not provide you with value-based information. You need to write triggers to record the before and after values on a per-row basis.
    Auditing with Oracle supports DML and DDL statements on objects and structures. Triggers support DML statements issued against objects, and can be used to record the actual values before and after the statement.
    In some facilities, audit commands are considered security audit utilities, while triggers are referred to as financial auditing, because triggers can provide a method to track actual changes to values in a table. Although you can use triggers to record information much as the AUDIT command does, you should customize your auditing with triggers only when you need more detailed audit information.
    AFTER triggers are normally used to avoid unnecessary statement generation for actions that fail due to integrity constraints. AFTER triggers are executed only after all integrity constraints have been checked. AFTER ROW triggers provide value-based auditing for each row of the tables and support the use of “reason codes.” A reason for the statement or transaction, along with the user, sysdate, and old and new values, can be inserted into another table for auditing purposes.
    Oracle auditing can be used for successful and unsuccessful actions, as well as connections, disconnections, and session I/O activities. With auditing, you can decide if the actions should be BY ACCESS or BY SESSION. Triggers can only audit successful actions against the table on which they are created. If auditing is being performed using a trigger, any rollback or unsuccessful action will not be recorded.
    Auditing provides an easy, error-free method of tracking, with all the audit records stored in one place. Triggers are more difficult to create and maintain.
    You can also use the standard "audit" functionality to audit DML:
    http://www.dba-oracle.com/t_audit_table_command.htm
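    For example, object-level auditing of DML on a single table might look like this (a sketch; it assumes the audit trail is enabled, e.g. AUDIT_TRAIL=DB, and the schema/table names are hypothetical):

    audit insert, update, delete on scott.emp by access;

    -- Review what was captured:
    select username, action_name, timestamp
    from   dba_audit_trail
    where  owner = 'SCOTT' and obj_name = 'EMP';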
    Hope this helps . . .
    Donald K. Burleson
    Oracle Press author
    Author of "Oracle Tuning: The Definitive Reference"
    http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm

  • SQL 2012 AlwaysOn AG: primary went to secondary

    I have a database in SQL 2012 in an AlwaysOn AG that went from primary to secondary. I am not sure what happened or what made it fail over to secondary. So my apps, which were all pointing to the primary, had problems updating the database but were still running.
    What do I have to change in my apps so that they point to the AG listener instead of just the primary server?
    That way, whenever the primary changes, the apps will use the other server as the primary.

    Hi JonDoe,
    If an availability group possesses only one secondary replica and it is not configured to allow read access to the secondary replica, clients can connect to the primary replica by using a database mirroring connection string. Optionally, the connection string can also supply the name of another server instance, the failover partner name, to identify the server instance that initially hosts the secondary replica as the failover partner.
    While migrating from database mirroring to AlwaysOn Availability Groups, applications can specify the database mirroring connection string as long as only one secondary replica exists and it disallows user connections. For more information, see: http://msdn.microsoft.com/en-us/library/hh213417.aspx#CCBehaviorOnFailover
    There is an article about configuring SQL Server to automatically redirect read-only workloads after a failover in AlwaysOn Availability Groups; you can review it:
    http://www.mssqltips.com/sqlservertip/2869/configure-sql-server-2012-alwayson-availability-groups-readonly-routing-using-tsql/
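    To give the apps a stable name, you create a listener on the availability group and point the connection strings at its DNS name (a sketch; the AG, listener name, IP, and database are hypothetical):

    ALTER AVAILABILITY GROUP [MyAG]
    ADD LISTENER N'AGListener' (
        WITH IP ((N'10.0.0.50', N'255.255.255.0')),
        PORT = 1433);

    -- The application connection string then targets the listener, e.g.:
    -- Server=tcp:AGListener,1433;Database=MyDb;MultiSubnetFailover=True;...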
    Regards,
    Sofiya Li
    TechNet Community Support

  • OGG and firing triggers

    Hi,
    I am designing and implementing OGG to replicate a set of about 200 tables from a master system to two target systems. Updates to these tables only occur on the master system so this is a 1:n uni-directional situation. The master and target systems are all Oracle.
    Some or all of the target tables have triggers on them. For 95% of the tables, we don't want the triggers to fire when GG replicates to the table. This is easily accomplished by using the SUPPRESSTRIGGERS subparameter of the DBOPTIONS parameter in the replicat parameter file. So, I set up one extract/pump/replicat process to replicate the data to these tables.
    Of the remaining 5% of the tables, most of the time we want the triggers to fire on the target systems when data is replicated to the target table. Again, this is easily accomplished by setting up another extract/pump/replicat process with DBOPTIONS NOSUPPRESSTRIGGERS.
    There are a couple of tables on the target systems which have more than one trigger, and we want one of the triggers to fire when replicating but not the others. I believe that I can partly accomplish this by putting a SQLEXEC on the MAP statement for these tables which issues an ALTER TRIGGER <schema.trigger> DISABLE;. The documentation says this SQLEXEC will run before the data is replicated to the target table, so the trigger will be disabled when the replication happens.
    My problem is: how do I enable the trigger again once I have replicated data to the target table? I don't think it's possible to have a SQLEXEC run after the replication happens. Is that true?
    I have considered simply disabling the triggers with a standalone SQLEXEC statement that runs when replication starts, and enabling them again using SQLEXEC ONEXIT when replication stops. This will work, but I'm not sure my customer will buy this solution.
    Thanks a lot for any help you can provide.

    I've figured out how to do this. Start the replicat with DBOPTIONS SUPPRESSTRIGGERS specified. Then run a standalone SQLEXEC which executes the following stored procedure call for each trigger you want to have fire: dbms_ddl.set_trigger_firing_property('<trigger_owner>', '<trigger_name>', FALSE);
    I have a set of eight triggers I want to fire, so I created a simple stored procedure that executes this call for each of my triggers. It could probably be done inline in the replicat parameter file too.
    Also, I have a second stored procedure which I call using SQLEXEC ONEXIT. This disables all the triggers I enabled at the beginning.
    Looking at the FIRE_ONCE column in ALL_TRIGGERS will tell you what state a trigger is in. Ironically, NO means the trigger will fire and YES means it will not.
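    A sketch of that call plus the check (owner and trigger names are hypothetical):

    begin
      -- fire_once => FALSE means the trigger also fires for changes applied
      -- by the GoldenGate replicat (and other apply processes):
      dbms_ddl.set_trigger_firing_property(
        trig_owner => 'APP',
        trig_name  => 'TRG_ORDERS_AUDIT',
        fire_once  => FALSE);
    end;
    /
    -- FIRE_ONCE = 'NO' means the trigger WILL fire for replicated changes:
    select trigger_name, fire_once from all_triggers where owner = 'APP';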

  • Configure replicas based on existing encrypted DB

    I am researching SQL Server AlwaysOn. I have put together a configuration that uses an existing DB which is encrypted. It is NOT a TDE DB. I cannot find any examples of how to configure this setup, and I'm having an issue where my secondary replica cannot be constructed properly because I cannot restore the DB master key on the replica: I get a "cannot perform operation on a read-only DB" error. I've tried to find information about how to configure an AlwaysOn setup with an encrypted DB but could not find anything.
    MS has a page where they refer to encrypted DBs in an availability group - but they don't really talk about it; they refer to working with a decrypted DB (which I don't have).
    http://technet.microsoft.com/en-us/library/hh510178.aspx
    I was thinking that I could fail over to the secondary, but it fails. Not sure why, but I'm thinking it may be because the two DBs are not exactly the same.
    What do I need to do explicitly to get this to work?
    Peter

    Peter,
    I'm not sure if that's specific to Azure, but the DMK is an object in the database. You don't have to restore it.
    "Please create a master key in the database or open the master key in the session before performing this operation."
    I see this, but you posted this:
    No - not using the SMK. Just using a DMK and certificate
    which are contradictory. If you were only using the DMK then you shouldn't have this problem. So either you're using the SMK to automatically decrypt the DMK and so on, or Azure doesn't support this.
    I can easily do this (granted, not using Azure) with and without the SMK (if I restore the SMK beforehand, as I stated before).
    Repro:
    CREATE DATABASE EncryptedDB
    GO
    USE EncryptedDB
    GO
    CREATE TABLE EncryptedData (
        ID INT IDENTITY(1,1) NOT NULL,
        EncryptedValue VARBINARY(8000) NOT NULL
    );
    GO
    IF NOT EXISTS( SELECT 1 FROM sys.symmetric_keys WHERE name = '##MS_DatabaseMasterKey##')
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'My$trongP@$$'
    GO
    CREATE CERTIFICATE CertToEncryptSymKey
    WITH SUBJECT = 'Certificate To Encrypt The Symmetric Key.'
    GO
    CREATE SYMMETRIC KEY SymEncKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE CertToEncryptSymKey
    GO
    OPEN SYMMETRIC KEY SymEncKey DECRYPTION BY CERTIFICATE CertToEncryptSymKey
    INSERT INTO EncryptedData(EncryptedValue) VALUES (ENCRYPTBYKEY(KEY_GUID('SymEncKey'), N'Super Secret!', 0, ''))
    CLOSE SYMMETRIC KEY SymEncKey
    -- restore database, etc on another instance, setup into AOAG
    SELECT * FROM EncryptedData
    SELECT * FROM sys.symmetric_keys
    OPEN MASTER KEY DECRYPTION BY PASSWORD = 'My$trongP@$$'
    OPEN SYMMETRIC KEY SymEncKey DECRYPTION BY CERTIFICATE CertToEncryptSymKey
    SELECT *, CAST(DECRYPTBYKEY(EncryptedValue, 0, null) AS NVARCHAR(4000)) FROM EncryptedData
    CLOSE SYMMETRIC KEY SymEncKey
    CLOSE MASTER KEY
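    One note on the DMK after failover (a hedged sketch; run while the database is writable on the current primary, with the password from the repro above): if the DMK is protected only by a password, nothing can decrypt it automatically on a new primary, so either OPEN it explicitly after each failover, or add SMK protection on each replica while it holds the primary role (or restore the same SMK everywhere, as mentioned above):

    OPEN MASTER KEY DECRYPTION BY PASSWORD = 'My$trongP@$$'
    -- Re-protect the DMK with this instance's service master key so it is
    -- decrypted automatically while this instance is primary:
    ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY
    CLOSE MASTER KEY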
    Sean Gallardy

  • High Log Send Queue on replica. What does it mean?

    As far as I understand AlwaysOn synchronization, the performance counter SQL Server:Database Replica --> Log Send Queue is relevant only on the primary, which is sending the log to the replicas. How come I have a very high Log Send Queue on the secondary replica?

    Hi Gal1,
    According to your description, you understood that Log Send Queue was relevant only on the primary replica, which sends the log to the secondary replica; yet you saw a very high Log Send Queue on a secondary replica and wanted to know what this state means.
    Firstly, I would like to deliver a more accurate explanation of Log Send Queue.
    The SQLServer:Database Replica performance object contains performance counters that report information about the secondary databases of an AlwaysOn availability group in SQL Server 2014. This object is valid only on an instance of SQL Server that hosts a secondary replica. Log Send Queue is one of the counters in SQLServer:Database Replica. It shows the amount of log records in the log files of the primary database, in kilobytes, that has not yet been sent to the secondary replica. This value is sent to the secondary replica from the primary replica. Queue size does not include FILESTREAM files that are sent to a secondary.
    Besides, in synchronous mode, the commit towards the application is delayed until an acknowledgement from the synchronous secondary replicas returns; the acknowledgement is sent when the content sent to the replica is persisted in the transaction log of the secondary replica.
    For more information, you can refer to this article: SQL Server 2012 AlwaysOn – Part 12 – Performance Aspects and Performance Monitoring II (http://blogs.msdn.com/b/saponsqlserver/archive/2013/04/24/sql-server-2012-alwayson-part-12-performance-aspects-and-performance-monitoring-ii.aspx).
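    The same numbers are exposed through DMVs, which can be easier to query than the performance counters (a sketch):

    SELECT ar.replica_server_name,
           DB_NAME(drs.database_id) AS db_name,
           drs.log_send_queue_size,  -- KB of primary log not yet sent to this secondary
           drs.log_send_rate,
           drs.redo_queue_size
    FROM sys.dm_hadr_database_replica_states AS drs
    JOIN sys.availability_replicas AS ar
        ON drs.replica_id = ar.replica_id;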
    Best regards,
    Qiuyun Yu

  • Running scripts on primary and secondary instances.

    Hello,
    I created an AlwaysOn Availability group with 4 instances and I added 1 database to the availability group. On every instance the database is readable.
    When I run sp_updatestats on one of the secondary instances, I get this error message:
    Msg 15635, Level 16, State 1, Procedure sp_updatestats, Line 21
    Cannot execute 'sp_updatestats' because the database is in read-only access mode.
    When I run
    SELECT name, is_read_only FROM sys.databases WHERE name = 'mydb';
    I get
    name   is_read_only
    mydb   0
    That probably makes sense because I am looking at a copy of the database metadata from the primary instance, and the database on the primary instance is not read-only? Or am I looking at a bug? Should the availability group change is_read_only to 1 on the secondary instances?
    Is it possible to detect that a database is synchronizing or is read-only, so that you know not to update statistics or rebuild indexes? It has to be dynamic: I do not want to have to exclude and include databases when the primary instance changes. Something like the following (see the sketch after this post):
    If (database is on primary availability group instance == true)
        Exec task
    Else
        Forget about it
    Greetings,
    Erik
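    A sketch of such a dynamic check (sys.fn_hadr_is_primary_replica is available from SQL Server 2014; on 2012 you can derive the same answer by joining sys.dm_hadr_availability_replica_states; the database name is hypothetical):

    IF sys.fn_hadr_is_primary_replica(N'mydb') = 1
    BEGIN
        -- This instance currently hosts the primary replica: safe to run maintenance.
        EXEC mydb.dbo.sp_updatestats;
    END
    ELSE
        PRINT 'mydb is a secondary replica here - skipping maintenance.';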

    Hi hcv,
    Each availability group defines a set of two or more failover partners known as availability replicas. Every availability replica is assigned an initial role - either the primary role or the secondary role. The role of a given replica determines whether it hosts read-write or read-only databases. One replica, known as the primary replica, is assigned the primary role and hosts read-write databases, which are known as primary databases. At least one other replica, known as a secondary replica, is assigned the secondary role. A secondary replica hosts read-only databases, known as secondary databases. So once a database is set to read-only, statistics will not be automatically updated, and we will not be able to create indexes and so on. If the original secondary replica changes to the primary role, its databases become the primary databases, and then you can execute sp_updatestats on those read-write databases.
    For more information, see:
    Overview of AlwaysOn Availability Groups (SQL Server)
    Regards,
    Sofiya Li
    TechNet Community Support

  • COMMIT in Triggers?

    Do I need to put COMMIT in DML Triggers?
    I cannot find any references about it in the documentation.

    I found this restriction:
    "The PL/SQL block of a trigger cannot contain transaction control SQL statements (COMMIT, ROLLBACK, SAVEPOINT, and SET CONSTRAINT) if the block is executed within the same transaction."
    At this URL:
    http://technet.oracle.com/docs/products/oracle9i/doc_library/901_doc/server.901/a90125/statements_76a.htm#2063897
    No - you can NOT have a COMMIT or ROLLBACK in a trigger (unless the trigger body is declared as an autonomous transaction, which runs outside the triggering transaction).
    eric
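    A minimal illustration (hypothetical table name) - the trigger compiles, but raises an error as soon as it fires:

    create or replace trigger trg_no_commit
    after insert on t_demo
    begin
      commit;  -- raises ORA-04092: cannot COMMIT in a trigger
    end;
    /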

  • Steps to add the pull subscription on secondary (post failover) in 2012

    We are upgrading servers from 2008 to 2012 and implementing AlwaysOn for HA along with replication. I am stuck on making replication work between the primary publisher and the subscriber on the secondary post-failover. Has anyone faced and fixed this issue? If so, please share the detailed steps.

    Hi starter_dba,
    Before configuring replication with AlwaysOn availability groups, there are some notes to consider. For example, the distributor should not be a host for any of the current replicas of the availability group that the publishing database is a member of, and you need to configure the secondary replica hosts as replication publishers.
    For more information about configuring replication for AlwaysOn Availability Groups, you can review the following articles,
    http://technet.microsoft.com/en-us/library/hh710046.aspx
    Expanding AlwaysOn Availability Groups with Replication Subscribers:
    https://www.simple-talk.com/sql/backup-and-recovery/expanding-alwayson-availability-groups-with-replication-subscribers/
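    Concretely, after creating the AG listener you redirect the original publisher to the listener at the distributor and then validate the replica hosts (a sketch; server, database, and listener names are hypothetical):

    USE distribution;
    EXEC sys.sp_redirect_publisher
         @original_publisher   = N'SQLNODE1',
         @publisher_db         = N'PubDb',
         @redirected_publisher = N'AGListener';

    -- Confirm each replica host is correctly configured as a publisher:
    EXEC sys.sp_validate_replica_hosts_as_publishers
         @original_publisher   = N'SQLNODE1',
         @publisher_db         = N'PubDb',
         @redirected_publisher = N'AGListener';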
    Regards,
    Sofiya Li
    TechNet Community Support
