Transactional Replication and Database Snapshots

Hi,
I have a database that is a publisher in transactional replication.
I create a database snapshot on that database and then let transactions replicate to the subscriber(s).
I revert the database back to the snapshot.
What happens to replication?
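For concreteness, the sequence being described, with hypothetical database and file names, would look something like this:

```sql
-- Hypothetical names throughout. Create a snapshot of the publication database:
CREATE DATABASE PubDB_Snapshot
ON ( NAME = PubDB_Data, FILENAME = N'C:\Snapshots\PubDB_Snapshot.ss' )
AS SNAPSHOT OF PubDB;

-- ...let transactions replicate to the subscriber(s), then revert:
USE master;
RESTORE DATABASE PubDB
FROM DATABASE_SNAPSHOT = 'PubDB_Snapshot';
```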
Dan Jameson
Associate Director of IT/DBA
Children's Oncology Group
http://www.ChildrensOncologyGroup.org

Your Log Reader Agent could fail if the publication database LSN is less than the value of the transaction sequence number (max xact_seqno) at the distribution database.  In that case you can execute
sp_replrestart to resynchronize the Publisher metadata with the Distributor metadata.
Afterwards it would be wise to run a data validation to see how far out of sync you are with the Subscriber, and then use
the tablediff utility or SQL Data Compare to bring the Publisher and Subscriber back into convergence.  Reinitialization is an option as well.  It depends on exactly what you are trying to achieve.
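As a sketch of that recovery path (the publication name is a placeholder), run on the Publisher in the publication database:

```sql
-- Resynchronize the Publisher's highest LSN with the Distributor
-- metadata after the revert (run in the publication database).
EXEC sp_replrestart;

-- Then request a row count + checksum validation for every article,
-- to see how far out of sync the Subscriber is.
EXEC sp_publication_validation
     @publication   = N'YourPublication',  -- placeholder
     @rowcount_only = 2,   -- 2 = row count and binary checksum
     @full_or_fast  = 2;   -- 2 = fast with conditional fallback to full
-- Results surface in the Distribution Agent history / Replication Monitor.
```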
Brandon Williams (blog | linkedin)

Similar Messages

  • Transactional Replication - Generate a snapshot for a new article only

    I would appreciate any help on this, thanks ahead!
    We are running the script below, which works well for us in dev and other environments, but when run on prod the generated snapshot is for all articles in the publication rather than the desired result: a snapshot for the new article only.
    I have copied below the code being used.
    SET NOCOUNT ON; SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
    DECLARE @rc int, @publication sysname, @article sysname, @subscriber sysname, @destination_db sysname
    ,@delete_article_from_replication_configuration bit ,@debug bit = 1;
    SELECT @publication = N'MyPub'
    ,@destination_db = N'dest_database'
    ,@subscriber = N'MyServer'
    SELECT @article = N'MyArt'
    -- SET immediate_sync and allow_anonymous to false
    EXEC sp_changepublication
    @publication = @publication,
    @property = N'immediate_sync',
    @value = N'false';
    EXEC sp_changepublication
    @publication = @publication,
    @property = N'allow_anonymous',
    @value = N'false';
    -- add article
    DECLARE @error_message nvarchar(4000);
    IF NOT EXISTS (SELECT * FROM dbo.sysarticles a INNER JOIN dbo.syspublications p ON a.pubid = p.pubid WHERE a.name = @article AND p.name = @publication)
    BEGIN;
    EXEC @rc = sp_addarticle
    @publication = @publication
    ,@article = @article
    ,@source_owner = N'dbo'
    ,@source_object = @article
    ,@destination_table = @article
    ,@type = N'logbased'
    ,@creation_script = null
    ,@description = null
    ,@pre_creation_cmd = N'none'
    ,@schema_option = 0x000000000803100D /* 0x000000000803FFDF */
    ,@status = 16 /* 8 */
    ,@vertical_partition = N'false'
    ,@ins_cmd = N'SQL'
    ,@del_cmd = N'SQL'
    ,@upd_cmd = N'SQL'
    ,@filter = null
    ,@sync_object = null
    ,@auto_identity_range = N'false'
    ,@identityrangemanagementoption = N'manual';
    IF ( (@@ERROR <> 0) OR (@rc <> 0) )
    BEGIN;
    SELECT @error_message = ERROR_MESSAGE(); RAISERROR(@error_message, 16, 1);
    IF (@@TRANCOUNT > 0) ROLLBACK TRAN; RETURN;
    END;
    PRINT 'The article ''' + @article + ''' has been added to publication ''' + @publication + '''';
    END;
    -- add subscription
    IF NOT EXISTS (SELECT * from syssubscriptions WHERE dest_db NOT LIKE 'virtual' AND srvname LIKE @subscriber AND artid IN
    (SELECT artid FROM dbo.sysarticles a INNER JOIN dbo.syspublications p ON a.pubid = p.pubid WHERE a.name = @article AND p.name = @publication ))
    BEGIN;
    EXEC @rc = sp_addsubscription
    @publication = @publication
    ,@subscriber = @subscriber
    ,@destination_db = @destination_db
    ,@subscription_type = N'Pull'
    ,@sync_type = N'automatic'
    --,@sync_type = N'replication support only'
    ,@article = @article
    ,@update_mode = N'read only'
    ,@subscriber_type = 0
    ,@subscriptionstreams = 4;
    IF ( (@@ERROR <> 0) OR (@rc <> 0) )
    BEGIN;
    SELECT @error_message = ERROR_MESSAGE(); RAISERROR(@error_message, 16, 1);
    IF (@@TRANCOUNT > 0) ROLLBACK TRAN; RETURN;
    END;
    PRINT 'The subscription ''' + @subscriber + ''' for article ''' + @article + ''' has been created''';
    END;
    EXEC sp_changepublication
    @publication = @publication
    ,@property = N'sync_method'
    ,@value = N'native'
    ,@force_invalidate_snapshot = 0
    ,@force_reinit_subscription = 0;
    -- create snapshot
    EXEC sp_addpublication_snapshot
    @publication = @publication
    ,@frequency_type = 1
    ,@frequency_interval = 1
    ,@frequency_relative_interval = 0
    ,@frequency_recurrence_factor = 0
    ,@frequency_subday = 0
    ,@frequency_subday_interval = 0
    ,@active_start_time_of_day = 0
    ,@active_end_time_of_day = 235959
    ,@active_start_date = 0
    ,@active_end_date = 0;
    Yaniv Etrogi

    Hello,
    1. Verify whether you are using the CONCURRENT or NATIVE method for synchronization by running the following command:
    USE yourdb;
    SELECT sync_method FROM syspublications;
    If the value is 3 or 4 it is CONCURRENT; if it is 0 it is NATIVE.
    For more information check
    http://msdn.microsoft.com/en-us/library/ms189805.aspx
    2. Then add the subscription for this new article using the following command:
    EXEC sp_addsubscription @publication = 'yourpublication', @article = 'test',
    @subscriber = 'subs_servername', @destination_db = 'subs_DBNAME',
    @reserved = 'Internal'
    If you are using the NATIVE method for synchronization then the parameter
    @reserved = 'Internal' is optional, but there is no harm in using it anyway. If it is CONCURRENT then you have to use that parameter; otherwise the next time you run the Snapshot Agent it is going to generate a snapshot for all the articles.
    3. Start the SNAPSHOT AGENT job from the Job Activity Monitor. To find
    the job name, follow these steps:
    · select * from msdb..sysjobs where name like '%yourpublication%'
    · Right-click each of those jobs and find which one contains the step
    'Snapshot Agent startup message'. This is the job that you want to
    start from the first step.
    4. Verify that the snapshot was generated for only one article.
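    One way to verify, sketched here with assumed names (the MSsnapshot_* tables live in the distribution database):

    ```sql
    USE distribution;
    -- Last few Snapshot Agent runs for the publication; the comments column
    -- should reference .sch/.bcp files for only the newly added article.
    SELECT TOP (5) h.start_time, h.comments
    FROM dbo.MSsnapshot_history AS h
    JOIN dbo.MSsnapshot_agents  AS a ON a.id = h.agent_id
    WHERE a.publication = N'yourpublication'   -- placeholder
    ORDER BY h.start_time DESC;
    ```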
    Regards, Pradyothana DP. http://www.dbainhouse.blogspot.in/

  • Transaction Sync and Database Size

    Hello,
    We're using BDB (via the Java interface) as the persistent store for a messaging platform. In order to achieve high performance, the transactional operations are configured to not sync, i.e., TransactionConfig.setSync(false) . While we do achieve better performance, the size of the database does seem rather large. We checkpoint on a periodic basis, and each time we checkpoint, the size of the database grows, even though records (messages in our world) are being deleted. So, if I were to add, say 10000 records, delete all of them and then checkpoint, the size of the database would actually grow! In addition, the database file, while being large, is also very sparse - a 30GB file when compressed reduces in size to 0.5 GB.
    We notice that if we configure our transactional operations to sync, the size is much smaller, and stays constant, i.e., if I were to insert and subsequently delete 10000 records into a database whose file is X MB, the size of the database file after the operations would be roughly X MB.
    I understand that transaction logs are applied to the database when we checkpoint, but should I be configuring the behaviour of the checkpointing (via CheckpointConfig )?
    Also, I am checkpointing periodically from a separate thread. Does BDB itself spawn any threads for checkpointing?
    Our environment is as follows:
    RedHat EL 2.6.9-34.ELsmp
    Java 1.5
    BDB 4.5.20
    Thanks much in advance,
    Prashanth

    Hi Prashanth,
    If your delete load is high, your workload should benefit from setting the DB_REVSPLITOFF flag, which keeps the structure of the btree around regardless of records being deleted. The result should be fewer splits and merges, and therefore better concurrency.
    Here you can find some documentation that should help you:
    Access method tuning: http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_misc/tune.html
    Transaction tuning: http://www.oracle.com/technology/documentation/berkeley-db/db/ref/transapp/tune.html
    If you are filling the cache with dirty pages, you can indeed call checkpoint() periodically in the application, or you can create a memp_trickle thread. See the following sections of the documentation:
    - Javadoc: Environment.trickleCacheWrite: http://www.oracle.com/technology/documentation/berkeley-db/db/java/com/sleepycat/db/Environment.html#trickleCacheWrite(int)
    A related thread on the "database size issue" can be found here: http://forums.oracle.com/forums/thread.jspa?threadID=534371&tstart=0
    Bogdan Coman

  • Urgent: EJB Transaction mechanism and Database Transaction mechanism

    Can anybody please clarify how the EJB transaction mechanism uses the underlying database transaction mechanism? My concern is: in the context of an EJB transaction, how many responsibilities are performed by the EJB container and how many by the underlying database server? I will deem it a great favor if you kindly explain the whole story with example(s).

    Actually the EJB container is managing the persistence.
    It works like this.
    If you are using entity beans (or stateful session beans), then while creating the entity bean class you have to specify in the
    deployment descriptor which table in the database the bean represents.
    At runtime, when you create an instance of an entity bean, that instance corresponds to a row in the mapped table.
    Whatever changes you make to that instance's attributes (i.e.
    the columns in that row) are available in the session.
    When you commit that session, the changes are written to disk.
    That is how change is managed.
    Now assume one user is modifying a particular row while another user is deleting it: whichever transaction commits first takes effect.
    If the modification commits first and then the delete, the row is deleted last. But if the delete commits first and you then try to commit the modification,
    you get an error saying that the particular row is missing from storage.
    That is how the EJB container manages persistence,
    in all cases, even with synchronous access.
    I hope this makes it clear.

  • Taking a snapshot of Oracle tables to SQL Server using transactional replication is taking a long time

    Hi All,
    I am trying to replicate around 200 Oracle tables to SQL Server using transactional replication and it is taking a long time, i.e. the initial snapshot is taking more than 24 hrs and it is still going.
    Is there any way to replicate these tables faster?
    Kindly help me out.
    Thanks

    Hi,
    According to the description, I know the replication is working fine, but it is very slow. 
    1. Check the CPU usage on the Oracle publisher and SQL Server. This issue may be due to slow client processing (Oracle performance) or network performance issues.
    2. Based on SQL Server 2008 Books Online 'Performance Tuning for Oracle Publishers' (http://msdn.microsoft.com/en-us/library/ms151179(SQL.100).aspx), you can enable the transaction
    job set and follow the instructions in
    http://msdn.microsoft.com/en-us/library/ms147884(v=sql.100).aspx.
    3. You can enable replication agent logging to check the replication behavior. You may follow these steps to collect the logs:
    To enable Distribution Agent verbose logging, please follow these steps:
    a. Open SQL Server Agent on the distribution server.
    b. Under the Jobs folder, find the Distribution Agent job.
    c. Right-click the job and choose Properties.
    d. Select the Steps tab.
    e. Select the Run agent step, click the Edit button, and add the following at the end of the command box:
            -Output C:\Temp\OUTPUTFILE.txt -Outputverboselevel 2
    f. Exit the dialogs.
     For more information about the steps, please refer to:
    http://support.microsoft.com/kb/312292
    Hope the information helps.
    Tracy Cai
    TechNet Community Support

  • Transaction replication failing at creation of Publication

    Hi Gurus, I am facing a very uncommon issue when setting up transactional replication. My instance is running MSSQL 2005 SP3 and the same instance acts as Distributor and Publisher.
    Step 1: Created the Distributor with no issues.
    Step 2: Enabled the Publisher - same instance, with a database under this instance running in full recovery mode.
    Step 3: Create Publication - went through all the steps and selected a couple of tables (articles) to create the initial snapshot. After providing a name for the publication and clicking OK, it throws the following error for both articles:
    TITLE: New Publication Wizard
    SQL Server Management Studio could not create article 'abc_Table'.
    ADDITIONAL INFORMATION:
    An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
    The article 'abc_table' does not exist. Changed database context to '<publisher_DB>'. (Microsoft SQL Server, Error: 20027)
    Both tables have primary keys and are available in the publisher database. I have set up any number of transactional replications and have not seen this kind of error before. I did not find any help when I googled/Binged. Any help is highly appreciated. Thanks in advance for your help.
    Thanks, TTeam

    Hi Mohammad, Please see the table structure it is under dbo schema.
    CREATE TABLE [dbo].[busmgt](
        [id] [int] NOT NULL,
        [persid] [nvarchar](30) NULL,
        [hier_parent] [binary](16) NULL,
        [hier_child] [binary](16) NOT NULL,
        [last_mod_dt] [int] NULL,
        [last_mod_by] [binary](16) NULL,
        [cost] [int] NULL,
        [sym] [nvarchar](60) NOT NULL,
        [nx_desc] [nvarchar](40) NULL,
        [bm_rep] [int] NULL,
        [ci_rel_type] [int] NULL,
        [tenant] [binary](16) NULL,
        [del] [int] NOT NULL DEFAULT ('0'),
        CONSTRAINT [PK__busmgt__187915EB] PRIMARY KEY NONCLUSTERED ([id] ASC)
            WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                  ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    Thanks, TTeam

  • Transactional replication update causing deadlock

    Hello,
    I am using SQL Server 2012 SE with transactional replication and have noticed that we have been receiving deadlocks lately. A SELECT statement that inserts data into a temp table is getting deadlocked by the transactional replication UPDATE statement.
    I am in the process of resolving this deadlock. Is adding a missing index the only solution here? Please find the deadlock information below:
    3/13/2015 11:03:17,spid3s,Unknown,waiter id=process85b6b2928 mode=IX requestType=wait
    03/13/2015 11:03:17,spid3s,Unknown,waiter-list
    03/13/2015 11:03:17,spid3s,Unknown,owner id=process505246558 mode=S
    03/13/2015 11:03:17,spid3s,Unknown,owner-list
    03/13/2015 11:03:17,spid3s,Unknown,pagelock fileid=1 pageid=8285871 dbid=6 subresource=FULL objectname=PaigahDB.dbo.EmailLogs id=lock73c690280 mode=S associatedObjectId=72057594176077824
    03/13/2015 11:03:17,spid3s,Unknown,waiter id=process505246558 mode=S requestType=wait
    03/13/2015 11:03:17,spid3s,Unknown,waiter-list
    03/13/2015 11:03:17,spid3s,Unknown,owner id=process85b6b2928 mode=IX
    03/13/2015 11:03:17,spid3s,Unknown,owner-list
    03/13/2015 11:03:17,spid3s,Unknown,pagelock fileid=1 pageid=8286764 dbid=6 subresource=FULL objectname=PaigahDB.dbo.EmailLogs id=lock3d4201500 mode=IX associatedObjectId=72057594176077824
    03/13/2015 11:03:17,spid3s,Unknown,resource-list
    03/13/2015 11:03:17,spid3s,Unknown,Proc [Database Id = 6 Object Id = 323532236]
    03/13/2015 11:03:17,spid3s,Unknown,inputbuf
    03/13/2015 11:03:17,spid3s,Unknown,[FullName] = case substring(@bitmap<c/>2<c/>1) & 8 when 8 then @c12 else [FullName] end<c/>
    03/13/2015 11:03:17,spid3s,Unknown,[LastName] = case substring(@bitmap<c/>2<c/>1) & 4 when 4 then @c11 else [LastName] end<c/>
    03/13/2015 11:03:17,spid3s,Unknown,[FirstName] = case substring(@bitmap<c/>2<c/>1) & 2 when 2 then @c10 else [FirstName] end<c/>
    03/13/2015 11:03:17,spid3s,Unknown,[AddressOrder] = case substring(@bitmap<c/>2<c/>1) & 1 when 1 then @c9 else [AddressOrder] end<c/>
    03/13/2015 11:03:17,spid3s,Unknown,[EmailType] = case substring(@bitmap<c/>1<c/>1) & 128 when 128 then @c8 else [EmailType] end<c/>
    03/13/2015 11:03:17,spid3s,Unknown,[AddressContactAttempts] = case substring(@bitmap<c/>1<c/>1) & 64 when 64 then @c7 else [AddressContactAttempts] end<c/>
    03/13/2015 11:03:17,spid3s,Unknown,[Address] = case substring(@bitmap<c/>1<c/>1) & 32 when 32 then @c6 else [Address] end<c/>
    03/13/2015 11:03:17,spid3s,Unknown,[Timestamp] = case substring(@bitmap<c/>1<c/>1) & 16 when 16 then @c5 else [Timestamp] end<c/>
    03/13/2015 11:03:17,spid3s,Unknown,[LogStatus] = case substring(@bitmap<c/>1<c/>1) & 8 when 8 then @c4 else [LogStatus] end<c/>
    03/13/2015 11:03:17,spid3s,Unknown,[JobID] = case substring(@bitmap<c/>1<c/>1) & 4 when 4 then @c3 else [JobID] end<c/>
    03/13/2015 11:03:17,spid3s,Unknown,[EmailAddressID] = case substring(@bitmap<c/>1<c/>1) & 2 when 2 then @c2 else [EmailAddressID] end<c/>
    03/13/2015 11:03:17,spid3s,Unknown,update [dbo].[EmailLogs] set
    03/13/2015 11:03:17,spid3s,Unknown,frame procname=PaigahDB.dbo.sp_MSupd_dboEmailLogs line=52 stmtstart=4900 stmtend=8184 sqlhandle=0x03000600ccb54813d9b8250031a4000001000000000000000000000000000000000000000000000000000000
    03/13/2015 11:03:17,spid3s,Unknown,executionStack
    03/13/2015 11:03:17,spid3s,Unknown,process id=process85b6b2928 taskpriority=0 logused=793868 waitresource=PAGE: 6:1:8285871  waittime=1805 ownerId=591093165 transactionname=user_transaction lasttranstarted=2015-03-13T11:03:07.047 XDES=0x8fbb7cd28 lockMode=IX
    schedulerid=2 kpid=1496 status=suspended spid=70 sbid=0 ecid=0 priority=0 trancount=2 lastbatchstarted=2015-03-13T11:03:15.870 lastbatchcompleted=2015-03-13T11:03:15.867 lastattention=1900-01-01T00:00:00.867 clientapp=webapp_rep_10032014 hostname=DBServer
     hostpid=3112 loginname=dom\user isolationlevel=read committed (2) xactid=591093165 currentdb=6 lockTimeout=4294967295 clientoption1=671156320 clientoption2=128056
    03/13/2015 11:03:17,spid3s,Unknown,Proc [Database Id = 17 Object Id = 853578079]
    03/13/2015 11:03:17,spid3s,Unknown,inputbuf
    03/13/2015 11:03:17,spid3s,Unknown,where el.[Timestamp] between ji.LastExportTime and @currentExportTime;
    03/13/2015 11:03:17,spid3s,Unknown,join PaigahDB.dbo.EmailLogStatuses es on el.LogStatus = es.EmailLogStatusID
    03/13/2015 11:03:17,spid3s,Unknown,join PaigahDB.dbo.EmailLogs el on el.JobID = ji.EmailJobId
    03/13/2015 11:03:17,spid3s,Unknown,from #JobIds ji
    03/13/2015 11:03:17,spid3s,Unknown,select 'EMAIL'<c/> el.EmailLogID<c/> el.ImportPersonId<c/> el.JobID<c/> el.Timestamp<c/> es.Details<c/> es.Name<c/> el.AddressContactAttempts<c/> null<c/> null<c/>
    el.LogStatus<c/> es.Details<c/> el.EmailLogIdentity
    03/13/2015 11:03:17,spid3s,Unknown,(MessageType<c/> LogId<c/> ImportPersonId<c/> JobId<c/> [Timestamp]<c/> Details<c/> Name<c/> NumberContactAttempts<c/> Number<c/> PatientResponse<c/> LogStatusID<c/>
    LogStatus<c/> LogIdentity)
    03/13/2015 11:03:17,spid3s,Unknown,insert into #LogFlexes
    03/13/2015 11:03:17,spid3s,Unknown,frame procname=PlazaDB.dbo.GetJobResults line=111 stmtstart=7788 stmtend=9074 sqlhandle=0x030011005f91e032b241de0050a4000001000000000000000000000000000000000000000000000000000000
    03/13/2015 11:03:17,spid3s,Unknown,executionStack
    03/13/2015 11:03:17,spid3s,Unknown,process id=process505246558 taskpriority=0 logused=0 waitresource=PAGE: 6:1:8286764  waittime=7082 ownerId=591097183 transactionname=INSERT lasttranstarted=2015-03-13T11:03:10.667 XDES=0x8f5b68d28 lockMode=S schedulerid=1
    kpid=5128 status=suspended spid=188 sbid=0 ecid=0 priority=0 trancount=2 lastbatchstarted=2015-03-13T11:03:10.397 lastbatchcompleted=2015-03-13T11:03:10.397 lastattention=2015-03-13T10:52:07.350 clientapp=.Net SqlClient Data Provider hostname=GW-W hostpid=5468
    loginname=abc\admin isolationlevel=read committed (2) xactid=591097183 currentdb=17 lockTimeout=4294967295 clientoption1=673185824 clientoption2=128056
    03/13/2015 11:03:17,spid3s,Unknown,process-list
    03/13/2015 11:03:17,spid3s,Unknown,deadlock victim=process505246558
    03/13/2015 11:03:17,spid3s,Unknown,deadlock-list
    Experts I need your valuable inputs.
    Thanks a ton

    This is what I would do:
    Check when update statistics were last run.
    Check for fragmentation on the tables.
    Turn on trace flags 1204 and 1222.
    Use the deadlock graph and deadlock chain to narrow down the statements, and tune them.
    See below link which might be useful.
    https://www.simple-talk.com/sql/performance/sql-server-deadlocks-by-example/
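    A minimal sketch of turning on those trace flags server-wide (requires sysadmin):

    ```sql
    -- Write deadlock chain (1204) and XML-style deadlock graph (1222)
    -- details to the SQL Server error log, for all sessions (-1 = global).
    DBCC TRACEON (1204, -1);
    DBCC TRACEON (1222, -1);

    -- Verify which trace flags are active:
    DBCC TRACESTATUS (-1);
    ```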

  • Does SAP use SQL server's snapshot and transactional replication?

    Gurus:
    Could you help with this?
    Does SAP use SQL server's snapshot and transactional replication?
    Thanks!

    Hi Christy,
    No, SAP does not directly leverage these functions.
    Nonetheless, it is up to you to use them on your system. I regularly use the snapshot functionality when applying Support Packages. In case something goes wrong, a snapshot is the easiest way to roll back the import process (not exactly the best choice when talking about production, where users keep working while you are importing).
    Have a look at this [document |http://download.microsoft.com/download/d/9/4/d948f981-926e-40fa-a026-5bfcf076d9b9/SAP_SQL2005_Best Practices.doc]. It deals with best practices and also covers snapshots, replication and mirroring.
    Sven

  • Is it possible to configure transaction and merge replication one database as a publisher?

    Hi All,
    We have a requirement to configure replication between the servers and the plan is as follows:
    A------>B -- one-way transactional replication --- already configured
    A------>C -- merge replication (both ways) --- this is planned
    1) Our requirement for configuring merge replication is to allow multiple users to access both the publisher and subscriber databases. Is configuring merge replication in combination with transactional replication correct, or will we end up facing issues?
    2) A-to-B transactional replication is already configured, so if we configure merge replication between A and C, will it affect the existing replication?
    Please let us know if you need any details on this. Thank You.
    Grateful to your time and support. Regards, Shiva

    Hi Sir,
    Thanks for the information. But I have a small doubt: does "best to use merge all the way" mean you recommend using merge replication for all the servers?
    A------>B --
    Merge Replication (Both ways) 
    A------> C --Merge Replication (Both ways)
    Please correct me if I am wrong
    Grateful to your time and support. Regards, Shiva

  • SQL Server 2008 R2 Replication - not applying snapshot and not updating all replicated columns

    We are using transactional replicating on SQL Server 2008 R2 (SP1) using a remote distributor. We are replicating from BaanLN, which is an ERP application to up to 5 subscribers, all using push publications. 
    Tables can range from a couple million rows to 12 million rows and 100's of GBs in size. 
    And it is due to the size of the tables that it was designed with a one-publisher-to-one-table architecture.  
    Until recently it has been working very smoothly (the last four years), but we have come across two issues I have never encountered.
    While this has happened a half dozen times before, it last occurred a couple weeks ago when I was adding three new publications, again with a one-table-per-publication architecture.
    We use standard SS repl proc calls to create the publications, which have been successful for years. 
    On this occasion replication created the three publications, assigned the subscribers and even generated the new snapshot for all three new publications. 
    However, while it appeared that replication had created all the publications correctly from end to end, it actually only applied one of the three snapshots and created the new table on both of the new subscribers (two on each of the publications).  It only applied the snapshot to one of the two subscribers for the second publication, and did not apply it to any on the third.  
    I let it run for three hours to see if it was a backlog issue. 
    Replication was showing commands coming across when looking at the sync verification at the publisher, and 
    it would even successfully pass a tracer token through each of the three new publications, despite the tables not existing on either subscriber for one of the publications and missing on one of the subscribers for another.  
    I ended up attempting to reinitialize roughly a dozen times, spanning a day, and one of the two remaining publications was correctly reinitialized and the snapshot applied, but the second of the two again failed with the same mysterious result, and
    again looked successful based on all the monitoring. 
    So I kept reinitializing the last one, and after multiple attempts spanning a day, it too was finally built correctly.  
    Now the story gets a little stranger.  We just found out yesterday that on Friday the 17th 
    at 7:45, the approximate time we started the aforementioned deployment of the three new publications, 
    we also had three transactions from a stable and vetted publication send over all changes except for a single status column. 
    This publication has 12 million rows and is very active, with thousands of changes daily. 
    The three rows did not replicate a status change from a 5 to a 6. 
    We verified that the status was in fact 6 on the publisher and 
    5 on both subscribers, yet there were no messages or errors.  All the other rows updated successfully.  
    We fixed it by updating the publisher from 6 back to 5 and then back to 6 again on those specific rows, and it worked.
    The CPU is low and overall latency is minimal on the distributor. 
    From all accounts the replication is stable and smooth, but very busy. 
    The issues above have only recently started.  I am not sure where to look for a problem, and to that end, a solution.

    I suspect the problem with the new publication/subscriptions not initializing may have been a result of timeouts but it is hard to say for sure.  The fact that it eventually succeeded after multiple attempts leads me to believe this.  If this happens
    again, enable verbose agent logging for the Distribution Agent to see if you are getting query timeouts.  Add the parameters
    -OutputVerboseLevel 2 -Output C:\TEMP\DistributionAgent.log to the Distribution Agent Run Agent job step, rerun the agent, and collect the log.
    If you are getting query timeouts, try increasing the Distribution Agent -QueryTimeOut parameter.  The default is 1800 seconds.  Try bumping this up to 3600 seconds.
    Regarding the three transactions not replicating, inspect MSrepl_errors in the distribution database for the time these transactions occurred and see if any errors occurred.
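    A sketch of that MSrepl_errors check, assuming the distribution database is named `distribution` (substitute the actual incident window):

    ```sql
    USE distribution;
    SELECT [time], error_code, error_text, xact_seqno
    FROM dbo.MSrepl_errors
    WHERE [time] BETWEEN '20150417 07:00' AND '20150417 09:00'  -- placeholder window
    ORDER BY [time];
    ```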
    Brandon Williams (blog | linkedin)

  • Transactional Replication sp_MSupd RUNNABLE and never ends

    Hello,
    I have come across a puzzling problem with transactional pull replication. Once or twice a month the stored procedure on the subscription server (sp_MSupd) cannot finish. It is not blocked by any other session and does not have
    any wait type. It simply hangs and remains in task status RUNNABLE. The only way to recover the replication is to initialize it from a backup or snapshot. Do you have any suggestions? Have you faced a similar problem?

    Hi Lydia,
    Thank you for your reply. What do you mean by the "busy update on the publication"? Could you please describe it in more detail? Does it mean that on the publisher there was a huge update on a huge number of rows? In the Publisher to
    Distributor history I can see that there is no problem: all transactions are delivered without any delay (a few seconds). But in the Distributor to Subscriber history all actions complete in a few seconds except the last one, which hangs. How can I check whether
    it is the "busy update on the publication" problem? What should be done if it hangs for more than a few hours?
    I found out that sp_MSget_repl_commands hangs on the Publisher for the remote Subscriber. It hangs with ASYNC_NETWORK_IO.
    Hi KirKuz,
    I originally meant a distribution agent reader/writer latency issue; you can use SQL Profiler or DMVs to examine transactional replication in detail, as described in this
    blog.
    Regarding ASYNC_NETWORK_IO: based on my research, it simply means waiting for something external to SQL. The drive holding your distribution database might be a bottleneck, or maybe the distribution tables are getting too large. Please check your
    disk performance, and also check the indexes on the replication tables for fragmentation.
    Here is a similar thread for your reference:
    https://social.msdn.microsoft.com/Forums/sqlserver/en-US/d0117651-f94c-488a-83e8-30038e38d510/transactional-replication-slow-running-spmsgetreplcommands?forum=sqlreplication
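    A sketch of that fragmentation check against the distribution tables (database name assumed to be `distribution`):

    ```sql
    USE distribution;
    SELECT o.name AS table_name,
           i.name AS index_name,
           ps.avg_fragmentation_in_percent,
           ps.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
    JOIN sys.objects AS o ON o.object_id = ps.object_id
    JOIN sys.indexes AS i ON i.object_id = ps.object_id
                         AND i.index_id  = ps.index_id
    WHERE o.name IN (N'MSrepl_commands', N'MSrepl_transactions')
    ORDER BY ps.avg_fragmentation_in_percent DESC;
    ```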
    Thanks,
    Lydia Zhang
    TechNet Community Support

  • Push and Pull Transaction Replication

    How do we know whether the current replication is PUSH or PULL, and whether the distributor database is at the Publisher or the Subscriber?
    Thanks,

    Try the below:
    --First, find the distributor server name by running the below on the publisher
    Use master
    EXEC sp_helpdistributor;
    --Then you can run the below to find the type (use the distribution database)
    SELECT
    (CASE
    WHEN mdh.runstatus = '1' THEN 'Start - '+cast(mdh.runstatus as varchar)
    WHEN mdh.runstatus = '2' THEN 'Succeed - '+cast(mdh.runstatus as varchar)
    WHEN mdh.runstatus = '3' THEN 'InProgress - '+cast(mdh.runstatus as varchar)
    WHEN mdh.runstatus = '4' THEN 'Idle - '+cast(mdh.runstatus as varchar)
    WHEN mdh.runstatus = '5' THEN 'Retry - '+cast(mdh.runstatus as varchar)
    WHEN mdh.runstatus = '6' THEN 'Fail - '+cast(mdh.runstatus as varchar)
    ELSE CAST(mdh.runstatus AS VARCHAR)
    END) [Run Status],
    mda.subscriber_db [Subscriber DB],
    mda.publication [PUB Name],
    CONVERT(VARCHAR(25),mdh.[time]) [LastSynchronized],
    und.UndelivCmdsInDistDB [UndistCom],
    mdh.comments [Comments],
    'select * from distribution.dbo.msrepl_errors (nolock) where id = ' + CAST(mdh.error_id AS VARCHAR(8)) [Query More Info],
    mdh.xact_seqno [SEQ_NO],
    (CASE
    WHEN mda.subscription_type = '0' THEN 'Push'
    WHEN mda.subscription_type = '1' THEN 'Pull'
    WHEN mda.subscription_type = '2' THEN 'Anonymous'
    ELSE CAST(mda.subscription_type AS VARCHAR)
    END) [SUB Type],
    mda.publisher_db+' - '+CAST(mda.publisher_database_id as varchar) [Publisher DB],
    mda.name [Pub - DB - Publication - SUB - AgentID]
    FROM distribution.dbo.MSdistribution_agents mda
    LEFT JOIN distribution.dbo.MSdistribution_history mdh ON mdh.agent_id = mda.id
    JOIN
    (SELECT s.agent_id, MaxAgentValue.[time], SUM(CASE WHEN xact_seqno > MaxAgentValue.maxseq THEN 1 ELSE 0 END) AS UndelivCmdsInDistDB
    FROM distribution.dbo.MSrepl_commands t (NOLOCK)
    JOIN distribution.dbo.MSsubscriptions AS s (NOLOCK) ON (t.article_id = s.article_id AND t.publisher_database_id=s.publisher_database_id )
    JOIN
    (SELECT hist.agent_id, MAX(hist.[time]) AS [time], h.maxseq
    FROM distribution.dbo.MSdistribution_history hist (NOLOCK)
    JOIN (SELECT agent_id,ISNULL(MAX(xact_seqno),0x0) AS maxseq
    FROM distribution.dbo.MSdistribution_history (NOLOCK)
    GROUP BY agent_id) AS h
    ON (hist.agent_id=h.agent_id AND h.maxseq=hist.xact_seqno)
    GROUP BY hist.agent_id, h.maxseq
    ) AS MaxAgentValue
    ON MaxAgentValue.agent_id = s.agent_id
    GROUP BY s.agent_id, MaxAgentValue.[time]
    ) und
    ON mda.id = und.agent_id AND und.[time] = mdh.[time]
where mda.subscriber_db <> 'virtual' -- 'virtual' rows are created when the publication has immediate_sync set to true.
-- immediate_sync dictates whether a snapshot stays available at all times for new subscriptions to initialize from,
-- which affects transactional replication cleanup: when true, transactions are retained for the maximum retention
-- period instead of being cleaned up as soon as all subscriptions have received the change.
    --and mdh.runstatus='6' --Fail
    --and mdh.runstatus<>'2' --Succeed
    order by mdh.[time]
    Ref: http://stackoverflow.com/questions/220340/how-do-i-check-sql-replication-status-via-t-sql
EDIT: You can remove any columns you don't need when executing this. I have provided the query as-is from the reference site in case the extra detail is useful to you.
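If you only need the push/pull answer and not the full agent history, a shorter query (a sketch, assuming the distribution database is named `distribution`) reads the subscription type directly:

```sql
-- Run at the Distributor. subscription_type: 0 = Push, 1 = Pull, 2 = Anonymous.
USE distribution;
SELECT publisher_db,
       publication,
       subscriber_db,
       CASE subscription_type
           WHEN 0 THEN 'Push'
           WHEN 1 THEN 'Pull'
           WHEN 2 THEN 'Anonymous'
           ELSE CAST(subscription_type AS varchar(10))
       END AS subscription_type_desc
FROM dbo.MSdistribution_agents
WHERE subscriber_db <> 'virtual';
```

This skips the history and undelivered-command joins entirely, so it is cheap to run but tells you only the topology, not the sync state.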

  • Invalid rowid error while running the snapshot agent in transactional replication

    Hi All,
I am getting an "Invalid rowid" error while replicating a large table (around 30 million rows) from Oracle (Publisher) to SQL Server (Subscriber) when running the Snapshot Agent in transactional replication.
It runs for around 18 hours and then throws this error.
Is there a faster way to replicate the initial snapshot of this large table? 18 hours is very long.
Kindly suggest; I have always received quick and accurate responses here and hope for the same in this case.
    Thanks,

    Hi,
Could you try creating a publication with a few small tables as a test?
You can disable the firewall on both sides and rerun the snapshot. Enable verbose logging at level 4 for the Snapshot Agent and check the results if it fails.
http://support.microsoft.com/kb/312292
The Oracle documentation describes this error as: ORA-10632: Invalid rowid. Cause: Segment high-water mark was overwritten due to shrink and space reuse. Action: Reissue this command.
I also suggest you contact the Oracle support team for further help.
    Thanks.
    Tracy Cai
    TechNet Community Support
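Per the KB article linked above, verbose logging for the Snapshot Agent is enabled by appending output switches to the agent's job step command line; a minimal sketch (the log file path is only an example):

```
-Output C:\Temp\SnapshotAgent.log -OutputVerboseLevel 4
```

Level 4 logs each step the agent takes, which makes the log large; remove the switches once the failing run has been captured.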

  • Transactional Replication: Alter view changes are not reflect on Subscription database

    Hi All,
We have transactional replication configured in our environment on SQL Server 2008 R2. Yesterday I altered a view on the publisher database; the view is included in the replicated articles, but unfortunately the change is not reflected in the subscription. I have already checked the Replicate Schema Changes option, and it is set to true. There is no latency shown in Replication Monitor, and I have checked for blocking on both the subscription and the publication. One more thing: I tested a change on a replicated table, and it works fine.
Please help me fix the issue.
Regards,
Pawan Singh
Thanks

    Hi Pawan,
According to your description, the ALTER on the view in the publication database is not reflected in the subscription database. Based on my analysis, the issue could be that the Distribution Agent job does not run after you alter the view.
I made a test on my computer and set up transactional replication to replicate tables and views. First, when creating the subscription, I set the Distribution Agent job schedule to 'Run continuously', then altered the view in the publication database, and the change was successfully reflected in the corresponding view in the subscription database.
However, I also ran another test with the Distribution Agent job set to 'Run on demand only', and found that the change is not reflected in the subscription database unless I run the Distribution Agent job manually.
The Distribution Agent reads the transactions written to the distribution database and applies the changes to the subscription database, so please check whether your Distribution Agent job runs after you alter the view. If not, run the job and check whether the issue still occurs.
    Regards,
    Michelle Li
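To check the schema-change setting from T-SQL rather than the UI, one approach (a sketch; the database and publication names are placeholders):

```sql
USE MyPublishedDb;

-- replicate_ddl = 1 in the result set means schema changes are replicated.
EXEC sp_helppublication @publication = N'MyPub';

-- If it is 0, turn it back on.
EXEC sp_changepublication
    @publication = N'MyPub',
    @property    = N'replicate_ddl',
    @value       = N'1';
```

If `replicate_ddl` is already 1 and the Distribution Agent is running, the ALTER VIEW should flow through like any other command in the distribution database.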

  • At what point is it a good practice to Drop and Add back an Article for Transactional Replication?

    Hi,
We have transactional replication set up in our company. A set of tables involved in replication needed to be reloaded on production, roughly 12-13 million rows.
We decided to drop the articles and add them back to replication so that a new snapshot could be generated for just those articles, making the data transfer fast with no breakage in replication.
But what is the best practice, or threshold, for taking this route of dropping an article and adding it back?
I mean, when is it worth going this route: when the load is more than 10,000 rows, 50,000, 100,000, or at what number do we start this process?
    Thanks,
    Jack

That is a function of horsepower and bandwidth.
If you drop a table out and re-add it rather than replicate a 1% change to its data, 100% of the table will need to be snapshotted, which might cause havoc for users trying to access that table; and if you are running with immediate_sync, all of the tables will need to be resnapshotted.
With the Enterprise Edition of SQL Server you can use a sync method of database snapshot, which means no locking; otherwise it could be painful. Initializing from a backup is also an option to save you the cost of a snapshot.
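For reference, the drop-and-re-add sequence described in the question looks roughly like this (a sketch; the database, publication, article, and subscriber names are placeholders, and immediate_sync should already be false so that only the new article is snapshotted):

```sql
USE MyPublishedDb;

-- Drop the article's subscription, then the article itself.
EXEC sp_dropsubscription
    @publication = N'MyPub',
    @article     = N'MyArt',
    @subscriber  = N'all';
EXEC sp_droparticle
    @publication = N'MyPub',
    @article     = N'MyArt',
    @force_invalidate_snapshot = 1;

-- Reload the table here, then add the article back.
EXEC sp_addarticle
    @publication   = N'MyPub',
    @article       = N'MyArt',
    @source_object = N'MyArt';
EXEC sp_addsubscription
    @publication    = N'MyPub',
    @article        = N'MyArt',
    @subscriber     = N'MySubscriber',
    @destination_db = N'MyDestDb';

-- Generate the snapshot; with immediate_sync off, only the new article is included.
EXEC sp_startpublication_snapshot @publication = N'MyPub';
```

With immediate_sync (and allow_anonymous) left true, the Snapshot Agent regenerates the snapshot for every article in the publication, which is exactly the full-resnapshot cost described above.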
