Is it possible to configure transactional and merge replication on one database as a publisher?

Hi All,
We have a requirement to configure replication between our servers, and the plan is as follows:
A ------> B -- one-way transactional replication -- already configured
A <-----> C -- merge replication (both ways) -- planned
1) Our reason for choosing merge replication is to allow multiple users to make changes in both the publisher and subscriber databases. Is combining merge replication with transactional replication a correct approach, or will we end up facing issues?
2) Since A-to-B transactional replication is already configured, will configuring merge replication between A and C affect the existing replication?
Please let us know if you need any further details. Thank you.
Grateful for your time and support. Regards, Shiva

Hi Sir,
Thanks for the information. But I have a small doubt: does "best to use merge all the way" mean you recommend using merge replication for all the servers?
A <-----> B -- merge replication (both ways)
A <-----> C -- merge replication (both ways)
Please correct me if I am wrong.
Grateful for your time and support. Regards, Shiva
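For what it's worth, SQL Server does allow a transactional and a merge publication to coexist on the same publisher database. A minimal T-SQL sketch of the two publications (the database and publication names are hypothetical):

```sql
-- Sketch only: both publication types on the same publisher database.
-- Names (AppDB, TranPub_AtoB, MergePub_AtoC) are hypothetical.
USE AppDB;
GO
-- The existing one-way transactional publication (A -> B)
EXEC sp_addpublication
     @publication = N'TranPub_AtoB',
     @repl_freq   = N'continuous',
     @status      = N'active';
GO
-- The planned merge publication (A <-> C) on the same database
EXEC sp_addmergepublication
     @publication = N'MergePub_AtoC',
     @sync_mode   = N'native';
GO
```

Articles are then added to each publication with sp_addarticle or sp_addmergearticle; whether the same table should belong to a publication of each type depends on the workload and is worth testing before rollout.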

Similar Messages

  • Is it possible to copy formats and styles from one document to another?

Or use a common style document as a master document?

Only by creating a template with what you want.
Apple has removed the ability to import styles in Pages 5, along with almost 100 other features.
    Peter

  • How do I take a 4-piece chart design and merge into one chart?


    But it can be done with Adobe Acrobat, or the PDF Pack online service.

Is it possible to retrieve data from an AWM cube and store it in a database?

    Hi all,
Table to cube is possible via maintenance, but is it possible to retrieve the data as it is stored in an AWM cube and store it back in database tables?
    Regards,
    Arjun Jkoshi

    Hi there,
    Yes, it is possible - and very easy. Remember an OLAP cube is fully integrated with the Oracle database and therefore treated very much as a native object.
With 11g OLAP, cube views are created automatically when you define a cube using AWM. These views provide SQL access to the data in the OLAP cube, which makes it very easy to transfer data into a table using techniques such as 'create table as select * from cube_view' or 'insert into table select * from cube_view'. You can use WHERE clauses to filter specific values from the cube into the table, and in 11g an optimisation has been added to ensure that NULL rows are eliminated from the result set automatically (OLAP cubes are typically very sparse and therefore contain many NULL values).
    With 10g OLAP, cube views can be added on top of existing cubes that have been created using AWM. It is easiest to do this using the [view generator|http://www.oracle.com/technology/products/bi/olap/viewGenerator_1_0_2.zip] utility from the [OTN OLAP home page|http://www.oracle.com/technology/products/bi/olap/index.html]. With the views in place it is once again very easy to transfer data into a table using techniques such as 'create table as select * from cube_view' or 'insert into table select * from cube_view'.
    I hope this is clear and makes sense. Which version of Oracle OLAP are you using?
    Thanks
    Stuart
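To make the technique concrete, a minimal sketch of the queries Stuart describes (the view, table, and column names sales_cube_view, sales_extract, and time_level are hypothetical):

```sql
-- Copy cube data into a relational table via the auto-generated cube view
CREATE TABLE sales_extract AS
SELECT * FROM sales_cube_view
WHERE  time_level = 'MONTH';   -- optional filter on a hypothetical level column

-- Or append further slices into the existing table
INSERT INTO sales_extract
SELECT * FROM sales_cube_view
WHERE  time_level = 'QUARTER';
```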

  • Merge Replication: subscriber database error

    Hello
We have a principal application database running on SQL Server 2005, which holds 300 GB of information.
This database is synchronized with multiple subscribers using the merge replication agent, allowing changes at both the publisher and the subscribers.
Client applications connect to the subscribers and not the publisher, so the information is loaded at the subscribers.
Last week, one of the subscribers had a storage-level error (failed disks) that corrupted the server's file system.
Merge replication with this server then began to fail due to bad sectors in the file system, but the database remained online and the application continued to function normally, so new information kept being loaded into the database.
Because of the disk errors, it was not possible to back up the affected database or copy the database files to save the information. Regular backups of the subscribers were NOT performed.
After the disk errors were resolved and the file system repaired, the database was restored, but it had consistency errors, which were repaired with DBCC CHECKDB using the repair_allow_data_loss option.
Comparing the number of records before and after the repair, we estimate that some records were lost from the database.
    Replication has not been restored yet.
    Now we have the following questions:
- What happens to the lost data in the subscriber database if merge synchronization is reactivated? We expect the data loss to be replicated to the publisher, and we want to avoid that.
- Is there any way to sync (download) data from the publisher to the subscriber to load the missing records again, while of course avoiding the upload from the subscriber to the publisher?
- If the above is possible, can we then upload to the publisher to sync the information that was loaded into the subscriber database?
- If these actions are not possible, is there another way to sync the information and avoid losing data in the subscriber?
    Thanks in advance
    Javier
    Javier Mariani

    Hi,
You can configure the synchronization direction in the article properties: right-click the publication, click Properties, and on the Articles page set the properties of the articles.
    Here is a thread for your reference:
    http://social.msdn.microsoft.com/Forums/sqlserver/en-US/dc21b614-b736-409b-883e-3af6c75ab546/merge-replication-synchronization-direction-as-downloadonly-to-subscriber-allow-subscriber?forum=sqlreplication
    Thanks.
    Tracy Cai
    TechNet Community Support
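The same article property can also be set with T-SQL via sp_changemergearticle; for example, to make an article download-only so that subscriber changes are prohibited (the publication and article names here are hypothetical):

```sql
EXEC sp_changemergearticle
     @publication = N'MergePub',
     @article     = N'Orders',
     @property    = N'subscriber_upload_options',
     @value       = N'2',  -- 0 = bidirectional; 1 = changes allowed at the
                           -- subscriber but not uploaded; 2 = subscriber
                           -- changes prohibited
     @force_invalidate_snapshot = 1,
     @force_reinit_subscription = 1;
```

Changing this property invalidates the snapshot and requires subscriptions to be reinitialized, hence the two force parameters.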

  • Merge Replication from 2012 SQL Server publisher/distributor to 2014 SQL Express subscriber

Hi, I have a question about which version to use when running SQL Server merge replication.
In my test environment I have a SQL Server 2012 instance running as publisher and distributor for my merge replication. I have now set up a test client running Windows 7 with SQL Server 2014 Express as a subscriber. The subscription initializes and runs perfectly.
But according to the SQL Server documentation this should not be possible, as it states that the subscriber should have a version number equal to or lower than the publisher's. My subscription client was set up using SQL scripting; if I try using the wizard, it just raises an error saying this is not possible because the client has a newer version number. I still have not encountered any problems since I set it up using a script with the sp_addmergepullsubscription and sp_addmergepullsubscription_agent procedures.
The roll-out of my solution starts in a few weeks, and I now have to decide whether to use 2014 Express or 2012 Express on the clients. Using 2014 Express would save me a lot of upgrades in a year or two. The SQL Server 2012 instance running as publisher/distributor will be upgraded to or replaced with a 2014 server within a few months.
I'm looking for advice and recommendations on what to choose for the clients, 2012 Express or 2014 Express. Are there any known problems with using 2014 Express as a subscriber to a publisher/distributor running SQL Server 2012?
    All responses are welcome!
    Regards, Anders
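For reference, the scripted pull-subscription setup Anders describes looks roughly like this, run at the subscriber (the server, database, and publication names are hypothetical):

```sql
-- At the subscriber database: create the pull subscription ...
EXEC sp_addmergepullsubscription
     @publisher       = N'PUBSRV2012',
     @publisher_db    = N'AppDB',
     @publication     = N'MergePub',
     @subscriber_type = N'local';

-- ... and the agent job that performs the synchronization
EXEC sp_addmergepullsubscription_agent
     @publisher       = N'PUBSRV2012',
     @publisher_db    = N'AppDB',
     @publication     = N'MergePub',
     @distributor     = N'PUBSRV2012',
     @subscriber_security_mode = 1;  -- 1 = Windows authentication
```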

    Hi,
As you understand, a subscriber to a merge publication can be any version less than or equal to the publisher's version. When you set up a subscription from a publisher to a subscriber using SQL Server Management Studio, you will get the error
“The selected Subscriber does not satisfy the minimum version compatibility level of the selected publication.” You can use a T-SQL script to get around this error message.
I would install SQL Server 2012 Express on the clients. Here are the reasons:
1. If you upgrade the publisher to SQL Server 2014, it will still work with SQL Server 2012 on the subscribers.
2. I cannot tell whether there are other potential problems, besides the error message, if you choose the later version.
    Reference:
    http://msdn.microsoft.com/en-us/library/ms143699(v=sql.120).aspx
    Thanks.
    Tracy Cai
    TechNet Community Support

  • Merge replication - multi site, multi distributor, publisher, subscribers ...

    Hi,
    We have 2 sites at the moment. Each site has 2 servers. To reduce network load I was aiming at doing the following with a merge replication :
    * configure one server at each site to be distributor and publisher
    * configure the other server at each site to be a subscriber
    * configure the 2 publishers to also be subscribers to each other to sync the sites.
    Or even better:
    * have all the servers be distributors, publishers and subscribers, so that if one dies, who cares as they all have the same configuration.
    Also, I was thinking of:
    * having the distributor and publishers use a DFS share replicated on both sites (or maybe the same share on each sql server ? not sure what the best practice is on this one)
However, I can't seem to find a good post or thread that describes this kind of scenario step by step, or what happens to identity range management.
If you could shed any light on this, or point me to the right English terms to describe the above scenario, thanks in advance.
Olivier
PS: We only have Standard edition, so peer-to-peer replication and availability groups are not available to us (which is why we are using merge).
PS2: One way I could see this working is to manually set the identity columns in the tables so that they won't overlap, making range management manual, and then just set up publications/distributors and subscribers on each node. (But if I can dodge the manual management, that would be great.)

    Hi Brandon,
Thanks for the reply.
We are using web synchronization.
    According to the documentation ('The business logic handler you specify is executed for every row that is synchronized.') I would think this will do exactly the same as our trigger on the article.
If we used a business logic handler to update the record that has just been uploaded to the Publisher, I think we would end up with the same checksum error.
We worked around the problem by not changing the status of orders once they are uploaded to the Publisher.
For that part of the process we now have only two statuses:
- Not to be sent to central server
- Upload(ed) to central server
So an order is created at the subscriber with the status 'not to be sent to central server'.
In a synchronization window the user can mark the orders that must be uploaded to the central server.
Only when the user clicks the synchronization button do the orders marked (on screen) for synchronization get the status 'Upload(ed) to central server', and the pull subscription is started.
On the central server we process only the articles with the status 'Upload(ed) to central server'.
Before, we had the statuses:
- Not to be sent to central server
- Upload to central server
- Uploaded to central server
And we had our own trigger on the Publisher which updated the status of records from 'Upload to central server' to 'Uploaded to central server', which was not reliable because we often had checksum errors on that article.

Configure Berkeley DB with more than one database

I am writing an application using Berkeley DB Java Edition.
I want two databases to store different things. Since memory is limited, one database's data should be entirely in memory, while the other can do disk I/O. Should I use two environments and create the two databases separately, or create both databases in the same environment? Which is more effective and saves more memory?
Could anyone show me how to configure the two databases (e.g. the cache size) with some sample code? Thanks a lot!

The above suggestions are good. You might also consider using a linked table in MS Access so the linking is done right inside MS Access.
If you can store your Access .mdb file on the server where your Oracle database resides, this will greatly reduce your network traffic. If the MS Access database must reside on a different machine it can still work, but the gains won't be as great. Still, this is an efficient way to do it if you can.
To create a linked table in MS Access, choose New, then 'Link Table'. In the Link dialog that appears you can choose ODBC. I recommend using the SQORA driver from Oracle that is installed with SQL*Plus.
If you have to link in Crystal Reports instead, you still can. However, all data from the Oracle table and the Access table will be pulled to the client that hosts Crystal Reports, and sorting will be done on the Crystal client.
    You definitely have some good options though.

  • Possible to use arrays and merge?

    Is it possible to use the results of APEX_UTIL.STRING_TO_TABLE in a merge statement? For example:
l_selected := APEX_UTIL.STRING_TO_TABLE(:P4_V_METHODS);
l_methods := APEX_UTIL.STRING_TO_TABLE(:P4_VERIFICATION_SEQ);

merge into verification_results vr
     using l_methods on (
     l_methods = vr.verification_seq
)
WHEN MATCHED THEN ...
etc.
It's a bit of a convoluted task that may not be possible. I've got checkboxes that hold the reference values (method_seq) and a hidden item that returns a string of the primary keys of a related table (verification_results.verification_seq), and I want to merge what was stored with any new/different things that were checked.
    Y'all are going to want an example on apex.oracle.com, aren't you...
    thanks!

    Short answer, no. Used a couple arrays and loops instead.
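The array-and-loop workaround could be sketched like this in PL/SQL; the UPDATE/INSERT column lists are hypothetical and would need to match the real verification_results table:

```sql
DECLARE
  l_methods apex_application_global.vc_arr2;
BEGIN
  -- Split the colon-delimited item value into a PL/SQL array
  l_methods := apex_util.string_to_table(:P4_VERIFICATION_SEQ);

  -- MERGE one element at a time; a PL/SQL array cannot be referenced
  -- directly in the USING clause the way a SQL object type could
  FOR i IN 1 .. l_methods.COUNT LOOP
    MERGE INTO verification_results vr
    USING (SELECT TO_NUMBER(l_methods(i)) AS seq FROM dual) src
    ON (vr.verification_seq = src.seq)
    WHEN MATCHED THEN
      UPDATE SET vr.updated_on = SYSDATE          -- hypothetical column
    WHEN NOT MATCHED THEN
      INSERT (verification_seq) VALUES (src.seq); -- hypothetical column list
  END LOOP;
END;
```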

  • TSQL Script to monitor SQL Server transactional and snapshot replication

    Hi Team,
Could you please let me know if you have any T-SQL script to monitor replication (transactional, snapshot) with its current status? I have tried the script below but it gives an error. Could you please have a look at the script, or do you have another T-SQL script to monitor replication status?
"Msg 8164, Level 16, State 1, Procedure sp_MSload_tmp_replication_status, Line 80
An INSERT EXEC statement cannot be nested."
    DECLARE @srvname VARCHAR(100)
    DECLARE @pub_db VARCHAR(100)
    DECLARE @pubname VARCHAR(100)
    CREATE TABLE #replmonitor(status    INT NULL,warning    INT NULL,subscriber    sysname NULL,subscriber_db    sysname NULL,publisher_db    sysname NULL,
    publication    sysname NULL,publication_type    INT NULL,subtype    INT NULL,latency    INT NULL,latencythreshold    INT NULL,agentnotrunning    INT NULL,
    agentnotrunningthreshold    INT NULL,timetoexpiration    INT NULL,expirationthreshold    INT NULL,last_distsync    DATETIME,
    distribution_agentname    sysname NULL,mergeagentname    sysname NULL,mergesubscriptionfriendlyname    sysname NULL,mergeagentlocation    sysname NULL,
    mergeconnectiontype    INT NULL,mergePerformance    INT NULL,mergerunspeed    FLOAT,mergerunduration    INT NULL,monitorranking    INT NULL,
    distributionagentjobid    BINARY(16),mergeagentjobid    BINARY(16),distributionagentid    INT NULL,distributionagentprofileid    INT NULL,
    mergeagentid    INT NULL,mergeagentprofileid    INT NULL,logreaderagentname VARCHAR(100),publisher varchar(100))
    DECLARE replmonitor CURSOR FOR
SELECT b.srvname, a.publisher_db, a.publication
FROM distribution.dbo.MSpublications a
JOIN master.dbo.sysservers b ON a.publisher_id = b.srvid
    OPEN replmonitor 
    FETCH NEXT FROM replmonitor INTO @srvname,@pub_db,@pubname
    WHILE @@FETCH_STATUS = 0
    BEGIN
    INSERT INTO #replmonitor
    EXEC distribution.dbo.sp_replmonitorhelpsubscription  @publisher = @srvname
         , @publisher_db = @pub_db
         ,  @publication = @pubname
         , @publication_type = 0
    FETCH NEXT FROM replmonitor INTO @srvname,@pub_db,@pubname
    END
    CLOSE replmonitor
    DEALLOCATE replmonitor
    SELECT publication,publisher_db,subscriber,subscriber_db,
            CASE publication_type WHEN 0 THEN 'Transactional publication'
                WHEN 1 THEN 'Snapshot publication'
                WHEN 2 THEN 'Merge publication'
                ELSE 'Not Known' END,
            CASE subtype WHEN 0 THEN 'Push'
                WHEN 1 THEN 'Pull'
                WHEN 2 THEN 'Anonymous'
                ELSE 'Not Known' END,
            CASE status WHEN 1 THEN 'Started'
                WHEN 2 THEN 'Succeeded'
                WHEN 3 THEN 'In progress'
                WHEN 4 THEN 'Idle'
                WHEN 5 THEN 'Retrying'
                WHEN 6 THEN 'Failed'
                ELSE 'Not Known' END,
            CASE warning WHEN 0 THEN 'No Issues in Replication' ELSE 'Check Replication' END,
            latency, latencythreshold, 
            'LatencyStatus'= CASE WHEN (latency > latencythreshold) THEN 'High Latency'
            ELSE 'No Latency' END,
        distribution_agentname,'DistributorStatus'= CASE WHEN (DATEDIFF(hh,last_distsync,GETDATE())>1) THEN 'Distribution agent has not run for more than 1 hour'
            ELSE 'Distributor running fine' END
            FROM #replmonitor
    --DROP TABLE #replmonitor
    Rajeev R

    Hi Rajeev,
    Could you please use the following query and check if it is successful?
    INSERT INTO #replmonitor
    SELECT a.*
    FROM OPENROWSET
    ('SQLNCLI', 'Server=DBServer;Trusted_Connection=yes;',
'SET FMTONLY OFF; EXEC distribution..sp_replmonitorhelpsubscription
@publisher = ''DBServer'',
@publication_type = 0,
@publication = ''MyPublication''') AS a;
    There is a similar thread for your reference.
    https://social.msdn.microsoft.com/Forums/sqlserver/en-US/634090bf-915e-4d97-b71a-58cf47d62a8a/msg-8164-level-16-state-1-procedure-spmsloadtmpreplicationstatus-line-80?forum=sqlreplication
    Thanks,
    Lydia Zhang
    TechNet Community Support

Configuring OID and logging into a database with an OID username/password

    I've installed the Oracle database software (v9.2.0.2.1) and created a database called OIDDB. I then installed the "Management and Integration" portion of the Oracle database software which includes "Oracle Management Server" and "Oracle Internet Directory". All of this installation is on the same database box.
    On another server, my other database (called: BAHDB) resides that I OID enabled. I've already used the "Database Configuration Assistant" to register it with the directory service.
What else needs to be done at this point to log into the database without an Oracle username and password? What high-level steps am I missing? Do I need to do anything with Oracle wallets?

There's a difference when you use domains in your network configuration:
tnsping server.at.your.domain should get resolved through tnsnames.
tnsping server should get resolved through the LDAP server.
The illegal userid/password error makes me suspect you are trying to log on with the fully qualified server name; LDAP should respond with something like "no nickname found".
Also, the guest user is only used as a "stub": you should be able to log in with a name defined in OID, not defined in the database!
    Things to check:
- rdbms_server_dn - DBCA should take care of it during registration of the database in OID, but it does not always do that. Setting it requires a restart. Remember, it is case sensitive! The format is cn=SID,cn=OracleContext,dc=<your realm here>
    - Wallet. Was it created, what is in it?
    - User mapping (Use the Enterprise Security Manager - part of the client install); open Realms, open OracleDefaultDomain within Enterprise Domains, third tab ("Database Schema Mapping"), select "Add".
    Navigate to cn=Users,dc=<your realm>. Make sure Subtree level is selected, and use your globally identified user (you used guest) for schema; click OK.
    Click Apply
    You should now be able to logon as any user, defined in the OID under cn=Users,dc=<your realm> - orcladmin should do.
    when logged on, show user will respond with "guest"; use select sys_context('userenv','external_name') from dual; to find out the real user.
The way you have set it up now will probably fail to log in with 'user lacks create session privilege'. For test purposes, grant create session to guest; in a 'normal' environment you would use enterprise roles and assign these to the OID-defined users, not directly to the "stub user" (guest in your example).
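For the name-resolution part, the client-side setup being described usually amounts to something like the following sqlnet.ora/ldap.ora fragments (the host, ports, and realm are hypothetical):

```
# sqlnet.ora -- try the LDAP (OID) server first, then fall back to tnsnames
NAMES.DIRECTORY_PATH = (LDAP, TNSNAMES)

# ldap.ora -- where the OID server lives (non-SSL and SSL ports)
DIRECTORY_SERVERS = (oidhost.example.com:389:636)
DEFAULT_ADMIN_CONTEXT = "dc=example,dc=com"
DIRECTORY_SERVER_TYPE = OID
```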

Is it possible to put 9iAS and 8i in one server box?

I'm trying to build a test environment at home. I only have one Windows 2000 server and have already installed 8i on it. Can I also install 9iAS, to have both the database server and the application server together?
    Thanks
    Wenchel

In what order did you install these products, and what about the Oracle homes and their directories? I'd appreciate it if you could provide this info, because I tried to do this installation (one box) without success and we had to rely on separate boxes.

Is it possible to show live and VOD in one stream?

hello
I am playing with FMS here; I want to make a sort of home TV channel.
I have created a ".swf TV set" for the end user which connects to my server.
The question is: what techniques should I use (maybe on the server side) to control what type of data (live or VOD) is being streamed to the user? I have also created a "director's swf" to switch between output data, but how can I switch live/VOD in one stream, to which the user is already connected?
thank you in advance.

    Hi mexxik,
To switch between VOD and live, you're advised to code it in the SSAS, as long as the live stream is not coming from Flash Media Encoder.
The SSAS could be something like this:
application.userStream = Stream.get("currentStream");
application.userStream.setBufferTime(0);
application.userStream.play(videoSignal2, -1); // start = -1 plays a live stream only; use 0 to play recorded (VOD) content from the beginning
    Sidney

Large and numerous Replication Manager database log files

    Hi All,
I've recently added Replication Manager support to our database systems. After enabling the Replication Manager I end up with many log.* files of many gigabytes apiece on the master, which makes backing up the database difficult. Is there a way to purge the log files more often?
It also seems that the replication slave never finishes synchronizing with the master.
    Thank you,
    Rob

So, I set up a debug environment on test machines, with a snapshot of the db. We now set rep_set_limit to 5 MB.
Now it's failing to sync, so I recompiled with --enable-diagnostic and enabled DB_VERB_REPLICATION.
    On the master we see this:
    2007-06-06 18:40:26.646768500 DBMSG: ERROR:: sendpages: 2257, page lsn [293276][4069284]
    2007-06-06 18:40:26.646775500 DBMSG: ERROR:: bulk_msg: Copying LSN [640947][6755391] of 4152 bytes to 0x2afb35e370
    2007-06-06 18:40:26.646782500 DBMSG: ERROR:: sendpages: 2257, lsn [640947][6755391]
    2007-06-06 18:40:26.646794500 DBMSG: ERROR:: sendpages: 2258, page lsn [309305][9487507]
    2007-06-06 18:40:26.646801500 DBMSG: ERROR:: bulk_msg: Copying LSN [640947][6755391] of 4152 bytes to 0x2afb35f3b4
    2007-06-06 18:40:26.646803500 DBMSG: ERROR:: sendpages: 2258, lsn [640947][6755391]
    2007-06-06 18:40:26.646809500 DBMSG: ERROR:: send_bulk: Send 562140 (0x893dc) bulk buffer bytes
    2007-06-06 18:40:26.646816500 DBMSG: ERROR:: /ask/bloglines/db/sitedb rep_send_message: msgv = 3 logv 12 gen = 12 eid 0, type bulk_page, LSN [640947][6755391] nobuf
    2007-06-06 18:40:26.647064500 DBMSG: ERROR:: wrote only 147456 bytes to site 10.0.3.235:9003
    2007-06-06 18:40:26.648559500 DBMSG: ERROR:: /ask/bloglines/db/sitedb rep_process_message: msgv = 3 logv 12 gen = 12 eid 0, type page_req, LSN [0][0]
    2007-06-06 18:40:26.648561500 DBMSG: ERROR:: page_req: file 0 page 2124 to 2124
    2007-06-06 18:40:26.648562500 DBMSG: ERROR:: page_req: Open 0 via mpf_open
    2007-06-06 18:40:26.648563500 DBMSG: ERROR:: sendpages: file 0 page 2124 to 2124
    2007-06-06 18:40:26.649966500 DBMSG: ERROR:: /ask/bloglines/db/sitedb rep_process_message: msgv = 3 logv 12 gen = 12 eid 0, type page_req, LSN [0][0]
    2007-06-06 18:40:26.649968500 DBMSG: ERROR:: page_req: file 0 page 2124 to 2124
    2007-06-06 18:40:26.649970500 DBMSG: ERROR:: page_req: Open 0 via mpf_open
    2007-06-06 18:40:26.649971500 DBMSG: ERROR:: sendpages: file 0 page 2124 to 2124
    2007-06-06 18:40:26.651699500 DBMSG: ERROR:: sendpages: 2124, page lsn [516423][7302945]
    2007-06-06 18:40:26.651702500 DBMSG: ERROR:: bulk_msg: Copying LSN [640947][6755391] of 4152 bytes to 0x2afb3d801c
    2007-06-06 18:40:26.651704500 DBMSG: ERROR:: sendpages: 2124, lsn [640947][6755391]
    2007-06-06 18:40:26.651705500 DBMSG: ERROR:: send_bulk: Send 4164 (0x1044) bulk buffer bytes
    2007-06-06 18:40:26.651706500 DBMSG: ERROR:: /ask/bloglines/db/sitedb rep_send_message: msgv = 3 logv 12 gen = 12 eid 0, type bulk_page, LSN [640947][6755391] nobuf
    2007-06-06 18:40:26.652858500 DBMSG: ERROR:: sendpages: 2124, page lsn [516423][7302945]
    2007-06-06 18:40:26.652860500 DBMSG: ERROR:: bulk_msg: Copying LSN [640947][6755391] of 4152 bytes to 0x2afb2d701c
    2007-06-06 18:40:26.652861500 DBMSG: ERROR:: sendpages: 2124, lsn [640947][6755391]
    2007-06-06 18:40:26.652862500 DBMSG: ERROR:: send_bulk: Send 4164 (0x1044) bulk buffer bytes
    2007-06-06 18:40:26.652864500 DBMSG: ERROR:: /ask/bloglines/db/sitedb rep_send_message: msgv = 3 logv 12 gen = 12 eid 0, type bulk_page, LSN [640947][6755391] nobuf
    2007-06-06 18:40:38.951290500 1 28888 dbnet: 0,0: MSG: ** checkpoint start **
    2007-06-06 18:40:38.951321500 1 28888 dbnet: 0,0: MSG: ** checkpoint end **
    On the slave, we see this:
    2007-06-06 18:40:26.668636500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
    2007-06-06 18:40:26.668637500 DBMSG: ERROR:: rep_bulk_page: Processing LSN [640947][6755391]
    2007-06-06 18:40:26.668644500 DBMSG: ERROR:: rep_bulk_page: p 0x2afb66c1fc ep 0x2afb671344 pgrec data 0x2afb66c1fc, size 4152 (0x1038)
    2007-06-06 18:40:26.668645500 DBMSG: ERROR:: PAGE: Received page 2254 from file 0
    2007-06-06 18:40:26.668658500 DBMSG: ERROR:: PAGE: Received duplicate page 2254 from file 0
    2007-06-06 18:40:26.668664500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
    2007-06-06 18:40:26.668666500 DBMSG: ERROR:: rep_bulk_page: Processing LSN [640947][6755391]
    2007-06-06 18:40:26.668672500 DBMSG: ERROR:: rep_bulk_page: p 0x2afb66d240 ep 0x2afb671344 pgrec data 0x2afb66d240, size 4152 (0x1038)
    2007-06-06 18:40:26.668674500 DBMSG: ERROR:: PAGE: Received page 2255 from file 0
    2007-06-06 18:40:26.668686500 DBMSG: ERROR:: PAGE: Received duplicate page 2255 from file 0
    2007-06-06 18:40:26.668703500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
    2007-06-06 18:40:26.668704500 DBMSG: ERROR:: rep_bulk_page: Processing LSN [640947][6755391]
    2007-06-06 18:40:26.668706500 DBMSG: ERROR:: rep_bulk_page: p 0x2afb66e284 ep 0x2afb671344 pgrec data 0x2afb66e284, size 4152 (0x1038)
    2007-06-06 18:40:26.668707500 DBMSG: ERROR:: PAGE: Received page 2256 from file 0
    2007-06-06 18:40:26.668714500 DBMSG: ERROR:: PAGE: Received duplicate page 2256 from file 0
    2007-06-06 18:40:26.668715500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
    2007-06-06 18:40:26.668722500 DBMSG: ERROR:: rep_bulk_page: Processing LSN [640947][6755391]
    2007-06-06 18:40:26.668723500 DBMSG: ERROR:: rep_bulk_page: p 0x2afb66f2c8 ep 0x2afb671344 pgrec data 0x2afb66f2c8, size 4152 (0x1038)
    2007-06-06 18:40:26.668730500 DBMSG: ERROR:: PAGE: Received page 2257 from file 0
    2007-06-06 18:40:26.668743500 DBMSG: ERROR:: PAGE: Received duplicate page 2257 from file 0
    2007-06-06 18:40:26.668750500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
    2007-06-06 18:40:26.668752500 DBMSG: ERROR:: rep_bulk_page: Processing LSN [640947][6755391]
    2007-06-06 18:40:26.668758500 DBMSG: ERROR:: rep_bulk_page: p 0x2afb67030c ep 0x2afb671344 pgrec data 0x2afb67030c, size 4152 (0x1038)
    2007-06-06 18:40:26.668760500 DBMSG: ERROR:: PAGE: Received page 2258 from file 0
    2007-06-06 18:40:26.668772500 DBMSG: ERROR:: PAGE: Received duplicate page 2258 from file 0
    2007-06-06 18:40:26.668779500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
    2007-06-06 18:40:26.690980500 DBMSG: ERROR:: /ask/bloglines/db/sitedb-slave rep_process_message: msgv = 3 logv 12 gen = 12 eid 0, type bulk_page, LSN [640947][6755391]
    2007-06-06 18:40:26.690982500 DBMSG: ERROR:: rep_bulk_page: Processing LSN [640947][6755391]
    2007-06-06 18:40:26.690983500 DBMSG: ERROR:: rep_bulk_page: p 0x736584 ep 0x7375bc pgrec data 0x736584, size 4152 (0x1038)
    2007-06-06 18:40:26.690985500 DBMSG: ERROR:: PAGE: Received page 2124 from file 0
    2007-06-06 18:40:26.690986500 DBMSG: ERROR:: PAGE: Received duplicate page 2124 from file 0
    2007-06-06 18:40:26.690992500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
    2007-06-06 18:40:36.289310500 DBMSG: ERROR:: election thread is exiting
    I have full log files if that could help, these are just the end of those.
    Any ideas? Thanks...
    -Paul

Step-by-step document for configuring Merge Replication in 2005

    Hi ,
Can anyone provide a link to configuring merge replication in 2005 with screenshots? Also, please let me know the difference between transactional and merge replication.
    regards
    Vijay

    Here you have one tutorial:
    http://www.sqlshack.com/sql-server-replications/
Use merge replication if you want to reflect changes from the subscriber to the publisher or vice versa.
Use transactional replication if you want changes to flow only from the publisher to the subscriber.
You can also use transactional replication with Oracle databases.
    MVP MCT MCTS Daniel Calbimonte
    http://elpaladintecnologico.blogspot.com
