Issue in replication
Hello,
We have an SLT system [SAP NetWeaver 7.3 with DMIS 2011_1_730 0005].
When we try to replicate tables from transaction LTRC, the tables go into Replication (Initial) mode and stay there.
But when I go to the Processing Steps tab, run Define Load / Replication Object, and then trigger replication, the tables replicate fine.
Could you please let me know how to resolve this issue?
Thanks,
Rajiv
My responses to your 3 questions
#1.
Prices are missing in CRM because you need to run specific loads to sync up the pricing procedures and pricing conditions in CRM.
#2.
The pricing procedures have to be downloaded into CRM first. Please check transaction R3AC5, object CRM_PRC_PROC.
In the same R3AC5 transaction you will also notice other adapter objects for specific pricing tables. You have to execute them to bring pricing condition data into CRM.
#3.
No customizing objects exist for this requirement.
You have to use the SPRO customizing path 'Customer Relationship Management -> Transactions -> Settings For Sales Transactions -> Define Reasons For Rejection' to define it once more in CRM.
Similar Messages
-
Archiver Issue| Automatic replication process | Archiver thread blocking
Hi,
Recently, we have been facing a strange issue with the Archiver auto-replication process. Sometimes the Archiver thread blocks and auto replication stalls. We observed this issue after applying the latest Oracle UCM patches downloaded from the Oracle support site.
As a workaround, we restart UCM.
Has anybody run into this kind of issue? We are following up with Oracle in parallel. Any help regarding this is highly appreciated.
Gowtham J
What version/patch set/core update of UCM are you using?
Jonathan
http://redstonecontentsolutions.com
http://corecontentonly.com -
Change PO issue in replication.
HI,
I have created a PO and it replicated to ECC without any issue. When I change it, the change is successful in SRM, but in ECC I get the errors below.
1. Instance XXXXXXXX of object type Purchase Order could not be changed.
2. PO XXXXXXXXXX: Indicator for GR-based invoice verification used not allowed.
Please tell me how to solve this problem.
Hi,
Which version of SRM are you using and what are the SP levels?
Can you remove the check box for 'GR Based IV verification' and retransfer the PO?
Regards,
Nikhil -
Issue with Replication Latency with computed columns
We have transactional replication enabled and there are 15 tables that are replicated. Six of the 15 tables have computed columns that are persisted. Whenever there are many writes, a transaction of around 100 MB on the publisher causes the log reader agent to queue up almost 30-40 GB of data; latency increases significantly and the transaction log is held up by REPLICATION in log_reuse_wait.
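To confirm that replication is what is pinning the log, a quick check can be run on the publisher (the database name below is illustrative):

```sql
-- Why can't the transaction log be truncated? 'REPLICATION' means the
-- log reader agent has not yet processed pending log records.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'PublisherDb';  -- illustrative database name

-- In the publisher database: list transactions still waiting to be
-- picked up by the log reader agent (one row per pending transaction).
EXEC sp_repltrans;
```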
An example schema for a table is
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[address](
[address_id] [int] IDENTITY(1,1) NOT NULL,
[crm_entity_id] [int] NOT NULL,
[address_title] [varchar](600) NOT NULL,
[address1] [varchar](300) NULL,
[address2] [varchar](300) NULL,
[address3] [varchar](300) NULL,
[city] [varchar](300) NULL,
[state_name] [varchar](15) NULL,
[state_non_roman] [varchar](300) NULL,
[postal_code] [varchar](60) NULL,
[district] [varchar](15) NULL,
[country] [varchar](15) NULL,
[country_non_roman] [varchar](150) NULL,
[non_roman] [char](1) NOT NULL,
[is_primary] [char](1) NOT NULL,
[parent_address_id] [int] NULL,
[vat_supply_to] [char](1) NOT NULL,
[created_by] [char](8) NOT NULL,
[created_time] [datetime] NOT NULL,
[modified_by] [char](8) NOT NULL,
[modified_time] [datetime] NOT NULL,
[address_title_uni] AS (case when [address_title] IS NULL then NULL else CONVERT([nvarchar](200),[dbo].[udfVarBinaryToUTF16](CONVERT([varbinary](600),[address_title],0)),0) end) PERSISTED,
[address1_uni] AS (case when [address1] IS NULL then NULL else CONVERT([nvarchar](100),[dbo].[udfVarBinaryToUTF16](CONVERT([varbinary](300),[address1],0)),0) end) PERSISTED,
[address2_uni] AS (case when [address2] IS NULL then NULL else CONVERT([nvarchar](100),[dbo].[udfVarBinaryToUTF16](CONVERT([varbinary](300),[address2],0)),0) end) PERSISTED,
[address3_uni] AS (case when [address3] IS NULL then NULL else CONVERT([nvarchar](100),[dbo].[udfVarBinaryToUTF16](CONVERT([varbinary](300),[address3],0)),0) end) PERSISTED,
[city_uni] AS (case when [city] IS NULL then NULL else CONVERT([nvarchar](100),[dbo].[udfVarBinaryToUTF16](CONVERT([varbinary](300),[city],0)),0) end) PERSISTED,
[state_non_roman_uni] AS (case when [state_non_roman] IS NULL then NULL else CONVERT([nvarchar](100),[dbo].[udfVarBinaryToUTF16](CONVERT([varbinary](300),[state_non_roman],0)),0) end) PERSISTED,
[postal_code_uni] AS (case when [postal_code] IS NULL then NULL else CONVERT([nvarchar](20),[dbo].[udfVarBinaryToUTF16](CONVERT([varbinary](60),[postal_code],0)),0) end) PERSISTED,
[country_non_roman_uni] AS (case when [country_non_roman] IS NULL then NULL else CONVERT([nvarchar](50),[dbo].[udfVarBinaryToUTF16](CONVERT([varbinary](150),[country_non_roman],0)),0) end) PERSISTED,
CONSTRAINT [pk_address] PRIMARY KEY CLUSTERED
(
[address_id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
ALTER TABLE [dbo].[address] WITH CHECK ADD CONSTRAINT [fk_address] FOREIGN KEY([crm_entity_id])
REFERENCES [dbo].[crm_entity] ([crm_entity_id])
GO
ALTER TABLE [dbo].[address] CHECK CONSTRAINT [fk_address]
GO
ALTER TABLE [dbo].[address] WITH CHECK ADD CONSTRAINT [fk_address2] FOREIGN KEY([parent_address_id])
REFERENCES [dbo].[address] ([address_id])
GO
ALTER TABLE [dbo].[address] CHECK CONSTRAINT [fk_address2]
GO -
Issue with Replication of Bank Account Numbers from ECC to CRM.
Hi,
I am trying to transfer business partners from ECC to CRM and see errors in SMW01 stating that the bank key is not valid for US. I then performed the initial load for DNL_CUST_BNKA to transfer bank master data, but I again get an error, this time on the bank account number. How do I transfer the bank account numbers of business partners from ECC to CRM? Before starting the initial load for BUPA_MAIN, do I need to run any other initial load to move the bank account numbers to CRM?
I have already moved bank master data using DNL_CUST_BNKA but am not able to replicate bank account numbers. Can someone please provide the steps I need to perform to transfer the bank account numbers?
When I ran the initial load for BUPA_MAIN, all business partners without bank account numbers replicated successfully, but the ones with bank details were not transferred and gave errors on the bank account number. Please provide some suggestions.
Thanks.
Jennifer
Willie,
Thanks for the reply:
Error: BANK ACCOUNT NUMBER ******8547 is not valid
Support pack level:
SAP_ABA 701 0003
SAP_BASIS 701 0003
BBPCRM 700 0002
Thanks.
Jennifer -
Hi,
I am using Windows Server 2008 R2 as PDC and ADC. My ADC was offline for more than 7 months. Now that it is back online I am facing an issue with replication.
Problem: "Trust relationship between this computer and the primary domain failed"
The Event Viewer log is:
It has been too long since this machine last replicated with the named source machine. The time between replications with this source has exceeded the tombstone lifetime. Replication has been stopped with this source.
The reason that replication is not allowed to continue is that the two DCs may contain lingering objects. Objects that have been deleted and garbage collected from an Active Directory Domain Services partition but still exist in the writable
partitions of other DCs in the same domain, or read-only partitions of global catalog servers in other domains in the forest are known as "lingering objects". If the local destination DC was allowed to replicate with the source DC, these potential
lingering object would be recreated in the local Active Directory Domain Services database.
Time of last successful replication:
2014-08-01 08:52:53
Invocation ID of source directory server:......................................
I have tried repadmin /remove lingering object............................., tried rejoining the system to the domain, the netdom trust command, etc. The issue is still not resolved; I need help on this.
Regards
It is normal that, when a DC has not replicated for a long time, it gets tombstoned.
Please proceed as the following:
On the faulty DC, run dcpromo /forceremoval to force its demotion.
Check whether the faulty DC holds any FSMO roles using netdom query fsmo. In case it holds some, seize them on the other DC: https://support.microsoft.com/en-us/kb/255504?wa=wsignin1.0
Do a metadata cleanup to remove the faulty DC references: https://technet.microsoft.com/en-us/library/cc816907%28v=ws.10%29.aspx
Once done, use the dcdiag command to make sure that your remaining DC is in a healthy state.
This posting is provided AS IS with no warranties or guarantees , and confers no rights.
Ahmed MALEK
My Website Link
My Linkedin Profile
My MVP Profile -
Hello Experts,
We are trying to do an MDG implementation for Vendor master data using the data model BP. Initially we did all the configuration for master data BP. Later, since we wanted the master data to be Vendor, we changed the configuration to Vendor instead of BP. Now we are facing issues in replication of the Vendor master into the ECC system; MDG and ECC reside in the same system. We see the access class 'CL_MDG_BS_SUPPL_TYP2CHK_ACCESS' assigned in the reuse area. We are quite confused between the BP settings and the Vendor master data settings. We have the following queries; request your reply:
1. We have done whatever configuration is given in the config guide for Vendor. Is there anything else that needs to be done for the config?
2. Do we need to make UIBB setting changes for Vendor and BP separately?
3. Is replication required for Vendor master data?
4. Does the reuse-area access class need to be changed for Vendor master?
Kind Regards,
Thamizharasi N
1. We have done whatever configuration is given in the config guide for Vendor. Is there anything else that needs to be done for the config?
You need to check the following:
General BP settings
Supplier settings
CVI integration settings under each node of MDGIMG
2. Do we need to make UIBB setting changes for Vendor and BP separately?
Check the changes mentioned in the config guide; if changes need to be made, make them.
3. Is replication required for Vendor master data?
Replication of data depends on your business requirement, i.e. whether you need to create the records in SRM or vice versa.
4. Does the reuse-area access class need to be changed for Vendor master?
We are currently using this access class with no issues, hence continue using it.
Regards,
Shankar
Product category replication issue
Hi friends,
We are trying to replicate a product category from one of our R/3 backends to the SRM box. There is only one product category specified in the filters in R3AC3, but DNL_CUST_PROD1 has been showing status "Running" for the past 3 days.
I would like to point out that the same scenario is working fine for other R/3 backends.
Any idea/suggestion about the reason for this issue in replication process?
Thanks
Gaurav
Hi Robin,
We also suspect a customizing problem as the reason for this issue.
I just want to point out again that product category replication works for the other R/3 backends; only one specific R/3 backend causes this issue.
Can you advise what these backend-specific settings could be and where we can verify them?
Thanks
Gaurav -
Help needed on Fixing replication latency issues
Hi Team,
Good day!!!
I am new to replication setup and I am working on issues with replication latency.
Can you please share your experience and expertise on fixing and troubleshooting replication issues?
Below are some queries on that:
1) How do I check for replication errors?
2) How do I find out the replication latency?
3) What are the steps to fix replication latency?
4) How do I troubleshoot issues with replication latency?
Awaiting your valuable response and replies.
Thanks in advance,
Sarvan.N
Hi Sarvan,
Firstly, we usually check replication errors via Replication Monitor or by viewing replication agent job history. Besides these two methods, we can also check replication errors in the SQL Server error log, and so on. For more details, please review this article.
Secondly, based on my research, latency issues usually occur in transactional replication. We can check for latency issues using Replication Monitor as well as Transact-SQL commands. Below is an example that posts a tracer token record and uses the returned ID of the posted tracer token to view latency information. Refer to: Measure Latency and Validate Connections for Transactional Replication.
DECLARE @publication AS sysname;
DECLARE @tokenID AS int;
SET @publication = N'AdvWorksProductTran';
USE [AdventureWorks2012]
-- Insert a new tracer token in the publication database.
EXEC sys.sp_posttracertoken
@publication = @publication,
@tracer_token_id = @tokenID OUTPUT;
SELECT 'The ID of the new tracer token is ''' +
CONVERT(varchar,@tokenID) + '''.'
GO
-- Wait 10 seconds for the token to make it to the Subscriber.
WAITFOR DELAY '00:00:10';
GO
-- Get latency information for the last inserted token.
DECLARE @publication AS sysname;
DECLARE @tokenID AS int;
SET @publication = N'AdvWorksProductTran';
CREATE TABLE #tokens (tracer_id int, publisher_commit datetime)
-- Return tracer token information to a temp table.
INSERT #tokens (tracer_id, publisher_commit)
EXEC sys.sp_helptracertokens @publication = @publication;
SET @tokenID = (SELECT TOP 1 tracer_id FROM #tokens
ORDER BY publisher_commit DESC)
DROP TABLE #tokens
-- Get history for the tracer token.
EXEC sys.sp_helptracertokenhistory
@publication = @publication,
@tracer_id = @tokenID;
GO
Thirdly, transactional replication latency issues can be caused by many factors, such as network traffic or bandwidth constraints, transactional load on the publisher, system resources, and blocking/locking issues. I recommend you troubleshoot latency issues by referring to the following blog.
http://blogs.msdn.com/b/repltalk/archive/2010/02/21/transactional-replication-conversations.aspx
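As a quick first check against those factors, sp_replcounters on the publisher reports latency and throughput per published database; a minimal sketch:

```sql
-- Run on the publisher: returns, for each published database, the
-- number of replicated transactions, the replication rate (trans/sec),
-- and the current replication latency in seconds.
EXEC sp_replcounters;
```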
Thanks,
Lydia Zhang -
SQL Server 2008 R2 Replication - not applying snapshot and not updating all replicated columns
We are using transactional replication on SQL Server 2008 R2 (SP1) with a remote distributor. We are replicating from BaanLN, an ERP application, to up to 5 subscribers, all using push publications.
Tables can range from a couple million rows to 12 million rows and hundreds of GBs in size, and it is due to the size of the tables that the solution was designed with a one-table-per-publication architecture.
Until recently it has been working very smoothly (the last four years), but we have come across two issues I have never encountered.
While this has happened a half dozen times before, it last occurred a couple of weeks ago when I was adding three new publications, again one table per publication.
We use standard SQL Server replication procedure calls to create the publications, which have been successful for years.
On this occasion replication created the three publications, assigned the subscribers, and even generated the new snapshot for all three new publications.
However, while it appeared that replication had created all the publications correctly end to end, it actually applied the snapshot and created the new table on both subscribers for only one of the three publications. It applied the snapshot to only one of the two subscribers for the second publication, and to none for the third.
I let it run for three hours to see if it was a backlog issue.
Replication showed commands coming across in the sync verification at the publisher, and it would even successfully pass a tracer token through each of the three new publications, despite the tables being missing on both subscribers for one publication and on one subscriber for another.
I ended up attempting to reinitialize roughly a dozen times, spanning a day, and one of the two remaining publications was correctly reinitialized and the snapshot applied; the second (failed) publication again had the same mysterious result and again looked successful in all the monitoring.
So I kept reinitializing the last one, and after multiple attempts spanning a day, it too was finally built correctly.
Now the story gets a little stranger. We just found out yesterday that on Friday the 17th at 7:45, around the time we started the aforementioned deployment of the three new publications, we also had three transactions from a stable and vetted publication send over all changes except for a single status column.
This publication has 12 million rows and is very active, with thousands of changes daily.
The three rows did not replicate a status change from 5 to 6.
We verified that the status was in fact 6 on the publisher and 5 on both subscribers, yet there were no messages or errors. All the other rows updated successfully.
We fixed it by updating the publisher from 6 back to 5 and then back to 6 again on those specific rows, and it worked.
CPU is low and overall latency is minimal on the distributor.
From all accounts the replication is stable and smooth, but very busy.
The issues above have only recently started. I am not sure where to look for a problem, and to that end, a solution.
I suspect the problem with the new publications/subscriptions not initializing may have been a result of timeouts, but it is hard to say for sure. The fact that it eventually succeeded after multiple attempts leads me to believe this. If this happens again, enable verbose agent logging for the Distribution Agent to see if you are getting query timeouts. Add the parameters -OutputVerboseLevel 2 -Output C:\TEMP\DistributionAgent.log to the Distribution Agent's Run Agent job step, rerun the agent, and collect the log.
If you are getting query timeouts, try increasing the Distribution Agent's -QueryTimeOut parameter. The default is 1800 seconds; try bumping it up to 3600 seconds.
Regarding the three transactions not replicating, inspect MSrepl_errors in the distribution database for the time these transactions occurred and see if any errors were logged.
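A sketch of that inspection, assuming the default distribution database name; the time window is illustrative and should be set to when the three transactions were committed:

```sql
USE distribution;
GO
-- Errors logged by the replication agents, newest first, limited to
-- the window in which the missing status updates were committed.
SELECT time, error_type_id, source_name, error_code, error_text
FROM dbo.MSrepl_errors
WHERE time BETWEEN '20150417 07:00' AND '20150417 09:00'  -- illustrative window
ORDER BY time DESC;
```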
Brandon Williams (blog |
linkedin) -
TSQL Script to monitor SQL Server transactional and snapshot replication
Hi Team,
Could you please let me know whether you have a T-SQL script to monitor replication (transactional, snapshot) with current status? I have tried the script below, but it gives an error. Could you please have a look at the script, or do you have any other T-SQL script to monitor replication status?
"Msg 8164, Level 16, State 1, Procedure sp_MSload_tmp_replication_status, Line 80
An INSERT EXEC statement cannot be nested."
DECLARE @srvname VARCHAR(100)
DECLARE @pub_db VARCHAR(100)
DECLARE @pubname VARCHAR(100)
CREATE TABLE #replmonitor(status INT NULL,warning INT NULL,subscriber sysname NULL,subscriber_db sysname NULL,publisher_db sysname NULL,
publication sysname NULL,publication_type INT NULL,subtype INT NULL,latency INT NULL,latencythreshold INT NULL,agentnotrunning INT NULL,
agentnotrunningthreshold INT NULL,timetoexpiration INT NULL,expirationthreshold INT NULL,last_distsync DATETIME,
distribution_agentname sysname NULL,mergeagentname sysname NULL,mergesubscriptionfriendlyname sysname NULL,mergeagentlocation sysname NULL,
mergeconnectiontype INT NULL,mergePerformance INT NULL,mergerunspeed FLOAT,mergerunduration INT NULL,monitorranking INT NULL,
distributionagentjobid BINARY(16),mergeagentjobid BINARY(16),distributionagentid INT NULL,distributionagentprofileid INT NULL,
mergeagentid INT NULL,mergeagentprofileid INT NULL,logreaderagentname VARCHAR(100),publisher varchar(100))
DECLARE replmonitor CURSOR FOR
SELECT b.srvname,a.publisher_db,a.publication
FROM distribution.dbo.MSpublications a, master.dbo.sysservers b
WHERE a.publisher_id=b.srvid
OPEN replmonitor
FETCH NEXT FROM replmonitor INTO @srvname,@pub_db,@pubname
WHILE @@FETCH_STATUS = 0
BEGIN
INSERT INTO #replmonitor
EXEC distribution.dbo.sp_replmonitorhelpsubscription @publisher = @srvname
, @publisher_db = @pub_db
, @publication = @pubname
, @publication_type = 0
FETCH NEXT FROM replmonitor INTO @srvname,@pub_db,@pubname
END
CLOSE replmonitor
DEALLOCATE replmonitor
SELECT publication,publisher_db,subscriber,subscriber_db,
CASE publication_type WHEN 0 THEN 'Transactional publication'
WHEN 1 THEN 'Snapshot publication'
WHEN 2 THEN 'Merge publication'
ELSE 'Not Known' END,
CASE subtype WHEN 0 THEN 'Push'
WHEN 1 THEN 'Pull'
WHEN 2 THEN 'Anonymous'
ELSE 'Not Known' END,
CASE status WHEN 1 THEN 'Started'
WHEN 2 THEN 'Succeeded'
WHEN 3 THEN 'In progress'
WHEN 4 THEN 'Idle'
WHEN 5 THEN 'Retrying'
WHEN 6 THEN 'Failed'
ELSE 'Not Known' END,
CASE warning WHEN 0 THEN 'No Issues in Replication' ELSE 'Check Replication' END,
latency, latencythreshold,
'LatencyStatus'= CASE WHEN (latency > latencythreshold) THEN 'High Latency'
ELSE 'No Latency' END,
distribution_agentname,'DistributorStatus'= CASE WHEN (DATEDIFF(hh,last_distsync,GETDATE())>1) THEN 'Distributor has not executed more than n hour'
ELSE 'Distributor running fine' END
FROM #replmonitor
--DROP TABLE #replmonitor
Rajeev R
Hi Rajeev,
Could you please use the following query and check if it is successful?
INSERT INTO #replmonitor
SELECT a.*
FROM OPENROWSET
('SQLNCLI', 'Server=DBServer;Trusted_Connection=yes;',
'SET FMTONLY OFF; exec distribution..sp_replmonitorhelpsubscription
@publisher = DBServer,
@publication_type = 0,
@publication=MyPublication') AS a;
There is a similar thread for your reference.
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/634090bf-915e-4d97-b71a-58cf47d62a8a/msg-8164-level-16-state-1-procedure-spmsloadtmpreplicationstatus-line-80?forum=sqlreplication
Thanks,
Lydia Zhang
TechNet Community Support -
SRM7.0 - PO status issue in ECS
Hi,
We are observing that the status (tracking status) of ECS POs is always "In Transfer to Execution system". Can somebody help me analyse why this is so?
We are working on SRM 7.0 SP03, and we observe the above-mentioned issue only in extended classic scenario POs. We have no issues with replication of POs to ECC or with creation of follow-on documents. This is FYI.
Regards,
GM
SAP Note 1366608 - PO status remains in "In Transfer to Execution Syst."
Symptom
Sometimes purchase orders (POs) in the extended classic scenario (ECS) remain in "In Transfer to Execution Syst." status, because of which the "Edit" button is not visible in the purchase order and no confirmation or invoice can be created.
Steps to reproduce the symptom:
1. Check the flag "Enable Consistency Checks in Back-End"
in ECS customizing (SPRO).
2. Create a new PO / Copy the existing ECS PO.
3. Click on "Check" button.
4. Click on "Order" button.
5. Check that the PO is successfully replicated to R/3.
6. Check the status of the PO on the SRM side. The status of the PO will be shown as "Ordered" when opened from the portal; however, the intermediate status "In Transfer to Execution System" will be active, which can be seen in the "Tracking" tab of the PO.
In the standard system, when an ECS purchase order is successfully replicated to the ERP system, the "In Transfer to Execution System" status of the ECS PO on the SRM side should become inactive.
Cause and Prerequisites
The problem is due to a program error.
The RFC sessions which are opened during the backend checks (with test run ON) are not closed properly, because of which the locks on the PO objects on the ERP side are not deleted.
Backend Consistency Checks should be active in SRM.
Solution
Implement the attached automatic corrections using SAP Note Assistant or import the corresponding Support Package.
BR
MUTHU -
SQL Server 2014 Replication: Peer-to-peer replication
SQL Server 2014 replication wizard: in peer-to-peer replication -> Agent Security Settings, only the Log Reader Agent security settings are available.
After I selected the replication type and articles, only the Log Reader Agent status was available; the Snapshot Agent status displays the text:
"A Snapshot Agent job has not been created for this publication."
Another issue with replication:
"Peer-to-peer publications only support a '@sync_type' parameter value of 'replication support only', 'initialize with backup' or 'initialize from lsn'.
The subscription could not be found."
How do I resolve these issues in SQL Server 2014?
Please check these similar posts:
http://blogs.msdn.com/b/sqljourney/archive/2013/10/01/an-interesting-issue-with-peer-to-peer-replication.aspx
http://social.msdn.microsoft.com/Forums/sqlserver/en-US/15701595-f5b1-4a10-b4aa-c56a94d64785/peertopeer-publications-only-support-a-synctype-parameter-value-of-replication-support-only?forum=sqlreplication
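Regarding the '@sync_type' error specifically: the peer's subscription has to be created with one of the supported values. A minimal sketch using sp_addsubscription (publication, server, and database names are illustrative):

```sql
-- On the publisher: add the peer's subscription with a sync type that
-- peer-to-peer publications accept ('replication support only',
-- 'initialize with backup', or 'initialize from lsn').
EXEC sp_addsubscription
    @publication = N'P2P_Publication',   -- illustrative publication name
    @subscriber = N'PeerServer',         -- illustrative peer server
    @destination_db = N'PeerDatabase',   -- illustrative peer database
    @subscription_type = N'push',
    @sync_type = N'replication support only';
```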
Raju Rasagounder Sr MSSQL DBA -
Management Store Replication to Edge Servers
I have an issue with replication of the Management Store to Edge Servers.
In four separate countries I have a Enterprise FE pool and an Edge Pool segregated by a firewall in each country. Only in one country does the Management Store replicate to the Edge Server successfully (see below).
The bottom two entries with a status of "True" are for the FE and Edge that replicate OK.
From all the FE servers I can telnet to their partner Edge server on port 4443, and I can browse to the replication service on https://servername.fqdn:4443/replicationwebservice. The certificates look fine too.
If I run the Lync Server logging tool on the failing Edge servers and force replication from a FE server with Invoke-CsManagementStoreReplication there is nothing showing up in the XDS_Replica_Replicator log at all. If I do the same on the good Edge server
I get a whole bunch of stuff in the log.
I thought maybe it was a firewall issue but I have subsequently opened up the source IP range on my firewall rules to allow all to speak to the Edge servers' internal interface on 4443. Still nothing.
From the timestamps in the above screenshot you can see that the Edge servers have reported back to the FE servers at least once, as the LastStatusReport value is not null, but you can also see that that was a long time ago.
Any ideas?
By any chance, do you see Schannel errors in the Event Viewer of the Edge server? I've seen exactly this happen when the Edge internal certificate is not trusted by the Front End server.
http://thamaraw.com
I get a couple of Schannel errors regarding TLS 1.2, but I get the exact same errors on the Edge server that replicates OK, so I don't think that's the issue. Also, if the FE didn't trust the cert of the Edge, surely I wouldn't be able to browse to the replication web service on the Edge, which I can?
DFSr supported cluster configurations - replication between shared storage
I have a very specific configuration for DFSr that appears to suffer severe performance issues when hosted on a cluster as part of a DFS replication group.
My configuration:
3 Physical machines (blades) within a physical quadrant.
3 Physical machines (blades) hosted within a separate physical quadrant
Both quadrants are extremely well connected, local, 10GBit/s fibre.
There is local storage in each quadrant, no storage replication takes place.
The 3 machines in the first quadrant are MS clustered with shared storage LUNs on a 3PAR filer.
The 3 machines in the second quadrant are also clustered with shared storage, but on a separate 3PAR device.
8 shared LUNs are presented to the cluster in the first quadrant, and an identical storage layout is connected in the second quadrant. Each LUN has an associated HAFS application which can fail over onto any machine in the local cluster.
DFS replication groups have been set up for each LUN, and data is replicated from an "Active" cluster node entry point to a "Passive" cluster node that provides no entry point to the data via DFSn and holds a read-only copy on its shared cluster storage.
For the sake of argument, assume that all HAFS application instances in the first quadrant are "Active" in a read/write configuration, and all "Passive" instances of the HAFS applications in the other quadrant are read-only.
This guide: http://blogs.technet.com/b/filecab/archive/2009/06/29/deploying-dfs-replication-on-a-windows-failover-cluster-part-i.aspx defines how to add a clustered service to a replication group. It clearly shows using "shared storage" for the cluster, which is common sense; otherwise there is effectively no application fail-over possible, which removes the entire point of using a resilient cluster.
This article: http://technet.microsoft.com/en-us/library/cc773238(v=ws.10).aspx#BKMK_061 defines the following:
DFS Replication in Windows Server 2012 and Windows Server 2008 R2 includes the ability to add a failover cluster
as a member of a replication group. The DFS Replication service on versions of Windows prior to Windows Server 2008 R2
is not designed to coordinate with a failover cluster, and the service will not fail over to another node.
It then goes on to state, quite incredibly: DFS Replication does not support replicating files on Cluster Shared Volumes.
Stating quite simply that DFSr does not support Cluster Shared Volumes makes no sense at all after stating that clusters are supported in replication groups, and after a TechNet guide is provided to set up and configure exactly this configuration. What possible use is a clustered HAFS solution that has no shared storage between the clustered nodes? None at all.
My question: I need some clarification. Is the text meant to read "between" Clustered Shared Volumes?
The storage configuration must be shared in order to form a clustered service in the first place. What we are seeing from experience is a serious degradation of performance when attempting to replicate / write data between two clusters running a HAFS configuration in a DFS replication group.
If, for instance, as a test, local / logical storage is mounted to a physical machine, the performance of a DFS replication group between the unshared, logical storage on the physical nodes approaches 15k small files per minute on initial write, and is even higher for file amendments. When replicating between two nodes in a cluster with shared clustered storage, the solution manages a weak 2,500 files per minute on initial write and only 260 files per minute when attempting to update data / amend files.
By testing various configurations we have effectively ruled out the SAN, the storage, drivers, firmware, DFSr configuration, replication group configuration - the only factor left that makes any difference is replicating from shared clustered storage, to another
shared clustered storage LUN.
So in summary:
Logical Volume ---> Logical Volume = Fast
Logical Volume ---> Clustered Shared Volume = ??
Clustered Shared Volume ---> Clustered Shared Volume = Pitifully slow
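To put the summary above in perspective, here is a quick back-of-the-envelope calculation using the reported throughput figures (a sketch; the 15k figure is approximate, as stated above):

```python
# Throughput figures reported above, in files per minute
logical_initial = 15000   # logical volume -> logical volume, initial write (approx.)
csv_initial = 2500        # clustered shared storage -> clustered shared storage
csv_amend = 260           # clustered shared storage, file amendments

print(f"Initial-write slowdown: {logical_initial / csv_initial:.0f}x")   # 6x
print(f"Amendment rate: {csv_amend / 60:.1f} files/sec")                 # ~4.3 files/sec
```

A sixfold drop on initial write, and an amendment rate of barely four files per second, is far beyond normal measurement noise, which is why the configuration itself is suspected.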
Can anyone explain why this might be?
The guidance in the article is in clear conflict with all other evidence provided around DFSr and clustering; however, it may point towards why we are seeing a real issue with replication performance.
Many thanks for your time and any help/replies that may be received.
Paul

Hello Shaon Shan,
I am also having the same scenario at one of my customer place.
We have two file servers running on Hyper-V 2012 R2 as guest VMs using a Cluster Shared Volume. Even the data partition drive is part of the CSV.
It's really confusing whether DFS Replication on CSV is supported or not, and what the consequences would be if we use it.
To my knowledge, some of our customers are using Hyper-V 2008 R2 with DFS configured and running fine on CSV for more than 4 years without any issue.
I would appreciate it if you could elaborate and explain in detail the limitations of using CSV.
Thanks in advance,
Abul
Maybe you are looking for
-
Macromedia Dreamweaver MX 2004 Error
I work in a school and a pupil has come to me with this error: "You have not entered a valid name. Names can only contain letters and numbers and cannot start with a number." He has typed the name starting with a letter, yet he still gets the same error.
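For what it's worth, the rule quoted in the error message (letters and digits only, must not start with a digit) can be expressed as a simple pattern. This is an illustrative re-creation, not Dreamweaver's actual validation code:

```python
import re

# Hypothetical re-creation of the rule: letters and digits only,
# and the first character must be a letter.
VALID_NAME = re.compile(r"^[A-Za-z][A-Za-z0-9]*$")

def is_valid_name(name: str) -> bool:
    return VALID_NAME.fullmatch(name) is not None

print(is_valid_name("mySite1"))   # True
print(is_valid_name("1site"))     # False
print(is_valid_name(" site"))     # False
```

Note that an accidental leading or trailing space also fails such a check, which is one common reason a name that looks valid is still rejected.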
-
Please, I want to know BI 7 support project and production issues
Hi experts, I am Pradeep, a job seeker. Could you please help me? I want to know about BI 7 support project and production issues, and what the roles are in a BI 7 production project. Sure to assign points. My ID is [email protected]. Thanks & regards, Pradeep K.
-
How account assignment category determines G/L account in purchase order
Expert, when I place an order with an account assignment category, for example F, a label named "account assignment" appears with a default G/L account in the item detail area. What I wonder is how the account assignment category determines it, and where to c
-
Redeeming iTunes gift card on iPod touch
If I redeem my iTunes gift card on my iPod, will the gift card balance sync to the computer (in case I want to buy a video)?
-
HT201302 how to transfer my camera photos and camcorder video to my iPad?
What cable can I use to transfer my camera photos and camcorder video to my iPad?