SPL external data - SPL screening could not be performed

Dear all,
My customer asked me to screen a list of ca. 4,000 (external) partners via t-code /SAPSLL/SPL_CHS03.
The system stopped processing with the following message: 'Sanctioned party list screening could not be performed.'
Checking on possible root causes, I did the following:
1. CONS system: Reduced the workload to 100 partners, then increased the amount step by step to check for a possible workload limit. Surprisingly, in the end I could process all 4,000 partners without a problem!
2. Trying again in the PRD system: Same problem as before. Even with 2,000 partners, the system tells me 'Sanctioned party list screening could not be performed'. But increasing the amount from a low data volume seems possible.
Is that a known / typical error of the SPL screening? I would like to learn more about this strange system behaviour. If you know of possible solution approaches, I would be grateful if you shared them. Thank you guys!
Best regards
t.

Hi Tzunami,
This issue can occur due to duplicate entries in the list you are trying to screen. It can also occur if there are no duplicate entries in the list, yet the system reads them as duplicates.
Initially the list may not look like it has any duplicate entries, but when the system begins processing, it may read entries as duplicates due to a system limitation.
The maximum length of a business partner number is 8 digits, and therefore, for example, the system will read the following:
2140002826 -> becomes 21400028
2140002896 -> becomes 21400028
4961001220 -> becomes 49610012
4961001203 -> becomes 49610012
These "duplicate" entries cause the program to terminate, as you have described.
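The truncation effect described above can be sketched outside SAP. This is only an illustrative Python snippet (not SAP code), assuming the 8-character limit as stated:

```python
from collections import defaultdict

def truncation_collisions(partner_ids, max_len=8):
    """Group partner IDs by their first max_len characters and return
    the groups that the screening would read as duplicates."""
    groups = defaultdict(list)
    for pid in partner_ids:
        groups[pid[:max_len]].append(pid)
    # Only groups with more than one member are "duplicates".
    return {key: ids for key, ids in groups.items() if len(ids) > 1}

ids = ["2140002826", "2140002896", "4961001220", "4961001203"]
for key, clashing in sorted(truncation_collisions(ids).items()):
    print(key, "->", clashing)
# 21400028 -> ['2140002826', '2140002896']
# 49610012 -> ['4961001220', '4961001203']
```

Running a check like this over the partner list before screening would show whether the problematic ID pairs are present.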
Best Regards,
Eoin Hurley

Similar Messages

  • Data source executive could not be found

    Hello there, I recently built my first dynamic website
    using ColdFusion MX 7. After I uploaded all the files to the
    godaddy.com server on a Windows platform which supports cfm, I
    realised that all the DSN routes were still
    C:\cfusion7\wwwroot\database\executive.Users... so I changed all the
    tags to D:\Hosting\executivemalta\database\executive.Users....
    Unfortunately I keep on receiving the error "Data source executive
    could not be found!" I have also created an Access database from the
    godaddy control panel, located the database in a folder called
    accesscf, and changed the DSN route as instructed by godaddy.
    Unfortunately that didn't work either. Could somebody out there be
    generous enough to help me solve this problem? The following is the
    error:
    Error Occurred While Processing Request
    Data source executive could not be found.
    The error occurred in D:\Hosting\executivemalta\index2.cfm:
    line 4
    2 : <cfset MM_redirectLoginSuccess="../homepage.cfm">
    3 : <cfset MM_redirectLoginFailed="../failed.cfm">
    4 : <cfquery name="MM_rsUser"
    datasource="#Request.DSN#">
    5 : SELECT Username,Password FROM
    D:\Hosting\executivemalta\accesscf\executive.Users WHERE
    Username='#FORM.username#' AND Password='#FORM.password#'
    6 : </cfquery>
    DATASOURCE executive
    Please try the following:
    Check the ColdFusion documentation to verify that you are
    using the correct syntax.
    Search the Knowledge Base to find a solution to your problem.
    Browser Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1;
    SV1; .NET CLR 1.1.4322)
    Remote Address 212.56.128.21
    Referrer
    http://www.the-executive.biz/index2.cfm
    Date/Time 08-Aug-06 11:17 PM
    Stack Trace (click to expand)
    at
    cfindex22ecfm807111997.runPage(D:\Hosting\executivemalta\index2.cfm:4)
    at
    cfindex22ecfm807111997.runPage(D:\Hosting\executivemalta\index2.cfm:4)
    java.sql.SQLException: Data source "executive" not found.
    at coldfusion.sql.DataSrcImpl.validate(DataSrcImpl.java:90)
    at coldfusion.sql.SqlImpl.validate(SqlImpl.java:203)
    at
    coldfusion.tagext.sql.QueryTag.doStartTag(QueryTag.java:443)
    at
    cfindex22ecfm807111997.runPage(D:\Hosting\executivemalta\index2.cfm:4)
    at coldfusion.runtime.CfJspPage.invoke(CfJspPage.java:152)
    at
    coldfusion.tagext.lang.IncludeTag.doStartTag(IncludeTag.java:343)
    at
    coldfusion.filter.CfincludeFilter.invoke(CfincludeFilter.java:65)
    at
    coldfusion.filter.ApplicationFilter.invoke(ApplicationFilter.java:210)
    at coldfusion.filter.PathFilter.invoke(PathFilter.java:86)
    at
    coldfusion.filter.ExceptionFilter.invoke(ExceptionFilter.java:50)
    at
    coldfusion.filter.ClientScopePersistenceFilter.invoke(ClientScopePersistenceFilter.java:28)
    at
    coldfusion.filter.BrowserFilter.invoke(BrowserFilter.java:38)
    at
    coldfusion.filter.GlobalsFilter.invoke(GlobalsFilter.java:38)
    at
    coldfusion.filter.DatasourceFilter.invoke(DatasourceFilter.java:22)
    at coldfusion.CfmServlet.service(CfmServlet.java:105)
    at
    coldfusion.bootstrap.BootstrapServlet.service(BootstrapServlet.java:78)
    at
    jrun.servlet.ServletInvoker.invoke(ServletInvoker.java:91)
    at
    jrun.servlet.JRunInvokerChain.invokeNext(JRunInvokerChain.java:42)
    at
    jrun.servlet.JRunRequestDispatcher.invoke(JRunRequestDispatcher.java:257)
    at
    jrun.servlet.ServletEngineService.dispatch(ServletEngineService.java:527)
    at
    jrun.servlet.jrpp.JRunProxyService.invokeRunnable(JRunProxyService.java:204)
    at
    jrunx.scheduler.ThreadPool$DownstreamMetrics.invokeRunnable(ThreadPool.java:349)
    at
    jrunx.scheduler.ThreadPool$ThreadThrottle.invokeRunnable(ThreadPool.java:457)
    at
    jrunx.scheduler.ThreadPool$UpstreamMetrics.invokeRunnable(ThreadPool.java:295)
    at jrunx.scheduler.WorkerThread.run(WorkerThread.java:66)

    Hi,
    I updated Sources\SXS with fresh files and now it looks like it works?
    /SaiTech

  • HT3546 I get an error when trying to back up on my TC. Message says - The backup disk image "/Volumes/Data/Macintosh.sparsebundle" could not be accessed (error -1).

    I get an error when trying to back up on my TC. Message says - The backup disk image “/Volumes/Data/Macintosh.sparsebundle” could not be accessed (error -1).
    Any suggestions?

    See here...
    https://discussions.apple.com/message/20933934#20933934

  • HT201514 When I try to back up ,I get the error message:The backup disk image "/Volumes/Data/ iMac.sparsebundle" could not be accessed (error -1).

    When I try to back up I get the error message: The backup disk image "/Volumes/Data/iMac.sparsebundle" could not be accessed (error -1). Any ideas? It was working fine and I can't think of any changes I made to the system that would affect this.

    Start with the best resource available: Time Machine - Troubleshooting from Pondini, the wizard of all things Time Machine.
    " . . . sparse bundle could not be accessed (error -1)"

  • The backup disk image "/volumes/data/GuisselleiMac.sparsebundle" could not be created error null

    Hello...
    I just got my first 2TB Time Capsule (TC). Unfortunately, it is not backing up! I keep on getting the message "the backup disk image "/volumes/data/GuisselleiMac.sparsebundle" could not be created (error (null))".
    I've looked at many websites but can't find a solution. I've tried resetting the TC to its original settings, turning my Mac off and on, changing the name of my Mac, and changing the name of my Mac on the network; nothing works!
    My TC is connected with an ethernet cable to a Linksys router which is connected to the cable modem, so it is in bridge mode. I have not connected the TC to my Mac; I don't think this is necessary.
    Can someone help me please!!!

    My TC is connected with an ethernet cable to a Linksys router which is connected to the cable modem, so it is in bridge mode. I have not connected the TC to my Mac; I don't think this is necessary.
    Your TC is not directly connected to the Mac and unfortunately it does seem to matter.
    The reason the TC fails is not entirely clear but it has to do with Mavericks.
    Let me recommend you test what happens with the TC reset and plugged directly into the Mac by ethernet.
    You will need to run it in router mode. Ignore all the errors and see if it now works.
    The other suggestion is to manually mount the TC hard disk
    First of all find it in the network via network utility.
    Run netstat and get the current routing table.
    Locate your TC on it.. On mine above it shows as tcgen4.local
    So in Finder top menu.. Go > Connect to Server.
    Type in AFP://TCnetworkname.local (exactly as it appears in netstat)
    Or use the IP address.. the problem is that in your setup it is probably not fixed.
    The current IP can be found by pinging the name. So ping in a terminal or in network utility.
    ping tcgen4
    ping: cannot resolve tcgen4: Unknown host
    Note how you must use the full domain.
    ping tcgen4.local
    PING tcgen4.local (192.168.2.201): 56 data bytes
    64 bytes from 192.168.2.201: icmp_seq=0 ttl=255 time=4.160 ms
    64 bytes from 192.168.2.201: icmp_seq=1 ttl=255 time=3.742 ms
    64 bytes from 192.168.2.201: icmp_seq=2 ttl=255 time=3.332 ms
    64 bytes from 192.168.2.201: icmp_seq=3 ttl=255 time=3.785 ms
    ^C
    --- tcgen4.local ping statistics ---
    4 packets transmitted, 4 packets received, 0.0% packet loss
    round-trip min/avg/max/stddev = 3.332/3.755/4.160/0.293 ms
    So you have now established the IP..
    And type that in.
    AFP://192.168.2.201 (or whatever shows).
    Once you have mounted the disk .. delete the old Time Machine setup as per A4 in Pondini.. and set up again.
    It should work.. but I strongly recommend a direct link to the TC.

  • SCOM 2012 Data Access Service could not start

    Hello,
    I have 3 SCOM 2012 servers. I got the error that the All Management Servers Pool is not available. A Google search gave me a lot of posts suggesting to add a key PoolManager at HKLM\System\CurrentControlSet\services\HealthService\Parameters with the DWORD values PoolLeaseRequestPeriodSeconds = 600 and PoolNetworkLatencySeconds = 120. After restarting the last SCOM server, the Data Access Service could not start: "Windows could not start the System Center Data Access Service. Error 1053: The service did not respond to the start or control request in a timely fashion."
    In the Operations Manager event log I see an error: "OpsMgr Management Configuration Service failed to communicate with System Center Data Access Service due to the following exception: System.Runtime.Remoting.RemotingException: Unable to get ISdkService interface. Please make sure local Sdk Service is running."
    In the System event log I see an error: "A timeout was reached (30000 milliseconds) while waiting for the System Center Data Access Service service to connect."
    Why did it work on both of the other servers and fail on the third one? What can I check?
    Regards, Doreen

    Hi,
    According to my understanding, after editing the registry key value, two of the three management servers work fine now, and the issue only occurs on the third one. Are all of those servers running Windows Server 2012?
    I would like to suggest you re-check the service account for the Data Access Service; maybe you can re-enter the account credentials. The three servers may use the same account for the service, so make sure the account is not disabled and that you entered the right password for it.
    Here is a similar thread, please go through it for more details:
    http://social.technet.microsoft.com/Forums/en-US/f2281934-513d-4b60-bb4b-34ae8a892ace/failed-to-open-the-console-and-system-center-data-access-service-wont-start-scom-2012?forum=operationsmanagergeneral
    In addition, please also restart the third server and check the result, and I know that there is a similar error message in SCOM 2007:
    http://blogs.technet.com/b/csstwplatform/archive/2012/01/02/scom-2007-r2-unable-to-open-the-scom-console-after-server-reboot-the-sdk-service-is-stopped.aspx
    Regards,
    Yan Li

  • The backup disk image "/Volumes/Data/Macintosh.sparsebundle" could not be accessed (error (null))

    I keep getting "The backup disk image "/Volumes/Data/Macintosh.sparsebundle" could not be accessed (error (null)).
    I have updated the firmware on the Time Capsule and unplugged and replugged it in after waiting more than 10 seconds.
    I cannot access the sparsebundle to repair from the Disk Utility.
    Does anyone have any other suggestions?

    It sounds like you have already tried some of pondini's suggestions..
    But C17 is the main one, even if the error is a different number..
    http://pondini.org/TM/Troubleshooting.html
    Mavericks in particular is having issues with the TC..
    So I strongly recommend a full factory reset of the TC.
    Make sure all names are short, no spaces and pure alphanumeric. See C9 for more info.
    Because Mavericks now uses SMB by default for network protocol, it is worthwhile mounting the TC manually using AFP.
    In finder use the top menu, Go, Connect to Server.
    Type in as follows,
    AFP://TCname
    Replace TCname with your actual TC name..
    If the computer fails to find the TC, try again using the full domain name.
    AFP://TCname.local is the standard one. The TC opts for local although you now have no place to set it in the airport utility.
    If you still have trouble tell me.
    Once the computer locates the network resources, it will ask for a password, public by default. Type in your password if you changed it or public. And store it in the keychain.
    Now reset TM.. see A4.
    And point the TM to the TC again which you have mounted.
    See if that helps.

  • The backup disk image "/Volumes/Data/Macintosh.sparsebundle" could not be accessed (error -1).

    I am getting an error message from my Time Capsule: "The backup disk image "/Volumes/Data/Macintosh.sparsebundle" could not be accessed (error -1)." The backup is not working. How do I fix this?

    Just a reboot of the TC or whole network will fix most of the network issues which are unfortunately very common with Lion and worse with Mountain Lion.
    Look at C17 for that particular error code here.
    http://pondini.org/TM/Troubleshooting.html

  • The Data Model connection could not be created

    Hello, 
    Using Power Query, I am trying to access daily logs conveniently; there are over 24M records in a month.
    I have selected the columns I need and checked the box 'Load data to Model'. Once the query has run, it returns the error "The Data Model connection could not be created".
    Thank you for any help on this :)

    This is a tough one because as an add-in, we can't get much information from the API for failures like this. Our general recommendation for people working with big data sets is to use 64-bit Excel. I realize that's a big hammer, but it might be your only
    option if you need to work with a 24 million row data set.
    Another option (which you may have already done) is to filter and aggregate that dataset as much as possible before adding it back into Excel. The analysis done inside the Power Query editor is less likely to negatively affect the working set on your machine.
    (It does depend on how the query is built and where the data comes from. If we have to pull all the data into memory to do some of the operations then it won't help at all.)
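As a rough sketch of the pre-aggregation advice above (plain Python, not Power Query; the log fields are made up for illustration), collapsing detail rows to one summary row per day before loading shrinks the row count the Data Model has to hold:

```python
from collections import defaultdict

# Hypothetical detail rows standing in for a very large daily log: (day, bytes).
raw_rows = [
    ("2014-01-01", 120),
    ("2014-01-01", 80),
    ("2014-01-02", 50),
]

# Aggregate before loading: one summary row per day instead of every detail row.
totals = defaultdict(int)
for day, size in raw_rows:
    totals[day] += size

summary = sorted(totals.items())
print(summary)  # [('2014-01-01', 200), ('2014-01-02', 50)]
```

The same grouping done in the Power Query editor (or at the data source) means far fewer rows ever reach the Data Model.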

  • Time Machine Backup error - The backup disk image "/Volumes/Data/Extreme.sparsebundle" could not be accessed (error -1).

    I am getting a Time Machine backup error "The backup disk image “/Volumes/Data/Extreme.sparsebundle” could not be accessed (error -1)."
    Then a Finder window pops up with the backup directory in it.  When I open it I see all the backups that were completed successfully, and then one that simply always shows the date and .inProgress.  I thought that maybe if I could delete the one that is perpetually in progress it would start a new one and run fine, but it won't let me delete it.

    Reboot the TC.. or go to the 5.6 utility.. and disconnect all users.. to force it to reconnect.
    C17 in troubleshooting.. to give more specifics to Kappy's comment.
    http://pondini.org/TM/Troubleshooting.html

  • The screen could not rotate automatically while without lock it, how to solve this problem?

    The screen will not rotate automatically even though it is not locked. How do I solve this problem?

    If there isn't a lock symbol at the top of the screen next to the battery indicator then try a soft-reset and see if it rotates after the iPad has restarted : press and hold both the sleep and home buttons for about 10 to 15 seconds (ignore the red slider), after which the Apple logo should appear - you won't lose any content, it's the iPad equivalent of a reboot.

  • RDS Gateway 2012, RemoteApp Displays "A Revocation check could not be performed for the Certificate" via RDWEB

    I have searched through the forums and there are a number of posts that are similar but all the checks they list seem to not apply to this one.
    My current setup is as follows
    All Servers are 2012 R2
    1 x DC server
    1 x RDS Gateway server with RDS Web installed
    1 x Session Host Server
    Certificate supplied by GoDaddy with 5 names. (Included is the name of the RDS Gateway/Web server; the internal name of the session host server is not included, as the internal names are different from the external ones.)
    My tests are as follows
    Navigating to the RDWEB page from a machine inside the same network (Windows 7 SP1) but not on the same domain is fine: no errors, and logging in and launching any published application works without errors.
    However, logging in on another machine that is external to the network (Windows 7 SP1) is OK up to the point of launching any of the published apps, when I get the error "A Revocation check could not be performed for the Certificate". This prompts twice but does allow you to continue, log in, and use the app until the next time. If I view the certificate from the warning message, all appears to be OK with all certs in the chain.
    I have imported the root and intermediate certs on each of the gateway/RDWEB and session host servers into the computer cert store just to be on the safe side. This has not helped. I have also run the following command from both Windows 7 machines with no errors on either:
    certutil -f -urlfetch -verify c:\export.cer
    I can't seem to see where this is failing and I am beginning to think there is something wrong with the GoDaddy cert itself somehow.
    If I skip RDWEB and just use MSTSC with the gateway server settings then I can log in to any machine on the network with no errors, so this is only related to launching published apps on the 2012 R2 RDWEB or session host servers.
    Any help appreciated

    Hi,
    1. Please make sure the client PCs have mstsc.exe (6.3.9600) installed.
    2. If you are seeing a name mismatch error, you can set the published name via this cmdlet:
    Change published FQDN for Server 2012 or 2012 R2 RDS Deployment
    http://gallery.technet.microsoft.com/Change-published-FQDN-for-2a029b80
    To be clear, the above cmdlet changes the name that shows up next to Remote computer on the prompt you see when launching a RemoteApp.  You should have a DNS A record on your internal network pointing to the private ip address of your RDCB server. 
    Additionally, in RD Gateway Manager, Properties of your RD RAP, Network Resource tab, you should select Allow users to connect to any network resource or if you choose to use RD Gateway Managed group you will need to add all of the appropriate names to the
    group.
    For example, when launching a RemoteApp you would see something like Remote computer: rdcb.domain.com and Gateway server: gateway.domain.com .  Both of these names need to be on your GoDaddy certificate.
    Please verify the above and reply back so that we may assist you further if needed.  It is possible you have an issue with the revocation check but I would like you to make sure that the above is in place first.
    Thanks.
    -TP
    Thanks for the response.
    To be clear, I am only seeing a name mismatch and revocation error if I assign a self-signed cert to the session host as advised earlier in the thread by "Dharmesh Solanki"; if I remove this and assign the 3rd party certificate I then just get the revocation error. I have already run the PowerShell to change the FQDNs, but this has not resolved the issue, although the RDP connection details now match the external URL for RDWEB when looking at one of the RemoteApp files. The workspace ID still shows an internal name, though, inside this same file.
    RD Gateway is already set to connect to any resource; when connecting using RemoteApp both names (RDCB/RD Gateway) show as correct and are contained within the same UCC certificate. I also already have a DNS entry for the Connection Broker pointing to the internal IP.
    Do you know if I need the internal names of the session host servers contained within the same UCC certificate, seeing as they are different FQDNs from what I am using for external access? I re-signed the UCC certificate and included the internal name of the session host server to see if this would help, but for some reason I am still seeing the revocation error. I will check on a Windows 8 client PC this evening to see if this gets any further, as the majority of the testing has been done on Windows 7 SP1 client PCs.
    Thanks

  • The operation could not be performed because OLE DB provider "OraOLEDB.Oracle" for linked server ...

    Our setup is that we have two databases; a SQL Server 2008 database and an Oracle database (11g). I've got the oracle MTS stuff installed and the Oracle MTS Recovery Service is running. I have DTC configured to allow distributed transactions. All access to the Oracle tables takes place via views in the SQL Server database that go against Oracle tables in the linked server.
    (With regard to DTC config: Checked-> Network DTC Access, Allow Remote Clients, Allow Inbound, Allow Outbound, Mutual Authentication (tried all 3 options), Enable XA Transactions and Enable SNA LU 6.2 Transactions. DTC logs in as NT AUTHORITY\NetworkService)
    Our app is an ASP.NET MVC 4.0 app that calls into a number of WCF services to perform database work. Currently the web app and the WCF service share the same app pool (not sure if it's relevant, but just in case...)
    Some of our services are transactional, others are not.
    Each WCF service that is transactional has the following attribute on its interface:
    [ServiceContract(SessionMode=SessionMode.Required)]
    and the following attribute on the method signatures in the interface:
    [TransactionFlow(TransactionFlowOption.Allowed)]
    and the following attribute on every method implementations:
    [OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
    In my data access layer, all the transactional methods are set up as follows:
    using (IDbConnection conn = DbTools.GetConnection(_configStr, _connStr))
    {
        using (IDbCommand cmd = DbTools.GetCommand(conn, "SET XACT_ABORT ON"))
        {
            cmd.ExecuteNonQuery();
        }
        using (IDbCommand cmd = DbTools.GetCommand(conn, sql))
        {
            // ... perform actual database work ...
        }
    }
    Services that are transactional call transactional DAL code. The idea was to keep the stuff that needs to be transactional (a few cases) separate from the stuff that doesn't need to be transactional (~95% of the cases).
    There ought not be cases where transactional and non-transactional WCF methods are called from within a transaction (though I haven't verified this and this may be the cause of my problems. I'm not sure, which is part of why I'm asking here.)
    As I mentioned before, in most cases, this all works fine.
    Periodically, and I cannot identify what initiates it, I start getting errors. And once they start, pretty much everything starts failing for a while. Eventually things start working again. Not sure why... This is all in a test environment with a single user.
    Sometimes the error is:
    Unable to start a nested transaction for OLE DB provider "OraOLEDB.Oracle" for linked server "ORACLSERVERNAME". A nested transaction was required because the XACT_ABORT option was set to OFF.
    This message, I'm guessing is happening when I have non-transactional stuff within transactions, as I'm not setting XACT_ABORT in the non-transactional code (that's totally doable, if that will fix my issue).
    Most often, however, the error is this:
    System.Data.SqlClient.SqlException (0x80131904): The operation could not be performed because OLE DB provider "OraOLEDB.Oracle" for linked server "ORACLSERVERNAME" was unable to begin a distributed transaction.
    Now, originally we only had transactions on SQL Server tables and that all worked fine. It wasn't until we added transaction support for some of the Oracle tables that things started failing. I know the Oracle transactions work. And as I said, most of the time, everything is just hunky dorey and then sometimes it starts failing and keeps failing for a while until it decides to stop failing and then it all works again.
    I noticed that our transactions didn't seem to have a DistributedIdentifier set, so I added the EnsureDistributed() method from this blog post: http://www.make-awesome.com/2010/04/forcibly-creating-a-distributed-net-transaction/
    Instead of a hardcoded Guid (which seemed to cause a lot of problems), I have it generating a new Guid for each transaction and that seems to work, but it has not fixed my problem. I'm wondering if the lack of a DistribuedIdentifier is indicative of some other underlying problem. I've never dealt with an environment quite like this before, so I'm not sure what is "normal".
    I've also noticed that the DistributedIdentifier doesn't get passed to WCF. From the client, I have a DistributedIdentifier and a LocalIdentifier in Transaction.Current.TransactionInformation. In the WCF server, however there is only a LocalIdentifier set and it is a different Guid from the client side (which makes sense, but I would have expected the DistributedIdentifier to go across).
    So I changed the way the code above works and instead, on the WCF side, I call a method that calls Transaction.Current.EnlistDurable() with the DummyEnlistmentNotification class from the link above (though with a unique Guid for each transaction instead of the hardcoded Guid in the link). I now have a DistributedIdentifier on the server side, but it still doesn't fix the problem.
    It appears that when I'm in the midst of transactions failing, even after I shut down IIS, I'm unable to get the DTC service to shut down and restart. If I go into Component Services and change the security settings, for example, and hit Apply or OK, after a bit of a wait I get a dialog that says, "Failed to restart the MS DTC service. Please examine the event log for further details."
    In the event log I get a series of events:
    1 (from MSDTC): "The MS DTC service is stopping"
    2 (From MSSQL$SQLEXPRESS): "The connection has been lost with Microsoft Distributed Transaction Coordinator (MS DTC). Recovery of any in-doubt distributed transactions
    involving Microsoft Distributed Transaction Coordinator (MS DTC) will begin once the connection is re-established. This is an informational
    message only. No user action is required."
    -- Followed by these 3 identical messages
    3 (from MSDTC Client 2): 'MSDTC encountered an error (HR=0x80000171) while attempting to establish a secure connection with system GCOVA38.'
    4 (from MSDTC Client 2): 'MSDTC encountered an error (HR=0x80000171) while attempting to establish a secure connection with system GCOVA38.'
    5 (from MSDTC Client 2): 'MSDTC encountered an error (HR=0x80000171) while attempting to establish a secure connection with system GCOVA38.'
    6 (From MSDTC 2): MSDTC started with the following settings: Security Configuration (OFF = 0 and ON = 1):
    Allow Remote Administrator = 0,
    Network Clients = 1,
    Transaction Manager Communication:
    Allow Inbound Transactions = 1,
    Allow Outbound Transactions = 1,
    Transaction Internet Protocol (TIP) = 0,
    Enable XA Transactions = 1,
    Enable SNA LU 6.2 Transactions = 1,
    MSDTC Communications Security = Mutual Authentication Required, Account = NT AUTHORITY\NetworkService,
    Firewall Exclusion Detected = 0
    Transaction Bridge Installed = 0
    Filtering Duplicate Events = 1
    This makes me wonder if there's something maybe holding a transaction open somewhere?

    The statement was executed from SQL Server (installed: SQL Server 2008 64-bit Standard Edition SP1 and Oracle 11g 64-bit client), DTC enabled.
    Below is the actual SQL statement issued:
    SET XACT_ABORT ON
    BEGIN TRAN
    insert into XXX..EUINTGR.UPLOAD_LWP
              ([ALTID],[GRANT_FROM],[GRANT_TO],[NO_OF_DAYS],[LEAVENAME],[LEAVEREASON],[FROMHALFTAG],
               [TOHALFTAG],[UNIT_USER],[UPLOAD_REF_NO],[STATUS],[LOGINID],[AVAILTYPE],[LV_REV_ENTRY])
    values ('IS2755','2010-06-01','2010-06-01','.5','LWOP','PERSONAL','F','F','EUINTGR','20101',1,1,0,'ENTRY')
    rollback TRAN
    OLE DB provider "ORAOLEDB.ORACLE" for linked server "XXX" returned message "New transaction cannot enlist in the specified transaction coordinator. ".
    Msg 7391, Level 16, State 2, Line 3
    The operation could not be performed because OLE DB provider "ORAOLEDB.ORACLE" for linked server "XXX" was unable to begin a distributed transaction.
    We are able to execute the above statement successfully without using a transaction. We need to run the statement within a transaction.

  • Error - The request could not be performed due to an error from the I/O device

    Hello, 
    I have a Hyper-V server with a few virtual machines. 
    The host runs Windows Server 2012 R2 with Hyper-V. 
    VMs are Windows Server 2012R2 Generation 2 and Windows Server 2003 Generation 1. 
    All VMs running on VHDX on local host disks, no raid, no storage. Most VMs run on dedicated disks. 
    I am getting the following error when I demand a large amount of I/O on the VMs: "The request could not be performed due to an error from the I/O device" 
    This error happens when I run robocopy which requires large amount of writing, or on a SQL 2014 VM which also requires many reads and writes. 
    Whenever this error occurs, the replicas of the VMs require resynchronization and the MSSQL service stops. 
    Analyzing the events of the host, I find the following warning multiple times: "The IO operation at logical block address 0x31fd01 for Disk 4 (PDO name: \Device\0000005d) was retried." Disk 4 is where SQL runs. 
    Is there any special configuration that must be done to avoid these errors? 
    Thank you! 
    Rafael

    Hi Eng.Rafael Grecco,
    >>Analyzing the events of the host, I find the following warning multiple times: "The IO operation at logical block address 0x31fd01 for Disk 4 (PDO name: \Device\0000005d) was retried." Disk 4 is where SQL runs. 
    >>Chkdsk /r didn't return any error.
    It seems that it is not a Hyper-V issue.
    I would suggest you keep the drivers up to date on your Hyper-V host.
    In addition , here is a similar thread :
    http://answers.microsoft.com/en-us/windows/forum/windows_8-hardware/the-io-operation-at-logical-block-address-for-disk/23c32152-c2a6-4c6d-b229-95dc1470231a
    Best Regards
    Elton Ji

  • Analysis could not be performed in time. There is a possible serious performance issue

    Can someone please advise what I need to do to correct this critical error?
    My computer is VERY slow and when I ran the event viewer, this is listed as CRITICAL.  Any information would be appreciated.
    Log Name:      Microsoft-Windows-Diagnostics-Performance/Operational
    Source:        Microsoft-Windows-Diagnostics-Performance
    Date:          6/24/2010 10:18:46 AM
    Event ID:      400
    Task Category: System Performance Monitoring
    Level:         Critical
    Keywords:      Event Log
    User:          LOCAL SERVICE
    Computer:      user-PC
    Description:
    Information about the system performance monitoring event:
         Scenario  : System Responsiveness
         Analysis result  : Analysis could not be performed in time. There is a possible serious performance issue
         Incident Time (UTC) : 6/24/2010 5:17:07 PM
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Microsoft-Windows-Diagnostics-Performance" Guid="{cfc18ec0-96b1-4eba-961b-622caee05b0a}" />
        <EventID>400</EventID>
        <Version>1</Version>
        <Level>1</Level>
        <Task>4005</Task>
        <Opcode>37</Opcode>
        <Keywords>0x8000000000010000</Keywords>
        <TimeCreated SystemTime="2010-06-24T17:18:46.941Z" />
        <EventRecordID>5491</EventRecordID>
        <Correlation ActivityID="{00000000-E6C8-0000-F4BB-058D9113CB01}" />
        <Execution ProcessID="1884" ThreadID="5956" />
        <Channel>Microsoft-Windows-Diagnostics-Performance/Operational</Channel>
        <Computer>user-PC</Computer>
        <Security UserID="S-1-5-19" />
      </System>
      <EventData>
        <Data Name="ShellScenarioStartTime">2010-06-24T17:17:07.442Z</Data>
        <Data Name="ShellScenarioEndTime">2010-06-24T17:17:12.442Z</Data>
        <Data Name="ShellSubScenario">1</Data>
        <Data Name="ShellScenarioDuration">5000</Data>
        <Data Name="ShellRootCauseBits">0</Data>
        <Data Name="ShellAnalysisResult">2</Data>
        <Data Name="ShellDegradationType">1</Data>
        <Data Name="ShellTsVersion">1</Data>
        <Data Name="ShellMachineUpTimeHours">0</Data>
        <Data Name="ShellMachineSleepPattern">0</Data>
      </EventData>
    </Event>

    I get the same problem. I believe it started after I switched from HDD to SSD some months ago. My machine is very fast now, so I do not have performance problems (only a very slow power-on boot).
    The description above is exactly the same as mine.
    Does somebody have the same problems?
