Execute-asynchronous
Hi All,
When I created the inbound proxy these are the objects generated.
<b>Type Name Shorttext</b>
Interface ZJSII_IN_JS_MI Proxy Interface
Structure ZJSSRC_DT Proxy Structure
Structure ZJSSRC_DT_HEADER Proxy Structure
Structure ZJSSRC_DT_DETAIL Proxy Structure
Structure ZJSSRC_DT_RECORDSET Proxy Structure
Structure ZJSSRC_MT Proxy Structure
Table Type ZJSSRC_DT_DETAIL_TAB Proxy Table Type
Table Type ZJSSRC_DT_HEADER_TAB Proxy Table Type
Now I just need to edit the EXECUTE_ASYNCHRONOUS method so that I can view the data that has come from the XI server.
I have edited the method like this:
method ZJSII_IN_JS_MI~EXECUTE_ASYNCHRONOUS.
DATA : HTYPE TYPE ZJSSRC_DT_HEADER-KEYFIELD,
DTYPE TYPE ZJSSRC_DT_DETAIL-KEYFIELD.
HTYPE = INPUT-SRC_MT-RECORDSET-HEADER-KEYFIELD.
DTYPE = INPUT-SRC_MT-RECORDSET-DETAIL-KEYFIELD.
ENDMETHOD.
But I am getting the error:
field "INPUT-SRC_MT-RECORDSET-HEADER-KEYFIELD" is unknown.
Any inputs on this will be of great help.
Thanks & Regards,
Jai Shankar.
Hi,
Check your message type and message interface, because I suspect the field does not exist in the message type. Check this once.
If the fields are missing, regenerate the proxy and activate it again.
Thanks,
Moorthy
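One more thing worth checking: the generated table types (ZJSSRC_DT_HEADER_TAB, ZJSSRC_DT_DETAIL_TAB) suggest that HEADER and DETAIL under RECORDSET may be internal tables rather than flat structures, in which case their fields cannot be addressed with a plain dash path. A hedged sketch of how the method might read them in that case (component names are taken from the post; verify them against your generated structures in SPROXY):

```abap
METHOD zjsii_in_js_mi~execute_asynchronous.
  DATA: ls_header TYPE zjssrc_dt_header,
        ls_detail TYPE zjssrc_dt_detail,
        htype     TYPE zjssrc_dt_header-keyfield,
        dtype     TYPE zjssrc_dt_detail-keyfield.

  " If HEADER and DETAIL are table types, read a line before
  " accessing KEYFIELD instead of using a direct component path
  LOOP AT input-src_mt-recordset-header INTO ls_header.
    htype = ls_header-keyfield.
  ENDLOOP.
  LOOP AT input-src_mt-recordset-detail INTO ls_detail.
    dtype = ls_detail-keyfield.
  ENDLOOP.
ENDMETHOD.
```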
Similar Messages
-
Client Proxy--unable to edit Execute Asynchronous method !!!!
Hi All,
With Reference to the blog stated below
/people/sravya.talanki2/blog/2006/07/28/smarter-approach-for-coding-abap-proxies
I tried to write code inside the Execute Asynchronous method, but I was unable to edit it and got the message "Cannot edit proxy objects". Are there any steps to make the edit option available for this method?
Regards,
Sundar.
Hi,
You never write code in the generated method of a client proxy;
you can only do that for server proxies.
If you want to use the client proxy, you need to call it from your own
report (or function module).
Regards,
michal
<a href="/people/michal.krawczyk2/blog/2005/06/28/xipi-faq-frequently-asked-questions"><b>XI / PI FAQ - Frequently Asked Questions</b></a> -
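For reference, a minimal sketch of what Michal describes, calling a client proxy from a custom report (the proxy class ZCO_MI_OUT and message type ZMT_OUT are hypothetical names; use the ones generated for your outbound interface):

```abap
REPORT z_call_client_proxy.

DATA: lo_proxy  TYPE REF TO zco_mi_out,  " generated proxy class (hypothetical name)
      ls_output TYPE zmt_out,            " generated message type (hypothetical name)
      lo_fault  TYPE REF TO cx_ai_system_fault.

TRY.
    CREATE OBJECT lo_proxy.
    " Fill the payload, then call the generated method
    lo_proxy->execute_asynchronous( output = ls_output ).
    " Asynchronous proxy messages are handed to XI on commit
    COMMIT WORK.
  CATCH cx_ai_system_fault INTO lo_fault.
    WRITE: / lo_fault->get_text( ).
ENDTRY.
```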
How can I run a custom program ASYNCHRONOUSLY when booting WinPE?
I have a custom application that I want to have running during WinPE for my Litetouch deployments. I had this working in SCCM and now I want to get it working in MDT.
I have the Netcheck.exe application in my Extras folder, it ends up on the root of my X: drive.
Here is my unattend.xml file.
<?xml version="1.0" encoding="utf-8"?>
<unattend xmlns="urn:schemas-microsoft-com:unattend">
<settings pass="windowsPE">
<component name="Microsoft-Windows-Setup" processorArchitecture="x86" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State">
<Display>
<ColorDepth>32</ColorDepth>
<HorizontalResolution>1024</HorizontalResolution>
<RefreshRate>60</RefreshRate>
<VerticalResolution>768</VerticalResolution>
</Display>
<RunAsynchronous>
<RunAsynchronousCommand>
<Order>1</Order>
<Path>X:\NetCheck.exe</Path>
<Description>Run the NetCheck app</Description>
</RunAsynchronousCommand>
<RunAsynchronousCommand>
<Order>2</Order>
<Path>wscript.exe X:\Deploy\Scripts\LiteTouch.wsf</Path>
<Description>Lite Touch PE</Description>
</RunAsynchronousCommand>
</RunAsynchronous>
</component>
</settings>
<cpi:offlineImage cpi:source="" xmlns:cpi="urn:schemas-microsoft-com:cpi" />
</unattend>
By default, the LiteTouch.wsf file is launched synchronously, so my NetCheck app won't run until the LiteTouch wizard closes (this won't work for my needs). Also, synchronous commands run before asynchronous commands. Therefore, I need to run both my custom NetCheck app and the LiteTouch wizard asynchronously.
However, every time I use the unattend file that I pasted above, WinPE boots and then immediately reboots. If I am quick and hit F8 I get a command prompt, and then my Netcheck app and the Litetouch wizard both run (sweet!), but then when I close that cmd
prompt, WinPE shuts down (lame).
The wpeinit.log is shown below. Everything seems to look good, so what is wrong? How can I accomplish this?
2014-09-09 14:23:02.588, Info WPEINIT is processing the unattend file [X:\unattend.xml]
2014-09-09 14:23:02.588, Info Spent 141ms initializing removable media before unattend search
2014-09-09 14:23:02.604, Info ==== Initializing Display Settings ====
2014-09-09 14:23:02.620, Info Setting display resolution 1024x768x32@60: 0x00000000
2014-09-09 14:23:02.620, Info STATUS: SUCCESS (0x00000000)
2014-09-09 14:23:02.620, Info ==== Initializing Computer Name ====
2014-09-09 14:23:02.620, Info Generating a random computer name
2014-09-09 14:23:02.620, Info No computer name specified, generating a random name.
2014-09-09 14:23:02.620, Info Renaming computer to MININT-9KBBIFF.
2014-09-09 14:23:02.620, Info Waiting on the profiling mutex handle
2014-09-09 14:23:02.620, Info Acquired profiling mutex
2014-09-09 14:23:02.620, Info Service winmgmt disable: 0x00000000
2014-09-09 14:23:02.620, Info Service winmgmt stop: 0x00000000
2014-09-09 14:23:02.620, Info Service winmgmt enable: 0x00000000
2014-09-09 14:23:02.620, Info Released profiling mutex
2014-09-09 14:23:02.620, Info STATUS: SUCCESS (0x00000000)
2014-09-09 14:23:02.620, Info ==== Initializing Virtual Memory Paging File ====
2014-09-09 14:23:02.620, Info No WinPE page file setting specified
2014-09-09 14:23:02.635, Info STATUS: SUCCESS (0x00000001)
2014-09-09 14:23:02.635, Info ==== Initializing Optional Components ====
2014-09-09 14:23:02.635, Info WinPE optional component 'Microsoft-WinPE-HTA' is present
2014-09-09 14:23:02.651, Info WinPE optional component 'Microsoft-WinPE-MDAC' is present
2014-09-09 14:23:02.651, Info WinPE optional component 'Microsoft-WinPE-WMI' is present
2014-09-09 14:23:02.667, Info WinPE optional component 'Microsoft-WinPE-WSH' is present
2014-09-09 14:23:02.682, Info STATUS: SUCCESS (0x00000000)
2014-09-09 14:23:02.682, Info ==== Initializing Network Access and Applying Configuration ====
2014-09-09 14:23:02.682, Info No EnableNetwork unattend setting was specified; the default action for this context is to enable networking support.
2014-09-09 14:23:02.682, Info Global handle for profiling mutex is non-null
2014-09-09 14:23:02.682, Info Waiting on the profiling mutex handle
2014-09-09 14:23:02.682, Info Acquired profiling mutex
2014-09-09 14:23:02.997, Info Install MS_MSCLIENT: 0x0004a020
2014-09-09 14:23:02.997, Info Install MS_NETBIOS: 0x0004a020
2014-09-09 14:23:03.138, Info Install MS_SMB: 0x0004a020
2014-09-09 14:23:03.326, Info Install MS_TCPIP6: 0x0004a020
2014-09-09 14:23:03.702, Info Install MS_TCPIP: 0x0004a020
2014-09-09 14:23:03.702, Info Service dhcp start: 0x00000000
2014-09-09 14:23:03.702, Info Service lmhosts start: 0x00000000
2014-09-09 14:23:03.827, Info Service ikeext start: 0x00000000
2014-09-09 14:23:03.921, Info Service mpssvc start: 0x00000000
2014-09-09 14:23:03.921, Info Service mrxsmb10 start: 0x00000000
2014-09-09 14:23:03.921, Info Released profiling mutex
2014-09-09 14:23:03.921, Info Spent 1250ms installing network components
2014-09-09 14:23:04.108, Info Installing device root\kdnic X:\windows\INF\kdnic.inf succeeded
2014-09-09 14:23:04.608, Info Installing device vmbus\{f8615163-df3e-46c5-913f-f2d2f965ed0e} X:\windows\INF\wnetvsc.inf succeeded
2014-09-09 14:23:04.670, Info Spent 750ms installing network drivers
2014-09-09 14:23:09.768, Info QueryAdapterStatus: found operational adapter with DHCP address assigned.
2014-09-09 14:23:09.768, Info Spent 5062ms confirming network initialization; status 0x00000000
2014-09-09 14:23:09.768, Info STATUS: SUCCESS (0x00000000)
2014-09-09 14:23:09.768, Info ==== Applying Firewall Settings ====
2014-09-09 14:23:09.768, Info STATUS: SUCCESS (0x00000001)
2014-09-09 14:23:09.768, Info ==== Executing Synchronous User-Provided Commands ====
2014-09-09 14:23:09.768, Info STATUS: SUCCESS (0x00000001)
2014-09-09 14:23:09.768, Info ==== Executing Asynchronous User-Provided Commands ====
2014-09-09 14:23:09.768, Info Parsing RunAsynchronousCommand: 2 entries
2014-09-09 14:23:09.768, Info Command 0: 0x00000000
2014-09-09 14:23:09.768, Info Successfully executed command 'X:\NetCheck.exe'
2014-09-09 14:23:09.768, Info Command 1: 0x00000000
2014-09-09 14:23:09.784, Info Successfully executed command 'wscript.exe X:\Deploy\Scripts\LiteTouch.wsf'
2014-09-09 14:23:09.784, Info STATUS: SUCCESS (0x00000000)
2014-09-09 14:23:09.784, Info ==== Applying Shutdown Settings ====
2014-09-09 14:23:09.784, Info No shutdown setting was specified
2014-09-09 14:23:09.784, Info STATUS: SUCCESS (0x00000001)
Here is how I ended up solving my problem.
Change the unattend file to look like this:
<?xml version="1.0" encoding="utf-8"?>
<unattend xmlns="urn:schemas-microsoft-com:unattend">
<settings pass="windowsPE">
<component name="Microsoft-Windows-Setup" processorArchitecture="x86" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State">
<Display>
<ColorDepth>32</ColorDepth>
<HorizontalResolution>1024</HorizontalResolution>
<RefreshRate>60</RefreshRate>
<VerticalResolution>768</VerticalResolution>
</Display>
<RunSynchronous>
<RunSynchronousCommand>
<Order>1</Order>
<Path>wscript.exe X:\Deploy.vbs</Path>
<Description>Run the .vbs file that kicks off Netcheck and the Litetouch wizard</Description>
</RunSynchronousCommand>
</RunSynchronous>
</component>
</settings>
<cpi:offlineImage cpi:source="" xmlns:cpi="urn:schemas-microsoft-com:cpi" />
</unattend>
Here is the Deploy.vbs:
' Launch NetCheck hidden (window style 0) without waiting for it (False),
' then launch the LiteTouch wizard and wait for it to finish (True).
Set objShell = WScript.CreateObject("WScript.Shell")
objShell.Run "X:\NetCheck.exe", 0, False
objShell.Run "wscript.exe X:\Deploy\Scripts\LiteTouch.wsf", 0, True
-
Asynchronous RFC calling using BPM
Hi to all,
I have this scenario:
FILE -> XI -> RFC -> XI
I want to use the BPM to do this, thus I have built this BPM:
START -> RECEIVE -> BLOCK1(SEND) -> BLOCK1(RECEIVE) -> STOP
My problem is that I want to use an asynchronous scenario, and thus, when XI sends the message to the RFC, the BPM seems to become inactive and is not able to receive the RFC response. How can I solve this problem? How can I keep the BPM active to receive the RFC response?
Thanks to all!
Hey,
I suppose you are trying to do a similar scenario.
The scenario must be executed asynchronously, but there needs to be an automatic confirmation that the business data was successfully processed (the equivalent of an application acknowledgement). Cross-component BPM (ccBPM) will be used to process the confirmation message.
How To Use BAPI wrappers in asynchronous scenarios with ccBPM
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/59ef6011-0d01-0010-bfb0-b51381e00509
<b>Cheers,
*RAJ*
*REWARD POINTS IF FOUND USEFUL*</b> -
Message does not come back in a sequential asynchronous scenario
I have a scenario like this; all of the messages are executed asynchronously, called from SPROXY:
Request
clientZET029 --> NumberRangeRequest_Out XI NumberRangeRequest_In --> clientOER030
Response
clientOER030 --> NumberRangeConfirmation_Out XI clientZET029 NumberRangeConfirmation_In--> clientA
When I try to carry out the whole scenario I get the error "XI Error NO_RECEIVER_CASE_ASYNC.RCVR_DETERMINATION".
I execute the NumberRangeRequest_Out service from SPROXY on ZET029 and expect a return message from OER030. The request arrives at OER030, but the confirmation does not come back; it gives an error. As you can see, the sender of all messages is ZET029, which I think is the problem.
Here are my SXMB_MONI logs:
Processed successfully ZET029 NumberRangeRequest_Out NumberRangeRequest_Out SENDER PROXY IENGINE
Processed successfully ZET029 NumberRangeRequest_Out OER030 NumberRangeRequest_In CENTRAL IENGINE IENGINE
Processed successfully ZET029 OER030 NumberRangeRequest_In RECEIVERIENGINE PROXY
System Error - Manual Restart Possible ZET029 NumberRangeConfirmation_Out CENTRAL IENGINE
Processed successfully ZET029 NumberRangeConfirmation_Out NumberRangeConfirmation_Out SENDER PROXY IENGINE
Thanks
I can't understand how, in an asynchronous scenario, the response will come back to the sender?
-
Is a call library function node in LabView 8.6 synchronous or asynchronous?
I tried setting a sub-VI with a call library function node in it to "subroutine" execution status. The error list indicated that the Call Library Function node in the block diagram was an asynchronous node. The LabView on-line help content indicates
"...CINs and shared libraries execute synchronously, so LabVIEW cannot use the execution thread used by these objects for any other tasks. "
Does anyone know for sure what the status of a Call Library Function is? Does it depend upon the specific code in the DLL being called?
Thanks,
Mike H.
Based on the help page it looks like it should execute asynchronously.
The thing in the description that leads me to believe they execute asynchronously is that you can configure the library to run as a multi-threaded operation.
Please take a look here to see the difference between synchronous and asynchronous execution.
Since the code even has the ability to be multi-threaded, you can consider it as running in parallel to your other code.
Any data returned is passed to the thread that called that function.
Cory K -
Is replication asynchronous ?
I have a client with a Near Cache, and with the configuration specified as "Client Configuration" below, and two servers with the configuration specified as "Server Configuration" below (note: added the backup-count node). When an object is put into the cache on the client side, it is reasonable to expect the object to be synchronously sent to one of the servers.
The question is, does the replication of that object to the second server get done synchronously or asynchronously? Is there a way to explicitly specify that?
Thanks
Adsen
Client Configuration:
<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
<caching-scheme-mapping>
<cache-mapping>
<cache-name>*</cache-name>
<scheme-name>near-scheme</scheme-name>
</cache-mapping>
</caching-scheme-mapping>
<caching-schemes>
<near-scheme>
<scheme-name>near-scheme</scheme-name>
<front-scheme>
<local-scheme>
<scheme-ref>local-cache</scheme-ref>
</local-scheme>
</front-scheme>
<back-scheme>
<distributed-scheme>
<scheme-ref>distributed-cache</scheme-ref>
</distributed-scheme>
</back-scheme>
</near-scheme>
<!--
Default Distributed caching scheme.
-->
<local-scheme>
<scheme-name>local-cache</scheme-name>
<service-name>LocalCache</service-name>
<eviction-policy>LRU</eviction-policy>
<high-units>0</high-units>
<low-units>0</low-units>
<unit-calculator>FIXED</unit-calculator>
<expiry-delay>0</expiry-delay>
<flush-delay>0</flush-delay>
<pre-load>false</pre-load>
</local-scheme>
<distributed-scheme>
<scheme-name>distributed-cache</scheme-name>
<service-name>DistributedCache</service-name>
<backing-map-scheme>
<local-scheme>
<scheme-ref>unlimited-backing-map</scheme-ref>
</local-scheme>
</backing-map-scheme>
<autostart>true</autostart>
</distributed-scheme>
<local-scheme>
<scheme-name>unlimited-backing-map</scheme-name>
</local-scheme>
</caching-schemes>
</cache-config>
Server Configuration:
<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
<caching-scheme-mapping>
<cache-mapping>
<cache-name>*</cache-name>
<scheme-name>distributed-cache</scheme-name>
</cache-mapping>
</caching-scheme-mapping>
<caching-schemes>
<!--
Distributed caching scheme.
-->
<distributed-scheme>
<scheme-name>distributed-cache</scheme-name>
<service-name>DistributedCache</service-name>
<!-- To use POF serialization for this partitioned service,
uncomment the following section -->
<!--
<serializer>
<class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
</serializer>
-->
<backing-map-scheme>
<local-scheme>
<scheme-ref>unlimited-backing-map</scheme-ref>
</local-scheme>
</backing-map-scheme>
<autostart>true</autostart>
<backup-count>1</backup-count>
</distributed-scheme>
<!--
Backing map scheme definition used by all the caches that do
not require any eviction policies
-->
<local-scheme>
<scheme-name>unlimited-backing-map</scheme-name>
</local-scheme>
</caching-schemes>
</cache-config>
user10307225 wrote:
Hi Aleks,
Thanks for your reply.
Is there any way to configure Coherence to do the replication asynchronously to avoid the replication delays?
Hi Adsen,
That is a very bad idea, as it means your code can return before your data is safe. If the primary node died after your code returned but before the data was backed up, your data would be lost.
The main strong point of Coherence is that it safeguards your data, and what you propose makes that theoretically impossible.
If you have concerns about performance, Coherence has a multitude of options allowing you to tune the system, but until those performance issues manifest, don't worry about them. And of course stress test your system frequently during development to ensure that if the problems are valid, they do manifest :-)
Also the usual way of improving throughput in high performance systems is batching.
For improving latency, you could
- try to minimize the traffic you need, e.g. send an entry-processor doing the modification instead of sending your entire cached value over the network
- try to have your synchronous code minimal but still retain data safety but e.g. execute the logic asynchronously (with data safety ensuring that it will be executed), e.g. send a command, which will be enqueued synchronously but executed asynchronously (the command pattern can be found in the Coherence Incubator: http://coherence.oracle.com/display/INCUBATOR/Command+Pattern )
Best regards,
Robert -
Sharepoint Foundation Search errors SBS 2011
I have a SBS 2011 Server that is having SharePoint Foundation Search Errors and my backup is not working because of it. The service will not start. Any ideas as to how to fix this would be much appreciated.
Error 1
The gatherer is unable to read the registry ContentSourceID missing..
Context: Application 'Search_index_file_on_the_search_server', Catalog 'Search'
Details:
The operation completed successfully.
(0x00000000)
Error 2
Component: add2c3f0-cc4c-41ae-aa1e-ce8ac2088d23
An index corruption of type WidSetFormat was detected in catalog Search. Stack trace is
tquery offset=0x0000000000034F68 (0x000007FEBE804F68)
tquery offset=0x000000000001E39D (0x000007FEBE7EE39D)
tquery offset=0x00000000000EDF54 (0x000007FEBE8BDF54)
tquery offset=0x000000000012C5B4 (0x000007FEBE8FC5B4)
tquery offset=0x000000000012CD77 (0x000007FEBE8FCD77)
tquery offset=0x0000000000124AF6 (0x000007FEBE8F4AF6)
tquery offset=0x0000000000125373 (0x000007FEBE8F5373)
tquery offset=0x0000000000126F9D (0x000007FEBE8F6F9D)
Error 3
The plug-in in SPSearch4.Indexer.1 cannot be initialized.
Context: Application 'add2c3f0-cc4c-41ae-aa1e-ce8ac2088d23', Catalog 'Search'
Details:
(0xc0041800)
Error 4
Content index on Component: add2c3f0-cc4c-41ae-aa1e-ce8ac2088d23
could not be initialized. Error Search.The content index is corrupt. 0xc0041800
Error 5
The application cannot be initialized.
Context: Application 'Search_index_file_on_the_search_server'
Details:
Unspecified error
(0x80004005)
Error 6
The gatherer object cannot be initialized.
Context: Application 'Search_index_file_on_the_search_server', Catalog 'Search'
Details:
Unspecified error
(0x80004005)
Critical Error
The Execute method of job definition Microsoft.SharePoint.Search.Administration.SPSearchJobDefinition (ID 776e67a1-4b09-4da4-8544-25d0b287f49e) threw an exception. More information is included below.
The device is not ready.
Larry,
I have an online backup that is backing up the data, the SBS backup worked the day before, but failed last night.
The forum said I couldn't post a link, so I modified it below. Spaces are slashes.
www dot altaro dot com hyper-v sbs-2011-backups-failing-vss-error-0x800423f3-event-id-8230-spfarm-spsearch
Below are two of the errors that I'm getting. Maybe if I fix SharePoint Search, that would fix my problem?
Volume Shadow Copy Service error: Failed resolving account spsearch with status 1376. Check connection to domain controller and VssAccessControl registry key.
Operation:
Gather writers' status
Executing Asynchronous Operation
Context:
Current State: GatherWriterStatus
Error-specific details:
Error: NetLocalGroupGetMemebers(spsearch), 0x80070560, The specified local group does not exist.
The backup operation that started at '2013-08-24T03:00:33.076000000Z' has failed because the Volume Shadow Copy Service operation to create a shadow copy of the volumes being backed up failed with following error code '2155348129'. Please review the
event details for a solution, and then rerun the backup operation once the issue is resolved.
Thanks,
John -
Hi Expert,
By JOB_OPEN, JOB_SUBMIT, JOB_CLOSE, we can schedule a job.
When the first two FMs are executed, the job is still in 'Scheduled' status. Only when JOB_CLOSE is executed will the job be in 'Released' status.
In my application, I need to check the job status. If the job is in 'Scheduled' status, I assume the job creation was not successful; otherwise, it was successful.
But the issue is: if the job is in 'Scheduled' status, how can I know whether all three FMs were executed during job creation, or only JOB_OPEN and JOB_SUBMIT were executed while JOB_CLOSE has not yet started?
In the latter case, the application should wait some time and check the job again later.
Thanks for your support
Best Regards, Johnney.
Hi,
Suppose there are two APIs.
In API1, there are four steps:
1. call FM JOB_OPEN
2. call FM JOB_SUBMIT
3. Save the Jobname and Jobcount in the DB
4. Call JOB_CLOSE
In API2, there are two steps:
1. Get the Jobname and Jobcount from the DB
2. Call FM BP_JOBLIST_STATUS_GET to get the job status
API1 and API2 are executed asynchronously. Consider the following case:
After step 3 is executed (step 4 has not yet started), API2 starts to run. The job status that API2 gets is 'Scheduled'.
This is not correct, because the job creation is still not finished. API2 should wait some time and check later.
So my question is: how does API2 know that the job creation is not finished?
Thanks & Best Regards, Johnney. -
DPM 2012 Backing up a VM on Server 2012 Hyper-V CSV Host - Not Working with Hardware VSS
Hi All,
I'm trying to back up a VM on a 2012 cluster. I can do it using the system VSS provider, but when I try to use the hardware provider (Dell EqualLogic), it doesn't work. DPM will sit for a while trying and then report a retryable VSS error.
The only error I'm seeing on the Host is the following:
Event ID 12297
Volume Shadow Copy Service error: The I/O writes cannot be flushed during the shadow copy creation period on volume \\?\Volume{3312155e-569a-42f3-ab3a-baff892a2681}\. The volume index in the shadow copy set is 0. Error details: Open[0x00000000, The operation completed successfully.
], Flush[0x80042313, The shadow copy provider timed out while flushing data to the volume being shadow copied. This is probably due to excessive activity on the volume. Try again later when the volume is not being used so heavily.
], Release[0x00000000, The operation completed successfully.
], OnRun[0x00000000, The operation completed successfully.
Operation:
Executing Asynchronous Operation
Context:
Current State: DoSnapshotSet
I don't know where to go from here. There is no activity on the CSV (this is the only VM on it, and both the CSV and VM were created specifically for testing this issue).
Does anyone have any ideas? I'm desperate.
Update:
Ok, so I can Take DPM out of the picture. Trying to do a snapshot from the Dell Auto-Snapshot manager, I get the same errors. But I also get a bit more information:
Started at 3:02:47 PM
Gathering Information...
Phase 1: Checking pre-requisites... (3:02:47 PM)
Phase 2: Initializing Smart Copy Operation (3:02:47 PM)
Adding components from cluster node SB-BLADE01
Adding components from cluster node SB-BLADE04
Adding components from cluster node SB-BLADE02
Retrieving writer information
Phase 3: Adding Components and Volumes (3:02:52 PM)
Adding components to the Smart Copy Set
Adding volumes to the Smart Copy Set
Phase 4: Creating Smart Copy (3:02:52 PM)
Creating Smart Copy Set
An error occurred:
An error occurred during phase: Creating Smart Copy
Exception from HRESULT: 0x80042313.
Creating Smart Copy Set
An error occurred:
An error occurred during phase: Creating Smart Copy
Exception from HRESULT: 0x80042313.
An error occurred:
Writer 'Microsoft Hyper-V VSS Writer' reported an error: 'VSS_WS_FAILED_AT_FREEZE'. Check the application component to verify it is in a valid state for the operation.
An error occurred:
One or more errors occurred during the operation. Check the detailed progress updates for details.
An error occurred:
Smart Copy creation failed.
Source: Creating Smart Copy Set
An error occurred:
An error occurred during phase: Creating Smart Copy
Exception from HRESULT: 0x80042313.
An error occurred:
Writer 'Microsoft Hyper-V VSS Writer' reported an error: 'VSS_WS_FAILED_AT_FREEZE'. Check the application component to verify it is in a valid state for the operation.
An error occurred:
One or more errors occurred during the operation. Check the detailed progress updates for details.
Error: VSS can no longer flush I/O writes.
Thanks,
John
I had a similar issue with an environment that had previously been working with the Dell HIT configured correctly. As we added a third node to the cluster I began seeing this problem.
In my case I had the HIT maximum sessions per volume set to 6 and the maximum sessions per volume slice set to 2, and the CSV was using a LUN/volume on the SAN that was split across 2 members.
When the backup takes place and Dell HIT is configured to use SAN snapshots, the vss-control iSCSI target is used, which in my case exceeded my limit for maximum connections per volume, as I'm using 2 paths per Hyper-V node with MPIO (this is my current theory).
Once I'd modified these settings I could then back up the VHDs on that CSV again.
Hope this helps. -
Error 0x80070057 when configuring Windows 7 Backup even after repair installation !
I've started getting Error 0x80070057 when trying to configure Windows 7 SP1 Backup "The parameter is incorrect".
Already did the following:
- http://support.microsoft.com/kb/982736/pl
- chkdsk c: /F /R (no errors)
- sfc /scannow (no errors)
- registry tweaks
- removing old registry backup entries
- moved partition to a different disk
- in-place repair install !!!
AND NOTHING!
Nothing wrong with sfc /scannow.
Event viewer show such errors:
Express Writer component error: cannot add Express Writer components from the System directory.
Operation:
Initializing Writer
Gathering Writer Data
Executing Asynchronous Operation
Context:
File Path: C:\windows\Vss\Writers\System\
Execution Context: Requestor
Current State: GatherWriterMetadata
Error-specific details:
Error: FindFirstFile(C:\windows\Vss\Writers\System\*.xml), 0x80070002, The system cannot find the file specified.
Volume Shadow Copy Service error: Unexpected error querying for the IVssWriterCallback interface. hr = 0x80070005, Access is denied.
This is often caused by incorrect security settings in either the writer or requestor process.
Operation:
Gathering Writer Data
Context:
Writer Class Id: {e8132975-6f93-4464-a53e-1050253ae220}
Writer Name: System Writer
Writer Instance ID: {06036e1a-27d9-4b22-8aa2-1975e33169ca}
Volume Shadow Copy Service error: Unexpected error calling routine RegSetValueExW(0x000001ec,SYSTEM\CurrentControlSet\Services\VSS\Diag\Registry Writer,0,REG_BINARY,000000000297EEB0.72). hr = 0x80070005, Access is denied.
Operation:
BackupShutdown Event
Context:
Execution Context: Writer
Writer Class Id: {afbab4a2-367d-4d15-a586-71dbb18f8485}
Writer Name: Registry Writer
Writer Instance ID: {232c094d-e556-42cc-9c03-4544badc76e0}
Volume Shadow Copy Service error: Unexpected error calling routine RegSetValueExW(0x0000031c,SYSTEM\CurrentControlSet\Services\VSS\Diag\VssvcPublisher,0,REG_BINARY,00000000041EF220.72). hr = 0x80070005, Access is denied.
Volume Shadow Copy Service error: Unexpected error calling routine RegSetValueExW(0x000001f4,SYSTEM\CurrentControlSet\Services\VSS\Diag\COM+ REGDB Writer,0,REG_BINARY,00000000017BF3E0.72). hr = 0x80070005, Access is denied.
Operation:
BackupShutdown Event
Context:
Execution Context: Writer
Writer Class Id: {542da469-d3e1-473c-9f4f-7847f01fc64f}
Writer Name: COM+ REGDB Writer
Writer Instance ID: {5f6f2b71-85e8-4f43-b3e2-667f1ea0daaf} -
Large number VSS errors "The specified network resource or device is no longer available."
I have a 2 node Hyper V Cluster backed up via Veeam Off-Host Proxy. I've already logged, without answer, a thread about the Off-Host Proxy server randomly (I guess) creating new duplicate iSCSI connections.
Today I have a single VM that won't back up. On inspection of the hosting server's Application log, I see a large number of VSS errors being logged, which I believe may be related to the iSCSI issue being logged by the SAN.
I'm not sure how to "show" the issue without a giant wall of events, but about half a dozen different events are logged whenever this happens. For instance, last night 60/61 VMs backed up fine, but these errors were logged constantly while the backup ran...
The question is: is this a Windows/VSS issue, a Veeam issue, or an EqualLogic issue?
Also, how do I find out what '\\?\Volume{06fba49e-9519-11e4-80cc-000af75dc050}\' actually is?
In order of how they come I guess:
Log Name: Application
Source: VSS
Date: 6/01/2015 7:28:19 AM
Event ID: 8229
Task Category: None
Level: Warning
Keywords: Classic
User: N/A
Computer: HOST02.domain.private
Description:
A VSS writer has rejected an event with error 0x800423f3, The writer experienced a transient error. If the backup process is retried,
the error may not reoccur.
. Changes that the writer made to the writer components while handling the event will not be available to the requester. Check the event log for related events from the application hosting the VSS writer.
Operation:
PrepareForSnapshot Event
Context:
Execution Context: Writer
Writer Class Id: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
Writer Name: Microsoft Hyper-V VSS Writer
Writer Instance ID: {881f5207-f769-4b40-986a-c6bd56d8aa1e}
Command Line: C:\Windows\system32\vmms.exe
Process ID: 3268
Log Name: Application
Source: VSS
Date: 6/01/2015 7:29:31 AM
Event ID: 8193
Task Category: None
Level: Error
Keywords: Classic
User: N/A
Computer: HOST02.domain.private
Description:
Volume Shadow Copy Service error: Unexpected error calling routine Error calling CreateFile on volume '\\?\Volume{06fba49e-9519-11e4-80cc-000af75dc050}\'. hr = 0x80070037, The specified network resource or device is no longer available.
Operation:
Check If Volume Is Supported by Provider
Context:
Execution Context: Coordinator
Provider ID: {d4689bdf-7b60-4f6e-9afb-2d13c01b12ea}
Volume Name: \\?\Volume{06fba49e-9519-11e4-80cc-000af75dc050}\
Log Name: Application
Source: EqualLogic
Date: 6/01/2015 7:29:43 AM
Event ID: 4001
Task Category: VSS
Level: Error
Keywords: Classic
User: N/A
Computer: HOST02.domain.private
Description:
iSCSI logout error 0xEFFF0040 from target NULL.
Log Name: Application
Source: VSS
Date: 6/01/2015 7:29:43 AM
Event ID: 12293
Task Category: None
Level: Error
Keywords: Classic
User: N/A
Computer: HOST02.domain.private
Description:
Volume Shadow Copy Service error: Error calling a routine on a Shadow Copy Provider {d4689bdf-7b60-4f6e-9afb-2d13c01b12ea}. Routine details OnLunStateChange(\\?\mpio#disk&ven_eqlogic&prod_100e-00&rev_7.0_#1&7f6ac24&0&363846433631363644434545463233334146344141353632363038303136#{53f56307-b6bf-11d0-94f2-00a0c91efb8b})
failed with error 0xefff0040 [hr = 0xefff0040].
Operation:
Notifying hardware provider to free a drive
Break with LUN mask
Delete Shadow Copies
Processing PostFinalCommitSnapshots
Executing Asynchronous Operation
Context:
Volume Name: \\?\mpio#disk&ven_eqlogic&prod_100e-00&rev_7.0_#1&7f6ac24&0&363846433631363644434545463233334146344141353632363038303136#{53f56307-b6bf-11d0-94f2-00a0c91efb8b}
Volume Name: \\?\Volume{06fba49e-9519-11e4-80cc-000af75dc050}
Snapshot ID: {3091377b-2e26-43ed-a11b-f31adbde0b1f}
Execution Context: Provider
Provider Name: Dell EqualLogic VSS HW Provider
Provider Version: 4.7.1
Provider ID: {d4689bdf-7b60-4f6e-9afb-2d13c01b12ea}
Snapshot Context: 4194336
Provider Name: Dell EqualLogic VSS HW Provider
Provider Version: 4.7.1
Provider ID: {d4689bdf-7b60-4f6e-9afb-2d13c01b12ea}
Current State: DoSnapshotSet
Log Name: Application
Source: VSS
Date: 6/01/2015 7:29:43 AM
Event ID: 12293
Task Category: None
Level: Error
Keywords: Classic
User: N/A
Computer: HOST02.domain.private
Description:
Volume Shadow Copy Service error: Error calling a routine on a Shadow Copy Provider {d4689bdf-7b60-4f6e-9afb-2d13c01b12ea}. Routine details could not free LUN [hr = 0x8004230f, The shadow copy provider had an unexpected error while trying to process the
specified operation.
Operation:
Break with LUN mask
Delete Shadow Copies
Processing PostFinalCommitSnapshots
Executing Asynchronous Operation
Context:
Volume Name: \\?\Volume{06fba49e-9519-11e4-80cc-000af75dc050}
Snapshot ID: {3091377b-2e26-43ed-a11b-f31adbde0b1f}
Execution Context: Provider
Provider Name: Dell EqualLogic VSS HW Provider
Provider Version: 4.7.1
Provider ID: {d4689bdf-7b60-4f6e-9afb-2d13c01b12ea}
Snapshot Context: 4194336
Provider Name: Dell EqualLogic VSS HW Provider
Provider Version: 4.7.1
Provider ID: {d4689bdf-7b60-4f6e-9afb-2d13c01b12ea}
Current State: DoSnapshotSet
(Below logged 3 times)
Log Name: Application
Source: VSS
Date: 6/01/2015 7:29:49 AM
Event ID: 8193
Task Category: None
Level: Error
Keywords: Classic
User: N/A
Computer: HOST02.domain.private
Description:
Volume Shadow Copy Service error: Unexpected error calling routine Error calling CreateFile on volume '\\?\Volume{06fba49e-9519-11e4-80cc-000af75dc050}\'. hr = 0x80070037, The specified network resource or device is no longer available.
Operation:
Check If Volume Is Supported by Provider
Context:
Execution Context: Coordinator
Provider ID: {d4689bdf-7b60-4f6e-9afb-2d13c01b12ea}
Volume Name: \\?\Volume{06fba49e-9519-11e4-80cc-000af75dc050}\

This is the issue we are having with redundant iSCSI connections being created:
Not sure this is a Veeam issue, just that it occurs during the Veeam backup process.
We have a server configured as an Off-Host Proxy with Veeam; it connects to the SAN storage via iSCSI, the same as the other hosts, but accesses the storage as read-only.
Randomly we will get alerts from the SAN as below:
iSCSI login to target '172.16.0.50:3260, iqn.2001-05.com.equallogic:8-661fc6-e612eedc6-32600005c9254a75-arcvmstore1-2015-01-03-14:13:37.2774.1' from initiator '172.16.0.44:64108, iqn.1991-05.com.microsoft:arcbackproxy.domain.local' failed for the following reason:
Requested target not found.
On inspection of the Off-Host Proxy's iSCSI configuration, additional inactive iSCSI connections are present, each named after an existing connection with a date appended at the end. For instance (as in the example error above) there will be:
iqn.2001-05.com.equallogic:8-661fc6-e612eedc6-32600005c9254a75-arcvmstore1
and
iqn.2001-05.com.equallogic:8-661fc6-e612eedc6-32600005c9254a75-arcvmstore1-2015-01-03-14:13:37.2774.1
It's not really causing an issue; the backups are running OK, or any issues we have aren't related. But it is getting annoying to receive alerts from the SAN about the Off-Host Proxy trying to connect to an incorrect iSCSI target.
Any ideas why this is occurring?
The obvious fix is to delete the extra connection, which stops the alert, but the issue usually reoccurs within a day or two.
Today I saw the alert above twice, and sure enough the iSCSI configuration on the Off-Host Proxy was in that state again.
Maybe these redundant connections being created are related? Maybe that's where the "The specified network resource or device is no longer available." error is coming from? -
Workflow status is "In Progress"
Hi,
I have created a workflow attached to a custom list. The problem with it is that sometimes the workflow status in the list is not updated.
All tasks associated with the workflow are completed, but the status in the list is shown as “In Progress” instead of “Completed”.
Workflow: state machine workflow
Environment: Sharepoint2013
If anyone has faced this issue or knows the solution, please share it.
Regards,
Sujeet
Hi,
According to your description, my understanding is that your state machine workflow status sometimes stays stuck on "In Progress" even when all the tasks have completed.
The steps in a state machine workflow execute asynchronously. This means they are not necessarily performed one after another, but are instead triggered by actions and states. In this case, the workflow status may simply not have been refreshed yet.
I suggest you can wait for some time and refresh the page to see if the status has changed.
If the issue still exists, I suggest you check whether the workflow has actually reached its completed state; you can also trace the execution steps by debugging.
Here are some detailed articles for your reference:
Creating SharePoint Workflow Solutions
How to Debug a workflow with Visual Studio
How To... Create and debug a state machine workflow
Thanks
Best Regards,
Jerry Guo
TechNet Community Support
Please remember to mark the replies as answers if they help, and unmark the answers if they provide no help. If you have feedback for TechNet Support, contact
[email protected] -
Parallel processing using ABAP objects
Hello friends,
I posted this in the performance tuning forum regarding a performance issue; I am reposting it here as it involves OO concepts.
The link to the previous posting:
Link: [Independent processing of elements inside internal table]
Here is the scenario,
I have an internal table with 10 independent records, and I need to process them. The processing of one record has no influence on another. With a loop, the performance issue is that the 10th record has to wait until the first 9 records have been processed, even though there is no dependency between the outputs.
Could someone suggest a way to improve the performance?
If I am not clear with the question, let me explain it with an example:
an internal table has 5 numbers, say (1, 3, 4, 6, 7),
and we are trying to find the square of each number.
In a loop, finding the square of 7 has to wait until 6 is completed, which is a waste of time.
This is related to parallel processing; I have referred to the parallel processing documents, but I want to do this conceptually.
I am not using the conventional procedural paradigm but object orientation. I have a method that performs this action. What am I supposed to do in that regard?
Comradely ,
K.Sibi
Hi,
As exemplified by Edward, there is no RFC/asynchronous support for methods of ABAP Objects as such. You would indeed need to "wrap" your method or ABAP Object in a Function Module, which you can then call with the addition "STARTING NEW TASK". Optionally, you can define a method that processes the results of the Function Module that is executed asynchronously, as demonstrated in Edward's program as well.
You do need some additional code to avoid the situation where your program takes all the available resources on the Application Server. Theoretically, you cannot bring the server or system down, as there is a system profile parameter that determines the maximum number of asynchronous tasks that the system will allow. However, in a productive environment, it would be a good idea to limit the number of asynchronous tasks started from your program so that other programs can use some as well.
Function Group SPBT contains a set of Function Modules to manage parallel processing. In particular, FM SPBT_INITIALIZE will "initialize" a Server Group and return the maximum number of Parallel Tasks, as well as the number of free ones at the time of the initialization. The other FM of interest is SPBT_GET_CURR_RESOURCE_INFO, that can be called after the Server Group has been initialized, whenever you want to "fork" a new asynchronous task. This FM will give you the number of free tasks available for Parallel Processing at the time of calling the Function Module.
Below is a code snippet showing how these Function Modules could be used, so that your program always leaves a minimum of 2 tasks for Parallel Processing, that will be available for other programs in the system.
IF md_parallel IS NOT INITIAL.
IF md_parallel_init IS INITIAL.
*----- Server Group not initialized yet => Initialize it, and get the number of tasks available
CALL FUNCTION 'SPBT_INITIALIZE'
EXPORTING
GROUP_NAME = ' '
IMPORTING
max_pbt_wps = ld_max_tasks
free_pbt_wps = ld_free_tasks
EXCEPTIONS
invalid_group_name = 1
internal_error = 2
pbt_env_already_initialized = 3
currently_no_resources_avail = 4
no_pbt_resources_found = 5
cant_init_different_pbt_groups = 6
OTHERS = 7.
md_parallel_init = 'X'.
ELSE.
*----- Server Group initialized => check how many free tasks are
*----- available in the Server Group for parallel processing
CALL FUNCTION 'SPBT_GET_CURR_RESOURCE_INFO'
IMPORTING
max_pbt_wps = ld_max_tasks
free_pbt_wps = ld_free_tasks
EXCEPTIONS
internal_error = 1
pbt_env_not_initialized_yet = 2
OTHERS = 3.
ENDIF.
IF ld_free_tasks GE 2.
*----- We have at least 2 remaining available tasks => reserve one
ld_taskid = ld_taskid + 1.
ENDIF.
ENDIF.
You may also need to program a WAIT statement, to wait until all asynchronous tasks "forked" from your program have completed their processing. Otherwise, you might find yourself in the situation where your main program has finished its processing, but some of the asynchronous tasks that it started are still running. If you do not need to report on the results of these asynchronous tasks, then that is not an issue. But, if you need to report on the success/failure of the processing performed by the asynchronous tasks, you would most likely report incomplete results in your program.
In the example where you have 10 entries to process asynchronously in an internal table, if you do not WAIT until all asynchronous tasks have completed, your program might report success/failure for only 8 of the 10 entries, because your program has completed before the asynchronous tasks for entries 9 and 10 in your internal table.
Given the complexity of Parallel Processing, you would only consider it in a customer program for situations where you have many (ie, thousands, if not tens of thousands) records to process, that the processing for each record tends to take a long time (like creating a Sales Order or Material via BAPI calls), and that you have a limited time window to process all of these records.
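To tie the pieces together, here is a minimal sketch of the "wrap in a Function Module" approach for the squaring example, including the WAIT mentioned above. Z_SQUARE_NUMBER (importing ID_NUMBER, exporting ED_SQUARE) is a hypothetical RFC-enabled Function Module you would have to create yourself; in a real program you would combine this with the SPBT resource check shown earlier.

```abap
REPORT zparallel_squares.

DATA: gt_numbers TYPE STANDARD TABLE OF i,
      gt_results TYPE STANDARD TABLE OF i,
      gd_started TYPE i,
      gd_done    TYPE i.

START-OF-SELECTION.
  DATA: ld_number TYPE i,
        ld_task   TYPE char8.

  APPEND: 1 TO gt_numbers, 3 TO gt_numbers, 4 TO gt_numbers,
          6 TO gt_numbers, 7 TO gt_numbers.

  LOOP AT gt_numbers INTO ld_number.
*   Each entry is processed in its own asynchronous task
    ld_task = sy-tabix.
    CALL FUNCTION 'Z_SQUARE_NUMBER'   "hypothetical RFC-enabled FM
      STARTING NEW TASK ld_task
      PERFORMING receive_result ON END OF TASK
      EXPORTING
        id_number             = ld_number
      EXCEPTIONS
        communication_failure = 1
        system_failure        = 2
        resource_failure      = 3.
    IF sy-subrc = 0.
      gd_started = gd_started + 1.
    ENDIF.
  ENDLOOP.

* Wait until every task that was successfully started has called back;
* without this, the program may end before all results arrive
  WAIT UNTIL gd_done >= gd_started.

*----- Callback: runs in the main session when a task finishes
FORM receive_result USING pd_taskname TYPE clike.
  DATA ld_square TYPE i.
  RECEIVE RESULTS FROM FUNCTION 'Z_SQUARE_NUMBER'
    IMPORTING
      ed_square = ld_square.
  APPEND ld_square TO gt_results.
  gd_done = gd_done + 1.
ENDFORM.
```

Note that the results arrive in completion order, not in the order of the internal table, so each result would need to carry a key if the ordering matters.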
Well, whatever your decision is, good luck. -
Invoking a specific configuration in the Integration Directory (ABAP proxy)
Hello,
I have an ABAP inbound and an ABAP outbound proxy sitting on different SAP backends. Both the inbound and outbound proxies sit on multiple systems, but there is only one message mapping and interface mapping.
In the Integration Directory I have multiple configuration scenarios corresponding to the scenario mentioned above (business systems, communication channels, sender agreements, etc.).
In the outbound proxy system, I have written a report to call the outbound proxy simply by calling EXECUTE_ASYNCHRONOUS. How do I make sure that a particular configuration in the Integration Directory is executed?
regards
kaushik
Hi,
When you generate a proxy in SPROXY for a particular interface, a few methods and classes are generated.
So in the report, when you call this outbound proxy, these generated methods are used.
For example:
CALL METHOD cl_ref->BWDATA_A_O
EXPORTING
output = WA_OUTPUT.
Here BWDATA_A_O is a method for a particular interface, so it will call that interface only.
Regards
Prabhat Sharma.
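For reference, a minimal sketch of calling a generated outbound (client) proxy from a report. ZCO_JS_OUT_PROXY, ZJS_OUT_MT, and the field path are hypothetical placeholders for whatever objects SPROXY generated in your system; which Integration Directory configuration fires is then decided on the XI side from the sender business system and interface, not by anything in the report itself.

```abap
REPORT zcall_outbound_proxy.

* Hypothetical generated proxy class and output structure;
* substitute the names SPROXY created for your interface
DATA: go_proxy  TYPE REF TO zco_js_out_proxy,
      ls_output TYPE zjs_out_mt.

START-OF-SELECTION.
  TRY.
      CREATE OBJECT go_proxy.
*     Fill the generated structure with the data to send
      ls_output-recordset-header-keyfield = 'H001'.
      CALL METHOD go_proxy->execute_asynchronous
        EXPORTING
          output = ls_output.
*     An asynchronous proxy message is only handed over to the
*     outbound queue on COMMIT WORK
      COMMIT WORK.
    CATCH cx_ai_system_fault.
*     Handle transmission errors here
      MESSAGE 'Proxy call failed' TYPE 'E'.
  ENDTRY.
```

The COMMIT WORK is the step most often forgotten: without it the message never leaves the sending system, and the monitoring (SXMB_MONI) shows nothing at all.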