Workflow - long runtime

Hi gurus,
we are experiencing slow system performance because of a running workflow. The Basis team told us that WF-BATCH is eating up resources on the server. When they checked it, it was accessing this table: CDCLS, and the program is SAPLZWF_. We have no idea how to pinpoint the cause of the slowness. Can you give us an idea of how to improve our system?
regards,
paul

Hi,
I don't see the event queue helping here.
>yes, our workflow has an object type which contains the FM that accesses the CDPOS/CDCLS tables. I think this is the one that is causing the slowness, but how can we improve its runtime? Is accessing these tables NOT recommended at all?
Well, there is probably (at least I hope) a good reason to access the change document tables. Is it custom code? Can you ask the original developer why the change documents are used? Or can you find the reason in documentation/comments? Can you ask an ABAPer to analyze the code? Can you test/debug it from SWO1? Perhaps the code is just written in a non-performance-optimized way, and with slight changes you could make it significantly faster.
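A common trap here is selecting from CDPOS (which is stored in the cluster CDCLS) without the full object key. If the code does that, one typical fix is to go through CDHDR first with the change document object class and object ID, so the access stays on the primary key, and to read the positions only for the change numbers found. A minimal sketch, assuming the workflow knows both values (the object class 'EINKBELEG' below is just a placeholder, and the exact FM interfaces should be checked in SE37):
DATA: lv_objectid TYPE cdhdr-objectid,   " concrete object key
      lv_date     TYPE cdhdr-udate,      " restrict the date range!
      lt_cdhdr    TYPE STANDARD TABLE OF cdhdr,
      lt_editpos  TYPE STANDARD TABLE OF cdshw.
FIELD-SYMBOLS: <ls_cdhdr> TYPE cdhdr.

" Read only the headers for one object - primary key access on CDHDR.
CALL FUNCTION 'CHANGEDOCUMENT_READ_HEADERS'
  EXPORTING
    objectclass       = 'EINKBELEG'      " placeholder object class
    objectid          = lv_objectid
    date_of_change    = lv_date
  TABLES
    i_cdhdr           = lt_cdhdr
  EXCEPTIONS
    no_position_found = 1
    OTHERS            = 2.

" Read positions (CDPOS/CDCLS) only for the change numbers found.
LOOP AT lt_cdhdr ASSIGNING <ls_cdhdr>.
  CALL FUNCTION 'CHANGEDOCUMENT_READ_POSITIONS'
    EXPORTING
      changenumber      = <ls_cdhdr>-changenr
    TABLES
      editpos           = lt_editpos
    EXCEPTIONS
      no_position_found = 1
      OTHERS            = 2.
ENDLOOP.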
Regards,
Karri

Similar Messages

  • Urgent - How to create instance of Business Object in workflow at runtime

    Hi all,
       I have a requirement as follows...
    1) I have a Business Object ISUPOD in which the key field is Point of Delivery.
    2) I am getting the value for Point of Delivery in step 2 of my workflow.
    3) Now I want to create an instance of the Business Object using this key field value in my workflow at runtime and use the instance in the following steps.
    How can I do this?
    Thanks,
    Sivagami

    Hi Ravi,
      Thanks for the solution...
    There is also a wizard that will generate an activity to do this: just go to Wizards -> Include "Create Object Reference", which will create the task with the BO and method you refer to.
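    If you would rather do it in your own code (e.g. in a method of your object type) than with the wizard, the classic BOR macros from include <cntn01> can create the instance at runtime. A minimal sketch, assuming the Point of Delivery key is already in lv_pod (the key type below is an assumption - check the key field of ISUPOD in SWO1):
    INCLUDE <cntn01>.                        " BOR container/object macros
    DATA: lo_isupod TYPE swc_object,
          lv_pod    TYPE swotobjid-objkey.   " assumption: key fits here
    " Create a runtime instance of BO ISUPOD from its key value.
    swc_create_object lo_isupod 'ISUPOD' lv_pod.
    IF sy-subrc <> 0.
      " instance could not be created, e.g. the key does not exist
    ENDIF.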
    Thanks,
    Sivagami

  • How to kill a workflow in runtime?

    Friends, please let me know how to kill a workflow at runtime.

    Hi,
    You can use SAP_WAPI_SET_WORKITEM_STATUS to logically delete the work items. Just pass the status "CANCELLED" to this function module.
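    A minimal calling sketch (assuming the work item ID is known, e.g. from SWI1; please verify the exact interface in SE37):
    DATA: lv_wiid    TYPE sww_wiid,          " work item ID, e.g. from SWI1
          lv_retcode TYPE sy-subrc,
          lt_msg     TYPE STANDARD TABLE OF swr_messag.
    CALL FUNCTION 'SAP_WAPI_SET_WORKITEM_STATUS'
      EXPORTING
        workitem_id   = lv_wiid
        status        = 'CANCELLED'          " -> logically deleted
        do_commit     = 'X'
      IMPORTING
        return_code   = lv_retcode
      TABLES
        message_lines = lt_msg.
    IF lv_retcode <> 0.
      " cancellation failed - check lt_msg for details
    ENDIF.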
    Logical deletion means the work item is removed from processing but can still be archived.
    Report RSWWWIDE (transaction SWWL) deletes work items from the tables without archiving, i.e. the work items are deleted permanently. Therefore, this report should not be used in a production system.
    Thanks
    Yogesh Sharma

  • Long runtime report SMIGR_CREATE_DDL

    Hi SAP Experts.
    I am migrating an SAP ERP 6.0 SR3 system from 32-bit to x64 with a system copy (export/import). But the report SMIGR_CREATE_DDL has a long runtime and doesn't finish.
    How can I solve the problem?
    Best regards.
    Luis Gomez.

    Hi
    As far as I know, the report is primarily needed only on BI systems. As long as you don't have partitioned tables or bitmap indexes, you don't have to run SMIGR_CREATE_DDL; you will only end up with an empty directory.
    But to troubleshoot your problem, can you please tell us which database/version you have? Can you see which SQL statements are running?
    Best regards
    Michael
    Edit: I just tested the report on an ERP 6.0 system (on Oracle 10.2.0.2); it took ~2 hrs to run and the output was empty.

  • CDB Upgrade 4.0 - 5.0: Long Runtime

    Hello all,
    We are in the middle of a CRM upgrade from 4.0 -> 7.0 and are currently doing the CDB upgrade from 4.0 -> 5.0. As part of the segment download, I am downloading CAPGEN_OBJECT_WRITE, and it has created a few lakh entries in SMQ2.
    The system has been processing those entries for the last 3 days, and although they are being processed, we cannot afford such a long runtime during go-live. Did I miss something?
    Have you ever faced such scenario? Appreciate your valuable feedback on this.
    Thanks in advance,
    Regards
    Pijush

    Hi William,
    COBRAS has its limitations when it comes to internet subscribers -
    as noted in the link: Internet subscribers, Bridge, AMIS, SMTP users and such will not be included.
    http://www.ciscounitytools.com/Applications/General/COBRAS/Help/COBRAS.htm
    You might try using Subscriber Information Dump (Tools Depot > Administration Tools > Subscriber Information Dump) and export and import to the new Unity server.
    Rick Mai

  • Web Application Designer 7 - Long Runtime

    Hi,
    I'm working in a BI 7 environment, and to fulfil the users' requirements we have developed a web template containing almost 30 queries.
    We are facing very long runtimes for that report on the web. After analysing with BI statistics, we found that the DB and OLAP are not taking very long to run; it is the front end (the web template) that is causing the delay. Another observation is that most of the time is consumed while the web template is being loaded/initialized; once it is loaded, flipping between the different tabs (reports) doesn't take much time.
    My questions are:
    What can I do to reduce the web template initialization/loading time?
    Is there any way I can get the time taken by the front end in statistics? (Currently we can get the DB and OLAP time through the BI statistics cube and consider the remaining time as front-end time, because the standard BI statistics cube cannot capture front-end time when the report is running in a browser.)
    What are the technical processes involved when information moves back from the DB to the browser?
    Your earliest help would be highly appreciated. Please let me know if you require any further information.
    Regards,
    Shabbar
    0044 (0) 7856 048 843

    Hi,
    It asks you to log in to the Portal because the output of Web Templates can be viewed only through the Enterprise Portal. This is perfectly normal. The BI-EP configuration should be set up properly, and you need a login ID and password for the Portal.
    To use WAD and design the front end, go through the link below. It should help you.
    http://help.sap.com/saphelp_nw70/helpdata/en/b2/e50138fede083de10000009b38f8cf/frameset.htm

  • Long runtimes due to P to BP integration

    Hi all,
    The folks on my project are wondering if any of the experts out there have faced the following issue before. We have raised an OSS message for it but have yet to receive a concrete solution from SAP. As such, we are exploring other avenues for resolving this matter.
    Currently, we are facing an issue where a standard infotype BAdI is causing extremely long runtimes for programs that update certain affected infotypes. The BAdI is HR_INTEGRATION_TO_BP, and SAP recommends that it be activated when E-Recruitment is implemented. A fairly detailed technical description follows.
    1. Within the IN_UPDATE method of the BAdI, a function module, HCM_P_BP_INTEGRATION, is called to create linkages between a person object and a business partner object.
    2. Function module RH_ALEOX_BUPA_WRITE_CP is called within HCM_P_BP_INTEGRATION to perform the database updates.
    3. Inside RH_ALEOX_BUPA_WRITE_CP, there are several subroutines of interest, such as CP_BP_UPDATE_SMTP_BPS and CP_BP_UPDATE_FAX_BPS. These subroutines are structured similarly and call function module BUPA_CENTRAL_EXPL_SAVE_HR to create database entries.
    4. In BUPA_CENTRAL_EXPL_SAVE_HR, subroutine ADDRESS_DATA_SAVE_ES_NOUPDTASK calls function module BUP_MEMORY_PREPARE_FOR_UPD_ADR, which is where the problem begins.
    5. BUP_MEMORY_PREPARE_FOR_UPD_ADR contains two subroutines, PREPARE_BUT020 and PREPARE_BUT021. Both contain similar code in which a LOOP is performed over a global internal table (GT_BUT020_MEM_SORT/GT_BUT021_MEM_SORT) and entries are appended to another global internal table (GT_BUT020_MEM/GT_BUT021_MEM). These tables (GT_BUT020_MEM/GT_BUT021_MEM) are used later for updates to database tables BUT020 and BUT021_FS. However, we noticed that these two tables are not cleared after the database update, which results in an ever-increasing number of entries being written to the database, even though many of them may already have been updated.
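    To make the suspected pattern concrete, it behaves roughly like the sketch below (illustrative only, with simplified names and statements - this is not the actual SAP source):
    " Illustration only: a global buffer table that grows across calls.
    DATA gt_but020_mem TYPE STANDARD TABLE OF but020.
    FORM prepare_but020 USING pt_new TYPE STANDARD TABLE.
      " Each call appends the new rows to the global buffer ...
      APPEND LINES OF pt_new TO gt_but020_mem.
      " ... and the whole buffer is written to the database again,
      " because nothing clears it after the update.
      MODIFY but020 FROM TABLE gt_but020_mem.
      " Missing: CLEAR gt_but020_mem after the successful update.
    ENDFORM.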
    If any of you are interested in seeing whether this issue affects you, and E-Recruitment is implemented in your system, simply run a program that updates infotype 0000, 0001, 0002, 0006 subtype 1, 0009 subtype 0, or 0105 subtype 0001, 0005, 0010 or 0020 to replicate this scenario. Not many infotype updates are required to see the issue; just two are enough to tell whether the tables in point 5 are being cleared. (We have observed that this issue occurs during the creation of a new personnel number, and hence a new business partner. For existing personnel numbers, the same code is executed, but the internal tables in point 5 are not populated.)
    System details: SAP ECC 6.0 (Support package: SAPKA70021) with E-Recruitment (Support package: SAPK-60017INERECRUIT) implemented.
    Thanks for reading.

    Hi Annabelle,
    We have a similar setup, but are on SAPK-60406INERECRUIT.  Although the issue does not always occur, we do have a case where the error ADDRESS_DATA_SAVE_ES is thrown.
    Did you ever resolve your issue? Hoping that the solution can help guide me.
    Thanks
    Shane

  • BPS0 - very long runtime

    Hi gurus,
    During manual planning in BPS0, long runtimes occur.
    FOX formulas are used.
    A lot of data is selected, but that is what the business needs.
    Memory is OK as far as I can see in ST02 - usually only 10-15% of resources are used, and there are no dumps, but runtimes are very long.
    I have examined the hardware, system, and DB with different methods; nothing unusual.
    Could you please give me more advice on how I can do extra checks of the system (preferably from a Basis point of view)?
    BW 3.1 - patch 22
    SEM-BW 3.5 - patch 18
    Thanks in advance
    Elena

    Hello Elena,
    you need to take a structured approach. "Examining" things is fine but usually does not lead to results quickly.
    Performance tuning works best as follows:
    1) Check statistics or run a trace
    2) Find the slowest part
    3) Make this part run faster (better, eliminate it)
    4) Back to #1 until it is fast enough
    For the first round, use the BPS statistics. They will tell you if BW data selection or BPS functions are the slowest part.
    If BW is the problem, use aggregates and do all the things to speed up BW (see course BW360).
    If BPS is the problem, check the webinar I did earlier this year: https://www.sdn.sap.com/irj/sdn/webinar?rid=/webcontent/uuid/2ad07de4-0601-0010-a58c-96b6685298f9 [original link is broken]
    Also the BPS performance guide is a must read: https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/7c85d590-0201-0010-20b5-f9d0aa10c53f
    Next would be an SQL trace and an ABAP performance trace (ST05, SE30). Check the traces for any custom coding or custom tables at the top of the runtime measurements.
    Finally, you can often see from the program names in the ABAP runtime trace which components in BPS are the slowest. See if you can match this to the configuration that's used in BPS (variables, characteristic relationships, data slices, etc.).
    Regards
    Marc
    SAP NetWeaver RIG

  • Long runtime for CU50

    Hi there, is there any way we can update the statistics for table CABN? We encounter long runtimes when executing transaction code CU50, and we found that the process keeps accessing the CABN table, which contains more than 10k characteristics records. Thanks

    If you are running on IBM i (i5/OS, OS/400), there is no need to update statistics for a database table, because that is done automatically by the database.
    If you have a slow transaction, you can analyze it with transaction ST05 and then use the Explain function on the longest-running statement. Within the Explain, there is a function "Index advised" that might help in your case.
    Kind regards,
    Christian Bartels.

  • Long runtimes while performing CCR

    Hello All,
    After running the delta report job, we found some inconsistencies for stocks. When we try to delete the entries or push them to APO (after performing the iteration), they are neither deleted nor pushed, and the run takes a long time. We don't see this issue for any elements other than stocks. Please let me know why this might be happening, and also whether there is any way we can rectify this stock inconsistency between ECC and APO.
    Thanks
    Uday

    Uday,
    I had one experience several years back with long CCR runtimes for Stock elements that might apply to you.
    For CCR, you have 6 categories of stocks to check.  If any of these stock category elements is not actually contained in any of your integration models, the CCR search can take a long time searching through ALL integration models trying to find a 'hit'.
    There are two possible solutions.  Ensure that you ONLY select CCR stock types that are contained in your CFM1 integration models.  If possible, deselect the CCR stock types that have no actual stocks within the integration models (where such stocks do not actually exist in ECC).  If this does not meet your business requirement, then try performing your CCR ONLY on the integration model(s) that contain the stock entries.  Do not leave the CCR field "Model Name" blank.
    With respect to the stock inconsistencies, 'how bad is it'?  It is common to have one or two Stock inconsistencies every day if you have hundreds of thousands of stock elements to keep in synch.  The most common reason I see for excessive stock entries in CCR is improperly coded enhancements.
    Best Regards,
    DB49

  • SharePoint Workflow Long Date does not output day of week!

    Hi all, 
    We have developed a workflow that sets a time delay and notifies users after three working days have elapsed. It detects a working day by checking whether the words "Saturday" or "Sunday" occur in the long date version of "Today".
    This workflow was running just fine, but recently we found that emails were being sent out on the weekend, and the cause was that the long date did not contain the day of the week at all!
    I have been looking at this issue for a while, and the only thing I have found is that when I change the site regional settings away from UK, the long date displays correctly again. In fact, it works for most locales, but as soon as I change the locale back to UK it stops working again. I have confirmed that this behaviour is consistent across more than one environment, and even on SharePoint Online!
    Has anyone experienced this, or does anyone have a solution? To me this seems like a bug and a call to MS, but I thought I would post here to see if anyone has had similar experiences. The only discussion I have ever found on this subject is linked below, but I do not see a resolution to the problem there. I don't think changing our locale is a solution! :-)
    https://social.msdn.microsoft.com/Forums/en-US/174e853f-69b6-46ab-a1a8-674daec898c0/workflow-lookup-on-datetime-field-format-set-to-long-date-but-missing-day-of-week?forum=sharepointcustomizationprevious
    Thanks,
    Tal

    My last reply, saying that this did not fix the problem when I reactivated the workflow, was deleted by someone from this forum post.
    I still have this issue with the calculated field and will probably end up calling Microsoft Tech Support to have them help me figure out why this works in the WSS 3.0 version of our SharePoint Help Desk app and not in the SFS 2010 version. I have the exact same formula in the calculated field in both versions. The SFS 2010 version always changes to "Sat" after any modification, and there is no place in the workflows invoked when items are changed where this calculated field, or any element of the formula, is touched.
    Alan-Seattle

  • DSO activation - long runtime

    Hello guys,
    in our BW system, activation of a DSO request takes a long time (> 1 hr), although only a small number of records (a few hundred) has been loaded. When examining the job log in SM37, I found that there are no entries for the time in question (note the gap between 08:21:18 and 09:25:55):
    08:21:13 Job started
    08:21:13 Step 001 started (program RSPROCESS, variant &0000001044887, user ID BWBATCH)
    08:21:18 Activation is running: Data target CUSDSO06, from 132,353 to 132,353
    09:25:55 Overlapping check with archived data areas for InfoProvider CUSDSO06
    09:25:55 Check not necessary, as no data has been archived for CUSDSO06
    09:25:55 Data to be activated successfully checked against archiving objects
    09:25:57 Status transition 2 / 2 to 7 / 7 completed successfully
    ... (further lines concerning SID generation etc. omitted)
    The actual activation is executed within several seconds. So I wondered what happens between 08:21 and 09:25. I tried to find out more by tracing the process (transaction ST05). The trace shows that, for each request that has ever been loaded into the DSO, some tables are read (see the excerpt below). Since there are more than 4,000 requests and reading takes around 1 second per request, the runtime sums up to more than one hour.
    Yet only ONE request has to be activated (all the other requests were already activated during the months before, so they should be quite irrelevant to the actual activation job).
    (Excerpt from the ST05 trace; columns: duration in microseconds, table, operation, records, return code, statement.)
    | Duration|Object    |Oper   | Recs|    RC|Statement
    |      226|RSBKREQUE |FETCH  |    1|     0|
    |        7|RSSTATMAN |REOPEN |     |     0|SELECT WHERE "RNR" = 'DTPR_4669H4NELXUK91DGMUIWCY6FY' AND "DTA_SOURCE" = '/BIC/B0000302' AND "DTA_SOURCE_TYPE" = 'TFSTRU' AND "DTA_DEST" = 'CUSDSO06' AND "DTA_DEST_TYPE" = 'ODSO'
    |      263|RSSTATMAN |FETCH  |    0|  1403|
    |        6|RSSTATMAN |REOPEN |     |     0|SELECT WHERE "RNR" = 'DTPR_4669H4NELXUK91DGMUIWCY6FY' AND "DTA_SOURCE" = 'SALES_CUSTOMERS_DS            FILE_HU' AND "DTA_SOURCE_TYPE" = 'DTASRC' AND "DTA_DEST" = 'CUSDSO06' AND "DTA_DEST_TYPE" = 'ODSO'
    |      232|RSSTATMAN |FETCH  |    1|  1403|
    |        5|RSSTATMAN |REOPEN |     |     0|SELECT WHERE "RNR" = 'DTPR_4669H4NELXUK91DGMUIWCY6FY' AND "DTA_SOURCE" = 'SALES_CUSTOMERS_DS            FILE_HU' AND "DTA_SOURCE_TYPE" = 'DTASRC' AND "DTA_DEST" = 'CUSDSO06' AND "DTA_DEST_TYPE" = 'ODSO'
    |      227|RSSTATMAN |FETCH  |    1|  1403|
    |        6|RSSELDONE |REOPEN |     |     0|SELECT WHERE "RNR" = 'REQU_4669GG41VH6YPLG04AL85I5GE' AND ROWNUM <= 1
    |      902|RSSELDONE |FETCH  |    1|     0|
    |        6|/ /RREQUID|REOPEN |     |     0|SELECT WHERE "SID" = 119751
    |      220|/ /RREQUID|FETCH  |    1|     0|
    |        5|RSBKREQUE |REOPEN |     |     0|SELECT WHERE "REQUID" = 119751
    |      230|RSBKREQUE |FETCH  |    1|     0|
    |        6|RSBKREQUE |REOPEN |     |     0|SELECT WHERE "REQUID" = 119751 AND ROWNUM <= 1
    |      684|RSBKREQUE |FETCH  |    1|     0|
    |        6|RSBKREQUE |REOPEN |     |     0|SELECT WHERE "REQUID" = 119751
    |      201|RSBKREQUE |FETCH  |    1|     0|
    |        6|RSBKREQUE |REOPEN |     |     0|SELECT WHERE "REQUID" = 119751
    |      194|RSBKREQUE |FETCH  |    1|     0|
    |        6|RSBKREQUE |REOPEN |     |     0|SELECT WHERE "REQUID" = 119751
    |      195|RSBKREQUE |FETCH  |    1|     0|
    |        6|RSBKREQUE |REOPEN |     |     0|SELECT WHERE "REQUID" = 119751
    |      195|RSBKREQUE |FETCH  |    1|     0|
    |        6|RSBKREQUE |REOPEN |     |     0|SELECT WHERE "REQUID" = 119751
    |      309|RSBKREQUE |FETCH  |    1|     0|
    |        6|RSBKREQUE |REOPEN |     |     0|SELECT WHERE "REQUID" = 119751
    |      264|RSBKREQUE |FETCH  |    1|  1403|
    |        7|RSBMNODES |REOPEN |     |     0|SELECT WHERE "LOGID" = 'DTPR_119751' AND "NODE" = 0
    |      410|RSBMNODES |FETCH  |    1|     0|
    |        5|RSBMNODES |REOPEN |     |     0|SELECT WHERE "LOGID" = 'DTPR_119751' AND "NODE" = 0
    |      242|RSBMNODES |FETCH  |    1|     0|
    |        6|RSBMLOG   |REOPEN |     |     0|SELECT WHERE "LOGID" = 'DTPR_119751'
    |      247|RSBMLOG   |FETCH  |    1|     0|
    |       12|RSBMNODES |REOPEN |     |     0|SELECT WHERE "LOGID" = 'DTPR_119751'
    |      761|RSBMNODES |FETCH  |   30|  1403|
    |        6|RSBMONMESS|REOPEN |     |     0|SELECT WHERE "LOGID" = 'DTPR_119751' ORDER BY "NODE" , "POSIT"
    |      645|RSBMONMESS|FETCH  |   17|  1403|
    |        6|RSBMLOGPAR|REOPEN |     |     0|SELECT WHERE "LOGID" = 'DTPR_119751'
    |      431|RSBMLOGPAR|FETCH  |    7|  1403|
    |        5|RSBKDATAP |REOPEN |     |     0|SELECT WHERE "REQUID" = 119751
    |      353|RSBKDATAP |FETCH  |    2|  1403|
    |        6|RSBKDATAP |REOPEN |     |     0|SELECT WHERE "REQUID" = 119751
    |      246|RSBKDATAP |FETCH  |    0|  1403|
    |        6|RSBKDATA_V|REOPEN |     |     0|SELECT WHERE "REQUID30" = 'DTPR_4668XBCECQQT3RHYI54CCMUQM'
    |  314.804|RSBKDATA_V|FETCH  |    0|  1403|
    |       13|RSBMNODES |REOPEN |     |     0|SELECT WHERE "LOGID" = 'DTPR_119751'
    |    1.114|RSBMNODES |FETCH  |   30|  1403|
    |        6|RSBMONMESS|REOPEN |     |     0|SELECT WHERE "LOGID" = 'DTPR_119751' ORDER BY "NODE" , "POSIT"
    |      639|RSBMONMESS|FETCH  |   17|  1403|
    |        7|RSBMLOGPAR|REOPEN |     |     0|SELECT WHERE "LOGID" = 'DTPR_119751'
    |      374|RSBMLOGPAR|FETCH  |    7|  1403|
    |        6|RSBKDATAP |REOPEN |     |     0|SELECT WHERE "REQUID" = 119751
    |      329|RSBKDATAP |FETCH  |    2|  1403|
    |        6|RSBKDATAP |REOPEN |     |     0|SELECT WHERE "REQUID" = 119751
    |      242|RSBKDATAP |FETCH  |    0|  1403|
    |        6|RSBKDATA_V|REOPEN |     |     0|SELECT WHERE "REQUID30" = 'DTPR_4668XBCECQQT3RHYI54CCMUQM'
    |  312.963|RSBKDATA_V|FETCH  |    0|  1403|
    |        8|RSBKREQUE |REOPEN |     |     0|SELECT WHERE "REQUID" = 119751
    |      589|RSBKREQUE |FETCH  |    1|     0|
    |        6|RSBKSELECT|REOPEN |     |     0|SELECT WHERE "REQUID" = '                       119751'
    |      287|RSBKSELECT|FETCH  |    0|  1403|
    |        6|RSBKREQUE |REOPEN |     |     0|SELECT WHERE "REQUID" = 119751
    Any ideas?
    Many thanks,
    Regards,
    Günter

    Check the profile parameter settings in RZ10. You can try number range buffering for better performance.

  • CAF Java-Based COs with SAP Workflows as Runtime Environment?

    Hi Folks,
    we have a big CAF project hitting us at the moment.
    Due to the heavy load we expect, we want to have the traditional SAP workflows as the runtime environment for our GPs.
    As far as I know, we can transfer WFs designed with CAF GP to the R/3 WF system, right?
    My question now is: can we still develop our CAF objects in Java and use them as we would in a portal-based GP?
    If this is possible - does it create an overhead that kills the benefit we gain by shifting from the portal to the ABAP stack?
    Any hint is appreciated!
    helge

    Hi Helge,
    Technically, there is no transfer from Guided Procedures to the R/3 Workflow, since the process definitions still stay in the GP design time. The business logic still takes place on the Java stack; only the low-level workflow operations are delegated to the R/3 Workflow.
    That's a difficult dilemma you are facing right now. As far as I know, the configuration to get Guided Procedures running against the Business Workflow (R/3 Workflow) is quite complex and time-consuming. The main pain points are callback registrations for background steps, user mapping, and endpoint configuration. These are the main tasks, but to reach the right configuration you will have to be resourceful and patient.
    Furthermore, there will be an overhead due to the RFC round trips between the Java stack and the ABAP stack.
    Hope this helps you.
    Best regards,
    David

  • Labview Built Executable - Long Runtime Startup Time

    Hi All,
    I have a LabVIEW 2011 SP1 application that has been built as an executable on a development machine running Windows 7 Professional. The application is copied to the target runtime machine, which has the LabVIEW 2011 runtime plus the other DAQmx prerequisites. The target machine is also Windows 7 Professional, a quad-core 3.3 GHz Xeon machine that, on paper, is significantly faster than the development machine.
    I run the built application on the development machine. It takes around 1-2 seconds for the Startup VI Front panel to show. The application loads and runs. So far so good.
    I run the built application on the runtime machine. It takes just over 90 seconds for the Startup VI front panel to show. During this time the exe process shows 0% CPU usage and a very small memory footprint (around 32 MB). Eventually the Startup VI is shown, and CPU usage and memory consumption climb almost immediately to around 2% and 80 MB. This is normal. From this point on, the application runs as it should.
    My question - why is the application startup time so dramatically different on the target machine? Is there some other startup process inherent in the runtime engine that is taking longer? I have tried loading the evaluation version of LabVIEW 2011 SP1 on the target machine, but this appears to make no difference. I know that this delay is more an annoyance than a show-stopper, but my clients are asking questions and it would be good to provide some answers.
    Some basic web searches have revealed others having similar problems, and often the problem is related to some Windows service or other. I have also disabled the firewall on the target PC (though it is not connected to the internet, just a small I/O network with Ethernet-chassis CompactDAQ modules) with no apparent difference. Unfortunately, I cannot disable the virus scanner due to company policies.
    Thanks for your help all.

    I have been having this problem and it is very annoying. I am unable to figure out what exactly is slowing the load time of an exe on a target machine. My target machines (3 of them) are not connected to the network. I have LabVIEW 2012 and am pretty sure all drivers (DAQmx, VISA) and the runtime have been installed correctly.
    I have noticed this issue isn't there when the entire development environment is installed. To troubleshoot, I am using my personal laptop as a test site (because I can't travel to other cities to fix it without knowing the solution); my laptop has no previous installations of LabVIEW. I install the application and drivers using the installer I build, but it exhibits the same behavior. I must note here that I did not see this problem with LabVIEW 2010, which I was previously using, but my application design has changed since. Nonetheless, I have checked the functionality of my application and am absolutely sure it has nothing to do with the slow load times.
    I am beginning to suspect some component has a bug in it for LabVIEW 2012, but I am in no position to validate that. Has anyone found a concrete solution that made their application open instantly and run?
    Thanks a bunch!
    V
    I may not be perfect, but I'm all I got!

  • SAP workflow new runtime version

    How do I make work items created in the old runtime version follow the new workflow path of the new runtime version?
    regards,
    Rendi

    Hi,
    You cannot make a running workflow switch to the new version in the middle.
    But you can restart a workflow from the beginning using the function module SWP_WI_CALLBACK_RECOVER.
    It will then execute in the newer version. However, you should not use this for work items that involve postings etc.
    Regards
    Kesari Katakam
