New developer system

Hi guys,
Just wondering if it's possible: we are looking to write an interface between our software and SAP Business One, and would like to know if there is a system we can use somewhere that will allow us to connect to it in order to do the development and testing work.
Perhaps there is an online version or a trial system that can be obtained. I'm sorry if this seems incredibly simple; we are very new to SAP.

Hi Thomson,
You can customize the SAP Business One product through the SDK tool; the interface can be .NET or Java.
Regards
shiva

Similar Messages

  • New Development system copy from Production

    Hi All,
    Hope you can help me out here with some ideas.
    We are currently planning to implement a new SAP BW development system, and we would like to make it a 1:1 copy of our current production system, meaning everything from customizing to ABAP and workbench objects.
    But what we do not want is all the transaction and master data in InfoObjects, cubes, etc.; we are only interested in the structure and objects. Therefore we can rule out "system copy".
    My question is then: are there any other ways to do this when a system copy is a no-go? I know we can do a client copy, but then we will not get all the workbench/ABAP programs, which we need.
    Any ideas on how to do this?
    Thx. in Advance

    Hi Rasmus,
    But what we do not want is all the transaction and master data in infoobject, cubes etc., we are only interested in the structure and objects. Therefore we can rule out "System copy".
    Option 1: As BI objects are cross-client, copying the client will not solve your purpose 100%. You can perform a system copy and then clean up the data in DEV.
    Option 2: You can create a new development system, then create TRs for the required configuration objects and move them to the new development system.
    Option 3: Use SLO tools to perform a container copy without any data.
    Hope this helps.
    Regards,
    Deepak Kori

  • New Development System

    Hi, just wanted to bounce some ideas off you guys.
    We are about to create a new ECC 6 system initially a sandbox system for blue print then a Dev system.
    My organisation has a global finance SAP template that we will use. So the steps would be a fresh install, then installing the global finance template; not sure how, but it will probably be a system copy.
    We also have another stream of work being undertaken in another part of the organisation, which is all being done in its own landscape. We need to split out some of this functionality to use on our system (about 2000 transports).
    I don't want to copy all of this work over to the Dev system (probably from their QA system), as it has a load of master/transaction data, although it would give us everything we require after some clean-up work.
    The other option would be to select the transports that we require and apply them; sequencing will be an issue.
    Ideally, I guess, the best solution is for all the work to be performed again in the new Dev system.
    It's a tricky one: I need the functionality but want to keep the system as clean as possible. The QA system will be built from the transports from Dev.
    Any thoughts appreciated,
    Thanks

    Ideally I guess the best solution is for all the work to be performed again in the new Dev system
    You are right on this. The problem with an "easy" solution via transports is that you may run into cross-development, with the same objects fighting each other. This could result in unexpected behavior in the system, which would turn into a diagnostics nightmare.
    We are currently in the middle of upgrading to ECC 6.0. The plan here in-house is to actually freeze all modifications for a period of one quarter, allowing production fixes only. This minimizes the number of transports that we have to manually consolidate.
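    When cherry-picking a subset of transports, the sequencing risk can be checked mechanically: apply transports in export order, and flag any object touched by more than one transport for manual review. A minimal sketch with hypothetical data (in practice the export timestamps and object lists would come from the transport logs, e.g. via SE01/STMS):

```python
from collections import defaultdict

def sequence_transports(transports):
    """Order transports by export timestamp and report object overlaps.

    transports: list of dicts with 'id', 'exported' (sortable timestamp)
    and 'objects' (set of technical object names).
    Returns (ordered ids, {object: ids touching it, in apply order}).
    """
    ordered = sorted(transports, key=lambda t: t["exported"])
    touched = defaultdict(list)
    for t in ordered:
        for obj in t["objects"]:
            touched[obj].append(t["id"])
    # Objects carried by more than one transport are the collision candidates.
    overlaps = {o: ids for o, ids in touched.items() if len(ids) > 1}
    return [t["id"] for t in ordered], overlaps
```

    Applying in the returned order preserves export sequence; anything listed in the overlaps map is where out-of-order application could silently overwrite a newer object version.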

  • Error During importing Transport Request into New Refreshed system

    Dear Experts,
    We refreshed the development BI system (DB0) with DB1, which is our new development system for Release 2, on Nov 5th, 2007. The additional transports that went successfully to production up to that date now have to be manually moved to DB1. Some of the transports that involve ODS objects are failing with the error below.
    Start of the after-import method RS_ODSO_AFTER_IMPORT for object type(s) ODSO (Activation Mode)
    Activation of Objects with Type DataStore Object
    Checking Objects with Type DataStore Object
    Checking DataStore Object ZOTCODF3
    DataStore object ZOTCODF3 is consistent
    Saving Objects with Type DataStore Object
    Internal Activation (DataStore Object )
    Preprocessing / Creation of DDIC Objects for DataStore Object ZOTCODF3
    Database table /BIC/AZOTCODF340 was deleted
    Table/view /BIC/AZOTCODF300 (type 0) from DataStore object ZOTCODF3 saved
    Creation/deletion of indexes for active table
    Table/view /BIC/AZOTCODF340 (type 4) from DataStore object ZOTCODF3 saved
    Table type /BIC/WAZOTCODF300 saved
    Table/view /BIC/VZOTCODF32 (type VIEW) from DataStore object ZOTCODF3 saved
    Change log for DataStore object "ZOTCODF3" saved successfully
    Activate all Dictionary objects ( 5 ):
    All DDIC objects have been activated / deleted
    Post Processing/Checking the Activation for DataStore Object ZOTCODF3
    Creating Export DataSource and dependent Objects
    The creation of the export DataSource failed. Reading the metadata of ZOTCODF3 ...
    Creating DataSource 8ZOTCODF3 ...
    Name or password is incorrect (repeat logon)
    Name or password is incorrect (repeat logon)
    Error when creating the export DataSource and dependent objects
    Compare active / modified versions for DataStore object &2
    Update/activation program for DataStore object ZOTCODF3 is being regenerated
    Error when activating DataStore Object ZOTCODF3
    Resetting of Incorrect Objects Back to the Active Version (DataStore Object )
    Preprocessing / Creation of DDIC Objects for DataStore Object ZOTCODF3
    Database table /BIC/AZOTCODF340 was deleted
    Table/view /BIC/AZOTCODF300 (type 0) from DataStore object ZOTCODF3 saved
    Creation/deletion of indexes for active table
    Table/view /BIC/AZOTCODF340 (type 4) from DataStore object ZOTCODF3 saved
    Table type /BIC/WAZOTCODF300 saved
    Table/view /BIC/VZOTCODF32 (type VIEW) from DataStore object ZOTCODF3 saved
    Versioning not possible for PSA 8ZOTCODF3_CA
    Error while saving change log for DataStore object ZOTCODF3
    When I tried to re-import, DDIC is getting locked. We haven't changed the DDIC
    password, as we confirmed by logging on with the password we have.
    We are on Stack 11. Please help us resolve the issue at the
    earliest.
    Regards
    Ravi Patneedi

    Hi,
    Ask the Basis people to check the Myself datamart RFC connection in SM59, which logs on via the background user (ALEREMOTE). The issue is related to an initial password.
    Regards,
    Saran
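    Note how the actual root cause ("Name or password is incorrect") sits in the middle of the after-import log, several lines before the generic activation error. When triaging many failed transports, a small scan for known root-cause phrases can save time; a hypothetical sketch (the marker strings are taken from the log above and would need adapting per system):

```python
# Phrases that usually identify the real failure, rather than follow-on errors.
ERROR_MARKERS = (
    "Name or password is incorrect",
    "Versioning not possible",
    "Error when creating the export DataSource",
)

def first_root_cause(log_lines):
    """Return the first log line matching a known root-cause marker, or None."""
    for line in log_lines:
        if any(marker in line for marker in ERROR_MARKERS):
            return line.strip()
    return None
```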

  • 2 Development Systems (4-System Landscape)???

    Dear all,
    An idea from a person on our project (because he wants to make sure that nobody, i.e. no other project, destroys anything; nonsense, but reality) is that we have to customize new objects etc. in an additional development system, AB2, and transport these into the original development system, AB1.
    Could anybody give me some pros and cons, or share experience with such a setup?
    Many thanks
    DiDi

    Pro:
    Provided access is limited in the new development system, the design should be stable and controlled
    If using a strict and unique naming convention, you can prevent collisions and overwriting
    Con:
    Expensive to allocate hardware/resources to configure a new landscape
    If objects are transported with the same technical names, collisions and overwriting will occur
    I think every situation is different, but the most typical scenario and justification for a dual landscape is having one system to support break fixes in the current production environment and another system for new projects. Making a new system for each project introduces more complexities. If you have too many projects going on simultaneously, raise the issue to the program manager or someone who has visibility over multiple projects.
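    The naming-convention point above can even be enforced mechanically before each transport from the second development system: compare the technical names developed in AB2 against what already exists in AB1. A minimal sketch (the object names and the reserved prefix are hypothetical examples):

```python
def name_collisions(ab1_objects, ab2_objects, reserved_prefix="Z2_"):
    """Check objects developed in AB2 before transporting them into AB1.

    ab1_objects / ab2_objects: sets of technical object names.
    reserved_prefix: a hypothetical namespace reserved for the second
    development system by the naming convention.
    Returns (collisions with AB1, AB2 objects violating the convention).
    """
    collisions = ab1_objects & ab2_objects
    violations = {o for o in ab2_objects if not o.startswith(reserved_prefix)}
    return collisions, violations
```

    Anything in the collisions set would overwrite an AB1 object on import; anything in the violations set is a convention breach that could collide later.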

  • Error -17600 when switching from LabVIEW Development System to LabVIEW Run-Time Engine in Adapter Configuration

    I receive an error message (code -17600) while loading my test sequence after switching from the LabVIEW Development System (2009 f3) to the LabVIEW Run-Time Engine using the Adapter Configuration.
    ErrorCode: -17600,
    Failed to load a required step's associated module.
    LabVIEW Run-Time Engine version 9.0.1f3.
    When I switch back to the LV development system, everything is OK, and the sequence loads and runs perfectly.
    My TestStand Engine Version is 2012 f1 (5.0.0.262).
    I'd appreciate any help on this issue.
    Roman

    Hi Roman,
    There are a couple of things you can try:
    1) Determine whether the LabVIEW Run-Time Engine is corrupted in some way. Create a new simple VI with no sub-VIs, using the same LabVIEW Development System you used for mass-compiling the VIs. Create a TestStand step that calls this VI and ensure it runs correctly. Now switch your LabVIEW adapter to use the Run-Time Engine and choose the "Auto detect using VI version" option.
    Check if the simple VI is loadable and runs without errors in TestStand.
    If the step generates the same error, you should try a re-install of the LabVIEW development system.
    If not, it's most likely that there is some VI you are using that is not loadable in the LabVIEW Run-Time Engine because:
    a) Some sub-VI is still not saved in the right version or bitness. Open the VI hierarchy of the top-level VI that you are calling from TestStand and examine the paths of all the sub-VIs to check whether they are in the folder you mass-compiled, and re-save any that are outside this directory.
    Also, when you try to close the top-level VI, do you get a prompt to save any unsaved files? If so, they could be the sub-VIs that are not saved in the right version. Save all of them.
    Check whether you are loading any VIs programmatically, and whether those are compiled and saved in the right version as well.
    b) There is some feature you are using in your LabVIEW code that is not supported in the LabVIEW Run-Time Engine. To check this, add your top-level VI to a LabVIEW project, create a new build specification, and build an executable from this VI:
        Right-click "Build Specifications" and choose "New->Application(EXE)".
        In the Application Properties window, select Source Files and choose the top level VI as the start-up VI.
        Save the properties.
        Right-click on the newly created build specification and choose Build.
    Run this executable (it will be run using the LabVIEW RunTime) and check if the VI has a broken arrow indicating that it cannot be loaded and run in the LabVIEW Runtime Engine.
    You might need to examine your code and find the feature which is not supported in the LabVIEW RunTime and find an alternative.
    Another thing I forgot to mention last time: if you are using 64-bit LabVIEW with 32-bit TestStand, then executing code using the LabVIEW RTE from TestStand will not work, since the 64-bit LabVIEW RTE DLL cannot be loaded by the 32-bit TestStand process.
    If none of the above steps resolve the issue, consider sharing your LabVIEW code so I can take a look.
    Regards,
    TRJ

  • "Open VI Reference" slowly in development system

    Hello,
    One of our programs has a plugin structure. We open more than 50 VIs over VI Server. This is fast in the run-time system, but slow in the development system. The problem is the function "Open VI Reference": if you try to open a reference to a very small VI, it takes nearly 2 s per VI. In the run-time system this is done in 2-4 ms.
    So the program takes ages to start in the development system.
    Is there a way to speed this up?
    Thanks
    Sletrab
    Attachments:
    Bild348.png ‏99 KB

    Is it always slow, or fast the first time? Could it be that you're not closing the reference afterwards?
    And/or a debugging setting in the VIs, which of course is turned off in an .exe?
    /Y
    LabVIEW 8.2 - 2014
    "Only dead fish swim downstream" - "My life for Kudos!" - "Dumb people repeat old mistakes - smart ones create new ones."
    G# - Free award winning reference based OOP for LV

  • LV 8.6 will Not Compile in LV 2010 - Labview 10.0 Development System Has Encountered a Problem and Needs to Close

    I submitted this support request to NI on 8/12/2010.
    When I compile my LV 8.6 app in LV2010 I get this error:
    "LabVIEW 10.0 Development System has encountered a problem and needs to close.  We are sorry for the inconvenience."
    I was told to do a "Mass Compile" of my LV 8.6 app in LV2010... this failed too.
    I was then told to go to each and every VI and "Mass Compile" individually... after about the 50th VI this got old quickly, and it still didn't compile. I then sent NI tech support my code. The "good news" is that my LV 8.6 app didn't compile with LV2010 at NI either.
    My LV 8.6 app compiles and runs great in LV 8.6.  I don't want to be left behind with the newer upgrades and I want to move to LV2010.  I have lots of LV8.6 code to maintain and I really don't have the time to debug all of my apps.
    I was told this will be looked @ in LV2010 SP1.
    One note...back up your LV8.6 data before you move to LV2010.  Once your LV8.6 code is compiled in LV2010 you will not be able to go back to LV8.6.
    I restored all of my LV8.6 code and I'm back working with LV8.6.
    It's a tough call, do I stay in LV8.6 and get left behind?
    Do I bite the bullet and try to debug this mess in LV2010?
    I was told the compiler is completely different in LV2010.  That's great, but one reason I have NI Maintenance Agreement is to keep updated with the latest software.  I can't afford to re-compile LV code every few years.  Like most people, maintaining my Apps with customer's revisions, and modifications is enough work.  I don't want more work!
    I was told LV2010 SP1 would likely appear in May or June of 2011.  I'd hate to break out my old Turbo Pascal apps again...but hey...they still work!  My NI maintenance agreement is due this month too, I guess I'll pay NI one more year, and see if they come up with a solution.  But if NI doesn't fix this LV8.6 compile in LV2010 problem...I don't see any value staying current with LV software.
    I found another Bug with LV2010...you are going to love this one!
    There is a new "LV Update Service".  Perfect!  I like updating my LV software when new patches are available.  When I click "update" the update spins over and over "Checking for New Version".  I have let it run ALL day with no results...just sits and spins over and over.
    OK, I know give NI a break!  Yes, LV2010 has a new compiler...and Yes, I will renew my NI maintenance agreement.  I just want NI to know failing to compile just one LV8.6 app in LV2010 is not a good idea for customer relations.
    Thanks,
    Doug

    For your update service problem, see:
    Unable to Update Current Version of NI Update Service
    Why am I Unable to Update My Version of NI Update Service in Windows Vista or Windows 7?

  • Is there a way to activate the whole application in Non Development system?

    Hi All,
    Is there a way to activate the whole application in a non-development system, using some BRFplus tool?
    We copied a sample application and customized it as per our requirements. It was then released to the test system for testing. On the test system this application, with all its components, is in a non-active state. We reactivated the application with all its components and released it to the test system again, but the application is still inactive.
    The application is of storage type "System", so we cannot use the changeability exit to activate it on the test system.
    The TR log shows it was imported with errors. Below is an extract of the error:
    >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
    BRF+: Runtime client: 000 / target client: 400
    BRF+: Object activation of new version started for 418 object IDs
    BRF+: <<< *** BEGIN: Content from Application Component  CA  *** >>>
    BRF+: <<< BEGIN: Messages for IDs (53465BA36D8651B0E1008000AC11420B/ )  Table 'Dunning Proposal Line Items (Table)' >>>
    No active version found for 23.04.2014 08:14:10 with timestamp
    No active version found for IT_FKKMAVS with timestamp 23.04.2014 08:14:10
    No active version found for IT_FKKMAVS with timestamp 23.04.2014 08:14:11
    BRF+: <<< END  : Messages for IDs (53465BA36D8651B0E1008000AC11420B/ )  Table 'Dunning Proposal Line Items (Table)' >>>
    BRF+: <<< *** END  : Content from Application Component  CA  *** >>>
    BRF+: Object activation failed (step: Activate )
    >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
    BRF+: Import queue update with RC 12
    >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
    Errors occurred during post-handling FDT_AFTER_IMPORT for FDT0001 T
    FDT_AFTER_IMPORT belongs to package SFDT_TRANSPORT
    The errors affect the following components:
       BC-SRV-BR (BRFplus - ABAP-Based Business Rules)
    Post-import method FDT_AFTER_IMPORT completed for FDT0001 T, date and time: 20140423011412
    Post-import methods of change/transport request DE1K901989 completed
         Start of subsequent processing ... 20140423011359
         End of subsequent processing... 20140423011412
    Any help would be appreciated.

    Is IT_FKKMAVS part of the same transport request, or was it sent earlier?
    You may want to have a look at that request to check whether it was OK. Probably not.
    Maybe in the meantime more requests have reached the system that, in combination, have solved the problem. What are your release and support package level?
    Higher versions of BRFplus have a lot of automatic correction mechanisms built in.
    E.g. problematic imports are collected in an import queue; as soon as a request comes in that fixes the problems, the after-import processing for the faulty imports is automatically redone.

  • For one Urgent Change, during approval (changing the status to 'To Be Tested'), the system does not recognize any changes using the CTS WBS BOM in the development system. The transaction is therefore incorrect, or the status was reset by the system.

    For one Urgent Change, while performing one of the approvals before changing the status to 'To Be Tested',
    we are getting the error below.
    The system does not recognize any changes using the CTS WBS BOM in the development system. The transaction is therefore incorrect, or the status was reset by the system.
    Could anyone please help us understand how this can be resolved?
    We also have this below error.
    System Response
    If the PPF action is a condition check, the condition is initially considered as not met, and leads to another warning, an error message, or status reset, depending on the configuration.
    If the PPF action is the execution of a task in the task list, and the exception is critical, there is another error message in the document.
    Procedure
    The condition cannot be met until the cause is removed. Analyze all messages in the transaction application log.
    Procedure for System Administration
    Analyze any other messages in the task list application log, and the entries for the object /TMWFLOW/CMSCV
    Additional Information:
    System cancel RFC destination SM_UK4CLNT005_TRUSTED, Call TR_READ_COMM:
    No authorization to log on as a trusted system (Trusted RC=0).
    /TMWFLOW/TU_GET_REQUEST_REMOTE:E:/TMWFLOW/TRACK_N:107
    For the above error, table /TMWFLOW/REP_DATA_FLOW was refreshed as well, but we still get the same error.

    If you are in a test system, you can use function module AA_AFABER_DELETE to completely delete the depreciation area (t-code SE37; specify the chart of depreciation and the depreciation area). After that, recreate your depreciation area and run AFBN. But before you do that, have you created a retirement transaction type that limits posting to your new depreciation area? If not, create one.
    Hope this helps.
    Thanks!
    Jhero

  • How do you move changes from your project developments into the maintenance development system?

    Good day colleagues.
    We are in the process of introducing retrofit, and the picture is clear for us, generally speaking, to retrofit the project landscape with the changes done in the maintenance landscape.
    What about the other way around? We mean, when the project is set to Go Live, we understand the transports will be all added, e.g. to the Production System buffer to be moved there via Maintenance Cycle, based on the way we defined the project and the logical component being used.
    However, how do you feed those project transports into your maintenance development system using ChaRM? Or do you do that manually? We doubt it. Copying production back into development? No way!
    How do you establish that?  We have a sort of idea but the documents about retrofit only seem to talk about moving transports one way, as far as we have found.
    Many thanks for any feedback.
    Juan Carlos

    Hi Piyush.
    I apologize for closing this question about moving projects back into the maintenance stream so late.
    Basically there are 2 solutions as far as we know:
    1. What Vivek mentions, which is performing a cutover and repacking the project transports into 2 transports in the maintenance stream: one workbench and one customizing. In that sense, at the end of the day you end up moving just 2 transports through the maintenance stream up to production. They contain all your project objects. Thanks to Vivek, again.
    This is a very practical and interesting approach. The only reason we did not adopt it is that if by any chance we encounter an issue with a project transport object in the maintenance stream (Dev or QA), now that all is bundled together, we may be stuck right at the time we are getting ready to go live. How tough is that issue going to be? How easy and quick to fix? How much would it affect the whole project time frame? Those questions made us decide on option 2.
    2. What we are doing is that at cutover we move all project transports at the same time into the transport buffer of each of the maintenance stream systems (Dev, QA, and Prod). We first open the gate to move the transports to Dev and we test, then to QA and we test as well. If there is an issue, and the issue cannot be quickly resolved by the project team, we can go to the extreme of using a new feature introduced in ChaRM in SP10, if we are not wrong, but definitely available in SP12. That feature provides a way to selectively decide which transports of the release go live and which do not. We have not had to use that feature yet, but it is there.
    We do not see any risk in adding the transports to the maintenance buffers at the same time. There are ways to control which systems are open for receiving transports, and the project phases, which leaves no room for error. Deliberate actions would have to be taken (more than one, in our case) to wrongly move a project to go-live before its time comes.
    That is more or less the scenario, Piyush.
    Hope that explains the scenario. So far there is no decision on publishing this as a blog. It does not seem to be written in stone: consulting with different companies, each adds its own flavor to the recipe and shuffles ideas to get to what they are looking for and what makes them happy.
    Juan

  • New DAQ system - optical spectroscopy imaging

    Hi all
    My objective: develop a DAQ system to be controlled using LabVIEW. It's for spectroscopy purposes.
    My OLD DAQ MODULE consists of:
    1. Analog Devices - AD9432 - (12-bit, 80 MSPS/105 MSPS ADC)
    2. Motorola - Digital Signal Processor 56L307EVM - 24-bit digital signal processor
    My NEW UPGRADED DAQ SYSTEM should be something more advanced. But on looking around, many third-party DAQ modules do not come with drivers that would let me control them using LabVIEW.
    Therefore I will need a high-speed digitizer and a DSP board that can accumulate the very large data volumes and perform some simple filtering techniques on them. I have no idea how to begin.
    I seek your guidance regarding this.
    If anyone out there is doing an experiment on spectroscopy imaging, please help me out.
    Thank you
    Abhilash S Nair
    Research Assistant @ Photonic Devices and Systems lab
    [ LabView professional Development System - Version 11.0 - 32-bit ]
    LabView Gear:
    1. NI PXI-7951R & NI 5761
    2. The Imaging Source USB 3.0 monochrome camera with trigger : DMK 23UM021
    OPERATING SYSTEM - [ MS windows 7 Home Premium 64-bit SP-1 ]
    CPU - [Intel Core i7-2600 CPU @ 3.40Ghz ]
    MEMORY - [ 16.0 GB RAM ]
    GPU - [ NVIDIA GeForce GT 530 ]

    Hello Abhi,
    The requirements you have specified sound like a great application for our FlexRIO Products which are available in either PXI or PXI Express form factors.  This product line has an onboard FPGA where you can implement the custom logic that you need for processing the data.  In addition to the FlexRIO module you will need the Adapter Module (FAM) that will turn your module into a digitizer.  It looks like the closest FAM we provide that is similar to your current specs is the NI 5732 which is a 14-bit 80MS/s digitizer but we also have better modules available.
    Anthony F.
    Product Marketing Engineer
    National Instruments
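    Whether the filtering ends up on a DSP board or a FlexRIO FPGA, the "simple filtering techniques" mentioned above can be prototyped host-side first. A minimal boxcar (moving-average) filter sketch in Python; the window size is a hypothetical parameter, and an FPGA implementation would use the same running-sum structure:

```python
def moving_average(samples, window=4):
    """Boxcar-filter a sample stream using a running sum.

    Returns len(samples) - window + 1 output points, each the mean of
    the preceding `window` samples. O(n) regardless of window size.
    """
    if window < 1 or window > len(samples):
        raise ValueError("window must be between 1 and len(samples)")
    out = []
    acc = sum(samples[:window])          # initial window sum
    out.append(acc / window)
    for i in range(window, len(samples)):
        acc += samples[i] - samples[i - window]  # slide the window
        out.append(acc / window)
    return out
```

    The running-sum form matters at 80 MS/s: each output point costs one add and one subtract, which is what makes the same structure feasible in FPGA fabric.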

  • Data Migration from R3 to new ECC System

    Hello,
    We are doing a fresh implementation of SAP MDM. Currently we have 3 SAP R/3 systems and 1 non-R/3 system (a BPCS system). One of the project goals is to migrate data from all 4 of these systems into a new SAP ECC system, and into MDM, for the vendor master, customer master, and material master. My manager says that we need a Business Objects tool to migrate these data, but my question is: is it not possible to achieve this through XI/PI itself? Can anyone describe the end-to-end process in XI/PI, i.e. how I receive the IDocs from R/3, convert them into XML, and then send them to the new ECC system in IDoc format?
    Thks & Rgds,
    Hema

    Hi Gandhi,
    You have SAP MDM, BPCS, and R/3 systems in your landscape. When you are migrating data to the new ECC system, what is the role of MDM? Are you moving the same data to MDM to avoid duplicate/redundant data? Is there any data migration from R/3 to MDM (any integration required), or from MDM to R/3?
    You have to develop different interfaces to transfer data from MDM to ECC and from BPCS to ECC. You definitely require a middleware, and the obvious option is SAP PI. It offers very easy integration between MDM and R/3 systems: you can use the MDM-PI adapter to connect directly to the MDM syndicator file port, pull the XML file from there, and convert it into IDoc format using the receiver IDoc adapter in PI. In the same way you can integrate the BPCS system with R/3.
    Search in SDN; there are some scenarios covering the integration of R/3 and MDM using PI. Refer to those links, and if you still have any doubts, let me know.
    Regards,
    Raj

  • Building Installer Crashes Developement System LV2014SP1

    Dear Community
    I have again a major problem which I have not been able to reproduce up to now.
    Hopefully somebody has an idea.
    I have a large project (including some LV classes, DLL calls (HDF5), and network shared variables) which was fine until about a month ago.
    Now I did some work on the project. Building the application still works fine, but
    when I want to create the installer for the application (independent of the additional installers I use), the whole development system crashes completely.
    The last line of the application builder progress window shows:
    Adding file:
    Labview Elemental IO-error.txt
    It seems that when it tries to add this file the system crashes, and I am not able to build the installer.
    This is a major problem because we need to roll out the new version to the customer.
    Hope somebody has an idea what to test next (I already did intensive testing, even on a different computer with the same project, so the development system installation itself may not be the corrupt part).
    Eventually there may be a problem with my 'always include' files, but I don't know where or why.
    Hope you have some idea
    Thanks 
    Nottilie

    I think I found the problem and unfortunately it seems to be related to the Viewpoint TSVN Toolkit.
    See the following log from the crash report:
    <DEBUG_OUTPUT>
    6/24/2015 5:48:18.059 PM
    DWarn 0x50CBD7C1: Got corruption with error 1097 calling library mxLvProvider.mxx function mxLvApi_SetIconOverlaysBatch
    e:\builds\penguin\labview\branches\2014patch\dev\source\execsupp\ExtFuncRunTime.cpp(247) : DWarn 0x50CBD7C1: Got corruption with error 1097 calling library mxLvProvider.mxx function mxLvApi_SetIconOverlaysBatch
    minidump id: acdc1a8d-51cf-450c-8d63-fbc10cdecd70
    $Id: //labview/branches/2014patch/dev/source/execsupp/ExtFuncRunTime.cpp#1 $
    What does creating a LabVIEW installer have to do with icon overlays? I have no idea, but I know something LabVIEW-related that uses icon overlays in the project: the Viewpoint TSVN Toolkit! I promptly uninstalled the toolkit, and I was able to build all 4 of my installers without a hitch, multiple times. Additionally, I've noticed that LabVIEW is much more responsive, and launch time has been cut from ~60 s to ~20 s.
    Although this seems to have fixed the problem (I tested on two machines, both exhibiting the same behavior and both having the toolkit installed), I am disappointed that I no longer have the TSVN toolkit, because it was extremely useful.
    I recently upgraded to the latest 1.8.2.23 version of the TSVN toolkit; I'm going to instead install the previous version(s) until I see the problem go away (hopefully).
    Does anybody here use the latest TSVN toolkit and have zero issues building an installer that has an app that uses shared variables? I'm not sure if the shared variables part is relevant but it might be.

  • Changes in development system are not excluded

    Anyone had this issue in CHARM before ?
    we have SLFN - SDCR - SDMJ open for a change.
    Development is complete and almost tested: the transport of copies is in QA, but the original transport has not been released from DEV.
    Business now wants to stop this change and close SLFN.
    How can I close/cancel SDMJ ?
    When I try to cancel it, I get the message "Changes in development system are not excluded" and status is not changed.
    Is there a way to force close SDMJ ?
    Thank you
    Elena

    Elena, the system is giving you this message as a (cryptic) reminder that even though you want to close the SDMJ, you have made changes in DEV and QA that need to be reversed with another transport.  Depending on your audit requirements, you may want to add these reversing entries on the same transport as a new task, or on an additional transport and send the whole thing through the normal processing.  The other alternative is to execute CRM_SOCM_SERVICE_REPORT - just make sure you have reversed the changes already made to maintain the integrity of your landscape.
