Debug-mode and function calls

Hi there,
I'm coding a program in SE38 in a BW environment, and a function call is made (RSDRI_INFOPROV_READ). It seems that the table returned by this function won't accept duplicate entries (which should be possible for a standard table); however, when I execute my program in debug mode, it does everything as it should. Does anyone know why it doesn't work when I choose to execute it as a background job? It's hard to find what's wrong, since in debug all goes well...
thank you,
Tom

Why not debug the program while it is running in the background? You can add an endless loop before the code you are interested in, then attach the debugger to the running background job (for example from the process overview) and jump to the code you want to inspect.
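For illustration, here is a minimal sketch of such a loop (the flag name is just an example). Once you are attached to the job in the debugger, change LV_EXIT to 'X' and execution continues into the code you actually want to inspect:

DATA lv_exit(1) TYPE c.

DO.
  IF lv_exit = 'X'.   " flip this flag to 'X' in the debugger to break out
    EXIT.
  ENDIF.
ENDDO.

" ... the code you really want to debug follows here ...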

Similar Messages

  • How can I debug CRM middleware function call in R/3?

    Hi there,
    I have extended the order line in CRM with a few fields that I need in R/3. I have also extended BAPE_VBAP, BAPE_VBAPX, VBAP_KOM and VBAP_KOMX in R/3 and I have implemented the CRM_DATAEXCHG_BADI BADI to manipulate the BAPIPAREX structure before the BAPI is called in R/3.
    My problem is that I cannot find out which BAPI is called in R/3 to update the sales order, and I cannot find out where in CRM_UPLOAD_TO_OLTP it is called. Am I looking in the wrong place? Ideally I want to check that the BAPI is called with the correct parameters from CRM, to see if the BADI did what it was supposed to do.
    Best regards,
    Anders

    Below I have described how orders flow from CRM to R/3, which exits they trigger in R/3, and how we can reach them from CRM in debug.
    A) Block the CSA queue on the CRM side.
    B) Create an order and save it.
    C) Go back to the CSA queue and execute your queue in debug mode. Put a breakpoint at line 385 of FM CRM_UPLOAD_TO_OLTP and at line 422 of FM CRM_R3_SALESDOCUMENT_UPLOAD.
    D) At the breakpoint in line 385 of FM CRM_UPLOAD_TO_OLTP, change the destination in parameter lt_actual_recipients-DEST to <R3SYSTEMRFCDEST>_DIALOG.
    E) At the breakpoint in line 422 of FM CRM_R3_SALESDOCUMENT_UPLOAD, set the value of variable gv_synchronous_call to 'X' in the debugger.
    F) Then you will be in R/3, debugging how the data flows to R/3.
    Hope it helps.
    Regards,
    Kaushal

  • Performance Difference between debug mode and release mode

    Hi ,
    I have been running a multithreaded C++ application with the technical specification below in production for more than a year.
    OS - Solaris 10 , Opteron
    Compiler - Sun Studio 11
    Bit Mode - 64
    I wanted to know what the difference is between the debug mode of a C++ application compiled with the -g option and a release mode compiled with optimization, apart from the size of the binary and the availability of debug information in case of a crash.
    1) Does it degrade the performance of the application? If yes, by how much?
    2) Does it use more memory?
    3) Is switching from debug to release mode with some optimization coupled with any kind of risk (byte guard etc.) that could lead to dumps? If yes, what could those be?
    4) Which optimization level should be used?
    Regards,
    Ankur Verma

    If you compile without any optimization (with or without -g), you will see a noticeable difference in performance in most programs compared to compiling with a reasonable level of optimization. How much difference depends on the nature of the program, and what percentage of time is spent in code regions that can benefit from the code improvements.
    If you add -g, function inlining is disabled, which can dramatically reduce program performance. You can't debug a function that is generated inline, which is why this option disables inlining. (The +d option also disables inlining.)
    If you use -g0 instead of -g, function inlining is preserved.
    If you use -g0 with optimization, you get the advantages of optimization while still allowing limited debugging. (Local variables are not visible.) Beginning with Sun Studio 12 update 1, the current release, -g with optimization enables function inlining, so the effect of (for example) "-O -g" becomes the same as "-O -g0".
    A few optimizations are disabled with -g or -g0, the exact difference depending on the compiler release and patch level. Most programs won't see a difference in performance.
    Since you are running on Solaris 10, you should upgrade to the current release, Sun Studio 12 update 1. You will get many improvements and some new diagnostic tools. The new release is a drop-in replacement for Studio 11. You don't have to recompile any existing binaries, but you will want to recompile to get improved performance.
    [http://developers.sun.com/sunstudio/]

  • Problem with Debug mode and SLD

    Using the config tool, we have turned the Debuggable flag to 'yes' and set the Debug mode to OFF. Now, from the studio, when I enable debugging for the process, server0 stops and restarts successfully. However, during the restart the SLD service errors out and does not get started; it encounters the following exception. I can verify the error in std_server0.out.
    With the SLD service in error, when I try to deploy and run the Web Dynpro application from the studio, the deployment fails with the same exception.
    We tried all kinds of restarts (of the engine/server instance) and permutations of the debuggable and debug mode settings, but without success. (The debug port is 50021.)
    We have a remote debug server setup (not local).
    Does anybody have any suggestions?
    The exception we get is:
    Finished with warnings: development component 'CreateOrder'/'local'/'LOKAL'/'0.2007.11.26.14.33.54':Caught exception during application startup from SAP J2EE Engine's deploy service:java.rmi.RemoteException: Error occurred while starting application local/CreateOrder and wait. Reason: Clusterwide execption: server ID 3775750:<Localization failed: ResourceBundle='com.sap.engine.services.deploy.DeployResourceBundle', ID='com.sap.engine.services.deploy.container.DeploymentException: <Localization failed: ResourceBundle='com.sap.engine.services.deploy.DeployResourceBundle', ID='Implied start failed for dependency :local/CreateOrder -> service:sld', Arguments: []> : Can't find resource for bundle java.util.PropertyResourceBundle, key Implied start failed for dependency :local/CreateOrder -> service:sld', Arguments: []> : Can't find resource for bundle java.util.PropertyResourceBundle, key com.sap.engine.services.deploy.container.DeploymentException: <Localization failed: ResourceBundle='com.sap.engine.services.deploy.DeployResourceBundle', ID='Implied start failed for dependency :local/CreateOrder -> service:sld', Arguments: []> : Can't find resource for bundle java.util.PropertyResourceBundle, key Implied start failed for dependency :local/CreateOrder -> service:sld (message ID: com.sap.sdm.serverext.servertype.inqmy.extern.EngineApplOnlineDeployerImpl.performAction(DeploymentActionTypes).REMEXC)

    Hi Michael,
    one way is to enhance the server go.bat with debug parameters. Under c:\usr\sap\<SID>\j2ee\<INSTANCE>\cluster\server\go.bat define the following params
    set DEBUG_PORT=5000
    set DEBUG_PARAMS=-Xdebug -Xnoagent -Djava.compiler=NONE -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=
    and add both params to the java arguments (e.g. behind the -classpath): %DEBUG_PARAMS%%DEBUG_PORT%.
    Regards,
    Stefan

  • Where exactly should i write API and function calls in OAF

    Can anybody tell me where to call the APIs and functions, and also explain the reason for it, so that I can understand properly?
    Thanks a lot in advance.
    kumar

    If you need to call the PL/SQL APIs, you should create a PL/SQL EO based on each business function in each PL/SQL package and override the DML methods.
    Please go through the section "PL/SQL Entity Object Design and Development" in OA Fwk Dev Guide.
    Thanks,
    Ravi

  • Debug mode and for (n in object) loop

    Hi,
    I have a for..in loop to read all my XML nodes into an array. The funny (not) thing is that it loses one node when I run the loop in debugging mode. I know there are some problems with the debugger, but this one is new to me.
    Does the debugger have any influence on stepping through all elements of an object with a for..in loop?
    My code:
    xmlData = this.childNodes[1];
    for (n in xmlData.childNodes) {
        obj = xmlData.childNodes[n];
        obj.attributes.value != undefined) {
            // !!breakpoint in the next line ...
            if (obj.attributes.value != undefined) {
                _global.strary[obj.nodeName] = obj.attributes.value;
                trace("node " + obj.nodeName + " " + obj.attributes.value);
            } else {
                trace("!node " + obj.nodeName + " " + obj.attributes.value);
    Thanks for any info ..

    There was one line too much; the code looks like this. But the problem is still there:
    xmlData = this.childNodes[1];
    for (n in xmlData.childNodes) {
        obj = xmlData.childNodes[n];
        // !!breakpoint in the next line ...
        if (obj.attributes.value != undefined) {
            _global.strary[obj.nodeName] = obj.attributes.value;
            trace("node " + obj.nodeName + " " + obj.attributes.value);
        } else {
            trace("!node " + obj.nodeName + " " + obj.attributes.value);
        }
    }
    I tried to simulate the same with an array, but the problem didn't occur.

  • Debug mode and normal mode

    Hi;
    We made a repair in the coding of MFBF: we send a mail to someone after the saving process.
    In debug mode it works, but in normal mode it does not.
    Why?
    Thanks

    Hi,
    What type of repair? What have you used for that? Please explain in detail.
    Regards,
    Mukul

  • Question regarding JSTL and function calls

    Hello
    I'm quite new to JSTL and I'm having a problem accessing a function when I want to pass an argument to that function.
    I'm using MVC, where my M (model.jsp) can be accessed from my V (view.jsp) by doing ${model.(parameter/some function)}.
    In my M I have a couple of getMethods() (example: getThisInfo()) and some control methods like "isSomethingValid(arg1)".
    My question is: let's say I want to access my "isSomethingValid(arg1)" and arg1 should be the value of "getThisInfo()". I thought I could do something like
    ${model.somethingValid($model.thisInfo)} but it isn't working; I get:
    "Unable to parse EL function ${model.somethingValid(model.thisInfo)}"
    I know that the syntax above is a mix of scriptlet and JSTL, but I don't know how, or whether it's even possible, to achieve this.
    Thanks for helping me!
    Best Regards/DS

    By default, EL can only access getters and setters on objects; it cannot execute general methods on an object.
    You can define function libraries that call static methods: you write the method as static and then declare it in a TLD.
    public static boolean checkValid(Model m, Info i) { ... }
    and then you could invoke it something like
    <%@ taglib uri="myfunctionstaglib" prefix="myfunctions" %>
    ${myfunctions:checkValid(model, model.thisInfo)}

  • CVI under W2K crashes when using stop in debug mode and AOGenerateWaveforms

    When using AOGenerateWaveforms to generate a sine wave, if you hit stop in debug mode the PC restarts. If you're using the same channel to generate a DC signal, it's fine. It seems to be the card accessing the dynamic memory I've set the waveform up in after you hit stop and the memory has been freed.

    This is sometimes the nature of the beast. When your program is running, you are doing DMA transfers to locations in memory that have been set up by your operating system. If you stop your program without killing or stopping the DAQ device, it will cause you problems in some cases. You need to call AOClearWaveforms before you stop your program.
    If you do not stop the DAQ device before stopping your program, the DAQ device will write to memory that is no longer allocated for that process. In most cases Windows detects this as an error and reboots the system, because it thinks a program has crashed and is now writing to memory locations outside its designated space.
    I hope this helps.
    Joshua

  • SCXI-1321 Bad Chips and Function Calls

    8-4-04
    This message is intended for the PSE of the SCXI-1321 and that person's manager. Please route it accordingly. I would appreciate a phone response to 8**-2**-2*** so that I might share some specific experiences.
    To the managers of the SCXI product line,
    I have used the SCXI-1121 with the SCXI 1321 for some time now, approximately 6 years. Our company generally uses them for full bridge pressure transducers and similar applications. I have some suggestions that would make them more saleable. I also believe that you will never know how many customers you have already lost on this product line, for the reasons given below.
    1. For many years the shunt resistor on the 1321 has been vastly unusable. What SHOULD be possible is to command the shunt resistor through a Labview command, and then read the virtual channel to see the shunted value. That IS the purpose of the feature. What happens is that once you set the shunt command, then read the virtual channel, the very act of reading the channel UN-shunts the resistor. What you read is some value part way between, because the chip relay from Clare actually ramps open and closed, unlike a mechanical relay. If your timing loop to read the value is consistent, you will consistently read a BAD value. Several VIs tried to command the shunt, then quickly read the virtual channel, then loop and do it again, and thought they had solved the problem. Further investigation showed that all that was really happening is that the readings appeared to be stable because of the fast loop time, but were actually mid-release of the shunt for all readings.
    2. NI eventually started saying "don't use virtual channels with the shunt". In fact, there is a channel string that starts with the word "shunt" that was recommended. The only problem with this is that the data that comes back is not scaled! Keep in mind that NI has been pushing "virtual channels" on users for many years as the way to do business, and so entire test stands are developed around them and MAX. Because we depend on MAX to do the scaling, how are we supposed to be able to read the data scaled? Does NI expect customers to duplicate the efforts of MAX for every virtual channel and scale it separately? Do I have to keep duplicate tracking of scaling information so that my virtual channels will work in the main code, but again elsewhere so that my calibration routine can read the data? This is messy, and for most customers, untenable.
    3. I was told, literally, year after year and release after release, that the problem would be fixed in future versions of NI-DAQ. Each release proved that the problem was not resolved. I am sure that many customers were not repeat customers as they found the problems associated with using a simple shunt on the board. I myself needed this hardware due to the 1121/1321 parallel operation capability.
    4. After years of trying to fight with the issues above, and what seemed like random problems with the shunting of channels, a second hardware-related issue has finally been identified. See SRQ#600753 with Michelle Yagle. The Clare solid state relays installed on the 14 different 1321s I bought ALL had the same defect. Imagine what this means for troubleshooting! You swap boards to try to troubleshoot, but they are all defective, so you falsely rule out the board because the problem didn't resolve with a different board. The lot# 0027T12627 of chips covers a wide range of 1321 serial numbers (see my SRQ for details). That means this impacts a lot of potential customers. The part is rated by the manufacturer to 80C (176F). This particular lot stops functioning as low as 96F! A great deal of testing with heat guns, shop air, and a thermocouple showed that all the chips in this lot fail somewhere between 96 and 110F, well below the manufacturer's specs or NI's specs on the board. I also tested lot 98, from a board two years older, as well as new chips from lot 03, which both worked perfectly all the way up to 212F (100C).
    Conclusions:
    I believe that the people who have tried to use your 1321 have generally had a miserable time of it, unless they already do their own scaling (like some of the Alliance members do). This is because the shunt feature is not compatible with virtual channels, because NI did not offer a reasonable fix for this problem over a four-plus-year period, and does NOT advertise this limitation when people are making hardware selections. Further, I believe there are many people who have boards with the same bad lot of chips on them and don't know it. They may have elected to hardwire their own resistors after fighting with random problems (temperature related). They might have elected to hardwire their own resistors to get around the NI-DAQ limitation. They might not even use the shunt resistor. But one thing is certain: NI needs to clean up this board's operation, make sure that customers with the serial numbers named in my SRQ above are notified of the bad chips, and fully test that the use of the relays is transparent to LV virtual channels in the future. People will not trust NI products with this type of problem, and will not buy the product again if all the features don't work!
    I have helped NI resolve similar issues with the SCXI-1126 years ago, which had a number of scaling issues in NI-DAQ (the board read great when you used 1k, 2k, 4k, 8k, etc. even ranges, but if you set a virtual channel to 0-2200, the readings were off.) I plotted the problem, documented it with my sales rep, and worked at length with tech support. The boards also had a lot of bad buffer amplifier chips that allowed cross talk between channels, causing a lot of other spurious problems. Those experiences were horribly frustrating, but I now buy the product with confidence. It took a lot of push on my part to convince NI there was a real underlying problem. This 1321 board is similar.
    I want NI products to be the best they can be, as I have dedicated my career around them. Please take this feedback to heart, and contact me at 8**-2**-2*** to talk about these issues. Additionally, I am speaking at NI week this year and will be available for discussion. Contact me at the same number.
    Sincerely,
    Tim Jones
    Test Equipment Design Engineer
    Space Shuttle Program
    Space, Land and Sea Enterprise, Hamilton Sundstrand

    Tim
    Thank you for your feedback for the SCXI-1321 terminal block.
    I wanted to see if it would be possible for you to move your application to NI DAQmx, the new driver for DAQ and SCXI as of NI-DAQ 7.0. I have tested the shunt calibration using a SCXI-1121 and SCXI-1321 and it works as it should. There are two ways to do the shunt calibration with DAQmx.
    The first is to use the Calibration feature in the DAQ Assistant. When creating a DAQmx task in either MAX or LabVIEW, you can select the "Device" tab and click on the Calibration button. It will then ask you whether you want to do a null or shunt calibration, or both. It will then do the calibrations selected and save the values to the task.
    The other option to measure the shunted value is to use the AI.Bridge.ShuntCal.Enable property node in LabVIEW. By setting this property to True, the driver will enable the shunt for the current measurements. If you are taking a strain measurement, the value will be converted to strain.
    We are currently looking into the issues you are seeing with the Clare relays to see if other users could be affected.
    Brian Lewis
    Signal Conditioning PSE
    National Instruments

  • WLC Guidelines L3 Transport Mode and functionality

    Hi,
    I'm implementing a LWAPP Solution and I would like to have some confirmation about LWAPP solution
    If I understand it right, all the traffic from the WLAN clients has to pass through the dynamic interface of the controller, and there is no way to configure it differently...
    Best practice suggests that the LWAPP APs should be placed in a different VLAN (IP subnet) from the LWAPP WLAN clients and that LWAPP L3 transport mode should be used...
    What are the drawbacks if I put the LWAPP APs in the same VLAN (IP subnet) as the WLAN clients? If I implement the solution this way, can I still configure LWAPP L3 transport mode, or will it not work?
    Thanks for sharing your opinion

    Actually, Layer 2 LWAPP mode is considered deprecated by Cisco. Also, only 4400 controllers support Layer 2 LWAPP discovery; the 2000 series WLCs don't.
    The reason Layer 3 LWAPP is preferable to Layer 2 LWAPP is that Layer 3 LWAPP discovery involves a series of steps in its algorithm and finds the candidate list of controllers in different ways, such as DHCP option 43, OTAP, DNS, etc.
    Layer 2 LWAPP discovery uses just one method of controller discovery: a Layer 2 broadcast in an LWAPP frame. Since Layer 3 LWAPP uses a series of controller discovery methods, it is more secure and reliable than Layer 2 LWAPP mode.

  • Call function in debug mode

    Hi,
    When I'm on the test function screen of SE37, I can run the function in debugging mode, and it starts with the debugger on the first line of code.
    Is it also possible to call a function from a report in this debugging mode?
    thank you!
    reward points guaranteed

    Thank you for your answers...
    Let me explain a little bit more in detail, what I want to do.
    I have an XML interface function which is going to be called by an external application via RFC, reading binary data, interpreting it as XML and then doing something in the SAP system.
    As the system is a 4.6C I can't debug externally, so I want to provide a debugging function in a monitoring application for that.
    As the people using the monitoring are not familiar with all the code behind it, I want to start the debugging mode myself, without setting a breakpoint.
    Let me describe it like this: when you type /h in the transaction field of the SAP menu bar, debugging mode is started for the following actions.
    All I want to do is start that /h debug mode without typing anything into the transaction field, just from inside the report.
    How can I do that?
    Michael
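    One pattern sometimes used in such monitoring reports (just a sketch, and admittedly it still relies on a breakpoint statement rather than a true programmatic /h; the user name and flag are only examples) is to guard a BREAK-POINT behind a check, so that only the monitoring user drops into the debugger when running in dialog:

    CONSTANTS lc_debug_user TYPE sy-uname VALUE 'MONITOR'.

    IF sy-uname = lc_debug_user AND sy-batch IS INITIAL.
      " opens the ABAP debugger here when running in dialog;
      " in background processing BREAK-POINT only writes a log entry
      BREAK-POINT.
    ENDIF.

    " the XML interface processing to be analysed follows here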

  • Remote Enabled function module of SAP working - debug mode but not run mode

    Hi,
    When a remote function module is called from Java, a message raised inside the function module should reach Java, but Java is not able to fetch the error. The problem occurs in normal execution mode; if we are in debugging mode and step into the function module remotely, the error is triggered as expected.
    Please suggest the solution to the problem.
    Thanks,
    Abhishek

    Hello
    try something like this
    try {
       JCO.Client.execute(myFunction);
    } catch (JCO.AbapException ex) {
       System.out.println("ABAP Exception: " + ex.getKey() + " " + ex.getMessage());
    }
    The problem is that JCo ABAP exceptions are subclasses of java.lang.RuntimeException, so the Java compiler doesn't force you to catch them.
    regards franz
    reward points if useful
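    For completeness, a minimal sketch of the ABAP side (the function, message class and exception names are made up for illustration): an exception declared in the RFC-enabled FM's interface and raised with MESSAGE ... RAISING is what arrives on the Java side as a JCO.AbapException with the corresponding key and message text.

    FUNCTION z_my_rfc_fm.
      " RFC-enabled FM; exception DATA_ERROR is declared in its interface
      DATA lv_error_found(1) TYPE c VALUE 'X'.

      IF lv_error_found = 'X'.
        " surfaces in JCo as JCO.AbapException with key DATA_ERROR
        MESSAGE e001(zmsg) RAISING data_error.
      ENDIF.
    ENDFUNCTION.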

  • Processing happens only in debug mode.

    Hi,
    I am calling the RFC-enabled FMs BAPI_ALM_NOTIF_DATA_MODIFY and BAPI_ALM_NOTIF_SAVE to modify the data of a notification. These function modules are called within another FM, which is itself an RFC called in the background as a separate task. The problem is that the data only gets modified correctly when I execute in debugging mode. If I execute without going into debug mode, the notification data is not modified. Kindly suggest why this is happening and what the solution is. Thanks

    I am not using any destination. The code is as shown:
    WAIT UP TO 5 SECONDS.

    CALL FUNCTION 'BAPI_ALM_ORDER_GET_DETAIL'
      EXPORTING
        number    = aufnr
      IMPORTING
        es_header = gf_header
      TABLES
        et_olist  = it_ord_notif
        return    = it_ret.

    CALL FUNCTION 'BAPI_ALM_NOTIF_GET_DETAIL'
      EXPORTING
        number             = gf_header-notif_no
      IMPORTING
        notifheader_export = gf_notif_exp.

    CLEAR gf_notif_no.
    DESCRIBE TABLE it_notif_no LINES g_line.
    READ TABLE it_notif_no INTO gf_notif_no INDEX g_line.

    IF gf_notif_no-short_text+0(3) = 'PRB'.
      CLEAR: gf_notif_imp, gf_notif_imp_x.
      gf_notif_imp-desstdate    = gf_notif_exp-desstdate.
      gf_notif_imp-desenddate   = gf_notif_exp-desenddate.
      gf_notif_imp_x-desstdate  = 'X'.
      gf_notif_imp_x-desenddate = 'X'.

      CALL FUNCTION 'BAPI_ALM_NOTIF_DATA_MODIFY'
        EXPORTING
          number             = gf_notif_no-notif_no
          notifheader        = gf_notif_imp
          notifheader_x      = gf_notif_imp_x
        IMPORTING
          notifheader_export = gf_notif_result
        TABLES

      CALL FUNCTION 'BAPI_ALM_NOTIF_SAVE'
        EXPORTING
          number      = gf_notif_no-notif_no
        IMPORTING
          notifheader = gf_qmnum_save
        TABLES
          return      = it_ret2.
    ENDIF.

    CLEAR gf_notif_no.
    LOOP AT it_notif_no INTO gf_notif_no.
      IF gf_notif_no-short_text+0(3) = 'PRB'.
        SELECT SINGLE objnr FROM viqmel INTO g_objnr
          WHERE qmnum = gf_notif_no-notif_no.
        SELECT SINGLE stsma FROM jsto INTO g_stsma
          WHERE objnr = g_objnr.
        SELECT SINGLE estat FROM tj30t INTO g_stat
          WHERE stsma = g_stsma
            AND spras = 'EN'
            AND txt04 = 'SFMR'.

        CALL FUNCTION 'STATUS_CHANGE_EXTERN'
          EXPORTING
            check_only          = ' '
            client              = sy-mandt
            objnr               = g_objnr
            user_status         = g_stat
            set_inact           = ' '
            set_chgkz           = 'X'
            no_check            = ' '
          EXCEPTIONS
            object_not_found    = 1
            status_inconsistent = 2
            status_not_allowed  = 3
            OTHERS              = 4.
        IF sy-subrc = 0.
        ENDIF.
      ENDIF.
    ENDLOOP.
    ENDIF.
    ENDIF.
    ENDIF.

  • Visual Studio 2015 strange failure in debug mode

    Hi,
    I installed the latest vs 2015 on windows 7 (vmware) to test it compiling and running a set of applications (desktop) I have.
    The compilation is ok but when I try to run one of those apps, I get this error (running the debug version):
    Please note that if I run in release mode, there is no error at all!?
    Following the call stack, it seems that it fails somewhere in the STL releasing a temporary string or something like that. Unfortunately, I cannot create just a small example showing the failure.
    Any idea why this failure occurs in debug mode only? Note that the call stack shows that this code:
        ~_String_alloc() _NOEXCEPT
            {    // destroy the object
            _Free_proxy();
    is run from xstring and it fails calling "_Free_proxy();". Also note that this is called ONLY when _ITERATOR_DEBUG_LEVEL is defined. No such code is run in release mode.
    Thanks,
    G.

    Hi,
    I think I found the issue, and here are some details, hoping they will help others running into the same problem(s).
    First, I did not have just the issue that started this thread; there was another, related issue.
    I have created a C++ DLL exporting common functions I use in various applications. Most of the time I use this DLL from console programs (servers). However, I had two GUI programs where I also used this DLL, one based on MFC and the other on WTL.
    These two programs had the following issues:
       When the MFC program is run in debug mode under the IDE, at exit the IDE shows a memory leak. This happened with both VS 2013 and VS 2015. It is a debug-mode-only issue.
       When the WTL program was run in debug mode under the IDE, it showed no problems at all with VS 2013, but had the issue described at the start of this thread under VS 2015. This is also a debug-mode-only issue.
    It turned out that the MFC issue was caused by a known bug in MFC where the termination code of the MFC app is run BEFORE the termination code of the DLL. Because of this, the IDE reports a false memory leak!! There were actually no memory leaks, just the false alarm!
    I am not sure exactly what is going on in the WTL case, but the issue seems somehow similar, in the sense that something not loaded in time from the DLL caused the problem.
    The solution: for both projects it was in fact very easy to fix this by setting the option to delay-load the DLL. That in turn forces the MFC app to unload the DLL first, and then there is no false leakage info! As I said, this also fixed the WTL issue, but I am not sure why. Anyway, WTL is no longer maintained (too bad!!), so who knows?!
    Bottom line: if you use DLLs from WTL or from MFC, delay-load your DLLs! Otherwise you may get these nasty false flags in debug mode and the assert issue.
    G.
