LSMW Testing and Debugging

Hi,
  I really need help with testing and debugging LSMW. I would really appreciate it if anybody could explain how to test and debug an LSMW object in the QA environment.
Please help.

Once you enter the Project, Subproject and Object on the initial screen and execute, you are guided through the individual steps of data migration (for example, Maintain Object Attributes, etc.).
On the application toolbar, click the personal menu push button; you will see a list of all the steps. Check the Display Conversion Program step.
After you have maintained the field mapping and executed the Read Data and Convert Data steps, you can execute the Display Conversion Program step. This displays the generated program.
Here you can set a break-point.
The other way is to set a hard break-point directly in the field mapping, as in the sketch below.
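For example, a minimal sketch of such a hard break-point inside a field-mapping rule (the structure and field names below are only placeholders; the sy-uname check keeps the stop from hitting other users):
" coding of a conversion rule - structure and field names are illustrative
IF sy-uname = 'MYUSER'.    " stop only for your own user
  BREAK-POINT.             " hard break-point in the generated conversion program
ENDIF.
ZTARGET-MATNR = ZSOURCE-MATNR.
When you run the Convert Data step online, the generated program stops at this statement and you can inspect the source and target structures record by record.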

Similar Messages

  • Testing and debugging

    This question was posted in response to the following article: http://help.adobe.com/en_US/flashbuilder/using/WSe4e4b720da9dedb5-6caff02f136a645e895-7fff.html

    Probable BUG:
    I believe the final section of the documentation, "Change Application ID while testing or debugging", is incorrect. Despite removing the ".debug" from the AppID in the run configuration, and making certain nothing of that nature appears in the manifest (just covering all the bases, even though that's not referenced in this section), when I attempt to build onto my device I still receive an AppID mismatch error [Exception occurred during launch Reason: The AIR application ID "org.PictureTools.Apps.PictureToolsOnTheMoveMakeIt.debug" does not match the pattern "org.PictureTools.apps.PictureToolsOnTheMoveMakeIt" specified in the provisioning profile. Either change the applications ID, or use a provisioning profile that matches the existing ID.]
    Any solutions?
    Btw, the error dialog contains incorrect English grammar. The possessive "Either change the applications ID, or use..." should be written with an apostrophe: "Either change the application's ID, or use...". At least I believe so. Cheers!

  • How to test and debug EJB from a JUnit TestCase?

    Is it possible to access an EJB from a JUnit TestCase? I have seen that in this case the embedded OC4J is not started automatically. Is it possible to start it programmatically?
    Thanks in advance
    Cs.

    For the first part, I googled for "JUnit" and "EJB" and got many hits, such as
    http://www.devx.com/Java/Article/28119
    and
    http://www.onjava.com/pub/a/onjava/2003/02/05/automate_ejb.html
    The embedded server can be started by running an EJB from the IDE before performing the tests. If you start it programmatically (e.g. by issuing the startup command), it will be launched as a standalone OC4J container.
    EJB 3.0 will make things easier to test in the near future.
    Frank

  • How to test and configure custom extensions in Flash Pro CC?

    Hi,
    How do you install custom CS extensions in Flash Pro CC for testing and debugging before they have been signed and packaged?
    We have a custom extension for Flash Pro developed in-house using the CS SDK that we've been using with Flash Pro CS5 and CS6, but we're having trouble getting it to work in CC and I'd like to run it locally in test/debug mode.
    With Flash Pro CS5 and CS6 it was possible to test extensions by copying the output folder from Flash Builder into the CS Service Manager root folder (e.g. /Users/<username>/Library/Application Support/Adobe/CS6ServiceManager) and setting PlayerDebugMode to 1 in the CSXS preferences.
    I've added the PlayerDebugMode=1 flag to com.adobe.CSXS.4.plist; however, I can't find the equivalent service manager folder for Flash Pro CC to copy the extension into.
    Is there a different folder to use for CC or is the process for testing extensions completely different? (are extensions written in the CS SDK even compatible with CC?)
    I'm running Mac OS Lion.
    Thanks,
    Stephen

    Thanks, that explains a lot.
    I'm afraid I'm a bit confused about what the preferred way of developing extensions is going forward. There seems to be a lot of conflicting documentation on the Adobe site; e.g. http://www.adobe.com/devnet/creativesuite.html seems like the top page for extension documentation, but it only talks about CS SDK 2.0, which uses CSXS and doesn't seem to be compatible with Creative Cloud apps.
    Then there's CS Extension Builder 2.1, which looks like it supports Flash Pro CC but isn't mentioned in the pages you linked to, and it doesn't mention JSFL, but talks about the CSAW library.
    Do you know if:
    * The CS SDK has been dropped completely?
    * CS Extension Builder is deprecated? (I can't see it as an option in the list of Creative Cloud apps)
    * There is any documentation or utilities to help migrate existing CSXS extensions to work with Creative Cloud apps?
    Thanks,
    Stephen

  • Error opening test scripts in Build and Debug with Screens

    Hello,
    I am testing scenarios in the Build and Debug with screens mode in OPM. I save each scenario as a test script by clicking the Export button and saving. Some test scripts can be opened again, yet when I try to open others, I get the error message shown at the link below and am unable to load the test script.
    http://imgur.com/uqh80
    This did not happen previously; it just started today. All of these test scripts were created today, and the attributes have not been changed since the test scripts were created. This is seriously hindering my scenario testing.
    Does anyone know how to rectify this problem?
    Thanks,
    Christine

    Hi Kristy,
    I am in OPM 10.2.
    To troubleshoot, I tried redoing the test scenarios that weren't working from scratch (rather than building off a previous scenario), and this time it worked. I am not sure why it works now and didn't before, and I am also not sure if they will continue working.
    I will continue creating the scenarios and will be sure to update if it fails again. In the case it does fail, I will try your export recommendation.
    Christine

  • Testing a Query and Debugging Virtual InfoObjects

    Hi,
    I was wondering if someone would be able to help me out by telling me how to test a query and debug a virtual InfoObject. I would also really appreciate it if someone could tell me how to test user exit variables.

    Hi Jayashree,
    be careful with the break-point. Add a little logic with a static variable, because otherwise your program will stop there for every record.
    Here is some sample code:
    statics: l_stat type i value 0.
    if l_stat = 0.
      break-point.
    endif.
    With this, you will no longer reach the break-point once you set the variable l_stat to a value <> 0 (for example, in the debugger). This might be helpful.
    regards
    Siggi
    PS: RSRT will also work for testing the variables, but I am not sure whether it will work for the aggregates.
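    For the user exit variables, here is a minimal sketch of where such a break-point typically goes, assuming the classic customer-exit include ZXRSRU01 (function module EXIT_SAPLRRS0_001); the variable and user names are only illustrative, and you would then run the query via transaction RSRT to reach it:
    " inside include ZXRSRU01 - variable and user names are illustrative
    CASE i_vnam.
      WHEN 'ZMYVAR'.
        IF sy-uname = 'MYUSER'.   " stop only for your own user
          BREAK-POINT.
        ENDIF.
        " ... fill e_t_range here ...
    ENDCASE.
    Running the query in RSRT then drops you into the debugger at the variable processing step, where you can check i_step and the content of e_t_range.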

  • Unit tests and QA process

    Hello,
    (Disclaimer: if you agree that this topic does not really belong to this forum, please vote for a new Development Process forum here:
    http://forum.java.sun.com/thread.jspa?forumID=53&threadID=504658 ;-)
    My current organization has a dedicated QA team.
    They ensure end-user functional testing but also run and monitor "technical" tests as well.
    In particular they would want to run developer-written junit tests as sanity tests before the functional tests.
    I'm wondering whether this is such a good idea, and how to handle failed unit tests:
    1) Well, indeed, I think this is a good idea: even if all developers abide by the practice of ensuring 100% of their tests pass before promoting their code (which is unfortunately not the case), the integration of independent developments may cause regressions or interactions that make some tests fail.
    Any reason against QA running junit tests at this stage?
    However the next question is, what do they do with failed tests : QA has no clue how important a given unit test is with regard to the whole application.
    Maybe a single unit test failed out of 3500 means a complete outage of a 24x7 application. Or maybe 20% of failed tests only means a few misaligned icons...
    2) The developer of the failed package may know, but how can he communicate this to QA?
    Javadocing their unit testing code ("This test is mandatory before entering user acceptance") seems a bit fragile.
    Are there recommended methods?
    3) Even the developer of the failed package may not realize the importance of the failure. So what should be the process when unit tests fail in QA?
    Block the process until 100% of tests pass? Or start acceptance anyway but notify the developer through the bug tracking system?
    4) Does your acceptance process require 100% pass before user acceptance starts?
    Indeed I have ruled out requiring 100% pass, but is this a widespread practice?
    I rule it out because maybe the failed test indeed points out a bad test, or a temporary unavailability of a dependent or simulated resource.
    This has to be analyzed of course, as tests have to be maintained as well, but this can be a parallel process to the user acceptance (accepting that the software may have to be patched at some point during the acceptance).
    Thank you for your inputs.
    J.

    > Any reason against QA running junit tests at this stage?
    Actually running them seems pointless to me.
    QA could be interested in the following:
    - That unit tests actually exist.
    - That the unit tests are actually being run.
    - That the unit tests pass.
    This can all be achieved as part of the build process, however. It can be done either for every CM build (like an automated nightly) or just for release builds.
    This would require that the following information is logged:
    - An id unique to each test
    - Pass/fail
    - A collection system.
    Obviously doing this is going to require more work, and probably more code, than if QA was not tracking it.
    > However the next question is, what do they do with failed tests: QA has no clue how important a given unit test is with regard to the whole application. Maybe a single unit test failed out of 3500 means a complete outage of a 24x7 application. Or maybe 20% of failed tests only means a few misaligned icons...
    To me that question is like asking what happens if one class fails to build for a release build.
    To my mind any unit test failure is logged as a severe exception (the entire system is unusable).
    > 2) The developer of the failed package may know, but how can he communicate this to QA? Javadocing their unit testing code ("This test is mandatory before entering user acceptance") seems a bit fragile. Are there recommended methods?
    Automatic collection, obviously. This has to be planned for.
    One way is to just log success and failure for each test, gathered in one or more files. Then a separate process munges the result files to collect the data.
    I know that there is a Java build engine (an add-on to Ant, or a wrapper around Ant) which will do periodic builds and email reports to developers. I think it even allows for categorization so the correct developer gets the correct error.
    > 3) Even the developer of the failed package may not realize the importance of the failure. So what should be the process when unit tests fail in QA? Block the process until 100% of tests pass? Or start acceptance anyway but notify the developer through the bug tracking system?
    I would block it.
    > 4) Does your acceptance process require 100% pass before user acceptance starts?
    No. But I am not sure what that has to do with what you were discussing above. I consider unit tests and acceptance testing to be two separate things.
    > Indeed I have ruled out requiring 100% pass, but is this a widespread practice? I rule it out because maybe the failed test indeed points out a bad test, or a temporary unavailability of a dependent or simulated resource.
    Then something is wrong with your process.
    When you create a release build you should include those things that should be in the release. If they are not done, are missing, or have build errors, then they shouldn't be in the release.
    If some dependent piece is missing for the release build then the build process has failed, and so it should not be released to QA.
    I have used version control / bug tracking systems which made this relatively easy by allowing control over which bugs/enhancements are included in a release, and of course requiring that anything checked in must be done so under a bug/enhancement. They will even control dependencies as well (the files for bug 7 might require that bug 8 is added as well).

  • Flash Builder can't Test or Debug

    Recently I've not been able to test or debug my ActionScript projects. Every time I try, I get the following error screen.
    I tried reinstalling Flash and the individual debuggers (do I need to use them to replace a file?)
    Current specs are:
    > Adobe Flash Builder 4
    > Adobe Flash Player:

    Here's an update...
    I've looked for every Flash Player and debugger I could find in the Adobe Flash CS5, Flash Builder 4 and separate Flash Player folders and deleted them from my computer. Then I reinstalled Flash Player and installed the following debugger:
    After that I tried Flash Builder again and was able to test my project, though it opened Flash CS5 instead of Flash Player, very strange.
    So I continued with the debugger and that also started the Flash CS5 player instead of the standalone player and also started loading in Flash Builder:
    Up to the point where Flash Builder showed me this error:
    The current problem is now the error above, plus the fact that I can't seem to see any trace output from my debugging code in the console (Flash Builder) or the Output panel (Flash Pro CS5).
    Has anyone got an idea?
    UPDATE:
    I was able to make it work IF I let Flash Builder generate an HTML wrapper file.
    If I untick it, debugging will start Flash Pro CS5, but it doesn't debug and gives me the same result as the last two screenshots.

  • Forms 9i and Debug

    Hi, I hear that there is an interactive debug feature in Forms 9i, whereby you can see what the user is doing and debug it at runtime. I also hear that this does not work over firewalls.
    Does anyone know:
    1) whether the feature works, as it sounds useful, and whether there are any restrictions;
    2) why it does not work over firewalls (most support is undertaken by third-party companies over firewalls). Could some kind of proxy server like Connection Manager (CMAN.ora) be used to overcome this problem?

    Mark,
    you cannot debug another user's session. You can run a form in debug mode using e.g. the following URL:
    http://<server>:<port>/forms90/f90servlet?form=<myform>&debug=yes.
    This will give you a server name and a port number. Now, with the corresponding fmb file open in Forms Builder, go to "Debug > Attach Debug" and enter the server name and port. Then you can set breakpoints and thereby debug the module.
    The port is the reason it doesn't work over firewalls: the firewall admin would need to open that port, and for each debug session it will be a different port number.
    I'm not sure about CMAN.
    I only use this feature on my local 9ids and on a testing environment within our LAN.
    Hope this helps
    Gerald Krieger

  • Linux Debugger and Debug Player

    Hello.
    As someone who develops Flash games on the Linux platform, I'm already struggling with the fact that the latest version of Flash Player is unavailable for my platform.
    For now, there is at least a workaround to get the 11.2 debug player working by replacing libflashplayer.so.
    As far as I understand, that will not work with Flash Player 12.
    Is there any chance that there will be Debugger and Debug Player 12 released for linux?

    Jeromie Clark wrote:
    > We are currently working on a content debugger version of Flash Player for Chrome Pepper, which is the only linux target that is actively supported.
    > Unfortunately, the debug PPAPI player version is Win/Mac only.
    > If you have interest in Linux, Flash, AIR, and Scout, feel free to follow my twitter feed and blog.
    You can get telemetry data out of the Linux PPAPI Flash Player. Unfortunately, 1) it requires starting google-chrome with the --no-sandbox option (I filed a bug on this), and 2) since it's not the debug player, you don't get memory allocation data (you do get the stack profiler and timing data).
    Instead, if possible I suggest testing your content under Adobe AIR (under wine on linux), where you get full telemetry data.
    You can view this telemetry using Adobe Scout on a Windows VM (if you've paid for Windows and Adobe CC licenses), or with a little cross-platform project of mine called hxScout.

  • LSMW Test Run

    Hello Gurus,
    Is there any method to do an LSMW test run?
    I am working on an LSMW for FB01 with the standard batch input program RFBIBL00. Is there any option to do a test run of the records?
    Like in BAPIs, where we have a flag to check the data. Is this function available in an LSMW recording?
    Urgent, please help.
    Thanks,
    Feroz.

    Hi Feroz,
    You can write code for this yourself:
    1. Define a parameter like pc_test in the global data section of the 5th LSMW step (Maintain Field Mapping and Conversion Rules):
    PARAMETERS: pc_test AS CHECKBOX DEFAULT 'X'.
    2. Do all the validations.
    3. At the end-of-transaction processing point, write code like:
    IF pc_test = 'X'.
      skip_transaction.
    ENDIF.
    That's it; as long as the checkbox is set, the records will not be transferred for further processing.
    You can see this parameter when you run the Convert Data step.
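    Putting the pieces together, a minimal sketch of the conversion-rule coding (the processing-point names below are the blocks LSMW typically shows in the field-mapping step once the global-data and processing-times layout is switched on; skip_transaction is the statement used in the answer above):
    " global data (processing point __GLOBAL_DATA__)
    PARAMETERS: pc_test AS CHECKBOX DEFAULT 'X'.
    " end of transaction (processing point __END_OF_TRANSACTION__):
    " while the test flag is set, skip the whole transaction so nothing
    " is written to the converted file
    IF pc_test = 'X'.
      skip_transaction.
    ENDIF.
    Once the converted data looks right, untick pc_test on the Convert Data selection screen and run the conversion again to create the file for real.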

  • Firefox can't open normally without going through the "this is embarrassing" routine; nor does it close down properly; it also sticks frequently, i.e. screen freezes for a minute or more; should I uninstall and re-install to try and debug?

    Firefox can't open normally without going through the "this is embarrassing" routine, nor does it close down properly; I get the "end program" message and have to say "end now". It also sticks frequently, i.e. the screen freezes for a minute or more. Should I uninstall and re-install to try and debug?
    - I've used Firefox exclusively for around 2 years.
    - My outlook express has no problems
    - other programs seem unaffected; it seems peculiar to Firefox.
    - my main use is surfing the net, primarily entering competitions on line via specialist competition sites.
    - every day at some stage(s) I have to switch the computer off as Firefox has become very slow/sticky/frozen.
    I'm not techy by any means, but I can only think of un- and re-installing to hopefully start with a clean slate.

    Sounds like something is keeping Firefox from closing properly. See this:
    https://support.mozilla.com/en-US/kb/Firefox+hangs#Hang_at_exit

  • So I think there should be an app with flash cards that you can fill in with your own info. I'm studying for a test, and how awesome would it be to have my flash cards on my phone! Is there already an app like that?

    I think there should be a flash card app that lets you put in your own info. I'm studying for a test and I think it would be really cool to have my flash cards on my phone! Is there an app like that already?

    http://www.appshopper.com/education/flashcards

  • Unit testing and system integration testing for crystal report

    Hi gurus,
    I am creating a Crystal Report from an Oracle procedure. Will you please tell me how to do unit testing and system integration testing? I appreciate it very much.

    The focus of this forum is report design. The actual testing of reports would be subject to your own internal policies and procedures.
    Jason

  • Unit Testing and Integration Testing in HR

    Dear Sap Gurus,
    Would you be kind enough to give me an example of unit testing and integration testing? What do you test, e.g. TCs, and what else, and what happens? I know what unit and integration testing are, but with a good example I will have a much better idea about it. Thanks a lot.

    Hi Pooja
    Unit Testing:
    A process for verifying that software, a system, or a system component performs its intended functions.
    Unit transactions are tested against their own specifications and design documents.
    Integration Testing
    An orderly progression of testing in which software elements, hardware elements or both are combined and tested until the entire system has been integrated.
    Integration tests deal mainly with the testing of cross-application process chains in addition to transactions and business processes. The process models and the test cases derived from these form the basis of these tests.
    Regards
    Vijay
