Pass/Fail in Report Filename

I would like to display pass/fail in the TestStand report filename. Please provide steps or example code for how to perform this task.

Just an update:
TestStand 4.2 lets you specify the report path using expressions directly from the Report Options dialog box. For instance, using the
<UUTStatus> macro, you can send reports for UUTs that pass to a different location than reports for UUTs that fail.
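For example, a report path format along the following lines would both sort reports by status and stamp the status into the filename. The <UUTStatus> macro is the one described above; treat <UUT> and <FileDate> as hypothetical placeholders and check the macro list in your TestStand version's Report Options dialog:

    C:\Reports\<UUTStatus>\Report_<UUT>_<FileDate>_<UUTStatus>.xml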
For more information on this, refer to:
NI TestStand Help: Specifying Report File Paths by Expression
NI TestStand 4.2 Release Notes: Using Expressions to Customize Report File Paths
For more information on the new features in TestStand 4.2, refer to:
What's New in TestStand 4.2
Jervin Justin
NI TestStand Product Manager

Similar Messages

  • Display pass or fail in Report based on return value

    Hi,
I'm trying to integrate TestStand with TestComplete using the ActiveX/COM server adapter. I have the COM set up so that TestStand can call a test in TestComplete and have TestComplete return a string value (true|false) to TestStand.
Now I need to get the TestStand report to display "pass" or "fail" text for this sequence call, based on the return string from TestComplete.
Is there somewhere in Expressions or Post Actions where I can state the following, and how:
    return value "True" = pass
    return value "false" = fail
    Thanks.

    Hi,
Option 1) Use a String Value Test. The Limits.String would be 'True'. This would be the easiest because you are already returning strings from your ActiveX call. You could make your ActiveX call a String Value Test step.
Option 2) Use a Pass/Fail step. You could store the returned string in a local variable and then create a Pass/Fail step whose pass/fail criterion is Locals.StringReturned == 'True'.
These are two options that hopefully can get you started.
    Thanks,
    PH

• I want to display the number of passed/failed tests in the TestStand XML report, please help me

Hi, I want to display the number of passed/failed tests in the TestStand XML report. Please help me.
My requirement is that later I should also be able to display these passed/failed counts in the report header.

I have attached a sequence to give you an idea of how to get the result count (I haven't verified that it works). I used the "SequenceFilePostResultListEntry" callback to calculate the result count,
and the count is passed to "ModifyReportHeader" through FileGlobals, where you can edit the XML file to show it in the report (follow the link in my previous post).
    Hope this helps
    AshwiN,
    Attachments:
Get Result Count.seq (8 KB)

• [LV - TS] How to execute a SEQ file, get the result (PASS/FAIL), and the report file?

    Hello,
I'm looking for the most obvious and simple way, from LabVIEW, to:
1- Start the TestStand engine: DONE
2- Get a list of SEQ files to execute: DONE
3- Loop through the list to execute each SEQ file: <- found some examples, but I have some trouble with the UIMessage handling
http://zone.ni.com/reference/en-XX/help/370052N-01/tsexamples/infotopics/executing_sequences_using_api_labview/
4- Get the result of the SEQ execution: <- unsure
5- Get the XML path+name of the generated report file: <- obviously not easy
    http://digital.ni.com/public.nsf/allkb/3D3B2358DF9B786B8625789500549A3B
    http://forums.ni.com/t5/NI-TestStand/Getting-Report-File-Path-location-at-execution-in-TestStand-2012/td-p/3041789
    Here is my current implementation :
    David Koch
     

    Hello,
    I tried to work from this example :
    http://forums.ni.com/t5/NI-TestStand/Execution-of-TestStand-Sequence-in-LabVIEW-via-TS-API-Different/td-p/2939616/page/2#M46686
However, it remains unclear how to get a simple answer from just running a sequence file outside of any model so far (Simple Pass). Like I told you, all I need, once the TS engine is started, is a VI with the following interface:
    INPUT
    1- TS engine reference
    2- SEQ file path (path or string)
    3- Sequence name to execute
    4- UUT string (optional, but yeah)
    5- error in
    OUTPUT
    1- Result boolean (PASS/FAIL)
    2- Path to XML report
    3- error out
It would be so cool to just have this.
About the surrounding UI, I'm working on it. The LabVIEW code is not really important at this point; running a SEQ file is what matters to me.
    David Koch
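In the absence of a posted solution, here is a rough sketch of such a wrapper using the TestStand API over COM, written in Python purely for illustration (LabVIEW block diagrams can't be pasted here). Treat every method name and argument as an assumption to verify against the TestStand API reference; the report-path output is omitted because, as the links above discuss, retrieving it is the non-trivial part:

    # Sketch only, not tested: drive the TestStand engine over COM (pywin32).
    # Mirrors the VI interface above, minus the UUT string and report path.
    import win32com.client

    def run_sequence_file(engine, seq_path, sequence_name="MainSequence"):
        """Return True if the execution finished with status "Passed"."""
        seq_file = engine.GetSequenceFileEx(seq_path)
        # No process model (None), don't break at the first step, default flags.
        execution = engine.NewExecution(seq_file, sequence_name, None, False, 0)
        execution.WaitForEndEx(-1)            # block until the execution ends
        status = execution.ResultStatus       # e.g. "Passed", "Failed", "Error"
        engine.ReleaseSequenceFileEx(seq_file, 0)
        return status == "Passed"

    # engine = win32com.client.Dispatch("TestStand.Engine")  # ProgID may vary by version
    # print(run_sequence_file(engine, r"C:\tests\MyTest.seq"))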
     

  • Sequence pass/fail variable for csv report

I need to write a small CSV file with my sequence test results. The way I am considering doing this is to:
1- save the information I need as the test progresses in local, file, or station global variables,
2- make a report string using the values stored in these variables, and
3- write the string to a file via a LabVIEW VI.
    Is this a reasonable way to do it? In which variable does the overall UUT pass/fail result get stored?
    StephenB

    I recommend against doing it this way.  What you want to do defeats the entire purpose of using TestStand in the first place.
    If you are using a newer version of TestStand (2012 or newer) I would create a custom plugin to do exactly what you are doing.
    There is a ton of information out there on plugins.
Here is an example that is probably what you are trying to do:
    http://zone.ni.com/reference/en-XX/help/370052N-01/tsexamples/infotopics/modelplugin/
    Here are some more links that are important:
    http://zone.ni.com/reference/en-XX/help/370052K-01/tsfundamentals/infotopics/pmpcreatingplugins/
    http://zone.ni.com/reference/en-XX/help/370052M-01/tsfundamentals/infotopics/process_model_plugin_ar...
    The advantage of using a plugin is that now all tests can utilize it as opposed to the report generation being specific to a single test.
    Hope this helps,
    jigg
    CTA, CLA
    teststandhelp.com
    ~Will work for kudos and/or BBQ~

• LMS reporting success status = pass/fail, where it should give unknown, while running on Saba 5.3

I am trying to run my content on Saba 5.3. After exiting the content, the LMS displays the success status as Pass or Fail, where it should show Unknown. My content does not contain any quiz, so on what basis is the LMS calculating the success status?

Yes, I know you said your content had no quiz, but please try to remember that we answer hundreds of these enquiries, and it's not uncommon for people to tell us one thing when their settings say another. So I have the habit of asking to see the actual screenshot.
Each LMS interprets the SCORM standard a little differently. In SCORM 2004 a course module can have both a status for Complete or Incomplete, as well as a separate status for Pass or Fail. (Under SCORM 1.2 the overall status was tracked by a single element - cmi.core.lesson_status - allowing for only a single status value that would boil down to Complete vs Incomplete or Pass vs Fail values.)
Under SCORM 2004 there are two separate elements tracking completion and success; for example, a module can legitimately report cmi.completion_status = "completed" together with cmi.success_status = "failed".
    The SCORM 2004 element cmi.completion_status is designed to indicate whether the learner has completed the SCO and can take the following status values:
    completed,
    incomplete,
    not attempted,
    unknown
    Your Pass/Fail status in SCORM 2004 falls under the element called cmi.success_status. This one indicates whether the learner has mastered the objective and can take the following values:
    passed,
    failed,
    unknown
    So while it is possible for a value of "unknown" to be set for either cmi.completion_status or cmi.success_status, I don't think this would be very commonly done for a Captivate module that actually has interaction data being sent to the LMS.  If the user is interacting with the content at all it would seem more likely that you'd be getting a 'completed' or 'incomplete' status value in the completion element and a 'pass' or 'fail' value in the success_status element.
The bit I don't understand is why you are particularly keen to see a value of 'unknown' showing up here.
    Check this page for information about getting SABA to work with Captivate: http://www.infosemantics.com.au/adobe-captivate-learning-managment-system-lms-guide/saba
    Perhaps one of the threads it links to will provide more help for you.

  • Test Stand Report Filename

How can I include the UUT serial number in the report filename rather than [NO UUT SERIAL NUMBER]? I've included the serial number in the report itself, but I don't know how to include it in the report filename.

    In response to John's question:
    Can you include other parameters in the filename?
TestStand 4.2 lets you specify the report path using expressions directly from the Report Options dialog box. For instance, using the
<UUTStatus> macro, you can send reports for UUTs that pass to a different location than reports for UUTs that fail.
    For more information on this, refer to:
    NI TestStand Help: Specifying Report File Paths by Expression
    NI TestStand 4.2 Release Notes: Using Expressions to Customize Report File Paths
    For more information on the new features in TestStand 4.2, refer to:
What's New in TestStand 4.2
    Jervin Justin
    NI TestStand Product Manager

• How do I configure a FOR loop to have the behavior of the step Loop Type: Pass/Fail count?

    Hello,
I'm using the Pass/Fail count set to one Pass to capture an event generated by my DUT. I originally used a numerical compare step with the Looping type of Pass/Fail count to accomplish this. Unfortunately, the implementation changed, and now I need to execute a few steps that cannot be combined within one code module as before. Nor can these steps be put into a subroutine. One of the steps executes a .NET assembly, and I haven't figured out how to pass the reference to the subroutine. When the subroutine is entered, the reference is lost and the method does not execute correctly.
I have an evaluation function that exits the loop when the expected conditions are met. Everything works except for the overall Pass/Fail result of the For loop. If the loop exits due to the first numerical compare test passing, I want the loop's overall execution to report as "Passed". If the loop reaches its predetermined number of iterations, the overall result needs to report as "Failed". It would also be nice to have the radio-button functionality of "Record Result of Each Iteration". Some conditions require a wait of over a minute for the event to occur, and I don't want to generate needless data for the report.
Currently I get the pass/fail status for each numerical compare inside the loop, but only "Done" for the loop itself. I don't want the initial few failures to cause the test to fail.
    Does anyone know how to do this? Suggestions?
    Thanks,

I have 2 steps in the loop that cannot be combined into one step. This forces me to implement the behavior in some form of loop. A While Loop could be an option. I would need to pass the Step.Result.Status from the Multiple Numeric Limit step to the condition of the While Loop. I tried to use the Step.TS.ID without success. Some form of reference to the numerical compare step would be needed. I'm using a For Loop, as I do want to limit the number of iterations of the loop. In the case where the loop iteration limit is reached, the event I am trying to detect did not occur at the correct time, and a failure needs to be reported.
I came up with something based on my comments in the second post:
1) To start with, I configured a Locals.ForLoop_5 variable. This is used to set the limit on the loop iterations and for comparison after the loop has finished executing. More on that later.
2) The first step inside the loop invokes a method within a .NET assembly that has been packaged in a DLL. This method gets the required data from the DUT and stores it to a text file.
3) The next step is the Multiple Numeric Limit step. This step invokes a VI that extracts the data from the text file. This step has been customized in several ways:
  i)  In Run Options, the Results Recording Option was disabled. This prevents recording of "Failed" while the loop executes when waiting for the event to happen.
  ii) In Run Options, Step Failure Causes Sequence Failure is unchecked. Same reasoning as i); these steps are not true failures.
  iii) A Post Action is configured to go to the next step after the For Loop End step On Condition True, with the logic Step.Result.Status == "Passed". This causes the loop to exit when the first "Passed" is encountered, which correlates with the event I'm trying to detect. On Condition False remains set to default.
4) The step after the For Loop End is an expression step with everything set to default except for the Status Expression logic, which is set to: Locals.Loopindex < Locals.ForLoop_5 ? (Step.Result.Status = "Passed") : (Step.Result.Status = "Failed"). This step performs the overall Pass/Fail reporting for the For Loop. If the number of loop iterations is less than the maximum, it could only have gotten there via the previous logic triggered by the numerical compare passing, therefore "Passed". If the loop index has reached the limit, then the event was not detected, therefore "Failed".
I have tested this workaround with success; it's just a pain to now have to implement this on my 40-some-odd For Loops.
    If there is a better way, I'd still like to hear it.

  • Why does captivate 5 scorm 2004 change the pass/fail status when using complete/incomplete

SUMMARY: I am trying to use SCORM 2004 reporting, but I am finding that ADOBE does not differentiate between the COMPLETED/INCOMPLETE and PASSED/FAILED variables in SCORM 2004.
RESULT: I cannot accurately get the passing/failing status from the course.
QUESTION: Is there anything I can do, or does ADOBE have a patch they have released which will accurately report the passed/failed STATUS as well as the completed/incomplete status?
OTHER RESEARCH: I even attempted to post these "proofs" on scorm.com, and they have something called "SATISFACTION TRACKED" to accommodate content creation systems (CAPTIVATE 5) which do not follow the standards of SCORM 2004.
BACKGROUND: the report statuses "Incomplete/Complete" and "Pass/Fail" under the publishing options for SCORM 2004 essentially mean the same thing when the CAPTIVATE COURSE sends setvar commands to the SCORM API... In other words, the SCORM API receives the same exact commands when publishing in both modes. "THIS IS WRONG"
Why is this wrong? IF REPORTING incomplete/complete, the CAPTIVATE 5 COURSE SHOULD NOT "change" the STATUS OF PASS/FAIL when attempting to report COMPLETE/INCOMPLETE with 100% slide views.
    DEMO #3> SHOWS THE PROBLEM
About DEMO #1 and DEMO #2: these are other reporting options I attempted, with other problems arising... so these images are just to support my original problem, which is shown in DEMO #3...
DEMO 1 (REPORT STATUS has no effect on reporting): SETTINGS:
    PASS: (PROBLEM)
    Anything above Passing Score is not recorded to LMS (once passing is reached...no more bookmarks are recorded)
    FAIL: (CORRECT)
    Failure gets sent to the database on the last screen with "completed"
DEMO 2 (Report status [Complete/Incomplete Pass/Fail] has no impact on commands sent to the SCORM API):
    SETTINGS:
    PASS: (CORRECT)
    Very Last page gets sent to the Database
    FAIL: (PROBLEM)
    NOTHING GETS SENT TO DATABASE (stuck in final score screen forever)...because it says "INCOMPLETE"
    DEMO 3:
    Report status [Complete/Incomplete Pass/Fail] has no impact on commands sent to scorm api
    SETTINGS:
    PASS: (CORRECT)
    Very Last page gets sent to the Database with a 90% passing with "completed"
    FAIL: (PROBLEM)
LAST SLIDE SENDS "completed" but IT GETS SENT AS A PASSING SCORE??? even when it is a failure
(IN OTHER WORDS, ADOBE CHANGES THE PASS/FAIL STATUS based on slide views...
this is WRONG... REALLY WRONG, because I am trying to report
"COMPLETE/INCOMPLETE" for 100% slide views)
    PASS/FAIL OPTIONS:
    OPTION I CAN PURSUE?
SO WHAT CAN I DO? YES, I CAN TELL THE LMS to COMPARE THE SCORE RECEIVED DURING A "COMPLETE" status... and then change the grade in the LMS based on the SCORE IN THE LMS... BUT THAT WOULD MEAN THE LMS COULD REPORT ONE GRADE and the COURSE COULD REPORT A DIFFERENT GRADE... and that would not be good.
    ALL OF OUR COURSES SHOW THE STUDENT THE FINAL SCORE WITHIN EACH COURSE..
    QUESTION TO ADOBE?
    SO ADOBE when are you going to differentiate these two reporting options so that the PASS/FAIL OPTIONS ARE ACCURATELY REPORTED WHEN USING COMPLETE/INCOMPLETE?
    MORE INFORMATION
HERE IS THE SCORM.COM "Satisfaction" adjustment (very last line) to accommodate inconsistent reporting in CAPTIVATE 5.
    Thanks,
    DOUG LUBEY OF LOUISIANA
    www.douglubey.com
    OTHER WEBSITES:
    http://forums.adobe.com/message/3332035
    http://forums.adobe.com/message/3122315
    http://forums.adobe.com/thread/712502?start=50&tstart=0
    http://forums.adobe.com/thread/800570
    http://forums.adobe.com/thread/720121
    SEARCH ENGINE REFERENCE:
    CAPTIVATE SCORM SUCCESS_STATUS
    captivate 5 scorm 2004 pass/fail is dependent on complete/incomplete
    scorm 2004 captivate cs5 success_status incorrect
    scorm 2004 captivate cs5 reports success_status incorrectly
    scorm 2004 captivate incorrectly reports pass/fail
    scorm 2004 captivate 5 incorrectly reports pass/fail
    scorm 2004 "captivate 5" incorrectly reports pass/fail
    captivate 5 sends a pass even when a failure is issued by the course using scorm 2004.
    captivate does not work properly when working with completed status.
    CAPTIVATE SCORM SUCCESS_STATUS COMPLETE STATUS SHOULD BE INDEPENDENT
CAPTIVATE 5 cmi.completion_status and cmi.success_status do not work independently with SCORM 2004; they both share the same definition. These two fields were designed to help differentiate between a passing or failing score and when the course is actually completed. WITH CAPTIVATE 5 they do not differentiate the two: COMPLETE means PASS and INCOMPLETE means FAIL.
I was trying to use "completion_status" as a way to determine when all slides were shown, so that I could send the final score (PASS/FAIL) to the LMS after all slides were viewed and this was the absolute score the student received.
    2011 CAPTIVATE CHANGES cmi.success_status TO PASSING WHEN COURSE REPORTS FAILURE
    2010 CAPTIVATE CHANGES SUCCESS TO PASSED WHEN COURSE REPORTS FAILURE
    CAPTIVATE 5 CHANGES cmi.success_status TO PASSING
    CAPTIVATE 5 CHANGES cmi.success_status TO PASSING WHEN COURSE REPORTS FAILURE
    CPTX CAPTIVATE CHANGES cmi.success_status TO PASSING
    CP5 WHY DOES CAPTIVATE SCORM SUCCESS_STATUS
    CPTX WHY DOES CAPTIVATE CHANGE PASSING STATUS
    CPTX CAPTIVATE SCORM CHANGES FAILING SCORE TO PASSING SCORE
    CAPTIVATE SCORM SUCCESS_STATUS COMPLETE STATUS MEANS THE SAME THING
    CAPTIVATE SCORM SUCCESS_STATUS COMPLETE STATUS SHOULD BE INDEPENDENT

    Hi,
    Here's something I found in a similar situation... hope this helps!
After publishing your CP5 project as a SCORM 2004 zip file, unzip the file so you can get into the separate files. Open the imsmanifest.xml file with a text editor; I like using Notepad++. Search for the line starting with <imsss:minNormalizedMeasure>. The complete line will probably look like <imsss:minNormalizedMeasure></imsss:minNormalizedMeasure>; this is the line that is supposed to contain your passing score (at least for our LMS). So, if your passing score is supposed to be 80%, edit this line to be <imsss:minNormalizedMeasure>0.80</imsss:minNormalizedMeasure> and save the file. Then select all the files and re-zip them; I use 7-zip for that.
I found one other weird thing that doesn't happen 100% of the time, so test it. In the example above, try to get a score of exactly 80% (or whatever passing grade you are using) and see if you get the correct status reported. I found I sometimes have to re-edit that line in the manifest file to be 0.79 so it correctly includes an exact 80% score. No idea why, and you have to watch the number of questions in your quiz to make sure that works for you mathematically.
    If this works or not, I'd recommend posting your results so others can get some ideas.  Good Luck!!
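If you have many packages to fix, the manual edit above can be scripted. Here is a rough sketch in Python of exactly those steps, assuming the package has already been unzipped into a folder; the folder path and the 0.80 score are placeholders, so test on a copy first:

    # Sketch: fill the empty <imsss:minNormalizedMeasure> element in an
    # unzipped SCORM 2004 package, then re-zip the package folder.
    import os
    import re
    import shutil

    def patch_passing_score(package_dir, score=0.80):
        manifest_path = os.path.join(package_dir, "imsmanifest.xml")
        with open(manifest_path, encoding="utf-8") as f:
            text = f.read()
        # Write the normalized passing score (e.g. 0.80 for 80%) into the element.
        text = re.sub(
            r"<imsss:minNormalizedMeasure>\s*</imsss:minNormalizedMeasure>",
            "<imsss:minNormalizedMeasure>%.2f</imsss:minNormalizedMeasure>" % score,
            text,
        )
        with open(manifest_path, "w", encoding="utf-8") as f:
            f.write(text)
        # Re-zip the patched package next to the original folder.
        shutil.make_archive(package_dir + "_patched", "zip", package_dir)

    # patch_passing_score(r"C:\courses\my_course_unzipped", 0.80)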

  • Tracking Pass/Fail on LMS

I am trying to track pass/fail on a Captivate project and I am having difficulties with the LMS. It is tracking as completed after just the first slide, before even taking the quiz. I have my quiz settings set to report "Quiz results only", Pass/Fail, Report to LMS as Percent, Reporting level is "Report Score", and the Pass/Fail options are set to 75%.
NOTE: I have allowed the user 2 attempts to pass. Could this be what's giving me trouble?

Look at the Project > Advanced Interaction panel and see if you have score values set up throughout your lesson as you should. It sounds like perhaps you have a low number of scored interactions and that 75% is being met as soon as they answer the first question correctly? Compare the Advanced Interaction results to your Quiz Preferences > Pass/Fail options and see what shakes out...
Erik
    Erik Lord
    http://www.capemedia.net
    Adobe Community Expert - eLearning
    http://www.adobe.com/communities/experts/
http://www.awaretips.net - Authorware Tips!

  • Can the sequence pass/fail status be controlled in Test Stand?

I have a sequence that contains a DoWhile loop. Within the loop, 5 or 6 steps are executed. If the result of those steps is a fail, the loop runs again until it passes or meets its allowable number of iterations. My problem is that if an iteration fails and then passes, the result for the sequence is Failed. Can the pass/fail status be controlled via an expression, etc.? I know I can create a subsequence and loop it with the step's properties. I am just curious to know if there is a way it can be done like I have explained.
    Thanks in advance for any help,
    Troy

    Troy,
You might be able to use the RunState.ThisContext.SequenceFailed property. Set it to False after your loop runs (e.g., with an expression step containing RunState.ThisContext.SequenceFailed = False), and the sequence should report as if it passed.
    John B.
    Applications Engineer
    National Instruments

  • Dynamically specify TestStand pass/fail string

I am performing tests utilizing multiple instruments. Each instrument may independently be in 'live' or 'simulated' mode. If any one instrument is in 'sim' mode, the test is considered in 'sim' mode. When in 'sim' mode, the tests pass.
My goal is to be able to look at the test report and determine whether a specific test ran in 'sim' mode or live mode. My solution is to append the suffix '-sim' to the test status produced by TestStand. Thus the 'passed' and 'failed' strings would become 'passed-sim' and 'failed-sim' in 'sim' mode. These strings would have to be given to TestStand at the same time as the measurement value, but before the test exited.
Any thoughts or ideas on how I might accomplish the dynamic setting of the pass/fail string? I don't want this to be a permanent change; I want TestStand to revert back to the original pass/fail strings for the next sequential step.
    Hurst C.

    Hi Believer,
What you seem to be trying to do is modify the results, or a property of the results, from the previous step. Unfortunately, modifying this property will only take effect if you re-run that step, e.g., step back one step. What you need to modify is the property in Locals.ResultList: i.e., Locals.ResultList[0].ReportText for the first step recorded in the sequence, Locals.ResultList[n].ReportText for the nth recorded step.
By modifying 'RunState.Sequence.Main["Clock test (4.0V)"].Result.ReportText="FLAG"' you are actually modifying the runtime version of the step before it executes. Therefore, provided you do not overwrite the contents of ReportText within your step, this initial value will be seen in the result.
You can check this by stepping into your step and checking the value of ReportText within the step.
    Hope this clarifies things
    Regards
    Ray Farmer

  • Trouble with the pass/fail test

I started having this trouble just recently, and I can't figure out the problem. All of a sudden (without me changing anything anywhere, apparently), whenever I call a pass/fail test and the test fails, the sequence execution immediately jumps to the <End Group> step. I want the execution to proceed with the next step regardless of a pass or a fail. I have rechecked the run options repeatedly, and there's nothing in there that might be causing this. Also, if a pass/fail step fails and I am running Single Pass, even the report does not get generated.
    I know it can be really hard for someone out there to tell me what the problem is. I am posting this message here just to see if people have any general comments/suggestions on what might be going on (perhaps this is a common problem which someone has run into before). The comments/suggestions might help me debug the problem.
    Thanks
    Anuj

    Hi Anuj,
Do you have other step types in the same sequence, or just Pass/Fail steps?
What adapter type are you using: LabVIEW, CVI, etc.?
Does it happen in just one sequence or in any sequence?
    What versions are you using?
    Posting an example would help.
Regards
    Ray Farmer

  • Pass/Fail Assessments

I am using Captivate 3 and am looking for resources that can guide me through creating an assessment in a pass/fail mode. I have created assessments and have created quizzes, but I need to bring the two together: record a demonstration of a software package we use, then launch an assessment that requires the user to get XX percent of mouse clicks, etc., correct in order to pass. I know that I can use failure captions and set the number of tries a user is allowed, but I have not been able to determine a way to set it up as pass/fail.
    Thanks for your assistance.

    Hello,
No problem with CP. First you will have to determine the mastery score, i.e., what is needed to pass. This has to be done in Quiz Preferences, Pass or Fail, Pass options. You can combine Question slides with assessment, where the user has to use interactive objects to be scored. An overview of scored Question slides and interactive objects can be seen in the 'Advanced Interaction' view (Project menu). Then you will have to set the Reporting (also in Quiz Preferences), depending on the SCORM version (I do not know Plateau myself). It is possible to report Pass/Fail, as you will see in those settings.
    Hope this can be a start.
    Lilybiri

  • SCORM - Simple Quiz - Pass/Fail Only

    Flash CS4
    AS2
    Good Afternoon,
    I've created a simple multiple choice quiz in Flash.
I was wondering if anyone could point me to some resources for writing a simple SCORM wrapper and the accompanying AS2 code. This is only going to report a pass/fail to the LMS - that's all, nothing more at this point. I'm hoping there's a simple tutorial on this; as I said, it's pretty straightforward.
    Thank you!
    Actus

    I have a tutorial here:
    http://pipwerks.com/2008/04/27/how-to-add-basic-scorm-code-to-a-flash-movie/
    It uses a custom (and free) SCORM ActionScript class and JavaScript wrapper so you don't need to write your own.
    The tutorial uses SCORM 2004, but the JavaScript wrapper and ActionScript classes are version-neutral (works for SCORM 2004 and 1.2). There are two versions of the ActionScript class available: one for AS2 and one for AS3.
    You can download the wrapper files here:
    http://pipwerks.com/downloads/
    - philip
