Best practice to integrate UPK modes in a separate eLearning module? Specifically Lectora

I have several elearning modules to develop. For richness of content (and standards in my firm) these will be built either in Lectora or in Flash. I'd like to include some process simulations in those modules.
If I use UPK to generate the simulations how do I "include" them in my elearning?
An earlier thread confirmed that you cannot generate a simulation as an SWF or similar that is easy to embed. So my current understanding is the approach needs to be:
- generate all the required UPK simulations and publish as their own SCORM/LMS module (we use iLearn+ so that shouldn't be hard)
- generate the separate elearning module and link (somehow) to the required simulation in the UPK LMS publication
If anyone's done this, or knows a better way, I'd love to know, because I cannot find anything that explains the process. In particular:
- how to link to a specific simulation mode (i.e. I want a user to click a link and immediately run the "Try It" for process X, not wade through a menu of options to pick the one to run)
- how to return/exit once a simulation is completed (again, I want the user to just finish and return to the main eLearning module, not click through other screens, close windows, etc.)
- how to pass progress/tracking back to the eLearning module (for example, I want to know if they completed a "Try It", but need to know the score if they completed a "Know It").
Thanks.

The ideal way is to publish the Topics in SCORM format, then import them into your SCORM-compliant LMS and package them all up nicely. If you're not using a SCORM-compliant LMS you are in more trouble. You may be able to import a UPK LMS package into Lectora, but I've never tried it.
If you just want to be able to open a specific Topic in a specific mode, then use kp.html to get the link to the Topic, and use this as a hyperlink in your training content. If you use the CLOSE parameter then once the Topic has completed, the window will close and the user will be passed back to the thing the Topic was called from (i.e. the hyperlink they clicked on). I've used this with Articulate-published PowerPoint, and set the slide to auto-advance once the user clicks on the hyperlink, so when they return they see the NEXT slide.
e.g. http://{website}/PlayerPackage/dhtml_kp.html?guid={DocumentID}&Mode={mode}&Close
where {website} is your web server, {DocumentID} is the Topic's 32-character UPK Document ID, and {mode} is T for Try It, K for Know It, and so on. But get it from kp.html rather than hand-building it yourself.
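As a rough illustration, a link in that format can be assembled programmatically. The host and GUID below are placeholders, and the See It / Do It letters are my assumption - as noted above, always copy the real link from kp.html rather than trusting a hand-built one:

```python
from urllib.parse import urlencode

def upk_topic_link(server, document_id, mode):
    """Build a direct link to a UPK Topic in a given play mode.

    mode: 'T' = Try It, 'K' = Know It (other modes such as See It /
    Do It are assumed to follow the same single-letter scheme).
    The trailing 'Close' flag makes the player window close itself
    when the Topic finishes, returning the user to the caller.
    """
    query = urlencode({"guid": document_id, "Mode": mode})
    return f"http://{server}/PlayerPackage/dhtml_kp.html?{query}&Close"

# Placeholder values -- substitute your own server and Topic ID.
link = upk_topic_link("training.example.com",
                      "0123456789abcdef0123456789abcdef", "T")
```

Dropping such a link into Lectora as an ordinary hyperlink target is all the embedding that is needed; the Close flag handles the return trip.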
Of course you need to publish to a web server (as well as whatever you are doing in Lectora / LMS) for this to work...
If you need to return a score, you need to use SCORM.

Similar Messages


  • Best practice to integrate the external(ERP or Database etc) eCommerce data in to CQ

    Hi Guys,
I am referring to the Geometrixx-Outdoors project for building eCommerce functionality in our project.
Currently we are integrating with an ERP system to fetch the product details.
Now I need to store all the product data from the ERP system into our CRX under the etc/commerce/products/<myproject> folder structure.
Do I need to create a CSV file structured as explained in the Geometrixx-Outdoors project, and place it exactly the way they have described in the documentation? By doing this, will the CSV importer import the data into CRX and create the sling:Folder and nt:unstructured nodes?
Please guide me on the best practice for integrating external eCommerce data into CQ for eCommerce projects.
    Are there any other best practices ?
    Your help in this regard is really appreciated.
    Thanks

    Hi Kresten,
    Thanks for your reply.
    I went through the eCommerce framework link which you sent.
Can you give me the steps to use the eCommerce framework to pull all the product information into our CRX repository, and also explain how to synchronise the ERP system and the CRX data? Is there a scheduling mechanism to pull the data from our ERP system and sync it with the CRX repository?
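Not CQ-specific, but the pull-and-synchronise pattern being asked about can be sketched generically: a scheduled job fetches records keyed by product ID from the source system and upserts only the ones that are new or changed. All names here are hypothetical stand-ins, not the CQ/Sling API:

```python
def sync_products(fetch_from_erp, read_local, write_local):
    """Generic one-pass sync: pull records keyed by product ID and
    upsert any that are new or changed. In CQ this body would run
    inside a scheduled job (e.g. a scheduler task) and write under
    /etc/commerce/products/<project> instead of a local store."""
    updated = []
    local = read_local()                      # {product_id: record}
    for pid, record in fetch_from_erp().items():
        if local.get(pid) != record:          # new or changed
            write_local(pid, record)
            updated.append(pid)
    return updated

# Toy in-memory stand-ins for the ERP and the repository.
erp = {"p1": {"name": "Tent"}, "p2": {"name": "Stove"}}
repo = {"p1": {"name": "Tent"}}
changed = sync_products(lambda: erp, lambda: repo,
                        lambda pid, rec: repo.__setitem__(pid, rec))
```

Diffing before writing keeps repeated runs cheap and idempotent, which is what makes a simple polling schedule viable.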
    Thanks

  • Best Practice to Integrate CER with RedSky E911 Anywhere via SIP Trunk

We are trying to integrate CER 9 with RedSky for V911 using a SIP trunk and need assistance with best practice and configuration. There is very little documentation regarding "best practice" for routing these calls to RedSky. This trunk will be handling the majority of our geographically dispersed company's 911 calls.
My question is: should we use an IPsec tunnel for this? The only reference I found was this: http://www.cisco.com/c/en/us/solutions/collateral/enterprise-networks/virtual-office/deployment_guide_c07-636876.html, which recommends an IPsec tunnel for the SIP trunk to Intrado. I would think there are issues with an unsecured SIP trunk for 911 calls. Looking for advice or specifics on how to configure this. Does the SIP trunk require a CUBE, or is a CUBE only required for the IPsec tunnel?
    Any insight is appreciated.
    Thank you.

You can use Session Trace in RTMT to check who is disconnecting the call and why.

  • Best Practice on installing Workflow Manager on a separate box

    Hi,
    We are installing Workflow Manager 1.0 to work with our SP2013 farm, which is fairly simple.
However, we anticipate a significant amount of workflow work in the future.
Could you please share your thoughts/experience/lessons learnt on having WF Manager on a separate box, compared to having it on the same box as the SP2013 front-end server, in terms of
    1. Scalability
    2. Stability 
    3. Ease of Installation/Management  
    etc
    Thanks a lot.
    Dineth

    Hi Dineth,
    Thanks for posting your query,
Kindly browse the URLs below for best practices and step-by-step installation instructions:
http://msdn.microsoft.com/library/azure/jj730571%28v=azure.10%29.aspx
http://www.sjoukjezaal.com/blog/2014/05/sharepoint-2013-workflows-part-2-installing-and-configuring-the-workflow-manager/
I hope this is helpful to you.
    Regards,
    Dharmendra Singh (MCPD-EA | MCTS)
    Blog : http://sharepoint-community.net/profile/DharmendraSingh

  • Best practice for having an access point giving out only a specific range

    Hey All,
I have an access point which is currently set to relay all DHCP requests to the server DC-01. However, the range that has been set up is running low on available IP addresses, so I have been asked whether it is possible to set up another range for the AP only.
Is there a way to set up DHCP with a new range so that anything from that access point is given a 192.168.2.x subnet address, as opposed to the standard 192.168.1.x subnet?
Or would it be easier/better to create a superscope and slowly migrate the users to a new subnet with a larger range?
Any help or suggestions would be appreciated.
    thanks
    Anthony

    Hi,
You could configure a DHCP superscope to achieve this.
    For details, please refer to the following articles.
    Configuring a DHCP Superscope
    http://technet.microsoft.com/en-us/library/dd759168.aspx
    Create a superscope to solve the problem of dwindling IP addresses
    http://www.techrepublic.com/article/create-a-superscope-to-solve-the-problem-of-dwindling-ip-addresses/
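The reason a range can be tied to the AP at all is that a DHCP server picks the scope whose subnet contains the relay agent address (giaddr) on the forwarded request; a superscope simply lets one relay interface be served from several ranges. A sketch of that selection logic (illustrative only, not the Windows DHCP implementation):

```python
import ipaddress

def select_scope(giaddr, scopes):
    """Return the scope whose subnet contains the relaying agent's
    address (the giaddr field of a relayed DHCP request). A
    superscope groups several such scopes so one relay interface
    can be served from more than one address range."""
    relay = ipaddress.ip_address(giaddr)
    for subnet, pool_name in scopes:
        if relay in ipaddress.ip_network(subnet):
            return pool_name
    return None

scopes = [("192.168.1.0/24", "existing-range"),
          ("192.168.2.0/24", "ap-only-range")]

# A request relayed by an AP interface at 192.168.2.1 lands in
# the new range; one relayed from 192.168.1.x stays in the old one.
chosen = select_scope("192.168.2.1", scopes)
```

The practical upshot: giving the AP's relay interface an address in the new subnet is what steers its clients to the new range.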
    Best Regards,
    Andy Qi
    TechNet Community Support

  • Best practices on number of pipelines in a single project/app to do forging

    Hi experts,
I need a couple of clarifications from you regarding Endeca guided search for an enterprise application.
1) Say, for example, I have a web application iEndecaApp which is created by imitating the JSP reference application. All the necessary presentation APIs are present in the WEB-INF/lib folder.
1.a) Do I need to configure anything else to run the application?
1.b) I have created the web app in Eclipse. Will I be able to run it from any third-party Tomcat server? If not, where do I have to put the WAR file to successfully run the application?
2) For the above web app "iEndecaApp" I have created an application named "MyEndecaApp" using the deployment template, so one generic pipeline is created. I need to integrate 5 different sources of data. To be precise:
i) CAS data
ii) Database table data
iii) Txt file data
iv) Excel file data
v) XML data
2.a) So what is the best practice for integrating all the data? Do I need to create 5 different pipelines (one per source), or should I integrate all 5 sources in a single pipeline?
2.b) If I create 5 different pipelines, should they all reside in the single application "MyEndecaApp", or do I need to create 5 different applications using the deployment template?
Hope you guys will reply back soon. Waiting for your valuable response.
    Regards,
    Hoque

Point number 1 is very much possible, i.e. running the JSP reference application from a server of your choice. I haven't tried that myself, but I will shed some light on it once I do.
Point number 2 - You must create 5 record adapters in the same pipeline diagram and then join them with the help of joiner components. The result must be fed to the property mapper.
So 1 application, 1 pipeline, and all 5 data sources within one application is the ideal case.
And logically, since they are all related data, they must have some joining conditions, and you can't ask 5 different MDEX engines to serve you a combined result.
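The joiner idea above can be sketched outside Endeca: five record sets keyed on a shared ID, merged into one record per key before property mapping. This is a plain illustration of the join logic, not Endeca pipeline XML, and the field names are made up:

```python
from collections import defaultdict

def join_sources(*sources):
    """Merge several record sets on a common 'id' key, mimicking a
    pipeline joiner feeding one property mapper: each source adds
    its fields to the record accumulated for that id."""
    merged = defaultdict(dict)
    for source in sources:
        for record in source:
            merged[record["id"]].update(record)
    return dict(merged)

# One toy record per source, all describing the same product.
cas  = [{"id": "p1", "title": "Tent"}]
db   = [{"id": "p1", "price": 99}]
txt  = [{"id": "p1", "note": "seasonal"}]
xlsx = [{"id": "p1", "stock": 12}]
xml  = [{"id": "p1", "brand": "Acme"}]

records = join_sources(cas, db, txt, xlsx, xml)
```

Keeping the join in one pipeline is what lets a single engine serve the combined record, which is the point being made above.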
    Hope this helps you.
    <PS: This is to the best of my knowledge>
    Thanks,
    Mohit Makhija

  • Office Web Apps - Best Practice for App Pool Security Account?

    Guys,
    I am finalising my testing of Office Web Apps, and ready to move onto deploying it to my live farm.
    Generally speaking, I put service applications in their own application pool.
Obviously doing so has an overhead in memory and processing; however, generally speaking it is best practice from a security perspective to use separate accounts.
I have to create 3 new service applications in order to deploy Office Web Apps; in my test environment these use the default SharePoint app pool.
Should I create one application pool for all my Office Web Apps service applications with a fresh service account, or does it make no difference from a security perspective to run them in the default app pool?
    Cheers,
    Conrad
    Conrad Goodman MCITP SA / MCTS: WSS3.0 + MOSS2007

I run my OWA under its own service account (spOWA) and use only one app pool. Just remember that if you go this route, "When you create a new application pool, you can specify a security account used by the application pool to be either a predefined Network Service account or a managed account. The account must have db_datareader, db_datawriter, and execute permissions for the content databases and the SharePoint configuration database, and be assigned to the db_owner role for the content databases." (http://technet.microsoft.com/en-us/library/ff431687.aspx)

  • Best Practice for managing variables for multiple environments

I am very new to Java WebDynPro and have a question concerning our deployments to Sandbox, Development, QA, and Production environments.
What is the 'best practice' people use so that if you have information specific to each environment, you don't hard-code it in your Java WebDynPro code?
I could put the value in a properties file, but how do I make that vary per environment? Otherwise I'd still have to change the property file and re-deploy for each environment. I know there are some configurations on the Portal, but am not sure if that will work in my instance.
For example, I have a URL that varies based on my environment. I don't want to hard-code and re-compile for each environment. I'd prefer to get that information on the fly by knowing which environment I'm running in and loading the appropriate URL.
So far the only thing I've found that comes close to telling me where I'm running is a Parameter Map, but the 'key' in the map is the URL, not the value, and I suspect there's a cleaner way to get something like that. I used Eclipse's autosense in NetWeaver to discover some of the things available in my web context.
Here's the code I used to get that map:
TaskBinder.getCurrentTask().getWebContextAdapter().getRequestParameterMap();
In the forum is an example that gets the IP address of the site you're serving from. It sounds like it is going to be (or has been) deprecated - it worked on my system just now - and I would really rather have something like the DNS name, not an IP that could change. Here's that code:
String remoteHost = TaskBinder.getCurrentTask().getWebContextAdapter().getHttpServletRequest().getRemoteHost();
Thanks in advance for any clues you can throw my way -
Greg

Hi Greg:
I suggest you check the "Software Change Management Guide"; in it you can find an explanation of the best practices for working with a development infrastructure.
This is the link:
http://help.sap.com/saphelp_erp2005/helpdata/en/83/74c4ce0ed93b4abc6144aafaa1130f/frameset.htm
Now, if you want to get the IP of your server or the name of your site, you can do the following:
HttpServletRequest request = ((IWebContextAdapter) WDWebContextAdapter.getWebContextAdapter()).getHttpServletRequest();
String server_name = request.getServerName();
String remote_address = request.getRemoteAddr();
String remote_host = request.getRemoteHost();
You only need to add servlet.jar in your project properties > Build Path > Libraries.
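Once the server name is available, the pattern Greg is after - keying configuration off the detected host instead of hard-coding it - can be sketched generically. The hostnames and URLs below are made up, and this is not the WebDynPro API, just the lookup idea:

```python
def url_for_environment(server_name, env_urls, default=None):
    """Pick an environment-specific URL from the host the app is
    running on, so nothing is hard-coded or recompiled per system.
    env_urls maps a hostname prefix to that environment's URL."""
    for prefix, url in env_urls.items():
        if server_name.startswith(prefix):
            return url
    return default

# Hypothetical naming convention: environment encoded in the
# hostname prefix. server_name would come from getServerName().
env_urls = {"sbx": "http://sandbox.example.com/service",
            "dev": "http://dev.example.com/service",
            "qa":  "http://qa.example.com/service",
            "prd": "http://prod.example.com/service"}

target = url_for_environment("dev-host-01.example.com", env_urls)
```

Keeping the mapping in an external properties file (one per landscape, or one shared file keyed by prefix) avoids the redeploy-per-environment problem.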
    Good Luck
    Josué Cruz

  • IronPort ESA best practice for DNS servers?

    Hello!
    Is there a best practice for what servers should be used for the Cisco IronPort DNS servers?
Currently when I check our configuration, we have it set to "Use these DNS servers"; the first two are our domain controllers and the last two are Google DNS.
Is there a best-practice way of doing this? I'm thinking of selecting the "Use the Internet's Root DNS Servers" option, as I can't really see an advantage to using internal DCs.
    Thoughts?

Best practice is to use the Internet Root DNS Servers and define specific DNS servers for any domain that you need to give different answers for. Since internal mail delivery is controlled by smtproutes, using internal DNS servers is normally not required.
If you must use internal DNS servers, I recommend servers dedicated to your IronPorts rather than servers that also handle enterprise lookups. IronPorts can place a very high load on DNS servers, because every outside connection results in multiple DNS lookups (forward, reverse, SBRS).
If you don't have enough DNS horsepower you are susceptible to a DoS attack, either through accident or design. If the IronPorts overload your internal DNS servers it can impact your entire enterprise.

  • SQL Server Best Practices Architecture UCS and FAS3270

Hey there,
We are moving from an EMC SAN and physical servers to a NetApp FAS3270 and a virtual environment on Cisco UCS B200 M3.
Traditionally, best practice for SQL Server databases is to separate the following files onto separate LUNs and/or volumes:
- Database data files
- Transaction log files
- TempDB data files
I have also seen additional separations for:
- System data files (Master, Model, MSDB, Distribution, Resource DB, etc.)
- Indexes
Depending on the size of the database and I/O requirements, you can add multiple files for databases. The goal is to provide optimal performance. The method of choice is to separate reads and writes (random and sequential activities). If you have 30 disks, is it better to separate them, or to leave the files in one continuous pool? For example:
- 12 drives RAID 10 (data files)
- 10 drives RAID 10 (log files)
- 8 drives RAID 10 (TempDB)
Please don't get too caught up in the numbers used in the example; focus on whether or not (using the FAS3270) it is better practice to separate or consolidate drives/volumes for SQL Server databases.
Thanks!

Hi Michael,
It's a completely different world with NetApp! As a rule of thumb, you don't need separate spindles for different workloads (like SQL databases & logs) - you just put them into separate flexible volumes, which can share the same aggregate (i.e. a grouping of physical disks).
For more detailed info about SQL on NetApp have a look at this doc:
http://www.netapp.com/us/system/pdf-reader.aspx?pdfuri=tcm:10-61005-16&m=tr-4003.pdf
Regards,
Radek

  • Dropbox workflow best practices?

I'm looking for best practices on using Dropbox for an InDesign/InCopy workflow. Specifically, in my own testing with two computers, I've realized that the lockout function does not work when using Dropbox, so multiple people can edit the same file simultaneously... which is obviously problematic. Suggestions for how to avoid this? If I create a "lock" folder and move files in there before editing them, will my designer's Dropbox refresh fast enough to prevent him from also opening that file?
    How are your own Dropbox workflows set up, for those of you that use it? Are there other hiccups, hangups or landmines I should watch out for?

Well, the issues are what you stated yourself: namely that Dropbox doesn't copy the lock file across, and if you manually "locked" a file by shifting it into a different folder, another user could still grab it before Dropbox had shifted their copy.
In that scenario, though, you could speed up the shift by making sure that the InDesign file was updated on both computers first. To do that you'd have to save the file, wait until it was uploaded (you get the green tick), wait at least the same amount of time again (for it to download on the other computer), then shift it. This means that Dropbox would shift it quicker, because that is the only action it is transferring between computers.
Another thing to be aware of is that if your Dropbox is slogging away at another task (say, uploading a folder of linked images) it might put that shift down the list.
    In conclusion I don't think it's realistic to expect Dropbox to stop two users editing the same file, you need some other project structure to achieve that.
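The race described above is why a "lock folder" over Dropbox is fragile: the claim is not atomic across machines. On a single machine, an exclusive-create is atomic, which is what real lockout mechanisms rely on. A sketch with hypothetical file names:

```python
import os, tempfile

def try_claim(lock_path):
    """Atomically claim a lock by creating the lock file with
    O_EXCL: exactly one local process can succeed. Over Dropbox,
    two machines each do this on their *own* copy of the folder,
    so both can 'succeed' before the sync propagates -- which is
    the race described in the thread above."""
    try:
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except FileExistsError:
        return False

lock = os.path.join(tempfile.mkdtemp(), "story.indd.lock")
first = try_claim(lock)    # first claim on this copy succeeds
second = try_claim(lock)   # a second claim on the same copy fails
```

Because Dropbox only syncs the result after the fact, the guarantee holds per machine, not per shared folder - hence the advice to use some other project structure for exclusivity.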

  • ACS best practices for device config

    Can anybody tell me what the best practice is in regards to device setup in ACS?
Specifically, is it better to specify each device individually, or is it OK to allow whole subnets access, thereby allowing all devices in those subnets to use ACS for AAA?

    Find My iPad is not a fully reliable way to secure data on a corporate iPad. The service is too easy to defeat and block you from wiping the data. You can, however, make settings that will make it much more difficult for someone to get data from your company iPads and iPhones even if they can defeat the Find My iPad connection. I'd suggest you read these Apple documents:
    http://www.apple.com/ipad/business/docs/iOS_Security.pdf
    http://www.apple.com/ipad/business/docs/iOS_MDM.pdf
    They'll give you an overview of how to secure your devices.
    Regards.

  • "Installation best practices." Really?

    "Install Final Cut Pro X, Motion 5, or Compressor 4 on a new partition - The partition must be large enough to contain all the files required by the version of Mac OS X you are installing, the applications you install, and enough room for projects and media…"
As an FCS3 user, if you were to purchase an OS Lion Mac, what would your "installation best practices" be? It seems the above recommendation does not take into consideration FCS3's abrupt death, or my desire to continue to use it for a very long time.
Wouldn't the best practice be to install FCS3 on a separate partition with an OS that you never, ever update? Also, there doesn't appear to be any value added to FCS by Lion. That's why I would be inclined to partition FCS3 with Snow Leopard - but I'm really just guessing after being thrown off a cliff without a parachute.
Partitioning... does this mean I'll need to restart my computer to use FCS? What about my other applications? Will I be able to run Adobe Creative Suite off the other partition, or is the "best practice" to install a duplicate of every single application I own on the FCS partition?
    Note: This is not to say I'll never embrace FCX. But paying (with time & money) to be a beta tester just isn't gonna happen.  If it's as easy to use as claimed, I'm not falling behind, as has been suggested by some. I'm just taking a pass on the early adopter frustration.

    Okay, but are you not concerned with future OS updates that may render FCS3 useless?  Perhaps our needs are different, but I want and need FCS3 to continue to work in the future.
    That "best practices" link up at the top of this page is there for a reason, and it says "partition."  What it doesn't say is why, and that's really disappointing and concerning.  It's a little late in the game, but I would prefer Apple walk like a man and lay it on the line; the good, the bad, and the ugly.
    I'm glad to hear Lion is working okay for you!

  • Best Practice for UPK implementation

We will start using the UPK tool in our Oracle E-Business Suite (11.5.10) environment soon.
We are in the process of configuring the tool and creating a standard template for training documents.
For example: which screen resolution? Which font size and color, bubble icon, pointer position, task bar setting, etc.?
If anyone has any best-practice documents to share, I would appreciate it.

    Hi,
Some of the standards will depend on your end-user capabilities/environments, but I have a good standards document we use when developing UPK content for the E-Business Suite which might help.
    Email me at [email protected] and I'll send it over.
    Jon
