Best practice to integrate UPK modes in a separate eLearning module?

I have several elearning modules to develop. For richness of content (and standards in my firm) these will be built either in Lectora or in Flash. I'd like to include some process simulations in those modules.
If I use UPK to generate the simulations how do I "include" them in my elearning?
An earlier thread confirmed that you cannot generate a simulation as an SWF or similar that is easy to embed. So my current understanding is the approach needs to be:
- generate all the required UPK simulations and publish as their own SCORM/LMS module (we use iLearn+ so that shouldn't be hard)
- generate the separate elearning module and link (somehow) to the required simulation in the UPK LMS publication
If anyone's done this, or knows a better way, I'd love to know, because I cannot find anything that explains the process. In particular:
- how to link to a specific simulation mode (i.e. I want a user to click a link and immediately run the "try it" for process X, not wade through a menu of options to pick the one to run)
- how to return/exit once a simulation is completed (again, want the user to just finish and return to the main elearning module, not click through other screens, close windows, etc)
- how to pass progress/tracking back to the elearning (for example, I want to know if they completed a "try it", and I need to know the score if they completed a "know it").
Thanks.

The ideal way is to publish the Topics in SCORM format, then import them into your SCORM-compliant LMS and package them all up nicely. If you're not using a SCORM-compliant LMS you are in more trouble. You may be able to import a UPK LMS package into Lectora, but I've never tried it.
If you just want to be able to open a specific Topic in a specific mode, then use kp.html to get the link to the Topic, and then use this as a hyperlink in your training content. If you use the CLOSE parameter, then once the Topic has completed, the window will close and the user will be passed back to the thing the Topic was called from (i.e. the hyperlink they clicked on). I've used this with Articulate PowerPoint, and set the slide to auto-advance once the user clicks on the hyperlink, so when they return they see the NEXT slide.
e.g. http://{website}/PlayerPackage/dhtml_kp.html?guid={Document ID}&Mode={mode}&Close
where {website} is your web server, {Document ID} is the 32-character UPK Document ID of the Topic, and {mode} is T for Try It, K for Know It, and so on. But get it from kp.html rather than hand-building it yourself.
Of course you need to publish to a web server (as well as whatever you are doing in Lectora / LMS) for this to work...
If you need to return a score, you need to use SCORM.
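To make the link format above concrete, here is a minimal sketch of assembling that launch URL before attaching it to a hyperlink or button in the e-learning module. It is written in Java purely for illustration; the class, host name, and GUID are hypothetical, and the parameter names are simply those shown in the kp.html link above, not an official API.

    // Hedged sketch: assemble a UPK Player launch URL of the form shown above.
    // The host and GUID are placeholders; always copy the real link from kp.html
    // rather than trusting a hand-built one.
    public class UpkLinkBuilder {

        public static String launchUrl(String webServer, String documentId, String mode) {
            // "&Close" asks the Player window to close itself when the Topic finishes,
            // returning the learner to whatever opened the link.
            return "http://" + webServer + "/PlayerPackage/dhtml_kp.html"
                    + "?guid=" + documentId
                    + "&Mode=" + mode      // e.g. "T" for Try It, "K" for Know It
                    + "&Close";
        }

        public static void main(String[] args) {
            // Hypothetical values for illustration only.
            System.out.println(launchUrl("training.example.com",
                    "0123456789abcdef0123456789abcdef", "T"));
        }
    }

In Lectora the resulting URL would simply be attached to a button or hyperlink action that opens the Player in a new window.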

Similar Messages

  • Best practice to integrate external (ERP or database, etc.) eCommerce data into CQ

    Hi Guys,
    I am referring to the Geometrixx Outdoors project for building eCommerce functionality in our project.
    Currently we are integrating with an ERP system to fetch the product details.
    Now I need to store all the product data from the ERP system into our CRX repository under the etc/commerce/products/<myproject> folder structure.
    Do I need to create a CSV file structure as explained in the Geometrixx Outdoors project and place it exactly the way they have described in the documentation, so that the CSV importer will import the data into CRX and create the sling:Folder and nt:unstructured nodes?
    Please guide me on the best practice for integrating external eCommerce data into CQ to build eCommerce projects.
    Are there any other best practices ?
    Your help in this regard is really appreciated.
    Thanks

    Hi Kresten,
    Thanks for your reply.
    I went through the eCommerce framework link which you sent.
    Could you give me the steps to use the eCommerce framework to pull all the product information into our CRX repository, and also explain how to synchronise the ERP system data with the CRX data? Is there a scheduling mechanism to pull the data from our ERP system and sync it with the CRX repository?
    Thanks
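    On the scheduling question, one common pattern is to register an OSGi component with a Sling scheduler expression and have it call whatever import/sync logic you settle on. The sketch below only illustrates that pattern, not the CQ eCommerce importer itself; the component name, cron expression, and sync method are assumptions.

        import org.osgi.service.component.annotations.Component;

        // Hedged sketch: a Sling "whiteboard" scheduled job that could drive a periodic
        // ERP-to-CRX product sync. The cron expression and the sync call are placeholders;
        // the real import would use whichever importer/API you choose (CSV importer,
        // custom JCR writes, etc.).
        @Component(
                service = Runnable.class,
                property = {
                        "scheduler.expression=0 0 2 * * ?",   // run every night at 02:00
                        "scheduler.concurrent:Boolean=false"  // never run two syncs at once
                })
        public class ErpProductSyncJob implements Runnable {

            @Override
            public void run() {
                // Placeholder: fetch changed products from the ERP web service and
                // create/update nodes under /etc/commerce/products/<myproject>.
                syncProductsFromErp();
            }

            private void syncProductsFromErp() {
                // Intentionally left as a stub in this sketch.
            }
        }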

  • Best Practice to Integrate CER with RedSky E911 Anywhere via SIP Trunk

    We are trying to integrate CER 9 with RedSky for V911 using a SIP trunk and need assistance with best practice and configuration. There is very little documentation regarding "best practice" for routing these calls to RedSky. This trunk will be handling the majority of our geographically dispersed company's 911 calls.
    My question is: should we use an IPsec tunnel for this? The only reference I found was this: http://www.cisco.com/c/en/us/solutions/collateral/enterprise-networks/virtual-office/deployment_guide_c07-636876.html which recommends an IPsec tunnel for the SIP trunk to Intrado. I would think there are issues with an unsecured SIP trunk for 911 calls. Looking for advice or specifics on how to configure this. Does the SIP trunk require a CUBE, or is a CUBE only required for the IPsec tunnel?
    Any insight is appreciated.
    Thank you.

    You can use Session Trace in RTMT to check who is disconnecting the call and why.

  • Best Practice on installing Workflow Manager on a separate box

    Hi,
    We are installing Workflow Manager 1.0 to work with our SP2013 farm, which is fairly simple.
    However, we anticipate a significant amount of workflow work in the future.
    Could you please share your thoughts/experience/lessons learnt on having Workflow Manager on a separate box, compared to having it on the same box as the SP2013 front-end server, in terms of:
    1. Scalability
    2. Stability 
    3. Ease of Installation/Management  
    etc
    Thanks a lot.
    Dineth

    Hi Dineth,
    Thanks for posting your query,
    Please browse the URLs below for the best practices and step-by-step installation:
    http://msdn.microsoft.com/library/azure/jj730571%28v=azure.10%29.aspx
    http://www.sjoukjezaal.com/blog/2014/05/sharepoint-2013-workflows-part-2-installing-and-configuring-the-workflow-manager/
    I hope this is helpful to you; if so, please mark it as Helpful.
    If this works, please mark it as Answered.
    Regards,
    Dharmendra Singh (MCPD-EA | MCTS)
    Blog : http://sharepoint-community.net/profile/DharmendraSingh

  • Best practices on number of pipelines in a single project/app to do forging

    Hi experts,
    I need a couple of clarifications from you regarding Endeca Guided Search for an enterprise application.
    1) Say, for example, I have a web application iEndecaApp which is created by imitating the JSP reference application. All the necessary Presentation API libraries are present in the WEB-INF/lib folder.
    1.a) Do I need to configure anything else to run the application?
    1.b) I have created the web app in Eclipse. Will I be able to run it from any third-party Tomcat server? If not, where do I have to put the WAR file to run the application successfully?
    2) For the above web app "iEndecaApp" I have created an application named "MyEndecaApp" using the deployment template, so one generic pipeline is created. I need to integrate 5 different sources of data, to be precise:
    i)CAS data
    ii)Database table data
    iii)Txt file data
    iv)Excel file data
    v)XML data.
    2.a) So what is the best practice to integrate all the data? Do I need to create 5 different pipelines (one for each data source), or do I have to integrate all 5 data sources in a single pipeline?
    2.b) If I create 5 different pipelines, should they all reside in the single application "MyEndecaApp", or do I need to create 5 different applications using the deployment template?
    Hope you will reply soon. Waiting for your valuable response.
    Regards,
    Hoque

    Point number 1 is very much possible, i.e. running the JSP reference application from a server of your choice. I haven't tried it myself, but will shed some light on it once I do.
    Point number 2 - you must create 5 record adapters in the same pipeline diagram and then join them with the help of joiner components. The result must be fed to the property mapper.
    So 1 application, 1 pipeline, and all 5 data sources within that one application should be the ideal case.
    Logically, too, since they are all related data, there must be some joining conditions, and you can't ask 5 different MDEX engines to serve you a combined result.
    Hope this helps you.
    <PS: This is to the best of my knowledge>
    Thanks,
    Mohit Makhija

  • Office Web Apps - Best Practice for App Pool Security Account?

    Guys,
    I am finalising my testing of Office Web Apps, and ready to move onto deploying it to my live farm.
    Generally speaking, I put service applications in their own application pool.
    Obviously doing so has an overhead in memory and processing; however, generally speaking, it is best practice from a security perspective when using separate accounts.
    I have to create 3 new service applications in order to deploy Office Web Apps; in my test environment these are using the default SharePoint app pool.
    Should I create one application pool for all my Office Web Apps with a fresh service account, or does it make no odds from a security perspective to run them in the default app pool?
    Cheers,
    Conrad
    Conrad Goodman MCITP SA / MCTS: WSS3.0 + MOSS2007

    I run my OWA under its own service account (spOWA) and use only one app pool. Just remember that if you go this route, "When you create a new application pool, you can specify a security account used by the application pool to be either a predefined Network Service account or a managed account. The account must have db_datareader, db_datawriter, and execute permissions for the content databases and the SharePoint configuration database, and be assigned to the db_owner role for the content databases." (http://technet.microsoft.com/en-us/library/ff431687.aspx)

  • SQL Server Best Practices Architecture UCS and FAS3270

    Hey there,
    We are moving from an EMC SAN and physical servers to a NetApp FAS3270 and a virtual environment on Cisco UCS B200 M3. Traditionally, best practices for SQL Server databases are to separate the following files onto separate LUNs and/or volumes:
    - Database data files
    - Transaction log files
    - TempDB data files
    I have also seen additional separations for:
    - System data files (master, model, msdb, distribution, resource DB, etc.)
    - Indexes
    Depending on the size of the database and the I/O requirements, you can add multiple files per database. The goal is to provide optimal performance. The method of choice is to separate reads and writes (random and sequential activity). If you have 30 disks, is it better to separate them, or is it better to leave the files in one continuous pool? For example:
    - 12 drives RAID 10 (data files)
    - 10 drives RAID 10 (log files)
    - 8 drives RAID 10 (TempDB)
    Please don't get too caught up on the numbers used in the example; focus on whether or not (using the FAS3270) it is better practice to separate or consolidate drives/volumes for SQL Server databases.
    Thanks!

    Hi Michael,
    It's a completely different world with NetApp! As a rule of thumb, you don't need separate spindles for different workloads (like SQL databases and logs) - you just put them into separate flexible volumes, which can share the same aggregate (i.e. a grouping of physical disks).
    For more detailed info about SQL on NetApp, have a look at this doc: http://www.netapp.com/us/system/pdf-reader.aspx?pdfuri=tcm:10-61005-16&m=tr-4003.pdf
    Regards,
    Radek

  • "Installation best practices." Really?

    "Install Final Cut Pro X, Motion 5, or Compressor 4 on a new partition - The partition must be large enough to contain all the files required by the version of Mac OS X you are installing, the applications you install, and enough room for projects and media…"
    As an FCS3 user, if you were to purchase an OS Lion Mac, what would your "Installation best practices" be? It seems the above recommendation is not taking into consideration FCS3's abrupt death, or my desire to continue to use it for a very long time.
    Wouldn't the best practice be to install FCS3 on a separate partition with an OS that you never, ever update?   Also, there doesn't appear to be any value added to FCS with Lion.  That's why I would be inclined to partition FCS3 with Snow Leopard -- but I'm really just guessing after being thrown off a cliff without a parachute.
    Partitioning… does this mean I'll need to restart my computer to use FCS?  What about my other "applications"? Will I be able to run Adobe Creative Suite off the other partition, or is the "best practice" to install a duplicate of every single application I own on the FCS partition?
    Note: This is not to say I'll never embrace FCX. But paying (with time & money) to be a beta tester just isn't gonna happen.  If it's as easy to use as claimed, I'm not falling behind, as has been suggested by some. I'm just taking a pass on the early adopter frustration.

    Okay, but are you not concerned with future OS updates that may render FCS3 useless?  Perhaps our needs are different, but I want and need FCS3 to continue to work in the future.
    That "best practices" link up at the top of this page is there for a reason, and it says "partition."  What it doesn't say is why, and that's really disappointing and concerning.  It's a little late in the game, but I would prefer Apple walk like a man and lay it on the line; the good, the bad, and the ugly.
    I'm glad to hear Lion is working okay for you!

  • Best Practice for UPK implementation

    We will start using the UPK tool in our Oracle E-Business Suite (11.5.10) environment soon.
    We are in the process of configuring the tool and making a standard template for training documents.
    For example: which screen resolution, which font size and color, bubble icon, pointer position, taskbar setting, etc.
    If anyone has a best practice document to share, I would appreciate it.

    Hi,
    Some of the standards will depend on your end-user capabilities/environments, but I have a good standards document we use when developing UPK content for the E-Business Suite which might help.
    Email me at [email protected] and I'll send it over.
    Jon

  • Best Practices on OWB/ODI when using Asynchronous Distributed HotLog Mode

    Hello OWB/ODI:
    I want to get some advice on best practices when implementing OWB/ODI mappings to handle Oracle Asynchronous Distributed HotLog CDC (change data capture), specifically for “updates”.
    Under Asynchronous Distributed HotLog mode, if a record is changed in a given source table, only the column that has been changed is populated in the CDC table with the old and new value, and all other columns with the exception of the keys are populated with NULL values.
    In order to process this update with an OWB or ODI mapping, I need to compare the old value (UO) against the new value (UN) in the CDC table. If the old and the new value are NOT the same, then this is the updated column. If both the old and the new value are NULL, then this column was not updated.
    Before I apply a row update to my destination table, I need to figure out the current value of those columns that have not been changed, and replace the NULL values with their current values. Otherwise, my row update will overwrite with NULLs those columns whose values have not changed. This is where I am looking for advice on best practices. Here are the 2 possible solutions I can come up with, unless you have a better suggestion on how to handle “updates”:
    About My Environment: My destination table(s) are part of a dimensional DW database. My only access to the source database is via Asynchronous Distributed HotLog mode. To build the data warehouse, I will create initial mappings in OWB or ODI that will replicate the source tables into staging tables. Then, I will create another set of mappings to transform and load the data from the staging tables into the dimension tables.
    Solution #1: Use the staging tables as lookup tables when working with “updates”:
    1.     Create an exact copy of the source tables into a staging environment. This is going to be done with the initial mappings.
    2.     Once the initial DW database is built, keep the staging tables.
    3.     Create mappings to maintain the staging tables using as source the CDC tables.
    4.     The staging tables will always be in sync with the source tables.
    5.     In the dimension load mapping, “join” the staging tables, and identify “inserts”, “updates”, and “deletes”.
    6.     For “updates”, use the staging tables as lookup tables to get the current value of the column(s) that have not been changed.
    7.     Apply the updates in the dimension tables.
    Solution #2: Use the dimension tables as lookup tables when working with “updates”:
    1.     Delete the content of the staging tables once the initial datawarehouse database has been built.
    2.     Use the empty staging tables as a place to process the CDC records
    3.     Create mappings to insert CDC records into the staging tables.
    4.     The staging tables will only contain CDC records (i.e. new records, updated records, and deleted records)
    5.     In the dimension load mapping, “outer join” the staging tables, and identify “inserts”, “updates”, and “deletes”.
    6.     For “updates”, use the dimension tables as lookup tables to get the current value of the column(s) that have not been changed.
    7.     Apply the updates in the dimension tables.
    Solution #1 uses staging tables as lookup tables. It requires extra space to store copies of source tables in a staging environment, and the dimension load mappings may take longer to run because the staging tables may contain many records that may never change.
    Solution #2 uses the dimension tables as both the lookup tables as well as the destination tables for the “updates”. Notice that the dimension tables will be updated with the “updates” AFTER they are used as lookup tables.
    Is there any other approach that you would suggest? Do you see any other advantages or disadvantages to either of the above solutions?
    Any comments will be appreciated.
    Thanks.
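    To make the per-column decision above concrete, here is a minimal sketch of the resolution logic, written in Java purely for illustration (the class and method names are hypothetical; in OWB or ODI this would typically end up as per-column expressions against the lookup table):

        // Hedged sketch of the per-column rule described above, not OWB/ODI code:
        // apply the new value only if the column actually changed; otherwise keep
        // the value currently held by the lookup table (the staging table in
        // Solution #1, or the dimension table in Solution #2).
        public final class CdcColumnResolver {

            public static Object resolve(Object oldValue, Object newValue, Object currentValue) {
                boolean changed;
                if (oldValue == null && newValue == null) {
                    changed = false;                       // column not touched by this update
                } else if (oldValue == null || newValue == null) {
                    changed = true;                        // changed to or from NULL
                } else {
                    changed = !oldValue.equals(newValue);  // both present: compare values
                }
                return changed ? newValue : currentValue;  // never overwrite with an untouched NULL
            }
        }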

    hi,
    Can you please tell me how to make the JDBC call? I tried it as:
    1. TopicConnectionFactory tc_fact = AQjmsFactory.getTopicConnectionFactory(host, SID, Integer.parseInt(port), "jdbc:oracle:thin");
    and
    2. TopicConnectionFactory tc_fact = AQjmsFactory.getTopicConnectionFactory(host, SID, Integer.parseInt(port), "thin");
    -as given in http://www.acs.ilstu.edu/docs/oracle/server.101/b10785/jm_opers.htm#CIHJHHAD
    The 1st one is giving the error:
    Caused by: oracle.jms.AQjmsException: JMS-135: Driver jdbc:oracle:thin not supported
    at oracle.jms.AQjmsError.throwEx(AQjmsError.java:330)
    at oracle.jms.AQjmsTopicConnectionFactory.<init>(AQjmsTopicConnectionFactory.java:96)
    at oracle.jms.AQjmsFactory.getTopicConnectionFactory(AQjmsFactory.java:240)
    at com.ivy.jms.JMSTopicDequeueHandler.init(JMSTopicDequeueHandler.java:57)
    The 2nd one errors out with:
    oracle.jms.AQjmsException: JMS-225: Invalid JDBC driver - OCI driver must be used for this operation
    at oracle.jms.AQjmsError.throwEx(AQjmsError.java:288)
    at oracle.jms.AQjmsConsumer.dequeue(AQjmsConsumer.java:1307)
    at oracle.jms.AQjmsConsumer.receiveFromAQ(AQjmsConsumer.java:1028)
    at oracle.jms.AQjmsConsumer.receiveFromAQ(AQjmsConsumer.java:951)
    at oracle.jms.AQjmsConsumer.receiveFromAQ(AQjmsConsumer.java:929)
    at oracle.jms.AQjmsConsumer.receive(AQjmsConsumer.java:781)
    at com.ivy.jms.JMSTopicDequeueHandler.receiveMessages(JMSTopicDequeueHandler.java:115)
    at com.ivy.jms.JMSManager.run(JMSManager.java:90)
    at java.lang.Thread.run(Thread.java:619)
    Is anything else required beyond this? Please help.
    Oracle: 10g R4
    Linux environment, and Java is trying to call AQjmsFactory.getTopicConnectionFactory(...). The Java machine is different from the database machine, and no Oracle client is to be installed on the Java machine.
    The same code works fine when I use oci8 instead of the thin driver and run it on the DB machine.
    ravi

  • Request info on Archive log mode Best Practices

    Hi,
    Could anyone, from their personal experience, share with me the best practices for maintaining archiving on any version of Oracle? Please tell me:
    1) Whether to place archive logs and redo log files on the same disks.
    2) How many LGWR processes to use.
    3) Checkpoint frequency.
    4) How to maintain the speed of a server running in archivelog mode.
    5) Errors to look out for.
    Thanks,

    1. Use separate mount point for archive logs like /archv
    2. Start with 1 and check the performance.
    3. This depends on the redo log file size. Size your redo log files so that at most 5-8 log switches happen per hour; try to keep it to fewer than 5 log switches per hour.
    4. Check the redo log file size.
    5. Check the archive log mount point space allocation. Take backups of the archive logs with RMAN and delete the backed-up archive logs from the archive destination.
    Regards
    Asif Kabir

  • Integrate Best Practice in my application

    Hi.
    I have developed a Fusion application for my organization. It is based on pages and we don't have bounded task flows. I want to create bounded task flows with page fragments as a best practice, but without modifying the home page, which has a menu bar that dynamically loads all the options from the database. When I click on an option from the menu bar I want to call a bounded task flow. For example, I have a Security option, and inside it I have sub-options.
    The problem is that the home page is in the unbounded task flow, and I can't call a bounded task flow with page fragments from there.
    Thank you.

    OK, I got it, but I have one problem. I have a commandNavigationItem (global link), and if I click it I see the corresponding task flow; but if I click on the menu items from the menu bar, I go to a page, and then when I click on the commandNavigationItem I don't see the task flow - it stays on the page.
    Edited by: Miguel Angel on 06/11/2012 03:57 PM
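    For reference, the usual way to consume a bounded task flow built from page fragments is to embed it in a page as a region; a dynamic region can then be swapped from a menu action through a backing bean along these lines. This is only a hedged sketch - the bean, flow document paths, and method names are illustrative assumptions, not the poster's code.

        import java.io.Serializable;
        import oracle.adf.controller.TaskFlowId;

        // Hedged sketch of a dynamic-region backing bean. A bounded task flow with page
        // fragments cannot be navigated to directly from the unbounded task flow; instead
        // it is embedded in the page as an af:region whose dynamic task flow binding
        // points at this bean.
        public class DynamicRegionBean implements Serializable {

            // Hypothetical default flow; replace with a real task flow document and id.
            private String taskFlowId = "/WEB-INF/flows/security-flow.xml#security-flow";

            /** Referenced by the dynamic task flow binding in the page definition. */
            public TaskFlowId getDynamicTaskFlowId() {
                return TaskFlowId.parse(taskFlowId);
            }

            /** Called from a menu item's action listener to swap the region content. */
            public void showFlow(String documentAndFlowId) {
                this.taskFlowId = documentAndFlowId;
            }
        }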

  • Best Practice for Enterprise Application Integration

    I would like to integrate a few corporate systems by using Oracle Fusion Middleware. I suppose the integrated process runs in synchronous mode such that it also supports two-phase commit.
    In BPEL Process Manager, there is a tool called "WSIF" which seems relevant to my requirement. I would like to know which tools would be best for my integration project, and any suggestions on implementation.
    Thanks in advance,
    Samuel Wai

    This has been answered repeatedly. WL allows you to cache JNDI context objects, EJB homes and remotes without any problems. (EJB remote interfaces must only be used by one thread at a time, but that requirement is provided by the EJB spec itself.)
    Peace,
    Cameron Purdy
    Tangosol, Inc.
    http://www.tangosol.com
    +1.617.623.5782
    WebLogic Consulting Available
    "Geordie" <[email protected]> wrote in message news:3af9579f$[email protected]..
    >
    > I'm wondering what the best practice is for Servlet EJB integration in terms of
    > caching the home and remote objects. My understanding is that the Home object
    > is threadsafe and could therefore be cached as an attribute of the Servlet. This
    > would remove the need for a JNDI lookup for each request. Similarly caching the
    > ProxyObject would yield further savings. However, I have noticed that most examples
    > don't use either of these practices. Why not?
    >
    > Thanks in advance,
    > Geordie
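    As a hedged illustration of the caching pattern discussed in that exchange (EJB 2.x style; the JNDI name and the Order/OrderHome interfaces are hypothetical, not from the thread): the home is looked up and narrowed once in the servlet's init() and cached, while remote references are still obtained per request, since a remote must not be shared across threads.

        import javax.ejb.CreateException;
        import javax.naming.InitialContext;
        import javax.naming.NamingException;
        import javax.rmi.PortableRemoteObject;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;

        // Hedged sketch of caching an EJB home in a servlet (EJB 2.x style).
        public class OrderServlet extends HttpServlet {

            /** Hypothetical EJB 2.x interfaces, shown only so the sketch is self-contained. */
            public interface Order extends javax.ejb.EJBObject { }
            public interface OrderHome extends javax.ejb.EJBHome {
                Order create() throws CreateException, java.rmi.RemoteException;
            }

            private OrderHome orderHome;   // the home is thread-safe, so caching it is fine

            @Override
            public void init() throws ServletException {
                try {
                    // One JNDI lookup + narrow for the lifetime of the servlet.
                    InitialContext ctx = new InitialContext();
                    Object ref = ctx.lookup("java:comp/env/ejb/OrderHome");
                    orderHome = (OrderHome) PortableRemoteObject.narrow(ref, OrderHome.class);
                } catch (NamingException e) {
                    throw new ServletException("EJB home lookup failed", e);
                }
            }

            protected Order newOrder() throws CreateException, java.rmi.RemoteException {
                // One remote per request/thread; only the home lookup is amortised.
                return orderHome.create();
            }
        }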
              

  • SAP to Non-SAP Integration best Practices

    Hi Folks,
    Recently I demonstrated to a few of my managers the integration of our SAP ISU with a 3rd-party MDUS system via SAP PI. A question which was repeatedly asked is 'Why SAP PI?' Isn't there any other way to do it? They did mention BAPIs and doing things directly in ABAP, but I couldn't really answer how to weigh one against the other in this particular scenario.
    I do know that there are standard ES bundles for achieving integration with 3rd-party systems via SAP PI, and we can do the interface and message mappings, but:
    Is it possible to achieve this integration with the 3rd-party MDUS system without using PI?
    The 3rd-party MDUS can only integrate via its web services, so how would those get called?
    What is the trade-off in terms of performance and development cost?
    I am looking for best practices, recommendations, trade-offs and possibilities. Your input is very much appreciated.
    Regards,
    Adil Khalil

    Hi Adil,
    The below blog might be useful
    Consuming Services with ABAP
    regards,
    Harish
