Best practice for two differing encryption types

I'm using an Aironet 1232G and I want to use WPA2-PSK. I have the XP clients connected appropriately (via WPA2-AES CCMP), but I also have a Ricoh Digital Sender (copier) that only supports WEP. What is the best security practice in this scenario? Can I enable two SSIDs, one with WPA2-PSK and the other with WEP? If so, do I need to establish a VLAN on the AP? Any help is appreciated.

Hi Scott,
your thoughts are right.
You have to establish two separate VLANs on the AP. In the Encryption Manager you set up the VLAN-to-encryption-type association.
Then you map the VLANs to the SSIDs in the SSID Manager.
If you have a WDS running, set up the server groups in the WDS section of Wireless Services.
Do not forget to change the switch port the AP is connected to into trunking mode.
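Roughly, the CLI side of that looks like this (the VLAN IDs, SSID names, passphrase and WEP key are placeholders; on older IOS releases the key-management line may be just "authentication key-management wpa", without "version 2"):
dot11 ssid SECURE-WPA2
   vlan 10
   authentication open
   authentication key-management wpa version 2
   wpa-psk ascii YourStrongPassphrase
!
dot11 ssid COPIER-WEP
   vlan 20
   authentication open
!
interface Dot11Radio0
 encryption vlan 10 mode ciphers aes-ccm
 encryption vlan 20 mode wep mandatory
 encryption vlan 20 key 1 size 128bit 0 11223344556677889900AABBCC transmit-key
 ssid SECURE-WPA2
 ssid COPIER-WEP
!
interface Dot11Radio0.10
 encapsulation dot1Q 10
 bridge-group 10
!
interface Dot11Radio0.20
 encapsulation dot1Q 20
 bridge-group 20
!
! plus matching FastEthernet0.10 / FastEthernet0.20 subinterfaces on the wired side,
! and "switchport mode trunk" on the switch port the AP plugs into.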
I hope that helps.
Best regards,
Frank
P.S. Please rate helpful posts.

Similar Messages

  • Best practice for presenting different storage vendors arrays..?

    Hi, can't seem to find a specific answer in writing on this.
    Typical FlexPod already deployed, with different vsans on different n5ks.
    Uplinks from UCS / NetApp are FCoE.
    So I now want to present some normal FC (not FCoE; Nexsan) storage to the same blades on the UCS system via the N5Ks.
    My main question is: can I use the same vsans / vHBAs as for the NetApp storage, or should you create different vsans and vHBAs for different storage vendors?
    If multiple vsans are recommended, is trunking multiple vsans over the FCoE uplinks from UCS fine? I'm sure it is, but I just want some experience on it.
    Thanks!

    Thanks guys for the input, interesting to see there is a difference of opinion on this topic. Makes me feel happier that I'm not being too stupid and that nobody's posted back a link to a best-practice guide on the subject that I've blatantly missed! (Still time for that, no doubt!)
    Like I've said, I haven't found anything specific in writing on this, but keeping vendors on separate vsans / HBAs feels like the best way to do it anyway, with only a little extra config in the grand scheme of things.
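    For reference, a rough sketch of the N5k side with a separate vsan for the Nexsan (the vsan ID and interface numbers below are invented; the UCS side, i.e. a new VSAN plus an extra vHBA in the service profile, is done in UCSM):
    vsan database
      vsan 200 name NEXSAN
      ! native FC port(s) going to the Nexsan controllers
      vsan 200 interface fc2/3
    ! let the new vsan ride the existing FCoE trunk towards the UCS fabric interconnect
    interface vfc101
      switchport trunk allowed vsan add 200
    ! then zone the new vHBA pWWNs to the Nexsan targets in vsan 200 as usual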

  • "Best Practices" for using different Authentication Schemes ?

    Hi
    We are using different authentication schemes in different environments (Dev/QA/Prod). Changing the authentication scheme between the environments is currently a manual step during the installation. I am wondering if there are better "best practices" to follow, where the scheme is set programmatically as part of the build/load process for a specific environment, or any other ideas.
    We refrained from merging the authentication schemes (which is possible) for the following reasons:
    - the authentication code becomes unnecessarily complex
    - some functions required in some environments are not available in all environments (LDAP integration through centrally predefined APIs), requiring dynamic execution
    Any suggestions / experience / recommendation to share are appreciated.
    Regards,
    - Thomas
    [On Apex 4.1.0]

    t-o-b wrote:
    Thanks Vikram ... I stumbled over this post; I was more interested in what the workarounds / best practices are, given these restrictions.
    So I take it that:
    * load & change; or
    * maintain multiple exports
    seem to be the only viable options
    ... in addition to the one referred to in my questions.
    Best,
    - Thomas
    Thomas,
    It's up to you really and depends on many criteria (I think it's more a question of release process and version control).
    I haven't come across a similar scenario before, but I would maintain multiple exports so that the installation can be automated (no manual intervention required).
    Once the API is published (God knows when it will be) you can just maintain one export with an extra script to call the API.
    I guess you can do the same thing with the load & change approach, but I would recommend avoiding manual intervention.
    Cheers,
    Vikram

  • Best practice for using different BPEL domains

    Hi BPEL community,
    I'm interested in your experience of creating different BPEL domains for grouping your BPEL processes in an enterprise SOA way.
    Background is that we have different projects with many BPEL processes and want to find an ideal grouping for them.
    Some thoughts are:
    - group by project
    - group by business domain
    - group by characteristics and requirements (long-running, short-running, ...)
    Please post your recommendations and experiences!
    Best regards,
    Harald

    Publish/subscribe, right?
    Lots of subscribers, big messages == lots of network traffic.
    It's a wide-open question, no?

  • Best Practice for Conversion Workflow

    Hello,
    I'm converting video files from our "home grown" virtual media reserve to iTunes U. Some of the files are in RM format, some are already compressed .mov's (not H.264) and some I have the original DV files for.
    Anyone out there have a best practice for converting these file types for posting to iTunes U? I have Final Cut Studio (Compressor), QT Pro and Squeeze available to me.
    Any experience you have with this would be helpful.
    Thanks,
    Jeana

    For converting old files to podcast-compatible video, and based on the machine you have, consider the Elgato Turbo.264. It is a fairly priced "co-processor" for video conversion, consisting of an application and a small USB device with an encoder chip in it. In my experience, it is the fastest way to create podcast video files. The amount of time that you will save will pay for the device quickly (about $100). Plus it does batch conversion of any video that your system currently plays through QuickTime. It has all the necessary presets and you can create your own. It has a few minor limitations, such as not supporting (at this moment) enhanced podcasting features such as chapter markers and closed captions, but since you have old files for conversion, that won't matter.
    For creating new content, the workflow varies a lot. Since you mention MP3s, I guess you are also interested in audio files. I would stick with GarageBand, especially if you are a beginner plus it supports enhanced podcasts.
    In any case the most important goal is to have the simplest and fastest way to go from recording to publication. The less editing the better. To attain that, the best methods will require the largest investments. For example, for video production the best way is to produce the content live, so that when you finish recording it is only a matter of encoding and publishing. That will require the use of a video switcher that can ingest at least one video camera and a computer output to properly capture presentation material. That's the minimum. There are several devices that can do this for you. Some are disguised PCs and some connect to a PC for tapeless recording. You can check out the TriCaster, which I like but wish were a Mac and not a Windows XP PC. Other routes may include video mixers from manufacturers like Edirol, Panasonic or Sony connected to a VTR or directly to a Mac for direct-to-disc capture. If you look at some of the content available in iTunes U, you will see what I explain here. This workflow requires preparation and sufficient live support, but you will have your material ready for delivery almost immediately after the recorded event. No editing required. Finally, the most intensive workflow is to record everything separately and edit it later, which is extremely time consuming.

  • SAP Best Practice for Document Type./Item category/Acc assignment cat.

    What is the best practice for document type and item category?
    I want to use NB with item categories B and K (blanket PO), D (service) and T (text).
    Does SAP recommend using FO only for blanket purchase orders?
    We want to use service contracts (with / without service entry sheet) for all our services.
    We want to buy assets for our office equipment.
    Which is the better one to use, NB or FO?
    Please give me any OSS notes or references for this.
    Thanks
    Nick

    Thank you very much for your response. 
    I hope I can provide some clarity on how the accounting needs to be handled per FERC regulations. The G/L balance on the utility that is selling the assets will be in the following accounts (standard accounts across all FERC-regulated utilities):
    101 - Acquisition Value for the assets
    108 - Accumulated Depreciation Value for the assets
    For example, there is a debit of $60,000,000 in FERC Account 101 and a credit of $30,000,000 in FERC Account 108. When the purchase occurs, the net book value of the assets will be on our G/L in FERC Account 102. Once we have FERC approval to acquire the plant assets, we will need to enter the acquisition value and associated accumulated depreciation onto our G/L to FERC Account 101 and FERC Account 108 respectively, with an offset to FERC Account 102.
    The method that I came up with is to purchase the NBV of the assets to a clearing account. I then set up account assignments that will track the acquisition value and respective accumulated depreciation for each asset that is being purchased. I load the respective asset values using t-code AS91 and then make an entry to the two respective accounts with the offset against the clearing account using t-code OASV. Once my company receives FERC approval, I will transfer the assets to new assets that have the account assignments for FERC Account 101 and FERC Account 108 using t-code ABUMN or FB01.
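    To make the arithmetic concrete (simplified; the actual postings depend on your chart of accounts): the OASV entry above works out to a debit of $60,000,000 to FERC Account 101 and credits of $30,000,000 to FERC Account 108 and $30,000,000 to the clearing account, so the $30,000,000 net book value booked at purchase is replaced by the gross acquisition value and its accumulated depreciation.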

  • Best practice for phone number column data type

    Hi,
    I hope I have posted this in the right forum; if not, just notify me and I will move it to the correct area.
    What would be considered best practice for storing a phone number?
    NUMBER or VARCHAR2?
    Ben

    Well, I was thinking that NUMBER would have the following disadvantages:
    1. Users entering phone numbers into a form may be tempted to pre-format them with spaces, thereby throwing up an error.
    2. Mobile phone numbers may have leading zeros.
    3. Calculations are not carried out on phone numbers.
    I was leaning towards a VARCHAR2 type.
    Ben
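    For what it's worth, a minimal sketch of the VARCHAR2 approach (the table and constraint names are invented for illustration); a check constraint or application-side validation covers the formatting concern, and leading zeros survive:
    CREATE TABLE contacts (
      contact_id    NUMBER PRIMARY KEY,
      phone_number  VARCHAR2(20)
        CONSTRAINT phone_number_format_chk
          CHECK (REGEXP_LIKE(phone_number, '^\+?[0-9 ()-]+$'))
    );
    -- The leading zero and the space are kept; a NUMBER column would silently drop them.
    INSERT INTO contacts (contact_id, phone_number) VALUES (1, '07700 900123');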

  • "Best practice" for components calling components on different panels.

    I'm very new to Swing. I have been learning from tutorials, but these are always relatively simple interfaces, in which every component and container is initialised and added in the constructor of a main JFrame (extension) object.
    I would assume that more complex, real-world examples would have JPanels initialise themselves. For example, I am working on a project in which the JFrame holds multiple JPanels. One of these Panels holds a group of JToggleButtons (grouped in a ButtonGroup). The action event for each button involves calling the repaint method of one of the other Panels.
    Obviously, if you initialise everything in the JFrame, you can simply have the ActionListener refer to the other JPanel directly, by making the ActionListener a nested class within the JFrame class. However, I would like the JPanels to initialise their own components, including setting the button actions, by using an extension of class JPanel which includes the ActionListeners as nested classes. Therefore the ActionListener has no direct access to JPanel it needs to repaint.
    What, then, is considered "best practice" for allowing these components to interact (not simply in this situation, but more generally)? Should I pass a reference to the JPanel that needs to be repainted to the JPanel that contains the ActionListeners? Should I notify the main JFrame that the Action event has fired, and then have that call "repaint"? Or is there a more common or more correct way of doing this?
    Similarly, one of the JPanels needs to use a field belonging to the JFrame that holds it. Should I pass a reference to this object to the JPanel, or should I have the JPanel use "getParent()", or some other method?
    I realise there are no concrete answers to this query, but I am wondering whether there are accepted practices for achieving this. My instinct is to simply pass a JPanel reference to the JPanel that needs to call repaint, but I am unsure how extensible this would be, how tightly coupled these classes would become.
    Any advice anybody could give me would be much appreciated. Sorry the question is so long-winded. :)

    Hello,
    nice to get feedback.
    I've been looking at a few resources on this issue from my last post. In my application I have been using the Observer and Observable classes to implement the MVC pattern suggested by T.PD.(...)
    Two issues (not fatal, but annoying) with this are:
    -Observable is a class, not an interface; since most of my Observers already extend JPanel (or some such), I have had to create inner classes.
    -If an Observer is observing multiple Observables, it will have to determine which Observable called its update() method (by using reference equality or class comparison or whatever). Again, a very minor issue, but something to keep in mind.
    I don't deem those issues minor. The second one in particular is rather annoying in terms of maintenance ("Err, remind me, which widget is calling this update() method?").
    In addition to that, Observable/Observer are legacy, non-generified classes that incur a loosely typed approach (the subject and context arguments to the update(Observable subject, Object context) method give hardly any info in themselves, and they generally have to be cast to provide app-specific information).
    Note that the "notification model" for AWT and Swing widgets is not Observer/Observable, but merely EventListener. Although we can only guess what reasons made them develop a specific notification model, I deem it essentially stems from the issues above.
    The contrasting approaches are discussed in this article by Bill Venners: The Event Generator Idiom (http://www.artima.com/designtechniques/eventgenP.html).
    N.B.: this article is from a previous-millennium series of "Design Techniques" articles that I found very useful when I learned OO design (GUI or not).
    One last nail against the Observer/Observable model: these are general classes that can be used regardless of the context (GUI/non-GUI code), so this makes it easier to forget about Swing threading rules when using them (essentially: is the update method called in the EDT or not).
    If anybody has any information on the performance or efficiency of using Observable/Observer
    I would be very surprised if this had any performance impact. If it had, that would mean that you have either:
    - a lot of widgets that are listening to one another (and then the Mediator pattern is almost a must to structure such entangled dependencies). And even then I don't think there could be any impact below a few thousand widgets.
    - expensive or long-running computation in the update methods. That's unrelated to the notification model itself.
    - a lot of non-GUI components that use the Observer/Observable to communicate among themselves - all the more risk then, to have a GUI update() called outside the EDT, see remark above.
    (or whether there are inbuilt equivalents for Swing components)
    See the discussion above.
    As far as your remark 2 goes (if one observer observes more than one subject, the update() method contains branching logic): this also occurs with the event delegation model indeed; for example, it is quite common that people complain that their actionPerformed() method becomes unwieldy when the same class listens to several JButtons.
    The usual advice for this is: use anonymous listeners, each of which handles the event from only one source (and is generally defined very close in code to that source), and which simply translates the "generic" event notification method into a specific method call on a Controller or Mediator.
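    For instance, a minimal sketch of that idea (the Controller interface and its method names are invented for illustration): each JButton gets its own anonymous listener that does nothing but forward to a specific controller method, so there is no branching on the event source:
    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import javax.swing.JButton;
    import javax.swing.JPanel;

    public class ControlPanel extends JPanel {

        /** The mediator/controller this panel talks to; wired up by whoever builds the frame. */
        public interface Controller {
            void printRequested();
            void previewRequested();
        }

        public ControlPanel(final Controller controller) {
            JButton print = new JButton("Print");
            print.addActionListener(new ActionListener() {
                public void actionPerformed(ActionEvent e) {
                    // one source, one specific call: no branching on the event source
                    controller.printRequested();
                }
            });

            JButton preview = new JButton("Preview");
            preview.addActionListener(new ActionListener() {
                public void actionPerformed(ActionEvent e) {
                    controller.previewRequested();
                }
            });

            add(print);
            add(preview);
        }
    }
    The panel never references the other JPanels directly; the JFrame (or a small mediator object) implements Controller and decides which panel to repaint.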
    Best regards.
    J.
    Edited by: jduprez on May 9, 2011 10:10 AM

  • Best practice for integrating a 3 point metro-e in to our network.

    Hello,
    We have just started to integrate a new 3-point metro-E WAN connection to our main school office. We are moving from point-to-point T1s to 10 Mb metro-E. At the main office we have 50 Mb going out to 3 other sites at 10 Mb each. For two of the remote sites we have purchased new routers, which should be straightforward configurations. We are having an issue connecting the main office with the 3rd site.
    At the main office we have a Catalyst 4006, and at the 3rd site we are trying to connect to a Catalyst 4503.
    I have attached configurations from both the main office and the 3rd remote site as well as a basic diagram of how everything physically connects. These configurations are not working; we feel that it is a gateway-type problem, but have reached no great solutions. We have tried posting to a different forum, but so far have been unable to find a solution that helps.
    The problem I am having is on the remote side. I can reach the remote catalyst from the main site, but I cannot reach the devices on the other side of the remote catalyst; however, the remote catalyst can see devices on its side as well as devices at the main site.
    We have also tried trunking the ports on both sides and using encapsulation dot1Q, but when we do this the 3rd site is able to pick up a DHCP address from the main office, and we do not feel that is correct. But it works. Is this not creating one large broadcast domain?
    If you have any questions or need further configuration data please let me know.
    The previous connection was a T1 connection through a 2620, but this is not compatible with metro-E, so we are trying to connect directly through the catalysts.
    The other two connection points will be connecting through Cisco routers that are compatible with metro-E, so I don't think I'll have problems with those sites.
    Any and all help is greatly welcome, as this is our 1st metro-E project and we want to make sure we are following best practices for this type of integration.
    Thank you in advance for your help.
    Jeff

    Jeff, from your config it seems your main site and remote site are not adjacent in EIGRP.
    Try adding a network statement for the 171.0 link to form a neighborship between the main and remote site, so that the L3 routing works.
    Once the adjacency is up you should be able to reach the remote-site hosts.
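    Something along these lines on both catalysts (the EIGRP AS number and the 10.171.0.0/24 subnet below are placeholders; use your real AS and the actual subnet of the "171.0" link):
    router eigrp 100
     network 10.171.0.0 0.0.0.255
     no auto-summary
    Then check "show ip eigrp neighbors" on each side to confirm the adjacency comes up.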
    HTH-Cheers,
    Swaroop

  • Best practices for applying sharpening in your workflow

    Recently I have been trying to get a better understanding of some of the best practices for sharpening in a workflow.  I guess I didn't realize it but there are multiple places to apply sharpening.  Which are best?  Are they additive?
    My typical workflow involves capturing an image with a professional digital SLR in either RAW or JPEG or both, importing into Lightroom and exporting to a JPEG file for screen or printing both lab and local. 
    There are three places in this workflow to add sharpening: in the SLR, manually in Lightroom, and during the export to a JPEG file or when printing directly from Lightroom.
    It is my understanding that sharpening is not added to RAW images even if you have added sharpening in your SLR. However, sharpening will be added to JPEGs by the camera.
    Back to my question: is it best to add sharpening in the SLR, manually in Lightroom, or to wait until you export or output to your final JPEG file or printer? And are the effects additive? If I add sharpening in all three places am I probably over-sharpening?

    You should treat the two file types differently. RAW data never has any sharpening applied by the camera, only jpegs. Sharpening is often considered in a workflow where there are three steps (See here for a founding article about this idea).
    I. A capture sharpening step that corrects for the loss of sharp detail due to the Bayer array and the antialias filter and sometimes the lens or diffraction.
    II. A creative sharpening step where certain details in the image are "highlighted" by sharpening (think eyelashes on a model's face), and
    III. output sharpening, where you correct for loss of sharpness due to scaling/resampling or for the properties of the output medium (like blurring due to the way a printing process works, or blurring due to the way an LCD screen lays out its pixels).
    All three of these are implemented in Lightroom. I. and III. are essential and should basically always be performed; II. is up to your creative spirits. I. is the sharpening you see in the Develop panel. You should zoom in at 1:1 and optimize the parameters. The default parameters are OK but fairly conservative. Usually you can increase the mask value a little so that you're not sharpening noise, and play with the other three sliders. Jeff Schewe gives an overview of a simple strategy for finding optimal parameters here. This is for ACR, but the principle is the same. Most photos will benefit from a little optimization. Don't overdo it, but just correct for the softness at 1:1.
    Step II., as I said, is not essential, but it can be done using the local adjustment brush, or you can go to Photoshop for this. Step III. is however very essential. This is done in the Export panel, the Print panel, or the Web panel. You cannot really preview these things (especially the print-directed sharpening) and it will take a little experimentation to see what you like.
    For jpeg, the sharpening is already done in the camera. You might add a little extra capture sharpening in some cases, or simply lower the sharpening in camera and then have more control in post, but usually it is best to leave it alone. Step II and III, however, are still necessary.

  • Best practice for multiple instances of the same BEX query

    Hi there,
    I'm wondering what's the best way to use multiple instances of the same BEX query. Let me explain what I mean:
    I have a dashboard with different queries feeding different periods of time, such as week to date, month to date and so on; one query for each, since each is based on a user exit.
    For each query I want to show different data in different sections of my dashboard, for example: sales per director, sales per customer group, sales per day, sales per week and the like.
    I tried to connect a simple bar chart via a direct connection, but with no success due to the multiple lines generated by the addition of the sales director, customer group, week number and so on.
    My question is about the way to connect the different queries efficiently in order to show the different data while avoiding multiple useless lines.
    The image above shows the query browser where, for example, for a month-to-date query there will be multiple lines for each week as well as one line for each director. If, for two different components, I want to show data per week and data per director or another representation, what is the best practice:
    Add another instance of the same query with only the week information, and another one with only the director info?
    Should I bind those to the Excel file and use formulas to make the final calculations?
    Will there be performance issues from adding different instances of the same query?
    I have 6 different queries (read: 6 user exits that filter time).
    Depending on the best practice there might be 4 instances of each, for a total of 24 instances in the query browser.
    I hope my question is clear enough; if not, please do not hesitate to ask and I'll clarify as much as possible.
    Regards,
    Steve

    Hi Steve,
    You might have been trying to find a solution for a long time; if I understood your question correctly, let me clarify a few points.
    You are accessing a BEx query that is designed with the exits in the background based on the logic, calling all the dimensions and key figures in a single connection, and then trying to map that data in the charts.
    Steve, try to make more connections based upon the logic and split them. Use the same query, but split it by sales per customer group, sales per day and sales per week by making three different connections, and try that. You can merge the prompts from all connections.
    Hope this Helps!!!
    Sorry if i misunderstood your question.
    --SumanT

  • Best Practices for Using Photoshop (and Computing in General)

    I've been seeing some threads that lead me to realize that not everyone knows the best practices for doing Photoshop work on a computer, and for conscientious computing in general. I thought it might be a good idea for those of us with some experience to contribute and discuss best practices for making the Photoshop and computing experience more reliable and enjoyable.
    It'd be great if everyone would contribute their ideas, and especially their personal experience.
    Here are some of my thoughts on data integrity (this shouldn't be the only subject of this thread):
    Consider paying more for good hardware. Computers have almost become commodities, and price shopping abounds, but there are some areas where spending a few dollars more can be beneficial.  For example, the difference in price between a top-of-the-line high performance enterprise class hard drive and the cheapest model around with, say, a 1 TB capacity is less than a hundred bucks!  Disk drives do fail!  They're not all created equal.  What would it cost you in aggravation and time to lose your data?  Imagine it happening at the worst possible time, because that's exactly when failures occur.
    Use an Uninterruptable Power Supply (UPS).  Unexpected power outages are TERRIBLE for both computer software and hardware.  Lost files and burned out hardware are a possibility.  A UPS that will power the computer and monitor can be found at the local high tech store and doesn't cost much.  The modern ones will even communicate with the computer via USB to perform an orderly shutdown if the power failure goes on too long for the batteries to keep going.  Again, how much is it worth to you to have a computer outage and loss of data?
    Work locally, copy files elsewhere.  Photoshop likes to be run on files on the local hard drive(s).  If you are working in an environment where you have networking, rather than opening a file right off the network, then saving it back there, consider copying the file to your local hard drive then working on it there.  This way an unexpected network outage or error won't cause you to lose work.
    Never save over your original files.  You may have a library of original images you have captured with your camera or created.  Sometimes these are in formats that can be re-saved.  If you're going to work on one of those files (e.g., to prepare it for some use, such as printing), and it's a file type that can be overwritten (e.g., JPEG), as soon as you open the file save the document in another location, e.g., in Photoshop .psd format.
    Save your master files in several places.  While you are working in Photoshop, especially if you've done a lot of work on one document, remember to save your work regularly, and you may want to save it in several different places (or copy the file after you have saved it to a backup folder, or save it in a version management system).  Things can go wrong and it's nice to be able to go back to a prior saved version without losing too much work.
    Make Backups.  Back up your computer files, including your Photoshop work, ideally to external media. Windows now ships with a quite good backup system, and external USB drives with surprisingly high capacity (e.g., Western Digital MyBook) are very inexpensive. The external drives aren't that fast, but a backup you've set up to run late at night can finish by morning, and it will be there if/when you have a failure or loss of data. And if you're really concerned with backup integrity, you can unplug an external drive and take it to another location.
    This stuff is kind of "motherhood and apple pie" but it's worth getting the word out I think.
    Your ideas?
    -Noel

    APC Back-UPS XS 1300.  $169.99 at Best Buy.
    Our power outages here are usually only a few seconds; this should give my server about 20 or 25 minutes run-time.
    I'm setting up the PowerChute software now to shut down the computer when 5 minutes of power is left.  The load with the monitor sleeping is 171 watts.
    This has surge protection and other nice features as well.
    -Noel

  • Best practices for setting up projects

    We recently adopted Captivate for our WBT modules. As a former Flash and Director user, I can say it's fast and does some great things. It doesn't play so nicely with others on different occasions, but I'm learning. This forum has been a great source to search and read on specific topics.
    I'm trying to understand best practices for using this product. We've had some problems with file size and with incorporating audio and video into our projects. Fortunately, the forum has helped a lot with that. What I haven't found a lot of information on is good or better ways to set up individual files, use multiple files and publish projects. We've decided to go the route of putting standalones on our intranet. My gut says yuck, but for our situation I have yet to find a better way.
    My question for discussion, then, is: what are some best practices for setting up individual files, using multiple files and publishing projects? Any references or input on this would be appreciated.

    Hi,
    Here are some of my suggestions:
    1) Set up a style guide for all your standard slides, e.g. title slide, index slide, chapter slide, end slide, screen capture, non-screen capture, quizzes, etc. This makes life a lot easier.
    2) Create your own buttons and captions. The standard ones are pretty ordinary, and it's hard to get a slick-looking style happening with the standard captions. They are pretty easy to create (search for "add print button" to learn how to create buttons). There should be instructions on how to customise captions somewhere on this forum. Customising means that you can also use words, symbols and colours unique to your organisation.
    3) Google elearning providers. Most use Captivate and will allow you to open samples or temporarily view selected modules. This will give you great insight into what not to do and some good ideas on what works well.
    4) Timings: using the above research, I got others to complete the sample modules to get a feel for timings. The results were clear: 10 mins good, 15 mins okay, 20 mins kind of okay, 30 mins bad, bad, bad. It's truly better to have a learner complete 2-3 short modules in 30 mins than one big monster. The other benefit is that shorter files equal smaller size.
    5) Narration: it's best to narrate each slide individually (particularly for screen capture slides). You are more likely to get it right on the first take, it's easier to edit, and you don't have to re-record the whole thing if you need to update it in future. To get a slicker effect, use at least two voices, one male and one female, and use slightly different accents.
    6) Screen capture slides: if you are recording filling out long window-based database pages where the compulsory fields are marked (e.g. with a red asterisk), you don't need to show how to fill out every field. It's much easier for the learner (and you) to show how to fill out the first few fields, then fade the screen capture out and fade the end of the form in with the instructions on what to do next. This will reduce your file size. In one of my forms, this meant the removal of about 18 slides!
    7) Auto captions: they are verbose (e.g. 'Click on Print Button' instead of 'Click Print'; 'Select the Print Preview item' instead of 'Select Print Preview'). You have to edit them.
    8) PC training syntax: buttons and hyperlinks should normally be 'click'; selections from drop-down boxes or file lists are normally 'select'. Captivate sometimes mixes them up. Instructions should always be written in the correct order, e.g. Good: Click 'File', select 'Print Preview'; Bad: Select 'Print Preview' from the 'File' menu. Button names, hyperlinks and selections are normally written in bold.
    9) Instruction syntax: instructions should always be written in an active voice, e.g. 'Click Options to open the printer menu' instead of 'When the Options button is clicked on, the printer menu will open'.
    10) Break all modules into chapters. Frame each chapter with a chapter slide. It's also a good idea to show the index page before each chapter slide with a progress indicator (I use an animated arrow to flash next to the name of the next chapter). I use a start button rather than a 'next' button for the start of each chapter. You should always have a module overview with the purpose of the course, and a summary slide which states what was covered and that they have completed the module.
    11) Put a transparent click button somewhere on each slide. Set the properties of the click box to take the learner back to the start of the current chapter by pressing F2. This allows them to jump back to the start of their chapter at any time. You can also do a similar thing on the index pages which jumps them to another chapter.
    12) Recording video capture: best to do it at normal speed and be conscious of where your mouse is. Minimise your clicks. Most people (until they start working with Captivate) are sloppy with their mouse, and you end up with lots of unnecessary slides that you have to delete. The speed will default to how you recorded it, and this will reduce the amount of time you spend on changing timings.
    13) Captions: my rule of thumb is a minimum of 4 seconds, and longer depending on the amount of words. E.g. Click 'Print Preview' is 4 seconds; a paragraph is longer. If you are creating knowledge-based modules, make the timing long (e.g. 2-3 minutes) and put in a next button so that the learner can click when they are ready. Also, narration means the slides will normally be slightly longer.
    14) Be creative: Captivate is desk-bound. There are some learners that just don't respond no matter how interactive Captivate can be. Incorporate non-Captivate and desk-free activities. E.g. as part of our OHS module, there is an activity where the learner has to print off the floor plan and then wander around the floor marking on the map key items such as fire exits, the first aid kit, the broom and mop cupboard, the stationery cupboard, etc.
    Good luck!

  • Best practice for managing a Windows 7 deployment with both 32-bit and 64-bit?

    What is the best practice for creating and organizing deployment shares in MDT for a Windows 7 deployment that has mostly 32-bit computers, but a few 64-bit computers as well? Is it better to create a single deployment share for Windows 7 and include both versions, or is it better to create two separate deployment shares? And what about 32-bit and 64-bit versions of applications?
    I'm currently leaning towards creating two separate deployment shares, just so that I don't have to keep typing (x86) and (x64) for every application I import, as well as making it easier when choosing applications in the Lite Touch installation. But I know each deployment share has the option to create both an x86 and x64 boot image, so that's why I am confused.

    Supporting two task sequences is way easier than supporting two shares. Two shares means two boot media, or maintaining a method of directing the user to one or the other. Everything needs to be imported or configured twice, not to mention doubling storage space. MDT is designed to have multiple task sequences, so why wouldn't you use them?
    Supporting multiple task sequences can be a pain, but not bad once you get a system. Supporting app installs intelligently is a large part of that. We have one folder per app install, with a wrapper VBScript that handles OS detection. If there are separate binaries, they are placed in x86 and x64 subfolders. Everything runs from one folder via the same command, "cscript install.vbs". So: import once, assign once, and forget it. It's the same install package we use for Altiris, and we'll be using a PowerShell version of it when we fully migrate to SCCM.
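    For illustration, a minimal sketch of such a wrapper (the installer name, switches and folder layout here are assumptions, not the poster's actual script):
    Option Explicit
    Dim shell, fso, arch, baseDir, binDir, rc
    Set shell = CreateObject("WScript.Shell")
    Set fso = CreateObject("Scripting.FileSystemObject")
    ' PROCESSOR_ARCHITEW6432 is only defined when a 32-bit host process runs on 64-bit Windows
    arch = shell.ExpandEnvironmentStrings("%PROCESSOR_ARCHITEW6432%")
    If arch = "%PROCESSOR_ARCHITEW6432%" Then
        arch = shell.ExpandEnvironmentStrings("%PROCESSOR_ARCHITECTURE%")
    End If
    ' pick the x64 or x86 subfolder next to this script
    baseDir = fso.GetParentFolderName(WScript.ScriptFullName)
    If LCase(arch) = "amd64" Then
        binDir = baseDir & "\x64"
    Else
        binDir = baseDir & "\x86"
    End If
    ' run the platform-specific installer silently and hand its exit code back to MDT
    rc = shell.Run("""" & binDir & "\setup.exe"" /quiet /norestart", 0, True)
    WScript.Quit rc
    MDT then just runs the same "cscript install.vbs" command for every application, as described above.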
    Others handle x86 and x64 apps separately, and use the MDT app details to select what platform the app is meant for. I've done that, but since we have a template for the VBScript wrapper and it's a standard process, I believe it's easier. YMMV.
    Once you get your apps into MDT, create bundles: core build bundle, core deploy bundle, laptop deploy bundle, etcetera. Now you don't have to assign twenty apps to both task sequences, just one bundle. When you replace one app in the bundle, all TSes are updated automatically. It's kind of the same mentality as Active Directory: users, groups and resources = apps, bundles and task sequences.
    If you have separate build and deploy shares in your lab, great. If not, separate your apps into build and deploy folders in your lab MDT share. Use a selection profile to upload only your deploy side to production. In fact I separate everything (except drivers) into build and deploy folders on my lab server. Don't mix build and deploy, and don't mix lab/QA and production. I also keep a "Retired" folder. When I replace an app, TS, OS, etcetera, I move it to the retired folder and append "RETIRED - " to the front of it so I can instantly spot it if it happens to show up somewhere it shouldn't.
    To me, the biggest "weakness" of MDT is its flexibility. There are literally a dozen different ways to do everything, and there are no fences to keep you on the path. If you don't create some sort of organization for yourself, it's very easy to get lost as things get complicated. Tossing everything into one giant bucket will have you pulling your hair out.

  • Best practice for migrating IDOCs?

    Hi,
    I need to migrate some IDocs to another system for 'historical reference'.
    However, I don't want to move them using the regular setup, as I don't want the inbound processing to be triggered.
    The data that was created in the original system by the processed IDocs will be migrated to the new system using the migration workbench. I only need to migrate the IDocs as-is due to legal requirements.
    What is the best way to do this? I can see three solutions:
    A) Download the IDoc table contents to a local file and upload them in the new system. Quick and dirty approach, but it might also be a bit risky.
    B) Use LSMW. However, I'm not sure whether this is feasible for IDocs.
    C) Use ALE and set up a custom partner profile where inbound processing only writes the IDocs to the database. Send the IDocs from the legacy system to the new system. Using standard functionality in this way seems to me to be the best solution, but I need to make sure that the IDocs, once migrated, get the same status as they had in the old system.
    Any help/input will be appreciated
    Regards
    Karl Johan
    PS. For anyone interested in the business case: within the EU the utility market was deregulated a few years ago, so that any customer can buy electricity from any supplier. When a customer switches supplier this is handled via EDI, in SAP using ALE and IDocs. I'm working on a merger between two utility companies and for legal reasons we need to move the IDocs. Any other data is migrated using the migration workbench for IS-U.

    Hi Daniele
    I am not entirely sure what you are asking. Please could you provide additional information?
    Are you looking for best practice recommendations for governance, for example change transports between DEV, QA and PRD in BPC 7.0?
    What is the best method? Server Manager backup and restore, etc.?
    And
    Best Practice recommendations on how to upgrade to a different version of BPC, for example: Upgrading from BPC 7.0 to 7.5 or 10.0 ?
    Kind Regards
    Daniel
