Cp6 - Something broke! A "lifesaver" best practice reminder!

First, I should point out a very important best practice: save your projects often, and save your Captivate project under a new name each time,
e.g. project_01, project_02, etc. (or some other variation that makes sense to you).
Case in point…
Just a while ago, my Captivate project became corrupted for unknown reasons. I last experienced this problem in Cp4, so this is a disappointing reminder that Cp6 may also be vulnerable to file corruption.
Everything seems fine. All the slides look perfect, but this particular project now fails on both of my computers.
The symptom:
Now, I can no longer run in preview mode (F4 or otherwise). When I select preview, the Captivate preview window hangs for a very long time (slide 1/37 – just a grey screen). If I attempt to click anywhere within the preview window, I see “Preview (Not Responding)”.
If I wait about 5 minutes, the preview will actually start; however, it is very unresponsive… so clicking the NEXT button (I don’t use the slidebar ** name?) requires nearly 5 minutes to view the next slide.
The only way out of this mess is to let Captivate crash.
This repeats when I restart with the same file. In other words, the project is toast.
While really, really annoying... all is not lost! Because I change my project name with every save (I anticipated this unfortunately inevitable moment), I am now going to open an older (by half an hour) version of the project and continue.
- Shawn

Hello Rod,
I am unsure what you are disagreeing with. I fully understand the legacy of Captivate, as I've been there every step of the way, so I am unsure why you brought that up.
I think you are misunderstanding or I have not explained this issue well enough.
This course was developed entirely in Cp6. I have not done a significant amount of testing throughout development... basically building new slides each day.
Anyhow, when I ran a rare test earlier today, I was shocked to notice the course was unresponsive, taking nearly five minutes before the first slide would show. Naturally I thought corruption. But through hiding slides and publishing older saved project revisions, I noticed a disturbing trend in project responsiveness. For instance, clicking the navigation buttons went from instantaneous down to just over five seconds per click as I tested through the published versions.
Curiously, the following remain unaffected (responsive throughout all revisions):
- Custom TOC button (a custom/advanced conditional action - if cpCmndTOCVisible = 0 or 1....)
- Effects
Everything else has become quite unresponsive (or gradually more unresponsive through the revisions):
- BACK/NEXT buttons,
- HELP button (a Jump to Slide x action)
- Timeline events within a slide
If this were a problem with preferences, I should not be able to solve it by publishing earlier versions, or by hiding slides in the most recent revision. Additionally, I should not see the same issue on three different systems.
But because I am willing to try anything, I cleared out the preferences and restarted Captivate, and believe me, I am really disappointed to report that it did not help.
Please understand, I am not necessarily blaming Captivate... as it could very well be something I have done that has simply made responsiveness worse as I added more slides. Figuring out what that is could be a huge challenge. :-(

Similar Messages

  • Slow startup of Java application - best practices for fine-tuning the JVM?

    We are having problems with a Java application which takes a long time to start up.
    In order to understand our question, we had better start with some background info. You will find the question(s) after that.
    Background:
    The setup is as follows:
    In a client-server solution we have a Windows XP fat client running Java 1.6.0_18
    (Sun JRE). The fat client contains a lot of GUI and connects to a server for DB access. Client machines are typically 1 to 3 years old (there are problems even on brand-new machines). They have the client version of the JRE - Standard Edition - installed (Java SE 6 update 10 or better). Pretty much usual stuff so far.
    We have done a lot of profiling on the client code, and yes, we have found parts of our own Java code that need improving; we are all over this. The server side seems OK, with good response times. So far, we haven't found anything like shaky net connections or endless loops in the Java client code or similar.
    Still, things are not good. Starting the application takes a long time. Too long.
    There are many complicating factors, but here is what we think we have observed:
    There is a problem with cold vs. warm starts of the application. Apparently, after a reboot of the client PC, things are really, really bad - it (sometimes) takes up to 30-40 seconds to start the application (until we arrive at the start GUI in our app).
    If we run our application, close it down, and then restart without rebooting, things are a lot better. It then usually takes something like 15-20 seconds, which is "acceptable". Not good, but acceptable.
    Any ideas why?
    I have googled it, and some links seem to suggest that the reason could be the disk cache - i.e. the vital jars are already in the disk cache on the warm start? Does that make any sense? The virus scanner presumably runs in both cases.
    People still think that 15-20 seconds of startup on the warm start is an awfully long time, even though there is a lot - a lot - of functionality in the application.
    We got a suggestion to use IBM's JRE, as it can do some tricks (not sure what) our Sun JRE can't do concerning the warm- and cold-start problem. But that is not an option for us. And no one has come up with any really good suggestions for the Sun JRE so far.
    On the Java Quick Starter (JQS): it improves initial startup time for most Java applets and applications, which might be helpful. People on the internet seem more interested in uninstalling the thing than actually installing it, though. And it seems very proprietary - it doesn't look as though we can give our own jar files to it.
    We could obviously try to "hide" the problem in some way and make it "seem" quicker, since perceived performance can be just as good as actual performance. But it does seem a bad solution. So for the cold start we will probably try reading the jar files, and thereby have them in the disk cache before startup of our application, and see if that helps us.
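    For what it's worth, here is a rough sketch of that jar pre-reading idea (purely an assumption of how it could be done - we haven't tried it, and the "lib" directory name is made up). It simply streams every jar once so the OS disk cache is warm before the real startup begins:

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;

    // Hypothetical warm-up helper: stream every jar in an install directory once
    // so the OS disk cache already holds them when the real startup begins.
    public class JarPreloader {

        public static void preload(File libDir) throws IOException {
            File[] entries = libDir.listFiles();
            if (entries == null) {
                return; // not a directory - nothing to warm up
            }
            byte[] buffer = new byte[64 * 1024];
            for (File entry : entries) {
                if (!entry.getName().endsWith(".jar")) {
                    continue;
                }
                FileInputStream in = new FileInputStream(entry);
                try {
                    while (in.read(buffer) != -1) {
                        // discard the bytes; reading them from disk is the whole point
                    }
                } finally {
                    in.close();
                }
            }
        }

        public static void main(String[] args) throws IOException {
            preload(new File(args.length > 0 ? args[0] : "lib"));
        }
    }

    Whether that actually helps obviously depends on whether the cold-start cost really is disk I/O on the jars, and not, say, the virus scanner re-scanning them.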
    Still, OK, the cold start is the real killer, but the warm start isn't exactly wonderful either.
    People have suggested that we read more on the JVM and performance.
    java.sun.com/javase/technologies/performance.jsp
    java.sun.com/docs/hotspot/gc5.0/gc_tuning_5.html
    the use of JVM flags "-Xms", "-Xmx", etc.
    And here comes the question .. da da ...
    Concerning the various suggested reading material: it is very much appreciated, but we would like to ask people here if it is possible to get more specific pointers to where the gold might be buried.
    I.e. in an ideal world we would have time to read and understand all of these documents in depth. However, in this less-than-ideal world we are also doing a lot of very time-consuming profiling in our own Java code.
    E.g. Java garbage collection is a huge subject, and JVM settings too. Sure, in the end we will probably have to do this all very thoroughly. But for now we are hoping for some heuristics on what other people do when facing a problem like ours..?
    Young generation, large memory pages, garbage collection threads etc. all sound interesting - but what would you start with?
    If you don't have enough info to decide, what kind of profiling would you run, and which JVM settings would you then adjust in your trials?
    In this pressed-for-time scenario, ignorance is not bliss - it makes it hard to pinpoint the JVM parameter or parameters to adjust. So some good pointers from experienced JVM "configurators" will be much appreciated!
    Actually, if we can establish that fine-tuning these parameters is a good idea, it will certainly also be much easier to allocate the time for doing so - reading, experimenting etc. - in our project.
    So, all in all, what kinds of performance improvements can we hope for? 5 out of 20 seconds on the warm start? Or is it 10% nitpicking? What's the ballpark figure for what we can hope to achieve here given our setup? What do you think, based on the above?
    Maybe someone out there has done some fine-tuning of JVM parameters in a similar PC environment, with similar fat clients...? "Fine-tuning such-and-such gave 5 seconds, so start your work with these one or two parameters"?
    Something like that - some best practices? That's what we are hoping for.
    best wishes
    -Simon

    Thanks for the helpful answers from both you and kajbj.
    The app doesn't use shared network drives.
    >
    What are you doing between when main starts to get executed and the UI is displayed?
    >
    Basically, calculating what to show in the UI. Accessing the server - not so much; there are some reads from a cache, but the profiling doesn't indicate that it should be a problem. Sure, I could shift the startup time to some other slot, but so far I haven't found a place where the end user wouldn't be annoyed.
    >
    Caching of something would seem most obvious. Normal VM stuff seems unlikely.
    >
    With profiling I basically find that "everything" takes a lot longer in the cold-start scenario. Some of our local Java methods are going to be rewritten following our review. But what else can be tuned? You guys don't think the Java Quick Starter approach, with more jars in the disk cache, will give something? And how should that be done - what do people do? I.e. for the class loader I read something about:
    1. Bootstrap class loader
    2. Extensions class loader
    3. System class loader
    and am wondering if this has something to do with the cold-start problem?
    The extensions class loader loads the code in the extensions directories (<JAVA_HOME>/lib/ext).
    So, should we move app classes to ext? Put them in one jar file? (We have many.) What's the best practice there?
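    As a quick sanity check on which loaders actually serve our own classes, something like this tiny snippet should print the chain (the class name LoaderChain is of course just a made-up example):

    // Walks the class loader chain for one of your own classes and prints it;
    // the parent of the extensions loader is the bootstrap loader, shown here as null.
    public class LoaderChain {
        public static void main(String[] args) {
            ClassLoader cl = LoaderChain.class.getClassLoader();
            while (cl != null) {
                System.out.println(cl);
                cl = cl.getParent();
            }
            System.out.println("(null = bootstrap class loader)");
        }
    }

    Not sure whether that tells us anything about the cold start, though.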
    Otherwise it seems to me that it must be about fine-tuning the JVM?
    I imagine that it is a question about:
    1. the right heap size
    2. the right garbage collection scheme
    Googling heap size for XP, CHE22 writes:
    "You are right; -Xms1600M works well, but -Xms1700M bombs"
    Is that one best practice, or what?
    On garbage collection, there are numerous posts, and much "masters of the Java black art" IMHO - and according to profiling, GC is not really that much of a problem anyway.
    Still, based on my description I was hoping for a short reply like "try setting these two parameters on your XP box, it worked for me" ...or something like that. With no takers on that one, I fear people are saying that there is nothing to be gained there?
    We read:
    [ -Xmx3800m -Xms3800m
    Configures a large Java heap to take advantage of the large memory system.
    -Xmn2g
    Configures a large heap for the young generation (which can be collected in parallel), again taking advantage of the large memory system. It helps prevent short lived objects from being prematurely promoted to the old generation, where garbage collection is more expensive.
    Unless you have problems with pauses, try granting as much memory as possible to the virtual machine. The default size (64MB) is often too small.
    Setting -Xms and -Xmx to the same value increases predictability by removing the most important sizing decision from the virtual machine. On the other hand, the virtual machine can't compensate if you make a poor choice.
    The -XX:+AggressiveHeap option inspects the machine resources (size of memory and number of processors) and attempts to set various parameters to be optimal for long-running, memory allocation-intensive jobs]
    So is setting -Xms, -Xmx and -XX:+AggressiveHeap best practice? What kind of performance improvement should we expect?
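    Side note: whatever -Xms/-Xmx values we end up trying, I assume we can at least confirm at runtime what the JVM actually got, with something as simple as this (class name and layout are just illustrative):

    // Prints the heap the running JVM was actually granted, to confirm that
    // -Xms/-Xmx (and options like -XX:+AggressiveHeap) took effect as intended.
    public class HeapCheck {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long mb = 1024L * 1024L;
            System.out.println("max heap   : " + rt.maxMemory() / mb + " MB");   // roughly the -Xmx value
            System.out.println("total heap : " + rt.totalMemory() / mb + " MB"); // currently committed, starts near -Xms
            System.out.println("free heap  : " + rt.freeMemory() / mb + " MB");
        }
    }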
    Concerning JIT:
    I read this one
    [the impact of the JIT compiler is obvious on the graph: at startup the time taken is around 500us for the first few values, then quickly drops to 130us, before falling again to 70us, where it stays for 30 minutes,
    for this specific issue, I greatly improved my performances by configuring another VM argument: I set -XX:CompileThreshold=50]
    The size of the cache can be changed with
    -Xmaxjitcodesize
    This sounds like you should do something with JIT args, but reading
    // We disable the JIT during toolkit initialization. This
    // tends to touch lots of classes that aren't needed again
    // later and therefore JITing is counter-productive.
    java.lang.Compiler.disable();
    However, "finding the sweet spots for compilation thresholds has been tricky, so we're still experimenting with the recompilation policy. Work on it continues."
    sounds like there is no such straightforward path; it all depends...
    OK, it's good when
    [Small methods that can be more easily analyzed, optimized, and inlined where necessary (and not inlined where not necessary). Clearly delineated uses of data so that usage patterns and lifetimes are apparent. ]
    but when I read this:
    [The virtual machine is responsible for byte code execution, storage allocation, thread synchronization, etc. Running with the virtual machine are native code libraries that handle input and output through the operating system, especially graphics operations through the window system. Programs that spend significant portions of their time in those native code libraries will not see their performance on HotSpot improved as much as programs that spend most of their time executing byte codes.]
    I have the feeling that we might not be able to improve performance that way?
    Any comments?
    Otherwise I was wondering about
    -XX:CompileThreshold=50 -Xmaxjitcodesize (large, how large?)
    Somehow, we still feel that someone out there must have experienced similar problems? But obviously there is no guarantee that that someone will surf by here!
    In C++ we used to just write everything ourselves. Here it does seem to be a question of the right use of other people's stuff?
    You are kind of hoping for a shortcut, so you don't have to read an endless number of documents, but can find a short document that actually addresses your problem ... well.
    -Simon

  • What do you find to be best practice when it comes to writing AS code to manage a big app?

    Right now I am considering 3 options:
    1) I write all the code in a root component that extends group for example like this:
    <s:Application>
         <s:AppGroup>
                   <s:List />
         </s:AppGroup>
    </s:Application>
    So in it I will write the code to manage the list. OK, but imagine now I have 10 views inside that AppGroup, each having a list which needs to be managed. So here comes my option 2.
    2) I write code in the AppGroup component to manage what's on its level, and for each new level (a view, for example) I create another component like ViewGroup1, ViewGroup2 etc., which extends Group or something else, and I write code in it to manage what's inside of it. This looks like this:
    <s:Application>
         <s:AppGroup>
              <s:ViewGroup1>
                   <s:List />
              </s:ViewGroup1>
              <s:ViewGroup2>
                   <s:List />
              </s:ViewGroup2>
         </s:AppGroup>
    </s:Application>
    So this time the code to manage the views will be in AppGroup, and the code for managing the Lists will be in the ViewGroup1/2 components.
    3) Of course, sometimes a mixed architecture: if, for example, ViewGroup1 is very simple and doesn't have a list but a label or something like that, the code could be written in the AppGroup.
    What do you think of this code structure? Is my logic good, or is there something else considered a best practice at the moment? Thanks!

    Thank you all for the thoughts. Could we please stick to Flex only for now...
    Currently I have a project where I see this structure:
    <Application creationComplete="init();">
         <fx:Script source="MainApp.as" /> - all initialization code is here
         <components /> - many components
    </Application>
    In the creationComplete of the Application, which is in MainApp.as, dataProviders are set and a controller class is instantiated, to which the Application is passed as Object, and everything is manipulated from that controller. As you mentioned, I guess you can always create additional controllers and pass them the Application, or some other components from which they could start controlling, so to speak.
    I am not sure if this structure is good or not, I started comparing it with mine and I ended up here...
    What I see at this point compared to mine is that:
    - in the MainApp.as included in the Application, I get question marks when I type something like "stage" in a function; it needs me to type "this.stage", which I don't like. To me it looks like including is bad, and maybe everything should have started with creationComplete in the Application MXML, importing and instantiating the controller and passing it the Application right away. Is that correct?
    - in the example given above, after MainApp instantiates the controller by passing it the Application, the controller loses all of the nice code hints, since now the Application is an Object... maybe it's wrong for it to be Object? Should it be something else?
    Compare this to my approach, where I separate my logic into an AS Group which is then extended as an MXML Group: all I have to do is declare in AS the instances which I have as IDs in MXML and voila... I can control them and write their logic with all the nice code hints present.
    So basically, at this point you are saying: instead of extending Group in AS every time I want to separate logic, write a controller, right?
    Here is what I summarized for now:
    1) Create a RootController class
    2) Instantiate it in the creationComplete of the Application, passing it the Application (as what type - Object or something else?)
    3) Manage all logic in that controller
    4) If parts of the application are too complex, they can be separated into additional controllers
    5) The RootController can instantiate SubControllers, which can instantiate SubSubControllers
    6) To each controller a component must be passed as a starting point for its logic
    Is this correct? If yes, what about the code hinting compared to my approach?
    It would be very nice if one of you could make a very, very simple app with the model you are talking about, or, if you have an article you took it from, share the link! Thanks!

  • Bundling Best Practices

    Good morning All,
    I know that this has been asked for before, and my apologies for doing so. However, I would like to provide my customer with something that describes best practices when it comes to building and deploying Bundles, and Policies (which can be handled in another posting), especially in relation to the placement of these Bundles - whether they should be at the Device, User, or Workstation level (if an enterprise-wide application or Policy).
    There is a debate within the organization that Policies and Bundles should be deployed out to all geographic locations, instead of just being deployed from the top level down. In this case they have Group Policies which go out to every PC in the organization, and instead of doing that from the Workstation level, they deploy these Policies from each geographic locale. This seems to not be according to best practices, or at least what I have always assumed to be best practices.
    I have seen this from the Documentation site, http://www.novell.com/documentation/zenworks11/ - System Planning, Deployment, and Best Practices Guide, and I have found some others looking to put together a packet for the customer.
    Thank you,
    -DS

    I know that the University of Uppsala has written up some, but they are in Swedish. Anyway, as for your question, I would say that it depends.
    Your one main goal should be management by exception.
    Are policies, bundles etc different based on geography?
    Anders Gustafsson (NKP)
    The Aaland Islands (N60 E20)
    Have an idea for a product enhancement? Please visit:
    http://www.novell.com/rms

  • "Best practice" for components calling components on different panels.

    I'm very new to Swing. I have been learning from tutorials, but these are always relatively simple interfaces, in which every component and container is initialised and added in the constructor of a main JFrame (extension) object.
    I would assume that more complex, real-world examples would have JPanels initialise themselves. For example, I am working on a project in which the JFrame holds multiple JPanels. One of these Panels holds a group of JToggleButtons (grouped in a ButtonGroup). The action event for each button involves calling the repaint method of one of the other Panels.
    Obviously, if you initialise everything in the JFrame, you can simply have the ActionListener refer to the other JPanel directly, by making the ActionListener a nested class within the JFrame class. However, I would like the JPanels to initialise their own components, including setting the button actions, by using an extension of class JPanel which includes the ActionListeners as nested classes. Therefore the ActionListener has no direct access to the JPanel it needs to repaint.
    What, then, is considered "best practice" for allowing these components to interact (not simply in this situation, but more generally)? Should I pass a reference to the JPanel that needs to be repainted to the JPanel that contains the ActionListeners? Should I notify the main JFrame that the Action event has fired, and then have that call "repaint"? Or is there a more common or more correct way of doing this?
    Similarly, one of the JPanels needs to use a field belonging to the JFrame that holds it. Should I pass a reference to this object to the JPanel, or should I have the JPanel use "getParent()", or some other method?
    I realise there are no concrete answers to this query, but I am wondering whether there are accepted practices for achieving this. My instinct is to simply pass a JPanel reference to the JPanel that needs to call repaint, but I am unsure how extensible this would be, how tightly coupled these classes would become.
    Any advice anybody could give me would be much appreciated. Sorry the question is so long-winded. :)
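    To make the first option concrete, here is roughly the kind of thing I have in mind - all class and field names are hypothetical; the button panel is simply handed a reference to the panel it must repaint:

    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import javax.swing.JPanel;
    import javax.swing.JToggleButton;

    // The panel owning the toggle buttons is handed a reference to the panel it
    // needs to repaint, so its nested listener never has to reach "upwards".
    public class ButtonPanel extends JPanel {

        private final JPanel drawingPanel; // the panel whose repaint() we trigger

        public ButtonPanel(JPanel drawingPanel) {
            this.drawingPanel = drawingPanel;
            JToggleButton toggle = new JToggleButton("Mode A");
            toggle.addActionListener(new ActionListener() {
                public void actionPerformed(ActionEvent e) {
                    // ...update whatever state the drawing panel reads, then...
                    ButtonPanel.this.drawingPanel.repaint();
                }
            });
            add(toggle);
        }
    }

    The JFrame would construct the drawing panel first and then pass it in via new ButtonPanel(drawingPanel).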

    Hello,
    nice to get feedback.
    >
    I've been looking at a few resources on this issue since my last post. In my application I have been using the Observer and Observable classes to implement the MVC pattern suggested by T.PD. (...)
    Two issues (not fatal, but annoying) with this are:
    - Observable is a class, not an interface; since most of my Observers already extend JPanel (or some such), I have had to create inner classes.
    - If an Observer is observing multiple Observables, it will have to determine which Observable called its update() method (by using reference equality or class comparison or whatever). Again, a very minor issue, but something to keep in mind.
    >
    I don't deem those issues minor. The second one in particular is rather annoying in terms of maintenance ("Err, remind me, which widget is calling this update() method?").
    In addition to that, Observable/Observer are legacy, non-generified classes that incur a loosely-typed approach (the subject and context arguments to the update(Observable subject, Object context) method give hardly any info in themselves, and they generally have to be cast to provide app-specific information).
    Note that the "notification model" of AWT and Swing widgets is not Observer/Observable, but merely EventListener. Although we can only guess what reasons made them develop a specific notification model, I deem it essentially stems from the issues above.
    The contrasting approaches are discussed in this article from Bill Venners: The Event Generator Idiom (http://www.artima.com/designtechniques/eventgenP.html).
    N.B.: this article is from a previous-millennium series of "Design Techniques" articles that I found very useful when I learned OO design (GUI or not).
    One last nail against the Observer/Observable model: these are general classes that can be used regardless of the context (GUI/non-GUI code), so this makes it easier to forget about Swing threading rules when using them (essentially: is the update method called in the EDT or not).
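    As a reminder of that rule, here is a minimal sketch (names invented) of a callback that may arrive off the EDT; the Swing mutation is re-dispatched with SwingUtilities.invokeLater:

    import javax.swing.JLabel;
    import javax.swing.SwingUtilities;

    // Sketch of an update callback that may be invoked on a non-EDT thread:
    // the Swing mutation is pushed back onto the EDT before touching the component.
    public class StatusUpdater {

        private final JLabel statusLabel;

        public StatusUpdater(JLabel statusLabel) {
            this.statusLabel = statusLabel;
        }

        public void modelChanged(final String message) {
            SwingUtilities.invokeLater(new Runnable() {
                public void run() {
                    statusLabel.setText(message);
                }
            });
        }
    }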
    >
    If anybody has any information on the performance or efficiency of using Observable/Observer
    >
    I would be very surprised if this had any performance impact. If it had, that would mean that you have either:
    - a lot of widgets that are listening to one another (and then the Mediator pattern is almost a must to structure such entangled dependencies). And even then I don't think there could be any impact below a few thousand widgets.
    - expensive or long-running computation in the update methods. That's unrelated to the notification model itself.
    - a lot of non-GUI components that use the Observer/Observable to communicate among themselves - all the more risk then, to have a GUI update() called outside the EDT, see remark above.
    >
    (or whether there are inbuilt equivalents for Swing components)
    >
    See discussion above.
    As far as your remark 2 goes (if one observer observes more than one subject, the update() method contains branching logic): this also occurs with the Event Delegation model indeed; for example, it is quite common that people complain that their actionPerformed() method becomes unwieldy when the same class listens to several JButtons.
    The usual advice for this is: use anonymous listeners, each of which handles the event from only one source (and is generally very close in code to the definition of that source), and which simply translates the "generic" event notification method into a specific method call on a Controller or Mediator.
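    Concretely, that idiom might look something like this (the Controller interface and its method names are invented for the example):

    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import javax.swing.JButton;
    import javax.swing.JPanel;

    // Each button gets its own anonymous listener, declared right next to the
    // button it serves; the listener immediately translates the generic event
    // into a specific call on a controller/mediator.
    public class ToolbarPanel extends JPanel {

        public interface Controller {      // hypothetical mediator interface
            void saveRequested();
            void reloadRequested();
        }

        public ToolbarPanel(final Controller controller) {
            JButton save = new JButton("Save");
            save.addActionListener(new ActionListener() {
                public void actionPerformed(ActionEvent e) {
                    controller.saveRequested();
                }
            });

            JButton reload = new JButton("Reload");
            reload.addActionListener(new ActionListener() {
                public void actionPerformed(ActionEvent e) {
                    controller.reloadRequested();
                }
            });

            add(save);
            add(reload);
        }
    }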
    Best regards.
    J.

  • Best practices for coding in Exits

    Does anyone have any documents on the dos and don'ts in BADIs, Exits and SD routines from the angle of system performance and how to minimise the load? I am not talking about the generic optimisation documents but something specific to SD routines and general BADIs and Exits which fall under SAP's best practices (coding). E.g. one guideline from SAP is that clients should not tamper with the 'x' variables (xvbak, xvbap etc.) in the SD form exits.
    Thanks,
    Madhavan

    Not sure why check is a problem in general; however, it reminds me of another tricky situation.
    For example, there is a user exit in CO-PA:
         call customer-function '001'
               exporting
                    erkrs           = i_erkrs
                    ep_source       = <ls_ce1xxxx>
                    ep_source_bukrs = <gs_ce_bukrs>
                    exit_nr         = i_exit_nr
               importing
                    ep_target         = <ls_ce1xxxx>
                    ep_target_bukrs   = <gs_ce_bukrs>
                    e_bukrs_processed = l_bukrs_processed
                    et_field          = gt_field_user
               tables
                    gt_message_table = gt_message_table
               exceptions
                    others    = 1.
    As you can see, <ls_ce1xxxx> is passed as ep_source and ep_target. If you have several clients but only one in which the user exit should be active, you usually encapsulate your coding as follows:
    if sy-mandt = '100'.
      ep_target = <changed_values>.
    endif.
    But in this case you will learn that in every other client the values of ep_source will be cleared if you do not add a statement like ep_target = ep_source. before the if statement.
    Christian

  • Web services - Best practices

    Hello all,
    I have been working my way through 'Core J2EE Patterns: Best Practices and Design Strategies' in preparation for a web services based project I am responsible for delivering.
    There's just one thing I cannot get my head around. The book identifies a web service broker responsible for dealing with WS requests. However, it states it is at the integration level. If I have a WS-enabled client wanting to communicate with my services, wouldn't the web service broker have to be the first point of contact, almost acting as a Controller?
    Much of this is new to me, so perhaps I am barking up the wrong tree. But any help would be greatly appreciated.
    Kind regards
    Kevin

    Michal
    Can you send me your article or anything you have on web services? I am trying to connect to a .NET server and I'm getting the following error:
    System.Web.Services.Protocols.SoapHeaderException: WSE012: The input was not a valid SOAP message because the following information is missing: action. at Microsoft.Web.Services3.Utilities.AspNetHelper.SetDefaultAddressingProperties(SoapContext context, HttpContext httpContext) at Microsoft.Web.Services3.WseProtocol.CreateRequestSoapContext(SoapEnvelope requestEnvelope) at Microsoft.Web.Services3.WseProtocol.FilterRequest(SoapEnvelope requestEnvelope) at Microsoft.Web.Services3.WseProtocol.RouteRequest(SoapServerMessage message) at System.Web.Services.Protocols.SoapServerProtocol.Initialize() at System.Web.Services.Protocols.ServerProtocol.SetContext(Type type, HttpContext context, HttpRequest request, HttpResponse response) at System.Web.Services.Protocols.ServerProtocolFactory.Create(Type type, HttpContext context, HttpRequest request, HttpResponse response, Boolean& abortProcessing)
    I know it has something to do with Axis but I just don't know enough. Is there a way around it? Do you know if WSE 3.0 requires Axis2?
    Thanks for any info you can send my way.
    Steve

  • Best Practice for Significant Amounts of Data

    This is basically a best-practice/concept question and it spans both Xcelsius & Excel functions:
    I am working on a dashboard for the US Military to report on some basic financial transactions that happen on bases around the globe.  These transactions fall into four categories, so my aggregation is as follows:
    Year,Month,Country,Base,Category (data is Transaction Count and Total Amount)
    This is a rather high level of aggregation, and it takes about 20 million transactions and aggregates them into about 6000 rows of data for a two year period.
    I would like to allow the users to select a Category and a country and see a chart which summarizes transactions for that country ( X-axis for Month, Y-axis Transaction Count or Amount ).  I would like each series on this chart to represent a Base.
    My problem is that 6000 rows still appears to be too many rows for an Xcelsius dashboard to handle.  I have followed the Concatenated Key approach and used SUMIF to populate a matrix with the data for use in the Chart.  This matrix would have Bases for row headings (only those within the selected country) and the Column Headings would be Month.  The data would be COUNT. (I also need the same matrix with Dollar Amounts as the data). 
    In Excel this matrix works fine and seems to be very fast.  The problem is with Xcelsius.  I have imported the Spreadsheet, but have NOT even created the chart yet and Xcelsius is CHOKING (and crashing).  I changed Max Rows to 7000 to accommodate the data.  I placed a simple combo box and a grid on the Canvas - BUT NO CHART yet - and the dashboard takes forever to generate and is REALLY slow to react to a simple change in the Combo Box.
    So, I guess this brings up a few questions:
    1)     Am I doing something wrong and did I miss something that would prevent this problem?
    2)     If this is standard Xcelsius behavior, what are the Best Practices to solve the problem?
    a.     Do I have to create 50 different Data Ranges in order to improve performance (i.e. Each Country-Category would have a separate range)?
    b.     Would it even work if it had that many data ranges in it?
    c.     Do you aggregate it as a crosstab (Months as Column headings) and insert that crosstabbed data into Excel.
    d.     Other ideas that I'm missing?
    FYI:  These dashboards will be exported to PDF and distributed.  They will not be connected to a server or data source.
    Any thoughts or guidance would be appreciated.
    Thanks,
    David

    Hi David,
    I would leave your query
    "Am I doing something wrong and did I miss something that would prevent this problem?"
    to the experts/ gurus out here on this forum.
    From my end, you can follow
    TOP 10 EXCEL TIPS FOR SUCCESS
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/204c3259-edb2-2b10-4a84-a754c9e1aea8
    Please follow the Xcelsius Best Practices at
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/a084a11c-6564-2b10-79ac-cc1eb3f017ac
    In order to reduce the size of xlf and swf files follow
    http://myxcelsius.com/2009/03/18/reduce-the-size-of-your-xlf-and-swf-files/
    Hope this helps to a certain extent.
    Regards
    Nikhil

  • Best Practice to implement row restriction level

    Hi guys,
    We need to implement a security row filter scenario in our reporting system. Following several recommendations already posted in the forum, we have created a security table with the following columns:
    userName  Object Id
    U1             A
    U2             B
    while our fact table is something like this:
    Object Id    Fact A
    A                23
    B                4
    Additionally, we have created a row restriction on the universe based on the following where clause:
    UserName = @Variable('BOUSER')
    If the report only contains objects based on the fact table, the restriction is never applied. This makes sense, as the docs specify that row restrictions are only applied if the table is actually invoked in the SQL statement (the SELECT statement, presumably).
    The question is the following: which is the best practice recommended in this situation? Create a dummy column in the security table, map it into the universe, and include the object in the query?
    Thanks

    Hi,
    This solution also seemed to be the most suitable for us. The problem that we have discovered: when the restriction set is not applied for a given user (the advantage of using a restriction set is the fact that it is not always applied), the query joins the fact table with the security table without applying any where clause based on @variable('USER'). This is not a problem if the security table contains a 1:1 relationship between users and secured objects, but when (as in our case) the relationship is 1:n, the query returns additional, wrong rows.
    For the moment we have discarded the use of restriction sets. Putting a dummy column based on the security table may have undesired effects when the condition is not applied.
    I don't know if anyone has found a way to work around this matter.
    Alfons

  • Best practice for multi-language content in common areas

    I've got a site with some text in the header/footer/nav that needs to be translated between an English and a Spanish site, which use the same design. My intention was to set up all the text as content to facilitate this. However, if I use a standard dialog with the component's path set to a child of the current page node, I would need to re-enter the text on every page. If I use a design dialog, or a standard dialog with the component's path set absolutely, the English and Spanish sites will share the same text. If I use a standard dialog with the component's path set relatively (eg path="../../jcr:content/myPath"), the pages using the component would all need to be at the same level of the hierarchy.
    It appears that the Geometrixx demo doesn't address this situation, and leaves copy in English. Is there a best practice for this scenario?

    I'm finding that something to the effect of <cq:include path="<%= strCommonContentPath + "codeEntry" %>" resourceType ...
    works fine for most components, but not for parsys, or a component containing a parsys. When I attempt that, I get a JS error that says "design.path is null or not an object". Is there a way around this?

  • Best Practice for setting systems up in SMSY

    Good afternoon - I want to cleanup our SMSY information and I am looking for some best practice advice on this. We started with an ERP 6.0 dual-stack system. So I created a logical component Z_ECC under "SAP ERP" --> "SAP ECC Server" and I assigned all of my various instances (Dev, QA, Train, Prod) to this logical component. We then applied Enhancement Package 4 to these systems. I see under logical components there is an entry for "SAP ERP ENHANCE PACKAGE". Now that we are on EhP4, should I create a different logical component for my ERP 6.0 EhP4 systems? I see in logical components under "SAP ERP ENHANCE PACKAGE" there are entries for the different products that can be updated to EhP4, such as "ABAP Technology for ERP EHP4", "Central Applications", ... "Utilities/Waste&Recycl./Telco". If I am supposed to change the logical component to something based on EhP4, which should I choose?
    The reason that this is important is that when I go to Maintenance Optimizer, I need to ensure that my version information is correct so that I am presented with all of the available patches for the parts that I have installed.
    My Solution Manager system is 7.01 SPS 26. The ERP systems are ECC 6.0 EhP4 SPS 7.
    Any assistance is appreciated!
    Regards,
    Blair Towe

    Hello Blair,
    In this case you have to assign the products "EHP 4 for ERP 6" and "SAP ERP 6" to your system in SMSY.
    You will then have 2 entries in SMSY, one under each product; the main instance for EHP 4 for ERP 6 must be Central Applications, and the one for SAP ERP 6 is SAP ECC Server.
    This way your system should be correctly configured to use the MOPZ.
    Unfortunately I'm not aware of a guide explaining these details.
    Sometimes the System Landscape guide at service.sap.com/diagnostics can be very useful. See also note 987835.
    Hope it can help.
    Regards,
    Daniel.
    Edited by: Daniel Nicol on May 24, 2011 10:36 PM

  • Best practice for migrating data tables- please comment.

    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    Please comment on your view of this practice. Thanks!

    >
    Please comment on your view of this practice. Thanks!
    >
    Sounds like the DBAs are using best practices to get the job done. Congratulations to them!
    >
    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    >
    The process you describe is what I would expect, and require, in any well-run environment.
    >
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    >
    Nobody cares if it is riskier for you. The production environment is sacred. Any and all risk to it must be reduced to a minimum at all costs. In my opinion a DBA should NEVER move ANYTHING from a development environment directly to a production environment. NEVER.
    Development environments are sandboxes. They are often not backed up. You or anyone else could easily modify tables or data with no controls in place. Anything done in a DEV environment is assumed to be incomplete, insecure, disposable and unvetted.
    If you are doing development and don't have scripts to rebuild your objects from scratch then you are doing it wrong. You should ALWAYS have your own backup copies of DDL in case anything happens (and it does) to the development environment. By 'have your own' I mean there should be copies in a version control system or central repository where your teammates can get their hands on them if you are not available.
    As for data - I agree with what others have said. Further - ALL data in a dev environment is assumed to be dev data and not production data. In all environments I have worked in ALL production data must be validated and approved by the business. That means every piece of data in lookup tables, fact tables, dimension tables, etc. Only computed data, such as might be in a data warehouse system generated by an ETL process might be exempt; but the process that creates that data is not exempt - that process and ultimately the data - must be signed off on by the business.
    And the business generally has no access to, or control of, a development environment. That means using a TEST or QA environment for the business users to test and validate.
    >
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    >
    Absolutely! That's how professional deployments are performed. Deployment documents are prepared and submitted for sign-off by each of the affected groups. Those groups can include security, DBA, business users, IT and even legal. The deployment documents always include recovery steps, so that if something goes wrong or the deployment can't proceed, there is a documented procedure for how to restore the system to a valid working state.
    The deployments themselves that I participate in have representatives from each of those groups in the room or on a conference call as each step of the deployment is performed. Your 5 tables may be used by stored procedures, views or other code that has to be deployed as part of the same process. Each step of the deployment has to be performed in the correct order. If something goes wrong, the responsible party assists in the retry or recovery of their component.
    It is absolutely vital to have a known, secure, repeatable process for deployments. There are no shortcuts. I agree, for a simple 5 new table and small amount of data scenario it may seem like overkill.
    But, despite what you say, it simply cannot be that easy, for one simple reason. Adding 5 tables with data to a production system has no business impact or utility at all unless there is some code, process or application somewhere that accesses those tables and data. Your post didn't mention the part about what changes are being made to actually USE what you are adding.

  • What is a best practice for managing a large amount of ever-changing hyperlinks?

    I am moving an 80+ page printed catalog online. We need to add hyperlinks to our Learning Management System courses for each reference to a class - there are 100s of them. I'm having difficulty understanding what the best practice is for consistent results when I need to go back and edit (which we will have to do regularly).
    These seem like my options:
    - Link the actual text - sometimes when I go back to edit the link I can't find it in InDesign, but can see it's there when I open up the PDF in Acrobat
    - Draw an invisible box over the text and link it - this seems to work better but seems like an extra step
    - Do all of the linking in Acrobat
    Am I missing anything?
    Here is the document in case anyone wants to see it so far. For the links that are in there, I used a combination of adding the links in InDesign then perfecting them using Acrobat (removing additional links or correcting others that I couldn't see in InDesign). This part of the process gives me anxiety each month we have to make edits. Nothing seems consistent. Maybe I'm missing something obvious?

    What exactly needs to be edited - the hyperlink, or the content, or...?

  • Best practices of having a different external/internal domain

    We're in the midst of migrating from a joint Windows/Mac server environment to a completely Apple one. Previously, DNS was hosted on the Windows machine using the companyname.local internal domain. When we set up the Apple server, our Apple contact created a new internal domain, called companyname.ltd. (Supposedly there was some conflict in having a 10.5 server be part of a .local domain - either way, it was no big deal.) Companyname.net is our website.
    The goal now is to have the Leopard server run everything - DNS, Kerio mailserver, website, the works. In setting up the DNS on the Mac server this go around, we were advised to just use companyname.net as the internal domain name instead of .ltd or .local or something like that. I happen to like having a separate local domain just for clarity's sake - users know if they are internal/external, but supposedly the Kerio setup would respond much better to just the one companyname.net.
    So after all that - what's the best practice of what I should do? Is it ok to have companyname.net be the local domain, even when companyname.net is also the address to our external website? Or should the local domain be something different from that public URL? Or does it really not matter one way or the other? I've been running companyname.net as the local domain for a week or so now with pretty much no issues, I'd just hate to hit a point where something breaks long term because of an initial setup mixup.
    Thanks in advance for any advice you all can offer!

    Part of this is personal preference, but there are some technical elements to it, too.
    You may find that your decision is swayed by the number of mobile users in your network. If your internal machines are all stationary then it doesn't matter if they're configured for companyname.local (or any other internal-only domain), but if you're a mobile user (e.g. on a laptop that you take to/from work/home/clients/starbucks, etc.) then you'll find it a huge PITA to have to reconfigure things like your mail client to get mail from mail.companyname.local when you're in the office but mail.companyname.net when you're outside.
    For this reason we opted to use the same domain name internally as well as externally. Everyone can set their mail client (and other apps) to use one hostname and DNS controls where they go - e.g. if they're in the office or on VPN, the office DNS server hands out the internal address of the mail server, but if they're remote they get the public address.
    For the most part, users don't know the difference - most of them wouldn't know how to tell anyway - and using one domain name puts the onus on the network administrator to make sure it's correct which IMHO certainly raises the chance of it working correctly when compared to hoping/expecting/praying that all company employees understand your network and know which server name to use when.
    Now one of the downsides of this is that you need to maintain two copies of your companyname.net domain zone data - one for the internal view and one for external (but that's not much more effort than maintaining companyname.net and companyname.local) and make sure you edit the right one.
    It also means you cannot use Apple's Server Admin to manage your DNS on a single machine - Server Admin only understands one view (either internal or external, but not both at the same time). If you have two DNS servers (one for public use and one for internal-only use) then that's not so much of an issue.
    Of course, you can always drive DNS manually by editing the zone files directly.

  • Home Movie Cataloging - BEST PRACTICES

    I have about 200 hours of old home movies on VHS which I am in the process of adding to my iMac. I am wondering about 'best practices' for how much video can be stored inside of iMovie '08, at what point the amount of video becomes too much for the program, etc.
    In a perfect world, I'd like to simply import all of my home videos into iMovie, leave them in the 'library' section, and make 2-5 minute long clips in the 'projects' section for sharing with family members, but never deleting anything from the 'library'. Is this a good way to store original data? Would it be smarter to export all of the original video content to .DV files or something like that for space saving, etc?
    Can I use iMovie to store and catalog all of my old home movies in the same way I use iPhoto to store ALL of my photos, and iTunes to store ALL of my music/hollywood-movies, etc?

    We-ell, since no-one else has replied:
    1 hour of DV (digital video in the file system which iMovie uses) needs 13GB of hard disc space.
    You have 200 hours of video. 200 x 13 = 2,600 gigabytes. Two point six terabytes. If you put all that on one-and-a-bit 2TB hard discs, and a hard disc fails - oops! - where's your backup? ..Ah, on another one-and-a-bit 2TB hard discs ..or, preferably, spread over several hard discs, so that if one fails you haven't lost everything!
    iMovie - the program - can handle video stored on external discs. But are you willing to pay the price for those discs? If so; fine! Digitise all your VHS and store it on computer discs (prices come down month by month).
    Yes, you can "mix'n'match" clips between different projects, making all sorts of "mash-ups" or new videos from all the assorted video clips. But you'll need more hard disc space for the editing, too. You could use your iMac's internal hard disc for that ..or use one of the external discs for doing the editing on. That's how professionals edit: all the video "assets" on external discs, and edit onto another disc. That's what I do with my big floorstander PowerMac, or whatever those big cheesegraters were called..
    So the idea's fine, as long as you have all the external storage you'd need, plus the backup in case one of those discs fails, and all the time and patience to digitise 200 hours of VHS.
    Note that importing from VHS will import material as one long, continuous take - there'll be no automatic scene breaks between different shots - so you'll have to spend many hours chopping up the material into different clips after importing it.
    Best way to index that? Dunno; there have been several programs which supposedly do the job for you (..I can't remember their names; I've tried a few: find them by Googling..) but they've been more trouble - and taken up more disc space - than I've been prepared to bother with. I'd jot down the different clips as you create them, either by jotting in TextEdit (simplest) or in a database or spreadsheet program such as Excel or Numbers or similar ..or even in a notebook.
    Jot down the type of footage (e.g; 16th Birthday party), name of clip (e.g; 016 party), duration (e.g; 06:20 mins and seconds) and anything else you might need to identify each clip.
    Best of luck!
