Best practices...one timeline or many?

I have a lot of nodes with lots of complex behavior. Up until this point, I've used individual animation/effect timelines attached to each node. Is it less resource-intensive to have a single timeline and use bound variables with on-change triggers that the nodes watch to kick off 'events'? I'm not sure how timelines are implemented in JavaFX... are they each managed in their own threads?
Is it a best practice to utilize a single 'large' timeline or many smaller ones?
Any help or insight would be appreciated.
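For reference, here is a minimal sketch of the single-timeline approach, assuming the Java-based JavaFX API (javafx.animation); the shared "tick" property and the node wiring are made up for illustration, not taken from the original post. As far as I know, timelines are not each given their own thread either way: all animations are driven by the same pulse on the JavaFX Application Thread, so the cost difference comes down to how much listener work each pulse triggers rather than threading.

// A minimal sketch, assuming the javafx.animation API. One shared Timeline
// updates a single "tick" property; nodes react to it instead of owning timelines.
import javafx.animation.Animation;
import javafx.animation.KeyFrame;
import javafx.animation.Timeline;
import javafx.beans.property.DoubleProperty;
import javafx.beans.property.SimpleDoubleProperty;
import javafx.scene.Node;
import javafx.util.Duration;

public class SharedTimelineSketch {

    // One shared "clock" that every node observes.
    private final DoubleProperty tick = new SimpleDoubleProperty(0);

    public Timeline buildSharedTimeline() {
        Timeline timeline = new Timeline(
                new KeyFrame(Duration.millis(16), e -> tick.set(tick.get() + 1)));
        timeline.setCycleCount(Animation.INDEFINITE);
        return timeline;
    }

    public void wireNode(Node node) {
        // Hypothetical per-node reaction; the listener runs on the FX Application Thread.
        tick.addListener((obs, oldValue, newValue) ->
                node.setTranslateX(Math.sin(newValue.doubleValue() / 30.0) * 50));
    }
}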

Similar Messages

  • Need best practice: one application or many?

    We have a database and are going to build a GUI using APEX.
    The question is:
    should we create many applications - one for every task (or group of tasks / user roles)
    or create one large application (hundreds of pages)
    Why?

    Greetings,
    I don't know much about the subject, but I can tell you two things.
    First, you can create page groups inside an application, so even with hundreds of pages you can still keep them separated by group.
    Second, in a large system composed of several applications you can set up a common login for all of them, so you only need to log in once. I have seen a thread about this recently.
    My opinion?
    One application is best...
    Because in a GUI you will probably have a lot of JavaScript to make things smooth on the client side.
    Also, the JavaScript code can and SHOULD be kept apart from the application; you should just link the file in the application template. I recommend Prototype/script.aculo.us for this because they are already used by APEX and are excellent, with many examples on the web.
    Best Regards

  • Swing best practice - private modifier vs. many parameters

    Dear Experts,
    I have a comboBox that has a customized editor and a KeyListener that responds to several keyPressed and keyTyped events. The comboBox is used in two different JFrames, say frmA and frmB.
    Since the KeyListener changes the state of 8 other components in frmA, I have two options:
    Option 1:
    - Code the comboBox in a separate class and pass all the components as parameters. I will have around 10 parameters, but the components can be kept private to frmA or frmB.
    Option 2:
    - Code the comboBox in a separate class and pass the instance of the caller (frmA or frmB), so that the comboBox can change the state of the other components in frmA or frmB according to its caller. However, the components must not be private; they have to be accessible to the comboBox class.
    My questions:
    1. I have not implemented option 2, so I have not proved that it will work. Will it work?
    2. Which option will be more efficient and require less CPU time? If it is the same, which option is the best practice?
    3. Is there any other option that is better than these two options?
    Thanks for your advice,
    Patrick

    Option 2:
    - Coding the comboBox in separate class and pass the instance of the caller (frmA or frmB), so that the comboBox can change the state of the other components in frmA or frmB according to its caller. However, the components must not be private and should be able to be accessed by the comboBox class.
    My questions:
    1. I have not implemented option 2, so I have not proved that it will work. Will it work?
    It doesn't hold up in the long run. Doing so couples your specific ComboBox class to all widgets that react to the ComboBox changes. If you happen to add a new button in either JFrame that should also be affected by the combo-box selection, you'll have to modify, and re-test, the ComboBox code. Moreover, if a new button was needed in one JFrame but not the other, you'd have to introduce a special case in your ComboBox code.
    Instead of having the ComboBox's listeners invoke methods on each piloted widget, have them invoke one method (selectionChanged(...)) on these widgets' common parent (not necessarily a graphical container, but an object that has (maybe indirect) references to each of the dependent widgets).
    2. Which option will be more efficient and require less CPU time?
    I wouldn't worry about it.
    In the graphical layer of an application, and unless the graphical representation performs computations on the bitmap, any action is normally much quicker than business-logic computation. Any counter-example is likely to be a bug in the UI implementation (such as not observing Swing's threading rules) or a severe flaw in the design (such as having a hierarchy of several hundred JComponents). Swing widgets are pretty responsive to genuine calls such as setEnabled(), setBackground(), setText(), ...
    If it is the same, which option is the best practice?
    Neither. Hardcoding relationships between widgets may be OK within a single, and single-purpose, form.
    But if you want to code a reusable component, design it for reuse (that is, the less it knows about which context it is used in, the more contexts it can be used in).
    In general, widgets that know each other involve a quadratic number of references, which accordingly impacts the code's readability (and bug rate). This is the primary reason for introducing a Mediator pattern (of which my reply to 1 above is a degenerate form).
    3. Is there any other option that is better than these two options?
    Yes. Look into the Mediator pattern (http://en.wikipedia.org/wiki/Mediator_pattern; the Wikipedia page is not compelling, but you'll easily find lots of resources on the Web).
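    To make the Mediator suggestion above concrete, here is a minimal Swing sketch; the interface and method names (FormMediator, selectionChanged) are illustrative, not from the original posts, and frmB would implement the same interface with its own widgets.

    import javax.swing.JButton;
    import javax.swing.JComboBox;
    import javax.swing.JTextField;

    // The combo box only knows this one interface, not the individual widgets.
    interface FormMediator {
        void selectionChanged(Object newSelection);
    }

    class SharedComboBox extends JComboBox<String> {
        SharedComboBox(FormMediator mediator) {
            addActionListener(e -> mediator.selectionChanged(getSelectedItem()));
        }
    }

    // Each frame implements the mediator and keeps its widgets private.
    class FrmA implements FormMediator {
        private final JButton saveButton = new JButton("Save");
        private final JTextField nameField = new JTextField();

        @Override
        public void selectionChanged(Object newSelection) {
            boolean enabled = newSelection != null;
            saveButton.setEnabled(enabled);
            nameField.setEnabled(enabled);
        }
    }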

  • Best practices for subcriptions of many servers

    Thinking of monitoring free space on my servers. Servers are 2003, 2008, 2012 (plus R2). Based on
    http://fehse.ca/2013/07/disk-space-monitoringcommon-practices/ we need to make three monitors for each version of server, so the best practice is to create three groups based on a version condition.
    But we have many specialized servers (BizTalk, CRM, different application servers etc.) and want alerts sent to the application admins. How can I implement such a task without creating a bunch of groups based on application and overriding each monitor for each group?

    If you have multiple different teams, you will most likely need multiple different SCOM groups to organize your notifications and configure overrides against. You could limit the number of groups you have to create by using the same group for both the override and the notification to the app team.
    For example, if you are looking to customize thresholds for the BizTalk servers, you would create a SCOM group and add the various logical disk objects to that group. When you create your threshold override, you will target that group of disk objects. When you create your notification, you will scope the notification to the logical disk space monitor and this group of disk objects.
    Beyond this there isn't much you can do to keep from creating a bunch of groups. You may actually have multiple groups for the BizTalk team even, if they are utilizing multiple OSes for their servers, because you should not combine disks from different OS versions into the same group, as your overrides will not work properly.

  • Best Practice: One Library, Multiple Macs ?

    Despite searching, I've yet to find a definitive answer to this one - so any advice would be appreciated....
    Having downloaded the trial, I very much like Aperture, but my decision to purchase and continue using it hinges very much on whether or not I can access the same library from multiple machines.
    Here is the scenario:
    My 'studio' currently has a Quad G5 (soon to be upgraded to a MacPro), with ample Firewire storage, including 2 new 500Gb Western Digital MyBook drives. I now also have a MacBook Pro to allow me to work away from the studio.
    So far, I'm using one of the 500Gb drives to hold everything I need - 'work in progress' documents, images etc., with this being SuperDuper! smart-updated daily to the second drive. This allows me to take drive 1 away at night and work from home, or go out of the office with the MacBook Pro and have all my media to hand, and 'synchronise' upon return the next day or whenever.
    I think what I'd like to be able to do, is set up Aperture on the G5 and get all my images sorted and stored into a new library (out of iPhoto and various Finder folders etc) stored on FW Drive 1 along with the rest of my stuff. This would semi-automatically be backed up to FW Drive 2 as mentioned above.
    However, I want to be able to access this library with Aperture on the MacBook Pro when I'm not in the office - will this work? I appreciate that I'll need 2 licenses of Aperture (one for the desktop, and one for the laptop) but my concern is whether or not both copies of Aperture can use the same library....... I wonder if the Aperture prefs would prevent this from working or screw it up completely.
    If this ain't gonna work - what other options do I have !?
    MacBook Pro 15"   Mac OS X (10.4.8)  

    Not sure this will help but this is what I have decided to do.
    System: MacBook Pro and a G5. Storage: 2 Western Digital 160 GB USB 2.0 Passport portable drives, 1 OWC 160 GB FireWire 800 drive (for backing up the internal drive of my MacBook Pro), and 5 500 GB Western Digital MyBook Pro FW drives (attached to my G5). One Aperture library resides on my MacBook Pro's internal drive, and a yearly (like 2006) Aperture library resides on one 500 GB WD MyBook Pro.
    In studio, I shoot wirelessly and transfer by FTP to the MBP, and using Aperture Hot Folder I move the images into Aperture to show the client during and at the end of the shoot. They are managed, but sadly the files are not named by studio convention. Once the client leaves, I delete the temporary project and reimport the pictures, renaming them using the studio format, and they are managed. At this point the pictures reside on the internal drive of my MBP.
    I do the majority of the clean-up of the images at this point (contrast, exposure, saturation, sharpening, leveling, blemish corrections, etc., keywording). Once done, I export the project to my primary WD Passport drive and then, using the old-fashioned network strategy of walking, take the WD Passport drive to the G5 and import the project into the G5 library, converting the images to referenced files so I have a Finder-based architecture for all my RAW files.
    Once I know the files are on the G5, I go back to my MBP and convert the managed pictures to referenced pictures, in the process moving them off my internal drive and onto the WD 160 GB USB 2.0 drive in a Finder-based architecture. This keeps the library on my MBP pretty small and allows me to carry 160 GB worth of pictures that I can work on at any time or place. (Actually, I will probably pick up a 3rd 160 GB Passport soon, like this weekend, so I will be able to carry 320 GB of pictures with me.)
    Now it gets nasty, because I have two sets of pictures, one on my MacBook Pro and one on my G5. Again using old-fashioned strategies, I decided that the workflow could only go one way... from MBP to G5 and never (or very rarely) in the opposite direction. The other nasty part is how to deal with changes made after the first transfer of the project to the G5. The vast majority of those changes go through Photoshop, and by default they become managed files. So for a particular project, when I start to do additional editing, I bring those images into an album (let's say "Selected pictures") for that project and edit away. Once I am really done, I generate a new project (if the original project is 070314, the new project would be 070314 Update) and I drag the album to the new project. The nice thing is that the primary pictures remain in the original project when I move the album. I then export Project Update, consolidating images, and import that into the library on the G5. Lastly, I relocate the masters of the managed files to the WD Passport drive. I pay the penalty of having a few duplicates of a couple of pictures, and the architecture of my G5 library is not identical to my MBP library, but I have all of my primary images plus CS2 changes available on my MBP and my G5.
    OK..... my library on my MBP internal is backed up to the OWC drive using Superduper so I always have a current replicate of that library. The library and pictures on the 500 GB WD Mybook Pro are backed up 2-3 times per week to a second 500 GB WD MyBook Pro.
    In case my G5 fails, I can access the library by simply attaching my MBP to the FireWire chain and double-clicking on the library icon. This forces Aperture to open the library on the 500 GB WD MyBook Pro drive. In case my MBP fails, I can boot my MBP or G5 off the OWC drive.
    Principles: A one-library solution will never work, because eventually, even with referenced images, it will become too large for a single drive. So start now with a strategy of smaller library structures. I use a yearly system, but others are possible. My yearly system consists of my library plus referenced images on one 500 GB drive, plus a second drive to which I back up. If I run out of space, I will buy two more drives, perhaps moving to TB drives.
    Don't try to store very many images on your internal drive of any computer. Develop a methodology of keeping the images on external (FW or eSATA) drives where capacity can be readily expanded.
    Keep your MBP library pristine by regularly moving managed files over to referenced files on a bus powered portable drive.
    Make sure you regularly back up both systems.
    Hope this helps.
    steven

  • Best Practice "One SSID for everything"

    Hello Guys,
    we switched from ACS to ISE and now we want to have just two SSIDs for all business needs.
    I'm not sure if this is the right or best way to do it.
    One SSID is for the guest network and also for BYOD registration.
    The second SSID is for BYOD and company devices (laptop, iPad, iPhone...). But we also have Cisco 7925G phones, which should get a client cert and then also connect to that SSID. In the old setup this was a separate SSID with CCKM enabled. Now, because of compatibility, I had to disable CCKM. Also, the new SSID would have client band select enabled, which should be good for voice, right?
    In your experience, is it a good idea to put all clients on one SSID?
    Does wireless voice work fine without CCKM?
    What is your recommendation for that setup regarding SSID and voice/video configuration, especially 802.11 settings and CAC?
    Thanks for help
    Kind regards
    Philip

    A lot of vendors will also suggest one SSID if possible, but the rule of thumb is 3-4 max. The main issue is the differences required for specific WLANs, which isn't just Data vs. Voice; you also have to look at mDNS, multicast, 802.11r, DTIMs, MFP, etc. You can combine all devices onto one, but then all the features/settings will be the same, which isn't ideal all the time. There are attributes which you can set from ISE to push out to the WLC(s), but it's the other unique values that you need to research and understand.

  • Best Backup Practices - One Library vs Individual Project Libraries?

    Hello, I'm new to Final Cut Pro X. I have several completed projects that I've been backing up to an external hard drive. At first I thought it would be best to create one library on the backup drive and back up my events to this one library. But I've second-guessed myself and have instead been creating a new library for each event and then backing up the event to its own library. This seems less prone to loading issues on large projects and less likely for multiple events to become corrupted if the library gets hosed.
    How do you manage backups, and what do you think are the best practices: one backup library, or a separate library for each event?

    There has been a lot said about the "best practice" for backups.
    In reality there is no right or wrong, but having all your backup eggs/Libraries in one basket can be risky.
    I use your latter suggested method, as it suits my style of work.
    I prefer to have each Project and its Event/s in their own Library, as a self-contained unit.
    When the Project is done the complete Library is copied to another drive, plus if it's important another drive is used for a second backup.
    What you have to work through is what gear you have or need to acquire to make your backup method secure and easy to do.
    Nowadays you need lots of drive space and thankfully Terabytes are cheaper than ever.
    Al

  • Best practice when deleting from different tables simultaneously

    Greetings people,
    I have two tables joined with a foreign key constraint. They are written at the same time to keep the constraint happy, but I don't know the best way of deleting from them as far as rowsets and data models are concerned. Are there "gotchas", like: do I delete the row in the foreign key table first?
    I am reading thread:http://swforum.sun.com/jive/thread.jspa?forumID=123&threadID=49918
    and getting my head around it.
    Is there a tutorial which deals with this topic?
    I was wondering the best way to go.
    Many Thanks.
    Phil
    is there a "best practice" method for

    Without knowing many details about your specifics... I can suggest a few alternatives -
    You can definitely build coordinating the deletes into your application - you can automatically delete any FK-related entries prior to deleting the master, or refuse to delete the master until the user goes and explicitly deletes the children... it just depends on how you want to manage it.
    Also, in many databases you can build the cascading delete rules into the database tables themselves, so that when you delete the master the deletes automatically cascade. I think this is something you typically declare when creating the FK constraint (delete cascade and update cascade rules).
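    For the application-side coordination, a minimal sketch in plain JDBC (not the rowset/data-model API the thread is about) is to delete the child rows before the parent row inside one transaction; the table and column names here (parent_table, child_table, parent_id) are hypothetical:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class CascadingDeleteSketch {
        public static void deleteParent(Connection conn, long parentId) throws SQLException {
            conn.setAutoCommit(false);
            try (PreparedStatement delChildren =
                     conn.prepareStatement("DELETE FROM child_table WHERE parent_id = ?");
                 PreparedStatement delParent =
                     conn.prepareStatement("DELETE FROM parent_table WHERE id = ?")) {
                delChildren.setLong(1, parentId);
                delChildren.executeUpdate(); // children first, so the FK constraint stays satisfied
                delParent.setLong(1, parentId);
                delParent.executeUpdate();
                conn.commit();
            } catch (SQLException e) {
                conn.rollback(); // nothing is left half-deleted on failure
                throw e;
            }
        }
    }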
    hth,
    v

  • Database best practice: max number of columns

    I have two questions that I would appreciate comments on...
    We have a table titled TRANSACTION with 160 columns and a view titled TRANSACTIONS_VIEW with 233 columns in it. This was designed by someone a while ago. I am wondering whether it is against best practice to have this many columns in a table? I have never before seen a table with this many columns in it and feel that there must be a way to split the data into multiple tables to make it more manageable.
    My second question is on partitions. The above table TRANSACTION is partitioned by manually specifying partitions with max values on the transaction date, starting August 2008 through January 2010 at 1-month increments. Isn't it much better to specify automatic partitioning using the interval clause?

    kev374 wrote:
    thanks for the response, yes there are many columns that violate 3NF and that is why the column count is so high.
    Regarding the partition question, by better I meant that by using "interval" the partitions could be created automatically at the specified interval instead of having to be created manually.
    The key is to understand the logic behind these tables and columns and why it was designed like this. If it's a business requirement, then 200-some columns are not bad; if it's a design flaw, 20 columns could be too much. It's not necessarily always good to have a strict 3NF design; sometimes, for various reasons, you can denormalize the tables to get better performance.
    As to the partitioning question, you still have to do the rolling-window type of partitioning scheme (dropping/adding partitions as time goes by) manually.

  • Best practice UOM changes for one master data

    Hello ALL,
    What best practices must be followed when changing a material master's UOM data?
    For example, I have two UOMs:
    base UOM = EA, and an alternative UOM CAR. 1 CAR = 10 EA.
    Now my master data team is deleting this CAR via MM02.
    What follow-up actions should we take?
    (1) For example, I have an existing contract for this material in CAR.
    (2) I have many POs for which GR has not yet been done.
    Likewise, can you list the impacts in the system
    and how we can go ahead?
    What best practices are followed for such UOM changes at your customers?
    br
    muthu

    Hi Ravi / tao
    Yes, I am deleting the alternative UOM CAR in MM02.
    I created a contract for that material in CAR.
    My contract is also referred to in the source list.
    Please share your views.
    What happens to my open documents? The system may not allow me to do GR, right?
    br
    muthu

  • Working with many sequences- best practice

    Hi.
    I've just started using Adobe Premiere CS6. My goal is to create a 2-hour-long movie, based on 30 hours of raw GoPro footage recorded on a recent vacation.
    Now my question is, what is the best practice for working with so many sequences/movie clips?
    Do you have one heavy project file, with all the clips?
    Or do you make small chapters that contains x number of sequences/is x minutes long, and in the end combine all these?
    Or how would you do it best, so it's easiest to work with?
    Thanks a lot for your help.
    Kind regards,
    Lars

    I'll answer your second question first, as it's more relevant to the topic.
    You should export in the very highest quality you can based on what you started with.
    The exception to this is if you have some end medium in mind. For example, it would be best to export 30 FPS if you are going to upload it to YouTube.
    On the other hand, if you just want it as a video file on your computer, you should export it as 50 FPS because that retains the smooth, higher framerate.
    Also, if you are making slow-motion scenes with that higher framerate, then export at the lowest framerate (for example, if you slow down a scene to 50% speed, your export should be at 25 FPS).
    About my computer:
    It was for both, but I built it more with gaming in mind as I wasn't as heavily into editing then as I am now.
    Now, I am upgrading components based on the editing performance gains I could get rather than gaming performance gains.

  • Best practice for if/else when one outcome results in exit [Bash]

    I have a bash script with a lot of if/else constructs in the form of
    if <condition>
    then
        <do stuff>
    else
        <do other stuff>
        exit
    fi
    This could also be structured as
    if ! <condition>
    then
        <do other stuff>
        exit
    fi
    <do stuff>
    The first one seems more structured, because it explicitly associates <do stuff> with the condition.  But the second one seems more logical because it avoids explicitly making a choice (then/else) that doesn't really need to be made.
    Is one of the two more in line with "best practice" from pure bash or general programming perspectives?

    I'm not sure if there are 'formal' best practices, but I tend to use the latter form when (and only when) it is some sort of error checking.
    Essentially, this would be when <do stuff> was more of the main purpose of the script, or at least that neighborhood of the script, while <do other stuff> was mostly cleaning up before exiting.
    I suppose more generally, it could relate to the size of the code blocks.  You wouldn't want a long involved <do stuff> section after which a reader would see an "else" and think 'WTF, else what?'.  So, perhaps if there is a substantial disparity in the lengths of the two conditional blocks, put the short one first.
    But I'm just making this all up from my own preferences and intuition.
    When nested this becomes more obvious, and/or a bigger issue.  Consider two scripts:
    if [[ test1 ]]
    then
        if [[ test2 ]]
        then
            echo "All tests passed, continuing..."
        else
            echo "failed test 2"
            exit
        fi
    else
        echo "failed test 1"
    fi

    if [[ ! test1 ]]
    then
        echo "failed test 1"
        exit
    fi
    if [[ ! test2 ]]
    then
        echo "failed test 2"
        exit
    fi
    echo "passed all tests, continuing..."
    This just gets far worse with deeper levels of nesting.  The second seems much cleaner.  In reality though I'd go even further to
    [[ ! test1 ]] && echo "failed test 1" && exit
    [[ ! test2 ]] && echo "failed test 2" && exit
    echo "passed all tests, continuing..."
    edit: added test1/test2 examples.
    Last edited by Trilby (2012-06-19 02:27:48)

  • Select One Choice attribute's LoV based on two bind variables, best practice

    Hello there,
    I am in the process of learning the ADF 11g, I have following requirement,
    A page must contain a list of school names which needs to be fetched based on two parameters; the parameters are student information entered on the previous page.
    I have defined a read-only view "SchoolNamesViewRO"; its query depends on two bind variables: stdDegree and stdCateg.
    I added that RO view as a view accessor to the entity to which the name attribute belongs, and then added an LoV for the name attribute using the read-only view,
    added the name attribute as a Select One Choice to page2,
    and now I need to pass the values of the bind variables of the read-only view.
    The information needed for the bind variables is entered on the previous page; I could have the data as binding attribute values in the page2 definition.
    I have implemented the two approaches below, but both resulted in an empty list:
    * added an ExecuteWithParams action to the bindings of the page, then defined an Invoke Action (with a refresh condition) in the executables, and set the default values of the parameters to be the attribute values' input values;
    in the trace I could see that the binding fetches the correct values as supposed, but the select list appears empty. Is this execution of the query considered to be connected to the list?
    * added a method to the read-only view's Impl Java class to set the bind variables, then defined it as a MethodAction in the bindings, and then created an Invoke Action for it; the select list is also empty.
    If the query is executed with the passed variables, then why is the list empty? Is it reading data from somewhere other than the page?
    And what is the best practice to implement this requirement?
    Would the solution be to set the default value of the bind variables to some kind of expression?
    Please note that the query execution had the bind variables set to the correct values (I can see this in the trace).
    Would you give some hints or redirect me to a useful link?
    Thanks in advance
    Regards,

    Please give me an example using a backing bean. For example:
    <?xml version='1.0' encoding='UTF-8'?>
    <jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1"
    xmlns:f="http://java.sun.com/jsf/core"
    xmlns:h="http://java.sun.com/jsf/html"
    xmlns:af="http://xmlns.oracle.com/adf/faces/rich">
    <jsp:directive.page contentType="text/html;charset=UTF-8"/>
    <f:view>
    <af:document id="d1">
    <af:form id="f1">
    <af:selectOneChoice label="Label 1" id="soc1" binding="#{Af.l1}"
    autoSubmit="true">
    <af:selectItem label="A" value="1" id="si1"/>
    <af:selectItem label="B" value="2" id="si2"/>
    </af:selectOneChoice>
    <af:selectOneChoice label="Label 2" id="soc2" disabled="#{Af.l1=='2'}"
    partialTriggers="soc1">
    <af:selectItem label="C" value="3" id="si3"/>
    <af:selectItem label="D" value="4" id="si4"/>
    </af:selectOneChoice>
    </af:form>
    </af:document>
    </f:view>
    </jsp:root>
    package a;
    import oracle.adf.view.rich.component.rich.input.RichSelectOneChoice;
    public class A {
        private RichSelectOneChoice l1;
        public A() {
        }
        public void setL1(RichSelectOneChoice l1) {
            this.l1 = l1;
        }
        public RichSelectOneChoice getL1() {
            return l1;
        }
    }
    Is there any mistake?
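    For the original question, here is a minimal sketch of one way to set the two bind variables programmatically before the view object is queried, assuming ADF Business Components; the application module and method names are hypothetical, and whether this reaches the LoV's own view accessor instance depends on how the LoV is wired, so treat it only as a starting point:

    import oracle.jbo.ViewObject;
    import oracle.jbo.server.ApplicationModuleImpl;

    public class SchoolAMImpl extends ApplicationModuleImpl {

        // Exposed as a client method and invoked (e.g. via a method action binding)
        // before the page that renders the Select One Choice.
        public void prepareSchoolNames(String stdDegree, String stdCateg) {
            ViewObject schoolNames = findViewObject("SchoolNamesViewRO");
            schoolNames.setNamedWhereClauseParam("stdDegree", stdDegree);
            schoolNames.setNamedWhereClauseParam("stdCateg", stdCateg);
            schoolNames.executeQuery();
        }
    }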

  • Best practice for repositories during configuration - one or several DBs?

    Establishing my 11.1.2 dev box; we are on 9.3.1 in Production. Reading through the documentation, it states that one database is the repository for Shared Services, Business Rules, Essbase, etc.
    Since I came to this new job with 9.3.1 already installed, I'm not sure if this is verbiage that has been standard since version 9.3 or something new for 11.1.x.
    So... what is the best practice? Is it better to lump all foundation-type activity into one DB (I realize Planning apps have their own DB), or is it better to have a DB for BI+, a DB for Shared Services, etc.?
    JTS

    Here is what Oracle have to say
    "For ease of deployment and simplicity, for a new installation, you can use one database for all products, which is the default when you configure all products at the same time. To use a different database for each product, perform the “Configure Database” task separately for each product. In some cases you might want to configure separate databases for products. Consider performance, roll-back procedures for a single application or product, and disaster recovery plans."
    I would say in a development environment there is no harm in using one db/schema for all products; remember that some products require separate databases/schemas, e.g. Planning applications.
    In production environment I tend to promote keeping them separate as it helps with troubleshooting and recovery.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Best Practice to use one Key on ACE for new CSR?

    We generate multiple CSRs on our ACE... but our previous network admin was only using
    one key for all new CSR requests.
    I.e., we have a samplekey.pem key on our ACE,
    and we use samplekey.pem to generate CSRs for multiple certs.
    Is this best practice, or should we be using a new key for each new CSR?
    Also, is it OK to delete old CSRs on the LB, since the limit is only 8? Thanks

