Question on best practice....

Friends,
Final Cut Studio Pro 5/Soundtrack Pro 1.0.3
Powerbook G4, 2GB Ram
I have a DV session recorded over 6 hours that I need some assistance with. The audio for the session was recorded in two instances....via a conference "mic" plugged into a Marantz PDM-671 audio recorder onto compactflash (located in the front of the room by the presenter(s)) AND via the built-in mics on our Sony HDR-FX1 video camera. Needless to say, the audio recording on the DV tape is not very good (presenters' voice(s) are distant with lots of "noise" in the foreground), while the Marantz recording is also not great...but better.
Since these two were not linked together or started recording at the same time, the amount/time of recording doesn't match. I'm looking for either of the following:
(a) Ways to clean up or enhance the audio recording on the DV tape so that the "background" voices of the presenters are moved to the foreground and able to be amplified properly.
OR
(b) A software/resource that would allow me to easily match my separate audio recording from the Marantz to the DV tape video, so I could clean up the "better" of the two audio sources, but match the audio and video without having our speakers look like they're in a badly dubbed film.
Any advice or assistance you could give would be great. Thanks.
-Steve
Steven Dunn
Director of Information Technology
Illinois State Bar Association
Powerbook G4   Mac OS X (10.4.6)   2GB RAM

Hello Steven,
What I would do in your case, since you have 6 hours of material, is edit the show with the audio from the DV camera. Then, as painful as this will be, take the better audio from the recorder and sync it up until it "phases" with the audio from the DV camera. One audio track will hold the DV camera audio. Create a second audio track, import the audio from the recorder, and place it there. Find the exact "bite" of audio and match it to the start of the DV camera audio clip. Now slip/slide the recorder audio until the sound starts to "phase". This will take a while, but in the end it works when the original camera audio was recorded from across the room. Good luck.

Similar Messages

  • Quick question regarding best practice and dedicating NICs for traffic separation.

    Hi all,
    I have a quick question regarding best practice and dedicating NICs for traffic separation for FT, NFS, iSCSI, VM traffic, etc.  I get that it's best practice to try and separate traffic where you can, especially for things like FT, but I just wondered if there was a preferred method for achieving this.  What I mean is ...
    -     Is it OK to have everything on one switch, but set each respective portgroup to have a primary and failover NIC, i.e. FT, iSCSI and all the others fail over? (This would sort of give you a backup in situations where you have limited physical NICs.)
    -    Or should I always aim to separate things entirely, with their own respective NICs and their own respective switches?
    During the VCAP exam, for example (not knowing in advance how many physical NICs will be available to me), how would I know which traffic I should segregate on its own separate switch?  Is there some sort of ranking order of priority/importance?  FT, for example, I would rather not stick on its own dedicated switch if I could only afford to give it a single NIC, since this to me seems like a failover risk.

    I know the answer to this probably depends on how many physical NICs you have at your disposal; however, I wondered if there are any golden 100% rules, for example that FT must absolutely be on its own switch with its own NICs, even at the expense of reduced resiliency should the absolute worst happen?  Obviously I also know it's best practice to separate NICs by vendor and hosts by chassis and switch, etc.

  • Architecture/Design Question with best practices ?

    Architecture/Design Question with best practices ?
    Should I have separate web servers and WebLogic instances for the application and for IAM?
    If yes, then how will the two communicate? For example, should I have a WebGate at both servers, communicating with each other?
    Any reference which helps in deciding how to design this would be appreciated; and if I have separate WebLogic instances, one for the application and one for IAM, then how will session management occur, etc.?
    How does general design happen in an IAM project?
    Help Appreciated.

    The standard answer: it depends!
    From a technical point of view, it sounds better to use the same "middleware infrastructure", BUT then the challenge is to find the latest WebLogic version that is certified by both the IAM applications and the enterprise applications. This will pull down the WebLogic version, since the IAM application stack is certified against older versions of WebLogic.
    From a security point of view (access, availability): do you have the same security policy for the enterprise applications and the IAM applications (a component of your security architecture)?
    From an organisational point of view: who owns WebLogic, the enterprise applications and the IAM applications? At one of my customers, applications and infrastructure/security are in two different departments. Having a common WebLogic domain didn't fit the organization.
    My short answer would be: keep them separated; this will save you a lot of technical and political challenges.
    Didier.

  • New to ColdFusion - Question regarding best practice

    Hello there.
    I have been programming in Java/C#/PHP for the past two years or so, and as of late have really taken a liking to ColdFusion.
    The question that I have is around the actual separation of code, and whether there are any best practices that are preached for this language. While I was learning Java, I was taught that it's best to have several layers in your code, for example: front end (JSPs or ASP) -> business objects -> DAOs -> database. All of the code that I have written using these three languages has followed this simple structure, for the most part.
    As I dive deeper into ColdFusion, most of the examples that I have seen from veterans of this language don't really incorporate much separation. And I'm not referring to the simple "here's what this function does" type of examples online, where most of the code is written in one file. I've been able to see projects that have been created with this language.
    I work with a couple of developers who have been writing in ColdFusion for a few years and posed this question to them as well. Their response was something to the effect of, "I'm not sure if there are any best practices for this, but it doesn't really seem like there's much of an issue making calls like this".
    I have searched online for any type of best practices or discussions around this and haven't seen much of anything.
    I do still consider myself somewhat of a noobling when it comes to programming, but matters of best practice are important to me for any language that I learn more about.
    Thanks for the help.

    Frameworks for web applications can require a lot of overhead, more than you might normally need when programming ColdFusion. I have worked with frameworks, including Fusebox. What I discovered is that when handing a project over to a different developer, it took them over a month before they were able to fully understand the Fusebox framework and then program in it comfortably. I decided not to use Fusebox on other projects for this reason.
    For maintainability, sometimes it's better not to use a framework; while there are a number of ColdFusion developers, those that know the Fusebox framework are in the minority. When using a framework, you always have to consider the amount of time needed to learn it and successfully implement it. A lot of it depends on how much of your code you want to reuse. One thing you have to consider is: if you need to make a change to the web application, how many files will you have to modify? Sometimes it's more files with a framework than if you just write code without one.
    While working on a website for Electronic Component sourcing, I encountered this dynamic several times.
    Michael G. Workman
    [email protected]
    http://www.usbid.com
    http://ic.locate-ic.com

  • Question about Best Practices - Redwood Landscape/Object Naming Conventions

    Having reviewed documentation and posts, I find that there is not that much information available in regards to best practices for the Redwood Scheduler in a SAP environment. We are running the free version.
    1) The job scheduling for SAP reference book (SAP Press) recommends multiple Redwood installations and using export/import to move jobs and other redwood objects from say DEV->QAS->PROD. Presentations from the help.sap.com Web Site show the Redwood Scheduler linked to Solution Manager and handling job submissions for DEV-QAS-PROD. Point and Shoot (just be careful where you aim!) functionality is described as an advantage for the product. There is a SAP note (#895253) on making Redwood highly available. I am open to comments inputs and suggestions on this issue based on SAP client experiences.
    2) Related to 1), I have not seen much documentation on Redwood object naming conventions. I am interested in hearing how SAP clients have dealt with Redwood object naming (i.e. applications, job streams, scripts, events, locks). To date, I have seen in a presentation where customer objects are named starting with Z_. I like to include the object type in the name (e.g. EVT - Event, CHN - Job Chain, SCR - Script, LCK - Lock) keeping in mind the character length limitation of 30 characters. I also have an associated issue with Event naming given that we have 4 environments (DEV, QA, Staging, PROD). Assuming that we are not about to have one installation per environment, then we need to include the environment in the event name. The downside here is that we lose transportability for the job stream. We need to modify the job chain to wait for a different event name when running in a different environment. Comments?

    Hi Paul,
    As suggested in the book 'Job Scheduling for SAP' from SAP Press, it is better to have multiple instances of Cronacle (at least 2: one for development & quality, and a separate one for production; this avoids confusion).
    Regarding transporting/replicating the object definitions: it is really easy to import and export objects like events, job chains, scripts, locks etc. It is also easy and quick to create them fresh in each system; only complicated job chains can be time-consuming to create.
    In normal cases, testing of background jobs mostly happens only in the SAP quality instance, with final scheduling in production. So it is very much possible to simply export the verified script/job chain from the Cronacle quality instance and import it into the Cronacle production instance (use of the Cronacle shell is really recommended for fast processing).
    Regarding OSS note 895253: yes, it is highly recommended to keep your central repository, process server and licensing information in a highly available clustered environment. This is very much required, as Redwood Cronacle acts as the central job scheduler in your SAP landscape (with the OEM version).
    As you have confirmed, you are using OEM and hence you have only one process server.
    Regarding the conventions for names, it is recommended to create a centrally accessible naming convention document and then follow it. For example in my company we are using the naming convention for the jobs as Z_AAU_MM_ZCHGSTA2_AU01_LSV where A is for APAC region, AU is for Australia (country), MM is for Materials management and then ZCHGSTA2_AU01_LSV is the free text as provided by batch job requester.
    For other Redwood Cronacle specific objects also you can derive naming conventions based on SAP instances like if you want all the related scripts / job chains to be stored in one application, its name can be APPL_<logical name of the instance>.
    So, in a nutshell, it is highly recommended.
    Also, the integration of SAP Solution Manager with Redwood is to receive monitoring and alerting data and to pass Redwood Cronacle information to SAP Solution Manager to create a single point of control. You can find information on the purpose of the XAL and XMW interfaces in the Cronacle help (F1).
    Hope this answers your queries. Please write if you need some more information / help in this regard.
    Best regards,
    Vithal

  • Question on best practice to extend schema

    We have a requirement to extend the directory schema, and I wanted to know the standard practice adopted:
    1) Is it good practice to manually create an LDIF so that this can be run on every deployment machine at every stage?
    2) Or should the schema be created through the console the first time and the LDIF file from this machine copied over to the schema directory of the target server ?
    3) Should the custom schema be appended to the 99user.ldif file or is it better to keep it in a separate LDIF ?
    Any info would be helpful.
    Thanks
    Mamta

    I would say it's best to create your own schema file. Call it 60yourname.ldif and place it in the schema directory. This makes it easy to keep track of your schema in a change control system (e.g. CVS). The only problem with this is that schema replication will not work - you have to manually copy the file to every server instance.
    If you create the schema through the console, schema replication will occur - schema replication only happens when schema is added over LDAP. The schema is written to the 99user.ldif file. If you choose this method, make sure you save a copy of the schema you create in your change control system so you won't lose it.
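For concreteness, a minimal custom schema file in the 60yourname.ldif style might look like the fragment below. This is a hedged sketch: the attribute and class names are made up for illustration, and the OID arc (1.3.6.1.4.1.99999) is a placeholder; you should use an OID arc registered to your organisation.

```ldif
dn: cn=schema
attributeTypes: ( 1.3.6.1.4.1.99999.1.1 NAME 'myCustomAttr'
  DESC 'Example attribute (placeholder OID)'
  SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE
  X-ORIGIN 'user defined' )
objectClasses: ( 1.3.6.1.4.1.99999.2.1 NAME 'myCustomPerson'
  DESC 'Example auxiliary class (placeholder OID)'
  SUP top AUXILIARY MAY ( myCustomAttr )
  X-ORIGIN 'user defined' )
```

Keeping the file this small makes diffs in your change control system easy to review.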

  • Question regarding best practice

    Hello Experts,
    What is the best way to deploy NWGW?
    We recently architected a solution to install the 7.4 ABAP stack, which comes with Gateway. We chose the central Gateway hub scenario in a 3-tier setup. Is this all that's required in order to connect this hub gateway to the business systems, i.e. ECC? Or do we also have to install the Gateway add-on on our business system in order to expose the development objects to the hub? I'm very interested in understanding how others are doing this and what has been the best way according to your own experiences. I thought creating a trusted connection between the gateway hub and the business system would suffice to expose the development objects from the business system to the hub, in order to create the gateway services in the hub out of them. Is this a correct assumption? Happy to receive any feedback, suggestions and thoughts.
    Kind regards,
    Kunal.

    Hi Kunal,
    My understanding is that in the hub scenario you still need to install an add-on into the backend system (IW_BEP). If your backend system is already a 7.40 system, then I believe that add-on (or its equivalent) should already be there.
    I highly recommend you take a look at SAP Gateway deployment options in a nutshell by Andre Fischer
    Hth,
    Simon

  • Question on best practice/optimization

    So I'm working with the Custom 4 dimension, and I'm going to be reusing the highest member in the dimension under several alternate hierarchies. Is it better to drop the top member under each of the alternate hierarchies, or to create a single new member and copy the value from the top member to the new base one?
    Ex:
    TotC4
    --Financial
    -----EliminationA
    ------EliminationA1
    ------EliminationA2
    -----GL
    -------TrialBalance
    -------Adjustments
    --Alternate
    ----AlternateA
    -------Financial
    -------AdjustmentA
    -----AlternateB
    -------Financial
    -------AdjustmentB
    In total there will be about 8 alternate adjustments (it's for alternate translations, if you're curious).
    So should I repeat the entire Financial hierarchy under each alternate rollup, or just write a rule saying FinancialCopy = Financial? It seems like it would be a trade-off between performance and database size, but I'm not sure if this is even substantial enough to worry about.

    You are better off to have alternate hierarchies where you repeat the custom member in question (it would become a shared member). HFM is very fast at aggregating the rollups. This is more efficient than creating entirely new members which would use rules to copy the data from the original member.
    --Chris

  • Question on best practice for NAT/PAT and client access to firewall IP

    Imagine that I have this scenario:
    Client(IP=192.168.1.1/24)--[CiscoL2 switch]--Router--CiscoL2Switch----F5 Firewall IP=10.10.10.1/24 (only one NIC, there is not outbound and inbound NIC configuration on this F5 firewall)
    One of my users is complaining about the following:
    When clients receive traffic from the F5 firewall (apparently the firewall is doing PAT, not NAT), the clients see IP address 10.10.10.1.
    Do you see this as a problem? Should I make another IP address range available and do NAT properly, so that clients will not see the firewall IP address? I don't see this situation as a problem, but please let me know if I am wrong.

    Hi,
    Static PAT is the same as static NAT, except it lets you specify the protocol (TCP or UDP) and port for the local and global addresses.
    This feature lets you identify the same global address across many different static statements, so long as the port is different for each statement (you CANNOT use the same global address for multiple static NAT statements).
    For example, if you want to provide a single address for global users to access FTP, HTTP, and SMTP, but these are actually all different servers on the local network, you can specify static PAT statements for each server that use the same global IP address but different ports.
    And for PAT, you cannot use the same pair of local and global addresses in multiple static statements between the same two interfaces.
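A hedged sketch of what such static PAT statements might look like in pre-8.3 PIX/ASA-style syntax; the addresses are illustrative documentation values, not taken from the original post:

```
! One global address (203.0.113.10) fronting three internal servers,
! distinguished by port (ftp/www/smtp). Illustrative addresses only.
static (inside,outside) tcp 203.0.113.10 ftp  192.168.1.5 ftp  netmask 255.255.255.255
static (inside,outside) tcp 203.0.113.10 www  192.168.1.6 www  netmask 255.255.255.255
static (inside,outside) tcp 203.0.113.10 smtp 192.168.1.7 smtp netmask 255.255.255.255
```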
    Regards
    Bjornarsb

  • A question about Best Practices

    I'm currently working on a project and have run into a bit of a structure debate.
    Our project works with a relational database.
    Hence we have classes that model certain sections of the DB.
    We wish to create a Data Access Object to interface the model classes to the DB. To enforce consistency in programming, we were thinking of using a DAOInterface that would define all methods (i.e. load(), save(), etc.).
    This leads to one issue: because each model is different, our interface would need to declare arguments and returns as Object.
    Which means a lot of casting... ugh, ugly.
    However, the solution to this problem is to create an interface for each DAO object, but this defeats the purpose, because now any developer on the team can sneak a method in without it being standard across the board.
    I was hoping my fellow developers might be able to share their experiences with this problem and provide recommendations.
    thanks
    J.

    You can declare "marker" interfaces for your DO Classes to be included in the interface for the DAO Class.
    public interface DAOInterface {
        DOInterface create(DOPrimaryKeyInterface key) throws DAOException;
    }
    public interface DOInterface {
    }
    public interface DOPrimaryKeyInterface {
    }
    It still involves casting, but at least not from Object - and it does enforce the "contract."
    As to keeping other developers from screwing it up, that's called Team Management and is out of the purview of this forum. ;D
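As a side note: since Java 5, generics offer another way to keep a single DAO contract without any casting at all. A minimal sketch, where the class and method names are illustrative and not from the original post:

```java
// Generic DAO contract: each model gets its own type-safe view of the
// same interface, so no casts from Object are needed.
interface Dao<T, K> {
    T load(K key) throws DaoException;
    void save(T entity) throws DaoException;
}

class DaoException extends Exception {
    DaoException(String message) { super(message); }
}

// Hypothetical model class, standing in for one of the DB-mapped classes.
class Employee {
    final String id;
    Employee(String id) { this.id = id; }
}

// One concrete DAO per model; the shared interface still enforces the
// method set, so nobody can quietly drift from the standard.
class EmployeeDao implements Dao<Employee, String> {
    public Employee load(String key) { return new Employee(key); } // stub
    public void save(Employee e) { /* persist to DB here */ }
}
```

Callers then write `Employee e = dao.load("E42");` with no cast, while the team is still held to the shared `Dao` contract.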

  • Questions about best practic

    I have this existing functionality
    public class EmployeeUtility {
        public EmpDetail getEmployeeDetail(Employee employee) {
            int accountKey = employee.getAccountKey();
            String name = employee.getName();
            int age = employee.getAge();
            return getDetail(accountKey, name, age, sex, address);
        }
    }
    I am making this class a web service. In order to make this call easier for the consumer, I am changing the parameter to
    public EmpDetail getEmployeeDetail(String empKey){
    because empKey is the key to get all the other details like account key, sex, name, and address. The bad part is an extra database call to get the account key, sex, name, and address based on empKey. Remember, the Employee class above has many more variables than what I have shown. I don't want the consumers of my web service to bang their heads in order to get the info they want.
    Is this the right approach? Basically, I believe my web service should be very flexible and easy to use.
    Thanks
    m
    Edited by: shet on May 21, 2010 7:13 AM

    shet wrote:
    The bad part is an extra database call to get the account key, sex, name, address based on empKey.
    If I understand correctly, earlier you were receiving an Employee object with more information than just a key. Now, since you have changed the parameter to a key, the purpose of the web service has changed as well. So you shouldn't be bothering much. Why do you think this is an issue?
    I dont want my consumer of my webservice to bang their head in order to get the info they want.
    What actually are you looking for?
    Is this the right approach. Basically I believe my webservice should be very flexible and easy to use.
    What flexibility and ease of use are you looking for?
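One common compromise, sketched below, is to keep the key-based operation as a thin facade that does the extra lookup itself, so consumers only ever pass a key. The names and the in-memory "database" are illustrative assumptions, not from the original post:

```java
import java.util.HashMap;
import java.util.Map;

// Simple detail DTO returned to web service consumers.
class EmpDetail {
    final String name;
    final int age;
    EmpDetail(String name, int age) { this.name = name; this.age = age; }
}

class EmployeeService {
    // Stand-in for the extra database call the poster is worried about.
    private final Map<String, EmpDetail> db = new HashMap<>();

    EmployeeService() {
        db.put("E1", new EmpDetail("Alice", 34));
    }

    // Key-based operation: the easiest contract for consumers, at the
    // cost of one internal lookup that hides all the other fields.
    EmpDetail getEmployeeDetail(String empKey) {
        return db.get(empKey);
    }
}
```

The design choice is a trade: one extra lookup per call in exchange for a contract that never forces consumers to assemble an Employee object themselves.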

  • Networking "best practice" for setting up a farm

    Hi all.
    We would like to set an OracleVM farm, and I have a question about "best practice" for
    configuring the network. Some background:
    - The hardware I have is comprised of machines with 4 gig-eth NICs each.
    - The storage will be coming primarily from a backend NAS appliance (Netapp, FWIW).
    - We have already allocated a separate VLAN for management.
    - We would like to have HA capable VMs using OCFS2 (on top of NFS.)
    I'm trying to decide between 2 possible configurations. The first would keep physical separation
    between the mgt/storage networks and the DomU networks. The second would just trunk
    everything together across all 4 NICs, something like:
    Config 1:
    - eth0 - management/cluster-interconnect
    - eth1 - storage
    - eth2/eth3 => bond0 - 8021q trunked, bonded interfaces for DomUs
    Config 2:
    - eth0/1/2/3 => bond0
    Do people have experience or recommendation about the best configuration?
    I'm attracted to the first option (perhaps naively) because CI/storage would benefit
    from dedicated bandwidth and this configuration might also be more secure.
    Regards,
    Robert.

    user1070509 wrote:
    Option #4 (802.3ad) looks promising, but I don't know if this can be made to work across separate switches.
    It can, if your switches support cross-switch trunking. Essentially, 802.3ad (also known as LACP, or EtherChannel on Cisco devices) requires your switch to be properly configured to allow trunking across the interfaces used for the bond. I know that the high-end Cisco and Juniper switches do support LACP across multiple switches. In the Cisco world, this is called MEC (Multichassis EtherChannel).
    If you're using low-end commodity-grade gear, you'll probably need to use active/passive bonds if you want to span switches. Alternatively, you could use one of the balance algorithms for some bandwidth increase. You'd have to run your own testing to determine which algorithm is best suited for your workload.
    The Linux Foundation's Net:Bonding article has some great information on bonding in general, particularly on the various bonding methods for high availability:
    http://www.linuxfoundation.org/en/Net:Bonding
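For reference, a minimal sketch of what Config 1's 802.3ad bond might look like on a Linux host of that era. The file paths, interface names and values are illustrative assumptions; consult the Net:Bonding article above for your distribution's exact syntax:

```
# /etc/modprobe.conf fragment: 802.3ad (LACP) bond with link monitoring.
alias bond0 bonding
options bonding mode=802.3ad miimon=100

# /etc/sysconfig/network-scripts/ifcfg-eth2 fragment: enslave eth2 to bond0
# (repeat for eth3).
DEVICE=eth2
MASTER=bond0
SLAVE=yes
ONBOOT=yes
```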

  • Best practiceS for setting up Macs on Network

    Greetings.
    We have six Macs on our Windows Server network; three iMacs and three laptops. We have set up all the machines, and they are joined to Active Directory. In the past, we have always created local users on the machines and then "browsed" to the server shares and mounted them. We've learned things have improved/changed over the years, and we're just now realizing we can probably have the machines set up to work better. So, I have a couple of questions about "best practices" when setting up each of the machines.
    1. Since we’re in a network environment, should we not set up “local logins/users” and instead have users login using their AD login? It seems having a local account creates some conflicts with the server since upgrading to lion.
    2. Should we set the computer to not ask for a “list of users” and instead ask for a username and password for logins?
    3. For the user that uses the machine most often, they can still customize their desktop when they use an AD login, correct?
    4. Should we set up Mobile User Accounts? What exactly does this do?
    Any other advice on how we should best be setting up the clients for our environment to make sure we are following best practices would be great!
    Thanks for any help!
    Jay

  • Best Practice: Where to put Listeners?

    In a Swing application the typical way of handling events is to add a listener. So far, so good.
    But where to locate the listeners? Obviously there are a lot of possible solutions:
    - Scatter them all over your code, like using anonymous listeners.
    - Implement all of them in a single, explicit class.
    - Only uses windows as listeners.
    - etc.
    The intention of my question is not to get a rather long list of more ideas, or to get the pros and cons of any of the above suggestions. My actual question is: Is there a best practice for where to locate a listener's implementation? I mean, after decades of Swing and thousands of Swing-based applications, I am sure that there must be a best practice for where to put listener implementations.

    mkarg wrote:
    In a Swing application the typical way of handling events is to add a listener. So far, so good.
    But where to locate the listeners? Obviously there are a lot of possible solutions:
    - Scatter them all over your code, like using anonymous listeners.
    - Implement all of them in a single, explicit class.
    - Only uses windows as listeners.
    - etc.
    The intention of my question is not to get a rather long list of more ideas, or to get the pros and cons of any of the above suggestions. My actual question is: Is there a best practice for where to locate a listener's implementation?
    You've asked other similar questions about best practices. No matter how long Swing has been around, people still program in a variety of ways, and there are lots of areas where there are several equally correct ways of doing things. Each way has its pros and cons, and the specific situation drives towards one way or the other. One person's best practice of using anonymous listeners will be another's code smell. One person's best practice of using an inner class will be another's hassle.
    So you will probably only get opinions, and likely not universally recognized best practices.
    That being said, here is my opinion (nothing more than that, but it has a high value to me :o) :
    In your list of options, the one that is most likely to form a consensus against it is "only use windows as listeners". I assume you mean each frame is implemented as a MyCustomFrame extends JFrame, with this added as the listener on all contained widgets.
    This option is disregarded because
    1) extending JFrame is generally not a meaningful use of inheritance (that point is open to debate, as it is quite handy)
    2) registering the same object as a listener for several widgets makes the implementation of listener callbacks awkward (lots of if-then-else). See [that thread|http://forums.sun.com/thread.jspa?forumID=57&threadID=5395604] for more arguments.
    Now, no matter what style of listeners you choose, your listeners shouldn't do too much work (how much is too much is also open to debate...):
    if a listener gets complicated, you should simplify it by making it a simple relay that transforms low-level graphical events into functional events to be processed by a higher-level class (+Controller+). I find the Mediator pattern to be a best practice for "more-than-3-widgets" interactions. As the interactions usually also involve calls to the application model, the mediator becomes a controller.
    With that in mind, short anonymous listeners are fine: the heavy work will be performed by the mediator or controller, and the latter is where maintenance will occur. So "scatter them all over your code" (which sounds quite pejorative) is not much of an issue: you have to hook the widget to the behavior somewhere anyway; the shorter, the better.
    For simpler behavior, see the previous reply which gives perfect advice.
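A minimal sketch of the "thin relay listener, controller does the work" idea described above; the class names are illustrative assumptions, not from the original posts:

```java
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

// The controller (or mediator) owns the actual behavior.
class SaveController {
    int saveCount = 0;
    void save() { saveCount++; } // real application logic lives here
}

class ListenerWiring {
    // The anonymous listener is a one-line relay: no logic of its own,
    // so "scattering" such listeners costs little in maintenance.
    static ActionListener saveListener(final SaveController controller) {
        return new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                controller.save();
            }
        };
    }
}
```

You would attach it with something like `saveButton.addActionListener(ListenerWiring.saveListener(controller));`, so all maintenance stays in SaveController.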

  • Best practice to create views

    Hi,
    I've a question about best practice to develop a large application with many complex views.
    Typically at each time only one views is displayed. User can go from a view to another using a menu bar.
    Every view is build with fxml, so my question is about how to create views and how switch from one to another.
    Actually I load fxml every time the view is required:
    FXMLLoader loader = new FXMLLoader();
    InputStream in = MyController.class.getResourceAsStream("MyView.fxml");
    loader.setBuilderFactory(new JavaFXBuilderFactory());
    loader.setLocation(OptixController.class.getResource("MyView.fxml"));
    BorderPane page;
    try {
        page = (BorderPane) loader.load(in);
    } finally {
        if (in != null) {
            in.close();
        }
    }
    // appController = loader.getController();
    Scene scene = new Scene(page, MINIMUM_WINDOW_WIDTH, MINIMUM_WINDOW_HEIGHT);
    scene.getStylesheets().add("it/myapp/Mycss.css");
    stage.setScene(scene);
    stage.sizeToScene();
    stage.centerOnScreen();
    stage.show();
    My questions:
    1- Is it good practice to reload the FXML every time to build the view?
    2- Is it good practice to create a new Scene every time, or to have a single Scene in the app and each time clear all elements in it and set the new view?
    3- Should the views be kept in memory to avoid performance issues, or is that a mistake? I think every view should be destroyed in order to free memory.
    Thanks very much
    Edited by: drenda81 on 21-mar-2013 10.41

    > My questions:
    > 1- Is it good practice to reload the FXML every time to build the view?
    > 2- Is it good practice to create a new Scene every time, or to have a single Scene in the app and each time clear all its elements and set the new view?
    > 3- Should the views be kept in memory to avoid performance issues, or is that a mistake? I think each view should be destroyed in order to free memory.
    In choosing between 1 and 3 above, I think either is fine. Loading the FXML on demand each time will be slightly slower, but assuming you are not doing something unusual, such as loading over a network connection, it won't be noticeable to the user. Loading all views at startup and keeping them in memory uses more memory, but again, that is unlikely to be an issue. I would choose whichever is easier to code (probably loading on demand).
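    If you want a middle ground, here is a minimal sketch of "load on demand, but only once": views are loaded lazily and memoized. ViewCache is a hypothetical helper; the loader function is injected so the sketch doesn't need the JavaFX runtime, but in the real app it would wrap FXMLLoader.load(...):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class ViewCache<V> {
    private final Map<String, V> cache = new HashMap<>();
    private final Function<String, V> loader;

    public ViewCache(Function<String, V> loader) {
        this.loader = loader;
    }

    // The first call for a given file loads the view; later calls reuse
    // the cached instance, so each FXML file is parsed at most once.
    public V get(String fxmlFile) {
        return cache.computeIfAbsent(fxmlFile, loader);
    }
}
```

    In the application you might construct it as new ViewCache<Parent>(f -> loadView(f)) and call cache.get("View1.fxml") from the menu handlers.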
    In choosing between reusing a Scene and creating a new one each time, I would reuse the Scene. "Clearing all elements in it" only requires calling scene.setRoot(...) and passing in the new view. Since the Scene has a mutable root property, you may as well make use of it and save the (small) overhead of instantiating a new Scene each time. You might consider exposing a currentView property somewhere (say, in your main controller, or in your model if you have a separate model class) and binding the Scene's root property to it. Something like:
    public class MainController {
      private final ObjectProperty<Parent> currentView ;
      public MainController() {
        currentView = new SimpleObjectProperty<Parent>(this, "currentView");
      }
      public void initialize() {
        currentView.set(loadView("StartView.fxml"));
      }
      public ObjectProperty<Parent> currentViewProperty() {
        return currentView ;
      }
      // event handler to load View1:
      @FXML
      private void loadView1() {
        currentView.set(loadView("View1.fxml"));
      }
      // similarly for other views...
      private Parent loadView(String fxmlFile) {
        try {
          return FXMLLoader.load(getClass().getResource(fxmlFile));
        } catch (IOException exc) {
          exc.printStackTrace();
          return null;
        }
      }
    }
    Then your application can do this:
    @Override
    public void start(Stage primaryStage) throws IOException {
       FXMLLoader loader = new FXMLLoader(getClass().getResource("Main.fxml"));
       loader.load(); // must load before getController() returns anything
       MainController controller = loader.getController();
       // A Scene needs a root at construction; start with the current view,
       // then bind so the root tracks currentView from then on:
       Scene scene = new Scene(controller.currentViewProperty().get());
       scene.rootProperty().bind(controller.currentViewProperty());
       // set scene in stage, etc...
    }
    This means your Controller doesn't need to know about the Scene, which maintains a nice decoupling.
