Best Practice for Commit() after custom method on struts action

Hi all,
I'm curious what's the best way to commit programmatically after an application module custom method, wired to a struts action, inserts a row into a view object.
Right now I do this:
vo.insertRow(theRow);
vo.executeQuery();
getDBTransaction().commit();
Then, I wire a struts action after the one calling the custom method, and drag a commit onto that as well. Is there a better way? I wanted to make sure...
Thanks.

John,
It seems that you are committing twice then.
On another issue, don't program directly against View objects in your Struts Action. Write a method on the YourApplicationModuleImpl class and expose this to the client. Have this method handle the update and commit on the server. In the Action, simply call the exposed method from YourApplicationModule.
Frank
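As an illustration of Frank's suggestion, here is a minimal sketch of an application module custom method that does the insert and the commit on the server. The class, view object instance and attribute names are hypothetical, not taken from the original post:

import oracle.jbo.Row;
import oracle.jbo.ViewObject;
import oracle.jbo.server.ApplicationModuleImpl;

public class YourApplicationModuleImpl extends ApplicationModuleImpl {

    // Expose this method on the application module's client interface so the
    // Struts action can call it through the binding layer.
    public void createOrder(String orderName) {
        ViewObject vo = findViewObject("OrdersVO");   // hypothetical VO instance name
        Row row = vo.createRow();
        row.setAttribute("Name", orderName);          // hypothetical attribute name
        vo.insertRow(row);
        // One commit, inside the method, instead of a second commit action
        // wired into the Struts page flow.
        getDBTransaction().commit();
    }
}

The Struts action then calls only this one exposed method, and no separate commit action needs to be dragged onto the page flow.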

Similar Messages

  • Best practice for calling application module methods and plsql code

    In my application I am experiencing problems with connection pooling, I seem to be using a lot of connections in my application when only a few users are using the system. As part of our application we need to call database procedures for business logic.
    Our backing beans, call methods on the application module which in turn call a database procedure. For instance in the backing bean we have code like this to call the application module method.
    // Calling Module to generate new examination/test.
    CIGAppModuleImpl appMod = (CIGAppModuleImpl)Configuration.createRootApplicationModule("ky.gov.exam.model.CIGAppModule", "CIGAppModuleLocal");
    String testId = appMod.createTest( userId, examId, centerId).toString();
    AdfFacesContext.getCurrentInstance().getPageFlowScope().put("tid",testId);
    // Close the call
    System.out.println("Calling releaseRootApplicationModule remove");
    Configuration.releaseRootApplicationModule(appMod, true);
    System.out.println("Completed releaseRootApplicationModule remove");
    return returnResult;
    In the application module method we have the following code.
    System.out.println("CIGAppModuleImpl: Call the database and use the value from the iterator");
    CallableStatement cs = null;
    try {
        cs = getDBTransaction().createCallableStatement("begin ? := macilap.user_admin.new_test_init(?,?,?); end;", 0);
        cs.registerOutParameter(1, Types.NUMERIC);
        cs.setString(2, p_userId);
        cs.setString(3, p_examId);
        cs.setString(4, p_centerId);
        cs.executeUpdate();
        returnResult = cs.getInt(1);
        System.out.println("CIGAppModuleImpl.createTest: Return Result is " + returnResult);
    } catch (SQLException se) {
        throw new JboException(se);
    } finally {
        if (cs != null) {
            try {
                cs.close();
            } catch (SQLException s) {
                throw new JboException(s);
            }
        }
    }
    I have read in one of Steve Muench's presentations (Oracle Fusion Applications Team's Best Practices) that calling the createRootApplicationModule method is a bad idea, and that the method should be called via the binding interface instead.
    I am assuming that calling createRootApplicationModule uses many more resources and database connections than calling the method through the binding interface, such as
    BindingContainer bindings = getBindings();
    OperationBinding ob = bindings.getOperationBinding("customMethod");
    Object result = ob.execute();
    Is this the case? Also, is using getDBTransaction().createCallableStatement the best way of calling database procedures? Would it be better to expose PL/SQL packages as web services and then call them from the application module? Is this more efficient?
    Regards
    Orlando

    Thanks Shay, this is now working.
    I successfully got the binding to the application method in the pagedef.
    I used the following code in my backing bean.
    package view.backing;

    import oracle.adf.model.BindingContext;
    import oracle.adf.model.binding.DCBindingContainer;
    import oracle.binding.BindingContainer;
    import oracle.binding.OperationBinding;

    public class Testdatabase {

        private DCBindingContainer bindingContainer;

        public void setBindingContainer(DCBindingContainer bc) { this.bindingContainer = bc; }
        public DCBindingContainer getBindingContainer() { return bindingContainer; }

        // Calling module to validate user and return user role details.
        public static String validateUser(String userId, String examId) {
            System.out.println("Getting Binding Container from Home Backing Bean");
            BindingContainer bindings = BindingContext.getCurrent().getCurrentBindingsEntry();
            System.out.println("Obtain binding");
            OperationBinding operationBinding = bindings.getOperationBinding("calldatabase");
            System.out.println("Set username parameter");
            operationBinding.getParamsMap().put("p_userId", userId);
            System.out.println("Set password parameter");
            operationBinding.getParamsMap().put("p_testId", examId);
            Object result = operationBinding.execute();
            System.out.println("Obtain result");
            String userRole = result.toString();
            System.out.println("Result is " + userRole);
            return userRole;
        }
    }

  • Best practice for calling an AM method with parameters

    Which will be the best way to call an AM method with parameters from a backing bean.
    I usually use the BindingContainer to get the operation binding and then call its execute function. But when the method has parameters, how do I do it?
    Thanks

    Hi,
    same:
    operationBinding.getParamsMap().put("argument1Name", argument1Value);
    operationBinding.getParamsMap().put("argument2Name", argument2Value);
    operationBinding.execute();
    Frank
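    For completeness, here is a minimal sketch of the whole call from a backing bean, assuming the method is bound in the page definition as "customMethod" with arguments named argument1Name and argument2Name (all of these names are illustrative):

    import java.util.List;

    import oracle.adf.model.BindingContext;
    import oracle.binding.BindingContainer;
    import oracle.binding.OperationBinding;

    public class CallAmMethodBean {

        public Object callCustomMethod(Object argument1Value, Object argument2Value) {
            BindingContainer bindings = BindingContext.getCurrent().getCurrentBindingsEntry();
            OperationBinding operationBinding = bindings.getOperationBinding("customMethod");
            operationBinding.getParamsMap().put("argument1Name", argument1Value);
            operationBinding.getParamsMap().put("argument2Name", argument2Value);
            Object result = operationBinding.execute();
            // Check the binding layer for errors before trusting the result.
            List errors = operationBinding.getErrors();
            if (!errors.isEmpty()) {
                return null;
            }
            return result;
        }
    }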

  • Best practice for customizing EJB property after deployment

    Hi Gurus,
      What is the best practice for customizing an EJB property after deployment in NW 7.1? I have a stateless session bean and it needs to get some environment information before acting, but the information can only be known at runtime. What should I do to achieve this? I thought I could bind the property to a JNDI context, but I did not find where to declare and change the context value. Please advise. Thanks.
    B.R.
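    One common Java EE approach, sketched here only as an assumption since the original post does not say which mechanism was chosen (the entry name and bean class are hypothetical): declare the value as an <env-entry> for the bean in ejb-jar.xml and read it from the bean's JNDI environment at runtime.

    import javax.annotation.Resource;
    import javax.ejb.SessionContext;
    import javax.ejb.Stateless;

    @Stateless
    public class EnvironmentAwareBean {

        @Resource
        private SessionContext sessionContext;

        // Injected from the <env-entry> named "targetSystemUrl" declared for this
        // bean in ejb-jar.xml (hypothetical entry name, type java.lang.String).
        @Resource(name = "targetSystemUrl")
        private String targetSystemUrl;

        public String readSetting() {
            // The same entry can also be looked up programmatically at runtime.
            String value = (String) sessionContext.lookup("targetSystemUrl");
            return value != null ? value : targetSystemUrl;
        }
    }

    The value then lives in the deployment descriptor rather than in compiled code, which is also what the reply below about editing ejb-jar.xml is concerned with.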

    Hi.
    I have a similar problem. But I still can not edit the properties of the ejb-jar.xml.
    I tried to stop the web service, but the properties still remain unmodifiable.
    Could you advise me how to change them?
    We have installed SAP Server 7.0.2

  • Best practice for application help for a custom screen?

    Hi,
    The system is Netweaver 7.0 SP 15 with e-recruiting .
    We have some custom SAP GUI transactions and have written Word documents with screen prints and explanations. I would like to make the procedure document accessible from the custom transaction or at least provide custom help text that includes a link to the full documents.
    Can anyone help me out with options and best practices for providing customized application help for custom SAP GUI transactions?
    Thanks,
    Margaret

    Hello Margaret,
    sorry, I thought you might still be in a design or proof-of-concept phase where the decision for the technology is still adjustable.
    If the implementation is already done, things change of course. The standard in-system documentation surely does not fit your needs, as including screenshots won't work well.
    I would solve the task the following way:
    I'd make a web or PDF document out of the Word document and put it on a web resource - as you run e-recruiting, you probably have the possibility for that.
    I would then just put a button into the transaction and open a web container to show the document.
    I am not sure if this solution really qualifies as "best practice", but SAP does the same if you call the Help for Application in the help menu. This is implemented in function module SAPGUIHC_OPEN_HELP_CENTER. I'd just copy it, throw out what I do not need and hard-code the URL to call.
    Perhaps someone could offer a better solution, but I think this works at least without exaggerated costs.
    Kind Regards
    Roman

  • Best practices for customizing the standard OBIA metedata repository (RPD)

    Hello
    Is there a Best practices document published by oracle or a partner that talks about best practices for customizing OBIA out-of-box RPD. I am specifically looking for guidance around:
    1. adding new objects to the physical layer or modifying an existing table definition to add more columns
    2. Building new Logical columns in the BMM layer
    3. Modifying the existing Subject Areas.
    Thanks

    There is a very good presentation by Rittman mead on extending and customizing BI Applications. Refer to this link (http://www.rittmanmead.com/files/OOW2008%20-%20Extending%20and%20Customizing%20the%20BI%20Apps%20Data%20Warehouse.pdf ).
    Thanks,
    -Amith.

  • Best Practice for Use of ABAP in Customizing SRM and/or CRM

    I was wondering if there is a document that defines best practices for the use of ABAP with the installation and customization of SRM and/or CRM, such as the amount of ABAP coding typically required and best practices around the use of ABAP for customization and configuration.
    Thanks.

    Hi, Johnson
    Sorry, please don't mind, but this is not the right place to ask a question like this.
    Please read "The Forum Rules of Engagement" before posting!  HOT NEWS!!
    Thanks and Regards,
    Faisal

  • JSF - Best Practice For Using Managed Bean

    I want to discuss what is the best practice for managed bean usage, especially using session scope or request scope to build database driven pages
    ---- Session Bean ----
    - In the book Core JavaServer Faces, the author mentioned that in most cases a session bean should be used, unless the processing is passed on to another handler. Since JSF can store the state on the client side, I think storing everything in session is not a big memory concern (can some expert confirm this is true?). Session objects are easy to manage and state can be shared across pages. It can make programming easy.
    In the case of a page bound to a result set, the bean usually holds a java.util.List object for the result, which is initialized in the constructor by querying the database first. However, this approach has a problem: when the user navigates to another page and comes back, the data is not refreshed. You can of course solve the problem by issuing the query every time in your getXXX method, but you need to be very careful that you don't bind this XXX property too many times. In the case of querying in getXXX, setXXX is also tricky as you don't have a member to set. You usually don't want to persist the result set changes in setXXX, as the changes may not be final; instead, you want to handle them in the action listener (like a save(ActionEvent)).
    I would glad to see your thought on this.
    --- Request Bean ---
    A request bean is initialized every time a request is made. It sometimes drove me nuts because JSF seems not to be very consistent in updating model values. Suppose you have a page showing a parent-children list of records from the database, and you also allow the user to change the children directly. If I bind the parent to a bean called #{Parent} and bind the children to an ADF table (value="#{Parent.children}" var="rowValue"), and I set Parent to request scope, the setChildren method is never called when I submit the form. Not sure if this is just for ADF or a JSF problem. But if you change the bean to session scope, everything works fine.
    I believe JSF doesn't update the bindings for all component attributes; it only updates the input component value binding. Someone please verify this is true.
    In many cases, I found a request bean is very hard to work with if there are lots of updates (I have lots of trouble with updating the binding value for rendered attributes).
    However, a request bean works fine for read-only pages and simple bound forms. It definitely frees up memory quicker than a session bean.
    ----- any comments or opinions are welcome!!! ------
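    As a minimal sketch of the query-in-the-getter pattern described above, with the data source faked so the class is self-contained (the class and property names are hypothetical):

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical session-scoped managed bean illustrating the refresh problem:
    // querying in the getter keeps the data fresh when the user navigates back.
    public class CustomerListBean {

        private List<String> customers;

        // Stand-in for a real database query (a DAO call, JDBC, etc.).
        private List<String> queryCustomers() {
            List<String> result = new ArrayList<String>();
            result.add("Customer A");
            result.add("Customer B");
            return result;
        }

        // Re-query each time the page reads the property; be careful how many
        // times this getter is bound on the page.
        public List<String> getCustomers() {
            customers = queryCustomers();
            return customers;
        }

        // Do not persist changes here; handle saving in an action listener
        // such as save(ActionEvent) instead.
        public void setCustomers(List<String> customers) {
            this.customers = customers;
        }
    }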

    I think it should be either Option 2 or Option 3.
    Option 2 would be necessary if the bean data depends on some request parameters.
    (Example: Getting customer bean for a particular customer id)
    Otherwise Option 3 seems the reasonable approach.
    But, I am also pondering on this issue. The above are just my initial thoughts.

  • Best practice for migrating IDOCs?

    Hi,
    I need to migrate some IDOC's to another system for 'historical reference'.
    However, I don't want to move them using the regular setup as I don't want the inbound processing to be triggered.
    The data that was created in the original system by the processed IDOC's will be migrated to the new system using migration workbench. I only need to migrate the IDOC's as-is due to legal requirements.
    What is the best way to do this? I can see three solutions:
    A) Download IDOC table contents to a local file and upload them in the new system. Quick and dirty approach, but it might also be a bit risky.
    B) Use LSMW. However, I'm not sure whether this is feasible for IDOC's.
    C) Using ALE and setting up a custom partner profile where inbound processing only writes the IDOC's to the database. Send the IDOC's from legacy to the new system. Using standard functionality in this way seems to me to be the best solution, but I need to make sure that the IDOC's, once migrated, will get the same status as they had in the old system.
    Any help/input will be appreciated
    Regards
    Karl Johan
    PS. For anyone interested in the business case: Within EU the utility market was deregulated a few years ago, so that any customer can buy electricity from any supplier. When a customer switches supplier this is handled via EDI, in SAP using ALE and IDOC's. I'm working on a merger between two utility companies and for legal reasons we need to move the IDOC's. Any other data is migrated using migration workbench for IS-U.

    Hi Daniele
    I am not entirely sure what you are asking. Could you please provide additional information?
    Are you looking for best practice recommendations for Governance, for example: Change transports between DEV, QA and PRD in BPC 7.0?
    What is the best method? Server Manager backup and restore, etc  ?
    And
    Best Practice recommendations on how to upgrade to a different version of BPC, for example: Upgrading from BPC 7.0 to 7.5 or 10.0 ?
    Kind Regards
    Daniel

  • Best practice for mouseless ADF applications

    I am developing an ADF application where the users do not want to use the mouse.
    So I would like to know if there is a best practice for this.
    I am already using the accessKey functionality and the subform's defaultCommand.
    But I have had problems setting focus to objects on a page, like tables. I would like a button to return the focus to the table after it has executed a command like delete.
    I have implemented a solution for which I found inspiration in several threads and other webpages (see below).
    Is this solution okay?
    Are there any problems with it?
    I would also like to know if there are better pathways to go like
    out of the box solutions,
    http://www.oracle.com/technetwork/developer-tools/adf/learnmore/79-global-template-button-strategy-360139.pdf (is there an example implementation?), or
    http://one-size-doesnt-fit-all.blogspot.dk/2010/11/adf-ui-shell-supporting-global-hotkeys.html
    in advance thanks
    Inspiration webpages
    https://blogs.oracle.com/jdevotnharvest/entry/how_to_programmatically_set_focus
    http://technology.amis.nl/2008/01/04/adf-11g-rich-faces-focus-on-field-after-button-press-or-ppr-including-javascript-in-ppr-response-and-clientlisteners-client-side-programming-in-adf-faces-rich-client-components-part-2/
    how to Commit table by writting Java code in Managed Bean?
    Table does not refresh and getting error as UIComponent is Null
    A short description of the solution:
    (jdeveloper version 11.1.1.2.0)
    --- Example where I use onSetFocus in jsff page
    <af:commandButton text="#{hrsusuiBundle.FOCUS}" id="cb10"
    partialSubmit="true" accessKey="f"
    shortDesc="Alt+Shift+F"
    actionListener="#{managedBean_clientUtils.onSetFocus}">
    <af:clientAttribute name="focusField" value="t1"/>
    </af:commandButton>
    --- Examples where I use doTableActionAndSetFocus in jsff page
    --- There have to be bindings for Delete, Commit and Rollback in the page definition used by the jsff page
    <af:commandButton text="#{hrsusuiBundle.DELETE}" id="cb4"
    accessKey="x"
    shortDesc="Alt+Shift+X"
    partialSubmit="true"
    actionListener="#{managedBean_clientUtils.doTableActionAndSetFocus}">
    <af:clientAttribute name="focusField" value="t1"/>
    <af:clientAttribute name="actionField" value="Delete"/>
    </af:commandButton>
    <af:commandButton text="#{hrsusuiBundle.COMMIT}" id="cb5"
    accessKey="s" shortDesc="Alt+Shift+S"
    partialSubmit="true"
    actionListener="#{managedBean_clientUtils.doTableActionAndSetFocus}">
    <af:clientAttribute name="focusField" value="t1"/>
    <af:clientAttribute name="actionField" value="Commit"/>
    </af:commandButton>
    <af:commandButton text="#{hrsusuiBundle.ROLLBACK}" id="cb6"
    accessKey="z" shortDesc="Alt+Shift+Z"
    partialSubmit="true"
    actionListener="#{managedBean_clientUtils.doTableActionAndSetFocus}"
    immediate="true">
    <af:resetActionListener/>
    <af:clientAttribute name="focusField" value="t1"/>
    <af:clientAttribute name="actionField" value="Rollback"/>
    </af:commandButton>
    --- This is the java class I use
    --- It is published in adfc-config.xml as a request scope managedbean
    // JSFUtils is a helper class taken from the inspiration webpages listed above.
    public class ClientUtils {

        public ClientUtils() {
        }

        public void doTableActionAndSetFocus(ActionEvent event) {
            RichCommandButton rcb = (RichCommandButton) event.getSource();
            String focusOn = (String) rcb.getAttributes().get("focusField");
            String actionToDo = (String) rcb.getAttributes().get("actionField");
            // Find the component (e.g. the table) that should receive focus again.
            UIComponent component = JSFUtils.findComponentInRoot(focusOn);
            String clientId = component.getClientId(JSFUtils.getFacesContext());
            if ("Delete".equals(actionToDo) || "Commit".equals(actionToDo) || "Rollback".equals(actionToDo)) {
                // Execute the matching operation binding from the page definition.
                BindingContainer bindings = BindingContext.getCurrent().getCurrentBindingsEntry();
                OperationBinding operationBinding = bindings.getOperationBinding(actionToDo);
                Object result = operationBinding.execute();
                AdfFacesContext.getCurrentInstance().addPartialTarget(component);
            }
            if (clientId != null) {
                makeSetFocusJavaScript(clientId);
            }
        }

        public static String onSetFocus(ActionEvent event) {
            RichCommandButton rcb = (RichCommandButton) event.getSource();
            String focusOn = (String) rcb.getAttributes().get("focusField");
            String clientId = null;
            if (focusOn.contains(":")) {
                clientId = focusOn;
            } else {
                clientId = findComponentsClientIdInRoot(focusOn);
            }
            if (clientId != null) {
                makeSetFocusJavaScript(clientId);
            }
            return null;
        }

        private static void writeJavaScriptToClient(String script) {
            FacesContext fctx = FacesContext.getCurrentInstance();
            ExtendedRenderKitService erks =
                Service.getRenderKitService(fctx, ExtendedRenderKitService.class);
            erks.addScript(fctx, script);
        }

        public static void makeSetFocusJavaScript(String clientId) {
            if (clientId != null) {
                StringBuilder script = new StringBuilder();
                // use the client id to ensure the component is found even if it is
                // located in a naming container
                script.append("var textInput = ");
                script.append("AdfPage.PAGE.findComponentByAbsoluteId");
                script.append("('" + clientId + "');");
                script.append("if(textInput != null){");
                script.append("textInput.focus();");
                script.append("}");
                writeJavaScriptToClient(script.toString());
            }
        }

        public static String findComponentsClientIdInRoot(String id) {
            UIComponent component = JSFUtils.findComponentInRoot(id);
            return component.getClientId(JSFUtils.getFacesContext());
        }
    }

    Hi,
    I am developing an ADF application where the users do not want to use the mouse. So I would like to know if there is a best practice for this?
    Well, HTML (and this is the user interface you see) follows tab-index navigation that you follow with "tab" and "shift+tab". Anything else is a shortcut, for which you use mnemonics (as you already do) or shortcuts (explained in http://one-size-doesnt-fit-all.blogspot.dk/2010/11/adf-ui-shell-supporting-global-hotkeys.html). There is a distinction to make between non-web environments (which I think you and your users have a background in) and client desktop environments. Browsers block some keyboard functionality for their own purposes, so you may first have to find a list of keys that works across browsers. Unlike desktop clients, which allow you to "press a button" without the button taking focus, this cannot be done on the web. So you need to be clever here, avoiding buttons altogether.
    The following paper is about JavaScript in ADF and explains the basics for what Chris Muir explains in : http://one-size-doesnt-fit-all.blogspot.dk/2010/11/adf-ui-shell-supporting-global-hotkeys.html
    http://www.oracle.com/technetwork/developer-tools/jdev/1-2011-javascript-302460.pdf
    It has the outline for how to register short cut keys that perform a specific action (e.g. register ctrl+d to delete the current row you are on, or press F11 to execute a query (similar to Oracle Forms frmres files)). However, be aware that this includes some code you have to write (actually quite some code to be honest).
    http://www.oracle.com/technetwork/developer-tools/adf/learnmore/79-global-template-button-strategy-360139.pdf (are there an example implementation?), or
    http://one-size-doesnt-fit-all.blogspot.dk/2010/11/adf-ui-shell-supporting-global-hotkeys.html
    Actually, these are implementations, as they come with example code for you to use and customize, don't they? So what more is this question asking for? Also note that global buttons don't quite have anything in common with the question you asked. I assume you want to see it as an implementation of the Forms toolbar that operates on the form or table the focus is in. This however does not work on the web, as there is nothing that keeps track of which component has focus and to what iterator (data block) it belongs. This would involve even more coding (though it is possibly doable).
    Frank

  • Best Practice for Customization of ESS 50.4

    Hi ,
    We have implemented ESS 50.4 on EP 6.0 SP 14 and R/3 4.6C. I want to know what the best practice is for minor modifications of ESS transactions. For example, I need to hide the Change button in the Personal Information screen.
    Please let me know.
    PS : Guaranteed award points
    Aneez

    @Aneez
    "Best Practice" is just going to be good ole' ITS custom development. All the "old" ESS services are ITS based. What cannot be done through config is then done by developing custom versions of the ESS services. For what you describe (i.e. the typical "hide a button" scenario) it is simply a matter of:
    (1) Create a custom version (i.e. a "Z" version) of the standard service. The service file will still call the same backend transaction via the ITS parameter ~transaction.
    (2) Since you are NOT making changes that require anything to be changed in the backend transaction (such as adding new fields, changing business logic, etc.), you are lucky to ONLY have to change the web templates. Locate the web template in your new custom service file that corresponds to the screen in the transaction where the "CHANGE" button appears. The ITS naming convention for web templates is <sapprogramname>_<screennumber>.
    (3) After locating the web template that corresponds to your needed screen, simply locate the "CHANGE" button code in the HTMLb and comment it out. Just that easy!
    (4) Publish your new customized service and test it out directly through ITS, i.e. via the direct URL to it: http://<yourdomain>/scripts/wgate/<yourservice>!
    (5) Once you see that it works, you can then make an iView for it in your portal (or simply change the iView you have to point to your custom ITS service).
    LOTS and LOTS more info on ITS development all around this site and in the ITS-specific forum.
    Hope this helps!
    Award points or save them...I really don't care. I think the points system here is one of the dumbest ideas since square wheels. =)

  • Best Practice for Conversion Workflow

    Hello,
    I'm converting video files from our "home grown" virtual media reserve to iTunes U. Some of the files are in RM format, some are already compressed .mov's (not H.264) and some I have the original DV files for.
    Anyone out there have a best practice for converting these file types for posting to iTunes U? I have Final Cut Studio (Compressor), QT Pro and Squeeze available to me.
    Any experience you have with this would be helpful.
    Thanks,
    Jeana

    For converting old files to podcast-compatible video, and based on the machine you have, consider the Elgato Turbo.264. It is a fairly priced "co-processor" for video conversion, comprising an application and a small USB device with an encoder chip in it. In my experience, it is the fastest way to create podcast video files. The amount of time that you will save will pay for the device quickly (about $100). Plus it does batch conversion of any video that your system currently plays through QuickTime. It has all the necessary presets and you can create your own. It has a few minor limitations, such as not supporting (at this moment) enhanced podcasting features like chapter markers and closed captions, but since you have old files for conversion, that won't matter.
    For creating new content, the workflow varies a lot. Since you mention MP3s, I guess you are also interested in audio files. I would stick with GarageBand, especially if you are a beginner plus it supports enhanced podcasts.
    In any case the most important goal is to have the simplest and fastest way to go from recording to publication. The less editing the better. To attain that, the best methods will require the largest investments. For example, for video production the best way is to produce the content live, so when you finish recording it is only a matter of encoding and publishing. That will require the use of a video switcher that can ingest at least one video camera and a computer output to properly capture presentation material. That's the minimum; there are several devices that can do this for you. Some are disguised PCs and some connect to a PC for tapeless recording. You can check the TriCaster, which I like but wish it was a Mac and not a Windows XP PC. Other routes may include video mixers from manufacturers like Edirol, Panasonic or Sony connected to a VTR or directly to a Mac for direct-to-disc capture. If you look at some of the content available in iTunes U, you will see what I explain here. This workflow requires preparation and sufficient live support, but you will have your material ready for delivery almost immediately after the recorded event. No editing required. Finally, the most intensive workflow is to record everything separately and edit it later, which is extremely time consuming.

  • Best practice for server configuration for iTunes U

    Hello all, I'm completely new to iTunes U, never heard of this until now and we have zero documentation on how to set it up. I was given the task to look at best practice for setting up the server for iTunes U, and I need your help.
    *My first question*: Can anyone explain to me how iTunes U works in general? My brief understanding is that you design/set up a welcome page for your school with subcategories like programs/courses, and within that you have things like lecture audio/video files that students can download/view on iTunes. So where are these files hosted? Is it on your own server or is it on Apple's server? Where and how do you manage the content?
    *2nd question:* We have two Xserves sitting in our server room ready to roll, and my question is what is the best method to configure them so they meet our need of "high availability in active/active mode, load balancing, and server scaling". Originally I was thinking about using a 3rd party load balancing device to meet these needs, but I was told there is no budget for it, so this is not going to happen. I know there is IP Failover, but one server has to sit in standby mode, which is a waste. So the most likely scenario is to set up DNS round robin and put both Xserves in active/active. My question now is (this may be related to question 1): say that all the content data like audio/video files are stored by us (we are going to link a portion of our SAN space to the Xserves for storage); if we go with DNS round robin and put the 2 servers in active/active mode, can both servers access a common shared network space? Or is this not possible, so that each server must have its own storage space and I must use something like rsync to make sure contents on both servers are identical? Should I use Xsan or is rsync good enough?
    Since I have no experience with iTunes U whatsoever, I hope you understand my questions, any advice and suggestion are most welcome, thanks!

    Raja Kondar wrote:
    what is the Best Practice for having server pools, i.e.
    1) having a single large server pool consisting of "n" number of guest VMs
    2) having multiple small server pools consisting of a smaller number of guest VMs
    I prefer option 1, as this gives me the greatest amount of resources available. I don't have to worry about resources in smaller pools. It also means there are more resources across the pool for HA purposes. Not sure if this is Official Best Practice, but it is a simpler configuration.
    Keep in mind that a server pool should probably have no more than 20 servers in it: OCFS2 starts to strain after that.

  • Best practice for importing non-"Premiere-ready" video files

    Hello!
    I work with internal clients that provide me with a variety of different video types (could be almost ANYTHING: WMV, MP4, FLV). I of course ask for AVIs when possible, but unfortunately I have no control over the type of file I'm given.
    And, naturally, Premiere (just upgraded to CS5) has a hard time dealing with these files.  Unpredictable, ranging from working fine to not working at all, and everything in between.  Naturally, it's become a huge issue for turnaround time.
    Is there a best practice for preparing files for editing in Premiere?
    I've tried almost everything I can think of: converting the file(s) to AVIs using a variety of programs/methods. Most recently, I tried creating a Watch Folder in Adobe Media Encoder and setting it for AVI with the proper aspect ratio. It makes sense to me that that should work: using an Adobe product to render the file into something Premiere can work with.
    However, when I imported the resulting AVI into Premiere, it gave me the Red Line of Un-renderness (that is the technical term, right?), and had the same sync issue I experienced when I brought it in as a WMV.
    Given our environment, I'm completely fine with adding render time to the front-end of projects, but it has to work.  I want files that Premiere likes.
    THANK YOU in advance for any advice you can give!
    -- Dave

    I use an older conversion program (my PrPro has a much older internal AME, unlike yours), DigitalMedia Converter 2.7. It is shareware, and has been replaced by Deskshare with newer versions, but my old one works fine. I have not tried the newer versions yet. One thing that I like about this converter is that it ONLY uses System CODEC's, and does not install its own, like a few others. This DOES mean that if I get footage with an oddball CODEC, I need to go get it, and install it on the System.
    I can batch process AV files of most types/CODEC's, and convert to DV-AVI Type II w/ 48KHz 16-bit PCM/WAV Audio and at 29.97 FPS (I am in NTSC land). So far, 99% of the resultant converted files have been perfect, whether from DivX, WMV, MPEG-2, or almost any other format/CODEC. If there is any OOS, my experience has been that it will be static, so I just have to adjust the sync offset by a few frames, and that takes care of things.
    In a few instances, the PAR flag has been missed (Standard 4:3 vs Widescreen 16:9), but Interpret Footage has solved those few issues.
    Only oddity that I have observed (mostly with DivX, or WMV's) is that occasionally, PrPro cannot get the file's Duration correct. I found that if I Import those problem files into PrElements, and then just do an Export, to the same exact specs., that resulting file (seems to be 100% identical, but something has to be different - maybe in the header info?) Imports perfectly into PrPro. This happens rarely, and I have the workaround, though it is one more step for those. I have yet to figure out why one very similar file will convert with the Duration info perfect, and then a companion file will not. Nor have I figured out exactly what is different, after running through PrE. Every theory that I have developed has been shot down by my experiences. A mystery still.
    AME works well for most, as a converter, though there are just CODEC's, that Adobe programs do not like, such as DivX and Xvid. I doubt that any Adobe program will handle those suckers easily, if at all.
    Good luck,
    Hunt

  • Best practice for backing bean population? (also, ActionListener RANT)

    Hello,
    I am about 3/4 of the way through development of a small to medium size JSF application. Sometimes I really like JSF, but much of the time I am left puzzled or frustrated for hours trying to find workarounds to JSF's bugs/glitches and design flaws.
    For example, early on, I was impressed with how easy it was to invoke a method from a page using an actionListener. Now that I'm actually building things with JSF, the actionListener functionality still seems cool, but incredibly half-baked. I find myself using request parameters LIKE CRAZY to work around the fact that JSF doesn't support passing parameters directly to backing bean methods. This feels awkward and wrong considering that JSF is intended to abstract the HTTP underpinnings. To add insult to injury, I often have to iterate through ALL of the request parameters looking for one that has an id with an ending matching my desired property name (since JSF appends its own crap to the beginning). I don't like doing things in a hacky way. This seems very hacky, and I feel dirty doing it.
    So, my first question is: what is the best practice for populating backing beans? How do others accomplish this? I can think of several other approaches, but none feels less hacky.
    Second, are there plans in the next spec (please say there are) to allow parameters to be passed to backing bean methods? If not, WHY THE HECK NOT?
    Even though JSF expert group people have been conspicuously absent from this forum of late, I'd really appreciate responses from you as well.
    Thank you for your thoughts.

    Hi BrownBear,
    I've been using JSF for about 6 months now and I'd be glad to help as much as I can.
    Concerning parameters, I'm not sure what your issue is, but I use the f:param tag to pass them. If you could post an example of what you are trying to do, I could see exactly what your issue is. Maybe f:param can't help you.
    As for best practice for populating backing beans, I personally try to let JSF do as much as possible. For example, if I have a backing bean with five properties, I make sure that they are all on the JSP page the bean serves. If one of the properties is just there as an ID, let's say a Person ID (DB row key), then I put it on my JSP page as a hidden input field. I do the same with the properties that are only for display, if I want them to be back in my bean when the request comes back.
    Hope this helps somehow. Please feel free to ask specific questions related to your specific problem; I'll monitor this post and transfer to you the little JSF experience I have.
    I'm pretty happy with JSF as it is but it sure needs improvements. :) What the heck, it's version 1.01 after all, and the next release should be a great one with the integration of JSTL.
    Cheers
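    For reference, a minimal sketch of the f:param approach mentioned above, assuming the page nests <f:param name="personId" value="..."/> inside an h:commandLink and the backing bean reads it from the request parameter map (all names are illustrative):

    import javax.faces.context.FacesContext;
    import javax.faces.event.ActionEvent;

    public class PersonBean {

        private String personId;

        // Action listener for the commandLink; the f:param value arrives as a
        // plain request parameter on the postback.
        public void selectPerson(ActionEvent event) {
            personId = (String) FacesContext.getCurrentInstance()
                    .getExternalContext()
                    .getRequestParameterMap()
                    .get("personId");
        }

        public String getPersonId() {
            return personId;
        }
    }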
