Which JSF template technology? Best practices?

Which templating technology is most suitable for use with JSF? What are my options? I'd just like to use the templating framework that is most widely accepted and flexible.

There is also JSFTemplating:
https://jsftemplating.dev.java.net
The interesting thing about this project is that it has its own syntax, and it has basic support for Facelets syntax as well (they are improving support as they go).
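For reference, here is the core of what Facelets templating looks like; a minimal sketch, with hypothetical file names:

```xml
<!-- layout.xhtml: the template -->
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets">
  <body>
    <ui:insert name="content">Default content</ui:insert>
  </body>
</html>

<!-- page.xhtml: a client page that fills in the template's hooks -->
<ui:composition xmlns:ui="http://java.sun.com/jsf/facelets"
                template="layout.xhtml">
  <ui:define name="content">Page-specific content here</ui:define>
</ui:composition>
```

Facelets matches `ui:define` sections to the template's `ui:insert` hooks by name; any hook the page does not override falls back to the template's default content.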

Similar Messages

  • Which one is the best practice!!

Hi all,
I have a doubt about the LOV.
Can we show the data in the LOV via a DECODE function, or is it better to write two LOVs based on the condition?
Which one is the best practice?
Right now I am using a DECODE function; according to the condition, the data comes into the LOV.
Thanks.

"can we show the data in the LOV by decode function"
As you already do, you can.
"its better to write two LOV's based on the condition."
It depends. If it's a complex LOV and only one column varies based on some condition, it might be less work to use just one LOV. Personally, I prefer two different LOVs.

  • JSF DB Connection Best Practices

    During the life of a JavaServer Faces request, what are some best practices for storing a db connection?
    For the duration of the transaction, a single connection should be created before db updates, made available to (possibly) multiple model objects, and closed at the conclusion of the request. The connection would only live during the request.
Right now each model object gets and releases its own dbCon, but that doesn't satisfy the (logical unit of work) requirements of a transaction when multiple objects are required for updating the database.
    What is a good technique to get a dbCon and release it, and, where should it be stored?
    TIA,
    Al Malin

    Hi,
    I'd like to ask a follow-up question.
How do I close the connection after the session times out or the user leaves the application by closing the browser?
    Thanks,
    Achim
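One common technique, sketched below under the assumption of a servlet-based JSF app (the class name `RequestConnection` and the factory wiring are hypothetical, not from any framework): hold the connection in a `ThreadLocal` for the duration of the request, hand it to every model object that asks, and have a servlet filter commit or roll back and close it in a `finally` block at the end of the request.

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.util.function.Supplier;

// Hypothetical request-scoped connection holder. A servlet filter calls
// release() in its finally block; model objects call current() instead of
// opening their own connections, so they all share one transaction.
final class RequestConnection {
    private static final ThreadLocal<Connection> HOLDER = new ThreadLocal<>();

    // Plug in your real source here, e.g. () -> dataSource.getConnection().
    static Supplier<Connection> factory;

    // Lazily opens the connection on first use within the request.
    static Connection current() {
        Connection con = HOLDER.get();
        if (con == null) {
            con = factory.get();
            HOLDER.set(con);
        }
        return con;
    }

    // Called once at the end of the request, from the filter's finally block.
    static void release(boolean commit) throws SQLException {
        Connection con = HOLDER.get();
        if (con == null) return;        // this request never touched the DB
        try {
            if (commit) con.commit(); else con.rollback();
        } finally {
            HOLDER.remove();
            con.close();
        }
    }
}
```

This also answers the session-timeout follow-up: since the connection lives only for one request, nothing is left open when the session times out or the browser is closed.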

  • Typekit vs Edge Fonts -- which is better? Best practices?

    I'm not sure I understand the difference between the Edge Web fonts and having a Typekit. In this case I'm using Source Sans Pro, which is available under both.
In my first attempt at generating a Reflow project, I had the text in Photoshop set in Source Sans Pro, with a copy of the font active on my Mac (via FontXplorer). When I did my first 'Generate' test the text came through without the font, just a 'Browser Default.' But I do believe that Source Sans was available as a choice in the Styling tab.
For my second attempt, I deactivated the local font and instead turned it on in my Typekit* under Creative Cloud, and reopened the Photoshop doc with that version (the layers had to update, and all was well). This time when I generated the Reflow project, the type came through as Source Sans Pro.
    So, which is the best way to use these fonts in Reflow? I do notice that the CSS only says "font: source sans pro", which means I'm still going to have to manually add the specific font codes to my HTML and CSS by hand, correct?
*I saw in a demo video (which I can't find now, otherwise I would link it) that when someone generated a Reflow project from Photoshop containing Typekit fonts, Reflow would ask you to enter the Kit ID in a pop-up window and then re-select your fonts to match what you originally chose in Photoshop. In my second attempt I was using the Typekit version of the font, but I was not prompted for the Kit ID like this upon opening the Reflow doc. Has this feature been changed or removed since then? I was able to enter the Kit ID in the 'Custom' tab when I chose 'Manage fonts', but then I had duplicates listed in my font menu until I deactivated the Edge version.
    Sorry for the long post -- just wondering which way is best since Edge Fonts and Typekit seem to have redundant functionality!
    JVK

First, there shouldn't be a difference between the two. I think the only suggestion is to try not to use both. You can, and it is supported, but it results in more HTTP requests, because all your selected Edge Web Fonts are loaded in one file and your selected Typekit fonts are loaded in another, and those can't be combined into one single file.
Also, if you are just syncing your fonts using Creative Cloud to get them on your desktop, but not adding them to a Typekit "Kit" and using your Kit ID in the dialog, or not seeing the dialog at all, then the Edge Web Fonts were selected for you automatically.
I think the reason Source Sans Pro didn't work the first time is that Reflow hadn't finished downloading the full font list from the servers. This list is cached locally, so the next time you use that font we'd find it in the list and select it for you. If Reflow finds matches for all the fonts you used, it won't pop up the font picker dialog. If that dialog does pop up and the list doesn't have your Edge Web Font available, you can add it to the list by selecting Manage Fonts from that menu.
Hope that helps, and thanks for using Reflow. Let us know how you like the Photoshop import and any other things we can do to improve it.

  • Creating RFCs which emulate program activity - best practice.

Hi. First post on the forum, so please be kind; I'm hoping I've posted in the correct place.
    I'm looking at creating a few RFCs to enable a bit of automation. Theory is that details are passed into the RFC and the function does some magic to update the system, hitting all the BRFs etc. The transaction I'm looking at is ICLCDC02, we're quite a few patches behind so there is a lot of good RFC stuff I just don't have access to.
My first attempt simply calls the relevant functions detailed in the program and ticks all the boxes, but I have concerns. Then I had a look at BDC and found that SHDB couldn't replay some of the screen selections; it just errored out (and that was with a straight record and playback, no modification), so BDC is out for that reason. Some logic would also be required, so BDC is not really a good candidate IMO. My plan is to stick to calling the relevant functions, which I feel may introduce some maintenance overhead.
Is there any best practice for situations like this? Is there a way to run the transaction without dialog, call the relevant functions to add records etc. while accessing the global memory of the transaction, then close/save, ensuring that all future BRF changes etc. get hit? If this is possible then it would greatly reduce/eliminate the maintenance overhead and get rid of my nagging doubt.
    Or is my current approach the correct one?
    Many thanks.

    Hello,
    "This will send work item to user (pr creator) sap inbox which when they double click it will complete the workflow."
It sounds like they are sending a workitem where an email would be enough. By completing the workitem they are simply acknowledging that they have received notification of the completion of the PR.
    "Our PR creator will receive notification of the PR approval to theirs outlook mail handled by a program that is scheduled every 5 minutes."
    I hope (and assume) that they only receive the email once.
    I would change the workflow to send an email (SendMail step) to the initiator instead of the workitem. That is normally what happens. Either that or there is no email at all - some businesses only send an email if something goes wrong. Of course, the business has to agree to this change.
    Having that final workitem adds nothing to the process. Replace it with an email.
    regards
    Rick Bakker
    hanabi technology

Which way is the best practice?

Hi,
I am in doubt about which way is the best practice to initialize an instance component in a panel.
That is, is there any difference, in terms of performance, between initializing the component inside the constructor or outside of it?
1)
public class MyPanel extends JPanel {
    private static JScrollPane scrollPane = new JScrollPane();
    public MyPanel() {
    }
}
2)
public class MyPanel extends JPanel {
    private static JScrollPane scrollPane = null;
    public MyPanel() {
        scrollPane = new JScrollPane();
    }
}

Correction for the above reply.
*BTW, can I avoid declaring a static field of the class itself when I need to implement a class that follows the Singleton pattern? Like:
public class CustomerMainPanel extends JPanel {
    private static CustomerMainPanel customerMainPanel = null;
    public static synchronized CustomerMainPanel getInstance() {
        if (customerMainPanel == null)
            customerMainPanel = new CustomerMainPanel();
        return customerMainPanel;
    }
}
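On the singleton follow-up: you cannot avoid a static member entirely, but the usual way to drop both the `null` check and the `synchronized` keyword is the initialization-on-demand holder idiom. This is general Java advice, not something from the thread above; a sketch with a plain class (substitute your `JPanel` subclass):

```java
// Lazy, thread-safe singleton via the initialization-on-demand holder idiom.
final class CustomerMainPanel {
    private CustomerMainPanel() { }   // prevent outside instantiation

    // The JVM initializes Holder lazily, exactly once, and thread-safely
    // on the first call to getInstance(); no explicit locking needed.
    private static final class Holder {
        static final CustomerMainPanel INSTANCE = new CustomerMainPanel();
    }

    static CustomerMainPanel getInstance() {
        return Holder.INSTANCE;
    }
}
```

Compared with the `synchronized` version above, this avoids taking a lock on every `getInstance()` call.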

  • SAP Best Practice Integrated with Solution manager

We have a server on which we installed the SAP Best Practices baseline package, and we have Solution Manager 7.01 SP 25.
We maintained the logical port, but when we try to check connectivity to Solution Manager we get the following error:
Connectivity check to sap solution manager system not successful
Message no. /SMB/BB_INSTALLER375
Can anyone guide us on how to solve the problem, and also tell us whether there is another way to upload the solution defined in the Best Practices Solution Builder into SAP Solution Manager as a template project?
    Thanks,
    Heba Hesham

    Hi,
    Patches for SAPGUI 7.10 can be found at the following location:
    http://service.sap.com/patches
    -> Entry by Application Group -> SAP Frontend Components
    -> SAP GUI FOR WINDOWS -> SAP GUI FOR WINDOWS 7.10 CORE
    -> SAP GUI FOR WINDOWS 7.10 CORE -> Win 32
    -> gui710_2-10002995.exe

  • Where to put java code - Best Practice

    Hello. I am working with the Jdeveloper 11.2.2. I am trying to figure out the best practice for where to put code. After reviewing http://docs.oracle.com/cd/E26098_01/web.1112/e16182.pdf it seemed like the application module was the preferred spot (although many of the examples in the pdf are in main methods). After coding a while though, I noticed that there were quite a few libraries imported, and wondered whether this would impact performance.
    I reviewed postings on the forum, especially Re: Access service method (client interface) programmatically . This link mentions accessing code from a backing bean -- and the gist of the recommendations seems to be to use the data control to drag it to the JSF, or use the bindings to access code.
My interest lies in where to put Java code in the first place: in the view object, entity object, AM object, backing bean... other?
    I can outline several best guesses about where to put code and the pros and cons:
    1. In the application module
    Pros: Centralized location for code makes development and support more simple as there are not multiple access points. Much like a data control centralizes services, the application module can act as a conduit for different pieces of code you have in objects in your model.
    Cons: Everything in one place means the application module becomes bloated. I am not sure how memory works in java -- if the app module has tons of different libraries are they all called when even a simple query re-execute method is called? Memory hog?
    2. Write code in the objects it affects. If you are writing code that accesses a view object, write it in a view object. Then make it visible to the client.
Pros: The code is accessed via fewer conduits (for example, I would expect that if you call the application module from a JSF backing bean, and the application module then calls the view object, you have three different pieces of code to maintain).
Cons: The code gets spread out, is harder to locate, etc.
    I would greatly appreciate your thoughts on the matter.
    Regards,
    Stuart

    First point here is when you say "where to put the java code" and you're referring to ADF BC, the point is you put "business logic java code" in the ADF Business Components. It's fine of course to have Java code in the ViewController layer that deals with the UI layer. Just don't put business logic in the UI layer, and don't put UI logic in the model layer. In your 2 examples you seem to be considering the ADF BC layer only, so I'll assume you mean business logic java code only.
Meanwhile, I'm not keen on the term "best practice", as people follow best practices without thinking; typically best practices come with conditions, and people forget to apply them. Luckily you're not doing that here, as you've thought through the pros and cons of each (nice work).
    Anyway, back on topic and off my soap box, as for where to put your code, my thoughts:
    1) If you only have 1 or 2 methods put it in the AppModuleImpl
    2) If you have hundreds of methods, or there's a chance #1 above will morph into #2, split the code up between the AppModuleImpl, ViewImpl and ViewRowImpls. Why? Because your AM will become overloaded with hundreds of methods making it unreadable. Instead put the code where it should logically go. Methods that work on a specific VO row go into the associated ViewRowImpl, methods that work across rows in a VO go into the ViewImpl, and methods that work across VOs in the associated AppModuleImpl.
To be honest, whichever option you choose, one thing I do recommend as a best practice is to be consistent and document the standard so your other programmers know.
BTW, there isn't an issue with loading lots of libraries/imports into a class; it has no runtime cost. However, if your methods require lots of class variables, then yes, this will have a memory cost.
    On a side note if you're interested in more ideas around how to build ADF apps correctly think about joining the "ADF EMG", a free online forum which discusses ADF architecture, best practices (cough), deployment architectures and more.
    Regards,
    CM.

  • Best Practice for Distributed TREX NFS vs cluster file systems

    Hi,
We are planning to implement a distributed TREX, using RedHat on x64, but we are wondering what the best practice or approach is for configuring the "file server" used in the TREX distributed environment. The guides mention a file server, which seems to be another server connected to a SAN, exporting or sharing the file systems required to be mounted on all the TREX systems (master, backup and slaves). But we know that the BI Accelerator uses OCFS2 (a cluster file system) to access the storage; in the case of RedHat we have GFS or even OCFS.
    Basically we would like to know which is the best practice and how other companies are doing it, for a TREX distributed environment using either network file systems or cluster file systems.
    Thanks in advance,
    Zareh

I would like to add one more thing: in my previous comment I assumed that it is possible to use a cluster file system on TREX because the BI Accelerator does, but maybe that is not supported; it does not seem to be clear in the TREX guides.
That should be the initial question:
Are cluster file system solutions supported on a plain TREX implementation?
    Thanks again,
    Zareh

  • DNS best practice in local domain network of Windows 2012?

    Hello.
We have a small local domain network in our office. Which one is the best practice for DNS: to set up a DNS server in our network that forwards to public DNS servers, or to use public DNS directly on all computers, including the server?
    Thanks.
    Selim

    Hi Selim,
Definitely the first option: set up a DNS server in your network forwarding to public DNS servers, with all computers, including the server, configured to use the local DNS.
An even better practice would be for this local DNS to point to a standalone DNS server in the DMZ, which queries the public DNS.
Using a centralized DNS utilizes the DNS cache to answer similar queries, resulting in faster response times and less internet usage for repeated queries.
An additional DNS layer also helps protect your internal DNS data from attackers on the internet.
Using internal DNS on all the computers will also help you host intranet websites and access them directly. Moreover, when you are on an AD domain, the computers' DNS needs to be configured properly for AD authentication to happen.
    Regards,
    Satyajit

  • Best practice of OSB logging Report handling or java code using publish

    Hi all,
I want to do common error handling for OSB. I did two implementations, as below; I just want to know which one is the best practice.
1. Using a custom report handler: whenever we want to log, we use the OSB report action, which calls a custom Java class that logs the data into the DB.
2. Using a plain Java class: create a Java class and publish it to the proxy, which calls this Java class and does the logging.
Which is the best practice, and what are the pros and cons?
    Thanks
    Phani

    Hi Anuj,
    Thanks for the links, they have been helpful.
I understand now that OSR is only meant to contain Proxy services. The synch facility is between OSR and OSB so that, in case you are not using OER, you can publish Proxy services to OSR from OSB. What I didn't understand was why there was an option to publish a Proxy service back to OSB, and why it ended up as a Business service. From the link you provided, it mentioned that this case is for multi-domain OSBs, where one OSB wants to use the other OSB's service. It is clear now.
    Some more questions:
1) At design time, no endpoints are generated in OER for Proxy services. Then how do we publish our design-time services to OSR for testing purposes? What is the correct way of doing this?
    Thanks,
    Umar

  • Metadata Loads (.app) - What is best practice?

    Dear All,
    Our metadata scan and load duration is approximately 20 mins (full load using replace option). Business hfmadmin has suggested the option of partial dimension loads in an effort to speed up the loading process.
HFM system admins prefer metadata loads with the replace option, as there seems to be less associated risk.
With partial loads there appears to be a risk to cross-dimension integrity checking: changes are merged, potentially duplicating members when they are moved in the hierarchy.
    Are there any other risk with partial loads?
    Which approach is considered best practice?

When we add new entities to our structure and load them with the merge option, they always appear at the bottom of the structure. But when we use the replace option they appear in the order that we want. So, for user-friendliness, we always use the replace option. And for us the metadata load usually takes at least 35 minutes. Last time - 1.15...

  • Best practices for XMLType to RDMS migration

We are storing XBRL files (basically XML) in Oracle 11.2 as XMLType. Each XML file contains more than 500 tags. For reporting purposes we extract around 200 values from the XML files into normal tables (NORMAL_TABLE). Reading from the XML files directly takes too much execution time, so we run the migration from XMLType to NORMAL_TABLE as a job. Each row in NORMAL_TABLE represents one XML file, which means we now have around 200 columns in NORMAL_TABLE.
Because of a new requirement and CR, we now need more values to be migrated from XML to NORMAL_TABLE. What would be the best practice for this case? Is it OK to add more columns to NORMAL_TABLE?

    The link is not working (Error Page 404), maybe this one can help:
    [http://www.oracle.com/technetwork/database-features/xmldb/xmlqueryoptimize11gr2-168036.pdf]
Actually, it's not about RDBMS migration, but I think it can help anyway.
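On the extraction side, whichever storage shape you choose, pulling scalar values out of the XML is the same operation everywhere. Here is a minimal plain-Java sketch using the JDK's XPath API; the tag names are made up for illustration, not real XBRL, and inside the database you would typically use SQL's `XMLTABLE` instead:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

final class XbrlShred {
    // Extracts one scalar value from the document by XPath;
    // each extracted value becomes one column of the relational row.
    static String extract(Document doc, String path) throws Exception {
        XPath xp = XPathFactory.newInstance().newXPath();
        return (String) xp.evaluate(path, doc, XPathConstants.STRING);
    }

    // Parses an XML string into a DOM document.
    static Document parse(String xml) throws Exception {
        return DocumentBuilderFactory.newInstance().newDocumentBuilder()
            .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
    }
}
```

The relational shredding job is essentially this loop run once per file, with one `extract` call per target column.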

  • Advice on Best practice for inter-countries Active Directory

We want to merge three Active Directories, with one as parent in Dubai and children in Dubai, Bahrain and Kuwait. The time zones are different and the sites are connected using VPN/leased lines. In my studies I have explored two options. One way is to have the parent domain/forest in Dubai and child domains in the respective countries/offices; the second way is to have the parent and all child domains in the Dubai data center, as it is bigger, while the respective countries have DCs connected to their respective child domains in Dubai. (Personally, I find the second option safer.)
    Kindly advise which approach comes under best practice.
    Thanks in advance.

Hi Richard Mueller,
You perfectly got my point. We have three different forests/domains in three different countries. I asked this question because I am worried about replication problems.
And yes, there are political reasons why we want to have multiple domains under one single forest. I have the following points:
1. With multiple domains you introduce complications with trusts.
(Yes, we will face complications; that is why I will have a VM with three child domains for the 3 countries in HQ, sitting right next to my main AD server which holds the forest/domain, which I hope will help in fixing replication problems.)
2. And accessing resources in remote domains.
(To address this issue I will implement two additional DCs in the respective countries to make the resources available; these RODCs will be pointed toward their respective main domains in HQ.)
As an example:
HQ data center =============
Company.com (forest/domain)
3 child domains of company.com,
e.g. uae.company.com
=======================
UAE regional office =====================
2 RODCs pointed toward uae.company.com in HQ
==================================
Please tell me if I make sense here.

  • What is the best practice to consume RMI Service

Hi,
I have an RMI client/server solution.
I want to know which one is the best practice and the more stable approach of the two below.
1. Initialize/consume the service once only, maybe in a static block.
Example:
static {
    // someService = Naming.lookup("rmi://192.168.0.130/RMIServer");
}
2. Every time the client performs some activity and needs to invoke the RMI server, consume the service then.
Example:
public String getQuote(String quoteNumber) {
    // look up and call the service here
}

Essentially both.
Each time you do #2, but if it fails, then you do #1. You never know when it's going to fail; failure is a normal part of TCP communication, so be prepared to handle it gracefully, as if it's not even a failure.
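That advice ("each time you do #2, but if it fails, then you do #1") can be sketched as a small wrapper. The names here (`ServiceHandle`, the `QuoteService` in the comment) are hypothetical, and for brevity the sketch catches `Exception` where a real RMI client would catch `RemoteException`:

```java
import java.util.concurrent.Callable;

// Caches the looked-up service stub; on any failure it re-looks-up once
// and retries, which handles a restarted server gracefully.
final class ServiceHandle<T> {
    interface Action<T, R> { R apply(T service) throws Exception; }

    // e.g. () -> (QuoteService) Naming.lookup("rmi://192.168.0.130/QuoteService")
    private final Callable<T> lookup;
    private volatile T cached;

    ServiceHandle(Callable<T> lookup) { this.lookup = lookup; }

    <R> R invoke(Action<T, R> action) throws Exception {
        T service = cached;
        if (service == null) {
            service = lookup.call();      // first use: do the lookup (option 1)
            cached = service;
        }
        try {
            return action.apply(service); // normal path (option 2)
        } catch (Exception e) {           // with real RMI, catch RemoteException
            cached = service = lookup.call(); // stale stub: re-lookup...
            return action.apply(service);     // ...and retry once
        }
    }
}
```

Callers then write `handle.invoke(svc -> svc.getQuote(quoteNumber))` and never worry about whether the stub is still alive.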
