Portal and Portlet best practice...

Hello All,
We're just at the start of a next-gen Portal project. We've selected Oracle Portal for our solution: JDeveloper, the whole thing.
Last year an application was developed in our organization using Oracle Portal. In that application, the user interface is divided up into many portlets, i.e. the menu bar is its own portlet, and the content area is multiple portlets depending on which part of the application you're in.
My question is: is this considered the best practice for developing interoperable portals? We will have several differing web applications consolidated into one portal interface when this process is over.
That said, does it make sense to separate all of a site's functionality out into autonomous portlets?
Or should a web application, with workflow in this case, be consolidated into one configurable portlet?

Hi!
I don't think that complex applications should be developed in portlet form.
Portlets are very simple, small applications.
You can place them anywhere on your portal.
My opinion is that if you have to build a complex application, it should be implemented with JHeadstart or some other framework.
You can then place a login link for this application on the Portal that opens it in a new window.

Similar Messages

  • HTML and CSS Best Practices for Eloqua?

    Hello Topliners Community,
    My name is Ben and I am a Web Designer. I am currently looking for any guidance on HTML and CSS best practices when working with Eloqua. I am interested in the best practices for e-mail and landing pages.
    Thank you,
    Ben

Personally I like to upload my custom-created HTML/CSS into Eloqua instead of using the WYSIWYG editor.
But if you must use it, then right-clicking on text boxes and clicking Edit Source is the way to go.
    There was a good discussion on editing your forms with CSS:
    Energize Your Eloqua10 Forms with CSS
    created by Ryan Wheler on Nov 2, 2012 8:44 AM, last modified by Greg Stotler on Sep 19, 2013 2:00 PM
    Version 2
CSS can be used to heavily customize the layout of forms in Eloqua10.  In this article we will cover some common formatting use cases, with samples, on Eloqua10 Landing Pages.  Further details about uses of CSS in Eloqua10 form templates can be found here: EE12 - Do It - Eloqua - Energize E10 Forms
    Eloqua10 Forms HTML Structure
    Below is an outline of the structure of the HTML generated by Eloqua when a form is added to a landing page.  By targeting the HTML classes highlighted below, we can control the layout of any form on your landing page.
      For the rest of page: http://topliners.eloqua.com/docs/DOC-3015

  • WebCenter Portal and Portlet support in JDeveloper 11.1.2 version

    Hi,
When will WebCenter Portal and Portlet application creation be available within JDeveloper version 11.1.2?
    Thanks
    Eli

    Thanks for your answer.
    My Requirements are:
1) Develop a standalone ADF Faces (JSF 2.0) application
2) Develop a Portal using WebCenter. We need to implement portlets by taking some of the functionality that was implemented in section (1) and use the Portlet-JSF bridge to create the portlets.
Basically, we need to support the JSF 2.0 standard.
As I understand it, I must use JDeveloper version 11.1.2 as it supports JSF 2.0, but what about WebCenter and the Portlet-JSF bridge supporting JSF 2.0?
I would be happy to get a clarification on those requirements.
    Thanks a lot !
    Eli

  • Portal 8.1 best practices document.

    Hi All,
Is there any standard document on Portal 8.1 best practices?
    If yes, can somebody send me the same OR point me to appropriate URL?
    Thanks,
    Prashanth Bhat.

    Hi,
http://edocs.bea.com is the entry point to the docs. Try these documents as a start.
The URL below has links to several useful documents:
http://e-docs.bea.com/wlp/docs81/index.html
    - Anders M.

  • Portal db provider(best practice)

Best practice question here. If I wanted to create a few DB portlets (suggestions/questions), is there an existing portal DB provider/schema that I should add them to? Or is it best to simply create a new schema and DB provider?

    That is an interesting question, we created our own schemas for each of the portal sites we have, so basically custom made providers for all portlets used in those portals.

  • Consuming web services in a jsr 168 portlet best practices.

I am building portlets (JSR 168 API in WebSphere Portal 6.0, using the web service client of Rational). Now I need some suggestions on caching the web services data in the portlet. We have a number of portlets (somewhere around 4 or 5) on a portal page which all rely on a single WSDL Lotus Domino Web Service.
Is there a way I can cache the data returned by the web service so that I don't make repeated calls to it on every portlet request? Any best practices/ideas on how I could avoid multiple web service calls would be appreciated.
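One common approach (a minimal sketch only, in plain Java; the class and names below are illustrative, not part of the JSR 168 or Rational APIs) is to cache the web service result at application scope with a short time-to-live, so the first portlet on the page pays for the call and the other 4-5 reuse the result:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

/** Tiny time-based cache; one instance shared by all portlets in the web app. */
public class WsResultCache {
    private static final long TTL_MILLIS = 60_000; // tune to how stale the data may be

    private static final class Entry {
        final Object value;
        final long loadedAt;
        Entry(Object value, long loadedAt) { this.value = value; this.loadedAt = loadedAt; }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();

    /** Returns the cached value for key, calling loader only when missing or expired. */
    public Object get(String key, Supplier<Object> loader) {
        long now = System.currentTimeMillis();
        Entry e = cache.get(key);
        if (e == null || now - e.loadedAt > TTL_MILLIS) {
            e = new Entry(loader.get(), now);
            cache.put(key, e); // a rare duplicate load under contention is acceptable here
        }
        return e.value;
    }
}
```

Each portlet's render method would then call something like cache.get("dominoData", () -> service.fetchData()) instead of invoking the web service directly; the TTL bounds how stale the shared data can get.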

    Interestingly, as it often happens with Oracle portal, this has started working without me doing anything special.
However, the session events my listener gets notified of are (logically, as this portlet works via WSRP) different from user sessions. The problem I'm trying to solve now is that logging off (in SSO) doesn't lead to those sessions being destroyed. They only get destroyed after the timeout specified in my web.xml (<session-config><session-timeout>30</session-timeout></session-config>). On the other hand, when they do expire, the SSO session may still be active, in which case the user gets presented with the infamous "could not get markup" error message. The latter is unacceptable in our case, so we had to set session-timeout to a pretty high value.
    So the question is, how can we track when the user logs off. We have found the portal.wwctx_sso_session$ and portal.WWLOG_ACTIVITY_LOG1$ (and ...2$) tables, but no documentation for them. However, the real problem with using those tables is that there's no way we could think of to match the portlet sessions with SSO sessions/actions listed in the tables. (Consider situation when someone logs in from two PCs.)
    Any ideas?
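One direction for the log-off problem (a sketch only, under the assumption that each portlet request can learn the SSO user name, e.g. from the WSRP user context): keep a registry of live portlet sessions per user, maintained from an HttpSessionListener, so an SSO log-off hook can look up and invalidate all of that user's sessions. This also handles someone logging in from two PCs, since both logins register their sessions under the same user name.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

/** Maps an SSO user name to the ids of that user's live portlet sessions. */
public class SessionRegistry {
    private final Map<String, Set<String>> sessionsByUser = new ConcurrentHashMap<>();

    /** Called when a portlet session is first seen for a user. */
    public void register(String ssoUser, String sessionId) {
        sessionsByUser.computeIfAbsent(ssoUser, u -> ConcurrentHashMap.newKeySet())
                      .add(sessionId);
    }

    /** Called from HttpSessionListener.sessionDestroyed, so expired sessions drop out. */
    public void unregister(String ssoUser, String sessionId) {
        Set<String> ids = sessionsByUser.get(ssoUser);
        if (ids != null) ids.remove(sessionId);
    }

    /** Called from the SSO log-off hook: returns (and forgets) the sessions to invalidate. */
    public Set<String> sessionsToInvalidate(String ssoUser) {
        Set<String> ids = sessionsByUser.remove(ssoUser);
        return ids != null ? ids : Set.of();
    }
}
```

The remaining (and hard) part is the log-off hook itself; this sketch only removes the need to match portlet sessions against the undocumented portal.wwctx_sso_session$ tables.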

  • Portal System Transport (Best Practice)

    Hello,
We have a DEV, QA and PRD landscape. We have created systems that connect to backend ECC systems. Since the DEV and QA ECC systems have one application server each, we created a portal system of type Single Application Server in the DEV Portal that points to the DEV ECC system. Subsequently we transported this portal system to the QA portal and made it point to QA ECC.
Now the PRD ECC system is of type Load Balancing, with multiple servers. The portal system that connects to the PRD ECC system should also be of type Load Balancing. So we cannot transport the QA portal system that connects to the QA ECC system to PRD, since it is of type Single Application Server.
What will be the best strategy to create the portal system in the PRD portal that points to PRD ECC?
1. Create the portal system freshly in the PRD system, of type Load Balancing. Does this adhere to the best-practice approach that suggests not creating anything in the PRD system directly?
OR
2. Is there any other way I should follow to make sure that best practices for portal development are followed?
    Regards
    Deb

I don't find it useful to transport system objects, so I create them manually.

  • Portal server deployment best practices

Does anyone out there know the right way to deploy the portal server into a production environment, instead of manually copying all the folders and running the necessary commands? Is there a better way to deploy the portal server? Any best practices that I should follow for deploying it?

From the above, what I understood is that you would like to transfer your existing portal server configuration to the new one. I don't think there is an easy method to do it.
One way you can do it is by taking an LDIF backup from the existing portal server's directory server.
First install the portal server on the new box, then export the existing server's data to LDIF (note: db2ldif exports the database to LDIF; ldif2db imports it) using
# /opt/netscape/directory4/slapd-<host>/db2ldif /tmp/profile.ldif
Edit the /tmp/profile.ldif file and replace <hostname> and <Domain name> with the new system's values.
Copy this file to the new server and import it using
# /opt/netscape/directory4/slapd-<host>/ldif2db -i /tmp/profile.ldif
Also copy the file "slapd.user_at.conf" under /opt/netscape/directory4/slapd-<hostname>/config to the new system.
Restarting the server then lets you access the portal server with the configuration of the old one.

  • Sessions and Controllers best-practice in JSF2

    Hi,
I've not done web development work since last using Apache Struts for its MVC framework (about 6 years ago now). So bear with me if my questions do not make sense:
    SESSIONS
1) Reading through the JSF2 spec PDF, it mentions state-saving via the StateManager. I presume this is also the same StateManager that is used to store managed beans that are @SessionScoped ?
2) In relation to session-scoped managed beans, when does a JSF implementation start a new session ? That is, when does an implementation such as Mojarra call ExternalContext.getSession( true ) .. and when does it simply use an existing session ( calling ExternalContext.getSession( false ) ) ?
    3) In relation to session-scoped managed beans, when does a JSF implementation invalidate a session ? That is, when does the implementation call ExternalContext.invalidateSession() ?
    4) Does ExternalContext.getSession( true ) or ExternalContext.invalidateSession() even make sense if the state-saving mechanism is client ? ( javax.faces.STATE_SAVING_METHOD = client ) Will the JSF implementation ever call these methods if the state-saving mechanism is client ?
    CONTROLLERS
Most of the JSF2 tutorials that I have been reading online use the same backing bean when performing an action on the form ( when doing a POST or a GET or a post-back to the same page ).
    Is this best practice ? It looks like mixing what should have been a simple POJO with additional logic that should really be in a separate class.
    What have others done ?

    gimbal2 wrote:
    jmsjr wrote:
    EJP wrote:
It's better because it ensures the bean gets instantiated, stuck in the session (which gets instantiated itself), the bean gets initialised, resource-injected, etc. Your way goes behind the scenes and hopes for the best, and raises complicated questions that don't really need answers.
Thanks.
1) But if I only want to check that the bean is in the session, and I do NOT want to create an instance of the bean itself if it does not exist, then I presume I should still use ExternalContext.getSessionMap().get(<beanName>).
I can't think of a single reason why you would ever need to do that. Checking if a property of a bean in the session is populated, however, is far more reasonable to me.
In my case, there is an external application ( e.g. a workflow system from a vendor ) that will open a page in the JSF webapp.
    The user is already authenticated in the workflow system, and the external system from the vendor sends along the username and password and some parameters that define what the request is about ( e.g. whether to start a new case, or open an existing case ). There will be no login page in the JSF webapp as the authentication was already done externally by the workflow system.
Basically, I was thinking of implementing a PhaseListener that would:
    1) Parse the request from the external system, and store the relevant username / password and other information into a bean which I store into the session.
    2) If the request parameter does not exist, then I go look for a bean in the session to see if the actual request came from within the JSF webapp itself ( e.g. if it was not triggered from the external workflow system ).
    3) If this bean does not exist at all ( e.g. It was triggered by something else other than the external workflow system that I was expecting ) then I would prefer that it would avoid all the JSF lifecycle for the current request and immediately do a redirect to a different page ( be it a static HTML, or another JSF page ).
4) If the bean exists, then proceed with the normal JSF lifecycle.
I could also, between [1] and [2], do a quick check to verify that the username and password are indeed valid on the external system ( they have a Java API to do that ), and if the credentials are not valid, I would likewise skip the JSF lifecycle for the current request and redirect to a different page.
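The branching in steps 1-4 can be sketched as a plain function (a sketch only: the class, the session key, and the "username"/"password" parameter names are hypothetical; in the real app this logic would run inside a PhaseListener's beforePhase for RESTORE_VIEW, reading both maps from ExternalContext):

```java
import java.util.Map;

/** Possible outcomes of inspecting an incoming request, per steps 1-4 above. */
enum Dispatch { STORE_AND_CONTINUE, CONTINUE, REDIRECT }

public class ExternalRequestGate {
    static final String BEAN_KEY = "externalCaller"; // hypothetical session key

    /** Decides how to handle the request from its parameters and session contents. */
    public static Dispatch decide(Map<String, String> params, Map<String, Object> session) {
        if (params.containsKey("username") && params.containsKey("password")) {
            // Step 1: call from the external workflow system - store its data in session.
            session.put(BEAN_KEY, params.get("username"));
            return Dispatch.STORE_AND_CONTINUE;
        }
        if (session.containsKey(BEAN_KEY)) {
            // Step 2/4: no external parameters, but the session was set up earlier,
            // so the request originated inside the webapp - normal JSF lifecycle.
            return Dispatch.CONTINUE;
        }
        // Step 3: unknown origin - skip the lifecycle and redirect elsewhere.
        return Dispatch.REDIRECT;
    }
}
```

On a REDIRECT outcome the listener would call ExternalContext.redirect(...) and FacesContext.responseComplete() so the rest of the lifecycle is skipped.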

  • Informatica and Essbase Best Practice questions

    We now have the Informatica adapter for Essbase installed and working. We have been able to get Informatica to upload data successfully. Now I have a few questions that I have not been able to find answers to in any documentation or forums for Informatica or Essbase. I have submitted these same questions to the Informatica Support but thought I would also post the questions here to see if many folks are using Informatica against Essbase.
    We are using:
    Informatica 8.6.1 (Linux)
    Essbase 11.1.1.3 (Windows 2003)
1) I can see in Informatica that when we load data to Essbase (Target) it gives me the option to run a calc script AFTER it loads the data. However, if I need to run a calc script BEFORE the load to Essbase (Target), what is the best practice? The workaround I have found was to add the same session twice and, for the 1st instance, select the option to 'ONLY RUN THE CALC SCRIPT' on the mapping tab. The problem with this is that the log shows it will still run the query against the Source tables. This will impact run times and double the querying against the Source database. What is the best practice and proper way to build the workflow to run a calc script BEFORE the load?
2) Since you do not see the list of calc scripts for Essbase in Informatica (you have to manually type the calc name), if I want to run the 'Default' calc for Essbase, what is the syntax to run it? I tried 'Default' but it didn't seem to work.
3) I have other tasks in Essbase I want to do before actually having Informatica load the data. I would like to run the MAXL commands via a Command task. What is the best practice for doing this, and what is the syntax to run MAXL commands in a Command Task in Informatica? I previously had shell scripts built on the Informatica server that would be kicked off from within Informatica, but we are trying to move away from shell scripts and instead have the scripting code IN the workflows/sessions, to make it easier to review the code and follow the logic, rather than having to find the scripts and open each of them.
Any assistance with getting the two products working together would be GREATLY appreciated!
    Robert

As I know, addUser(User user) { ... } is much more useful, for several reasons:
1. It's object-oriented.
2. It's easy to write, because if an object has many parameters it's very painful to write a method with comma-separated parameters.
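A minimal illustration of the point (plain Java; User and addUser are the names from the post, the fields are made up):

```java
/** Plain value object: adding a field later does not break existing call sites. */
public class User {
    private final String name;
    private final String email;

    public User(String name, String email) {
        this.name = name;
        this.email = email;
    }
    public String getName()  { return name; }
    public String getEmail() { return email; }
}

class UserService {
    // Object-oriented: one argument, readable at the call site.
    void addUser(User user) { /* persist the user */ }

    // Versus a positional list that grows painful as fields are added:
    // void addUser(String name, String email, String phone, String dept, ...)
}
```

With the parameter object, new Fields only touch the User class; with the comma-separated variant every caller has to change.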

  • Users And Security Best Practice

    Dear Experts
I am designing an application with almost fifty users scattered across different places. Each user should access tables according to his/her criteria. For example, salessam and salesjug can see only the sales-related tables; purchasedon should access only purchase-related tables. I have the following problems:
Is it best practice to create 50 users in the DB, i.e. 50 schemas are going to be created? Where are these users normally created?
Or is it better for me to maintain a table of users and their passwords in my design itself and regulate access through the front end? It seems that this would be risky and a cumbersome process.
    Please advice
    thanks
    Manish Sawjiani

You would normally create a single schema to own the objects and 50 users to use them. You would use roles and object privileges to control access.
Well, this is the classic 'Oracle' approach. I might say it depends a bit on what you want to achieve. Let's call this approach A.
    The other option was to have your own user/pwd table. You can create your own custom authentication but I would go for the built-in Application Express Users - authentication scheme. You can manage the users via the frontend (Application builder > manage Application Express Users) . There you can manage the groups and end users which you can leverage in your Apex app. You can even use the APIs to create the users programmatically. It is all done for you. Let's call this approach B.
    Some things to consider:
1) You want to create a web application and also other applications that access the data stored in Oracle (e.g. another PHP, Oracle Forms or Perl application), or allow access via SQL*Plus. Then you should use approach A. This way you don't need to reimplement security for each of these access paths.
    2) You want to create one (or multiple) Apex applications only. This will be the only mechanism the users will access your data. Then I would go for approach B.
    3) When using approach A some users didn't like that all users will have access to their workspace, including the sql command line and having the capability of building applications and possibly being able to change the data they have access to through the Oracle roles. Locking down this capability is possible but it takes some effort and requires an Apache as a proxy.
    4) When using approach A you will need DBA privileges to manage the users and assign the roles. This might not always be possible nor desired. Depends on who will manage the Oracle XE instance.
    5) Moving the application including the end users to another machine is a bit easier using approach B since they are exported via the application export mechanism. Using approach A you would have to do it yourself. Be aware that the passwords are lost when you install the users into a different Oracle XE instance.
    6) If you design the application using approach B you will have to design security in a way that doesn't rely on the Oracle roles / grants security mechanisms. This makes it easier to change the authentication scheme later. For example, later you want to use a LDAP directory, a different custom authentication scheme or even SSO (SSO is not available out of the box but feasible). This is directly possible.
    Using approach A you would have to recode the security mechanisms (which user is allowed to update/delete which data).
    Hope that clarifies your options a bit.
    ~Dietmar.
    Message was edited by:
    Dietmar Aust
    Corrected a typo in (5): Approach B instead of approach A , sorry.

  • What is the guideline and/or best practice for EMC setup on ASM?

We are going to use EMC CX4-480 for ASM storage on RAC. What are the guidelines and best practices for EMC setup on ASM?
    Thanks for the advice!

    Probably a poor choice of words. Sorry.
    So basically, I have gotten further, but I just noticed related problem.
I'm using Web Services (WS) 1.0. I insert an account; then, in a separate WS call, I insert my contacts for the account. I include the AccountID and a user-defined key from the Account when creating the Contact.
    When I look at my Contact on the CRMOD web page, it shows the appropriate links back to the Account. But when I look at my Account on the CRMOD web page, it shows no Contacts.
    So when I say workflow or Best Practice, I was hoping for guidance on how to properly write my code to accomplish all of the necessary steps. As in this is how you insert an account with a contact(s) and it updates the appropriate IDs so that it shows up properly on the CRMOD web pages.
Based on the above, it looks like I need to, as the next step, take the ContactID and update the Account with it so that there is a bi-directional link.
    I'm thinking there is a better way in doing this.
Here is my pseudocode:
NewAcctRec = AccountInsert()
NewContRec = ContactInsert(NewAcctRec.AccountID)
AccountUpdate(NewAcctRec.AccountID, NewContRec.ContactID)
    Thanks,
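The sequence above can be sketched as follows. Note that CrmClient and its three methods are hypothetical stand-ins, not the real CRM On Demand Web Services 1.0 operations; the point is only the ordering: insert the Account, insert the Contact with the AccountID, then update the Account with the ContactID so the link shows on both pages.

```java
/** Hypothetical client interface standing in for the WS 1.0 calls. */
interface CrmClient {
    String insertAccount(String accountName);             // returns the new AccountID
    String insertContact(String accountId, String name);  // returns the new ContactID
    void   updateAccountPrimaryContact(String accountId, String contactId);
}

public class AccountContactLinker {
    /** The pseudocode above as one method: insert both records, then link back. */
    public static String[] createLinkedPair(CrmClient crm, String account, String contact) {
        String accountId = crm.insertAccount(account);
        String contactId = crm.insertContact(accountId, contact);
        // Close the loop so the Account page lists the Contact as well.
        crm.updateAccountPrimaryContact(accountId, contactId);
        return new String[] { accountId, contactId };
    }
}
```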

  • Configuring AD Sites and Services best practice for multiple office site ?

    Hi People,
Can anyone here please suggest, or share a link on, the best practice for configuring AD Sites and Services for a single AD domain with multiple office sites?
    I'd like to know more about the number and the direction of the connection between Domain Controllers in one site to the Data Center and vice versa.
    Thanks.
    /* Server Support Specialist */

This series can be useful:
Active Directory Structure Guidelines – Part 1
– Mahdi Tehrani (www.mahditehrani.ir)

  • Product and SWCV best practice

We have a 3rd-party product that tends to change product versions
frequently, about once every 3-5 months.
As the SAP software logistics mechanism is based on a hierarchy of
Product -> Product Version -> SWCU -> SWCV,
my question is:
What is the best way to maintain this product versioning in the SLD and IR,
to allow best-practice software logistics in XI and maintenance?
Please share from your knowledge and experience only.
Nimrod

    Structuring Integration Repository Content - Part 1: Software Component Versions
Have a look at that weblog and also search for the following parts of the same series. That should give you a good idea.

  • Purchasing Group control - PO and PR - Best Practices?

    Hi experts,
My client needs to implement, in the same plant, purchasing groups for POs and indirect PRs.
They currently have around 5-6 plants in Asia, and the design is to group the purchasing groups for a plant at the prefix level, e.g. A01 to AZZ for the plant in Thailand, B01 to BZZ for the plant in China.
Thus we will have a situation where A01, A02, A03 are created for buyers and A04, A05 are created for departments to raise indirect PRs.
But the requirement is that the buyers should not use the purchasing groups from the departments (A04, A05) to create POs. Other than hard-coding A01, A02 and A03 into the role given to the buyer, how should the purchasing groups be designed in such situations, as best practice?

Hi Ravi,
Thanks for your response.
My client won't have the issue of a buyer buying for other plants, as the buyers are local buyers situated at the plant. They don't have a centralized purchasing team that purchases for more than one plant.
When you say purchasing groups are added to the role, I presume you mean that a buyer only has authorization for their own purchasing group? That means buyer A01 only has authorization for A01 and nothing else?
When it comes to maintenance, won't this be tough? In addition, buyers will not be able to back each other up within the same plant when anyone goes on leave. Our current plan is to give A* to buyers in the Thailand plant. But that also means I might have the problem of direct buyers accidentally using a department purchasing group to create a PO.
