Best practice for passing parameters to a Timer event handler?

The code hinting suggests that I use a Timer object rather than setTimeout or setInterval; however, I need to pass a parameter to the timer handler.
What is the best practice for doing this?
+ Subclass Timer?
+ Subclass TimerEvent?
+ Global variable?
+ other?
Thanks

Hmm. I don't think I would've chosen that option. I would probably create a class that listens for the TimerEvent and dispatches a custom event. Unless there is information you need from TimerEvent, I don't see a need to extend it.

Similar Messages

  • What's the best option for passing parameters between task flows?

    Dear All,
    I have three Task Flows:
    1. TF1
         -  Main Taskflow that calls a web service to gather its data
    2. TF2
         -  Secondary task flow which receives a parameter and, depending on the value of the parameter received, displays its data accordingly. Generally any data is fed from TF1
    3. TF3
         -  Same as TF2
    Use Case:
    All three TF will be dropped to the page as Regions in a Webcenter Portal Application. Changes in TF1 should propagate into TaskFlow 2.
    Question:
    1. How do I configure it so that changes in TF1 are propagated back to task flows 2 and 3, and what's the best option for this?
    2. At runtime, user can choose to edit the page and TF2 and TF3 can be deleted but TF 1 should remain as the source of information.
    Given the scenario above:
    - shall I wire the taskflows via page parameters?
    - contextual events?
    What are the considerations that need to be thought of? I haven't done such a requirement before.
    Please help.
    Webcenter 11.1.1.6

    Contextual events seem to be the best fit.
    This way you can trigger them whenever you want. Web services can be slow, so you can raise the event when the gathering of the data has finished and then pass some value on the event.
    An event also has a payload, so it's an ideal place to put the data from the service so you can use it in the other TFs.
    In order to manage the deletion of the TF1, you can use the UI events on the composer: http://docs.oracle.com/cd/E23943_01/webcenter.1111/e10148/jpsdg_page_editor_adv.htm#CHDHHFDJ

  • Best practices for passing collections and navigating endlessly between 2 views

    Hello, I have a doubt about efficiency and memory optimization in this case. I have a managed bean that shows a list of activities; I can select one activity and the application redirects to another view. That view is controlled by another managed bean that shows the selected activity.
    My idea is to pass the collection and the id of the selected activity to the second managed bean, and then have the second managed bean pass the collection back to the first one.
    I had thought of passing properties via the request and retrieving them in the second bean, but I am not sure which scope to use in each bean, because the first bean passes the collection and then receives it back again.
    I also thought of using session scope in both beans, but I have doubts about memory efficiency in that case.
    How to pass the parameters is not yet decided:
    - Using h:link and attributes
    - Using setPropertyActionListener between both beans
    - Other options I don't know about
    First managed bean (shows the list):
    @ManagedBean(name="actividades")
    @ViewScoped // I'm not sure which scope to use
    public class ActividadesController implements Serializable {
         private static final long serialVersionUID = 1L;
         private final static Logger logger = Logger.getLogger(ActividadesController.class);
         private List<Actividad> listado; // all activities
         @ManagedProperty(value="#{actividadBO}")
         private ActividadBO actividadBo;
         @ManagedProperty(value="#{asociaciones}")
         private AsociacionController asociacionController;
         /** methods **/
    }
    Second managed bean (shows the selected activity):
    @ManagedBean(name="actV")
    @ViewScoped // I'm not sure which scope to use
    public class ActividadView implements Serializable {
         private static final long serialVersionUID = 1L;
         private Actividad actividad;
         private String comentario;
         private List<Actividad> listado; // all activities, kept to avoid searching again
         @ManagedProperty(value="#{actividadBO}")
         private ActividadBO actividadBo;
         private Integer idActividad;
         @PostConstruct
         public void init(){
              actividad = actividadBo.get(idActividad);
              actualizarComentarios(actividad.getIdActividad());
              actualizarAdjuntos(actividad.getIdActividad());
         }
         /** methods **/
    }
    Any suggestions??
    Kind regards.

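    For illustration, here is a minimal sketch of one way to wire this up, assuming JSF 2.x (which the @ManagedBean/@ViewScoped annotations above imply): the list bean passes only the selected id as a request parameter and puts the already-loaded collection into the flash scope, so neither bean has to sit in the session. The action method name and the findAll() fallback are invented for the example; the fields are the ones shown in the beans above, and javax.faces.context.FacesContext is the only extra import needed.

        // In ActividadesController (the list bean): navigate to the detail view,
        // passing only the id as a request parameter and handing the loaded list
        // over via the flash scope, which survives exactly one redirect.
        public String verActividad(Actividad seleccionada) {   // hypothetical action method
            FacesContext ctx = FacesContext.getCurrentInstance();
            ctx.getExternalContext().getFlash().put("listado", listado);
            return "actividad?faces-redirect=true&id=" + seleccionada.getIdActividad();
        }

        // In ActividadView (the detail bean): read the id from the request and the
        // list from the flash scope, falling back to the service if the flash is empty.
        @PostConstruct
        public void init() {
            FacesContext ctx = FacesContext.getCurrentInstance();
            idActividad = Integer.valueOf(
                    ctx.getExternalContext().getRequestParameterMap().get("id"));
            @SuppressWarnings("unchecked")
            List<Actividad> pasado =
                    (List<Actividad>) ctx.getExternalContext().getFlash().get("listado");
            listado = (pasado != null) ? pasado : actividadBo.findAll(); // findAll() is assumed
            actividad = actividadBo.get(idActividad);
        }

    With this split both beans can stay view scoped: the list view keeps its data while the user is on it, and the detail view gets the handed-over copy (or re-queries), which addresses the session-memory concern raised in the question.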

  • Best Practice for multivalued parameters in *QL

    I have a need to do this:
                <query>
                   <query-method>
                        <method-name>ejbSelectAllCasesInFIPS</method-name>
                        <method-params>
                             <method-param>java.util.Collection</method-param>
                        </method-params>
                   </query-method>
                   <ejb-ql><![CDATA[
                        select object(c)
                        from Clients c, in (c.Status) as s
                        where s.issue='H' or s.issue='P'
                        and c.ClientCaseInfo.fipsCode in $1
                   ]]></ejb-ql>
                </query>

    But based on the spec, this seems verboten.
    Am I restricted to building the query by hand, iterating through the collection and appending each new string parameter into another String representing a comma-delimited list of these parameters?
    Many thanks,
    Alexandra

    Is Sun considering adding Collections (even strongly typed Collections) to this spec? I'm guessing the reason it's not in the spec is because you can have a Collection of ANYTHING, which could be averted by adding collection types for known types (or specifications for such). I don't see any reason Java can't provide StringCollection, IntCollection, LongCollection, etc. It would be incumbent upon DBMS vendors to implement the mapping (which they already do for non-Collection types), but being a programmer I don't see how difficult that could be, since we programmers are repeatedly faced with this relatively low-level problem.
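
    For reference, here is a rough sketch of the "build the query by hand" workaround the question describes, shown with plain JDBC rather than CMP (EJB-QL in the deployment descriptor cannot be assembled at runtime). The table and column names are assumptions for illustration; the point is to generate one placeholder per collection element and bind each value, rather than concatenating the values into the SQL string.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.util.Collection;

        public class CaseQueries {
            // Builds "?, ?, ..., ?" for the IN clause and binds each FIPS code
            // as a separate parameter. The caller is responsible for closing
            // the statement/result set.
            public static ResultSet selectCasesInFips(Connection con, Collection<String> fipsCodes)
                    throws SQLException {
                StringBuilder placeholders = new StringBuilder();
                for (int i = 0; i < fipsCodes.size(); i++) {
                    placeholders.append(i == 0 ? "?" : ", ?");
                }
                String sql = "select * from clients c"
                           + " where c.issue in ('H', 'P')"
                           + " and c.fips_code in (" + placeholders + ")";  // assumed table/columns
                PreparedStatement ps = con.prepareStatement(sql);
                int idx = 1;
                for (String code : fipsCodes) {
                    ps.setString(idx++, code);
                }
                return ps.executeQuery();
            }
        }

    For what it's worth, later JPA providers let you pass a collection-valued parameter straight to an IN clause, but that is outside the EJB 2.x spec the question refers to.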

  • Best practice for passing messages from servlets

    Is there a best practice for passing user messages (typically errors) back to the page from servlets?
    e.g. http://localhost:4502/content/geometrixx/en.html?message=Some user error message
    Dan

    Well, I suppose the answer to that question depends somewhat on your requirements, but I would say using a query string as you have indicated would be less than ideal, because the page with the message would not be cached. Now, depending on your requirements and what sort of message you are passing, that might be OK - especially if your message is highly personalized.
    If, however, you have a limited number of standard messages to display, a more common approach is to have each message have its own page and then to configure the servlet to redirect to the appropriate page based on the desired message.
    If you want the user to end up on the same page they submitted to the servlet from, then another common approach would be to have the post to the servlet be AJAX and then display the message client side without having to change the URL.
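
    A minimal sketch of the redirect-to-a-message-page approach described above; the servlet name, page paths and validation check are placeholders, not anything from the original thread:

        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class SubmitServlet extends HttpServlet {
            @Override
            protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                // Placeholder for the real processing/validation.
                boolean ok = req.getParameter("email") != null;
                if (ok) {
                    // Each standard message lives on its own (cacheable) page.
                    resp.sendRedirect("/content/geometrixx/en/thank-you.html");
                } else {
                    resp.sendRedirect("/content/geometrixx/en/submission-error.html");
                }
            }
        }

    Because the outcome pages carry no query string they stay cacheable; the AJAX variant mentioned above would instead return the message from doPost and let the client render it without leaving the page.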

  • Best practice for real-time requirements

    All,
    I am trying to find out what the best practice is for reporting against real-time data.
    Is it:
    1 - WebI against a universe/BEx query on top of hybrid cubes?
    2 - Crystal Reports directly against the ECC data?
    3 - using another solution such as Data Federator? or something different?
    I am looking to know if anyone has had such a requirement, and also to share their experience.
    Did they face any huge challenges with hybrid cubes?
    Thanks in advance for your help
    Philippe

    Well, their first requirement was to get real-time data: if I am in Xcelsius and click refresh, then I want it to load my latest data.
    With Live Office, I can either schedule a Crystal report and get the data delayed, or use the option from Live Office to make it refresh right now. Is that a correct assumption?
    I was talking about BW, just in case they are willing to change the requirement from real time to every 5 minutes.
    Just so you know, we are also thinking of the following option:
    1 - modify the virtual provider on the CRM machine to get all the custom fields needed for the Xcelsius dashboard
    2 - build some interactive reports on top of these virtual providers within CRM
    3 - get the link to this report; it is one of the report features within CRM
    4 - design and build your dashboard on top of it
    5 - export your SWF file to the CRM Web UI
    We are trying to see which one is the best.
    Philippe

  • JSF - Best Practice For Using Managed Bean

    I want to discuss what the best practice is for managed bean usage, especially using session scope or request scope to build database-driven pages.
    ---- Session Bean ----
    - In the book Core Java Server Faces, the author mentioned that in most cases a session bean should be used, unless the processing is passed on to another handler. Since JSF can store the state on the client side, I think storing everything in the session is not a big memory concern (can some expert confirm this is true?). Session objects are easy to manage and state can be shared across pages. It can make programming easy.
    In the case of a page bound to a result set, the bean usually holds a java.util.List object for the result, which is initialized in the constructor by querying the database first. However, this approach has a problem: when the user navigates to another page and comes back, the data is not refreshed. You can of course solve the problem by issuing the query every time in your getXXX method, but you need to be very careful that you don't bind this XXX property too many times. In the case of querying in getXXX, setXXX is also tricky as you don't have a member to set. You usually don't want to persist the result set changes in setXXX, as the changes may not be final; instead, you want to handle that in the action listener (like a save(actionEvent)).
    I would be glad to see your thoughts on this.
    --- Request Bean ---
    A request bean is initialized every time a request is made. It sometimes drove me nuts because JSF seems not to be very consistent in updating model values. Suppose you have a page showing a parent-children list of records from the database, and you also allow the user to change the children directly. If I bind the parent to a bean called #{Parent} and bind the children to an ADF table (value="#{Parent.children}" var="rowValue"), and I set Parent as request scope, the setChildren method is never called when I submit the form. Not sure if this is just ADF or a JSF problem. But if you change the bean to session scope, everything works fine.
    I believe JSF doesn't update the bindings for all component attributes. It only updates the input component value binding. Someone please verify this is true.
    In many cases, I found a request bean very hard to work with if there are lots of updates. (I had lots of trouble updating the binding value for rendered attributes.)
    However, a request bean works fine for read-only pages and simple bound forms. It definitely frees up memory quicker than a session bean.
    ----- any comments or opinions are welcome!!! ------

    I think it should be either Option 2 or Option 3.
    Option 2 would be necessary if the bean data depends on some request parameters.
    (Example: Getting customer bean for a particular customer id)
    Otherwise Option 3 seems the reasonable approach.
    But, I am also pondering on this issue. The above are just my initial thoughts.
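
    To make the trade-off above concrete, here is a small sketch of a session-scoped bean that caches its result list but lets the page force a refresh, instead of re-querying on every getXXX call. The class, DAO interface and method names are made up for illustration, not taken from the thread:

        import java.io.Serializable;
        import java.util.List;

        public class CustomerListBean implements Serializable {
            // Placeholder for whatever data-access object the application uses.
            public interface CustomerDao { List<String> findAll(); }

            private CustomerDao customerDao;      // injected, e.g. via a managed property
            private List<String> customers;       // cached result list

            // Getters can be called several times per render; only hit the
            // database when the cache is empty.
            public List<String> getCustomers() {
                if (customers == null) {
                    customers = customerDao.findAll();
                }
                return customers;
            }

            // Bound to a command button or navigation action so returning to the
            // page can explicitly discard the stale cache.
            public String refresh() {
                customers = null;
                return null;                      // stay on the current view
            }

            public void setCustomerDao(CustomerDao dao) { this.customerDao = dao; }
        }

    This keeps the "query every time" cost out of the getter while still giving a deliberate point (the refresh action) at which stale data is discarded.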

  • Best practice for backing bean population? (also, ActionListener RANT)

    Hello,
    I am about 3/4 of the way through development of a small to medium size JSF application. Sometimes I really like JSF, but much of the time I am left puzzled or frustrated for hours trying to find workarounds to JSF's bugs/glitches and design flaws.
    For example, early on, I was impressed with how easy it was to invoke a method from a page using an actionListener. Now that I'm actually building things with JSF, the actionListener functionality still seems cool, but incredibly half baked. I find myself using request parameters LIKE CRAZY to work around the fact that JSF doesn't support passing parameters directly to backing bean methods. This feels awkward and wrong considering the fact that JSF is intended to abstract the HTTP underpinnings. To add insult to injury, I often have to iterate through ALL of the request parameters looking for one whose id ends with my desired property name (since JSF prepends its own prefix). I don't like doing things in a hacky way. This seems very hacky, and I feel dirty doing it.
    So, my first question is: what is the best practice for populating backing beans? How do others accomplish this? I can think of several other approaches, but none feel less hacky.
    Second, are there plans in the next spec (please say there are) to allow parameters to be passed to backing bean methods? If not, WHY THE HECK NOT?
    Even though JSF expert group people have been conspicuously absent from this forum of late, I'd really appreciate responses from you as well.
    Thank you for your thoughts.

    Hi BrownBear,
    I've been using JSF for about 6 months now and I'd be glad to help as much as I can.
    Concerning parameters, I'm not sure what your issue is, but I use the f:param tag to pass them. If you could post an example of what you are trying to do, I could see exactly what your issue is. Maybe f:param can't help you.
    As for best practice for populating backing beans, I personally try to let JSF do as much as possible. For example, if I have a backing bean with five properties, I make sure that they are all on the JSP page the bean serves. If one of the properties is just there as an ID, let's say a Person ID (DB row key), then I put it on my JSP page as a hidden input field. I do the same with the properties that are only for display, if I want them back in my bean when the request comes back.
    Hope this helps somehow. Please feel free to ask specific questions related to your specific problem; I'll monitor this post and transfer to you the little JSF experience I have.
    I'm pretty happy with JSF as it is, but it sure needs improvements. :) What the heck, it's version 1.01 after all, and the next release should be a great one with the integration of JSTL.
    Cheers
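
    To illustrate the f:param suggestion above: the parameter arrives under the name you give it, so the action listener can read it from the request parameter map directly instead of scanning all parameters for a JSF-prefixed id. The bean and parameter names here are invented for the example:

        import javax.faces.context.FacesContext;
        import javax.faces.event.ActionEvent;

        // Page side, for reference (JSP/Facelets):
        //   <h:commandLink value="Edit" actionListener="#{personBean.edit}">
        //       <f:param name="personId" value="#{row.id}"/>
        //   </h:commandLink>
        public class PersonBean {
            private String personId;

            public void edit(ActionEvent event) {
                // f:param values are submitted as plain request parameters under
                // the given name, so a direct lookup is enough.
                personId = FacesContext.getCurrentInstance()
                                       .getExternalContext()
                                       .getRequestParameterMap()
                                       .get("personId");
                // ...load the person and continue processing...
            }

            public String getPersonId() { return personId; }
        }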

  • Best practice for declaring and initializing String?

    What is the best practice for the way Strings are declared in a class?
    Should it be
    private String strHello = "";
    or should I have the initialization in the constructors?

    The servlet constructor is usually called once, when the servlet is first accessed. But then again maybe something else happens, google servlet life cycle if you must know.
    But let's take a step backwards here. It seems like you are trying to put fields into servlets. Don't do that. When two users fetch the servlet's URL at the same time, the fields are shared between the two hits. If you store something like HTTP parameters in the fields, the two hits' parameters will get mangled. The hits can end up seeing each other's parameter values.
    The best way is not to have fields in servlets. (Except maybe "static final" constants, sometimes rarely something else.) Many concurrency worries go away, servlet life cycle worries go away, servlet constructors go away, init() usually goes away.
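
    A short sketch of the field-free style recommended above: shared constants are static final, and everything derived from the request lives in local variables, so two simultaneous hits cannot see each other's values. The greeting logic is just a placeholder:

        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class HelloServlet extends HttpServlet {
            // Immutable shared data is safe as a constant.
            private static final String DEFAULT_NAME = "world";

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                // Per-request data stays in local variables, never instance fields.
                String name = req.getParameter("name");
                if (name == null) {
                    name = DEFAULT_NAME;
                }
                resp.setContentType("text/plain");
                resp.getWriter().println("Hello, " + name);
            }
        }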

  • Best practices for logging results from Looped steps

    Hi all
    I would like to start a discussion  to document best practices for logging results (to reports and databases) from Looped Steps 
    As an application example - let's say you are developing a test for one of NI's analog input or output cards and need to measure a voltage across multiple inputs or outputs.
    One way to do that would be to create a sequence that switches the appropriate signals and performs a "Voltage Measurement" test in a loop.    
    What are your techniques for keeping track of the individual measurements so that they can be traced to the individual signal paths that are being measured?
    I have used a variety of techniques such as
    i) creating a custom step type that generates unique identifiers for each iteration of the loop. This required some customization of the results processing. Also, the sequence developer had to include code to ensure that a unique identifier was generated for each iteration.
    ii) adding an input parameter to the test function/VI, passing the loop iteration to it, and adding this to the Additional Results parameters to log.

    I have attached a simple example (LV 2012 and TS 2012) that includes steps inside a loop structure as well as a looped test.
    If you enable both database and report generation, you will see the following:
    1) The numeric limit test in the for loop always generates the same name in the report and database, which makes it difficult to determine the result of a particular iteration.
    2) The Max Voltage test report includes the parameter as an additional result, but the database does not include any differentiating information.
    3) The Looped Limit test generates both unique reports and unique database entries - you can easily see what the result for each iteration is.
    As mentioned, I am seeking to start a discussion for how others handle results for steps inside loops.    The only way I have been able to accomplish a result similar to that of the Looped step (unique results and database entry for each iteration of the loop) is to modify the process model results processing.  
    Attachments:
    test.vi 27 KB
    Sequence File 2.seq 9 KB

  • Best Practices for FSCM Multiple systems scenario

    Hi guys,
    We have a scenario to implement FSCM credit, collections and dispute management solution for our landscape comprising the following:
    a 4.6c system
    a 4.7 system
    an ECC 5 system
    2 ECC6 systems
    I have documented my design, but would like to double-check and compare notes with colleagues regarding the following areas/questions.
    Business partner replication and synchronization: what is the best practice for the initial replication of customers in each of the different systems to business partners in the FSCM system? (a) for the initial creation, and (b) for on-going synchronization of new customers and changes to existing customers?
    Credit Management: what is the best practice for update of exposures from SD and FI-AR from each of the different systems? Should this be real-time for each transaction from SD and AR  (synchronous) or periodic, say once a day? (assuming we can control this in the BADI)
    Is there any particular point to note in dispute management?
    Any other general note regarding this scenario?
    Thanks in advance. Comments appreciated.

    Hi,
    I guess that when you have information that SAP can read and act on, it has to be asynchronous (from non-SAP to FSCM).
    But when the credit analysis is done by a non-SAP party such as Experian, SAP sends the information about paid and unpaid invoices and this non-SAP group gives a rating for the customer. All banks and big companies in the world do the same. And for this you have the synchronous interface. This interface will update FSCM-CR (Credit), blocking the vendor or not, and decreasing or increasing their limit amount to buy.
    So, for these 1,000 sales orders, you'll have to think with PI about how to create an interface for this volume. What parameters does SAP have to check? Is there a time interval to receive and send back? Will it be synchronous or asynchronous?
    Contact your PI team to help think through this information exchange.
    Does that answer your question?
    JPA

  • Is there a list of best practices for Azure Cloud Services?

    Hi all;
    I was talking with a SQL Server expert today and learned that Azure SQL Server can take up to a minute to respond to a query that normally takes a fraction of a second. This is one of those things where it's really valuable to learn it when architecting, as opposed to when we go live.
    Cloud Services are not SQL Server (obviously), but that led to the question - is there a list of best practices for Azure Cloud Services? If so, what are they?
    We will be placing the cloud services in multiple datacenters and using Traffic Manager to point people to the right one. The cloud service will sit between an IMAP client & server, pretending to be the mail client to the server, and the server to the client. Mostly it will pass all requests & responses across from one to the other.
    thanks - dave
    What we did for the last 6 months -
    Made the world's coolest reporting & docgen system even more amazing

    hi dave,
    >>Cloud Services are not Sql Server (obviously) but that led to the question - Is there a list of best practices for Azure Cloud Services? If so, what are they?
    For this issue, I have collected some blogs and documents about best practices for Azure cloud services. You can view them, but I am not sure they are what you need.
    http://msdn.microsoft.com/en-us/library/azure/xx130451.aspx
    http://gauravmantri.com/2013/01/11/some-best-practices-for-building-windows-azure-cloud-applications/
    http://www.hanselman.com/blog/CloudPowerHowToScaleAzureWebsitesGloballyWithTrafficManager.aspx
    http://msdn.microsoft.com/en-us/library/azure/jj717232.aspx
    http://azure.microsoft.com/en-us/documentation/articles/best-practices-performance/
    >>The cloud service will set between an IMAP client & server, pretending to be the mail client to the server, and the server to the client. Mostly it will pass all requests & responses across from one to the other.
    For your scenario, if you'd like to communicate between the instances, I recommend you refer to this document (http://msdn.microsoft.com/en-us/library/azure/hh180158.aspx). And generally, if we want to connect the client to the server on Azure, Service Bus is a good choice (http://azure.microsoft.com/en-us/documentation/articles/cloud-services-dotnet-multi-tier-app-using-service-bus-queues/).
    If I misunderstood, please let me know.
    Regards,
    Will

  • Best practices for ARM - please help!!!

    Hi all,
    Can you please help with any pointers/links to documents describing best practices for who should be creating the GRC request in the below ARM workflow in GRC 10.0?
    Create GRC request -> role approver -> risk manager -> security team
    Options are: end user / manager / functional super users / security team.
    End user and manager are not possible - we cannot train so many people. The functional team is refusing since it's a lot of work. Please help me with pointers to any best practices documents.
    Thanks!!!!

    In this case, I recommend proposing that the department managers create GRC Access Requests.  In order for the managers to comprehend the new process, you should create a separate "Role Catalog" that describes what abilities each role enables.  This Role Catalog needs to be taught to the department Managers, and they need to fully understand what tcodes and abilities are inside of each role.  From your workflow design, it looks like Role Owners should be brought into these workshops.
    You might consider a Role Catalog that the manager could filter on and make selections from.  For example, an AP manager could select "Accounts Payable" roles, and then choose from a smaller list of AP-related roles.  You could map business functions or tasks to specific technical roles.  The design flaw here, of course, is the way your technical roles have been designed.
    The point being, GRC AC 10 is not business-user friendly, so using an intuitive "Role Catalog" really helps the managers understand which technical roles they should be selecting in GRC ARs.  They can use this catalog to spit out a list of technical role names that they can then search for within the GRC Access Request.
    At all costs, avoid having end-users create ARs.  They usually select the wrong access, and the process then becomes very long and drawn out because the role owners or security stages need to mix and match the access after the fact.  You should choose a Requestor who has the highest chance of requesting the correct access.  This is usually the user's Manager, but you need to propose this solution in a way that won't scare off the manager - at the end of the day, they do NOT want to take on more work.
    If you are using SAP HR, then you can attempt HR Triggers for New User Access Requests, which automatically fill out and submit the GRC AR upon a specific HR action (New Hire, or Termination).  I do not recommend going down this path, however.  It is very confusing, time consuming, and difficult to integrate properly.
    Good luck!
    -Ken

  • Best Practices For Household IOS's/Apple IDs

    Greetings:
    I've been searching support for best practices for sharing (primarily apps, music and video) among multiple iOS devices/Apple IDs. If there is a specific article please point me to it.
    Here is my situation: 
    We currently have 3 iPads (2-kids, 1-dad) in the household and one iTunes account on a Windows computer. I previously had all iPads on a single Apple ID/credit card and controlled the kids' downloads through the Apple ID password that I kept secret. As the kids have grown older, I found myself constantly entering my password as they increased their interest in music/apps/video. I liked this approach because all content was shared... I disliked it because I was constantly asked to enter the password for all downloads.
    So, I recently set up an individual account for them with the allowance feature at iTunes that allows them to download content on their own (I set restrictions on their iPads). Now I have 3 Apple IDs under one household.
    My questions:
    With the 3 Apple IDs, what is the best way to share apps, music, and videos among myself and the kids? Is it multiple accounts on the computer and some sort of sharing?
    Thanks in advance...

    Hi Bonesaw1962,
    We've had our staff and students run iOS updates OTA via Settings -> Software Update. In the past, we put a DNS block on Apple's update servers to prevent users from updating iOS (like last fall when iOS 7 was first released). By blocking mesu.apple.com, the iPads weren't able to check for or install any iOS software updates. We waited until iOS 7.0.3 was released before we removed the block on mesu.apple.com, at which point we told users that if they wanted to update to iOS 7 they could do so OTA. We used our MDM to run reports periodically to see how many people updated to iOS 7 and how many stayed on iOS 6. As time went on, just about everyone updated on their own.
    If you go this route (depending on the number of devices you have), you may want to take a look at Caching Server 2 to help with the network load https://www.apple.com/osx/server/features/#caching-server . From Apple's website, "When a user on your network downloads new software from Apple, a copy is automatically stored on your server. So the next time other users on your network update or download that same software, they actually access it from inside the network."
    I wish there was a way for MDMs to manage iOS updates, but unfortunately Apple hasn't made this feature available to MDM providers. I've given this feedback to our Apple SE, but haven't heard if it is being considered or not. Keeping fingers crossed.
    Hope this helps. Let us know what you decide on and keep us posted on the progress. Good luck!!
    ~Joe

  • BEST PRACTICES FOR CREATING DISCOVERER DATABASE CONNECTIONS - PUBLIC VS. PRIVATE

    I have enabled SSO for Discoverer, so when you browse to http://host:port/discoverer/viewer you get prompted for your SSO username/password. I have enabled users to create their own private connections. I logged in as portal and created a private connection. I then, from Oracle Portal, created a portlet and added a Discoverer worksheet using the private connection that I created as the portal user. This works fine... users access the portal and they can see the worksheet. When they click the analyze link, the users are prompted to enter a password for the private connection. The following message is displayed:
    "The item you are requesting requires you to enter a password. This could occur because this is a private connection or because the public connection password was invalid. Please enter the correct password now to continue."
    I originally created a public connection... and then followed the same steps from Oracle Portal to create the portlet and display the worksheet. The worksheet is displayed properly from Portal, and when users click the analyze link they are taken to Discoverer Viewer without having to enter a password. The problem with this is that when a user browses to http://host:port/discoverer/viewer they enter their SSO information, and then any user with an SSO account can see the public connection... very insecure! When private connections are used, no connection information is displayed to SSO users when logging into Discoverer Viewer.
    For the very first step, when editing the Worksheet portlet from Portal, I enter the following for Database Connections:
    Publisher: I choose either the private or public connection that I created
    Users Logged In: Display same data to all users using connection (Publisher's Connection)
    Users Not Logged In: Do not display data
    My question is: what are the best practices for creating Discoverer database connections?
    Is there a way to create a public connection, but not display it at http://host:port/discoverer/viewer?
    Can I restrict access to http://host:port/discoverer/viewer to specific SSO users?
    So overall, I want roughly 40 users to have access to my Portal page group. I then want to display portlets with Discoverer worksheets. For certain worksheets I want the ability to display the analyze link. When the SSO user clicks on this they will be taken to Discoverer Viewer and prompted for no logon information. All SSO users will see the same data... there is no need to restrict access based on SSO username... one database user will be set up in either the public or private connection.

    You can make it happen by creating a private connection for the 40 users via a capi script, and when creating the portlet select the 2nd option in the Users Logged In section. That way the portlet uses their own private connection every time a user logs in, so it won't ask for a password.
    Another thing: there is an option of entering a password or not in ASC, in the Discoverer section, if your version is 10.1.2.2. Let me know if you need more information.
    Thanks,
    Kiran

Maybe you are looking for

  • Amount of forms open in the session

    Hello all, I would like to restrict/control the number of forms opened by the user, as I am using open_form. I have found nothing in the Forms help about that, like a system variable, and nothing so far on the web. A possible workaround/idea could be to

  • Username and password token retrieval from SOAP web services

    We are implementing a JAX-WS web service which requires retrieving the username and password from SOAP header elements and using them for further processing. When we retrieve the username/password, it comes back as null. Please help ... if (Boole

  • Computer as video source

    Thanks for all the help on the auto-stop function, it worked! Now for my next question: is there any way to pull the video from the computer as the video source? What I am looking to do is to capture a PowerPoint presentation with audio (speaker ta

  • IPod continuously restores and updates to no effect

    I have my girlfriend's grey-display 4GB iPod mini. Here's the problem: when starting up, the iPod goes to the exclamation folder icon (http://docs.info.apple.com/article.html?artnum=61003). It then shuts off. Following the instructions I plug it in an

  • Demand management planning strategy

    Hi there, I am trying to set up demand management for simple make-to-stock. I am using standard requirements type LSF, which is linked to requirements class 100 (OMP1). The requirements class has a planning indicator (PLNKZ) set to "1" (net requirement