Session question; best practice

Hi,
One of our high-profile applications serves queries/updates to user sessions. We want to improve user query performance and reduce general database activity.
This part of the application causes an auto-refresh to execute every 60 seconds. These queries run against order tables looking for statuses on active orders, are user-specific, and in some cases are not optimally tuned, producing very high database buffer-get and disk-read activity. On average, 1,500 executions representing various flavors of these queries run hourly.
My questions are:
1) How can we get maximum performance?
2) Can we cache these query results and refresh them every 30 seconds or so?
3) How can we cache them so that user sessions access the cache?
-sharma

Well, you could load the data and put it in application scope (in memory) with a timeout, so that it isn't used after a set interval; once it expires, the next request would have to fetch the newer data from the DB.
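A minimal sketch of that idea in plain Java (the class name, the 30-second TTL, and the stubbed loader are my assumptions, not from the post above):

    import java.util.Collections;
    import java.util.List;

    // Application-scoped cache: one DB hit per TTL window, shared by all sessions.
    public class OrderStatusCache {
        private static final long TTL_MS = 30000L; // serve cached rows for 30 seconds

        private List<String> cachedRows;
        private long loadedAt;

        // synchronized so concurrent sessions don't all reload when the TTL lapses
        public synchronized List<String> getRows() {
            long now = System.currentTimeMillis();
            if (cachedRows == null || now - loadedAt > TTL_MS) {
                cachedRows = loadFromDatabase();
                loadedAt = now;
            }
            return cachedRows;
        }

        // stub standing in for the real order-status query
        private List<String> loadFromDatabase() {
            return Collections.singletonList("order 111: SHIPPED");
        }
    }

You would create one instance at startup and store it as a ServletContext (application-scope) attribute, so every session reads the same cached result instead of re-running the query.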

Similar Messages

  • ADF Faces : session timeout best practice

    hi
    I made these small modifications to the web.xml file in the SRDemoSample application:
    (a) I changed the login-config from this ...
      <login-config>
        <auth-method>FORM</auth-method>
        <form-login-config>
          <form-login-page>infrastructure/SRLogin.jspx</form-login-page>
          <form-error-page>infrastructure/SRLogin.jspx</form-error-page>
        </form-login-config>
      </login-config>
    ... to this
      <login-config>
        <auth-method>BASIC</auth-method>
      </login-config>
    (b) I changed the session-timeout to 1 minute.
      <session-config>
        <session-timeout>1</session-timeout>
      </session-config>
    Please consider this scenario:
    (1) Run the UserInterface project of the SRDemoSample application in JDeveloper.
    (2) Authenticate using "sking" and password "welcome".
    (3) Click on the "My Service Requests" tab.
    (4) Click on a "Request Id" like "111". You should see a detail page titled "Service Request Information for SR # 111" that shows detail data on the service request.
    (5) Wait for at least one minute for the session to timeout.
    (6) Click on the "My Service Requests" tab again. I see the same detail page as in (4), now titled "Service Request Information for SR #" and not showing any detail data.
    question
    What is the best practice to detect such session timeouts and handle them in a user friendly way in an ADF Faces application?
    thanks
    Jan Vervecken

    Hi,
    No. Here's the content, copied from a Word doc:
    A frequent question on the JDeveloper OTN forum, and one that customers have also asked directly, is how to detect and gracefully handle user session expiry due to user inactivity.
    The problem with user inactivity is that there is no way in JavaEE for the server to call the client when the session has expired. Though you could use JavaScript on the client to count down the session timeout, eventually showing an alert or redirecting the browser, this comes with a lot of overhead. The main concern raised against unhandled session invalidation due to user inactivity is that the next user request leads to unpredictable results and error messages. Because all information stored in the user session gets lost upon session expiry, you can't recover the session and need to start over again. The solution to this problem is a servlet filter that works on top of the Faces servlet. The web.xml file would have the filter configured as follows:
    <filter>
        <filter-name>ApplicationSessionExpiryFilter</filter-name>
        <filter-class>
            adf.sample.ApplicationSessionExpiryFilter
        </filter-class>
        <init-param>
            <param-name>SessionTimeoutRedirect</param-name>
            <param-value>SessionHasExpired.jspx</param-value>
        </init-param>
    </filter>
    This configures the "ApplicationSessionExpiryFilter" filter with an initialization parameter that lets the administrator configure the page the filter redirects the request to. In this example, the page is a simple JSP page that only prints a message so the user knows what has happened. Further on in the web.xml file, the filter is mapped to the JavaServer Faces servlet as follows:
    <filter-mapping>
        <filter-name>ApplicationSessionExpiryFilter</filter-name>
        <servlet-name>Faces Servlet</servlet-name>
    </filter-mapping>
    The Servlet filter code compares the session Id of the request with the current session Id. This nicely handles the issue of the JavaEE container implicitly creating a new user session for the incoming request.
    The only special case to be handled is where the incoming request doesn't have an associated session ID. This is the case for the initial application request.
    1.     package adf.sample;
    2.     
    3.     import java.io.IOException;
    4.     
    5.     import javax.servlet.Filter;
    6.     import javax.servlet.FilterChain;
    7.     import javax.servlet.FilterConfig;
    8.     import javax.servlet.ServletException;
    9.     import javax.servlet.ServletRequest;
    10.     import javax.servlet.ServletResponse;
    11.     import javax.servlet.http.HttpServletRequest;
    12.     import javax.servlet.http.HttpServletResponse;
    13.     
    14.     
    15.     public class ApplicationSessionExpiryFilter implements Filter {
    16.         private FilterConfig _filterConfig = null;
    17.        
    18.         public void init(FilterConfig filterConfig) throws ServletException {
    19.             _filterConfig = filterConfig;
    20.         }
    21.     
    22.         public void destroy() {
    23.             _filterConfig = null;
    24.         }
    25.     
    26.         public void doFilter(ServletRequest request, ServletResponse response,
    27.                              FilterChain chain) throws IOException, ServletException {
    28.     
    29.     
    30.             String requestedSession =   ((HttpServletRequest)request).getRequestedSessionId();
    31.             String currentWebSession =  ((HttpServletRequest)request).getSession().getId();
    32.            
    33.             boolean sessionOk = currentWebSession.equalsIgnoreCase(requestedSession);
    34.           
    35.             // if the requested session is null then this is the first application
    36.             // request and "false" is acceptable
    37.            
    38.             if (!sessionOk && requestedSession != null){
    39.                 // the session has expired or renewed. Redirect request
    40.                 ((HttpServletResponse) response).sendRedirect(_filterConfig.getInitParameter("SessionTimeoutRedirect"));
    41.             }
    42.             else{
    43.                 chain.doFilter(request, response);
    44.             }
    45.         }
    46.        
    47.     }
    This servlet filter works pretty well, except for sessions that expire through active session invalidation, e.g. when nuking the session to log out of container-managed authentication. In this case my recommendation is to extend line 39 to also check whether security is required. This can be done through another initialization parameter that holds the name of a page the request is redirected to upon logout. In that case you don't redirect the request to the error page but continue with a newly created session.
    PS: For testing and development, set the following parameter in web.xml to 1 so you don't have to wait 35 minutes:
    <session-config>
        <session-timeout>1</session-timeout>
    </session-config>
    Frank
    Edited by: Frank Nimphius on Jun 9, 2011 8:19 AM

  • [CS5.5/6] - XML / Data Merge questions & Best practice.

    Fellow Countrymen (and women),
    I work as a graphic designer for a large outlet chain retailer which is constantly growing its base of centers. This growth has turned a workload that used to be manageable with but two people into a never-ending sprint with five. Much of what we do is print, which is not my forte, but it is also generally a disorganized, ad-hoc affair into which I am wading to try to help reduce overall strain.
    Upon picking up InDesign I noted the power of the simple Data Merge function and have added it to our repertoire for mass-merging data sources. There are some critical failures I see in this as a tool going forward for our purposes, however:
    1) Data Merge cannot handle information stored and categorized in a single column well. As an example, we have centers in many cities, and each center has its own list of specific stores. Data Merge cannot handle a single-column, or even multiple-column, list of these stores very easily and has forced us into some manual operations to concatenate the data into one cell and then, using delimiter characters, find and replace hard returns to separate them.
    2) Data Merge offers no method of alternate alignment of data, or selection by ranges. That is to say: I cannot tell Data Merge to start at Cell 1 in one column and, in another column, select say... Cell 42 as the starting point.
    3) Data Merge only accepts data organized in a very specific, and generally inflexible, pattern.
    These are just a few limitations.
    ON TO MY ACTUAL DILEMMA aka Convert to XML or not?
    Recently my coworker has suggested we move toward using XML as a repository/delivery system that helps us quickly get data from our SQL database into a usable form in InDesign.
    I've watched some tutorials on Lynda.com and haven't yet seen a clear answer to a very simple question:
    "Can XML help to 'merge' large, dynamic, data sets like a list of 200 stores per center over 40 centers based off of a single template file?"
    What I've seen is that I would need to manually duplicate pages, linking the correct XML entry as I go, rather than the program generating a set of merged pages, as Data Merge does, with very little effort on my part. Perhaps setting up a master page would allow for easy drag-and-drop fields for my XML data?
    I'm not an idiot, I'm simply green with this, and it's kind of scary because I genuinely want us to proceed with the most flexible, reliable, trainable and sustainable solution. A tall order, I know. Correct me if I'm wrong, but XML is that beast, no?
    Formatting the XML
    Currently I'm afraid our XML feed for our centers isn't formatted correctly, with the current format looking like this:
    <BRANDS>
      <BRAND BrandID="xxxx">
        [Brand Name]
        [Description]
        [WebMoniker]
        <CATEGORIES>
          <CATEGORY xmlns="URL" WebMoniker="category_type"/>
        </CATEGORIES>
        <STORES>
          <STORE StoreID="ID#" CenterID="ID#"/>
        </STORES>
      </BRAND>
    </BRANDS>
    I don't think this is currently usable, because if I wanted to create a list of stores for a particular center, that information is stored as an attribute of the <STORE> tag, buried deep within the data, making it impossible to 'drag-n-drop'.
    Not to mention much of the important data is held in attributes rather than in text fields that are children of the tag.
    I'm thinking of proposing the following organizational layout:
    <CENTERS>
      <CENTER>
        [Center_name]
        [Center_location]
        <CATEGORIES>
          <CATEGORY>
            [Category_Type]
            <BRANDS>
              <BRAND>
                [Brand_name]
              </BRAND>
            </BRANDS>
          </CATEGORY>
        </CATEGORIES>
      </CENTER>
    </CENTERS>
    My thought is that if I have the <CENTER> tag then I can simply drag that into a frame and it will auto-populate all of the brands, by category (as organized in the XML), for that center into the frame.
    Why is this important?
    This is used on multiple documents in different layout styles, and since our store list is ever-changing as leases end or begin, over 40 centers this becomes a big hairy monster. We want this to be as automated as possible, but I'd settle for a significant amount of dragging and dropping as long as it is simple and straightforward. I have a high tolerance for drudging through code and creating workarounds, but my co-workers do not. This needs to be a system that is repeatable and understandable and needs to be able to function whether I'm here or not, mainly because I would like to step away from the responsibility of setting it up every time.
    I'd love to hear your raw, unadulterated thoughts on the subject of Data merge and XML usage to accomplish these sorts of tasks.  What are your best practices and how would you / do you accomplish these operations?
    Regards-
    Robert

    From what I've gleaned through watching Lynda tutorials on the subject, what I'm hoping to do is indeed possible.
    Peter, I don't disagree with you that there is a steep learning curve for me as the instigator/designer of this method for our team, but in terms of my teammates and end-users that will be softened considerably. Even so, I'm used to steep learning curves and the associated frustrations; I cope well with new learning and am self-taught in many tools and programs.
    Flow-based XML structures:
    It seems that as long as the initial page is set up correctly using imported XML, individual data records that cascade in a logical fashion can be flowed automatically into new pages. Basically what you do is create an XML-based layout with the dynamic portion you wish to flow in a single frame, apply paragraph styles to the different tags appropriately and then, after deleting unused records, reimport the XML with some specific boxes checked (depending on how you wish to proceed).
    From there, simply dragging the data root into the frame will cause overset text as it imports all the XML information into the frame. Assuming that everything is cascaded correctly, using auto-flow will cause new pages to be automatically generated with the tags correctly placed, in a similar fashion to Data Merge, but far more powerful and flexible.
    The issue then again comes down to data organization in the XML file. In order to use this method the data must be organized in the same order in which it will be displayed. For example, if I had a Lastname field and a Firstname field in that order, I could not call the Firstname first without faulting the document using the flow method. I could, however, still drag and drop content from each tag into the frame and it would populate correctly regardless of the order of appearance in the XML.
    Honestly, either method would be fantastic for our current set of projects; however, the flow method may be particularly useful in jobs that require more than 40 spreads, or simple layouts with huge amounts of data to be merged.

  • Question - Best practice data source for Vs2008 and Crystal Reports 2008

    I have posted a question here:
    CR2008 using data from .NET data provider (ADO.NET DATASET from a .DLL)
    but think that perhaps I need general community advice on best practice with data sources.
    In Crystal Reports I can choose the data source location from any number of connection types, e.g. ado.net(xml), COM, OLE DB, ODBC.
    Now, in regard to that post: the reports were all created in Crystal Reports 6.3, upgraded to Crystal XI, and now I'm using the latest and greatest. I wrote the Crystal Reports 6.3/XI reports back in the day to do the following: the reports use a function from a COM object which returns an ADO recordset, which is then consumed fine.
    So I don't want to rewrite all these reports, of which there are many.
    I would like to know if any developers are actually using .NET class libraries to return ADO.NET datasets via a method call, or if you are connecting directly to XML data via whatever source (disk, web service, HTTP request, etc.).
    I have not been able to eliminate the problem described in the post mentioned above, which is that the Crystal Report calls the .NET class library method twice before displaying the data. I have confirmed this by debugging the class lib.
    So any guidance or tips are appreciated.
    Thanks

    This is already being discussed in one of your other threads. Let's close this one out and concentrate on the one I've already replied to.
    Thanks

  • ISE policy creation question - best practices

    OK, I am a rookie ISE user here and am trying to learn as I go. I have an 802.1X policy for our corporate users on both wired and wireless, and a wireless guest policy that redirects to the guest portal to enter credentials created in the sponsor portal. The corporate user has access to corporate resources, and the guest basically has access to just the internet.
    I need to make what I am calling a Vendor policy that is basically a hybrid of the corporate user and the guest user. These would be vendors that are on-site to assist with programming and need access longer than what a guest account can be created for. This would also have specific ACLs that grant them access to the specific resources they would need. I would like to tie this into AD authentication since, in most cases, they have an AD account created to be able to access those corporate resources. My first question: do I have a single policy that is tweaked as vendors come and go, or do I simply create a specific policy for each vendor? My second question: do I, or should I, create unique SSIDs for each vendor?
    As I said, I am just now getting into configuring ISE. I am just not sure what is considered a best practice, or what is considered a secure way to make things happen. The policies I have created work, but I think I have a couple of holes to address.
    Thanks ...
    Brent

    Mostly makes sense. I have the AD part; I just need to get an AD group created for my test subject.
    I created an Endpoint Identity Group to place the vendors' devices into so that we can allow a laptop to connect but not a phone. Got that.
    I think I can handle the Authorization Profile. It will be something like: if VendorAsset and AD1:ExternalGroups Equals VendorADGroup, then VendorPermissions. VendorPermissions would be the ACL that limits where they can go. I also need to create a non-802.1X-based SSID and add this to the Authorization Profile, but it can still be generic enough to be usable by all vendors.
    I think it is my Authentication rules that I need to modify for Vendor, as my corporate-based policies use dot1x and I need a policy that does not use dot1x. Right?

  • Data warehousing question/best practices

    I have been given the task of copying a few tables from our production database to a data warehousing database on a once-a-day (overnight) basis. The number of tables will grow over time; currently it is 10. I am interested in not only task success but also best practices. Here's what I've come up with:
    1) drop the table in the destination database.
    2) re-create the destination table from the script provided by SQL Developer when you click on the 'SQL' tab while you're viewing the table.
    3) INSERT INTO the destination table from the source table using a database link. Note: I am not aware of any columns in the tables themselves which could be used to filter added/deleted/modified rows only.
    4) After data import, create primary key and indexes.
    Questions:
    1) SQL Developer included the following lines when generating the table creation script:
    <table creation DDL commands>
    then
    PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE (INITIAL 251658240 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "TBLSPC_PGROW"
    It generated this code snippet for the table, the primary key, and every index.
    Do I need to include this in my code if these are all default values? For example, one of the indexes gets scripted as follows:
    CREATE INDEX "XYZ"."PATIENT_INDEX" ON "XYZ"."PATIENT" ("Patient")
    -- do I need the following four lines?
    PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
    STORAGE(INITIAL 60817408 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "TBLSPC_IGROW"
    2) If anyone has advice on best practices for warehousing data like this, I am very willing to learn from your experience.
    Thanks in advance,
    Carl

    I would strongly suggest not dropping and recreating tables every day.
    The simplest option would be to create a materialized view on the destination database that queries the source database and to do a nightly refresh of that materialized view. You could then create a materialized view log on the source table and then do an incremental refresh of the materialized view.
    You can schedule the refresh of the materialized view either in the materialized view definition, as a separate job, or by creating a refresh group and adding one or more materialized views.
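    A sketch of that setup, assuming the PATIENT table from the question and a database link named prod_link (the link name and refresh schedule here are illustrative):

    -- on the source (production) database: log changes so fast refresh is possible
    CREATE MATERIALIZED VIEW LOG ON patient;

    -- on the warehouse: incrementally refresh a copy every night at 2am over the link
    CREATE MATERIALIZED VIEW patient_mv
      REFRESH FAST
      START WITH SYSDATE
      NEXT TRUNC(SYSDATE) + 1 + 2/24
    AS
      SELECT * FROM patient@prod_link;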
    Justin

  • Kiosk Session Timeout Best Practice

    Dear All,
    I've noticed that user sessions are being timed out after about 3 hrs 20 minutes (presumably this is the default) so people coming in to work in the morning are having to start new sessions which really isn't practical with Citrix.
    The users need to be able to leave a session in the evening and then pick it up again the following morning.
    What are people using as a timeout value for Sun Ray sessions - 12 hours? 24 hours? What's the best thing to do? Will it cause problems if I have lots of sessions hanging around for that length of time?
    Many thanks.
    Chris


  • Question: Best practices for dealing with multiple AM configurations

    Hello all,
    I have a project using ADF Business Components and ADF Faces. I would like to set up multiple configurations for the Application Modules to support the following scenarios:
    1). Local testing and debugging - using a connection defined in JDeveloper and AM Pooling turned off.
    2). Testing and debugging on an application server - using a JDBC Data Source and AM Pooling turned off
    3). Production deployment - using a JDBC Data Source and AM Pooling turned on.
    It is no problem to create multiple AM configurations to reflect this scenario. In order for the web part of the application to use the correct configurations, the DataBindings.cpx file must specify the correct ones. I was thinking of having 3 different DataBindings.cpx files and changing the CpxFileName context-param in the web.xml file as needed.
    My questions:
    1) Does this make sense as an approach? It should be better than having to change a single AM configuration every time I deploy or test. Is there any easy way to keep multiple DataBindings.cpx files in sync, given that we may add new pages from time to time? Alternatively, can we do some type of "include" processing to include just the dataControlUsages section into a common DataBindings.cpx file?
    2) How would you manage the build-and-deploy process? For the most part, in JDev we would be using configuration #1. The only time to switch to configuration #2 or #3 would be to build an EAR file for deployment. Is this something that would make sense to accomplish with ANT? I'm not an ANT expert at all. The ANT script would have "build-test-ear" and "build-prod-ear" targets which would swap in the correct web.xml file, recompile everything, build the EAR, then put the development web.xml file back. I'm relatively sure this is possible... comments?
    3) Is there some other recommended approach?
    I appreciate any insights from experience, or even just ideas or thoughts that I can test out.
    Best regards,
    John

    Hi K,
    Sorry for the long, long delay in responding; I've been traveling. And thanks for the e-mail tickler too...
    To answer your question in short, I do think that ANT is the right way to go; there is an extra ANT task called XMLTask that I was able to download and play with, and it seems it would make this manipulation of the cpx file (or the xcfg file, for that matter) pretty straightforward. I don't have any code to post; it's just in the conceptual stage for me right now. I didn't see anything magical in JDev 11 TP3 that solved this problem for me either.
    Having said all of that, it's more complicated than it might appear. In addition to the DataBindings.cpx file (which stores, among other things, which AM configuration to use for each data control), it's certainly possible to programmatically access an AM (specifying the configuration either directly in the code or via a properties file, etc.). I'm not sure what the most common use case for AM configurations is, but in my case, I have a Test configuration and a Prod configuration. The Test config, among other things, disables AM pooling. When I am developing/testing, I always use the Test config; in production, I always use the Prod config. Perhaps the best way for me to do this would be to have an "Active" config and use ANT tasks to copy either Test or Prod to "Active". However, our Subversion repository is going to have a few complaints about this.
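    For what it's worth, the descriptor swap itself needs nothing beyond core Ant tasks; a rough sketch of the "copy the environment-specific file into place before packaging" idea (all file paths and target names here are made up):

    <!-- Sketch: swap the environment-specific web.xml in before building the EAR. -->
    <target name="use-test-config">
        <copy file="config/test/web.xml"
              tofile="public_html/WEB-INF/web.xml"
              overwrite="true"/>
    </target>
    <target name="use-prod-config">
        <copy file="config/prod/web.xml"
              tofile="public_html/WEB-INF/web.xml"
              overwrite="true"/>
    </target>
    <!-- "compile" and "package-ear" are placeholders for the project's real targets -->
    <target name="build-test-ear" depends="use-test-config, compile, package-ear"/>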
    John

  • Redirection question - best practice

    I have a managed session-scoped bean named UserBean which, as its name implies, stores user information. Now, if the session has expired (or was never created), a lot of its methods will return null values and result in an error. What I'd like to know is the best way to redirect to a login page if the UserBean is null. My first idea was the following:
    <navigation-rule>
            <from-view-id>*</from-view-id>
            <navigation-case>
                <from-outcome>#{UserBean == null}</from-outcome>
                <to-view-id>/login.xhtml</to-view-id>
                <redirect/>
            </navigation-case>
    </navigation-rule>
    However, it didn't work. Am I onto something? If not, what's the best solution?
    I appreciate your help.

    ServletRequest is an interface [1]. In a HTTP servlet environment the ServletRequest instance in the Filter is an implementation of HttpServletRequest [2]. So cast it back.
    [1] http://java.sun.com/javaee/5/docs/api/javax/servlet/ServletRequest.html
    [2] http://java.sun.com/javaee/5/docs/api/javax/servlet/http/HttpServletRequest.html
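    To flesh that out: JSF stores a session-scoped managed bean as a session attribute under its managed-bean-name, so a filter can check for it and redirect. A minimal sketch (the class name and page path are assumptions):

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    public class LoginRedirectFilter implements Filter {
        public void init(FilterConfig filterConfig) throws ServletException {}
        public void destroy() {}

        public void doFilter(ServletRequest request, ServletResponse response,
                             FilterChain chain) throws IOException, ServletException {
            HttpServletRequest req = (HttpServletRequest) request;   // cast back, as noted
            HttpServletResponse res = (HttpServletResponse) response;

            HttpSession session = req.getSession(false); // don't create a new session here
            boolean loggedIn = session != null && session.getAttribute("UserBean") != null;

            if (loggedIn || req.getRequestURI().endsWith("login.xhtml")) {
                chain.doFilter(request, response);       // let the request through
            } else {
                res.sendRedirect(req.getContextPath() + "/login.xhtml");
            }
        }
    }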

  • What are default Zend Session handling best practices to prevent Cross Site Request Forgery?

    I have enjoyed the David Powers book Adobe Dreamweaver CS5 with PHP: Training from the Source, and have put many of the examples into practice. I have a security-related concern that may be tied to the Zend_Auth example in the book. While this is installed and working on my site:
    <?php
    $failed = FALSE;
    if ($_POST) {
      if (empty($_POST['username']) || empty($_POST['password'])) {
        $failed = TRUE;
      } else {
        require_once('library.php');
        // check the user's credentials
        try {
          $auth = Zend_Auth::getInstance();
          $adapter = new Zend_Auth_Adapter_DbTable($dbRead, 'user', 'login', 'user_pass', 'sha1(?)');
          $adapter->setIdentity($_POST['username']);
          $adapter->setCredential($_POST['password']);
          $result = $auth->authenticate($adapter);
          if ($result->isValid()) {
            $storage = $auth->getStorage();
            $storage->write($adapter->getResultRowObject(array(
              'ID', 'login', 'user_first', 'user_last', 'user_role')));
            header('Location: /member/index.php');
            exit;
          } else {
            $failed = TRUE;
          }
        } catch (Exception $e) {
          echo $e->getMessage();
        }
      }
    }
    if (isset($_GET['logout'])) {
      require_once('library.php');
      try {
        $auth = Zend_Auth::getInstance();
        $auth->clearIdentity();
      } catch (Exception $e) {
        echo $e->getMessage();
      }
    }
    Apparently, there is very limited protection against Cross-Site Request Forgery, and the resulting session ID could be easily hijacked? I am using the Zend Community edition (I have 1.11.11). I have an observation from a client that this authentication is not up to snuff.
    To boil it down:
    1. Is there a Zend configuration file that might have some settings to upgrade the session and/or authentication security basics? I'm wondering specifically about the settings in /library/Zend/Session.php, i.e. securing the session against a changing user IP and invoking some other session handling (time-outs, etc.).
    2. If I understand it correctly, "salting" won't help with this unless it's added/checked via a hidden POST at login time?
    Ideally, the man himself, David Powers, would jump in here - but I'll take any help I can get!
    Thanks!
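    On point 2: the hidden-POST idea is essentially a CSRF token. A minimal hand-rolled sketch (simplified, and not from the book):

    <?php
    session_start();
    // issue a per-session token; embed it in the login form as a hidden field:
    //   <input type="hidden" name="csrf_token" value="...the token...">
    if (empty($_SESSION['csrf_token'])) {
      $_SESSION['csrf_token'] = sha1(uniqid(mt_rand(), TRUE));
    }
    // on submission, reject any POST whose token doesn't match the session's
    if ($_POST && (!isset($_POST['csrf_token'])
        || $_POST['csrf_token'] !== $_SESSION['csrf_token'])) {
      $failed = TRUE;
    }

    Zend Framework 1 also ships a form element for this (Zend_Form_Element_Hash), if you build the form with Zend_Form.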

    Might ask them over here.
    http://forums.asp.net/1146.aspx/1?MVC
    Regards, Dave Patrick ....

  • Request/session objects - Best Practice

    I have a simple scenario which I think other people will recognise, and am wondering what the best pattern to use in JSF is:
    I have a summary page which displays orders for the current user in a dataTable. This has a backing bean called orders.
    When the user clicks on an order, it calls an action on the orders object which fetches the specific order from the database and navigates to a second page to display details about the order.
    I don't mind the orders object being a session bean, but I don't really want the order bean to be a session bean; it needs to be a request bean.
    How do I place the order bean somewhere so that it is a request bean on the second page?
    In ASP.NET I could place it in the ViewState before transferring control to the second page, or temporarily put it on the session, then pull it off the session when the second page loads the first time.
    The problem with putting the order object in the session is that it never goes away, and might be confused if the user has multiple browser windows open trying to look at 2 orders at the same time.

    Here's the way I do this kind of thing.
    In this case, I'd have a session bean called orders. It's got an orders property that will return all the orders for display in a dataTable. It's got a reference to a session-scoped bean that contains the id of the currently selected order. When the user selects an order (typically by clicking a commandLink) the id of the selected order is set in a session scoped bean called orderDetailsOptions and the user is navigated to the order details page. I'd have a Refresh button on the page that causes the orders to be reloaded.
    import java.sql.ResultSet;
    import java.util.Map;
    import javax.faces.model.DataModel;
    import javax.faces.model.ResultDataModel;
    import javax.servlet.jsp.jstl.sql.ResultSupport;

    public class OrdersBean {
      private OrderDetailsOptionsBean orderDetailsOptions;
      private DataModel orders;

      private void reset() {
        orders = null;
      }

      public OrdersBean() {
        reset();
      }

      public void setOrderDetailsOptions( OrderDetailsOptionsBean orderDetailsOptions ) {
        this.orderDetailsOptions = orderDetailsOptions;
      }

      public DataModel getOrders() {
        if ( orders == null ) {
          ResultSet rs = doQuery();
          orders = new ResultDataModel( ResultSupport.toResult( rs ) );
        }
        return orders;
      }

      /* Actions */
      public String orderSelected() {
        Map row_data = (Map) orders.getRowData();
        String order_id = (String) row_data.get( "orderId" ); // read from the selected row
        orderDetailsOptions.setOrderId( order_id );
        reset();
        return "toOrderDetails";
      }

      public String refresh() {
        reset();
        return "success";
      }
    }
    The OrderDetailsOptionsBean for the session holds the id of the currently selected order.
    public class OrderDetailsOptionsBean {
      private String order_id;

      public void setOrderId( String order_id ) {
        this.order_id = order_id;
      }

      public String getOrderId() {
        return order_id;
      }
    }
    The OrderDetailsBean is a request bean to get the details for the selected order.
    import java.util.Map;

    public class OrderDetailsBean {
      private OrderDetailsOptionsBean options;
      private String order_id = null;
      private Map fields = null;

      public void setOptions( OrderDetailsOptionsBean options ) {
        this.options = options;
      }

      public String getOrderId() {
        if ( order_id == null ) {
          order_id = options.getOrderId();
        }
        return order_id;
      }

      public Map getFields() {
        if ( fields == null ) {
          getOrderId();
          // Do the query.
        }
        return fields;
      }
    }
    Then here's what's in faces-config for this:
    <managed-bean>
      <managed-bean-name>orders</managed-bean-name>
      <managed-bean-class>OrdersBean</managed-bean-class>
      <managed-bean-scope>session</managed-bean-scope>
      <managed-property>
        <property-name>orderDetailsOptions</property-name>
        <property-class>OrderDetailsOptionsBean</property-class>
        <value>#{orderDetailsOptions}</value>
      </managed-property>
    </managed-bean>
    <managed-bean>
      <managed-bean-name>orderDetailsOptions</managed-bean-name>
      <managed-bean-class>OrderDetailsOptionsBean</managed-bean-class>
      <managed-bean-scope>session</managed-bean-scope>
    </managed-bean>
    <managed-bean>
      <managed-bean-name>orderDetails</managed-bean-name>
      <managed-bean-class>OrderDetailsBean</managed-bean-class>
      <managed-bean-scope>request</managed-bean-scope>
      <managed-property>
        <property-name>options</property-name>
        <property-class>OrderDetailsOptionsBean</property-class>
        <value>#{orderDetailsOptions}</value>
      </managed-property>
    </managed-bean>
    <navigation-rule>
      <from-view-id>orders.jsp</from-view-id>
      <navigation-case>
        <from-outcome>toOrderDetails</from-outcome>
        <to-view-id>orderDetails.jsp</to-view-id>
        <redirect />
      </navigation-case>
    </navigation-rule>

  • Ffmpeg question - best practice

    I have a script I saved and have used for a while without any issues:
    #!/bin/bash
    for i in *.mkv
    do
    ffmpeg -i "$i" -acodec ac3 -vcodec copy "${i%.mkv}.mp4"
    done
    which gives me:
    Stream mapping:
      Stream #0:0 -> #0:0 (copy)
      Stream #0:1 -> #0:1 (ac3 (native) -> ac3 (native))
    My question is: is it better for me to copy the audio instead, and is AC3 -> AC3 going to give me an issue?
    Sometimes I get AAC source audio, which is why I specify AC3.

    psjbeisler wrote: is AC3 -> AC3 going to give me an issue?
    No, other than wasting time re-encoding. You probably wouldn't notice a difference in quality. If something weird happens, like a change in channel layout, then it should be reported upstream.
    psjbeisler wrote: Sometimes I get AAC source audio, which is why I specify AC3.
    AAC is the most common audio format for the MP4 container, so stream-copying it would be the best option.
    qubodup wrote:
    You could check what the codec is:
    codec=`ffprobe video.mkv 2>&1 >/dev/null |grep Stream.*Audio | sed -e 's/.*Audio: //' -e 's/[, ].*//'`
    You can avoid the redirection, grep, and sed (see FFmpeg Wiki: FFprobe Tips).
    $ ffprobe -v error -select_streams a:0 -show_entries stream=codec_name -of default=nw=1:nk=1 input.mkv
    aac
    Note that only the first audio stream will be probed in this example. If there are others they will be ignored. Change "-select_streams a:0" to "-select_streams a" if you want to list all.
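    Putting the two suggestions together, the loop could probe first and only re-encode when the source isn't already AAC (a sketch along those lines, untested):

    #!/bin/bash
    # stream-copy AAC audio, re-encode anything else to AC3; video is always copied
    for i in *.mkv; do
        codec=$(ffprobe -v error -select_streams a:0 \
            -show_entries stream=codec_name -of default=nw=1:nk=1 "$i")
        if [ "$codec" = "aac" ]; then
            ffmpeg -i "$i" -c:a copy -c:v copy "${i%.mkv}.mp4"
        else
            ffmpeg -i "$i" -c:a ac3 -c:v copy "${i%.mkv}.mp4"
        fi
    done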
    Last edited by DrZaius (2015-04-18 23:41:57)

  • Session Data - best practice

    Hi,
    We are designing a Servlet/JSP based application that has a web tier separate from the middle tier.
    One of our apps has a lot of user input, averaging 500k and up to 2MB of data in the request.
    We do not have a way of breaking this application up (i.e. the whole 2MB of form data must be posted at one time).
    We have 2 solutions and want to know which is the better one and why...
    1. Use the session and store all the information in the session.
    2. Use JavaScript to assemble all the data and submit it at one time.
    I prefer #2 because I don't want to use sessions and also because I don't want to use a database on the web tier....
    Please help me explain this to my colleagues, who are convinced that we have to use sessions to store this data.
    -JJ

    I'm not overly familiar with WebLogic clustering, but I assume it is similar in concept to OC4J clustering. The thing you need to be aware of is that any object stored in HttpSession must be completely serializable. The LibrarySession that you create/obtain for a user cannot be serialized, so you need to come up with a technique that allows a user to obtain the same LibrarySession instance, from whatever store it may be in, across multiple requests.
    CM SDK, Files, and Content Services typically achieve high availability through the use of multiple midtiers with a Big-IP in front. Our out-of-the-box applications do not make use of OC4J clustering.
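    To make that concrete, a small sketch (names are illustrative): keep only plain serializable values in HttpSession, and re-acquire non-serializable resources per request using those values.

    import java.io.Serializable;

    // Safe to replicate across cluster nodes: small, fully serializable state.
    public class DraftFormState implements Serializable {
        private static final long serialVersionUID = 1L;

        private final String draftId; // key used to re-fetch the heavy data per request

        public DraftFormState(String draftId) {
            this.draftId = draftId;
        }

        public String getDraftId() {
            return draftId;
        }
    }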
    thanks,
    matt.

  • MiniDV work flow question-best practice

    I've got a client with a Canon DC100 miniDV camcorder. This unit does not seem to have a FireWire or USB port.
    What I have are 17 of these little puppies that I need to get into iMovie so I can teach him the iMovie basics.
    What I think I need is some freestanding reader that plugs into the FireWire/USB port.
    Is there a better way?
    Thanks

    Michael:
    Take a look at this camera here:
    http://www.camcorderinfo.com/content/Canon-DC100-Camcorder-Review.htm
    It records in MPEG-2 format onto miniDVD. You can insert the miniDVDs into your G5's drive and extract and convert the movies to DV. As far as I know, you can run into problems inserting miniDVDs/CDs in slot-loading drives, but not in a standard one.
    The camera has an AV output, but you need an A/D converter to digitize the video. If your customer wants to learn to edit his home videos, he should change to a miniDV (tape) consumer camera instead of getting other hardware to work with this one.
      Alberto

  • Database Primary Key Question - Best Practices

    I posted this in the ADDT forum, but I imagine I'll get more responses here:
    All you database developers - how do you deal with primary keys? Do you ALWAYS use an AutoIncrement/AutoNumber? Or only sometimes? Is there an argument to NOT use AutoIncrement? I know how I create databases and how I usually do things. I know how a few of my colleagues work. But how about the rest of the world? (Research for an MS Access book I am involved with.)
    Alec
    Adobe Community Expert

    .oO(Alec)
    >I posted this in the ADDT forum, but I imagine I'll get more responses here:
    >All you database developers - how do you deal with primary keys? Do you ALWAYS use an AutoIncrement/AutoNumber?
    No.
    >Or only sometimes? Is there an argument to NOT use AutoIncrement?
    AUTO_INCREMENT is a proprietary MySQL feature. For some people this might be an argument against it, but it doesn't have to be. Every DBMS has its own special features. You just have to decide whether you want to keep your code/queries as portable as possible or want to get the most out of your DB. Usually I prefer performance/features over portability, simply because for me and my projects it's very unlikely that I'll have to change the DBMS. I've chosen MySQL for good reasons and will stay with it for quite a while.
    >I know how I create databases and how I usually do things. I know how a few of my colleagues work. But how about the rest of the world? (Research for an MS Access book I am involved with.)
    It always depends on the table itself, what data it contains, what I want to do with it, and also some personal preferences. In n:m tables, for example, there's no need for an extra numeric PK, since the entire record already is the PK, built from two or more FKs.
    But if I need a numeric PK, I usually use sequences. Some DBMS support them natively; in MySQL they can be emulated with an extra table. It simply means that the PK number is generated _before_ the record itself is inserted. For me and my framework this has some advantages (it makes the internal work a bit easier), but of course in other cases an AUTO_INCREMENT might be more appropriate.
    So IMHO there's no general solution. If an AUTO_INCREMENT or something similar fits your needs, you should use it. I don't see a real problem with that.
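    For reference, the emulation Micha mentions is usually a one-row table plus LAST_INSERT_ID(expr); a sketch (the table name here is made up):

    -- one-row helper table standing in for a sequence
    CREATE TABLE seq_orders (id INT NOT NULL);
    INSERT INTO seq_orders VALUES (0);

    -- claim the next value atomically, *before* inserting the real record
    UPDATE seq_orders SET id = LAST_INSERT_ID(id + 1);
    SELECT LAST_INSERT_ID();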
    Micha
