ADF features to improve performance of applications

Hello,
While trying to find a fix for my application (built using ADF and JSF), which is very, very slow, I came across an article on the Oracle site with the following suggestions:
1. In web.xml, disable oracle.adf.view.faces.CHECK_FILE_MODIFICATION. I know that in my application it is effectively set to true, because I get the warning that this option should not be used in a production environment. But when I looked at web.xml, I did not see any mention of this property.
2. In web.xml, disable oracle.adf.view.faces.DEBUG_JAVASCRIPT. See my web.xml below; it does not even mention this property.
3. In adf-faces-config.xml, set <debug-output> to false. See my adf-faces-config.xml below; it does not even mention this property.
Is there anything else that you see (or don't see) that could enhance performance?
My adf-faces-config.xml:
<?xml version="1.0" encoding="UTF-8"?>
<adf-faces-config xmlns="http://xmlns.oracle.com/adf/view/faces/config">
<skin-family>oracle</skin-family>
<client-validation-disabled>false</client-validation-disabled>
<accessibility-mode>inaccessible</accessibility-mode>
</adf-faces-config>
My web.xml:
<?xml version = '1.0' encoding = 'UTF-8'?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd"
version="2.4" xmlns="http://java.sun.com/xml/ns/j2ee">
<description>Web.xml file </description>
<filter>
<filter-name>adfFaces</filter-name>
<filter-class>oracle.adf.view.faces.webapp.AdfFacesFilter</filter-class>
</filter>
<filter-mapping>
<filter-name>adfFaces</filter-name>
<servlet-name>Faces Servlet</servlet-name>
<dispatcher>FORWARD</dispatcher>
<dispatcher>REQUEST</dispatcher>
</filter-mapping>
<listener>
<listener-class>com.epam.hdrm.client.view.listener.CloseSessionListener</listener-class>
</listener>
<!-- JSF section-->
<servlet>
<servlet-name>Faces Servlet</servlet-name>
<servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
<load-on-startup>1</load-on-startup>
</servlet>
<servlet>
<servlet-name>resources</servlet-name>
<servlet-class>oracle.adf.view.faces.webapp.ResourceServlet</servlet-class>
</servlet>
<servlet>
<servlet-name>WorkflowRequestDetails</servlet-name>
<servlet-class>com.jasmine.hdrm.client.servlet.RequestDetails</servlet-class>
</servlet>
<servlet>
<servlet-name>adfAuthentication</servlet-name>
<servlet-class>oracle.adf.share.security.authentication.AuthenticationServlet</servlet-class>
<init-param>
<param-name>success_url</param-name>
<param-value>pages/dashboard.jspx</param-value>
</init-param>
<load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
<servlet-name>Faces Servlet</servlet-name>
<url-pattern>*.html</url-pattern>
</servlet-mapping>
<context-param>
<param-name>javax.faces.DEFAULT_SUFFIX</param-name>
<param-value>.jspx</param-value>
</context-param>
<context-param>
<param-name>javax.faces.STATE_SAVING_METHOD</param-name>
<param-value>client</param-value>
</context-param>
<servlet-mapping>
<servlet-name>resources</servlet-name>
<url-pattern>/adf/*</url-pattern>
</servlet-mapping>
<servlet-mapping>
<servlet-name>RequestDetails</servlet-name>
<url-pattern>/requestdetails</url-pattern>
</servlet-mapping>
<servlet-mapping>
<servlet-name>adfAuthentication</servlet-name>
<url-pattern>/adfAuthentication/*</url-pattern>
</servlet-mapping>
<session-config>
<session-timeout>60</session-timeout>
</session-config>
<!-- MIME section-->
<mime-mapping>
<extension>html</extension>
<mime-type>text/html</mime-type>
</mime-mapping>
<mime-mapping>
<extension>txt</extension>
<mime-type>text/plain</mime-type>
</mime-mapping>
<jsp-config/>
<security-constraint>
<web-resource-collection>
<web-resource-name>allPages</web-resource-name>
<url-pattern>/pages/*</url-pattern>
</web-resource-collection>
<auth-constraint>
<role-name>myUser</role-name>
</auth-constraint>
</security-constraint>
<security-constraint>
<web-resource-collection>
<web-resource-name>adfAuthentication</web-resource-name>
<url-pattern>/adfAuthentication</url-pattern>
</web-resource-collection>
<auth-constraint>
<role-name>myUser</role-name>
</auth-constraint>
</security-constraint>
<login-config>
<auth-method>FORM</auth-method>
<form-login-config>
<form-login-page>login.html</form-login-page>
<form-error-page>error.html</form-error-page>
</form-login-config>
</login-config>
<security-role>
<role-name>myUser</role-name>
</security-role>
</web-app>

In the production environment, CHECK_FILE_MODIFICATION has to be set to false.
    <context-param>
        <description>If this parameter is true, there will be an automatic check of the modification date of your JSPs, and saved state will be discarded when JSPs change. It will also automatically check whether your skinning CSS files have changed without you having to restart the server. This makes development easier but adds overhead. For this reason this parameter should be set to false when your application is deployed.</description>
        <param-name>org.apache.myfaces.trinidad.CHECK_FILE_MODIFICATION</param-name>
        <param-value>false</param-value>
    </context-param>
It's there in a version 2.5 web.xml:
<?xml version = '1.0' encoding = 'windows-1252'?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" version="2.5" xmlns="http://java.sun.com/xml/ns/javaee">
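For items 2 and 3 in the question, the settings are simply absent from the posted files. A minimal sketch of how they could be set explicitly is below; the parameter name is taken from the question and assumed to match your ADF Faces release, so verify it (and the Trinidad equivalent) against your version's documentation.
In web.xml:
    <context-param>
        <param-name>oracle.adf.view.faces.DEBUG_JAVASCRIPT</param-name>
        <param-value>false</param-value>
    </context-param>
In adf-faces-config.xml, inside the <adf-faces-config> element:
    <debug-output>false</debug-output>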

Similar Messages

  • BIA to improve performance for BPS Applications

    Hi All,
    Is it possible to improve the performance of BPS applications using BIA? Currently we are running applications on BI-BPS which, because of the huge range of periods, are having performance issues.
    Could you please share whether BIA would help with the read and write operations of BPS, and to what extent performance can be increased?
    Requesting an early reply, as the system is in really bad shape and users are grappling with poor performance.
    Rgds,
    Rajeev

    Hi Rajeev,
    If the performance issue you are facing occurs while running the query on the real-time (transactional) InfoCube used in BPS, then BIA can help. The closed requests from the real-time cube can be indexed in BIA. At query runtime, the analytic engine reads data from the database for the open request and from BIA for the closed and indexed requests. It combines this data with the plan buffer cache and produces the result.
    Hence, if you are facing an issue with query response time, BIA will definitely help.
    Regards,
    Praveen

  • FI-CA events to improve performance

    Hello experts,
    Does anybody use the FI-CA events to improve the extraction performance for datasources 0FC_OP_01 and 0FC_CI_01 (open and cleared items)?
    It seems that these specific exits associated with BW events have been developed especially to improve performance.
    Any documentation or guide would be appreciated.
    Thanks.
    Thibaud.

    Thanks to all for the replies
    @Sybrand
    Please answer first whether the column is stored in a separate lobsegment.
    No. The table, index, LOB, and LOB index all use the same tablespace. I missed adding this point (moving to a separate tablespace) as part of the table modifications.
    @Hemant
    There's a famous paper / blog post about CLOBs and Database Flashback. If I find it, I'll post the URL.
    Is this the one you are referring to?
    http://laimisnd.wordpress.com/2011/03/25/lobs-and-flashback-database-performance/
    By moving the CLOB column to a tablespace with a different block size, I will test the performance improvement it gives and share the results.
    We don't need any data from this table. The XML file contains details about fingerprints, and once the application server completes the job, the XML data is deleted from this table.
    So there is no need for backup/recovery operations on this table. The client will be able to replay the transactions if any problem occurs.
    @Billy
    We are not performing XML parsing on the DB side. The flow is: get the XML data from the client -> insert into the table -> the client selects from the table -> upon successful completion of the job on the client, the XML data gets deleted.
    Regarding binding the LOB from the client side, we will check that as well to reduce round trips.
    By changing the block size, I can set db_32k_cache_size=2G and keep this table in the CACHE. If I put my table directly in the CACHE, it will age out everything else from the buffer, which makes things worse for us.
    This insert is part of a transaction (registration of a fingerprint), and it is the only statement taking significant time compared to the other statements in the transaction.
    Thanks,
    Arun

  • How to preload sound into memory to improve performance?

    Hello all
    I have an application that needs to play 4 different short wave files on certain events. The wave files are small (less than 1 sec each), so they can be preloaded into memory, but I don't really know how to do that. This is my current code... Performance is really important here, so the faster users can hear the sounds, the better...
     import java.io.*;
     import javax.sound.sampled.*;
     import javax.swing.*;
     import java.awt.event.*;
     public class PlaySound implements ActionListener {
          private Clip clip = null;
          public void play(String name) {
               if (clip != null) {
                    clip.stop();
                    clip = null;
               }
               loadClip(name);
               if (clip != null) {
                    clip.start();
               }
          }
          private void loadClip(String fnm) {
               try {
                    AudioInputStream stream = AudioSystem.getAudioInputStream(new File(fnm + ".wav"));
                    AudioFormat format = stream.getFormat();
                    DataLine.Info info = new DataLine.Info(Clip.class, format);
                    if (!AudioSystem.isLineSupported(info)) {
                         JOptionPane.showMessageDialog(null, "Unsupported sound line", "Warning!", JOptionPane.WARNING_MESSAGE);
                    } else {
                         clip = (Clip) AudioSystem.getLine(info);
                         clip.open(stream);
                         stream.close();
                    }
               } catch (Exception e) {
                    JOptionPane.showMessageDialog(null, "loadClip E: " + e.toString(), "Warning!", JOptionPane.WARNING_MESSAGE);
               }
          }
          public void actionPerformed(ActionEvent e) {
               // ActionListener hook; body not shown in the original post
          }
          public static void main(String[] args) {
               new PlaySound().play("a wav file name");
          }
     }
     I would appreciate it if someone can point out how I can preload them to improve performance... Thanks in advance!

    The message above should be:
    OMG, me dumb you smart Florian...
    Thank you for your suggestion... While it's not the best or anything close to what I thought it would be, it's certainly one way to do it and better than what I've got now...
    Thanks again Florian, I really appreciate it!!
    BTW, is there anything that would produce the sound faster than this?
    Message was edited by:
    BuggyVB
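    For reference, a minimal sketch of one way to preload the clips asked about above, assuming javax.sound.sampled Clips are opened once at startup and kept in a map keyed by name (the PreloadedSounds class and method names are illustrative, not from this thread):
     import java.io.File;
     import java.util.HashMap;
     import java.util.Map;
     import javax.sound.sampled.AudioInputStream;
     import javax.sound.sampled.AudioSystem;
     import javax.sound.sampled.Clip;
     // Loads each short wave file once and reuses the open Clip, so playback
     // only rewinds and starts instead of reading and decoding the file again.
     public class PreloadedSounds {
          private final Map<String, Clip> clips = new HashMap<String, Clip>();
          public void preload(String... names) throws Exception {
               for (String name : names) {
                    AudioInputStream stream = AudioSystem.getAudioInputStream(new File(name + ".wav"));
                    Clip clip = AudioSystem.getClip();
                    clip.open(stream);            // the short file is decoded into memory here
                    stream.close();
                    clips.put(name, clip);
               }
          }
          public void play(String name) {
               Clip clip = clips.get(name);
               if (clip == null) {
                    return;                       // not preloaded
               }
               clip.stop();                       // in case it is still playing
               clip.setFramePosition(0);          // rewind to the beginning
               clip.start();
          }
     }
    With this approach the wave files are read and decoded once at startup, and play() only rewinds an already open line, which avoids the per-playback file I/O in the original code.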

  • How do I improve performance while doing pull, push and delete from Azure Storage Queue

           
    Hi,
    I am working on a distributed application with Azure Storage Queue for message queuing. The queue will be used by multiple clients around the clock, so it is expected to be heavily loaded most of the time. The business case is typical: pull a message from the queue, process it, then delete it from the queue. This module also sends back a notification to the user indicating that processing is complete. The functions/modules work fine in that they meet the logical requirements; it is a pretty typical queue scenario.
    Now, the problem statement. Since the queue is expected to be heavily loaded most of the time, I am trying to speed up the overall message lifetime. The faster I can clear messages, the better the overall experience for everyone, system and users.
    To improve performance I ran multiple cycles of performance profiling and then improved the identified "hot" paths/functions.
    It all came down to the point where the Azure queue pull and delete are the two most time-consuming external calls. I improved the pull by batch-pulling 32 messages at a time (the maximum message count that can be pulled from an Azure queue at once at the time of writing this question), which reduced processing time by a big margin. All good up to this point as well.
    I am processing these messages in parallel to improve overall performance.
    pseudo code:
    // AzureQueue class encapsulates calls to Azure Storage Queue.
    // Assume nothing fancy inside; vanilla calls to the queue for pull/push/delete.
    var batchMessages = AzureQueue.Pull(32);
    Parallel.ForEach(batchMessages, bMessage =>
    {
        try { DoSomething(bMessage); }   // DoSomething does some background processing
        catch { /* log the exception */ }
        AzureQueue.Delete(bMessage);
    });
    With this change, profiling now shows that up to 90% of the time is taken by the Azure message delete calls alone. Since it is good to delete a message as soon as processing is done, I remove it right after "DoSomething" finishes.
    What I need now are suggestions on how to further improve the performance of this function when 90% of the time is eaten up by the Azure queue delete call itself. Is there a better, faster way to perform the delete, a bulk delete, etc.?
    With the implementation mentioned here, I get a throughput of close to 25 messages/sec. Right now the Azure queue delete calls are choking application performance, so is there any hope of pushing it further?
    Does it also make a difference to performance which queue delete overload I am calling? As of now the queue has an overloaded method for deleting a message: one which accepts the message object and another which accepts the message identifier and pop receipt. I am using the latter here, with the message identifier and pop receipt, to delete the message from the queue.
    Let me know if you need any additional information or any clarification of the question.
    Inputs/suggestions are welcome.
    Many thanks.

    The first thing that came to mind was to run a parallel delete at the same time you run the work in DoSomething.  If DoSomething fails, add the message back into the queue.  This won't work for every application, and work that was near the head of the queue could be pushed back to the tail, so you'd have to think about how that may affect your workload.
    Or, make a threadpool-queued delete after the work was successful.  Fire and forget.  However, if you're processing at 25/sec and 90% of the time sits in the delete, you'd quickly accumulate delete calls for the threadpool until you'd never catch up.  At a 70-80% duty cycle this may work, but the closer you get to always being busy, the more dangerous this becomes.
    I wonder if calling the delete REST API yourself may offer any improvement.  If you find the delete sets up a TCP connection each time, this may be all you need.  Try to keep the connection open, or see if the REST API can delete more at a time than the SDK API can.
    Or, if you have the funds, just run more VM instances doing the work in parallel, so the first machine handles 25/sec, the second another 25/sec, and you just live with the slow delete.  If that's still not good enough, add more instances.
    Darin R.
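    A minimal Java sketch of the threadpool-queued ("fire and forget") delete suggested above, reusing a hypothetical AzureQueue wrapper like the one in the original pseudocode (the wrapper interfaces, method names, and pool size below are assumptions for illustration, not part of any Azure SDK):
     import java.util.List;
     import java.util.concurrent.ExecutorService;
     import java.util.concurrent.Executors;
     // Hypothetical wrapper types standing in for the poster's AzureQueue class.
     interface QueueMessage {}
     interface AzureQueue {
          List<QueueMessage> pull(int maxMessages);
          void delete(QueueMessage message);
     }
     public class QueueWorker {
          // Small pool dedicated to deletes; its internal work queue is unbounded,
          // so a real system would bound it to get back-pressure.
          private final ExecutorService deletePool = Executors.newFixedThreadPool(8);
          public void processBatch(AzureQueue queue) {
               List<QueueMessage> batch = queue.pull(32);           // batch pull, as in the pseudocode
               batch.parallelStream().forEach(message -> {
                    try {
                         doSomething(message);                      // background processing
                         // Fire and forget: hand the slow delete to the pool and move on.
                         deletePool.submit(() -> queue.delete(message));
                    } catch (Exception e) {
                         // Log and leave the message in the queue; its visibility timeout
                         // will expire and it will be picked up again.
                    }
               });
          }
          private void doSomething(QueueMessage message) {
               // Application-specific work.
          }
     }
    The caveat from the reply still applies: if deletes are produced faster than the pool drains them, the backlog grows, so the pool or its queue needs a bound once you approach a 100% duty cycle.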

  • Error occurred in deployment step 'Activate Features': Attempted to perform an unauthorized operation.

    Hi,
    I'm unable to deploy a custom workflow/ visual web part or anything using Visual Studio.
    Here is the Output trace:
    ------ Build started: Project: TestWorkflow, Configuration: Debug Any CPU ------
    TestWorkflow -> E:\Codes\TestWorkflow\TestWorkflow\bin\Debug\TestWorkflow.dll
    Successfully created package at: E:\Codes\TestWorkflow\TestWorkflow\bin\Debug\TestWorkflow.wsp
    ------ Deploy started: Project: TestWorkflow, Configuration: Debug Any CPU ------
    Active Deployment Configuration: Default
    Run Pre-Deployment Command:
    Skipping deployment step because a pre-deployment command is not specified.
    Recycle IIS Application Pool:
    Recycling IIS application pool 'SharePoint - 1111'...
    Retract Solution:
    Retracting solution 'Testworkflow.wsp'...
    Deleting solution 'Testworkflow.wsp'...
    Add Solution:
    Adding solution 'TestWorkflow.wsp'...
    Deploying solution 'TestWorkflow.wsp'...
    Activate Features:
    Activating feature 'Feature1' ...
    Error occurred in deployment step 'Activate Features': Attempted to perform an unauthorized operation.
    ========== Build: 1 succeeded or up-to-date, 0 failed, 0 skipped ==========
    ========== Deploy: 0 succeeded, 1 failed, 0 skipped ==========
    Points to be noted:
    I have opened the visual studio using the System Account and "Run as Administrator".
    I checked the User policy for the particular web app and it has Full read and Full control for the System account
    I checked SharePoint Administration under Services and verified the system account credentials.
    Can you please help me solve this?
    Thanks,
    Sachin

    Please try giving permissions:
    - Open the SharePoint 2010 Central Administration.
    - Go to Application Management -> Change site collection administrators.
    - Select the correct site collection and type your Windows account as the secondary site collection administrator.
    - Try deploying now; it should work.
    or
    You can grant permissions in "SITE ACTIONS > SITE PERMISSIONS".
    Ref: http://learnsharepointwithme.blogspot.in/2013/04/error-occurred-in-deployment-step.html
    Please remember to click 'Mark as Answer' on the answer if it helps you.

  • I have a MAC Pro from 2011 currently running MAC OS 10.9.5.  This weekend I cloned the MAC HD drive to a new SSD drive for improved performance.  The clone was completed successfully with no errors.  After the clone completed I successfully restarted my system

    I have a MAC Pro from 2011 currently running MAC OS 10.9.5.  This weekend I cloned the MAC HD drive to a new SSD drive for improved performance.  The clone was completed successfully with no errors.  After the clone completed I successfully restarted my system using the SSD as the boot device.  I then successfully tested all of my products, including Photoshop CS6 and all of its plug-ins.  I successfully tested the key features that I frequently use.  Today, while attempting to launch Photoshop CS6, a message is displayed indicating that a scratch disk cannot be found.  All drives are available on the system via the Finder and Disk Utility.  I can access all drives, including the old MAC HD, which is no longer the boot device.  I've even attempted to launch Photoshop from the old device, yet the same error persists.  Is there a way to review/edit/change Photoshop preferences if Photoshop doesn't launch?  I've even restarted my system several times to see if that would resolve the issue.  Does anyone have any recommendations for this issue?  Have you previously addressed this issue?
    Thank you Gregg Williams

    Boilerplate text:
    Reset Preferences
    http://forums.adobe.com/thread/375776
    1) Close the program and press Ctrl+Alt+Shift/Cmd+Option+Shift during startup (not reversible)
    or
    2) Move the Folder. See:
    http://www.bugge.com/Family-and-friends/Illy/illy.html
    --OB

  • Times ten to improve performance for search results in Oracle eBS

    Hi ,
    We have various search scenarios in our ERP implementation using Oracle Apps eBS, for example searching for an item. Oracle Apps does provide item search, but the performance is not great. We have about 30 million items, and hence, to improve the performance of the search, we thought TimesTen may help.
    Can anyone please clarify whether TimesTen can be used to improve performance on the eBS database, and if yes, how?

    Vikash,
    We were thinking along the same lines (using TimesTen for massive item search in e-Business Suite); in our case, it is massive item / parametric search leveraging the Product Information Management application. We were thinking about setting up a POC on a Linux server with a Vision instance. Shall we compare notes?
    SParker

  • How to run query in parallel  to improve performance

    I am using ALDSP 2.5. My data tables are split 12 ways, based on a hash of a particular column. I have a query to get the data I am looking for; however, this data is split across the 12 tables. So even though my query is the same, I need to run it against 12 tables instead of 1. I want to run all 12 queries in parallel instead of one by one, collapse the returned datasets, and return the result to the caller. How can I do this in ALDSP?
    To be specific, I will call below operation to get data:
    declare function ds:SOA_1MIN_POOL_METRIC() as element(tgt:SOA_1MIN_POOL_METRIC_00)* {
    src0:SOA_1MIN_POOL_METRIC(),
    src1:SOA_1MIN_POOL_METRIC(),
    src2:SOA_1MIN_POOL_METRIC(),
    src3:SOA_1MIN_POOL_METRIC(),
    src4:SOA_1MIN_POOL_METRIC(),
    src5:SOA_1MIN_POOL_METRIC(),
    src6:SOA_1MIN_POOL_METRIC(),
    src7:SOA_1MIN_POOL_METRIC(),
    src8:SOA_1MIN_POOL_METRIC(),
    src9:SOA_1MIN_POOL_METRIC(),
    src10:SOA_1MIN_POOL_METRIC(),
    src11:SOA_1MIN_POOL_METRIC()
    };
    This method acts as a proxy; it aggregates data from 12 data tables:
    src0:SOA_1MIN_POOL_METRIC() gets data from the SOA_1MIN_POOL_METRIC_00 table,
    src1:SOA_1MIN_POOL_METRIC() gets data from the SOA_1MIN_POOL_METRIC_01 table, and so on.
    The data source of each table is different (src0, src1, etc.). How can I run these queries in parallel to improve performance?

    Thanks Mike.
    The async function works; from the log I can see the queries are executed in parallel.
    But the behavior is confusing: with the same input, sometimes it gives me the right result, and sometimes (especially when there are a few other applications running on the machine) it throws the exception below:
    java.lang.IllegalStateException
         at weblogic.xml.query.iterators.BasicMaterializedTokenStream.deRegister(BasicMaterializedTokenStream.java:256)
         at weblogic.xml.query.iterators.BasicMaterializedTokenStream$MatStreamIterator.close(BasicMaterializedTokenStream.java:436)
         at weblogic.xml.query.runtime.core.RTVariable.close(RTVariable.java:54)
         at weblogic.xml.query.runtime.core.RTVariableSync.close(RTVariableSync.java:74)
         at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
         at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
         at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
         at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
         at weblogic.xml.query.runtime.core.IfThenElse.close(IfThenElse.java:99)
         at weblogic.xml.query.runtime.core.CountMapIterator.close(CountMapIterator.java:222)
         at weblogic.xml.query.runtime.core.LetIterator.close(LetIterator.java:140)
         at weblogic.xml.query.runtime.constructor.SuperElementConstructor.prepClose(SuperElementConstructor.java:183)
         at weblogic.xml.query.runtime.constructor.PartMatElemConstructor.close(PartMatElemConstructor.java:251)
         at weblogic.xml.query.runtime.querycide.QueryAssassin.close(QueryAssassin.java:65)
         at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
         at weblogic.xml.query.runtime.core.QueryIterator.close(QueryIterator.java:146)
         at com.bea.ld.server.QueryInvocation.getResult(QueryInvocation.java:462)
         at com.bea.ld.EJBRequestHandler.executeFunction(EJBRequestHandler.java:346)
         at com.bea.ld.ServerBean.executeFunction(ServerBean.java:108)
         at com.bea.ld.Server_ydm4ie_EOImpl.executeFunction(Server_ydm4ie_EOImpl.java:262)
         at com.bea.dsp.dsmediator.client.XmlDataServiceBase.invokeFunction(XmlDataServiceBase.java:312)
         at com.bea.dsp.dsmediator.client.XmlDataServiceBase.invoke(XmlDataServiceBase.java:231)
         at com.ebay.rds.dao.SOAMetricDAO.getMetricAggNumber(SOAMetricDAO.java:502)
         at com.ebay.rds.impl.NexusImpl.getMetricAggNumber(NexusImpl.java:199)
         at com.ebay.rds.impl.NexusImpl.getMetricAggNumber(NexusImpl.java:174)
         at RDSWS.getMetricAggNumber(RDSWS.jws:240)
         at jrockit.reflect.VirtualNativeMethodInvoker.invoke(Ljava.lang.Object;[Ljava.lang.Object;)Ljava.lang.Object;(Unknown Source)
         at java.lang.reflect.Method.invoke(Ljava.lang.Object;[Ljava.lang.Object;I)Ljava.lang.Object;(Unknown Source)
         at com.bea.wlw.runtime.core.dispatcher.DispMethod.invoke(DispMethod.java:371)
    Below is my code example. First I get data from all 12 queries, each enclosed in an fn-bea:async function; finally, I do a group-by aggregation over the whole data set. Is it possible that the exception occurs because some threads have not returned data yet but the aggregation has already started?
    The metricName, serviceName, opName, and $soaDbRequest values are simply passed in from the operation parameters.
    let $METRIC_RESULT :=
            fn-bea:async(
                for $SOA_METRIC in ns20:getMetrics($metricName,$serviceName,$opName,"")
                for $SOA_POOL_METRIC in src0:SOA_1MIN_POOL_METRIC()
                where
                $SOA_POOL_METRIC/SOA_METRIC_ID eq fn-bea:fence($SOA_METRIC/SOA_METRIC_ID)
                and $SOA_POOL_METRIC/CAL_CUBE_ID  ge fn-bea:fence($soaDbRequest/ns16:StartTime)  
                and $SOA_POOL_METRIC/CAL_CUBE_ID lt fn-bea:fence($soaDbRequest/ns16:EndTime )
                and ( $SOA_POOL_METRIC/SOA_SERVICE_ID eq fn-bea:fence($soaDbRequest/ns16:ServiceID)
                   or (0 eq fn-bea:fence($soaDbRequest/ns16:ServiceID)))
                and ( $SOA_POOL_METRIC/POOL_ID eq fn-bea:fence($soaDbRequest/ns16:PoolID)
                   or (0 eq fn-bea:fence($soaDbRequest/ns16:PoolID)))
                and ( $SOA_POOL_METRIC/SOA_USE_CASE_ID eq fn-bea:fence($soaDbRequest/ns16:UseCaseID)
                   or (0 eq fn-bea:fence($soaDbRequest/ns16:UseCaseID)))
                and ( $SOA_POOL_METRIC/ROLE_TYPE eq fn-bea:fence($soaDbRequest/ns16:RoleID)
                   or (-1 eq fn-bea:fence($soaDbRequest/ns16:RoleID)))
                return
                $SOA_POOL_METRIC
               fn-bea:async(for $SOA_METRIC in ns20:getMetrics($metricName,$serviceName,$opName,"")
                for $SOA_POOL_METRIC in src1:SOA_1MIN_POOL_METRIC()
                where
                $SOA_POOL_METRIC/SOA_METRIC_ID eq fn-bea:fence($SOA_METRIC/SOA_METRIC_ID)
                and $SOA_POOL_METRIC/CAL_CUBE_ID  ge fn-bea:fence($soaDbRequest/ns16:StartTime)  
                and $SOA_POOL_METRIC/CAL_CUBE_ID lt fn-bea:fence($soaDbRequest/ns16:EndTime )
                and ( $SOA_POOL_METRIC/SOA_SERVICE_ID eq fn-bea:fence($soaDbRequest/ns16:ServiceID)
                   or (0 eq fn-bea:fence($soaDbRequest/ns16:ServiceID)))
                and ( $SOA_POOL_METRIC/POOL_ID eq fn-bea:fence($soaDbRequest/ns16:PoolID)
                   or (0 eq fn-bea:fence($soaDbRequest/ns16:PoolID)))
                and ( $SOA_POOL_METRIC/SOA_USE_CASE_ID eq fn-bea:fence($soaDbRequest/ns16:UseCaseID)
                   or (0 eq fn-bea:fence($soaDbRequest/ns16:UseCaseID)))
                and ( $SOA_POOL_METRIC/ROLE_TYPE eq fn-bea:fence($soaDbRequest/ns16:RoleID)
                   or (-1 eq fn-bea:fence($soaDbRequest/ns16:RoleID)))
                return
                $SOA_POOL_METRIC
             ... //12 similar queries
            for $Metric_data in $METRIC_RESULT    
            group $Metric_data as $Metric_data_Group        
            by   $Metric_data/ROLE_TYPE as $role_type_id  
            return
            <ns0:RawMetric>
                <ns0:endTime?></ns0:endTime>
                <ns0:target?>{$role_type_id}</ns0:target>
    <ns0:value0>{fn:sum($Metric_data_Group/METRIC_COMPONENT_VALUE0)}</ns0:value0>
    <ns0:value1>{fn:sum($Metric_data_Group/METRIC_COMPONENT_VALUE1)}</ns0:value1>
    <ns0:value2>{fn:sum($Metric_data_Group/METRIC_COMPONENT_VALUE2)}</ns0:value2>
    <ns0:value3>{fn:sum($Metric_data_Group/METRIC_COMPONENT_VALUE3)}</ns0:value3>
    </ns0:RawMetric>
    Could you tell me why the result is unstable? Thanks!

  • How to improve performance on MacBook Pro 10.5.8?

    I have a MacBook Pro running 10.5.8 that has become way too slow. It hesitates between keystrokes and the response, and it is slow to search and navigate websites.
    I just checked and there are no updates due.
    What can I do to improve performance?

    Please read this whole message before doing anything.
    This procedure is a diagnostic test. It won’t solve your problem. Don’t be disappointed when you find that nothing has changed after you complete it.
    Third-party system modifications are a common cause of usability problems. By a “system modification,” I mean software that affects the operation of other software -- potentially for the worse. The following procedure will help identify which such modifications you've installed. Don’t be alarmed by the complexity of these instructions -- they’re easy to carry out and won’t change anything on your Mac.
    These steps are to be taken while booted in “normal” mode, not in safe mode. If you’re now running in safe mode, reboot as usual before continuing.
    Below are several lines of text in monospaced type, which are UNIX shell commands. They’re harmless, but they must be entered exactly as given in order to work. If you have doubts about the safety of running these commands, search this site for other discussions in which they’ve been used without any report of ill effects.
    Some of the commands will line-wrap in your browser, but each one is really just a single line, all of which must be selected. You can accomplish this easily by triple-clicking anywhere in the line. The whole line will highlight, and you can then either copy or drag it. The headings “Step 1” and so on are not part of the commands.
    Note: If you have more than one user account, Step 2 must be taken as an administrator. Ordinarily that would be the user created automatically when you booted the system for the first time. The other steps should be taken as the user who has the problem, if different. Most personal Macs have only one user, and in that case this paragraph doesn’t apply.
    To begin, launch the Terminal application; e.g., by entering the first few letters of its name in a Spotlight search. A text window will open with a line already in it, ending either in a dollar sign (“$”) or a percent sign (“%”). If you get the percent sign, enter “sh” (without the quotes) and press return. You should then get a new line ending in a dollar sign.
    Step 1
    Copy or drag -- do not type -- the line below into the Terminal window, then press return:
    kextstat -kl | awk '!/com\.apple/ {print $6 $7}'
    Post the lines of output (if any) that appear below what you just entered (the text, please, not a screenshot.)
    Step 2
    Repeat with this line:
    sudo launchctl list | sed 1d | awk '!/0x|com\.apple/ {print $3}'
    This time, you'll be prompted for your login password, which won't be displayed when you type it. You may get a one-time warning not to screw up. You don't need to post the warning.
    Step 3
    launchctl list | sed 1d | awk '!/0x|com\.apple/ {print $3}'
    Step 4
    ls -1A /e*/mach* {,/}L*/{Ad,Compon,Ex,Fram,In,Keyb,La,Mail/Bu,P*P,Priv,Qu,Scripti,Servi,Sta}* L*/Fonts 2> /dev/null
    Important: If you synchronize with a MobileMe account, your me.com email address may appear in the output of the above command. If so, change it to something like “[email protected]” before posting.
    Step 5
    osascript -e 'tell application "System Events" to get the name of every login item'
    Remember, steps 1-5 are all drag-and-drop or copy-and-paste, whichever you prefer -- no typing, except your password.
    You can then quit Terminal.

  • Brush Lag? - Here are OpenGL settings that improve performance

    For anyone having horrible Brush lag, please try the following settings and report your findings in this thread.
    After much configuring and comparing CS4 vs CS3, I found that these settings do improve CS4's brush lag significantly. CS3 is still faster, but these settings made CS4 brush strokes a lot more responsive.
    Please try these settings and share your experiences.
    NOTE: these do not improve clone tool performance; the best way to improve clone performance right now seems to be to turn off the clone tool's overlay feature.
    Perhaps Adam or Chris from Adobe could explain what is happening here. The most significant option for improving performance appears to be "Use for Image Display - OFF". I have no idea what this feature does or does not do, but it does seem to be the biggest performance hit. The next most influential setting seems to be "3D Interaction Acceleration - OFF".
    Set the following settings in Photoshop CS4 Preferences:
    OpenGL - ON
    Vsync - OFF
    3D Interaction Acceleration - OFF
    Force Bilinear Interpolation - OFF
    Advanced Drawing - ON
    Use for Image Display - OFF
    Color Matching - ON

    Hi guys,
    As I am having very few problems with my system, I thought I should post my specs and settings for comparison.
    System - Asus p5Q deluxe,
    Intel quad 9650 3ghz,
    16gb pc6400 800Mhz ram,
    loads of drives (system drive on a 10K 74GB Raptor, Vista partition on a fast 500GB drive, PS scratch on another fast 500GB drive, the rest are storage/backups),
    Gainward 8800GTS 640mb GPU,
    30ins monitor @2560x1600 and a 24ins 1920x1200,
    Wacom tablet.
    PS CS4 x64bit
    Vista x64
    All latest drivers
    No Antivirus. Index and superfetch is ON, Defender is ON
    No internet connection except for updates
    No faffing around with vista processes
    Wacom Virtual HID and Wacom Mouse Monitor are disabled
    nVidia GPU set to default settings
    On this system I am able to produce massive images; the last major size was 150x600cm @ 200ppi, and the brushes are smooth until they increase to around 700+ pixels, at which point there is a slight lag of around 1 second if I draw fast. If I take it slow, then there's no lag. All UI is snappy.
    I have the following settings in Photoshop CS4 Preferences:
    Actual Ram: 14710MB
    Set to use (87%) :12790MB
    Scratch Disk
    on a separate fast 500gb - to become a 80gb ssd soon
    History state: 50
    Cache: 8
    OpenGL - ON
    Vsync - OFF
    3D Interaction Acceleration - OFF
    Force Bilinear Interpolation - OFF
    Advanced Drawing - ON
    Use for Image Display - ON
    Color Matching - ON
    I hope this helps in some way too...
    EDIT: I should also add that I defrag all my drives with third-party defrag software every night due to the large image files.

  • Will native compiling improve performance?

    Hi,
    I became familiar with the Excelsior native compiler about a week ago. They claim on their website that compiling Java classes to native (machine) code will improve the performance of the application. However, JAlbum (http://jalbum.net) says that the application's JAR files run "basically at the same speed" compared to the natively compiled Windows version.
    Will compiling Java classes to native code really improve performance? I'm not talking about startup speed; I mean overall application performance.
    Thanks...

    It depends on what the app is doing.
    Many things in Java run as fast as native code, especially if you're using a later version of Java.
    I guess there's one way to find out :-)

  • After Effects Help | Improve performance

    This question was posted in response to the following article: http://helpx.adobe.com/after-effects/using/improve-performance.html

    The link to Cache Work Area in Background is wrong; it links to the memory instructions for CS5.5 and earlier instead of the CS6 feature that it should.

  • External drives to improve performance?

    I have a 17" macbook pro, 8gb RAM, Ati Radeon 6750, Intel i7 2,2ghz
    I want to work with final cut pro X and motion 5, but i see that there´s no scratch disk preferences inside these applications.
    Working with an external hard drive could improve performance? What if i only have usb 2.0 by the moment?
    should be better to work on the macbook hard drive till I can get a usb 3.0 or thunderbolt in the future?
    Thanks,
    Pablo

    Stick with FW800 external drives.  No, there's no more Scratch Disk (thank goodness).  Inside FCP X, you create Events on any hard drive attached.  You create Projects on any hard drive attached.  You can even use Disk Images, too.  The user manual explains all of this very clearly.
    I'd stay away from USB for video editing altogether.  Thunderbolt will help with really large projects and work using very high-end, very large HD and digital film files.

  • Improving native messaging application

    I am new to this community, so please tell me if I make any mistakes.
    I have been using an iPhone for quite some time now, and I really enjoy using this phone more than any other phone brand.
    I just want to suggest that Apple should make improvements to its messaging application.
    I notice that Apple has made some improvements to the messaging application, but I think they should pay more attention to what should be improved rather than keeping it standard.
    I love messaging, but it lacks great features such as quick reply, signatures, quick compose, etc.
    So all I want to say is: don't give us any reason to jailbreak our iPhones, because the main reason people jailbreak their devices is so they can customize them any way they can.
    Does anyone agree with me?

    You can leave feedback for Apple at:  http://www.apple.com/feedback
