Super Proxy? Improve Performance more?

http://help.adobe.com/en_US/AfterEffects/9.0/WSF13D6BED-C53B-408a-B2D6-C8B4205D4FB7a.html
Outside of these Adobe tips on improving performance, does anyone know any more tricks? ...even if it's a third-party purchase. It just seemed that when I proxied everything, the machine still hung up enough to annoy me, and the images, because they're proxied/low-res, are of course a bit blurry. I guess I'm hoping there is some super proxy plug-in that lets you work superfast. The dream is to build HD 3D animations on a laptop, then FTP it all to a render farm; they do the heavy lifting and mail me a nice shiny Blu-ray.

Mylenium, could you please give a guy a hand, as I'm losing my mind calling first-level support at Adobe. A rundown of basic performance settings would be helpful.
Should I be looking at a Mac?
I just posted my workstation specs, and I have the same system at work: a Dell T7400.
We are seeing the CPU getting hammered while doing a simple AE task. Just a PNG file with a blur effect kills the system.
What could be going on? Do I need to PRAY!!!
Max

Similar Messages

  • How to improve performance when there are many TextBlocks in ItemsControl items?

       Hi,
     I'm trying to find a way to improve performance in a situation where an ItemsControl uses UI and data virtualization and each item on that control has 36 TextBlocks. Basically each item is a single string; there are so many TextBlocks in order to allow assigning different brushes to different parts of the string. Performance of this construction is terrible. I have 37 items visible on the screen, and if I try to scroll up or down it scrolls into black space and then takes a second or two to show the items.
     I tried different things. The most successful performance-wise was to replace the TextBlocks with Borders and then draw bitmaps. In other words, I prepared 127 bitmaps, one for each character (I need ASCII only), and used those bitmaps to set Border.Backgrounds. It improved performance about 1.5 - 2 times, but it consumed much more memory (which is not surprising, of course). The required amount of memory is so big that it throws OutOfMemoryException on a 512 MB emulator but works on 1 GB. As a result, I don't think it is a good solution.
     Another thing that worked perfectly was to replace the 36 TextBlocks with only 6 TextBlocks. In this case the performance improvement is about 5 - 10 times, but I lose the ability to set different colors on different parts of the string. It seems that performance degrades dramatically as the number of TextBlocks grows. Is there another technique to draw strings, where literally each character can be a different color, with decent performance?
     Thank you
     Alex
    Thank you
    Alex

     Using Runs inside TextBlocks gives approximately the same improvement as using bitmaps (1.5 - 2 times faster), but it is not even close to the case with just a couple of TextBlocks per ItemsControl item. Any other ideas?
    Alex

  • How to improve Performance of the Statements.

    Hi,
     I am using Oracle 10g. My problem is that executing queries and fetching records from the database takes too much time. I have created statistics as well, but it made no difference. What do I have to do now to improve the performance of SELECT, INSERT, UPDATE, and DELETE statements?
     Does it make any difference that I am using Windows XP with 1 GB RAM on the server machine and Windows XP with 512 MB RAM on the client machine?
     Please give me advice on improving performance.
     Thank you...!

     What and where to change parameters and values? Well, maybe my previous post was not clear enough, but if you want to keep your job, you shouldn't change anything else in the init parameters, and you shouldn't fall into Compulsive Tuning Disorder.
     Anyone who advises you to change some parameter to some value without any more information shouldn't be listened to.
    Nicolas.

  • How do I improve performance while doing pull, push and delete from Azure Storage Queue

    Hi,
     I am working on a distributed application with Azure Storage Queue for message queuing. The queue will be used by multiple clients around the clock, so it is expected to be heavily loaded most of the time. The business case is typical: pull a message from the queue, process it, then delete it from the queue. This module also sends a notification back to the user indicating that processing is complete. The functions/modules work fine, in that they meet the logical requirements. A pretty typical queue scenario.
     Now, coming to the problem statement: since the queue is expected to be heavily loaded most of the time, I am pushing to speed up the overall message lifetime. The faster I can clear messages, the better the overall experience for everyone, system and users.
     To improve performance I ran multiple cycles of profiling and then improved the identified "HOT" paths/functions.
     It all came down to the point where the Azure Queue pull and delete were the only two time-consuming calls left. I further improved the pull by batch-pulling 32 messages at a time (the maximum message count that can be pulled from an Azure queue at once, at the time of writing this question), which reduced processing time by a big margin. All good up to this point as well.
     I am processing these messages in parallel to improve overall performance.
     Pseudo code:
     //AzureQueue class encapsulates calls to Azure Storage Queue.
     //Assume nothing fancy inside; plain calls to the queue for pull/push/delete.
     var batchMessages = AzureQueue.Pull(32);
     Parallel.ForEach(batchMessages, bMessage =>
     {
         try
         {
             //DoSomething does some background processing.
             DoSomething(bMessage);
         }
         catch
         {
             //Log exception.
         }
         AzureQueue.Delete(bMessage);
     });
     With this change, profiling shows that up to 90% of the time is taken by the Azure message delete calls alone. As it is good to delete a message as soon as its processing is done, I remove it just after DoSomething finishes.
     What I need now are suggestions on how to further improve the performance of this function when 90% of the time is eaten up by the Azure Queue delete call itself. Is there a better, faster way to perform deletes or bulk deletes?
     With the implementation mentioned here, I get a rate of close to 25 messages/sec. Right now the Azure queue delete calls are choking application performance, so is there any hope of pushing it further?
     Does the choice of queue delete overload also make a difference in performance? As of now the queue has overloaded methods for deleting a message: one which accepts a message object and another which accepts a message identifier and pop receipt. I am using the latter here, with message identifier and pop receipt, to delete the message from the queue.
    Let me know if you need any additional information or any clarification in question.
    Inputs/suggestions are welcome.
    Many thanks.

     The first thing that came to mind was to run a parallel delete at the same time you run the work in DoSomething.  If DoSomething fails, add the message back into the queue.  This won't work for every application, and work that was near the head of the queue could be pushed back to the tail, so you'd have to think about how that may affect your workload.
     Or, queue the delete on a threadpool after the work succeeds.  Fire and forget (sketched below).  However, if you're loading the processing at 25/sec and 90% of the time sits on the delete, you'd quickly accumulate delete calls for the threadpool and never catch up.  At a 70-80% duty cycle this may work, but the closer you get to always being busy, the more dangerous this becomes.
    I wonder if calling the delete REST API yourself may offer any improvements.  If you find the delete sets up a TCP connection each time, this may be all you need.  Try to keep the connection open, or see if the REST API can delete more at a time
    than the SDK API can.
    Or, if you have the funds, just have more VM instances doing the work in parallel, so the first machine handles 25/sec, the second at 25/sec also - and you just live with the slow delete.  If that's still not good enough, add more instances.
    Darin R.
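
     A minimal sketch of the "fire and forget" delete idea in Python (an illustration only; pull_batch, do_something, and delete_message are hypothetical stand-ins for the real storage SDK calls):

         import concurrent.futures

         def pull_batch(count):
             # Placeholder: would call the storage queue SDK.
             return [f"message-{i}" for i in range(count)]

         def do_something(message):
             pass  # placeholder for the background processing

         def delete_message(message):
             pass  # placeholder for the slow delete round-trip

         # A small pool absorbs the delete latency so the worker never blocks on it.
         delete_pool = concurrent.futures.ThreadPoolExecutor(max_workers=8)

         for message in pull_batch(32):
             try:
                 do_something(message)
             except Exception:
                 continue  # log and skip; never delete an unprocessed message
             # Fire and forget: the delete runs on the pool, not the worker thread.
             delete_pool.submit(delete_message, message)

         delete_pool.shutdown(wait=True)

     As Darin notes, this only helps while the pool can keep up; at a sustained 90% delete share the backlog grows without bound.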

  • Improving Performance

    Hi Experts,
     How can we improve the performance of a SELECT without creating a secondary index?
     In my SELECT query I am not using the primary key fields in the WHERE condition,
     so I want to know how we can improve the performance.
     One more thing: if we create a secondary index, what are the disadvantages of that?
    Thanks & Regards,
    Amit.

     If you select from a table without using an appropriate index or key, then the database will perform a table scan to get the required data.  If you accept that this will be slow but must be used, then the key to improving performance of the program is to minimise the number of times it does the scan of the table.
     Often the way to do this is not what would normally be counted as good programming.
     For example, if you SELECT inside a loop or SELECT using FOR ALL ENTRIES, the system can end up doing the table scan many times, because the SQL is broken up into lots of individual small selects passed to the database one after the other.  So it may be quicker to SELECT from the table into an internal table without specifying any WHERE conditions, and then delete the rows from the internal table that are not wanted.  This way you do only a single table scan on the database to get all records.  Of course, this uses a lot of memory - which is often the trade-off.  If you have a partial key and are then selecting based on non-indexed fields, you can get all records matching the partial key and then throw away those where the remaining fields don't meet requirements.  (See the sketch below.)
    Andrew
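
     Andrew's trade-off can be illustrated with a small sketch (Python with SQLite standing in for the database; the table and field names are made up): one SELECT per key means one table scan per key, while a single unconditional SELECT plus in-memory filtering scans the table only once, at the cost of memory.

         import sqlite3

         conn = sqlite3.connect(":memory:")
         conn.executescript("""
             CREATE TABLE orders (doc TEXT, status TEXT);   -- no index on status
             INSERT INTO orders VALUES ('A','open'), ('B','closed'), ('C','open');
         """)
         wanted = ["open"]

         # Variant 1: one SELECT per key -> one full table scan per key.
         hits = []
         for status in wanted:
             hits += conn.execute(
                 "SELECT doc FROM orders WHERE status = ?", (status,)).fetchall()

         # Variant 2: fetch everything once, filter in memory -> one scan total.
         rows = conn.execute("SELECT doc, status FROM orders").fetchall()
         hits2 = [(doc,) for doc, status in rows if status in wanted]

         print(hits, hits2)  # same result either way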

  • To improve performance for report

    Hi Expert,
     I have generated an open sales order report which fetches data from VBAK, and it takes a long time to execute in the foreground.
     It dumps in the foreground, and when I executed it in the background it dumped as well.
    SELECT vbeln
               auart
               submi
               vkorg
               vtweg
               spart
               knumv
               vdatu
               vprgr
               ihrez
               bname
               kunnr
        FROM vbak
        APPENDING TABLE itab_vbak_vbap
        FOR ALL ENTRIES IN l_itab_temp
    *BEGIN OF change 17/Oct/2008.
        WHERE erdat IN s_erdat              AND
             submi = l_itab_temp-submi     AND
    *End of Changes 17/Oct/2008.
              auart = l_itab_temp-auart     AND
    *BEGIN OF change 17/Oct/2008.
              submi = l_itab_temp-submi     AND
    *End of Changes 17/Oct/2008.
              vkorg = l_itab_temp-vkorg     AND
              vtweg = l_itab_temp-vtweg     AND
              spart = l_itab_temp-spart     AND
              vdatu = l_itab_temp-vdatu     AND
              vprgr = l_itab_temp-vprgr     AND
              ihrez = l_itab_temp-ihrez     AND
              bname = l_itab_temp-bname     AND
              kunnr = l_itab_temp-sap_kunnr.
        DELETE itab_temp FROM l_v_from_rec TO l_v_to_rec.
      ENDDO.
     Please give me suggestions for improving the performance of the program.

     Hi,
     try it like this:
    DATA:BEGIN OF itab1 OCCURS 0,
         vbeln LIKE vbak-vbeln,
         END OF itab1.
    DATA: BEGIN OF itab2 OCCURS 0,
          vbeln LIKE vbap-vbeln,
          posnr LIKE vbap-posnr,
          matnr LIKE vbap-matnr,
          END OF itab2.
    DATA: BEGIN OF itab3 OCCURS 0,
          vbeln TYPE vbeln_va,
          posnr TYPE posnr_va,
          matnr TYPE matnr,
          END OF itab3.
    SELECT-OPTIONS: s_vbeln FOR vbak-vbeln.
    START-OF-SELECTION.
      SELECT vbeln FROM vbak INTO TABLE itab1
      WHERE vbeln IN s_vbeln.
      IF itab1[] IS NOT INITIAL.
        SELECT vbeln posnr matnr FROM vbap INTO TABLE itab2
        FOR ALL ENTRIES IN itab1
        WHERE vbeln = itab1-vbeln.
      ENDIF.

  • Alternate for inner join to improve performance

    Hi all,
     I have used an inner join query to fetch data from five different tables into an internal table, with conditions in the WHERE clause.
     The execution time is almost 5-6 minutes for this particular query (all five DB tables are large - more than 10 million records in each).
     Is there any alternative to the inner join that would improve performance?
    TIA.
    Regards,
    Karthik

    Hi All,
    Thanks for all your interest.
    SELECT  a~object_id a~description a~descr_language
                a~guid AS object_guid a~process_type
                a~changed_at
                a~created_at AS created_timestamp
                a~zzorderadm_h0207 AS cpid
                a~zzorderadm_h0208 AS submitter
                a~zzorderadm_h0303 AS cust_ref
                a~zzorderadm_h1001 AS summary
                a~zzorderadm_h1005 AS summary_uc
                a~zzclose_date     AS clsd_date
                d~stat AS status
                f~priority
                FROM crmd_orderadm_h AS a INNER JOIN crmd_link AS b ON  a~guid = b~guid_hi
                INNER JOIN crmd_partner AS c ON b~guid_set = c~guid
                INNER JOIN crm_jest AS d ON objnr  = a~guid
                INNER JOIN crmd_activity_h AS f ON f~guid = a~guid
                INTO CORRESPONDING FIELDS OF TABLE et_service_request_list
                WHERE process_type IN lt_processtyperange
                AND   a~created_at IN lt_daterange
                AND   partner_no IN lr_partner_no
                AND   stat IN lt_statusrange
                AND   object_id IN lt_requestnumberrange
                AND   zzorderadm_h0207 IN r_cpid
                AND   zzorderadm_h0208 IN r_submitter
                AND   zzorderadm_h0303 IN r_cust_ref
                AND   zzorderadm_h1005 IN r_trans_desc
                AND   d~inact = ' '
                AND   b~objtype_hi = '05'
                AND   b~objtype_set = '07'.

  • Improving performance while adding groups

    Hello,
     I've been monitoring my Crystal Reports for a week or so, and report performance is going for a toss. I would like to narrate this in a little detail. I have created 3 groups to select dynamic parameters, and each group has a formula of its own. In my parameters I have added one parameter with 7 entities (which is hard-coded); a user can select any 3 entities out of those seven when initially refreshing the document, and each parameter entity is bundled into a conditional formula (mentioned under formula fields) for that entity. The user may select any entity and get the respective data for it.
     For all this I have created 3 groups, and the same formula is pasted under all 3 groups. I have then made the formula group the selected group under the Group Expert. The report works fine and yields correct data. However, while grouping on the formulas, Crystal selects all the database tables from the database fields, as these tables are mentioned under the group formula. Agreed, all fine.
     But when I run the report, the "Show SQL Query" shows all the database tables in the SELECT clause, which should not be the case. Due to this, even if I select an entity which has only 48 to 50 records, Crystal tends to select all 1,656,053 records from the database fields, which hampers Crystal's performance big time. When I run the same query in SQL it retrieves the data in just 8 seconds, but Crystal, selecting all the records, gives me the data after 90 seconds, which is frustrating for the user.
     Please suggest a workaround for this. Please help.
    Thank you.

    Hi,
     I suspect the problem isn't necessarily just your grouping but your Record Selection Formula as well.  If you do not see a complete WHERE clause, it is because your Record Selection Formula is too complicated for Crystal to translate to SQL. 
     The same would be said for your grouping.  There are two suggestions I can offer: 
     1)  Instead of linking the tables in Crystal, use a SQL Command and generate your query in SQL directly.  You can use parameters and, at the very least, get a working WHERE clause. 
     2)  Create a Stored Procedure or view that contains the logic you need to retrieve the records. 
     At the very least you want to streamline the query to improve performance.  Pushing the grouping down may not be possible, but my guess is it's more the selection formula than the grouping.
    Good luck,
    Brian

  • How to improve performance of Siebel Configurator

    Hi All,
     We are using Siebel Configurator to model the item structures, and we wrote a few constraint rules on top. But the configurator takes a long time to open when launched.
     Even without the rules it behaves the same way.
     Any input on this would be highly appreciated.
    RAM

    duplicate thread..
    How to improve performance of attached query

  • Best way to improve performance?

    I'm using a Dual Core Intel Xeon and starting to do video work on it with Final Cut Pro, etc.
    I've got 8GB of DDR2 FB-DIMM at 667MHz.
    I'm using a Cinema HD display, driven by a NVIDIA GeForce 7300 GT.
     What would be your best advice for improving performance (if there is any)? I doubt more RAM would help; I'm not sure there is anything faster than 667 MHz for this machine. Could a newer video card help rendering speed?
     I'm generally very happy with how things work, can't afford a new Mac Pro, and am simply wondering if I should invest in something more affordable that could give me a slightly better kick?
    Many thanks,
    Czet

     One thing that's always useful is to make sure you've got iStat installed and see how much of your RAM is actually being used, because 8 GB is a lot, even for HD, but that doesn't rule out a lack of RAM.
     The biggest increase in speed I've ever seen on any of my kit was an SSD. I've set up my Mac Pro with 2 x 128 GB SSDs in the 2nd optical bay (in a 3.5" RAID 0 caddy) and set up Final Cut Pro to use 2 x 1.5 TB Seagate 7200.4s in software RAID 0, and everything flies. A 256 GB SSD is far more expensive (and slower) than 2 x 128 GB in a RAID 0 caddy, so it's a no-brainer.
     Graphics cards always help, but in my experience the only places you'll notice significant improvements are games and software such as Motion/Aperture, where you really must have a decent card.

  • How to run query in parallel  to improve performance

     I am using ALDSP 2.5. My data tables are split 12 ways, based on a hash of a particular column. I have a query to get a piece of data I am looking for; however, this data is split across the 12 tables. So, even though my query is the same, I need to run it on 12 tables instead of 1. I want to run all 12 queries in parallel instead of one by one, collapse the returned datasets, and return the result to the caller. How can I do this in ALDSP?
    To be specific, I will call below operation to get data:
     declare function ds:SOA_1MIN_POOL_METRIC() as element(tgt:SOA_1MIN_POOL_METRIC_00)* {
    src0:SOA_1MIN_POOL_METRIC(),
    src1:SOA_1MIN_POOL_METRIC(),
    src2:SOA_1MIN_POOL_METRIC(),
    src3:SOA_1MIN_POOL_METRIC(),
    src4:SOA_1MIN_POOL_METRIC(),
    src5:SOA_1MIN_POOL_METRIC(),
    src6:SOA_1MIN_POOL_METRIC(),
    src7:SOA_1MIN_POOL_METRIC(),
    src8:SOA_1MIN_POOL_METRIC(),
    src9:SOA_1MIN_POOL_METRIC(),
    src10:SOA_1MIN_POOL_METRIC(),
     src11:SOA_1MIN_POOL_METRIC()
     };
     This method acts as a proxy; it aggregates data from the 12 data tables:
     src0:SOA_1MIN_POOL_METRIC() gets data from the SOA_1MIN_POOL_METRIC_00 table,
     src1:SOA_1MIN_POOL_METRIC() gets data from the SOA_1MIN_POOL_METRIC_01 table, and so on.
     The data source of each table is different (src0, src1, etc.); how can I run these queries in parallel to improve performance?

     Thanks Mike.
     The async function works; from the log, I can see the queries are executed in parallel.
     But the behavior is confusing: with the same input, sometimes it gives me the right result, and sometimes (especially when there are a few other applications running on the machine) it throws the exception below:
    java.lang.IllegalStateException
         at weblogic.xml.query.iterators.BasicMaterializedTokenStream.deRegister(BasicMaterializedTokenStream.java:256)
         at weblogic.xml.query.iterators.BasicMaterializedTokenStream$MatStreamIterator.close(BasicMaterializedTokenStream.java:436)
         at weblogic.xml.query.runtime.core.RTVariable.close(RTVariable.java:54)
         at weblogic.xml.query.runtime.core.RTVariableSync.close(RTVariableSync.java:74)
         at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
         at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
         at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
         at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
         at weblogic.xml.query.runtime.core.IfThenElse.close(IfThenElse.java:99)
         at weblogic.xml.query.runtime.core.CountMapIterator.close(CountMapIterator.java:222)
         at weblogic.xml.query.runtime.core.LetIterator.close(LetIterator.java:140)
         at weblogic.xml.query.runtime.constructor.SuperElementConstructor.prepClose(SuperElementConstructor.java:183)
         at weblogic.xml.query.runtime.constructor.PartMatElemConstructor.close(PartMatElemConstructor.java:251)
         at weblogic.xml.query.runtime.querycide.QueryAssassin.close(QueryAssassin.java:65)
         at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
         at weblogic.xml.query.runtime.core.QueryIterator.close(QueryIterator.java:146)
         at com.bea.ld.server.QueryInvocation.getResult(QueryInvocation.java:462)
         at com.bea.ld.EJBRequestHandler.executeFunction(EJBRequestHandler.java:346)
         at com.bea.ld.ServerBean.executeFunction(ServerBean.java:108)
         at com.bea.ld.Server_ydm4ie_EOImpl.executeFunction(Server_ydm4ie_EOImpl.java:262)
         at com.bea.dsp.dsmediator.client.XmlDataServiceBase.invokeFunction(XmlDataServiceBase.java:312)
         at com.bea.dsp.dsmediator.client.XmlDataServiceBase.invoke(XmlDataServiceBase.java:231)
         at com.ebay.rds.dao.SOAMetricDAO.getMetricAggNumber(SOAMetricDAO.java:502)
         at com.ebay.rds.impl.NexusImpl.getMetricAggNumber(NexusImpl.java:199)
         at com.ebay.rds.impl.NexusImpl.getMetricAggNumber(NexusImpl.java:174)
         at RDSWS.getMetricAggNumber(RDSWS.jws:240)
         at jrockit.reflect.VirtualNativeMethodInvoker.invoke(Ljava.lang.Object;[Ljava.lang.Object;)Ljava.lang.Object;(Unknown Source)
         at java.lang.reflect.Method.invoke(Ljava.lang.Object;[Ljava.lang.Object;I)Ljava.lang.Object;(Unknown Source)
         at com.bea.wlw.runtime.core.dispatcher.DispMethod.invoke(DispMethod.java:371)
     Below is my code example. First I get data from all 12 queries, each query enclosed in the fn-bea:async function; finally, I do a group-by aggregation over the whole data set. Is it possible that the exception occurs because some threads have not returned data yet, but the aggregation has already started?
     The metricName, serviceName, opName, and $soaDbRequest are simply passed in from the operation parameters.
     let $METRIC_RESULT := (
             fn-bea:async(
                for $SOA_METRIC in ns20:getMetrics($metricName,$serviceName,$opName,"")
                for $SOA_POOL_METRIC in src0:SOA_1MIN_POOL_METRIC()
                where
                $SOA_POOL_METRIC/SOA_METRIC_ID eq fn-bea:fence($SOA_METRIC/SOA_METRIC_ID)
                and $SOA_POOL_METRIC/CAL_CUBE_ID  ge fn-bea:fence($soaDbRequest/ns16:StartTime)  
                and $SOA_POOL_METRIC/CAL_CUBE_ID lt fn-bea:fence($soaDbRequest/ns16:EndTime )
                and ( $SOA_POOL_METRIC/SOA_SERVICE_ID eq fn-bea:fence($soaDbRequest/ns16:ServiceID)
                   or (0 eq fn-bea:fence($soaDbRequest/ns16:ServiceID)))
                and ( $SOA_POOL_METRIC/POOL_ID eq fn-bea:fence($soaDbRequest/ns16:PoolID)
                   or (0 eq fn-bea:fence($soaDbRequest/ns16:PoolID)))
                and ( $SOA_POOL_METRIC/SOA_USE_CASE_ID eq fn-bea:fence($soaDbRequest/ns16:UseCaseID)
                   or (0 eq fn-bea:fence($soaDbRequest/ns16:UseCaseID)))
                and ( $SOA_POOL_METRIC/ROLE_TYPE eq fn-bea:fence($soaDbRequest/ns16:RoleID)
                   or (-1 eq fn-bea:fence($soaDbRequest/ns16:RoleID)))
                return
                 $SOA_POOL_METRIC
             ),
             fn-bea:async(for $SOA_METRIC in ns20:getMetrics($metricName,$serviceName,$opName,"")
                for $SOA_POOL_METRIC in src1:SOA_1MIN_POOL_METRIC()
                where
                $SOA_POOL_METRIC/SOA_METRIC_ID eq fn-bea:fence($SOA_METRIC/SOA_METRIC_ID)
                and $SOA_POOL_METRIC/CAL_CUBE_ID  ge fn-bea:fence($soaDbRequest/ns16:StartTime)  
                and $SOA_POOL_METRIC/CAL_CUBE_ID lt fn-bea:fence($soaDbRequest/ns16:EndTime )
                and ( $SOA_POOL_METRIC/SOA_SERVICE_ID eq fn-bea:fence($soaDbRequest/ns16:ServiceID)
                   or (0 eq fn-bea:fence($soaDbRequest/ns16:ServiceID)))
                and ( $SOA_POOL_METRIC/POOL_ID eq fn-bea:fence($soaDbRequest/ns16:PoolID)
                   or (0 eq fn-bea:fence($soaDbRequest/ns16:PoolID)))
                and ( $SOA_POOL_METRIC/SOA_USE_CASE_ID eq fn-bea:fence($soaDbRequest/ns16:UseCaseID)
                   or (0 eq fn-bea:fence($soaDbRequest/ns16:UseCaseID)))
                and ( $SOA_POOL_METRIC/ROLE_TYPE eq fn-bea:fence($soaDbRequest/ns16:RoleID)
                   or (-1 eq fn-bea:fence($soaDbRequest/ns16:RoleID)))
                 return
                 $SOA_POOL_METRIC
             ),
             (: ... 12 similar queries in total :)
         )
            for $Metric_data in $METRIC_RESULT    
            group $Metric_data as $Metric_data_Group        
            by   $Metric_data/ROLE_TYPE as $role_type_id  
            return
            <ns0:RawMetric>
                 <ns0:endTime></ns0:endTime>
                 <ns0:target>{$role_type_id}</ns0:target>
    <ns0:value0>{fn:sum($Metric_data_Group/METRIC_COMPONENT_VALUE0)}</ns0:value0>
    <ns0:value1>{fn:sum($Metric_data_Group/METRIC_COMPONENT_VALUE1)}</ns0:value1>
    <ns0:value2>{fn:sum($Metric_data_Group/METRIC_COMPONENT_VALUE2)}</ns0:value2>
    <ns0:value3>{fn:sum($Metric_data_Group/METRIC_COMPONENT_VALUE3)}</ns0:value3>
    </ns0:RawMetric>
    could you tell me why the result is unstable? thanks!
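
     For what it's worth, the fan-out/merge pattern those async calls implement can be sketched outside ALDSP. In this Python sketch (query_shard is a hypothetical stand-in for one per-shard query), result() blocks until every shard has returned, so the aggregation can never start on a partial data set - the property the grouping step above depends on:

         import concurrent.futures
         from collections import defaultdict

         def query_shard(shard_id):
             # Hypothetical stand-in for one SOA_1MIN_POOL_METRIC_* query.
             return [{"role_type": shard_id % 2, "value0": shard_id}]

         with concurrent.futures.ThreadPoolExecutor(max_workers=12) as pool:
             futures = [pool.submit(query_shard, i) for i in range(12)]
             # result() waits for each shard, so no shard is missing at merge time.
             rows = [row for f in futures for row in f.result()]

         # Group by ROLE_TYPE and sum, like the fn:sum per group in the XQuery.
         totals = defaultdict(int)
         for row in rows:
             totals[row["role_type"]] += row["value0"]
         print(dict(totals))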

  • Using Lightroom and Aperture, will a new ATI 5770/5870 vs. GT 120 improve performance?

     I have a Mac Pro (2009, 3.3 GHz Nehalem quad and 16 GB RAM) and want to improve performance in APERTURE (I see the clock wheel spinning all the time) with edits; I also use Lightroom, and sometimes CS5. 
     Can anyone with experience say whether upgrading from the GT120 would make a difference, and roughly how much?
     Next, do I need to buy the 5870, or can I get the 5770 to work?
     I am assuming I have to remove the GT120 for the new card to fit?
    Thanks

     Terrible marketing. ALL ATI 5xxx cards work in ALL Mac Pro models, with 10.6.5 and later.
     It really is up to you to check AMD and search out reviews that compare these cards to others. Did you look at the specs of each, or at Barefeats? He has half a dozen benchmark tests, but the GT120 doesn't even show up, or isn't in the running, in most.
     From AMD, the 5870 shows 2x the units:
    TeraScale 2 Unified Processing Architecture   
    1600 Stream Processing Units
    80 Texture Units
    128 Z/Stencil ROP Units
    32 Color ROP Units
    ATI Radeon™ HD 5870 graphics
    That should hold up well.
     Some are on the fence or don't want to pay $$.
     All they or you (and you've been around for more than a day!) need to do is go to the Apple Store:
    ATI Radeon HD 5870 Graphics Upgrade Kit for Mac Pro (Mid 2010 or Early 2009)
    ATI Radeon HD 5870 Upgrade

  • How to improve performance on SD document with over 1000 line items

    Dear experts,
     A lot of our SD documents contain 1,500 to 5,000 line items. Saving a simple change to one line item description takes 5 minutes (with or without pricing considerations). We are on ECC 5.0. Does anyone know of an existing SAP Note or other ways to improve performance?
     A related question on performance tracing: I have done some tracing and noticed that the time spent on SQL/buffer/enqueue/RFC access amounts to less than 50% (sometimes 5%) of the total processing time. What has the system been doing during the "unrecorded" time?
    Thank you and appreciate!
    -cynthia

     I would check which modules or which parts of the code take most of the processing time. You can use SE30 to trace one order save.
     If it's not SQL, then it may be internal processing, such as inefficient loops; for example, the runtime of nested loops over big tables grows quadratically with table size (see the sketch below).
     It may be caused by SAP code - search OSS Notes based on the module names which consume the most time.
     Or it can be caused by the code in your user exits - then developers have to fix it and redesign it using a more efficient technique.
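
     A minimal illustration of that nested-loop point (a Python sketch, not SAP code): replacing the inner linear scan with a hashed lookup turns a quadratic runtime into a roughly linear one, which is usually the kind of redesign such user exits need.

         import random

         items = [random.randrange(10_000) for _ in range(5_000)]
         lookup = [random.randrange(10_000) for _ in range(5_000)]

         # Nested loop: each item scans the whole lookup table (quadratic).
         slow = [x for x in items if any(x == y for y in lookup)]

         # Hashed: build a set once; each membership test is O(1) on average.
         lookup_set = set(lookup)
         fast = [x for x in items if x in lookup_set]

         assert slow == fast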

  • How to improve performance on MacBook Pro 10.5.8?

     I have a MacBook Pro running 10.5.8 that has become way too slow. It hesitates between keystrokes and is slow to search and navigate websites.
    I just checked and there are no updates due.
    What can I do to improve performance?

    Please read this whole message before doing anything.
    This procedure is a diagnostic test. It won’t solve your problem. Don’t be disappointed when you find that nothing has changed after you complete it.
    Third-party system modifications are a common cause of usability problems. By a “system modification,” I mean software that affects the operation of other software -- potentially for the worse. The following procedure will help identify which such modifications you've installed. Don’t be alarmed by the complexity of these instructions -- they’re easy to carry out and won’t change anything on your Mac.
    These steps are to be taken while booted in “normal” mode, not in safe mode. If you’re now running in safe mode, reboot as usual before continuing.
    Below are several lines of text in monospaced type, which are UNIX shell commands. They’re harmless, but they must be entered exactly as given in order to work. If you have doubts about the safety of running these commands, search this site for other discussions in which they’ve been used without any report of ill effects.
    Some of the commands will line-wrap in your browser, but each one is really just a single line, all of which must be selected. You can accomplish this easily by triple-clicking anywhere in the line. The whole line will highlight, and you can then either copy or drag it. The headings “Step 1” and so on are not part of the commands.
    Note: If you have more than one user account, Step 2 must be taken as an administrator. Ordinarily that would be the user created automatically when you booted the system for the first time. The other steps should be taken as the user who has the problem, if different. Most personal Macs have only one user, and in that case this paragraph doesn’t apply.
    To begin, launch the Terminal application; e.g., by entering the first few letters of its name in a Spotlight search. A text window will open with a line already in it, ending either in a dollar sign (“$”) or a percent sign (“%”). If you get the percent sign, enter “sh” (without the quotes) and press return. You should then get a new line ending in a dollar sign.
    Step 1
    Copy or drag -- do not type -- the line below into the Terminal window, then press return:
    kextstat -kl | awk '!/com\.apple/ {print $6 $7}'
    Post the lines of output (if any) that appear below what you just entered (the text, please, not a screenshot.)
    Step 2
    Repeat with this line:
    sudo launchctl list | sed 1d | awk '!/0x|com\.apple/ {print $3}'
    This time, you'll be prompted for your login password, which won't be displayed when you type it. You may get a one-time warning not to screw up. You don't need to post the warning.
    Step 3
    launchctl list | sed 1d | awk '!/0x|com\.apple/ {print $3}'
    Step 4
    ls -1A /e*/mach* {,/}L*/{Ad,Compon,Ex,Fram,In,Keyb,La,Mail/Bu,P*P,Priv,Qu,Scripti,Servi,Sta}* L*/Fonts 2> /dev/null
    Important: If you synchronize with a MobileMe account, your me.com email address may appear in the output of the above command. If so, change it to something like “[email protected]” before posting.
    Step 5
    osascript -e 'tell application "System Events" to get the name of every login item'
    Remember, steps 1-5 are all drag-and-drop or copy-and-paste, whichever you prefer -- no typing, except your password.
    You can then quit Terminal.

  • Improve performance MAX HEAP?

    Hello,
     We are trying to improve the performance of one of our interfaces: FILE -> SQL STAGING -> TARGET.
     Here are the steps we took:
     1. We are indeed using LKM SQL BULK INSERT (which brought the run time down from 70 min to 20 min).
     *2. The ARRAY and FETCH sizes "DO NOT" seem to improve performance at all, probably because we are using FILE technology. Experts, please confirm this?*
     3. Finally, we changed the Java initial and max heap sizes from:
     INT: 32m to 1024m
     MAX: 256m to 2g
     a. How do we ensure that the odiparams changes have taken effect? We have just been closing all ODI Navigators, stopping the agent, and restarting the agent. Is this correct?
     But it "DIDN'T" improve performance either.
    Here is what we noticed:
    Total Physical Memory is 8G
    Physical Memory used is 3.6G
    ODI64.exe is using 0.5 G
    Java.exe is using 0.5 G
    We noticed that Java.exe was "not" using any CPU.
    b. Is ODI64.exe what we should be looking at to monitor performance? Shouldn't it be at 2G?
    c. Is there a way to define more than one CPU to be used, parallel processing?
    Please let me know your thoughts.
    Thanks!
    Edited by: user10678366 on May 18, 2013 9:09 AM
    Edited by: user10678366 on May 18, 2013 9:17 AM

     #a Changes in odiparams apply only to the standalone agent. Bouncing the standalone agent will pick up the latest settings from odiparams.
     #b The min heap size you specified is 1G, so your JVM should have initialized with at least that much memory. Since the processes you mentioned are using less memory than that, it seems that either your settings are not being picked up or you are looking at the wrong processes.
     #c Depending on your interface, if there is data to be moved row by row through ODI, then the reading and writing automatically happen in parallel.

Maybe you are looking for

  • SAP Business One 8.8

     Hi all, can any one of you provide me the link for downloading SAP Business One 8.8? When will the full version be released? Is the released version the beta one? Edited by: micheal willis on Jul 22, 2009 5:25 PM

  • XP mode loses network connectivity periodically on multiple Dell Optiplex Windows 7 Professional PCs.

     XP mode loses connectivity periodically throughout the day on a daily basis, but the Windows 7 Pro host PC never loses connectivity.  The problem is at its worst when XP mode has been running all day.  If XP mode is rebooted once daily, problem is reduc

  • My computers don't see each other on the network

     My iMac (early '06), a machine running Ubuntu (apparently a Linux platform OS), and a Windows machine are connected to the same network through a router. All machines access the internet fine. However, I want to file share with the other computers but t

  • Apple mobile device failed to start so can't download itunes - help

     iTunes has run perfectly well on my desktop until this week.  I have had to reinstall, and now it comes up with an error message.  I see lots of other folks have the same problem.  How do I work around this?

  • IMac Monitor freezes

     Over the last 2 weeks, my monitor froze twice... I don't know why this is happening. My OS is 10.6.8. Please explain why this is happening, or point me to any support? Thanks, Elliwhi