Performance problem: 100% swap used, but vmstat sr = 0

Hi,
I have a performance problem on a server. It sometimes becomes very slow for several hours.
Context: V890, 32 GB RAM, 8 SPARC IV+ CPUs, Solaris 10 release 03/05, Veritas Volume Manager, containers, several Oracle databases, applications...
With iostat, the swap partition shows %b -> 100%!
With vmstat: r -> 0, b -> 0, w -> 29, free memory: 600 MB, sr -> 0, idle: more than 50%.
With uptime: load average 6.
vmstat -S: si -> 0, so -> 0
vmstat -p: api -> 45126682863 (probably a bug), apo -> 0, fpi -> 1895320681342 (probably a bug), fpo -> 0
It's difficult for me to find the problem. Is it paging activity? Can someone tell me at what memory threshold paging activity starts?
If you think I'm on the wrong track, thanks for any ideas :)
Julien

Does seem a bit odd.
The 'w' column doesn't necessarily mean that anything bad is happening now, but it does mean that the system was severely memory-limited at some point in the past.
Paging should occur when free memory drops below LOTSFREE. I don't remember if swapping happens at a particular point, but probably wouldn't happen above DESFREE. The page scanner should become active (non-zero 'sr' numbers) any time the memory is below LOTSFREE.
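For a rough sense of scale (this is from memory, so check your system's actual tunables): lotsfree defaults to physmem/64 on Solaris, with desfree at half of lotsfree, so on a 32 GB box that's roughly 32 GB / 64 = 512 MB. Your 600 MB free sits just above that threshold, which would be consistent with the scanner staying idle now (sr = 0) even though the system was short of memory earlier.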
Since you have Solaris 10, you might want to grab the dtrace toolkit and see if some of the tools in there show you anything more useful (some of the I/O ones might break down the access further).
So it really doesn't look like you're swapping/paging out anything now, but you almost certainly did in the past. It could be that an app paged a lot of stuff out to disk, so the I/O you're seeing is it bringing that back in now that RAM is available.
Darren

Similar Messages

  • Performance Problems in XI using RFC

    Hi All,
    I have some doubts about XI performance:
    Does anybody know if there are any performance restrictions on making RFC calls to XI?
    What's the best-performing solution in XI: the IDoc adapter or the RFC adapter? The file adapter or the RFC adapter?
    Is it possible that something in the mapping causes performance problems?
    Thanks

    Hi
    You need to create a Type H RFC destination in the PI system pointing to ECC, and the reverse in the ECC system pointing to PI.
    Create the RFC destination and test it. The connection test will show an error because of the empty message, but it will work fine for a proxy with data.
    Thanks
    Gaurav

  • Swap used but no page outs

    What would cause OS X (Mountain Lion in this case, but I believe I've seen this in previous versions as well) to use swap space but not page out?
    It's been a while since I've had to delve into the whys and wherefores of memory management and paging systems.
    thanks

    Thanks for that, it was a good vm refresher.
    However, after giving the doc a quick read I didn't see any mention of OS X's use of swap vs. paging. Unless the term has changed meaning, when a process was swapped its entire memory footprint was written out to disk. Now, if my memory was all used up and I started a new program, I could see the need to swap out an existing program to make room (but even in that case, at least as I remember it, the paging system would be used).
    So my question is still open: what causes OS X to swap rather than page when there is memory available, and what does OS X even mean by swapping?
    thanks
    Looking at the Activity Monitor page on the Apple site isn't any help. They state:
    Swap used:
    This is the amount of information copied to the swap file on the Mac's drive.

  • "blued" process causing performance problems (98% CPU use)

    I have a friend whose MacBook fan is spinning at high RPMs almost constantly. I took a look at her Activity Monitor and there's a process running called "blued" that seems to be the source of the problem. It runs nearly constantly at 90%+ CPU usage, with frequent spikes into the 96-100% range, which I assume is what's causing the cooling fan to spin out of control. Does anyone know what this is? I've Googled it to no avail, and am hesitant to just kill it before I know what it is. Any assistance would be greatly appreciated.

    Hi Evan,
    I found this in the Apple Developer area: http://developer.apple.com/documentation/Darwin/Reference/ManPages/man8/blued.8.html
    According to that man page, blued is the Mac OS X Bluetooth daemon. I would just kill that process from the Activity Monitor.
    Also, run the hardware tests from your Install DVD (hold down D at boot) and see if it reports any SNS (temperature sensor) errors. If a sensor has failed or its leads have become disconnected, the fans will run at maximum revs by default, and the machine should go in for service.
    Carolyn

  • Urgent: my iPad is being used, but iCloud can't find it?

    Hi all. My iPad was stolen from my car, and I can tell it is being used: music I never downloaded keeps appearing in my account. But when I use "Find My iPhone" from my iPhone, the iPad shows as offline and no location information appears. I have Activation Lock enabled and have tried Play Sound, but it stays offline. The person who has it apparently doesn't realize it is still signed in to my iCloud account; they have even wiped it, but my iCloud ID hasn't been removed. I have reported it to the police, but I would still like to get it back if I can. I earned this iPad from my first job, so it matters a lot to me. Please, can anyone help?

    My iPad was stolen. The thief wiped it and thought it was fine to use, but I never changed my iCloud account, which is why music appears that I never downloaded (someone else is downloading English-language music). I have already told the police. My question now is: how can I find him?

  • Using a hashmap for caching -- performance problems

    Hello!
    1) DESCRIPTION OF MY PROBLEM:
    I was planning to speed up computations in my algorithm by using a caching mechanism based on a Java HashMap. But it does not work; in fact, the performance decreased instead.
    My task is to compute conditional probabilities P(result | event), given an event (e.g. "rainy day") and a result of a measurement (e.g. "degree of humidity").
    2) SITUATION AS AN EXCERPT OF MY CODE:
    Here is an abstraction of my code:
    ====================================================================================================================================
    int numOfEvents = 343;
    // initialize the cache for precomputed probabilities
    HashMap<String,Map<String,Double>> precomputedProbabilities = new HashMap<String,Map<String,Double>>(numOfEvents);
    // Given a combination of an event and a result, test whether the conditional probability has already been computed
    if (precomputedProbabilities.containsKey(eventID)) {
        if (precomputedProbabilities.get(eventID).containsKey(result)) {
            return precomputedProbabilities.get(eventID).get(result);
        }
    } else {
        // initialize a new hashmap to maintain the mappings for this event
        Map<String,Double> resultProbs4event = new HashMap<String,Double>();
        precomputedProbabilities.put(eventID, resultProbs4event);
    }
    // unless the cache short-cut above applied, we really have to compute the
    // conditional probability for this specific combination of event and result:
    // ... computations that produce the variable "condProb" ...
    // store it in the cache
    precomputedProbabilities.get(eventID).put(result, condProb);
    ====================================================================================================================================
    3) FINAL COMMENTS
    After introducing this cache mechanism I encountered a severe decrease in the performance of my algorithm. In total there are over 94 million combinations for which the conditional probabilities have to be computed. But there is a lot of redundancy in this set of feasible combinations; basically it can be brought down to only about 260,000 distinct combinations that have to be captured in the caching structure. Therefore I expected a significant increase in performance.
    What am I doing wrong? Or is the overhead of a nested HashMap really so severe? The computation of the conditional probabilities only involves basic operations.
    Only for those who are interested in more details
    4) DEEPER CONSIDERATION OF THE PROCEDURE
    Each defined event stores a list of associated results. These result lists include 7 items on average. To actually compute the conditional probability for a combination of an event and a result, I have to run through the result list of this event and perform a Java "equals" operation for each list item to compute the relative frequency of the result item at hand. So, without caching, I estimate I would perform on average:
    7 "equals" operations (to count the occurrences of this result item in a list of 7 items on average)
    plus
    1 double division (to compute the relative frequency)
    for 94 million combinations.
    Considering the computation for one (event, result) combination, this means comparing the overhead of the lookup operations in the nested HashMap with the computational cost of 7 "equals" operations plus one double division.
    I would have expected the lookups to be less expensive.
    Best regards!

    Hello!
    Thank you very much! I have performed several optimization steps, but caching is still slower than not caching. This may be due to the fact that the event IDs and results all share long common prefixes; I am not sure how this affects the computation of the hash values.
    // Note: result and eventID are given as input to the method
    Map<String,Map<String,Double>> precomputedProbs = new HashMap<String,Map<String,Double>>(1200);
    HashMap<String,Double> results2Probs = (HashMap<String,Double>) precomputedProbs.get(eventID);
    if (results2Probs != null) {
        Double prob = results2Probs.get(result);
        if (prob != null) {
            return prob;
        }
    } else {
        // so far there are no conditional probs for the annotated results of this event:
        // initialize a new hashmap to maintain the mappings
        results2Probs = new HashMap<String,Double>(2000);
        precomputedProbs.put(eventID, results2Probs);
    }
    // Later, once the conditional probability has been computed, the reference obtained
    // above saves one "get" operation on "precomputedProbs": results2Probs still points
    // to the inner HashMap<String,Double> entry of "precomputedProbs"
    results2Probs.put(result, condProb);
    And... because it was asked for, here is the computation of the conditional probability in detail:
    // Note: result and eventID are given as input to the method
    // the computed conditional probability
    double condProb = -1.0;
    ArrayList<String> resultsList = (ArrayList<String>) this.eventID2resultsList.get(eventID);
    if (resultsList != null) {
        // listSize is expected to be about 7 on average
        int listSize = resultsList.size();
        // sanity check
        if (listSize > 0) {
            // check whether the given result is among the defined results
            if (this.definedResults.containsKey(result)) {
                // analyze the list for matching results
                int occurrence_count = 0;
                for (int i = 0; i < listSize; i++) {
                    if (result.equals(resultsList.get(i))) {
                        occurrence_count++;
                    }
                }
                if (occurrence_count == 0) {
                    condProb = 0.0;
                } else {
                    condProb = ((double) occurrence_count) / ((double) listSize);
                }
                // results2Probs still holds a reference to the inner
                // HashMap<String,Double> entry of "precomputedProbs"
                results2Probs.put(result, condProb);
                return condProb;
            } else {
                // mark that the result is not part of the list of defined results
                return -1.0;
            }
        } else {
            throw new NullPointerException("Unexpected happening. Event " + eventID + " contains no result definitions.");
        }
    } else {
        throw new IllegalArgumentException("unknown event ID: " + eventID);
    }
    I have performed tests on a reduced input set: I processed only 100,000 result instances instead of about 250K. This means there are 343 * 100K = 34,300,000 = about 34.3M calls of my method per iteration of the algorithm, and I performed 20 iterations. With caching it took 8 min 5 sec; without caching, only 7 min 5 sec.
    I also tried to profile the lookup operations on the HashMaps, but they took less than a millisecond, the same as the operations for computing the conditional probability. So there was no additional insight to be gained from comparing the times at the millisecond level.
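    One more thing worth trying, since both the double lookups and the long shared key prefixes cost you: collapse the two-level map into a single HashMap with a combined key, so every probe costs one hashCode/equals instead of up to four map operations. Below is a minimal sketch (untested; the class and method names are made up for illustration, and the separator character is assumed never to occur in your IDs):

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical wrapper, for illustration only.
    class CondProbCache {
        // one flat map instead of a map of maps; sized generously for ~260,000 entries
        private final Map<String,Double> cache = new HashMap<String,Double>(524288);

        // '\u0000' is assumed never to occur in eventID or result
        private static String key(String eventID, String result) {
            return eventID + '\u0000' + result;
        }

        double lookupOrCompute(String eventID, String result) {
            String k = key(eventID, result);
            Double cached = cache.get(k); // a single probe instead of two nested gets
            if (cached != null) {
                return cached.doubleValue();
            }
            double condProb = computeCondProb(eventID, result);
            cache.put(k, condProb);
            return condProb;
        }

        // stand-in for the relative-frequency computation shown earlier in the thread
        private double computeCondProb(String eventID, String result) {
            return 0.0; // placeholder
        }
    }

    Note also that java.lang.String caches its hash code per instance, so the long common prefixes hurt mainly in equals() comparisons and when hashing freshly built keys; reusing key objects where possible avoids rehashing long strings.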

  • (new?) performance problem using jDriver after a Sql Server 6.5 to 2000 conversion

    Hi,
    This is similar - yet different - to a few of the old postings about performance
    problems with using jdbc drivers against Sql Server 7 & 2000.
    Here's the situation:
    I am running a standalone java application on a Solaris box using BEA's jdbc driver
    to connect to a Sql Server database on another network. The application retrieves
    data from the database through joins on several tables for approximately 40,000
    unique ids. It then processes all of this data and produces a file. We tuned
    the app so that the execution time for a single run through the application was
    24 minutes running against Sql Server 6.5 with BEA's jdbc driver. After performing
    a DBMS conversion to upgrade it to Sql Server 2000 I switched the jDriver to the
    Sql Server 2000 version. I ran the app and got an alarming execution time of
    5hrs 32 min. After some research, I found the problem with unicode and nvarchar/varchar
    and set the "useVarChars" property to "true" on the driver. The execution time
    for a single run through the application is now 56 minutes.
    56 minutes compared to 5 1/2 hrs is an amazing improvement. However, it is still
    over twice the execution time that I was seeing against the 6.5 database. Theoretically,
    I should be able to switch out my jdbc driver and the DBMS conversion should be
    invisible to my application. That would also mean that I should be seeing the
    same execution times with both versions of the DBMS. Has anybody else seen a
    similar situation? Are there any other settings or fixes that I can put into place
    to get my performance back down to what I was seeing with 6.5? I would rather
    not have to go through and perform another round of performance tuning after having
    already done this when the app was originally built.
    thanks,
    mike

    Mike wrote:
    Joe,
    This was actually my next step. I replaced the BEA driver with
    the MS driver and let it run through with out making any
    configuration changes, just to see what happened. I got an
    execution time of about 7 1/2 hrs (which was shocking). So,
    (comparing apples to apples) while leaving the default unicode
    property on, BEA ran faster than MS, 5 1/2 hrs to 7 1/2 hrs.
    I then set the 'SendStringParametersAsUnicode' to 'false' on the
    MS driver and ran another test. This time the application
    executed in just over 24 minutes. The actual runtime was 24 min
    16 sec, which is still ever so slightly above the actual runtime
    against SS 6.5 which was 23 min 35 sec, but is twice as fast as the
    56 minutes that BEA's driver was giving me.
    I think that this is very interesting. I checked to make sure that
    there were no outside factors that may have been influencing the
    runtimes in either case, and there were none. Just to make sure,
    I ran each driver again and got the same results. It sounds like
    there are no known issues regarding this?
    We have people looking into things on the DBMS side and I'm still
    looking into things on my end, but so far none of us have found
    anything. We'd like to continue using BEA's driver for the
    support and the fact that we use Weblogic Server for all of our
    online applications, but this new data might mean that I have to
    switch drivers for this particular application.
    Thanks. No, there is no known issue, and if you put a packet sniffer
    between the client and DBMS, you will probably not see any appreciable
    difference in the content of the SQL sent by either driver. My suspicion is
    that it involves the historical backward compatibility built in to the DBMS.
    It must still handle several iterations of older applications, speaking obsolete
    versions of the DBMS protocol, and expecting different DBMS behavior!
    Our driver presents itself as a SQL7-level application, and may well be treated
    differently than a newer one. This may include different query processing.
    Because our driver is deprecated, it is unlikely that it will be changed in
    future. We will certainly support you using the MS driver, and if you look
    in the MS JDBC newsgroup, you'll see more answers from BEA folks than
    from MS people!
    Joe
    Mike,
    The next test you should do, to isolate the issue, is to try another
    JDBC driver.
    MS provides a type-4 driver now, for free. If it is significantly faster,
    it would be
    interesting. However, it would still not isolate the problem, because
    we still would
    need to know what query plan is created by the DBMS, and why.
    Joe Weinstein at BEA
    PS: I can only tell you that our driver has not changed in its semantic
    function. It essentially sends SQL to the DBMS. It doesn't alter it.
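    For reference, here is roughly how the non-unicode setting discussed above is applied when connecting with the MS SQL Server 2000 JDBC driver. This is only a sketch: the host, database name, and credentials are placeholders, and the property name should be verified against your driver's documentation. The reason the setting matters is that string parameters sent as nvarchar can force implicit conversions that prevent SQL Server from using indexes on varchar columns.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class NonUnicodeConnect {
        public static void main(String[] args) throws Exception {
            // register the SQL Server 2000 JDBC driver (type 4)
            Class.forName("com.microsoft.jdbc.sqlserver.SQLServerDriver");

            Properties props = new Properties();
            props.setProperty("user", "appuser");        // placeholder
            props.setProperty("password", "secret");     // placeholder
            // send string parameters as varchar rather than nvarchar,
            // so indexes on varchar columns remain usable
            props.setProperty("SendStringParametersAsUnicode", "false");

            Connection con = DriverManager.getConnection(
                    "jdbc:microsoft:sqlserver://dbhost:1433;DatabaseName=reports", // placeholders
                    props);
            try {
                // ... run the report queries here ...
            } finally {
                con.close();
            }
        }
    }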

  • Performance problems when using Premiere Elements for photo slideshows

    Hello,
    I had been using Premiere Elements 9 (PE9) to make a simple slideshow for my parents from their vacation trip, and I ran into some serious performance problems. I had used it to create similar projects before, but not nearly as big: this one has about 260 photos, so basically it is 260 separate clips. I have a powerhouse workstation (see below), so it isn't my PC. Even when PE9 crashes, my performance monitor shows the CPU and RAM aren't even halfway utilized. I finally switched to Windows Movie Maker of all things and it worked seamlessly, amazing really. I'm wondering if I was just using PE9 for something other than what it was designed for, since there weren't really any video clips, just a ton of photos that I made into video clips, if that makes sense. Based upon my experience so far, I can't imagine using PE9 anymore for anything, really. I might need a more professional video editing program in the near future, although PE does seem to have a lot of features. How can I make sure it utilizes my workstation to its full potential? Here are my specs:
    PC
    Intel Core i7-2600K 4.6 GHz Overclocked
    ASUS P8P67 Deluxe Motherboard
    AMD Firepro V8800 Video Card
    Crucial 128 GB SATA 6Gb/s Solid State Drive (Operating System)
    Corsair Vengeance 16GB (4x4GB) Memory
    Corsair H60 Liquid CPU Cooler
    Corsair Professional Series Gold AX850 Power Supply
    Graphite Series 600T Mid-Tower Case
    Western Digital Caviar Black 1 TB SATA III Hard Drive
    Western Digital Caviar Black 2 TB SATA III Hard Drive
    Western Digital Green 3 TB SATA III Hard Drive
    Logitech Wireless Gaming Mouse G700
    I don’t play any games but it’s a great productivity mouse with 13 customizable buttons
    Wacom Intuos5 Pen Tablet
    Yes, this system is blazingly fast. I have yet to feel it slow down, even with Photoshop, Lightroom, InDesign, Illustrator and numerous other apps running at the same time. HOWEVER, Premiere Elements 9 has crashed NUMEROUS times, and every time my system wasn't even close to being fully taxed.
    Monitors – All run on the ATI V8800
    Dell Ultra Sharp 30 inch
    Samsung 27 Inch
    HAANS-G 28 Inch
    Herman Miller Embody Ergonomic Chair (one of my favorite items)

    Andy,
    There ARE some differences between PrE and PrPro w/ an approved CUDA-capable and MPE hardware acceleration-enabled nVidia video card, but those differences show up ONLY in the quality of the Scaling. The processing overhead is almost exactly the same, when it comes to handling the extra pixels.
    As of PrPro CS 5, two things changed:
    The max. size of Still Images went up from 4096 x 4096 pixels, to quite a bit larger (cannot recall the numbers now).
    The Scaling algorithms have been improved, though ONLY with the correct nVidia cards, with MPE hardware support enabled.
    Now, there CAN be another consideration between the two programs, in that PrPro CS 5 - CS 6 are 64-bit ONLY, so one needs a 64-bit computer and OS to run them. PrE can be either 32-bit or 64-bit, so one might, or might not, be taking advantage of a 64-bit program and OS. Still, the processing overhead will be almost identical; it's just that the 64-bit OS can spread it around a bit.
    I still recommend Scaling the large Still Images in PS, prior to Import, to keep that processing overhead as low as is possible. Scaled Still Images work just fine, and I have one Project with 3000+ Scaled Still Images, that edits just fine in PrPro, even on my older 32-bit workstation. Testing that same machine, and PrPro some years ago, I could ONLY work with up to 5 - 4096 x 4096 Stills, before things ground to a crawl.
    Now, Adobe AfterEffects handles large Still Images differently, so I just moved that test Project to AE, and added another 20 large Images, which edited just fine. IIRC, AE can handle Still Images up to 10K x 10K pixels, and that might have gone up, as of CS 5.
    Good luck, and hope that helps,
    Hunt

  • Performance Problem between Oracle 9i to Oracle 10g using Crystal XI

    We have a Crystal XI Report using ODBC Drivers, 14 tables, and one sub report. If we execute the report on an Oracle 9i database the report will complete in about 12 seconds. If we execute the report on an Oracle 10g database the report will complete in about 35 seconds.
    Our technical Setup:
    Application server: Windows Server 2003, Running Crystal XI SP2 Runtime dlls with Oracle Client 10.01.00.02, .Net Framework 1.1, C# for Crystal Integration, Unmanaged C++ for app server environment calling into C# through a dynamically loaded mixed-mode C++ DLL.
    Database server is Oracle 10g
    What we have concluded:
    Reducing the number of tables to 1 will reduce the execution time of the report from 180s to 13s. With 1 table and the sub report we would get 30 seconds
    We have done some database tracing and see that Crystal Reports issues the following query when verifying the database, and it takes longer in 10g than in 9i.
    We have done some profiling in the application code. When we retarget the first table to the target database, it takes 20-30 times longer in 10g than in 9i. Retargeting the other tables takes about twice as long. The export to a PDF file takes about 4-5 times as long in 10g as in 9i.
    Oracle 10g no longer supports the /*+ RULE */ hint.
    Verify DB Query:
    select /*+ RULE */ *
    from
      (select /*+ RULE */ null table_qualifier, o1.owner table_owner,
              o1.object_name table_name,
              decode(o1.owner,
                     'SYS',    decode(o1.object_type, 'TABLE','SYSTEM TABLE', 'VIEW','SYSTEM VIEW', o1.object_type),
                     'SYSTEM', decode(o1.object_type, 'TABLE','SYSTEM TABLE', 'VIEW','SYSTEM VIEW', o1.object_type),
                     o1.object_type) table_type,
              null remarks
       from all_objects o1
       where o1.object_type in ('TABLE', 'VIEW')
       union
       select /*+ RULE */ null table_qualifier, s.owner table_owner,
              s.synonym_name table_name, 'SYNONYM' table_type, null remarks
       from all_objects o3, all_synonyms s
       where o3.object_type in ('TABLE','VIEW')
         and s.table_owner = o3.owner
         and s.table_name = o3.object_name
       union
       select /*+ RULE */ null table_qualifier, s1.owner table_owner,
              s1.synonym_name table_name, 'SYNONYM' table_type, null remarks
       from all_synonyms s1
       where s1.db_link is not null) tables
    WHERE 1=1 AND TABLE_NAME='QCTRL_VESSEL' AND table_owner='QLM'
    ORDER BY 4,2,3
    SQL From Main Report:
    SELECT "QCODE_PRODUCT"."PROD_DESCR", "QCTRL_CONTACT"."CONTACT_FIRST_NM", "QCTRL_CONTACT"."CONTACT_LAST_NM", "QCTRL_MEAS_PT"."MP_NM", "QCTRL_ORG"."ORG_NM", "QCTRL_TKT"."SYS_TKT_NO", "QCTRL_TRK_BOL"."START_DT", "QCTRL_TRK_BOL"."END_DT", "QCTRL_TRK_BOL"."DESTINATION", "QCTRL_TRK_BOL"."LOAD_TEMP", "QCTRL_TRK_BOL"."LOAD_PCT", "QCTRL_TRK_BOL"."WEIGHT_OUT", "QCTRL_TRK_BOL"."WEIGHT_IN", "QCTRL_TRK_BOL"."WEIGHT_OUT_UOM_CD", "QCTRL_TRK_BOL"."WEIGHT_IN_UOM_CD", "QCTRL_TRK_BOL"."VAPOR_PRES", "QCTRL_TRK_BOL"."SPECIFIC_GRAV", "QCTRL_TRK_BOL"."PMO_NO", "QCTRL_TRK_BOL"."ODORIZED_VOL", "QARCH_SEC_USER"."SEC_USER_NM", "QCTRL_TKT"."DEM_CTR_NO", "QCTRL_BA_ENTITY"."BA_NM1", "QCTRL_BA_ENTITY_VW"."BA_NM1", "QCTRL_BA_ENTITY"."BA_ID", "QCTRL_TRK_BOL"."VOLUME", "QCTRL_TRK_BOL"."UOM_CD", "QXREF_BOL_PROD"."MOVEMENT_TYPE_CD", "QXREF_BOL_PROD"."BOL_DESCR", "QCTRL_TKT"."VOL", "QCTRL_TKT"."UOM_CD", "QCTRL_PMO"."LINE_UP_BEFORE", "QCTRL_PMO"."LINE_UP_AFTER", "QCODE_UOM"."UOM_DESCR", "QCTRL_ORG_VW"."ORG_NM"
    FROM (((((((((((("QLM"."QCTRL_TRK_BOL" "QCTRL_TRK_BOL" INNER JOIN "QLM"."QCTRL_PMO" "QCTRL_PMO" ON "QCTRL_TRK_BOL"."PMO_NO"="QCTRL_PMO"."PMO_NO") INNER JOIN "QLM"."QCTRL_MEAS_PT" "QCTRL_MEAS_PT" ON "QCTRL_TRK_BOL"."SUP_MP_ID"="QCTRL_MEAS_PT"."MP_ID") INNER JOIN "QLM"."QCTRL_TKT" "QCTRL_TKT" ON "QCTRL_TRK_BOL"."PMO_NO"="QCTRL_TKT"."PMO_NO") INNER JOIN "QLM"."QCTRL_CONTACT" "QCTRL_CONTACT" ON "QCTRL_TRK_BOL"."DRIVER_CONTACT_ID"="QCTRL_CONTACT"."CONTACT_ID") INNER JOIN "QFC_QLM"."QARCH_SEC_USER" "QARCH_SEC_USER" ON "QCTRL_TRK_BOL"."USER_ID"="QARCH_SEC_USER"."SEC_USER_ID") LEFT OUTER JOIN "QLM"."QCODE_UOM" "QCODE_UOM" ON "QCTRL_TRK_BOL"."ODORIZED_VOL_UOM_CD"="QCODE_UOM"."UOM_CD") INNER JOIN "QLM"."QCTRL_ORG_VW" "QCTRL_ORG_VW" ON "QCTRL_MEAS_PT"."ORG_ID"="QCTRL_ORG_VW"."ORG_ID") INNER JOIN "QLM"."QCTRL_BA_ENTITY" "QCTRL_BA_ENTITY" ON "QCTRL_TKT"."DEM_BA_ID"="QCTRL_BA_ENTITY"."BA_ID") INNER JOIN "QLM"."QCTRL_CTR_HDR" "QCTRL_CTR_HDR" ON "QCTRL_PMO"."DEM_CTR_NO"="QCTRL_CTR_HDR"."CTR_NO") INNER JOIN "QLM"."QCODE_PRODUCT" "QCODE_PRODUCT" ON "QCTRL_PMO"."PROD_CD"="QCODE_PRODUCT"."PROD_CD") INNER JOIN "QLM"."QCTRL_BA_ENTITY_VW" "QCTRL_BA_ENTITY_VW" ON "QCTRL_PMO"."VESSEL_BA_ID"="QCTRL_BA_ENTITY_VW"."BA_ID") LEFT OUTER JOIN "QLM"."QXREF_BOL_PROD" "QXREF_BOL_PROD" ON "QCTRL_PMO"."PROD_CD"="QXREF_BOL_PROD"."PURITY_PROD_CD") INNER JOIN "QLM"."QCTRL_ORG" "QCTRL_ORG" ON "QCTRL_CTR_HDR"."BUSINESS_UNIT_ORG_ID"="QCTRL_ORG"."ORG_ID"
    WHERE "QCTRL_TRK_BOL"."PMO_NO"=12345 AND "QXREF_BOL_PROD"."MOVEMENT_TYPE_CD"='TRK'
    SQL From Sub Report:
    SELECT "QXREF_BOL_VESSEL"."PMO_NO", "QXREF_BOL_VESSEL"."VESSEL_NO"
    FROM "QLM"."QXREF_BOL_VESSEL" "QXREF_BOL_VESSEL"
    WHERE "QXREF_BOL_VESSEL"."PMO_NO"=12345
    Does anyone have any suggestions on how we can improve the report performance with 10g?

    Hi Eric,
    Thanks for your response. The optimizer mode in our 9i database is CHOOSE. We changed the optimizer mode from ALL_ROWS to CHOOSE in 10g but it didn't make a difference.
    While researching Metalink I came across a couple of documents that describe performance problems and issues with using certain data-dictionary views in 10g. Apparently the definitions of ALL_OBJECTS, ALL_ARGUMENTS and ALL_SYNONYMS changed in 10g, resulting in degraded performance when these views are queried. These are the same views that Crystal Reports is querying. We'll try the workaround suggested in these documents and see if it resolves the issue.
    Here are the Doc Ids, if you are interested:
    Note 377037.1
    Note 364822.1
    Thanks again for your response.
    Venu Boddu.

  • Performance problem when using CAPS LOCK piano input

    Dear reader,
    I'm very new to Logic and am running into a performance problem when using the CAPS LOCK piano keyboard for instrument input: when I'm not recording, everything is fine and the program responds instantly to my keystrokes, but as soon as I go into record mode there is sometimes a delay in the response time (so I press a key and it takes up to half a second before the note is actually played).
    Is there anything I can do to improve performance (for example, turning off certain features of the application), or should I never use the CAPS LOCK keyboard anyway and go straight for an external MIDI keyboard?
    Thanks and regards,
    Tim Metz

    Does your project have audio tracks, and just how heavy is it, how many tracks? Also, what kind of software instrument do you use?

  • I am using Photoshop CS5 on a new iMac with a wireless keyboard. I used to be able to hit F11 to perform a custom sharpening action, but now the F11 key is the volume control key and I have tried everything to disable or switch the volume key.

    I am using Photoshop CS5 on a new iMac with a wireless keyboard.  I used to be able to hit F11 to perform a custom sharpening action but now the F11 key is the volume control key and I have tried everything to disable or switch the volume key. I have also tried assigning other function keys to initiate the action. Is there a simple solution to this? What am I missing?

    Try unmounting the volume on your iMac using Disk Utility. Then mount it again. You may need to reboot the laptop or relaunch its Finder process (using the Force Quit window) after remounting the drive on your iMac. Remember that no process may be accessing any files on the drive you plan to unmount, or the unmount will fail. Unmounting and remounting an external drive on my iMac made it become visible on my MacBook Pro after it had disappeared.

  • How do I fix the compatibility issue between my iPhone 4S with iOS 7 and my Pioneer car stereo?

    How do I fix the compatibility issue between my iPhone 4S with iOS 7 and my Pioneer car stereo? There was no problem with iOS 6, but now I get a message saying "this device is not compatible" and so I can't use Netflix for example. How do I fix it?

    This is a typical response from the manufacturer. Did you try the fix that Lawrence mentioned? When Apple or any other phone manufacturer updates phone software, they include the latest Bluetooth stack. The problem usually lies with the radio manufacturer, whose devices use older Bluetooth protocols. You can try this support document http://support.apple.com/kb/TS3581 and see if anything there helps, but generally it requires the radio manufacturer to update their firmware.

  • Performance problem in Mapping Designer using UDF with external imports

    Hello,
    We have a big performance problem when developing (not executing) graphical mappings whenever we use user-defined functions (UDFs) with import entries referencing JAR files that are imported as "imported archives".
    For example, executing an invoice mapping with a somewhat bigger test file in the mapping designer takes:
    - after opening, not in change mode: 6 seconds
    - after switching to change mode: 37 seconds (that's clear, now everything is compiled first)
    - after adding "com.seeburger.functions.permstore.CounterFactory;" to the "import" field of one UDF, with no other change: 227 seconds
    - after saving and submitting the change list (no longer in change mode): 6 seconds
    - after switching to change mode again: 227 seconds
    So in change mode, the testing time (and likewise the time for watching queues) increases by more than three minutes as soon as a UDF imports classes from external JAR files. It doesn't depend on the Seeburger functions as such (we use XI for EDIFACT too, so we use some Seeburger functions); I can reproduce it with any other JAR file referenced from a UDF.
    Using JDK built-in classes like "java.text.NumberFormat;" in the import field doesn't slow down testing.
    Can anybody reproduce this? We are using XI 3.0 SP19 on an AIX machine, so we also have to use the Java version from IBM.
    cu
    Manfred

    The problem was fixed by an upgrade of the JDK.

  • Performance problem using OBJECT tag

    I have a performance problem using the Java plugin and was wondering if anyone else has seen the same thing. I have a rather complex applet that interacts with JavaScript in a web page using the LiveConnect API. The applet both calls JavaScript in the page and is called by JavaScript.
    I'm using IE6 with the Java plugin that ships with the 1.4.2_06 JVM. I have noticed that if I deploy the applet using the OBJECT tag, the application seems to thrash every time I call a Java method on the applet from JavaScript. When I deploy the same applet using the APPLET tag, the performance is much better. I would like to use the OBJECT tag because the applet behaves better and I have more control over the caching.
    This problem seems to sit on the boundaries of IE6, JScript, the JVM and my applet (and I suppose any of them could be the real culprit). My application is IE5+ specific, so I cannot test the applet in isolation from the surrounding HTML/JavaScript (for example in another browser).
    Does anyone have any idea?
    thanks in advance.
    dennis.


  • Performance Problem with File Adapter using FTP Connection

    Hi All,
    I have a pool of 19 interfaces that send data from R/3 using the RFC adapter, and these interfaces generate 30 TXT files on a target server. I'm using file adapters as the receiver communication channel, and it's causing a serious performance problem. In the file adapter I'm using an FTP connection in "Permanently" connection mode. Does anybody know whether the permanent connection could be the cause of the performance problem?
    These interfaces will run once a day with a total of 600 messages.
    We are still using a test server with few messages.

    Hi Regis,
    We also faced the same problem. What's happening is that when the FTP session is initiated by the file adapter, it runs from the XI server, so the server's memory gets eaten up. Why don't you give "per file transfer" a try?
    If the folder you are connecting to is within your XI server's network, you can also mount (or map) that drive on the XI server and use it with the file adapter's NFS protocol, thereby increasing performance.
    Cheers
    JK
