Performance Problem on Designer Repository

Hello,
We have a performance problem on our Designer Repository database. I have gathered system statistics, recreated objects, and truncated the temporary tablespaces, but compiling forms still runs very slowly. The top SQL is this statement:
SELECT MAX(1)
FROM I$SDD_WA_CONTEXT CTXT
WHERE CTXT.OBJECT_IVID = :B1
AND CTXT.WORKAREA_IRID = JR_CONTEXT.WORKAREA
AND CTXT.WASTEBASKET = JR_CONTEXT.WASTEBASKET
How can I optimize this statement?

jr_wastebasket.purge_insignificant_versions helps; I dropped the old versions of the forms.
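In case the statement still shows up as top SQL after the purge, here is a minimal diagnostic sketch (not a confirmed fix; I$SDD_WA_CONTEXT is a Designer-internal table, so check with Oracle Support before adding any object to it): list which indexes already cover the three filter columns, then refresh the optimizer statistics on the table. The REPOS_OWNER schema name below is a placeholder for your repository owner.
-- Which indexes exist on the table, and in what column order?
SELECT index_name, column_name, column_position
FROM   all_ind_columns
WHERE  table_name = 'I$SDD_WA_CONTEXT'
ORDER  BY index_name, column_position;
-- Refresh statistics on the table and its indexes (schema name is a placeholder).
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'REPOS_OWNER',
                                tabname => 'I$SDD_WA_CONTEXT',
                                cascade => TRUE);
END;
/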
Edited by: Vazha_Mantua on Mar 27, 2009 9:24 AM

Similar Messages

  • Performance problem in Mapping Designer using UDF with external imports

    Hello,
    we have a big performance problem when developing (not when executing) graphical Mappings as soon as we use user-defined functions (UDFs) whose "Import" entries reference JAR files imported as "imported archives".
    For example, executing the invoice mapping with a slightly larger test file in the Mapping Designer:
    - after opening, not in change mode: 6 seconds
    - after switching to change mode: 37 seconds (that's clear, now everything is compiled first)
    - after adding "com.seeburger.functions.permstore.CounterFactory;" to the "import" field of one UDF, no other change: 227 seconds
    - after saving and submitting the change list (no longer in change mode): 6 seconds
    - after switching to change mode: 227 seconds
    So the execution time of a test (and also of watching queues) increases to more than three minutes, but only in change mode and only when a UDF imports classes from external JAR files. It does not depend on the Seeburger functions themselves (we also use XI for EDIFACT, so we use some Seeburger functions); I can reproduce it with any other JAR file that is used from a UDF.
    Using classes included in Java, such as "java.text.NumberFormat;", in "Import" does not slow down testing.
    Can anybody reproduce this? We are using XI 3.0 SP19 on an AIX machine, so we also have to use the Java version from IBM.
    cu
    Manfred

    The problem was fixed by an upgrade of the JDK.

  • Import designer repository error

    I am trying to import a designer repository using RAU but I have seen several errors about ORACLE error 942. One of them is:
    IMP-00017: following statement failed with ORACLE error 942:
    "CREATE DIMENSION "PROMOTIONS_DIM" LEVEL "PROMO" IS ("PROMOTIONS"."PROMO_ID""
    ") LEVEL "SUBCATEGORY" IS ("PROMOTIONS"."PROMO_SUBCATEGORY_ID") LEVEL "CATEG"
    "ORY" IS ("PROMOTIONS"."PROMO_CATEGORY_ID") LEVEL "PROMO_TOTAL" IS ("PROMOTI"
    "ONS"."PROMO_TOTAL_ID") HIERARCHY "PROMO_ROLLUP" ("PROMO" CHILD OF "SUBCATEG"
    "ORY" CHILD OF "CATEGORY" CHILD OF "PROMO_TOTAL") ATTRIBUTE "PROMO" LEVEL "P"
    "ROMO" DETERMINES "PROMOTIONS"."PROMO_NAME" ATTRIBUTE "PROMO" LEVEL "PROMO" "
    "DETERMINES "PROMOTIONS"."PROMO_END_DATE" ATTRIBUTE "PROMO" LEVEL "PROMO" DE"
    "TERMINES "PROMOTIONS"."PROMO_BEGIN_DATE" ATTRIBUTE "PROMO" LEVEL "PROMO" DE"
    "TERMINES "PROMOTIONS"."PROMO_COST" ATTRIBUTE "SUBCATEGORY" LEVEL "SUBCATEGO"
    "RY" DETERMINES "PROMOTIONS"."PROMO_SUBCATEGORY" ATTRIBUTE "CATEGORY" LEVEL "
    ""CATEGORY" DETERMINES "PROMOTIONS"."PROMO_CATEGORY" ATTRIBUTE "PROMO_TOTAL""
    " LEVEL "PROMO_TOTAL" DETERMINES "PROMOTIONS"."PROMO_TOTAL""
    IMP-00003: ORACLE error 942 encountered
    ORA-00942: table or view does not exist
    I have the same versions (dev suite 10g and database 10.2.0.2) with the source designer. Both of them are on Windows XP.
    Both repositories use a tablespace with exactly the same name (this may be irrelevant). When I start importing, I already have some data in that tablespace in the target repository. The DSGNR users on both repositories use that tablespace as their default tablespace.
    Is anything wrong here? I upgraded the database from 10.2.0.1 to 10.2.0.2 myself before exporting/importing the Designer repository. Could this be caused by an unsuccessful upgrade? I did not see any errors during the database upgrade.
    Your input is highly appreciated.
    Jennifer

    Yes, there is a problem here.
    You have incompatible import/export tools between Designer and the database.
    In the Repository install manual it states:
    Step 5b - (Oracle 9i or 10.2 DB) Install Oracle 9i or 10.2 Import and Export Utilities
    Before installing Designer Repository on an Oracle 9i or 10.2 database, you need to set up the installation workstation to use the Oracle 9i or 10.2 import and export utilities. To do so, perform the following steps at the workstation from which you will be running the repository installation:
    From the Oracle 9i or 10.2 installation media, install the Oracle 9i or 10.2 import and export utilities in a dedicated Oracle home.
    In the Windows Registry, locate the key named:
    HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\KEY_HomeName\REPOS61
    where HOMEn is the home number of the home installed into in a multiple Oracle home environment; it is not present where the default Oracle home is being used.
    Change the value of the EXECUTE_IMPORT and EXECUTE_EXPORT variables to point to the new Oracle home. Thus, if the new Oracle home is d:\des_9i, the settings would be:
    d:\des_9i\bin\exp.exe
    d:\des_9i\bin\imp.exe
    If you have the 10.2 DB on the same machine then just point the registry to those tools; if you don't, you will need to install the 10.2 client with admin privileges to get the imp/exp tools installed and then point the registry to them.
    hope that helps.
    Michael

  • Performance problems when running PostgreSQL on ZFS and tomcat

    Hi all,
    I need help with some analysis and problem solution related to the below case.
    The long story:
    I'm running into some massive performance problems on two 8-way HP ProLiant DL385 G5 severs with 14 GB ram and a ZFS storage pool in raidz configuration. The servers are running Solaris 10 x86 10/09.
    The configuration between the two is pretty much the same and the problem therefore seems generic for the setup.
    Within a non-global zone I’m running a tomcat application (an institutional repository) connecting via localhost to a Postgresql database (the OS provided version). The processor load is typically not very high as seen below:
    NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU                            
        49 postgres  749M  669M   4,7%   7:14:38  13%
         1 jboss    2519M 2536M    18%  50:36:40 5,9%
    We are not 100% sure why we run into performance problems, but when it happens the application slows down and swaps out (see below). When it settles, everything seems to return to normal. When the problem is acute, the application is totally unresponsive.
    NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
        1 jboss    3104M  913M   6,4%   0:22:48 0,1%
    #sar -g 5 5
    SunOS vbn-back 5.10 Generic_142901-03 i86pc    05/28/2010
    07:49:08  pgout/s ppgout/s pgfree/s pgscan/s %ufs_ipf
    07:49:13    27.67   316.01   318.58 14854.15     0.00
    07:49:18    61.58   664.75   668.51 43377.43     0.00
    07:49:23   122.02  1214.09  1222.22 32618.65     0.00
    07:49:28   121.19  1052.28  1065.94  5000.59     0.00
    07:49:33    54.37   572.82   583.33  2553.77     0.00
    Average     77.34   763.71   771.43 19680.67     0.00
    Making more memory available to tomcat seemed to worsen the problem, or at least didn't prove to have any positive effect.
    My suspicion is currently focused on PostgreSQL. Turning off fsync boosted performance and made the problem appear less often.
    An unofficial performance evaluation on the database with “vacuum analyze” took 19 minutes on the server and only 1 minute on a desktop pc. This is horrific when taking the hardware into consideration.
    The short story:
    I'm trying different steps but running out of ideas. We've read that the database block size and file system block size should match. PostgreSQL uses 8 KB and ZFS defaults to 128 KB. I didn't find much information on the matter, so if anyone can help, please recommend how to make this change…
    Any other recommendations and ideas we could follow? We know from other installations that the above setup runs without a single problem on Linux on much smaller hardware without specific tuning. What makes Solaris in this configuration so darn slow?
    Any help appreciated and I will try to provide additional information on request if needed…
    Thanks in advance,
    Kasper

    raidz isn't a good match for databases. Databases tend to require good write performance, for which mirroring works better.
    Adding a pair of SSDs as a ZIL would probably also help, but chances are it's not an option for you.
    You can change the record size with "zfs set recordsize=8k <dataset>".
    It will only take effect for newly written data, not existing data.
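    Before changing the recordsize it may also be worth confirming the block size your PostgreSQL build actually uses; a minimal sketch (block_size is a read-only server parameter, typically 8192 bytes):
    -- Run in psql against the repository database; the value is reported in bytes.
    SHOW block_size;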

  • Performance Problems on Faces Navigation Diagram and Hyperthreading query

    Am I the only one having performance problems when dealing with faces-config diagrams of about 35 JSPs displayed on the sheet? Using JDev 10.1.3, it takes my workstation about a full minute and a half to update the name of an arrow. The most stressed component during this task seems to be the CPU.
    And just another question: has anybody investigated how JDev's performance is affected by enabling or disabling hyperthreading? In my case CPU usage only manages to reach 50%, so I'm tempted to switch HT off to let JDev use all the CPU power, if that would help.

    Hello Diego,
    you mentioned that you compared a BEx Query with the Web Intelligence report. Could you provide more details here?
    - what are the elements in the rows, columns and free characteristics in the BEx Query?
    - was the query executed as designed in the BEx Query Designer with BEx Web Reporting?
    - what are the elements in the WebIntelligence Query panel?
    thanks
    Ingo

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) when passing a batch of records, or 160 seconds for one record (unacceptable!). How can it take so much longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not use XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query); a simplified sketch of that loop appears right after this list. The performance is one record/second, which is minimally acceptable but interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
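    A simplified sketch of that explicit-cursor loop (the legacy_employees / legacy_service_years tables and their columns are illustrative only, not our real schema; the target table matches the RECORDS definition further below):
    DECLARE
      v_doc XMLTYPE;
    BEGIN
      -- One small query per employee avoids the hang we saw when converting all
      -- rows in a single XMLTABLE/ora:view() statement.
      FOR emp IN (SELECT DISTINCT employee_id FROM legacy_employees) LOOP
        SELECT XMLELEMENT("Employee",
                 XMLELEMENT("Id", e.employee_id),
                 (SELECT XMLAGG(
                           XMLELEMENT("ServiceYear",
                             XMLFOREST(s.service_year AS "Year",
                                       s.salary       AS "Salary")))
                    FROM legacy_service_years s
                   WHERE s.employee_id = e.employee_id))
          INTO v_doc
          FROM legacy_employees e
         WHERE e.employee_id = emp.employee_id;
        INSERT INTO records (ssn, xmlrec) VALUES (emp.employee_id, v_doc);
      END LOOP;
      COMMIT;
    END;
    /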
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
    SSN varchar2(20),
    XMLREC sys.xmltype)
    xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
    CODE varchar2(4),
    DESCRIPTION varchar2(500)
    );
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
    ssn varchar2(20);
    xmlrec xmltype;
    i integer;
    BEGIN
    xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    </Root>');
    for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
    end loop;
    commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
    for $r in Root
    return
    <Root>
    <Id>123456789</Id>
    {for $e in $r/Element
        return
        <Element>
          <Subelement1>
            {$e/Subelement1/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text() }
    </Description>
    </Subelement1>
    <Subelement2>
    {$e/Subelement2/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
    </Description>
    </Subelement2>
    <Subelement3>
    {$e/Subelement3/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text() }
    </Description>
    </Subelement3>
    </Element>
    </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.
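    Following up on that last question, here is a hedged sketch of what joining the code table in ordinary SQL might look like (cut down to a single subelement and reusing the test schema above; not verified for performance): shred each Element with XMLTABLE, join CODES as a plain table so the optimizer can choose a hash join, and rebuild the XML with XMLAGG.
    select xmlelement("Root",
             xmlelement("Id", r.ssn),
             xmlagg(
               xmlelement("Element",
                 xmlelement("Subelement1",
                   xmlelement("Code", x.code1),
                   xmlelement("Description", c1.description)))))
    from records r,
         xmltable('/Root/Element'
                  passing r.xmlrec
                  columns code1 varchar2(4) path 'Subelement1/Code') x,
         codes c1
    where r.ssn = '10000'
      and c1.code = x.code1
    group by r.ssn;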

  • 6321 performance problems

    Hello again,
    we are experiencing performance problems with the 6321 in our environment - 10 kHz system clock, i.e. the analog input channels should be read every 100 µs. This works fine with one card (differentially cabled), but if we plug in a second card the system just comes to a grinding halt.
    It appears that one AI channel register access takes about 4 µs, which would mean 64 µs for 16 channels. We have tried RLP, "pure" DMA and DMA using interrupts.
    Am I getting something wrong or was the 6321 just not designed for this kind of application?
    Best regards,
    Philip

  • How to find cause of db performance problem??

    Hi,
    I am facing continuous performance issues with our database and for that I want to know how I can get information about the following points:
    1- How to find the most accessed table(s), or the tables with the highest hits, or which table(s) the top queries are accessing?
    2- What indication can tell that a particular table needs to be rebuilt?
    3- When to rebuild indexes? And how to know that an index needs to be rebuilt?
    Your prompt reply is highly appreciated
    Thanks,
    Younis

    Hi,
    a good starting point for investigating poor database performance is AWR (if you have a license for that) or statspack (if you don't). If you need help interpreting it, you can refer to J. Lewis's series on statspack reports (also applies to AWR):
    http://jonathanlewis.wordpress.com/2011/03/09/statspack-reports/
    I have also made a few blog posts on this topic, see http://savvinov.com/tag/awr/
    Regarding your other questions -- contrary to popular belief, rebuilding indexes or tables is seldom helpful. More often, performance problems are caused by bad execution plans (side effects of bind peeking, inaccurate statistics, correlated predicates etc.), data design issues, bad coding practices, not using bind variables etc.
    Database performance is a huge topic and obviously cannot fit into a discussion thread. Christian Antognini's book "Troubleshooting Oracle Performance" can provide you a gentle introduction into performance tuning, provided you already have good familiarity with Oracle architecture.
    Or, if you want help with your particular problem, post your AWR report here and briefly describe what your users are unhappy about -- there is a good chance that you get valuable feedback from several renowned experts.
    Good luck!
    Best regards,
    Nikolay
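    As a quick starting point for question 1, here is a minimal sketch against the dynamic performance views (requires SELECT privilege on the V$ views; the top-10 cut-off is arbitrary): the first query lists the most expensive statements since instance startup, the second the segments with the most logical reads.
    -- Top SQL by elapsed time
    select *
    from  (select sql_id, executions, elapsed_time, buffer_gets, sql_text
           from   v$sql
           order  by elapsed_time desc)
    where rownum <= 10;
    -- Most heavily read tables/indexes
    select *
    from  (select owner, object_name, object_type, value as logical_reads
           from   v$segment_statistics
           where  statistic_name = 'logical reads'
           order  by value desc)
    where rownum <= 10;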

  • Performance problems when using Premiere Elements for photo slideshows

    Hello,
    I had been using Premiere Elements 9 (PE9) to make a simple slideshow for my parents from their vacation trip and I ran into some serious performance problems.  I had used it to create similar projects before, but not nearly as big.  This one is like 260 photos, so basically it is 260 separate clips.  I have a POWERHOUSE workstation (see below) so it isn't my PC.  Even when PE9 crashes, looking at my performance monitor my CPU and RAM aren't even halfway being utilized.  I finally switched to Windows Movie Maker of all things and it worked seamlessly, amazing really.  I'm wondering if I was just using PE9 for something other than what it was designed for, since there weren't really any video clips, just a ton of photos that I made into video clips, if that makes sense.  Based upon my experience with this so far, I can't imagine using PE9 anymore for anything really.  I might have the need for a more professional video editing program in the near future, although it does seem like PE has a lot of features.  How can I make sure it utilizes my workstation to its full potential?  Here are my specs:
    PC
    Intel Core i7-2600K 4.6 GHz Overclocked
    ASUS P8P67 Deluxe Motherboard
    AMD Firepro V8800 Video Card
    Crucial 128 GB SATA 6Gb/s Solid State Drive (Operating System)
    Corsair Vengeance 16GB (4x4GB) Memory
    Corsair H60 Liquid CPU Cooler
    Corsair Professional Series Gold AX850 Power Supply
    Graphite Series 600T Mid-Tower Case
    Western Digital Caviar Black 1 TB SATA III Hard Drive
    Western Digital Caviar Black 2 TB SATA III Hard Drive
    Western Digital Green 3 TB SATA III Hard Drive
    Logitech Wireless Gaming Mouse G700
    I don’t play any games but it’s a great productivity mouse with 13 customizable buttons
    Wacom Intuos5 Pen Tablet
    Yes, this system is blazingly fast.  I have yet to feel it slow down, even with Photoshop, Lightroom, InDesign, Illustrator and numerous other apps running at the same time.  HOWEVER, Premiere Elements 9 has crashed NUMEROUS times, every time my system wasn't even close to being fully taxed.
    Monitors – All run on the ATI V8800
    Dell Ultra Sharp 30 inch
    Samsung 27 Inch
    HAANS-G 28 Inch
    Herman Miller Embody Ergonomic Chair (one of my favorite items)

    Andy,
    There ARE some differences between PrE and PrPro w/ an approved CUDA-capable and MPE hardware acceleration-enabled nVidia video card, but those differences show up ONLY in the quality of the Scaling. The processing overhead is almost exactly the same, when it comes to handling the extra pixels.
    As of PrPro CS 5, two things changed:
    The max. size of Still Images went up from 4096 x 4096 pixels, to quite a bit larger (cannot recall the numbers now).
    The Scaling algorithms have been improved, though ONLY with the correct nVidia cards, with MPE hardware support enabled.
    Now, there CAN be another consideration, between the two programs, in that PrPro CS 5 - CS 6 are 64-bit ONLY, so one benefits from a 64-bit computer and OS to run it. PrE can be either 32-bit or 64-bit, so one might, or might not, be taking advantage of a 64-bit program and OS. Still, the processing overhead will be almost identical; it's just that the 64-bit OS can spread it around a bit.
    I still recommend Scaling the large Still Images in PS, prior to Import, to keep that processing overhead as low as is possible. Scaled Still Images work just fine, and I have one Project with 3000+ Scaled Still Images, that edits just fine in PrPro, even on my older 32-bit workstation. Testing that same machine, and PrPro some years ago, I could ONLY work with up to 5 - 4096 x 4096 Stills, before things ground to a crawl.
    Now, Adobe AfterEffects handles large Still Images differently, so I just moved that test Project to AE, and added another 20 large Images, which edited just fine. IIRC, AE can handle Still Images up to 10K x 10K pixels, and that might have gone up, as of CS 5.
    Good luck, and hope that helps,
    Hunt

  • Performance Problems with Web Layouts in web interface

    Hello Gurus,
    We have a a BPS web interface tool which has the following design:
    1> A web interface with several tabs
    2> Each tab has around 3-4 input layouts which are dependent on each other
    3> In all there are 120-140 layouts that the tool uses...
    My questions were in term of performance
    1> Is there a limit to how many web layouts you can use per page/tab/view, or does SAP recommend a specific number of web layouts per page/tab/view?
    2> If there is a limitation... our intention was to convert all the display layouts into BW reports so as to increase the performance of the tool....
    3> Would like to know the restriction on the number of users who can log into the tool at a specific point in time? We may have 50-60 minimum using this tool.
    I would appreciate your help in this regard.
    Thanks in advance

    Hello Rashmi,
    Have you had a chance to look at the performance guide and the SAP Notes on BPS performance? If not, here are the details:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/documents/a1-8-4/performance guide - sap sem bw bps.pdf
    Enclosed are a few SAP Notes related to improving performance.
    358921 - Oracle database parameterization for SEM
    459897 - SEM-CPM: Performance when reading transaction data
    566713 – Required information for the analysis of performance problems 
    560369 - Proposals BW aggregates for SEM-BPS
    180605 - Oracle database parameter settings for BW
    124361 - Oracle parameterization (R/3 >= 4.x, Oracle 8.x / 9.x)
    358529 - Overview of performance notes
    350011 - Technical performance: Using the business content
    340246 - Techn. performance: Overview of statistics   
    417091 - Optimize execution time of planning functions
    Some of them are Oracle specific; ignore them if you are not on an Oracle database. Hope this helps.
    Thanks,
    Praveen
    PS: Don't forget to reward points.

  • Serious performance problems on iWeb site

    Can anyone help me find the problem with peterlance.com? It looks like author Peter Lance uses iWeb to update his web site. He redesigned it recently (and did not use iWeb for the old design). Anyway, it has serious performance problems on Internet Explorer, which I was hoping you guys could help pinpoint? Is there something simple I can tell him to try to fix the problem? He is launching a new book on Tuesday so he will be getting a lot of traffic, but for now the response time is not very good. Any suggestions?
    In particular, each new page seems to hang for about ten seconds and freeze using Internet Explorer 6. Any fixes?
    I mean, fixes other than: "Don't use Internet Explorer!"
    http://www.peterlance.com/Peter%20Lance/Home/Home.html
    name="Generator" content="iWeb 1.1.2"

    Welcome to the discussions, TopDog08.
    There are several plain .jpg's and that's good, but you need to convert every .png on the page (or as many .png's as you can) to a .jpg. For example,
    http://www.peterlance.com/Peter%20Lance/Home/Homefiles/shapeimage46.png
    Convert that to a jpg, then replace the graphic and text on the iWeb page with this converted jpg (some have had success using Downsize http://www.stuntsoftware.com/Downsize/).
    The ones you won't be able to do anything about are the ones that are a part of the blog, but that will be only 5 images instead of the many you have now.
    This image
    http://www.peterlance.com/Peter%20Lance/Home/Homefiles/shapeimage50.png
    Is covering part of your menu which is why it appears that only the top half is sensitive to rolling over. You'll have to adjust your layout to account for this.
    Another area to look at, there are white boxed png's surrounding the amazon book covers at the top.
    http://www.peterlance.com/Peter%20Lance/Home/Homefiles/imageEffectsAboveAli%20Mug%20TPI%20jpg.png
    Another set of .png's you'll have to get rid of. How did those get put on the page?
    What you see happening with IE is JavaScript running to fix IE's problem with png's. The fewer .png's you have, the less of a problem with IE you have.

  • OBIEE 11g performance problem

    Hi,
    I am facing a performance problem in OBIEE 11g. When I run the query taken from nqquery.log against the database, it returns the result within a few seconds. But in OBIEE Answers, the query runs forever and never shows any data.
    Attaching the query below.
    Please help to solve.
    Thanks
    Titas
    [2012-10-16T18:07:34.000+00:00] [OracleBIServerComponent] [TRACE:2] [USER-23] [] [ecid: 3a39339b45a46ab4:-70b1919f:13a1f282668:-8000-00000000000769b2] [tid: 44475940] [requestid: 26e1001e] [sessionid: 26e10000] [username: weblogic] -------------------- General Query Info: [[
    Repository: Star, Subject Area: BM_BG Pascua Lama, Presentation: BG PL Project Analysis
    [2012-10-16T18:07:34.000+00:00] [OracleBIServerComponent] [TRACE:2] [USER-18] [] [ecid: 3a39339b45a46ab4:-70b1919f:13a1f282668:-8000-00000000000769b2] [tid: 44475940] [requestid: 26e1001e] [sessionid: 26e10000] [username: weblogic] -------------------- Sending query to database named XXBG Pascua Lama (id: <<26911>>), connection pool named Connection Pool, logical request hash e3feca59, physical request hash 5ab00db6: [[
    WITH
    SAWITH0 AS (select sum(T6051.COST_AMT_PROJ_RATE) as c1,
    sum(T6051.COST_AMOUNT) as c2,
    T6051.AFE_NUMBER as c3,
    T6051.BUDGET_OWNER as c4,
    T6051.COMMENTS as c5,
    T6051.COMMODITY as c6,
    T6051.COST_PERIOD as c7,
    T6051.COST_SOURCE as c8,
    T6051.COST_TYPE as c9,
    T6051.DATA_SEL as c10,
    T6051.FACILITY as c11,
    T6051.HISTORICAL as c12,
    T6051.OPERATING_UNIT as c13,
    T5633.project_number as c14,
    T5637.task_number as c15
    from
    (SELECT project_id proj_id
    ,segment1 project_number
    ,org_id
    FROM pa.pa_projects_all
    WHERE org_id IN (825, 865, 962, 2161)) T5633,
    (SELECT project_id proj_id
    ,task_id
    ,task_number
    ,task_name
    FROM pa.pa_tasks) T5637,
    (SELECT xxbg_pl_proj_analysis_cost_v.AFE_NUMBER,
    xxbg_pl_proj_analysis_cost_v.BUDGET_OWNER,
    xxbg_pl_proj_analysis_cost_v.COMMENTS,
    xxbg_pl_proj_analysis_cost_v.COMMODITY,
    xxbg_pl_proj_analysis_cost_v.COST_PERIOD,
    xxbg_pl_proj_analysis_cost_v.COST_SOURCE,
    xxbg_pl_proj_analysis_cost_v.COST_TYPE,
    xxbg_pl_proj_analysis_cost_v.FACILITY,
    xxbg_pl_proj_analysis_cost_v.HISTORICAL,
    xxbg_pl_proj_analysis_cost_v.PO_NUMBER_COST_CONTROL,
    xxbg_pl_proj_analysis_cost_v.PREVIOUS_PROJECT,
    xxbg_pl_proj_analysis_cost_v.PREV_AFE_NUMBER,
    xxbg_pl_proj_analysis_cost_v.PREV_COST_CONTROL_ACC_CODE,
    xxbg_pl_proj_analysis_cost_v.PREV_COST_TYPE,
    xxbg_pl_proj_analysis_cost_v.PROJECT_NUMBER,
    xxbg_pl_proj_analysis_cost_v.SUPPLIER_NAME,
    xxbg_pl_proj_analysis_cost_v.TASK_DESCRIPTION,
    xxbg_pl_proj_analysis_cost_v.TASK_NUMBER,
    xxbg_pl_proj_analysis_cost_v.TRANSACTION_NUMBER,
    xxbg_pl_proj_analysis_cost_v.WORK_PACKAGE,
    xxbg_pl_proj_analysis_cost_v.WP_OWNER,
    xxbg_pl_proj_analysis_cost_v.OPERATING_UNIT,
    xxbg_pl_proj_analysis_cost_v.DATA_SEL,
    pa_periods_all.PERIOD_NAME,
    xxbg_pl_proj_analysis_cost_v.ORG_ID,
    xxbg_pl_proj_analysis_cost_v.COST_AMT_PROJ_RATE COST_AMT_PROJ_RATE,
    xxbg_pl_proj_analysis_cost_v.COST_AMOUNT COST_AMOUNT,
    xxbg_pl_proj_analysis_cost_v.project_id,
    xxbg_pl_proj_analysis_cost_v.task_id
    FROM (select xpac.*,
    decode(xpac.historical, 'Y', 'Historical', 'N', 'Current') data_sel
    from apps.xxbg_pl_proj_analysis_cost_v xpac
    union
    select xpac.*, 'All' data_sel
    from apps.xxbg_pl_proj_analysis_cost_v xpac) xxbg_pl_proj_analysis_cost_v,
    (select period_name, org_id from apps.pa_periods_all) pa_periods_all
    WHERE ((xxbg_pl_proj_analysis_cost_v.ORG_ID = pa_periods_all.ORG_ID))
    AND (xxbg_pl_proj_analysis_cost_v.ORG_ID IN (825,865,962,2161))
    AND (APPS.XXBG_PL_PA_COMMITMENT_PKG.GET_LAST_DAY(xxbg_pl_proj_analysis_cost_v.COST_PERIOD) <=
    APPS.XXBG_PL_PA_COMMITMENT_PKG.GET_LAST_DAY(pa_periods_all.PERIOD_NAME))) T6051
    where ( T5633.proj_id = T5637.proj_id and T5633.project_number = 'SUDPALAPAS11' and T5637.proj_id = T6051.PROJECT_ID and T5637.task_id = T6051.TASK_ID and T5637.task_number = '2100.2000.01.BC0100' and T6051.DATA_SEL = 'All' and T6051.OPERATING_UNIT = 'Compañía Minera Nevada SpA' and T6051.PERIOD_NAME = 'JUL-12' )
    group by T5633.project_number, T5637.task_number, T6051.AFE_NUMBER, T6051.BUDGET_OWNER, T6051.COMMENTS, T6051.COMMODITY, T6051.COST_PERIOD, T6051.COST_SOURCE, T6051.COST_TYPE, T6051.DATA_SEL, T6051.FACILITY, T6051.HISTORICAL, T6051.OPERATING_UNIT)
    select D1.c1 as c1, D1.c2 as c2, D1.c3 as c3, D1.c4 as c4, D1.c5 as c5, D1.c6 as c6, D1.c7 as c7, D1.c8 as c8, D1.c9 as c9, D1.c10 as c10, D1.c11 as c11, D1.c12 as c12, D1.c13 as c13, D1.c14 as c14, D1.c15 as c15, D1.c16 as c16 from ( select distinct 0 as c1,
    D1.c3 as c2,
    D1.c4 as c3,
    D1.c5 as c4,
    D1.c6 as c5,
    D1.c7 as c6,
    D1.c8 as c7,
    D1.c9 as c8,
    D1.c10 as c9,
    D1.c11 as c10,
    D1.c12 as c11,
    D1.c13 as c12,
    D1.c14 as c13,
    D1.c15 as c14,
    D1.c2 as c15,
    D1.c1 as c16
    from
    SAWITH0 D1
    order by c13, c14, c2, c3, c4, c5, c6, c7, c8, c9, c10, c11, c12 ) D1 where rownum <= 65001

    Hi Titas,
    with such problems typically, at least for me, the cause turns out to be something simple and embarrassing like:
    - I am connected to another database,
    - The database is right but I have made some manual adjustments without committing them,
    - I have got the wrong query from the query log,
    - I have got the right query but my request is based on multiple queries.
    Do other OBIEE reports work fine?
    Have you tried removing columns one by one to see whether it makes a difference?
    -JR

  • 3D performance problems after upgrading memory

    I recently purchased an additional 2GB of memory to try and extend the life of my aging computer.  I installed the memory yesterday and Windows seems to recognize it (reporting now 3.3GB) but when I dropped into WoW (pretty much the only game I have) the 3D performance was down from the usual 60FPS @ 1600x1080 to a bleak 20 (at best) and the CPU utilization went to about 80% on both cores (with ~20% kernel usage).  Basically WoW was being software 3D rendered!!!
    I went through the usual reinstall drivers, reboot, etc... and couldn't find a fix.  I powered down, pulled out 2 of the memory sticks, booted up, and dropped into WoW - it ran at the full 60FPS and CPU utilization was very low (i.e. back to GPU Hardware 3D rendering).  I powered down again, swapped the 2 sticks for the other 2 sticks, booted up, and dropped into WoW - again it ran 100% fine.  So I powered down, put all four sticks in, booted back up, and when I dropped into WoW it was running in the software 3D rendering mode (20FPS at best and High CPU/Kernel usage).
    I've tried the /PAE option in boot.ini - no joy.  I've tried /MAXMEM = to 3300, 3072, 3000, and even 2048 - no joy in any of those cases.  Has anyone seen anything like this before?  Or have suggestions to fix (other than going to Win7-64)?
    All info in signature is up to date.
    Thanks in advance for any help!

    Quote
    Well his last post was a little over 6 hours ago so he was up pretty late.
    Looks like nothing one does in here goes completely unnoticed.   
    Anyway, I am done sleeping now.
    Quote
    his 2 Pfennig's worth.  I know, I know it's Euro's now.
    Yeah, and what used to be "Pfennige" is now also called "Cents" and here are mine:
    Quote
    I've tried the /PAE option in boot.ini - no joy.  I've tried /MAXMEM = to 3300, 3072, 3000, and even 2048 - no joy in any of those cases.  Has anyone seen anything like this before?  Or have suggestions to fix (other than going to Win7-64)?
    PAE or Physical Address Extension will not do anything, as Microsoft has castrated this feature to such an extent that it has nothing to do with memory addressing anymore when it comes to Windows XP:
    http://en.wikipedia.org/wiki/Physical_Address_Extension#Microsoft_Windows
    Quote
    Windows XP Service Pack 2 and later, by default, on processors with the no-execute (NX) or execute-disable (XD) feature, runs in PAE mode in order to allow NX. The NX (or XD) bit resides in bit 63 of the page table entry and, without PAE, page table entries only have 32 bits; therefore PAE mode is required if the NX feature is to be exploited. However, desktop versions of Windows (Windows XP, Windows Vista) limit physical address space to 4 GiB for driver compatibility reasons.
    The feature is already automatically enabled.  But since its original function (Address Extension) no longer exists when it comes to the desktop versions of Windows XP, it won't really do anything you would ever notice.
    About the /MAXMEM Switch:  In Windows 32bit operating systems, every process is limited to 2GB of memory.  The point of the switch is to allow certain applications (or their run-time process) to occupy a higher amount of system memory than 2GB.  However, the culprit here is that only those applications that have been programmed (or compiled) accordingly are able to utilize this ability.  A special flag (large memory aware) has to be implemented.  Otherwise, these applications will be restricted to 2GB even though the /MAXMEM Switch has been set to extend the 2GB limit to 3GB.  Most 32bit applications come without the "large memory aware" flag, and that is why, usually, setting the switch won't change anything.
    In any case, it is unlikely that /PAE (even if it were not castrated) and /MAXMEM would have an impact on your actual issue, because I doubt that it has much to do with either memory addressing or the memory limit of an individual Windows process.
    Quote
    the 3D performance was down from the usual 60FPS @ 1600x1080 to a bleak 20 (at best) and the CPU utilization went to about 80% on both cores (with ~20% kernel usages).
    There are a couple of hardware based explanations to consider here.  Let's start with the most obvious one:
    1. 975X Memory Controller
    The main reason that the system chooses to automatically set the Memory Speed to DDR2-667 even though DDR2-800 modules are installed, is that by design the memory controller of the Intel 975X Chipset does not natively support DDR2-800 modules, but
    >>Intel® 975X Express Chipset Datasheet - For the Intel® 82975X Memory Controller Hub (MCH)<< [Page 20]
    This means, that from the point of view of the memory controller, operating the memory @DDR2-800 actually means overclocking it (with all potential side effects).
    Basically, if your initial problem disappears as soon as you reduce the memory speed to DDR2-667, the design limitation of the memory controller may explain your findings.
    2. Different memory modules
    If I read your signature correctly, you are actually mixing two different kits/models of RAM (CM2X1024-6400C4DHX and CM2X1024-6400C4).  This can work of course, but in practice it does not necessarily do so under all circumstances.
    This list (-> http://ramlist.i4memory.com/ddr2/) indicates that there are at least 14 different module types/revisions of Corsair DDR2-800 / CL4 modules that utilize a wide range of different memory chips (Elpida, ProMos, Micron, Infineon, Powerchip, Qimonda, Samsung etc.).  Even though the superficial specifications for these chips appear to be pretty similar (DDR2-800 / CL5 / CL4), this does not necessarily mean that the modules will respond to the same operating conditions in the same way. There may be small differences in sub-timings/sub-latencies and/or the general responsiveness of the ICs, which may affect the operating behaviour of the memory controller (which by the way also includes the PCI-Express interface which your video card is hooked up to).
    And again:  If running the system @DDR2-667 solves your issue, the possible explanation is that higher clock speeds may amplify (or trigger) potential performance problems that could have to do with the use of non-identical memory modules.
    Furthermore: It is also possible that the memory controller's design limitations and the potential compatibility problems that may be attributed to mixing different modules types may reinforce each other in terms of reduced system performance.
    3. The BIOS may have an impact as well
    There has been a known issue with the use of certain video cards in conjunction with 4GB of system memory on this mainboard:
    https://forum-en.msi.com/index.php?topic=107301.0
    https://forum-en.msi.com/index.php?topic=105955.0
    https://forum-en.msi.com/index.php?topic=99818.msg798951#msg798951
    What may have come out as graphics/display corruption in earlier BIOS Releases may come out as reduced system performance when using the latest BIOS Release.  Of course, this is hard to prove, but I thought I'd mention it anyway.  May I ask what amount of video memory your card has onboard?
    Fortunately, there is a BIOS version that you could consider trying in this matter.  It is not only the last BIOS Release that could be used in order to avoid the corruption issue, but it is (in my opinion) the best BIOS Version that was ever released for the 975X Platinum PUE Mainboard:  W7246IMS.716 [v7.1b6].  I have been using this mainboard for almost two years and have tested almost every BIOS Release that ever came out, and I always went back to v7.1b6 as "ground zero".
    It will properly support your E6600 (so you don't have to worry about that) and as far as I remember, there are no known compatibility issues with other components.  So maybe, you want to give this a shot.
    The bottom line is that in a worst case scenario, the problem you describe could be caused by all of the above things at the same time.  You cannot really do anything about the 975X Chipset Specifications and the only way to rule out explanation #2 is to test modules that are actually identical (same model number, revision and memory chips).  A test of the 7.1b6 BIOS Release is something you should consider.  It may be the only way to test the BIOS Hypothesis.
    This post turned out to be longer than I intended, but then again, I am well-rested after a good sleep and the wake-up coffee is kicking in pretty good.

  • Performance problem

    Hi, I'm having a performance problem I cannot solve, so I'm hoping anyone here has some advice.
    Our system consists of a central database server, and many applications (on various servers) that connect to this database.
    Most of these applications are older, legacy applications.
    They use a special mechanism for concurrency that was designed, also a long time ago.
    This mechanism requires the applications to lock a specific table with a TABLOCKX before making any changes to the database (in any table).
    This is of course quite nasty as it essentially turns the database into single user mode, but we are planning to phase this mechanism out.
    However, there is too much legacy code to simply migrate everything right away, so for the time being we're stuck with it.
    Secondly, this central database may not become too big because the legacy applications cannot cope with large amounts of data.
    So, a 'cleaning' mechanism was implemented to move older data from the central database to a 'history' database.
    This cleaning mechanism was created in newer technology; C# .NET.
    It is actually a CLR stored procedure that is called from a SQL job that runs every night on the history server.
    But, even though it is new technology, the CLR proc *must* use the same lock as the legacy applications because they all depend on the correct use of this lock.
    The cleaning functionality is not a primary process, so it does not have high priority.
    This means that any other application should be able to interrupt the cleaning and get handled by SQL first.
    Therefore, we designed the cleaning so that it processes as little data as possible per transaction, so that any other process can get the lock right after each cleaning transaction commits.
    On the other hand, this means that we do have a LOT of small transactions, and because each transaction is distributed, they are expensive and so it takes a long time for the cleaning to complete.
    Now, the problem: when we start the cleaning process, we notice that the CPU shoots up to 100%, and other applications are slowed down excessively.
    It even caused one of our customer's factories to shut down!
    So, I'm wondering why the legacy applications cannot interrupt the cleaning.
    Every cleaning transaction takes about 0.6 seconds, which may seem long for a single transaction, but this also means any other application should be able to interrupt within 0,6 seconds.
    This is obviously not the case; it seems the cleaning session 'steals' all of the CPU, or for some reason other SQL sessions are not handled anymore, or at least with serious delays.
    I was thinking about one of the following approaches to solve this problem:
    1) Add waits within the CLR procedure, after each transaction. This should lower the CPU, but I have no idea whether this will allow the other applications to interrupt sooner. (And I really don't want the factory to shut down twice!)
    2) Try to look within SQL Server (using DMV's or something) and see whether I can determine why the cleaning session gets more CPU. But I don't know whether this information is available, and I have no experience with DMV's.
    So, to round up:
    - The total cleaning process may take a long time, as long as it can be interrupted ASAP.
    - The cleaning statements themselves are already as simple as possible so I don't think more query tuning is possible.
    - I don't think it's possible to tell SQL Server that specific SQL statements have low priority.
    Can anyone offer some advice on how to handle this?
    Regards,
    Stefan

    Hi Stefan,
    Have you solved this issue after following up on Bob's suggestion? I'm writing to follow up with you on this post; please let us know how things go.
    Elvis Long
    TechNet Community Support
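    A minimal sketch of option 1 expressed in T-SQL (dbo.LockTable, dbo.CentralTable, the batch size and the retention cut-off are placeholders; whether this actually lets the legacy sessions in depends on what is consuming the CPU, so test it outside production first): lower the cleaning session's deadlock priority and yield briefly after each committed batch so waiting TABLOCKX requests can be granted.
    -- Placeholders: dbo.LockTable is the table the legacy apps lock with TABLOCKX,
    -- dbo.CentralTable holds the rows being moved to the history database.
    DECLARE @PurgeBefore datetime = DATEADD(year, -2, GETDATE());
    SET DEADLOCK_PRIORITY LOW;
    WHILE EXISTS (SELECT 1 FROM dbo.CentralTable WHERE CreatedDate < @PurgeBefore)
    BEGIN
        BEGIN TRANSACTION;
            -- take the application-wide lock the same way the legacy apps do
            SELECT TOP (1) 1 FROM dbo.LockTable WITH (TABLOCKX, HOLDLOCK);
            -- clean one small batch only
            DELETE TOP (500) FROM dbo.CentralTable WHERE CreatedDate < @PurgeBefore;
        COMMIT TRANSACTION;
        -- pause so other sessions can acquire the lock and the CPU
        WAITFOR DELAY '00:00:00.200';
    END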

  • Performance problem with selecting records from BSEG and KONV

    Hi,
    I am having a performance problem while selecting records from the BSEG and KONV tables. As these two tables contain a large amount of data, the selects are taking a lot of time. Can anyone help me improve the performance? Thanks in advance.
    Regards,
    Prashant

    Hi,
    Some steps used to improve performance:
    1. Avoid using SELECT...ENDSELECT... construct and use SELECT ... INTO TABLE.
    2. Use WHERE clause in your SELECT statement to restrict the volume of data retrieved.
    3. Design your Query to Use as much index fields as possible from left to right in your WHERE statement
    4. Use FOR ALL ENTRIES in your SELECT statement to retrieve the matching records at one shot.
    5. Avoid using nested SELECT statements and SELECTs within LOOPs.
    6. Avoid using INTO CORRESPONDING FIELDS OF TABLE. Instead use INTO TABLE.
    7. Avoid using SELECT * and Select only the required fields from the table.
    8. Avoid nested loops when working with large internal tables.
    9. Use ASSIGNING (field symbols) instead of INTO in LOOPs for table types with large work areas.
    10. When in doubt call transaction SE30 and use the examples and check your code
    11. Whenever using READ TABLE use BINARY SEARCH addition to speed up the search. Be sure to sort the internal table before binary search. This is a general thumb rule but typically if you are sure that the data in internal table is less than 200 entries you need not do SORT and use BINARY SEARCH since this is an overhead in performance.
    12. Use "CHECK" instead of IF/ENDIF whenever possible.
    13. Use "CASE" instead of IF/ENDIF whenever possible.
    14. Use "MOVE" with individual variable/field moves instead of "MOVE-
    CORRESPONDING" creates more coding but is more effcient.
