Performance problem because of ignored index

Hi,
We have a performance problem with Kodo ignoring indexes in Oracle:
The base class of all our persistent classes (LogasPoImpl) has a subclass
CODEZOLLMASSNAHMENIMPL.
We use vertical mapping for all subclasses and have 400,000 instances of
CODEZOLLMASSNAHMENIMPL.
We defined an additional index on an attribute of CODEZOLLMASSNAHMENIMPL.
A query with a filter like "myIndexedAttribute = 'DE'" takes about 15
seconds on Oracle 8.1.7.
Kodo logs something like the following:
[14903 ms] executing prepstmnt 6156689 SELECT (...)
FROM CODEZOLLMASSNAHMENIMPL t0, LOGASPOIMPL t1
WHERE (t0.myIndexedAttribute = ?)
AND t1.JDOCLASS = ?
AND t0.JDOID = t1.JDOID
[params=(String) DE, (String)
de.logas.zoll.eztneu.CodeZollMassnahmenImpl] [reused=0]
When I execute the same statement from a SQL prompt, it takes just as long,
but when I swap the table names in the FROM clause
(to "FROM LOGASPOIMPL t1, CODEZOLLMASSNAHMENIMPL t0") the result comes back
immediately.
I've had a look at the query plans Oracle creates for the two statements
and found that our index on myIndexedAttribute is not used
by the first statement, but it is by the second.
How can I make Kodo use the faster statement?
I've tried to use the "jdbc-indexed" tag, but without success so far.
Thanks,
Wolfgang

Thank you very much, Stefan & Alex.
After computing statistics the index is used and the performance is fine
now.
- Wolfgang
Alex Roytman wrote:
ANALYZE TABLE MY_TABLE COMPUTE STATISTICS;
"Stefan" <[email protected]> wrote in message
news:btlqsj$f18$[email protected]..
> When I execute the same statement from a SQL prompt, it takes just as long,
> but when I swap the table names in the FROM clause
> (to "FROM LOGASPOIMPL t1, CODEZOLLMASSNAHMENIMPL t0") the result comes back
> immediately.
> I've had a look at the query plans Oracle creates for the two statements
> and found that our index on myIndexedAttribute is not used
> by the first statement, but it is by the second.
> How can I make Kodo use the faster statement?
> I've tried to use the "jdbc-indexed" tag, but without success so far.
I know that in DB2 there is a function called "Run Statistics" which you
can (and should) run on all tables involved in a query (at least once a
month when there are heavy changes in the tables).
Based on the information gathered by these statistics, DB2 can optimize your
queries and execution paths.
Since I was once involved in query performance optimization on DB2, I can
say you can get improvements of 80% on big tables depending on whether
statistics have been run or not (since the execution plans created by the
optimizer differ heavily).
Since I'm now working with Oracle as well, I can at least say that Oracle
has a feature like statistics too (go into the Enterprise Manager Console
and click on a table; you will find a row "statistics last run").
I don't know how to trigger these statistics, nor whether they would
influence the query execution path on Oracle (and thus "swap" the table names
by itself), since I didn't have time to do further research on that matter.
But it's worth a try to find out, and maybe it helps with your problem?
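A minimal sketch of the statistics step discussed above (table names taken from the post; on later Oracle releases DBMS_STATS is preferred over ANALYZE, and the schema here simply defaults to the current user):
ANALYZE TABLE CODEZOLLMASSNAHMENIMPL COMPUTE STATISTICS;
ANALYZE TABLE LOGASPOIMPL COMPUTE STATISTICS;
-- Preferred interface on later releases; CASCADE also gathers index statistics.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER,
                                tabname => 'CODEZOLLMASSNAHMENIMPL',
                                cascade => TRUE);
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER,
                                tabname => 'LOGASPOIMPL',
                                cascade => TRUE);
END;
/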

Similar Messages

  • Performance problem on function-based index

    Hi guys,
    I am having performance problems with the addition of new function-based indexes.
    alter session set nls_comp='ANSI';
    alter session set nls_sort='BINARY_CI';
    * have to run this because the of case-insensitivity requirements
    I have a view. for ex:
    create or replace view view1
    as
    select * from emp1,user
    where emp1.empno=user.empno
    union
    select * from emp2,user
    where emp2.empno=user.empno
    union
    select * from emp3,user
    where emp3.empno=user.empno and so on
    When I run this it works with a full table scan. Then when I created a function-based index:
    create index user_ix on
    user(nlssort(empno,'NLS_SORT=BINARY_CI'));
    analyze index user_ix compute statistics;
    analyze table user compute statistics;
    the view hangs, but when I run the individual select statements it works.
    Do you guys have any idea what's going on? Any advice is greatly appreciated.
    Thanks.

    LC is absolutely right. Brain cramp on my part.
    On the other hand, I can't seem to coerce Oracle to apply a to_binary_double conversion as part of an implicit conversion.
    var bin_dbl binary_double;
    select to_binary_double(14) into :bin_dbl from dual;
    SCOTT @ nx102 JCAVE9420> select * from emp where empno = :bin_dbl;
    no rows selected
    Elapsed: 00:00:00.14
    Execution Plan
    Plan hash value: 2949544139
    | Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |        |     1 |    39 |     1   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| EMP    |     1 |    39 |     1   (0)| 00:00:01 |
    |*  2 |   INDEX UNIQUE SCAN         | PK_EMP |     1 |       |     0   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("EMPNO"=TO_NUMBER(:BIN_DBL))
    I'd expect that Oracle would try to convert the binary double to a number, not the other way around.
    Justin
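    To illustrate the mechanism involved in the original question, a minimal sketch against a generic table T with a VARCHAR2 column COL (these names are placeholders, not from the post): with NLS_COMP=ANSI and NLS_SORT=BINARY_CI, Oracle rewrites character comparisons as NLSSORT() calls, so only a function-based index whose expression matches that rewrite can serve them.
    alter session set nls_comp = 'ANSI';
    alter session set nls_sort = 'BINARY_CI';
    -- A plain index on COL is ignored for these comparisons; the index
    -- expression has to match the implicit NLSSORT rewrite:
    create index t_col_ci_ix on t (nlssort(col, 'NLS_SORT=BINARY_CI'));
    -- Under the session settings above this predicate is evaluated as
    -- NLSSORT("COL",'nls_sort=''BINARY_CI''') = NLSSORT(:b, ...) and can
    -- therefore use T_COL_CI_IX:
    select * from t where col = :b;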

  • Critical performance problem upon bulk load of groups

    All (including product development),
    I think there are missing indexes in wwsec_flat$ and wwsec_sys_priv$. Anyway, I'd like assistance on fixing the critical performance problems I see, properly. Read on...
    During and after bulk load of a few (about 500) users and groups from an external database, it becomes evident that there's a performance problem somewhere. Many of the calls to wwsec_api.addGroupToList took several minutes to finish. Afterwards the machine went 100% CPU just from logging in with the portal30 user (which happens to be group owner for all the groups).
    Running SQL trace points in the direction of the following SQL statement:
    SELECT ID,PARENT_ID,NAME,TITLE_ID,TITLEIMAGE_ID,ROLLOVERIMAGE_ID,
    DESCRIPTION_ID,LAYOUT_ID,STYLE_ID,PAGE_TYPE,CREATED_BY,CREATED_ON,
    LAST_MODIFIED_BY,LAST_MODIFIED_ON,PUBLISHED_ON,HAS_BANNER,HAS_FOOTER,
    EXPOSURE,SHOW_CHILDREN,IS_PUBLIC,INHERIT_PRIV,IS_READY,EXECUTE_MODE,
    CACHE_MODE,CACHE_EXPIRES,TEMPLATE FROM
    WWPOB_PAGE$ WHERE ID = :b1
    I checked the existing indexes, and see that the following ones are missing (I'm about to test with these, but have not yet done so):
    CREATE UNIQUE INDEX "PORTAL30"."WWSEC_FLAT_IX_GROUP_ID"
    ON "PORTAL30"."WWSEC_FLAT$"("GROUP_ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 160K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING
    CREATE UNIQUE INDEX "PORTAL30"."WWSEC_FLAT_IX_PERSON_ID"
    ON "PORTAL30"."WWSEC_FLAT$"("PERSON_ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 160K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING
    CREATE UNIQUE INDEX "PORTAL30"."WWSEC_SYS_PRIV_IX_PATCH1"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("OWNER", "GRANTEE_GROUP_ID",
    "GRANTEE_TYPE", "OWNER", "NAME", "OBJECT_TYPE_NAME")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 80K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING
    Note that when I deleted the newly inserted groups, the CPU consumption immediately went down from 100% to some 2-3%.
    This behaviour has been observed on a Sun Solaris system, but I think it's the same on NT (I have observed it during the bulk load on my NT laptop, but so far have not had the time to test further.).
    Also note: In the call to addGroupToList, I set owner to true for all groups.
    Also note: During loading of the groups, I logged a few errors, all of the same type ("PORTAL30.WWSEC_API", line 2075), as follows:
    Error: Problem calling addGroupToList for child group 'Marketing' (8030), list 'NO_OSL_Usenet' (8017). Reason: java.sql.SQLException: ORA-06510: PL/SQL: unhandled user-defined exception ORA-06512: at "PORTAL30.WWSEC_API", line 2075
    Please help. If you like, I may supply the tables and the Java program that I use. It's fully reproducible.
    Thanks,
    Erik Hagen (you may call me on +47 90631013)

    YES!
    I have now tested with insertion of the missing indexes. It seems the call to addGroupToList takes just as long time as before, but the result is much better: WITH THE INDEXES DEFINED, THERE IS NO LONGER A PERFORMANCE PROBLEM!! The index definitions that I used are listed below (I added these to the ones that are there in Portal 3.0.8, but I guess some of those could have been deleted).
    About the info at http://technet.oracle.com:89/ubb/Forum70/HTML/000894.html: Yes! Thanks! Very interesting, and I guess you found the cause for the error messages and maybe also for the performance problem during bulk load (I'll look into it as soon as possible and report what I find.).
    Note: I have made a pretty foolproof and automated installation script (or actually, it's part of my Java program), that will let anybody interested recreate the problem. Mail your interest to [email protected].
    ============================================
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_PERS_IX1"
    ON "PORTAL30"."WWSEC_PERSON$"("MANAGER")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX PORTAL30.LDAP_WWSEC_PERS_IX2
    ON PORTAL30.WWSEC_PERSON$("ORGANIZATION")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX PORTAL30.LDAP_WWSEC_PERS_PK
    ON PORTAL30.WWSEC_PERSON$("ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX PORTAL30.LDAP_WWSEC_PERS_UK
    ON PORTAL30.WWSEC_PERSON$("USER_NAME")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_UK
    ON PORTAL30.WWSEC_FLAT$("GROUP_ID", "PERSON_ID",
    "SPONSORING_MEMBER_ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_PK
    ON PORTAL30.WWSEC_FLAT$("ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX5
    ON PORTAL30.WWSEC_FLAT$("GROUP_ID", "PERSON_ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX4
    ON PORTAL30.WWSEC_FLAT$("SPONSORING_MEMBER_ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX3
    ON PORTAL30.WWSEC_FLAT$("GROUP_ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX2
    ON PORTAL30.WWSEC_FLAT$("PERSON_ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_IX1"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("GRANTEE_GROUP_ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_IX2"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("GRANTEE_USER_ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_IX3"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("OBJECT_TYPE_NAME", "NAME")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_PK"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_UK"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("OBJECT_TYPE_NAME",
    "NAME", "OWNER", "GRANTEE_TYPE", "GRANTEE_GROUP_ID",
    "GRANTEE_USER_ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 88K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    ==================================
    Thanks,
    Erik Hagen
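    If anyone wants to apply these indexes on their own Portal instance, a small sketch (run in the PORTAL30 schema) for checking which of them already exist before creating new ones:
    SELECT index_name, column_name, column_position
      FROM user_ind_columns
     WHERE table_name IN ('WWSEC_FLAT$', 'WWSEC_SYS_PRIV$', 'WWSEC_PERSON$')
     ORDER BY index_name, column_position;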

  • Performance Problems - Index and Statistics

    Dear Gurus,
    I am having problems losing indexes and statistics on cubes. It seems my indexes are too old, which in fact they are not (they were just created a month back); we check indexes daily and it returns RED on the Manage tab.
    please help

    Dear Mr Syed,
    The solution steps I mentioned in my previous reply already explain the so-called re-org of tables; however, to clarify more on that issue:
    Occasionally, the ORACLE <b>Cost-Based Optimizer</b> may calculate the estimated costs for a Full Table Scan lower than those for an Index Scan, although the actual runtime of an access via an index would be considerably lower than the runtime of the Full Table Scan. Some imperative points need to be considered in order to perk up performance and improve on quandary areas such as extensive running times for change runs and aggregate activation and fill-ups.
    Performance problems based on a wrong optimizer decision would show that something serious is missing at the database level, and we need to re-org the degenerated indexes in order to perk up the overall performance and avoid daily manual (RSRV + RSNAORA) activities on almost similar indexes.
    For <b>Re-organizing</b> degenerated indexes 3 options are available-
    <b>1) DROP INDEX ..., and CREATE INDEX …</b>
    <b>2)ALTER INDEX <index name> REBUILD (ONLINE PARALLEL x NOLOGGING)</b>
    <b>3) ALTER INDEX <index name> COALESCE [as of Oracle 8i (8.1) only]</b>
    Each option has its pros & cons; option <b>2</b> seems to have a lot of advantages.
    <b>Advantages- option 2</b>
    1)Fast storage in a different table space possible
    2)Creates a new index tree
    3)Gives the option to change storage parameters without deleting the index
    4)As of Oracle 8i (8.1), you can avoid a lock on the table by specifying the ONLINE option. In this case, Oracle waits until the resource has been released, and then starts the rebuild. The "resource busy" error no longer occurs.
    I would still let the database tech team be the judge and take the call on these.
    These modus operandi could be institutionalized for all fretful cubes and their indexes as well.
    However, I leave the thoughts with you.
    Hope it Helps
    Chetan
    @CP..
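    For completeness, a rough sketch of option 2 from the list above (the index name is a made-up placeholder; ONLINE, PARALLEL and NOLOGGING are optional and depend on your Oracle release and setup):
    ALTER INDEX "SAPR3"."/BIC/FCUBE1~010" REBUILD ONLINE PARALLEL 4 NOLOGGING;
    -- afterwards, reset the degree of parallelism and logging if required:
    ALTER INDEX "SAPR3"."/BIC/FCUBE1~010" NOPARALLEL LOGGING;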

  • White disk indicator light is out, but no seeming performance problems. Problem needing service, or ignore??

    Hard disc indicator light on right front of laptop lights only periodically at startup or restart.  There appears not to be any performance problems.  Do I have a problem in the making?  Danger in the future?  Need service?  Or did I just choose the wrong option somewhere?

    The white LED is not a disk activity or power indicator.  MacBooks don't have either of these.  The indicator will only light as mentioned by Dave Stowe... when your display is asleep it will come on and when your system is asleep, it will pulsate.  From what you have indicated, it seems to be working correctly.

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) passing a batch of records, or 160 seconds for one record (unacceptable!). How can it take 10 times longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not used XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable and interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
    SSN varchar2(20),
    XMLREC sys.xmltype)
    xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
    CODE varchar2(4),
    DESCRIPTION varchar2(500));
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
    ssn varchar2(20);
    xmlrec xmltype;
    i integer;
    BEGIN
    xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    </Root>');
    for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
    end loop;
    commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
    for $r in Root
    return
    <Root>
    <Id>123456789</Id>
    {for $e in $r/Element
        return
        <Element>
          <Subelement1>
            {$e/Subelement1/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text() }
    </Description>
    </Subelement1>
    <Subelement2>
    {$e/Subelement2/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
    </Description>
    </Subelement2>
    <Subelement3>
    {$e/Subelement3/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text() }
    </Description>
    </Subelement3>
    </Element>
    </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.
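    In case it helps other readers, here is a rough sketch of the "join the code tables in the SQL WHERE clause" idea discussed above, using XMLTABLE to shred the codes out of the one record and a plain relational join to CODES (table, column and element names are the ones defined earlier in this post). This gives the optimizer an ordinary join between two row sources instead of a nested XPath evaluation per code:
    SELECT e.code, c.description
      FROM records r,
           XMLTABLE('/Root/Element/Subelement1/Code'
                    PASSING r.xmlrec
                    COLUMNS code VARCHAR2(4) PATH '.') e,
           codes c
     WHERE r.ssn = '10000'
       AND c.code = e.code;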

  • Query performance problem

    I am having performance problems executing a query.
    System:
    Windows 2003 EE
    Oracle 9i version 9.2.0.6
    DETAIL table with 120Million rows partitioned in 19 partitions by SD_DATEKEY field
    We are trying to retrieve the info from an account (SD_KEY) ordered by date (SD_DATEKEY). This account has about 7000 rows and it takes about 1 minute to return the first 100 rows ordered by SD_DATEKEY. This time should be around 5 seconds to be acceptable.
    There is a partitioned index by SD_KEY and SD_DATEKEY.
    This is the query:
    SELECT * FROM DETAIL WHERE SD_KEY = 'xxxxxxxx' AND ROWNUM < 101 ORDER BY SD_DATEKEY
    The problem is that all 7000 rows are read prior to being ordered. I think that it is not necessary for the optimizer to access all the partitions to read all the rows, because only the first 100 are needed and the partitions are bounded by SD_DATEKEY.
    Any idea to accelerate this query? I know that including a WHERE clause for SD_DATEKEY will increase the performance but I need the first 100 rows and I don't know the date to limit the query.
    Does anybody know if this time is a normal response time for this query, or should it be improved?
    Thanks to all in advance for the future help.

    Thanks to all for the replies.
    - We have computed statistics and no changes in the response time.
    - We are discussing restricting the query to some partitions, but for the moment this is not the best solution because we don't know where the latest 100 rows are.
    - The query from Maurice had the same response time (more or less)
    select * from
    (SELECT * FROM DETAIL WHERE SD_KEY = 'xxxxxxxx' ORDER BY SD_DATEKEY)
    where ROWNUM < 101
    - We have a local index on SD_DATEKEY. Do we need another one on SD_KEY? Should it be created as BITMAP?
    I can't immediately test your suggestions because this is a problem with one of our customers. In our test system (which has only 10 million records) the indexes accelerate the query, but this is not the same in the customer system. I think the problem is the total number of records in the table.
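    For reference, a sketch of the two pieces usually needed for this kind of top-N query (object names are taken from the post, the index name detail_key_date_ix is made up; whether a LOCAL or GLOBAL index fits best depends on how the partitions can be pruned): a composite index leading on SD_KEY and SD_DATEKEY, and the ORDER BY applied in an inline view before ROWNUM, exactly as in Maurice's version:
    CREATE INDEX detail_key_date_ix ON detail (sd_key, sd_datekey) LOCAL;
    SELECT *
      FROM (SELECT * FROM detail
             WHERE sd_key = 'xxxxxxxx'
             ORDER BY sd_datekey)
     WHERE ROWNUM < 101;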

  • 3D performance problems after upgrading memory

    I recently purchased an additional 2GB of memory to try and extend the life of my aging computer.  I installed the memory yesterday and Windows seems to recognize it (reporting now 3.3GB) but when I dropped into WoW (pretty much the only game I have) the 3D performance was down from the usual 60FPS @ 1600x1080 to a bleak 20 (at best) and the CPU utilization went to about 80% on both cores (with ~20% kernel usages).  Basically WoW was being software 3D rendered!!!
    I went through the usual reinstall drivers, reboot, etc... and couldn't find a fix.  I powered down, pulled out 2 of the memory sticks, booted up, and dropped into WoW - it ran at the full 60FPS and CPU utilization was very low (i.e. back to GPU Hardware 3D rendering).  I powered down again, swapped the 2 sticks for the other 2 sticks, booted up, and dropped into WoW - again it ran 100% fine.  So I powered down, put all four sticks in, booted back up, and when I dropped into WoW it was running in the software 3D rendering mode (20FPS at best and High CPU/Kernel usage).
    I've tried the /PAE option in boot.ini - no joy.  I've tried /MAXMEM = to 3300, 3072, 3000, and even 2048 - no joy in any of those cases.  Has anyone seen anything like this before?  Or have suggestions to fix (other than going to Win7-64)?
    All info in signature is up to date.
    Thanks in advance for any help!

    Quote
    Well his last post was a little over 6 hours ago so he was up pretty late.
    Looks like nothing one does in here goes completely unnoticed.   
    Anyway, I am done sleeping now.
    Quote
    his 2 Pfennig's worth.  I know, I know it's Euro's now.
    Yeah, and what used to be "Pfennige" is now also called "Cents" and here are mine:
    Quote
    I've tried the /PAE option in boot.ini - no joy.  I've tried /MAXMEM = to 3300, 3072, 3000, and even 2048 - no joy in any of those cases.  Has anyone seen anything like this before?  Or have suggestions to fix (other than going to Win7-64)?
    PAE or Physical Address Extension will not do anything, as Microsoft has castrated this feature to such an extent that it has nothing to do with memory addressing anymore when it comes to Windows XP:
    http://en.wikipedia.org/wiki/Physical_Address_Extension#Microsoft_Windows
    Quote
    Windows XP Service Pack 2 and later, by default, on processors with the no-execute (NX) or execute-disable (XD) feature, runs in PAE mode in order to allow NX. The NX (or XD) bit resides in bit 63 of the page table entry and, without PAE, page table entries only have 32 bits; therefore PAE mode is required if the NX feature is to be exploited. However, desktop versions of Windows (Windows XP, Windows Vista) limit physical address space to 4 GiB for driver compatibility reasons.
    The feature is already automatically enabled.  But since its original function (address extension) no longer exists when it comes to the desktop versions of Windows XP, it won't really do anything you would ever notice.
    About the /MAXMEM Switch:  In Windows 32bit operating systems, every process is limited to 2GB of memory.  The point of the switch is to allow certain applications (or their run-time process) to occupy a higher amount of system memory than 2GB.  However, the culprit here is that only applications that have been programmed (or compiled) accordingly are able to utilize this ability.  A special flag (large memory aware) has to be implemented.  Otherwise, these applications will be restricted to 2GB even though the /MAXMEM Switch has been set to extend the 2GB limit to 3GB.  Most 32bit applications come without the "large memory aware" flag, and that is why setting the switch usually won't change anything.
    In any case, it is unlikely that /PAE (even if it were not castrated) and /MAXMEM would have an impact on your actual issue, because I doubt that it has much to do with either memory addressing or the memory limit of an individual Windows process.
    Quote
    the 3D performance was down from the usual 60FPS @ 1600x1080 to a bleak 20 (at best) and the CPU utilization went to about 80% on both cores (with ~20% kernel usages).
    There are a couple of hardware based explanations to consider here.  Let's start with the most obvious one:
    1. 975X Memory Controller
    The main reason that the system chooses to automatically set the Memory Speed to DDR2-667 even though DDR2-800 modules are installed, is that by design the memory controller of the Intel 975X Chipset does not natively support DDR2-800 modules, but
    >>Intel® 975X Express Chipset Datasheet - For the Intel® 82975X Memory Controller Hub (MCH)<< [Page 20]
    This means, that from the point of view of the memory controller, operating the memory @DDR2-800 actually means overclocking it (with all potential side effects).
    Basically, if your initial problem disappears as soon as you reduce the memory speed to DDR2-667, the design limitation of the memory controller may explain your findings.
    2. Different memory modules
    If I read your signature correctly, you are actually mixing two different kits/models of RAM (CM2X1024-6400C4DHX and CM2X1024-6400C4).  This can work of course, but in practice it does not necessarily do so under all circumstances.
    This list  (-> http://ramlist.i4memory.com/ddr2/) indicates that there are at least 14 different module types/revisions of Corsair DDR2-800 / CL4 modules that utilize a wide range of different memory chips (Elpida, ProMos, Micron, Infineon, Powerchip, Qimonda, Samsung etc.).  Even though the superficial specifications for these chips appear to be pretty similar (DDR2-800 / CL5 / CL4), this does not necessarily mean that the modules will respond to the same operating conditions in the same way. There may be small differences in sub-timings/sub-latencies and/or the general responsiveness of the ICs which may affect the operating behaviour of the memory controller (which by the way also includes the PCI-Express interface which your video card is hooked up to).
    And again:  If running the system @DDR2-667 solves your issue, the possible explanation is that higher clock speeds may amplify (or trigger) potential performance problems that could have to do with the use of non-identical memory modules.
    Furthermore: It is also possible that the memory controller's design limitations and the potential compatibility problems that may be attributed to mixing different modules types may reinforce each other in terms of reduced system performance.
    3. The BIOS may have an impact as well
    There has been known issue with the use of certain video cards in conjunction with 4GB of system memory on this mainboard:
    https://forum-en.msi.com/index.php?topic=107301.0
    https://forum-en.msi.com/index.php?topic=105955.0
    https://forum-en.msi.com/index.php?topic=99818.msg798951#msg798951
    What may have come out as graphics/display corruption in earlier BIOS Releases may come out as reduced system performance when using the latest BIOS Release.  Of course, this is hard to prove, but I thought I'd mention it anyway.  May I ask what amount of video memory your card has onboard?
    Fortunately, there is a BIOS version that you could consider to try in this matter.  It is not only the last BIOS Release that could be used in order to avoid the corruption issue, but it is (in my opinion) the best BIOS Version that was ever released for the 975X Platinum PUE Mainboard:  W7246IMS.716 [v7.1b6].  I have been using this mainboard for almost two years and have tested almost every BIOS Release that ever came out and I always went back to v7.1b6 as "ground zero".
    It will properly support your E6600 (so you don't have to worry about that) and as far as I remember, there are no known compatibility issues with other components.  So maybe, you want to give this a shot.
    The bottom line is that in a worst case scenario, the problem you describe could be caused by all of the above things at the same time.  You cannot really do anything about the 975X Chipset Specifications and the only way to rule out explanation #2 is to test modules that are actually identical (same model number, revision and memory chips).  A test of the 7.1b6 BIOS Release is something you should consider.  It may be the only way to test the BIOS Hypothesis.
    This post turned out to be longer than I intended, but then again, I am well-rested after a good sleep and the wake-up coffee is kicking in pretty good.

  • Performance problem in RFC to JDBC interface

    Hello everybody!
    I'm working with SAP PI 7.1.
    We defined some interfaces RFC - PI - JDBC (SQL Server) but we have some performance problems.
    If we have many rows to write to the table, the interface finishes in a timeout:
    Synchronous timeout exceeded.
    Returning to application. Exception: com.sap.engine.interfaces.messaging.api.exception.MessageExpiredException: Message 1d1f00b0-fecf-11de-8738-0015600446f0(OUTBOUND) expired.
    I read the PI tuning document and tried to apply the configuration with the Advanced Adapter Engine, but without result.
    Now we want to change the timeout in Visual Admin, and maybe that avoids the error, but I'm asking myself:
    Is it normal that writing 1,500 rows into a table needs more than 4 minutes????
    Is it possible to accelerate this process??? After go-live we will write messages with more than 50,000 rows.
    Can somebody help me?
    PS: please no link to tuning guide or to notes (to increase the timeout parameter).

    This could be because your database system (the JDBC server) is taking more time to insert. The problem is not on the PI side but on the receiving system side. Try inserting the same number of rows on the database server itself and check the time taken for execution. Adding indexes to your database table solves the issue a lot of the time.
    Here PI is not the culprit, but definitely the receiver system.
    VJ
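    A rough way to run the timing test VJ suggests directly on the SQL Server side (the table and column names here are placeholders, not from the actual interface):
    SET STATISTICS TIME ON;
    DECLARE @i INT;
    SET @i = 1;
    WHILE @i <= 1500
    BEGIN
        INSERT INTO target_table (key_col, payload_col) VALUES (@i, 'test row');
        SET @i = @i + 1;
    END;
    SET STATISTICS TIME OFF;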

  • Performance problem in 7.6.6.10

    We have a performance problem after doing the update from MaxDB 7.6.6.3 to 7.6.6.10.  
    The symptom is that queries with the "<>" operator in the WHERE clause on an indexed Integer/SmallInteger column slow down extremely, e.g. "WHERE FIELDNAME <> 1".
    On large tables the query is very, very slow.
    The dbanalyser shows "DIFFERENT STRATEGIES FOR OR-TERMS". 
    A way to reproduce the prob:
    Create a table with 2 columns
    CREATE TABLE "ADMIN"."TEST"
         "INTID"  Integer  NOT NULL,
         "FLAG"  Smallint,
         PRIMARY KEY("INTID")
    Index on Column FLAG
    CREATE INDEX "IDX_TEST" ON "ADMIN"."TEST"("FLAG" ASC)
    Insert about 1000 rows into TEST
    INSERT INTO TEST (SELECT ROWNO, 1 FROM LARGETABLE WHERE ROWNO <= 1000)
    (The easiest way for me to fill the table.)
    Call the dbanalyser
    EXPLAIN SELECT * FROM TEST WHERE FLAG <> 1
    OWNER  TABLENAME  COLUMN_OR_INDEX  STRATEGY                                PAGECOUNT
    ADMIN  TEST                        DIFFERENT STRATEGIES FOR OR-TERMS                8
                      IDX_TEST         RANGE CONDITION FOR INDEX              
                                       ONLY INDEX ACCESSED                    
                      FLAG                  (USED INDEX COLUMN)               
                      IDX_TEST         RANGE CONDITION FOR INDEX              
                                       ONLY INDEX ACCESSED                    
                      FLAG                  (USED INDEX COLUMN)               
                                            RESULT IS COPIED   , COSTVALUE IS           6
                                       QUERYREWRITE - APPLIED RULES:          
                                          DistinctPullUp                                1
    The statement is fast because of the small table, but I think the strategy is wrong.

    > We have a performance problem after doing the update from MaxDB 7.6.6.3 to 7.6.6.10.  
    > The symptom is that queries with the "<>" operator in the WHERE clause on an indexed Integer/SmallInteger column slow down extremely, e.g. "WHERE FIELDNAME <> 1".
    > On large tables the query is very, very slow.
    > Index on Column FLAG
    > -
    > CREATE INDEX "IDX_TEST" ON "ADMIN"."TEST"("FLAG" ASC)
    > The statement is fast because of the small table, but I think the strategy is wrong.
    Hmm.. what other strategy would you propose?
    The single table optimizer tries to estimate how many pages would need to be read to find the data required.
    It figures that for your statement there won't be many pages required, so an index access might be beneficial.
    And to use the index efficiently it transforms your inequality into a "greater than" OR "less than" condition.
    So you get "DIFFERENT STRATEGIES FOR OR-TERMS".
    If you look closely you'll find that both strategies actually are "RANGE CONDITION FOR INDEX" on the IDX_TEST index.
    The difference between them both is the range (the start/stop-key combination) used for the index reading.
    Anyhow - inequality conditions are always problematic for a DBMS.
    They are designed to quickly find data that is equal to or like some condition.
    regards,
    Lars
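    To make the explanation above concrete: the optimizer treats the inequality roughly like the following rewrite (a sketch only), which is why two RANGE CONDITION FOR INDEX strategies on IDX_TEST show up in the EXPLAIN output:
    SELECT * FROM TEST WHERE FLAG < 1
    UNION ALL
    SELECT * FROM TEST WHERE FLAG > 1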

  • Yosemite Mail performance problems

    I upgraded to OS 10.10 Yosemite this morning. When I opened the new Mail application, I noticed some significant performance problems.
    I organize all my emails by conversation (i.e., they display by thread), and I like to expand those conversations on occasion. However, whenever I choose the menu command to expand all conversations (View-->Expand All Conversations), there is a delay between 15-30 seconds for the conversations to expand.
    Switching between mailboxes incurs a 2-3 second delay before the mailbox I'm switching to appears in the viewer pane.
    Flagging and unflagging messages incurs a 10-15 second delay before the message is flagged or unflagged.
    I tried the usual things to fix problems with Mail. All of my mailboxes have been rebuilt, and I also manually re-indexed my mailboxes. I'm running a MacBook Air with a  1.7 GHz Intel i7 processor with 4 GB of RAM. All my mailboxes are run on a Microsoft Exchange Server (which I realize might be the whole or part of the problem, though I didn't have any issues with any other OSX versions before).
    So, my questions: is there a general performance problem with Mail in Yosemite? And if there is, will Apple release a fix?

    Please follow these directions to delete the Mail "sandbox" folder.
    Back up all data.
    Triple-click anywhere in the line below on this page to select it:
    ~/Library/Containers/com.apple.mail
    Right-click or control-click the highlighted line and select
              Services ▹ Reveal
    from the contextual menu.* A Finder window should open with a folder named "com.apple.mail" selected. If it does, move the selected folder—not just its contents—to the Desktop. Leave the Finder window open for now.
    Restart the computer. Launch Mail and test. If the problem is resolved, you may have to recreate some of your Mail settings. Any custom stationery that you created may be lost. Ask for instructions if you want to preserve that data. You can then delete the folder you moved and close the Finder window.
    Caution: If you change any of the contents of the sandbox, but leave the folder itself in place, Mail may crash or not launch at all. Deleting the whole sandbox will cause it to be rebuilt automatically.
    *If you don't see the contextual menu item, copy the selected text to the Clipboard by pressing the key combination  command-C. In the Finder, select
              Go ▹ Go to Folder...
    from the menu bar and paste into the box that opens by pressing command-V. You won't see what you pasted because a line break is included. Press return.

  • Performance Problem - MS SQL 2K and PreparedStatement

    Hi all
    I am using MS SQL 2k and used PreparedStatement to retrieve data. There is a strange and serious performance problem when the PreparedStatement contains "?" and uses the PreparedStatement.setX() functions to set its value. I have performed the test with the following code.
    for (int i = 0; i < 10; i++) {
        try {
            con = DBConnection.getInstance();
            statement = con.prepareStatement("SELECT * FROM cardno WHERE car_no = '" + cardNo + "'");
            // statement = con.prepareStatement("SELECT * FROM cardno WHERE car_no = ?");
            // statement.setString(1, cardNo);
            rs = statement.executeQuery();
            if (rs.next()) {
                // ... read the row
            }
        } catch (SQLException e) {
            e.printStackTrace();
        } finally {
            try {
                rs.close();
                statement.close();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }
    Iterations   Time (ms)
    1            961
    10           1061
    200          1803
    for (int i = 0; i < 10; i++) {
        try {
            con = DBConnection.getInstance();
            // statement = con.prepareStatement("SELECT * FROM cardno WHERE car_no = '" + cardNo + "'");
            statement = con.prepareStatement("SELECT * FROM cardno WHERE car_no = ?");
            statement.setString(1, cardNo);
            rs = statement.executeQuery();
            if (rs.next()) {
                // ... read the row
            }
        } catch (SQLException e) {
            e.printStackTrace();
        } finally {
            try {
                rs.close();
                statement.close();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }
    Iterations   Time (ms)
    1            1171
    10           2754
    100          18817
    200          36443
    The above test was performed with the DataDirect JDBC 3.0 driver. The version that uses ? and setString takes much longer to execute, although it is supposed to be faster because of precompilation of the statement.
    I have tried different drivers - the one provided by MS, DataDirect and the Sprinta JDBC driver - but all suffer from the same problem to a different extent. So I am wondering whether MS SQL doesn't support precompiled statements and whether, no matter what JDBC driver I use, I will still have this performance problem. If so, many O/R mappings cannot be used, because I believe most of them, if not all, use precompiled statements.
    Best regards
    Edmond

    Edmond,
    Most JDBC drivers for MS SQL (and I think this includes all the drivers you tested) use sp_executesql to execute PreparedStatements. This is a pretty good solution as the driver doesn't have to keep any information about the PreparedStatement locally, the server takes care of all the precompiling and caching. And if the statement isn't already precompiled, this is also taken care of transparently by SQL Server.
    The problem with this approach is that all names in the query must be fully qualified. This means that the driver has to parse the query you are submitting and make all names fully qualified (by prepending a db name and schema). This is why creating a PreparedStatement takes so much time using these drivers (and why it does so every time you create it, even though it's the same PreparedStatement).
    However, the speed advantage of PreparedStatements only becomes visible if you reuse the statement a lot of times.
    As to why the PreparedStatement with no placeholder is much faster, I think it is because of internal optimisations (maybe the statement is run as a plain statement (?) ).
    As a conclusion, if you can reuse the same PreparedStatement, then the performance hit is not so high. Just ignore it. However, if the PreparedStatement is created each time and only used a few times, then you might have a performance issue. In this case I would recommend you try out the jTDS driver ( http://jtds.sourceforge.net ), which uses a completely different approach: temporary stored procedures are created for PreparedStatements. This means that no parsing is done by the driver and PreparedStatement caching is possible (i.e. the next time you prepare the same statement it will take much less time, as the previously submitted procedure will be reused).
    Alin.
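    For illustration, this is roughly what such a driver sends to SQL Server for the parameterised version of the statement (the database and schema names here are made up; the real driver derives them automatically when it fully qualifies the query):
    EXEC sp_executesql
         N'SELECT * FROM mydb.dbo.cardno WHERE car_no = @P1',
         N'@P1 varchar(20)',
         @P1 = 'ABC123';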

  • Performance problems after Update from 7.6.00.37 to 7.6.03.15

    Hi,
    after the update from 7.6.00.37 to 7.6.03.15 in our live system we noticed a lot of performance problems. We had tested the new version on our test system before, but did not notice effects like this there.
    We have 2 identical systems (Opteron 64 bit, openSUSE 10.2) with log shipping between them. We updated the standby system first, switched from online to standby (with a copy of the cold log) and started the new server as the online system. After that we ran a complete backup (runtime: 1 hour) to start a new backup history and to activate autolog. Then
    With the update we changed USE_OPEN_DIRECT to YES, but the performance of the system was very slow afterwards. After the backup it remains at a high load average (> 10, previous system had about 2-4), with nearly 100% of CPU usage for the db kernel process.
    The next day we switched USE_OPEN_DIRECT back to NO. The system first ran better, but periodically rises up to a load average of 6 and slows down the performance of various applications (somebody says about 10 times slower). Here we also noticed high usage (now 200-300%) of the db kernel process.
    Our questions are:
    1. Has something basically changed from 7.6.00.37 to 7.6.03.15, so that our various applications (JDBC, ODBC and Perl/SQLDBC, partially on old Linux systems with drivers from 7.5.00.23) don't reach the same performance as before?
    2. Are there any other (new) parameters, which can help? Maybe reducing MAXCPU from 4 to 3 for reserving capacities for the system (there is only one maxdb instance running)?
    3. Is there a possibility to switch back to 7.6.00.37 (only for the worst case)?
    I have made some first steps with x_cons, but don't see any anomalies on the first look.
    Regards,
    Thomas

    Thomas Schulz wrote:
    > > > Next day we switched USE_OPEN_DIRECT back to NO. The system first runs better, but
    > >
    > > What is it about this parameter that lets you think it may be the cause for your problems?
    > After changing it back to NO, the system runs better (lower load average) than with YES (but much slower than with old version!)
    Hmm... that is really odd. When using USE_OPEN_DIRECT there is actually less work to do for the operating system.
    > > > Our questions are
    > > >
    > > > 1. Has something basically changed from 7.6.00.37 to 7.6.03.15, so that our various applications (JDBC, ODBC and Perl/SQLDBC partially on old linux systems with drivers from 7.5.00.23) don't reach same performance as before?
    > >
    > > Yes - of course. Changes are what Patches are all about!

    > Are there any known problems with updating from 7.6.00.37 to 7.6.03.15?
    Well of course there are bugs that have been found inbetween the release of both versions, but I am not aware of something like the performance killer.
    We will have to check this in detail here.
    > > > I have made some first steps with x_cons, but don't see any anomalies on the first look.
    > >
    > > Ok, looking into what the system does when it uses CPU is a first good step.
    > > But what would be "anomalies" to you?
    >
    > Good question! I don't really know.
    Well - then I guess 'looking at the system' won't bring you far...
    > > Do you use DBAnalyzer? If not -> activate it!
    > > Does it gives you any warnings?
    > > What about TIME_MEASUREMENT? Is it activated on the system? If not -> activate it!
    >
    > OK, that will be our next steps.
    Great - let's see the warnings you get.
    Let us also see the DB Parameters you set.
    > > What parameters have changed due  to the patch installations (check the parameter history file)?
    > >
    > > What queries take longer now? What is the execution plan of them?
    >
    > It seems to happen for all selects on tables with a lot of rows (>10.000, partially without indexes because automatically generated). With the old version we had no problems with missing indexes or the general performance. Unfortunately it is very difficult to extract some SQL statements out of the JBoss applications. But even simple queries (without any join) run slower when the load average rises over 4-5.
    Hmm... the question here is still, if the execution plans are good enough to meet your expectations.
    E.g. for tables that you access via the primary key it actually doesn't matter how many rows a table has (not for MaxDB at least).
    > > BTW: how exactly do the tests look like that you've done on the testsystem?
    > Usage over 6 weeks with our JDBC development environment (JBoss), backup and restore with various combinations of USE_OPEN_DIRECT and USE_OPEN_DIRECT_FOR_BACKUP.
    Sounds like the I/O is your most suspect aspect for overall system performance...
    > > Was the testsystem a 1:1 copy of the productive machine before the upgrade test?
    > No - smaller hardware (32 bit), only 20% of data of the live system, few db users and applications.
    >
    > > How did you test the system performance with multiple parallel users?
    > Only while permanent development with the 2-3 developers and some parallel tests of backup/restore. Unfortunately no tests with many users/applications.
    Ok - so this is next to no testing at all when it comes to performance.
    > An UPDATE STATISTICS over all db users seems to change nothing. At the moment the system remains markedly slow and we are searching for reasons and solutions. Another attempt will be the change of MAXCPU from 4 to 3.
    Why do you want to do that? Have you observed any threads that don't get a CPU because all 4 cores are used by the MaxDB kernel?
    regards,
    Lars

  • Performance problem while selecting (extracting the data)

    I have one intermediate table.
    I am inserting rows which are derived from a select statement.
    The select statement has a where clause which joins a view (created from 5 tables).
    The problem is that the select statement which is getting the data is taking too much time.
    I identified the problems like this:
    1) The view which is used in the select statement is not indexed --- is an index necessary on a view????
    2) The tables which are used to create the view are already properly indexed
    3) While extracting the data it is taking more time
    The below query extracts the data and inserts it into the intermediate table:
    SELECT 1414 report_time,
    2 dt_q,
    1 hirearchy_no_q,
    p.unique_security_c,
    p.source_code_c,
    p.customer_specific_security_c user_security_c,
    p.par_value par_value, exchange_code_c,
    (CASE WHEN p.ASK_PRICE_L IS NOT NULL THEN 1
    WHEN p.BID_PRICE_L IS NOT NULL THEN 1
    WHEN p.STRIKE_PRICE_L IS NOT NULL THEN 1
    WHEN p.VALUATION_PRICE_L IS NOT NULL THEN 1 ELSE 0 END) bill_status,
    p.CLASS_C AS CLASS,
    p.SUBCLASS_C AS SUBCLASS,
    p.AGENT_ADDRESS_LINE1_T AS AGENTADDRESSLINE1,
    p.AGENT_ADDRESS_LINE2_T AS AGENTADDRESSLINE2,
    p.AGENT_CODE1_T AS AGENTCODE1,
    p.AGENT_CODE2_T AS AGENTCODE2,
    p.AGENT_NAME_LINE1_T AS AGENTNAMELINE1,
    p.AGENT_NAME_LINE2_T AS AGENTNAMELINE2,
    p.ASK_PRICE_L AS ASKPRICE,
    p.ASK_PRICE_DATE_D AS ASKPRICEDATE,
    p.ASSET_CLASS_T AS ASSETCLASS
    FROM (SELECT
    DISTINCT x.*,m.customer_specific_security_c,m.par_value
    FROM
    HOLDING_M m JOIN ED_DVTKQS_V x ON
    m.unique_security_c = x.unique_security_c AND
    m.customer_c = 'CONF100005' AND
    m.portfolio_c = 24 AND
    m.status_c = 1
    WHERE exists
         (SELECT 1 FROM ED_DVTKQS_V y
              WHERE x.unique_security_c = y.unique_security_c
                   GROUP BY y.unique_security_c
                   HAVING MAX(y.trading_volume_l) = x.trading_volume_l)) p
    Can anyone please give me valuable suggestions on the performance?

    Thanks for the update.
    In the select query we used some functions like MAX:
    (SELECT 1 FROM ED_DVTKQS_V y
    WHERE x.unique_security_c = y.unique_security_c
    GROUP BY y.unique_security_c
    HAVING MAX(y.trading_volume_l) = x.trading_volume_l)) p
    Will these types of functions cause the performance problem???
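    Correlated subqueries with GROUP BY/HAVING MAX can indeed be expensive, because the view may be evaluated once per outer row. A hedged sketch of an alternative (table and column names as in the posted query; the DISTINCT from the original is omitted for brevity): compute the maximum trading volume per security with an analytic function so the view is read only once, then filter on it.
    SELECT *
      FROM (SELECT x.*,
                   m.customer_specific_security_c,
                   m.par_value,
                   MAX(x.trading_volume_l)
                     OVER (PARTITION BY x.unique_security_c) AS max_vol
              FROM holding_m m
              JOIN ed_dvtkqs_v x
                ON m.unique_security_c = x.unique_security_c
             WHERE m.customer_c  = 'CONF100005'
               AND m.portfolio_c = 24
               AND m.status_c    = 1)
     WHERE trading_volume_l = max_vol;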

  • ERP6.0/10g performance problem

    Dear development support,
    We have severe performance problems in several standard functions which access large tables such as GLPCA and MSEG.
    [PROBLEM DESCRIPTION]
    We are currently upgrading 4.6C system to ERP 6.0.
    Program RCOPCA02 takes more than 10 minutes in our ERP 6.0 system, while the same program finishes in a few seconds in the 4.6C system.
    After our investigation, we found out the following:
    1. In 4.6C system, SQL execution plan shows that it's using add-on index
    2. In ERP 6.0 system, SQL execution plan shows that it's using standard index (~1)
    3. GLPCA table is analyzed and statistics information is updated
    We want to solve this problem without adding any additional index because this table is very huge.
    Thank you very much for your support in advance.
    Regards,
    Fukuoji

    Hello again,
    I found an SAP note, "1165319 - Optimizer merge fix for Oracle 10.2.0.4", which was released recently.
    This might be asked to SAP support, but I am afraid this patch might solve the problem....
    I will update this message if the situation develops.
    Regards,
    Kazuya Imabayashi
