SQL Area Usage, Performance Problem

Hi,
I'm a software engineer, not a DBA, so I have plenty of doubts about our production environment, but I think I have found a problem in our production database.
At our development database, when I execute the same SQL statement more than once, I can see this behaviour (for example):
* First execution: 580 milliseconds.
* Second execution: 21 milliseconds.
As far as I know, the compiled SQL statement is stored in the SQL Area, and for that reason the second execution is faster than the first one. Is that assumption correct?
If it is, I have a problem with our production database, because it does not work as expected. I have run a lot of trials, and SQL statements do not reduce their execution time over consecutive executions.
Could you help me? I think the value of the shared_pool_size parameter is too low for our production server.
Thanks in advance.
Best Regards,
Joan

Just a comment about performance tuning and troubleshooting in general, Joan.
It is very dangerous to base your conclusions on observation only. Consider the following example:
SQL> set timing on
SQL> select count(*) from all_objects;

  COUNT(*)
----------
     10296

Elapsed: 00:00:18.38
SQL>
SQL> -- now using the magical warp speed hint
SQL> select /*+ WARP_SPEED */ count(*) from all_objects;

  COUNT(*)
----------
     10296

Elapsed: 00:00:00.32
SQL>
From 18 seconds to less than half a second. Based on pure observation of the results, it does look as if there is a WARP_SPEED hint in Oracle and that it increases performance dramatically. We could even infer that this is an undocumented feature added by an Oracle kernel developer who is also a Star Trek fan. ;-)
But if we turn on SQL tracing (as suggested), we will see that the first SELECT did a lot of physical I/O. The second SELECT did the same work, but without having to do the very expensive physical I/O, as the data blocks it needed to hit (again) were now in memory (inside Oracle's buffer cache).
It had nothing to do with an invalid and non-existent CBO hint called WARP_SPEED.
The critical bit is KNOWING exactly what you are measuring when using this type of approach. If you do not know that, you are in no position to draw a sound and valid conclusion.
Side note on shared pool size - one of the worst mistakes can be to increase it. On an instance that deals with bindless/non-sharable SQL, a bigger shared pool can do incredible damage to performance: the pool that Oracle must search to determine whether a soft parse is possible becomes huge, without the benefit of actually being able to soft parse, so Oracle is forced to hard parse anyway. And each hard parsed statement then adds further to the size of the pool.
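To see what is actually being measured, rather than guessing: a minimal SQL*Plus sketch (autotrace needs the PLUSTRACE role; the query is just an example). Run the statement twice and compare the statistics - on the second run "physical reads" should drop sharply while "consistent gets" stays roughly the same, which is the buffer cache at work:
SQL> set autotrace on statistics
SQL> select count(*) from all_objects;
SQL> select count(*) from all_objects;
For the full picture, use raw SQL trace via the 10046 event and format the result with tkprof:
SQL> alter session set tracefile_identifier = 'warp_test';
SQL> alter session set events '10046 trace name context forever, level 8';
SQL> select count(*) from all_objects;
SQL> alter session set events '10046 trace name context off';
The trace file lands in user_dump_dest; tkprof will show parse/execute/fetch counts, disk reads and buffer gets per statement.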

Similar Messages

  • MS SQL Server 2008 performance problem

    We use TopLink 10.1.3.5 to connect to MS SQL Server 2008.
    What we are seeing is that when a query is being run by TopLink a lot of cursors open up and remain open. Our database CPU usage goes up and it affects the whole application.
    Our DBA took a look at it and said the database shows FETCH_APICURSOR* being used for select statements.
    Is there a way to tell TopLink not to use cursors for queries?
    Thanks.

    Can you pinpoint a particular TopLink query tied to the "FETCH_APICURSOR*" call in the app and post how it is being created?
    My guess is that the application is specifying that the TopLink query object return a cursor or stream, and is either not closing it in all cases or keeping it open for a long period. Did you say the cursors were leaking, or is it just that a large number are open at a time, leading to performance problems?
    Streams and cursors are described in the TopLink docs here:
    http://docs.oracle.com/cd/E21764_01/web.1111/b32441/qryadv.htm#CJGJBHGJ
    or the 10g docs here:
    http://sqltech.cl/doc/oas10gR3/web.1013/b13593/qryadv010.htm
    If this is the case, you might want to use a different strategy such as pagination instead of cursors, described here:
    http://docs.oracle.com/cd/E17904_01/web.1111/b32441/optimiz.htm#CHDIBGFE
    Best Regards,
    Chris

  • What effect does "GATHER_SCHEMA_STATS" have on SQL AREA??

    Hi, all.
    The database is 2 node RAC database (10.2.0.2.0) on 32 bit windows 2003 EE SP1.
    Recently, the database has been suffering performance degradation due to library cache related wait events such as library cache pin and library cache lock. I could also see global cache related wait events (RAC).
    I found that the default job created at installation, "GATHER_STATS_JOB", is causing a lot of invalidations of parsed SQL. Depending on the free memory in the shared pool, the 2 node RAC database hangs, not even allowing a log switch.
    I think GATHER_STATS_JOB is very expensive in a RAC environment because it gathers statistics on all objects in the database.
    Therefore, I disabled GATHER_STATS_JOB, and I could see the improvement in terms of SQL area usage.
    As an alternative, I would like to gather only application schema statistics
    by using the following procedure.
    -->DBMS_STATS.GATHER_SCHEMA_STATS('NMSUSER',DBMS_STATS.AUTO_SAMPLE_SIZE);
    My question is the following.
    ● After gathering application schema statistics, are all application-related SQL statements invalidated and required to be re-parsed (hard parsed)?
    The database is based on All-Rows optimizer mode (CBO).
    Thanks and Regards.

    Hi,
    After gathering application schema statistics, are all application related SQL statements invalidated and required to be re-parsed (hard parsing)?
    Yes and no... it depends on the call of DBMS_STATS.GATHER_SCHEMA_STATS ... you can specify not to invalidate your parsed statements...
    http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_stats.htm#sthref8114
    Parameter:
    no_invalidate - Does not invalidate the dependent cursors if set to TRUE. The procedure invalidates the dependent cursors immediately if set to FALSE. Use DBMS_STATS.AUTO_INVALIDATE to have Oracle decide when to invalidate dependent cursors. This is the default. The default can be changed using the SET_PARAM procedure.
    Regards
    Stefan
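    For example, a call along these lines (a sketch only; the schema name comes from the post above, and AUTO_INVALIDATE lets Oracle spread the invalidations over time instead of forcing an immediate storm of hard parses):
    BEGIN
      DBMS_STATS.GATHER_SCHEMA_STATS(
        ownname          => 'NMSUSER',
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        no_invalidate    => DBMS_STATS.AUTO_INVALIDATE);
    END;
    /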

  • DB Performance problem

    Hi Friends,
    We are experiencing performance problem with our oracle applications/database.
    I run the OEM and I got the following report charts:
    http://farm3.static.flickr.com/2447/3613769336_1b142c9dd.jpg?v=0
    http://farm4.static.flickr.com/3411/3612950303_1f83a9f20.jpg?v=0
    Do these charts give any clues about the performance problem?
    What other charts in OEM can help solve, or give assistance with, a performance problem?
    Thanks a lot in advance

    ytterp2009 wrote:
    Hi Charles,
    This is the output of:
    SELECT
    SUBSTR(NAME,1,30) NAME,
    SUBSTR(VALUE,1,40) VALUE
    FROM
    V$PARAMETER
    ORDER BY
    UPPER(NAME);
    (snip)
    Are there parameters that need tuning?
    Thanks
    Thanks for posting the output of the SQL statement. The output answers several potential questions (note to other readers: shift the values in the SQL statement's output down by one row).
    Parameters which I found to be interesting:
    control_files                 C:\ORACLE\PRODUCT\10.2.0\ORADATA\BQDB1\C
    cpu_count                     2
    db_block_buffers              995,648 = 8,156,348,416 bytes = 7.6 GB
    db_block_size                 8192
    db_cache_advice               on
    db_file_multiblock_read_count 16
    hash_area_size                131,072
    log_buffer                    7,024,640
    open_cursors                  300
    pga_aggregate_target          2.68435E+12 = 2,684,350,000,000 = 2,500 GB
    processes                     950
    sessions                      1,200
    session_cached_cursors        20
    shared_pool_size              570,425,344
    sga_max_size                  8,749,318,144
    sga_target                    0
    sort_area_retained_size       0
    sort_area_size                65536
    use_indirect_data_buffers     TRUE
    workarea_size_policy          AUTO
    From the above, the server is running on Windows, and based on the value for use_indirect_data_buffers it is running a 32 bit version of Windows, using a windowing technique to access memory (database buffer cache only) beyond the 4GB upper limit for 32 bit applications. By default, 32 bit Windows limits each process to a maximum of 2GB of memory utilization. This 2GB limit may be raised to 3GB through a change in the Windows configuration, but a certain amount of the lower 4GB region (specifically in the upper 2GB of that region) must be used for the windowing technique to access the upper memory (the default might be 1GB of memory, but verify with Metalink).
    By default on Windows, each session connecting to the database requires 1MB of server memory for the initial connection (this may be decreased, see Metalink), and with SESSIONS set at 1,200, 1.2GB of the lower 2GB (or 3GB) memory region would be consumed just to let the sessions connect, before any processing is performed by the sessions.
    The shared pool is potentially consuming another 544MB (0.531GB) of the lower 2GB (or 3GB) memory region, and the log buffer is consuming another 6.7MB of memory.
    Just with the combination of the memory required per thread for each session, the memory for the shared pool, and the memory for the log buffer, the server is very close to the 2GB memory limit before the clients have performed any real work.
    Note that the workarea_size_policy is set to AUTO, so as long as that parameter is not adjusted at the session level, the sort_area_size and sort_area_retained_size have no impact. However, the 2,500 GB specification (very likely an error) for the pga_aggregate_target is definitely a problem as the memory must come from the lower 2GB (or 3GB) memory region.
    If I recall correctly, a couple years ago Dell performed testing with 32 bit servers using memory windowing to utilize memory above the 4GB limit. Their tests found that the server must have roughly 12GB of memory to match (or possibly exceed) the performance of a server with just 4GB of memory which was not using memory windowing. Enabling memory windowing and placing the database buffer cache into the memory above the 4GB limit has a negative performance impact - Dell found that once 12GB of memory was available in the server, performance recovered to the point that it was just as good as if the server had only 4GB of memory. You might reconsider whether or not to continue using the memory above the 4GB limit.
    db_file_multiblock_read_count is set to 16 - on Oracle 10.2.0.1 and above this parameter should be left unset, allowing Oracle to automatically configure the parameter (it will likely be set to achieve 1MB multi-block reads with a value of 128).
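    If the 2,500 GB pga_aggregate_target really was intended to be 2.5 GB, a hedged sketch of the correction (the values are illustrative, size them against your own memory budget; both commands write only to the spfile, so they take effect at the next instance restart):
    SQL> alter system set pga_aggregate_target = 2500M scope=spfile;
    SQL> alter system reset db_file_multiblock_read_count scope=spfile sid='*';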
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • Performance Problem Again

    Hi all,
    We are encountering a performance problem again.
    The batch process deletes 1M rows every night, which usually takes 30 minutes.
    But last night (12AM) it took more than 2 hours and then hung.
    Does it help to run gather_schema_stats regularly when there are constant DELETEs on the table?
    Please help me check our ASH, AWR and ADDM reports to resolve the issue.
    ADDM
    https://app.box.com/s/7o734e70aa2m2zg087hf
    ASH
    https://app.box.com/s/xadlxfk0r5y7jvtxfsz7
    AWR
    https://app.box.com/s/x8ordka2gcc6ibxatvld
    Thanks....
    zxy

    Hi ARM,
    ***What is the SGA_TARGET or MEMORY_TARGET that the database is running on?
    Our server has 8GB physical memory and 8GB swap.
    What should the ideal SGA_TARGET and MEMORY_TARGET be?
    Our current setting is:
    ========
    SQL> show parameter memory
    NAME                                 TYPE        VALUE
    hi_shared_memory_address             integer     0
    memory_max_target                    big integer 5936M
    memory_target                        big integer 5936M
    shared_memory_address                integer     0
    SQL> show parameter sga_
    NAME                                 TYPE        VALUE
    sga_max_size                         big integer 5936M
    sga_target                           big integer 0
    Thanks
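    A rough sketch only (the numbers are illustrative, not a recommendation): on an 8GB box that also has to feed the OS and client processes, one option is to cap the automatic memory target lower and let Automatic Memory Management split it between SGA and PGA:
    SQL> alter system set memory_max_target = 5G scope=spfile;
    SQL> alter system set memory_target = 4G scope=spfile;
    -- restart the instance; memory_max_target is not dynamic
    Whether the delete job benefits depends on where its time is actually going, which the AWR and ASH reports should show.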

  • 6321 performance problems

    Hello again,
    we are experiencing performance problems with the 6321 in our environment - 10 kHz system clock, i.e. the analog input channels should be read every 100 µs. This works fine with one card (differentially cabled), but if we plug in a second card the system just comes to a grinding halt.
    It appears that one AI channel register access takes about 4 µs, which would mean 64 µs for 16 channels. We have tried RLP, "pure" DMA and DMA using interrupts.
    Am I getting something wrong, or was the 6321 just not designed for this kind of application?
    Best regards,
    Philip


  • Performance problem with ojdbc14.jar

    Hi,
    We are having a performance problem with ojdbc14.jar when selecting and updating (batch updates) entries in a table. The queries take minutes to execute. The same Java code works fine with classes12.zip, with queries taking sub-second times to execute.
    We have an Oracle 9.2.0.5 database server and I downloaded the matching ojdbc14.jar from the Oracle site. I tried executing the Java code from Windows 2000, Sun Solaris and Opteron machines, with the same problem everywhere.
    Does anyone know a solution to this problem? I also tried the ojdbc14.jar meant for Oracle 10g; that did not help.
    Please help.
    Thanks
    Yuva

    Is my code doing something that works well with classes12.zip but does not work well with ojdbc14.jar? Any general suggestions to make the code better, especially for batch updates, are welcome.
    For selecting a row from the table, I am using index columns in the WHERE clause. In the code I use a PreparedStatement, setting all the required fields. Here is the code. We have a huge index with 14 fields!! All the parameters are for the WHERE clause.
    if (longCallPStmt == null) {
        longCallPStmt = conn.prepareStatement(longCallQuery);
    }
    log(Level.FINE, "CdrAggLoader: Loading tcdragg entry for "
        + GeneralUtility.formatDate(cdrAgg.time_hour, "MM/dd/yy HH"));
    longCallPStmt.clearParameters();
    // all 14 parameters feed the WHERE clause of the huge index
    longCallPStmt.setInt(1, cdrAgg.iintrunkgroupid);
    longCallPStmt.setInt(2, cdrAgg.iouttrunkgroupid);
    longCallPStmt.setInt(3, cdrAgg.iintrunkgroupnumber);
    longCallPStmt.setInt(4, cdrAgg.iouttrunkgroupnumber);
    longCallPStmt.setInt(5, cdrAgg.istateregionid);
    longCallPStmt.setTimestamp(6, cdrAgg.time_hour);
    longCallPStmt.setInt(7, cdrAgg.icalltreatmentcode);
    longCallPStmt.setInt(8, cdrAgg.icompletioncode);
    longCallPStmt.setInt(9, cdrAgg.bcallcompleted);
    longCallPStmt.setInt(10, cdrAgg.itodid);
    longCallPStmt.setInt(11, cdrAgg.iasktodid);
    longCallPStmt.setInt(12, cdrAgg.ibidtodid);
    longCallPStmt.setInt(13, cdrAgg.iaskzoneid);
    longCallPStmt.setInt(14, cdrAgg.ibidzoneid);
    rs = longCallPStmt.executeQuery();
    if (rs.next()) {
        cdr_agg = new CdrAgg(
            rs.getInt(1),
            rs.getInt(2),
            rs.getInt(3),
            rs.getInt(4),
            rs.getInt(5),
            rs.getTimestamp(6),
            rs.getInt(7),
            rs.getInt(8),
            rs.getInt(9),
            rs.getInt(10),
            rs.getInt(11),
            rs.getInt(12),
            rs.getInt(13),
            rs.getInt(14),
            rs.getInt(15),
            rs.getInt(16));
    } // if
    end_time = System.currentTimeMillis();
    log(Level.INFO, "CdrAggLoader: Loaded " + ((cdr_agg == null) ? 0 : 1) + " "
        + GeneralUtility.formatDate(cdrAgg.time_hour, "MM/dd/yy HH")
        + " tcdragg entry in " + (end_time - start_time) + " msecs");
    } finally { // the opening try { of this block was elided in the original post
        GeneralUtility.closeResultSet(rs);
        GeneralUtility.closeStatement(pstmt);
    }
    Why does that code work well with classes12.zip (coming back in around 10 msec) but not with ojdbc14.jar (coming back in 6-7 minutes)?
    Please advise.

  • Performance Problem After Upgrade

    Hi Gurus,
    We have successfully upgraded our system from R/3 4.6C to ECC 6.0, with an Oracle 10g database on AIX.
    Now we are facing a performance problem: when we check, tables are taking a huge amount of time to update data.
    Regards,
    Darshan...

    Thanks, AC!
    The performance problem is resolved.
    Now we are facing a problem with RFC SAPXPG_DBDEST_ECCDB:
    Logon     Connection Error
    Error Details     Error when opening an RFC connection
    Error Details     ERROR: SAP gateway connection failed. Is SAP gateway started?
    Error Details     LOCATION: SAP-Server eccci_LRP_00 on host eccci (wp 14)
    Error Details     COMPONENT: CPIC
    Error Details     COUNTER: 18
    Error Details     MODULE:
    Error Details     LINE:
    Error Details     RETURN CODE: 236
    Error Details     SUBRC: 0
    Error Details     RELEASE: 700
    Error Details     TIME: Tue Mar 17 11:57:01 2009
    Error Details     VERSION:
    Regards,
    Darshan..

  • Performance problem on free characteristics

    Hi Friends,
    We are seeing a performance problem with some of our BW reports.
    When we run BW web reports, the initial screen of the report is OK.
    When we filter on some free characteristics, it takes ages to filter.
    This happens only for some free characteristics.
    These free characteristics were fine a few days ago; we have been facing this problem for the last 10 days.
    Please help us in by providing your valuable solutions.
    Thanks
    Tony

    Hi Oliver,
    Let me explain in detail.
    When we drill down on all free characteristics (characteristics and nav attributes) - this is fine.
    When we press the filter option in order to filter on nav attributes - this is fine.
    When we press the filter option in order to filter on characteristics - this is fine for some characteristics, while others take ages. The guilty characteristics are SAP-defined ones like 0Material, 0Plant and 0Sold_To.
    The data loading has not changed for many days.
    Please advise how to proceed.
    Thanks.
    Tony

  • Performance problem in RSPC1

    When the BI team is building loads on the server, they get a performance problem in RSPC1, but not continuously: the problem lasts for about an hour, after which they can execute without any issue. At that point in time the performance of all t-codes is good. Could you please suggest what the problem could be?

    Hi,
    The running of process chains depends on the number of background processes that are available.
    You need to check with the BASIS team on the number of processes allocated for background jobs. The reason may be that all the background jobs are occupying the available processes at that given point in time, hence the lag before a process is free to start the job. You can also check in SM50 for the jobs that are running.
    Sasi

  • Documaker 12.1 Standard Edition Performance Problem

    We are having performance problems with our Documaker version 12.1 Standard Edition large batch runs. We are seeing much longer run times (5+ hours for a large line of business with a 30,000 policy declaration file) than we are willing to accept. Has anyone else had similar experiences? Any suggestions for optimizing our Documaker batch performance?

    You may have to be more specific and/or report your question to Support.
    I do recall that there is a performance improvement in 12.1 related to using table rows declared in repeating sections. The more sections you repeated with the table-row, the slower it would become. I think this is patch 12.1.3. I don't know if that patch is released yet. As I said, you should check with the Support site. If you have already reported to Support, check the status of your BugDB entry as this might be your fix in the works.
    If your situation is not related to using repeating table rows, then perhaps you could describe your setup and anything unusual about your situation.

  • PL/SQL Performance problem

    I am facing a performance problem with my current application (a PL/SQL packaged procedure).
    My application takes data from 4 temporary tables, does a lot of validation and puts it into permanent tables (updating if present, else inserting).
    One of the temporary tables is the parent table, and each parent row can have 0 or more rows in the other tables.
    I have analyzed all my tables and indexes and checked all my SQL statements.
    They all seem to be using the indexes correctly.
    There are 1.6 million records combined in all 4 tables.
    I am using Oracle 8i.
    How do I determine what is causing the problem and which part is taking the time?
    Please help.
    The skeleton of the code which we have written looks like this
    MAIN LOOP (255308 records) -- parent temporary table
        ----- lots of validation -----
        update permanent_table1
        if sql%rowcount = 0 then
            insert into permanent_table1
        Loop2 (0-5 records) -- child temporary table1
            ----- lots of validation -----
            update permanent_table2
            if sql%rowcount = 0 then
                insert into permanent_table2
        end loop2
        Loop3 (0-5 records) -- child temporary table2
            ----- lots of validation -----
            update permanent_table3
            if sql%rowcount = 0 then
                insert into permanent_table3
        end loop3
        Loop4 (0-5 records) -- child temporary table3
            ----- lots of validation -----
            update permanent_table4
            if sql%rowcount = 0 then
                insert into permanent_table4
        end loop4
        -- COMMIT after every 3000 records
    END MAIN LOOP
    Thanks
    Ashwin N.

    Do this instead of ditching the PL/SQL.
    -- assumes a target table such as: CREATE TABLE parts (pnum NUMBER(4), pname CHAR(15));
    DECLARE
        TYPE NumTab IS TABLE OF NUMBER(4) INDEX BY BINARY_INTEGER;
        TYPE NameTab IS TABLE OF CHAR(15) INDEX BY BINARY_INTEGER;
        pnums NumTab;
        pnames NameTab;
        t1 NUMBER(5);
        t2 NUMBER(5);
        t3 NUMBER(5);
    BEGIN
        FOR j IN 1..5000 LOOP -- load index-by tables
            pnums(j) := j;
            pnames(j) := 'Part No. ' || TO_CHAR(j);
        END LOOP;
        t1 := dbms_utility.get_time;
        FOR i IN 1..5000 LOOP -- use FOR loop: one INSERT per row
            INSERT INTO parts VALUES (pnums(i), pnames(i));
        END LOOP;
        t2 := dbms_utility.get_time;
        FORALL i IN 1..5000 -- use FORALL: one bulk-bound INSERT
            INSERT INTO parts VALUES (pnums(i), pnames(i));
        t3 := dbms_utility.get_time;
        -- dbms_utility.get_time returns hundredths of a second
        dbms_output.put_line('Execution Time (hsecs)');
        dbms_output.put_line('----------------------');
        dbms_output.put_line('FOR loop: ' || TO_CHAR(t2 - t1));
        dbms_output.put_line('FORALL:   ' || TO_CHAR(t3 - t2));
    END;
    /
    Try this link, http://download-west.oracle.com/docs/cd/B10501_01/appdev.920/a96624/05_colls.htm#23723
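    Applied to the update-then-insert pattern in the question, the same bulk-bind idea looks roughly like this. A sketch only: the hypothetical tables tmp_src(id, val) and perm_tgt(id, val) stand in for the real temporary and permanent tables, and SQL%BULK_ROWCOUNT (available in 8i) tells us which updates matched nothing and therefore need an insert:
    DECLARE
        TYPE num_tab IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
        TYPE val_tab IS TABLE OF VARCHAR2(100) INDEX BY BINARY_INTEGER;
        ids    num_tab;
        vals   val_tab;
        m_ids  num_tab;  -- rows whose UPDATE matched nothing
        m_vals val_tab;
        k BINARY_INTEGER := 0;
    BEGIN
        SELECT id, val BULK COLLECT INTO ids, vals FROM tmp_src;
        IF ids.COUNT > 0 THEN
            FORALL i IN 1..ids.COUNT
                UPDATE perm_tgt SET val = vals(i) WHERE id = ids(i);
            FOR i IN 1..ids.COUNT LOOP
                IF SQL%BULK_ROWCOUNT(i) = 0 THEN -- no row updated: queue an insert
                    k := k + 1;
                    m_ids(k)  := ids(i);
                    m_vals(k) := vals(i);
                END IF;
            END LOOP;
        END IF;
        IF k > 0 THEN
            FORALL i IN 1..k
                INSERT INTO perm_tgt (id, val) VALUES (m_ids(i), m_vals(i));
        END IF;
        COMMIT;
    END;
    /
    The row-by-row validation still has to run in PL/SQL before the bulk binds, but the SQL round trips collapse from one per row to one per statement.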

  • SQL report performance problem

    I have a SQL classic report in Apex 4.0.2 and database 11.2.0.2.0 with a performance problem.
    The report is based on a PL/SQL function returning a query. The query is based on a view and pl/sql functions. The Apex parsing schema has select grant on the view only, not the underlying objects.
    The generated query runs in 1-2 sec in sqlplus (logged in as the Apex parsing schema user), but takes many minutes in Apex. I have found, by monitoring the database sessions via TOAD, that the explain plans in the Apex and sqlplus sessions are very different.
    The summary:
    In sqlplus SELECT STATEMENT ALL_ROWS Cost: 3,695                                                                            
    In Apex SELECT STATEMENT ALL_ROWS Cost: 3,108,551                                                        
    What could be the cause of this?
    I found a blog and Metalink note about different explain plans for different users. They suggested to set optimizer_secure_view_merging='FALSE', but that didn't help.

    Hmmm, it runs fast again in SQL Workshop. I didn't expect that, because both the application and SQL Workshop use SYS.DBMS_SYS_SQL to parse the query.
    Only the explain plan doesn't show anything.
    To add: I changed the report source to the query the pl/sql function would generate, so the selects are the same in SQL Workshop and in the application. Still in the application it's horribly slow.
    So, Apex does do something different in the application compared to SQL Workshop.
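    One way to pin the difference down (a hedged sketch; the LIKE filter is illustrative and must match your own statement text) is to pull both cursors out of the shared pool and compare their plans:
    SELECT sql_id, child_number, plan_hash_value, parsing_schema_name
    FROM v$sql
    WHERE sql_text LIKE 'SELECT %your report query%';
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'ALLSTATS LAST'));
    Different plan_hash_value values across the child cursors would confirm that the Apex session environment (optimizer parameters, bind peeking, NLS settings) is producing the expensive plan, which narrows the search considerably.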

  • (new?) performance problem using jDriver after a Sql Server 6.5 to 2000 conversion

    Hi,
    This is similar - yet different - to a few of the old postings about performance
    problems with using jdbc drivers against Sql Server 7 & 2000.
    Here's the situation:
    I am running a standalone java application on a Solaris box using BEA's jdbc driver
    to connect to a Sql Server database on another network. The application retrieves
    data from the database through joins on several tables for approximately 40,000
    unique ids. It then processes all of this data and produces a file. We tuned
    the app so that the execution time for a single run through the application was
    24 minutes running against Sql Server 6.5 with BEA's jdbc driver. After performing
    a DBMS conversion to upgrade it to Sql Server 2000 I switched the jDriver to the
    Sql Server 2000 version. I ran the app and got an alarming execution time of
    5hrs 32 min. After some research, I found the problem with unicode and nvarchar/varchar
    and set the "useVarChars" property to "true" on the driver. The execution time
    for a single run through the application is now 56 minutes.
    56 minutes compared to 5 1/2 hrs is an amazing improvement. However, it is still
    over twice the execution time that I was seeing against the 6.5 database. Theoretically,
    I should be able to switch out my jdbc driver and the DBMS conversion should be
    invisible to my application. That would also mean that I should be seeing the
    same execution times with both versions of the DBMS. Has anybody else seen a
    simlar situation? Are there any other settings or fixes that I can put into place
    to get my performance back down to what I was seeing with 6.5? I would rather
    not have to go through and perform another round of performance tuning after having
    already done this when the app was originally built.
    thanks,
    mike

    Mike wrote:
    Joe,
    This was actually my next step. I replaced the BEA driver with
    the MS driver and let it run through with out making any
    configuration changes, just to see what happened. I got an
    execution time of about 7 1/2 hrs (which was shocking). So,
    (comparing apples to apples) while leaving the default unicode
    property on, BEA ran faster than MS, 5 1/2 hrs to 7 1/2 hrs.
    I then set the 'SendStringParametersAsUnicode' to 'false' on the
    MS driver and ran another test. This time the application
    executed in just over 24 minutes. The actual runtime was 24 min
    16 sec, which is still ever so slightly above the actual runtime
    against SS 6.5 which was 23 min 35 sec, but is twice as fast as the
    56 minutes that BEA's driver was giving me.
    I think that this is very interesting. I checked to make sure that
    there were no outside factors that may have been influencing the
    runtimes in either case, and there were none. Just to make sure,
    I ran each driver again and got the same results. It sounds like
    there are no known issues regarding this?
    We have people looking into things on the DBMS side and I'm still
    looking into things on my end, but so far none of us have found
    anything. We'd like to continue using BEA's driver for the
    support and the fact that we use Weblogic Server for all of our
    online applications, but this new data might mean that I have to
    switch drivers for this particular application.
    Thanks. No, there is no known issue, and if you put a packet sniffer between the client and DBMS, you will probably not see any appreciable difference in the content of the SQL sent by either driver. My suspicion is
    that it involves the historical backward compatibility built in to the DBMS.
    It must still handle several iterations of older applications, speaking obsolete
    versions of the DBMS protocol, and expecting different DBMS behavior!
    Our driver presents itself as a SQL7-level application, and may well be treated
    differently than a newer one. This may include different query processing.
    Because our driver is deprecated, it is unlikely that it will be changed in
    future. We will certainly support you using the MS driver, and if you look
    in the MS JDBC newsgroup, you'll see more answers from BEA folks than
    from MS people!
    Joe
    Mike
    The next test you should do, to isolate the issue, is to try another JDBC driver. MS provides a type-4 driver now, for free. If it is significantly faster, it would be interesting. However, it would still not isolate the problem, because we would still need to know what query plan is created by the DBMS, and why.
    Joe Weinstein at BEA
    PS: I can only tell you that our driver has not changed in its semantic function.
    It essentially sends SQL to the DBMS. It doesn't alter it.

  • Performance Problem - MS SQL 2K and PreparedStatement

    Hi all
    I am using MS SQL 2k and use PreparedStatement to retrieve data. There is a strange and serious performance problem when the PreparedStatement contains "?" placeholders and uses the PreparedStatement.setX() functions to set their values. I have performed a test with the following code.
    for (int i = 0; i < 10; i++) {
        try {
            con = DBConnection.getInstance();
            statement = con.prepareStatement("SELECT * FROM cardno WHERE car_no = '" + cardNo + "'");
            // statement = con.prepareStatement("SELECT * FROM cardno WHERE car_no = ?");
            // statement.setString(1, cardNo);
            rs = statement.executeQuery();
            if (rs.next()) {
                // process the row
            }
        } catch (SQLException e) {
            e.printStackTrace();
        } finally {
            try {
                rs.close();
                statement.close();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }
    Iterations   Time (ms)
    1            961
    10           1061
    200          1803
    for (int i = 0; i < 10; i++) {
        try {
            con = DBConnection.getInstance();
            // statement = con.prepareStatement("SELECT * FROM cardno WHERE car_no = '" + cardNo + "'");
            statement = con.prepareStatement("SELECT * FROM cardno WHERE car_no = ?");
            statement.setString(1, cardNo);
            rs = statement.executeQuery();
            if (rs.next()) {
                // process the row
            }
        } catch (SQLException e) {
            e.printStackTrace();
        } finally {
            try {
                rs.close();
                statement.close();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }
    Iterations   Time (ms)
    1            1171
    10           2754
    100          18817
    200          36443
    The above test was performed with the DataDirect JDBC 3.0 driver. The version that uses ? and setString takes much longer to execute, even though it is supposed to be faster because of precompilation of the statement.
    I have tried different drivers - the one provided by MS, DataDirect and the Sprinta JDBC driver - but all suffer from the same problem to a different extent. So I am wondering whether MS SQL simply doesn't support precompiled statements, in which case no matter what JDBC driver I use I will still have the performance problem. If so, many O/R mappers cannot be used, because I believe most of them, if not all, use precompiled statements.
    Best regards
    Edmond

    Edmond,
    Most JDBC drivers for MS SQL (and I think this includes all the drivers you tested) use sp_executesql to execute PreparedStatements. This is a pretty good solution, as the driver doesn't have to keep any information about the PreparedStatement locally; the server takes care of all the precompiling and caching. And if the statement isn't already precompiled, this is also taken care of transparently by SQL Server.
    The problem with this approach is that all names in the query must be fully qualified. This means that the driver has to parse the query you are submitting and make all names fully qualified (by prepending a db name and schema). This is why creating a PreparedStatement takes so long with these drivers (and why it does so every time you create it, even though it's the same PreparedStatement).
    However, the speed advantage of PreparedStatements only becomes visible if you reuse the statement a lot of times.
    As for why the PreparedStatement with no placeholder is much faster, I think it is because of internal optimisations (maybe the statement is run as a plain statement?).
    As a conclusion, if you can reuse the same PreparedStatement, then the performance hit is not so high. Just ignore it. However, if the PreparedStatement is created each time and only used a few times, then you might have a performance issue. In this case I would recommend you try the jTDS driver ( http://jtds.sourceforge.net ), which uses a completely different approach: temporary stored procedures are created for PreparedStatements. This means that no parsing is done by the driver, and PreparedStatement caching is possible (i.e. the next time you prepare the same statement it will take much less time, as the previously submitted procedure will be reused).
    Alin.
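    To make the mechanism concrete, this is roughly what such a driver sends for the parameterised query above (a hedged sketch; the database and schema qualifiers are illustrative - producing them is exactly the parsing work described above):
    EXEC sp_executesql
        N'SELECT * FROM mydb.dbo.cardno WHERE car_no = @P1',
        N'@P1 varchar(20)',
        @P1 = '12345'
    SQL Server compiles and caches a plan for the parameterised text, so repeated executions with different @P1 values reuse the same plan - which is why the cost only pays off when the statement is reused many times.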
