Performance problem in OLAP DB when using optimizer_index_cost_adj=1

Hi,
I have Oracle 11gR2 as my DB and the Siebel 8.1 application.
I would like to ask everyone about the significance of the two parameters below:
1> optimizer_index_cost_adj=1
2> optimizer_index_caching=0
I'm facing a performance issue while extracting data from my OLTP DB into my OLAP DB. Below I have described it in more detail for your perusal.
I have noticed that the explain plans for extracting large amounts of data from the OLTP system are very bad.
For example, the explain plan for one of our transactions, called "extract person data", mostly uses nested loops instead of hash joins. I checked some OLTP parameters and noticed that one important DB parameter, optimizer_index_cost_adj, currently has the value 1. Setting it to 1 makes index access look nearly free to the optimizer, which leads to the nested loops; the DB is effectively no longer choosing plans on real cost. Oracle recommends this parameter value of 1 for better performance of the Siebel application.
Whereas, if I change the above parameters to the values below:
optimizer_index_cost_adj=25;
optimizer_index_caching=75;
and recreate the explain plans for the specific long-running queries with these settings, the plans change as expected.
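For reference, this can be tested at session level without touching the instance; a minimal sketch (the extract query itself is just a placeholder):

alter session set optimizer_index_cost_adj = 25;
alter session set optimizer_index_caching = 75;

explain plan for
select /* long-running extract query goes here */ * from dual;

select * from table(dbms_xplan.display);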
Now, my question is: if Oracle recommends 1> optimizer_index_cost_adj=1 and 2> optimizer_index_caching=0, then why does the cost of many queries increase enormously, which in turn leads to bad performance during the extraction of data from the OLTP system to the OLAP system?
Please comment and suggest how I should proceed.
Thanks,
Anil

Going from 10.2.x to 11.2.0.3 I have often noticed two points:
First, you must re-gather system statistics. If you run
select * from sys.aux_stats$;
what do you get? Have they ever been gathered? I suspect that 11.x makes more use of them than earlier releases.
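If they have never been gathered, something along these lines would do it (a sketch; pick a representative workload window):

-- quick noworkload (hardware-only) statistics:
exec dbms_stats.gather_system_stats
-- or bracket a representative workload:
exec dbms_stats.gather_system_stats('START')
-- ...let the normal load run for a while...
exec dbms_stats.gather_system_stats('STOP')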
Secondly, the default for the hidden parameter _serial_direct_read is now auto; in 10g it was (I think) false. This can have a big effect in pushing the optimizer away from indexes.
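You can check its current value like this (a sketch; the x$ views are undocumented and need a SYS connection):

select i.ksppinm name, v.ksppstvl value
from x$ksppi i, x$ksppcv v
where i.indx = v.indx
and i.ksppinm = '_serial_direct_read';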

Similar Messages

  • Performance problems to generate PDF files using CR

    Hi,
I am developing an application that reads TXT files and uses the information to generate PDF files, but I'm having performance problems: each file takes at least 1 minute to create. I noticed that some TMP and RPT files are generated and deleted in the temp folder of the machine running the application, but after creating PDFs for some time the RPT files are no longer deleted and I get an exception in my app:
    Memory full.
    Failed to export the report.
    Not enough memory for operation.    at CrystalDecisions.ReportAppServer.Controllers.ReportSourceClass.Export(ExportOptions pExportOptions, RequestContext pRequestContext)
       at CrystalDecisions.ReportSource.EromReportSourceBase.ExportToStream(ExportRequestContext reqContext)
       at CrystalDecisions.CrystalReports.Engine.FormatEngine.ExportToStream(ExportRequestContext reqContext)
       at CrystalDecisions.CrystalReports.Engine.FormatEngine.Export(ExportRequestContext reqContext)
       at CrystalDecisions.CrystalReports.Engine.ReportDocument.ExportToDisk(ExportFormatType formatType, String fileName)
    The server running this application is a Dual Core AMD Opteron 2.61 GHz with 7 GB RAM.
    The Framework version is 2.0 and the Crystal Reports DLL version is 10.5.3700.0.
    Thanks.

    Hi,
    If you are using IIS5, give the ASPNET account full permission to the Temp folders. For IIS6, give the IIS_WPG account permission.
    For this you need to right-click on the folder,
    go to Properties,
    Security tab,
    click on Add,
    type ASPNET or IIS_WPG,
    select the top node under Locations and click on Check Names.
    Give Full Control and
    click OK.
    Also try using GC.Collect(), reportDocObject.Close() and reportDocObject.Dispose() as clean-up code.
    Regards,
    AG.

  • Performance problem between Oracle.DataAccess v1 and v2

    Hi, I have a serious performance problem with OracleDataReader when I use the GetValues method.
    My server is Oracle 9.2.0.7, and I use ODAC v10.2.0.221.
    I created a dummy table for the benchmark:
    create table test (a varchar2(50), b number);
    begin
      for i in 1..62359 loop
        insert into test values ('Values ' || i, i);
      end loop;
      commit;
    end;
    /
    I used the same code to benchmark Framework v1 and Framework v2.
    Code :
    try {
        OracleConnection c = new OracleConnection("user id=saturne_dbo;password=***;data source=satedfx;");
        c.Open();
        go(c);
        c.Close();
    }
    catch (Exception ex) {
        MessageBox.Show(ex.Message);
    }
    private void go(IDbConnection c) {
        IDbCommand cmd = c.CreateCommand();
        cmd.CommandText = "select * from test";
        cmd.CommandType = CommandType.Text;
        DateTime dt = DateTime.Now;
        IDataReader reader = cmd.ExecuteReader();
        int count = 0;
        while (reader.Read()) {
            // copy every column of the current row in one call
            object[] fields = new object[reader.FieldCount];
            reader.GetValues(fields);
            count++;
        }
        reader.Close();
        // elapsed time for reading the whole table
        TimeSpan eps = DateTime.Now - dt;
        MessageBox.Show("Time " + count + " : " + eps.TotalSeconds);
    }
    The results are:
    Framework v1 with OracleDataAccess 1.10.2.2.20: "Time 62359 : 0.5"
    Framework v2 with OracleDataAccess 2.10.2.2.20: "Time 62359 : 3.57" - a FACTOR of 6!
    I notice the same problem with the OleDb provider and the Microsoft Oracle Client provider.
    It's a serious problem for my production server; the calculation time explodes...
    What is the explanation?
    Do you know a solution?

    Can you please try out the following:
    1. Create a .NET 1.x DLL with your benchmark code. This will obviously use ODP.NET for .NET 1.x.
    2. Call this assembly routine from a .NET 1.x executable and note the results.
    3. Now call this assembly routine from a .NET 2.0 executable and note the results.
    The idea is to always use "ODP.NET for .NET 1.x" even in .NET 2.0 runtime. This will tell us whether the performance degradation is a runtime issue.

  • Performance problem with Oracle

    We are currently getting a system developed in Unix/Weblogic/Tomcat/Oracle environment. We have developed a screen that contains 5 or 6 different parameters to select from. We could select multiple parameters in each of these selections. The idea behind the subsequent screens is to attach information to already existing data/ possible future data that matches the selection criteria.
    Based on these selections, existing data located within the system in a table is searched and those that match are selected. Also new rows are created in the table against combinations that do not currently have a match. Frequently multiple parameters are selected, and 2000 different combinations need to be searched in the table. Of these selections, only about 100 or 200 combinations will be available in existing data. So the system is having to insert 1800 rows. The user meanwhile waits for the system to come up with data based on their selections. The user is not willing to wait more than 30 seconds to get to the next screen. In the above mentioned scenario, the system takes more than an hour to insert the new records and bring the information up. We need suggestions to see if the performance can be improved this drastically. If not what are the alternatives? Thanks

    The #1 cause for performance problems with Oracle is not using it correctly.
    I find it hard to believe that you can have performance problems with the small data volumes mentioned.
    You need to perform a sanity check. Are you using Oracle correctly? Do you know what bind variables are? Are you using indexes correctly? Are you using PL/SQL correctly? Is the instance setup correctly? What about storage, are you using SAME (RAID10) or something else? Etc.
    Fact: Oracle performs exceptionally well.
    A simple example from a benchmark I did on this exact same subject, with app-tier developers not understanding and not using Oracle correctly: incorrect usage of Oracle doing 100,000 SQL statements took 24+ minutes elapsed time. Doing those exact same 100,000 SQL statements correctly (using bind variables) took 8 seconds elapsed time. (Benchmark using Oracle 10.1.0.3 on a Sunfire V20z server.)
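    To make the difference concrete, a minimal sketch against a hypothetical one-column number table t:
    -- literals: a hard parse for every statement
    begin
      for i in 1..100000 loop
        execute immediate 'insert into t values (' || i || ')';
      end loop;
      commit;
    end;
    /
    -- bind variable: one shared cursor reused for every execution
    begin
      for i in 1..100000 loop
        execute immediate 'insert into t values (:n)' using i;
      end loop;
      commit;
    end;
    /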
    But then you need to use Oracle correctly. Are you familiar with the Oracle Concepts Guide? Have you read the Oracle Application Developer Fundamentals Guide?

  • DB Adapter Performance Problem

    Hello,
    We are trying to update an Oracle database using the DB Adapter. Insertion into the database via the DBAdapter (and only with the DBAdapter) is slow: even transferring 50 records of ~1K data takes 5-6 seconds.
    Environment:
    Oracle SOA suite 10.1.3 with 10.1.3.3 Patch Applied
    AIX 5
    8 CPU & 20 GB RAM
    Our test setup.
    Tool:ESB
    Inbound Adapter to read data from Oracle Table
    TransformActivity to convert source schema to destination schema
    Outbound Adapter to write data into the same Oracle table on the same machine. (This has the performance problem.)
    If we read data from an Oracle table using the DB adapter and write it to a file using the File adapter, the transfer of 10,000 records (~2K each) happens in 2 secs. Only writing into the database takes a long time, and we are unsure why. Any help in solving this problem would be appreciated.
    We have modified the DB values recommended by the Oracle documentation for performance improvement. We have done JVM tuning. We tried using "UsesBatchWriting" and UseDirectSql=true. However, there is no improvement.
    We also tried creating an outbound adapter which executes custom SQL that inserts 10,000 records into the destination table (insert into dest_table select * from source_table). There is no performance issue with this approach: the custom SQL executes in less than 2 seconds. We also don't see any performance problem if we use any SQL client to update data in the same destination table. Only via the DB Adapter do we face this issue.
    Please let me know if you would like to know the setting of any parameter in the system. We would appreciate any help in finding where the bottleneck is.
    Thanks

    I'm presuming this is just a merge and not an insert.
    Do alter system set sql_trace=true and capture the trace files on the database. It's probably only waiting on 'SQL*Net message from client', but we need to rule that out.
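    A sketch of that (sql_trace is dynamic in 10g, and the trace files land in user_dump_dest):
    alter system set sql_trace = true;
    -- reproduce one slow DB Adapter transfer, then:
    alter system set sql_trace = false;
    -- the trace directory:
    select value from v$parameter where name = 'user_dump_dest';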
    dmstool should show you some of the activity inside the client; it may also be worth doing a truss on the Java process to see what syscalls it is waiting on.
    Also, are you up to MLR7, the latest ESB release?

  • 10.1.3.3 ESB DB Adapter Performance Problem

    Hello,
    We are trying to update an Oracle database using the DB Adapter. Insertion into the database via the DBAdapter (and only with the DBAdapter) is slow: even transferring 50 records of ~1K data takes 5-6 seconds.
    Environment:
    Oracle SOA suite 10.1.3 with 10.1.3.3 Patch Applied
    AIX 5
    8 CPU & 20 GB RAM
    Our test setup.
    Tool:ESB & BPEL
    Inbound Adapter to read data from Oracle Table
    TransformActivity to convert source schema to destination schema
    Outbound Adapter to write data into the same Oracle table on the same machine. (This has the performance problem.)
    The ESB Console shows that most of the total time is spent in the Outbound Adapter activity.
    We also created a BPEL process to do the data transfer between the Oracle databases. The adapter statistics for the outbound insert activity in the BPEL console show a high value under "Commit", listed under "Adapter Post Processing".
    If we read data from an Oracle table using the DB adapter and write it to a file using the File adapter, the transfer of 10,000 records (~2K each) happens in 2 secs. Only writing into the database takes a long time, and we are unsure why. Any help in solving this problem would be appreciated.
    We have modified the DB values recommended by the Oracle documentation for performance improvement. We have done JVM tuning. We tried using "UsesBatchWriting" and UseDirectSql=true. However, there is no improvement.
    We also tried creating an outbound adapter which executes custom SQL that inserts 10,000 records into the destination table (insert into dest_table select * from source_table). There is no performance issue with this approach: the custom SQL executes in less than 2 seconds. We also don't see any performance problem if we use any SQL client to update data in the same destination table. Only via the DB Adapter do we face this issue.
    Interestingly, in a different setup, a Windows machine with just 1 CPU and 1 GB RAM running 10.1.3 is able to transfer 10,000 records (~2K per record) to a different Oracle database over the network (within the LAN).
    Please let me know if you would like to know the setting of any parameter in the system. We would appreciate any help in finding where the bottleneck is.
    Thanks

    I'm presuming this is just a merge and not an insert.
    Do alter system set sql_trace=true and capture the trace files on the database. It's probably only waiting on 'SQL*Net message from client', but we need to rule that out.
    dmstool should show you some of the activity inside the client; it may also be worth doing a truss on the Java process to see what syscalls it is waiting on.
    Also, are you up to MLR7, the latest ESB release?

  • (new?) performance problem using jDriver after a Sql Server 6.5 to 2000 conversion

    Hi,
    This is similar - yet different - to a few of the old postings about performance
    problems with using jdbc drivers against Sql Server 7 & 2000.
    Here's the situation:
    I am running a standalone java application on a Solaris box using BEA's jdbc driver
    to connect to a Sql Server database on another network. The application retrieves
    data from the database through joins on several tables for approximately 40,000
    unique ids. It then processes all of this data and produces a file. We tuned
    the app so that the execution time for a single run through the application was
    24 minutes running against Sql Server 6.5 with BEA's jdbc driver. After performing
    a DBMS conversion to upgrade it to Sql Server 2000 I switched the jDriver to the
    Sql Server 2000 version. I ran the app and got an alarming execution time of
    5hrs 32 min. After some research, I found the problem with unicode and nvarchar/varchar
    and set the "useVarChars" property to "true" on the driver. The execution time
    for a single run through the application is now 56 minutes.
    56 minutes compared to 5 1/2 hrs is an amazing improvement. However, it is still
    over twice the execution time that I was seeing against the 6.5 database. Theoretically,
    I should be able to switch out my jdbc driver and the DBMS conversion should be
    invisible to my application. That would also mean that I should be seeing the
    same execution times with both versions of the DBMS. Has anybody else seen a
    similar situation? Are there any other settings or fixes that I can put into place
    to get my performance back down to what I was seeing with 6.5? I would rather
    not have to go through and perform another round of performance tuning after having
    already done this when the app was originally built.
    thanks,
    mike

    Mike wrote:
    Joe,
    This was actually my next step. I replaced the BEA driver with
    the MS driver and let it run through with out making any
    configuration changes, just to see what happened. I got an
    execution time of about 7 1/2 hrs (which was shocking). So,
    (comparing apples to apples) while leaving the default unicode
    property on, BEA ran faster than MS, 5 1/2 hrs to 7 1/2 hrs.
    I then set the 'SendStringParametersAsUnicode' to 'false' on the
    MS driver and ran another test. This time the application
    executed in just over 24 minutes. The actual runtime was 24 min
    16 sec, which is still ever so slightly above the actual runtime
    against SS 6.5 which was 23 min 35 sec, but is twice as fast as the
    56 minutes that BEA's driver was giving me.
    I think that this is very interesting. I checked to make sure that
    there were no outside factors that may have been influencing the
    runtimes in either case, and there were none. Just to make sure,
    I ran each driver again and got the same results. It sounds like
    there are no known issues regarding this?
    We have people looking into things on the DBMS side and I'm still
    looking into things on my end, but so far none of us have found
    anything. We'd like to continue using BEA's driver for the
    support and the fact that we use Weblogic Server for all of our
    online applications, but this new data might mean that I have to
    switch drivers for this particular application. Thanks.
    No, there is no known issue, and if you put a packet sniffer between the client and DBMS, you will probably not see any appreciable difference in the content of the SQL sent by either driver. My suspicion is
    that it involves the historical backward compatibility built in to the DBMS.
    It must still handle several iterations of older applications, speaking obsolete
    versions of the DBMS protocol, and expecting different DBMS behavior!
    Our driver presents itself as a SQL7-level application, and may well be treated
    differently than a newer one. This may include different query processing.
    Because our driver is deprecated, it is unlikely that it will be changed in
    future. We will certainly support you using the MS driver, and if you look
    in the MS JDBC newsgroup, you'll see more answers from BEA folks than
    from MS people!
    Joe
    Mike,
    The next test you should do, to isolate the issue, is to try another JDBC driver.
    MS provides a type-4 driver now, for free. If it is significantly faster, that would be interesting. However, it would still not isolate the problem, because we would still need to know what query plan is created by the DBMS, and why.
    Joe Weinstein at BEA
    PS: I can only tell you that our driver has not changed in its semantic function. It essentially sends SQL to the DBMS. It doesn't alter it.

  • Performance problems when using Premiere Elements for photo slideshows

    Hello,
    I had been using Premiere Elements 9 (PE9) to make a simple slideshow for my parents from their vacation trip and I ran into some serious performance problems. I had used it to create similar projects before, but not nearly as big. This one has about 260 photos, so basically it is 260 separate clips. I have a POWERHOUSE workstation (see below) so it isn't my PC. Even when PE9 crashes, my performance monitor shows that my CPU and RAM aren't even halfway utilized. I finally switched to Windows Movie Maker of all things and it worked seamlessly, amazing really. I'm wondering if I was just using PE9 for something other than what it was designed for, since there weren't really any video clips, just a ton of photos that I made into video clips, if that makes sense. Based upon my experience with this so far, I can't imagine using PE9 anymore for anything really. I might have the need for a more professional video editing program in the near future, although it does seem like PE has a lot of features. How can I make sure it utilizes my workstation to its full potential? Here are my specs:
    PC
    Intel Core i7-2600K 4.6 GHz Overclocked
    ASUS P8P67 Deluxe Motherboard
    AMD Firepro V8800 Video Card
    Crucial 128 GB SATA 6Gb/s Solid State Drive (Operating System)
    Corsair Vengeance 16GB (4x4GB) Memory
    Corsair H60 Liquid CPU Cooler
    Corsair Professional Series Gold AX850 Power Supply
    Graphite Series 600T Mid-Tower Case
    Western Digital Caviar Black 1 TB SATA III Hard Drive
    Western Digital Caviar Black 2 TB SATA III Hard Drive
    Western Digital Green 3 TB SATA III Hard Drive
    Logitech Wireless Gaming Mouse G700
    I don’t play any games but it’s a great productivity mouse with 13 customizable buttons
    Wacom Intuos5 Pen Tablet
    Yes, this system is blazingly fast. I have yet to feel it slow down, even with Photoshop, Lightroom, InDesign, Illustrator and numerous other apps running at the same time. HOWEVER, Premiere Elements 9 has crashed NUMEROUS times, and every time my system wasn't even close to being fully taxed.
    Monitors – All run on the ATI V8800
    Dell Ultra Sharp 30 inch
    Samsung 27 Inch
    HAANS-G 28 Inch
    Herman Miller Embody Ergonomic Chair (one of my favorite items)

    Andy,
    There ARE some differences between PrE and PrPro w/ an approved CUDA-capable and MPE hardware acceleration-enabled nVidia video card, but those differences show up ONLY in the quality of the Scaling. The processing overhead is almost exactly the same, when it comes to handling the extra pixels.
    As of PrPro CS 5, two things changed:
    The max. size of Still Images went up from 4096 x 4096 pixels, to quite a bit larger (cannot recall the numbers now).
    The Scaling algorithms have been improved, though ONLY with the correct nVidia cards, with MPE hardware support enabled.
    Now, there CAN be another consideration, between the two programs, in that PrPro CS 5 - CS 6, are 64-bit ONLY, so one benefits from the computer and OS to run it. PrE can be either 32-bit, or 64-bit, so one might, or might not, be taking advantage of the 64-bit program and OS. Still, the processing overhead will be almost identical, it's just that the 64-bit OS can spread it around a bit.
    I still recommend Scaling the large Still Images in PS, prior to Import, to keep that processing overhead as low as possible. Scaled Still Images work just fine, and I have one Project with 3000+ Scaled Still Images that edits just fine in PrPro, even on my older 32-bit workstation. Testing that same machine and PrPro some years ago, I could work with ONLY up to five 4096 x 4096 Stills before things ground to a crawl.
    Now, Adobe AfterEffects handles large Still Images differently, so I just moved that test Project to AE, and added another 20 large Images, which edited just fine. IIRC, AE can handle Still Images up to 10K x 10K pixels, and that might have gone up, as of CS 5.
    Good luck, and hope that helps,
    Hunt

  • Performance problem in Mapping Designer using UDF with external imports

    Hello,
    we have a big performance problem in developing (not in executing) graphical Mappings whenever we use user-defined functions (UDFs) with Import entries referencing JAR files that are imported as "imported archives".
    For example, executing the invoice mapping with a somewhat bigger test file in the Mapping Designer:
    - after opening, not in change mode: 6 seconds
    - after switching to change mode: 37 seconds (that's clear, now everything is compiled first)
    - after adding "com.seeburger.functions.permstore.CounterFactory" to the Import field of one UDF, with no other change: 227 seconds
    - after saving and submitting the change list (no longer in change mode): 6 seconds
    - after switching to change mode: 227 seconds
    So in change mode, the execution speed of testing (and also of watching queues) increases by more than three minutes when using UDFs with imports referencing external JAR files. It doesn't depend on the Seeburger functions (we also use XI for EDIFACT, so we use some Seeburger functions); I can reproduce it with any other JAR file that is used from a UDF.
    Using JDK-included classes like java.text.NumberFormat in the Import field doesn't slow down testing.
    Can anybody reproduce this? We are using XI 3.0 SP19 on an AIX machine, so we also have to use the Java version from IBM.
    cu
    Manfred

    The problem was fixed by an upgrade of the JDK.

  • Performance problem: 100% swap used, but vmstat sr = 0

    Hi,
    I have a performance problem on a server. It is sometimes very slow for several hours.
    Context: V890, 32 GB RAM, 8 SPARC IV+ CPUs, Solaris 10 release 03/05, Veritas Volume Manager, containers, several Oracle databases, applications...
    with iostat, swap partition: %b -> 100% !!!!
    with vmstat: r -> 0, b -> 0, w -> 29, free memory: 600 MB, sr -> 0, idle: more than 50%
    uptime: load average 6
    vmstat -S: si -> 0, so -> 0
    vmstat -p: api -> 45126682863 (probably a bug), apo -> 0, fpi -> 1895320681342 (probably a bug), fpo -> 0
    It's difficult for me to find the problem. Is it paging activity? Can someone tell me at what memory threshold paging activity starts?
    If you think I'm on the wrong track, thanks for all ideas anyway :)
    Julien

    Does seem a bit odd.
    The 'w' column doesn't necessarily mean that anything bad is happening now, but it does mean that the system was severely memory limited at some point in the past at least.
    Paging should occur when free memory drops below LOTSFREE. I don't remember if swapping happens at a particular point, but probably wouldn't happen above DESFREE. The page scanner should become active (non-zero 'sr' numbers) any time the memory is below LOTSFREE.
    Since you have Solaris 10, you might want to grab the dtrace toolkit and see if some of the tools in there show you anything more useful (some of the I/O ones might break down the access further).
    So it really doesn't look like you're swapping/paging out anything now, but you almost certainly did in the past. It could be that an app paged out a lot of stuff to disk, so the I/O you're seeing is it bringing that stuff back now that RAM is available.
    Darren

  • Performance problem using OBJECT tag

    I have a performance problem using the Java plugin and was wondering if anyone else has seen the same thing. I have a rather complex applet that interacts with JavaScript in a web page using the LiveConnect API. The applet both calls JavaScript in the page and is called by JavaScript.
    I'm using IE6 with the Java plugin that ships with the 1.4.2_06 JVM. I have noticed that if I deploy the applet using the OBJECT tag, the application seems to thrash every time I call a Java method on the applet from JavaScript. When I deploy the same applet using the APPLET tag, the performance is much better. I would like to use the OBJECT tag because the applet behaves better and I have more control over the caching.
    This problem seems to be on the boundaries of IE6, JScript, the JVM and my applet (and I suppose any could be the real culprit). My application is IE5+ specific, so I cannot test the applet in isolation from the surrounding HTML/JavaScript (for example in another browser).
    Does anyone have any idea?
    thanks in advance.
    dennis.


  • Performance Problem with File Adapter using FTP Conection

    Hi All,
    I have a pool of 19 interfaces that send data from R/3 using the RFC Adapter, and these interfaces generate 30 TXT files on a target server. I'm using File Adapters as the receiver communication channel. This is causing a serious performance problem. In the File Adapter I'm using an FTP connection with a permanent connection. Does anybody know if the PERMANENT CONNECTION is the cause of the performance problem?
    These interfaces will run once a day with a total of 600 messages.
    We are still using a test server with few messages.

    Hi Regis,
    We also faced the same problem. What's happening is that when the FTP session is initiated by the file adapter, it is done from the XI server, hence the memory of the server is also eaten up. Why don't you give 'per file transfer' a try?
    If the folder to which you are connecting is within your XI server's network, then you can mount (or map) that drive on the XI server and use the NFS protocol of the file adapter, thereby increasing the performance.
    Cheers
    JK

  • Performance Problem between Oracle 9i to Oracle 10g using Crystal XI

    We have a Crystal XI Report using ODBC Drivers, 14 tables, and one sub report. If we execute the report on an Oracle 9i database the report will complete in about 12 seconds. If we execute the report on an Oracle 10g database the report will complete in about 35 seconds.
    Our technical Setup:
    Application server: Windows Server 2003, Running Crystal XI SP2 Runtime dlls with Oracle Client 10.01.00.02, .Net Framework 1.1, C# for Crystal Integration, Unmanaged C++ for app server environment calling into C# through a dynamically loaded mixed-mode C++ DLL.
    Database server is Oracle 10g
    What we have concluded:
    Reducing the number of tables to 1 reduces the execution time of the report from 180s to 13s. With 1 table and the sub report we get 30 seconds.
    We have done some database tracing and see that Crystal Reports issues the following query when verifying the database, and it takes longer in 10g than in 9i.
    We have done some profiling in the application code. When we retarget the first table to the target database, it takes 20-30 times longer in 10g than in 9i. Retargeting the other tables takes about twice as long. The export to a PDF file takes about 4-5 times as long in 10g as in 9i.
    Oracle 10g no longer supports the /*+ RULE */ hint.
    Verify DB Query:
    select /*+ RULE */ *
    from (
      select /*+ RULE */
             null table_qualifier, o1.owner table_owner, o1.object_name table_name,
             decode(o1.owner,
                    'SYS',    decode(o1.object_type, 'TABLE', 'SYSTEM TABLE', 'VIEW', 'SYSTEM VIEW', o1.object_type),
                    'SYSTEM', decode(o1.object_type, 'TABLE', 'SYSTEM TABLE', 'VIEW', 'SYSTEM VIEW', o1.object_type),
                    o1.object_type) table_type,
             null remarks
      from   all_objects o1
      where  o1.object_type in ('TABLE', 'VIEW')
      union
      select /*+ RULE */
             null table_qualifier, s.owner table_owner, s.synonym_name table_name, 'SYNONYM' table_type, null remarks
      from   all_objects o3, all_synonyms s
      where  o3.object_type in ('TABLE', 'VIEW')
      and    s.table_owner = o3.owner
      and    s.table_name = o3.object_name
      union
      select /*+ RULE */
             null table_qualifier, s1.owner table_owner, s1.synonym_name table_name, 'SYNONYM' table_type, null remarks
      from   all_synonyms s1
      where  s1.db_link is not null
    ) tables
    where 1=1
    and   TABLE_NAME = 'QCTRL_VESSEL'
    and   table_owner = 'QLM'
    order by 4, 2, 3
    SQL From Main Report:
    SELECT "QCODE_PRODUCT"."PROD_DESCR", "QCTRL_CONTACT"."CONTACT_FIRST_NM", "QCTRL_CONTACT"."CONTACT_LAST_NM", "QCTRL_MEAS_PT"."MP_NM", "QCTRL_ORG"."ORG_NM", "QCTRL_TKT"."SYS_TKT_NO", "QCTRL_TRK_BOL"."START_DT", "QCTRL_TRK_BOL"."END_DT", "QCTRL_TRK_BOL"."DESTINATION", "QCTRL_TRK_BOL"."LOAD_TEMP", "QCTRL_TRK_BOL"."LOAD_PCT", "QCTRL_TRK_BOL"."WEIGHT_OUT", "QCTRL_TRK_BOL"."WEIGHT_IN", "QCTRL_TRK_BOL"."WEIGHT_OUT_UOM_CD", "QCTRL_TRK_BOL"."WEIGHT_IN_UOM_CD", "QCTRL_TRK_BOL"."VAPOR_PRES", "QCTRL_TRK_BOL"."SPECIFIC_GRAV", "QCTRL_TRK_BOL"."PMO_NO", "QCTRL_TRK_BOL"."ODORIZED_VOL", "QARCH_SEC_USER"."SEC_USER_NM", "QCTRL_TKT"."DEM_CTR_NO", "QCTRL_BA_ENTITY"."BA_NM1", "QCTRL_BA_ENTITY_VW"."BA_NM1", "QCTRL_BA_ENTITY"."BA_ID", "QCTRL_TRK_BOL"."VOLUME", "QCTRL_TRK_BOL"."UOM_CD", "QXREF_BOL_PROD"."MOVEMENT_TYPE_CD", "QXREF_BOL_PROD"."BOL_DESCR", "QCTRL_TKT"."VOL", "QCTRL_TKT"."UOM_CD", "QCTRL_PMO"."LINE_UP_BEFORE", "QCTRL_PMO"."LINE_UP_AFTER", "QCODE_UOM"."UOM_DESCR", "QCTRL_ORG_VW"."ORG_NM"
    FROM (((((((((((("QLM"."QCTRL_TRK_BOL" "QCTRL_TRK_BOL" INNER JOIN "QLM"."QCTRL_PMO" "QCTRL_PMO" ON "QCTRL_TRK_BOL"."PMO_NO"="QCTRL_PMO"."PMO_NO") INNER JOIN "QLM"."QCTRL_MEAS_PT" "QCTRL_MEAS_PT" ON "QCTRL_TRK_BOL"."SUP_MP_ID"="QCTRL_MEAS_PT"."MP_ID") INNER JOIN "QLM"."QCTRL_TKT" "QCTRL_TKT" ON "QCTRL_TRK_BOL"."PMO_NO"="QCTRL_TKT"."PMO_NO") INNER JOIN "QLM"."QCTRL_CONTACT" "QCTRL_CONTACT" ON "QCTRL_TRK_BOL"."DRIVER_CONTACT_ID"="QCTRL_CONTACT"."CONTACT_ID") INNER JOIN "QFC_QLM"."QARCH_SEC_USER" "QARCH_SEC_USER" ON "QCTRL_TRK_BOL"."USER_ID"="QARCH_SEC_USER"."SEC_USER_ID") LEFT OUTER JOIN "QLM"."QCODE_UOM" "QCODE_UOM" ON "QCTRL_TRK_BOL"."ODORIZED_VOL_UOM_CD"="QCODE_UOM"."UOM_CD") INNER JOIN "QLM"."QCTRL_ORG_VW" "QCTRL_ORG_VW" ON "QCTRL_MEAS_PT"."ORG_ID"="QCTRL_ORG_VW"."ORG_ID") INNER JOIN "QLM"."QCTRL_BA_ENTITY" "QCTRL_BA_ENTITY" ON "QCTRL_TKT"."DEM_BA_ID"="QCTRL_BA_ENTITY"."BA_ID") INNER JOIN "QLM"."QCTRL_CTR_HDR" "QCTRL_CTR_HDR" ON "QCTRL_PMO"."DEM_CTR_NO"="QCTRL_CTR_HDR"."CTR_NO") INNER JOIN "QLM"."QCODE_PRODUCT" "QCODE_PRODUCT" ON "QCTRL_PMO"."PROD_CD"="QCODE_PRODUCT"."PROD_CD") INNER JOIN "QLM"."QCTRL_BA_ENTITY_VW" "QCTRL_BA_ENTITY_VW" ON "QCTRL_PMO"."VESSEL_BA_ID"="QCTRL_BA_ENTITY_VW"."BA_ID") LEFT OUTER JOIN "QLM"."QXREF_BOL_PROD" "QXREF_BOL_PROD" ON "QCTRL_PMO"."PROD_CD"="QXREF_BOL_PROD"."PURITY_PROD_CD") INNER JOIN "QLM"."QCTRL_ORG" "QCTRL_ORG" ON "QCTRL_CTR_HDR"."BUSINESS_UNIT_ORG_ID"="QCTRL_ORG"."ORG_ID"
    WHERE "QCTRL_TRK_BOL"."PMO_NO"=12345 AND "QXREF_BOL_PROD"."MOVEMENT_TYPE_CD"='TRK'
    SQL From Sub Report:
    SELECT "QXREF_BOL_VESSEL"."PMO_NO", "QXREF_BOL_VESSEL"."VESSEL_NO"
    FROM "QLM"."QXREF_BOL_VESSEL" "QXREF_BOL_VESSEL"
    WHERE "QXREF_BOL_VESSEL"."PMO_NO"=12345
    Does anyone have any suggestions on how we can improve the report performance with 10g?

    Hi Eric,
    Thanks for your response. The optimizer mode in our 9i database is CHOOSE. We changed the optimizer mode from ALL_ROWS to CHOOSE in 10g but it didn't make a difference.
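    A sketch of how the plans can be compared per mode at session level, using the sub-report query from above (CHOOSE is deprecated in 10g but still accepted):
    alter session set optimizer_mode = choose;
    explain plan for
    select "QXREF_BOL_VESSEL"."PMO_NO", "QXREF_BOL_VESSEL"."VESSEL_NO"
    from "QLM"."QXREF_BOL_VESSEL" "QXREF_BOL_VESSEL"
    where "QXREF_BOL_VESSEL"."PMO_NO"=12345;
    select * from table(dbms_xplan.display);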
    While researching Metalink I came across a couple of documents that indicate performance problems with certain data-dictionary views in 10g. Apparently, the definitions of ALL_OBJECTS, ALL_ARGUMENTS and ALL_SYNONYMS have changed in 10g, resulting in degraded performance when queries run against these views. These are the same views that Crystal Reports is querying. We'll try the workaround suggested in these documents and see if it resolves the issue.
    Here are the Doc Ids, if you are interested:
    Note 377037.1
    Note 364822.1
    Thanks again for your response.
    Venu Boddu.

  • Performance problem when using CAPS LOCK piano input

    Dear reader,
    I'm very new to Logic and am running into a performance problem when using the CAPS LOCK piano keyboard for instrument input: when I'm not recording, everything is fine and the program instantly responds to my keystrokes, but as soon as I go into record mode there is sometimes a delay in the response time (so I press a key and it takes up to half a second longer before the note is actually played).
    Is there anything I can do to improve performance (for example, turning off certain features of the application), or should I not use the CAPS LOCK keyboard anyway and go straight for an external MIDI keyboard?
    Thanks and regards,
    Tim Metz

    Does your project have Audio tracks, and just how heavy is it - how many tracks? Also, what kind of Software Instrument do you use?

  • Forms performance problem on the web, using webutil.

    When starting the webutil demo form on the Application Server,
    webutil's eight JavaBeans load in 1 second.
    I'm using &WebUtilLogging=Console&WebUtilLoggingDetail=Detailed to log this.
    When starting the same form from a client, the beans load in ~30-40 seconds.
    Any suggestions on how to figure out why?

    Problem solved!
    Don't use the IP address in the URL; use the hostname, or add an entry to the hosts file on the client. http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=5092063
    Metalink:
    Note 402180.1 - Initial Loading of Webutil Forms Are Slow
    Note 356190.1 - Performance Problems in Forms with Webutil 1.0.6 for Intranet Web Clients
