Oracle 9i reading BLOB performance issues

Windows XP Pro SP2
JDK 1.5.0_05
Oracle 9i
Oracle Thin Driver for JDK 1.4 v.10.2.0.1.0
DBCP v.1.2.1
Spring v1.2.7 (I am using the JDBC template for convenience)
I have run into serious performance issues reading BLOBs from Oracle using Oracle's JDBC thin driver. I am not sure if it is a constraint/misconfiguration on the Oracle side or a JDBC problem.
I am hoping that someone has some experience accessing multi-MB BLOBs under heavy volume.
We are considering using Oracle 8 or 9 as a document repository. It will end up storing hundreds of thousands of PDFs that can be as large as 30 MB. We don't have access to Oracle 10.
TESTS
I am running tests against Oracle 8 and 9 to simulate single and multi-threaded document access. Our goal is to get a sense of KBps throughput and BLOB data access contention.
DATA
There is a single test table with 100 rows. Each row has a PK id and a BLOB field. The blobs range in size from a few dozen KB to 12 MB. They represent a valid sample of production data. The total data size is approx. 121 MB.
Single-Threaded Test
The test selects a single blob object at a time and then reads the contents of the blob's binary input stream in 2 KB chunks. At the end of the test, it will have accessed all 100 blobs and streamed all 121 MB. The test harness is JUnit.
8i Results: On 8i it starts and terminates successfully on a steady and reliable basis. The throughput hovers around 4.8 MBps.
9i Results: Similar reliability to 8i. The throughput is about 30% better.
Multi-Threaded Test
The multi-threaded test uses the same "blob reader" functionality used in the single threaded test. However, it spawns 8 threads each running a separate "blob reader".
8i Results: The tests successfully complete on a reliable basis. The aggregate throughput of all 8 threads is a bit more than 4.8 MBps.
9i Results: Erratic. The tests were highly erratic on 9i. Threads would intermittently lock when reading a BLOB's binary stream. Sometimes they lock accessing data from the same row, other times it is distinct rows. The number and the timing of the thread "locks" are indeterminate. When the test completed successfully, the aggregate throughput of the 8 threads was approx. 5.4 MBps.
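For reference, the "blob reader" logic boils down to something like the following (a simplified sketch rather than the exact test code; the table and column names are illustrative):

import java.io.IOException;
import java.io.InputStream;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.RowCallbackHandler;

public class BlobReader {
    private final JdbcTemplate jdbc;

    public BlobReader(DataSource ds) {
        this.jdbc = new JdbcTemplate(ds);
    }

    /** Selects one BLOB and streams it in 2 KB chunks; returns the bytes read. */
    public long read(final long id) {
        final long[] total = { 0L };
        jdbc.query("SELECT doc FROM blob_test WHERE id = ?",
                new Object[] { new Long(id) },
                new RowCallbackHandler() {
                    public void processRow(ResultSet rs) throws SQLException {
                        try {
                            InputStream in = rs.getBinaryStream(1);
                            byte[] buf = new byte[2048]; // 2 KB chunks
                            int n;
                            while ((n = in.read(buf)) != -1) {
                                total[0] += n;
                            }
                            in.close();
                        } catch (IOException e) {
                            throw new RuntimeException(e);
                        }
                    }
                });
        return total[0];
    }
}

The multi-threaded test simply spawns 8 java.lang.Thread instances, each looping over the 100 ids with its own BlobReader.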
I would be more than happy to post code or the data model if that would help.
Carlos

Hi Murphy16,
Try to investigate where the principal issues in your RAC system are.
Check:
* Expensive SQLs;
* Sorts to disk;
* Wait events (see the sketch after this list);
* Interconnect hardware issues;
* Applications doing unnecessary manual LOCKs (SQL);
* Whether the SGA is adequately sized (take care not to spill into swap space on disk);
* Backups and unnecessary jobs running during business hours (relocate these jobs and backups to a night window or a less busy hour for the database);
* Rebuild indexes and identify tables that must be reorganized (fragmentation);
* Check for other software consuming resources on your server;
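For the wait-events check, a quick look at the top waits from JDBC can be as simple as this (a sketch; it assumes SELECT privilege on the V$ views, and the connection details are illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Print the top 10 wait events, ordered by time waited.
public class TopWaits {
    public static void main(String[] args) throws Exception {
        Class.forName("oracle.jdbc.OracleDriver");
        Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521:orcl", "perfstat", "secret");
        Statement st = con.createStatement();
        ResultSet rs = st.executeQuery(
                "SELECT event, total_waits, time_waited " +
                "FROM v$system_event " +
                "WHERE event NOT LIKE 'SQL*Net%' " + // crude idle-event filter
                "ORDER BY time_waited DESC");
        for (int shown = 0; rs.next() && shown < 10; shown++) {
            System.out.println(rs.getString(1) + "  waits=" + rs.getLong(2)
                    + "  time=" + rs.getLong(3));
        }
        con.close();
    }
}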
Please give us more info about your environment. The steps above are general, but you can use them as a guide through basic performance issues.
Regards,
Rodrigo Mufalani
http://mufalani.blogspot.com

Similar Messages

  • Oracle Apps Database severe Performance Issue

    Hi Gurus,
    This is regarding a severe performance issue in our Production E-Business Suite instance.
    It's an R12.1.3 setup with an 11.2.0.1 database. All the servers are Solaris SPARC 64 (Solaris 10).
    Let me brief you about the instance first:
    2 Node Application
    - Main Application Server hosting web/forms/concurrent/admin servers
    - iSupplier server hosting web services (placed in DMZ, used by external suppliers via Internet)
    1 Node Database Server
    Database Server Specs
    - Memory: 144G physical memory, 20G total swap
    - CPUs: 8 processors x 4 cores, 2 processors x 2 cores
    - I/O: fibre channel disks (Hitachi SAN storage) - 7 DATA_TOPs (7 drives with RAID 5) - current DB size 1.6 TB
    - at peak load, around 1000 concurrent forms sessions and 2000 web sessions.
    We have been facing some serious performance issues and we raised an SR with Oracle Support.
    Support analyzed a bunch of AWR reports we provided them and asked us to increase the DB_CACHE from its current usage of 27G to 40G.
    So, we changed SGA_TARGET from 35G to 50G and PGA was increased from 35G to 40G as v$pgastat was also suggesting some lack of memory.
    We made these changes last night.
    Today morning we observed the following:
    1. After the start of office hours, the EM DB Console home page showed that ADDM was reporting reduced impact from lack of SGA memory, which seemed to be a good sign. Earlier it was around 25%; it was now at 12%.
    However, negative aspects were:
    1. A lot of swapping was reported by the system administrators on the DB server
    2. High CPU usage
    3. The EM DB Console showed a lot of "Concurrency Wait Class" events... throughout the day a lot of blocking sessions were reported, which were making other sessions wait.
    In the AWR report, the following foreground events were listed:
    Top 5 Timed Foreground Events
    Event                    Waits      Time(s)  Avg wait (ms)  % DB time  Wait Class
    DB CPU                              132,577                 61.46
    library cache lock       3,539      40,683   11,496         18.86      Concurrency
    library cache: mutex X   4,014,083  21,011   5              9.74       Concurrency
    db file sequential read  4,138,014  20,767   5              9.63       User I/O
    latch free               381,916    5,897    15             2.73       Other
    This is showing "library cache lock" events as the main culprit apart from the usual suspect, the CPU.
    I am attaching the AWR report. Please let me know if I should revert the memory changes, or whether there is anything else I can do.
    Please help us resolve this, because performance is getting worse.
    Regards,
    Muneer.

    Please do not post duplicates - Oracle Apps Database severe Performance Issue
    For all critical production issues, please work with Support through SRs - using the forums to troubleshoot production issues is not wise

  • Oracle Advanced Compression Deletion Performance issue in 11g R1

    Hi,
    We have implemented OAC in our data warehouse environment to enable table and index compression. We tested it on our test machine and gained almost 600GB thanks to Advanced Compression, without any issues, and all the Informatica loads ran fine. Hence we implemented the same in production, but unfortunately two sessions which involve deletion of data are taking more time (3 times the original run time) to complete, which affects our production environment.
    The tables causing the issue are all non-partitioned tables.
    I need to know whether Oracle Advanced Compression can decrease delete performance, and whether there is any way to disable Advanced Compression on those particular tables.
    Our environment details:
    DB earlier version: 11.1.0.6
    DB current version : Oracle 11.1.0.7
    Applied PSU: 11.1.0.7.6
    Operating system: Solaris 5.9
    Syntax used for compression:
    ALTER TABLE TABLE_NAME MOVE COMPRESS FOR ALL OPERATIONS;
    Thanks in Advance.

    Hi,
    Thanks for your reply.
    The note is about an update performance issue, and I have already applied the necessary patches for improving update performance.
    The update sessions are all working fine; only the deletion sessions are creating problems.
    Could someone help me out with clearing this problem?
    Thanks,
    VBK

  • How to resolve most Oracle SQL and PL/SQL performance issues with the help of a quick checklist/guidelines?

    Please go through the checklist/guidelines below to identify the issue in any performance case and reach a resolution in no time.
    Checklist for Quick Performance Problem Resolution
    ·         Get the trace, code and other information for the given performance case (to enable the trace, see the sketch at the end of this post):
              - Latest code from the production environment
              - Trace (SQL queries, statistics, row source operations with row counts, explain plan, all wait events)
              - Program parameters and their frequently used values
              - Run frequency of the program
              - Existing run time/response time in production
              - Business purpose
    ·         Identify the most time-consuming SQL, taking more than 60% of program time, using trace and code analysis
    ·         Check that all mandatory parameters/bind variables map directly to index columns of large transaction tables, without any functions applied
    ·         Identify the most time-consuming operation(s) using the Row Source Operation section
    ·         Study how program parameter input maps directly into the SQL
    ·         Identify all input bind parameters used by the SQL
    ·         Is the SQL query returning a large number of records for the given inputs?
    ·         Which large tables, and which of their columns, are mapped to the input parameters?
    ·         Which operation scans the highest number of records in the Row Source Operation/Explain Plan?
    ·         Is the Oracle cost-based optimizer using the right driving table for the given SQL?
    ·         Check the time-consuming indexes on large tables and measure index selectivity
    ·         Study the WHERE clause to check whether the input parameters mapped to tables and columns make correct/optimal use of indexes
    ·         Is the correct index being used for all large tables?
    ·         Is there any full table scan on large tables?
    ·         Is there any unwanted table being used in the SQL?
    ·         Evaluate the join conditions on large tables and their columns
    ·         Is an FTS on a large table caused by use of non-indexed columns?
    ·         Is there any implicit or explicit conversion preventing an index from being used?
    ·         Are the statistics on all large tables up to date?
    Quick Resolution Tips
    1) Use the bulk processing features BULK COLLECT with LIMIT and FORALL for DML instead of row-by-row processing
    2) Use data caching techniques/options to cache static data
    3) Use pipelined table functions whenever possible
    4) Use global temporary tables or materialized views to process complex records
    5) Avoid a network trip per row between two databases over a dblink; use a global temporary table or a set operation to reduce network trips
    6) Use external tables to build interfaces rather than creating a custom table and program to load and validate the data
    7) Understand Oracle's cost-based optimizer and tune the most expensive SQL queries with the help of the explain plan
    8) Follow Oracle PL/SQL best practices
    9) Review the tables and indexes used in the SQL queries and avoid unnecessary table scanning
    10) Avoid costly full table scans on big transaction tables with huge data volumes
    11) Use appropriate filter conditions on index columns of seeded Oracle tables directly mapped to program parameters
    12) Review the join conditions in the existing query's explain plan
    13) Use Oracle hints to guide the cost-based optimizer to choose the best plan for your custom queries
    14) Avoid applying SQL functions to index columns
    15) Use appropriate hints to guide the Oracle CBO to choose the best plan to reduce response time
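    For the first checklist step (getting the trace), switching on an extended SQL trace from a JDBC session usually looks like this (a sketch; it assumes the session has the ALTER SESSION privilege, and the resulting trace file is formatted with tkprof):

    import java.sql.Connection;
    import java.sql.Statement;

    // Enable a level-12 10046 trace (binds + waits) for the current session.
    public class TraceOn {
        static void traceSession(Connection con) throws Exception {
            Statement st = con.createStatement();
            st.execute("ALTER SESSION SET tracefile_identifier = 'pe_case'");
            st.execute("ALTER SESSION SET events '10046 trace name context forever, level 12'");
            // ... run the slow program path here ...
            st.execute("ALTER SESSION SET events '10046 trace name context off'");
            st.close();
        }
    }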
    Thanks
    Praful

    I understand you were trying to post something helpful to people, but sorry, this list is appalling.
    1) Use the bulk processing features BULK COLLECT with LIMIT and FORALL for DML instead of row-by-row processing
    No, use pure SQL.
    2) Use data caching techniques/options to cache static data
    No, use pure SQL, and the database and operating system will handle caching.
    3) Use pipelined table functions whenever possible
    No, use pure SQL.
    4) Use global temporary tables or materialized views to process complex records
    No, use pure SQL.
    5) Avoid a network trip per row between two databases over a dblink; use a global temporary table or a set operation to reduce network trips
    No, use pure SQL.
    6) Use external tables to build interfaces rather than creating a custom table and program to load and validate the data
    Makes no sense.
    7) Understand Oracle's cost-based optimizer and tune the most expensive SQL queries with the help of the explain plan
    What about using the execution trace?
    8) Follow Oracle PL/SQL best practices
    Which are?
    9) Review the tables and indexes used in the SQL queries and avoid unnecessary table scanning
    You mean design your database and queries properly? And table scanning is not always bad.
    10) Avoid costly full table scans on big transaction tables with huge data volumes
    It depends on whether that is necessary or not.
    11) Use appropriate filter conditions on index columns of seeded Oracle tables directly mapped to program parameters
    No; consider that too many indexes can have an impact on overall performance and can prevent the CBO from picking the best plan. There's far more to creating indexes than just picking every column that people are likely to search on; you have to consider the cardinality and selectivity of the data, as well as the volumes of data being searched and the most common search requirements.
    12) Review the join conditions in the existing query's explain plan
    Well, if you don't have your join conditions right then your query won't work, so that's obvious.
    13) Use Oracle hints to guide the cost-based optimizer to choose the best plan for your custom queries
    No. Oracle recommends you do not use hints for query optimization (it says so in the documentation). Only certain hints such as APPEND, which relate to particular operations such as inserting data, are generally acceptable. Oracle recommends you use the query optimization tools to help optimize your queries rather than hints.
    14) Avoid applying SQL functions to index columns
    Why? If there's a need for a function-based index, then it should be used.
    15) Use appropriate hints to guide the Oracle CBO to choose the best plan to reduce response time
    See 13.
    In short, there are no silver bullets for dealing with performance. Each situation is different and needs to be evaluated on its own merits.
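    To make the "use pure SQL" point concrete, compare a row-by-row copy with the equivalent single set-based statement (a sketch; the table names are hypothetical):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class PureSqlExample {
        // Row by row: fetch each row, then issue one INSERT per row (slow).
        static void rowByRow(Connection con) throws Exception {
            Statement q = con.createStatement();
            ResultSet rs = q.executeQuery(
                    "SELECT id, amount FROM src_orders WHERE amount > 100");
            PreparedStatement ins = con.prepareStatement(
                    "INSERT INTO big_orders (id, amount) VALUES (?, ?)");
            while (rs.next()) {
                ins.setLong(1, rs.getLong(1));
                ins.setBigDecimal(2, rs.getBigDecimal(2));
                ins.executeUpdate(); // one round trip per row
            }
        }

        // Pure SQL: one statement, one round trip; the database does the work.
        static void pureSql(Connection con) throws Exception {
            Statement st = con.createStatement();
            st.executeUpdate("INSERT INTO big_orders (id, amount) " +
                             "SELECT id, amount FROM src_orders WHERE amount > 100");
        }
    }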

  • Performance issue in OBIEE 11.1.1.6.8

    Hi all,
    I have built the same logical model for a SQL Server database and an Oracle database. I have a performance issue with SQL Server: the physical query (in the NQS query log) that hits the SQL Server DB does not include the filter, so it takes more time to respond. With the same model against Oracle, the physical query is passed to the Oracle DB with the filter, and it does not take more time to respond. This is on OBIEE 11.1.1.6.8.
    Please help me

    Can you run the same query directly against the physical SQL Server DB and let us know the outcome?
    Thanks,

  • Performance issues with flashed 7800GT (G5)

    Hey,
    I recently flashed a PC 7800GT with the 128K OEM NVidia ROM. It's one of the cards with a physical 128K ROM chip, so no worries there. However, it doesn't deliver the performance I expected. I have a Late 2005 2.0 DC, 10GB DDR2 and use Leopard 10.5.8.
    This is what OS X puts out (translated from German):
    NVIDIA GeForce 7800 GT:
      Chipset Model:    GeForce 7800GT
      Type:    Display
      Bus:    PCIe
      Slot:    SLOT-1
      PCIe Lane Width:    x16
      VRAM (Total):    256 MB
      Vendor:    NVIDIA (0x10de)
      Device ID:    0x0092
      Revision ID:    0x00a1
      ROM Revision:    2152.2
    The performance issues are: I can't seem to reach frame rates that are anywhere near the results provided by barefeats. Quake 3 runs at approx. 200fps at 1600x1200x32 (I expected 300+ fps); Quake 4 and UT2004 run okay, but not on high settings at high resolutions (so, not equivalent to what you would expect from the system's specs). The same goes for Colin McRae Rally. Now, I remember reading about performance issues in 10.5.8 with the flashed ROM, since it doesn't seem to be a 1:1 copy of the original one. I didn't try it in Tiger since I don't exactly want to go back from Leopard. Am I right that this is probably an issue with the "OEM ROM" (from the macelite)? Does anyone have the real deal in terms of 7800GT ROMs and could provide me with a link?
    Br

    Hi-
    If you send me an email via my website, I can send you a couple of ROMs that might work better.
    http://www.jcsenterprises.com/Japamacs_Page/All_Things_PPC.html
    Problem with the flashed 256 MB GT, though, is that Leopard runs slow.
    Bad driver interaction.....
    The 512 MB GTX is the way to go........

  • Critical performance issues

    Adobe LiveCycle Designer 7.1 is having serious performance issues when dealing with some forms. These are fairly simple, one-page forms, around 215KiB. I am running on a Pentium D 2.8GHz with 1GiB of RAM. When trying to open the forms, LiveCycle eats up a whole CPU core and several hundred MiB of memory, and it takes on the order of 10 minutes to open. Copying a single form field and pasting it is an hour-long operation. I have already tried uninstalling and reinstalling, it had no effect. The computer is clean of viruses/spyware and is as up-to-date as I can get it. Deleting and recreating the forms is an absolute last-resort option, and at that point not using PDF at all is under consideration, so I would like to recover them if possible. The forms work fine under Adobe Reader, no performance issues. Any ideas?

    I'll have to check to find out whether that's allowed.
    Another form started doing this. It was working fine; then I pasted one more form field and Designer went to hell. I killed it, then went back and tried to view the XML source of the form, which took Designer a while. I noticed:
    [the same line repeated for several pages]
    A line being repeated several hundred times sets off my brokenness detector; any ideas?

  • Oracle 9i Performance Issue High Physical Reads

    Dear All,
    I have an Oracle 9i Release 9.2.0.5.0 database on HP-UX. I ran a query and got the following output. Can anybody have a look and advise what to do in this situation? We have performance issues.
    Many thanks in advance
    Buffer Pool Advisory for DB: DBPR Instance: DBPR End Snap: 902
    -> Only rows with estimated physical reads >0 are displayed
    P  Size for Estimate (M)  Size Factor  Buffers for Estimate  Est Physical Read Factor  Estimated Physical Reads
    D  416                    0.1          51,610                4.27                      1,185,670,652
    D  832                    0.2          103,220               2.97                      825,437,374
    D  1,248                  0.3          154,830               2.03                      563,139,985
    D  1,664                  0.4          206,440               1.49                      412,550,232
    D  2,080                  0.5          258,050               1.32                      366,745,510
    D  2,496                  0.6          309,660               1.23                      340,820,773
    D  2,912                  0.7          361,270               1.14                      317,544,771
    D  3,328                  0.8          412,880               1.09                      301,680,173
    D  3,744                  0.9          464,490               1.04                      288,191,418
    D  4,096                  1.0          508,160               1.00                      276,929,627

    Hi,
    Actually, you didn't give an exact problem statement.
    Your database seems to be I/O-bound. OK, do the following one by one:
    1. Identify the FTS queries and try to create optimal indexes (depending on the disk-reads factor!!) for the problem queries.
    2. To reduce the 276M physical reads, you need to allocate more memory to db_cache_size. Try 8GB initially; then, depending on the buffer advisory, you can increase it further if you have more memory on the box.
    3. As a next step, configure the KEEP and RECYCLE caches to get the benefit of reduced I/O from multiple pools, and allocate objects to the KEEP/RECYCLE pools (see the sketch below).
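    For step 3, assigning a segment to the KEEP pool looks like the following (a sketch; the object names are illustrative, and the KEEP pool itself must be sized first via buffer_pool_keep / db_keep_cache_size):

    import java.sql.Connection;
    import java.sql.Statement;

    // Pin a hot table and its index in the KEEP buffer pool.
    public class KeepPool {
        static void assignToKeep(Connection con) throws Exception {
            Statement st = con.createStatement();
            st.execute("ALTER TABLE hot_lookup STORAGE (BUFFER_POOL KEEP)");
            st.execute("ALTER INDEX hot_lookup_pk STORAGE (BUFFER_POOL KEEP)");
            st.close();
        }
    }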
    Thanks,

  • Performance issue in Oracle 11.1.0.7

    Hi,
    In our production environment we have some cron jobs scheduled; they run every Saturday. One of the cron jobs is taking more and more time to finish.
    The previous Oracle version was 10.2.0.4; at that time it took 36 hours to complete. After upgrading to 11gR1, it now takes 47, sometimes 50, hours to finish.
    I asked my production DBA to take an AWR report after the cron job finished.
    Now he has sent the AWR report, but I don't know how to read it. Can you please help me read AWR reports? I need to give some recommendations to reduce the overall running time.
    I don't know how to attach the AWR report here.
    Please help me on this.
    Thanks
    Shravan Kumar

    Hi,
    "Now he sent the AWR report, but I don't know how to read it. Can you please help me to read the AWR reports, and I need to give some recommendations to reduce the overall running time."
    Aren't you a DBA? You should probably seek the help of your DBA to read the AWR. Meanwhile, you should also get an AWR report from 10g, where this job ran previously, so that you can compare the two.
    Did you do any testing before the upgrade? You SHOULD have done thorough testing of your applications/reports before the upgrade and resolved any performance issues before the production upgrade.
    While you investigate, you can set optimizer_features_enable to 10.2.0.4 for the cron job session only, to check whether the job returns to its former 36-hour run time:
    alter session set optimizer_features_enable='10.2.0.4';
    Salman

  • Performance Issue in Oracle EBS

    Hi Group,
    I am working on a performance issue at a customer site; let me explain the behaviour.
    There is one node for the database and another for the application.
    The application server is running all the services.
    The EBS version is 12.1.3 and the database version is 11.1.0.7, with AIX on both servers.
    The customer has added memory to both servers (database and application); initially they had 32 GB, now they have 128 GB.
    Today, I increased the memory parameters for the database and also increased the JVM processes from 1 to 2 for Forms and OACore; both JVMs are 1024M.
    The behaviour is: when users are navigating inside a form and push the down button quickly, the form gets stuck "thinking" (reloading and waiting 1 or 2 minutes to respond). It is not particular to a specific form; it happens in several forms.
    The gather-statistics job is scheduled every weekend. I am not sure what the problem can be; I have collected a trace of the form and uploaded it to Oracle Support, with no success or advice.
    I have just run a ping, and the response time between the servers is below 5 ms.
    I have several activities in mind like:
    - OATM conversion.
    - ASM implementation.
    - Upgrade to 11.2.0.4.
    Has anybody had this behaviour?, any advice about this problem will be really appreciated.
    Thanks in advance.
    Kind regards,
    Francisco Mtz.

    Hi Bashar, thank you very much for your quick response.
    > If both servers are on the same network then the ping should not exceed 2 ms.
    If I remember correctly, I did a ping last Wednesday, and there were some peaks over 5 ms.
    > Have you checked the network performance between the clients and the application server?
    Yes; I did a ping from the PC to the application and database servers, and they responded in less than 1 ms.
    > What is the status of the CPU usage on both servers?
    There is no overhead on the CPU side; I tested it (scrolling getting frozen) with no users in the application.
    > Did this happen after you performed the hardware upgrade?
    Yes, it happened after changing some memory parameters in the JVM and the database.
    Oracle has suggested applying the latest Forms patches, per this note: Doc ID 437878.1
    Thanks in advance.
    Kind regards,
    Francisco Mtz.

  • Oracle Performance Issue

    Regarding an Oracle performance issue. Hardware and software configuration:
    Configuration 1
    ================
    Sun V880 (Sun Fire)
    32 GB RAM
    14 x 36 GB hard disks
    8 CPUs
    CPU speed 750 MHz
    Software Configuration:
    Oracle 8i
    OS version - Solaris 8
    Customized our own application - Namex
    Configuration 2
    ================
    Intel PIII - 750 MHz
    2 GB RAM
    2 CPUs
    Software configuration
    Oracle 8i
    OS version: Linux 6.2
    Customized our own application - Namex (multi-threaded application)
    We spread the installation across the hard disks:
    - The OS is installed on one hard disk.
    - The Namex application is installed on one hard disk.
    - Oracle is installed on one hard disk.
    - All tables are split across the other hard disks.
    We are trying to insert user records into an Oracle table. We achieved up to 150 records/second on the Sun server, but on the lower configuration (configuration 2) our application inserts up to 100 records/second.
    We want to improve our insert rate on the Sun server. How should we tune the Oracle parameter values in the init.ora file? Our application needs to insert up to 500 records per second, but I have not been able to reach this value.
    init.ora file
    =============
    db_name = "namex"
    instance_name = namex64
    service_names = namex64
    control_files = ("/disk1/oracle64/OraHome1/oradata/Namex64/control01.ctl", "/disk1/oracle64/OraHome1/oradata/namex64/control02.ctl", "/disk1/oracle64/OraHome1/oradata/namex64/control03.ctl")
    open_cursors = 300
    max_enabled_roles = 145
    #db_block_buffers = 20480
    db_block_buffers = 604800
    #shared_pool_size = 419430400
    shared_pool_size = 8000000000
    #log_buffer = 163840000
    log_buffer = 2147467264
    #large_pool_size = 614400
    java_pool_size = 0
    log_checkpoint_interval = 10000
    log_checkpoint_timeout = 1800
    processes = 1014
    # audit_trail = false # if you want auditing
    # timed_statistics = false # if you want timed statistics
    timed_statistics = true # if you want timed statistics
    # max_dump_file_size = 10000 # limit trace file size to 5M each
    # Uncommenting the lines below will cause automatic archiving if archiving has
    # been enabled using ALTER DATABASE ARCHIVELOG.
    # log_archive_start = true
    # log_archive_dest_1 = "location=/disk1/oracle64/OraHome1/admin/namex64/arch"
    # log_archive_format = arch_%t_%s.arc
    #DBCA uses the default database value (30) for max_rollback_segments
    #100 rollback segments (or more) may be required in the future
    #Uncomment the following entry when additional rollback segments are created and made online
    #max_rollback_segments = 500
    # If using private rollback segments, place lines of the following
    # form in each of your instance-specific init.ora files:
    #rollback_segments = ( RBS0, RBS1, RBS2, RBS3, RBS4, RBS5, RBS6, RBS7, RBS8, RBS9, RBS10, RBS11, RBS12, RBS13, RBS14, RBS15, RBS16, RBS17, RBS18, RBS19, RBS20, RBS21, RBS22, RBS23, RBS24, RBS25, RBS26, RBS27, RBS28 )
    # Global Naming -- enforce that a dblink has same name as the db it connects to
    # global_names = false
    # Uncomment the following line if you wish to enable the Oracle Trace product
    # to trace server activity. This enables scheduling of server collections
    # from the Oracle Enterprise Manager Console.
    # Also, if the oracle_trace_collection_name parameter is non-null,
    # every session will write to the named collection, as well as enabling you
    # to schedule future collections from the console.
    # oracle_trace_enable = true
    # define directories to store trace and alert files
    background_dump_dest = /disk1/oracle64/OraHome1/admin/Namex64/bdump
    core_dump_dest = /disk1/oracle64/OraHome1/admin/Namex64/cdump
    #Uncomment this parameter to enable resource management for your database.
    #The SYSTEM_PLAN is provided by default with the database.
    #Change the plan name if you have created your own resource plan.
    # resource_manager_plan = system_plan
    user_dump_dest = /disk1/oracle64/OraHome1/admin/Namex64/udump
    db_block_size = 16384
    remote_login_passwordfile = exclusive
    os_authent_prefix = ""
    compatible = "8.0.5"
    #sort_area_size = 65536
    sort_area_size = 1024000000
    sort_area_retained_size = 65536
    DB_WRITER_PROCESSES=4
    How can I improve performance on the Oracle server?
    Please guide me regarding this issue.
    If anyone wants more info, please let me know.
    Best regards,
    Senthilkumar

    Are you sure that it is not an application constraint, i.e. that the application can't handle that much data per second (application locks, threads)?
    Have you tried writing a simple test program which inserts predefined data (the same data your application inserts), only changing the keys,
    and then comparing the values from the 1st and the 2nd configuration?
    Did you check the way your application communicates with Oracle? If it is TCP/IP (even on the local machine), then this is your main problem.
    And one more thing: do you know if your application is able to run the load (inserts) on different threads (i.e. in parallel)? If it is not, you won't be able to push the speed higher, because your constraint is the speed of a single CPU. Consider running several processes which load the data.
    We had the same problem on AIX machines with 4 CPUs. Monitoring the machine, we found that only 25% (1 CPU) was in use. We had to run 4 processes to push the speed up. Check your system's overall load while running the inserts.
    log_checkpoint_interval = 10000
    Check if this value is appropriate. Maybe you should set it to 0 (infinite). This will disable checkpoints on a number-of-redo-blocks basis; checkpoints will occur only on log switch.
    How many redo files per redo group do you have? What is their size? Are they on different disks? How much redo data is generated by a single inserted record?
    Hope I helped at least a little.
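    One more lever besides running several loader processes: JDBC statement batching cuts the per-row network round trips that typically cap single-process insert rates. A rough sketch (the table name, URL and credentials are illustrative):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BatchInsert {
        public static void main(String[] args) throws Exception {
            Class.forName("oracle.jdbc.driver.OracleDriver");
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:namex64", "namex", "secret");
            con.setAutoCommit(false); // commit per batch, not per row
            PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO namex_users (id, name) VALUES (?, ?)");
            for (int i = 0; i < 100000; i++) {
                ps.setInt(1, i);
                ps.setString(2, "user" + i);
                ps.addBatch();
                if ((i + 1) % 500 == 0) { // send every 500 rows
                    ps.executeBatch();
                    con.commit();
                }
            }
            ps.executeBatch();
            con.commit();
            con.close();
        }
    }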

  • Performance issue related to OWM? Oracle version is 10.2.0.4

    The optimizer picks a hash join instead of a nested loop for queries on OWM tables, which causes full table scans everywhere. I wonder if this happens in your databases as well, or just ours. If it has, and you know what to do to solve it, it would be greatly appreciated! I did log an SR with Oracle, but it usually takes months to reach a solution.
    Thanks for any possible answers!

    Ha, it sounds like you knew what I was talking about :)
    I thought the issue must have had something to do with OWM, because some complicated queries have no performance issue when they run against regular tables. There's a batch job which used to take an hour to run; now it takes 4.5 hours. When I rewrote the job to move the queries from OWM to regular tables, it took 20 minutes. However, today, when I tried to get explain plans for some queries involving regular tables with large amounts of data, I got the same full-table-scan problem with a hash join. So I'm convinced that it probably is not OWM. But the patch removing the bug fix didn't help the situation here.
    I was hoping that other companies might have this problem and have found a way to work around it. If it's not OWM, I'm surprised that this only happens in our system.
    Thanks for the reply anyway!

  • Oracle 10.2.0.3 performance issue

    Hi all,
    I have a performance issue in the database I currently maintain.
    Here's the specifications:
    - Windows 2003 Server 64bit
    - Oracle 10g 10.2.0.3 patch 31
    - Application Server 10gR2 OC4J (Forms and Report Services)
    The server was re-installed about 3 weeks ago after it got viruses.
    I believe my memory parameter setup was fine, because the system ran fine for two weeks, although the end-of-day process had been increasing in duration.
    However, starting 2 days ago, out of nowhere, the end-of-day process started to get really slow (from 11 minutes to 1 hour).
    The IT standby at my client will normally run an analyze schema, and after that everything is back to normal again.
    I turned off the GATHER_STATS_JOB 2 weeks ago and replaced it with DBMS_STATS.GATHER_DATABASE_STATS, which I scheduled to execute every Friday.
    This problem occurred even before the server was re-installed, and as usual an analyze schema (GATHER_SCHEMA_STATS) fixed it.
    I don't have many options currently, and analyzing the schema seems to be the only solution, which we normally never do for other clients (at least not analyzing the schema every day).
    I hope one of you can help me trace the exact cause of this problem.
    Thank you,
    Adhika

    Hi Satish,
    This end-of-day process basically inserts some values into certain tables and then generates reports.
    I managed to get the AWR report covering the window of the end-of-day process.
    SQL execute elapsed time is at the top of the Time Model Statistics.
    What I don't understand is why this issue happened only after 2 weeks.
    I suspect that Oracle picks the wrong execution plan, and normally a wrong execution plan is caused by outdated statistics on the indexes and tables; but what I cannot understand is why this happens on a Monday morning when GATHER_DATABASE_STATS ran successfully on Friday night.
    The analyze schema was executed again yesterday, yet this morning I got another email saying that the performance issue had occurred again.
    The topmost wait events are: db file sequential read, db file scattered read, log file parallel write, LNS Wait on SENDREQ, and log file sequential read.
    Thank you,
    Adhika

  • Performance issue as the number of Oracle connections grows

    Hello,
    We are using Oracle 9i as the backend and VB 6.0 as the frontend. For each client, one connection is opened on the database server; if 200 clients open the application, then 200 connections are opened on the database server, and this is causing a performance issue.
    What we do today: if the frontend application is not in use for 10 minutes, the connection is closed and the frontend application is terminated; likewise, if the client terminates the application, the connection is closed on the server.
    Even though 200 clients connect to the server, only 4 to 5 clients at a time send requests to the server to retrieve or save data.
    I want to open a fixed number of connections on the database server, e.g. around 10 connections, and use these 10 connections for the 200 clients.
    At first, 10 connections are opened and 10 clients use them. When one of the 10 clients releases its connection after finishing its job and an 11th client opens the application, the released connection should be reused without opening a new one. In the same way, whenever connections become free and other clients open the application or perform a transaction, those free connections should be reused, so that the number of connections to be created is reduced.
    Please give me suggestions on how to reuse released connections for transactions and for opening the application.
    Thanking U with Regards,
    Sravan,
    Hyderabad.

    As Satish mentioned, Shared Server can be a good solution, but it would also be worth telling us how you quantified that the performance issue is due to the number of connections. 200 is not that big a number, I guess. What is the OS, the exact database version, and the system and database details? Also, if you have a Statspack report, post that here too.
    HTH
    Aman....
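    The frontend here is VB 6.0, but to illustrate the client-side pooling idea, this is roughly what it looks like in Java with Apache Commons DBCP (a sketch; the URL and credentials are illustrative):

    import java.sql.Connection;
    import org.apache.commons.dbcp.BasicDataSource;

    // About 10 physical connections shared by all logical users.
    public class PoolDemo {
        public static void main(String[] args) throws Exception {
            BasicDataSource ds = new BasicDataSource();
            ds.setDriverClassName("oracle.jdbc.driver.OracleDriver");
            ds.setUrl("jdbc:oracle:thin:@dbhost:1521:orcl");
            ds.setUsername("app");
            ds.setPassword("secret");
            ds.setMaxActive(10); // never more than 10 physical sessions
            ds.setMaxIdle(10);
            ds.setMaxWait(5000); // callers wait up to 5 s for a free connection

            Connection con = ds.getConnection(); // borrows from the pool
            try {
                // ... do work ...
            } finally {
                con.close(); // returns the connection to the pool, does not close it
            }
        }
    }

    Shared Server achieves the same multiplexing on the database side instead.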

  • RedHat AS4 and Oracle 10R2 upgrade - performance issue

    Hi all,
    As part of our database upgrade project (from Oracle 10.2.0.1 to 10.2.0.3), we decided to upgrade our OS as well (RedHat AS4 from Update 2 to Update 4, 64-bit). We are in the testing phase now and are having some serious performance issues.
    These are the steps we have performed:
    1. Ran db load testing plus application regression testing in order to get a baseline
    2. Upgraded RH AS4 from Update 2 to Update 4
    3. Ran db load testing plus application regression testing
    4. Database load testing (including massive parallel processes, like 'insert into ...', 'create index ... parallel', etc.) showed a significant slowdown compared to our baseline test on Update 2: 70-80%. The database was still Oracle 10.2.0.1; only RH had been updated at this point
    5. We upgraded Oracle 10.2.0.1 to 10.2.0.3, latest CPU, etc. (with Update 4 on the OS side); still the same slowdown in performance...
    6. Our application regression testing also showed performance degradation, but not to the same extent as the db load test (10-20% on different application modules)
    7. We also did one more thing: we went back to Update 2 with the upgraded Oracle (some workarounds were required, but that's another story). Performance was OK. So it seems our problem is not (directly) related to the Oracle version.
    Now we are going to install Update 6 (the latest update for AS4) and see how it goes.
    If anybody have any experience or idea about this case and want to share it with us, it would be highly appreciated.
    Thanks in advance,
    Milan N.

    Just a quick update for those who follow this thread ...
    Everything worked fine with the 'Update 6' kernel (both Oracle 10.2.0.1 and 10.2.0.3). We have also applied all the available security packages.
    Milan N.
