Open_cursors and session_cached_cursors!!

Hi, all.
I have a 2 node RAC database (10.2.0.2.0) on windows2003 EE SP1.
Recently, I have been getting warnings related to the "library cache lock" and "cursor: pin S wait on X" wait events.
The recommendation from the ADDM findings is as follows:
-- increase open_cursors
-- increase session_cached_cursors
Are the above parameters dynamic, or do I need to restart the instances?
Thanks and Regards.

set linesize 121
SELECT name, isses_modifiable, issys_modifiable
FROM gv$parameter
WHERE name IN ('open_cursors', 'session_cached_cursors');
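If the query shows ISSYS_MODIFIABLE = 'IMMEDIATE' for a parameter, it can be changed on the fly; otherwise a system-wide change has to go to the spfile and wait for a restart. A minimal sketch for 10.2, assuming you run from an spfile (the values and the SID='*' clause are illustrative for RAC; open_cursors is normally system-modifiable there, session_cached_cursors is not):

-- open_cursors is dynamic at the system level
alter system set open_cursors = 500 scope = both sid = '*';
-- session_cached_cursors can be raised immediately only per session ...
alter session set session_cached_cursors = 100;
-- ... a system-wide change goes to the spfile and takes effect at the next restart
alter system set session_cached_cursors = 100 scope = spfile sid = '*';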

Similar Messages

  • How to increase dynamically open_cursors and session_cached_cursors

    How can I increase open_cursors and session_cached_cursors dynamically?
    For example: alter system set open_cursors = 500;

    Instance level:
    alter system set session_cached_cursors=200 scope=spfile; or set it in the init.ora file.
    alter system set open_cursors=400;
    For more details about open_cursors and session_cached_cursors, refer to the link below:
    http://www.orafaq.com/node/758
    Regards
    RajaBaskar

  • Measuring the value of "session_cached_cursors"  and "open_cursor"

    Friends ,
    Recently, on my production database server running Oracle 10g (version 10.2.0.1.0), I got an "open_cursors" and "session_cached_cursors" related alert where OEM asked me to increase the values. I have increased the values but the problem is still not solved.
    Can anybody please tell me how I can determine a sensible value of "open_cursors" and "session_cached_cursors" for my database server?
    Another question ,
    SQL> show parameter open_
    NAME TYPE VALUE
    open_cursors integer 500
    In the above output, what is the unit of the value 500? Is this value related to the SGA memory area?

    shipon_97 wrote:
    Thanks all for reply ..
    I have another query ...
    How can I find the standard value of "open_cursors" as well as "session_cached_cursors" for my Oracle database server, and what are the recommended values for these parameters? I am using Oracle Database 10g (10.2.0.1.0).
    Shipon,
    You can see the values of the parameters in your db with a simple show parameter command:
    show parameter open_cursors
    show parameter session_cached_cursors
    About the settings of the parameters and their optimal value, I guess there won't be any "concrete" answer to that. Session_cached_cursors is set to 50 by default in Oracle, which means 50 cursors can be marked as 'hot cursors' for the system and spared the library cache lookup. There is also a condition: a cursor is marked as hot only after it has been run 3 times. So you need to check how many queries in your system actually require this optimization. Moreover, this is used (or said to be used) when you are seeing library cache latch contention. I don't think you need to modify the parameter from its default just for the sake of change.
    The same is true for OPEN_CURSORS. The value needs to be changed only if you are seeing the error about the maximum open cursors exceeding the set value. Generally, a value of 2000 is enough for most systems, but that may vary from site to site and, sure enough, you need to check yours before playing around.
    HTH
    Aman....
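    To measure how close you actually get to the limit (rather than guessing a "standard" value), one sketch is to compare the busiest session against the parameter; the statistic and view names below are as found in 10.2, so verify them on your release:
    SELECT MAX(s.value) AS highest_open_cur,
           p.value      AS max_open_cur
    FROM   v$sesstat s, v$statname n, v$parameter p
    WHERE  s.statistic# = n.statistic#
    AND    n.name = 'opened cursors current'
    AND    p.name = 'open_cursors'
    GROUP  BY p.value;
    As for the unit: the 500 shown by show parameter is simply a count of cursors allowed per session, not a memory size, and the query above shows how much of that headroom is actually used.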

  • OIM 11g R2 installation Issue. OIM Schema creation failed using RCU 11.1.2

    I have been trying to install OIM 11g R2 on a Windows Server 2008 R2 64-bit machine and have been encountering the following error during the OIM schema creation. The other schemas, such as Metadata, SOA, User Messaging Services, and the other associated schemas, were created successfully. But the OIM schema creation took more than 2 minutes and finally failed with the error below.
    RCU-6130: Action failed
    RCU-6135: Error while trying to execute java action.
    Components used:
    OS: Windows Server 2008 R2 64 Bit
    DBS: 11gR2 (11.2.0.1)
    RCU: 11.1.2
    The first error that occurred was ORA-12637 (packet receive failed), followed by "table or view does not exist". I could not get much information from the OIM and RCU logs.
    I have set the processes, open_cursors and session_cached_cursors as suggested in the preinstallation step of OIM 11g R2 installation.
    Any pointers on this will be highly appreciated.
    Thanks,
    Srini

    Copy the msvcr71.dll file from rcuHome\jdk\jre\bin inside the RCU installer and paste it into C:\Windows\SysWOW64.
    Try running the RCU again with a new user, i.e. instead of DEV_OIM, run it with DEV_OIM1.
    Or drop the DEV_OIM user first and then reuse the same user.

  • Open cursors in 11.5.10.2

    Hi,
    Per metalink note: 216205.1
    # In 10g, the upper limit is now controlled by the parameter
    # session_cached_cursors. For 10g environments, the parameters
    # open_cursors and session_cached_cursors should be set as
    # follows, in accordance with this change in behavior.
    open_cursors = 600
    session_cached_cursors = 500
    Does increasing the OPEN_CURSORS parameter value improve performance in a 10.2.3 DB with the 11.5.10.2 application?
    Thanks,

    Hi,
    Does increasing the OPEN_CURSORS parameter value improve performance in a 10.2.3 DB with the 11.5.10.2 application?
    No. You would need to increase the value of this parameter only if you get the ORA-01000 error -- see (Note: 108886.1 - ORA-1000 Max Open Cursors Exceeded), (Note: 76684.1 - Monitoring Open Cursors & Troubleshooting ORA-1000 Errors), and (Note: 30781.1 - Init.ora Parameter "OPEN_CURSORS" Reference Note).
    Regards,
    Hussein

  • Ora-00604 error and ora 01000 error while report generation.

    hi all,
    I am trying to generate multiple reports from the same template through a program.
    While this job is running, I get the following error on the BIP console and the reports don't get generated:
    [101711_044115578][][EXCEPTION] java.sql.SQLException: ORA-00604: error occurred
    at recursive SQL level 1
    ORA-01000: maximum open cursors exceeded
    ORA-00604: error occurred at recursive SQL level 1
    ORA-01000: maximum open cursors exceeded
    ORA-01000: maximum open cursors exceeded
    Kindly help.
    Thanks.

    Lots of resources with a simple search to see what this is about, for example:
    http://www.orafaq.com/wiki/ORA-01000
    ORA-01000:     maximum open cursors exceeded
    Cause:     A host language program attempted to open too many cursors. The initialization parameter OPEN_CURSORS determines the maximum number of cursors per user.
    Action:     Modify the program to use fewer cursors. If this error occurs often, shut down Oracle, increase the value of OPEN_CURSORS, and then restart Oracle.
    open_cursors parameter
    http://download.oracle.com/docs/cd/E11882_01/server.112/e25513/initparams160.htm#REFRN10137
    Oracle support note:
    OERR: ORA-1000 maximum open cursors exceeded (Doc ID 18591.1)
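    When chasing an ORA-1000 raised by an application like this, a quick way to see whether one statement is leaking cursors is to group the currently open cursors by their text. A sketch against v$open_cursor (run as a DBA while the job is running):
    SELECT COUNT(*) AS open_count, user_name, sql_text
    FROM   v$open_cursor
    GROUP  BY user_name, sql_text
    ORDER  BY open_count DESC;
    A statement that shows up hundreds of times usually points to a cursor opened in a loop and never closed, which is the thing to fix in the program before raising OPEN_CURSORS.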

  • High library cache load lock waits in AWR

    Hi All,
    Today I faced a significant performance problem related to the shared pool. I made some observations and thought it would be a nice idea to share them with the Oracle experts. Please feel free to add your observations/recommendations and correct me where I am wrong.
    Here are the excerpts from the AWR report created for the problem period. The database server is on 10.2.0.3 running in a 2*16 configuration. The DB cache size is 4,000M and the shared pool size is 3,008M.
    Snap Id Snap Time Sessions Cursors/Session
    Begin Snap: 9994 29-Jun-09 10:00:07 672 66.3
    End Snap: 10001 29-Jun-09 17:00:49 651 64.4
    Elapsed:   420.70 (mins)    
    DB Time:   4,045.34 (mins)   -- Very poor response time visible from difference between DB time and elapsed time.
    Load Profile
    Per Second Per Transaction
    Redo size: 248,954.70 23,511.82
    Logical reads: 116,107.04 10,965.40
    Block changes: 1,357.13 128.17
    Physical reads: 125.49 11.85
    Physical writes: 51.49 4.86
    User calls: 224.69 21.22
    Parses: 235.22 22.21
    Hard parses: 4.83 0.46
    Sorts: 102.94 9.72
    Logons: 1.12 0.11
    Executes: 821.11 77.55
    Transactions: 10.59   -- User calls and parse count are almost the same, which means most of the calls are parse calls. Most of the parses are soft. 22 parses per transaction is a very high figure.
    -- Not much disk I/O activity. Most of the reads are being satisfied from memory.
    Instance Efficiency
    Buffer Nowait %: 100.00 Redo NoWait %: 100.00
    Buffer Hit %: 99.92 In-memory Sort %: 100.00
    Library Hit %: 98.92 Soft Parse %: 97.95
    Execute to Parse %: 71.35 Latch Hit %: 99.98
    Parse CPU to Parse Elapsd %: 16.82 % Non-Parse CPU: 91.41 -- The low execute-to-parse ratio shows the CPU is significantly busy parsing. The Soft Parse % shows that most of the parses are soft, so we should concentrate on soft parsing activity.
    -- Parse CPU to Parse Elapsed % is quite low, which means there is some bottleneck related to parsing. It could be a side-effect of huge parsing pressure, e.g. CPU cycles not being available.
    Shared Pool Statistics
    Begin End
    Memory Usage %: 81.01 81.92
    % SQL with executions>1: 88.51 86.93
    % Memory for SQL w/exec>1: 86.16 86.76 -- Shared Pool memory seems ok (in 80% range)
    -- 88% of the SQLs are repeating ones. It's a good sign.
    Top 5 Timed Events
    Event Waits Time(s) Avg Wait(ms) % Total Call Time Wait Class
    library cache load lock 24,243 64,286 2,652 26.5 Concurrency
    db file sequential read 1,580,769 42,267 27 17.4 User I/O
    CPU time   33,039   13.6  
    latch: library cache 53,013 29,194 551 12.0 Concurrency
    db file scattered read 151,669 13,550 89 5.6 User I/O
    Problem-1: Contention on the library cache. This may be due to an under-sized shared pool, incorrect parameters, or poor application design. But since we already observed that most of the parses are soft parses and shared pool usage is in the 80% range, the problem seems related to holding cursors; open_cursors/session_cached_cursors are red flags.
    Problem-2: User I/O, may be due to poor SQLs, I/O sub-system, or poor physical design (wrong indexes are being used as DB file seq reads)
    Wait Class
    Wait Class Waits %Time -outs Total Wait Time (s) Avg wait (ms) Waits /txn
    Concurrency 170,577 44.58 109,020 639 0.64
    User I/O 2,001,978 0.00 59,662 30 7.49
    System I/O 564,771 0.00 8,069 14 2.11
    Application 145,106 1.25 6,352 44 0.54
    Commit 176,671 0.37 4,528 26 0.66
    Other 27,557 6.31 2,532 92 0.10
    Network 6,862,704 0.00 696 0 25.68
    Configuration 3,858 3.71 141 37 0.01
    Wait Events
    Event Waits %Time -outs Total Wait Time (s) Avg wait (ms) Waits /txn
    library cache load lock 24,243 83.95 64,286 2652 0.09
    db file sequential read 1,580,769 0.00 42,267 27 5.91
    latch: library cache 53,013 0.00 29,194 551 0.20
    db file scattered read 151,669 0.00 13,550 89 0.57
    latch: shared pool 25,403 0.00 12,969 511 0.10
    log file sync 176,671 0.37 4,528 26 0.66
    enq: TM - contention 1,455 90.93 3,975 2732 0.01
    Instance Activity Stats
    opened cursors cumulative 5,290,760 209.60 19.80
    parse count (failures) 6,181 0.24 0.02
    parse count (hard) 121,841 4.83 0.46
    parse count (total) 5,937,336 235.22 22.21
    parse time cpu 283,787 11.24 1.06
    parse time elapsed 1,687,096 66.84 6.31
    Latch Activity
    library cache 85,042,375 0.15 0.43 29194 304,831 7.16
    library cache load lock 257,089 0.00 1.20 0 69,065 0.00
    library cache lock 41,467,300 0.02 0.07 6 2,714 0.07
    library cache lock allocation 730,422 0.00 0.44 0 0  
    library cache pin 28,453,986 0.01 0.16 8 167 0.00
    library cache pin allocation 509,000 0.00 0.38 0 0
    Init.ora parameters
    cursor_sharing= EXACT
    open_cursors= 3000
    session_cached_cursors= 0
    -- The open_cursors value is too high. I have checked that the maximum usage by a single session is 12%.
    -- session_cached_cursors is 0, which causes soft parsing. 500-600 is a good number to start with.
    -- cursor_sharing=EXACT may cause hard parses, but here hard parsing is comparatively small, so we can ignore it.
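    Before and after raising session_cached_cursors, it is worth checking how often parse calls are actually satisfied from the session cursor cache; a sketch using instance-wide statistics (names as in 10.2 v$sysstat):
    SELECT cach.value AS cache_hits,
           prs.value  AS parse_count_total,
           ROUND(100 * cach.value / prs.value, 2) AS pct_hits
    FROM   v$sysstat cach, v$sysstat prs
    WHERE  cach.name = 'session cursor cache hits'
    AND    prs.name  = 'parse count (total)';
    With the parameter at 0 the hit count stays at or near zero; a rising percentage after the change confirms the cache is being used.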
    From v$librarycache
    NAMESPACE             GETS    GETHITS GETHITRATIO       PINS PINHITRATIO    RELOADS INVALIDATIONS
    SQL AREA            162827      25127  .154317159  748901435  .999153087     107941         81886
    -- High invalidation count due to DDL-like activities.
    -- High reloads due to a small library cache.
    -- The get hit ratio is too small.
    -- Need to pin frequently executed objects into the library cache.
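    For the pinning point, a minimal sketch with DBMS_SHARED_POOL (the package is created by ?/rdbms/admin/dbmspool.sql if it is not already installed; the schema and object names below are placeholders):
    -- pin a hot package so it is not aged out of the shared pool
    EXEC DBMS_SHARED_POOL.KEEP('APP_OWNER.PKG_HOT', 'P');
    -- pin a hot cursor: take ADDRESS and HASH_VALUE from v$sqlarea first
    SELECT address, hash_value, executions, sql_text
    FROM   v$sqlarea
    ORDER  BY executions DESC;
    EXEC DBMS_SHARED_POOL.KEEP('&address,&hash_value', 'C');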
    P.S. The same question was asked on Oracle-L, but due to formatting reasons I am pasting duplicate content here.
    Regards,
    Neeraj Bhatia
    Edited by: Neeraj.Bhatia2 on Jul 13, 2009 6:51 AM

    Thanks Charles. I really appreciate your efforts to diagnose the issue.
    I agree with you that the performance issue is caused by soft parsing, which can be addressed by holding cursors (session_cached_cursors). It may also be due to an oversized shared pool, which causes delays while searching for child cursors.
    My second thought is that there is a large number of reloads, which can be due to an under-sized shared pool: if invalidation activity (CBO statistics collection, DDL, etc.) is not going on, then cursors are being flushed out frequently.
    CPU utilization is continuously high (above 90%). Pasting additional information from the same AWR report.
    Namespace                Get Requests       Pct Miss        Pin Requests         Pct Miss      Reloads        Invalidations
    BODY                       225,345               0.76            4,965,541            0.15           5,533           0
    CLUSTER                   1,278                  1.41            2,542                  1.73           26                0
    INDEX                       5,982                  9.31            13,922                7.35           258               0
    SQL AREA                  141,465              54.10           27,831,235         1.21           69,863          19,085
    Latch Miss Sources
    Latch Name             Where                                         NoWait Misses                 Sleeps             Waiter Sleeps
    library cache lock       kgllkdl: child: no lock handle             0                                   8,250                   5,792
    Time Model Statistics
    Statistic Name                                                                           Time (s)                               % of DB Time
    sql execute elapsed time                                                           206,979.31                                      85.27
    PL/SQL execution elapsed time                                                    94,651.78                                      39.00
    DB CPU                                                                                     33,039.29                                      13.61
    parse time elapsed                                                                      22,635.47                                       9.33
    inbound PL/SQL rpc elapsed time                                                  14,763.48                                       6.08
    hard parse elapsed time                                                               14,136.77                                       5.82
    connection management call elapsed time                                        1,625.07                                       0.67
    PL/SQL compilation elapsed time                                                        760.76                                       0.31
    repeated bind elapsed time                                                               664.81                                       0.27
    hard parse (sharing criteria) elapsed time                                             500.11                                       0.21
    Java execution elapsed time                                                              252.95                                       0.10
    failed parse elapsed time                                                                   167.23                                       0.07
    hard parse (bind mismatch) elapsed time                                             124.11                                       0.05
    sequence load elapsed time                                                                23.34                                        0.01
    DB time                                                                                   242,720.12  
    background elapsed time                                                             11,645.52  
    background cpu time                                                                      247.25
    According to this, DB CPU is at 65% utilization ((DB CPU + background CPU) / total available CPU seconds), while at the same time the DB host was 95% utilized (confirmed from DBA_HIST_SYSMETRIC_SUMMARY).
    Operating System Statistics
    Statistic                                         Total
    BUSY_TIME                             3,586,030
    IDLE_TIME                              1,545,064
    IOWAIT_TIME                              22,237
    NICE_TIME                                           0
    SYS_TIME                                  197,661
    USER_TIME                              3,319,452
    LOAD                                                 11
    RSRC_MGR_CPU_WAIT_TIME                  0
    PHYSICAL_MEMORY_BYTES          867,180
    NUM_CPUS                                           2
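    As a rough cross-check of the CPU figures above (BUSY_TIME and IDLE_TIME are reported in centiseconds, so this is only approximate):
    host busy = BUSY_TIME / (BUSY_TIME + IDLE_TIME) = 3,586,030 / (3,586,030 + 1,545,064), roughly 70%
    DB share = (DB CPU + background CPU) / (NUM_CPUS * elapsed) = (33,039 + 247) s / (2 * 420.7 * 60) s = 33,286 / 50,484, roughly 66%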

  • Manual Standby Database (10.2.0.2.0) on Windows 2003 R2

    Hi,
    We are setting up a standby database on a remote site for a simple Oracle DB. As we already have a standby/master for another Oracle DB (from SAP), we want to stay as close as possible to what already exists.
    For the SAP Oracle standby, we copy all archives manually to the standby and apply them with brarchive. All is working fine.
    For the new standby, we cannot use brarchive as there is no SAP installation on the standby, so we stay with the "manual" copy of the archives from the master to the standby (using robocopy). It means all archives are on the standby (K:\oracle\oradata\archive).
    The creation of the standby DB seems to be OK as I can open it, but I can't manage to apply the redo logs.
    I'm quite new to Oracle, so maybe it's a very basic issue, but I've already spent 3 days on it...
    To start the DB, we launch a .bat script:
    sqlplus /nolog @c:\backup\standby.sql
    pause
    the standby.sql:
    connect /@TECDB01 as sysdba
    startup nomount;
    alter database mount standby database;
    exit;
    Then i connect to sqlplus and enter:
    alter database recover managed standby database;
    In another sqlplus session :
    SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
    which gives me:
    PROCESS STATUS THREAD# SEQUENCE# BLOCK# BLOCKS
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    MR(fg) WAIT_FOR_GAP 1 45400 0 0
    RFS IDLE 0 0 0 0
    The sequence 45400 seems to be OK regarding the time of the backup restored on the standby.
    The archive log is indeed on the server, but it won't apply it.
    The Alert_TECDB01.log :
    Fri Oct 29 11:03:43 2010
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Picked latch-free SCN scheme 3
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =121
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    ksdpec: called for event 13740 prior to event group initialization
    Starting up ORACLE RDBMS Version: 10.2.0.2.0.
    System parameters with non-default values:
    processes                = 999
    sga_target               = 7214202880
    control_files            = I:\ORACLE\ORADATA\CNTRL\STANDBY.CTL, J:\ORACLE\ORADATA\CNTRL\STANDBY.CTL, K:\ORACLE\ORADATA\CNTRL\STANDBY.CTL
    db_block_size            = 8192
    compatible               = 10.2.0.2.0
    log_archive_dest_1       = LOCATION=K:\oracle\oradata\archive
    log_archive_dest_2       = SERVICE=TECDB01
    log_archive_dest_state_1 = enable
    log_archive_dest_state_2 = enable
    standby_archive_dest     = K:\oracle\oradata\archive
    archive_lag_target       = 1800
    db_file_multiblock_read_count= 16
    undo_management          = AUTO
    undo_tablespace          = RBS
    undo_retention           = 10800
    recyclebin               = OFF
    remote_login_passwordfile= EXCLUSIVE
    db_domain                = WORLD
    dispatchers              = (ADDRESS=(PROTOCOL=tcp)(HOST=xxx.xxx.xxx.92))(DISPATCHERS=4)(CONNECTIONS=1000)
    shared_servers           = 100
    local_listener           = (ADDRESS=(PROTOCOL=TCP)(HOST=xxx.xxx.xxx.92)(PORT=1521))
    session_cached_cursors   = 300
    utl_file_dir             = \\srvuniway.vrithoff.srwt.tec-wl.be\hotspots
    job_queue_processes      = 10
    audit_file_dest          = I:\ORACLE\ADMIN\TECDB01\ADUMP
    background_dump_dest     = I:\ORACLE\ADMIN\TECDB01\BDUMP
    user_dump_dest           = I:\ORACLE\ADMIN\TECDB01\UDUMP
    core_dump_dest           = I:\ORACLE\ADMIN\TECDB01\CDUMP
    db_name                  = TECDB01
    open_cursors             = 3000
    pga_aggregate_target     = 1086324736
    PMON started with pid=2, OS id=4012
    PSP0 started with pid=3, OS id=3856
    MMAN started with pid=4, OS id=3580
    DBW0 started with pid=5, OS id=1084
    LGWR started with pid=6, OS id=576
    CKPT started with pid=7, OS id=3516
    SMON started with pid=8, OS id=508
    RECO started with pid=9, OS id=3068
    CJQ0 started with pid=10, OS id=2448
    MMON started with pid=11, OS id=2840
    MMNL started with pid=12, OS id=3024
    Fri Oct 29 11:03:44 2010
    starting up 4 dispatcher(s) for network address '(ADDRESS=(PROTOCOL=tcp)(HOST=xxx.xxx.xxx.92))'...
    starting up 100 shared server(s) ...
    Fri Oct 29 11:03:45 2010
    alter database mount standby database
    Fri Oct 29 11:03:51 2010
    Setting recovery target incarnation to 2
    ARCH: STARTING ARCH PROCESSES
    ARC0 started with pid=118, OS id=3584
    Fri Oct 29 11:03:51 2010
    ARC0: Archival started
    ARC1 started with pid=119, OS id=3688
    Fri Oct 29 11:03:51 2010
    ARC1: Archival started
    ARCH: STARTING ARCH PROCESSES COMPLETE
    Fri Oct 29 11:03:51 2010
    ARC0: Becoming the 'no FAL' ARCH
    Fri Oct 29 11:03:51 2010
    Successful mount of redo thread 1, with mount id 3987142355
    Fri Oct 29 11:03:51 2010
    ARC0: Becoming the 'no SRL' ARCH
    Fri Oct 29 11:03:51 2010
    ARC1: Becoming the heartbeat ARCH
    Fri Oct 29 11:03:51 2010
    Physical Standby Database mounted.
    Completed: alter database mount standby database
    Fri Oct 29 11:04:06 2010
    alter database recover managed standby database
    Fri Oct 29 11:04:06 2010
    Managed Standby Recovery not using Real Time Apply
    parallel recovery started with 7 processes
    Media Recovery Waiting for thread 1 sequence 45400
    Fetching gap sequence in thread 1, gap sequence 45400-45499
    FAL[client]: Error fetching gap sequence, no FAL server specified
    Fri Oct 29 11:04:37 2010
    FAL[client]: Failed to request gap sequence
    GAP - thread 1 sequence 45400-45499
    DBID 3776455083 branch 670241032
    FAL[client]: All defined FAL servers have been attempted.
    Check that the CONTROL_FILE_RECORD_KEEP_TIME initialization
    parameter is defined to a value that is sufficiently large
    enough to maintain adequate log switch information to resolve
    archivelog gaps.
    Fri Oct 29 11:04:51 2010
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[1]: Assigned to RFS process 3452
    RFS[1]: Identified database type as 'physical standby'
    Fri Oct 29 11:04:51 2010
    RFS LogMiner: Client disabled from further notification
    The tecdb01_arc1_3688.trc :
    Dump file i:\oracle\admin\tecdb01\bdump\tecdb01_arc1_3688.trc
    Fri Oct 29 11:03:51 2010
    ORACLE V10.2.0.2.0 - 64bit Production vsnsta=0
    vsnsql=14 vsnxtr=3
    Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    Windows NT Version V5.2 Service Pack 2
    CPU                 : 8 - type 8664, 2 Physical Cores
    Process Affinity    : 0x0000000000000000
    Memory (Avail/Total): Ph:7467M/9215M, PhPgF:2454M/10796M
    Instance name: tecdb01
    Redo thread mounted by this instance: 1
    Oracle process number: 119
    Windows thread id: 3688, image: ORACLE.EXE (ARC1)
    *** SERVICE NAME:() 2010-10-29 11:03:51.177
    *** SESSION ID:(1088.1) 2010-10-29 11:03:51.177
    kcrrwkx: nothing to do (start)
    *** 2010-10-29 11:04:51.129
    Redo shipping client performing standby login
    *** 2010-10-29 11:04:51.176 64529 kcrr.c
    Logged on to standby successfully
    Client logon and security negotiation successful!
    kcrrwkx: nothing to do (end)
    *** 2010-10-29 11:05:51.285
    kcrrwkx: nothing to do (end)
    *** 2010-10-29 11:06:51.300
    kcrrwkx: nothing to do (end)
    The initTECDB01.ora :
    # Copyright (c) 1991, 2001, 2002 by Oracle Corporation
    # Archive
    archive_lag_target=1800
    log_archive_dest_1='LOCATION=K:\oracle\oradata\archive'
    # Cache and I/O
    db_block_size=8192
    db_file_multiblock_read_count=16
    # Cursors and Library Cache
    open_cursors=3000
    session_cached_cursors=300
    # Database Identification
    db_domain=WORLD
    db_name=TECDB01
    # Diagnostics and Statistics
    background_dump_dest=I:\oracle\admin\TECDB01\bdump
    core_dump_dest=I:\oracle\admin\TECDB01\cdump
    user_dump_dest=I:\oracle\admin\TECDB01\udump
    # File Configuration
    control_files=("I:\oracle\oradata\cntrl\standby.ctl", "J:\oracle\oradata\cntrl\standby.ctl", "K:\oracle\oradata\cntrl\standby.ctl")
    # Job Queues
    job_queue_processes=10
    # Miscellaneous
    compatible=10.2.0.2.0
    recyclebin=OFF
    # Processes and Sessions
    processes=999
    # SGA Memory
    sga_target=6880M
    # Pools
    #java_pool_size=150M
    # Security and Auditing
    audit_file_dest=I:\oracle\admin\TECDB01\adump
    remote_login_passwordfile=EXCLUSIVE
    # Shared Server
    shared_servers=100
    dispatchers="(ADDRESS=(PROTOCOL=tcp)(HOST=xxx.xxx.xxx.92))(DISPATCHERS=4)(CONNECTIONS=1000)"
    #dispatchers="(PROTOCOL=TCP) (SERVICE=TECDB01XDB)"
    # Sort, Hash Joins, Bitmap Indexes
    pga_aggregate_target=1036M
    # System Managed Undo and Rollback Segments
    undo_management=AUTO
    undo_retention=10800
    undo_tablespace=RBS
    local_listener="(ADDRESS=(PROTOCOL=TCP)(HOST=xxx.xxx.xxx.92)(PORT=1521))"
    # NIDA - 28.10.2010 - redo apply
    log_archive_dest_state_1=enable
    log_archive_dest_2 = 'SERVICE=TECDB01'
    log_archive_dest_state_2=enable
    #standby_file_management=auto
    standby_archive_dest=K:\oracle\oradata\archive
    And the TNSNAMES.ora :
    # tnsnames.ora Network Configuration File: C:\oracle\102\network\admin\tnsnames.ora
    # Generated by Oracle configuration tools.
    #this is the standby
    TECDB01.VRITHOFF.SRWT.TEC-WL.BE =
    (DESCRIPTION =
    (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = xxx.xxx.xxx.92)(PORT = 1521))
    (CONNECT_DATA =
    (SERVICE_NAME = TECDB01)
    # This file is written by Oracle Services For MSCS
    # on Sat Nov 08 10:44:27 2008
    #this is the master
    PRIMARY.VRITHOFF.SRWT.TEC-WL.BE =
    (DESCRIPTION =
    (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = xxx.xxx.xxx.246)(PORT = 1521))
    (CONNECT_DATA =
    (SID = TECDB01)
    EXTPROC_CONNECTION_DATA.VRITHOFF.SRWT.TEC-WL.BE =
    (DESCRIPTION =
    (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = IPC)(KEY = TECDB01))
    (CONNECT_DATA =
    (SERVICE_NAME = TECDB01)
    I hope you have all the information needed to point me in the right direction.
    Regards,
    Nicolas

    Hi,
    RECOVER AUTOMATIC is working fine, but I still have problems with RECOVER MANAGED.
    Here is the alert log (sequence 46626 was already there at 11:30):
    Mon Nov 15 11:31:13 2010
    alter database recover managed standby database using current logfile
    Managed Standby Recovery starting Real Time Apply
    parallel recovery started with 7 processes
    Media Recovery Waiting for thread 1 sequence 46626
    Mon Nov 15 16:36:01 2010
    alter database recover managed standby database cancel
    Mon Nov 15 16:36:05 2010
    Managed Standby Recovery not using Real Time Apply
    Recovery interrupted!
    Mon Nov 15 16:36:06 2010
    Media Recovery user canceled with status 16037
    ORA-16043 signalled during: alter database recover managed standby database using current logfile...
    Mon Nov 15 16:36:07 2010
    Completed: alter database recover managed standby database cancel
    Mon Nov 15 16:36:37 2010
    ALTER DATABASE RECOVER automatic standby database until time'2010-11-15:15:50:00'
    Mon Nov 15 16:36:37 2010
    Media Recovery Start
    Managed Standby Recovery not using Real Time Apply
    parallel recovery started with 7 processes
    Mon Nov 15 16:36:39 2010
    Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46626_0670241032.001
    Mon Nov 15 16:36:45 2010
    Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46627_0670241032.001
    Mon Nov 15 16:37:11 2010
    Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46628_0670241032.001
    Mon Nov 15 16:37:30 2010
    Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46629_0670241032.001
    Mon Nov 15 16:37:48 2010
    Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46630_0670241032.001
    Mon Nov 15 16:37:59 2010
    Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46631_0670241032.001
    Mon Nov 15 16:38:15 2010
    Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46632_0670241032.001
    Mon Nov 15 16:38:28 2010
    Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46633_0670241032.001
    Mon Nov 15 16:38:47 2010
    Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46634_0670241032.001
    Mon Nov 15 16:39:34 2010
    Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46635_0670241032.001
    Mon Nov 15 16:40:43 2010
    Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46636_0670241032.001
    Mon Nov 15 16:42:03 2010
    Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46637_0670241032.001
    Mon Nov 15 16:43:18 2010
    Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46638_0670241032.001
    Mon Nov 15 16:44:38 2010
    Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46639_0670241032.001
    Mon Nov 15 16:45:45 2010
    Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46640_0670241032.001
    Mon Nov 15 16:46:37 2010
    Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46641_0670241032.001
    Mon Nov 15 16:47:48 2010
    Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46642_0670241032.001
    Mon Nov 15 16:49:07 2010
    Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46643_0670241032.001
    Mon Nov 15 16:50:04 2010
    Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46644_0670241032.001
    Mon Nov 15 16:51:13 2010
    Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46645_0670241032.001
    Mon Nov 15 16:52:16 2010
    Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46646_0670241032.001
    Mon Nov 15 16:53:07 2010
    Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46647_0670241032.001
    Mon Nov 15 16:54:28 2010
    Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46648_0670241032.001
    Mon Nov 15 16:55:47 2010
    Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46649_0670241032.001
    Mon Nov 15 16:56:35 2010
    Incomplete Recovery applied until change 4037420604
    Completed: ALTER DATABASE RECOVER automatic standby database until time'2010-11-15:15:50:00'
    I don't understand why the system waits for a sequence that is available...
    Regards,
    Nico
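    One thing the first alert log points at is "no FAL server specified": without FAL_SERVER/FAL_CLIENT, managed recovery cannot ask the primary to resolve a gap and just sits waiting for the sequence. A minimal sketch, assuming the tnsnames aliases shown earlier (PRIMARY for the master, TECDB01 for the standby) and that automatic gap fetching is wanted; otherwise the manually copied archives can be registered by hand so that managed recovery notices them.
    In initTECDB01.ora on the standby:
    fal_server=PRIMARY
    fal_client=TECDB01
    Or, staying with the robocopy approach, register a copied archive on the standby (the file name is illustrative, following the pattern in the log above):
    ALTER DATABASE REGISTER LOGFILE 'K:\oracle\oradata\archive\ARC46626_0670241032.001';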

  • Performace issue in Oracle 10.1.0.4

    Hi,
    I have a database with version 10.1.0.4 in a Solaris environment, and I am having a strange problem with open cursors. The open_cursors parameter is set to 1000, but in the snapshot cursors/session shows 1200, and the count is gradually increasing, sometimes reaching up to 15000. Secondly, the parse-to-execute ratio is ~86%.
    I am not getting any error related to open_cursors, but users are reporting very slow performance. The application uploads bulk logs into the database and uses bind variables.
    RAM -> 8G
    The value of below parameters:
    open_cursors=1000
    session_cached_cursor=150
    cursor_sharing=similar
    SGA=3.5GB
    Please help to find the root cause of the issue and suggest how to resolve it.
    Thanks in advance,
    Subash

    select 'session_cached_cursors' parameter,
           lpad(value, 5) value,
           decode(value, 0, ' n/a', to_char(100 * used / value, '990') || '%') usage
    from ( select max(s.value) used
           from v$statname n,
                v$sesstat s
           where n.name = 'session cursor cache count'
           and s.statistic# = n.statistic# ),
         ( select value
           from v$parameter
           where name = 'session_cached_cursors' )
    union all
    select 'open_cursors', lpad(value, 5),
           to_char(100 * used / value, '990') || '%'
    from ( select max(sum(s.value)) used
           from v$statname n,
                v$sesstat s
           where n.name in ( 'opened cursors current',
                             'session cursor cache count')
           and s.statistic# = n.statistic#
           group by s.sid ),
         ( select value
           from v$parameter
           where name = 'open_cursors');
    And check which sessions hold many cursors:
    select a.value, s.username, s.sid, s.serial#, s.machine
    from gv$sesstat a, gv$statname b, gv$session s
    where a.statistic# = b.statistic# and s.sid=a.sid
    and b.name = 'session cursor cache count' order by a.value;
    Or get information on sessions with more than 1000 open cursors ;)
    select SADDR, SID, USER_NAME, ADDRESS, HASH_VALUE, SQL_ID, SQL_TEXT from v$open_cursor where sid in (SELECT sid FROM V$OPEN_CURSOR group by sid having count(*) > 1000);

  • Pfile

    Hi all, I am using Oracle 10gR2 on Solaris 10.
    We have a two-node RAC with ASM on both nodes. Both nodes are using their local PFILEs. Currently sga_target is set to 0 on my DB. I have checked the init.ora files on both nodes and there is no entry for sga_target.
    #### SGA Tuning ####
    *.sga_max_size=12G
    *.shared_pool_size=1024M
    *.db_cache_size=4096M
    *.large_pool_size=1024M
    The total size of the SGA is 12G, but if we add up the values above they don't sum to 12G. Is this normal? If I want Oracle to use automatic memory management, do I need to manually add the parameter to the PFILE? Do I need to make some other changes as well?
    Regards.....

    Hi,
    #        Initialization Parameter Settings Done By Connectiva Systems (I) Pvt. Ltd For MTC Group RA - (2007)                     #
    ################# DATBASE NAME : gradb , Node 1 Instance Name : gradb1 , Node 2 Instance Name : gradb2 ###########################
    ################# Local Listener For Node 1:LISTENER_RA-DB1 , Local Listener For Node 2:LISTENER_RA-DB2 ##########################
    ##### Global Settings #####
    *.compatible='10.2.0'
    *.db_domain=''
    *.db_name='gradb'
    *.cluster_database=TRUE
    *.cluster_database_instances=4
    *.control_files='+MTCRAC_DBREP_DATA/ctlfile/gradbctl01.ctl','+MTCRAC_TEMP_ALL/ctlfile/gradbctl02.ctl'
    *.control_file_record_keep_time=60 # 60 Days Control File Record Retention #
    *.db_files=3000
    *.recyclebin='OFF'
    *.fast_start_mttr_target=1200 # 20 Minutes #
    #### Data I/O Tuning ####
    *.db_block_size=16384
    *.db_block_checking='true'
    *.db_cache_advice='OFF'
    *.db_file_multiblock_read_count=16 #### Stripe Read 16k*16 => 256K ####
    *.gcs_server_processes=8
    #### Enqueue Processes Restriction ####
    *.aq_tm_processes=2
    *.job_queue_processes=200
    #### SGA Tuning ####
    *.sga_max_size=12G
    *.sga_target=12G
    *.shared_pool_size=0
    *.db_cache_size=0
    *.large_pool_size=0
    #### PGA Allocation Limit ####
    *.pga_aggregate_target=6144M
    #### PGA Tuning Parameters ####
    *.open_cursors=2000
    *.session_cached_cursors=2000
    #### Query Optimization Tuning ####
    *.optimizer_index_caching=60
    *.optimizer_index_cost_adj=40
    #### Enhanced Feature For Query ####
    *.query_rewrite_enabled='true'
    *.star_transformation_enabled='true'
    #### Limit Total Number of Processes In The Environment ####
    *.processes=2000
    ####### Undo Tuning ######
    *.undo_management='AUTO'
    *.undo_retention=36000## 10Hrs ##
    #### XML Database Listener Registration Configuration ####
    *.dispatchers="(PROTOCOL=TCP) (SERVICE=gradbXDB)"
    #### Automatic Archivelog Settings ####
    *.log_archive_format="gradb%T_seq%S_reset%r"
    *.log_archive_dest='+MTCRAC_ARCHIVE_DEST/archivefile'
    ##### Local Settings #####
    *.background_dump_dest='/logs/db_logs/GRADB_logs/bdump'
    *.user_dump_dest='/logs/db_logs/GRADB_logs/udump'
    *.core_dump_dest='/logs/db_logs/GRADB_logs/cdump'
    *.audit_file_dest='/logs/db_logs/GRADB_logs/adump'
    *.audit_trail=db
    ##### Instance Based Settings #####
    gradb1.instance_number=1
    gradb2.instance_number=2
    gradb1.local_listener='LISTENER_RA-DB1'
    gradb2.local_listener='LISTENER_RA-DB2'
    gradb1.thread=1
    gradb2.thread=2
    gradb1.undo_tablespace='UNDOTBS_N1'
    gradb2.undo_tablespace='UNDOTBS_N2'
    *.statistics_level='BASIC'
    Now when I start the instance it gives me this error:
    ORA-00824: cannot set sga_target due to existing internal settings, see alert log for more information
    There is nothing in the alert log.
    Regards.....
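    For what it's worth, ORA-00824 is typically raised when sga_target is set while statistics_level is BASIC, and the pfile above has *.statistics_level='BASIC'. A minimal sketch of the pfile change, assuming BASIC is not a hard requirement on this system:
    # automatic SGA management (sga_target) needs statistics_level TYPICAL or ALL
    *.statistics_level='TYPICAL'
    *.sga_target=12G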

  • KEEP GETTING ORA-04030: OUT OF PROCESS MEMORY During import using DataPump

    Hi,
    I know I have several issues with my Data Pump import, but I am stuck again, people :(
    We took an expdp dump from an external client and we are trying to append the data to our existing DB. When we do this, we keep getting this:
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    ORA-31693: Table data object "IWORKS"."TBLEDIFILE_DTL" failed to load/unload and is being
    skipped due to error:
    ORA-29913: error in executing ODCIEXTTABLEFETCH callout
    ORA-04030: out of process memory when trying to allocate 64528 bytes (sort subheap,sort key)
    and so on...for all the tables.
    I used 2 different impdp commands to see if I would get different results. They are:
    impdp system@iworksdb directory=DATA_PUMP_DIR dumpfile=expdpgemdev.dmp
    job_name=impgemdev041708_01 INCLUDE=TABLE TABLE_EXISTS_ACTION=APPEND
    SCHEMAS=GEMDEV LOGFILE=IMPIWORKS_BOON.log REMAP_SCHEMA=GEMDEV:IWORKS
    REMAP_TABLESPACE=IWORKS_INDEX:IWORKS_IDX REMAP_TABLESPACE=IWORKS_IOT:IWORKS_IDX
    REMAP_TABLESPACE=IWORKS_TABLES:IWORKS_TABLES EXCLUDE=GRANT exclude=statistics
    STREAMS_CONFIGURATION=N
    impdp system@iworksdb directory=DATA_PUMP_DIR dumpfile=expdpgemdev.dmp job_name=impgemdev041708_02 SCHEMAS=GEMDEV
    LOGFILE=IMPIWORKS_BOON.log REMAP_SCHEMA=GEMDEV:IWORKS
    REMAP_TABLESPACE=IWORKS_INDEX:IWORKS_IDX REMAP_TABLESPACE=IWORKS_IOT:IWORKS_IDX
    REMAP_TABLESPACE=IWORKS_TABLES:IWORKS_TABLES EXCLUDE=GRANT exclude=statistics
    STREAMS_CONFIGURATION=N
    I have also enabled the 3GB limit on my Windows 2003 Server, which has a total of 4GB of RAM and a 2.6 GHz dual core:
    Microsoft Windows [Version 5.2.3790]
    (C) Copyright 1985-2003 Microsoft Corp.
    C:\Documents and Settings\rdgadmin>cd ../..
    C:\>type boot.ini
    [boot loader]
    redirect=UseBiosSettings
    timeout=30
    default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003, Enterprise" /noexecute=optout /fastdetect /3GB /redirect
    Here is my parameter file as well, so this will show you how I set up my memory allocation:
    # Memory Allocations
    iworksdb.__db_cache_size=0
    iworksdb.__java_pool_size=0
    iworksdb.__large_pool_size=0
    iworksdb.__shared_pool_size=0
    iworksdb.__streams_pool_size=0
    *.db_16k_cache_size=673741824
    *.db_block_size=8192
    *.db_recovery_file_dest_size=1147483648
    *.pga_aggregate_target=1010612736
    *.sga_max_size=1521225472
    *.sga_target=1321225472
    # Instance Parameters
    *.control_files='C:\ORACLE\FILES\IWORKSDB\control01.ctl',
    'R:\ORACLE\FILES\IWORKSDB\control02.ctl',
    'C:\ORACLE\FILES\IWORKSDB\control03.ctl'
    *.db_domain=''
    *.db_name='iworksdb'
    *._kgl_large_heap_warning_threshold=0
    *.compatible='10.2.0.4.0'
    *.job_queue_processes=20
    *.open_cursors=20000
    *.session_cached_cursors=8000
    *.processes=300
    *.remote_login_passwordfile='EXCLUSIVE'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.db_recovery_file_dest='c:\ORACLE\FILES\IWORKSDB'
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=iworksdbXDB)'
    *.statistics_level=ALL
    *.db_writer_processes=4
    # CBO Settings
    *.optimizer_mode='FIRST_ROWS'
    *.optimizer_index_cost_adj=20
    *.query_rewrite_enabled=TRUE
    *.STAR_TRANSFORMATION_ENABLED=TRUE
    *._NEWSORT_ENABLED=TRUE
    *.OPTIMIZER_DYNAMIC_SAMPLING=4
    *.optimizer_index_caching=75
    *.optimizer_index_cost_adj=15
    Continued on the next post....

    Continuation....
    Here is my log file from the impdp:
    Import: Release 10.2.0.4.0 - Production on Thursday, 17 April, 2008 14:35:31
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Master table "SYSTEM"."IMPGEMDEV041708_01" successfully loaded/unloaded
    Starting "SYSTEM"."IMPGEMDEV041708_01": system/********@iworksdb directory=DATA_PUMP_DIR dumpfile=expdpgemdev.dmp job_name=impgemdev041708_01 INCLUDE=TABLE TABLE_EXISTS_ACTION=APPEND SCHEMAS=GEMDEV LOGFILE=IMPIWORKS_BOON.log REMAP_SCHEMA=GEMDEV:IWORKS REMAP_TABLESPACE=IWORKS_INDEX:IWORKS_IDX REMAP_TABLESPACE=IWORKS_IOT:IWORKS_IDX REMAP_TABLESPACE=IWORKS_TABLES:IWORKS_TABLES STREAMS_CONFIGURATION=N
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    ORA-39152: Table "IWORKS"."SYS_TOKENTYPE" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    ORA-31693: Table data object "IWORKS"."TBLEDIFILE_DTL" failed to load/unload and is being skipped due to error:
    ORA-29913: error in executing ODCIEXTTABLEFETCH callout
    ORA-04030: out of process memory when trying to allocate 64528 bytes (sort subheap,sort key)
    ORA-31693: Table data object "IWORKS"."TBLSUBSCRIBERBENEFITS_DTL" failed to load/unload and is being skipped due to error:
    ORA-29913: error in executing ODCIEXTTABLEFETCH callout
    ORA-04030: out of process memory when trying to allocate 64528 bytes (sort subheap,sort key)
    ORA-31693: Table data object "IWORKS"."TBLROUTE_DTL" failed to load/unload and is being skipped due to error:
    ORA-29913: error in executing ODCIEXTTABLEFETCH callout
    ORA-04030: out of process memory when trying to allocate 64528 bytes (sort subheap,sort key)
    ;;; Import> kill_job
    Job "SYSTEM"."IMPGEMDEV041708_01" stopped due to fatal error at 14:42:54
    So basically I have looked online at Metalink and they are telling me (via an SR I opened with them) that I should look at Note 233869.1, Diagnosing and Resolving ORA-4030 errors.
    So I did this, and the only thing I can think of that applies to me is that maybe I have sized my SGA too big? I mean, what else can cause this issue?
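    Following the general approach of that note, it can help to watch process (PGA) memory while the import runs, since ORA-4030 is about process memory rather than the SGA; a sketch using the 10.2 views (statistic and column names as documented there):
    SELECT name, value
    FROM   v$pgastat
    WHERE  name IN ('aggregate PGA target parameter',
                    'total PGA allocated',
                    'maximum PGA allocated');
    SELECT pid, spid, pga_used_mem, pga_alloc_mem, pga_max_mem
    FROM   v$process
    ORDER  BY pga_alloc_mem DESC;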

  • ORA-600 (QMXISTOREXOBEL1) ERROR

    I have a customer who gets the following
    error:
    ORA-00600: internal error code, arguments: [qmxiStoreXobEl1], [10], [258], [1], [],
    when he inserts an XML document into an XMLType column.
    He uses:
    OS: Linux SuSE 2.1 A.S.
    DB: Oracle 10g
    He registered the XML schema in the database. Some XML documents are stored correctly; others hit the error above.
    Any suggestion or idea is appreciated.
    Thank you
    Paola

    user12003658 wrote:
    The application has generated an ORA-600 [17059] error in a 9.2.0.7 database. I looked up this particular error in Metalink. It looks like the number of child cursors against a single parent object has reached 32768. I can verify this in the trace file too. The solution in the Metalink doc suggests that I should increase the limit to 65536. How can I increase the child cursor limit? The other cursor-related initialization parameters in the database are:
    CURSOR_SHARING=exact
    CURSOR_SPACE_FOR_TIME=false
    OPEN_CURSORS=500
    SESSION_CACHED_CURSORS=0
    Thanks in advance..
    For ORA-00600 internal errors, you need to check the Metalink "ORA-600 error lookup" tool to find the cause and the solution offered by Oracle Support.
    As you're using an unsupported 9i version, you can't get any feedback from Support :(

  • Database Limits

    Windows Server 2008 - Oracle 11g 11.2.0
    I was checking Oracle Enterprise Manager.
    There is a database limits warning saying that open cursors are at 2185.
    I checked some parameters :
    open_cursors : 12000
    session_cached_cursors : 50
    To me, it seems that the limit is far from being reached...
    So I'm trying to understand why I got this warning.
    Could you please help me ?
    Ed

    You should be able to click on the warning link and get to an open cursors screen which will show you the warning and critical levels, as well as your recent history. On my machine it appears the default is 1200, which is far below my normal usage - it's kind of humorous that the warning message was sent at the time my usage was at its minimum. Why it checks the way it does is one of those mysteries of the universe. You might try setting the levels in the Manage Metrics screen, though on earlier versions on my platform that's been a challenge.
    Note there are EM fora: http://forums.oracle.com/forums/category.jspa?categoryID=70

  • What a mess !! : ORA-01000: maximum open cursors exceeded

    I have read a lot of articles about this error.
    I'm also having this problem with an 8.1.6.0 or 8.1.6.1 Oracle DB (and classes112 JDBC drivers) and my application.
    I have checked that I close every Statement, PreparedStatement, ResultSet and Connection object (connections are closed every 30 sec by a DB pool object) when I have finished the job, but I'm still getting this error.
    In fact it seems that with these Oracle releases, Statement.close, PreparedStatement.close and ResultSet.close have no effect on cursors.
    It looks like the cursors are cleared only when you close the connection.
    So does anyone here know which version of Oracle and the JDBC drivers corrects this bug? 8.1.6.2? 8.1.7.0? The classes12 drivers?
    Oracle 8.1.5.0 seems to work fine; I have monitored the cursor activity with "select user_name, sql_text from v$open_cursor;" and the cursors are opened and closed correctly (Statement.close does the job..).
    I'm quite lost. Does anyone have some info to share with me about this issue?
    Thanks a lot for any response

    I've dealt with this problem too. Oracle says that it is a JDBC implementation problem. We came across this problem when running reporting/high-volume programs even after every JDBC object was dealt with (.close(), = null, etc.). Something about internally spawned cursors not being dealt with by the DB. Apparently this was dealt with in the newest release of classes12.zip, but we found that we still had excess cursors. We set our open_cursors variable to over 400 (the default is lower than 50). Doing a forced conn.rollback() every time the connection was returned to the pool seemed to help too.
    Jamie

  • ORA-00604: error occurred at recursive SQL level 1 (Call to an Oracle View)

    I have created a view that refers to a package function within the SQL SELECT,
    like this example:
    CREATE OR REPLACE VIEW VW_TAX
    as select
    test_pkg.fn_get_gl_value(acct_id) desired_col1,
    test_pkg.fn_get_gl_desc_value(acct_id) desired_col2
    FROM table_a a, table_b b
    WHERE a.col = b.col;
    The sample function( fn_get_gl_value) is embedded into a package (test_pkg).
    Function fn_get_gl_value:
    It earlier referred to tables A1, B1 and C1, and the query took really long. Therefore I used object type tables and stored the required values once within the package when it is invoked. Later I used the tables A1, B1 and C1 (a table CAST from the type tables loaded in package memory).
    The query was fast and fine, but now when I try to re-use the view
    select * from VW_TAX
    where acct_id = '02846'
    It fails with this message
    09:32:35 Error: ORA-00604: error occurred at recursive SQL level 1
    ORA-01000: maximum open cursors exceeded
    Note: The database is Oracle8i Enterprise Edition Release 8.1.7.4.0.
    The maximum open cursors for the database is 500.
    Please let me know if there is any known solution,
    Appreciate all your help
    Thanks
    RP

    Seems like your OPEN_CURSORS init.ora parameter is set too low.
    See Metalink Note:1012266.6 for details.
       ORA-01000: "maximum open cursors exceeded"
            Cause: A host language program attempted to open too many cursors.
                   The initialization parameter OPEN_CURSORS determines the
                   maximum number of cursors per user.
           Action: Modify the program to use fewer cursors. If this error occurs
                   often, shut down Oracle, increase the value of OPEN_CURSORS,
                   and then restart Oracle.
