Runtime Error - DBIF_RSQL_INVALID_RSQL - Too many OPEN CURSOR

When I try to train a Decision Tree Model via an APD process in RSANWB, I get a runtime error when my model is configured with too many parameter fields or too many leaves (with 2 leaves it works, with more it fails).
Searching SAP Notes, I see many references to this kind of runtime error, but no note on occurrences of it in RSANWB / RSDMWB.
Does anyone have any information on this?
Claudio Ciardelli
Runtime Errors         DBIF_RSQL_INVALID_RSQL
Date and Time          29.07.2005 16:19:21
ShrtText
    Error in RSQL module of database interface.
What happened?
    Error in ABAP application program.
    The current ABAP program "SAPLRS_DME_DECISION_TREE_PRED" had to be terminated
    because one of the statements could not be executed.
    This is probably due to an error in the ABAP program.
Error analysis
    The system attempted to open a cursor for a SELECT or OPEN CURSOR
    statement, but all 16 cursors were already in use.
    The statement that failed accesses table "/BIC/0CDT000030".
    The erroneous statement accesses table "/BIC/0CDT000030".
Trigger Location of Runtime Error
    Program        SAPLRS_DME_DECISION_TREE_PRED
    Include        LRS_DME_DECISION_TREE_PREDU06
    Row            103
    Module type    (FUNCTION)
    Module Name    RS_DME_DTP_EVALUATE
Source Code Extract
Line  SourceCde
   73 * Prepare for Data evaluation
   74   CATCH SYSTEM-EXCEPTIONS OTHERS = 15.
   75     CREATE DATA ref TYPE (i_enum_dbtab).
   76     ASSIGN ref->* TO <fs_wkarea>.
   77     ASSIGN COMPONENT gv_class_dbposit OF STRUCTURE
   78                       <fs_wkarea> TO <fs_class>.
   79     CREATE DATA ref TYPE TABLE OF (i_enum_dbtab).
   80     ASSIGN ref->* TO <ft_data>.
   81
   82   ENDCATCH.
   83   IF sy-subrc = 15.
   84 *   Error on Assignment.
   85     CALL FUNCTION 'RS_DME_COM_ADDMSG_NOLOG'
   86       EXPORTING
   87         i_type    = 'E'
   88         i_msgno   = 301
   89         i_msgv1   = 'EVALUATION_PHASE'
   90       IMPORTING
   91         es_return = ls_return.
   92     APPEND ls_return TO e_t_return.
   93     EXIT.
   94   ENDIF.
   95
   96 * For the un-trained Rec-Ids, evaluate.....
   97   REFRESH lt_recinp.
   98   APPEND LINES OF i_t_records TO lt_recinp.
   99   SORT lt_recinp.
  100 * Open Cursor..
  101   DATA: l_curs TYPE cursor.
  102   DATA: l_psize TYPE i VALUE 10000.
>>>>>   OPEN CURSOR WITH HOLD l_curs FOR
  104    SELECT * FROM (i_enum_dbtab)
  105      WHERE rsdmdt_recid NOT IN
  106         ( SELECT rsdmdt_recid FROM
  107              (i_learn_tab) ).
  108
  109 *  Start Fetch...
  110   DO.
  111     FETCH NEXT CURSOR l_curs
  112       INTO CORRESPONDING FIELDS OF TABLE <ft_data>
  113       PACKAGE SIZE l_psize.
  114     IF sy-subrc NE space.
  115       EXIT.
  116     ENDIF.
  117
  118 *     Process records...
  119     LOOP AT <ft_data> ASSIGNING <fs_wkarea>.
  120
  121 *     Call Prediction Function.
  122       CALL FUNCTION 'RS_DME_DTP_PREDICT_STRUCTURE'

Hi Claudio,
well, the message is quite clear. I think in your case you need to split your model into a few roughly equal models, each with no more than 2 leaves.
Another option might be to run more things serially instead of in parallel.
Hope it helps
regards
Siggi
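
For what it's worth, the failing code in the dump is SAP-internal, so there is nothing to change on the customer side; the mechanism, though, is the generic one of opening database cursors faster than they are closed until a fixed limit (here 16) is reached. Purely as an illustration in JDBC terms, and not the SAP coding itself, the safe shape is to have each cursor closed before the next one is opened:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class CursorHygiene {
        // Reads one table per call; the try-with-resources block guarantees the
        // statement and its result set (i.e. the underlying cursor) are closed
        // before the next call opens a new one, so the open-cursor count stays
        // flat no matter how many tables are processed.
        static long countRows(Connection conn, String table) throws SQLException {
            String sql = "SELECT COUNT(*) FROM " + table; // illustrative only
            try (PreparedStatement ps = conn.prepareStatement(sql);
                 ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getLong(1);
            }
        }
    }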

Similar Messages

  • Report causes too many open cursors

    Hello there!
    I've got the following situation:
    I've a very heavy report used for generating our Users Manual. In Reports 6i this Report works fine, generating the Manual works.
    In 10g the Report starts, and formats about 240 pages (in 6i I can generate over 1000 pages and more with this report), and cancels with the message "too many open cursors".
    So I took a look at the open cursors:
    In 6i there are about 100 open cursors caused by this report; in 10g there are... uhm... in all cases more than the OPEN_CURSORS parameter of the database allows (the standard value used by our application is 1000; increasing this to e.g. 5000 resulted in the same behaviour => too many open cursors).
    I checked the open cursors while running the report, which showed the following behaviour:
    The report formats about 230 pages and opens about 20 cursors (~30 sec.). For the next 10 pages the report opens the remaining 980 cursors (~5 sec.), and then stops formatting...
    So it seems the report server causes some bad recursion: when restarting the report server and re-running the report, I sometimes get the following error:
    Terminated with error: REP-536870981: Internal error REP-62204: Internal error while writing the image BandCombine: a row of the matrix does not have the correct number of entries, should be OpImage.getExpandedNumBands(source0.getSampleModel(), source0.getColorModel()) + 1.. REP-0069: Internal error REP-50125: Exception caught: java.lang.NullPointerException REP-0002: Unable to retrieve a string from the Report Builder message file. REP-536870981:
    Or maybe the report server tries to parallelize some queries (this report consists of about 5 queries)?
    As said, this is a very complex report (my colleague spent about 3 months of his life creating it, and not because he is a newbie in Reports ;-)), so it's very hard to give you a repcase, but if anyone knows some advice like "edit the <repservername>.conf; append 'DO NEVER EVER PARALLELIZE QUERIES' to the config" or something, this would be very useful ;-).
    many thanks
    best regards
    Christian

    I've now located the problem:
    The report consists of several queries based on a ref cursor, and these cursors are opened and not closed in 10g...
    I'll open a SR on metalink....
    best regards
    Christian
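    As an illustration only (not a Reports fix): a ref cursor handed back to the caller keeps a server-side cursor open until the result set wrapping it is closed, which is exactly the leak described above. A minimal JDBC sketch, assuming the Oracle JDBC driver on the classpath and a hypothetical procedure get_manual_rows that returns a SYS_REFCURSOR:

        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import oracle.jdbc.OracleTypes;

        public class RefCursorDemo {
            // The ref cursor returned by the (hypothetical) procedure occupies one
            // slot in the session's OPEN_CURSORS budget until the ResultSet wrapping
            // it is closed; try-with-resources closes it even on early exit.
            static void readManual(Connection conn) throws SQLException {
                try (CallableStatement cs = conn.prepareCall("{call get_manual_rows(?)}")) {
                    cs.registerOutParameter(1, OracleTypes.CURSOR);
                    cs.execute();
                    try (ResultSet rs = (ResultSet) cs.getObject(1)) {
                        while (rs.next()) {
                            System.out.println(rs.getString(1));
                        }
                    }
                }
            }
        }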

  • STARTING DATABASE : PROBLEM OF Linux Error: 23: Too many open files in system

    Hi everybody,
    I am running an RMAN script and get this error,
    9> @/u01/app/oracle/admin/devpose/backup/configuration.rcv
    RMAN> ###################################################################
    2> # Configuration file used to set Rman policies.
    3> #
    4> ###################################################################
    5>
    6> CONFIGURE DEFAULT DEVICE TYPE TO DISK;
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of configure command at 08/26/2009 20:03:30
    RMAN-06403: could not obtain a fully authorized session
    ORA-01034: ORACLE not available
    RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 1;
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of configure command at 08/26/2009 20:03:30
    RMAN-06403: could not obtain a fully authorized session
    ORA-01034: ORACLE not available
    RMAN> #CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
    2> CONFIGURE DEVICE TYPE DISK PARALLELISM 2;
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of configure command at 08/26/2009 20:03:30
    RMAN-06403: could not obtain a fully authorized session
    ORA-01034: ORACLE not available
    RMAN>
    RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/u01/app/oracle/backup/db/ora_df%t_s%s_s%p';
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of configure command at 08/26/2009 20:03:30
    RMAN-06403: could not obtain a fully authorized session
    ORA-01034: ORACLE not available
    But this part is understandable, as the database is not running. As for why the database is not running, I have found the reason but do not understand how to solve the problem.
    Since the database was not running, I tried to start it up, and then came across the following, which is my real problem (why are so many files open? The Linux OS error says too many files are open in the system; see below):
    SQL> conn /as sysdba
    Connected to an idle instance.
    SQL> startup
    ORACLE instance started.
    Total System Global Area 419430400 bytes
    Fixed Size 779516 bytes
    Variable Size 258743044 bytes
    Database Buffers 159383552 bytes
    Redo Buffers 524288 bytes
    Database mounted.
    ORA-00313: open failed for members of log group 2 of thread 1
    ORA-00312: online log 2 thread 1: '/u01/app/oracle/oradata/devpose/redo02.log'
    ORA-27041: unable to open file
    Linux Error: 23: Too many open files in system
    Can anybody has run into such problem and guide me to a solution, please?
    Thanks

    Hi,
    yes, this DB was functioning OK; this configuration script is part of the RMAN daily backup.
    Last night the backup failed. So, when I opened the "Failed job" in EM, I saw this type of message.
    That was the starting point. Gradually I narrowed it down to the actual problem and posted my findings above.
    One way of solving the problem, I thought, would be to kill all these processes and then try to open the database; it might start up. However, that would not ensure this won't happen again.
    That's why I am trying to understand why it opens so many processes (and why so many .flb files?). Any thoughts you have around this?
    I will try to restart the OS as the last resort.
    Thanks for your help and suggestions.
    Regards,

  • Linux Error: 23: Too many open files in system

    My Oracle instance crashed with the following error:
    Tue Feb 13 22:15:16 2001
    Errors in file /home/oracle/product/8.1.6/admin/v2qa1/bdump/lgwr_14175.trc:
    ORA-00313: open failed for members of log group 2 of thread 1
    ORA-00312: online log 2 thread 1: '/db03/v2qa1/system/log/redo02.log'
    ORA-27041: unable to open file
    Linux Error: 23: Too many open files in system
    Additional information: 2
    LGWR: terminating instance due to error 313
    Instance terminated by LGWR, pid = 14175
    Is the number of open files adjustable?
    Why am I opening files?
    Could the fact that our Java stored procedures are trying (unsuccessfully) to execute be leaving files open?
    Thanks - Craig

    Increasing the oracle user's open-files ulimit fixed that error for me. Type
    ulimit -a
    and look at the "open files" parameter. See your Linux docs for how to increase it (it will probably need to go from 1024 to 4096).

  • Too many open cursors exception caused by LRS Iterator

    Using Kodo 4.1.4 with Oracle 10g and Large Result Set Proxies, I encountered
    the error "maximum number of open cursors exceeded".
    It seems to have been caused by incompletely consumed LRSProxy iterators within
    the context of a single PersistenceManager. These iterators were over
    collections obtained by reachability, not directly from Queries or Extents.
    The Iterator is always closed, but the max-cursors exception still occurs.
    Following is a pseudocode example of the case... Note that if the code is
    refactored to remove the break; statement, then the program works fine, with
    no max-cursors exception.
    Any suggestions?
    // This code pattern is called hundreds of times
    // within the context of a PersistenceManager
    Collection c = persistentObject.getSomeCollection(); // LRS Collection
    Iterator i = c.iterator();
    try {
        while (i.hasNext()) {
            Object o = i.next();
            if (someCondition) {
                break; // if this break is removed, everything is fine
            }
        }
    } finally {
        KodoJDOHelper.close(i);
    }

    XSQL Servlet v. 0.9.9.1
    Netscape Enterprise / JRUN 2.3.3 / Windows NT
    I modified the document demo (insert request).
    The XSQL document:
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="newdocinsform.xsl"?>
    <page connection="demo" xmlns:xsql="urn:oracle-xsql">
    <xsql:insert-request table="xmlclob" transform="newdocins.xsl"/>
    <data>
    <xsql:query null-indicator="yes" max-rows="4">
    select id, doc
    from xmlclob
    order by id desc
    </xsql:query>
    </data>
    </page>
    The difference between this and your demo is the table: the table xmlclob has
    ID NUMBER and DOC CLOB. No constraints were enforced, so I was inserting the ID and the DOC. Upon page reload, several rows with the same values were inserted.
    I had a similar problem in the previous release.
    As a general question, how can I configure the XSQLConfig file for optimal performance?
    Although you provided default values, I'm not sure how much is necessary for connection pooling.

  • TOO many OPEN CURSORS during loop of INSERT's

    Running ODP.NET beta2 (can't move up yet but will do that soon)
    I don't think it is related to ODP itself, but probably to how .NET works with cursors. We have a for/next loop that executes INSERT INTO xxx VALUES (:a,:b,:c)
    statements. Apparently, when monitoring v$sysstat (current open cursors), we see these rising, with 1 INSERT = 1 cursor. If we subsequently try to perform another action, we get "max cursors exceeded". We already set open_cursors = 1000, but the number of inserts can be very high. Is there a way to release these cursors? (We already call oDataAdaptor.Dispose and oCmd.Dispose, but this does not help.)
    Is it normal that each INSERT has its own cursor? They all have the same hash value in v$open_cursor. They seem to be released after a while, especially when moving to another ASP.NET page, but it's not clear when that happens and whether it is possible to force the release of the (implicit?) cursors faster.
    Below is a snippet of the code. I unrolled a couple of function calls into the code, so this is just an example; I'm not sure it will run without errors like this, but the idea should be clear (the code looks rather complex for what it does, but the unrolled functions make the code more generic and give us a database-independent data layer):
    Try
        ' Set the base INSERT statement
        lBaseSql = _
            "INSERT INTO atable(col1,col2,col3) " & _
            "VALUES(:col1,:col2,:col3)"
        ' Initialize a transaction
        lTransaction = oConnection.BeginTransaction()
        ' Create the parameter collection, containing for each
        ' row in the list the arguments
        For Each lDataRow In aList.Rows
            lOracleParameters = New OracleParameterCollection()
            lOracleParameter = New OracleParameter("luserid", OracleDbType.Varchar2, _
                CType(aCol1, Object))
            lOracleParameters.Add(lOracleParameter)
            lOracleParameter = New OracleParameter("part_no", OracleDbType.Varchar2, _
                CType(lDataRow.Item("col2"), Object))
            lOracleParameters.Add(lOracleParameter)
            lOracleParameter = New OracleParameter("revision", OracleDbType.Int32, _
                CType(lDataRow.Item("col3"), Object))
            lOracleParameters.Add(lOracleParameter)
            ' Execute the statement;
            ' if the execution fails because the row already exists,
            ' the insert should be considered successful.
            Try
                Dim aCommand As New OracleCommand()
                Dim retval As Integer
                ' Associate the connection with the command
                aCommand.Connection = oConnection
                ' Set the command text (stored procedure name or SQL statement)
                aCommand.CommandText = lBaseSql
                ' Set the command type
                aCommand.CommandType = CommandType.Text
                ' Attach the command parameters if they are provided
                If Not (lOracleParameters Is Nothing) Then
                    Dim lParameter As OracleParameter
                    For Each lParameter In lOracleParameters
                        ' Check for derived output value with no value assigned
                        If lParameter.Direction = ParameterDirection.InputOutput _
                           And lParameter.Value Is Nothing Then
                            lParameter.Value = Nothing
                        End If
                        aCommand.Parameters.Add(lParameter)
                    Next lParameter
                End If
                ' Finally, execute the command
                retval = aCommand.ExecuteNonQuery()
                ' Detach the OracleParameters from the command object,
                ' so they can be used again
                aCommand.Parameters.Clear()
            Catch ex As Exception
                Dim lErrorMsg As String
                lErrorMsg = ex.ToString
                If Not lTransaction Is Nothing Then
                    lTransaction.Rollback()
                End If
            End Try
        Next
        lTransaction.Commit()
    Catch ex As Exception
        lTransaction.Rollback()
        Throw New DLDataException(aConnection, ex)
    End Try

    I have run into this problem as well. To my mind,
    Phillip's solution will work but seems completely unnecessary. This is work the provider itself should be managing.
    I've done extensive testing with both ODP and OracleClient. Here is one of the scenarios: in a tight loop of 10,000 records, each of which is either inserted or updated via a stored procedure call, the ODP provider throws the "too many open cursors" error at around the 800th iteration, with over 300 cursors open. The exact same code with OracleClient as the provider never throws an error and opens 40+ cursors during execution.
    The application I have updates an Oracle8i database from a DB2 database. There are over 30 tables being updated in near real time. Reusing the command object is not an option, and adding all the code Phillip did for each call seems highly unnecessary. I say Oracle needs to fix this problem. As much as I hate to say it, the Microsoft provider seems superior at this point.

  • Too many open cursors

    Could someone help me understand this problem, and how to remedy it? We're getting warnings as the number of open cursors nears 1200. I've located the V$OPEN_CURSOR view, and after investigating it, this is what I think:
    Currently:
    SQL> select count(*)
    2 from v$open_cursor;
    COUNT(*)
    535
    1) I have one session open in the database, and 40 records in this view. Does that mean my cursors are still in the cursor cache?
    2) Many of these cursors are associated with our analysts, and it looks like they are likely queries TOAD runs in order to gather meta-data for the interface. Can I overcome this?
    3) I thought that the optimizer only opened a new cursor when a query that didn't match one in the cache was executed. When I run the following, I get about 105 SQL statements with the same hash_value and sql_id, which together account for 314 of the 535 open cursors (60% of the open cursors):
    SQL> ed
    Wrote file afiedt.buf
    1 SELECT COUNT(*), SUM(cnt)
    2 FROM (SELECT hash_value,
    3 sql_id,
    4 COUNT(*) as cnt
    5 FROM v$open_cursor
    6 GROUP BY hash_value, sql_id
    7* HAVING COUNT(*) > 1)
    SQL> /
    COUNT(*) SUM(CNT)
    104 314
    4) Most of our connections in production will use Oracle Forms. Is there something we need to do in order to get Forms to use bind variables, or will it do so by default?
    Thanks for helping me out with this.
    -Chuck

    CURSOR_SHARING=EXACT
    OPEN_CURSORS=500
    CURSOR_SHARING
    From what I've read, cursor sharing is always in effect, although we have the most conservative method set. So I'm not sure how this affects things. Several identical queries are being submitted in several separate cursors.
    OPEN_CURSORS
    This value corresponds to the maximum number of cursors allowed for a single session. We're using shared servers, so I'm not exactly sure whether this is still 'per session' or 'per shared server', but 500 should be more than enough.
    It sounds like you're suggesting that a warning is being triggered based upon our init params. If that's the case, then what are people seeing as a limit for cursors on a 2-CPU Linux box with 2G of memory?
    -Chuck
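    Not a Forms-specific answer, but on the bind-variable point in question 4: executions of literally identical SQL text can share one parsed statement, whereas statements that differ only in embedded literals each look new. A minimal JDBC sketch (the orders table and columns are made up) of binding instead of concatenating:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        public class BindVariables {
            // One PreparedStatement, many executions: the SQL text stays identical,
            // so the database can reuse a single parsed statement instead of
            // treating every literal value as a brand-new statement.
            static int countFound(Connection conn, int[] orderIds) throws SQLException {
                int found = 0;
                String sql = "SELECT status FROM orders WHERE order_id = ?"; // hypothetical table
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    for (int id : orderIds) {
                        ps.setInt(1, id);
                        try (ResultSet rs = ps.executeQuery()) {
                            if (rs.next()) {
                                found++; // count rows found, just to use the result
                            }
                        }
                    }
                }
                return found;
            }
        }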

  • Tomcat error - SocketException: Too many open files

    I periodically get this error in Tomcat's CiscoWorks Home/MDC/tomcat/logs/stdout.log file. The symptom occurs after logging in and opening any page on the CiscoWorks web portal (e.g. Common Services). Several applets fail to load, and I'll see the error included in the attached file come up repeatedly after each page refresh.
    After quite a bit of research, I found many recommendations to increase the ulimit for files, using the "ulimit -n" command. I've upped it to "unlimited" and the problem continues to recur.
    Here's my environment:
    LMS 3.2
    Solaris 10 (patched up to the April 2011 CPU )
    64 bit SPARC processor
    Java Runtime Engine 1.5.0_28
    Any more ideas? This might be a case where I need to increase a max network connections setting for each of the Tomcat applications. I've never done that, but that's my best guess.

    The dmgtd script sets the number of files for casuser processes.  Setting the limit outside of dmgtd will not be helpful.  Tomcat should not be running out of file descriptors.  When the problem happens again, post the output of pfiles for the Tomcat PID (use pdshow to find the PID for Tomcat).  There may be a file descriptor leak somewhere.

  • ORA-01000: Too many open cursors -- Need Help

    Hi All,
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    PL/SQL Release 11.1.0.7.0 - Production
    I am getting error ORA-01000 for the following gather-stats procedure.
    Could you please guide me on how to get rid of this error.
    thanks in advance;
     CREATE OR REPLACE PROCEDURE SHEMA_NAME.ANALYZE_TABLES IS
       rec_table_name   VARCHAR2 (30);
       CURSOR c1
       IS
          SELECT table_name
            FROM user_tables;  -- about 18,000 tables for this cursor
    BEGIN
       OPEN c1;
       LOOP
          FETCH c1 INTO rec_table_name;
          EXIT WHEN c1%NOTFOUND;
          -- block was here
          BEGIN
             DBMS_STATS.
             GATHER_TABLE_STATS (
                OWNNAME            => 'SHEMA_NAME',
                TABNAME            => rec_table_name,
                PARTNAME           => NULL,
                ESTIMATE_PERCENT   => 30,
                METHOD_OPT         => 'FOR ALL COLUMNS SIZE AUTO',
                DEGREE             => 5,
                CASCADE            => TRUE);
          END;
       END LOOP;
       CLOSE c1;
    EXCEPTION
       WHEN OTHERS
       THEN
          raise_application_error (
             -20001,
             'An error was encountered - ' || SQLCODE || ' -ERROR- ' || SQLERRM);
    END;

    Look at the following:
    SQL> begin
      2          raise no_data_found;
      3  end;
      4  /
    begin
    ERROR at line 1:
    ORA-01403: no data found
    ORA-06512: at line 2
    The error code the caller executing this code receives is -01403, a unique error number that has a known and specific meaning.
    In addition, the error stack tells the caller that this unique error occurred on line 2 in the source code.
    The caller knows EXACTLY what the error is and where it occurred.
    SQL> begin
      2          raise no_data_found;
      3  exception when OTHERS then
      4          raise_application_error(
      5                  -20000,
      6                  'oh damn some error happened. the error is '||SQLERRM
      7          );
      8  end;
      9  /
    begin
    ERROR at line 1:
    ORA-20000: oh damn some error happened. the error is ORA-01403: no data found
    ORA-06512: at line 4
    In this case the caller gets the error code -20000. It is meaningless, as the same error code will be used for ALL errors (WHEN OTHERS), so the caller will never know what the actual error is.
    For the caller to try to figure that out, it would need to parse and process the error message text to look for the real error code. A very silly thing to do.
    In addition, the error stack says that the error was caused by line 4 in the called code... except that this is the line that raised the meaningless generic error, not the actual line causing the error.
    There are 3 basic reasons for writing an exception handler:
    - the exception is not an error
    - the exception is a system exception (e.g. no data found) and needs to be turned into meaningful application exceptions (e.g. invoice not found, customer not found, zip code not found, etc)
    - the exception handler is used as a try..finally resource protection block (which means it re-raises the exception)
    If your exception handler cannot tick one of these three reasons for existing, you need to ask yourself why you are writing that handler.

  • TOO MANY OPEN CURSORS PROBLEM ... PLEASE HELP

    Hi,
    my problem is the following :
    I got data from a system in flat file format. ( ascii, semicolon separated )
    I wrote mapping classes to different tables and insert via Oracle thin driver.
    The data I get isn't 100% consistent. It may happen that there are duplicate
    records for tables with unique indexes.
    I catch the exception as in the segment below:
    Statement insertStmnt = null;
    try {
        insertStmnt = connection.createStatement();
        insertStmnt.execute(insertString);
        connection.commit(); // autocommit is disabled
    } catch (Exception sql) {
        System.out.println(sql.toString());
        connection.rollback();
        insertStmnt.close();
    }
    The problem: when the SQLException (unique constraint violated) is received,
    the cursor remains open.
    After exceeding the OPEN_CURSORS limit (Oracle),
    no more data is loaded.
    (The input files sometimes contain more than one million rows.)
    Any suggestion to my Mail
    [email protected]
    Thanks

    Hi!
    Now you only close your statement when you catch an error. You will have to close it when things work out fine as well:
    Statement insertStmnt = null;
    try {
        insertStmnt = connection.createStatement();
        insertStmnt.execute(insertString);
        connection.commit(); // autocommit is disabled
        insertStmnt.close();
    } catch (Exception sql) {
        System.out.println(sql.toString());
        connection.rollback();
        insertStmnt.close();
    }
    Good luck!
    /Tale
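    A variant of Tale's fix, sketched with try-with-resources (Java 7+), which closes the statement, and therefore its cursor, on both the success path and the error path without duplicating the close call. The target_table name and columns are placeholders; the commit/rollback handling follows the original snippet:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        public class FlatFileLoader {
            // One row per call; the try-with-resources block guarantees the statement
            // (and therefore its cursor) is closed whether the insert succeeds or a
            // unique-constraint violation is raised.
            static void insertRow(Connection connection, String col1, String col2) {
                String sql = "INSERT INTO target_table (c1, c2) VALUES (?, ?)"; // hypothetical table
                try (PreparedStatement ps = connection.prepareStatement(sql)) {
                    ps.setString(1, col1);
                    ps.setString(2, col2);
                    ps.executeUpdate();
                    connection.commit();       // autocommit is disabled
                } catch (SQLException e) {
                    System.out.println(e.toString());
                    try {
                        connection.rollback(); // duplicate row: skip it and carry on
                    } catch (SQLException ignored) {
                    }
                }
            }
        }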

  • Too many open files in system cause database goes down

    Hello experts, I am very worried about the following problems. I really hope you can help me.
    some server features
    OS: Suse Linux Enterprise 10
    RAM: 32 GB
    CPU: intel QUAD-CORE
    DB: there are 3 RAC database instances (version 11.1.0.7) on the same host.
    Problem: the database instances begin to report the error message: Linux-x86_64 Error: 23: Too many open files in system
    and here are the other error messages:
    ORA-27505: IPC error destroying a port
    ORA-27300: OS system dependent operation:close failed with status: 9
    ORA-27301: OS failure message: Bad file descriptor
    ORA-27302: failure occurred at: skgxpdelpt1
    ORA-01115: IO error reading block from file 105 (block # 18845)
    ORA-01110: data file 105: '+DATOS/dac/datafile/auditoria.519.738586803'
    ORA-15081: failed to submit an I/O operation to a disk
    At the same time I searched /var/log/messages as the root user, and the errors there point to the same problem:
    Feb 7 11:03:58 bls3-1-1 syslog-ng[3346]: Cannot open file /var/log/mail.err for
    writing (Too many open files in system)
    Feb 7 11:04:56 bls3-1-1 kernel: VFS: file-max limit 131072 reached
    Feb 7 11:05:05 bls3-1-1 kernel: oracle[12766]: segfault at fffffffffffffff0 rip
    0000000007c76323 rsp 00007fff466dc780 error 4
    I think I understand the cause: maybe I need to increase the fs.file-max kernel parameter, but I do not know how to set a good value. Here are my sysctl.conf and limits.conf files:
    sysctl.conf
    kernel.shmall = 2097152
    kernel.shmmax = 17179869184
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    fs.file-max = 6553600
    net.ipv4.ip_local_port_range = 1024 65000
    net.core.rmem_default = 4194304
    net.core.rmem_max = 4194304
    net.core.wmem_default = 262144
    net.core.wmem_max = 4194304
    limits.conf
    oracle soft nproc 2047
    oracle hard nproc 16384
    oracle soft nofile 1024
    oracle hard nofile 65536

    process limit
    bcm@bcm-laptop:~$ ulimit -a
    core file size          (blocks, -c) 0
    data seg size           (kbytes, -d) unlimited
    scheduling priority             (-e) 20
    file size               (blocks, -f) unlimited
    pending signals                 (-i) 16382
    max locked memory       (kbytes, -l) 64
    max memory size         (kbytes, -m) unlimited
    open files                      (-n) 1024
    pipe size            (512 bytes, -p) 8
    POSIX message queues     (bytes, -q) 819200
    real-time priority              (-r) 0
    stack size              (kbytes, -s) 8192
    cpu time               (seconds, -t) unlimited
    max user processes              (-u) unlimited
    virtual memory          (kbytes, -v) unlimited
    file locks                      (-x) unlimited

  • "Too many open files" - Mac OS X 10.7.5

    I'm getting errors about "Too many open files" when trying to use Berkeley DB on Mac OS X 10.7.5.
    The BerkeleyDB site mentions this, but it goes back to 2003 and refers to a file which doesn’t exist in OS X 10.7.5:
    http://docs.oracle.com/cd/E17076_03/html/installation/build_unix_macosx.html
    I tried ulimit, but it doesn't seem to have any effect:
    $ sudo ulimit -n 1024 1024
    Password:
    $ ulimit -a
    core file size          (blocks, -c) 0
    data seg size           (kbytes, -d) unlimited
    file size               (blocks, -f) unlimited
    max locked memory       (kbytes, -l) unlimited
    max memory size         (kbytes, -m) unlimited
    open files                      (-n) 256
    pipe size            (512 bytes, -p) 1
    stack size              (kbytes, -s) 8192
    cpu time               (seconds, -t) unlimited
    max user processes              (-u) 709
    virtual memory          (kbytes, -v) unlimited
    Any suggestions?
    Thanks,
    James.

    sudo sysctl -w kern.maxfiles=32768
    sudo sysctl -w kern.maxprocperuid=16384
    seemed to work.
    Created file /etc/sysctl.conf containing:
    kern.maxfiles=32768
    kern.maxprocperuid=16384
    so that the setting would survive a reboot.
    Hopefully that permanently fixes it.

  • VISACOM - Alloc Error using 488.2 USB-B Interface - too many open sessions

    I have been having the following issue in my VB .NET RF-ATE application... It usually happens when my program enters a measurement loop (i.e. searching for P1dB). It begins to solve for P1dB, performs about 15 cycles (sets the power level on the SigGen and takes a SpecAn measurement), and then crashes with the following error:
    As Logged in the Event Viewer :
    VISA: May 13 09:45:22: Error=bfff003c,"VI_ERROR_ALLOC: Insufficient system resources/memory": ViTable::add - too many open sessions
    As Logged in VB .NET :
    An unhandled exception of type 'System.Runtime.InteropServices.COMException' occurred in RFATE.exe
    Additional information: HRESULT = 8004003c
    VI_ERROR_ALLOC
    Not sure why it's happening; my code is pretty solid (or so I thought), and I believe it closes the VISA session properly after each read/write operation (see attached). I call the same procedure to talk to the GPIB instruments over and over.
    Is it possible that I am not freeing and disposing of resources properly? I have read a little about destructors in .NET etc., but I was under the impression that as soon as "End Sub" is executed the resources are freed up? But that doesn't explain why I would get a "too many open sessions" error if I am closing the session after I am finished.
    Another odd thing to note is that I have created test plan scripts with literally hundreds of measurement commands and loaded them into my application, and my program didn't crash then???
    I am very new to VB and .NET (3 months), so I have a lot to learn, but I can't understand why this is occurring. Any help is appreciated.
    Attachments:
    GPIB.txt ‏3 KB

    Just posting a followup... Turns out I found a way to make it work! (I have been fighting this for almost 3 days!).
    If you look at the variable declarations of my procedure.....
    Sub GPIB(ByVal Addr As Object, ByRef Data As Object, ByVal IO As String)
    Dim ioMgr As Ivi.Visa.Interop.ResourceManager
    Dim instrument As Ivi.Visa.Interop.FormattedIO488
    Dim session As Ivi.Visa.Interop.IMessage
    I use the "IMessage" interface for the variable session.
    Just before the "Sub End" is executed I use the following line to close the session :
    session.Close
    Which it would appear doesn't close the session properly when you use the "IMessage" interface.
    So I changed the line to use the "IVisaSession" interface instead :
    Dim session As Ivi.Visa.Interop.IVisaSession
    Now when the Close method is executed it apparently closes the session properly, because my program isn't crashing at all!
    I was using VISA COM 3.0 Reference object (GlobMgr.dll).
    If anyone has any insight on this please do share.

  • Intermittent too many open files error and Invalid TLV error

    Post Author: jam2008
    CA Forum: General
    I'm writing this up in the hopes of saving someone else a couple of days of hair-pulling...
    Environment: Crystal Reports XI Enterprise / also runtime via Accpac ERP 5.4
    Invalid TLV error in Accpac
    "too many open files" error in event.log file
    Situation:
    The Invalid TLV error occurs seemingly randomly on a report created in CR Professional 11. Several days of troubleshooting finally led to the following diagnosis:
    This error occurs in a report that contains MORE THAN 1 bitmap image.
    The error only shows up after 20 or more reports have been generated sequentially, WITHOUT CLOSING the application that is calling the report.  In our case the Invoice Report dialog within Accpac.  This same error occurred in a custom 3rd party VB.NET app that also called the report through an Accpac API.

    After getting this message you need to do 2 things:
    1. Delete the current workspace, because it contains some bad data in one of the config files - failure to delete the workspace will result in the error message appearing even when trying to upload a single file.
    2. Add files to DTR in groups - no more than 500 in a single add.

  • Runtime.exec - Too Many Open Files

    System version : Red Hat Enterprise Linux 2.4.21-47.ELsmp AS release 3 (Taroon Update 8)
    JRE version : 1.6.0-b105
    Important : the commands described below are launched from a Web application : Apache Tomcat 6.0.10
    Hello,
    I'm facing a problem that is already known but apparently never really solved??!! ;)
    When I invoke many system commands with the 'Runtime.exec(...)' method, there are open files that are not released (I can see them with the "lsof" system command).
    At the end, the unavoidable "too many open files" exception.
    The launched commands are "ssh ..." commands.
    In the topics relating to this problem, the solution is always to close all streams/threads and to explicitly invoke the method "Process.destroy()".
    My problem is that this is what I do ! And I can't do more...
    Here is the code :
        Runtime rt = Runtime.getRuntime();
        Process process = rt.exec("ssh ...");
        // ProcessStreamHolder extends Thread and reads from the InputStream given in its constructor...
        ProcessStreamHolder errorStream = new ProcessStreamHolder(process.getErrorStream());
        ProcessStreamHolder outputStream = new ProcessStreamHolder(process.getInputStream());
        errorStream.start();
        outputStream.start();
        exitValue = process.waitFor();
        try {
            errorStream.interrupt();
        } catch (RuntimeException e) {
            logger.warn("...");
        }
        try {
            outputStream.interrupt();
        } catch (RuntimeException e) {
            logger.warn("...");
        }
        try {
            process.getInputStream().close();
        } catch (Exception e) {
            logger.warn("...");
        }
        try {
            process.getOutputStream().close();
        } catch (Exception e) {
            logger.warn("...");
        }
        try {
            process.getErrorStream().close();
        } catch (Exception e) {
            logger.warn("...");
        }
        process.destroy();
    Does someone know if my code is wrong, or if there's a workaround for me?
    Thanks by advance !
    Richard.

    Don't interrupt those threads. Close the output stream first, then wait for the process to exit, then both threads reading the stdout and stderr of the process should get EOFs, so they should exit naturally, and incidentally close the streams themselves.
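    A self-contained sketch of that approach, with the ProcessStreamHolder class from the question replaced by plain reader threads that simply drain each stream to EOF (the command string would be whatever "ssh ..." you are running):

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStream;
        import java.io.InputStreamReader;

        public class ExecWithoutLeaks {
            // Drains one of the child's output streams to EOF on its own thread,
            // then closes it; no interrupt() needed.
            private static Thread drain(final InputStream in) {
                Thread t = new Thread(new Runnable() {
                    public void run() {
                        try (BufferedReader r = new BufferedReader(new InputStreamReader(in))) {
                            while (r.readLine() != null) {
                                // discard (or log) the child's output
                            }
                        } catch (IOException ignored) {
                        }
                    }
                });
                t.start();
                return t;
            }

            static int run(String command) throws IOException, InterruptedException {
                Process process = Runtime.getRuntime().exec(command);
                process.getOutputStream().close();            // nothing to send: close stdin first
                Thread out = drain(process.getInputStream()); // stdout reader exits at EOF
                Thread err = drain(process.getErrorStream()); // stderr reader exits at EOF
                int exitValue = process.waitFor();            // wait for the child to finish
                out.join();                                   // readers end naturally at EOF
                err.join();
                return exitValue;                             // all three descriptors are now released
            }
        }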
