Report causes too many open cursors
Hello there!
I've got the following situation:
I have a very heavy report used for generating our Users Manual. In Reports 6i this report works fine and the Manual generates completely.
In 10g the report starts, formats about 240 pages (in 6i I can generate over 1000 pages with this report), and then aborts with the message "too many open cursors".
So I took a look at the open cursors:
In 6i there are about 100 open cursors caused by this report; in 10g there are... well, in every case too many for the open_cursors parameter of the database (the standard value used by our application is 1000; increasing this to e.g. 5000 resulted in the same behaviour => too many open cursors).
I checked the open cursors while running the report, which showed the following behaviour:
The report formats about 230 pages and opens about 20 cursors (~30 sec.). For the next 10 pages the report opens the remaining 980 cursors (~5 sec.) and stops formatting...
So it seems the report server causes some bad recursion: when restarting the reports server and re-running the report, I sometimes get the following error:
Terminated with error: REP-536870981: Internal error REP-62204: Internal error while writing the image BandCombine: a row of the matrix does not have the correct number of entries, should be OpImage.getExpandedNumBands(source0.getSampleModel(), source0.getColorModel()) + 1.. REP-0069: Internal error REP-50125: Exception caught: java.lang.NullPointerException REP-0002: Unable to retrieve a string from the Report Builder message file. REP-536870981:
Or maybe the report server tries to parallelize some queries (as this report consists of about 5 queries)?
As said, this is a very complex report (my colleague spent about 3 months of his life creating it, and not because he is a novice with Reports ;-)), so it's very hard to give you a reproducible case. But if anyone knows some advice like "edit <repservername>.conf and append 'never parallelize queries'", that would be very useful ;-).
many thanks
best regards
Christian
I've now located the problem:
The report consists of several queries based on a ref cursor, and these cursors are opened but not closed in 10g...
I'll open an SR on Metalink....
best regards
Christian
Similar Messages
-
Too many open cursors exception caused by LRS Iterator
Using Kodo 4.1.4 with Oracle 10, and Large Result Set Proxies, I encountered
the error "maximum number of open cursors exceeded".
It seems to have been caused by incompletely consumed LRSProxy iterators within
the context of a single PersistenceManager. These iterators were over
collections obtained by reachability, not directly from Queries or Extents.
The Iterator is always closed, but the max-cursors exception still occurs.
Following is a pseudocode example of the case... Note that if the code is
refactored to remove the break; statement, then the program works fine, with
no max-cursors exception.
Any suggestions?
// This code pattern is called hundreds of times
// within the context of a PersistenceManager
Collection c = persistentObject.getSomeCollection(); // LRS Collection
Iterator i = c.iterator();
try {
    while (i.hasNext()) {
        Object o = i.next();
        if (someCondition) {
            break; // if this break is removed, everything is fine
        }
    }
} finally {
    KodoJDOHelper.close(i);
}
-
XSQL Servlet v. 0.9.9.1
Netscape Enterprise / JRUN 2.3.3 / Windows NT
I modified the document demo (insert request).
The XSQL document:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="newdocinsform.xsl"?>
<page connection="demo" xmlns:xsql="urn:oracle-xsql">
<xsql:insert-request table="xmlclob" transform="newdocins.xsl"/>
<data>
<xsql:query null-indicator="yes" max-rows="4">
select id, doc
from xmlclob
order by id desc
</xsql:query>
</data>
</page>
The difference between this and your demo is the table: the table xmlclob has
ID NUMBER and DOC CLOB. No constraints were enforced, so I was inserting the ID and the DOC. Upon page reload, several rows with the same values were inserted.
I had a similar problem in the previous release.
As a general question, how can I configure the XSQLConfig file for optimal performance?
Although you provided default values, I'm not sure how much is necessary for connection pooling. -
TOO many OPEN CURSORS during loop of INSERT's
Running ODP.NET beta2 (can't move up yet but will do that soon)
I don't think it is related to ODP itself, but probably to how .NET works with cursors. We have a for/next loop that executes INSERT INTO xxx VALUES (:a,:b,:c)
statements. Apparently, when monitoring v$sysstat (current open cursors), we see these rising, with 1 INSERT = 1 cursor. If we subsequently try to perform another action, we get "max cursors exceeded". We already set open_cursors = 1000, but the number of inserts can be very high. Is there a way to release these cursors (we already tried oDataAdaptor.Dispose and oCmd.Dispose, but this does not help)?
Is it normal that each INSERT has its own cursor? They all have the same hash value in v$open_cursor. They seem to be released after a while, especially when moving to another ASP.NET page, but it's not clear when that happens and whether it is possible to force the release of the (implicit?) cursors faster.
Below is a snippet of the code. I unrolled a couple of function calls into the code, so this is just an example; I'm not sure it will run without errors like this, but the idea should be clear (the code looks rather complex for what it does, but the unrolled functions make the code more generic and we have a database-independent data layer):
Try
' Set the Base Delete statement
lBaseSql = _
"INSERT INTO atable(col1,col2,col3) " & _
"VALUES(:col1,:col2,:col3)"
' Initialize a transaction
lTransaction = oConnection.BeginTransaction()
' Create the parameter collection, containing for each
' row in the list the arguments
For Each lDataRow In aList.Rows
lOracleParameters = New OracleParameterCollection()
lOracleParameter = New OracleParameter("luserid", OracleDbType.Varchar2, _
CType(aCol1, Object))
lOracleParameters.Add(lOracleParameter)
lOracleParameter = New OracleParameter("part_no", OracleDbType.Varchar2, _
CType(lDataRow.Item("col2"), Object))
lOracleParameters.Add(lOracleParameter)
lOracleParameter = New OracleParameter("revision", OracleDbType.Int32, _
CType(lDataRow.Item("col3"), Object))
lOracleParameters.Add(lOracleParameter)
' Execute the Statement;
' If the execution fails because the row already exists,
' then the insert should be considered as succesfull.
Try
Dim aCommand As New OracleCommand()
Dim retval As Integer
'associate the aConnection with the aCommand
aCommand.Connection = oConnection
'set the aCommand text (stored procedure name or SQL statement)
aCommand.CommandText = lBaseSQL
'set the aCommand type
aCommand.CommandType = CommandType.Text
'attach the aCommand parameters if they are provided
If Not (lOracleParameters Is Nothing) Then
Dim lParameter As OracleParameter
For Each lParameter In lOracleParameters
'check for derived output value with no value assigned
If lParameter.Direction = ParameterDirection.InputOutput _
And lParameter.Value Is Nothing Then
lParameter.Value = Nothing
End If
aCommand.Parameters.Add(lParameter)
Next lParameter
End If
' finally, execute the aCommand.
retval = aCommand.ExecuteNonQuery()
' detach the OracleParameters from the aCommand object,
' so they can be used again
aCommand.Parameters.Clear()
Catch ex As Exception
Dim lErrorMsg As String
lErrorMsg = ex.ToString
If Not lTransaction Is Nothing Then
lTransaction.Rollback()
End If
End Try
Next
lTransaction.Commit()
Catch ex As Exception
lTransaction.Rollback()
Throw New DLDataException(aConnection, ex)
End Try
I have run into this problem as well. To my mind Phillip's solution will work but seems completely unnecessary. This is work the provider itself should be managing.
I've done extensive testing with both ODP and OracleClient. Here is one of the scenarios: in a tight loop of 10,000 records, each of which is either inserted or updated via a stored procedure call, the ODP provider throws the "too many cursors" error at around the 800th iteration, with over 300 cursors open. The exact same code with OracleClient as the provider never throws an error and opens 40+ cursors during execution.
The application I have updates an Oracle8i database from a DB2 database. There are over 30 tables being updated in near real time. Reusing the command object is not an option, and adding all the code Phillip did for each call seems highly unnecessary. I say Oracle needs to fix this problem. As much as I hate to say it, the Microsoft provider seems superior at this point. -
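The one-cursor-per-INSERT pattern described in this message can be contrasted with preparing a statement once and re-binding values for every row. Below is a minimal, runnable sketch of that idea using Python's sqlite3 (purely an illustration of the pattern, not the ODP.NET API; the table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE atable (col1 TEXT, col2 TEXT, col3 INTEGER)")

rows = [("user1", "part-a", 1), ("user2", "part-b", 2), ("user3", "part-c", 3)]

# One cursor, one prepared statement, many bindings:
# the driver parses the INSERT once and reuses it for every row,
# instead of allocating a fresh statement handle per iteration.
cur = conn.cursor()
cur.executemany("INSERT INTO atable (col1, col2, col3) VALUES (?, ?, ?)", rows)
conn.commit()
cur.close()  # release the cursor explicitly when done

print(conn.execute("SELECT COUNT(*) FROM atable").fetchone()[0])  # -> 3
```

The design point is the same one the testers above are making: the statement handle, not the connection, is the scarce resource, so reuse it across the loop instead of creating one per row.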
Runtime Error - DBIF_RSQL_INVALID_RSQL - Too many OPEN CURSOR
When I try to train a Decision Tree Model via an APD process in RSANWB, I get a runtime error when my model is configured with too many parameter fields or too many leaves (with 2 leaves it works, with more it fails).
By searching SAP Notes I see that there are many references to this kind of runtime error, but no note on occurrences of it in RSANWB / RSDMWB.
Any information on this anyone?
Claudio Ciardelli
Runtime Errors DBIF_RSQL_INVALID_RSQL
Date and Time 29.07.2005 16:19:21
|ShrtText |
| Error in RSQL module of database interface. |
|What happened? |
| Error in ABAP application program. |
| |
| The current ABAP program "SAPLRS_DME_DECISION_TREE_PRED" had to be terminated |
| because one of the |
| statements could not be executed. |
| |
| This is probably due to an error in the ABAP program. |
| |
|Error analysis |
| The system attempted to open a cursor for a SELECT or OPEN CURSOR |
| statement but all 16 cursors were already in use. |
| The statement that failed accesses table "/BIC/0CDT000030 ". |
| The erroneous statement accesses table "/BIC/0CDT000030 ". |
|Trigger Location of Runtime Error |
| Program SAPLRS_DME_DECISION_TREE_PRED |
| Include LRS_DME_DECISION_TREE_PREDU06 |
| Row 103 |
| Module type (FUNCTION) |
| Module Name RS_DME_DTP_EVALUATE |
|Source Code Extract |
|Line |SourceCde |
| 73|* Prepare for Data evaluation |
| 74| CATCH SYSTEM-EXCEPTIONS OTHERS = 15. |
| 75| CREATE DATA ref TYPE (i_enum_dbtab). |
| 76| ASSIGN ref->* TO <fs_wkarea>. |
| 77| ASSIGN COMPONENT gv_class_dbposit OF STRUCTURE |
| 78| <fs_wkarea> TO <fs_class>. |
| 79| CREATE DATA ref TYPE TABLE OF (i_enum_dbtab). |
| 80| ASSIGN ref->* TO <ft_data>. |
| 81| |
| 82| ENDCATCH. |
| 83| IF sy-subrc = 15. |
| 84|* Error on Assignment. |
| 85| CALL FUNCTION 'RS_DME_COM_ADDMSG_NOLOG' |
| 86| EXPORTING |
| 87| i_type = 'E' |
| 88| i_msgno = 301 |
| 89| i_msgv1 = 'EVALUATION_PHASE' |
| 90| IMPORTING |
| 91| es_return = ls_return. |
| 92| APPEND ls_return TO e_t_return. |
| 93| EXIT. |
| 94| ENDIF. |
| 95| |
| 96|* For the un-trained Rec-Ids, evaluate..... |
| 97| REFRESH lt_recinp. |
| 98| APPEND LINES OF i_t_records TO lt_recinp. |
| 99| SORT lt_recinp . |
| 100|* Open Cursor.. |
| 101| DATA: l_curs TYPE cursor. |
| 102| DATA: l_psize TYPE i VALUE 10000. |
|>>>>>| OPEN CURSOR WITH HOLD l_curs FOR |
| 104| SELECT * FROM (i_enum_dbtab) |
| 105| WHERE rsdmdt_recid NOT IN |
| 106| ( SELECT rsdmdt_recid FROM |
| 107| (i_learn_tab) ). |
| 108| |
| 109|* Start Fetch... |
| 110| DO. |
| 111| FETCH NEXT CURSOR l_curs |
| 112| INTO CORRESPONDING FIELDS OF TABLE <ft_data> |
| 113| PACKAGE SIZE l_psize. |
| 114| IF sy-subrc NE space. |
| 115| EXIT. |
| 116| ENDIF. |
| 117| |
| 118|* Process records... |
| 119| LOOP AT <ft_data> ASSIGNING <fs_wkarea>. |
| 120| |
| 121|* Call Prediction Function. |
| 122| CALL FUNCTION 'RS_DME_DTP_PREDICT_STRUCTURE' |
Hi Claudio,
well, the message is very clear. I think in your case you need to split your model into several roughly equal models, each having no more than 2 leaves.
Another option might be to do more things serially instead of parallel.
Hope it helps
regards
Siggi -
Could someone help me understand this problem, and how to remedy it? We're getting warnings as the number of open cursors nears 1200. I've located the V$OPEN_CURSOR view, and after investigating it, this is what I think:
Currently:
SQL> select count(*)
2 from v$open_cursor;
COUNT(*)
535
1) I have one session open in the database, and 40 records in this view. Does that mean my cursors are still in the cursor cache?
2) Many of these cursors are associated with our analysts, and it looks like they are likely queries TOAD runs in order to gather meta-data for the interface. Can I overcome this?
3) I thought that the optimizer only opened a new cursor when a query that didn't match one in the cache was executed. When I run the following, I get 105 SQL statements with the same hash_value and sql_id, of which, they total 314 of the 535 open cursors (60% of the open cursors):
SQL> ed
Wrote file afiedt.buf
1 SELECT COUNT(*), SUM(cnt)
2 FROM (SELECT hash_value,
3 sql_id,
4 COUNT(*) as cnt
5 FROM v$open_cursor
6 GROUP BY hash_value, sql_id
7* HAVING COUNT(*) > 1)
SQL> /
COUNT(*) SUM(CNT)
104 314
4) Most of our connections in production will use Oracle Forms. Is there something we need to do in order to get Forms to use bind variables, or will it do so by default?
Thanks for helping me out with this.
-Chuck
CURSOR_SHARING=EXACT
OPEN_CURSORS=500
CURSOR_SHARING
From what I've read, cursor sharing is always in effect, although we have the most conservative method set. So I'm not sure how this affects things. Several identical queries are being submitted in several separate cursors.
OPEN_CURSORS
This value corresponds to the maximum number of cursors allowed for a single session. We're using shared servers, so I'm not exactly sure if this is still 'per session' or 'per shared server', but 500 should be more than enough.
It sounds like you're suggesting that a warning is being triggered based upon our init params. If that's the case, then what are people seeing as a limit for cursors on a 2-CPU Linux box with 2G of memory?
-Chuck -
ORA-01000: Too many open cursors -- Need Help
Hi All,
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
PL/SQL Release 11.1.0.7.0 - Production
I am getting error ORA-01000 from the following gather-stats procedure.
Could you please advise how to get rid of this error?
thanks in advance;
CREATE OR REPLACE PROCEDURE SHEMA_NAME.ANALYZE_TABLES IS
rec_table_name VARCHAR2 (30);
CURSOR c1
IS
SELECT table_name
FROM user_tables; -- 18000 tables for this cursor
BEGIN
OPEN c1;
LOOP
FETCH c1 INTO rec_table_name;
EXIT WHEN c1%NOTFOUND;
-- block was here
BEGIN
DBMS_STATS.
GATHER_TABLE_STATS (
OWNNAME => 'SHEMA_NAME',
TABNAME => rec_table_name,
PARTNAME => NULL,
ESTIMATE_PERCENT => 30,
METHOD_OPT => 'FOR ALL COLUMNS SIZE AUTO',
DEGREE => 5,
CASCADE => TRUE);
END;
END LOOP;
CLOSE c1;
EXCEPTION
WHEN OTHERS
THEN
raise_application_error (
-20001,
'An error was encountered - ' || SQLCODE || ' -ERROR- ' || SQLERRM);
END;
Look at the following:
SQL> begin
2 raise no_data_found;
3 end;
4 /
begin
ERROR at line 1:
ORA-01403: no data found
ORA-06512: at line 2
The error code the caller executing this code receives is -01403: a unique error number that has a known and specific meaning.
In addition, the error stack tells the caller that this unique error occurred on line 2 in the source code.
The caller knows EXACTLY what the error is and where it occurred.
SQL> begin
2 raise no_data_found;
3 exception when OTHERS then
4 raise_application_error(
5 -20000,
6 'oh damn some error happened. the error is '||SQLERRM
7 );
8 end;
9 /
begin
ERROR at line 1:
ORA-20000: oh damn some error happened. the error is ORA-01403: no data found
ORA-06512: at line 4
In this case the caller gets the error code -20000. It is meaningless, as the same error code will be used for ALL errors (when OTHERS). So the caller will never know what the actual error is.
For the caller to try and figure that out, it would need to parse the error message text to look for the real error code. A very silly thing to do.
In addition, the error stack says that the error was caused by line 4 in the called code... except that this is the line that raised the meaningless generic error, not the actual line causing the error.
There are 3 basic reasons for writing an exception handler:
- the exception is not an error
- the exception is a system exception (e.g. no data found) and needs to be turned into meaningful application exceptions (e.g. invoice not found, customer not found, zip code not found, etc)
- the exception handler is used as a try..finally resource protection block (which means it re-raises the exception)
If your exception handler cannot tick one of these three reasons for existing, you need to ask yourself why you are writing that handler. -
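The third reason above (a try..finally resource-protection block) can be sketched generically. This is a hedged illustration in Python, with an invented Resource class, showing that cleanup runs while the original exception propagates unchanged:

```python
class Resource:
    """A stand-in for any handle that must be released (cursor, file, socket)."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

def work(fail):
    r = Resource()
    try:
        if fail:
            # The real error is raised as-is; the handler does not replace it
            # with a meaningless generic one.
            raise ValueError("real error, unchanged")
        return "ok"
    finally:
        r.close()  # cleanup always runs; the exception (if any) keeps propagating

print(work(False))  # -> ok
```

Calling `work(True)` still raises the original ValueError to the caller; only the cleanup happens in between, which is exactly the "resource protection" use of a handler.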
TOO MANY OPEN CURSORS PROBLEM ... PLEASE HELP
Hi,
my problem is the following :
I got data from a system in flat file format. ( ascii, semicolon separated )
I wrote mapping classes to different tables and insert via Oracle thin driver.
The data I got isn't 100% consistent. It may happen that there are duplicate
records for tables with unique indexes.
I caught the exception as in the segment below:
Statement insertStmnt = null;
try {
    insertStmnt = connection.createStatement();
    insertStmnt.execute(insertString);
    connection.commit(); // autocommit is disabled
} catch ( Exception sql ) {
    System.out.println(sql.toString());
    connection.rollback();
    insertStmnt.close();
}
The problem: when receiving the SQLException (UNIQUE CONSTRAINT VIOLATED), the cursor remains open.
After exceeding the open_cursors parameter (Oracle), no more data is loaded.
(The input files sometimes contain more than one million rows.)
Any suggestion to my Mail
[email protected]
Thanks
Hi!
Now you only close your statement when you catch an error. You have to close it when things work out fine as well:
Statement insertStmnt = null;
try {
    insertStmnt = connection.createStatement();
    insertStmnt.execute(insertString);
    connection.commit(); // autocommit is disabled
    insertStmnt.close();
} catch ( Exception sql ) {
    System.out.println(sql.toString());
    connection.rollback();
    insertStmnt.close();
}
Good luck!
/Tale -
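A step further than closing on both paths is to close in a finally block, so the handle is released no matter how the try block exits. Here is a runnable sketch of that pattern using Python's sqlite3 (an illustration only; the table is invented), where a unique-constraint violation is caught and the cursor is still closed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")  # unique index on id

inserted, duplicates = 0, 0
for value in [1, 2, 2, 3, 1]:  # deliberately contains duplicates
    cur = conn.cursor()
    try:
        cur.execute("INSERT INTO t (id) VALUES (?)", (value,))
        conn.commit()
        inserted += 1
    except sqlite3.IntegrityError:  # the "unique constraint violated" case
        conn.rollback()
        duplicates += 1
    finally:
        cur.close()  # released on success AND on failure

print(inserted, duplicates)  # -> 3 2
```

With close in finally there is no code path, successful or failing, that leaks the cursor, which is exactly the leak the original poster hit.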
Too many open files in system cause database goes down
Hello experts, I am very worried about the following problem. I really hope you can help me.
some server features
OS: Suse Linux Enterprise 10
RAM: 32 GB
CPU: intel QUAD-CORE
DB: There are 3 RAC database instances (version 11.1.0.7) on the same host.
Problem: The database instances begin to report Error message: Linux-x86_64 Error: 23: Too many open files in system
and here you are other error messages:
ORA-27505: IPC error destroying a port
ORA-27300: OS system dependent operation:close failed with status: 9
ORA-27301: OS failure message: Bad file descriptor
ORA-27302: failure occurred at: skgxpdelpt1
ORA-01115: IO error reading block from file 105 (block # 18845)
ORA-01110: data file 105: '+DATOS/dac/datafile/auditoria.519.738586803'
ORA-15081: failed to submit an I/O operation to a disk
At the same time I searched /var/log/messages as the root user, and the errors there point to the same problem:
Feb 7 11:03:58 bls3-1-1 syslog-ng[3346]: Cannot open file /var/log/mail.err for
writing (Too many open files in system)
Feb 7 11:04:56 bls3-1-1 kernel: VFS: file-max limit 131072 reached
Feb 7 11:05:05 bls3-1-1 kernel: oracle[12766]: segfault at fffffffffffffff0 rip
0000000007c76323 rsp 00007fff466dc780 error 4
I think I understand the cause: maybe I need to increase the fs.file-max kernel parameter, but I do not know what a good value is. Here are my sysctl.conf and limits.conf files:
sysctl.conf
kernel.shmall = 2097152
kernel.shmmax = 17179869184
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6553600
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 4194304
limits.conf
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
process limit
bcm@bcm-laptop:~$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 20
file size (blocks, -f) unlimited
pending signals (-i) 16382
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) unlimited
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited -
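For completeness, the per-process limits shown by ulimit -a above can also be inspected (and the soft limit raised, up to the hard limit) from code. A small sketch using Python's resource module (Unix only):

```python
import resource

# Read the current per-process limit on open file descriptors
# (the same numbers `ulimit -n` / `ulimit -a` report).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("open files: soft=%d hard=%d" % (soft, hard))

# A process may raise its own soft limit, up to the hard limit, without
# privileges; raising the hard limit itself requires root. The 4096 here
# is just an illustrative target, not a recommendation.
new_soft = min(4096, hard) if hard != resource.RLIM_INFINITY else 4096
resource.setrlimit(resource.RLIMIT_NOFILE, (max(soft, new_soft), hard))
```

This only changes the calling process and its children; system-wide limits (fs.file-max, limits.conf) still have to be tuned as discussed in the message above.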
Operation Could Not Be Completed: Too Many Open...
I've had this error showing up a lot in the Safari 3.2 and 4.0 beta activity window when loading multiple tabs of pages with many images (usually gallery type pages or blogs with lots of thumbnails). The result is that some images or other elements like style sheets don't load, leading to the blue question mark, or messed up formatting. Unfortunately, the activity window can't be stretched wide enough to see the whole error, but in fact it's "Operation Could Not Be Completed: Too Many Open Files". The error "socket(PF_ROUTE) failed: Too many open files" shows up in the Console as well, at least with Safari 4. I had also experienced similar problems with "Operation Timed Out" errors as well.
I've been pulling my hair out on this one, because it's rather inconsistent, not to mention annoying. You can usually get an individual page to load completely by refreshing it, but that kind of defeats the purpose of loading multiple tabs all at once. It's also less of a problem if more of those pages are already cached, but you never know for sure. Running your connection through a proxy also helps a bit, but not always.
I found a fix, but it's actually not Safari's "fault" per se. The issue lies in the allowable number of open files per user process, which is set by the system's launchd process at boot time. Note that Safari is not entirely innocent, as Firefox doesn't have this problem. It seems that Safari just tries to load everything all at once, whereas Firefox does a better job of managing its load requests. Anyway, if you run the following command in Terminal:
sudo launchctl limit
the following list should show up (with perhaps slightly different values)
cpu unlimited unlimited
filesize unlimited unlimited
data 6291456 unlimited
stack 8388608 67104768
core 0 unlimited
rss unlimited unlimited
memlock unlimited unlimited
maxproc 200 532
maxfiles 256 unlimited
The second column is a "soft limit" and the third column is a "hard limit", though to be honest I'm not exactly sure what the difference entails. The image loading problem is caused by hitting the maxfiles limit of just 256 files. The solution is to change maxfiles to 4096/unlimited, and also change maxproc to 1000/2000 since it's pretty low as well. That sounds like a pretty big change, but OS X server is supposed to change them to numbers like this when services like Apache are enabled, and Apple even mentions how to change maxproc at http://support.apple.com/kb/TS1659
To make these changes, run the following two commands in Terminal and restart the computer:
echo "limit maxproc 1000 2000" | sudo tee -a /etc/launchd.conf
echo "limit maxfiles 4096 unlimited" | sudo tee -a /etc/launchd.conf
The commands add the two lines in quotes to the launchd.conf file in /etc/ (if no file exists yet, it creates it). That should clear up the loading issues. I haven't noticed any other problems with these increased numbers, but I'll report back if anything seems to go amiss. Hopefully this will be helpful to someone.
I faced the same problem with an image gallery using CSS for image resizing. Thanks for the explanation.
-
"java.io.IOException: Too many open files" in LinuX
Hi Developers,
* I am continuously running and processing more than 2000 XML files using SAX and DOM.....
* My process is as follows:
- converting each XML file to a Document object via DOM....
- that DOM is used while creating the log file report, which is created after processing all XML files..
* After processing approx 1000 files, it throws *"java.io.IOException: Too many open files" on the Linux system* ....
* I have googled many sites including the Sun forum, but they only say to increase the system limit via ulimit in Linux.... If I increase that, it executes fine without the exception........
* My question is: is it possible to handle this in Java code itself, or via VM arguments like -Xms512m and -Xmx512m.....?
* Please let me know , if you have any idea.....
Thanks And Regards,
JavaImran
Doh! I forgot to post my little code sample...
package forums.crap;
import java.io.*;
import java.util.*;
public class TooManyFileHandles
{
    private static final int HOW_MANY = 8*1024;
    public static void main(String[] args) {
        List<PrintWriter> writers = new ArrayList<PrintWriter>(HOW_MANY);
        try {
            try {
                for (int i=1; i<=HOW_MANY; i++ ) {
                    writers.add(new PrintWriter("file"+i+".txt"));
                }
            } finally {
                for (PrintWriter w : writers) {
                    if (w != null) w.close();
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
... and the problem still isn't OOME ;-)
Cheers. Keith. -
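On the original question of whether this can be handled in Java code itself: the in-code half of the fix is simply to close every stream as soon as its file is processed, instead of holding handles open across the whole batch (Keith's sample above shows what happens when you don't). A runnable sketch of the pattern (shown in Python for brevity; the file names and payloads are invented):

```python
import os
import tempfile

workdir = tempfile.mkdtemp()

# Create a batch of small input files to stand in for the XML documents.
for i in range(100):
    with open(os.path.join(workdir, "doc%d.txt" % i), "w") as f:
        f.write("payload %d" % i)

processed = 0
for name in sorted(os.listdir(workdir)):
    # The with-statement closes each file as soon as it is processed,
    # so at most one descriptor for this batch is open at any time.
    with open(os.path.join(workdir, name)) as f:
        f.read()
    processed += 1

print(processed)  # -> 100
```

Raising the ulimit only moves the ceiling; bounding how many handles are open at once removes the problem regardless of batch size.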
"Too many open files" Exception on "tapestry-framework-4.1.1.jar"
When a browser accesses my webwork, the server opens a certain number of file descriptors for the "tapestry-framework-4.1.1.jar" file and doesn't release them for a while.
Below is the output from "lsof | grep tapestry":
java 26735 root mem REG 253,0 62415 2425040 /usr/local/apache-tomcat-5.5.20/my_webwork/WEB-INF/lib/tapestry-portlet-4.1.1.jar
java 26735 root mem REG 253,0 2280602 2425039 /usr/local/apache-tomcat-5.5.20/my_webwork/WEB-INF/lib/tapestry-framework-4.1.1.jar
java 26735 root mem REG 253,0 320546 2425036 /usr/local/apache-tomcat-5.5.20/my_webwork/WEB-INF/lib/tapestry-contrib-4.1.1.jar
java 26735 root mem REG 253,0 49564 2424979 /usr/local/apache-tomcat-5.5.20/my_webwork/WEB-INF/lib/tapestry-annotations-4.1.1.jar
java 26735 root 28r REG 253,0 2280602 2425039 /usr/local/apache-tomcat-5.5.20/my_webwork/WEB-INF/lib/tapestry-framework-4.1.1.jar
java 26735 root 29r REG 253,0 2280602 2425039 /usr/local/apache-tomcat-5.5.20/my_webwork/WEB-INF/lib/tapestry-framework-4.1.1.jar
java 26735 root 30r REG 253,0 2280602 2425039 /usr/local/apache-tomcat-5.5.20/my_webwork/WEB-INF/lib/tapestry-framework-4.1.1.jar
These unknown references are sometimes released automatically, but sometimes not.
And I get "Too many open files" exception after using my application for a few hours.
The number of the unknown references increases as I access my webwork, or even just hit the "F5" key in my browser to reload it.
I tried different browsers to see if there were any differences, and in fact the behaviour differed by browser.
When viewed with Internet Explorer it increased by 3 for every access.
On the other hand, it increased by 7 for each attempt when accessed with Firefox.
I have already tried raising the maximum number of file descriptors, and that resolved the "Too many open files" exception.
But still I'm wondering what is actually opening "tapestry-framework-4.1.1.jar" this many times.
Could anyone figure out what is going on?
Thanks in advance.
The following is my environmental version info:
- Red Hat Enterprise Linux ES release 4 (Nahant Update 4)
- Java: 1.5.0_11
- Tomcat: 5.5.20
- Tapestry: 4.1.1
Hi,
The cause might be that the server got an exception while trying to accept client connections; it will try to back off to aid recovery.
The OS limit for the number of open file descriptors (FD limit) needs to be increased. Also tune OS parameters that might help the server accept more client connections (e.g. the TCP accept backlog).
http://e-docs.bea.com/wls/docs90/messages/Server.html#BEA-002616
Regards,
Prasanna Yalam -
UTL_MAIL --- ORA-30678: too many open connections
Hello,
I have a PL/SQL package that sends out emails using the UTL_MAIL package, pointing to an Exchange server; an APEX app calls this package. The package used to work fine for months, but I recently noticed that some emails are not being sent as expected. The package loops through a set of action items satisfying some conditions and sends emails based on that (this number is expected to grow every day). I checked the error log and found this error:
ORA-30678: too many open connections
I think this means that I have to close the connection every time I send an email, but UTL_MAIL does NOT have a function or procedure to close connections, right?
I don't know what causes this error to happen, but I suspect that this started happening right after we re-pointed the UTL_MAIL pkg from a Lotus Notes server to an Exchange server.
I am also seeing this error:
ORA-29279: SMTP permanent error: 501 5.1.3 Invalid address
I know where this error comes from (usually a null email id in the FROM or TO field ), but can this be causing the first error to happen ?
Please advise if you got this error before, is it a bug in oracle 10g as I read in some blog ? or is the second error happening make the Exchange server refuse SMTP connections ???
Thanks,
Sam
Hi Sam,
seems to be a bug in UTL_MAIL if you ask me, as you are right - there is only /send/, no option to close, so I'd expect this to be done automatically.
Anyway, though UTL_MAIL is usable for basic mailing, I prefer a custom mail implementation based on UTL_SMTP. The most important reason is that most mail servers don't work without authentication. And once you have done this, you can reuse the function/procedure/package as simply as UTL_MAIL. The good news is that several published examples provide the functionality of UTL_MAIL at once, with the difference that you definitely get your connection closed when you expect it to be closed.
You'll also be able to handle empty addresses. Perhaps that error actually causes UTL_MAIL to "forget" to close the connection, if the exception isn't caught internally in order to close the open connection before re-raising it.
One example implementation for using UTL_SMTP can be found [url http://www.morganslibrary.com/reference/pkgs/utl_smtp.html]here
-Udo -
Intermittent too many open files error and Invalid TLV error
Post Author: jam2008
CA Forum: General
I'm writing this up in the hopes of saving someone else a couple of days of hair-pulling...
Environment: Crystal Reports XI Enterprise / also runtime via Accpac ERP 5.4
Invalid TLV error in Accpac
"too many open files" error in event.log file
Situation:
Invalid TLV error occurs seemingly randomly on report created in CR Professional 11. Several days of troubleshooting finally lead to the following diagnosis:
This error occurs in a report that contains MORE THAN 1 bitmap image.
The error only shows up after 20 or more reports have been generated sequentially, WITHOUT CLOSING the application that is calling the report (in our case the Invoice Report dialog within Accpac). This same error occurred in a custom 3rd-party VB.NET app that also called the report through an Accpac API.
After getting this message you need to do 2 things:
1. Delete the current workspace, because it contains some bad data in one of the config files; failure to delete the workspace will result in the error message appearing even when trying to upload a single file.
2. Add DTR files in groups of no more than 500 per add. -
WLS 92MP1: Application Poller issue Too many open files
Hi,
We have a wls92mp1 domain on Linux AS4 (64-bit) with Sun JDK 1.5.0_14. It contains only the Admin server, where we have deployed the application. Over a period of time the server starts showing the message below in the logs. We have not deployed the application from the autodeploy directory, and the file "/home/userid/wls92/etg/servers/userid_a/cache/.app_poller_lastrun" exists at that location, yet it throws a FileNotFoundException.
<Error> <Application Poller> <BEA-149411> <I/O exception encountered java.io.FileNotFoundException: /home/userid/wls92/etg/servers/userid_a/cache/.app_poller_lastrun (Too many open files).
java.io.FileNotFoundException: /home/userid/wls92/etg/servers/userid_a/cache/.app_poller_lastrun (Too many open files)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:179)
at java.io.FileOutputStream.<init>(FileOutputStream.java:131)
at java.io.FileWriter.<init>(FileWriter.java:73)
at weblogic.management.deploy.GenericAppPoller.setLastRunFileMap(GenericAppPoller.java:423)
Any help regarding this would be highly appreciated.
Thanks.
Hi,
Looking at the error above, error code BEA-149411 is described as follows:
149411: I/O exception encountered {0}.
L10n Package: weblogic.management.deploy.internal
I18n Package: weblogic.management.deploy.internal
Subsystem: Application Poller
Severity: Error
Stack Trace: true
Message Detail: An I/O exception denotes a failure to perform a read/write operation on the application files.
Cause: An I/O exception can occur during file read/write while deploying an application.
Action: Take corrective action based on the exception message details.
I think it helps you.
-abhi -
Open Directory logging? (Too many open files)
I recently had a request from another admin to up the logging output of my OD servers. So I opened WGM, opened the OLCGlobalConfig section and changed the olcLogLevel to Filter. At which point my OD server stopped allowing edits and updates to the server. And started reporting "too many open files" in the system.log. So I guess I have 2 questions.
1) What is the proper method for increasing the log level of an OD server? The other admin is looking for query data and queries per second data. So if there is some other way of getting that data I'm all for it.
2) Is there any solution for the "too many open files" if I change the olcLogLevel in the future?
Thanks,
Derek
Explore the odutil command. There are 7 levels of logging available. error is the default. You may want to try notice to see if that gives you the information you want.
sudo odutil set log notice