Genunix: basic rctl process.max-file-descriptor (value 256) exceeded
Hi,
I am getting the following error on my console rapidly.
I am using a Sun SPARC server running Solaris 10. We started getting this error suddenly after a restart of the server, and it is continuously rolling on the console...
The Error:
Rebooting with command: boot
Boot device: disk0 File and args:
SunOS Release 5.10 Version Generic_118822-25 64-bit
Copyright 1983-2005 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Hardware watchdog enabled
Failed to send email alert for recent event.
SC Alert: Failed to send email alert for recent event.
Hostname: nitwebsun01
NOTICE: VxVM vxdmp V-5-0-34 added disk array DISKS, datype = Disk
NOTICE: VxVM vxdmp V-5-3-1700 dmpnode 287/0x0 has migrated from enclosure FAKE_ENCLR_SNO to enclosure DISKS
checking ufs filesystems
/dev/rdsk/c1t0d0s4: is logging.
/dev/rdsk/c1t0d0s7: is logging.
nitwebsun01 console login: Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 439
Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 414
Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 413
Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 414
Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 413
Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 121
Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 414
Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 413
Nov 20 14:56:41 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 121
Nov 20 14:56:41 nitwebsun01 last message repeated 1 time
Nov 20 14:56:43 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 470
Nov 20 14:56:43 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 467
Nov 20 14:56:44 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 470
Nov 20 14:56:44 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 121
Nov 20 14:56:44 nitwebsun01 last message repeated 1 time
Nov 20 14:56:49 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 503
Nov 20 14:56:50 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 510
Nov 20 14:56:50 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 121
Nov 20 14:56:50 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 519
Nov 20 14:56:50 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 516
Nov 20 14:56:50 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 519
Nov 20 14:56:53 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 540
Nov 20 14:56:53 nitwebsun01 last message repeated 2 times
Nov 20 14:56:53 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 549
Nov 20 14:56:53 nitwebsun01 last message repeated 4 times
Nov 20 14:56:56 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 665
Nov 20 14:56:56 nitwebsun01 last message repeated 6 times
Nov 20 14:56:56 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 667
Nov 20 14:56:56 nitwebsun01 last message repeated 2 times
Nov 20 14:56:56 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 121
Nov 20 14:56:57 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 868
Nov 20 14:56:57 nitwebsun01 /usr/lib/snmp/snmpdx: unable to get my IP address: gethostbyname(nitwebsun01) failed [h_errno: host not found(1)]
Nov 20 14:56:58 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 887
Nov 20 14:57:00 nitwebsun01 genunix: basic rctl process.max-file-descriptor (value 256) exceeded by process 976
nitwebsun01 console login: root
Nov 20 14:57:00 nitwebsun01 last message repeated 2 times
I have attached my /etc/project file as well:
[root@nitwebsun01 /]$ cat /etc/project
system:0::::
user.root:1::::
process.max-file-descriptor=(privileged,1024,deny);
process.max-sem-ops=(privileged,512,deny);
process.max-sem-nsems=(privileged,512,deny);
project.max-sem-ids=(privileged,1024,deny);
project.max-shm-ids=(privileged,1024,deny);
project.max-shm-memory=(privileged,4294967296,deny)
noproject:2::::
default:3::::
process.max-file-descriptor=(privileged,1024,deny);
process.max-sem-ops=(privileged,512,deny);
process.max-sem-nsems=(privileged,512,deny);
project.max-sem-ids=(privileged,1024,deny);
project.max-shm-ids=(privileged,1024,deny);
project.max-shm-memory=(privileged,4294967296,deny)
group.staff:10::::
[root@nitwebsun01 /]$
Please help me resolve this issue.
Regards,
Suseendran .A
This is an old post but I'm going to reply to it for future reference of others.
Please ignore the first reply to this thread... by default /etc/rctladm.conf doesn't exist, and you should never use it. Just put it out of your mind.
So, then... by default, a process can have no more than 256 file descriptors open at any given time. The likelihood that you'll have a program using more than 256 files is very low... but each network socket counts as a file descriptor, so many network services will exceed this limit quickly. The 256 limit is stupid, but it is a standard, and as such Solaris adheres to it. To look at the open file descriptors of a given process, use "pfiles <pid>".
So, to change it you have several options:
1) You can tune the default limit on the number of descriptors by specifying a new default in /etc/system:
set rlim_fd_cur=1024
2) In the shell you can view your limit using 'ulimit -n' (use 'ulimit -a' to see all your limit thresholds). You can set it higher for the current session by supplying a value, for example 'ulimit -n 1024', and then start your program. You might also put this command in a startup script before starting your program.
3) The "right" way to do this is to use a Solaris RCTL (resource control) defined in /etc/project. Say you want to give the "oracle" user 8152 fd's... you can add the following to /etc/project:
user.oracle:101::::process.max-file-descriptor=(priv,8152,deny)
Now log the Oracle user out, then log back in and start it up.
You can view the limit on a process like so:
prctl -n process.max-file-descriptor -i process <pid>
In that output you may see 3 lines: one for "basic", one for "privileged" and one for "system". System is the max possible. Privileged is a limit that you need special privileges to raise. Basic is the limit that you, as any user, can increase yourself (such as with 'ulimit' as we did above). If you define a custom "privileged" RCTL like we did above in /etc/project, it replaces the default "basic" limit of 256.
For reference, if you need to increase the limit of a daemon that you cannot restart, you can do this "hot" by using the 'prctl' program like so:
prctl -t basic -n process.max-file-descriptor -x -i process <PID>
The above just removes ("dumps") the "basic" resource control (limit) from the running process. Do that, then check it a minute later with 'pfiles' to see that it's now using more FDs.
Enjoy.
benr.
Similar Messages
-
Changing process.max-file-descriptor in non global zone
Hello Folks,
I have a non-global zone.
I wanted to change process.max-file-descriptor to 8192, so I issued the below command:
projmod -s -K 'process.max-file-descriptor=(basic,8192,deny)' default
I have rebooted the zone, but after the reboot the system is not showing the value as 8192.
Can someone help me find out what I missed?
# id -p
uid=0(root) gid=0(root) projid=1(user.root)
# prctl -P $$ | grep file
process.max-file-descriptor basic 256 - deny 19452
process.max-file-descriptor privileged 65536 - deny -
process.max-file-descriptor system 2147483647 max deny -
process.max-file-size privileged 9223372036854775807 max deny,signal=XFSZ -
process.max-file-size system 9223372036854775807 max deny -
# ulimit -n
256
# cat /etc/project | grep file
default:3::::process.max-file-descriptor=(basic,8192,deny)
# -
Reg process.max-file-descriptor setting
Hi,
We are running Solaris 10 and have set a project for the Oracle user id. When I run prctl for one of the running processes, I am getting the below output.
process.max-file-descriptor
basic 8.19K - deny 351158
privileged 65.5K - deny -
system 2.15G max deny -
My question is: what's the limit for a process running under this project as far as the max-file-descriptor attribute is concerned? Will it be 8.19K, 65.5K, or 2.15G? Also, what is the difference among the three? Please advise. Thanks.
Hi,
Welcome to oracle forums :)
User wrote:
Hi,
We are running Solaris 10 and have set a project for the Oracle user id. When I run prctl for one of the running processes, I am getting the below output.
process.max-file-descriptor
basic 8.19K - deny 351158
privileged 65.5K - deny -
system 2.15G max deny -
My question is: what's the limit for a process running under this project as far as the max-file-descriptor attribute is concerned? Will it be 8.19K, 65.5K, or 2.15G? And what is the difference among the three?
Kernel parameter process.max-file-descriptor: maximum file descriptor index. Oracle recommends *65536*
For more information on these settings please refer MOS tech note:
*Kernel setup for Solaris 10 using project files. [ID 429191.1]*
Hope helps :)
Regards,
X A H E E R -
Cannot reset max-file-descriptor?
My /var/ad/messages is full of :
Apr 17 12:30:27 srv1 genunix: [ID 883052 kern.notice] basic rctl process.max-file-descriptor (value 256) exceeded by process 6910
Even though I have process.max-file-descriptor set to 4096 for all projects, which appears correct whenever I query any running process, e.g.:
srv1 /var/adm # prctl -t basic -n process.max-file-descriptor -i process $$
process: 2631: -ksh
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-file-descriptor
basic 4.10K - deny 2631
Any ideas...?
Thanks!!
Hi,
Finally found the root cause.
It was the user's mistake: in one of his startup scripts (.profile) he runs the command 'ulimit -n 1024', which sets both the soft and hard limits for file descriptors.
This was the reason I was unable to increase the file descriptor limit beyond 1024.
Thanks & Regards,
-GnanaShekar- -
Change 'Soft Resource Limit file descriptor' value
Hi,
How do I change 'Soft Resource Limit file descriptor; value in Solaris 10
Is there any conditions to change this?
Ashraf.
There are two values: the default and the maximum.
It starts at the default and can be increased up the maximum.
If you want to change the value persistently and machine-wide, you can put a config option in /etc/system, something like:
set rlim_fd_cur = 8129
set rlim_fd_max = 8129
If you want to do it for your account only, put an equivalent 'ulimit -n' command into your profile instead.
But you'll still only be able to increase it up to the max.
The max can only be increased by root or in /etc/system. -
No of file descriptors in solaris 10
hi,
I had an open-files issue and updated the number of file descriptors with the following command (using zones on Solaris 10 running on SPARC):
projmod -s -K 'process.max-file-descriptor=(basic,8192,deny)' PROJECTNAME
I wanted to check: is there any way to know whether the new number of files has come into effect? And is it also possible to check how many files are currently open, just to make sure I am not reaching the limits?
Thank you
Jonu Joy
Thank you, Alan.
Even after setting the max file descriptor to 8192, the output from pfiles shows 4096:
Current rlimit: 4096 file descriptors
Would you know if there is something wrong with the command I am using - projmod -s -K 'process.max-file-descriptor=(basic,8192,deny)' PROJECTNAME? (I am issuing this command as root.)
thank you
Jonu Joy -
We are using iDS 5.1 sp2 running on solaris 8. We have idar with 2 ldap server on back(1 master, 1 slave).
We didn't set up the max connection limit for iDAR, which means unlimited connections are allowed. However, the Unix ulimit setting was 256, which is too low. I changed the setting under /etc/system and rebooted; the ulimit is now 4096 for both the hard and soft limit. It looks good.
However, whenever the total connections to iDAR approach 256, fwd.log shows "socket closed". The iDAR is still available, but its sockets are used up.
I have been wondering why the new setting didn't take effect for iDAR.
Can anybody help me or give me some clue?
Thanks!
How to determine which file descriptor opened my driver?
Suppose a user process opens my driver twice. How does open() determine which file descriptor opened the device? In Linux, the kernel will pass a pointer to a structure which represents the open file descriptor. However, Solaris only passes the device number to open(), so I can only determine my device was opened, but not which file. I need this information because my driver needs to keep track of all file descriptors opened for the device.
Thanks!
-Darren
I'm still at a loss as to why you need to know the file descriptor value (unless the app is so spaghettied that it has to query the driver to figure out what it opened with what). It's like asking what filename was used to open the device (which you can't get either). Since Solaris is based on a STREAMS framework, it would be bad for drivers to even think they have a direct mapping into user space. It would be the same as asking (using /bin/sh):
prog3 4>&1 3>&1 2>&1 | prog2 | prog1
and wanting to know from prog1 which descriptor prog3 wrote to. I don't see how Linux even does this properly, since any given open file can have multiple file descriptors (via dup).
How to increase the per-process file descriptor limit for JDBC connection 15
If I need JDBC connection more that 15, the only solution is increase the per-process file descriptor limit. But how to increase this limit? modify the oracle server or JDBC software?
I'm using JDBC thin driver connect to Oracle 806 server.
From JDBC faq:
Is there any limit on number of connections for jdbc?
No. JDBC drivers as such don't have any scalability restrictions of their own.
It may be restricted by the number of 'processes' (in the init.ora file) on the server. However, nowadays we do get questions where, even with the number of processes set to 30, no more than 16 active JDBC-OCI connections can be opened when the JDK is running in the default (green) thread model. This is because the per-process file descriptor limit is exceeded. It is important to note that, depending on whether you are using OCI or THIN, and green vs. native threads, a JDBC SQL connection can consume anywhere from 1 to 4 file descriptors. The solution is to increase the per-process file descriptor limit.
Maybe it is an OS issue, but the suggested solution is from the Oracle documentation. It does not, however, provide a clear enough answer; it just states "The solution is to increase the per-process file descriptor limit".
Now I know the solution, but not how to increase the limit...
Please help.
Max number of file descriptors in 32 vs 64 bit compilation
Hi,
I compiled a simple C app (with the Solaris CC compiler) that attempts to open 10000 file descriptors using fopen(). It runs just fine when compiled in 64-bit mode (after first setting 'ulimit -S -n 10000').
However, when I compile it in 32-bit mode it fails to open more than 253 files. A call to system("ulimit -a") reports "nofiles (descriptors) 10000".
Did anybody ever see similar problem before?
Thanks in advance,
Mikhail
On 32-bit Solaris, the stdio "FILE" struct stores the file descriptor (an integer) in an 8-bit field. With 3 files opened automatically at program start (stdin, stdout, stderr), that leaves 253 available file descriptors.
This limitation stems from early versions of Unix and Solaris, and must be maintained to allow old binaries to continue to work. That is, the layout of the FILE struct is wired into old programs, and thus cannot be changed.
When 64-bit Solaris was introduced there was no compatibility issue, since there were no old 64-bit binaries. The limit of 256 file descriptors in stdio was removed by making the field larger. In addition, the layout of the FILE struct is hidden from user programs, so that future changes remain possible should they become necessary.
To work around the limit, you can play some games with dup() and closing the original descriptor to make it available for use with a new file, or you can arrange to have fewer than the max number of files open at one time.
A new interface for stdio is being implemented to allow a large number of files to be open at one time. I don't know when it will be available or for which versions of Solaris. -
Max file size OSB 11g can process
Hi,
What is the max file size OSB 11g can process? We want to do a POC that picks a file from FTP do some complex transformation and post to another FTP server.
So in this scenario what can be the max file size OSB 11g can handle?
Regards,
Abdul
Again, there is no fixed limit on the message size JMS can handle. It depends on the heap size available. But in my experience JMS will lower the practical limit much more than the File/FTP transport; I have seen that you start getting a lot of OOM errors if you try to put very large messages on JMS.
Also, if you are going to transform the payload after reading via FTP/File transport, then you will need to initialize the complete payload in memory which will restrict the maximum message size which can be processed properly.
For large Files using Content Streaming is recommended but you can not access the streamed content within a message flow (hence no transformations).
Another limiting factor is CPU utilization: doing complex transformations on a large payload consumes a lot of CPU, which will affect any other processes running on the same machine (including other service instances on OSB itself).
OSB is supposed to work with lightweight, stateless and fast processing. If you have very complex transformations, invest in an XML Appliance.
If you need to transfer huge files, then use ODI. -
Max file size OSB 11 can Process
Hi,
What is the max file size OSB 11g can process.
Regards,
Abdul
In addition to providing more detail, you should ask this question in the SOA Suite forum:
SOA Suite -
Set file descriptor limit for xinetd initiated process
I am starting the Amanda backup service on clients through xinetd, and we
are hitting the open-file limit, i.e. the file descriptor limit.
I have set resource controls for the user and I can see from the shell that
the file descriptor limit has increased, but I have not figured out how to get
the resource control change to apply to the daemon started by xinetd.
The default of 256 file descriptors persists for the daemon; I need to increase
that number.
I have tried a wrapper script, clearly doing it incorrectly for Solaris 10/SMF
services. That route didn't work, or is not as straight forward as it used to be.
Is there a more direct way?
Thanks - Brian
Hi Brian,
This happens with 32-bit applications. You have to use the enabler of the extended FILE facility, /usr/lib/extendedFILE.so.1:
% ulimit -n
256
% echo 'rlim_fd_max/D' | mdb -k | awk '{ print $2 }'
65536
% ulimit -n
65536
% export LD_PRELOAD_32=/usr/lib/extendedFILE.so.1
% ./your_32_bits_application
Marco
Upload-Szenario - WDRuntimeException Bad file descriptor
Hi All,
I'm using the Adobe Document Services on NW04, ADS SP19, NWDS 2.0.19, with IE 6.0.2900 SP2.
If I use the Upload UI element to show a PDF, I get the error: Bad file descriptor!
The PDF to upload is not corrupt, and I can open it with Acrobat Reader or with the tutorial (download/upload scenario) example.
I have a binary value attribute mapped as the data element of the Upload UI element, I read the context value attribute into the controller context element, and the Interactive Form element references the attribute as its pdfsource. It is the same code as in the tutorial.
What's wrong?
I didn't find anything about this error!
Thanks for help.
Regards Jürgen
Exception(com.sap.tc.webdynpro.services.exceptions.WDRuntimeException: Bad file descriptor) during processing a Web Dynpro Application, Session with IDs: (J2EE7802600)ID1177271450DB10984010199363513659End,78651230d78a11db9c34000d608e44df,Id78651230d78a11db9c34000d608e44df6
[EXCEPTION]
com.sap.tc.webdynpro.services.exceptions.WDRuntimeException: Bad file descriptor
at com.sap.tc.webdynpro.clientimpl.http.client.AbstractHttpClient.updateUpLoad(AbstractHttpClient.java:478)
at com.sap.tc.webdynpro.progmodel.context.ModifiableBinaryType.parse(ModifiableBinaryType.java:95)
at com.sap.tc.webdynpro.clientserver.data.DataContainer.doParse(DataContainer.java:1418)
at com.sap.tc.webdynpro.clientserver.data.DataContainer.validatePendingUserInput(DataContainer.java:1328)
at com.sap.tc.webdynpro.clientserver.data.DataContainer.validatePendingUserInput(DataContainer.java:672)
at com.sap.tc.webdynpro.clientserver.cal.ClientComponent.validate(ClientComponent.java:624)
at com.sap.tc.webdynpro.clientserver.cal.ClientApplication.validate(ClientApplication.java:741)
at com.sap.tc.webdynpro.clientserver.task.WebDynproMainTask.transportData(WebDynproMainTask.java:712)
at com.sap.tc.webdynpro.clientserver.task.WebDynproMainTask.execute(WebDynproMainTask.java:649)
at com.sap.tc.webdynpro.clientserver.cal.AbstractClient.executeTasks(AbstractClient.java:59)
at com.sap.tc.webdynpro.clientserver.cal.ClientManager.doProcessing(ClientManager.java:251)
at com.sap.tc.webdynpro.serverimpl.defaultimpl.DispatcherServlet.doWebDynproProcessing(DispatcherServlet.java:154)
at com.sap.tc.webdynpro.serverimpl.defaultimpl.DispatcherServlet.doContent(DispatcherServlet.java:116)
at com.sap.tc.webdynpro.serverimpl.defaultimpl.DispatcherServlet.doPost(DispatcherServlet.java:55)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:760)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
at com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl.runServlet(HttpHandlerImpl.java:401)
at com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl.handleRequest(HttpHandlerImpl.java:266)
at com.sap.engine.services.httpserver.server.RequestAnalizer.startServlet(RequestAnalizer.java:387)
at com.sap.engine.services.httpserver.server.RequestAnalizer.startServlet(RequestAnalizer.java:365)
at com.sap.engine.services.httpserver.server.RequestAnalizer.invokeWebContainer(RequestAnalizer.java:944)
at com.sap.engine.services.httpserver.server.RequestAnalizer.handle(RequestAnalizer.java:266)
at com.sap.engine.services.httpserver.server.Client.handle(Client.java:95)
at com.sap.engine.services.httpserver.server.Processor.request(Processor.java:160)
at com.sap.engine.core.service630.context.cluster.session.ApplicationSessionMessageListener.process(ApplicationSessionMessageListener.java:33)
at com.sap.engine.core.cluster.impl6.session.MessageRunner.run(MessageRunner.java:41)
at com.sap.engine.core.thread.impl3.ActionObject.run(ActionObject.java:37)
at java.security.AccessController.doPrivileged(Native Method)
at com.sap.engine.core.thread.impl3.SingleThread.execute(SingleThread.java:100)
at com.sap.engine.core.thread.impl3.SingleThread.run(SingleThread.java:170)
Caused by: java.io.IOException: Bad file descriptor
at java.io.FileInputStream.readBytes(Native Method)
at java.io.FileInputStream.read(FileInputStream.java:177)
at com.sap.tc.webdynpro.clientimpl.http.client.AbstractHttpClient.writeIn2Out(AbstractHttpClient.java:493)
at com.sap.tc.webdynpro.clientimpl.http.client.AbstractHttpClient.updateUpLoad(AbstractHttpClient.java:435)
... 29 more
Two things I think helped to solve this:
PROPAGATE_EXCEPTIONS = True
in config.py, and I removed threading from my vassal ini file. The resulting uwsgi files looked like this:
/etc/uwsgi/emperor.ini:
[uwsgi]
emperor = /etc/uwsgi/vassals
master = true
plugins = python2
uid = http
gid = http
/etc/uwsgi/vassals/test.ini:
[uwsgi]
chdir = /srv/http/test_dir/src
wsgi-file = run.py
callable = app
processes = 4
stats = 127.0.0.1:9191
max-requests = 5000
enable-threads = true
vacuum = true
thunder-lock = true
socket = /run/uwsgi/test-sock.sock
chmod-socket = 664
harakiri = 60
logto = /var/log/uwsgi/test.log
I'm not sure about
PROPAGATE_EXCEPTIONS = True
but removing the threads option in test.ini and making sure there was a master option in emperor.ini seemed to solve the issue of SQL being tossed around to different threads (or at least of uwsgi complaining about it and crashing the site).
Also, don't use the uwsgi from the distribution; get it from pip, as the distro packages are broken.
1. We can change the limits by setting the values rlim_fd_cur and rlim_fd_max in the /etc/system file.
2. There is some documentation stating that the max should never exceed 1024.
3. Question:
a. For Solaris 8 can we ever set the max to be > 1024?
b. If we can, is there another ceiling?
c. Can we redefine FD_SETSIZE in the app that wants to use select() with fds > 1024? Is there any mechanism to do a select() on FDs > 1023?
4. If the process is running as root, does it still have a limit on FDs? Can it then raise the limit using setrlimit()?
Thnx
Aman
The hard limit is 1024 for the number of descriptors. The man page for limit(1)
says that root can change the hard limits, but if you raise the limit for FDs
above 1024 you may encounter kernel performance problems or even failure
conditions. The number is a recommendation, empirically based on what a
selection of processors and memory models can tolerate. You might get more
expert info by cross-posting this question to the Solaris OS/kernel forum.
Raising the hard limit might be possible, but I cannot speak to the risks with
much direct knowledge.
You might want to examine the design of an app that needs more than 1024 files
open at once; maybe there is an alternative design that would let you close
more file descriptors.