Cannot increase file descriptors

Hello all,
I'm trying to increase the number of file descriptors of the system.
Currently with ulimit -n, I get 2048. So I would like to increase the limit to 8192.
I have added the following lines in the /etc/system file:
set rlim_fd_max=8192
set rlim_fd_cur=8192
These are standard lines I have added on other systems, and after rebooting I always get the right value. But on one of the machines in my server room this doesn't seem to work. The machine is exactly the same as all the others: SunFire V210, Solaris 10 with Patch Cluster 31/10/2007.
I have tried rebooting several times (init 6, reboot -- -r ...) but I always get 2048 with ulimit -n.
Is there any other parameter somewhere that can limit this value?
Thanx.

Doing more tests... now I'm even more confused.
Rebooting the system, I connected to the console and saw that during boot there are warnings about the /etc/system file:
Rebooting with command: boot
Boot device: disk0 File and args:
WARNING: unknown command 'nfs' on line 85 of etc/system
WARNING: unknown command 'nfs' on line 86 of etc/system
SunOS Release 5.10 Version Generic_118833-33 64-bit
Copyright 1983-2006 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Hardware watchdog enabled
Those warnings refer to a problem with the /etc/system file that I had in the past (when I took over the system), but I have since modified the lines.
They used to be just:
nfs:nfs4_bsize=8192
nfs:nfs4_nra=0
Later I added the "set" in the front.
Anyway, I changed the order of some commands, and on lines 85 and 86 I now have the following:
85 * Begin MDD root info (do not edit)
86 rootdev:/pseudo/md@0:0,30,blk
The mirroring lines.
So for some reason, at boot, Solaris reads the old file. But I don't know which old file, because it's been modified and I don't keep any backup of the original. So where is Solaris reading that "strange" /etc/system file from? It's definitely not the one I can see with: cat /etc/system

Similar Messages

  • Increase file descriptor limits on managed server

    Hi,
    We have an Admin Server which manages a managed server.
    We need to increase the file descriptor limits of the managed server.
    We modified the script commEnv.sh on the Admin Server and successfully increased the limit to 65,536. Here is the boot log of the Admin Server:
    ####<Sep 25, 2013 11:04:18 AM CEST> <Info> <Socket> <lv01469> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1380099858592> <BEA-000416> <Using effective file descriptor limit of: 65,536 open sockets/files.>
    How can we do the same thing on the managed server? We tried modifying the same script (commEnv.sh) on the managed server, but the file descriptor limit is still 1,024.
    ####<Sep 25, 2013 11:23:30 AM CEST> <Info> <Socket> <lv01470> <119LIVE_01> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1380101010988> <BEA-000415> <System has file descriptor limits of - soft: 1,024, hard: 1,024>
    ####<Sep 25, 2013 11:23:30 AM CEST> <Info> <Socket> <lv01470> <119LIVE_01> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1380101010989> <BEA-000416> <Using effective file descriptor limit of: 1,024 open sockets/files.>
    Thanks in advance

    Solved.
    It was necessary to restart the Node Manager after modifying commEnv.sh.

  • How to increase the per-process file descriptor limit for JDBC connection 15

    If I need more than 15 JDBC connections, the only solution is to increase the per-process file descriptor limit. But how do I increase this limit? By modifying the Oracle server or the JDBC software?
    I'm using the JDBC thin driver to connect to an Oracle 806 server.
    From JDBC faq:
    Is there any limit on number of connections for jdbc?
    No. JDBC drivers don't have any scalability restrictions by themselves.
    It may be restricted by the number of 'processes' (in the init.ora file) on the server. However, we do get questions that even when the number of processes is 30, you cannot open more than 16 active JDBC-OCI connections when the JDK is running in the default (green) thread model. This is because the per-process file descriptor limit is exceeded. It is important to note that depending on whether you are using OCI or Thin, and green vs. native threads, a JDBC SQL connection can consume anywhere from 1 to 4 file descriptors. The solution is to increase the per-process file descriptor limit.

    Maybe it is an OS issue, but the suggested solution is from the Oracle document. However, it does not provide a clear enough solution; it just states "The solution is to increase the per-process file descriptor limit".
    Now I know the solution, but not how to increase it.
    Please help.
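    For reference, a minimal C sketch (not from the Oracle document) of the OS-level call involved: getrlimit()/setrlimit() on RLIMIT_NOFILE query and raise the per-process descriptor limit. In practice the simplest route is usually to raise the limit with ulimit -n in the shell that starts the JVM or application.
    /* fdlimit.c - query and raise the per-process file descriptor limit.
     * Sketch only: raising the hard limit itself requires root. */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        printf("soft limit: %ld, hard limit: %ld\n",
               (long)rl.rlim_cur, (long)rl.rlim_max);

        /* Raise the soft limit up to the current hard limit. */
        rl.rlim_cur = rl.rlim_max;
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }
        printf("soft limit raised to %ld\n", (long)rl.rlim_cur);
        return 0;
    }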

  • Oracle Portal item cannot be deleted using dav (Bad File Descriptor)

    I cannot delete an Oracle Portal item with webdav. I get an error 500 and the item is not deleted.
    When the same user logs in as a portal user with a browser, the item can be deleted.
    So the user permissions are probably not the problem.
    What can be the problem?
    How do I have to solve this?
    Info found in log files
    C:\OraHome_2\webcache\logs
    Here I find an access.log file, but this one does not seem to contain anything useful.
    C:\OraHome_2\Apache\Apache\logs\
    Here I find two recent log files:
    access_log.1340236800
    HTTP/1.1" 207 3215
    192.168.6.57 - - [21/Jun/2012:09:28:53 +0200] "DELETE /dav_portal/portal/Bibnet/Open_Vlacc_regelgeving/Werkgroepen/vlacc_wgCAT/fgtest.txt HTTP/1.1" 500 431
    error_log.1340236800
    [Thu Jun 21 09:28:53 2012] [error] [client 192.168.6.57] [ecid: 3781906711623,1] Could not DELETE /dav_portal/portal/Bibnet/Open_Vlacc_regelgeving/Werkgroepen/vlacc_wgCAT/fgtest.txt. [500, #0] [mod_dav.c line 2008]
    [Thu Jun 21 09:28:53 2012] [error] [client 192.168.6.57] [ecid: 3781906711623,1] (9)Bad file descriptor: Delete unsuccessful. [500, #0] [dav_ora_repos.c line 8913]
    In the error log, you will also often find this message:
    [Thu Jun 21 10:33:02 2012] [notice] [client 192.168.6.57] [ecid: 3421133404379,1] ORA-20504: User not authorized to perform the requested operation
    This probably has nothing to do with it; you also get this message when the delete is successful.
    Versions I have used
    Dav client: I have tried with clients "Oracle Drive 10.2.0.0.27 Patch" and Cyberduck 4.2.1
    Oracle Portal 10.1.4
    In the errorX.log file, I also find these lines:
    [Thu Jun 21 09:53:17 2012] [notice] [client 192.168.6.57] [ecid: 4348843884218,1] OraDAV: Initializing OraDAV Portal Driver (1.0.3.2.3-0030) using API version 2.00
    [Thu Jun 21 09:53:17 2012] [notice] [client 192.168.6.57] [ecid: 4348843884218,1] OraDAV: oradav_driver_info Name=interMedia Version=2.3

    You may want to try a rebuild of the DAV tables in Oracle Portal. Before you do so, take a backup of the Portal repository database to ensure that you can revert back in case of disaster.
    Rebuilding the DAV tables is done with the following instructions :
    1. Start SQL*Plus and connect to the Portal metadata repository database as the PORTAL user.
    2. Execute wwdav_loader.create_dav_content:
    SQL> exec wwdav_loader.create_dav_content();
    Thanks,
    EJ

  • Want to increase the file descriptors

    Hi,
    I am trying to increase the maximum number of file descriptors allowed in Solaris.
    I changed the ulimit soft value to the hard limit value (65536) as root. Even ulimit -a shows the changed value for the soft limit.
    When I run my test program to find the value of sysconf(_SC_OPEN_MAX), it shows the changed value. But when I try to open more than 253 files, it fails. How do I increase this? Also, when I change the limit with ulimit -n 65536, why is the maximum number of open files not increased?
    Looking forward to your help.
    Thanks in advance
    -A

    The simplest workaround is to compile as a 64-bit executable:
    -m64 in gcc. I don't remember offhand what the option is for Sun CC.
    The difficulty is that any non-system libraries you're using will also need to be recompiled as 64-bit.
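    A small test program in the spirit of the one described above (hypothetical; it assumes a readable file such as /etc/hosts) makes the difference visible: the 32-bit build typically stops around 253 successful fopen() calls, while a 64-bit build (e.g. gcc -m64) keeps going up to the limit reported by sysconf(_SC_OPEN_MAX).
    /* fopen_limit.c - illustrate the 32-bit Solaris stdio fopen() limit.
     * The FILE objects are leaked on purpose; the point is to see where
     * fopen() first fails. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int count = 0;

        printf("_SC_OPEN_MAX = %ld\n", sysconf(_SC_OPEN_MAX));

        /* Reopen the same file until fopen() fails; each call uses one fd. */
        while (fopen("/etc/hosts", "r") != NULL)
            count++;

        perror("fopen");
        printf("successful fopen() calls before failure: %d\n", count);
        return 0;
    }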

  • Cannot reset max-file-descriptor?

    My /var/adm/messages is full of:
    Apr 17 12:30:27 srv1 genunix: [ID 883052 kern.notice] basic rctl process.max-file-descriptor (value 256) exceeded by process 6910
    Even though I have set process.max-file-descriptor to 4096 for all projects, which appears correct whenever I query any running process, e.g.:
    srv1 /var/adm # prctl -t basic -n process.max-file-descriptor -i process $$
    process: 2631: -ksh
    NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
    process.max-file-descriptor
    basic 4.10K - deny 2631
    Any ideas...?
    Thanks!!

    Hi,
    Finally found the root cause.
    It was a mistake by the user. In one of his startup scripts (.profile) he runs the command ulimit -n 1024, which sets both the soft and hard limits for file descriptors.
    This was the reason I was unable to increase the file descriptor limit beyond 1024.
    Thanks & Regards,
    -GnanaShekar-

  • Sudden increase in open file descriptors

    Our system has been live for a year and a half, and for the first time I encountered the following exception:
    "java.net.SocketException: Too many open files"
    When our internet application stopped responding, I immediately checked the number of open file descriptors on my Solaris machine using the "lsof" command and found it at an abnormal 600; as I continued monitoring, it reached 1024 in a matter of minutes and WebLogic threw the above exception. The current setting for file descriptors is 1024, but until now the average number of open file descriptors was well below 220.
    I also took thread dumps and found that most of the threads were stuck at the following location:
    ""ExecuteThread: '3' for queue: 'weblogic.kernel.Default'" daemon prio=5 tid=0x00883a90 nid=0x10 runnable [6d080000..6d0819c0]
         at java.net.SocketInputStream.socketRead0(Native Method)
         at java.net.SocketInputStream.read(SocketInputStream.java:129)
         at com.sybase.jdbc2.timedio.RawDbio.reallyRead(RawDbio.java:202)
         at com.sybase.jdbc2.timedio.Dbio.doRead(Dbio.java:243)
         at com.sybase.jdbc2.timedio.InStreamMgr.readIfOwner(InStreamMgr.java:512)
         at com.sybase.jdbc2.timedio.InStreamMgr.doRead(InStreamMgr.java:273)
         at com.sybase.jdbc2.tds.TdsProtocolContext.getChunk(TdsProtocolContext.java:561)
         at com.sybase.jdbc2.tds.PduInputFormatter.readPacket(PduInputFormatter.java:229)
         at com.sybase.jdbc2.tds.PduInputFormatter.read(PduInputFormatter.java:62)
         at com.sybase.jdbc2.tds.TdsInputStream.read(TdsInputStream.java:81)
         at com.sybase.jdbc2.tds.TdsInputStream.readUnsignedByte(TdsInputStream.java:114)
         at com.sybase.jdbc2.tds.Tds.nextResult(Tds.java:1850)
         at com.sybase.jdbc2.jdbc.ResultGetter.nextResult(ResultGetter.java:69)"
    There is no file descriptor leak in the application. My question is: since all the threads are hung at the JDBC and socket level, does that mean a faulty query triggered this problem (maybe the database was too busy executing a faulty query)?
    I suspect this because I received a database exception soon after the problem appeared. One of my database insert transactions had timed out after 300 seconds. Also, this was the first time I received this kind of exception:
    java.sql.SQLException: The transaction is no longer active - status: 'Marked rollback. [Reason=weblogic.transaction.internal.TimedOutException: Transaction timed out after 299 seconds
    Xid=BEA1-11FE69525362E51BFA16(6404670),Status=Active,numRepliesOwedMe=0,numRepliesOwedOthers=0,seconds since begin=299,seconds left=60,activeThread=Thread[ExecuteThread: '22' for queue: 'weblogic.kernel.Default',5,Thread Group for Queue: 'weblogic.kernel.Default'],XAServerResourceInfo[weblogic.jdbc.wrapper.JTSXAResourceImpl]=(ServerResourceInfo[weblogic.jdbc.wrapper.JTSXAResourceImpl]=(state=started,assigned=none),xar=weblogic.jdbc.wrapper.JTSXAResourceImpl@a68c0),SCInfo[Mizuho-RWS+myserver]=(state=active),properties=({weblogic.jdbc=t3://10.104.8.81:7001}),OwnerTransactionManager=ServerTM[ServerCoordinatorDescriptor=(CoordinatorURL=myserver+10.104.8.81:7001+Mizuho-RWS+t3+, XAResources={},NonXAResources={})],CoordinatorURL=myserver+10.104.8.81:7001+Mizuho-RWS+t3+)]'. No further JDBC access is allowed within this transaction.
         at weblogic.jdbc.wrapper.JTSConnection.checkIfRolledBack(JTSConnection.java:118)
         at weblogic.jdbc.wrapper.JTSConnection.checkConnection(JTSConnection.java:127)
    Any inputs regarding this problem?

    Raghu S wrote:
    Hi,
    I am using WebLogic 8.1 SP2 on a Solaris machine.
    Ok, good enough. Once WebLogic times out a transaction, it rolls it back on the connection. Unfortunately, that Sybase driver's rollback doesn't affect its own running statements. For 8.1sp3 we added code to explicitly cancel any ongoing statement during a rollback. This may be what you need to free up those threads and the sockets the driver may be keeping open. If you can upgrade to a newer version of 8.1, this code will free you up. Alternately, you can try either upgrading to Sybase's latest driver or to our BEA driver for Sybase.
    Ask support for the latest BEA driver package for 8.1.
    Joe
    >
    Stack trace of one of the threads at the time I took the thread dump. All the threads at the time of the dump are stuck in a similar fashion:
    ExecuteThread: '3' for queue: 'weblogic.kernel.Default'" daemon prio=5 tid=0x00883a90 nid=0x10 runnable [6d080000..6d0819c0]
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(SocketInputStream.java:129)
    at com.sybase.jdbc2.timedio.RawDbio.reallyRead(RawDbio.java:202)
    at com.sybase.jdbc2.timedio.Dbio.doRead(Dbio.java:243)
    at com.sybase.jdbc2.timedio.InStreamMgr.readIfOwner(InStreamMgr.java:512)
    at com.sybase.jdbc2.timedio.InStreamMgr.doRead(InStreamMgr.java:273)
    at com.sybase.jdbc2.tds.TdsProtocolContext.getChunk(TdsProtocolContext.java:561)
    at com.sybase.jdbc2.tds.PduInputFormatter.readPacket(PduInputFormatter.java:229)
    at com.sybase.jdbc2.tds.PduInputFormatter.read(PduInputFormatter.java:62)
    at com.sybase.jdbc2.tds.TdsInputStream.read(TdsInputStream.java:81)
    at com.sybase.jdbc2.tds.TdsInputStream.readUnsignedByte(TdsInputStream.java:114)
    at com.sybase.jdbc2.tds.Tds.nextResult(Tds.java:1850)
    at com.sybase.jdbc2.jdbc.ResultGetter.nextResult(ResultGetter.java:69)
    at com.sybase.jdbc2.jdbc.SybStatement.nextResult(SybStatement.java:204)
    at com.sybase.jdbc2.jdbc.SybStatement.nextResult(SybStatement.java:187)
    at com.sybase.jdbc2.jdbc.SybStatement.executeLoop(SybStatement.java:1698)
    at com.sybase.jdbc2.jdbc.SybStatement.execute(SybStatement.java:1690)
    at com.sybase.jdbc2.jdbc.SybCallableStatement.execute(SybCallableStatement.java:129)
    at weblogic.jdbc.wrapper.PreparedStatement.execute(PreparedStatement.java:68)
    at com.mizuho.rws.report.business.dao.FIReportsDAO.getReportList(FIReportsDAO.java:3463)
    at com.mizuho.rws.report.business.businessObject.FIReports.getReportList(FIReports.java:98)
    at com.mizuho.rws.report.business.ejb.FIReportsBean.getReportList(FIReportsBean.java:96)
    at com.mizuho.rws.report.business.ejb.FIReports_4f92ds_EOImpl.getReportList(FIReports_4f92ds_EOImpl.java:270)
    at com.mizuho.rws.report.client.delegates.FIReportsBusinessDelegates.getReportList(FIReportsBusinessDelegates.java:173)
    at com.mizuho.rws.report.client.web.FIReportsAction.handleFIReportsBean(FIReportsAction.java:1759)
    at com.mizuho.rws.report.client.web.FIReportsAction.performAction(FIReportsAction.java:349)
    at com.mizuho.foundation.presentation.AppBaseAction.perform(AppBaseAction.java:143)
    at com.mizuho.foundation.presentation.AppActionServlet.processActionPerform(AppActionServlet.java:518)
    at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1586)
    at com.mizuho.foundation.presentation.AppActionServlet.doPost(AppActionServlet.java:562)
    at com.mizuho.foundation.presentation.AppActionServlet.doGet(AppActionServlet.java:544)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:740)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
    at weblogic.servlet.internal.ServletStubImpl$ServletInvocationAction.run(ServletStubImpl.java:971)
    at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:402)
    at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:305)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:6350)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:317)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:118)
    at weblogic.servlet.internal.WebAppServletContext.invokeServlet(WebAppServletContext.java:3635)
    at weblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java:2585)
    at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:197)
    at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:170)
    And the stack trace of the exception I received:
    java.sql.SQLException: The transaction is no longer active - status: 'Marked rollback. [Reason=weblogic.transaction.internal.TimedOutException: Transaction timed out after 299 seconds
    Xid=BEA1-11FE69525362E51BFA16(6404670),Status=Active,numRepliesOwedMe=0,numRepliesOwedOthers=0,seconds since begin=299,seconds left=60,activeThread=Thread[ExecuteThread: '22' for queue: 'weblogic.kernel.Default',5,Thread Group for Queue: 'weblogic.kernel.Default'],XAServerResourceInfo[weblogic.jdbc.wrapper.JTSXAResourceImpl]=(ServerResourceInfo[weblogic.jdbc.wrapper.JTSXAResourceImpl]=(state=started,assigned=none),xar=weblogic.jdbc.wrapper.JTSXAResourceImpl@a68c0),SCInfo[Mizuho-RWS+myserver]=(state=active
    ),properties=({weblogic.jdbc=t3://10.104.8.81:7001}),OwnerTransactionManager=ServerTM[ServerCoordinatorDescriptor=(CoordinatorURL=myserver+10.104.8.81:7001+Mizuho-RWS+t3+, XAResources={},NonXAResources={})],CoordinatorURL=myserver+10.104.8.81:7001+Mizuho-RWS+t3+)]'. No further JDBC access is allowed within this transaction.
    at weblogic.jdbc.wrapper.JTSConnection.checkIfRolledBack(JTSConnection.java:118)
    at weblogic.jdbc.wrapper.JTSConnection.checkConnection(JTSConnection.java:127)
    at weblogic.jdbc.wrapper.Statement.checkStatement(Statement.java:222)
    at weblogic.jdbc.wrapper.PreparedStatement.setString(PreparedStatement.java:414)
    at com.mizuho.rws.services.mail.business.dao.FIReportMailDAO.insertMailClientDetails(FIReportMailDAO.java:2790)
    at com.mizuho.rws.services.mail.business.businessObject.FIReportMail.sendMail(FIReportMail.java:645)
    at com.mizuho.rws.services.mail.business.ejb.MailerBean.sendFIReportMail(MailerBean.java:87)
    at com.mizuho.rws.services.mail.business.ejb.Mailer_fyyt2g_EOImpl.sendFIReportMail(Mailer_fyyt2g_EOImpl.java:662)
    at com.mizuho.rws.services.mail.client.delegates.MailerBeanBusinessDelegates.sendFIReportMail(MailerBeanBusinessDelegates.java:153)
    at com.mizuho.rws.services.mail.business.businessObject.SendMailHandler.sendFIReportMail(SendMailHandler.java:181)
    at com.mizuho.rws.services.mail.business.businessObject.SendMailHandler.resolveMail(SendMailHandler.java:429)
    at com.mizuho.rws.services.mail.business.businessObject.SendMailHandler.notify(SendMailHandler.java:651)
    at com.mizuho.foundation.utils.AppNotificationListener.handleNotification(AppNotificationListener.java:66)
    at weblogic.time.common.internal.TimerListener$1.run(TimerListener.java:48)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:317)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:118)
    at weblogic.time.common.internal.TimerListener.deliverNotification(TimerListener.java:44)
    at weblogic.management.timer.Timer.deliverNotifications(Timer.java:578)
    at weblogic.time.common.internal.TimerNotification$1.run(TimerNotification.java:118)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:317)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:118)
    Regards
    Raghu

  • accept() needs more file descriptors

    My application server uses multiple threads to deal with highly concurrent socket requests. When accept() receives a request, it assigns an FD and creates a thread to handle it; the thread closes the FD after it finishes processing and exits.
    My question is: when there are threads concurrently handling 56~57 FDs, accept() can't get a new FD (errno 24). I know the number of FDs per process is limited, and I could fork sub-processes to reach higher concurrency.
    But I wonder, isn't there some other good method to solve the problem? How can a web server achieve high concurrency?
    Any suggest is appreciated!
    Jenny

    Hi Jenny,
    First of all, you did not say which release of Solaris you are using, but I'll assume you are on a version later than 2.4.
    You are correct when you say that the number of file descriptors that can be opened is a per-process limit. Depending on the OS version the default value for this limit changes, but there are simple ways to increase it.
    First of all there are two types of limits: a hard (system-wide) limit and a soft limit. The hard limit can only be changed by root, but the soft limit can be changed by any user. There is one restriction on soft limits: they can never be set higher than the corresponding hard limit.
    1. Use the command ulimit(1) from your shell to increase the soft limit from its default value (64 before Solaris 8) to a specified value less than the hard limit.
    2. Use the setrlimit(2) call to change both the soft and hard limits. You must be root to change the hard limit, though.
    3. Modify the /etc/system file and include the following line to increase the hard limit to 128:
    set rlim_fd_max=0x80
    After changing the /etc/system file, the system should be rebooted so that the change takes effect.
    Note that stdio routines are limited to using file descriptors 0 through 255. Even though the limit can be set higher than 256, if the fopen function cannot get a file descriptor lower than 256, it will fail. This can be a problem if other routines use the open function directly. For example, if 256 files are opened with the open function and none of them are closed, no other files can be opened with the fopen function because all of the low-numbered file descriptors have been used.
    Also, note that it is somewhat dangerous to set the fd limits higher than 1024. There are some structures, such as fd_set in <sys/select.h>, defined in the system that assume the maximum fd is 1023. If a program uses an fd larger than 1023 with the macros and routines that access such a structure, the program will corrupt its memory space, because it will modify memory outside of the bounds of the structure.
    Caryl
    Sun Developer Technical Support
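    A short C sketch of the last warning above (the helper name is made up): guard every FD_SET() with FD_SETSIZE so a descriptor of 1024 or more is rejected instead of silently corrupting memory; poll(2) avoids the limit entirely.
    /* select_guard.c - why descriptors >= FD_SETSIZE are dangerous.
     * FD_SET() indexes a fixed-size bitmap; an fd beyond FD_SETSIZE
     * writes outside the fd_set structure. */
    #include <stdio.h>
    #include <sys/select.h>

    int safe_fd_set(int fd, fd_set *set)
    {
        if (fd < 0 || fd >= FD_SETSIZE) {
            fprintf(stderr, "fd %d out of range for select() (FD_SETSIZE=%d)\n",
                    fd, (int)FD_SETSIZE);
            return -1;              /* caller should fall back to poll(2) */
        }
        FD_SET(fd, set);
        return 0;
    }

    int main(void)
    {
        fd_set readfds;

        FD_ZERO(&readfds);
        safe_fd_set(0, &readfds);     /* ok: stdin */
        safe_fd_set(5000, &readfds);  /* rejected instead of corrupting memory */
        return 0;
    }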

  • iMovie for Mac: "iMovie cannot open files in the "iMovie for iOS Project" format."

    Question, asked professionally
    When will iMovie for Mac be able to open iMovie for iOS projects?
    Question, asked snarkily
    Any reasonable expectation that we will ever be able to edit projects--created in iMovie on our fancy new expensive phones--in iMovie on our fancy new expensive laptops?
    Specs
    iPhone 6 Plus (contract-free, 128 GB)
    iOS 8.1.2
    iMovie for iOS 2.1.1
    MacBook Pro (Retina, 15-inch, Mid-2014)
    OS X Yosemite 10.10.1
    iMovie for Mac 10.0.6
    Error Messages
    iMovie for Mac: "iMovie cannot open files in the "iMovie for iOS Project" format."
    iMovie for iOS: "An error occurred during export."
    Frustration
    Manufacturer: Apple (a multibillion dollar company)
    The year: 2015
    Hours lost: Three Weeks
    Total costs for both devices: $4,000
    Background Details
    I just wasted thousands of dollars on these two hardware devices by making the rookie mistake of assuming projects were compatible across iMovie applications. However, the exported .iMovieMobile project files are in an incompatible iMovie for iOS Project format that iMovie for Mac cannot import. So I am stuck with a 15 GB .iMovieMobile project file--for an hour-long+ video--that I created and I was editing on my iPhone for weeks that I can no longer edit on my iPhone due to an error, nor potentially rescue the lost hours on my Mac, the sole reason I purchased it one day ago. The reason I can no longer edit the project on the iPhone is because iMovie for iOS suddenly stopped displaying the video of the project in preview, external display, etc. The clip snapshots remain visible and the audio remains audible, but the video appears fully black. Any attempt to export the video at any resolution generates an error message. (This error applies only to this project; other projects export without issue.) I can save to and from iTunes or iCloud without issue, but import into iMovie for Mac is unavailable for this latest version. Deleting and reinstalling the app does not resolve the issue. Nothing has resolved this issue; so I purchased the Mac as a last attempt, and it appears that I will have to begin the meticulously painful process of recreating the video from scratch with all the titling, sound effects, precise edits, transitions, organization, etc. This video was to surprise my mother for her birthday with "this is your life" footage, including my late father. It may seem like a small first-world problem, but such things carry big emotional impacts. What began as such a wonderfully intuitive and joyful experience has descended into a soul-suffering nightmare of catastrophic proportions. This software grinch stole christmas.
    Likely Suspect
    I believe this originated from Apple's confusingly designed iTunes client, which removed videos used by iMovie for iOS during a sync; I successfully restored them back to the iPhone. This apparently resulted in iOS seeking and then claiming to find all of the videos, but somehow a bug in the coding causes the audio to play without displaying video, with no way to resolve the bug from an end-user standpoint. I was hoping that iMovie for Mac would allow me a workaround, but I'll never know, because the iOS and Mac formats are currently and dishearteningly incompatible.
    Suggestions
    I never performed a full backup of the iPhone, but I don't believe any of the iCloud nor local backups actually backup iMovie projects themselves anyway. (That's a fail.) So one must back them up manually to iTunes or iCloud.
    MANUALLY BACKUP YOUR PROJECTS DAILY, IF NOT HOURLY.
    MANUALLY BACKUP YOUR PROJECTS BEFORE IMPORTING OR SYNCING FROM ITUNES.
    MANUALLY BACKUP YOUR PROJECTS BEFORE IMPORTING VIDEO CREATED OUTSIDE  OF YOUR IOS DEVICE.
    Resolutions
    Hope Apple fixes this problem soon? Perhaps.
    Return the Mac to Apple, if they allow? Perhaps.
    Someone in this community will provide a magical workaround--that I haven't already attempted? Perhaps.
    As unlikely these options may be, I shudder to imagine that I may be forced to suffer no resolution and $3,000+ completely down the drain, because if iMovie for IOS worked as intended, I wouldn't have ever purchased the Mac as a last ditch effort to rescue the project. It meant that much to me to surprise my mother by attempting to retrieve this project file from oblivion--but to no avail.

    I appreciate you taking the time to copy and paste boilerplate responses to increase your points in this forum, but I've already read all those support articles in depth, while you have barely read my post at all. Please don't guess at a fix. Only someone with the latest versions of iMovie, iOS, iPhone, Yosemite, and MacBook Pro is qualified to troubleshoot this, because anyone would immediately see that the following option no longer exists:
    Open iMovie on your Mac, and choose File > Import > iMovie for iOS Project.
    Read my post before you reply: It clearly says in the title and within my post that I can neither export nor import through iTunes without receiving an error message. So your response neither solved my question nor helped me whatsoever.

  • Bad File Descriptor in /dev/fd/3, and 94Gb of disk space missing

    I noticed a few days ago, possibly as the result of a recent kernel panic, that I have a large chunk of hard drive space missing. The Finder reports that I have approximately 89Gb of free space, but using "df" reports that there is approximately 178Gb free. Using "du" doesn't report any unexpected huge files, so I tried running GrandPerspective. In addition to the usual file usage and free space, this shows a single 94Gb block of "miscellaneous used space".
    I then booted into Single User mode to run fsck on the startup drive. This reported several errors, and took 3 passes to repair the directory structure, but didn't recover the missing space. I have subsequently run TechTool Pro and DiskWarrior on the startup drive (both of which found various minor errors), but the 94Gb still refuses to show itself.
    I then tried using "find" to look for single large files, using "sudo find / -size +94371840" (anything larger than 90Gb), and I get the following errors:
    find: /dev/fd/3: Bad file descriptor
    find: /dev/fd/4: Not a directory
    find: /dev/fd/5: Not a directory
    After searching Google, it seems a "Bad file descriptor" error points to an inode issue that fsck cannot fix, but I don't know enough (read: anything) about inodes to risk running the clri command to zero the problem inode.
    Short of blanking the startup disk and installing from scratch (not an attractive option), is there anything I can do to fix the broken inode and recover the missing space?
    Any help appreciated.

    Drawing Business wrote:
    I then tried using "find" to look for single large files, using "sudo find / -size +94371840" (anything larger than 90Gb), and I get the following errors:
    find: /dev/fd/3: Bad file descriptor
    find: /dev/fd/4: Not a directory
    find: /dev/fd/5: Not a directory
    This is not an error and always happens with find unless you exclude the /dev hierarchy from the search. (Interestingly this seems to have gone away with 10.5??)
    To locate your missing space, try WhatSize. Another alternative which I have not used personally is Disk Inventory X.
    As an additional point, with 10.4 it is actually better to use Disk Utility, since it does more than fsck. See "Resolve startup issues and perform disk maintenance with Disk Utility and fsck", quote:
    Note: If you're using Mac OS X 10.4 or later, you should use Disk Utility instead of fsck, whenever possible.

  • Problem with file descriptors not released by JMF

    Hi,
    I have a problem with file descriptors not being released by JMF. My application opens a video file, creates a DataSource and a DataProcessor, and the generated video frames are transmitted using the RTP protocol. Once video transmission ends, if we stop and close the DataProcessor associated with the DataSource, the file descriptor identifying the video file is not released (checkable through /proc/pid/fd). If we repeat this processing again and again, the process reaches the maximum number of file descriptors allowed by the operating system.
    The same problem has been reproduced with JMF-2.1.1e-Linux in several environments:
    - Red Hat 7.3, Fedora Core 4
    - jdk1.5.0_04, j2re1.4.2, j2sdk1.4.2, Blackdown Java
    This is part of the source code:
    // video.avi with tracks audio(PCMU) and video(H263)
    String url="video.avi";
    if ((ml = new MediaLocator(url)) == null) {
        Logger.log(ambito, refTrazas + "Cannot build media locator from: " + url);
        return 1;
    }
    try {
        // Create a DataSource given the media locator.
        Logger.log(ambito, refTrazas + "Creating JMF data source");
        try {
            ds = Manager.createDataSource(ml);
        } catch (Exception e) {
            Logger.log(ambito, refTrazas + "Cannot create DataSource from: " + ml);
            return 1;
        }
        p = Manager.createProcessor(ds);
    } catch (Exception e) {
        Logger.log(ambito, refTrazas + "Failed to create a processor from the given url: " + e);
        return 1;
    } // end try-catch
    p.addControllerListener(this);
    Logger.log(ambito,refTrazas+"Configure Processor.");
    // Put the Processor into configured state.
    p.configure();
    if (!waitForState(p.Configured)) {
        Logger.log(ambito, refTrazas + "Failed to configure the processor.");
        p.close();
        p = null;
        return 1;
    }
    Logger.log(ambito,refTrazas+"Configured Processor OK.");
    // So I can use it as a player.
    p.setContentDescriptor(new FileTypeDescriptor(FileTypeDescriptor.RAW_RTP));
    // videoTrack: track control for the video track
    DrawFrame draw= new DrawFrame(this);
    // Instantiate and set the frame access codec to the data flow path.
    try {
    Codec codec[] = {
    draw,
    new com.sun.media.codec.video.colorspace.JavaRGBToYUV(),
    new com.ibm.media.codec.video.h263.NativeEncoder()};
    videoTrack.setCodecChain(codec);
    } catch (UnsupportedPlugInException e) {
    Logger.log(ambito,refTrazas+"The processor does not support effects.");
    } // end try-catch CodecChain creation
    p.realize();
    if (!waitForState(p.Realized)) {
        Logger.log(ambito, refTrazas + "Failed to realize the processor.");
        return 1;
    }
    Logger.log(ambito,refTrazas+"realized processor OK.");
    /* After realizing the processor, THESE LINES OF SOURCE CODE DO NOT RELEASE ITS FILE DESCRIPTOR !!!!!
    p.stop();
    p.deallocate();
    p.close();
    return 0;
    */
    // It continues up to the end of the transmission, properly drawing each video frame and transmit them
    Logger.log(ambito,refTrazas+" Create Transmit.");
    try {
    int result = createTransmitter();
    } catch (Exception e) {
    Logger.log(ambito,refTrazas+"Error Create Transmitter.");
    return 1;
    } // end try-catch transmitter
    Logger.log(ambito,refTrazas+"Start Procesor.");
    // Start the processor.
    p.start();
    return 0;
    } // end of main code
    * stop when event "EndOfMediaEvent"
    public int stop () {
    try {   
    /* THIS PIECE OF CODE AND VARIATIONS HAVE BEEN TESTED
    AND THE FILE DESCRIPTOR IS NEVER RELEASED */
    p.stop();
    p.deallocate();
    p.close();
    p= null;
    for (int i = 0; i < rtpMgrs.length; i++) {
        if (rtpMgrs[i] == null) continue;
        Logger.log(ambito, refTrazas + "removeTargets;");
        rtpMgrs[i].removeTargets("Session ended.");
        rtpMgrs[i].dispose();
        rtpMgrs[i] = null;
    }
    } catch (Exception e) {
    Logger.log(ambito,refTrazas+"Error Stoping:"+e);
    return 1;
    return 0;
    } // end of stop()
    * Controller Listener.
    public void controllerUpdate(ControllerEvent evt) {
    Logger.log(ambito,refTrazas+"\nControllerEvent."+evt.toString());
    if (evt instanceof ConfigureCompleteEvent ||
        evt instanceof RealizeCompleteEvent ||
        evt instanceof PrefetchCompleteEvent) {
        synchronized (waitSync) {
            stateTransitionOK = true;
            waitSync.notifyAll();
        }
    } else if (evt instanceof ResourceUnavailableEvent) {
        synchronized (waitSync) {
            stateTransitionOK = false;
            waitSync.notifyAll();
        }
    } else if (evt instanceof EndOfMediaEvent) {
        Logger.log(ambito, refTrazas + "\nEvento EndOfMediaEvent.");
        this.stop();
    } else if (evt instanceof ControllerClosedEvent) {
        Logger.log(ambito, refTrazas + "\nEvent ControllerClosedEvent");
        synchronized (waitSync) {
            close = true;
            waitSync.notifyAll();
        }
    } else if (evt instanceof StopByRequestEvent) {
        Logger.log(ambito, refTrazas + "\nEvent StopByRequestEvent");
        synchronized (waitSync) {
            stop = true;
            waitSync.notifyAll();
        }
    }
    } // end of controllerUpdate()
    Many thanks.

    It's a bug in H263; if you test it without the H263 track or with another video codec, the release will work fine.
    You can try using a non-Sun H263 codec like the ones from the fobs or jffmpeg projects.

  • Help tracking down a file descriptor leak under java 6

    I have a large application I work on that runs fine under java5 (apart from possibly the latest update) but running under java 6 results in file descriptors used for TCP sockets being leaked.
    I'm testing this under FreeBSD 6 (both i386 and amd64) using diablo JDK and a port build jdk-1.6.0.3p3 but I have had reports from other users of exactly the same issue under various linux distributions. There are some reports that going back as far as 1.6.0b5 will resolve the issue but no later version works and a few reports that the latest 1.5 updates have the same issue.
    This application is using standard IO so Socket/ServerSocket and occasionally SSLSocket, no NIO is involved. Under the problem JDKs it will run for a while before available FDs are exhausted and then fall over with a "too many open files" exception. So far I have been unable to recreate the situation in a simple testcase and the fact it works fine under earlier JDKs is really causing me issues with deciding where to look for the issue.
    Using lsof to watch the FDs that are leaked I see a steadily increasing number shown in the following state:
    java 23438 djb 54u IPv4 0xffffff0091ad02f8 0t0 TCP *:* (CLOSED)
    java 23438 djb 55u IPv4 0xffffff0105aa45f0 0t0 TCP *:* (CLOSED)
    java 23438 djb 56u IPv4 0xffffff01260c15f0 0t0 TCP *:* (CLOSED)
    java 23438 djb 57u IPv4 0xffffff012a2ae8e8 0t0 TCP *:* (CLOSED)
    If these were showing as say (CLOSE_WAIT) then I would understand where they are coming from but as far as I understand the above means the socket has been fully closed but the FD simply hasn't been released. I'm not an expert on the TCP protocol however so I may be wrong here.
    I did try making the application set SoLinger(0,true) on all sockets which of course made all connecting clients think the connection was aborted rather than gracefully closed but even with this setting the FD leak persisted.
    I've gone as far as looking at what I think are the relevant parts of the src for both JDK versions I am using but there are very few changes and nothing that obviously looks linked.
    I'm fully prepared to spend a lot of time looking into this and I'm sure I'd eventually find the cause but if anyone here already knows what the answer may be or can simply give me a nudge in the best direction to look I would be very grateful.

    After weeks of dancing around the issue, we narrowed it down to garbage collection. If we make System.gc() run periodically, file descriptors get garbage collected properly. I've tried playing with the settings by using -XX:+UseConcMarkSweepGC, which seems to help a great deal while the system is under stress. However, when there is light activity the file descriptors grow again and eventually bring everything down.
    Any clues? Is there any way to make the GC perform full collections more often?
    Pls help!!!

  • Running Out Of File Descriptors "Too many open files"

    We have a 32-bit application (running on Solaris 8) that opens socket connections and also some files in read/write mode. The application works fine in the normal (low load) case.
    But it fails under a stress environment. At some point under stress, when it tries opening a file, fopen gives me error code 24, which means "too many files opened".
    From this it seems that the application is running out of file descriptors. I used the truss, pfiles and lsof utilities to see how many descriptors are currently opened by my application, and the number they give is around 900 (and this is the expected figure for my application).
    I also set the ulimit (both hard and soft) to a larger number, but that didn't work either. Also, when I set the soft limit to 70000, the truss output shows:
    25412/1:     5.3264     sysconfig(_CONFIG_OPEN_FILES)               = 70000
    23123/1: 7.2926 close(69999) Err#9 EBADF
    23123/1: 7.2927 close(69998) Err#9 EBADF
    23123/1: 7.2928 close(69997) Err#9 EBADF
    23123/1: 7.2928 close(69996) Err#9 EBADF
    23123/1: 7.2929 close(69995) Err#9 EBADF
    23123/1: 7.2929 close(69994) Err#9 EBADF
    23123/1: 7.2930 close(69993) Err#9 EBADF
    This goes down to close(3), looping almost 70K times.
    I don't know why the output looks like this (see the sketch after this post for one likely cause).
    Note: under a moderate stress environment where only 400 file descriptors are opened, the application works fine.
    Can you please help me with this? Is this a file descriptor problem, or could there be another potential source of the problem?
    Is there any other way to increase the file descriptor limit?
    I also tried using LD_PRELOAD_32=/usr/lib/extendedFILE.so.1, but it gave me the following error while starting the application:
    "ld.so.1: ls: fatal: /usr/lib/extendedFILE.so.1: open failed: No such file or directory"
    Also, I can't use Purify (for various reasons) to find file descriptor leakage (if any), and it is not possible to upgrade the system to Solaris 10.
    Thanks in advance.
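    One plausible explanation for that close() pattern (an assumption, not something the truss output proves) is start-up or fork/exec code that closes every possible descriptor up to the configured limit, as in the sketch below; with the limit raised to 70000, such a loop issues roughly 70K close() calls, and every descriptor that was never open fails with EBADF.
    /* close_all.c - the usual "close every descriptor before exec" idiom.
     * With the fd limit set to 70000 this issues ~70000 close() calls,
     * most of which return EBADF for descriptors that were never open. */
    #include <unistd.h>

    void close_all_fds(void)
    {
        long fd;
        long maxfd = sysconf(_SC_OPEN_MAX);

        for (fd = maxfd - 1; fd >= 3; fd--)   /* keep stdin, stdout, stderr */
            (void)close((int)fd);
    }

    int main(void)
    {
        close_all_fds();
        return 0;
    }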

    http://developers.sun.com/solaris/articles/stdio_256.html

  • Max number of file descriptors in 32 vs 64 bit compilation

    Hi,
    I compiled a simple C app (with the Solaris CC compiler) that attempts to open 10000 file descriptors using fopen(). It runs just fine when compiled in 64-bit mode (after first setting "ulimit -S -n 10000").
    However, when I compile it in 32-bit mode it fails to open more than 253 files. A call to system("ulimit -a") reports "nofiles (descriptors) 10000".
    Did anybody ever see similar problem before?
    Thanks in advance,
    Mikhail

    On 32-bit Solaris, the stdio "FILE" struct stores the file descriptor (an integer) in an 8-bit field. With 3 files opened automatically at program start (stdin, stdout, stderr), that leaves 253 available file descriptors.
    This limitation stems from early versions of Unix and Solaris, and must be maintained to allow old binaries to continue to work. That is, the layout of the FILE struct is wired into old programs, and thus cannot be changed.
    When 64-bit Solaris was introduced, there was no compatibility issue, since there were no old 64-bit binaries. The limit of 256 file descriptors in stdio was removed by making the field larger. In addition, the layout of the FILE struct is hidden from user programs, so that future changes are possible should they become necessary.
    To work around the limit, you can play some games with dup() and closing the original descriptor to make it available for use with a new file, or you can arrange to have fewer than the max number of files open at one time.
    A new interface for stdio is being implemented to allow a large number of files to be open at one time. I don't know when it will be available or for which versions of Solaris.
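    A rough sketch of the dup() game mentioned above (the helper name is made up): descriptors that will not be used through stdio are pushed above 255 with fcntl(F_DUPFD), so the numbers below 256 stay free for later fopen() calls on 32-bit Solaris.
    /* open_high.c - keep low descriptor numbers free for stdio.
     * 32-bit Solaris stdio can only store descriptors 0-255 in FILE, so
     * files/sockets not used via stdio are moved to a descriptor >= 256. */
    #include <fcntl.h>
    #include <unistd.h>

    int open_high(const char *path, int flags)
    {
        int fd = open(path, flags);
        int high;

        if (fd < 0)
            return -1;

        /* Duplicate onto the lowest free descriptor >= 256. */
        high = fcntl(fd, F_DUPFD, 256);
        if (high >= 0) {
            close(fd);      /* free the low number for a later fopen() */
            return high;
        }
        return fd;          /* limit <= 256: keep the original descriptor */
    }

    int main(void)
    {
        int fd = open_high("/etc/hosts", O_RDONLY);
        return fd < 0;
    }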

  • Cannot increase partition size

    I originally had two partitions, one called BOOT and one called FILES. I ran out of room on my boot drive and was told to clone it, add a partition with a larger size, and clone it back to the larger partition. I created BOOT 2 and then, once it was cloned back, I deleted BOOT.
    Now I cannot increase the size of FILES to its original size and cannot seem to recover any of the space from the original BOOT partition. Neither BOOT 2 nor FILES can be increased. How do I achieve this?
    Here is a screen capture of what is going on.......

    You cannot do that with Disk Utility. While it can be used to create and resize partitions, it cannot be used to regain the space between partitions if there is data on the second partition.
    You might try something like iPartition for Mac - Smart hard disk partitioning utility.
    But personally, if I were in that boat, I would clone the data from the 2nd and 3rd partitions to partitions on an external drive, then start over by reformatting the internal drive into 1 or 2 new partitions using Disk Utility, and then clone the partitions back onto the internal drive.
    In addition: in OS X it is not necessary to have a boot partition and a file partition; OS X does quite well at managing everything, including quite large user libraries, in one partition.
