Weblogic 10 - application deployment error: Exception is: "File too large"

I posted this in Weblogic -> general but realise it should really have gone here, as it's about admin server/deployment services setup/configuration.
I am using WebLogic application server 10 in a WebLogic clustered environment.
I am trying to deploy an application to a managed server when it starts up. All goes well at first, and I can see it deploying the WAR files to the managed server.
It hits a certain WAR and panics with the exception:
####<Nov 19, 2011 2:03:59 PM BRST> <Error> <Deployer> <devnode01> <managedserver2> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <1321718639109> <BEA-149205> <Failed to initialize the application 'test_war' due to error weblogic.management.DeploymentException: Exception occured while downloading files.
weblogic.management.DeploymentException: Exception occured while downloading files
at weblogic.deploy.internal.targetserver.datamanagement.AppDataUpdate.doDownload(AppDataUpdate.java:43)
at weblogic.deploy.internal.targetserver.datamanagement.DataUpdate.download(DataUpdate.java:56)
at weblogic.deploy.internal.targetserver.datamanagement.Data.prepareDataUpdate(Data.java:97)
at weblogic.deploy.internal.targetserver.BasicDeployment.prepareDataUpdate(BasicDeployment.java:682)
at weblogic.deploy.internal.targetserver.BasicDeployment.stageFilesForStatic(BasicDeployment.java:725)
at weblogic.deploy.internal.targetserver.AppDeployment.prepare(AppDeployment.java:104)
at weblogic.management.deploy.internal.DeploymentAdapter$1.doPrepare(DeploymentAdapter.java:39)
at weblogic.management.deploy.internal.DeploymentAdapter.prepare(DeploymentAdapter.java:187)
at weblogic.management.deploy.internal.AppTransition$1.transitionApp(AppTransition.java:21)
at weblogic.management.deploy.internal.ConfiguredDeployments.transitionApps(ConfiguredDeployments.java:233)
at weblogic.management.deploy.internal.ConfiguredDeployments.prepare(ConfiguredDeployments.java:165)
at weblogic.management.deploy.internal.ConfiguredDeployments.deploy(ConfiguredDeployments.java:122)
at weblogic.management.deploy.internal.DeploymentServerService.resume(DeploymentServerService.java:173)
at weblogic.management.deploy.internal.DeploymentServerService.start(DeploymentServerService.java:89)
at weblogic.t3.srvr.SubsystemRequest.run(SubsystemRequest.java:64)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
Caused By: java.io.IOException: [DeploymentService:290066]Error occurred while downloading files from admin server for deployment request "0". Underlying error is: "[DeploymentService:290065]Deployment service servlet encountered an Exception while handling the deployment datatransfer message for request id "0" from server "managedserver2". Exception is: "File too large"."
at weblogic.deploy.service.datatransferhandlers.HttpDataTransferHandler.getDataAsStream(HttpDataTransferHandler.java:86)
at weblogic.deploy.service.datatransferhandlers.DataHandlerManager$RemoteDataTransferHandler.getDataAsStream(DataHandlerManager.java:153)
at weblogic.deploy.internal.targetserver.datamanagement.AppDataUpdate.doDownload(AppDataUpdate.java:39)
at weblogic.deploy.internal.targetserver.datamanagement.DataUpdate.download(DataUpdate.java:56)
at weblogic.deploy.internal.targetserver.datamanagement.Data.prepareDataUpdate(Data.java:97)
at weblogic.deploy.internal.targetserver.BasicDeployment.prepareDataUpdate(BasicDeployment.java:682)
at weblogic.deploy.internal.targetserver.BasicDeployment.stageFilesForStatic(BasicDeployment.java:725)
at weblogic.deploy.internal.targetserver.AppDeployment.prepare(AppDeployment.java:104)
at weblogic.management.deploy.internal.DeploymentAdapter$1.doPrepare(DeploymentAdapter.java:39)
at weblogic.management.deploy.internal.DeploymentAdapter.prepare(DeploymentAdapter.java:187)
at weblogic.management.deploy.internal.AppTransition$1.transitionApp(AppTransition.java:21)
at weblogic.management.deploy.internal.ConfiguredDeployments.transitionApps(ConfiguredDeployments.java:233)
at weblogic.management.deploy.internal.ConfiguredDeployments.prepare(ConfiguredDeployments.java:165)
at weblogic.management.deploy.internal.ConfiguredDeployments.deploy(ConfiguredDeployments.java:122)
at weblogic.management.deploy.internal.DeploymentServerService.resume(DeploymentServerService.java:173)
at weblogic.management.deploy.internal.DeploymentServerService.start(DeploymentServerService.java:89)
at weblogic.t3.srvr.SubsystemRequest.run(SubsystemRequest.java:64)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
The error appears to be saying that the physical file is too big to be deployed.
I'm running the managed servers with a heap size of 3GB, and this managed server is running with 2GB - I know these are large, but they were being used for debugging.
I can't find any documentation on the "file too large" error, or how to resolve it.
DeploymentService:290065 says to look in the log (details are above), and DeploymentService:290066 says the error will be explained in its description - which it is: "file too large". It doesn't say where to see or set the maximum file size. There is plenty of disk space, so I can only assume it's a deployment service setting that needs to be increased, but I cannot find any information on this.

Would using the nostage option for deployment change this behaviour? I don't think it would, as that addresses disk-based problems rather than transfer-size issues.
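Based on the similar threads below, the thing I plan to check next is the OS file-size limit on the managed-server host - something along these lines (just a sketch; <pid> is whatever the managed server's Java process id turns out to be):

# check the file-size limit the running managed server actually has
grep -i "max file size" /proc/<pid>/limits
# and the limit in the shell that starts the managed server / node manager
ulimit -f      # "unlimited" is expected; a small block count would explain "File too large"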

Similar Messages

  • Postfix error writing message: File too large

    I was looking for a reason for my Mac hanging occasionally. In Console I found the following recurring error message.
    10/2/11 8:15:19 PM     postfix/master[656]     daemon started -- version 2.5.5, configuration /etc/postfix
    10/2/11 8:16:00 PM     postfix/pickup[657]     B6095D33A25: uid=501 from=<Al>
    10/2/11 8:16:00 PM     postfix/cleanup[664]     B6095D33A25: message-id=<[email protected]>
    10/2/11 8:16:00 PM     postfix/qmgr[658]     B6095D33A25: from=<[email protected]>, size=668, nrcpt=1 (queue active)
    10/2/11 8:16:00 PM     postfix/local[666]     B6095D33A25: to=<[email protected]>, orig_to=<Al>, relay=local, delay=0.08, delays=0.04/0.02/0/0.02, dsn=5.2.2, status=bounced (cannot update mailbox /var/mail/al for user al. error writing message: File too large)
    10/2/11 8:16:00 PM     postfix/cleanup[664]     C5ACFD33A28: message-id=<[email protected]>
    10/2/11 8:16:00 PM     postfix/bounce[667]     B6095D33A25: sender non-delivery notification: C5ACFD33A28
    10/2/11 8:16:00 PM     postfix/qmgr[658]     C5ACFD33A28: from=<>, size=2366, nrcpt=1 (queue active)
    10/2/11 8:16:00 PM     postfix/qmgr[658]     B6095D33A25: removed
    10/2/11 8:16:00 PM     postfix/local[666]     C5ACFD33A28: to=<[email protected]>, relay=local, delay=0.01, delays=0/0/0/0, dsn=5.2.2, status=bounced (cannot update mailbox /var/mail/al for user al. error writing message: File too large)
    10/2/11 8:16:00 PM     postfix/qmgr[658]     C5ACFD33A28: removed
    10/2/11 8:16:19 PM     postfix/master[656]     master exit time has arrived
    I haven't done anything with Postfix, but I'm guessing one of the monitoring utilities is set up to send me an email message if it finds an error.
    Looking for ideas on how to fix this.

    Well, I think I figured this out.
    I found this posting on a message board: http://www.linuxquestions.org/questions/linux-networking-3/file-too-large-in-postfix-495988/#post2480213
    I ran this command in terminal: sudo postconf -e "virtual_mailbox_limit=0" then: sudo postfix reload
    That still didn't fix the error messages.
    Then I ran this command in terminal: sudo postconf -e "mailbox_size_limit=0" then: sudo postfix reload
    That stopped the "file too large" error message.
    Now the console message I get is:
    postfix/master[1630]          daemon started -- version 2.5.14, configuration /etc/postfix
    postfix/pickup[1631]          68EA012A66BD: uid=501 from=<Al>
    postfix/cleanup[1643]          68EA012A66BD: message-id=<[email protected]>
    postfix/qmgr[1632]          68EA012A66BD: from=<[email protected]>, size=707, nrcpt=1 (queue active)
    postfix/pickup[1631]          BA07112A66BE: uid=501 from=<Al>
    postfix/cleanup[1643]          BA07112A66BE: message-id=<[email protected]>
    postfix/local[1645]          68EA012A66BD: to=<[email protected]>, orig_to=<Al>, relay=local, delay=0.82, delays=0.5/0.09/0/0.23, dsn=2.0.0, status=sent (delivered to mailbox)
    postfix/master[1630]          master exit time has arrived
    I'm guessing this is normal function of the postfix system. However, I'm still not sure what postfix is doing. Maybe it's the under-the-hood stuff for Apple Mail?
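    For anyone who wants to confirm the new limits took effect, postconf can read the current values back (the parameter names are the same ones used above; this is only a read-only check):
    # print the limits postfix is currently using (0 means no limit)
    postconf mailbox_size_limit virtual_mailbox_limit message_size_limit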

  • How can I send pictures I have taken with my phone?  It keeps giving me an error msg about file too large.

    This problem just started three days ago out of nowhere.  Any ideas besides the ones I have already read?

    I assume you mean sending them in a text message. There is a size limitation in photos sent in a text message. Does your texting application not offer you the option of resizing the photo?
    If you must send it full size, you would have to send it via email.

  • File too large error unpacking War during app deploy - RHEL & WLS 10.3.5

    I'm stumped and I'm hoping someone can help out here. Does anyone have any insights into the cause of my problem below, or tips on how to diagnose the cause?
    scenario
    We ran into an open-files limit issue on our RH Linux servers, and had the SA boost our open files limit from 1024 to 3096. This seems to have solved the open-files limit issue, once we restarted the node managers and the managed servers (our WLS startup script sets the soft limit to the hard limit).
    But now we've got a new issue, right after this change. The admin server is no longer able to deploy any war/ear, as when I click on "Activate" after the install I get
    Message icon - Error An error occurred during activation of changes, please see the log for details.
    Message icon - Error Failed to load webapp: 'TemplateManagerAdmin-1.0-SNAPSHOT.war'
    Message icon - Error File too large
    on the console, and see the stack trace below in the Admin server log (nothing in the managed server logs) - indicating it's getting the error while exploding the war.
    I've tried both the default deployment mode, and the mode "will make the deployment available in the following location" where the war is manually copied to the same location on each box, available to each server - all with the same result. I've also tried restarting the admin server, but no luck.
    The files are not overly large (<=34 MByte) and we had no trouble with them before today. I'm able to log in as the WebLogic user and copy files, etc. with no problem.
    There is no disk space issue - plenty of space left on all of our filesystems. There is, as far as I can tell, no OS or user file size limit issue:
         -bash-3.2$ ulimit -a
         core file size (blocks, -c) 0
         data seg size (kbytes, -d) unlimited
         scheduling priority (-e) 0
         file size (blocks, -f) unlimited
         pending signals (-i) 73728
         max locked memory (kbytes, -l) 32
         max memory size (kbytes, -m) unlimited
         open files (-n) 3096
         pipe size (512 bytes, -p) 8
         POSIX message queues (bytes, -q) 819200
         real-time priority (-r) 0
         stack size (kbytes, -s) 10240
         cpu time (seconds, -t) unlimited
         max user processes (-u) unlimited
         virtual memory (kbytes, -v) unlimited
         file locks (-x) unlimited
    environment
    WLS 10.3.5 64-bit
    Linux 64-bit RHEL 5.6
    Sun Hotspot 1.6.0_29 (64-bit)
    stack trace
    ####<Mar 6, 2013 4:09:33 PM EST> <Error> <Console> <nj09mhm5111> <prp_appsvcs_admin> <[ACTIVE] ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)'> <steven_elkind> <> <> <1362604173724> <BEA-240003> <Console encountered the following error weblogic.application.ModuleException: Failed to load webapp: 'TemplateManagerAdmin-1.0-SNAPSHOT.war'
    at weblogic.servlet.internal.WebAppModule.prepare(WebAppModule.java:393)
    at weblogic.application.internal.flow.ScopedModuleDriver.prepare(ScopedModuleDriver.java:176)
    at weblogic.application.internal.flow.ModuleListenerInvoker.prepare(ModuleListenerInvoker.java:199)
    at weblogic.application.internal.flow.DeploymentCallbackFlow$1.next(DeploymentCallbackFlow.java:517)
    at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:52)
    at weblogic.application.internal.flow.DeploymentCallbackFlow.prepare(DeploymentCallbackFlow.java:159)
    at weblogic.application.internal.flow.DeploymentCallbackFlow.prepare(DeploymentCallbackFlow.java:45)
    at weblogic.application.internal.BaseDeployment$1.next(BaseDeployment.java:613)
    at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:52)
    at weblogic.application.internal.BaseDeployment.prepare(BaseDeployment.java:184)
    at weblogic.application.internal.SingleModuleDeployment.prepare(SingleModuleDeployment.java:43)
    at weblogic.application.internal.DeploymentStateChecker.prepare(DeploymentStateChecker.java:154)
    at weblogic.deploy.internal.targetserver.AppContainerInvoker.prepare(AppContainerInvoker.java:60)
    at weblogic.deploy.internal.targetserver.operations.ActivateOperation.createAndPrepareContainer(ActivateOperation.java:207)
    at weblogic.deploy.internal.targetserver.operations.ActivateOperation.doPrepare(ActivateOperation.java:98)
    at weblogic.deploy.internal.targetserver.operations.AbstractOperation.prepare(AbstractOperation.java:217)
    at weblogic.deploy.internal.targetserver.DeploymentManager.handleDeploymentPrepare(DeploymentManager.java:747)
    at weblogic.deploy.internal.targetserver.DeploymentManager.prepareDeploymentList(DeploymentManager.java:1216)
    at weblogic.deploy.internal.targetserver.DeploymentManager.handlePrepare(DeploymentManager.java:250)
    at weblogic.deploy.internal.targetserver.DeploymentServiceDispatcher.prepare(DeploymentServiceDispatcher.java:159)
    at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.doPrepareCallback(DeploymentReceiverCallbackDeliverer.java:171)
    at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.access$000(DeploymentReceiverCallbackDeliverer.java:13)
    at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer$1.run(DeploymentReceiverCallbackDeliverer.java:46)
    at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:528)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:178)
    Caused by: java.io.IOException: File too large
    at java.io.FileOutputStream.writeBytes(Native Method)
    at java.io.FileOutputStream.write(FileOutputStream.java:282)
    at weblogic.utils.io.StreamUtils.writeTo(StreamUtils.java:19)
    at weblogic.utils.FileUtils.writeToFile(FileUtils.java:117)
    at weblogic.utils.jars.JarFileUtils.extract(JarFileUtils.java:285)
    at weblogic.servlet.internal.ArchivedWar.expandWarFileIntoDirectory(ArchivedWar.java:139)
    at weblogic.servlet.internal.ArchivedWar.extractWarFile(ArchivedWar.java:108)
    at weblogic.servlet.internal.ArchivedWar.<init>(ArchivedWar.java:57)
    at weblogic.servlet.internal.War.makeExplodedJar(War.java:1093)
    at weblogic.servlet.internal.War.<init>(War.java:186)
    at weblogic.servlet.internal.WebAppServletContext.processDocroot(WebAppServletContext.java:2789)
    at weblogic.servlet.internal.WebAppServletContext.setDocroot(WebAppServletContext.java:2666)
    at weblogic.servlet.internal.WebAppServletContext.<init>(WebAppServletContext.java:413)
    at weblogic.servlet.internal.WebAppServletContext.<init>(WebAppServletContext.java:493)
    at weblogic.servlet.internal.HttpServer.loadWebApp(HttpServer.java:418)
    at weblogic.servlet.internal.WebAppModule.registerWebApp(WebAppModule.java:972)
    at weblogic.servlet.internal.WebAppModule.prepare(WebAppModule.java:382)

    In the end, the problem was not in the Admin server where the log entry is, but in one of the managed servers where there was no such log entry.
    Somehow, and we have no idea how, the NodeManager process had the soft limit for max file size set to 2k blocks. Thus, the managed server inherited that. We restarted the Node Manager, then the managed server, and the problem went away.
    The diagnostic that turned the trick:
    cat /proc/<pid>/limits
    for the managed server showed the bad limit setting, then diagnosis proceeded from there. The admin server, of course, had "unlimited" since it was not the source of the problem.
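    A rough sketch of that check and fix (the server name in the pgrep pattern is only an example, and the start script path may differ in your install):
    # find the managed server's java process and look at its effective file-size limit
    pgrep -f 'weblogic.Name=ManagedServer1'
    grep "Max file size" /proc/<pid>/limits
    # raise the soft limit in the shell that launches Node Manager, then restart Node Manager
    ulimit -S -f unlimited
    $WL_HOME/server/bin/startNodeManager.sh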

  • WebLogic Issue: File too large

    Hi All,
    I am getting below error in logs while starting the WLS (10.3.5 on IBM AIX 6.1 using IBM JDK) AdminServer:
    ####<Nov 8, 2012 10:28:45 PM PST> <Notice> <Security> <edrpoc10.ftb.ca.gov> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1352442525279> <BEA-090082> <Security initializing using security realm myrealm.>
    ####<Nov 8, 2012 10:28:51 PM PST> <Notice> <WebLogicServer> <edrpoc10.ftb.ca.gov> <AdminServer> <main> <<WLS Kernel>> <> <> <1352442531303> <BEA-000365> <Server state changed to STANDBY>
    ####<Nov 8, 2012 10:28:51 PM PST> <Notice> <WebLogicServer> <edrpoc10.ftb.ca.gov> <AdminServer> <main> <<WLS Kernel>> <> <> <1352442531304> <BEA-000365> <Server state changed to STARTING>
    ####<Nov 8, 2012 10:28:54 PM PST> <Warning> <oracle.as.jmx.framework.MessageLocalizationHelper> <edrpoc10.ftb.ca.gov> <AdminServer> <JMX FRAMEWORK Domain Runtime MBeanServer pooling thread> <<anonymous>> <> <0000JfZqpLg4ykJLQm5Eid1GbAAX000001> <1352442534039> <J2EE JMX-46041> <The resource for bundle "oracle.jrf.i18n.MBeanMessageBundle" with key "oracle.jrf.JRFServiceMBean.checkIfJRFAppliedOnMutipleTargets" cannot be found.>
    ####<Nov 8, 2012 10:28:57 PM PST> <Error> <Deployer> <edrpoc10.ftb.ca.gov> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1352442537493> <BEA-149205> <Failed to initialize the application 'adf.oracle.domain [LibSpecVersion=1.0,LibImplVersion=11.1.1.2.0]' due to error weblogic.application.library.LibraryDeploymentException: [J2EE:160141]Could not initialize the library Extension-Name: adf.oracle.domain, Specification-Version: 1, Implementation-Version: 11.1.1.2.0. Please ensure the deployment unit is a valid library type (war, ejb, ear, plain jar). weblogic.application.library.LibraryProcessingException: java.io.IOException: File too large
         at weblogic.application.internal.library.EarLibraryDefinition.init(EarLibraryDefinition.java:93)
         at weblogic.application.utils.LibraryLoggingUtils.initLibraryDefinition(LibraryLoggingUtils.java:277)
         at weblogic.application.internal.library.LibraryDeployment.prepare(LibraryDeployment.java:44)
         at weblogic.application.internal.DeploymentStateChecker.prepare(DeploymentStateChecker.java:154)
         at weblogic.deploy.internal.targetserver.AppContainerInvoker.prepare(AppContainerInvoker.java:60)
         at weblogic.deploy.internal.targetserver.AppDeployment.prepare(AppDeployment.java:141)
         at weblogic.management.deploy.internal.DeploymentAdapter$1.doPrepare(DeploymentAdapter.java:39)
         at weblogic.management.deploy.internal.DeploymentAdapter.prepare(DeploymentAdapter.java:191)
         at weblogic.management.deploy.internal.AppTransition$1.transitionApp(AppTransition.java:21)
         at weblogic.management.deploy.internal.ConfiguredDeployments.transitionApps(ConfiguredDeployments.java:240)
         at weblogic.management.deploy.internal.ConfiguredDeployments.prepare(ConfiguredDeployments.java:165)
         at weblogic.management.deploy.internal.ConfiguredDeployments.deploy(ConfiguredDeployments.java:122)
         at weblogic.management.deploy.internal.DeploymentServerService.resume(DeploymentServerService.java:180)
         at weblogic.management.deploy.internal.DeploymentServerService.start(DeploymentServerService.java:96)
         at weblogic.t3.srvr.SubsystemRequest.run(SubsystemRequest.java:64)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:178)
    Caused by: java.io.IOException: File too large
         at java.io.FileOutputStream.writeBytes(Native Method)
         at java.io.FileOutputStream.write(FileOutputStream.java:282)
         at weblogic.utils.io.StreamUtils.writeTo(StreamUtils.java:19)
         at weblogic.utils.FileUtils.writeToFile(FileUtils.java:117)
         at weblogic.utils.jars.JarFileUtils.extract(JarFileUtils.java:285)
         at weblogic.utils.jars.JarFileUtils.extract(JarFileUtils.java:246)
         at weblogic.application.io.ExplodedJar.extractJarFile(ExplodedJar.java:301)
         at weblogic.application.io.ExplodedJar.<init>(ExplodedJar.java:54)
         at weblogic.application.io.Ear.<init>(Ear.java:47)
         at weblogic.application.internal.library.EarLibraryDefinition.init(EarLibraryDefinition.java:81)
         ... 16 more
    ####<Nov 8, 2012 10:28:59 PM PST> <Error> <Deployer> <edrpoc10.ftb.ca.gov> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1352442539037> <BEA-149205> <Failed to initialize the application 'emai' due to error weblogic.application.library.LibraryDeploymentException: [J2EE:160141]Could not initialize the library Extension-Name: emai. Please ensure the deployment unit is a valid library type (war, ejb, ear, plain jar). weblogic.application.library.LibraryProcessingException: java.io.IOException: File too large
    Any pointers on resolving the issue?
    Regards,
    Sunny

    Hi Sunny,
    The issue is that large file support is not enabled on your AIX OS.
    How do you enable large file support in AIX?
    In the file /etc/security/limits, change the value of "fsize" to -1 ("-1" denotes unlimited). Log off and log in again, then stop and start the applications to make large files work.
    Ref: http://unixfoo.blogspot.in/2008/11/aix-filesystem-tips.html
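    After making that change (in the WebLogic user's stanza or in "default:"), you can confirm it took effect from that user's shell - a minimal check, assuming the user has logged off and back on:
    # run as the user that starts WebLogic
    ulimit -f      # should now report "unlimited" instead of a block count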

  • Disk Utility: Creating a new blank image receiving "file too large" error.

    Hello All!
    I'm trying to create a 10GB non-encrypted, non-compressed RW blank image via Disk Utility. DU runs for a few minutes then barfs out a "file too large" error. I have over 30GB free on my HDD. I tried with a smaller size of 6GB to no avail. Also tried unsuccessfully to create one from a file (about 4 GB). My ultimate goal is to create a case-insensitive image to run an extremely important program needed for high-priority work productivity (i.e. WoW). Thanks in advance for any advice! You will be my new best friend if you help me resolve this. =D
    Hollie
    "There are only 10 types of people in this world: Those who understand binary, and those who don't."

    Hi Hollie, and welcome to the forums!
    Have you created images before successfully?
    Is this to/on your boot drive, or an external drive?
    Have you done any Disk/OS maintenance lately?
    We might see if there are some big temp files left or such...
    How much free space is on the HD, where has all the space gone?
    OmniDiskSweeper is now free, and likely the best/easiest...
    http://www.omnigroup.com/applications/omnidisksweeper/
    WhatSize...
    http://www.macupdate.com/info.php/id/13006/
    Disk Inventory X...
    http://www.derlien.com/
    GrandPerspective...
    http://grandperspectiv.sourceforge.net/
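    If Disk Utility keeps refusing, the same kind of image can also be created from Terminal with hdiutil - roughly like this (size, volume name and path are only examples):
    # 10 GB read/write, plain (case-insensitive) HFS+, not encrypted or compressed
    hdiutil create -size 10g -fs "HFS+" -volname "WoW" ~/Desktop/wow.dmg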

  • File too large error or corrupt file error

    I have scanned some images using a Nikon Cool Scan, and when trying to import the NEF files into Lightroom I get a corrupt or unrecognized file error. If I bring them into CS2 or CS3, save as TIFF and try the import, I get a "File too large" error.
    Any ideas or help on this? What is the maximum file size for import?
    The scan is 4000dpi; I even tried 300dpi.
    Thanks in advance for any insight.

    > Is it truly a size problem? If so, what is recommended? Lee Jay states that 10000 pixels is the max on either side. Okay, in DPI, what does that translate to?
    There's no necessary relationship between pixels and dots. You could scan an image at 4,000,000 dpi and translate it into an image of 100 x 100 pixels. I've used ridiculous extremes to make a point. The LR limitation is currently 10,000 pixels for any side. So you could have 9,000 x 9,000 pixels but not 10,001 x 50 pixels.
    Is this now clearer?
    John "McPhotoman" McWilliams
    MacBookPro 2 GHz Intel Core Duo, G-5 Dual 1.8; Canon DSLRs
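    To put numbers on it: pixels per side = inches x dpi. So a 6 x 4 inch original scanned at 4000 dpi comes out at 24,000 x 16,000 pixels, far past the 10,000-pixel cap, while the same original at 300 dpi is only 1,800 x 1,200 pixels. (The 6 x 4 inch size is just an illustration; the dpi figures are the ones mentioned in this thread.)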

  • Time Machine Error - The backup is too large for the backup disk

    I have been using Lion (currently 10.7.1) on my MacBook Pro (13" - early 2011) since it was released.  I haven't had any serious problems with it.
    All of a sudden, I am getting an error in Time Machine.  When it tries to run a backup, I get the error "This backup is too large for the backup disk.  The backup requires 7.51 GB but only 630.1 GB are available."  What gives?  That's plenty of room.  I have installed Logic Studio and a few plug-ins, so the 7.51 GB is probably right.  The free space is correct as well.  I can't understand what the problem is.
    The backup disk is an external USB 2.0 drive with no other Time Machine backups on it or any other files.  The folder "Backups.backupdb" is the only thing on the root of the disk.
    I am reluctant to reset the Time Machine and lose all of the backups, but I will if anyone recommends it.

    Hi Linc,
    It is not working at the moment, as I have restored the original Lion image again; it has all my work and apps on it.
    Many thanks for the info on the log, though.  It tells a strange story.  Here's the log from the last backup that worked to the first one that failed: --
    Sep 12 17:15:55 Johns-MacBook-Pro com.apple.backupd[674]: Starting standard backup
    Sep 12 17:15:55 Johns-MacBook-Pro com.apple.backupd[674]: Backing up to: /Volumes/Backup/Backups.backupdb
    Sep 12 17:15:55 Johns-MacBook-Pro com.apple.backupd[674]: 100.0 MB required (including padding), 633.72 GB available
    Sep 12 17:15:55 Johns-MacBook-Pro com.apple.backupd[674]: Waiting for index to be ready (100)
    Sep 12 17:16:00 Johns-MacBook-Pro com.apple.backupd[674]: Copied 793 files (601 KB) from volume System.
    Sep 12 17:16:00 Johns-MacBook-Pro com.apple.backupd[674]: 100.0 MB required (including padding), 633.72 GB available
    Sep 12 17:16:01 Johns-MacBook-Pro com.apple.backupd[674]: Copied 89 files (93 bytes) from volume System.
    Sep 12 17:16:01 Johns-MacBook-Pro mds[34]: (Error) Volume: Could not find requested backup type:2 for volume
    Sep 12 17:16:01 Johns-MacBook-Pro com.apple.backupd[674]: Starting post-backup thinning
    Sep 12 17:16:01 Johns-MacBook-Pro com.apple.backupd[674]: Deleted /Volumes/Backup/Backups.backupdb/John’s MacBook Pro/2011-09-11-154229 (1.1 MB)
    Sep 12 17:16:01 Johns-MacBook-Pro com.apple.backupd[674]: Post-back up thinning complete: 1 expired backups removed
    Sep 12 17:16:01 Johns-MacBook-Pro com.apple.backupd[674]: Backup completed successfully.
    Sep 13 10:34:12 Johns-MacBook-Pro com.apple.backupd[287]: Starting standard backup
    Sep 13 10:34:12 Johns-MacBook-Pro com.apple.backupd[287]: Backing up to: /Volumes/Backup/Backups.backupdb
    Sep 13 10:34:52 Johns-MacBook-Pro com.apple.backupd[287]: 7.51 GB required (including padding), 630.11 GB available
    Sep 13 10:34:52 Johns-MacBook-Pro com.apple.backupd[287]: No expired backups exist - deleting oldest backups to make room
    Sep 13 10:34:52 Johns-MacBook-Pro mds[32]: (Error) Volume: Could not find requested backup type:2 for volume
    Sep 13 10:35:03 Johns-MacBook-Pro com.apple.backupd[287]: Backup failed with error: Not enough available disk space on the target volume.
    I don't understand.  For starters, I think it's a little wasteful that 3.5 GB has been used to back up 601 KB.  That's the difference in free space on the backup volume between the two backups.  That can't be normal, surely.
    The only error is that mds[32] error, and from what I've read on forums, that seems to appear on backups that work perfectly.
    Too weird.  It looks like I'll have to reinstall Lion and all my applications again to get Time Machine working, or find another backup solution.

  • Jrun error 413 header length too large

    hi there,
    is there any way to check where this error originates? any coldfusion logs etc?
    error: jrun error 413 header length too large
    it's giving me a complete headache
    cheers,
    Simon

    Hi Simon,
    I am not entirely sure changing the JVM is going to help; however, I thought I would post some notes on how to do that.
    Download from Oracle Java developer kit (not runtime):
    http://www.oracle.com/technetwork/java/javase/downloads/index.html
    Java JDK 1.6.0_23 is current (note I have not trialled that one on CF9 very much yet).
    Install that via running EXE you downloaded - default install will be fine.
    Stop CF - SERVICES.msc stop "ColdFusion 9 Application Server".
    Take a copy of CF\runtime\bin\jvm.config - so you got a backup.
    Edit CF\runtime\bin\jvm.config find line "java.home=" and comment it out eg:
    #java.home=C:/ColdFusion9/runtime/jre
    Add new line like so and save jvm.config:
    java.home=C:/Program Files/Java/jdk1.6.0_23/jre
    Note the slashes and the location of the JRE (runtime) - you need to point to the one in the JDK because the other JRE in C:\Program Files\Java\jre6 will be missing a DLL.
    Start CF via SERVICES.msc.
    HTH, Carl.

  • Conditional format with large data fails and shows error "Selection is too large" in Excel 2007

    I am facing an issue with a paste special operation using conditional formats for large data in Excel 2007.
    I have uploaded a file at the location given below.
    http://sdrv.ms/1fYC9qE
    The file contains two sheets: sheet "Data" contains the data on which the formats are to be applied, and sheet "FormatTables" contains the format tables which contain conditional formatting.
    There are two tables in the "FormatTables" sheet. Both have some conditional formats applied to them.
    Case 1: 
    1. Select the table range of Table1 i.e $A$2:$AV$2
    2. Copy it
    3. Goto Sheet "Data" 
    4. Select data area i.e $A$1:$AV$20664
    5. Perform a paste special operation on full range and select "Formats" option while performing paste special.
    Result:
    It throws error as "Selection is too large"
    Case 2:
    1. Select the table range of Table2 i.e $A$5:$AV$5
    2. Copy it
    3. Goto Sheet "Data" 
    4. Select data area i.e $A$1:$AV$20664
    5. Perform a paste special operation on full range and select "Formats" option while performing paste special.
    Result:
    Formats get applied successfully.
    Both are the same format tables with the same number of columns, applied to the same data range ($A$1:$AV$20664), yet one case works and the other fails.
    The only difference is that Table1 has an appliesTo range ($A$2:$T$2) that covers only part of its total table range ($A$2:$AV$2), whereas Table2 has an appliesTo range ($A$5:$AV$5) that is the same as its total table range ($A$5:$AV$5).
    NOTE : This issue is only in Excel 2007

    Excel 2007 does not support taking formatting from another table when the target has more than 16,000 rows. If you want to apply it over more rows than that, you have to insert more rows into your format table so that it has 3 rows,
    like: A1:AV3
    then try to copy that formatting and apply it.
    Solution Case 1: 
    1. Select the table range of Table1, i.e. AV21, and drag it down one row
    2. Select the table range of Table1 i.e $A$2:$AV$3
    3. Copy it
    4. Goto Sheet "Data" 
    5. Select data area i.e $A$1:$AV$20664
    6. Perform a paste special operation on full range and select "Formats" option while performing paste special

  • File too large - attachment settings not working

    Hi there
    We are having problems with attaching files in IMS 5.2 & wondered if anybody can help.
    Our outgoing mail message max size is set to 50MB (I know about the extra 33% space required for encoding) and yet we still cannot attach files to emails that are greater than 5MB.
    Does anyone have any idea why this is not working?
    Any time we try to send a 7 or 8 MB file, a "File too large" error comes up right away.
    The setting in the messaging server console is set to 50MB; this is under the HTTP service.
    There was a previous post but the solution did not solve my problem.
    Can anyone help?
    Thanks

    There are separate settings for webmail attachments. Please check the documentation at:
    http://docs.sun.com/source/816-6020-10/cfgutil.htm
    and look at:
    service.http.maxmessagesize
    and
    service.http.maxpostsize
    these both default to 5 megs.
    You have to restart the webmail daemon to make a change take effect.
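    For example, setting both to 50 MB would look something like this (values are in bytes; a sketch only - check the documentation above for your release):
    # raise the webmail message and post size limits to 50 MB, then restart the webmail daemon
    configutil -o service.http.maxmessagesize -v 52428800
    configutil -o service.http.maxpostsize -v 52428800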

  • Replicat error: ORA-12899: value too large for column ...

    Hi,
    In our system Source and Target are on the same physical server and in the same Oracle instance. Just different schemes.
    Tables on the target were created as 'create table ... as select * from ... source_table', so they have a similar structure. Table names are also similar.
    I started replicat, it worked fine for several hours, but when I inserted Chinese symbols into the source table I got an error:
    WARNING OGG-00869 Oracle GoldenGate Delivery for Oracle, OGGEX1.prm: OCI Error ORA-12899: value too large for column "MY_TARGET_SCHEMA"."TABLE1"."*FIRSTNAME*" (actual: 93, maximum: 40) (status = 12899), SQL <INSERT INTO "MY_TARGET_SCHEMA"."TABLE1" ("USERID","USERNAME","FIRSTNAME","LASTNAME",....>.
    FIRSTNAME is Varchar2(40 char) field.
    I suppose the problem is probably that our database is running with NLS_LENGTH_SEMANTICS='CHAR'.
    I've double-checked the table structure on the target - it's identical to the source.
    I also tried to manually insert this record into the target table using an 'insert into ... select * from ...' statement - it works. The problem seems to be in the replicat.
    How to fix this error?
    Thanks in advance!
    Oracle GoldenGate version: 11.1.1.1
    Oracle Database version: 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    NLS_LANG: AMERICAN_AMERICA.AL32UTF8
    NLS_LENGTH_SEMANTICS='CHAR'
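    Doing the byte arithmetic seems to back that up: in AL32UTF8 a Chinese character takes 3 bytes, so about 31 characters x 3 bytes = 93 bytes - exactly the "actual: 93" in the error - which fits in VARCHAR2(40 CHAR) but not within a 40-byte limit, as if the insert on the apply side is being checked with byte semantics.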

    I've created the definition files and compared them. They are absolutely identical, apart from source and target schema names:
    Source definition file:
    Definition for table MY_SOURCE_SCHEMA.TABLE1
    Record length: 1632
    Syskey: 0
    Columns: 30
    USERID 134 11 0 0 0 1 0 8 8 8 0 0 0 0 1 0 1 3
    USERNAME 64 80 12 0 0 1 0 80 80 0 0 0 0 0 1 0 0 0
    FIRSTNAME 64 160 98 0 0 1 0 160 160 0 0 0 0 0 1 0 0 0
    LASTNAME 64 160 264 0 0 1 0 160 160 0 0 0 0 0 1 0 0 0
    PASSWORD 64 160 430 0 0 1 0 160 160 0 0 0 0 0 1 0 0 0
    TITLE 64 160 596 0 0 1 0 160 160 0 0 0 0 0 1 0 0 0
    Target definition file:
    Definition for table MY_TAEGET_SCHEMA.TABLE1
    Record length: 1632
    Syskey: 0
    Columns: 30
    USERID 134 11 0 0 0 1 0 8 8 8 0 0 0 0 1 0 1 3
    USERNAME 64 80 12 0 0 1 0 80 80 0 0 0 0 0 1 0 0 0
    FIRSTNAME 64 160 98 0 0 1 0 160 160 0 0 0 0 0 1 0 0 0
    LASTNAME 64 160 264 0 0 1 0 160 160 0 0 0 0 0 1 0 0 0
    PASSWORD 64 160 430 0 0 1 0 160 160 0 0 0 0 0 1 0 0 0
    TITLE 64 160 596 0 0 1 0 160 160 0 0 0 0 0 1 0 0 0

  • File_To_RT data truncation ODI error ORA-12899: value too large for column

    Hi,
    Could you please give me some idea of how I can truncate source data that is greater than the maximum length before inserting it into the target table?
    Problem details:
    In my scenario I read data from a source .txt file and insert the data into a target table. Suppose the source file data length exceeds the maximum column length of the target table. How can I truncate the data so that the data migration succeeds and the ODI error "ORA-12899: value too large for column" is avoided?
    Thanks
    Anindya

    Bhabani wrote:
    "In which step are you getting this error? If it's the loading step then try increasing the length for that column in the datastore and use substr in the mapping expression."
    Hi Bhabani,
    You are right. It is the Loading SrcSet0 Load data step. I have increased the column length for the target table datastore
    and then applied the substring function, but it gives the same result.
    If you meant increasing the length for the source file datastore, then please tell me which length: physical length or
    logical length?
    Thanks
    Anindya

  • TFTP file too large for upload

    I'm trying to upgrade my router via TFTP. I keep getting this "File too large for TFTP" error. I'm using the recommended TFTP server from SolarWinds.
    There does not seem to be any setting in the server to let a large file pass. It's the first time I've seen that, but this is the biggest IOS image I've had to upload. I had no problem sending the last IOS, which is only about 3MB smaller.

    Correct: copy ftp://userid:password@servername/directory/filename flash:
    For more information, refer to the following URL:
    http://www.cisco.com/univercd/cc/td/doc/product/software/ios124/124tcr/tcf_r/cf_02ht.htm#wp1032450
    Hope this helps,

  • BR0253E errno 27: File too large in db13

    Dear,
    When I take a full online backup via DB13, it gives the following error. The data file size is about 3 GB. The archive log backup is working fine.
    BR0202I Saving /oracle/PRD/sapdata1/sr3_3/sr3.data3
    BR0203I to /dev/rmt/1mn ...
    #FILE..... /oracle/PRD/sapdata1/sr3_3/sr3.data3
    #SAVED.... sr3.data3  PRD_ON_01/6
    BR0280I BRBACKUP time stamp: 2011-09-01 14.42.08
    BR0063I 3 of 88 files processed - 6000.023 MB of 251452.688 MB done
    BR0204I Percentage done: 2.39%, estimated end time: 15:55
    BR0001I *_________________________________________________
    BR0252E Function fwrite() failed for '/oracle/PRD/sapbackup/begrilim.spa/sr3.data34' at location BrSparseCreate-8
    BR0253E errno 27: File too large
    BR0280I BRBACKUP time stamp: 2011-09-01 14.42.10
    BR0317I 'Alter tablespace PSAPSR3 end backup' successful
    BR0056I End of database backup: begrilim.fnt 2011-09-01 14.42.08
    BR0280I BRBACKUP time stamp: 2011-09-01 14.42.10
    My initSID.sap settings are given below.
    backup_mode = all
    restore_mode = all
    backup_type = online
    backup_dev_type = tape
    backup_root_dir = /oracle/PRD/sapbackup
    stage_root_dir = /oracle/PRD/sapbackup
    compress=hardware
    compress_cmd = "compress -c $ > $"
    uncompress_cmd = "uncompress -c $ > $"
    compress_dir = /oracle/PRD/sapreorg
    archive_function = save
    archive_copy_dir = /oracle/PRD/sapbackup
    archive_stage_dir = /oracle/PRD/sapbackup
    tape_copy_cmd = dd
    disk_copy_cmd = copy
    stage_copy_cmd = rcp
    pipe_copy_cmd = rsh
    cpio_flags = -ovB
    cpio_in_flags = -iuvB
    dd_flags = "obs=128k bs=128k"
    dd_in_flags = "ibs=128k bs=128k"
    saveset_members = 1
    copy_out_cmd = "dd ibs=8k obs=128k of=$'
    copy_in_cmd = "dd ibs=128k obs=8k if=$"
    tape_size = 800G
    exec_parallel = 0
    tape_address = /dev/rmt/1mn
    Regards

    Hi Pooja,
    This note may be helpful.
    Note 553854 - Oracle: Problems with file size limit
    This note has information about the error "<unix> Error: 27: File too large"
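    A quick way to see the limit the brbackup process is running under (just a sketch - the exact user, e.g. ora<sid> or <sid>adm, depends on your system):
    # from the shell of the user that runs brbackup / the DB13 job
    ulimit -f      # a finite block count here matches "errno 27: File too large"; the note explains how to raise it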
    Br,
    Venky.
