File import size limit - increased for LR 3+?

I've been using LR 1.4 for years, but lately I've been working on larger files and I'm getting "Files too large" errors on import... most recently with a 729.3 MB TIF. Has LR 3+ raised that limit?

Kenschuster wrote:
Most of my images now are at least 512 MP.
Ken,
Are you sure? 512 MP means 512'000'000 pixels, which results in a roughly 1.5 GB uncompressed 8-bit image. We're not talking about 512 megabytes ....
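(For reference, the arithmetic, assuming 3 channels at 8 bits each: 512'000'000 px × 3 bytes/px = 1'536'000'000 bytes ≈ 1.5 GB.)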
Beat Gossweiler
Switzerland

Similar Messages

  • CAP file component size limit (?)

    Hello,
    I'm wondering if there is a limit on the size of a CAP file's internal components.
    As specified in Sun's Java Card VM spec, a CAP file consists of several components (byte sequences): Header.cap, Class.cap, StaticField.cap, etc.
    Does a component have a size limit?
    I'm asking because I'm facing a serious problem when loading applets onto Gemplus SIM cards (GemXpresso V3): the load process interrupts suddenly and the card responds with 6F00!
    For info, the Method.cap component is 11748 bytes!
    Thank you for your help and info
    Kartagos

    Here is a complete description of the CAP I can't load:
        Header.cap (29 Bytes)
        Directory.cap (34 Bytes)
        Import.cap (62 Bytes)
        Applet.cap (23 Bytes)
        Class.cap (94 Bytes)
        Method.cap (8484 Bytes)
        StaticField.cap (6312 Bytes)
        ConstantPool.cap (925 Bytes)
        RefLocation.cap (1411 Bytes)
        Descriptor.cap (2340 Bytes)
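    In case it helps anyone sanity-check their own numbers: a CAP file is packaged as a standard JAR/ZIP container, so you can list the component sizes yourself with a few lines of Java. This is only a minimal sketch; the class name and path handling are illustrative, not from any Gemplus tool.

    import java.util.Enumeration;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipFile;

    // Prints the size of every component inside a CAP file.
    public class CapComponentSizes {
        public static void main(String[] args) throws Exception {
            try (ZipFile cap = new ZipFile(args[0])) { // e.g. "applet.cap"
                Enumeration<? extends ZipEntry> entries = cap.entries();
                while (entries.hasMoreElements()) {
                    ZipEntry e = entries.nextElement();
                    if (e.getName().endsWith(".cap")) {
                        System.out.println(e.getName() + " : " + e.getSize() + " bytes");
                    }
                }
            }
        }
    }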

  • My scanner isn't a choice in File Import in PSE 8 for Mac

    I just loaded PSE 8 on my new MacBook (Snow Leopard), but my Epson scanner doesn't appear in File > Import. I've followed all the instructions I can find, including loading Rosetta and downloading the TWAIN drivers from the Epson site. Can anyone help?

    You will need to go to Epson's site to see what is available for your scanner model. If you find a driver, download it and follow the instructions for installing it.
    Sorry I can't be more specific, but I have a Canon scanner (and it's so old there's never going to be an Intel plug-in for it).

  • What kind of credit card usage led to a credit limit increase for you?

     What type of spending behavior led to your credit limit increases? (i.e. zero balance, high usage with early payments, 9%, 30%, etc.) Were they notable increases? What time frame did the increases occur in?

    As others have noted, there is no set formula; it depends on the lender (and perhaps involves some voodoo or superstition). I got an AMEX Delta Gold card in June 2013 with a $2k limit, and put $4,500 through it in the first 11 months, always letting the balance report on the statement, always PIF close to the payment due date. Then I purposefully maxed it out twice, reaching $1,800 in early June 2014, then up to $1,300 again in mid-June, when they gave me an auto CLI to $3k. My BofA AMEX card, back in the day, had been auto-CLI'd up to $28,800 in April 2008, and I used $21,800 of it. Then they started CLDs (these are all CLs, not balances, though the balances were only a few hundred under each step down in CL; the true balance-chasing scenario):
    Jan 2009: $21,700
    Apr 2009: $20,500
    Sept 2010: $17,300
    May 2011: $16,000
    Jan 2013: $12,700
    Then in August 2014, I got a 0% offer from US Bank, so I cleared off all the remaining $12k from this BofA AMEX, and in October 2014 moved a total of $10,700 back onto this card on a new 0% 18+ month offer. Proceeded to make some large payments on that. In Feb 2015, I got a $3k CLI on this BofA AMEX, from $12,700 to $15,700. Needless to say, it was a good day.

  • Required "/" (root) file system size on UNIX for Solution Manager.

    Hello SAP Gurus,
       I am setting up SAP Solution Manager 3.2 on HP-UX. It is asking for about 350 MB of free space on the "/" file system for the Central Instance installation and about 120 MB of free space on "/" for the Database Instance installation.
       I am installing everything onto a shared disk mounted under /usr/sap. Why does it need free space on the "/" file system? Is there any workaround to get rid of this requirement? I have very little free space on "/" and don't want to take the risks involved in increasing its size.
       Are there any SAP-recommended sizes for the "/" file system?
       I'm stuck in the middle of setting up an SAP landscape on HP-UX (11.23). I searched through the installation documents but couldn't find anything helpful in this regard. It is an urgent requirement, so please let me know any solution or workaround ASAP.
       Any help is greatly appreciated.
    Thanks in advance.
    Regards,
    cvr/

    Hi Vaibhav.
    Normally "canonical path not available for (folder name)" means:
    1. Wrong username/password. Please double-check your credentials.
    2. The resource cannot be linked from the portal server. Please make sure you can connect to the following ports on the Windows server from the Unix server:
    a. TCP 139 (NetBIOS Session Service): used, for example, to connect to file shares.
    b. TCP 445 (SMB): the SMB (Server Message Block) protocol is used, among other things, for file sharing in Windows NT/2000/XP. In Windows NT it ran on top of NetBT (NetBIOS over TCP/IP), which used the famous ports 137, 138 (UDP) and 139 (TCP). In Windows 2000/XP/2003, Microsoft added the possibility to run SMB directly over TCP/IP, without the extra layer of NetBT; for this they use TCP port 445.
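    If you want to verify reachability quickly, a minimal Java sketch like the one below can test both ports (the host name "windows-server" is a placeholder; substitute your own):

    import java.net.InetSocketAddress;
    import java.net.Socket;

    // Quick reachability check for the two ports mentioned above.
    public class PortCheck {
        public static void main(String[] args) {
            for (int port : new int[] {139, 445}) {
                try (Socket s = new Socket()) {
                    s.connect(new InetSocketAddress("windows-server", port), 3000); // 3 s timeout
                    System.out.println("Port " + port + " reachable");
                } catch (Exception e) {
                    System.out.println("Port " + port + " NOT reachable: " + e.getMessage());
                }
            }
        }
    }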
    I hope these things help somebody.
    Best Regards,
    Jheison A. Urzola H.

  • Execute an xquery in an XML file with size limit

    Hi there,
    I have the following XML file:
    <Fuzzy>
    <Rule>
    <rule_ID>1</rule_ID>
    <event>insert</event>
    <happen>after</happen>
    <condaction>collection('test.dbxml')/Bookstore/Book</condaction>
    </Rule>
    </Fuzzy>
    I use the following code to execute the xquery in the <condaction> tag:
    String query = "collection('test.dbxml')/Fuzzy/Rule/condaction/text()";
    XmlQueryExpression ue = myManager.prepare(query, context);
    XmlResults results = ue.execute(context);
    while (results.hasNext()) {
        System.out.println("==================================");
        // The text of <condaction> is itself an xquery; prepare and run it as a second query.
        String test = results.next().asString();
        System.out.println(test);
        XmlQueryExpression e = myManager.prepare(test, context);
        XmlResults results1 = e.execute(context);
        while (results1.hasNext()) {
            System.out.println("==================================");
            System.out.println(results1.next().asString());
        }
    }
    The above executes the xquery and gives me the result. But the same code has problems when I use the xquery below, which is longer. Is there any limit on the size of an xquery?
    <Fuzzy>
    <Rule>
    <rule_ID>1</rule_ID>
    <event>insert</event>
    <happen>after</happen>
    <condaction>if(collection('test.dbxml')/Bookstore/Book/book_ID gt 0 and collection('test.dbxml')/Bookstore/Book/price lt 20) then insert nodes <b4>inserted child</b4> after
    collection('test.dbxml')/Bookstore/Book/title else insert nodes <b4>inserted child</b4> after collection('test.dbxml')/Bookstore/Book</condaction>
    </Rule>
    </Fuzzy>
    The problem I identified is that the 'test' variable only prints the part of the xquery before the embedded <b4> element:
    ================================
    if(collection('test.dbxml')/Bookstore/Book/book_ID gt 0 and collection('test.dbxml')/Bookstore/Book/price lt 20) then insert nodes
    Please help me identify if I am doing something wrong or if there is a limitation causing this problem.
    Thanks.

    FYI:
    XML has a special set of characters that cannot appear literally in XML text. These characters and their entity replacements are:
    & - &amp;
    < - &lt;
    > - &gt;
    " - &quot;
    ' - &apos;
    I replaced the above in the XML tag that holds my xquery and it worked!
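    A small helper along these lines does the replacement (the method name is my own, just for illustration; & must be replaced first so the entities introduced by the later replacements aren't mangled):

    // Escapes the five XML special characters so an xquery can be
    // embedded safely inside an element like <condaction>.
    static String escapeXml(String s) {
        return s.replace("&", "&amp;") // must come first
                .replace("<", "&lt;")
                .replace(">", "&gt;")
                .replace("\"", "&quot;")
                .replace("'", "&apos;");
    }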

  • Why is same file different sizes after saving for web?

    I have files that I save in PS CS3 under "Save for Web & Devices", and I indicate under "optimize to file size" that they should be no larger than 60K. After the item is saved and I go to "Get Info", it says the file is 60K. Yet when I open the file in PS and go to Image Size, it says 467K. Why the discrepancy? Thanks for your feedback!

    One is the compressed file size on disk (60K). The other is the uncompressed image size in memory: 369 pixels x 432 pixels x 8 bits per channel x 3 channels / 8 bits per byte / 1024 (since K really is 1024, not 1000).
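    Worked out: 369 × 432 = 159,408 pixels; × 3 bytes per pixel = 478,224 bytes; ÷ 1024 ≈ 467K, which is exactly what Image Size reports.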
    http://en.wikipedia.org/wiki/Image_compression

  • Who does Chase pull for a credit limit increase for the Freedom card?

    Trying to increase my Freedom limit. I've had the card since April, used it heavily, and paid it off almost in full after every major purchase. I'm wondering if they pull strictly EX or EQ.

    CAPTOOL wrote:
    Yes-Its-Me wrote:
    bigalkescott514 wrote:
    Trying to increase my Freedom limit. I've had the card since April, used it heavily, and paid it off almost in full after every major purchase. I'm wondering if they pull strictly EX or EQ.
    Whatever they pulled for your initial application is what they will pull for the CLI. It all depends on what bureau is pulled for your region.
    Well, that sucks. They double pulled me, EQ and EX, both times I applied.
    For CLI, they will only pull one. Chase will pull EX for most people; during the initial application, if they are unsure of the credit profile, they pull another bureau. Another thing they do: if you use the pre-qual on their site, you will end up with a double pull if you proceed to apply after seeing it there...

  • File download size limit - 2 GB?

    I have CE560 Content Engines running ACNS 5.3.1. Users trying to download files over 2 GB via a browser get only about 2 GB of the download, with no errors. Is there some limit or a setting I can change?

    Allan wrote:
    > I want to stream data to disk for a long period, in the format waveform data
    > type - my problem is that the data files tend to reach the limit of 2 GB.
    >
    > Is it possible to make files in LabVIEW greater than 2 GB ?
    >
    > My OS is Windows XP and the disk is NTFS formatted.
    >
    > I really appreciate your help on this one!
    If you do the writing of files yourself, e.g. using the Write File node,
    then you can go to www.openg.org and check out the OpenG Toolkit. There
    should be a link from
    http://www.openg.org/tiki/tiki-index.php?page=OpenG+Toolkit to the
    SourceForge page where you can download the entire Toolkit or portions
    thereof. You are interested in the Large File IO library, which accesses
    the Win32 API to handle files bigger than 2 GB.
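    (For what it's worth, the 2 GB ceiling is the classic signature of a signed 32-bit file offset: 2^31 bytes = 2,147,483,648 bytes = 2 GiB. Anything that tracks file position in a signed 32-bit integer hits exactly this wall.)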
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions

  • WIDE-SCALE CALL FOR INPUT: The NSS 8TB Size Limit

    NOTE: This thread is purposefully double-posted in the OES:Linux and OES:NetWare storage forums.
    Like most of you -- I'm just a Novell customer. While I do not represent Novell in any official capacity, this call for information has been encouraged by Novell's OES team.
    During this week's OES2SP3 Beta conference call, a topic was brought up again regarding the aging size limit of the NSS file-system.
    Quite simply, the current NSS file system size limit of 8TB is too small for modern and emerging needs. The reality is that customer data is trending larger all the time. A failure to act quickly will eventually mean the obsolescence of this file system and apathy in the customer base.
    Novell will be monitoring this thread. If sufficient interest can be documented, then Novell could more easily marshal the needed internal resources to make this happen sooner rather than later.
    WHAT THIS THREAD IS -NOT- INTENDED TO BECOME
    - A discussion of how one could use DFS or other techniques to mitigate NSS' size constraint.
    - A string of suggestions for alternative file systems such as NTFS or POSIX-based ones like XFS, BTRFS, EXT4
    - A debate on why people should not want a larger-than-8TB file-system. That debate is effectively over -- almost every major player in the file-system space is doing anything from 64TB into the Exabyte range (XFS, NTFS, others).
    WHAT THIS THREAD IS INTENDED FOR
    - A tally of other Novell customers who DO see the need and would prefer to keep this data on NSS if it could accommodate it. Your post can be as simple as: "This is important to us, too!" Also helpful, though not required, would be a brief statement or case study of what your needs would look like (types of data, overall size and quantity of files).
    On the beta calls, several of us have been vocal supporters of this change. We now hope that by casting a wider net we can find others who perhaps have been suffering in silence.

    Originally Posted by Elfstone
    NOTE: This thread is purposefully double-posted in the OES:Linux and OES:NetWare storage forums.
    Quite simply, the current NSS file system size limit of 8TB is too small for modern and emerging needs. The reality is that customer data is trending larger all the time. A failure to act quickly will eventually mean the obsolescence of this file system and apathy in the customer base.
    I have a couple of instances where the 8TB limit is "inconvenient," but all are for comparatively small numbers of large files. As a practical matter, the bottlenecks in the metadata are reached far in advance of the storage limits. For example, how would an NSS volume perform with 100,000,000 files on it? This is the biggest issue.
    So sure, there are things which could be done to expand NSS. As a practical matter, the easiest would be to support larger block sizes, so 8TB becomes 16, becomes 32, ... all the way to 128TB (each doubling of the block size doubles the ceiling, so four doublings take 8TB to 128TB). I assume 128TB would handle your needs. Of course, how you back up and restore 128TB in less than the age of the Universe is up to you.
    -- Bob

  • Increase file attachment size/number of attachments

    Hello all,
    Is there any way to raise the maximum file attachment size limit in Service Manager? We have a need for more than 10 attachments on a request, with attachments larger than 10 MB each (the maximums in the GUI). Has anyone figured out how to increase these maximums in Service Manager 2012?

    The restrictions for these settings are simply hard-coded into the GUI. Why, I do not know. However, it is possible to override them; please see this blog post from Marcel Zehner:
    http://marcelzehner.ch/2013/07/06/increase-the-number-of-max-incident-attachments-to-more-than-10/
    Rob Ford scsmnz.net
    Cireson www.cireson.com

  • 3D files / size limit in CS5?

    Anyone know if there's a 3D file quantity/size limit within a Photoshop document? What would any limit depend on, e.g. VRAM?
    Running Photoshop CS5 Extended 64-bit, all updates, on a dual Xeon, 12 GB RAM, 64-bit Win 7, NVIDIA Quadro FX 3800 (1 GB), Raptor scratch disk with 50 GB space, used as a dedicated scratch disk. PS settings are set to allocate over 9 GB RAM to PS, GPU OpenGL settings enabled and set to Normal, 3D settings allocate 100% VRAM (990 MB) and rendering set to OpenGL. You'd expect this to perform admirably and handle most tasks.
    Background:
    Creating a PSD website design file with 3 x 3D files embedded, one 'video' animation file linked, a few smart objects (photos), and the rest shapes and text with a few masks etc. Nothing unusual other than maybe the video and 3D files. The file size is 500 MB, which isn't unusual, as I've worked on several 800 MB files at the same time, all open in the same workspace. The PC handles that without any problems.
    Introducing the 3D files and video seems to have hit an error or a limit of some sort, but I can't seem to pinpoint what's causing it or how to resolve it.
    Problem:
    I have the one 500 MB file I've been working on, open. I try to open any ONE file or create a new one and I get the following error: "Could not complete the command because too many files were selected for opening at once". I've tried with 3 files, other PSD files, JPEGs, anything that can be opened in PS, all with the same message. Only one PSD file open, only trying to open one more file or create a new file from scratch.
    I've also had a similar error: "Could not complete your request because there are too many files open. Try closing some windows & try again". I have re-booted and opened only PS, and still get the same errors.
    Tried removing the video file and saving a copy; that doesn't work. Removed some of the 3D files and saved a copy, and then it sometimes allows me to open more files. Tried leaving the 3D files in and reducing lighting (no textures anyway) and rendering without ray tracing; still no effect. Tried rasterising the files, and it allowed more files to be opened. I'm working across a network, so tried using local files, which made no difference. The only thing that seems to make a difference is removing or rasterising some of the 3D files.
    Anyone had similar problems with what seems to be a limit on either the quantity of 3D files, or maybe a complexity limit, or something else to do with 3D file limits? Anyone know of upgrades that might help? I've checked free RAM and that's at 7 GB, using about a 10 GB swap file. I've opened 5 documents at the same time of over 700 MB each without problems, so I can only think the limit is with the GPU with regard to 3D. Can't get that any higher than 990 MB, which I'd assume would be enough anyway if that was the problem. I've played about with preferences to lower the 3D settings, but no use.
    Anyone any idea what's limiting it and causing the error message above? Is it even a CS5 limit or a Win 7 64-bit limit?
    Any ideas greatly appreciated. Thanks all.

    Thanks for your comments Mylenium. I originally thought it might be VRAM, but at 1 GB (still quite an acceptable size from what I can tell; I'd expect more than 3 x 3D files for that) I dismissed it, as the complexity of the files seemed quite low for that to be the cause. I'm still not completely convinced it's the VRAM, though, because of the error message it gives, and I have tried it with more complex 3D models, and with more of them, and it works fine on those. Seems odd that it won't let me create a new document either. I'd like to get a 6 GB card, but that's a bit out of the budget range at the moment.
    Do you know of a way to "optimise" 3D files so they take up less VRAM, for example reducing any unwanted textures, materials, vertices or faces within PS, in a similar fashion to how Illustrator can reduce the complexity/number of shapes/points etc? Can't ask the client, as they don't have the time, or I'd do this. Does rendering quality make a difference, or changing to a smart object? Doesn't seem to, from what I've tried.
    Re: using a dedicated 3D program, I'd be a bit reluctant to lose the ability to rotate/edit/draw onto/light objects within Photoshop now that I have a taste for it, and go back to just using 3D renderings; otherwise I'd go down the route suggested for a dedicated 3D package. Thanks for the suggestion though.

  • Size limit for sending attachments?

    Anyone know what the size limit is for sending/forwarding e-mails with attachments when using a BES?
    I am the IT administrator and all the file sizes are set to unlimited in the BES attachment manager configuration.
    A user received an email with a 4MB attachment and they are attempting to forward it and they are unable to.
    In researching this some people have said this is a RIM limitation, I haven't found any definitive answer from an official source on this though.
    I did find this FAQ: 
    http://wirelesssupport.verizon.com/faqs/PDA+Smartphone+Email/faqs.html?t=5#item12
    That says it's either 4MB or 2MB.
    But I'm not sure whether they're only talking about POP or BIS accounts, rather than a corporate server in the mix.
    Any info would be appreciated.
    Thanks!

    Thank you very much.
    It is 3 MB.
    This is also described here: http://docs.blackberry.com/en/admin/deliverables/12107/Change_max_file_size_for_attach_users_send_497943_11.jsp
    This is really surprising to me.
    RIM needs to upgrade their network from late-90s standards.
    It's pretty common for people to send and receive attachments larger than 3 MB these days.
    I will need to start recommending people buy Android, Windows Mobile or iPhone devices moving forward.

  • Share Point folder size limit - File Services

    I created a new folder inside a share point using Server Admin. Is there a way to set a folder size limit (quota) for that folder?
    P.S. I am NOT talking about user account quotas for home folders created using Workgroup Manager! ...Just any new folder created, to be used on any volume; is there a way to set a maximum size?
    For example I have a 1TB volume on my Xserve RAID. I create a new folder but want to set its maximum capacity to 200GB. Is that possible?
    Thank you very much in advance for any feedback.

    There's no direct way to set a limit on a folder size.
    The 'simplest' method I can think of is to create partitions on your disk of the appropriate sizes and share these - they will have inherent size limits based on the size of the partition. It's a little messier but should solve your issue.
    :: thinking :: you might be able to use disk images rather than partitions, although I've never tried sharing a mounted disk image, and you'd need to address automounting the disk images when the server starts, so it might not be viable.
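    If you try the disk image route, the commands would look something like this (the size, volume name and paths are just placeholders for your setup):
    hdiutil create -size 200g -fs HFS+J -volname Shared200 /Volumes/Data/Shared200.dmg
    hdiutil attach /Volumes/Data/Shared200.dmg
    The mounted volume can never grow past the 200 GB you gave the image, which gets you a de facto per-folder cap.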

  • Messaging server size limit

    Hello
    As I see it, I have a problem that is common among Sun Messaging Server administrators. I have the whole system distributed over several virtual Solaris machines, and a few days ago a message size problem emerged. I noticed it in relation to incoming mail: when I create a new user, he or she can't receive mail larger than 300K. The sender gets the well-known message:
    This message is larger than the current system limit or the recipient's mailbox is full. Create a shorter message body or remove attachments and try sending it again.
    <server.domain.com #5.3.4 smtp;552 5.3.4 a message size of 302 kilobytes exceeds the size limit of 300 kilobytes computed for this transaction>
    The interesting thing is that this problem arose with no correlation to other actions. I had noticed this problem with new users before, but I could successfully manage it with different service packages. Now, with new users, this method doesn't work! Old users receive messages bigger than 300k normally, as before.
    I tried to set the default setting blocklimit 2000 in imta.cnf, but I didn't succeed.
    I know the size limit can be set in different places, but is there a simple way to make the sending and receiving message size unlimited?
    Messaging server version is:
    Sun Java(tm) System Messaging Server 7u2-7.02 64bit (built Apr 16 2009)
    libimta.so 7u2-7.02 64bit (built 03:03:02, Apr 16 2009)
    Using /opt/sun/comms/messaging64/config/imta.cnf (compiled)
    SunOS mailstore 5.10 Generic_138888-01 sun4v sparc SUNW,SPARC-Enterprise-T5120
    Regards
    Matej

    For the sake of correctness, the attribute name in LDAP is mailMsgMaxBlocks.
    I also stumbled upon this: values like 300 blocks or 7000 blocks are set in the (sample) service packages but are not advertised in the Delegated Admin web interface. When packages are assigned, these values are copied into each user's LDAP entry as well, and cannot be seen or changed in the web interface.
    And then mail users get "weird" errors like:
    550 5.2.3 user limit of 7000 kilobytes on message size exceeded: [email protected]
    or
    550 5.2.3 user limit of 300 kilobytes on message size exceeded: [email protected]
    resulting in
    <[email protected]>... User unknown
    or
    552 5.3.4 a message size of 7003 kilobytes exceeds the size limit of 7000 kilobytes computed for this transaction
    or
    552 5.3.4 a message size of 302 kilobytes exceeds the size limit of 300 kilobytes computed for this transaction
    resulting in
    Service unavailable
    I guess there are other similar error messages, but these two are the most common.
    I hope other people googling the problem will get to this post too ;)
    One solution is to replace the predefined service packages with several of your own, i.e. ldapadd entries like these (fix the dc=domain,dc=com part to suit your deployment, and both cn parts if you rename them), and restart the DA webcontainer:
    dn: cn=Mail-Calendar - Unlimited,o=mailcalendaruser,o=cosTemplates,dc=domain,dc=com
    cn: Mail-Calendar - Unlimited
    daservicetype: calendar user
    daservicetype: mail user
    mailallowedserviceaccess: imaps:ALL$pops:ALL$+smtps:ALL$+https:ALL$+pop:ALL$+imap:ALL$+smtp:ALL$+http:ALL
    mailmsgmaxblocks: 20480
    mailmsgquota: -1
    mailquota: -1
    objectclass: top
    objectclass: LDAPsubentry
    objectclass: costemplate
    objectclass: extensibleobject
    dn: cn=Mail-Calendar - 100M,o=mailcalendaruser,o=cosTemplates,dc=domain,dc=com
    cn: Mail-Calendar - 100M
    daservicetype: calendar user
    daservicetype: mail user
    mailallowedserviceaccess: imaps:ALL$pops:ALL$+smtps:ALL$+https:ALL$+pop:ALL$+imap:ALL$+smtp:ALL$+http:ALL
    mailmsgmaxblocks: 20480
    mailmsgquota: 10000
    mailquota: 104857600
    objectclass: top
    objectclass: LDAPsubentry
    objectclass: costemplate
    objectclass: extensibleobject
    dn: cn=Mail-Calendar - 500M,o=mailcalendaruser,o=cosTemplates,dc=domain,dc=com
    cn: Mail-Calendar - 500M
    daservicetype: calendar user
    daservicetype: mail user
    mailallowedserviceaccess: imaps:ALL$pops:ALL$+smtps:ALL$+https:ALL$+pop:ALL$+imap:ALL$+smtp:ALL$+http:ALL
    mailmsgmaxblocks: 20480
    mailmsgquota: 10000
    mailquota: 524288000
    objectclass: top
    objectclass: LDAPsubentry
    objectclass: costemplate
    objectclass: extensibleobject
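    (For scale: MTA blocks are 1 KB by default, so mailmsgmaxblocks: 20480 works out to roughly a 20 MB per-message ceiling, consistent with the 20480000-byte HTTP limits and the 20000-block channel limits below.)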
    See also the limits in config files:
    * msg.conf (in bytes):
    service.http.maxmessagesize = 20480000
    service.http.maxpostsize = 20480000
    and
    * imta.cnf (in 1k blocks): <channel block definition> ... maxblocks 20000 blocklimit 20000 sourceblocklimit 20000
    i.e.:
    tcp_local smtp mx single_sys remotehost inner switchchannel identnonenumeric subdirs 20 maxjobs 2 pool SMTP_POOL maytlsserver maysaslserver saslswitchchannel tcp_auth missingrecipientpolicy 0 loopcheck slave_debug sourcespamfilter2optin virus destinationspamfilter2optin virus maxblocks 20000 blocklimit 20000 sourceblocklimit 20000 daemon outwardrelay.domain.com
    tcp_intranet smtp mx single_sys subdirs 20 dequeue_removeroute maxjobs 7 pool SMTP_POOL maytlsserver allowswitchchannel saslswitchchannel tcp_auth missingrecipientpolicy 4 maxblocks 20000 blocklimit 20000 sourceblocklimit 20000
    tcp_submit submit smtp mx single_sys mustsaslserver maytlsserver missingrecipientpolicy 4 slave_debug maxblocks 20000 blocklimit 20000 sourceblocklimit 20000
    tcp_auth smtp mx single_sys mustsaslserver missingrecipientpolicy 4 maxblocks 20000 blocklimit 20000 sourceblocklimit 20000
    If your deployment uses other SMTP components, like milters to check for viruses and spam, in/out relays separate from Sun Messaging, other mailbox servers, etc., make sure to use a common size limit.
    For sendmail relays, in the sendmail.mc (m4) config source file, that could mean lines like these:
    define(`SMTP_MAILER_MAX', `20480000')dnl
    define(`confMAX_MESSAGE_SIZE', `20480000')dnl
    HTH,
    //Jim Klimov
    PS: Great thanks to Shane Hjorth who originally helped me to figure all of this out! ;)
