552 Message size exceeds maximum message size

[Dell Inspiron 560 desktop;
MS Windows 7 Home Premium SP1 64bit; Windows Live Mail 2011; MS Word Pro 2003; Mozilla Firefox 22.0; Adobe FP 11,8,800,94; Adobe Reader 11]
Hello, I have the same problem: 10 emails I sent out bounced with the same error message:
">>>xxxxxxxxxatxxxxxxxx.co.il(reading confirmation): 552 Message size exceeds maximum message size"
They also sent the headers, which are all Greek to me.  Being a very new computer user, I can't carry out the good advice given here.
This is very strange, since the bounced emails only had 5 small pictures taken with my small, cheap digital camera, while other messages I had sent out
successfully before were probably 4 times as big.
Also, whose fault is it: my provider, RCN?  Microsoft?  I wonder...
I hope there's something easier I can do to fix this problem?
Thanks ever so much for any help!  :)      Adela

It says to me that the message you sent was rejected along the way because it was too big.  Ask the e-mail administrator of the system that's bouncing it.
Ed Crowley MVP "There are seldom good technological solutions to behavioral problems."

Similar Messages

  • Mac OS X Hello World: Texture Dimensions exceed maximum texture size

    Just installed NetBeans 7.1 beta and JavaFX to try it out for the first time.
    Using Java 1.6_26 with OS X 10.6.6
    I tried creating a new project. It generates a default Hello World app. I tried running this app without modifications. It opens a blank window but then crashes with runtime exceptions as follows. Can anyone suggest where I may be going wrong?
    init:
    Deleting: /Users/shannah/NetBeansProjects/JavaFXApplication2/build/built-jar.properties
    deps-jar:
    Updating property file: /Users/shannah/NetBeansProjects/JavaFXApplication2/build/built-jar.properties
    Compiling 1 source file to /Users/shannah/NetBeansProjects/JavaFXApplication2/build/classes
    compile-single:
    run-single:
    java.lang.RuntimeException: Requested texture dimensions (256x4096) require dimensions (256x0) that exceed maximum texture size (2048)
         at com.sun.prism.es2.ES2Texture.create(ES2Texture.java:147)
         at com.sun.prism.es2.ES2ResourceFactory.createTexture(ES2ResourceFactory.java:45)
         at com.sun.prism.impl.BaseResourceFactory.createMaskTexture(BaseResourceFactory.java:131)
         at com.sun.prism.impl.GlyphCache$GlyphManager.allocateBackingStore(GlyphCache.java:447)
         at com.sun.prism.impl.GlyphCache$GlyphManager.allocateBackingStore(GlyphCache.java:444)
         at com.sun.prism.impl.packrect.RectanglePacker.getBackingStore(RectanglePacker.java:69)
         at com.sun.prism.impl.GlyphCache.getBackingStore(GlyphCache.java:261)
         at com.sun.prism.impl.ps.BaseShaderGraphics.drawString(BaseShaderGraphics.java:1151)
         at com.sun.prism.impl.ps.BaseShaderGraphics.drawString(BaseShaderGraphics.java:1066)
         at com.sun.javafx.sg.prism.NGText.drawString(NGText.java:967)
         at com.sun.javafx.sg.prism.NGText.renderContent(NGText.java:1191)
         at com.sun.javafx.sg.prism.NGNode.renderRectClip(NGNode.java:324)
         at com.sun.javafx.sg.prism.NGNode.renderClip(NGNode.java:351)
         at com.sun.javafx.sg.prism.NGNode.doRender(NGNode.java:177)
         at com.sun.javafx.sg.prism.NGNode.doRender(NGNode.java:39)
         at com.sun.javafx.sg.BaseNode.render(BaseNode.java:1143)
         at com.sun.javafx.sg.prism.NGGroup.renderContent(NGGroup.java:205)
         at com.sun.javafx.sg.prism.NGRegion.renderContent(NGRegion.java:420)
         at com.sun.javafx.sg.prism.NGNode.doRender(NGNode.java:185)
         at com.sun.javafx.sg.prism.NGNode.doRender(NGNode.java:39)
         at com.sun.javafx.sg.BaseNode.render(BaseNode.java:1143)
         at com.sun.javafx.sg.prism.NGGroup.renderContent(NGGroup.java:205)
         at com.sun.javafx.sg.prism.NGNode.doRender(NGNode.java:185)
         at com.sun.javafx.sg.prism.NGNode.doRender(NGNode.java:39)
         at com.sun.javafx.sg.BaseNode.render(BaseNode.java:1143)
         at com.sun.javafx.sg.prism.NGGroup.renderContent(NGGroup.java:205)
         at com.sun.javafx.sg.prism.NGNode.doRender(NGNode.java:185)
         at com.sun.javafx.sg.prism.NGNode.doRender(NGNode.java:39)
         at com.sun.javafx.sg.BaseNode.render(BaseNode.java:1143)
         at com.sun.javafx.tk.quantum.AbstractPainter.doPaint(AbstractPainter.java:257)
         at com.sun.javafx.tk.quantum.AbstractPainter.paintImpl(AbstractPainter.java:187)
         at com.sun.javafx.tk.quantum.PresentingPainter.run(PresentingPainter.java:65)
         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
         at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
         at com.sun.prism.render.RenderJob.run(RenderJob.java:39)
         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
         at com.sun.javafx.tk.quantum.QuantumRenderer$PipelineRunnable.run(QuantumRenderer.java:102)
         at java.lang.Thread.run(Thread.java:722)
    java.lang.RuntimeException: Requested texture dimensions (256x4096) require dimensions (256x0) that exceed maximum texture size (2048)
         at com.sun.prism.es2.ES2Texture.create(ES2Texture.java:147)
         at com.sun.prism.es2.ES2ResourceFactory.createTexture(ES2ResourceFactory.java:45)
         at com.sun.prism.impl.BaseResourceFactory.createMaskTexture(BaseResourceFactory.java:131)
         at com.sun.prism.impl.GlyphCache$GlyphManager.allocateBackingStore(GlyphCache.java:447)
         at com.sun.prism.impl.GlyphCache$GlyphManager.allocateBackingStore(GlyphCache.java:444)
         at com.sun.prism.impl.packrect.RectanglePacker.getBackingStore(RectanglePacker.java:69)
         at com.sun.prism.impl.GlyphCache.getBackingStore(GlyphCache.java:261)
         at com.sun.prism.impl.ps.BaseShaderGraphics.drawString(BaseShaderGraphics.java:1151)
         at com.sun.prism.impl.ps.BaseShaderGraphics.drawString(BaseShaderGraphics.java:1066)
         at com.sun.javafx.sg.prism.NGText.drawString(NGText.java:967)
         at com.sun.javafx.sg.prism.NGText.renderContent(NGText.java:1191)
         at com.sun.javafx.sg.prism.NGNode.renderRectClip(NGNode.java:324)
         at com.sun.javafx.sg.prism.NGNode.renderClip(NGNode.java:351)
         at com.sun.javafx.sg.prism.NGNode.doRender(NGNode.java:177)
         at com.sun.javafx.sg.prism.NGNode.doRender(NGNode.java:39)
         at com.sun.javafx.sg.BaseNode.render(BaseNode.java:1143)
         at com.sun.javafx.sg.prism.NGGroup.renderContent(NGGroup.java:205)
         at com.sun.javafx.sg.prism.NGRegion.renderContent(NGRegion.java:420)
         at com.sun.javafx.sg.prism.NGNode.doRender(NGNode.java:185)
         at com.sun.javafx.sg.prism.NGNode.doRender(NGNode.java:39)
         at com.sun.javafx.sg.BaseNode.render(BaseNode.java:1143)
         at com.sun.javafx.sg.prism.NGGroup.renderContent(NGGroup.java:205)
         at com.sun.javafx.sg.prism.NGNode.doRender(NGNode.java:185)
         at com.sun.javafx.sg.prism.NGNode.doRender(NGNode.java:39)
         at com.sun.javafx.sg.BaseNode.render(BaseNode.java:1143)
         at com.sun.javafx.sg.prism.NGGroup.renderContent(NGGroup.java:205)
         at com.sun.javafx.sg.prism.NGNode.doRender(NGNode.java:185)
         at com.sun.javafx.sg.prism.NGNode.doRender(NGNode.java:39)
         at com.sun.javafx.sg.BaseNode.render(BaseNode.java:1143)
         at com.sun.javafx.tk.quantum.AbstractPainter.doPaint(AbstractPainter.java:257)
         at com.sun.javafx.tk.quantum.AbstractPainter.paintImpl(AbstractPainter.java:181)
         at com.sun.javafx.tk.quantum.PresentingPainter.run(PresentingPainter.java:65)
         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
         at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
         at com.sun.prism.render.RenderJob.run(RenderJob.java:39)
         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
         at com.sun.javafx.tk.quantum.QuantumRenderer$PipelineRunnable.run(QuantumRenderer.java:102)
         at java.lang.Thread.run(Thread.java:722)
    JavaFX application launcher: calling System.exit
    BUILD SUCCESSFUL (total time: 20 seconds)

    It's a slightly different error message and stack trace, but pretty similar to that in this thread =>
    JavaFX2 sample app: Error creating framebuffer object
    You can try the workaround suggested in that thread =>
    Use the runtime parameter -Dprism.order=j2d to select a software (SW) rendering pipeline.
    You can also log a jira at http://javafx-jira.kenai.com, or post your trace and configuration in a comment on a similar Mac rendering error jira to get an Oracle tech to look at your error.
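    If it helps, here is a minimal sketch of applying that workaround in code rather than in the VM options (this assumes JavaFX 2.x on the classpath; the class name SoftwarePipelineHelloWorld is just illustrative). Setting the prism.order system property before launch has the same effect as passing -Dprism.order=j2d on the command line:
    import javafx.application.Application;
    import javafx.scene.Scene;
    import javafx.scene.control.Label;
    import javafx.stage.Stage;

    // Minimal sketch, not the generated NetBeans project code.
    public class SoftwarePipelineHelloWorld extends Application {

        @Override
        public void start(Stage stage) {
            stage.setScene(new Scene(new Label("Hello World"), 300, 100));
            stage.show();
        }

        public static void main(String[] args) {
            // Must be set before launch(), otherwise Prism has already chosen the
            // ES2 pipeline that throws the texture-size exception.
            System.setProperty("prism.order", "j2d");
            launch(args);
        }
    }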

  • Volume exceeds maximum file size

    Hi,
    I am trying to compress an FC7 file using Compressor. I want to compress it to ProRes and to an H.264 file. I get an error each time; it says: "volume exceeds maximum file size". I don't understand where it comes from. I have space on my disk. I don't have any other applications open.
    I have a dual-core 2.6 GHz MacBook Pro.
    Thanks for your help

    I have the same problem, except I'm exporting a single recording of about 2 hrs 40 mins; anywhere between 192 and 320 kbps should be fine (all I've done is trim another mp3 file down, imported originally at 3 hrs 30 mins, 470 MB, 320 kbps). It shouldn't exceed 2 GB; it should be way under that. Even if I try to export at 64 kbps it doesn't let me.
    I used to do this all the time in GarageBand, so I don't understand the problem. I've looked at my preferences and menu options and I can't see a way to tackle it.
    Can someone please advise?
    Cheers

  • Rounding Value, Minimum Lot Size and Maximum Lot Size parameters

    Hello Gurus,
    Please explain the use of the Rounding Value, Minimum Lot Size and Maximum Lot Size parameters in the product master, and how do they impact the Heuristic Run?
    Thanks.

    Rounding value is the increment in which an order can be produced/procured. E.g. if orders are possible in quantities of 40, 60, 80, 100 etc., then the rounding value is 20.
    Min lot size is the minimum quantity in which an order can be produced, e.g. 40 in the above example.
    Max lot size is the maximum quantity in which an order can be produced/procured, e.g. 100 in our example.
    Impact on the heuristic run: the heuristic takes all the above parameters into account when planning supply orders. E.g. if the requirement is 55, it will plan supply for 60. If the requirement is 120, it will produce two orders, one for 100 and the other for 20. If the requirement is 10, the order size will be 40. (See the sketch below.)
    Hope this helps.
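    A rough sketch of that logic, for illustration only (plain Java, not the actual SAP APO heuristic; the class and method names are made up): round the requirement up to the rounding value, enforce the minimum lot size, then split the result into orders no larger than the maximum lot size.
    import java.util.ArrayList;
    import java.util.List;

    public class LotSizingSketch {

        static List<Integer> planOrders(int requirement, int roundingValue,
                                        int minLotSize, int maxLotSize) {
            // Round the requirement up to the next multiple of the rounding value.
            int qty = ((requirement + roundingValue - 1) / roundingValue) * roundingValue;
            // Enforce the minimum lot size.
            qty = Math.max(qty, minLotSize);
            // Split into supply orders no larger than the maximum lot size.
            List<Integer> orders = new ArrayList<Integer>();
            while (qty > 0) {
                int order = Math.min(qty, maxLotSize);
                orders.add(order);
                qty -= order;
            }
            return orders;
        }

        public static void main(String[] args) {
            // Reproduces the examples above: 55 -> [60], 120 -> [100, 20], 10 -> [40]
            System.out.println(planOrders(55, 20, 40, 100));
            System.out.println(planOrders(120, 20, 40, 100));
            System.out.println(planOrders(10, 20, 40, 100));
        }
    }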

  • Table size exceeds Keep Pool Size (db_keep_cache_size)

    Hello,
    We have a situation where one of our applications started performing badly last week.
    After some analysis, it was found that this was due to a data increase in a table stored in the KEEP pool.
    After the data increase, the table size exceeded db_keep_cache_size.
    I was of the opinion that in such cases the KEEP pool will still be used, but the remaining data will be brought in as needed from the table.
    But I ran some tests and found that is not the case: if the table size exceeds db_keep_cache_size, then the KEEP pool is not used at all.
    Is my inference correct here?
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE    11.2.0.2.0      Production
    TNS for Linux: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production
    Setup
    SQL> show parameter keep                    
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 4M
    SQL>
    SQL>     
    SQL> create table t1 storage (buffer_pool keep) as select * from all_objects union all select * from all_objects;
    Table created.
    SQL> set autotrace on
    SQL>
    SQL> exec print_table('select * from user_segments where segment_name = ''T1''');
    PL/SQL procedure successfully completed.
    SQL> set serveroutput on
    SQL> exec print_table('select * from user_segments where segment_name = ''T1''');
    SEGMENT_NAME                  : T1
    PARTITION_NAME                :
    SEGMENT_TYPE                  : TABLE
    SEGMENT_SUBTYPE               : ASSM
    TABLESPACE_NAME               : HR_TBS
    BYTES                         : 16777216
    BLOCKS                        : 2048
    EXTENTS                       : 31
    INITIAL_EXTENT                : 65536
    NEXT_EXTENT                   : 1048576
    MIN_EXTENTS                   : 1
    MAX_EXTENTS                   : 2147483645
    MAX_SIZE                      : 2147483645
    RETENTION                     :
    MINRETENTION                  :
    PCT_INCREASE                  :
    FREELISTS                     :
    FREELIST_GROUPS               :
    BUFFER_POOL                   : KEEP
    FLASH_CACHE                   : DEFAULT
    CELL_FLASH_CACHE              : DEFAULT
    PL/SQL procedure successfully completed.
    DB_KEEP_CACHE_SIZE=4M
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              9  recursive calls
              0  db block gets
           2006  consistent gets
           2218  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    DB_KEEP_CACHE_SIZE=10M
    SQL> connect / as sysdba
    Connected.
    SQL>
    SQL> alter system set db_keep_cache_size=10M scope=both;
    System altered.
    SQL>
    SQL> connect hr/hr@orcl
    Connected.
    SQL>
    SQL> show parameter keep
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 12M
    SQL>
    SQL> set autotrace on
    SQL>
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    DB_KEEP_CACHE_SIZE=20M
    SQL> connect / as sysdba
    Connected.
    SQL>
    SQL> alter system set db_keep_cache_size=20M scope=both;
    System altered.
    SQL>
    SQL> connect hr/hr@orcl
    Connected.
    SQL>
    SQL> show parameter keep
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 20M
    SQL> set autotrace on
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1943  consistent gets
           1656  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1943  consistent gets
              0  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    Only with a 20M db_keep_cache_size do I see no physical reads.
    Does it mean that if the db_keep_cache_size < table size, there is no caching for that table ?
    Or am I missing something ?
    Rgds,
    Gokul

    Hello Jonathan,
    Many thanks for your response.
    Here is the test I ran;
    SQL> select buffer_pool,blocks from dba_tables where owner = 'HR' and table_name = 'T1';
    BUFFER_     BLOCKS
    KEEP          1977
    SQL> select count(*) from v$bh where objd = (select data_object_id from dba_objects where owner = 'HR' and object_name = 'T1');
      COUNT(*)
          1939
    SQL> show parameter db_keep_cache_size
    NAME                                 TYPE        VALUE
    db_keep_cache_size                   big integer 20M
    SQL>
    SQL> alter system set db_keep_cache_size = 5M scope=both;
    System altered.
    SQL> select count(*) from hr.t1;
      COUNT(*)
        135496
    SQL> select count(*) from v$bh where objd = (select data_object_id from dba_objects where owner = 'HR' and object_name = 'T1');
      COUNT(*)
           992
    I think my inference is wrong and, as you said, I am indeed seeing the effect of the tail end of the table flushing out the start of the table.
    Rgds,
    Gokul

  • "Content generation error. The article exceeds maximum file size limit"

    We are building a single edition app for our university's literary and arts magazine and have no problem previewing it through the Adobe viewer app, but when we try to create a folio so we can begin the App Store process, we keep getting this pesky error. It refuses to export a folio.
    The links folder associated with the indesign file totals 474mb.
    The folder containing all the Indesign files associated with the app is 537mb.
    So how are we exceeding the 1gb file size?
    Details:
    115 pages within a single article (we converted from the print edition, rather than placing each piece into an article of its own. Is that a problem?).
    All images (about 50) are PNG files of under 1 MB, and the videos have been exported to MP4 (370 MB of video in total; do we need to switch to streaming?)
    Every page has a button linking back to the home screen, and at least one, and sometimes two, MSOs (to full-screen an image or to pop up the author's bio).
    3 mp3 audio pieces are insignificant in size.
    We removed extraneous states from the MSOs.
    Our PNG files are sized to a maximum width of 2048px.
    The .indd file itself is 58mb
    No HTML overlays, but we do have 5 or 6 hyperlinks that launch the browser.
    Any guesses why we are unable to generate a folio with this article?
    Is there a way to audit our project to see what the problem is?
    We've scoured these forums and applied the advice that seemed pertinent, so your patience and recommendations would be a great help.

    Thanks to both of you for your replies.
    We watched Adobe's tutorial videos and read the guides on Adobe's site before starting our project (a first app for our lit-mag), and never found the kind of overview that would have made the built-in functionality of the Folio system evident. Are there beginner videos you recommend we screen for the editorial board next year? It has been a great learning process for us this summer, but one involving many errors along the way.
    It seems like it won't be difficult to break up the journal into "articles," but in order to make the TOC work correctly it really is a one piece to a page thing (none of our pieces extends beyond one screen because we used scrollable overlays). Do you think we'll run into trouble with 100+ articles in our folio?
    Thanks again for taking the time to respond!
    --Clamor

  • Outlook for MAC 2011 - Error Code 1026 - Specified string literal size exceeds maximum supported string literal size.

    How do I resolve this?  Several people have asked this same question, but there is no answer.

    Hi,
    This forum provides support for Office for Windows. Since your question is about Office for Mac, please refer to the Office for Mac forum:
    http://answers.microsoft.com/en-us/mac
    The reason we recommend posting in the appropriate forum is that you will get the most qualified pool of respondents, and other partners who read the forums regularly can either share their knowledge or learn from your interaction with us. Thank you for your understanding.
    Regards,
    Melon Chen
    TechNet Community Support

  • How do you upload playlists when your library size exceeds the ipod size?

    So I recently loaded more music onto my computer than my iPod can hold. iTunes created a new playlist, which the iPod is now using as the "library" that is uploaded to the iPod. That is all fine and dandy, but now my other playlists are no longer being uploaded to the iPod. How do I create playlists within a playlist?

    Quit using the playlist that iTunes created. Tell your iPod options to only update selected playlists. Then check off your favorite playlists you created.
    Your iTunes Library is Bigger Than Your iPod HD

  • Forward to gmail/hotmail Event ID 3030 552 5.7.0 Number of Received: DATA headers exceeds maximum permitted

    A user wants me to forward his Exchange 2003 recipient’s email to his Gmail account. 
    In Active Directory Users and Computers I created a Contact and then in the recipient “Delivery Options” Forward to: I put that contact.
    He receives all the forwards of email created internally, but only some email that comes from external domains makes it to Gmail. 
    Most (but not all) external email, when forwarded to his Gmail account by Exchange 2003, creates an NDR (non-delivery report) saying "DATA headers exceeds maximum permitted" (shown below). I'm pretty sure this message is generated by Exchange 2003 because it is the same whether the external, forwarded-to account is Gmail or Hotmail.
    Any ideas how to solve this?
    -------Error Msg in Outlook----------
    John Smith on 12/12/2014 5:26 PM
                The recipient could not be processed because it would violate the security policy in force
               <ourdomain.com #5.7.0 smtp;552 5.7.0 Number of 'Received:' DATA headers exceeds maximum permitted>
    “ourdomain.com”, above, is the name of our email domain.
    -------- Event Viewer Error-----------
    Event Type: Error
    Event Source: MSExchangeTransport
    Event Category: NDR 
    Event ID: 3030
    Date: 12/12/2014
    Time: 5:08:58 PM
    User: N/A
    Computer: WIN2K3
    Description:
    A non-delivery report with a status code of 5.7.0 was generated for recipient rfc822;[email protected] (Message-ID  <001301d01671$54abb8c0$fe032a40$@com>).

    Hi,
    Based on the error "Number of 'Received:' DATA headers exceeds maximum permitted", the message header size could be exceeding the message header size limit.
    Message header size limits: these limits apply to the total size of all message header fields that are present in a message. The size of the message body or attachments isn't considered. Because the header fields
    are plain text, the size of the header is determined by the number of characters in each header field and by the total number of header fields. Each character of text consumes 1 byte.
    So, please check the message header size limit setting on the receive connector with the following cmdlet:
    Get-ReceiveConnector "Connector name" | FL MaxHeaderSize
    Then check the problematic message's headers and compare them against the limit. If the message headers exceed the message header size limit, we can use the following cmdlet to change the maximum header size:
    Set-ReceiveConnector "Connector Name" -MaxHeaderSize "value"
    The MaxHeaderSize parameter specifies in bytes the maximum size of the SMTP message header that the Receive connector accepts before it closes the connection. The default value is 65536 bytes. When you enter a value, qualify the value with one of
    the following units:
    B (bytes)
    KB (kilobytes)
    MB (megabytes)
    GB (gigabytes)
    Unqualified values are treated as bytes. The valid input range for this parameter is from 1 through 2147483647 bytes.
    Note: Some third-party firewalls or proxy servers apply their own message header size limits. These third-party firewalls or proxy servers may have difficulty processing messages that contain attachment file names that are greater than 50 characters
    or attachment file names that contain non-US-ASCII characters.
    Best Regards.

  • How to Increase Instance size for ALBPM-Err:'Max instance size exceeded.

    HI,
    Can anybody help with how to increase the maximum instance size for
    1. ALBPM Studio
    2. runtime i.e. process administrator?
    Looking forward to your help.
    Cheers
    The exception in detail is:
    Error while persisting the transaction data: 'Max instance size exceeded.
    Current size is 33262, whereas the maximum size is 16384. This occurs with instance 'Process1' at activity 'StartExecution[Process1DownloadMessage]' of process '/Process1Download#Default-1.0''
    Details:
    Max instance size exceeded.
    Current size is 33262, whereas the maximum size is 16384. This occurs with instance 'Process1' at activity 'StartExecution[Process1DownloadMessage]' of process '/Process1Download#Default-1.0'
    fuego.server.exception.MaxInstanceSizeRuntimeException: Max instance size exceeded.
    Current size is 33262, whereas the maximum size is 16384. This occurs with instance 'Process1' at activity 'StartExecution[Process1DownloadMessage]' of process '/Process1Download#Default-1.0'      at fuego.server.ProcInst.getComponentData(ProcInst.java:792)      at fuego.server.ProcInst.mustStoreComponent(ProcInst.java:2777)      at fuego.server.persistence.jdbc.JdbcProcessInstancePersMgr.executeUpdateInstance(JdbcProcessInstancePersMgr.java:2870)      at fuego.server.persistence.jdbc.JdbcProcessInstancePersMgr.updateInstance(JdbcProcessInstancePersMgr.java:2272)      at fuego.server.persistence.Persistence.updateProcessInstance(Persistence.java:1008)      at fuego.server.execution.EngineExecutionContext.persistInstances(EngineExecutionContext.java:1819)      at fuego.server.execution.EngineExecutionContext.persist(EngineExecutionContext.java:1109)      at fuego.transaction.TransactionAction.beforeCompletion(TransactionAction.java:132)      at fuego.connector.ConnectorTransaction.beforeCompletion(ConnectorTransaction.java:685)      at fuego.connector.ConnectorTransaction.commit(ConnectorTransaction.java:368)      at fuego.transaction.TransactionAction.commit(TransactionAction.java:302)      at fuego.transaction.TransactionAction.startBaseTransaction(TransactionAction.java:481)      at fuego.transaction.TransactionAction.startTransaction(TransactionAction.java:551)      at fuego.transaction.TransactionAction.start(TransactionAction.java:212)      at fuego.server.execution.DefaultEngineExecution.executeImmediate(DefaultEngineExecution.java:123)      at fuego.server.execution.DefaultEngineExecution.executeAutomaticWork(DefaultEngineExecution.java:63)      at fuego.server.execution.EngineExecution.executeAutomaticWork(EngineExecution.java:42)      at fuego.server.execution.ToDoItem.executeAutomaticWork(ToDoItem.java:264)      at fuego.server.execution.ToDoItem.run(ToDoItem.java:559)      at fuego.component.ExecutionThread.processMessage(ExecutionThread.java:773)      at fuego.component.ExecutionThread.processBatch(ExecutionThread.java:753)      at fuego.component.ExecutionThread.doProcessBatch(ExecutionThread.java:142)      at fuego.component.ExecutionThread.doProcessBatch(ExecutionThread.java:134)      at fuego.fengine.ToDoQueueThread$PrincipalWrapper.processBatch(ToDoQueueThread.java:446)      at fuego.component.ExecutionThread.work(ExecutionThread.java:837)      at fuego.component.ExecutionThread.run(ExecutionThread.java:408)

    First take a look at your instance variables in your processes. Determine if some could be changed to be Separated instance variables. Once an instance variable's category changes from "Normal" to "Separated", it is not included in the instance size calculation.
    If you cannot mark variables as Separated, then in Studio's "Project Navigator" tab, right-click the name of your project -> click "Engine Preferences" -> with "Engine" selected as the Category, click the "Advanced" tab on the upper right, change the "Maximum Instance Size" to 64KB (4x the original 15KB value) and change the "Instances Cache" to 1250 (1/4 of the original value).
    What version of Enterprise are you on (Standalone or WLS)? There is a similar setting on Enterprise, but it is slightly different between the two types of Enterprise Engines.
    Dan

  • Maximum Memory Size & Memory Size

    Will a server's memory automatically increase from Memory Size to Maximum Memory Size as the VM requires it? Or is this only valid for adding memory to the Memory Size value manually?

    keithrust wrote:
    Will a server's memory automatically increase from Memory Size to Maximum Memory Size as the VM requires it? Or is this only valid for adding memory to the Memory Size value manually?
    No, there is no automatic increase in memory. The max memory size just specifies the maximum amount to which you can manually adjust the memory of a running VM. I've now taken to setting the max memory size to the total available memory on my Oracle VM Servers, with memory size set to something reasonable. This way, I can adjust right up to the limit of physical memory without rebooting.

  • Maximum DB Size?

    Mark, hello;
    can you please provide some kind of formula to estimate BDB JE limits?
    maximum database size?
    maximum number or records?
    maximum record size?
    maximum key size?
    thank you!
    Andrei.

    Hi,
    I'm trying to read between the lines and have concluded that you're using key ranges instead of databases, because you can't create enough databases in total over the lifetime of your app. Correct?
    Would support for 2^63 databases solve your problem? Not promising anything, just curious.
    An optimized range deletion is a nice thing, and we should probably do it in the future. But because of JE's architecture I don't think it will ever be nearly as fast as a Database removal or truncation, which is already optimized.
    --mark

  • Credit receipt exceeds Maximum amount allowed

    Hello,
    We were recently requested to update one of our expense types' "Default/Max. value", and also to have the "Amount type" set to "Error message for exceeding maximum".  We have now come across a problem where credit receipts that are uploaded to our system are above the maximum amount allowed and receive the error message. These receipts cannot be itemized with a personal expense down to the maximum because of the error message, so the users are having to enter these receipts manually.
    Is there a way to keep the error message for the maximum value and be able to itemize the receipt even if the receipt is above the limit?
    - Edward Edge

    Hi,
    No, you cannot.
    The only way is to change the system error message to a warning and proceed.

  • Fixed lot size and Minimum lot size

    Dear senior,
    My client has the following scenario. His supplier provides the item in a fixed lot size of 100. The minimum lot size to be ordered is 300.
    The supplier is a fixed vendor.
    I tried to maintain this information in the Material Master MRP views. Lot size  - FX, Fixed lot size - 100, Minimum lot size - 300.
    However, every time, the minimum lot size gets reset.
    Is there any way , I can maintain this scenario via configuration or any other method?
    Thank you.

    Hi,
    Fixed lot size - 100 means that for every 100 quantity the system will create one purchase document.
    If you selected Lot size - FX - Fixed order quantity, then there is no need to maintain the Minimum lot size & Maximum lot size fields.
    Then you have to maintain the Fixed lot size field only.
    Ex: 
    Lot size = FX
    Fixed lot size = 100
    There is a requirement of 300. Then system will create 3 PO's with quantity of 100 each.
    If the Lot size is EX - Lot-for-lot order quantity, then you only need to maintain the Minimum lot size & Maximum lot size fields in the MRP 1 view.
    Ex: Lot size = EX
    Minimum lot size = 60
    Maximum lot size = 100
    There is a requirement of 250. Then the system will create 3 purchase documents, two with a quantity of 100 and the third with a quantity of 60. (See the sketch after this reply.)
    That's why, in your case, the system is resetting the Minimum lot size every time.
    If you want to maintain a Minimum lot size, then maintain
    Lot size as EX
    Minimum lot size = 100
    Max lot size = 300
    It will solve your problem.
    Regards
    KRK
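    For illustration only, here is a rough sketch (plain Java, not SAP code; the names are made up) of the two lot-size procedures described in the reply above: FX keeps creating orders of the fixed lot size until the requirement is covered, while EX (lot-for-lot) splits the requirement into orders no larger than the maximum lot size and raises the final order to the minimum lot size if needed.
    import java.util.ArrayList;
    import java.util.List;

    public class LotSizeProcedures {

        // Lot size FX: fixed order quantity.
        static List<Integer> fixedLot(int requirement, int fixedLotSize) {
            List<Integer> orders = new ArrayList<Integer>();
            int covered = 0;
            while (covered < requirement) {
                orders.add(fixedLotSize);
                covered += fixedLotSize;
            }
            return orders;
        }

        // Lot size EX: lot-for-lot with minimum and maximum lot size.
        static List<Integer> lotForLot(int requirement, int minLotSize, int maxLotSize) {
            List<Integer> orders = new ArrayList<Integer>();
            int remaining = requirement;
            while (remaining > 0) {
                int qty = Math.min(remaining, maxLotSize);
                qty = Math.max(qty, minLotSize);   // the last order is raised to the minimum
                orders.add(qty);
                remaining -= qty;
            }
            return orders;
        }

        public static void main(String[] args) {
            // The reply's examples: FX 100, requirement 300 -> [100, 100, 100]
            System.out.println(fixedLot(300, 100));
            // EX with min 60 / max 100, requirement 250 -> [100, 100, 60]
            System.out.println(lotForLot(250, 60, 100));
        }
    }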

  • ORA-29279:SMTP permanent error:552 5.3.4 Message size exceeds fixed maximum

    Hi,
    I can send attachments smaller than 25MB.
    But my attachment is 27MB and it fails. Don't ask why I need to send this big file by email, I have to.
    Where can I change this maximum size?
    - any parameter in utl_smtp?
    - any parameter on the server where the DB is located? (Linux 2.6.18-238.1.1.el5)
    - any parameter on the SMTP server machine?
    Thanks.
    Joaquín González
    Edited by: Joaquin Gonzalez on Apr 12, 2012 11:46 AM

    The actual error, "552 5.3.4 Message size exceeds fixed maximum", is from the SMTP server. Oracle expects a success return code and instead gets a 552 return code in the SMTP server's response (to the DATA command). So Oracle throws an exception that in turn contains the error response of the SMTP server.
    Interesting that you did not get the error earlier - many SMTP servers have the ceiling set a lot lower and you can expect this error after sending around 5MB of data.
    There are ways around this. Send the attachment as multiple fragments using a compression technology like zip or rar (where it is fairly easy for the recipient to reconstruct the source from the pieces sent); see the sketch below.
    There's also another problem. The 20+ MB e-mail may be accepted by the SMTP server, but the POP3, IMAP (or Exchange?) account (mail drop) may not accept an e-mail of that size.
    Success also assumes that you are delivering that mail directly to the correct/target SMTP server. If not, then that e-mail needs to be relayed via a number of intermediate SMTP servers before reaching the target SMTP server and domain.
    And there is a good chance that one of these relay servers will object to the size of the e-mail and trigger a hard bounce response.
    Keeping an attachment size down to at most 3 to 5 MB is IMO the better approach, as that guarantees a better chance of delivery than a 20+ MB e-mail.
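    Purely as an illustration of the fragment idea above (a sketch in Java rather than PL/SQL; the class name, file naming and the 5 MB chunk size are arbitrary): split the large file into pieces of at most 5 MB, attach each piece to its own e-mail, and let the recipient concatenate the pieces to rebuild the original.
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class AttachmentSplitter {

        static final int CHUNK_SIZE = 5 * 1024 * 1024;  // at most 5 MB per fragment

        public static void main(String[] args) throws IOException {
            Path source = Paths.get(args[0]);            // e.g. the 27 MB attachment
            byte[] buffer = new byte[CHUNK_SIZE];
            int part = 1;
            try (InputStream in = Files.newInputStream(source)) {
                int read;
                while ((read = in.read(buffer)) > 0) {
                    // Write one fragment: source.part1, source.part2, ...
                    Path fragment = Paths.get(source.toString() + ".part" + part++);
                    try (OutputStream out = Files.newOutputStream(fragment)) {
                        out.write(buffer, 0, read);
                    }
                }
            }
            // The recipient concatenates the .partN files in order to rebuild the file.
        }
    }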
