WCI 10.3 migration package size limit

Hi,
I'm trying to migrate content from an old WCI server to a new one (Windows 2003 → Windows 2008), and some packages cannot be imported through the portal interface.
Is there a way to increase the size limit to allow .pte files bigger than 20 MB?
I'm aware of the command-line tool, but there seems to be no way to exclude unwanted content with it.
Regards,
Johanna

Hi,
I'm trying to import a certain user group with its dependencies, and the package is about 30 MB.
It is one group containing about four other groups plus dynamic members.
When I try to import it into the new environment, nothing happens, and after a while the error page is displayed.
The logging spy shows two errors:
Unable to load request data.
com.plumtree.openfoundation.io.XPIOException: Posted content length of 29289005 exceeds limit of 20971520
    at com.plumtree.openfoundation.util.XPException.GetInstance(XPException.java:385)
    at com.plumtree.openfoundation.util.XPException.GetInstance(XPException.java:350)
    at com.plumtree.openfoundation.web.XPRequest.ParseRequest(XPRequest.java:935)
    at com.plumtree.uiinfrastructure.interpreter.Interpreter.HandleRequest(Interpreter.java:264)
    at com.plumtree.uiinfrastructure.interpreter.Interpreter.DoService(Interpreter.java:190)
    at com.plumtree.uiinfrastructure.web.XPPage.service(XPPage.java:300)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
    at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(Unknown Source)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(Unknown Source)
    at weblogic.servlet.internal.WebAppServletContext.securedExecute(Unknown Source)
    at weblogic.servlet.internal.WebAppServletContext.execute(Unknown Source)
    at weblogic.servlet.internal.ServletRequestImpl.run(Unknown Source)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
Caused by: java.io.IOException: Posted content length of 29289005 exceeds limit of 20971520
    at com.oreilly.servlet.multipart.MultipartParser.<init>(MultipartParser.java:172)
    at com.plumtree.openfoundation.web.XPRequest.ParseRequest(XPRequest.java:782)
    at com.plumtree.uiinfrastructure.interpreter.Interpreter.HandleRequest(Interpreter.java:264)
    at com.plumtree.uiinfrastructure.interpreter.Interpreter.DoService(Interpreter.java:191)
    at com.plumtree.uiinfrastructure.web.XPPage.service(XPPage.java:300)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:821)
    at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
    at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:176)
    ... 8 more
Server error.
com.plumtree.openfoundation.io.XPIOException: Posted content length of 29289005 exceeds limit of 20971520
    at com.plumtree.openfoundation.util.XPException.GetInstance(XPException.java:385)
    at com.plumtree.openfoundation.util.XPException.GetInstance(XPException.java:350)
    at com.plumtree.openfoundation.web.XPRequest.ParseRequest(XPRequest.java:935)
    at com.plumtree.uiinfrastructure.interpreter.Interpreter.HandleRequest(Interpreter.java:264)
    at com.plumtree.uiinfrastructure.interpreter.Interpreter.DoService(Interpreter.java:190)
    at com.plumtree.uiinfrastructure.web.XPPage.service(XPPage.java:300)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
    at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(Unknown Source)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(Unknown Source)
    at weblogic.servlet.internal.WebAppServletContext.securedExecute(Unknown Source)
    at weblogic.servlet.internal.WebAppServletContext.execute(Unknown Source)
    at weblogic.servlet.internal.ServletRequestImpl.run(Unknown Source)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
Caused by: java.io.IOException: Posted content length of 29289005 exceeds limit of 20971520
    at com.oreilly.servlet.multipart.MultipartParser.<init>(MultipartParser.java:172)
    at com.plumtree.openfoundation.web.XPRequest.ParseRequest(XPRequest.java:782)
    at com.plumtree.uiinfrastructure.interpreter.Interpreter.HandleRequest(Interpreter.java:264)
    at com.plumtree.uiinfrastructure.interpreter.Interpreter.DoService(Interpreter.java:191)
    at com.plumtree.uiinfrastructure.web.XPPage.service(XPPage.java:300)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:821)
    at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
    at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:176)
    ... 8 more
Google gives a lot of results, but nothing for this product or its older versions.
Regards,
Johanna
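For orientation, the numbers in the trace are raw byte counts. A small Python sketch of the comparison the multipart parser is making (both figures are copied from the log above; 20971520 bytes is exactly 20 MiB):

```python
# Figures copied from the log above: the parser compares the posted
# byte count against a hard-coded 20 MiB cap.
posted = 29_289_005   # "Posted content length" from the exception
limit = 20_971_520    # 20 * 1024 * 1024

def to_mib(n: int) -> float:
    """Convert a byte count to mebibytes."""
    return n / (1024 * 1024)

print(f"posted: {to_mib(posted):.1f} MiB, limit: {to_mib(limit):.1f} MiB")
print("rejected" if posted > limit else "accepted")
```

So the 30 MB package overshoots the cap by roughly 8 MiB; this only illustrates the arithmetic, not where the limit is configured.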

Similar Messages

  • Mailbox migration - Item size limit

    I'm in the process of migrating mailboxes from Exchange 2003 to Exchange 2010. I had one mailbox so far with an item bigger than my message size policy allows, so that item did not migrate. I suspect there will be more items. Is there a command I can issue
    with the local move request EMS commands to allow items bigger than the send policy?

    http://technet.microsoft.com/en-us/library/dd351123(v=exchg.141).aspx
    The AllowLargeItem parameter specifies that the message size limit will not be enforced on the message item during the move operation.
    You don't have to specify a value with this parameter.

  • Aperture migration file size limit?

    I have been able to load a library of 19gb into Lightroom. However, I have several libraries of over 100gb and I can't load them. After Lightroom analyzes the files to be imported, it doesn't come up with the photo count. Without the count it gives an error when trying to import. Is there a maximum file size that can be successfully migrated?

    Talking about "library" size, or the total size of your library and images?  Several users have libraries (including images) around 1TB and seem to do fine, in many cases.  I'd guess the number of images would be the first problem, not total library/image size.
    Make a backup of your library, then run a permissions and library repair.  Make sure permissions are set correctly on your library.  If you're unsure about how to do this, put your library on an external volume, "get info", and "ignore ownership on this volume".  This was the cause of failure for two other forum users.

  • Hitting the 32k size limit with Keyword Expansion in packages

    Hi!
    I am hitting the 32k size limit with Keyword Expansion (KE). It is hardcoded in the procedure jr_keyword.analyze_mlt.
    Are there any plans to get rid of this limit, so package bodies with size > 32000 bytes can be expanded?

    Well, I am making progress. With a combination of utl_tcp.get_line() (to trap the header) and utl_tcp.get_text() (to get the data in 32K chunks), which I then put into a CLOB or a VARRAY of VARCHAR2, I can get as much data as is sent.
    The problem now is speed.
    It takes over 60 seconds to get 160K (five 32K chunks) of data when I put it into the VARRAY of VARCHAR2, and it takes even longer if I use dbms_lob.write() and dbms_lob.writeappend() to store the data.
    Am I doing something wrong? Is there another way?
    Thank You for any Help.
    Shannon
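    One common cause of that kind of slowness is growing the target value once per chunk. A language-neutral illustration of the buffer-then-join pattern, sketched here in Python for clarity (one possible PL/SQL analogue would be collecting pieces before a single dbms_lob write):

```python
import io

CHUNK = 32 * 1024  # 32K, matching the utl_tcp chunk size above

def read_all(stream) -> bytes:
    """Read a stream in fixed-size chunks, buffering and joining once."""
    parts = []
    while True:
        chunk = stream.read(CHUNK)
        if not chunk:
            break
        parts.append(chunk)
    # One join at the end instead of repeated concatenation per chunk.
    return b"".join(parts)

data = read_all(io.BytesIO(b"x" * (160 * 1024)))  # 160K = five 32K chunks
print(len(data))  # 163840
```

    The point is the single join at the end; repeatedly appending to a growing buffer does quadratic work.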

  • Mailbox migration size(data copied) vs mailbox size limit granted to user

    Hi,
    I have an odd question. I am migrating a user from Exchange 2010 to 2013, and I can see that the statistics of the user are as follows:
    The catch is that the user is restricted to a 2GB mailbox size limit. So where did the extra 10GB come from?

    Hi,
    The migrated data contains two large things I can think of: personal archive mailboxes and corrupted items.
    Corrupted items are the most likely cause of exceeding 2 GB.
    We can check this after the migration completes.
    Mailbox moves in Exchange 2013
    https://technet.microsoft.com/en-us/library/jj150543%28v=exchg.150%29.aspx?f=255&MSPPError=-2147217396
    Best Regards.

  • Size limit for how much video can be included in an AIR file?

    I'm making an AIR app to play a video locally.  It is on a PC with no internet connection, so the video (f4v) is included with the AIR install file.  Is there a size limit for how much video can be included in an AIR file? I seem to get the "...file is damaged..." error when I include a video over 200mb. I use Flash CS4 to create it, and AIR is v1.53.

    I'm authoring in CS4 on a Mac with the latest OS and 4GB ram.  It plays fine on the Mac.  On a WIN7 PC with all updates and 4GB ram, I get the errors if the AIR file contains a video file over 200mb.  I heard that AIR 2 should solve the issue and that it is just a PC issue. Maybe.  I have not tried using ADT to package the file.

  • Messaging server size limit

    Hello
    As I see it, this is a common problem among Sun Messaging Server administrators. I have the whole system distributed across several virtual Solaris machines, and some days ago a message size problem emerged. I noticed it in relation to incoming mail. When I create a new user, he or she can't receive mail larger than 300K. The sender gets the well-known message:
    This message is larger than the current system limit or the recipient's mailbox is full. Create a shorter message body or remove attachments and try sending it again.
    <server.domain.com #5.3.4 smtp;552 5.3.4 a message size of 302 kilobytes exceeds the size limit of 300 kilobytes computed for this transaction>
    The interesting thing is that this problem arose with no correlation to other actions. I had noticed this problem with new users before, but I could successfully manage it with different service packs. Now, with new users, this method doesn't work! Old users normally receive messages bigger than 300k, as before.
    I tried to set the default setting blocklimit 2000 in imta.cnf, but I didn't succeed.
    I know that the size limit can be set in different places, but is there a simple way to make the sending and receiving message size unlimited?
    Messaging server version is:
    Sun Java(tm) System Messaging Server 7u2-7.02 64bit (built Apr 16 2009)
    libimta.so 7u2-7.02 64bit (built 03:03:02, Apr 16 2009)
    Using /opt/sun/comms/messaging64/config/imta.cnf (compiled)
    SunOS mailstore 5.10 Generic_138888-01 sun4v sparc SUNW,SPARC-Enterprise-T5120
    Regards
    Matej

    For the sake of correctness, the attribute name in LDAP is mailMsgMaxBlocks.
    I also stumbled upon this - the values like 300 blocks or 7000 blocks are set in (sample) service packages but are not advertised in Delegated Admin web-interface. When packages are assigned, these values are copied into each user's LDAP entry as well, and can not be seen or changed in web-interface.
    And then mail users get "weird" errors like:
    550 5.2.3 user limit of 7000 kilobytes on message size exceeded: [email protected]
    or
    550 5.2.3 user limit of 300 kilobytes on message size exceeded: [email protected]
    resulting in
    <[email protected]>... User unknown
    or
    552 5.3.4 a message size of 7003 kilobytes exceeds the size limit of 7000 kilobytes computed for this transaction
    or
    552 5.3.4 a message size of 302 kilobytes exceeds the size limit of 300 kilobytes computed for this transaction
    resulting in
    Service unavailable
    I guess there are other similar error messages, but these two are most common.
    I hope other people googling up the problem would get to this post too ;)
    One solution is to replace the predefined service packages with several of your own, i.e. ldapadd entries like these (fix the dc=domain,dc=com part to suit your deployment, and both cn parts if you rename them), and restart the DA webcontainer:
    dn: cn=Mail-Calendar - Unlimited,o=mailcalendaruser,o=cosTemplates,dc=domain,dc=com
    cn: Mail-Calendar - Unlimited
    daservicetype: calendar user
    daservicetype: mail user
    mailallowedserviceaccess: imaps:ALL$pops:ALL$+smtps:ALL$+https:ALL$+pop:ALL$+imap:ALL$+smtp:ALL$+http:ALL
    mailmsgmaxblocks: 20480
    mailmsgquota: -1
    mailquota: -1
    objectclass: top
    objectclass: LDAPsubentry
    objectclass: costemplate
    objectclass: extensibleobject
    dn: cn=Mail-Calendar - 100M,o=mailcalendaruser,o=cosTemplates,dc=domain,dc=com
    cn: Mail-Calendar - 100M
    daservicetype: calendar user
    daservicetype: mail user
    mailallowedserviceaccess: imaps:ALL$pops:ALL$+smtps:ALL$+https:ALL$+pop:ALL$+imap:ALL$+smtp:ALL$+http:ALL
    mailmsgmaxblocks: 20480
    mailmsgquota: 10000
    mailquota: 104857600
    objectclass: top
    objectclass: LDAPsubentry
    objectclass: costemplate
    objectclass: extensibleobject
    dn: cn=Mail-Calendar - 500M,o=mailcalendaruser,o=cosTemplates,dc=domain,dc=com
    cn: Mail-Calendar - 500M
    daservicetype: calendar user
    daservicetype: mail user
    mailallowedserviceaccess: imaps:ALL$pops:ALL$+smtps:ALL$+https:ALL$+pop:ALL$+imap:ALL$+smtp:ALL$+http:ALL
    mailmsgmaxblocks: 20480
    mailmsgquota: 10000
    mailquota: 524288000
    objectclass: top
    objectclass: LDAPsubentry
    objectclass: costemplate
    objectclass: extensibleobject
    See also limits in config files -
    * msg.conf (in bytes):
    service.http.maxmessagesize = 20480000
    service.http.maxpostsize = 20480000
    and
    * imta.cnf (in 1k blocks): <channel block definition> ... maxblocks 20000 blocklimit 20000 sourceblocklimit 20000
    i.e.:
    tcp_local smtp mx single_sys remotehost inner switchchannel identnonenumeric subdirs 20 maxjobs 2 pool SMTP_POOL maytlsserver maysaslserver saslswitchchannel tcp_auth missingrecipientpolicy 0 loopcheck slave_debug sourcespamfilter2optin virus destinationspamfilter2optin virus maxblocks 20000 blocklimit 20000 sourceblocklimit 20000 daemon outwardrelay.domain.com
    tcp_intranet smtp mx single_sys subdirs 20 dequeue_removeroute maxjobs 7 pool SMTP_POOL maytlsserver allowswitchchannel saslswitchchannel tcp_auth missingrecipientpolicy 4 maxblocks 20000 blocklimit 20000 sourceblocklimit 20000
    tcp_submit submit smtp mx single_sys mustsaslserver maytlsserver missingrecipientpolicy 4 slave_debug maxblocks 20000 blocklimit 20000 sourceblocklimit 20000
    tcp_auth smtp mx single_sys mustsaslserver missingrecipientpolicy 4 maxblocks 20000 blocklimit 20000 sourceblocklimit 20000
    If your deployment uses other SMTP components, like milters to check for viruses and spam, in/out relays separate from Sun Messaging, other mailbox servers, etc. make sure to use a common size limit.
    For sendmail relays, in the sendmail.mc (m4) config source file, it could mean lines like these:
    define(`SMTP_MAILER_MAX', `20480000')dnl
    define(`confMAX_MESSAGE_SIZE', `20480000')dnl
    HTH,
    //Jim Klimov
    PS: Great thanks to Shane Hjorth who originally helped me to figure all of this out! ;)
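    One detail worth adding to the list above: the MTA limits (imta.cnf keywords, mailMsgMaxBlocks) are counted in blocks, where a block is 1024 bytes unless BLOCK_SIZE is changed in option.dat, while the msg.conf values are plain bytes. A small Python check that the quoted settings line up (a sketch, assuming the default block size):

```python
BLOCK_SIZE = 1024  # MTA default; overridable via BLOCK_SIZE in option.dat

def blocks_to_bytes(blocks: int) -> int:
    """Convert MTA blocks (imta.cnf keywords, mailMsgMaxBlocks) to bytes."""
    return blocks * BLOCK_SIZE

print(blocks_to_bytes(20000))  # imta.cnf blocklimit 20000 -> 20480000 bytes
print(blocks_to_bytes(20480))  # mailmsgmaxblocks 20480   -> 20971520 bytes
# msg.conf maxpostsize is already in bytes: 20480000, i.e. exactly 20000 blocks
```

    So the imta.cnf channel limits of 20000 blocks match maxpostsize exactly, while the 20480-block LDAP template is slightly more generous.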

  • Mailbox database admission control by maximum mailboxdatabase size limit

    I would like to have a feature which allows setting a maximum size limit on a mailbox database. This way we can keep the mailbox databases around a certain size. Existing mailboxes within the mailbox database can grow according to their sizing
    limits, but no new mailboxes can be added or moved to it. In combination with the new automatic distribution feature, you can migrate more efficiently and keep maintenance and restore times low.

    Hello,
    According to your description, I understand that you want to set a maximum size limit on a mailbox database. Your purpose is that existing mailbox sizes within the mailbox database can grow, but new mailboxes can't be added or
    moved to it? If so, I'm afraid that there is no way to set a maximum size limit on a mailbox database in Exchange 2013 server. But we can control automatic mailbox distribution using database scopes.
    Here is an article for your reference.
    http://technet.microsoft.com/en-us/library/ff628332(v=exchg.150).aspx
    If I have any misunderstanding, please feel free to let me know.
    Cara Chen
    TechNet Community Support

  • PACKAGE SIZE n in SELECT query

    Hi,
    When using PACKAGE SIZE n option with SELECT queries, how to determine the best/optimum value of n ? Especially when we use this for querying tables like EKPO, EKKO etc.
    Regards,
    Anand.

    > When using PACKAGE SIZE n option with SELECT queries, how to determine the best/optimum value of n ?
    The 'package size' option of the SELECT specifies how many rows are returned in one chunk.
    According to the ABAP documentation, it is best to use it with an internal table:
    DATA: itab TYPE STANDARD TABLE OF scarr WITH NON-UNIQUE DEFAULT KEY INITIAL SIZE 10.
    FIELD-SYMBOLS: <fs> TYPE scarr.
    SELECT * INTO TABLE itab PACKAGE SIZE 20 FROM scarr.
      LOOP AT itab ASSIGNING <fs>.
        WRITE: / <fs>-carrid, <fs>-carrname.
      ENDLOOP.
    ENDSELECT.
    But, basically, your application's requirements determine what's the best value for n.
    If you don't want a lot of DB access, you choose a high value for n. If you don't want a lot of data in memory, you adjust it to a lower value.
    You can also use the 'up to n rows' construct in the select to limit the number of rows fetched from the db.
    thomas
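    For readers coming from other stacks, the same trade-off shows up in any cursor API. A rough Python analogue of PACKAGE SIZE using sqlite3's fetchmany (a sketch; the table and row counts are made up):

```python
import sqlite3

# fetchmany(n) returns rows in chunks of n, trading database round-trips
# against memory, just like PACKAGE SIZE n in the ABAP example above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scarr (carrid TEXT, carrname TEXT)")
conn.executemany("INSERT INTO scarr VALUES (?, ?)",
                 [(f"C{i}", f"Carrier {i}") for i in range(45)])

cur = conn.execute("SELECT carrid, carrname FROM scarr")
packages = 0
while True:
    rows = cur.fetchmany(20)   # the "PACKAGE SIZE 20" of this sketch
    if not rows:
        break
    packages += 1
    # process one package here, like the LOOP AT itab above
print(packages)  # 45 rows in chunks of 20 -> 3 packages
```

    A bigger chunk means fewer round-trips but more rows held in memory at once, which is exactly the tuning question asked above.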

  • Exchange 2010 SP1 database size limit?

    I have seen several posts concerning the Exchange 2010 Standard database limit being 50GB but that it can be increased by modifying the registry. We were migrated from Exchange 2003 to Exchange 2010 by a vendor. My current database is 188.1GB. I searched the registry
    for the changes to increase the database from the 50GB limit, but it does not look like they made the changes. What has me baffled is how my database can be 188.1 GB if they did not modify the registry?
    Thank you,
    Cecil

    So you only use the Database Size Limit in GB registry value if you want to hard-set the size? From some of the posts I had seen, they were saying it was 50 GB by default and could be expanded to 1TB by modifying the registry. I did not understand
    why the instructions included modifying the registry if it was not needed. I am by no means trying to dispute it, just trying to understand.
    Thanks for the reply.
    Cecil

  • 3d files/ size limit in CS5?

    Anyone know if there's a 3d file quantity/size limit within a Photoshop document? What would any limit depend on, e.g. VRAM?
    Running Photoshop 64bit CS5 Extended, all updates, on a dual Xeon, 12gb ram, 64bit Win 7, NVidia Quadro FX 3800 (1gb), Raptor scratch disk with 50gb space, used as a dedicated scratch disk. PS settings are set to allocate over 9gb ram to PS, GPU OpenGL settings enabled and set to Normal, 3d settings allocate 100% VRAM (990mb), and rendering set to OpenGL. You'd expect this to perform admirably and handle most tasks.
    Background:
    Creating a PSD website design file with 3 x 3d files embedded, one 'video' animation file linked, and a few smart objects (photos); the rest is shapes and text with a few masks etc. Nothing unusual other than maybe the video and 3d files. The file size is 500mb, which isn't unusual, as I've worked on several 800mb files at the same time, all open in the same workspace. The PC handles that without any problems.
    Introducing the 3d files and video seems to have hit an error or a limit of some sort, but I can't seem to pinpoint what's causing it or how to resolve it.
    Problem:
    I have the one 500mb file I've been working on, open. I try to open any ONE file or create a new one and I get the following error: "Could not complete the command because too many files were selected for opening at once". I've tried with 3d files, other PSD files, JPEGs, anything that can be opened in PS. All with the same message. Only one PSD file open, only trying to open one more file or create a new file from scratch.
    I've also had a similar error: "Could not complete your request because there are too many files open. Try closing some windows & try again". I have re-booted and, with only PS open, still get the same errors.
    I tried removing the video file and saving a copy. That doesn't work. I removed some of the 3d files and saved a copy, and then it sometimes allows me to open more files. I tried leaving the 3d files in and reducing lighting (no textures anyway) and rendering without ray tracing, still no effect. Rasterising the files allowed more files to be opened. I'm working across a network, so I tried using local files, which made no difference. The only thing that seems to make a difference is removing or rasterising some of the 3d files.
    Has anyone had similar problems with what seems to be a limit either on the quantity of 3d files, or maybe a complexity limit, or something else to do with 3d file limits? Anyone know of upgrades that might help? I've checked free ram and that's at 7gb, using about a 10gb swap file. I've opened 5 documents at the same time of over 700mb each, and it's not caused problems, so I can only think the limit is with the GPU with regards to 3d. Can't get that any higher than 990mb, which I'd assume would be enough anyway if that was the problem. I've played about with preferences to adjust the 3d settings lower, but no use.
    Anyone any idea what's limiting it and causing it to give the error message above? Is it even a PS5 limit or a Win 7 64bit limit?
    Any ideas greatly appreciated. Thanks all.

    Thanks for your comments Mylenium. I originally thought it might be VRAM, but at 1gb (still quite an acceptable size from what I can tell; I'd expect it to handle more than 3 x 3d files) I originally dismissed it, as the complexity of the files seemed quite low for that to be the cause. I'm still not completely convinced it's the VRAM, though, because of the error message it gives, and I have tried it with more complex 3d models, using more of them, and it works fine on those. It seems odd that it won't let me create a new document either. I'd like to get a 6gb card, but that's a bit out of the budget range at the moment.
    Do you know of a way to "optimise" 3d files so they take up less VRAM, for example reducing any unwanted textures, materials, vertices or faces within PS, in a similar fashion to how Illustrator can reduce the complexity/number of shapes/points etc.? I can't ask the client, as they don't have the time, or I'd do this. Does rendering quality make a difference, or changing to a smart object? It doesn't seem to, from what I've tried.
    Re: using a dedicated 3d program, I'd be a bit reluctant to lose the ability to rotate/edit/draw onto/light etc. objects within Photoshop now that I have a taste for it, and go back to just using 3d renderings; otherwise I'd go down the route as suggested for a dedicated 3d package. Thanks for the suggestion though.

  • Received error message that message could not be sent due to domain's size limit

    Sent messages come back saying the domain's size limit causes messages to be undeliverable. How can I fix this?

    Following your instructions, with settings the same as my ISP's, I did manage to send and receive one test message yesterday.
    Trying this morning, all attempts are complete failures, with requests to "enter new password", for a password which has not been changed.
    Just now I attempted to send a test message three times, receiving the same request for a new password; then, somehow forcing the issue, I received the following:
    "An error occurred sending mail: The mail server sent an incorrect greeting: pacmmta54.windstream.net pacmmta54 4.7.1 - Connection Refused - 72.168.134.151 - Too many connections."
    By the way, I have completely uninstalled and then reinstalled "Thunderbird" (using the package manager).
    An interesting note: I should have had NO original settings or retained any incoming email messages, correct? But I did, and in fact clicked on the link in your original reply. This message is not being sent from inside Thunderbird, but from the Mozilla support web page. Thank you for your assistance.
    Bob

  • FILE SIZE LIMIT WITH UTL_FILE?

    Hello all,
    is there any file size limit using utl_file on SUSE LINUX 5.3 (RDBMS 8.0.5)?
    We have created a file with UTL_FILE.FOPEN, put some lines into it (UTL_FILE.PUT_LINE), and closed the file (UTL_FILE.FCLOSE).
    The file size is never greater than 32 kB.
    On 7.3.4 the same procedure works fine with files larger than 1 MB.
    Thx
    R. Petuker

    Robert Petuker (guest) wrote:
    : Hello all,
    : is there any file size limit using utl_file on SUSE LINUX 5.3 (RDBMS 8.0.5)?
    : We have created a file with UTL_FILE.FOPEN, put some line into it (UTL_FILE.PUT_LINE), and close the file (UTL_FILE.FCLOSE).
    : The file size is never greater than 32 kB.
    : On 7.3.4 the same procedure works fine with files larger than 1 MB.
    : Thx
    : R. Petuker
    Robert,
    There is no limit on the size of the file created using the utl_file package. It could be a limitation of the shell under which you are working. You may verify the file size limit of the shell and try it again.
    Maturi
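    That suggestion can be checked directly on the database server: the per-process file size limit the shell imposes (what `ulimit -f` reports) is visible from any language. A minimal sketch in Python (Unix only):

```python
import resource  # Unix-only standard library module

# RLIMIT_FSIZE is the largest file the process may create: the same
# limit `ulimit -f` reports (there in 512-byte blocks, here in bytes).
soft, hard = resource.getrlimit(resource.RLIMIT_FSIZE)
for name, value in (("soft", soft), ("hard", hard)):
    if value == resource.RLIM_INFINITY:
        print(f"{name}: unlimited")
    else:
        print(f"{name}: {value} bytes")
```

    If the soft limit comes back as something near 32 kB, the shell, not utl_file, is the culprit.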

  • Data package size

    What is the basic difference between RSCUSTV6 and SBIW -> General Settings -> Maintain Control Parameters in relation to modifying the data package size?

    Hi,
    Just see the help on
    Maintain Control Parameters for Data Transfer:
    1. Source System
    Enter the logical system of your source client and assign the control parameters you selected to it.
    You can find further information on the source client in the source system by choosing the path
    Tools -> Administration -> Management -> Client Maintenance.
    2. Maximum Size of the Data Package
    When you transfer data into BW, the individual data records are sent in packages of variable size. You can use these parameters to control how large a typical data packet like this is.
    If no entry was maintained then the data is transferred with a default setting of 10,000 kBytes per data packet. The memory requirement not only depends on the settings of the data package, but also on the size of the transfer structure and the memory requirement of the relevant extractor.
    3. Maximum Number of Rows in a Data Package
    With large data packages, the memory requirement mainly depends on the number of data records that are transferred with this package. Using this parameter you control the maximum number of data records that the data package should contain.
    By default a maximum of 100,000 records are transferred per data package.
    The maximum main memory requirement per data package is approximately 2 × (max. rows) × 1,000 bytes.
    4. Frequency
    The specified frequency determines the number of data IDocs after which an Info IDoc is sent, i.e. how many data IDocs an Info IDoc describes.
    Frequency 1 is set by default. This means that an Info IDoc follows every data IDoc. In general, you should select a frequency between 5 and 10, but no higher than 20.
    The bigger the data IDoc packet, the lower the frequency setting should be. In this way, when you upload, you can obtain information on the respective data load in relatively short spans of time.
    With the help of every Info IDoc, you can check the BW monitor to see if there are any errors in the loading process. If there are none, then the traffic light in the monitor will be green. The Info IDocs contain information such as whether the respective data IDocs were uploaded correctly.
    5. Maximum number of parallel processes for the data transfer
    An entry in this field is only relevant from release 3.1I onwards.
    Enter a number larger than 0. The maximum number of parallel processes is set by default at 2. The ideal parameter selection depends on the configuration of the application server, which you use for transferring data.
    6. Background job target system
    Enter the name of the application server on which the extraction job is to be processed.
    To determine the name of the application server, choose
    Tools -> Administration -> Monitor -> System monitoring -> Server. The name of the application server is displayed in the column Computer.
    7. Maximum Number of Data Packages in a Delta Request
    With this parameter, you can restrict the number of data packages in a delta request or in the repetition of a delta request.
    Only use this restriction when you expect delta requests with a very high data volume, so that, despite sufficiently large data package sizes, more than 1000 data packages can result in a request.
    With an initial value or when the value is 0, there is no restriction. Only a value larger than 0 leads to a restriction in the number of data packages. For reasons of consistency, this number is not generally exactly adhered to. The actual restriction can, depending on how much the data is compressed in the qRFC queue, deviate from the given limit by up to 100.
    RSA6:
    Used to change the Datapacket Size.
    Thanks
    Reddy
    Edited by: Surendra Reddy on Mar 12, 2010 6:27 AM
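    The rule of thumb from point 2 and point 3 above can be restated as a tiny calculation (figures from the text; treat it as an estimate, not an exact accounting):

```python
def package_memory_bytes(max_rows: int) -> int:
    """Approximate main memory per data package: 2 * max rows * 1000 bytes."""
    return 2 * max_rows * 1000

default_rows = 100_000  # default maximum records per data package
mem = package_memory_bytes(default_rows)
print(mem)              # 200000000
print(mem / 1e6, "MB")  # 200.0 MB
```

    So at the defaults, each package can need on the order of 200 MB of main memory, which is why lowering the row cap is the usual first tuning step.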

  • What is the size limit of inbound container in Generic Synchronization?

    Hello Everyone,
       I am calling an RFC function module from the back end. It returns 102 entries for a particular set of input parameters when I run it there.
       But when I call it from the MI Client using Generic Synchronization, it returns 99 entries.
       Is it because of the inbound container size limit?
       Can anybody solve this problem?
       Thanks in advance.

    Hi Abhijit,
    Please refer to note 842475 for information about the package size for inbound containers.
    The maximum size a package can reach is 5MB. However, this does not prevent data from reaching the client; the remaining data will be sent in the next container.
    Regards,
    Rahul
    If this helps kindly assign me some points.
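    The 102-vs-99 behaviour is consistent with that packaging rule: the first ~5 MB container fills up, and the remainder follows in the next one. A hypothetical Python sketch (the 52,900-byte row size is invented purely to reproduce a 99/3 split; the real packing is up to the MI layer):

```python
LIMIT = 5 * 1024 * 1024  # ~5 MB inbound container cap (per note 842475)

def pack(rows, row_size, limit=LIMIT):
    """Greedily fill containers with as many fixed-size rows as fit."""
    per_container = max(1, limit // row_size)
    return [rows[i:i + per_container] for i in range(0, len(rows), per_container)]

rows = list(range(102))                    # 102 entries from the RFC
containers = pack(rows, row_size=52_900)   # hypothetical bytes per row
print([len(c) for c in containers])        # [99, 3]
```

    On this model the "missing" 3 entries aren't lost; they simply arrive in the second container.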
