NetWare 6.5 SP7 breaks FTP for large files

Has anyone had an issue with FTP on NetWare where, after
loading SP7, we can no longer FTP files larger than
4 GB? Any help would be appreciated.
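A hard stop at exactly 4 GB is the classic signature of a file size being tracked in a 32-bit field, which overflows at 2^32 bytes. Whether that is actually what SP7's FTP server does is an assumption on my part, but the arithmetic is easy to check:

```java
public class FourGigLimit {
    public static void main(String[] args) {
        long fourGiB = 1L << 32;          // 2^32 bytes = 4 GiB
        System.out.println(fourGiB);      // 4294967296

        // A size stored in a 32-bit int wraps to 0 exactly at the boundary:
        int wrapped = (int) fourGiB;
        System.out.println(wrapped);      // 0

        // One byte under the boundary still fits in 32 bits (as 0xFFFFFFFF):
        int almost = (int) (fourGiB - 1);
        System.out.println(almost);       // -1 in the signed view
    }
}
```

So any server component that stores sizes or offsets in a 32-bit integer will misbehave precisely at the 4 GB mark, which matches the symptom above.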

Mike,
It appears that in the past few days you have not received a response to your
posting. That concerns us, and has triggered this automated reply.
Has your problem been resolved? If not, you might try one of the following options:
- Visit http://support.novell.com and search the knowledgebase and/or check all
the other self support options and support programs available.
- You could also try posting your message again. Make sure it is posted in the
correct newsgroup. (http://forums.novell.com)
Be sure to read the forum FAQ about what to expect in the way of responses:
http://support.novell.com/forums/faq_general.html
If this is a reply to a duplicate posting, please ignore and accept our apologies
and rest assured we will issue a stern reprimand to our posting bot.
Good luck!
Your Novell Product Support Forums Team
http://support.novell.com/forums/

Similar Messages

  • Using XI to FTP large files

    Hi Folks,
    I have a scenario in which I need to transfer a flat file to an external system. No mapping is required, and we are not using BPM. I read Michael's comments on using Java proxies to transfer large files. I know that we can use the standard Java IO APIs to copy the file over; however, I don't know how to implement this.
    In my scenario an SAP transaction will create the file. I just need XI to pick it up and FTP it to another server. Can you point me in the right direction as to how I should go about implementing this?
    1. I assume I will still have to use the file adapter to pick up the file, right?
    2. Then I use a Java server proxy to FTP it to the target system?
    3. In order to generate the proxy I need a message interface. Should I use a dummy message interface as my inbound and outbound that points to a dummy message type?
    Can someone provide me a sample?
    Thanks,
    Birla

    Hi Nilesh,
    Thanks for the reply and the link. However, the blog doesn't solve my problem. I was asking whether XI can pick up a large file (say 200 MB) and FTP it to an external system without doing content conversion.
    I already read these blogs.
    FTP_TO_FTP
    /people/william.li/blog/2006/09/08/how-to-send-any-data-even-binary-through-xi-without-using-the-integration-repository
    /people/shabarish.vijayakumar/blog/2006/04/03/xi-in-the-role-of-a-ftp
    These blogs suggest that I can use a Java proxy to achieve better performance;
    I just don't know how to implement it.
    Any help would be much appreciated.
    Thanks,
    Birla.
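For what it's worth, the "standard Java IO APIs" approach mentioned above boils down to a buffered stream copy, which keeps memory use flat regardless of file size. A minimal sketch (the class name and the temp-file demo in `main` are mine, purely illustrative):

```java
import java.io.*;

public class StreamCopy {
    // Copy any input stream to an output stream in fixed-size chunks,
    // so a 200 MB file is never held in memory all at once.
    public static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buffer)) != -1) {
            out.write(buffer, 0, n);
            total += n;
        }
        out.flush();
        return total;
    }

    public static void main(String[] args) throws IOException {
        // Self-contained demo: create a 1 MB source file, then copy it.
        File src = File.createTempFile("src", ".dat");
        File dst = File.createTempFile("dst", ".dat");
        try (OutputStream o = new FileOutputStream(src)) {
            o.write(new byte[1024 * 1024]);
        }
        try (InputStream in = new FileInputStream(src);
             OutputStream out = new FileOutputStream(dst)) {
            System.out.println("Copied " + copy(in, out) + " bytes");
        }
    }
}
```

Inside a Java proxy the same loop would read from the source file and write to the FTP target's output stream; only the endpoints change.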

  • Breaking a large file into multiple SWFs

    Hi -
    I've created a project with about 70 slides, including a few that contain full motion video. Is there a way to publish each slide as an individual SWF? If I accomplish publishing each as an individual SWF should I use Aggregator to connect them, or should I have the end action of each slide be open the next slide?
    Thanks

    Select all and hide all the slides in the filmstrip (right-click in the filmstrip), then unhide the one you are publishing; this will result in only the unhidden slide being published. You will have to publish 70 times, but it will save you from cutting and pasting the slides into new projects.

  • FTP and HTTP large file ( 300MB) uploads failing

    We have two IronPort Web S370 proxy servers in a WCCP transparent proxy cluster.  We are experiencing problems with users who upload large video files, where the upload will not complete.  Some of the files are 2 GB in size, but most are in the hundreds-of-megabytes range.  Files smaller than a hundred MB seem to work just fine.  Some users are using the FTP proxy and some are using HTTP methods, such as YouSendIt.com. We have tried explicit proxy settings with some improvement, but it varies by situation.
    Is anyone else having problems with users uploading large files and having them fail?  If so, any advice?
    Thanks,
       Chris

    Have you got any maximum sizes set in the IronPort Data Security Policies section,
    under Web Security Manager?
    Thanks
    Chris

  • FTP Sender Adapter with EOIO for Large files

    Hi Experts,
    I am Using XI 3.0 version,
    My scenario is File -> XI -> File. I need to pick up the files from an FTP server; there are around 50 files, each 10 MB in size.
    I have to pick the files from the FTP folder in the same order as they were put into the folder, i.e. FIFO.
    So in the sender FTP communication channel I am specifying:
    QoS = EOIO
    Queue name = ACCOUNT
    Will the files be picked up in FIFO order into the queue XBTO_ACCOUNT?
    So my question is:
    what is the procedure to specify the parameters so that files are processed in FIFO order?
    And what would be the best practice to achieve this from a performance point of view?
    Thanks
    Sai.

    Hi spantaleoni,
    I want to process the files using the FTP protocol in FIFO order,
    i.e. files placed first in the folder should be picked up first, the next one after that, and so on.
    So if I use FTP,
    will the processing parameters
    QoS = EOIO
    Queue name = ACCOUNT
    process the files in FIFO order?
    And to process large files (10 MB in size), what would be the best polling interval in seconds?
    Thanks,
    Sai
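As an illustration of what FIFO pickup means at the file-system level (this is not XI configuration, just a sketch): a sequential processor wants the directory listing ordered oldest-first by modification time.

```java
import java.io.File;
import java.util.Arrays;
import java.util.Comparator;

public class FifoPickup {
    // Return the regular files of a directory ordered oldest-first, i.e.
    // the order an EOIO-style sequential processor would consume them in.
    public static File[] oldestFirst(File dir) {
        File[] files = dir.listFiles(File::isFile);
        if (files == null) return new File[0];
        Arrays.sort(files, Comparator.comparingLong(File::lastModified));
        return files;
    }

    public static void main(String[] args) {
        for (File f : oldestFirst(new File("."))) {
            System.out.println(f.lastModified() + "  " + f.getName());
        }
    }
}
```

Note that modification-time ordering only reflects arrival order if the producing system writes files strictly one after another; two files landing within the timestamp resolution can tie.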

  • Handling large files with FTP in XI

    Hi All,
    I have a file scenario where I have to post a file larger than 500 MB, with up to 700 fields in each line.
    A similar scenario worked fine when the file size was below 70 MB with fewer fields.
    Could anyone help me in handling such a scenario with a large file size, without splitting the file?
    1) From your previous experience, did you use any tools to help with the development of the FTP interfaces?
    2) The client looked at ItemField but is not willing to use it due to the licensing costs. Did you use any self-made pre-processors?
    3) Please let me know the good and bad experiences you had when using the XI File Adapter.
    Thanks & Regards,
    Raghuram Vedagiri.

    500 MB is huge. XI will not be able to handle such a huge payload, for sure.
    Are you using XI as a mere FTP transport, or are you using content conversion with mapping etc.?
    1. Either use splitting logic to split the file outside XI (using scripts) and then let XI handle the resulting files,
    2. or size your hardware (Java heap etc.) so that XI can handle this file (not recommended, though). SAP recommends a size of 5 MB as the optimum.
    Regards
    Bhavesh
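Bhavesh's first option (splitting outside XI) doesn't have to be a shell script; a plain-Java sketch that cuts a file into numbered parts of roughly a configurable size (the 5 MB figure matches the optimum he cites) might look like this. The class name and the `.partN` naming convention are my own choices:

```java
import java.io.*;

public class FileSplitter {
    // Split the input file into sequentially numbered parts
    // (name.part0, name.part1, ...) of roughly chunkBytes each
    // (a part may exceed chunkBytes by up to one buffer). Returns part count.
    public static int split(File input, long chunkBytes) throws IOException {
        byte[] buffer = new byte[64 * 1024];
        int part = 0;
        try (InputStream in = new BufferedInputStream(new FileInputStream(input))) {
            int n = in.read(buffer);
            while (n != -1) {
                long written = 0;
                try (OutputStream out = new BufferedOutputStream(
                        new FileOutputStream(input.getPath() + ".part" + part))) {
                    while (n != -1 && written < chunkBytes) {
                        out.write(buffer, 0, n);
                        written += n;
                        n = in.read(buffer);
                    }
                }
                part++;
            }
        }
        return part;
    }

    public static void main(String[] args) throws IOException {
        int parts = split(new File(args[0]), 5L * 1024 * 1024); // 5 MB chunks
        System.out.println("Wrote " + parts + " parts");
    }
}
```

A scheduled job running this before the XI poll would leave only optimally sized files for the adapter to pick up.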

  • FTP how much large  file can take?

    Hi Experts,
    Can anyone please tell me how large a file FTP can pull? What is the maximum file size for FTP?
    Thanks
    narendran

    Hi Narendran,
    If you read question 14 (Q: Which memory requirements does the File Adapter have? Is there a restriction on the maximum file size it can process?) in note 821267 - FAQ: XI 3.0 / PI 7.0 / PI 7.1 / PI 7.3 File Adapter, it shows that the size restriction depends on several factors. I recommend that you read it.
    Check also this thread: Size recommendation for placing file from FTP to Application Server??
    Also, I suggest you read this blog if you need to work with big files: File/FTP Adapter - Large File Transfer (Chunk Mode)
    Regards.

  • WRT110 router - FTP problem - Timeouts on large files

    When I upload files via FTP and the transfer of a file takes more than 3 minutes,
    my FTP program doesn't jump to the next file to upload when the previous one has just finished.
    The problem occurs only when I connect to the internet behind the WRT110 router.
    I have tried the FileZilla, FTPRush and FlashFXP FTP programs to upload via FTP.
    In FileZilla, sending keep-alives doesn't help;
    it only works in FlashFXP, but I have a trial version.
    On FileZilla's site I can read this:
    http://wiki.filezilla-project.org/Network_Configuration#Timeouts_on_large_files
    Timeouts on large files
    If you can transfer small files without any issues, but transfers of larger files end with a timeout, a broken router and/or firewall exists between the client and the server and is causing a problem.
    As mentioned above, FTP uses two TCP connections: a control connection to submit commands and receive replies, and a data connection for actual file transfers. It is the nature of FTP that during a transfer the control connection stays completely idle.
    The TCP specifications do not set a limit on the amount of time a connection can stay idle. Unless explicitly closed, a connection is assumed to remain alive indefinitely. However, many routers and firewalls automatically close idle connections after a certain period of time. Worse, they often don't notify the user, but just silently drop the connection.
    For FTP, this means that during a long transfer the control connection can get dropped because it is detected as idle, but neither client nor server are notified. So when all data has been transferred, the server assumes the control connection is alive and it sends the transfer confirmation reply. Likewise, the client thinks the control connection is alive and it waits for the reply from the server. But since the control connection got dropped without notification, the reply never arrives and eventually the connection will time out.
    In an attempt to solve this problem, the TCP specifications include a way to send keep-alive packets on otherwise idle TCP connections, to tell all involved parties that the connection is still alive and needed. However, the TCP specifications also make it very clear that these keep-alive packets should not be sent more often than once every two hours. Therefore, with added tolerance for network latency, connections can stay idle for up to 2 hours and 4 minutes.
    However, many routers and firewalls drop connections that have been idle for less than 2 hours and 4 minutes. This violates the TCP specifications (RFC 5382 makes this especially clear). In other words, all routers and firewalls that drop idle connections too early cannot be used for long FTP transfers. Unfortunately, manufacturers of consumer-grade routers and firewalls do not care about specifications; all they care about is getting your money (and they deliver barely working, lowest-quality junk).
    To solve this problem, you need to uninstall affected firewalls and replace faulty routers with better-quality ones.
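On the application side, the keep-alive mechanism the wiki text refers to maps to the SO_KEEPALIVE socket option; in Java, enabling it on the control connection is a single call. Note that the probe interval itself is an OS setting (e.g. net.ipv4.tcp_keepalive_time on Linux), not something this API controls:

```java
import java.net.Socket;
import java.net.SocketException;

public class KeepAliveDemo {
    public static void main(String[] args) throws SocketException {
        // Unconnected socket for demonstration; in a real FTP client this
        // would be the control connection before or after connect().
        Socket control = new Socket();
        control.setKeepAlive(true); // ask the OS to send TCP keep-alive probes
        System.out.println("SO_KEEPALIVE enabled: " + control.getKeepAlive());
    }
}
```

This only helps if the router honors keep-alive traffic; as the wiki notes, a router that drops idle connections well before the two-hour mark is simply misbehaving.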


  • WiFi breaks when transferring large file over LAN

    Hi guys, my WiFi mostly behaves itself; however, if I try to transfer a large file (the last one was ~1 GB) over the LAN (standard SMB file copy), WiFi drops out. It gets roughly 30 MB through before connectivity is lost. The WiFi continues to show as connected, but I cannot even ping my router. Only disconnecting and reconnecting WiFi re-establishes the connection. Any idea what's up? (My router is the black 'home hub' style. I can't find any version information in its admin panel.) (And before someone asks, no, I'm not telling you the contents of the file, as it isn't relevant. With the greatest of respect, if you just ask unrelated questions to pry, please don't waste everyone's time.)

    Didn't think of that, thanks. Great workaround. I'm going to call Sky and see if they have any firmware updates or another solution. Surely they wouldn't leave such a big known issue unfixed. I'll report back with what they say.
    Edit: Just got off the phone with Sky technical support. This issue was first reported Nov/Dec 2014. The chap I spoke to does not know of a fix, whether a fix is being worked on, or if a fix will ever be worked on. He has no way of contacting the development team to find out any of this information.
    He said firmware updates are pushed out to routers every now and again. He does not know when this is going to happen and does not have a way to notify me about firmware updates installed on my router. (This terrifies me. Sky have a backdoor into my router and can install any code they wish, without even telling me what they changed/installed.) There is no way to turn these auto-updates off either.
    Here's a key thing he did say, though: he was not at liberty to provide my ADSL credentials, but if I chose to extract them from the router and install my own router instead, they would not have any problem with that. When I told him that this was against their TOS and they could terminate my contract for doing it, he went silent and just said "errr..."
    Allowing Sky, their subcontractors, and anyone else they choose to install any code they wish on my router whenever they please, with free open access to my entire LAN, is a total deal-breaker for me. That router is getting unplugged today and I'm putting a new router on the credit card. I told tech support this. I said if they have a problem with that, let's go the legal route. I literally have a recorded call where they give me permission to install my own router.

  • Is it possible to upload large files through FTP to server with iWeb?

    Is it possible to upload large files through FTP to a server with iWeb, like for example with Cyberduck?
    I don't need to publish a website with the files; I just need to upload files to the server. Can it be done somehow with iWeb?

    Here's some info about FTP...
    http://www.iwebformusicians.com/Search-Engine-Optimization/Upload.html
    Make sure you are allowed to store files on your server. Most hosting services don't allow anything to be stored that can't be accessed via the internet. Check the fine print!

  • Large Files from FTP to Azure Storage Account.

    We are required to transfer files from FTP to an Azure Storage Account.
    Our FTP server contains almost 4 TB of data, and it will keep growing.
    We need an automated process which will transfer data from FTP to Windows Azure Storage.
    Currently implemented solution: we have created a Windows service which downloads files from FTP, converts them to a file stream, and uploads them to the storage account via a Blob Storage object.
    With this approach, bandwidth is used in 2 places:
    - downloading the file from FTP
    - uploading it to Azure Storage
    Can you please provide any solution/suggestion which will reduce the consumption of bandwidth, so that large amounts of data transfer in less time?

    Hi,
    Please have a look at the article below; it describes the Blob Transfer Utility tool, a GUI tool to upload and download thousands of small/large files to/from Windows Azure Blob Storage.
    https://blobtransferutility.codeplex.com/
    Disclaimer: This is not an official Microsoft tool and it is not supported by Microsoft. It's just a (very useful) sample.
    Best Regards,
    Jambor

  • Put File of a large file through FTP not working correctly

    I put a lot of large files on my web server through Dreamweaver, and with files larger than about 45 MB, the file is put to the server fine.  But then Dreamweaver waits for a server response to make sure it got the file, and after about 30 seconds or so, it says the server timed out and it starts the FTP over, and the first thing it does on the restart is delete the file.
    The only way I have been able to put my large files on the server is to sit and wait for the transfer to finish, and then cancel the check for whether the file is there or not.
    I have my FTP server timeout set to 300 seconds, but that does not help either; it still times out after about 30 seconds.
    Is there a fix for this so I don't have to sit and watch my files being transferred to the server?

    I changed it to passive FTP, but that did not help; sending a 90 MB file did the same thing.  The problem is that the Dreamweaver FTP wants to verify that the file got there, and when you send a file that large, the server wants to make sure it is a clean file, so it takes a while to put it in the correct location.  And while the server is doing this, Dreamweaver says there is no response from the server and causes a reconnect, and on the reconnect it starts over, and the first thing it does is make sure the file is deleted.  What it should do is verify that the file is there and has the same date as the one you sent; that would eliminate this problem.  I was hoping there was a setting in Dreamweaver to tell the FTP not to verify the send.
    Are there any Dreamweaver developers who visit this forum who could tell me how to bypass the verify on put?  It seems silly that a tool that costs this much cannot do a task that a free piece of software can do.

  • How can I get to read a large file and not break it up into bits

    Hi,
    How do I read from a large file without getting the file cut into pieces, each with its own beginning and ending?
    For example:
    1.aaa
    2.aaa
    3.aaa
    4....
    10.bbb
    11.bbb
    12.bbb
    13.bbb
    Say the file was last read at line 11, and I want to read at line 3 and then read again at line 10.
    How do I specify a byte position in the large file, given that the read function has the signature read(byte[] b, int offset, int length),
    and that offset only indexes into the byte array itself?
    Thanks
    San Htat

    tjacobs01 wrote:
    > Peter__Lawrey wrote:
    > > Try RandomAccessFile.
    > Not only do I hate RandomAccessFiles because of their inefficiency and limited use in today's computing world,
    The one dominated by small devices with SSDs? Or the one dominated by large database servers and B-trees?
    > I would also like to hate on the name 'RandomAccessFile'; almost always, there's nothing 'random' about the access. I tend to think of the tens of thousands of databases users were found to have created on local drives in one previous employer's audit. Where's the company's mission-critical software? It's in some random Access file.
    > Couldn't someone have come up with a better name, like NonlinearAccessFile? I guess the same goes for RAM too...
    Non-linear would imply access times other than O(N), but typically not constant, whereas RAM is nominally O(1), except it is highly optimised for consecutive access, as are spinning-disk files; RAM, though, is fast in either direction.
    [one of these things is not like the other|http://www.tbray.org/ongoing/When/200x/2008/11/20/2008-Disk-Performance#p-11] - silicon disks are much better at random access than rust disks - and [Machine architecture|http://video.google.com/videoplay?docid=-4714369049736584770] at about 1:40 - RAM is much worse at random access than sequential.
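Coming back to San Htat's actual question (re-reading at line 3 and then line 10), RandomAccessFile.seek positions the next read at an absolute byte offset, which works directly when records have a fixed width. A small sketch with a made-up 6-byte record format:

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;

public class SeekDemo {
    public static void main(String[] args) throws IOException {
        // Hypothetical demo file; each record is exactly 6 bytes ("X.aaa\n").
        File f = File.createTempFile("records", ".txt");
        Files.write(f.toPath(),
                "A.aaa\nB.aaa\nC.aaa\nD.aaa\n".getBytes(StandardCharsets.US_ASCII));

        try (RandomAccessFile raf = new RandomAccessFile(f, "r")) {
            byte[] record = new byte[6];
            raf.seek(2 * 6);   // jump straight to the 3rd fixed-width record
            raf.readFully(record);
            System.out.println(new String(record, StandardCharsets.US_ASCII).trim());
            raf.seek(0);       // and back to the 1st, in any order
            raf.readFully(record);
            System.out.println(new String(record, StandardCharsets.US_ASCII).trim());
        }
    }
}
```

If lines are variable-length, there is no arithmetic shortcut: you either scan once to build an index of line-start offsets, or store records at a fixed width.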

  • Large file transfer problems over client/server socket

    Hi,
    I wrote a simple chat program in which I can send files from client to client. The problem is that when I send large files, over 101 MB, the transfer freezes. I do not even get any error messages in the console. The files I am sending are of any type (e.g. mp3, movies, etc.). I am serializing the data into a byteArray[packetSize] and sending the file packet by packet until the whole file has been sent and reconstructed on the other side. The process works perfectly for files smaller than 101 MB, but for some reason it freezes if the file is larger. I have read many forums and there aren't too many solutions out there. I made sure to use the .reset() method to reset my ObjectOutputStream each time I write to it.
    Here's my file sending code:
    byte[] byteArray = new byte[defaultPacketSize];
    numPacketsRequired = Math.ceil(fileSize / (double) defaultPacketSize);
    try {
        reader = new FileInputStream(filePath);
        while (reader.available() > 0) {
            if (reader.available() < defaultPacketSize) {
                byte[] lastPacket = new byte[reader.available()];
                reader.read(lastPacket);
                try {
                    if (socket == null || output == null) {
                        throw new SocketException("socket does not exist");
                    }
                    output.writeObject(lastPacket);
                    output.reset();
                    output.writeObject("DONE");
                    output.reset();
                    output.close();
                    socket.close();
                } catch (Exception e) {
                    System.out.println("Exception ***: " + e);
                    output.close();
                    socket.close();
                }
            } else {
                reader.read(byteArray);
                try {
                    if (socket == null || output == null) {
                        throw new SocketException("socket does not exist");
                    }
                    output.writeObject(byteArray);
                    output.reset();
                } catch (Exception e) {
                    System.out.println("Exception ***: " + e);
                    output.close();
                    socket.close();
                }
            }
        }
        reader.close();
    } catch (Exception e) {
        System.out.println("COULD NOT READ PACKET");
    }
    Here's my file receiving code:
    try {
        // The message from the client
        Object streamInput;
        FileOutputStream writer;
        byte[] packet;
        while (true) {
            streamInput = input.readObject();
            if (streamInput instanceof byte[]) {
                packet = (byte[]) streamInput;
                try {
                    writer = new FileOutputStream(outputPath, true);
                    writer.write(packet); // storing the bytes to file
                    writer.close();
                } catch (Exception e) {
                    System.out.println("Exception: " + e);
                }
            } else if (streamInput.equals("DONE")) {
                socket.close();
                input.close();
                break;
            }
        }
    } catch (Exception e) {
        System.out.println("Exception: " + e);
    }
    I'm looking for any way I can possibly send large files from client to client without having it freeze. Are there any better file-transfer approaches other than sockets? I don't really want FTP; I think I want to keep it HTTP.
    Any suggestions would be helpful. Thanks!
    Evan
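One commonly suggested alternative (my sketch, not something proposed in the thread): drop Java serialization entirely and length-prefix raw chunks with DataOutputStream/DataInputStream, which avoids ObjectOutputStream's object-graph bookkeeping on large transfers. A length of -1 marks end-of-file:

```java
import java.io.*;

public class ChunkedTransfer {
    // Sender side: write each chunk as <int length><bytes>, then -1 when done.
    public static void send(InputStream file, DataOutputStream out) throws IOException {
        byte[] buffer = new byte[8192];
        int n;
        while ((n = file.read(buffer)) != -1) {
            out.writeInt(n);
            out.write(buffer, 0, n);
        }
        out.writeInt(-1); // end-of-file marker
        out.flush();
    }

    // Receiver side: read length-prefixed chunks until the -1 marker.
    public static void receive(DataInputStream in, OutputStream file) throws IOException {
        byte[] buffer = new byte[8192];
        int len;
        while ((len = in.readInt()) != -1) {
            in.readFully(buffer, 0, len);
            file.write(buffer, 0, len);
        }
        file.flush();
    }
}
```

On the wire this is just the raw bytes plus a 4-byte header per chunk, so memory use stays bounded by the buffer size no matter how large the file is.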

    I've taken a better look at the code you posted, and there is one problem with the receiving code: you keep repeatedly opening and closing the FileOutputStream. This is not going to be efficient, as the file will keep needing to be positioned to its end.
    Yes, sorry, I did change that code so that it leaves the file open until it is completely done writing. Basically I have a progress bar that records how far along in the writing process the client is, and when the progress bar reaches 100%, meaning the file is complete, the file.close() method is invoked. Sorry about that.
    I also ran some memory tests using the Runtime.getRuntime().totalMemory() and Runtime.getRuntime().freeMemory() methods. I put these calls inside the loop where I read in the file and send it to the client. Here's the output:
    Sender's free memory: 704672
    File reader read 51200 bytes of data.
    767548 left to read.
    Sender's runtime memory: 2818048
    Sender's free memory: 702968
    File reader read 51200 bytes of data.
    716348 left to read.
    Sender's runtime memory: 2818048
    Sender's free memory: 701264
    File reader read 51200 bytes of data.
    665148 left to read.
    Sender's runtime memory: 2818048
    Sender's free memory: 699560
    File reader read 51200 bytes of data.
    613948 left to read.
    Sender's runtime memory: 2818048
    Sender's free memory: 697856
    File reader read 51200 bytes of data.
    562748 left to read.
    Sender's runtime memory: 2818048
    Sender's free memory: 696152
    File reader read 51200 bytes of data.
    511548 left to read.
    Sender's runtime memory: 2818048
    Sender's free memory: 694448
    File reader read 51200 bytes of data.
    460348 left to read.
    Sender's runtime memory: 2818048
    Sender's free memory: 692744
    File reader read 51200 bytes of data.
    409148 left to read.
    Sender's runtime memory: 2818048
    Sender's free memory: 691040
    File reader read 51200 bytes of data.
    357948 left to read.
    Sender's runtime memory: 2818048
    Sender's free memory: 689336
    File reader read 51200 bytes of data.
    306748 left to read.
    Sender's runtime memory: 2818048
    Sender's free memory: 687632
    File reader read 51200 bytes of data.
    255548 left to read.
    Sender's runtime memory: 2818048
    Sender's free memory: 685928
    File reader read 51200 bytes of data.
    204348 left to read.
    Sender's runtime memory: 2818048
    Sender's free memory: 684224
    File reader read 51200 bytes of data.
    153148 left to read.
    Sender's runtime memory: 2818048
    Sender's free memory: 682520
    File reader read 51200 bytes of data.
    101948 left to read.
    Sender's runtime memory: 2818048
    Sender's free memory: 680816
    File reader read 51200 bytes of data.
    50748 left to read.
    Sender's runtime memory: 2818048
    Sender's free memory: 679112
    File reader read 50748 bytes of data.
    0 left to read.
    Creating last packet of size: 50748
    Last packet size after setting it equal to byteArray: 50748
    Here's the memory stats from the receiver:
    Receiver's free memory: 639856
    Receiver's total memory: 2842624
    Receiver's free memory: 638920
    Receiver's total memory: 2842624
    Receiver's free memory: 637984
    Receiver's total memory: 2842624
    Receiver's free memory: 637048
    Receiver's total memory: 2842624
    Receiver's free memory: 636112
    Receiver's total memory: 2842624
    Receiver's free memory: 635176
    Receiver's total memory: 2842624
    Receiver's free memory: 634240
    Receiver's total memory: 2842624
    Receiver's free memory: 633304
    Receiver's total memory: 2842624
    Receiver's free memory: 632368
    Receiver's total memory: 2842624
    Receiver's free memory: 631432
    Receiver's total memory: 2842624
    Receiver's free memory: 630496
    Receiver's total memory: 2842624
    Receiver's free memory: 629560
    Receiver's total memory: 2842624
    Receiver's free memory: 628624
    Receiver's total memory: 2842624
    Receiver's free memory: 627688
    Receiver's total memory: 2842624
    Receiver's free memory: 626752
    Receiver's total memory: 2842624
    Receiver's free memory: 625816
    Receiver's total memory: 2842624
    Receiver's free memory: 624880
    Receiver's total memory: 2842624
    Receiver's free memory: 623944
    Receiver's total memory: 2842624
    Receiver's free memory: 623008
    Receiver's total memory: 2842624
    Receiver's free memory: 622072
    Receiver's total memory: 2842624
    Receiver's free memory: 621136
    Receiver's total memory: 2842624
    Receiver's free memory: 620200
    Receiver's total memory: 2842624
    Receiver's free memory: 619264
    Receiver's total memory: 2842624
    Receiver's free memory: 618328
    Receiver's total memory: 2842624
    Receiver's free memory: 617392
    Receiver's total memory: 2842624
    Receiver's free memory: 616456
    Receiver's total memory: 2842624
    Receiver's free memory: 615520
    Receiver's total memory: 2842624
    Receiver's free memory: 614584
    This is just a sample of both the receiver's and sender's stats. Everything appears to be fine! I hope this post isn't too long.
    Thanks!

  • Handling Large File

    Hi all,
    We need to handle a large file (880 MB). Is there any provision at the adapter level to break the file into smaller chunks?
    Can we avoid using shell scripts and OS-level commands?
    Thanks,
    Srinivas

    Hi Srinivas,
    If it is a text file, then you could break up the file into multiple recordsets,
    e.g.:
    [Converting Text Format in the Sender File/FTP Adapter to XML   |http://help.sap.com/saphelp_nwpi711/helpdata/en/44/658ac3344a4de0e10000000a1553f7/frameset.htm]
    and
    [#821267 File Adapter FAQ|http://service.sap.com/sap/support/notes/821267]
    14. Memory Requirements
    Regards
      Kenny
