Handling big files

I have files over 100MB to process for some transactions.
When I open one in a single shot, the program runs out of memory.
I tried opening the file and splitting it into 1MB pieces. Even then, since the memory use accumulates, it runs out of memory.
Is there a way I can read only 1MB at a time, by buffering or some other means, instead of physically splitting the file into 1MB pieces?

You can increase the amount of memory available to Java (I can't remember what the default is, but it's obviously too small for what you're doing) by using the -Xmx command line switch:
java -Xmx512m MyProg
will set aside 512MB for your program (assuming your computer has that much physical memory, of course). This isn't the most elegant solution to your problem, though; you may instead want to look into streamlining your program's memory requirements, as rustypup suggested.
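If you'd rather attack the memory use itself: you don't need to physically split the file. A plain FileInputStream with a fixed-size buffer reads the file one chunk at a time, so memory stays at the buffer size no matter how big the file is. A minimal sketch (processChunk is a hypothetical stand-in for whatever your transaction processing does):

import java.io.FileInputStream;
import java.io.IOException;

public class ChunkedReader {
    public static void main(String[] args) throws IOException {
        byte[] buffer = new byte[1024 * 1024]; // 1MB window, reused for every read
        try (FileInputStream in = new FileInputStream(args[0])) {
            int bytesRead;
            while ((bytesRead = in.read(buffer)) != -1) {
                processChunk(buffer, bytesRead); // only this slice is in memory
            }
        }
    }

    private static void processChunk(byte[] buf, int len) {
        // hypothetical placeholder for the real transaction processing
    }
}

As long as processChunk doesn't hold on to the data, nothing accumulates between reads. If your records are lines, wrap the stream in a BufferedReader and process line by line instead.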

Similar Messages

  • How to handle Big Files in SAP PI Sender file adapter

    Hi all,
    I have developed an interface, File to Proxy. It works fine with small and normal files.
    The structure contains one Header, an unbounded Detail, and one Trailer. How do I handle files larger than 40 MB?
    Thanking you,
    Sridhar

    Hi Sridhar Gautham,
    We can set a limit on the request body message length that can be accepted by the HTTP Provider Service on the Java dispatcher. The system controls this limit by inspecting the Content-Length header of the request or monitoring the chunked request body (in case chunked encoding is applied to the message). If the value of the Content-Length header exceeds the maximum request body length, then the HTTP Provider Service will reject the request with a 413 u201CRequest Entity Too Largeu201D error response. You can limit the length of the request body using the tting MaxRequestContentLength property of the HTTP Provider Service running on the Java dispatcher. By default, the maximum permitted value is 131072 KB (or 128MB).You can configure the MaxRequestContentLength property using the Visual Administrator tool. Proceed as follows:
           1.      Go to the Properties tab of the HTTP Provider Service running on the dispatcher.
           2.      Choose MaxRequestContentLength property and enter a value in the Value field. The length is specified in KB.
           3.      Choose Update to add it to the list of properties.
           4.      To apply these changes, choose Save Properties.
    The value of the parameter MaxRequestContentLength has to be set to a high value.
    The Visual Administrator tool may be accessed using this link:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/40a08db9-59b6-2c10-5e8f-a4b2a9aaa3d2?quicklink=index&overridelayout=true
    In short, the ICM parameters to adjust in this case are:
    icm/HTTP/max_request_size_KB
    icm/server_port_<xx> (TIMEOUT option)
    rdisp/max_wprun_time
    ztta/max_memreq_MB
    Please look into this thread to know more about ICM parameters
    http://help.sap.com/saphelp_nw04/helpdata/en/61/f5183a3bef2669e10000000a114084/frameset.htm
    The second solution is to split the source file so that each piece is less than 5 MB in size; PI then has no problem with files between 1 MB and 5 MB. You can insert a header and trailer into each smaller file obtained after the split. All of this can be done using scripts or conventional programming, provided the individual records within the file are independent of each other. Finally, you have to rename each new file created and put them in the PI folder in sequential order. All this can be achieved by a simple shell script/batch file, C code, or Java code; a sketch of the Java route follows below. If you go for C or Java code, you need a script to call it from the PI communication channel parameter "run operating system command before message processing".
    regards
    Anupam
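
    For illustration, a minimal Java sketch of the split described above - copy the header and trailer into every piece and spread the detail records among them. The records-per-file count, line-based records, and all names here are assumptions for the sketch, not PI-specific code:

    import java.io.*;
    import java.util.ArrayList;
    import java.util.List;

    public class HeaderTrailerSplitter {
        public static void main(String[] args) throws IOException {
            // Read the whole source file; fine for inputs in the tens of MB.
            // Assumes at least a header line and a trailer line are present.
            List<String> lines = new ArrayList<>();
            try (BufferedReader r = new BufferedReader(new FileReader(args[0]))) {
                String line;
                while ((line = r.readLine()) != null) lines.add(line);
            }
            String header = lines.get(0);                 // first line = Header
            String trailer = lines.get(lines.size() - 1); // last line = Trailer
            List<String> details = lines.subList(1, lines.size() - 1);

            int recordsPerFile = 50000; // tune so each piece stays under 5 MB
            int part = 0;
            for (int i = 0; i < details.size(); i += recordsPerFile, part++) {
                int end = Math.min(i + recordsPerFile, details.size());
                // Sequential names so the pieces are picked up in order.
                try (PrintWriter w = new PrintWriter(new FileWriter(
                        String.format("part_%04d.txt", part)))) {
                    w.println(header);
                    for (String rec : details.subList(i, end)) w.println(rec);
                    w.println(trailer);
                }
            }
        }
    }

    Each piece then carries its own header and trailer, which only works if the detail records are independent of each other, as noted above.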

  • How to handle big java source files in JDEV 10.1.3

    Hi,
    I have some .java files that are over 10,000 lines long. I know this is crazy, but I didn't write them and I have to maintain them. I really don't feel like refactoring all these big source files right now.
    JDeveloper has lots of while-editing features that are probably grinding to a halt on a file this big. My CPU usage shows ~97% whenever I do anything in this file.
    Are there some preferences I could set, or something I could disable, just while editing this big file?
    Thanks,
    Simon.

    > The delays between edits and the IDE stalls seemed about the same with this change in place.
    Ah... that's sad. It seems like there's another (un-configurable) timer mechanism for background parsing used by the structure pane.
    > I didn't really understand what the numbers were controlling so I didn't try any other values.
    They control the frequency of a timer that is reset each time you press a key in the code editor. After the number of ms specified in these parameters has elapsed since the last keypress, the code in the editor is parsed. There are two settings, one for short files and one for long files (I forget what the threshold is between short and long).
    > Maybe it isn't the parser since it still goes to 100% CPU (just less often) but I can still move the mouse around the file - no stalling.
    Here's something I'd find useful to investigate this problem: run JDeveloper via jdev.exe so that you have a console. When the CPU hits 100%, switch to the console and press Ctrl+Break. Copy the stack dump (maybe take a couple of them) and email them to me (brian dot duff at oracle dot com).
    > Dropping the structure window is a fairly simple workaround for me. I can live without the structure window for this file, although it is actually most useful as the file's size, and structure, increase.
    Yes, I can imagine the structure pane would be quite useful with a mammoth Java file like the one you're working with :) Incremental find might be your friend in its absence (Ctrl+E, I think).
    > If you're considering an ER for the structure window (:-) then how about this too: "Cursor location within the file should be shown in the structure window". This already happens for Ant (build.xml) files but not for Java files - or at least not for me. The problem is that it is easy to get lost in a large file, and the navigation assistance should be the same across all structured file types.
    Funnily enough, I was talking to one of the developers on the team that builds the infrastructure for the XML-based parts of the product (including Ant) about these kinds of inconsistencies between the XML and Java structure panes just a couple of days ago... :) I'll file a bug for this (rather than an ER), since it's a UI inconsistency.
    Thanks,
    Brian

  • Adobe Photoshop CS3 collapses each time it loads a big file

    I was loading a big file of photos from iMac iPhoto into Adobe Photoshop CS3 and it kept crashing; yet each time I reopen Photoshop, it loads the photos again and crashes again. Is there a way to stop this cycle?

    I don't think many users here actually use iPhoto (even the Mac users).
    However, Google is your friend. A quick search came up with some other non-Adobe forum entries:
    "... but the golden rule of iPhoto is NEVER EVER MESS WITH THE IPHOTO LIBRARY FROM OUTSIDE IPHOTO. In other words, anything you might want to do with the pictures in iPhoto can be done from *within the program,* and that is the only safe way to work with it. Don't go messing around inside the "package" that is the iPhoto Library unless you are REALLY keen to lose data, because that is exactly what will happen."
    "... everything you want to do to a photo in iPhoto can be handled from *within the program.* This INCLUDES using a third-party editor, and it saves a lot of time and disk space if you do it this way:"
    1. In iPhoto's preferences, specify a third-party editor (let's say Photoshop) to be used for editing photos.
    2. Now, when you right-click (or control-click) a photo in iPhoto, you have two options: Edit in Full Screen (ie iPhoto's own editor) or Edit with External Editor. Choose the latter.
    3. Photoshop will open, then the photo you selected will automatically open in PS. Do your editing, and when you save (not save as), PS "hands" the modified photo back to iPhoto, which treats it exactly the same as if you'd done that stuff in iPhoto's own editor and updates the thumbnail to reflect your changes. Best of all, your unmodified original remains untouched so you can always go back to it if necessary.

  • Handling Large files in PI scenarios?

    Hello,
    We have a lot of scenarios (almost 50) where we deal with file interfaces on at least the receiver or sender side. Some of them are just file transfers where we use AAE, and some are ones where we have to do message mapping (sometimes very complex ones).
    The interfaces work perfectly fine with a normal file that doesn't have many records, but recently we started testing big files with over 1000 records and they take a long time to process. This also causes other messages lined up in the same queue to wait for however long the first message takes to process.
    This must be a very common scenario where PI has to process large files, especially files coming from banks. What is the best way to handle their processing? Apart from having better system hardware (we are currently in the test environment; the production environment will definitely be better), is there any technique which might help us improve the processing of large files without data loss and without interrupting other messages?
    Thanks,
    Yash

    Hi Yash,
    Check these blogs for the structure you are mentioning:
    /people/shabarish.vijayakumar/blog/2006/02/27/content-conversion-the-key-field-problem
    /people/shabarish.vijayakumar/blog/2005/08/17/nab-the-tab-file-adapter
    Regards,
    ---Satish

  • Suggestion needed for processing Big Files in Oracle B2B

    Hi,
    We are doing a feasibility study of using Oracle AS Integration B2B instead of TIBCO. We presently use TIBCO for our B2B transactions. Since my client company is planning to implement Fusion Middleware (Oracle ESB and Oracle BPEL), we are also looking at Oracle AS Integration B2B for B2B transactions (in other words, we are planning to replace TIBCO with Oracle Integration B2B if possible).
    I am really concerned about one thing: receiving and processing any "big file" (15 MB in size) from a trading partner.
    Present scenario: one of our trading partners sends invoice documents in a single file, and that file can grow up to 15 MB in size. In our existing setup, when we receive such big files from the trading partner (through TIBCO Business Connect - BC), TIBCO BC works fine for 1 or 2 files but crashes once it has received multiple files of that size. What is happening is that the memory TIBCO BC consumes to receive one such big file is not released after processing, and as a result TIBCO BC throws an "OUT OF MEMORY" error after processing some files.
    My questions:
         1. How robust is Oracle AS Integration B2B in terms of processing such big files?
         2. Is there any upper limit on the size that Oracle AS Integration B2B can handle for receiving and processing data?
         3. What is the average time required to receive and process such a big file (say 15 MB)?
         4. Is there any documentation available that covers passing such big files through Oracle B2B?
    Please let me know if you need more information.
    Thanks in advance.
    Regards,
    --Kaushik

    Hi Ramesh,
    Thanks for your comment. We will try to do a POC ASAP. I will definitely keep in touch with you during this.
    Thanks a bunch.
    Regards,
    --Kaushik

  • Handle big amount of data

    Hello,
    I have to analyse a large amount of data (more than 20 MB) that I read from a logfile. I can't split the data into smaller parts because some of my analysis methods (regression, ...) need all the data.
    Are there any tricks in LabVIEW (5.0.1) for handling big amounts of data?
    hans

    You might be able to do what you would like. If the analysis process needs the whole data set, the whole process may take a couple of minutes, but that in itself is no problem. The trouble is that it gives you no feedback on the progress of the processing, so it can look as if the VI is hanging. Using the lower-level "Read File" functions rather than "Read From Spreadsheet File" makes it easier to build progress feedback, including a graph monitor, into the analysis VI you want to build.
    Thanks in advance,
    Tom
    hans wrote:
    > Hello,
    >
    > I have to analyse a great amount of data (more than 20MByte), that I
    > read from a logfile. I can't split these data into smaller parts
    > because some of my analysis-methods need all data (regression,....).
    >
    > Are there any tricks for LabVIEW (5.0.1), how to handle big amounts of
    > data?
    >
    > hans

  • Big File vs Small file Tablespace

    Hi All,
    I have a doubt and just want to confirm which is better for a tablespace: one bigfile instead of many small datafiles, or several big datafiles. I think it is better to use a bigfile tablespace.
    Kindly help me out on whether I am right or wrong, and why.

    GirishSharma wrote:
    > Aman.... wrote:
    > > Vikas Kohli wrote:
    > > > With respect to performance I guess bigfile tablespace is a better option
    > > Why ?
    > If you allow me to post, I would like to paste the below text from my first reply's doc link please:
    > "Performance of database opens, checkpoints, and DBWR processes should improve if data is stored in bigfile tablespaces instead of traditional tablespaces. However, increasing the datafile size might increase time to restore a corrupted file or create a new datafile."
    > Regards
    > Girish Sharma
    Girish,
    I find it interesting that I've never found any evidence to support the performance claims - although I can think of reasons why there might be some truth to them and could design a few tests to check. Even if there is some truth in the claims, how significant or relevant might they be in the context of a database that is so huge that it NEEDS bigfile tablespaces ?
    Database opening:  how often do we do this - does it matter if it takes a little longer - will it actually take noticeably longer if the database isn't subject to crash recovery ?  We can imagine that a database with 10,000 files would take longer to open than a database with 500 files if Oracle had to read the header blocks of every file as part of the database open process - but there's been a "delayed open" feature around for years, so maybe that wouldn't apply in most cases where the database is very large.
    Checkpoints: critical in the days when a full instance checkpoint took place on the log file switch - but (a) that hasn't been true for years, (b) incremental checkpointing made a big difference to the I/O peak when an instance checkpoint became necessary, and (c) we have had a checkpoint process for years (if not decades) which updates every file header when necessary rather than requiring DBWR to do it.
    DBWR processes: why would DBWn handle writes more quickly - the only idea I can come up with is that there could be some code path that has to associate a file id with an operating system file handle of some sort and that this code does more work if the list of files is very long: very disappointing if that's true.
    On the other hand, I recall many years ago (8i time) crashing a session when creating roughly 21,000 tablespaces for a database, because some internal structure relating to file information reached the 64MB hard limit for a memory segment in the SGA. It would be interesting to hear if anyone has recently created a database close to the 65K+ limit for files - and whether it makes any difference whether that's 66 tablespaces with about 1,000 files each, or 1,000 tablespaces with about 66 files each.
    Regards
    Jonathan Lewis

  • Hardware Question – Handling large files in Photoshop

    I'm working with some big TIFF files (~1GB) for large-scale hi-res printing (60" x 90", 10718 x 14451), and my system is lagging hard like never before (Retina MacBook Pro 2012 2.6GHz i7 /8 GB RAM/ 512GB HD).
    So far I've tried:
    1) converting to .psd and .psb
    2) changing the scratch disk to an external Thunderbolt SSD
    3) allocating all available memory to the program within photoshop preferences
    4) closing all other applications
    In general I'm being told that I don't have enough RAM. So what are the minimum recommended system specs to handle this file size more comfortably? The newest Retina Pro with 16GB RAM? Or a switch to an iMac with 32GB? A Mac Pro?
    Thanks so much!

  • Problem reading big file. No, bigger than that. Bigger.

    I am trying to read a file roughly 340 GB in size. Yes, that's "three hundred forty". Yes, gigabytes. (I've been doing searches on "big file java reading" and I keep finding things like "I have this huge file, it's 600 megabytes!".)
    "Why don't you split it, you moron?" you ask. Well, I'm trying to.
    Specifically, I need a slice "x" rows in. It's nicely delimited, so, in theory:
    (pseudocode)
    BufferedReader fr = new BufferedReader(new FileReader(new File(myhugefile)));
    long startLine = 70000000L;
    String line;
    long linesRead = 0;
    while ((line = fr.readLine()) != null && linesRead < startLine)
        linesRead++; // we don't care about these
    // ok, we're where we want to be, start caring
    int linesWeWant = 100;
    linesRead = 0;
    while ((line = fr.readLine()) != null && linesRead < linesWeWant) {
        doSomethingWith(line);
        linesRead++;
    }
    (Please assume the real code is better written and has been proven to work with hundreds of "small" files (under a gigabyte or two). I'm happy with my file read/file slice logic, overall.)
    Here's the problem. No matter how I try reading the file, whether I start at a specific line or not, whether I save each line out to a String or not, it always dies with an OOM at around row 793,000,000. The OOM is thrown from BufferedReader.readLine(). Please note I'm not trying to read the whole file into a buffer, just one line at a time. Further, it dies at the same point no matter how high or low (within reason) I set my heap size, and watching the memory allocation shows it's not coming close to filling memory. I suspect the problem occurs once I've read more than Integer.MAX_VALUE bytes from the file.
    Now -- the problem is that it's not just this one file -- the program needs to handle a general class of comma- or tab-delimited files which may have any number of characters per row and any number of rows, and it needs to do so in a moderately sane timeframe. So this isn't a one-off where we can hand-tweak an algorithm because we know the file structure. I am trying it now using RandomAccessFile.readLine(), since that's not buffered (I think...), but, my god, is it slow... my old code read 79 million lines and crashed in under about three minutes; the RandomAccessFile code has taken about 45 minutes and has only read 2 million lines.
    Likewise, we might start at line 1 and want a million lines, or start at line 50 million and want 2 lines. Nothing can be assumed about where we start caring about data or how much we care about, the only assumption is that it's a delimited (tab or comma, might be any other delimiter, actually) file with one record per line.
    And if I'm missing something brain-dead obvious...well, fine, I'm a moron. I'm a moron who needs to get files of this size read and sliced on a regular basis, so I'm happy to be told I'm a moron if I'm also told the answer. Thank you.

    LizardSF wrote:
    > FWIW, here's the exact error message. I tried this one with RandomAccessFile instead of BufferedReader because, hey, maybe the problem was the buffering. So it took about 14 hours and crashed at the same point anyway.
    > Exception in thread "AWT-EventQueue-0" java.lang.OutOfMemoryError: Java heap space
    >      at java.util.Arrays.copyOf(Unknown Source)
    >      at java.lang.AbstractStringBuilder.expandCapacity(Unknown Source)
    >      at java.lang.AbstractStringBuilder.append(Unknown Source)
    >      at java.lang.StringBuffer.append(Unknown Source)
    >      at java.io.RandomAccessFile.readLine(Unknown Source)
    >      at utility.FileSlicer.slice(FileSlicer.java:65)
    > Still haven't tried the other suggestions, wanted to let this run.
    Rule 1: When you're testing, especially when you don't know what the problem is, change ONE thing at a time.
    Now you've introduced RandomAccessFile into the equation and you still have no idea what's causing the problem, and neither do we (unless there's someone here who's been through this before).
    Unless you can see any better posts (and there may well be; some of these guys are Gods to me too), try what I suggested with your original class (or at least a modified copy). If it fails, chances are that there IS some absolute limit that you can't cross; in which case, try Kayaman's suggestion of a FileChannel.
    But at least give yourself the chance of KNOWING what or where the problem is happening.
    Winston
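
    For what it's worth, the stack trace above (expandCapacity/append under readLine) points at a single gigantic "line" - perhaps an unterminated last record or a long stretch with no delimiters - being accumulated into one String. Here is a minimal sketch of a byte-scanning slicer that keeps memory bounded however long a line turns out to be; the names only echo the thread, and the implementation is an assumption, not the poster's actual FileSlicer:

    import java.io.BufferedInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class BoundedFileSlicer {
        public static void main(String[] args) throws IOException {
            slice(args[0], 70000000L, 100, 1 << 20);
        }

        // Skip startLine lines by scanning raw bytes (never building a String),
        // then read linesWanted lines, capping each at maxLineBytes so one
        // unterminated "line" can never exhaust the heap.
        static void slice(String path, long startLine, int linesWanted,
                          int maxLineBytes) throws IOException {
            try (InputStream in = new BufferedInputStream(
                    Files.newInputStream(Paths.get(path)), 1 << 20)) {
                long skipped = 0;
                int b;
                while (skipped < startLine && (b = in.read()) != -1) {
                    if (b == '\n') skipped++;
                }
                StringBuilder sb = new StringBuilder();
                int lines = 0;
                while (lines < linesWanted && (b = in.read()) != -1) {
                    if (b == '\n') {
                        doSomethingWith(sb.toString());
                        sb.setLength(0);
                        lines++;
                    } else if (sb.length() < maxLineBytes) {
                        sb.append((char) b); // assumes a single-byte charset
                    }
                }
            }
        }

        static void doSomethingWith(String line) {
            System.out.println(line);
        }
    }

    Because the only counters are longs and no String ever grows past maxLineBytes, this also sidesteps any Integer.MAX_VALUE limits while staying within plain java.io.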

  • Transfering big files

    Hi,
    I would like to transfer really big files (500 MB and above). This is a file-to-file transfer without any conversion (except code page). How can I do that with XI without having the whole payload in XI, which would certainly waste a lot of time and space in the database (log tables/message tables)?
    The file is written by a BW process with a temporary filename; after the file is written, it is renamed to the filename XI is looking for. XI should then trigger the file transfer without reading the content.
    Any idea?

    Use the chunk mode of the file adapter if you are on PI 7.11 (for binary data transfers):
    /people/niki.scaglione2/blog/2009/10/31/chunkmode-for-binary-file-transfer-within-pi-71-ehp1
    Check the "Large Files Handling" section in this guide:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/2016a0b1-1780-2b10-97bd-be3ac62214c7?quicklink=index&overridelayout=true
    Regards,
    Ravi

  • Not enough space on my new SSD drive to import my data from time machine backup, how can I import my latest backup minus some big files?

    I just got a new 256GB SSD drive for my Mac. I want to import my data from a Time Machine backup, but it's larger than 256GB since it used to be on my old optical drive. How can I import my latest backup while keeping some big files out on the external drive?

    Hello Salemr,
    When you restore from a Time Machine backup, you can tell it not to transfer folders like Desktop, Documents, Downloads, Movies, Music, Pictures, and Public. Take a look at the article below for the steps to restore from your backup.
    Move your data to a new Mac
    http://support.apple.com/en-us/ht5872
    Regards,
    -Norm G. 

  • Change upload file name with com.oreilly.servlet.MultipartRequest to handle the file upload

    1. When using com.oreilly.servlet.MultipartRequest to handle a file upload, can I change the uploaded file's name?
    2. How does com.oreilly.servlet.MultipartRequest handle the upload? Does it convert the file to bytes?
    What is the difference if I use the following method?
       File uploadedFile = mp.getFile("filename"); // getFile() already returns a File
       FileInputStream fis = new FileInputStream(uploadedFile);
       byte[] uploadedFileBuf = new byte[(int) uploadedFile.length()];
       fis.read(uploadedFileBuf); // fill the buffer from the uploaded file
       fis.close();
       FileOutputStream fos = new FileOutputStream(newFileName); // newFileName = target name
       fos.write(uploadedFileBuf);
       fos.close();

    My questions are:
    1) When using the oreilly package to do a file upload, it looks like one line of code is enough to store the uploaded file in the target directory:
    MultipartRequest multi =
            new MultipartRequest(request, dirName, 10*1024*1024); // 10MB
    Why do some examples still use FileOutputStream?
    outs = new FileOutputStream(UPLOADDIR + fileName);
    filePart.writeTo(outs);
    outs.flush();
    outs.close();
    2) Can I rename the file when I use the oreilly package?
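
    On question 2: if I remember correctly, the cos library lets you pass a FileRenamePolicy to the MultipartRequest constructor; it decides the final file name before the part is written to disk, so no extra FileOutputStream copy is needed afterwards. A sketch (worth verifying against the cos.jar version you have; the timestamp policy is just an example):

    import java.io.File;
    import com.oreilly.servlet.multipart.FileRenamePolicy;

    // Prefix every uploaded file with a timestamp so names never collide.
    public class TimestampRenamePolicy implements FileRenamePolicy {
        public File rename(File f) {
            return new File(f.getParent(),
                    System.currentTimeMillis() + "_" + f.getName());
        }
    }

    // usage (assumed constructor overload):
    // MultipartRequest multi = new MultipartRequest(
    //         request, dirName, 10*1024*1024, new TimestampRenamePolicy());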

  • Photoshop CC slow in performance on big files

    Hello there!
    I've been using PS CS4 since release and upgraded to CS6 Master Collection last year.
    Since my system broke down some weeks ago (the RAM failed), I gave Photoshop CC a try. At the same time I moved into new rooms and couldn't get my hands on the DVD of my CS6, which is resting somewhere at home...
    So I tried CC.
    Right now I'm using it with some big files. File size is between 2GB and 7.5GB max (all PSB).
    Photoshop seemed to run fast in the very beginning, but for a few days now it has been so unbelievably slow that I can't work properly.
    I wonder if it is caused by the growing files or some other issue with my machine.
    The files contain a large number of layers and masks, nearly 280 layers in the biggest file (mostly with masks).
    The images are 50 x 70 cm @ 300 dpi.
    When I try to make some brush strokes on a layer mask in the biggest file, it takes 5-20 seconds for the brush to draw... I couldn't figure out why.
    And it's not as dependent on the brush size as you might expect... even very small brushes (2-10 px) show this issue from time to time.
    Also, switching masks (gradient maps, selective color or levels) on and off takes ages to display, sometimes more than 3 or 4 seconds.
    The same goes for panning around in the picture, zooming in and out, or moving layers.
    It's nearly impossible to work on these files in a reasonable time.
    I've never seen this in CS6.
    Now I wonder if there's something wrong with PS or the OS. But: I've never worked with files this big before.
    In March I worked on some 5GB files with 150-200 layers in CS6, and it worked like a charm.
    SystemSpecs:
    i7-3930K (3.8 GHz)
    Asus P9X79 Deluxe
    64GB DDR3 1600Mhz Kingston HyperX
    GTX 570
    2x Corsair Force GT3 SSD
    Wacom Intuos 5 M Touch (I have some issues with the touch from time to time)
    WIN 7 Ultimate 64
    all system updates
    newest drivers
    PS CC
    System and PS are running on the first SSD; scratch is on the second. Both are set to be used by PS.
    RAM is allocated 79% to PS, cache levels are set to 5 or 6, and history states are set to 70. I also tried different cache tile sizes from 128K to 1024K, but it didn't help much.
    When I open the largest file, PS takes 20-23 GB of RAM.
    Any suggestions?
    best,
    moslye

    Is it just slow drawing, or is actual computation (image size, rotate, GBlur, etc.) also slow?
    If the slowdown is drawing, then the most likely culprit would be the video card driver. Update your driver from the GPU maker's website.
    If the computation slows down, then something is interfering with Photoshop. We've seen some third party plugins, and some antivirus software cause slowdowns over time.

  • Lightroom 4 to 5 upgrade doesn't handle raw files from a new Nikon D750

    Hi, I've just purchased and installed an upgrade from Lightroom 4 to 5. It doesn't seem to handle raw files authored with a new Nikon D750 camera. I spoke to the sales rep about this and he gave me a link to the 8.6 DNG Converter page with instructions to download. 8.6 only works with Mac OS 10.7-10.9, according to the page. I'm running Yosemite, Mac OS 10.10. Please can you tell me my options? Lightroom 4 worked beautifully with my older cameras' raw files, so I would like to continue using the application. What should I do? How soon will Lightroom 5 be able to deal with raw files from a D750? Many thanks, Adam.

    Until the next version of Lightroom is released, you need to use the DNG Converter version 8.7RC to convert your RAW photos to DNG and then import the DNGs into Lightroom.
