How to process large data files in XI? 100 MB files?

Hi All,
At present we have a scenario as follows: it is File to IDoc. The problem is the size of the file. We need to transfer a 100 MB file to the SAP R/3 system. How do we process this huge volume of data?
Thanks in advance and regards,
Rakesh

Hi,
In general, extra sizing for XI memory consumption is not required. The total memory of the SAP Web Application Server should be sufficient, except in the case of large messages (> 1 MB).
To estimate the memory consumption for processing large messages, you can use the following rules of thumb:
- Allocate 3 MB per process (for example, the number of parallel messages per second may be an indicator).
- Allocate 4 kB per 1 kB of message size in the asynchronous case, or 9 kB per 1 kB of message size in the synchronous case.
Example: asynchronous concurrent processing of 10 messages, each 1 MB in size, requires (3 MB + 4 * 1 MB) * 10 = 70 MB of memory.
With mapping or content-based routing, where an internal representation of the message payload may be necessary, the memory requirements can be much higher (possibly exceeding 20 kB per 1 kB of message, depending on the type of mapping).
The size of the largest message thus depends mainly on the size of the available main memory. On a normal 32-bit operating system, there is an upper bound of approximately 1.5 to 2 GB per process, which limits the largest possible message size.
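To make the arithmetic easy to reuse, the rule of thumb can be written as a small helper. This is just an illustration of the figures quoted above (3 MB per process; 4 kB per kB asynchronous, 9 kB per kB synchronous), not an SAP API:

    // Rule-of-thumb XI memory estimate, using only the figures quoted above.
    class XiSizing {
        static double estimateMemoryMb(double messageSizeMb, int parallelMessages, boolean synchronous) {
            double perProcessMb = 3.0;                    // fixed overhead per process
            double factorPerKb = synchronous ? 9.0 : 4.0; // kB of memory per kB of message
            return (perProcessMb + factorPerKb * messageSizeMb) * parallelMessages;
        }
        public static void main(String[] args) {
            // The example from this reply: 10 asynchronous messages of 1 MB each
            System.out.println(estimateMemoryMb(1.0, 10, false) + " MB"); // prints 70.0 MB
        }
    }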
Please check these links:
/people/michal.krawczyk2/blog/2006/06/08/xi-timeouts-timeouts-timeouts [original link is broken]
Input Flat File Size Determination
/people/shabarish.vijayakumar/blog/2006/04/03/xi-in-the-role-of-a-ftp
Data packet size - load from flat file
How to upload a file of very huge size on to server.
Please let me know whether your problem is solved.
Regards
Chilla..

Similar Messages

  • How to import the data in a .xls or .xlsx file into an Oracle database table

    Hi,
    Please tell me how to import the data in a .xls or .xlsx file into an Oracle database table in Oracle 10gR2 using Oracle Warehouse Builder 10gR2.

    > ...can we do something through Non-Oracle -> ODBC?
    Yes, it is possible; look at this thread:
    [SQLServer access from AIX Warehouse builder|http://forums.oracle.com/forums/thread.jspa?messageID=2502982]
    If your server (with the target DB and OWB runtime) is on Windows, this configuration will be simpler: you can use a single server.
    An additional link on the OWB blog (with the 11g transparent gateway):
    [http://blogs.oracle.com/warehousebuilder/2008/01/11g_heterogeneous_agent.html]
    (Configuring a non-Oracle connection with 10g generic connectivity is very similar to the 11g gateway.)
    Also look at:
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:4406709207206#18830681837358
    Regards,
    Oleg

  • How to read the date and time information of a file in LabVIEW

    How can I read the date and time information of a file (for example, creation time and modification time) in LabVIEW?

    If you need to know the last modification date of a file, use:
    "Functions -> File I/O -> Advanced File Functions -> File/Directory Info.vi"
    This VI returns the file's last modification date as a U32 number. To see it in MM/DD/YY format, create the indicator, right-click on it, select "Format & Precision" from the drop-down menu, and then choose the "Time and Date" format.
    Thanks as kudos only
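    For readers who need the same file metadata outside LabVIEW, here is a minimal Java sketch (the file name is hypothetical; creation time is only available where the file system records it):

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.nio.file.attribute.BasicFileAttributes;

        public class FileTimes {
            public static void main(String[] args) throws IOException {
                Path file = Paths.get("data.txt"); // hypothetical file
                BasicFileAttributes attrs = Files.readAttributes(file, BasicFileAttributes.class);
                System.out.println("Created:  " + attrs.creationTime());
                System.out.println("Modified: " + attrs.lastModifiedTime());
            }
        }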

  • How can I extract data from an Oracle table to a flat file or Excel spreadsheet

    Hello,
    DB Version is 10.1.0.3.0
    How can I extract data from an Oracle table to a flat file or an Excel spreadsheet by using sub-programs?
    Regards,
    D

    Here is what I did:
    SET NEWPAGE 0
    SET SPACE 0
    SET LINESIZE 80
    SET PAGESIZE 0
    SET ECHO OFF
    SET FEEDBACK OFF
    SET VERIFY OFF
    SET HEADING OFF
    SET MARKUP HTML OFF SPOOL OFF
    SQL> SPOOL bing
    select * from -------;
    SPOOL OFF;
    I do not see the file.
    I also tried:
    SQL> SPOOL /tmp/bing
    select * from -------;
    SPOOL OFF;
    But I still do not see the file.
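    Two notes, neither from the original thread: SPOOL typically appends a .lst extension when none is given, so the first attempt would produce bing.lst in SQL*Plus's current working directory and the second /tmp/bing.lst on the client machine. Alternatively, a small JDBC program can write the rows to a flat file; the sketch below makes up the connection details, table name, and output path, and assumes the Oracle JDBC driver is on the classpath:

        import java.io.PrintWriter;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.ResultSetMetaData;
        import java.sql.Statement;

        public class ExportToCsv {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger"); // hypothetical
                     Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery("SELECT * FROM some_table");    // hypothetical
                     PrintWriter out = new PrintWriter("/tmp/bing.csv")) {
                    ResultSetMetaData md = rs.getMetaData();
                    int cols = md.getColumnCount();
                    while (rs.next()) {
                        StringBuilder line = new StringBuilder();
                        for (int i = 1; i <= cols; i++) {
                            if (i > 1) line.append(',');
                            line.append(rs.getString(i)); // no quoting/escaping in this sketch
                        }
                        out.println(line);
                    }
                }
            }
        }

    The CSV produced this way has no quoting or escaping; a real export would need to quote column values that contain commas.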

  • How to handle large data in file adapter

    We have a scenario Proxy -> PI -> File Server using the File adapter.
    The File adapter uses FCC for conversion.
    Recently our wave 2 products went live, and this interface suddenly saw an increase in message volume. As a result, the File adapter is not performing well: PI slows down or frequently disconnects from the file server, so we either get duplicate records in the file or the file is created in the wrong format.
    The file size is around 4.07 GB, which I also think is quite high for PI to handle.
    Can anybody suggest how we can handle such large data volumes?
    Regards,
    Vikrant

    Check this Blog for Huge File Processing:
    Night Mare-Processing huge files in SAP XI
    You can also take a look at this blog about high-volume messages:
    Step-by-Step Guide in Processing High-Volume Messages Using PI 7.1's Message Packaging
    PI Performance Tuning Best Practice:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/2016a0b1-1780-2b10-97bd-be3ac62214c7?QuickLink=index&overridelayout=true&45896020746271

  • How to process Large Image Files (JP2 220MB+)?

    All,
    I'm relatively new to Java Advanced Imaging, so I need a little help. I've been working on a thesis that involves converting digital terrain data into X3D scenes for future use in military training and applications. Part of this work involves processing large imagery data to texture the previously mentioned terrain data. I have an image slicer that can handle rather large files (200 MB+ JPEG files), but it can't seem to process JPEG 2000 data. Below is an excerpt from my code.
    import java.awt.Rectangle;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.io.IOException;
    import java.util.Iterator;
    import javax.imageio.ImageIO;
    import javax.imageio.ImageReadParam;
    import javax.imageio.ImageReader;
    import javax.imageio.stream.ImageInputStream;

    public void testSlicer() {
        String fname = "file.jp2";
        Iterator<ImageReader> readers = ImageIO.getImageReadersByFormatName("jpeg2000");
        ImageReader imageReader = readers.next();
        try {
            ImageInputStream imageInputStream = ImageIO.createImageInputStream(new File(fname));
            imageReader.setInput(imageInputStream, true);
            ImageReadParam imageReadParam = imageReader.getDefaultReadParam();
            // Only read a 1000 x 1000 portion of the file
            Rectangle rect = new Rectangle(0, 0, 1000, 1000);
            imageReadParam.setSourceRegion(rect);
            // Subsample every 4th pixel in both directions
            imageReadParam.setSourceSubsampling(4, 4, 0, 0);
            BufferedImage destBImage = imageReader.read(0, imageReadParam);
        } catch (IOException ex) {
            System.out.println("IO Exception: " + ex);
        }
    }
    The images I am trying to read are in excess of 30,000 by 30,000 pixels (15 m resolution at 5 degrees latitude and 6 degrees longitude). I continually get an OutOfMemoryError, even though I increase the heap size to 16000 MB on the command line.
    Any help would be greatly appreciated.

  • Java servlet: how to store a large data result across multiple web requests

    Hi, I am writing a Java servlet to process some large data.
    Here is the process:
    1) The user submits a query.
    2) The servlet returns a lot of results for the user to make selections.
    3) The user submits their selections (with checkboxes).
    4) The servlet sends back the complete selected items in a file.
    The part I have trouble with (I'm new to servlets) is how I can store the results ArrayList (or Vector) after step 2 so I needn't re-search in step 4.
    I think a session may be helpful here, but from what I read in the tutorial, sessions seem to store only small items rather than large datasets. Is it possible for a session to store a large dataset? Can you point me to an example or provide some example code?
    Thanks for your attention.
    Mike

    Do you connect to a database? If so, you could store the result set there rather than in the session.
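    An HttpSession can in fact hold any object, including a large ArrayList; the limit is simply the server's memory. A minimal sketch of the step 2 to step 4 hand-off (class, parameter, and attribute names are invented):

        import java.io.IOException;
        import java.util.Collections;
        import java.util.List;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class QueryServlet extends HttpServlet {
            @Override
            protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                // Step 2: run the search and keep the full result list in the
                // session so step 4 can reuse it without searching again.
                List<String> results = search(req.getParameter("q")); // hypothetical search
                req.getSession().setAttribute("searchResults", results);
                // ... forward to the selection page here ...
            }

            private List<String> search(String query) {
                return Collections.emptyList(); // placeholder for the real query logic
            }
        }

    Since every session attribute stays in server memory for the lifetime of the session, a large list per user adds up quickly across users; storing only the selected row keys in the session, or parking the full result in a temporary database table as the reply above hints, scales better.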

  • How to handle large data sets?

    Hello All,
    I am working on an editable form document. It uses a flowing subform with a table. The table may contain up to 50k rows, and the generated PDF may take up to 2-4 GB of memory; in some cases Adobe Reader fails and "gives up" opening these large data sets.
    Any suggestions? 

    On 25.04.2012 01:10, Alan McMorran wrote:
    > How large are you talking about? I've found QVTo scales pretty well as
    > the dataset size increases but we're using at most maybe 3-4 million
    > objects as the input and maybe 1-2 million on the output. They can be
    > pretty complex models though so we're seeing 8GB heap spaces in some
    > cases to accommodate the full transformation process.
    Ok, that is good to know. We will be working in roughly the same order
    of magnitude. The final application will run on a well-equipped server;
    unfortunately my development machine is not as powerful, so I can't
    really test that.
    > The big challenges we've had to overcome is that our model is
    > essentially flat with no containment in it so there are parts of the
    We have a very hierarchical model. I still wonder to what extent EMF and
    QVTo at least try to let go of objects which are not needed anymore and
    allow them to be garbage collected?
    > Is the GC overhead limit not tied to the heap space limits of the JVM?
    Apparently not, quoting
    http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html:
    "The concurrent collector will throw an OutOfMemoryError if too much
    time is being spent in garbage collection: if more than 98% of the total
    time is spent in garbage collection and less than 2% of the heap is
    recovered, an OutOfMemoryError will be thrown. This feature is designed
    to prevent applications from running for an extended period of time
    while making little or no progress because the heap is too small. If
    necessary, this feature can be disabled by adding the option
    -XX:-UseGCOverheadLimit to the command line."
    I will experiment a little bit with different GCs, namely the parallel GC.
    Regards
    Marius
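    For reference, the options discussed above combine on the command line roughly like this (the heap size and jar name are illustrative, not from the thread):

        java -Xmx8g -XX:+UseParallelGC -XX:-UseGCOverheadLimit -jar transformation.jar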

  • How to improve large data loads?

    Hello Gurus,
    Large data loads at my client take long hours. I have tried using the recommendations from various blogs and SAP sites for the control parameters of DTPs and InfoPackages. I need some viewpoints on which parameters can be checked in the Oracle and Unix systems. I would also like some insight on:
    1) How to clear log files.
    2) How to clear any cached-up memory in SAP BW.
    3) Control parameters in Oracle and Unix for any improvements.
    Thanks in advance.

    Hi,
    I think that work should be performed by the BASIS team.
    2) You can delete the cache memory by using transaction RSRT: select the cache monitor and then delete.
    Thanks & Regards,
    RaviChandra

  • How can I retrieve data into the InfoCube from archived files

    hi,
    I have archived cube data, and I have to load data into the cube from the archived files.
    So now I want to find the archived files and learn how to load the data into the cube.
    thanks

    Hi.....
    Reloading archived data should be the exception rather than the general case, since data should be archived only if it is no longer needed in the database. When the archived data target also serves as a datamart to populate other data targets, it is recommended that you load the data into a copy of the original (archived) data target and combine the two resulting data targets with a MultiProvider.
    In order to reload the data into a data target, you have to use the export DataSource of the archived data target. Therefore, you create an update rule based on the respective InfoSource (technical name 8<data target name>). You then trigger the upload either by using 'Update ODS data in data target' or by replicating the DataSources of the MYSELF source system and subsequently scheduling an InfoPackage for the respective InfoSource.
    If you want to read the data for reporting or control purposes, you have to write a report which reads the data from the archive files sequentially.
    Alternatively, you can use the Archiving Information System (AS). This tool enables you to define an InfoStructure and create reports based on these InfoStructures. The InfoStructures define an index for the archive file data. At the moment, the archiving process in the BW system does not fill the InfoStructures automatically during the archiving session; this has to be done manually when needed.
    Another way of displaying data from the archive file is to use the Extractor checker (transaction RSA3). Enter the name of the export DataSource of the respective data target (the name of the data target preceded by '8') and choose the archive files that are to be read. The extractor checker reads the selected archive files sequentially. Selection conditions can be entered for filtering but have to be entered in internal format.
    The data will remain the same in the change log table.
    Check this link:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/b32837f2-0c01-0010-68a3-c45f8443f01d
    Hope this helps you...........
    Regards,
    Debjani............

  • How to write a Data Plugin to access a binary file

    hi
    I'm a newbie to DIAdem. I want to develop a DataPlugin to access a binary file with any number of channels. For example, if there are around 70 channels, the raw data would be in x number of files, each containing maybe around 20 channels. A raw file consists of a header (one per file), channel sub-headers (one per channel), a calibration data segment (unprocessed data), and test data segments (processed data)....
    Each of these contains many different fields, and their sizes vary....
    Could you suggest a procedure to carry out this task, taking into consideration any number of channels and any number of fields under them?
    Expecting your response....
    Jhon

    Jhon,
    I am working on a collection of useful examples and hints for DataPlugin development. This document and the DataPlugin examples are still in an early draft phase, but I thought it could be helpful for you to look at.
    I have added an example file format which is similar to what you described; it's referred to as Example_1. Let me know whether this is helpful...
    Andreas
    Attachments:
    Example_1.zip (153 KB)
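    Independent of DIAdem's DataPlugin mechanism, the general pattern for walking a header / sub-header / data-segment layout is a strictly sequential read. Here is a sketch in Java in which every field name and size is invented, since the real raw-file specification isn't given in this thread:

        import java.io.DataInputStream;
        import java.io.FileInputStream;
        import java.io.IOException;

        public class RawFileReader {
            public static void main(String[] args) throws IOException {
                try (DataInputStream in = new DataInputStream(new FileInputStream("raw.bin"))) {
                    int channelCount = in.readInt();      // hypothetical per-file header field
                    for (int c = 0; c < channelCount; c++) {
                        int sampleCount = in.readInt();   // hypothetical channel sub-header field
                        double[] samples = new double[sampleCount];
                        for (int s = 0; s < sampleCount; s++) {
                            samples[s] = in.readDouble(); // calibration/test data segment values
                        }
                        // hand the channel's samples to the next processing step here
                    }
                }
            }
        }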

  • Import and process large data with SQL*Loader and a Java resource

    Hello,
    I have a project to import data from a text file on a schedule: a large data set, with nearly 20,000 records per hour.
    After that, we have to analyze the data and export the results into another database.
    I have researched SQL*Loader and Java resources to do these tasks, but I have no experience with either.
    I'm afraid that with the huge data volume, Oracle could slow down or the session in the Java resource application could time out.
    Please give me some advice about the solution.
    Thank you very much.

    With the '?' mark I mean: "How can I link this COL1 with a column in the csv file?"
    Attilio
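    On the Java side of the question, batched JDBC inserts are the usual way to keep a load of this size (20,000 records per hour) fast; a minimal sketch in which the connection details, file format, and table layout are all made up:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        public class CsvLoader {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger"); // hypothetical
                     BufferedReader in = new BufferedReader(new FileReader("input.txt"));
                     PreparedStatement ps = con.prepareStatement(
                         "INSERT INTO staging (col1, col2) VALUES (?, ?)")) {       // hypothetical
                    con.setAutoCommit(false);
                    String line;
                    int batched = 0;
                    while ((line = in.readLine()) != null) {
                        String[] f = line.split(",");
                        ps.setString(1, f[0]);
                        ps.setString(2, f[1]);
                        ps.addBatch();
                        if (++batched % 500 == 0) ps.executeBatch(); // flush periodically
                    }
                    ps.executeBatch();
                    con.commit();
                }
            }
        }

    For significantly larger volumes, SQL*Loader with direct path loading is generally faster than row-by-row JDBC.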

  • How to recover the data from corrupted SAP archive files through SARA

    Hi All,
    We have restored the archive files to storage from 5 different backups, but the files are still not readable through transaction SARA. When we click on the Store button in SARA, it does not give us the option to store the files and reports the error that the file is not readable by the system. How can we retrieve the data from these files? Please let me know.
    Regards,
    Gagan

    Hi Olivier,
    Thanks for your response.
    We are not reloading the data; we have restored the archive files to our storage, and when we try to access the stored files we get the error that the files are not accessible.
    The archive files are perfectly restored in the archive directory, and the directories have all the necessary permissions.
    In SARA, when we go to Management, it shows the status "archive completed".
    It also shows that the file is not stored and not accessible, but the file exists in the proper directory.
    Please suggest...
    Thanks,
    Gagan

  • How to bulk import data into CQ5 from MySQL and file system

    Is there an easy way to bulk import data into CQ5 from MySQL and the file system? Some of the files are ~50 MB each (instrument files). There are a total of ~1,500 records spread over about 5 tables.
    Thanks

    What problem are you having writing it to a file?
    You can't use FORALL to write data out to a file; you can only loop through the entries in the collection one by one and write them out to the file that way.
    FORALL can only be used for SQL statements.

  • How to process large input CSV file with File adapter

    Hi,
    Could someone recommend the right BPEL way to process a large input CSV file (4 MB or more, with at least 5,000 rows) with the File Adapter?
    My idea is to receive the data from the file (poll the UX directory for new input files), transform it, and then export it to one output CSV file (the input for another system).
    The process I developed consists of:
    - a File adapter partner link for reading the data,
    - a Receive activity with the "create instance" box checked,
    - a Transform activity,
    - an Invoke activity for writing the output CSV.
    I tried this with a small input file and everything was OK, but when I try to use the complete input file, the process doesn't start and automatically goes to the OFF state in the BPEL console.
    Could I use the MaxTransactionSize parameter as in the DB adapter, should I batch the input file, or could another approach help me?
    Any hints? I have to solve this problem by this Thursday.
    Thanks,
    Milan K.

    This is a known issue. Martin Kleinman has posted several threads on the forum here with a similar scenario using ESB. This can only be solved by thoroughly tuning the BPEL application itself and throwing big hardware at it.
    Also, switching to the latest 10.1.3.3 version of the SOA Suite (assuming you haven't already) will show some improvements.
    HTH,
    Bas
