Ways to handle large-volume data (file size = 60 MB) in a PI 7.0 file-to-file scenario

Hi,
In a file-to-file scenario (flat file to XML file), the flat file is picked up and converted via File Content Conversion (FCC), then sent to XI. In XI it goes through a message mapping and then an XSL transformation, in sequence.
The scenario works fine for small files (up to about 5 MB), but when the input flat file is larger than 60 MB, XI shows several problems: (1) a JCo call error, or (2) sometimes XI even stops and we have to start it manually again before it functions properly.
Please suggest a way to handle large volumes (file size up to 60 MB) in a PI 7.0 file-to-file scenario.
Best Regards,
Madan Agrawal.

Hi Madan,
If every record of your source file is processed individually in the target system, you could split the source file into several messages by setting the Recordsets per Message parameter in the sender file adapter.
However, since you just want to convert a .txt file into an .xml file, first try setting the
EO_MSG_SIZE_LIMIT parameter in SXMB_ADM.
That may solve the problem in the Integration Engine, but the problem can persist in the Adapter Engine (the JCo call error, for example).
Keep in mind that the file is first processed in the Adapter Engine (File Content Conversion and so on)
and only then sent to the pipeline in the Integration Engine.
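As a rough illustration, the two settings could look like this (the values here are made up and must be tuned and tested for your landscape):

    Sender file adapter, Content Conversion parameters:
      Recordsets per Message = 1000   (one XI message per 1000 recordsets
                                       instead of a single 60 MB message)

    SXMB_ADM -> Integration Engine Configuration -> category TUNING:
      EO_MSG_SIZE_LIMIT = 5000   (EO messages above roughly this size, in KB,
                                  are processed in their own queue)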
Carlos

Similar Messages

  • File size changing when passed through After Effects (180 MB file goes in and comes out 33 GB on export)

    I'm processing a video recording of an interview where the lighting was not very good on the interviewee,
    so I'm brightening up the light on the interviewee's face.
    The file was a 180 MB H.264 file on import.
    When I export the file it comes out at 33 GB in size, and I'm not sure why.
    I chose the default setting "Lossless" as the format for export.
    I just want the file to export at the same size and quality as when I imported it.
    What options should I choose for this? AE does not seem to tell me how big the final file will be, and it takes an hour and a half to process this 2-minute interview, so it will take me an awfully long time to get to the bottom of this if I try by myself.
    thanks

    It doesn't "increase" the file size. You're making a brand new file.
    What you put into After Effects has nothing to do with the resulting file. You can have an AE composition with no footage whatsoever and the resulting file will be much larger than 0!
    After Effects works internally with completely uncompressed pixel data, so no matter what you toss into it, it produces whatever you tell it to produce. As Mylenium points out, you made an uncompressed final file.
    Your original video file is VERY compressed. So, of COURSE uncompressed video is going to look massive by comparison.
    Read Mylenium's link to understand a bit more, and go here to learn AE: Getting started with After Effects. There are lots of "gotchas" like this that'll bite your butt if you skip this basic training stuff.
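    For a sense of scale, here is a back-of-the-envelope sketch, assuming 1080p at 29.97 fps with 4 bytes per pixel (the real frame size and pixel format may differ):

        # Rough size of uncompressed video for a 2-minute clip.
        width, height = 1920, 1080
        bytes_per_pixel = 4            # RGB + alpha
        fps, seconds = 29.97, 120
        total_bytes = width * height * bytes_per_pixel * fps * seconds
        print(f"{total_bytes / 1e9:.1f} GB")   # ~29.8 GB

    which is the same ballpark as the 33 GB export.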

  • Best way of handling large amounts of data movement

    Hi
    I like to know what is the best way to handle data in the following scenario
    1. We have to create Medical and Rx claims tables for 36 months of data, about 150 million records each (months 1, 2, 3, 4, ..., 34, 35, 36).
    2. In the second month we have to add that month's DELTA to the 36-month baseline. But the application requirement is ONLY 36 months, even though the size is then 37 months.
    3. Similarly, in the 3rd month we will have 38 months, and in the 4th month 39 months.
    4. At the end of the 4th month, how can I delete the first three months of data from the claim tables without affecting performance? This is a 24x7 online system.
    5. Is there a way to create partitions of 3 months each that can be deleted (e.g. delete partition number 1)? If this is possible, what kind of maintenance activity needs to be done after deleting a partition?
    6. Is there a better way of handling the above scenario? What other options do I have?
    7. My goal is to eliminate the initial months of data from the system, as the requirement is ONLY 36 months of data.
    Thanks in advance for your suggestion
    sekhar

    Hi,
    You should use table partitioning to keep your data in monthly partitions. Search on table partitioning for detailed examples.
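    For illustration, here is a minimal sketch of the rolling window, assuming an Oracle-style range-partitioned table (all table, column, and partition names are hypothetical):

        # Generate illustrative Oracle-style DDL for a rolling 36-month window.
        from datetime import date

        def partition_name(d: date) -> str:
            return f"p_{d.year}_{d.month:02d}"

        def add_partition_sql(month_start: date, next_month: date) -> str:
            # New monthly partition for rows with claim_date < next_month.
            return (f"ALTER TABLE claims ADD PARTITION {partition_name(month_start)} "
                    f"VALUES LESS THAN (TO_DATE('{next_month.isoformat()}','YYYY-MM-DD'))")

        def drop_partition_sql(month_start: date) -> str:
            # UPDATE GLOBAL INDEXES keeps global indexes usable during the
            # drop, so the 24x7 application is not blocked by a rebuild.
            return (f"ALTER TABLE claims DROP PARTITION {partition_name(month_start)} "
                    f"UPDATE GLOBAL INDEXES")

        print(add_partition_sql(date(2008, 1, 1), date(2008, 2, 1)))
        print(drop_partition_sql(date(2005, 1, 1)))

    After dropping the oldest partition, refreshing optimizer statistics on the table is the main follow-up maintenance; with UPDATE GLOBAL INDEXES no separate index rebuild is needed.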
    Regards

  • Best way to handle large files in FCE HD and iDVD.

    Hi everyone,
    I have just finished working on a holiday movie that my octogenarian parents took. They presented me with about 100 minutes of raw footage that I have managed to edit down to 64 minutes. They have viewed the final version that I recorded back to tape for them. They now want to know if I can put it onto a DVD for them as well. The problem is that the FCE HD file is 13 GB.
    So here is my question.
    What is the best way to handle this problem?
    I have spoken to a friend of mine who is a professional editor. She said to reduce the movie duration to about 15 minutes because it's probably too long and boring (rather hurtful, really). Anyway, that is out of the question as far as my oldies are concerned.
    I have seen info on Toast 8 that mentions a "Fit to DVD" process that purports to "squash" 9 GB of movie onto a 4.7 GB disc. I can't find out whether it will also put 13 GB onto a dual-layer 8.5 GB disc.
    Do I have to split the movie into two parts and make two dual-layer DVDs? If so I have to ask: how come "Titanic", at 3+ hours, fits on one disc?
    Have I asked too many questions?

    Take a deep breath. Relax. All is fine.
    iDVD does not look at the size of your video file; it looks at the length. iDVD can accommodate up to 2 hours of movie.
    iDVD gives you different options depending on the length of your movie. Although I won't agree with your friend about reducing the length of your movie to 15 minutes, if you could trim out a few minutes to get it under an hour, that setting in iDVD (Best Performance, though the new version may have renamed it) gives you the best quality. Still, any iDVD setting will give you good quality even at 64 minutes.
    In FCE, export as QuickTime Movie, NOT any flavour of QuickTime Conversion. Select chapter markers if you have them. If everything is on one system, uncheck the Make Movie Self-Contained button. Drop the QT file into iDVD.
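    The arithmetic behind that, with illustrative numbers (actual iDVD encode settings will differ):

        # Why 64 minutes fits on a 4.7 GB disc regardless of the 13 GB source:
        # iDVD re-encodes to MPEG-2 at a bitrate chosen from the running time.
        disc_bytes = 4.7e9
        minutes = 64
        audio_mbps = 0.224    # assumed Dolby Digital stereo stream
        video_mbps = disc_bytes * 8 / (minutes * 60) / 1e6 - audio_mbps
        print(f"max average video bitrate ~ {video_mbps:.1f} Mbit/s")  # ~9.6

    At roughly 9.6 Mbit/s available on average, a 64-minute encode is not starved for bits.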

  • In OSB, XQuery issue with large-volume data

    Hi,
    I am facing a problem with an XQuery transformation in OSB.
    There is one XQuery transformation where I compare all the records, and if there are similar records I club them under the same first node.
    I am reading the input file from the FTP process. This works perfectly for small input data. When the input data is large it still works, but it takes a huge amount of time, the file moves to the error directory, and I see duplicate records created for the same input data. I see nothing related to this file in the error log or the normal log.
    How can I check what exactly is causing the issue here: why the file moves to the error directory, and why I get duplicate data for large input (approx. 1 GB)?
    My XQuery is something like the one below.
    <InputParameters>
    {
        for $choice in $inputParameters1/choice
        let $withSamePrimaryID := $inputParameters1/choice[PRIMARYID eq $choice/PRIMARYID]
        let $withSamePrimaryID8 := $inputParameters1/choice[FIRSTNAME eq $choice/FIRSTNAME]
        return
            <choice>
            {
                (: both lets rescan the entire input for every record :)
                if (data($withSamePrimaryID[1]/ClaimID) = data($withSamePrimaryID8[1]/ClaimID))
                then <ClaimID>{ data($withSamePrimaryID[1]/ClaimID) }</ClaimID>
                else <ClaimID>{ data($choice/ClaimID) }</ClaimID>
            }
            </choice>
    }
    </InputParameters>

    Hi,
    I understand your use case is:
    a) read the file (from an FTP location; a .txt file, hopefully)
    b) process the file (your XQuery; I won't get into the details)
    c) do something with the result (send it to a backend system via a Business Service?)
    As you noted, large files take a long time to process; this depends on the memory/heap assigned to your JVM, so that much is expected behaviour.
    On the other point, the file being moved to the error directory could be the error handler doing its job (if you have one).
    If there are no error handlers, look at the timeout and error-condition settings on your service.
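    Note also that the posted XQuery rescans the entire input for every record, which is quadratic in the record count; at ~1 GB of input that alone can explain the runtime. A single-pass grouping approach is linear. Here is a sketch of the same clubbing idea in Python (record fields are made up for illustration; in XQuery 3.0 a "group by" clause does the same, if your OSB version supports it):

        # Club records sharing a PRIMARYID in one pass over the input.
        from collections import defaultdict

        records = [
            {"PRIMARYID": "1", "FIRSTNAME": "Ann", "ClaimID": "C1"},
            {"PRIMARYID": "1", "FIRSTNAME": "Ann", "ClaimID": "C2"},
            {"PRIMARYID": "2", "FIRSTNAME": "Bob", "ClaimID": "C3"},
        ]

        groups = defaultdict(list)   # one hash lookup per record: O(n) total
        for rec in records:
            groups[rec["PRIMARYID"]].append(rec)

        for primary_id, recs in groups.items():
            print(primary_id, [r["ClaimID"] for r in recs])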
    HTH

  • Large Volume Data Merge Issues with InDesign CS5

    I recently started using InDesign CS5 to design a marketing mail piece which requires merging data from Microsoft Excel. With small tasks of up to 100 pieces I do not have any issues. However, I need to merge 2,000-5,000 pieces of data through InDesign on a daily basis, and my current laptop was not able to handle merging more than 250 pieces at a time; the process of merging 250 pieces takes up to 30-45 minutes if I get lucky and the software does not crash.
    To solve this issue, I purchased a desktop with a second-generation Core i7 processor and 8 GB of memory, thinking that this would solve my problem. I tried to merge 1,000 pieces of data with this new computer, and I was forced to restart Adobe InDesign after 45 minutes of no results. I then merged 500 pieces and the task was completed, but the process took a little less than 30 minutes.
    I need some help with this issue because I cannot seem to find other software that can design my mail piece the way InDesign can, yet the time it takes to merge large volumes of data is very frustrating, and the software does crash from time to time after a good 30-45 minute wait.
    Any feedback is greatly appreciated.
    Thank you!

    Operating System is Windows 7
    I do not know what you mean by Patched to 7.0.4
    I do not have a crash report, the software just freezes and I have to do a force close on the program.
    Thank you for your time...

  • Should we not handle large base64Binary data with BPEL?

    Hi,
    we need to implement a file-saving function. I have no problem implementing the web service as a Java class using MTOM streaming, but I question what the best design with BPEL is for this, or whether BPEL should be used for this at all. Please help.
    As for the requirement: the file content could be text entered on a web page or binary data from any source, such as an existing file or an email message body; it is not limited. The web service also receives a desired file name, but the actual file name is created by the web service based on the desired file name plus some business rule.
    I am thinking of creating a BPEL app for this. The input for the file content is designed to be of type base64Binary so that the application can handle either ASCII or binary data. The BPEL app first needs to call a web service to find out where to put the file (this is dynamic) and to generate the actual file name, and then it calls another web service to save the file with the actual name and content. I wonder whether, when saving content of a large size, such as the content of a PDF file, this could cause resource issues due to dehydration in BPEL. I am not clear about dehydration: does it mean that when the BPEL process invokes the first web service to find out where to put the file, the base64Binary file content is first saved into the database (dehydrated)? Would this cause issues? If so, does that mean that for this business need we should not use SOA, and should instead implement it with plain JAX-WS?
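    One part of the concern is easy to quantify: base64 inflates binary content by a factor of 4/3 before it is ever written to the dehydration store. A quick illustration:

        # base64 overhead on a pretend 3 MB binary payload.
        import base64

        raw = b"\x00" * 3_000_000      # stand-in for PDF content
        encoded = base64.b64encode(raw)
        print(len(raw), len(encoded))  # 3000000 4000000 -> 4/3 inflation

    So any dehydration point between the two invokes would persist the already-inflated payload.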

  • What's the best way to handle all my data?

    I have a black-box system that connects directly to a PC and sends 60 words of data at 10 Hz (worst-case scenario). The black box continuously transmits these words, which contain a large amount of data that is continuously updated, from up to 50 participants (again, worst-case scenario),
    i.e. 60 words * 16 bits * 10 Hz * 50 participants = 480 kbit/s. All of this is via a UDP Ethernet connection.
    I have LabVIEW reading the data without any problem. I now want to manipulate this data and then distribute it to other PCs on a network via TCP/IP.
    My question is: what is the best way of storing my data locally on the interface PC so that clients can then request the information they require via TCP/IP? Each message that comes in via Ethernet relates to one of the participants, so I need to be able to check whether I already have data about that participant. If I do, I can just update it; if I don't, I need to create a record for the participant; and if I haven't heard from one for a while, I will need to delete it. I don't want to create unnecessary network traffic. I also want to avoid global variables if possible, especially considering that I may have up to 3000 variables to play with.
    I'm not after a solution, just some ideas about how to tackle this problem. I thought I could perhaps create a database and have LabVIEW update a table with the data, adding a record for each participant. Alternatively, is there a better way of storing all the data in memory besides global variables?
    Thanks in advance.

    Hi russelldav,
    one note on your data handling:
    When each of the 50 participants sends the same 60 "words" you don't need 3000 global variables to store them!
    You can reorganize those data into a cluster for each participant, and use an array of clusters to keep all the data in one "block".
    You can initialize this array at the start of the program for the maximum number of participants; there is no need to (dynamically) add or delete elements from this array.
    Edited:
    When all "words" have the same representation (I16?) you can use a 2D array instead of an array of clusters. A sketch of the same bookkeeping follows below.
    Best regards,
    GerdW
    CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
    Kudos are welcome

  • Best way to handle large amount of text

    hello everyone
    My project involves handling a large amount of text (from conferences and reports).
    Most of them are in MS Word; I can turn them into RTF format.
    I don't want to use scrolling. I prefer turning pages (next, previous, last, contents), which means I need to break them into chunks.
    Currently the process is awkward and slow.
    I know there would be lots of people working on similar projects.
    Could anyone tell me an easy way to handle the text: bring it into the cast and break it up?
    Any ideas would be appreciated.
    thanks
    ahmed

    Hacking up a document with Lingo will probably lose the RTF formatting information.
    Here's a bit of code to find the physical position of a given line of on-screen text (counting returns is not accurate with word-wrapped lines). This strategy uses charPosToLoc to get the actual position for the text member's current width and font size:
    maxHeight = 780 -- arbitrary display height limit
    T = member("sourceText").text
    repeat with i = 1 to T.line.count
      endChar = T.line[1..i].char.count
      lineEndlocV = charPosToLoc(member("sourceText"), endChar).locV
      if lineEndlocV > maxHeight then -- found the "1 too many" line
        singlePage = T.line[1..i - 1] -- extract the lines that fit on one page
        member("sourceText").text = T.line[i..T.line.count] -- put the remaining text back into the source member
        exit repeat -- then parse again with the remaining part of "sourceText"
      end if
    end repeat
    Alternatively, you could use one of the roundabout ways to display PDF in Director. There might be some batch PDF production tools that can create your pages in a nicely scalable PDF format.
    I think FlashPaper documents can also be adapted to Director.

  • Is it best to upload HD movies from a camcorder to iMovie or iPhoto? iMovie gives the option of very large file sizes - presumably it is best to use this file size because the HD movie is then at its best quality?

    Is it best to upload an HD movie to iPhoto or iMovie? iMovie seems to store the movie in much larger files - presumably this is preferable, as a larger file means better quality?

    Generally it is, if you're comparing identical compressors and resolutions, but there's something else happening here. If you're worried about quality degrading, check the original file details on the camera (or card), either with Get Info or by opening the file in QuickTime and showing info. You should find the iPhoto version (reveal it in the Finder) is a straight copy. You can't really increase the image quality of a movie by increasing file size (barring a few tricks), but Apple editing products create a more "scrub-able" intermediate file, which is quite large.
    Good luck and happy editing.

  • Why can I only "Save as Other... Reduce File Size" a single time for an Acrobat X file?

    We are struggling with our form editing in that any time we make a change and save, the file size doubles. As a workaround we started using the Reduce File Size save-as. However, we can only use that once per file. Some of our forms contain several hundred fields that are edited; with the behavior as it is, we cannot save our work as we go.
    Reproduction steps:
    1. Open a file in Acrobat X and note its file size.
    2. Make any kind of change (our work is focused on View > Forms > Edit).
    3. Go to File > Save as Other... > Reduce Size PDF.
    4. Set Version Compatibility to 'Retain Existing' and save.
    5. The file size should be very close to the original.
    6. Open the new file.
    7. Rinse and repeat steps 2 and 3.
    8. Note that when you try to save the new edit, Save As Reduce Size is grayed out and no longer available for selection.
    Please help!

    Just do a standard Save As. The problem is that a plain Save appends both the old and the new versions inside the file (an incremental save), while Save As writes a clean copy and drops the old data. Also, if you distribute the form it is locked from editing, as I understand it.
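    The same cleanup can also be scripted outside Acrobat. A sketch using the third-party pikepdf library (assuming it is installed; file names are made up):

        # Rewriting a PDF discards accumulated incremental-save revisions,
        # much like a full Save As in Acrobat.
        import pikepdf

        with pikepdf.open("form_edited.pdf") as pdf:
            pdf.save("form_clean.pdf")  # full rewrite, old revisions dropped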

  • I am trying to upload files to an Apple hosted feed on KSU iTunes U (unpublished site). The file size shows as 2 GB even though the files are 50MB. The upload stalls and fails. Address:

    I am trying to upload files to an Apple-hosted feed on KSU iTunes U (unpublished site). The file size in the Feed Editor shows as 2 GB even though the files are 50 MB. I am using Safari on an Intel Mac running 10.5.8 at Kennesaw State University. Any help would be appreciated.

    This is the second day. I tried using a new Apple feed to test; still not working. I uploaded a 50 MB MP3 file. The first time, the upload failed. The second time, it showed that it was uploading a 2 GB file. It has been about 20 minutes now and it shows "3.5 GB of 2.0 GB, 178% Completed". Now it has started uploading all over again; it now shows 399.4 MB of 2.0 GB, 19% Completed. I don't know if this will keep looping, but I am going to cancel.
    Please let me know what other information you need from me. I definitely need some help with this!

  • What is the best way to extract a large volume of data from a BW InfoCube?

    Hello experts,
    Wondering if someone can suggest the best method available in SAP BI 7.0 to extract a large amount of data (approx. 70 million records) from an InfoCube. I've tried Open Hub and APD, but they are not working; I always need to separate the extracts into small datasets. Any advice is greatly appreciated.
    Thanks,
    David

    Hi David,
    We had the same issue, though in our case it was loading from an ODS to a cube with over 50 million records. I think there is no such option as parallel loading using DTPs. As suggested earlier in the forum, the best option is to split the extraction by calendar year or fiscal year.
    But remember that even with the above criterion, some calendar years might have a lot of data, and even that becomes a problem.
    What I can suggest is that apart from just the calendar/fiscal year, you also include some other selection criterion, such as company code or sales organization; see the sketch below.
    Yes, you will end up loading more requests, but the data loads will go smoothly with smaller volumes.
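    For illustration only (the year and company-code values are made up), the split amounts to running one bounded request per filter combination:

        # Chunk a ~70M-record extract into bounded selections.
        years = range(2004, 2008)
        company_codes = ["1000", "2000", "3000"]

        selections = [{"CALYEAR": y, "COMP_CODE": c}
                      for y in years for c in company_codes]
        for sel in selections:
            print("run one extraction request with filter:", sel)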
    Regards
    BN

  • Handling large XML data source files

    Post Author: LeCoqDeMort
    CA Forum: Crystal Reports
    Hello. I have developed a Crystal Report that uses an XML data file as a data source (supported by an XML schema file). I have successfully run the report for small sample XML data files, but I now have a realistic data file that is around 4 MB in size. When I run the report within the Crystal Reports designer (ver. 11.0.0.1994), I get a "failure to retrieve data from database" error. Is there some sort of configurable limit on data file/cache size that I can adjust, if indeed that is the problem? Thanks, LeCoq

  • Best way to handle a large number of video files for a project

    Hey, I was looking to get some insight from the community here. Basically, there is a project being worked on that requires a large amount of footage to be sifted through, of which only a small percentage will be used. These are mostly HD files, and while most of the footage has been watched in QuickTime with notes taken, my question is this:
    What is the best way to take only small portions of each file without having to load everything into Final Cut and without any loss of quality? Should I just trim and rename from QuickTime, or is there an easier way?
    The reason this needs to be done this way is that the smaller segments will each be sent to other editors, and rather than sending huge files we want to split them into smaller amounts for each editor to use.
    Thank you so much for any input regarding this; I look forward to what you have to say.

    Open the clip in the Viewer. Mark In and Out points on the section you want. Make it a subclip with Cmd-U. Drag the subclip into the bin for the editor who needs it. Repeat.
    If you batch export from a clip, there is a selection to choose whether to export the whole clip, or a checkbox to export only the marked In/Out.
    This does not sound like a good project on which to begin learning FCP.
