Processing/Splitting of large file

Hi,
Some time ago I posted a related question.
Is it possible, in a file2file (RFC) scenario, to send an XML file containing 1000 records to XI and split it so that 1000 single output messages are written to a network destination, each containing one record? The output should be written to a file (or sent via RFC).
Please let me know if you have any ideas.
Thanks,
Sebastian

Hi Wojciech,
Thanks again for your help so far.
Queue is okay.
Source files:
<?xml version="1.0" encoding="ISO-8859-1"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://MyTutorial/SHeinz07" targetNamespace="http://MyTutorial/SHeinz07">
   <xsd:element name="MT_Werkteil" type="DT_Werkteil" />
   <xsd:complexType name="DT_Werkteil">
      <xsd:annotation>
         <xsd:appinfo source="http://sap.com/xi/TextID">13289f7070c111db8d5e00508b691bcc</xsd:appinfo>
      </xsd:annotation>
      <xsd:sequence>
         <xsd:element name="Row" minOccurs="0" maxOccurs="unbounded">
            <xsd:annotation>
               <xsd:appinfo source="http://sap.com/xi/TextID">96d1db50592111dbc096cb2fc0a864a8</xsd:appinfo>
            </xsd:annotation>
            <xsd:complexType>
               <xsd:sequence>
                  <xsd:element name="Nummer" type="xsd:string">
                     <xsd:annotation>
                        <xsd:appinfo source="http://sap.com/xi/TextID">96d1db51592111db8ca0cb2fc0a864a8</xsd:appinfo>
                     </xsd:annotation>
                  </xsd:element>
                  <xsd:element name="Name" type="xsd:string">
                     <xsd:annotation>
                        <xsd:appinfo source="http://sap.com/xi/TextID">96d1db52592111db8741cb2fc0a864a8</xsd:appinfo>
                     </xsd:annotation>
                  </xsd:element>
                  <xsd:element name="Preis" type="xsd:integer">
                     <xsd:annotation>
                        <xsd:appinfo source="http://sap.com/xi/TextID">96d1db53592111db8dd6cb2fc0a864a8</xsd:appinfo>
                     </xsd:annotation>
                  </xsd:element>
               </xsd:sequence>
            </xsd:complexType>
         </xsd:element>
      </xsd:sequence>
   </xsd:complexType>
</xsd:schema>
Target file:
<?xml version="1.0" encoding="ISO-8859-1"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://MyTutorial/SHeinz07" targetNamespace="http://MyTutorial/SHeinz07">
   <xsd:element name="MT_Werkteil_single" type="DT_Werkteil_single" />
   <xsd:complexType name="DT_Werkteil_single">
      <xsd:annotation>
         <xsd:appinfo source="http://sap.com/xi/TextID">13a6f73070c111dbc23600508b691bcc</xsd:appinfo>
      </xsd:annotation>
      <xsd:sequence>
         <xsd:element name="Nummer" type="xsd:string">
            <xsd:annotation>
               <xsd:appinfo source="http://sap.com/xi/TextID">e53f2c90700d11dbadfdc229c0a864a8</xsd:appinfo>
            </xsd:annotation>
         </xsd:element>
         <xsd:element name="Name" type="xsd:string">
            <xsd:annotation>
               <xsd:appinfo source="http://sap.com/xi/TextID">e53f2c91700d11dbb999c229c0a864a8</xsd:appinfo>
            </xsd:annotation>
         </xsd:element>
         <xsd:element name="Preis" type="xsd:integer">
            <xsd:annotation>
               <xsd:appinfo source="http://sap.com/xi/TextID">e53f2c92700d11dbb2edc229c0a864a8</xsd:appinfo>
            </xsd:annotation>
         </xsd:element>
      </xsd:sequence>
   </xsd:complexType>
</xsd:schema>
Currently my output in the Mapping Test looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<ns0:Messages xmlns:ns0="http://sap.com/xi/XI/SplitAndMerge">
   <ns0:Message1>
      <ns1:MT_Werkteil_single xmlns:ns1="http://MyTutorial/SHeinz07">
         <Nummer>1000</Nummer>
         <Name>Schraube</Name>
         <Preis>12</Preis>
      </ns1:MT_Werkteil_single>
   </ns0:Message1>
</ns0:Messages>
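For comparison, what I would expect, assuming the occurrence of MT_Werkteil_single is set to 0..unbounded on the Messages tab and Row is mapped to it through SplitByValue, is one MT_Werkteil_single per source Row, something like this (the second record's values are made up for illustration):
<?xml version="1.0" encoding="UTF-8"?>
<ns0:Messages xmlns:ns0="http://sap.com/xi/XI/SplitAndMerge">
   <ns0:Message1>
      <ns1:MT_Werkteil_single xmlns:ns1="http://MyTutorial/SHeinz07">
         <Nummer>1000</Nummer>
         <Name>Schraube</Name>
         <Preis>12</Preis>
      </ns1:MT_Werkteil_single>
      <ns1:MT_Werkteil_single xmlns:ns1="http://MyTutorial/SHeinz07">
         <Nummer>1001</Nummer>
         <Name>Mutter</Name>
         <Preis>8</Preis>
      </ns1:MT_Werkteil_single>
   </ns0:Message1>
</ns0:Messages>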
Although I have done some research on contexts and SplitByValue, at this point I am just guessing rather than understanding why the output is not correct.
Do you have any idea?
Regards,
Sebastian

Similar Messages

  • File Splitting for Large File processing in XI using EOIO QoS.

    Hi
    I am currently working on a scenario to split a large file (700 MB) using the sender file adapter's "Recordset Structure" property (e.g. Row,5000). As the file is split and mapped, the pieces are appended to a destination file. In an example scenario, a 700 MB file comes in (say with 20000 records) and the destination file should end up with 20000 records.
    To ensure no records are missed on the way through XI, the EOIO quality of service is used. A trigger record is appended to the incoming file (the trigger record has the same structure as the main payload recordset) by a UNIX shell script before the file is read by the sender file adapter.
    XPath conditions are evaluated in the receiver determination to either append the records to the main destination file or create a trigger file containing only the trigger record.
    The problem we face is that the "Recordset Structure" (e.g. Row,5000) splits in chunks of 5000, and when the remaining records of the main payload number fewer than 5000 (say 1300), those remaining 1300 lines get grouped with the trigger record and written to the trigger file instead of the actual destination file.
    For the sake of this forum I have listed a sample XML file below, representing the inbound file, with the last record (Duns = "9999") as the trigger record that marks the end of the file after splitting and appending.
    <?xml version="1.0" encoding="utf-8"?>
    <ns:File xmlns:ns="somenamespace">
    <Data>
         <Row>
              <Duns>"001001924"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001925"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001926"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001927"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001928"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001929"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"9999"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
    </Data>
    </ns:File>
    In the sender file adapter I have, for test purposes, set the "Recordset Structure" to "Row,5" for the sample inbound XML file above.
    I have two XPath expressions in the receiver determination: one takes the record set containing Duns = "9999" and sends it to the receiver (communication channel) that creates the trigger file, and the other sends everything else to the destination file.
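    They are of this general shape (a sketch, not the exact condition-editor entries; note that the Duns values in the data literally contain quote characters):
    /ns:File/Data/Row[Duns = '"9999"']          ->  trigger-file receiver
    not(/ns:File/Data/Row[Duns = '"9999"'])     ->  destination-file receiver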
    In my test case the first 5 records get appended to the correct destination file, but the last two records (the 6th and 7th) get sent to the receiver channel that is only supposed to take the trigger record (the last record, with Duns = "9999").
    Destination file (this is where all the records with Duns NE "9999" are supposed to be appended):
    <?xml version="1.0" encoding="UTF-8"?>
    <R3File>
         <R3Row>
              <Duns>"001001924"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
         <R3Row>
              <Duns>"001001925"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
         <R3Row>
              <Duns>"001001926"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
         <R3Row>
              <Duns>"001001927"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
         <R3Row>
              <Duns>"001001928"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
    </R3File>
    Trigger File:
    <?xml version="1.0" encoding="UTF-8"?>
    <R3File>
         <R3Row>
              <Duns>"001001929"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Ccr_Extract_Code>"A"</Ccr_Extract_Code>
         </R3Row>
         <R3Row>
              <Duns>"9999"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Ccr_Extract_Code>"A"</Ccr_Extract_Code>
         </R3Row>
    </R3File>
    I've tested the XPath conditions in XML Spy and they work fine. My doubts are on the "Recordset Structure" property set as "Row,5": my suspicion is that the conditions are evaluated against each split chunk as a whole, so any chunk that happens to contain the trigger record is routed in its entirety to the trigger file.
    Any suggestions on this will be very helpful.
    Thanks,
    Mujtaba


  • Problem using Category to split a large file

    I want to split a large class file by putting methods used only internally into a separate source file. The original file is "Calculator.m"; the new file is "CalculatorP2.m" which has been added to the project. In the main file I have added the statement:
    #import "CalculatorP2.m"
    just ahead of "@implementation Calculator".
    In "CalculatorP2.m I have placed the statements:
    #import "Calculator.h"
    @interface Calculator ( CalculatorP2 )
    @end
    @implementation Calculator ( CalculatorP2 )
    ahead of the source code. The compiler is happy with this but the linker fails, saying:
    /usr/bin/ld: multiple definitions of symbol .objc_category_name_Calculator_CalculatorP2
    Any idea as to how to fix this? I am using Xcode 2.4.1.

    The pattern that links cleanly is to give the category its own header and implementation and to #import only headers; importing a .m file compiles the category into two translation units, which is what produces the duplicate-symbol error. For example:
    SecondViewControllerP2.h:
    #import <UIKit/UIKit.h>
    #import "SecondViewController.h"
    @interface SecondViewController (SecondViewControllerP2)
    - (void)method1;
    @end
    SecondViewControllerP2.m:
    #import "SecondViewControllerP2.h"
    @implementation SecondViewController (SecondViewControllerP2)
    - (void)method1 {
        NSLog(@"method1");
    }
    @end
    SecondViewController.m:
    #import "SecondViewController.h"
    #import "SecondViewControllerP2.h"
    @implementation SecondViewController
    @end

  • Split/join large files

    I have a need to split and eventually re-join a large file. Are there recommended utilities for doing this?
    Thanks,
    George E.

    Resolved. Found the answer in the archives: StuffIt.
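    If a programmatic route is ever needed instead of a utility, the split/join logic itself is small. Here is a minimal Java sketch (the 100 MB chunk size is an assumption, and parts may overshoot it by up to one 64 KB buffer):
    import java.io.*;

    public class FileSplitter {
        static final int CHUNK = 100 * 1024 * 1024; // 100 MB per part (assumed)

        // Splits src into src.part0, src.part1, ...
        static void split(File src) throws IOException {
            try (InputStream in = new BufferedInputStream(new FileInputStream(src))) {
                byte[] buf = new byte[64 * 1024];
                int part = 0, written = CHUNK, len;
                OutputStream out = null;
                while ((len = in.read(buf)) > 0) {
                    if (written >= CHUNK) { // start the next part file
                        if (out != null) out.close();
                        out = new BufferedOutputStream(new FileOutputStream(
                                src.getPath() + ".part" + part++));
                        written = 0;
                    }
                    out.write(buf, 0, len);
                    written += len;
                }
                if (out != null) out.close();
            }
        }

        // Concatenates the parts, in the order given, back into dest
        static void join(File dest, File... parts) throws IOException {
            try (OutputStream out = new BufferedOutputStream(new FileOutputStream(dest))) {
                byte[] buf = new byte[64 * 1024];
                for (File part : parts) {
                    try (InputStream in = new BufferedInputStream(new FileInputStream(part))) {
                        int len;
                        while ((len = in.read(buf)) > 0) {
                            out.write(buf, 0, len);
                        }
                    }
                }
            }
        }
    }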

  • Splitting a Large File

    I record a 5-hour radio program that I like to listen to on my iPod. My previous recorder automatically split the program at predetermined times. My Mac recording software produces one big file. Is there an AppleScript way to split the track or add chapter bookmarks? It has to be easy, since I will be doing this daily. Thanks.

    afernandes wrote:
    Thanks Michel,
    I will try this out.
    Do you know if this will create a new file (or files)?
    http://discussions.apple.com/thread.jspa?threadID=2460925&tstart=0
    What I want to do is break up my 2-hour video into smaller chunks, then burn the good chunks as raw footage (AVI/MOV) onto backup data DVDs, then export all the chunks into compressed files (MPEG-4?) and save these on another data DVD.
    Avoid compressing; save as a QuickTime movie.
    Michel Boissonneault

  • How to split a large file when disk space is limited

    I have a long (200-minute) DV clip which uses up most [40 GB] of my external hard drive. The last 90 minutes of the clip are blank, and I need to delete the corresponding [18 GB] of file space to free up room on my hard drive for editing.
    How do I delete a portion of the file without having the 22 GB of free space QuickTime seems to need to temporarily store the new (smaller) file?
    Thanks,
    David

    I modified the program as follows (streaming the file into the zip entry in 1 KB chunks instead of reading it all at once):
    ZipEntry entry = new ZipEntry(file.getName());
    entry.setTime(file.lastModified());
    zip.putNextEntry(entry);
    byte[] bytes = new byte[1024];
    int len;
    while ((len = in.read(bytes)) > 0) {
        zip.write(bytes, 0, len);
    }
    zip.closeEntry();
    And now it is working. I can zip a file of 125 MB.
    Thank you for your help.
    Michelle

  • Transferring a large file

    Hi,
    I have to transfer a huge file. I am zipping the file and passing the byte array, but the problem is the size: I am not able to transfer a file of 2-3 MB.
    How do I split the file or send it in small parts?

    Thank you for the reply, but FileInputStream is not serializable, I guess. I tried it.
    Let me give a clearer picture of what I am trying to do.
    I have to transfer a file from client to server. I am doing this using RMI,
    so I will call the remote method, passing a byte array.
    For this I need to split the file, be it via FileInputStream or a byte array.
    So how do I do it? How do I split the large file?
    Do I have to use some buffer and keep appending to it?
    If you can give an example or sample code, that would be great.
    Hope you can help me out.
    Thanks
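    A minimal sketch of the client side (RemoteFileService and uploadChunk are hypothetical names for your own remote interface, not a standard API):
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.util.Arrays;

    // Hypothetical remote interface: the server appends chunks in arrival order.
    interface RemoteFileService extends Remote {
        void uploadChunk(String fileName, byte[] chunk) throws RemoteException;
    }

    public class ChunkedUploader {
        // Reads the file in 64 KB chunks so the whole file is never held in memory
        public static void sendFile(String path, RemoteFileService service) throws IOException {
            try (FileInputStream in = new FileInputStream(path)) {
                byte[] buffer = new byte[64 * 1024];
                int len;
                while ((len = in.read(buffer)) > 0) {
                    // copyOf trims the last, partially filled buffer before sending
                    service.uploadChunk(path, Arrays.copyOf(buffer, len));
                }
            }
        }
    }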

  • Large file processing in XI 3.0

    Hi,
    We are trying to process a large file of ~280 MB and we are getting timeout errors. I applied all the required tuning for memory and heap sizes and the problem still exists. I want to know whether installing a decentral adapter engine just for this file processing might solve the problem, which I doubt.
    Based on my personal experience there may be a file-size limit for processing in XI of around 100 MB, with minimal mapping and no BPM.
    Any comments on this would be appreciated.
    Thanks
    Steve

    Hi Debnilay,
    We do have a 64-bit architecture and we still have the file processing problem. Currently we are splitting the file into smaller chunks and processing them, but we want to process the file as a whole.
    Thanks
    Steve

  • The amount of memory used for data is a lot larger than the saved file size. Why is this, and can I get the memory usage down without splitting up the file?

    I end up having to take a lot of high-sample-rate data for relatively long periods of time. When I save the data it is usually over 100 MB. When I load the data for post-processing, though, the amount of memory used is excessively higher than the file size. This causes my computer to crash, because 1.5 GB is not enough. Is there a way to stop this from happening without splitting the file into smaller files?

    LabVIEW can efficiently handle large files, far beyond 100 MB, provided that care is taken in the coding of the loading/processing routines. Here are several suggestions:
    1) Check out the resources National Instruments has put together (NI Developer Zone > Development Library > Measurement and Automation Software > LabVIEW > Development System > Optimizing Applications > Managing Memory), specifically the article entitled "Managing Large Data Sets in LabVIEW".
    2) Load and process the data in chunks if possible (see the sketch after this list).
    3) Avoid sending the data to front panel indicators, using local/global variables for data storage, or changing data types unless absolutely necessary.
    4) If using LabVIEW 7.1, use the "show buffer" tool to determine when LabVIEW is creating extra copies of data in memory.
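    Language aside, the idea behind (2) is to stream the file and keep only a running summary in memory rather than the whole data set. A minimal sketch of the pattern, here in Java (the file name and the raw-doubles format are assumptions):
    import java.io.BufferedInputStream;
    import java.io.DataInputStream;
    import java.io.EOFException;
    import java.io.FileInputStream;
    import java.io.IOException;

    // Streams a large binary file of double samples and keeps only a running
    // summary in memory, instead of loading the whole data set at once.
    public class ChunkedStats {
        public static void main(String[] args) throws IOException {
            double max = Double.NEGATIVE_INFINITY;
            double sum = 0;
            long count = 0;
            try (DataInputStream in = new DataInputStream(
                    new BufferedInputStream(new FileInputStream("capture.bin")))) { // assumed file
                while (true) {
                    double sample;
                    try {
                        sample = in.readDouble();
                    } catch (EOFException end) {
                        break; // end of file reached
                    }
                    max = Math.max(max, sample);
                    sum += sample;
                    count++;
                }
            }
            System.out.printf("samples=%d mean=%f max=%f%n", count, sum / count, max);
        }
    }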

  • A PDF file failed to convert to Word, presumably because of size. How do I split a large PDF file into manageable sections?

    I'm running Adobe Reader XI version 11.0.7. Repeated attempts to convert a large (439-page) file, a dissertation, failed. How do I split a large PDF file like this into manageable sections for conversion?

    Hi Mike,
    Your 11MB file is well within the file-size limits for ExportPDF, but depending on the number of pages, complexity of the file (and yours doesn't sound complex), and your connection speed, it is possible that the service is simply timing out before it can finish processing. These steps can help:
    If the file already contains editable text (that is, it isn't a scanned document), try disabling OCR as outlined in this document: How to disable Optical Character Recognition (OCR) when converting PDF to Word or Excel.
    Clear the browser cache and try again.
    Try a different browser.
    Let's start there. If you still can't export the file to Word, let me know and we'll take it from there.
    Best,
    Sara

  • Processing Large Files using Chunk Mode with ICO

    Hi All,
    I am trying to process large files using an ICO. I am on PI 7.3 and I am using the new PI 7.3 feature to split the input file into chunks.
    And I know that we cannot use mapping while using chunk mode.
    While trying, I noticed the points below:
    1) I created a Data Type, Message Type, and interfaces in the ESR and used them in my scenario (no mapping was defined); the sender and receiver data types were the same.
    Result: the scenario did not work. It created only one chunk file (a .tmp file) and terminated.
    2) I used a dummy interface in my scenario and it worked fine.
    So please confirm whether we should always use dummy interfaces in the scenario when using chunk mode in PI 7.3, or whether there is something I am missing.
    Thanks in Advance,
    - Pooja.

    Hello,
    According to this blog:
    File/FTP Adapter - Large File Transfer (Chunk Mode)
    the split never considers the payload; it is just a binary split. So the following limitations apply to chunk mode in the File Adapter:
    Only for File Sender to File Receiver
    No Mapping
    No Content Based Routing
    No Content Conversion
    No Custom Modules
    You are probably doing content conversion; that is why it is not working.
    Hope this helps,
    Mark
    Edited by: Mark Dihiansan on Mar 5, 2012 12:58 PM

  • Bottleneck in Large file processing

    Hi,
    We are experiencing timeout and memory issues in large file processing. I want to know whether the J2EE Adapter Engine or the Integration Engine is the bottleneck in processing large messages, such as files over 300 MB, without splitting them.
    Thanks
    Steve

    Hi Mario,
    We are testing a scenario to find out the maximum file size that XI can handle, based on the blog
    (/people/william.li/blog/2006/09/08/how-to-send-any-data-even-binary-through-xi-without-using-the-integration-repository), without any mapping. Up to 20 MB it works OK; after that we get a timeout error.
    Data from MONI:
    com.sap.engine.services.httpserver.exceptions.HttpIOException: Read timeout. The client has disconnected or a synchronization error has occurred. Read [1704371] bytes. Expected [33353075].
    at com.sap.engine.services.httpserver.server.io.HttpInputStream.read(HttpInputStream.java:186)
    at com.sap.aii.af.service.util.ChunkedByteArrayOutputStream.write(ChunkedByteArrayOutputStream.java:181)
    at com.sap.aii.af.ra.ms.transport.TransportBody.<init>(TransportBody.java:99)
    at com.sap.aii.af.ra.ms.impl.core.transport.http.MessagingServlet.doPost
    This could be due to the ICM timeout settings, which we are planning to increase.
    I would like to hear about others' experience with the maximum file size they could process. Of course, I know that it depends on the environment.
    Thanks
    Steve

  • Splitting large file in XI

    Can we split an incoming file in XI? We are getting a large file of 80 MB and want to cut it down to 40 MB each.
    The sender system is sending the 80 MB file in a single shot; they cannot change that.
    It has become mandatory for me to break it up in XI. (The scenario is File to Proxy.)

    Hi Viswanath,
    Handling large files, say anything above 100 MB, is always a problem with the File adapter, as the data has to be moved from the Adapter Engine to the Integration Engine and vice versa.
    Third-party tools are generally used for that; Conversion Agent by Itemfield is one of the best approaches.
    Also, on the Advanced tab of the file sender adapter, select the check box next to Advanced Mode. There you can specify the Maximum File Size (Bytes) option.
    Huge processing of files
    Night Mare-Processing huge files in SAP XI
    Step-by-Step Guide in Processing High-Volume Messages Using PI 7.1's Message Packaging
    SAP XI acting as a (huge) file mover
    Managing bulky flat messages with SAP XI (tunneling once again) - UPDATED
    Regards,
    Vinod.

  • Copying large file sets to external drives hangs the copy process

    Hi all,
    Goal: to move large media file libraries for iTunes, iPhoto, and iMovie to external drives. This drive will then serve as a media drive for a new iMac 2013. I am attempting to consolidate many old drives from over the years onto newer and larger drives.
    Hardware: moving from a Mac Pro 2010 to a variety of USB and other drives for use with a 2013 iMac. The example below is from the boot drive of the Mac Pro. Today, the target drive was a 3 TB Seagate GoFlex USB 3 drive formatted as HFS+ Journaled. All drives are this format. I was using the Seagate drive on both the Mac Pro (USB 2) and the iMac (USB 3). I also use a NitroAV FireWire and USB hub to connect 3-4 USB and FW drives to the Mac Pro.
    OS: Mac OS X 10.9.1 on Mac Pro 2010
    Problem: Today, trying to copy large file sets such as the iTunes and iPhoto libraries or iMovie events from internal Mac drives to external drive(s) hangs the copy process (forever). This seems to mostly happen with very large batches of files: for example, an entire folder of iMovie events, the iTunes library, or the iPhoto library. The symptom is that the process starts and then hangs at a variety of different points, never completing the copy. It requires a force quit of Finder and then a hard power reboot of the Mac. Recent examples today were (a) a hang at 3 GB for a 72 GB iTunes file; (b) a hang at 13 GB for the same 72 GB iTunes file; (c) a hang at 61 GB for a 290 GB iPhoto file. In the past, I have had similar drive-copying issues with a variety of USB 2, USB 3, and FW drives (old and new), mostly on the Mac Pro 2010. The libraries and programs seem to run fine with no errors. Small folder copying is rarely an issue. The drives are not making weird noises. The drives were checked for permissions and repaired. An early trip to the Genius Bar did not find any hardware issues on the internal drives.
    I seem to get these hard-drive "dropoffs" (drives unmounting themselves) and other drive-copy hangs more often than I should. These drives seem to be OK much of the time, but they do drop off here and there.
    Attempted solutions today: (1) Turned off all networking on the Mac (Ethernet and WiFi). This appeared to work and allowed the 72 GB iTunes file to fully copy without an issue. However, on the next several attempts to copy the iPhoto library, the hangs returned (at 16 and then 61 GB) with no additional workarounds. (2) A restart changes the amount copied per attempt, but it still hangs. (3) The last line of a crash report said "Thunderbolt", but the Mac Pro has no Thunderbolt or Mini DisplayPort. I did format the Seagate drive on the new iMac, which has Thunderbolt. ???
    Related threads were slightly different. Any thoughts or solutions would be appreciated. Is there better copy software than Apple's Finder? I want the new Mac to be clean and thus did not do data migration. Should I do that only for the iPhoto library? I'm stumped.
    It seems like more and more people will need to move large media file sets to external drives as they load up more and more iPhone movies (my thing) and buy new Macs with smaller flash storage. Why can't the copy process just skip the parts it can't copy and continue, putting an X on the photos/movies that didn't make it?
    Thanks -- John

    I'm having a similar problem. I'm using a MacBook Pro 2012 with a 500 GB SSD as the main drive and a 1 TB internal drive (I removed the optical drive), and I have also tried running from a SanDisk Ultra 64 GB Micro SDXC card with the beta version of Mavericks.
    I have a HUGE 1 TB Final Cut Pro library that I need to get off my LaCie Thunderbolt drive and moved to a 3 TB WD USB 3.0 drive. Every time I've tried to copy it, the process would hang at some point, roughly 20% of the way through, and then my MacBook would eventually restart on its own. No luck getting the file copied. Now I'm trying to create a disk image using Disk Utility to get the file from the Thunderbolt drive and saved to the 3 TB WD drive. It's been running for half an hour so far, and it appears that it could take as long as 5 hours to complete.
    Doing the copy via disk image was a shot in the dark, and I'm not sure how well it will work if I need to actually use the files again. I'll post my results after I see what happens.

  • Upload and Process large files

    We have a SharePoint 2013 on-premises installation and a business application that provides an option to copy local files into a UNC path, with some processing logic applied before copying them into a SharePoint library. The current implementation is:
    1. The user opens the application and clicks the “Web Upload” link in the left navigation. This opens a \Layouts custom page to select the upload file and its properties.
    2. The user specifies the file details and chooses a Web Zip file from his local machine.
    3. The Web Upload page's submit action will
         a. call a WCF service to copy the Zip file from the local machine to a preconfigured UNC path
         b. create a list item to store its properties along with the UNC path details
    4. A timer job executes at a periodic interval to
         a. query the list for items that are NOT yet processed and find the path of the ZIP file folder
         b. unzip the selected file
         c. loop over the unzipped file content and push it into the SharePoint library
         d. update the list item in the “Manual Upload List”
    Can someone suggest a different design approach that can manage the large file outside of the SharePoint context? Something like:
       1. Some option to initiate the file copy from the user's local machine to the UNC path when he submits the layouts page
       2. Instead of timer jobs, external services that grab data from the UNC path at periodic intervals and push it into SharePoint (see the sketch below)
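    As a rough sketch of option 2, the external service could be a simple drop-folder poller; the paths and the pushToSharePoint call below are placeholders, not a real SharePoint API:
    import java.io.IOException;
    import java.nio.file.*;
    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    // Polls a UNC drop folder at a fixed interval, hands each new ZIP to a
    // processing step, and moves it aside so it is only handled once.
    public class UncDropFolderPoller {
        public static void main(String[] args) throws Exception {
            Path dropFolder = Paths.get("\\\\fileserver\\webupload\\incoming"); // assumed path
            Path processed  = Paths.get("\\\\fileserver\\webupload\\processed");
            while (true) {
                List<Path> zips;
                try (Stream<Path> s = Files.list(dropFolder)) {
                    zips = s.collect(Collectors.toList());
                }
                for (Path zip : zips) {
                    pushToSharePoint(zip); // unzip + upload (e.g. via CSOM or REST) goes here
                    Files.move(zip, processed.resolve(zip.getFileName()),
                            StandardCopyOption.REPLACE_EXISTING);
                }
                Thread.sleep(60_000); // poll once a minute instead of using a timer job
            }
        }

        static void pushToSharePoint(Path zip) throws IOException {
            System.out.println("Would unzip and upload: " + zip); // placeholder
        }
    }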

    Hi,
    According to your post, my understanding is that you want to upload and process files for a SharePoint 2013 server.
    The following suggestions are for your reference:
    1. We can create a service to process the uploaded file and copy the files to the UNC folder.
    2. Create an upload-file visual web part and call the file-processing service.
    Thanks,
    Dennis Guo
    TechNet Community Support
