Split line in File posted

Hi
I am facing a rather unique problem. I have a report that generates a file and posts it to the application server. When I open the file on the application server, it shows a line split (I mean a new line is introduced in the middle of a row).
On further investigation I observed that the field after which it splits contains a # symbol. Many files posted on the server contain #, but this problem is not observed in any of those files.
I tried replicating this on quality and dev by adding the # at runtime in debug mode, but the file posted is perfectly fine.
Please share your thoughts on this.
Preethi

Hi Preethi,
If you view the file on the application server in AL11, a space or any other non-displayable character is shown as #.
So in AL11 use the menu
LIST ---> SAVE/SEND ---> FILE and select the file format (txt, Excel, etc.).
Check whether you still get the # in the downloaded file.
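To pin down exactly which byte is breaking the row, one option after downloading the file is to dump every control character it contains. A minimal sketch (PowerShell on the desktop; the file name is hypothetical):
$bytes = [System.IO.File]::ReadAllBytes('C:\temp\report.txt')
for ($i = 0; $i -lt $bytes.Length; $i++) {
    # anything below 0x20 is a control character; an unexpected 0x0A (LF) or
    # 0x0D (CR) in the middle of a record is what splits the line
    if ($bytes[$i] -lt 0x20) {
        'offset {0}: 0x{1:X2}' -f $i, $bytes[$i]
    }
}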
Regards,
Pravin

Similar Messages

  • Line item split in FF_5 before posting based on business transaction type

    Hi,
    I have a requirement wherein, while executing FF_5 with the BAI format, I have to split line items and post different documents for a specific business transaction type. Please provide your inputs on how to achieve this, if it is possible.
    Regards
    Hari

    Hi,
    I have searched the forum but did not find an appropriate solution. Please provide your inputs for this requirement.

  • How to adjust split lines into one line in a text file?

    Hi Guys,
    I have a text file with 3 fields (comma separated): GLCode (Number), Desc1 (Char), Desc2 (Char), and I need to load it into BW.
    My Text file looks like:
    1011.00,"Mejor PC Infrastructure","This line is ok."
    1012.00,"Telephone Equipment $","This line ends in next line.   
    1)Need to change the equipment immediately.
    2)Take immediate action"
    1013.00,"V1 Computer Server Infrastructure # Equip","For purchases
    of components that make up the company's network, such as servers, hubs, routers etc."
    1014.00,"Flash Drive","Need to provide all IT Developer"
    This is how the file looks. Now I need the following:
    1. Need to remove the extra whitespace and join the split lines into one. Here
    line/record 2 is split across 3 lines and needs to be adjusted into 1 line.
    2. In line 5 (record 3) the data is split across 2 lines and needs to be made into 1 line.
    3. Need to remove bad characters.
    Could someone please help me with how to proceed?
    Regards,

    Not quite correct by my testing.  Try:
    $i = 0
    Get-Content .\test.txt | ForEach-Object {
        if ($i % 2) {
            # odd-numbered line: append it to the line kept on the previous pass
            ("$keep $($_)").Trim()
        } else {
            # even-numbered line: hold it until its continuation arrives
            $keep = $_
        }
        $i++
    }
    Good catch!
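    Note that the pair-joining above assumes every record spans exactly two lines, while in the sample record 2 spans three. Here is a sketch that instead joins lines until the double quotes balance (assuming the same test.txt, and that fields use plain unescaped double quotes):
    $buf = ''
    Get-Content .\test.txt | ForEach-Object {
        $buf = if ($buf) { "$buf $($_.Trim())" } else { $_.Trim() }
        # a complete record contains an even number of double quotes
        if (([regex]::Matches($buf, '"')).Count % 2 -eq 0) {
            $buf
            $buf = ''
        }
    }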
    Boe Prox

  • Split large .pptx file to multiple files using PowerShell

    Hi.
    I'm not a programmer and I have the task of splitting a big .pptx file (>100 slides) into multiple .pptx files with 4 slides in each one and saving them into an SP library. It should be done in PowerShell.
    Thanks for any help!

    Hi,
    For splitting PowerPoint files into multiple parts, I would suggest you post this question to the Office forum, where you will get more help and confirmed answers:
    http://social.technet.microsoft.com/Forums/office/en-US/home
    For uploading files to a SharePoint library using PowerShell, here are some links with script demos for your reference:
    http://social.technet.microsoft.com/wiki/contents/articles/19529.sharepoint-2010-upload-file-in-document-library-using-powershell.aspx
    http://spfileupload.codeplex.com/
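    For reference, a minimal server-side sketch of the upload step (this assumes the SharePoint snap-in is available on the server; the site URL, library name, and file path below are hypothetical):
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
    $web    = Get-SPWeb 'http://sp/sites/demo'
    $folder = $web.GetFolder('Shared Documents')
    $bytes  = [System.IO.File]::ReadAllBytes('C:\temp\deck_part1.pptx')
    # Add(url, data, overwrite) uploads the file into the library
    $folder.Files.Add('Shared Documents/deck_part1.pptx', $bytes, $true) | Out-Null
    $web.Dispose()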
    Best regards
    Patrick Liang
    TechNet Community Support

  • File Splitting for Large File processing in XI using EOIO QoS.

    Hi
    I am currently working on a scenario to split a large file (700 MB) using the sender file adapter "Recordset Structure" property (e.g. Row, 5000). As the files are split and mapped, they are appended to a destination file. In an example scenario, a 700 MB file comes in (say with 20000 records) and the destination file should have 20000 records.
    To ensure no records are missed during the pass through XI, EOIO QoS is used. A trigger record is appended to the incoming file (the trigger record structure is the same as the main payload recordset) using a UNIX shell script before it is read by the sender file adapter.
    XPATH conditions are evaluated in the receiver determination to either append the record to the main destination file or create a trigger file with only the trigger record in it.
    The problem we face is that the "Recordset Structure" (e.g. Row, 5000) splits in chunks of 5000, and when the remaining records of the main payload number fewer than 5000 (say 1300), those remaining 1300 lines get grouped with the trigger record and written to the trigger file instead of the actual destination file.
    For the sake of this forum I have listed a sample scenario XML file representing the inbound file, with the last record with Duns = "9999" as the trigger record that will be used to mark the end of the file after splitting and appending.
    <?xml version="1.0" encoding="utf-8"?>
    <ns:File xmlns:ns="somenamespace">
    <Data>
         <Row>
              <Duns>"001001924"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001925"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001926"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001927"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001928"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001929"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"9999"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
    </Data>
    </ns:File>
    In the sender file adapter I have, for test purposes, changed the "Recordset Structure" to "Row,5" for the sample XML inbound file above.
    I have two XPATH expressions in the receiver determination to take the last recordset with Duns = "9999" and send it to the receiver (communication channel) that creates the trigger file.
    In my test case the first 5 records get appended to the correct destination file, but the last two records (the 6th and 7th) get sent to the receiver channel that is only supposed to take the trigger record (the last record with Duns = "9999").
    Destination file (this is where all the records with Duns NE "9999" are supposed to get appended):
    <?xml version="1.0" encoding="UTF-8"?>
    <R3File>
         <R3Row>
              <Duns>"001001924"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
         <R3Row>
              <Duns>"001001925"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
         <R3Row>
              <Duns>"001001926"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
         <R3Row>
              <Duns>"001001927"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
         <R3Row>
              <Duns>"001001928"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
    </R3File>
    Trigger File:
    <?xml version="1.0" encoding="UTF-8"?>
    <R3File>
         <R3Row>
              <Duns>"001001929"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Ccr_Extract_Code>"A"</Ccr_Extract_Code>
         </R3Row>
         <R3Row>
              <Duns>"9999"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Ccr_Extract_Code>"A"</Ccr_Extract_Code>
         </R3Row>
    </R3File>
    I've tested the XPATH condition in XML Spy and it works fine. My doubts are about the "Recordset Structure" property set as "Row,5".
    Any suggestions on this will be very helpful.
    Thanks,
    Mujtaba

    Hi Debnilay,
    We do have a 64-bit architecture and we still have the file processing problem. Currently we are splitting the file into smaller chunks and processing them, but we want to process the file as a whole.
    Thanks
    Steve

  • Substitution of Split Line Items that appear only in FI - New GL?

    Hi,
    The New GL has brought in functionality to automatically split user-entered line items during GL posting, to create ledger balances at Profit Centre (or Company Code) level in case line items have multiple Profit Centres (or Company Codes).
    I have a requirement wherein I need to do substitution and validation of the line items created after document splitting (which are visible only in the General Ledger view of New GL). The line items created after splitting don't show up at runtime (i.e. during substitution using GGB1); we can only see the user-entered line items that are present in the BKPF and BSEG tables.
    Is there any way to get hold of the system-created split line items, which are present only in the FAGLFLEXA/FAGLFLEXT tables, for substitution/validation?
    I need to substitute Transaction Type field BEWAR.

    All,
    As per SAP, substitution of fields after document splitting in New GL is not allowed in the case of cross-Profit-Centre document postings.

  • The amount of memory used for data is a lot larger than the saved file size. Why is this, and can I get the memory usage down without splitting up the file?

    I end up having to take a lot of high-sample-rate data for relatively long periods of time. When I save the data it is usually over 100 MB. When I load the data for post-processing, though, the amount of memory used is far higher than the file size. This causes my computer to crash because 1.5 GB is not enough. Is there a way to stop this from happening without splitting the file into smaller files?

    LabVIEW can efficiently handle large files, far beyond 100 MB, provided that care is taken in the coding of the loading/processing routines. Here are several suggestions:
    1) Check out the resources National Instruments has put together (NI Developer Zone > Development Library > Measurement and Automation Software > LabVIEW > Development System > Optimizing Applications > Managing Memory), specifically the article entitled "Managing Large Data Sets in LabVIEW".
    2) Load and process the data in chunks if possible.
    3) Avoid sending the data to front panel indicators, using local/global variables for data storage, or changing data types unless absolutely necessary.
    4) If using LabVIEW 7.1, use the "show buffer" tool to determine when LabVIEW is creating extra copies of data in memory.

  • Generate 2 line items with posting keys in the same table while using an FM

    Dear Expert,
    For T-code F-65 I have to park an FI document. I tried PRELIMINARY_POSTING_FB01 for the parked document, but I am not able to park the document successfully.
    With F-65 the data is segregated between tables VBSEGS and VBSEGD according to the posting key, but with this FM the entire entry lands in table VBSEGS. For example, it generated two line items with posting keys '15' and '40' and both are displayed in VBSEGS, whereas the posting key '15' item should be in VBSEGD.
    When I check the document in FBV0, the error "G/L Account 0012000 1001 Does not Exist" is raised.
    Here is my code:
    DATA:   XT_BKPF LIKE  BKPF OCCURS 0 WITH HEADER LINE ,
            XT_BSEG LIKE  BSEG OCCURS 0 WITH HEADER LINE ,
            XT_BSEG1 LIKE  BSEG OCCURS 0 WITH HEADER LINE ,
            XT_BSEC LIKE  BSEC OCCURS 0 WITH HEADER LINE ,
            XT_BSET LIKE  BSET OCCURS 0 WITH HEADER LINE ,
            XT_BSEZ LIKE  BSEZ  OCCURS 0 WITH HEADER LINE ,
            XT_BKORM  LIKE  BKORM OCCURS 0 WITH HEADER LINE ,
            XT_THEAD  LIKE  THEAD OCCURS 0 WITH HEADER LINE ,
            XT_SPLTTAB  LIKE  ACSPLT  OCCURS 0 WITH HEADER LINE ,
            XT_SPLTWT LIKE  WITH_ITEMX  OCCURS 0 WITH HEADER LINE .
    DATA :    XTEXT_UPDATE  LIKE  BOOLE-BOOLE VALUE SPACE,
              XTEXT_ITEM_UPDATE LIKE  BOOLE-BOOLE VALUE SPACE,
              XI_UF05A  LIKE  UF05A,
              XI_XCMPL  TYPE  XFELD VALUE 'X',
              XFS006_FB01 LIKE  FS006 ,
              XI_TCODE  LIKE  T020-TCODE  VALUE 'F-65',
              XI_PARGB  LIKE  RF05A-PARGB        ,
              XI_TCODE_INT  TYPE  TCODE           .
    DATA P_RETURN LIKE BAPIRET2 OCCURS 0 WITH HEADER LINE.
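    * Fill the document header (BKPF)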
    XT_BKPF-BUKRS     =     'CP01'.
    XT_BKPF-GJAHR     =     2011.
    XT_BKPF-BLART     =     'DZ'.
    XT_BKPF-BLDAT     =     SY-DATUM.
    XT_BKPF-BUDAT     =     SY-DATUM.
    XT_BKPF-MONAT     =     '06'.
    XT_BKPF-CPUDT     =     SY-DATUM.
    XT_BKPF-WWERT     = SY-DATUM.
    XT_BKPF-USNAM     =     'ABAPER'.
    XT_BKPF-TCODE     =     'F-65'.
    APPEND XT_BKPF.
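    * First line item: G/L debit (posting key 40)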
    XT_BSEG-BUKRS     =     'CP01'.
    XT_BSEG-GJAHR     =     '2011'.
    XT_BSEG-BUZEI = '001'.
    XT_BSEG-BSCHL = '40'.
    XT_BSEG-KOART = 'S'.
    XT_BSEG-SHKZG = 'S' .
    XT_BSEG-GSBER     =     'CPLN'.
    XT_BSEG-BUPLA = 'CP01'.
    XT_BSEG-WRBTR     =     10000.
    XT_BSEG-PSWSL = 'INR'.
    XT_BSEG-ZUONR = 'CH. 123456'.
    XT_BSEG-HKONT = '241000'.
    APPEND XT_BSEG .
    CLEAR  XT_BSEG.
    * Vendor line item - required even for header only - BSEG table
    XT_BSEG-BUKRS     =     'CP01'.
    XT_BSEG-GJAHR     =     '2011'.
    XT_BSEG-BUZEI = '002'.
    XT_BSEG-BSCHL = '15'.
    XT_BSEG-KOART = 'S'.
    XT_BSEG-SHKZG = 'H' .
    XT_BSEG-GSBER     =     'CPLN'.
    XT_BSEG-BUPLA = 'CP01'.
    XT_BSEG-WRBTR     =     10000.
    XT_BSEG-PSWSL = 'INR'.
    XT_BSEG-ZUONR = 'CH. 123456'.
    XT_BSEG-HKONT = 'PC04000001'.
    APPEND XT_BSEG .
    CLEAR  XT_BSEG.
      CALL FUNCTION 'PRELIMINARY_POSTING_FB01'
        EXPORTING
          TEXT_UPDATE      = XTEXT_UPDATE
          TEXT_ITEM_UPDATE = XTEXT_ITEM_UPDATE
    *     I_UF05A          =
          I_XCMPL          = XI_XCMPL
    *     FS006_FB01       =
          I_TCODE          = XI_TCODE
    *     I_PARGB          =
    *     I_TCODE_INT      =
        IMPORTING
          XEPBBP           = CHECK_A
        TABLES
          T_BKPF           = XT_BKPF
          T_BSEG           = XT_BSEG
          T_BSEC           = XT_BSEC
          T_BSET           = XT_BSET
          T_BSEZ           = XT_BSEZ
    *     T_BKORM          =
    *     T_THEAD          =
    *     T_SPLTTAB        =
    *     T_SPLTWT         =
        EXCEPTIONS
          ERROR_MESSAGE    = 1.
    * Collect the message returned by the function module
      P_RETURN-ID         = SY-MSGID.
      P_RETURN-TYPE       = SY-MSGTY.
      P_RETURN-NUMBER     = SY-MSGNO.
      P_RETURN-MESSAGE_V1 = SY-MSGV1.
      APPEND P_RETURN.
      IF SY-SUBRC = 0.
        CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
          EXPORTING
            WAIT = 'X'.
      ENDIF.
      WRITE: / SY-SUBRC, SY-MSGV1.
    Thanks ,
    Ashish Gupta

    Hi Raghuram,
    I found a very important SAP Note 103051, details are below.
    An IDoc processed by function module IDOC_INPUT_INVOIC_MM (of category INVOIC01) must not refer to the same purchase order item in several invoice items. This is also valid if for a goods receipt-related invoice verification several delivery notes belong to the same purchase order item.
    Depending on the system settings and the situation, various error messages can occur (for example, FD240 'Order item ... selected more than once' or M8050 'Balance not zero: & debits: & credits: &').
    In this situation module IDOC_INPUT_INVOIC_MRM generates error message M8321 'Document contains same order item more than once'.
    For example, this situation occurs if you work with individual batch valuation and the SD billing document executes a batch split for different batches which belong to the same purchase order item and delivery.
    Other terms
    INVOIC, SAPLIEDI,  M8047, M8, 321
    Reason and Prerequisites
    This is because of the program design.
    Solution
    There is no solution for IDOC_INPUT_INVOIC_MM.
    Module IDOC_INPUT_INVOIC_MRM (only as of Release 4.0) for the logistics invoice verification can distinguish different goods receipts by means of the delivery note number. For this purpose, GR-related invoice verification must be active.
    Owing to this symptom, billing documents for single batch valuation with batch split cannot be settled in MM-EDI inbound processing. The settlement generates exactly the situation described (several invoice items for the same purchase order item). In this case, the only solution is to deactivate the billing of the batch sub-items in SD Customizing and to calculate the main item only.
    Hope this helps.
    Reward if helpful.
    Thanks

  • How can I split a QuickTime file in two?

    8/21/2006. How can I split a QuickTime file in two? I saved a two-hour videotape, converted by PYRO A/V LINK on an iBook G4 combo, into a file of 24.5 GB, and I saved that to QuickTime at 536 MB. Of course, it took half a day. When I went to copy it as video (536 MB) onto a CD of 700 MB capacity, Toast 7 said it required 1100 MB and couldn't do it. I have QuickTime Pro, which I got just recently. I would be very happy to learn how to split it, so as to copy each of the split pieces onto 2 CDs, for my own purposes. Many, many thanks. Chuck Yopst. [email protected] 847-394-5621 Mount Prospect, IL (NW Chicago suburb)

    Wednesday, 10:35 am CDT, 30 August 2006
    Kirk, thanks for your post Monday 8/21. Interesting how things worked out. I split the QuickTime file--of my retired uncle, a Church of the Brethren pastor's, 50th wedding anniversary--and came up with two folders. Each contained four more folders, and inside folder Mvseg was file Avseq.dat, which contained the video material and, when double-clicked, started playing that half of the story on Windows 98SE. It just so happens that my uncle's PC operating system is Windows 98SE. All this from QuickTime 7.0, where Windows 98SE would play the audio but not the video. Then a week later, yesterday in fact, in looking through iDVD help I found your solution, which heretofore I had no idea of. I had to go into System Preferences--Hardware--CDs & DVDs, and set these to "Open with 'Ask what you want to do'". The factory settings were to "open these when inserted" without that "Ask"ing. So then I was able to and did follow your solution, and it worked, but as a single QuickTime file, playable on Windows XP and Windows 2000 and Macintosh but not Windows 98SE. So everything worked out well. And thank you again. Chuck Y

  • How can I split the PDF file into different pages?

    Hi,
    My requirement is to split the PDF file, which is obtained by using FM CONVERT_OTF, into a separate PDF file for each employee's data (PERNR).
    Please suggest a way to split the PDF file that has to be downloaded to the presentation server.

    Hi,
    OK, looking at that program didn't actually help me very much to understand what's going on, or where you have the CONVERT_OTF call... Regardless, if the suggestion by Raymond is not feasible in your scenario, the thing I'd try is splitting the spool (OTF) contents into individual documents before calling CONVERT_OTF (feeding them to the convert FM piecemeal)... The link to the OTF format and commands documentation is here.
    Again, it's difficult to give a good algorithm without knowing the exact OTF contents (could you perhaps export the display of the spool in RAW format and attach it here?), but it would boil down to approximately the following:
    1) Set &first_page per PERNR = 1.
    2) Run through the OTF lines until the &end_page for the PERNR has been identified somehow (hopefully there are Begin/End Form OTF commands '//' in that spool... and they correspond to the split of the spool you need...).
    3) Extract the OTF contents from &first_page to the EP command of &end_page into a separate OTF itab and, if necessary, add a // (End of Form) command at the end of the itab.
    4) Call CONVERT_OTF on the table and download it.
    5) Set &first_page per PERNR = &end_page + 1.
    6) Repeat from 2) until the end of the spool OTF data.
    Something like that... Depending on how the SAPscript translates into OTF, you may also need to prepend a few commands found at the very beginning of the spool to each extracted individual OTF document...
    I hope you get the gist of what I'm suggesting... splitting OTF is definitely easier than trying to split PDF, I feel. It's not an ideal solution, because what if the structure of the OTF contents you are relying on changes for some reason..?
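    Just to show the shape of that loop, here is an illustration in PowerShell rather than ABAP, assuming a hypothetical raw spool dump in which a line starting with '//' marks the end of a form:
    $chunk = @()
    $n = 0
    Get-Content .\spool_raw.txt | ForEach-Object {
        $chunk += $_
        if ($_ -like '//*') {        # assumed end-of-form command
            $n++
            # emit one per-PERNR document and start accumulating the next
            $chunk | Set-Content ('.\pernr_{0:D4}.otf' -f $n)
            $chunk = @()
        }
    }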
    cheers
    Janis

  • Dreamweaver CS5 fatal error XML parsing: Invalid document structure, line:1, File: C:\User\Tyler\App

    XML parsing fatal error: Invalid document structure, line: 1, file: C:\User\Tyler\AppData\Roaming\Adobe\Dreamweaver CS5\en_US\Configuration\Workspace\Classic.xml
    I have gone through older discussion forums and followed the recommended steps by deleting the corrupted .xml file itself (in this case "Classic.xml"), but immediately afterwards, when I try to open Dreamweaver again, I am prompted with the same fatal error message naming the file that I have just deleted. Next I deleted the entire configuration folder, and this did not help either.
    It is also odd that when I follow the path from the prompt to find the corrupted .xml file, the path is different: I do not find the "Classic.xml" file through the "en_US\Configuration\Workspace" navigation; however, I do find the corrupt "Classic.xml" file in the "configure" folder of the "Adobe Dreamweaver CS5" folder.
    I'm not sure what to do now and I would really like Dreamweaver back up and running, so please help if you can!
    Thanks
    -Tyler

    Step 1: Close DW, navigate to C:\Users\Tyler\AppData\Roaming\Adobe\Dreamweaver CS5\en_US\Configuration\Workspace\ and delete that Classic.xml file.
    Re-open DW. DW should auto-create your workspace layout XML file again. See if it works fine.
    PROCEED TO STEP 2 ONLY IF STEP 1 DOESN'T SOLVE THE ISSUE
    Step 2: Navigate to C:\Users\Tyler\AppData\Roaming\Adobe\Dreamweaver CS5\en_US\Configuration and delete the entire 'Configuration' folder. Re-open DW. DW should auto-create your configuration folder based on predefined layouts and config options. This should definitely fix your issue.
    Reason for this issue: it would have been caused by a malformed workspace layout configuration. That may happen due to customizations you have made to the layout, improper file permissions, or an improper shutdown of Windows.
    See if these fixes resolve your issue and post your results here.
    Cheers,
    ST

  • OM split line should not split the charge

    The standard split of a sales order line (Sales Orders / Line Items / Actions / Split Line) will split the charge, but my customer hopes not to split the charge. Please help me! Thank you.

    Unfortunately there is no way currently to make split view the default for any server-side page. Your best bet would be to post an idea on the Ideas tab (or support one that is already posted).

  • How to delete a specified line in a file?

    How do you delete a specified line in a file? Given the task of deleting a specified line from a file, how should it be done?
    Line 1
    Line 2
    Line 3
    Line 4
    Line 5
    The case is a file with the above content. Now I want to delete "Line 3"; how do I realize this action in Java?

    An alternative solution can be:
    import java.io.BufferedWriter;
    import java.io.File;
    import java.io.FileReader;
    import java.io.FileWriter;
    import java.io.LineNumberReader;
    import java.io.PrintWriter;

    public class LineDeleter {
        public static void main(String args[]) {
            try {
                // suppose you want to delete line 3
                int lineToBeDeleted = 3;
                File f = new File("line.txt");
                long fileSize = f.length();
                // Wrap the FileReader with a LineNumberReader. It will help you
                // identify the lines.
                LineNumberReader lnr = new LineNumberReader(new FileReader(f));
                // Wrap the FileWriter with a BufferedWriter created with a
                // buffer size equal to the file size.
                BufferedWriter bw = new BufferedWriter(
                        new FileWriter(new File("line1.txt")), (int) fileSize);
                // Wrap the BufferedWriter with a PrintWriter so that you can
                // write line by line.
                PrintWriter pw = new PrintWriter(bw);
                String s = null;
                while ((s = lnr.readLine()) != null) {
                    // copy every line except the one to be deleted
                    if (lnr.getLineNumber() != lineToBeDeleted) {
                        pw.println(s);
                    }
                }
                pw.flush();
                lnr.close();
                pw.close();
            } catch (Exception e) {
                System.out.println(e);
            }
        }
    }
    If you want, you can rename line1.txt to the original file name.
    I hope this helps. Good luck!

  • Split a ZIP file into smaller chunks / payloads on ECC 6.0 (NOT PI)

    I call an external web service directly from ECC 6.0. In one of the proxy class's methods I need to send a zip file. I can generate the ZIP file using an SAP-supplied class. The file size of this zip file is 150 MB and I want to break it into smaller chunks, say 2 MB each. Is there an FM / class / some other way to break this file into smaller chunks?
    Appreciate your inputs
    Thanks,
    Vikram

    Hi Vikram,
    You can use the FM DX_SPLIT_FILE to split the zip file.
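    If no standard module fits your case, chunking the file yourself is also straightforward; as a rough illustration of the loop (shown in PowerShell rather than ABAP, with hypothetical paths and 2 MB chunks):
    $in  = [System.IO.File]::OpenRead('C:\temp\payload.zip')
    $buf = New-Object byte[] (2MB)
    $i   = 0
    while (($n = $in.Read($buf, 0, $buf.Length)) -gt 0) {
        # write exactly the bytes read, so the last chunk may be shorter
        $out = 'C:\temp\payload.part{0:D3}' -f $i
        [System.IO.File]::WriteAllBytes($out, $buf[0..($n - 1)])
        $i++
    }
    $in.Close()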
    Regards,
    Ashvin

  • File History split a 1 GB file into five 200 MB files, and I can't find how to restore this file

    I set up File History on a Windows 8 computer.
    Backed up all the files to a NAS.
    Wiped and reloaded 8.1 Pro and did a file restore.
    All of my files, except one large file, were restored.
    The file that didn't restore was a 1 GB TrueCrypt (VeraCrypt) file in the root of "Documents".
    When I browse out to my NAS, I see 5 files.
    One of them is named TCvol (2014_12_09 02_17_05 UTC) and it is 200 MB.
    I believe these 5 files need to be combined again to restore my original 1 GB file, but I cannot find any documentation.
    Pretty much everything shows that you open File History restore and your data should appear there.

    Hi,
    This should be caused by Windows Server Backup not recognizing the encrypted file. I cannot confirm how the third-party application (TrueCrypt) works; it seems that it will split an encrypted file into several files, and Windows Server Backup only restored the files physically. You can contact TrueCrypt support about this issue.
    I searched and found several threads related to getting TrueCrypt working with Windows Server Backup. Specific to your current issue, you can confirm with them any suggestions for getting the file recovered, or how to do such a backup job for a healthy recovery.
    https://veracrypt.codeplex.com/discussions/581529
    https://veracrypt.codeplex.com/discussions/577924
    Please Note: Since the web site is not hosted by Microsoft, the link may change without notice. Microsoft does not guarantee the accuracy of this information.
    Please remember to mark the replies as answers if they help and un-mark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]
