How to ignore last four lines in file

Hey All,
We have a requirement where we have to ignore the last four lines of a text file using a file sender communication channel. We are using file content conversion, and there is no identifier as such that would identify these four lines separately.
Please reply back to me soon.
Thanks and Regards,
Sandeep Maurya

1. You can do that by writing a command-line script if you have a Unix operating system; I am not sure whether the same can be done on Windows by ignoring the last lines of a file.
or
2. Write module processor (adapter module) code.
3. Write a UDF in the mapping and do not map the last four lines, but then you need to check every field.
A sketch of the line-trimming logic common to all three options is shown below.
Regards
Sreeram.G.Reddy
Message was edited by:
        Sreeram Reddy
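Whichever of the three options you choose, the core logic is the same: buffer the payload line by line and drop the last four lines before file content conversion sees them. Below is a minimal sketch in plain Java; the class, method, and sample content are made up for illustration, and in practice this logic would sit inside an adapter module, a Java mapping, or the script called via the channel's operating-system command.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class TrailerTrimmer {

    // returns the input with the last 'trailerLines' lines removed
    public static String dropTrailingLines(String payload, int trailerLines) throws IOException {
        List<String> lines = new ArrayList<String>();
        BufferedReader reader = new BufferedReader(new StringReader(payload));
        String line;
        while ((line = reader.readLine()) != null) {
            lines.add(line);
        }
        int keep = Math.max(0, lines.size() - trailerLines);
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < keep; i++) {
            out.append(lines.get(i)).append('\n');
        }
        return out.toString();
    }

    public static void main(String[] args) throws IOException {
        // six lines in, the last four are dropped, so only "a" and "b" remain
        String file = "a\nb\nc\nd\ne\nf\n";
        System.out.print(dropTrailingLines(file, 4));
    }
}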

Similar Messages

  • How to delete the specified line in file?

    How can I delete a specified line in a file? For example, take a file with the following content:
    Line 1
    Line 2
    Line 3
    Line 4
    Line 5
    The case is a file containing the above content. Now I want to delete "Line 3"; how can I do that in Java?

    An alternative solution can be:
    import java.io.LineNumberReader;
    import java.io.File;
    import java.io.FileReader;
    import java.io.BufferedWriter;
    import java.io.FileWriter;
    import java.io.PrintWriter;

    public class LineDeleter {
        public static void main(String args[]) {
            try {
                // suppose you want to delete line 3
                int lineToBeDeleted = 3;
                File f = new File("line.txt");
                long fileSize = f.length();
                // Wrap the FileReader with a LineNumberReader. It will help you
                // identify the lines.
                LineNumberReader lnr = new LineNumberReader(new FileReader(f));
                // Wrap the FileWriter with a BufferedWriter created with a buffer size
                // equal to the file size.
                BufferedWriter bw = new BufferedWriter(new FileWriter(new File("line1.txt")), (int) fileSize);
                // Wrap the BufferedWriter with a PrintWriter so that you can print line by line.
                PrintWriter pw = new PrintWriter(bw);
                String s = null;
                while ((s = lnr.readLine()) != null) {
                    System.out.println(s);
                    int lineNumber = lnr.getLineNumber();
                    // write every line except the one to be deleted
                    if (lineNumber != lineToBeDeleted) {
                        pw.println(s);
                    }
                }
                pw.flush();
                lnr.close();
                pw.close();
            } catch (Exception e) {
                System.out.println(e);
            }
        }
    }
    If you want, you can rename line1.txt to the original file name.
    I hope this helps. Good luck!

  • Ignoring last 2 lines while reading the file

    Hi All,
    I have a file structure as mentioned below :
    ab
    ab
    ab
    ab
    =======
    =
    While reading a file, I need to ignore the last 2 lines. How can I achieve this using FCC parameters?
    Regards
    Vinay P.

    Hi,
    I am not aware of any FCC parameter that ignores the last lines, but a workaround can be:
    Depending on the file structure, you may create a separate structure to read the last 2 lines and ignore it in the mapping (map all records except this structure).
    Regards,
    Beena.

  • How to ignore 1st row from the file (CSV) sender CC

    Hi,
    I have a CSV file (file sender) that I need to load with PI. I want to ignore the 1st row of the file.
    For example, the file contains 10 rows, but PI needs to read the data from the 2nd row onwards,
    because the 1st row contains header data like name, number, mobile, address, etc. I don't want to read this 1st row; I want to read only the data that starts from the 2nd row.
    Can you please tell me the parameter to ignore this row in the file CC?
    I am using these parameters:
    Record structure: item,*
    item.fieldSeparator: ,
    item.endSeparator : 'nl'
    item.fieldNames:   Name, Number, Address, Mobile
    Thanks in Adv
    Vankadoath

    Hi Vankadoath,
    In your content conversion, use the Document Offset field, which specifies the number of lines to be ignored.
    For example, if you provide the value "1" for Document Offset, it will ignore the first line of your file.
    (Under Document Offset, specify the number of lines that are to be ignored at the beginning of the document.
    This enables you to skip comment lines or column names during processing.)
    Regards,
    Naveen.
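    For reference, putting Naveen's suggestion together with the parameters from the question, the content conversion settings of the sender channel would look roughly like this (a sketch only; the field names come from the question, and Document Offset is the separate field described above):
    Document Offset: 1
    Recordset Structure: item,*
    item.fieldSeparator: ,
    item.endSeparator: 'nl'
    item.fieldNames: Name, Number, Address, Mobile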

  • How do I remove on-line help files (other than English)

    When I do a search within Files (OFO) that returns on-line help documents, I get a list of all the on-line help files for EVERY language. I'd like to eliminate the non-English language on-line help files so my users don't have to click every link to figure out which one is in English.
    I have already created a TAR for this, but the person that responded suggested that I could just delete all of the non-English on-line help folders as the Files Admin. However, as Files Admin, I didn't have appropriate permissions to do so.

    Try ordering your songs by date added. The ones you just added will be at the top (or the bottom) - then you've just got to find where the newly added list stops and the old tracks begin and select all the tracks from then onwards. Sorted.
    Hope that helps.

  • How to ignore comments in a CSV file

    Here's what my CSV file looks like
    #Comment 1
    #Comment 2
    #Comment 3
    US,20080101,20080526,20080901
    I'm reading in this file. How can I ignore all the comment lines? I can code it to ignore all lines that begin with #, but I thought this feature was built into the language already. Maybe I'm using the wrong class. I'm using Scanner right now, and it reads in all the comments as well.
    Scanner sc = new Scanner(new File("test.txt")).useDelimiter(",");
    while (sc.hasNext()) {
    System.err.println(sc.next());
    }

    It's not built into the language. Your best bet is probably to put the CSV-Reading logic into its own class.
    Or find one of the thousands of classes that already implement such a thing.
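    A minimal sketch of that approach, assuming a file named test.txt and comma-separated fields as in the question: read line by line, skip anything starting with '#', and only then split the remaining lines on commas.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class CsvCommentSkipper {
        public static void main(String[] args) throws IOException {
            try (BufferedReader in = new BufferedReader(new FileReader("test.txt"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    // skip comment lines and blank lines
                    if (line.startsWith("#") || line.trim().isEmpty()) {
                        continue;
                    }
                    // split the data line on commas
                    for (String field : line.split(",")) {
                        System.err.println(field);
                    }
                }
            }
        }
    }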

  • How to ignore footer in fixed length file

    I have the following data which needs to be processed
    SMAL002495
    SMAL002496
    %%EOF
    which consists of multiple (unbounded) records with a single end of file marker (%%EOF)
    This is the existing schema used:
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd"
    targetNamespace="--suppressed--"
    xmlns:tns="--suppressed--"
    elementFormDefault="qualified"
    attributeFormDefault="unqualified" nxsd:encoding="ASCII" nxsd:stream="chars" nxsd:version="NXSD">
    <xsd:element name="container">
    <xsd:complexType>
    <xsd:sequence>
    <xsd:element name="detail" nxsd:style="array" nxsd:cellSeparatedBy="${eol}" minOccurs="1" maxOccurs="unbounded" nxsd:arrayTerminatedBy="%%EOF">
    <xsd:complexType>
    <xsd:sequence>
    <xsd:element name="ABC" type="xsd:string" nxsd:style="fixedLength" nxsd:length="20"/>
    </xsd:sequence>
              </xsd:complexType>
    </xsd:element>
    </xsd:sequence>
    </xsd:complexType>
    </xsd:element>
    </xsd:schema>
    How can I skip %%EOF and avoid having the inbound file rejected, given that the adapter tries to allocate an element of 20 characters in length? (nxsd:arrayTerminatedBy, as shown above, does not work, presumably because the %%EOF is on a separate line.)
    Is there a way I can exclude the %%EOF marker, or skip, or ignore it?
    Thanks

    There is a construct, nxsd:startsWith, which may help in this situation. You can play with the code, but you should get the idea: BPEL will read the marker in, but you can ignore it by not mapping it.
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd"
    targetNamespace="suppressed"
    xmlns:tns="suppressed"
    elementFormDefault="qualified"
    attributeFormDefault="unqualified" nxsd:encoding="ASCII" nxsd:stream="chars" nxsd:version="NXSD">
    <xsd:element name="container">
    <xsd:complexType>
    <xsd:sequence>
    <xsd:element name="detail" nxsd:style="array" nxsd:cellSeparatedBy="${eol}" minOccurs="1" maxOccurs="unbounded" nxsd:arrayTerminatedBy="%%EOF">
    <xsd:complexType>
    <xsd:sequence>
    <xsd:element name="ABC" type="xsd:string" nxsd:style="fixedLength" nxsd:length="20"/>
    </xsd:sequence>
    </xsd:complexType>
    </xsd:element>
    <!-- trailer element that swallows the %%EOF marker; leave it unmapped -->
    <xsd:element name="Dummy" type="xsd:string" nxsd:startsWith="%%EOF" minOccurs="0" maxOccurs="unbounded"/>
    </xsd:sequence>
    </xsd:complexType>
    </xsd:element>
    </xsd:schema>
    cheers
    James

  • How to ignore some fields on Receiver File?

    Hi folks,
    I have this inbound structure example:
    <?xml version="1.0" encoding="UTF-8"?>
    <ns0:MT_ObrasAdjud_Out xmlns:ns0="http://pt.edp/r3/obrasadjudicadas">
      <codforn>123543</codforn>
      <numdoc>000003</numdoc>
      <tipodoc>CAO</tipodoc>
      <encoding_scheme>UTF-8</encoding_scheme>
      <filetype>pdf</filetype>
      <filedata>01010101001001</filedata>
    </ns0:MT_ObrasAdjud_Out>
    Using file content conversion with my receiver file adapter, I want to convert this XML file to a flat file, but using only the element <filedata> and ignoring the others. However, I also need the other elements to use for variable substitution.
    Anybody knows how to do it?
    Thanks in Advance,
    Ricardo.

    Hi Ricardo,
    >>>>Using adapter specific attributes I can use the elements of source message to do it?
    Of course! That's the idea, and you do it all in message mapping
    (very small advanced function as shown in many blogs)
    >>>>But using variable substitution, I can’t ignore those fields with file conversion?
    I don't know; I stopped working with variable substitution
    as soon as I learned about adapter-specific attributes,
    as they are so much better.
    >>>>I can do a map excluding all the others elements for target message, my target message will be one element <filedata>.
    that's the main idea
    Regards,
    michal
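    For reference, the "very small advanced function" Michal mentions is usually a one-argument UDF whose body looks roughly like the sketch below. The namespace and the FileName attribute are the standard adapter-specific attributes of the file adapter; the argument name is an assumption, and the classes come from com.sap.aii.mapping.api, which the UDF needs to import.

    // UDF body; input argument: fileName (the value you want the receiver channel to reuse)
    DynamicConfiguration conf = (DynamicConfiguration) container
            .getTransformationParameters()
            .get(StreamTransformationConstants.DYNAMIC_CONFIGURATION);
    DynamicConfigurationKey key = DynamicConfigurationKey
            .create("http://sap.com/xi/XI/System/File", "FileName");
    // publish the value as an adapter-specific attribute; enable ASMA on the receiver file channel
    conf.put(key, fileName);
    return fileName;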

  • How to ignore a recordset in receiver file adapter

    Hi, all.
    I am trying to get PI to write a fixed-length file via the receiver file adapter. Here is a test data structure I put together:
    In the output file, I want the file adapter to ignore the recordset PAYPERIOD and the field PERIOD. Only fields in DATA will be written to the file. Here is what I configure in the CC:
    The problem is that the CC will not work unless I put in the recordset PAYPERIOD, which starts printing the data in the PERIOD field. The data in PERIOD is only used to generate the output file name as a parameter.
    Here is the error:
    Attempt to process file failed with java.lang.Exception: Exception in XML Parser (format problem?):'java.lang.Exception: Message processing failed in XML parser: 'java.lang.Exception: Column value 'Y1406' too long (>4 for 0. column) - must stop', probably configuration error in file adapter (XML parser error)'
    Please help. Note that Y1406 is the data in the field PERIOD. The PI file adapter is a rather primitive tool to use.
    Thanks,
    Jonathan.

    Hi Jonathan,
    Can you try using the parameter ignoreRecordsetName = true? Refer to the SAP help below:
    https://help.sap.com/saphelp_nw04/helpdata/en/2c/181077dd7d6b4ea6a8029b20bf7e55/content.htm
    ignoreRecordsetName: A <Recordset> element is inserted in the XML structure for each recordset structure. This level is not always required, particularly if the recordset only contains one structure definition. If you set the parameter to true, the <Recordset> element is not inserted.
    regards,
    Harish

  • How to ignore Last node in Receiver side CC

    Hi Group,
    I need to prevent one node, which is coming from the target structure, from being written into the output file.
    I have used
    FNAME.fieldFixedLengths 0
    FNAME.FixedLengthTooShortHandling Cut
    in my Content Conversion.
    But it's not working. It's giving an error at adapter level, saying that the field length is greater than 0.
    Can any body suggest.

    > Hi,
    > How does it look after mapping i.e xml format ?
    >
    > Just to check ...
    >
    > Regards,
    > Moorthy
    Hi,
    Here is the XML format after running the message mapping:
    <?xml version="1.0" encoding="UTF-8"?>
    <ns1:MT_BACSPayment xmlns:ns1="urn://SR3_BACS_01/BACS/BACSPayment"><Recordset><VOL1><KEY>VOL</KEY><LN>1</LN><SSN>000004</SSN><AI> </AI><RES1>                    </RES1><RES2>      </RES2><OI>994915        </OI><RES3>                            </RES3><LSL>1</LSL></VOL1><HDR1><KEY>HDR1</KEY><FI>A994915S 1994915 </FI><SI>000004</SI><FSN>0001</FSN><FSQN>0001</FSQN><GN>    </GN><GVN>  </GVN><CD>06299 </CD><ED>06301 </ED><AI>0</AI><BC>000000</BC><SC>             </SC><RES>       </RES></HDR1><HDR2><KEY>HDR2</KEY><RF>F</RF><BL>00000</BL><RL>00100</RL><RES1>                                   </RES1><BO>00</BO><RES2>                            </RES2></HDR2><UHL1><KEY>UHL</KEY><LN>1</LN><PD>06300 </PD><IDN>999999    </IDN><CUC>00</CUC><COC>000000</COC><WC>1 DAILY  </WC><FN>001</FN><RES1>       </RES1><API>       </API><RES2>                          </RES2></UHL1><TC><DSC>123456</DSC><DAN>00123456</DAN><DAT>0</DAT><KEY>99</KEY><OSC>123456</OSC><OAN>00123456</OAN><FF>    </FF><AMT>00000005700</AMT><ON>06299LG12 17543400</ON><OR>CONTRA            </OR><DN>SOUTHERN WATER    </DN></TC><TC><DSC>209148</DSC><DAN>61628011</DAN><DAT>0</DAT><KEY>01</KEY><OSC>123456</OSC><OAN>00123456</OAN><FF>    </FF><AMT>00000002300</AMT><ON>SOUTHERN WATER    </ON><OR>000400011703X     </OR><DN>LG2               </DN></TC><TC><DSC>209148</DSC><DAN>61628011</DAN><DAT>0</DAT><KEY>01</KEY><OSC>123456</OSC><OAN>00123456</OAN><FF>    </FF><AMT>00000003400</AMT><ON>SOUTHERN WATER    </ON><OR>1234567891234     </OR><DN>LG2               </DN></TC><EOF1><KEY>EOF1</KEY><FI>A994915S 1994915</FI><SI>000004</SI><FSN>0001</FSN><FSEQ>0001</FSEQ><GN>    </GN><GVN>  </GVN><CD>06299 </CD><ED>06301 </ED><AI>0</AI><BC>000000</BC><SC>             </SC><RES>       </RES></EOF1><EOF2><KEY>EOF2</KEY><RF>F</RF><BL>00000</BL><RL>00100</RL><RES1>                                   </RES1><BO>00</BO><RES2>                            </RES2></EOF2><UTL1><KEY>UTL</KEY><LNO>1</LNO><DVT>0000000005700</DVT><CVT>0000000005700</CVT><DIC>0000002</DIC><CIC>0000001</CIC><RES>        </RES><DDI>       </DDI><SU>                     </SU></UTL1><EOF><ID>END</ID><RES>                                                                             </RES></EOF><FNAME><Name>PAYSW_061106_164904</Name></FNAME></Recordset></ns1:MT_BACSPayment>

  • Ignore last lines in a file using FCC

    Hello,
    Can we ignore the last two lines in a file when we are using FCC?
    A file was uploaded with some special characters like $#@..... and because of this the mapping is failing.
    I want to ignore the last two lines of the file.
    Regards,
    Chinna

    Hi Chinna,
    You can also use the replaceString function to remove all special characters and then use the result for the date transformation, as shown below.
    Hope it will be helpful for you.
    Regards
    Jitender
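    If the special characters are arbitrary, a small UDF can do the job of a whole chain of replaceString functions; this is only a sketch of that alternative, and the character whitelist is an assumption you would adjust to your data.

    // UDF body; input argument: in
    // keep letters, digits and the separators the target format expects; drop everything else
    return in.replaceAll("[^A-Za-z0-9,./:\\- ]", "");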

  • How to Ignore Header Line in FCC "Sender Side"

    Hi SDNrs,
    I am getting file data after FCC together with the header data.
    How can I ignore that header line coming with the actual data?
    For example: Employeeid, Name, Deptid.
    This "Employeeid, Name, Deptid" line is also coming through as data.
    Or what approach should be used?
    Regards
    Prabhat Sharma.

    Hi,
    Use " Document Offset" parameter in Sender File CC.
    http://help.sap.com/saphelp_nw04/helpdata/en/2c/181077dd7d6b4ea6a8029b20bf7e55/content.htm
    Thanks
    Amit

  • Last four Months Background job details to view

    Hello ,
    How can I find the background job details for the last four months (or the last year)? I need to schedule a job that was deleted from the system; the same job was executed 3 months ago, and I want to run it again now (I do not know the steps to schedule this job again).
    <<removed_by_moderator>>
    Thanks,
    Chinna
    Edited by: Vijay Babu Dudla on Apr 28, 2009 5:40 AM

    > Please do the needful asap
    I thought this was extinct by now.
    If the job was reorganized i.e. deleted, you'll have to do the needful and reschedule it from scratch. If you don't know the steps any more, bad luck I'm afraid.
    Thomas

  • How to skip first 5 lines from a txt file when using sql*loader

    Hi,
    I have a txt file that contains header info that I don't need. How can I skip those lines when importing the file into my database?
    Cheers

    Danny Fasen wrote:
    I think most of us would process this report using pl/sql:
    - read the file until you've read the column headers
    - read the account info and insert the data in the table until you have read the last account info line
    - read the file until you've read a new set of column headers (page 2)
    - read the account info and insert the data in the table until you have read the last account info line (page 2)
    - etc. until you reach the total block identified by Count On-line ...
    - read the totals and compare them with the data inserted in the table
    Or maybe like this...
    First create an external table to read the report as whole lines...
    SQL> ed
    Wrote file afiedt.buf
      1  CREATE TABLE ext_report (
      2    line VARCHAR2(200)
      3          )
      4  ORGANIZATION EXTERNAL (
      5    TYPE oracle_loader
      6    DEFAULT DIRECTORY TEST_DIR
      7    ACCESS PARAMETERS (
      8      RECORDS DELIMITED BY NEWLINE
      9      BADFILE 'bad_report.bad'
    10      DISCARDFILE 'dis_report.dis'
    11      LOGFILE 'log_report.log'
    12      FIELDS TERMINATED BY X'0D' RTRIM
    13      MISSING FIELD VALUES ARE NULL
    14      REJECT ROWS WITH ALL NULL FIELDS
    15        (
    16         line
    17        )
    18      )
    19      LOCATION ('report.txt')
    20    )
    21  PARALLEL
    22* REJECT LIMIT UNLIMITED
    SQL> /
    Table created.
    SQL> select * from ext_report;
    LINE
    x report page1
    CDC:00220 / Sat Aug-08-2009 xxxxp for 02/08/09 - 08/08/09 Effective Date 11/08/09 Wed Sep-30-2009 08:25:43
    Bill to
    Retailer Retailer Name                  Name on Bank Account           Bank ABA   Bank Acct            On-line Amount  Instant Amount  Total Amount
    ======== ============================== ============================== ========== ==================== =============== =============== ===============
    0100103  BANK Terminal                  raji                           123456789  123456789            -29,999.98    9 0.00         99 -29,999.98
    0100105  Independent 1                  Savings                        123456789  100000002            -1,905.00     9 0.00         99 -1,905.00
    0100106  Independent 2                  system                         123456789  100000003            -800.00       9 -15.00       99 -815.00
    LARGE SPACE
    weekly_eft_repo 1.0 Page: 2
    CDC:00220 / Sat Aug-08-2009 Weekly EFT Sweep for 02/08/09 - 08/08/09 Effective Date 11/08/09 Wed Sep-30-2009 08:25:43
    Bill to
    Retailer Retailer Name Name on Bank Account Bank ABA Bank Acct On-line Amount Instant Amount Total Amount
    ======== ============================== ============================== ========== ==================== =============== =============== ===============
    Count On-line Amount Instant Amount Total Amount
    ============== ====================== ====================== ======================
    Debits 0 0.00 0.00 0.00
    Credits 3 -32,704.98 -15.00 -32,719.98
    Totals 3 -32,704.98 -15.00 -32,719.98
    Total Tape Records / Blocks / Hash : 3 1 37037034
    End of Report
    23 rows selected.
    Then we can just pull out the lines of data we're interested in from that...
    SQL> ed
    Wrote file afiedt.buf
      1  create view vw_report as
      2* select line from ext_report where regexp_like(line, '^[0-9]')
    SQL> /
    View created.
    SQL> select * from vw_report;
    LINE
    0100103  BANK Terminal                  raji                           123456789  123456789            -29,999.98    9 0.00         99 -29,999.98
    0100105  Independent 1                  Savings                        123456789  100000002            -1,905.00     9 0.00         99 -1,905.00
    0100106  Independent 2                  system                         123456789  100000003            -800.00       9 -15.00       99 -815.00
    And then we adapt that view to extract the data from those lines as actual columns...
    SQL> col retailer format a10
    SQL> col retailer_name format a20
    SQL> col name_on_bank_account format a20
    SQL> col online_amount format 999,990.00
    SQL> col instant_amount format 999,990.00
    SQL> col total_amount format 999,990.00
    SQL> ed
    Wrote file afiedt.buf
      1  create or replace view vw_report as
      2  select regexp_substr(line, '[^ ]+', 1, 1) as retailer
      3        ,trim(regexp_replace(regexp_substr(line, '[[:alpha:]][[:alnum:] ]*[[:alpha:]]', 1, 1), '(.*) +[^ ]+$', '\1')) as retailer_name
      4        ,trim(regexp_replace(regexp_substr(line, '[[:alpha:]][[:alnum:] ]*[[:alpha:]]', 1, 1), '.* ([^ ]+)$', '\1')) as name_on_bank_account
      5        ,to_number(regexp_substr(regexp_replace(line,'.*[[:alpha:]]([^[:alpha:]]+)','\1'), '[^ ]+', 1, 1)) as bank_aba
      6        ,to_number(regexp_substr(regexp_replace(line,'.*[[:alpha:]]([^[:alpha:]]+)','\1'), '[^ ]+', 1, 2)) as bank_account
      7        ,to_number(regexp_substr(regexp_replace(line,'.*[[:alpha:]]([^[:alpha:]]+)','\1'), '[^ ]+', 1, 3),'999,999.00') as online_amount
      8        ,to_number(regexp_substr(regexp_replace(line,'.*[[:alpha:]]([^[:alpha:]]+)','\1'), '[^ ]+', 1, 5),'999,999.00') as instant_amount
      9        ,to_number(regexp_substr(regexp_replace(line,'.*[[:alpha:]]([^[:alpha:]]+)','\1'), '[^ ]+', 1, 7),'999,999.00') as total_amount
    10* from (select line from ext_report where regexp_like(line, '^[0-9]'))
    SQL> /
    View created.
    SQL> select * from vw_report;
    RETAILER   RETAILER_NAME        NAME_ON_BANK_ACCOUNT   BANK_ABA BANK_ACCOUNT ONLINE_AMOUNT INSTANT_AMOUNT TOTAL_AMOUNT
    0100103    BANK Terminal        raji                  123456789    123456789    -29,999.98           0.00   -29,999.98
    0100105    Independent 1        Savings               123456789    100000002     -1,905.00           0.00    -1,905.00
    0100106    Independent 2        system                123456789    100000003       -800.00         -15.00      -815.00
    SQL>
    I couldn't quite figure out the "9" and the "99" data that was on those lines, so I assume it should just be ignored. I also formatted the report data to fixed column widths in my external text file, as I'd assume that's how the data would be generated; not that it would make much difference when extracting the values with regular expressions as I've done.
    So... something like that anyway. ;)

  • How to delete string or line from unix file(dataset) of application server

    Hi  All,
    After transferring the work area information (all records) into a dataset (Unix file), when I view the file on the application server, the last line is shown as a blank line. I am not passing any blank line.
    I have tried with a single record, and even then the file shows a blank last (2nd) line.
    When I read the dataset, it does not read the last blank line, so why is it shown?
    How can I delete a string or line from a Unix file (dataset) on the application server?
    Please give your comments to resolve this.
    Thanks
    Tirumula Rao Chinni

    Hi Rio,
    I faced a similar kind of issue working with files on the UNIX platform.
    The line is a line feed; to remove it, use:
    DATA : lv_carr_linefd TYPE abap_cr_lf VALUE cl_abap_char_utilities=>cr_lf. 
      DATA : lv_carr_return TYPE char1,                                   
             lv_line_feed   TYPE char1.                                          
      lv_line_feed   = lv_carr_linefd(1).
      lv_carr_return = lv_carr_linefd+1(1).
    Note (IMP): the character in ' ' is not a space but a special character, entered by pressing ALT and 255 simultaneously.
      REPLACE ALL OCCURRENCES OF lv_line_feed IN l_string WITH ' '.
      REPLACE ALL OCCURRENCES OF lv_carr_return IN l_string WITH ' '.
