Unix flat file: remove the header and trailer and put them in another file.

Hi,
I have a source flat file on a Unix box that contains a header and a trailer.
I want to remove the header and trailer, write them to a separate file, and write the data to another file.
I tried the following command in Unix and it works, but I am not getting the header and trailer into another file:
sed '1d;$d' input_source.txt > output_data.txt
Also, how can I run this OS command from ODI?
Please guide me.
Thanks
Ashwini

Hi Ashwini,
You can run OS commands in a package using an ODI Tool: OdiOSCommand.
It is also possible to execute OS commands in an ODI procedure using the Operating System or Jython technologies.
There should be some articles about this on Metalink (http://metalink.oracle.com).
Thanks,
Julien
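
To complement the answer above on the sed side: the command from the original post already produces the data file, and a second sed invocation can capture just the first and last lines. A minimal sketch, reusing the file names from the post (output_header_trailer.txt is only an example name):

    #!/bin/sh
    # Data only: delete the first line (header) and the last line (trailer).
    sed '1d;$d' input_source.txt > output_data.txt
    # Header and trailer only: print nothing except line 1 and the last line.
    sed -n '1p;$p' input_source.txt > output_header_trailer.txt

Either the whole script or each command on its own could then be run from an OdiOSCommand step in a package, or from a procedure step using the Operating System technology, as Julien describes.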

Similar Messages

  • How to create a file and store its contents into another file?

    Hi,
    I'm having some trouble writing code that creates a file and stores its contents in another file.
    I read the API, but I'm not certain how this file handling works.
    Here's my code so far:
    public static void main(String[] args) throws Exception {
         File file = new File("tasks.txt");
         if (file.exists()) {
              System.out.println("File already exists");
              System.exit(0);
         }
         Scanner scan = new Scanner(System.in);
         Scanner scan2 = new Scanner(System.in);
         // Scans the input line by line
         scan.useDelimiter("\\n");
         // Scans the input by tabs
         scan2.useDelimiter("\\t");
         PrintWriter outputs = new PrintWriter("newtasks.txt");
         outputs.print("ok");
         outputs.println(3);
         outputs.close();
    }

    I managed to change my text to uppercase, but how do I store the uppercase content in another file?
    -So this is what I've done so far: I took a text file and converted its strings to uppercase.
    -Now I need to put those modified strings into another text file; is there a way I can do that with my current code?
    -I already tried PrintWriter, but it doesn't seem to work.
    public static void main(String[] args) throws IOException {
         // Task[] oneHundredTasks = new Task[100];
         String uppercase;
         String combine;
         Scanner scan = null;
         FileInputStream in = null;
         FileOutputStream out = null;
         PrintWriter output = null;
         try {
              scan = new Scanner(new BufferedReader(new FileReader("tasks.txt")));
              scan.useDelimiter("\\n");
              scan.useDelimiter("\\t");
              while (scan.hasNext()) {
                   if (!scan.hasNext()) {
                        scan.next();
                   }
                   combine = scan.next();
                   uppercase = combine.toUpperCase();
                   System.out.println(uppercase);
              }
         } finally {
              if (scan != null) {
                   scan.close();
              }
         }
         // The program will try the input and output files
         try {
              in = new FileInputStream("tasks.txt");
              out = new FileOutputStream("newtasks.txt");
              int c;
              // The value -1 indicates that the end of the stream has been reached.
              while ((c = in.read()) != -1) {
                   out.write(c);
              }
         } finally {
              if (in != null) {
                   in.close();
              }
              if (out != null) {
                   out.close();
              }
         }
    }

  • Dps file automatically opening multiple times when working on another file in the same folio?

    A DPS file automatically opens multiple times when I'm working on another file in the same folio. Can anybody please help?

    Can you try resetting your preferences first and see if that helps - see Troubleshooting 101: Replace, or "trash" your InDesign preferences

  • Linux server (how to save command output to another file)

    hi all,
    I have a question:
    how do I save a command's output to another file?
    Ex: # ps -ef
    I need to save that particular command's output to another file.
    Is it possible? If possible, please let me know.
    Also, how do I save the command history in Linux?

    df -h >> /oracle/output.log
    /oracle -- mount point name
    Regards
    Asif Kabir
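
    Along the same lines, the output of any command can be redirected, and the shell history can be saved the same way. A small sketch (the destination paths are only examples):
        # Save the output of ps -ef to a file (> overwrites, >> appends)
        ps -ef > /tmp/ps_output.txt
        # Save the current shell's command history (bash built-in)
        history > /tmp/command_history.txt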

  • How to do the header and trailer validation in the input file?

    hi,
    what are the ways to validate whether header and trailer records exist in the input file?
    How can this be done?
    regards
    Ruban

    File to Proxy Validation
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/99593f86-0601-0010-059d-d2dd39dceaa0
    /people/swaroopa.vishwanath/blog/2005/06/24/generic-approach-for-validating-incoming-flat-file-in-sap-xi--part-1
    /people/swaroopa.vishwanath/blog/2005/06/29/generic-approach-for-validating-incoming-flat-file-in-sap-xi--part-ii
    /people/prateek.shah/blog/2005/06/14/file-to-r3-via-abap-proxy

  • Is a record ID field required for File Adapter (Header, Body, Trailer)?

    Hi,
    File to File scenario. The sender file is a text file i.e. .txt which needs a conversion on the sender file adapter. There are three structures in the file namely header (one), body (many) and Trailer (one). From what I can see I need a record Id field (with the same name) in all the structure types, can anyone confirm this for me?
    Thanks,
    Leanne

    That is correct. If you have a variable number of records of a particular type, then you need a way to accurately identify each type of record... and that is done via the key field (or Record ID)  which is present in each record type.
    Extract from SAP Online Help:
    <i>Key Field Name
    If you specified a variable number of substructures for Recordset Structure, in other words, at least one substructure has the value ‘*’, then the substructures must be identified by the parser from their content. A key field must be set with different constants for the substructures. In this case, you must specify a key field, and the field name must occur in all substructures.
    Specify the Key Field Type to be used to compare the predefined values. This entry is used if the key field name is defined.
    NameA.keyFieldValue: Specify the value of the key field for the structure. This entry is mandatory if the key field name is set. Otherwise, the entry is ignored.</i>
    For more info, refer to the SAP Help:
    http://help.sap.com/saphelp_nw04/helpdata/en/2c/181077dd7d6b4ea6a8029b20bf7e55/frameset.htm

  • Adding numbers in a file and saving the output in another file

    Can anyone please tell me how to add numbers from a text file and save the output in another text file? I'm new to Java, so please help out.

    Hi Friend,
    Your statement is not clear; it's a bit confusing.
    This is my understanding:
    you want to open a file, read the values, and add the numbers,
    then save the sum in a new file. Let me know if this is the case, and I can help you.
    Cheers
    Prasanna

  • Colours desaturate when I copy and paste an image into another file

    I'm using Illustrator CS4 with Windows Vista, 4GB RAM. I've created an image using lots of gradients and different shades of colour, and have tried copying it into a new file, but all the colours in the copied image are desaturated. For instance, a blue with an RGB value of 95 179 255, becomes 114 174 223 in the copied image. All the colours are similarly desaturated. It's very annoying! Anyone had a similar problem, or have any idea what's happening?

    Jacob,
    I am using Illustrator CS (ver. 11.0.0) and all of my docs are CMYK. It might help you to know this: the CMYK values in the original document were assigned as c100 m55 y0 k0. When I close the doc and reopen it, the values change to c93.74 m55.69 y2.35 k0. I don't understand why. Can you help?

  • How to Seggragate Header, Footer and Trailer Records in Informatica Cloud ?

    Hi All,
    This is my source file structure, which is a flat file.
    Source File:
    "RecordType","Creation Date","Interface Name"
    "H","06-08-2015","SFC02"
    "RecordType","Account Number","Payoff Amount","Good Through Date"
    "D","123456787","2356.14","06-08-2015"
    "D","12347","2356.14","06-08-2015"
    "D","123487","235.14","06-08-2015"
    "RecordType","Creation Date","TotalRecordCount"
    "T","06-08-2015","5"
    The source file has to be loaded into three targets, for header, detail and trailer records separately.
    Target Files:
    File 1: Header.txt
    "RecordType","Creation Date","Interface Name"
    "H","06-08-2015","SFC02"
    File 2: Detail.txt
    "RecordType","Account Number","Payoff Amount","Good Through Date"
    "D","123456787","2356.14","06-08-2015"
    "D","12347","2356.14","06-08-2015"
    "D","123487","235.14","06-08-2015"
    File 3: Trailer.txt
    "RecordType","Creation Date","TotalRecordCount"
    "T","06-08-2015","5"
    I tried this solution:
    1. Source ---> Expression ]-----filter 1---Detail.txt
                               -----filter 2---Trailer.txt
    In the source, I read the records starting from the 3rd row. This is because, if I read from the first row, the detail part contains more fields than the header part. The header part contains only three fields, so it would take only the first three fields in the detail section as well. That's why I am skipping the first two records (the header field names and the header record); refer to the example above.
    In filter 1, the condition is Record_Type = 'D'. In filter 2, the condition is Record_Type = 'T'. So filter 1 will load Detail.txt and filter 2 will load Trailer.txt.
    In the task's pre-session command, I call a Windows .bat script to fetch the first two lines and load them into Header.txt.
    This solution is working fine.
    My query is: can we use two pipeline flows in the same mapping in Informatica Cloud?
    Pipeline Flow 1
    Source ---> Expression ]-----filter 1---Detail.txt
                            -----filter 2---Trailer.txt
    Pipeline Flow 2
    Source ---> Expression ]-----filter 3 (Record_Type='H')---Header.txt
    The source file is the same in both flows. In the first flow, I read from the third row, skipping the header section as I mentioned earlier. In the second flow, I read the entire content, take only the header record, and load it into the target.
    If I add flow 2 to the existing flow 1, I get the error below:
    TE_7020 Internal error. The Source Qualifier [Header_Source] contains an unbound field [Creation_Date]. Contact Informatica Global Customer Support.
    1. Does Informatica Cloud support two parallel flows in the same mapping?
    2. Is there any other, better solution for my requirement?
    Since I am new to Informatica Cloud, can anyone suggest another solution? It would be very helpful.
    Thanks
    Sindhu Ravindran

    We are using a Web Services Consumer transformation in our mapping to connect to a RightFax server using a WSDL URL via a Business Service. In the mapping we send the input parameters through a flat file, connect to the RightFax server via the WS Consumer transformation, and then fetch the data and write it to a flat file.
    07/28/2015 10:10:49 **** Importing Connection: Conn_000A7T0B0000000000EM ...
    07/28/2015 10:10:49 **** Importing Source Definition: SRC_RightFax_txt ...
    07/28/2015 10:10:49 **** Importing Target Definition: GetFax_txt ...
    07/28/2015 10:10:49 **** Importing Target Definition: FaultGroup_txt ...
    07/28/2015 10:10:49 **** Importing SessionConfig: default_session_config ...
    <Warning> : The Error Log DB Connection value should have Relational: as the prefix.
    <Warning> : Invalid value for attribute Error Log DB Connection. Will use the default value
    Validating Source Definition SRC_RightFax_txt...
    Validating Target Definition FaultGroup_txt...
    Validating Target Definition GetFax_txt...
    07/28/2015 10:10:49 **** Importing Mapping: Mapping0 ...
    <Warning> : transformation: RIghtfax - Invalid value for attribute Output is repeatable. Will use the default value Never
    [transformation< RIghtfax >] Validating transformations of mapping Mapping0...
    Validating mapping variable(s).
    07/28/2015 10:10:50 **** Importing Workflow: wf_mtt_000A7T0Z00000000001M ...
    <Warning> : Invalid value Optimize throughout for attribute Concurrent read partitioning. Will use the default value Optimize throughput
    [Session< s_mtt_000A7T0Z00000000001M > --> File Reader< File Reader >]
    <Warning> : The value entered is not a valid integer.
    <Warning> : Invalid value NO for attribute Fail task after wait time. Will use the default value
    Successfully extracted session instance [s_mtt_000A7T0Z00000000001M]. Starting repository sequence id is [1048287470]
    Kindly provide us with a solution. The logs are attached for reference.
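
    As an aside on the pre-session command described in the question above: if the Secure Agent were running on Unix rather than Windows, the same "fetch the first two lines" step could be sketched with standard tools (the file names below are only placeholders):
        # Header: the first two lines (header field names + header record)
        head -2 source_file.dat > Header.txt
        # Trailer: the last two lines (trailer field names + trailer record)
        tail -2 source_file.dat > Trailer.txt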

  • NXSD translator for CSV file (Header-Line-Trailer)

    Hi,
    We have to process the file coming in below format.
    File Format-
    Header Line (Only once)
    Payload lines. (is unbounded)
    Trailer Line (only once) ----> this is the line I would like the adapter to skip.
    Sample file-
    2013-10-10
    1,2,3,4,4,5,1
    35,5,5,66,6,1,1
    2
    We don't have any record identifier to identify the record type. The only info we have is that the first record will be the header and the last record will be the trailer; everything in between is payload.
    Can someone tell me how to create an XSD for this type of file?
    Thanks,
    Ab

    Hi Puneet,
    As you suggested, I generated separate schemas for header+payload and for the trailer and tried to merge them using both choice and sequence, but neither of them seems to be working.
    Are you sure we can handle this scenario in XSD? Below are the generated schemas.
    File Format-
    Header Line (Only once)
    Payload lines. (is unbounded)
    Trailer Line (only once) ----> this is the line I would like the adapter to skip.
    Actual data file-
    2013-10-10
    5561386532;11111111;INDIA
    5561386532;44444444;INDIA
    2
    Schema for Trailer-
    <?xml version="1.0" encoding="UTF-8" ?>
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd"
                xmlns:tns="http://TargetNamespace.com/bn"
                targetNamespace="http://TargetNamespace.com/bn"
                elementFormDefault="qualified"
                attributeFormDefault="unqualified"
                nxsd:version="NXSD"
                nxsd:stream="chars"
                nxsd:encoding="US-ASCII"
    >
      <xsd:element name="Root-Element">
        <xsd:complexType>
          <xsd:sequence>
            <xsd:element name="rec" minOccurs="1" maxOccurs="1">
              <xsd:complexType>
                <xsd:sequence>
                  <xsd:element name="C1" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}" nxsd:quotedBy="&quot;" />
                </xsd:sequence>
              </xsd:complexType>
            </xsd:element>
          </xsd:sequence>
        </xsd:complexType>
      </xsd:element>
    </xsd:schema>
    Schema for Header+Payload-
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd"
                xmlns:tns="http://TargetNamespace.com/bn"
                targetNamespace="http://TargetNamespace.com/bn"
                elementFormDefault="qualified"
                attributeFormDefault="unqualified"
                nxsd:version="NXSD"
                nxsd:stream="chars"
                nxsd:encoding="US-ASCII"
                nxsd:hasHeader="true"
                nxsd:headerLines="1"
                nxsd:headerLinesTerminatedBy="${eol}"
    >
      <xsd:element name="Root-Element">
        <xsd:complexType>
          <xsd:sequence>
            <xsd:element name="rec" minOccurs="1" maxOccurs="unbounded">
              <xsd:complexType>
                <xsd:sequence>
                  <xsd:element name="C1" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=";" nxsd:quotedBy="&quot;" />
                  <xsd:element name="C2" type="xsd:int" nxsd:style="terminated" nxsd:terminatedBy=";" nxsd:quotedBy="&quot;" />
                  <xsd:element name="C3" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}" nxsd:quotedBy="&quot;" />
                </xsd:sequence>
              </xsd:complexType>
            </xsd:element>
          </xsd:sequence>
        </xsd:complexType>  
      </xsd:element>
    </xsd:schema>

  • Sharepoint 2010 and Unable to Open MS Office Files

    Hello,
    I have this annoying issue that I'm experiencing. My IE11 is unable to open *any* SharePoint files. I have to "download a copy" and then "add document" in order to update files, which is a pure annoyance.
    This is just for one client.
    Here's what I've done so far:
    Change a new PC
    Remove Office 2013 OEM and Install Office 2010 Standard (cannot open file)
    Remove Office 2010 Standard and Install Office 2013 (cannot open file)
    BasicAuthLevel registry (no go)
    Disable all IE Plugins except Microsoft ones (no go)
    Use IE x86 (no go)
    Use IE10 (no go)
    Use IE Tab on Firefox (no go)
    Trust Center and Trust all Content (no go)
    Delete Cache, reset Advanced Setting, "reset" (no go)
    Uncheck "Autodetect" on LAN setting, then recheck (no go)
    Add sharepoint to Trusted Sites (no go).
    Use CCleaner to clean PC, then reinstall Office (no go)
    Uninstall all other office product except 2013 Office. Remove Sharepoint component via Repair, then re-add (no go)
    I've Binged and tested many solutions and have been unable to find a fix. I used Fiddler to see what is being passed back and forth.
    The file is 100 KB. If I use "download a copy", it is downloaded right away. However, if I open it with SharePoint, it takes forever and eventually gives me this error (see attached).
    Please advise. I've seen odd issues, but never one like this that I cannot Bing.

    Hi Samir,
    To troubleshoot your issue, you can try the steps below:
    1. Make sure you are using 32-bit IE.
    2. Go to Internet Options > Advanced tab and uncheck "Do not save encrypted files to disk".
    3. Delete the Office File Cache: http://www.addictivetips.com/windows-tips/office-2010-fix-office-document-cache-error/
    4. Fixing SharePoint Office Integration Problems: http://thechriskent.com/2012/03/02/fixing-sharepoint-office-integration-problems/
    Best Regards,
    Eric
    Eric Tao
    TechNet Community Support

  • I have my Mac's recovered files in my Trash and I want to delete the items in my Trash. Can I put my recovered files in a separate folder on my desktop and empty my Trash, or do they have to stay in my Trash?

    I have the recovered folders on my Mac and I want to empty my Trash. Can I put my recovered files in a separate folder on my desktop, or do they have to stay in my Trash?

    Empty the trash to release more HD space.
    NOTE:  memory=RAM=hardware   Storage space & memory are 2 different things.

  • How to read a file and pick up some information from a file

    Hi all,
    I wrote the program below to read a file and write it into another file:
    import java.io.*;
    public class Rdata {
         /** Creates a new instance of Rdata */
         public Rdata() {
         }
         public static void main(String a[]) throws Exception {
              FileReader fr = new FileReader("c:\\java\\attach\\attach2\\resume1.doc");
              FileWriter fw = new FileWriter("C:\\java\\attach\\abc2.doc");
              char ch;
              int i;
              while ((i = fr.read()) != -1) {
                   ch = (char) i;
                   fw.write(ch);
              }
              fw.close();
         }
    }
    This program reads the content from resume1.doc and writes it into abc2.doc. But in my project I want to read resume1.doc and write only a piece of information, like "key skills are java, jsp, javamail".
    I mean, how do I read the entire file and then write only the parts I require into another file?
    Thanks & regards
    radhs

    http://java.sun.com/docs/books/tutorial/essential/io/

  • Generate target/out file with header record as Record Count ?

    Hi Kareem,
    Please try the approach below.
    Pipeline 1: Load the actual data (without the header with record count) from the source to the target. Let's say your file name is intermediate1.dat.
    Pipeline 2: Take the target from pipeline 1 as the source and create the header with the count of the source file using an aggregator. The target file name for pipeline 2 will be your final file (header and detail data).
    Pipeline 3: Take the target of pipeline 1 again and do a 1-to-1 load to the target file of the second pipeline. In the session properties, don't forget to tick the "append if exists" check box for the third pipeline's target.
    There may be other, simpler approaches as well. If you have no time in hand, try the above approach. Let me know if you find any issues.
    Thanks,
    Deeshan.

    Generate target/out file with header record as Record Count?
    Out file:
    ---------------------------
    Record Count: 2000
    Column1, Column2...
    Data, data........

  • Can we take the output of a file and put it into another file in Unix?

    Hi all,
    My question is this: we have a shell file, and we need to send the output of that file to another file,
    but the output is not being written to the other file.
    My guess is that we have only read and execute permission on that file; we don't have write permission on the file or on the folder where the file exists.
    My question is: do we need full permissions on the file in order to copy its output to another file?
    We are redirecting the output to another folder where we can create the other file and have read and write permissions.
    -rwxrwxr-x 1 th th 499 Jun 20 01:06 Coverage
    This is the permission we have on that file.
    For the folder we have this permission:
    drwxrwxr-x 6 kpsmith kpsmith 12288 Aug 9 21:59 1.3
    Can anyone help me out with this problem?
    Thanks
    Sreedhar

    Assuming the 'shell file' is a shell script: other than dumping even more garbage in this forum, what does this question have to do with Oracle?
    Or can't you type anything into the address bar of your browser, and is this URL the only one you can visit?
    Sybrand Bakker
    Senior Oracle DBA
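
    For what it's worth, redirecting a script's output does not require write permission on the script itself; it only requires read/execute permission on the script and write permission on the directory that will hold the output file. A minimal sketch, with a purely hypothetical destination path:
        # Run the script we can only read/execute, writing its output
        # to a directory where we do have write permission.
        ./Coverage > /path/we/can/write/to/coverage_output.txt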
