SSIS CSV FILE READING ISSUE

Hi, can someone reply to the post below?
I am using a Flat File connection manager to read CSV files, with the row delimiter set to {CR}{LF}.
While looping through the files, the package failed because it could not read one of the CSV files.
I then changed the row delimiter to {LF}, and the file that had failed with the {CR}{LF} delimiter was read successfully.
Now I want to know why the package fails over the row delimiter.
Can anyone help me with this?
Please tell me what the actual difference is between the two delimiters.

Please share what the actual difference is between those delimiters:
CR = Carriage Return = CHAR(13) in SQL.
This character was used in classic Mac OS as the new line.
When this character is used, the cursor moves to the first position of the line.
LF = Line Feed = CHAR(10) in SQL.
This character is used in Unix as the new line.
When this character is used, the cursor moves down to the next line (in the old days of typewriters, the paper moved up).
CR LF
New line in Windows systems. Combination of CR and LF.
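If Notepad++ is not handy, the delimiter mix can also be checked by counting the raw byte sequences in the file (a minimal Python sketch; the file name is a placeholder):

```python
# Count the raw line-ending sequences in a file to see which
# row delimiter ({CR}{LF} vs {LF}) the CSV actually uses.
def count_line_endings(path):
    with open(path, "rb") as f:          # binary mode: no newline translation
        data = f.read()
    crlf = data.count(b"\r\n")
    lf_only = data.count(b"\n") - crlf   # LF not preceded by CR
    cr_only = data.count(b"\r") - crlf   # bare CR (old Mac style)
    return {"CRLF": crlf, "LF": lf_only, "CR": cr_only}

# Example: count_line_endings("suspect_file.csv")
```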
The best thing is to open the test flat file in Notepad++ and enable Show Symbol > Show All Characters to see exactly what you have as the row delimiter.
Cheers,
Vaibhav Chaudhari
[MCTS],
[MCP]

Similar Messages

  • PS CS3, Camera Raw 4.6 Update and Nikon D810 NEF File Reading Issue

    I currently have Photoshop CS3 & Adobe Bridge CS3 2.1.1.9.
    According to Adobe, Camera Raw 4.6 supports Nikon D810 NEF files. I have downloaded the Camera Raw 4.6 update. When
    I try to open Nikon D810 NEF files, I get a Photoshop CS3 error that says
    "cannot complete your request because it is not the right kind of
    document". These NEF files won't open in Adobe Bridge at all. What am I doing wrong?

    Hey,
    Before I purchase, let me get your thoughts on another option that just occurred to me, which involves:
    1) Editing my D810 NEF files in LR 5.6, which I have, then exporting/saving the edited files as JPEGs.
    2) Doing any additional editing, as necessary (i.e. editing that might require layering or selections, etc.), on the JPEGs with Bridge or PS CS3.
    This method won't involve any additional software purchase now, right? Since my final output to my customers is JPEGs anyway, won't this work? Will I be at any disadvantage? Am I missing anything here?
    From: ssprengel <[email protected]>
    To: charles cash <[email protected]>
    Sent: Monday, October 6, 2014 9:49 AM
    Subject:  PS CS3, Camera Raw 4.6 Update and Nikon D810 NEF File Reading Issue
    You can buy it from here:
    http://creative.adobe.com/plans

  • CSV file read

    Is there a standard FM which handles CSV file reading? I am currently using a SPLIT AT to separate values, but this fails when some strings (within quotes) contain commas.
    Eg:
    333,khdfs, "Company name", 87348, " Name1, Name2"
    In this scenario, the last field in quotes gets split into 2. I cannot handle this in the program because the last field does not always contain a comma. Any suggestions?

    Hi Suker,
    First remove all the quotes, then split at the comma (,).
    I mean to say --
    REPLACE ALL OCCURRENCES OF '"' IN <string_name> WITH space.
    Now split the string at the comma --
    SPLIT <string_name> AT ',' INTO ...
    Regards
    Pinaki
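Note that removing the quotes and then splitting will still break any field that legitimately contains a comma (like " Name1, Name2" above). In a language with a CSV library, a quote-aware parser handles this directly; for comparison, a minimal sketch in Python:

```python
import csv
import io

# A quote-aware parse keeps " Name1, Name2" as one field,
# which a plain split at the comma cannot do.
line = '333,khdfs, "Company name", 87348, " Name1, Name2"'
row = next(csv.reader(io.StringIO(line), skipinitialspace=True))
print(row)  # ['333', 'khdfs', 'Company name', '87348', ' Name1, Name2']
```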

  • Csv file loading issue

    I am trying to load data from a CSV file into an Oracle table.
    The interface executes successfully, but the problem is:
    there are 501 rows in the original CSV file.
    When I load it in the file model, it shows only 260 rows.
    What is the problem? Why are not all of the rows loaded?

    Just forget about the interface.
    I am creating a new datastore of file type.
    In the resource name, I am giving my file's path.
    When I reverse-engineer it and check the data, it shows only 260 rows.
    But there are 501 records in my CSV file.

  • Excel 2007 csv file formatting issue

    Our users create .csv files for upload to SAP. Their habit is to include a number of blank lines in Excel to make the sheet more readable.
    In Excel 2003, blank lines were handled as, literally, blank lines, and opening the file in a text editor shows exactly that: a blank line (with a CR-LF character to terminate the row).
    In Excel 2007, however, a blank line consists of a number of commas equal to the number of columns, followed by the CR-LF termination. Hope that makes sense.
    While the 2003-generated .CSVs are fine, the 2007 versions cause SAP to throw an exception ("Session never created from RFBIBL00") and the upload fails. The question therefore is: has anyone ever come across anything similar, or is anyone aware of any remediation that might be possible? I haven't been able to find any documentation on this Excel 2003-2007 change, so I am not able to address the issue through Excel configuration.
    Thanks!
    Duncan

    Hello
    Please refer to the consulting note 76016 which will provide information on the performance of the standard program
    RFBIBL00.
    Regards.

  • CSV file generation issue

    Hello All,
    We are facing the below issue during CSV file generation:
    The generated file shows the field value as 8.73E+11 in the output, and when we click inside this column, the result shown is an approximation of the correct value, like 873684000000. We wish to view the correct value 872684000013.
    The values passed from the report program during file generation are correct.
    Please advise how to resolve this issue.
    Thanks in Advance.

    There is nothing wrong in your program; it is a property of Excel that if the value in a cell is wider than the
    default cell size, it is shown in that format in the output.
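When the CSV is only ever opened in Excel, one common workaround is to emit the long value as a text formula so Excel does not reformat it. A minimal Python sketch of the idea (the field names are placeholders; whether this is acceptable depends on the downstream consumer):

```python
import csv

# Writing a long numeric ID as ="..." makes Excel treat it as text,
# so it displays 872684000013 instead of 8.73E+11.
with open("out.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["Material", "Value"])
    w.writerow(["MAT01", '="872684000013"'])
```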

  • BULK INSERT from a text (.csv) file - read only specific columns.

    I am using Microsoft SQL 2005; I need to do a BULK INSERT from a .csv I just downloaded from PayPal. I can't edit some of the columns that are given in the report. I am trying to load specific columns from the file.
    bulk insert Orders
    FROM 'C:\Users\*******\Desktop\DownloadURL123.csv'
    WITH
    (
        FIELDTERMINATOR = ',',
        FIRSTROW = 2,
        ROWTERMINATOR = '\n'
    )
    So where would I state which column names (from row #1 of the .csv file) map to which specific columns in the table?
    I saw this on one of the sites, which seemed to guide me towards the answer, but I failed; here you go, it might help you:
    FORMATFILE [ = 'format_file_path' ]
    Specifies the full path of a format file. A format file describes the data file that contains stored responses created using the bcp utility on the same table or view. The format file should be used in cases in which:
    The data file contains greater or fewer columns than the table or view.
    The columns are in a different order.
    The column delimiters vary.
    There are other changes in the data format. Format files are usually created by using the bcp utility and modified with a text editor as needed. For more information, see bcp Utility.

    Date, Time, Time Zone, Name, Type, Status, Currency, Gross, Fee, Net, From Email Address, To Email Address, Transaction ID, Item Title, Item ID, Buyer ID, Item URL, Closing Date, Reference Txn ID, Receipt ID,
    "04/22/07", "12:00:21", "PDT", "Test", "Payment Received", "Cleared", "USD", "321", "2.32", "3213', "[email protected]", "[email protected]", "", "testing", "392302", "jdal32", "http://ddd.com", "04/22/03", "", "",
    "04/22/07", "12:00:21", "PDT", "Test", "Payment Received", "Cleared", "USD", "321", "2.32", "3213', "[email protected]", "[email protected]", "", "testing", "392932930302", "jejsl32", "http://ddd.com", "04/22/03", "", "",
    Do you need more than 2 rows? I did not include all the columns from the actual csv file, but most of them. I am planning on taking these specific columns into the first table: date, to email address, transaction ID, item title, item ID, buyer ID, item URL.
    The other table, I don't have any values from here because I did not list them, but if you do this for me I could probably figure the other table out.
    Thank you very much.
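An alternative to a format file, if pre-processing the download is allowed, is to trim the file to just the wanted columns before the BULK INSERT. A hedged Python sketch (the column names follow the header row shown above; `trim_columns` is an illustrative helper, not part of any library):

```python
import csv

# Columns wanted in the Orders table, selected by the header names
# in row 1 of the PayPal download.
WANTED = ["Date", "To Email Address", "Transaction ID", "Item Title",
          "Item ID", "Buyer ID", "Item URL"]

def trim_columns(src, dst, wanted):
    """Copy src to dst keeping only the named columns."""
    with open(src, newline="") as fin, open(dst, "w", newline="") as fout:
        reader = csv.DictReader(fin, skipinitialspace=True)
        writer = csv.DictWriter(fout, fieldnames=wanted)
        writer.writeheader()
        for record in reader:
            writer.writerow({k: record.get(k, "") for k in wanted})
```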

  • Csv file reading and voltage and current plotting with respect to time samples XY plotting

    Hello,
    I've been struggling with reading a comma-separated value (csv) file from another instrument (attached). I need to plot this data for analysis. I have 5 columns of data with a number of rows; the first three rows are the information of the measurement. I want to read the 4th row as strings and the rest of the rows as numbers. I want to plot the 2nd column (i1) with respect to TIMESTAMP, and the 4th column (u2) wrt TIMESTAMP. And finally plot i1 (x-axis) vs. u2 (y-axis) in LabVIEW. Could anyone help me?
    In Excel it is so easy to plot, but I don't know how it is done in LabVIEW.
    Attachments:
    labview forum test.csv ‏30 KB
    excel plot.jpg ‏88 KB

    Start by opening the file.  Then use the Read Text File function.  You can right-click on it and configure it to read lines.  First make it read 3 lines (this is your extra header data).  Then make it read a single line.  That will give you your channel names.  Then read the rest of the file (disable the read by line and wire a -1 into the number of bytes to read).  Then use the Spreadsheet String to Array function to give you your data.
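For reference, the same sequence of steps (skip the three header lines, take the channel-name line, parse the rest as numbers) in a minimal Python sketch, assuming the file layout described in the question:

```python
def read_instrument_csv(path):
    """Return (header_info, channel_names, numeric_rows) for the file."""
    with open(path) as f:
        lines = f.read().splitlines()
    header_info = lines[:3]               # first three rows: measurement info
    channels = lines[3].split(",")        # fourth row: channel names as strings
    data = [[float(v) for v in line.split(",")]
            for line in lines[4:] if line]  # remaining rows: numbers
    return header_info, channels, data
```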
    I would recommend going through the LabVIEW tutorials if you are really new at this.
    LabVIEW Basics
    LabVIEW 101
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions
    Unofficial Forum Rules and Guidelines

  • CSV file reading using UTL_FILE at run time

    Hi,
    I have to read CSV files using UTL_FILE,
    but the folder contains many CSV files.
    I don't know their names, so I have to discover the CSV files at run time.
    Please let me know how we should achieve this.
    Thanks

    Place the following in a shell script, say "list_my_files.ksh":
    ls -l > my_file_list.dat
    Then run the shell script using dbms_scheduler:
    begin
    dbms_scheduler.create_program (program_name   => 'a_test_proc'
                                  ,program_type   => 'EXECUTABLE'
                                  ,program_action => '/home/bluefrog/list_my_files.ksh'
                                  ,number_of_arguments => 0
                                  ,enabled => true);
    end;
    /
    Then open "my_file_list.dat" using UTL_FILE, read all the file names and choose the one you require.
    P;
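The same idea - list the directory first, then process each file whose name you did not know in advance - looks like this in a minimal Python sketch (the folder path would be a placeholder):

```python
import glob
import os

# Discover every CSV in the folder at run time, without knowing
# the file names in advance.
def list_csv_files(folder):
    return sorted(os.path.basename(p)
                  for p in glob.glob(os.path.join(folder, "*.csv")))
```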

  • CSV file reading

    Hi all,
    I got this CSV parser from the net. It is giving a runtime error: "IO FILE Exception".
    There are actually 3 files included in it.
    CSVFile
    import java.util.ArrayList;
    import java.io.BufferedReader;
    import java.io.FileReader;

    /** Holds the file object of records. */
    public class CSVFile {
        /** ArrayList of records, each entry containing a single record. */
        private ArrayList records = new ArrayList();
        /** What to replace a row delimiter with, on output. */
        private String replacementForRowDelimiterInTextField = " "; // Change if needed.
        /** Debug, > 0 for output. */
        public int debug = 5;
        private boolean debugLoading = true; // true when debugging load cycle

        /**
         * Return the required record.
         * @param index the index of the required record
         * @return a CSVRecord, see #CSVRecord
         */
        public CSVRecord getRecord(int index) {
            if (this.debug > 3 && !debugLoading) {
                System.err.println("CSVFile getRecord [" + index + "]"
                        + ((CSVRecord) this.records.get(index)).getFields(3));
            }
            return (CSVRecord) this.records.get(index);
        }

        /**
         * Get the number of records in the file.
         * @return 1-based count of records.
         */
        public int count() {
            return this.records.size();
        }

        // ----- Constructor -----
        /**
         * Constructor; create a file object.
         * @param details a propertyFile object, see #propertyFile
         * @param csvFile filename of the csv file
         */
        public CSVFile(propertyFile details, String csvFile) {
            try {
                BufferedReader reader = new BufferedReader(new FileReader(csvFile));
                StringBuffer buf = new StringBuffer();
                String text;
                try {
                    while ((text = reader.readLine()) != null)
                        buf.append(text + "\n");
                    reader.close();
                } catch (java.io.IOException e) {
                    System.err.println("Unable to read from csv file " + csvFile);
                    System.exit(2);
                }
                String buffer = buf.toString();
                buffer = buffer.replaceAll("&amp;", "&");
                buffer = buffer.replaceAll("&lt;", "<");
                boolean inQuote = false;
                String savedRecord = "";
                String curRecord = "";
                if (debug > 2) {
                    System.err.println("csvFile: setup");
                    System.err.println("Read in from src CSV file");
                }
                // Split entire input file into records, using the row delimiter.
                String records[] = buffer.split(details.rowDelimiter());
                // Iterate over each split, looking for incomplete quoted strings.
                for (int rec = 0; rec < records.length; rec++) {
                    curRecord = savedRecord + records[rec];
                    if (debug > 4) {
                        System.out.println("csvFile: saved rec " + savedRecord);
                        System.out.println("csvFile: current rec " + curRecord);
                        System.out.println("csvFile: currRecLth: " + curRecord.length());
                    }
                    for (int i = 0; i < curRecord.length(); i++) {
                        char ch = curRecord.charAt(i);
                        char prev = (i != 0 ? curRecord.charAt(i - 1) : ' ');
                        char nxt = (i < (curRecord.length() - 2) ? curRecord.charAt(i + 1) : ' ');
                        if (!inQuote && ch == '"') {
                            inQuote = true;
                        } else if (inQuote && ch == '"') {
                            if (i + 1 < curRecord.length())
                                inQuote = (nxt == '"') || (prev == '"');
                            else
                                inQuote = false;
                        }
                    }
                    if (inQuote) {
                        // A space is currently used to replace the row delimiter
                        // when found within a text field.
                        savedRecord = curRecord + replacementForRowDelimiterInTextField;
                        inQuote = false;
                    } else {
                        this.records.add(new CSVRecord(details, curRecord));
                        savedRecord = "";
                    }
                }
            } catch (java.io.FileNotFoundException e) {
                System.out.println("Unable to read CSV file, quitting");
                System.exit(2);
            }
        }

        // ----- Private Methods -----
        private String[] SplitText(String textIn, String splitString) {
            return textIn.split(splitString);
        }

        /**
         * Get all records in the csv file.
         * @return array of CSVRecords, see #CSVRecord
         */
        public CSVRecord[] GetAllRecords() {
            CSVRecord[] allRecords = new CSVRecord[this.records.size()];
            for (int i = 0; i < this.records.size(); i++)
                allRecords[i] = (CSVRecord) this.records.get(i);
            return allRecords;
        }

        public static void main(String args[]) {
            propertyFile path = new propertyFile("C:\\bea\\jdk142_05\\bin");
            CSVFile a = new CSVFile(path, "C:\\bea\\jdk142_05\\bin\\xxx.csv");
        }
    }
    CSVRecord
    import java.util.ArrayList;

    /** Represents a single record of a CSV file. */
    public class CSVRecord {
        /** Debug. */
        private int debug = 0;
        /** ArrayList of the fields of the record. */
        private ArrayList fields = new ArrayList();

        /**
         * Get the field at the given index.
         * @param index of the field required
         * @return String value of that field
         */
        public String getFields(int index) {
            if (index < fields.size())
                return (String) this.fields.get(index);
            else
                return "";
        }

        /**
         * Get the number of fields.
         * @return int number of fields in this record
         */
        public int count() {
            return this.fields.size();
        }

        /**
         * Create a csv record from the input String, using the propertyFile.
         * @param details the property file
         * @see <a href="propertyFile.html">propertyFile</a>
         * @param recordText the record to be added to the arraylist of records
         */
        public CSVRecord(propertyFile details, String recordText) {
            boolean inQuote = false;   // true if within a quote
            String savedField = "";    // temp saved field value
            String curField = "";      // current field value
            String field = "";         // field being built
            // Split the record according to the field delimiter.
            // The default String.split() is not accurate according to the M$ view.
            String records[] = recordText.split(details.fieldDelimiter());
            for (int rec = 0; rec < records.length; rec++) {
                field = records[rec];
                // Add this field to the currently saved field.
                curField = savedField + field;
                // Iterate over the current field.
                for (int i = 0; i < curField.length(); i++) {
                    char ch = curField.charAt(i); // current char
                    char nxt = ((i == curField.length() - 1)
                            ? ' ' : curField.charAt(i + 1)); // next char
                    char prev = (i == 0 ? ' ' : curField.charAt(i - 1)); // prev char
                    if (!inQuote && ch == '"') {
                        inQuote = true;
                    } else if (inQuote && ch == '"') {
                        if ((i + 1) < curField.length())
                            inQuote = (nxt == '"') || (prev == '"');
                        else
                            inQuote = (prev == '"');
                    }
                } // end of current field
                if (inQuote) {
                    savedField = curField + details.fieldDelimiter() + " ";
                    inQuote = false;
                } else if (!inQuote && curField.length() > 0) {
                    char ch = curField.charAt(0);                      // first char
                    char lst = curField.charAt(curField.length() - 1); // last char
                    if (ch == '"' && lst == '"') {
                        // Strip leading and trailing quotes
                        curField = curField.substring(1, curField.length() - 1);
                        curField = curField.replaceAll("\"\"", "\"");
                    }
                    this.fields.add(curField);
                    savedField = "";
                } else if (curField.length() == 0) {
                    this.fields.add("");
                }
                if (debug > 2)
                    System.out.println("csvRec Added: " + curField);
            } // end of for each record
        }
    }
    propertyFile
    import java.util.ArrayList;
    import java.io.BufferedReader;
    import java.io.FileReader;

    /** This class holds the data from a property file. */
    public class propertyFile {
        // ----- Private Fields -----
        /** Comments from the file. */
        private String comment;
        /** Delimiter for individual fields. */
        private String fieldDelimiter; // was char
        /** Delimiter for each row. */
        private String rowDelimiter;
        /** Root element to use for output XML. */
        private String xmlRootName;
        /** Element to use for each row. */
        private String recordName;
        /** How many fields there are - Note: this is 1-based, not zero-based. */
        private int fieldCount;
        /** Array of fields. */
        private ArrayList fields = new ArrayList(88);
        /** Set to an int > 0 for debug output. */
        private int debug = 0;

        /**
         * A single instance of this will hold all the relevant details for ONE property file.
         * @param filePath String name of the property file.
         */
        public propertyFile(String filePath) {
            try {
                BufferedReader reader = new BufferedReader(new FileReader(filePath));
                String line = null;
                while ((line = reader.readLine()) != null) {
                    if (line.length() != 0) { // was != ""
                        if (debug > 0)
                            System.err.println("String is: " + line + " lth: " + line.length());
                        if (line.charAt(0) != '[' && !(line.startsWith("//"))) {
                            String propertyValue = line.split("=")[1];
                            // Assign Comment
                            if (line.toUpperCase().startsWith("COMMENT="))
                                this.comment = propertyValue;
                            // Assign Field Delimiter
                            if (line.toUpperCase().startsWith("FIELDDELIMITER"))
                                this.fieldDelimiter = propertyValue.substring(0);
                            // Assign Row Delimiter
                            if (line.toUpperCase().startsWith("ROWDELIMITER")) {
                                if (propertyValue.substring(0, 1).equals("\\")
                                        && propertyValue.toUpperCase().charAt(1) == 'N')
                                    this.rowDelimiter = "\r\n";
                                else
                                    this.rowDelimiter = propertyValue;
                            }
                            // Assign Root Document Name
                            if (line.toUpperCase().startsWith("ROOTNAME"))
                                this.xmlRootName = propertyValue;
                            // Assign Record Name
                            if (line.toUpperCase().startsWith("RECORDNAME"))
                                this.recordName = propertyValue;
                            // Assign Field Count
                            if (line.toUpperCase().startsWith("FIELDS"))
                                this.fieldCount = Integer.parseInt(propertyValue);
                        } else {
                            if (line.toUpperCase().startsWith("[FIELDS]")) {
                                while ((line = reader.readLine()) != null) {
                                    if (line.length() == 0)
                                        break;
                                    if (debug > 0)
                                        System.err.println("Adding: " + line.split("=")[1]);
                                    this.fields.add(line.split("=")[1]);
                                }
                                break;
                            }
                        }
                    }
                }
                reader.close();
            } catch (java.io.IOException e) {
                System.out.println("**** IO Error on input file. Quitting");
                System.exit(2);
            }
        }

        /**
         * Return the comment in the property file.
         * @return String, the comment value, if any
         */
        public String comment() {
            return this.comment;
        }

        /**
         * The delimiter to be used for each field, often comma.
         * @return String, the character(s)
         */
        public String fieldDelimiter() {
            return this.fieldDelimiter;
        }

        /**
         * Row delimiter - often '\n'.
         * @return String, the character(s)
         */
        public String rowDelimiter() {
            return this.rowDelimiter;
        }

        /**
         * The XML document root node.
         * @return String, the element name
         */
        public String XMLRootName() {
            return this.xmlRootName;
        }

        /** The node name for each record. */
        public String recordName() {
            return this.recordName;
        }

        /**
         * Number of fields per record/node.
         * @return integer count of the number of fields, 1-based.
         */
        public int fields() {
            return this.fieldCount;
        }

        // ----- Public Methods -----
        /**
         * The value of the nth field, 0-based.
         * @param index which field to return
         * @return String the field value
         */
        public String fieldNames(int index) {
            if (index < this.fields.size()) {
                return (String) this.fields.get(index); // was .toString()
            } else {
                System.err.println("PropertyFile: Trying to get idx of: "
                        + index
                        + "\n when only "
                        + this.fieldCount
                        + " available");
                System.exit(2);
            }
            return "";
        }

        /**
         * Test entry point for this class.
         * @param argv cmd line arg of the property file
         */
        public static void main(String argv[]) {
            if (argv.length != 1) {
                System.out.println("propertyFile <file>");
                System.exit(1);
            }
            propertyFile p = new propertyFile(argv[0]);
        }
    }
    Please help, as I am a novice in file handling, especially CSV files.

    > **** IO Error on input file. Quitting
    > Press any key to continue . . .
    Ok, no compiler error, but it seems that the filePath (the String name of the property file) isn't there.

  • Ssrs 2008 export to csv file display issue

    In a new SSRS 2008 report, I would like to know if there is a way to automatically expand the width of some of the columns when the data is exported to a CSV file, so the data is displayed correctly. Here are examples of what I am referring to:
    1. In one column where there is supposed to be a date that looks like 12/11/2014, the value ########## is displayed. The value 12/11/2014 is what is set up in the SSRS formatting option.
    2. In a number field that is supposed to look like 6039267049, the value displayed is 6E+09.
    Basically, if I manually expand the width of the columns referred to above, the data is displayed correctly. So can you tell me what I can do so that the data is displayed correctly in the CSV file and the user does not need to manually expand
    the columns to see the data?

    Hi Wendy,
    After testing the issue in my local environment, I can reproduce it when using Excel to open the csv file. As per my understanding, there is no column width when we export a report to csv; Excel just uses the default cell sizes when we open the csv. So when
    a date value is wider than the default cell size, it is displayed as ##########. When a number value is wider than the default cell size, Excel uses Scientific format.
    As to the date value, we can use the expression =cstr(Fields!Date.Value) to replace the former one, =Fields!Date.Value. In this way, the value is treated as text, so that the date value can be displayed correctly. For the
    number value, we can select all the cells in the csv file, then click Format | AutoFit Column Width to change all the cell widths to fit their corresponding values at the Excel level.
    Besides, we can try to export to Excel instead of csv. The Excel format can inherit the column widths in the report, so in this way we can directly change the widths to fit the values at the Reporting Services level.
    Hope this helps.
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • How can I have a csv file read from excel to labview.

    Hi,
    I would like to read multiple csv files from Excel into LabVIEW, creating a duplicate of the tables in Excel, which would allow me to then draw some graphs for data analysis and comparison between the two.
    Are there any examples that could be useful for what I am trying to do?
    Thanks

    Patel33 wrote:
    From one of the csv files, I only require 3 of the columns. Is there a way to only read that part of the csv file?
    No. The characters in a file are just one long string, and delimiters and linefeeds are special characters that define where fields and lines start and end. As such, columns are interleaved in the file and consist of many small sections, where each position depends on the number of characters in each field, which is typically variable. You really need to read the entire file, then only look at the interesting columns afterwards.
    LabVIEW Champion . Do more with less code and in less time .

  • Spool output to .csv file - having issues with data display

    Hi,
    I need to deliver the output of a select query, which has around 80000 records, to a .csv file. A procedure is written for the select query, and the procedure is called in the spool script. But a few of the columns have a comma (,) in their values. For example, there is a personal_name column in the select query which holds a name such as "James, Ed". The output is then split across different columns, and the data is shifted to the right for the remaining columns.
    Could someone help fix this issue? I mainly used a procedure because the select query is about three pages long, and I want the script to look clean.
    Script is,
    set AUTOPRINT ON ;
    set heading ON;
    set TRIMSPOOL ON ;
    set colsep ',' ;
    set linesize 1000 ;
    set PAGESIZE 80000 ;
    variable main_cursor refcursor;
    set escape /
    spool C:\documents\querys\personal_info.csv
    EXEC proc_personal_info(:main_cursor);
    spool off;

    Hi,
    set PAGESIZE 80000 ; is not valid, and SQL*Plus will print the header by default every 14 rows.
    You can avoid printing the header this way:
    set AUTOPRINT ON ;
    set heading ON;
    set TRIMSPOOL ON ;
    set colsep ',' ;
    set linesize 1000 ;
    set PAGESIZE 0 ;
    set escape /
    set feedback off
    spool c:\temp\empspool.csv
      SELECT '"'||ename||'"', '"'||job||'"'
      FROM emp;
    spool off
    The output will look like this in this case:
    "SMITH"     ,"CLERK"
    "ALLEN"     ,"SALESMAN"
    "WARD"      ,"SALESMAN"
    "JONES"     ,"MANAGER"
    "MARTIN"    ,"SALESMAN"
    "BLAKE"     ,"MANAGER"
    "CLARK"     ,"MANAGER"
    "SCOTT"     ,"ANALYST"
    "KING"      ,"PRESIDENT"
    "TURNER"    ,"SALESMAN"
    "ADAMS"     ,"CLERK"
    "JAMES"     ,"CLERK"
    "FORD"      ,"ANALYST"
    "MILLER"    ,"CLERK"You can also consider creating a unique column by concatenating the columns in this way:
    spool c:\temp\empspool.csv
      SELECT '"'||ename||'","'||job||'"'
      FROM emp;
    spool off
    In this case the output will have no spaces between columns:
    "SMITH","CLERK"
    "ALLEN","SALESMAN"
    "WARD","SALESMAN"
    "JONES","MANAGER"
    "MARTIN","SALESMAN"
    "BLAKE","MANAGER"
    "CLARK","MANAGER"
    "SCOTT","ANALYST"
    "KING","PRESIDENT"
    "TURNER","SALESMAN"
    "ADAMS","CLERK"
    "JAMES","CLERK"
    "FORD","ANALYST"
    "MILLER","CLERK"Regards.
    Al
    Edited by: Alberto Faenza on May 2, 2013 5:48 PM
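The quote-wrapping shown above is exactly what CSV libraries automate: quoting every field keeps embedded commas such as "James, Ed" in one column. A minimal Python sketch for comparison:

```python
import csv

# Quote every field so values containing commas stay in one column.
rows = [("SMITH", "CLERK"), ("James, Ed", "MANAGER")]
with open("empspool.csv", "w", newline="") as f:
    csv.writer(f, quoting=csv.QUOTE_ALL).writerows(rows)
# empspool.csv now contains:
# "SMITH","CLERK"
# "James, Ed","MANAGER"
```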

  • BO 4.0 save as csv file format issue

    Hi All,
    We are using BO 4.0 WebI for reporting on a SAP BW 7.3 system. Some of our reports have to be scheduled to deliver the output in CSV file format. When I schedule a report in CSV format, the final output has data in two sets. The first set has the list of columns which I have selected in my report. Starting on the next row, I see a second set of data with all the objects selected in the query panel, including the detail objects.
    We only need the data for the columns selected in the report, but it is bringing a dump of all the objects in the data provider.
    Can anyone tell me how to get rid of the second set of data in the same csv file?
    Thanks,
    Prasad

    Hi,
    CSV format is reserved for 'data only' data-provider dumps: it exports the entire Webi microcube (the query results).
    You don't get that option when going 'Save as' - which preserves the report formatting - in which case you should consider .xls instead.
    regards,
    H
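    If switching to .xls is not an option for the scheduled job, one conceivable workaround is to post-process the exported CSV and keep only the first record set. The helper below is purely a sketch under the assumption that the microcube dump following the report rows has a different (wider) column count than the report rows themselves - verify that against a real export before relying on it:

```python
def keep_first_record_set(rows):
    """Keep only the leading rows whose column count matches the first
    (report) row.  Hypothetical heuristic: the full microcube dump that
    follows is assumed to have a different, wider column count."""
    if not rows:
        return []
    width = len(rows[0])
    first = []
    for row in rows:
        if len(row) != width:
            break  # the second data set starts here
        first.append(row)
    return first

# Usage sketch with placeholder data (in practice, read the rows with
# csv.reader from the scheduled export and write the result back):
rows = [["Year", "Sales"], ["2012", "100"],
        ["Year", "Sales", "Store", "Region"],  # start of the full dump
        ["2012", "100", "S1", "North"]]
print(keep_first_record_set(rows))  # [['Year', 'Sales'], ['2012', '100']]
```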

  • JCA flat file read issue

    Hi,
    We need to read a flat file, transform it to the destination xml format, and then send it to the destination file location.
    Steps we have done:
    1. Create JCA adapter and configure the flat file schema
    2. Create proxy based on the jca
    3. create transformation for the source to target schema (this has no namespace)
    4. Create BS for sending the output
    Everything works as expected when testing from the OSB test console. But when the file is placed in the source folder, the output xml has the namespace xmlns:soap-env="http://schemas.xmlsoap.org/soap/envelope/" on the root node.
    e.g,
    <Root xmlns:soap-env="http://schemas.xmlsoap.org/soap/envelope/" >
    <Child>
    <Child1/>
    <Child2/>
    <Child3/>
    </Child>
    </Root>
    But expected output is
    <Root>
    <Child>
    <Child1/>
    <Child2/>
    <Child3/>
    </Child>
    </Root>
    We tried converting the xml to a string and then replacing the xmlns:soap-env="http://schemas.xmlsoap.org/soap/envelope/" value with a blank.
    We also tried a hard-coded xml using an assign action instead of the transformation; even the hard-coded xml has the namespace on the root node.
    But the end system is failing due to this namespace value.
    Help me to resolve the issue.
    Thanks,
    Vinoth
    Edited by: Vinoth on Jun 8, 2011 10:12 PM

    Ideally your end system should not fail if you declare any number of namespace prefixes in the XML, as long as they are not used in elements or attributes within it. They should try to resolve this on their side.
    But to see what's going on in OSB, can you please log the $body variable before the publish action and paste the content here? Or, if possible, send the sbconfig of the proxy and business service to the email mentioned in my profile.
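    The first point is easy to demonstrate outside OSB: a namespace prefix that is declared but never used does not change any element or attribute name, and a namespace-aware parser will typically drop it on re-serialization. A small illustrative sketch in Python (a demonstration of the XML semantics, not an OSB fix):

```python
import xml.etree.ElementTree as ET

# The payload as produced by the proxy: the soap-env prefix is declared
# on the root but never used by any element or attribute.
payload = (
    '<Root xmlns:soap-env="http://schemas.xmlsoap.org/soap/envelope/">'
    '<Child><Child1/><Child2/><Child3/></Child></Root>'
)

root = ET.fromstring(payload)

# Element names are unaffected by the unused declaration...
print(root.tag)        # Root
print(root[0][0].tag)  # Child1

# ...and re-serializing drops the declaration entirely, because
# ElementTree only emits namespaces that are actually in use.
print(ET.tostring(root, encoding="unicode"))
```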

Maybe you are looking for

  • Issue of Create Additional Payments

    Hi, When we maintain the HR master Data through PA30 (without insert any date), with infotype 15- Additional Payment and click on create button. Then see Date of origin under Additional Payments, this is showing 30.04.2010. From where this date is pi

  • MAX_PATH length in windows server 2012 appears to be 255 Characters still

    I understand that NTFS has a 32k character limit, however windows API only supports 255 characters. I was reading that server 2012 was to do away with the 255 character limit and embrace NTFS 32k limit. What I am finding when creating a long path nam

  • Function Module SAP_CONVERT_TO_TEX_FORMAT

    Dear ABAPers, I find function SAP_CONVERT_TO_TEX_FORMAT module really useful to concatenate fields in internal table into one text delimited with character. But somehow I found that this function module delete space in the data. For example if I have

  • Reg Data Import

    Hi, Can any one let me know how can i import the data for BP or Items. I have tried to import for BP. Below are the steps I followed Administration>Data Import/Export>Data Import-->Import from Excel I have selected the bp code and bp name then Ok. A

  • Change sales order values from Inbound Idoc

    Hi all,      In my scenario i have to change sales document automatically(Workflow) by getting the values from the inbound idoc ordchg (one more thing i dont want to create another sales document). Is there is any function module? please tell me how