Exporting to CSV adding NULL as last line

I am having an issue while exporting data to a CSV file. I can see no data issue when running a SELECT statement in the query window; all 105 rows come back correctly. But when I export to CSV, it adds NULL 105 times as rows at the end. I don't understand why, or where it comes from.

I couldn't quite understand the issue. NULL values appear only at the last lines?
If you don't want NULL in your CSV, use the ISNULL function in your query, e.g. ISNULL(Col, ''), so that it returns a blank instead.
Also, you didn't mention how you are exporting to CSV. You have several options here:
Exporting SQL Server Data with Import and Export Wizard
Creating CSV Files Using BCP and Stored Procedures
SSMS: Export query result to .csv file in SQL Server 2008
-Vaibhav Chaudhari
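If you would rather not touch the query, the same blanking can be done on the exported file after the fact. A rough Python sketch (the literal "NULL" matching is an assumption about how the export renders null values):

```python
import csv
import io

def blank_out_nulls(csv_text):
    """Rewrite CSV text, replacing literal NULL cells with empty strings --
    the post-export equivalent of ISNULL(Col, '') in the query itself."""
    out = io.StringIO()
    writer = csv.writer(out)
    for row in csv.reader(io.StringIO(csv_text)):
        writer.writerow(["" if cell == "NULL" else cell for cell in row])
    return out.getvalue()
```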

Similar Messages

  • Exporting Incoming Payments to Excel - Missing last line

    Forum,
    On 8.8 PL15 has anyone come across the following problem:
    When in Incoming Payments for customers, I click on the Excel icon within SAP and save the file as a .TXT file.
    I then open this up in excel and notice the last line is missing from the excel spreadsheet. This is irrespective of how many lines are on the Incoming Payments screen within SAP.
    Regards,
    Juan

    Hi Juan,
It seems like a bug in PL-15.
    check Cant export the last data in datagrid to excel
    Thanks,
    Neetu

  • Adding sysdate to filename  when exporting to csv in Apex V3.0.1 using IE6

    Hi,
    I have created an application item (AI_DTTIME), together with an application computation that fires on before-header that is set-up as follows:
    TO_CHAR(sysdate, 'DDMMYYYY HH24:MI') ||'.'
Within the report attributes on a page under "Report Export", within the filename, I assigned &AI_DTTIME.
When I go to test it by clicking on the link label, it seems to assign a number to the filename, such as 40.csv. Each time I click on the link label, the filename value simply increments to the next value, i.e. 41.csv.
Can anyone see what I may be doing wrong, or how to apply the sysdate to a filename under "Report Export"?
    Thanks.
    Tony.

    Hi Tony, Hi Andy
I have similar functionality to achieve: adding sysdate to the filename when exporting to CSV, plus whether the CSV came from dev or prod, the URL of the report, and that it is powered by xyz company.
I created an application item the same as Andy did, and an application computation with computation item AI_DTTIME, computation point "Before Header", computation type "Static Assignment", and computation TO_CHAR(sysdate, 'DDMMYYYY HH24MI') ||'.'
After that, in the report export filename, I added a reference to my application item, i.e. &AI_DTTIME.
But when I try to export my report to CSV, the header comes out as " TO_CHAR(sysdate,'DDMMYYYY HH24MI')_' ".
Can you suggest where I am going wrong?
    Regards
    Adi
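For what it's worth, the filename the computation is aiming for can be sanity-checked outside APEX. A Python sketch of the same TO_CHAR format (the "report" prefix is made up for illustration):

```python
from datetime import datetime

def export_filename(prefix="report"):
    """Mirror TO_CHAR(sysdate, 'DDMMYYYY HH24MI') || '.csv' for the
    Report Export filename."""
    return prefix + "_" + datetime.now().strftime("%d%m%Y %H%M") + ".csv"
```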

  • How to upload CSV file w/o ,, for the last line

    Hi Experts,
As part of one interface I am uploading the CSV file (4 columns) manually using transaction CSCA_FILE_COPY.
After uploading, when I view the file in AL11, since it is a CSV file every empty field is filled with a comma.
My issue: while running the interface, its logic checks the hash total in the last line, where only 2 items (record count, hash total) should appear, but because of the manual upload two extra empty fields are added (record count, hash total, , ,).
The program then goes no further, because it always compares the amounts (1000.00 does not match 1000.00, , ) and so does not process. In production the files are loaded directly onto the Unix server, so there is no issue there, and I can't ask for the program code to be changed, because it is my process that deviates. For testing I have to do it manually.
Please suggest a workaround for this.
Is it possible to change the file contents directly in AL11? If so, how?
Or: how can I create a CSV file without the , , in the last line (maybe it is not possible)?
    Thanks for your replies in advance.
    Regards
    VVR

I don't think you can edit from AL11, nor create a CSV without those commas at the end, because they are written to mark fields with empty values.
You have 6 columns filled with values, but in the last line you only fill the first four, so , , is expected for the last 2 columns.
I can only suggest fixing the file manually before uploading it... but you say you can't change the program.
How about uploading all the CSV files, then writing another program to read them from AL11 and delete the spare commas on the last line?
    Cheers,
    Andres.
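That last suggestion, a small cleanup program that strips the spare commas from the final line after upload, could look roughly like this in Python (the file handling is illustrative; a real version would read the file from AL11):

```python
def strip_trailing_commas_last_line(path):
    """Remove the padding commas the upload appended to the final
    hash-total line, leaving the other lines untouched."""
    with open(path) as f:
        lines = f.read().splitlines()
    if lines:
        lines[-1] = lines[-1].rstrip(",")
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
```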

  • Why first & last lines in each page in pdf is exported to WORD as a Header/footer?

Similarly, when exporting a PDF file to Excel, why do the first and last lines of each page of the PDF not get exported to Excel?

    Hi Michael,
    Installation of PDF Action is done based on this link-
    https://websmp230.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=1109054
    MII Version - 12.0.7 Build(20)
    version 1.4.5. of
    the third-party iText.jar and iTextAsian.jar, from
    http://www.lowagie.com/iText/download.html
    Thanks,
    Soumen

  • Last line of table shifts to right slightly after export to PDF

    Indesign CS4 (6.0.4.578), Windows XP. Last line of some tables shifts very slightly to the right, but only in exported PDFs. See screencap from PDF below. It's not very pronounced, but Xs don't quite line up, and client can see it.
    When I printed to PDF through Acrobat 9 the problem goes away. But I would prefer to export.
    For a one-page sample file formatted in Minion Pro (it happens in differing degrees with any typeface), download
    http://www.pegtype.com/shifttorighttest.zip.
    Any ideas why this is happening?
    Ken Benson

    It's not showing in the ID file, just in the exported PDF (and not in the printed PDF). And it shows in the PDF you made too, you just have to look at a higher magnification.
    This is from the PDF you made and emailed to me. See how the bottom X is a little further to the right? Compare foot to foot, not head to foot. Minion X is narrower at the top.
    Acrobat is at 9.3.1. I let the Adobe applications update themselves, so everything should be at the latest point release. No custom dictionary or kerning, though if there was, I would expect the ID file to show the same thing as the PDF.
    Ken Benson

  • When exporting to CSV missing last delimiter

    Post Author: John nutter
    CA Forum: Exporting
After upgrading from Enterprise 9 to Enterprise 11, a number of CSV files no longer include a comma after the last value. I have tried all the file format options, but nothing formats the data as required. Has anyone else come across this problem?

    Post Author: qcheng
    CA Forum: Exporting
I am using XI. When I export to Excel, the width of the last column is never what I formatted. I have to add an empty text object after the last column, so the actual last column becomes the second-to-last and keeps the right format.
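If the missing trailing comma can't be restored through the export options, a post-processing pass is one workaround. A hedged Python sketch (this is not an Enterprise/Crystal setting, just a cleanup step):

```python
def restore_trailing_delimiter(path, delim=","):
    """Re-append the delimiter after the last value on every line,
    matching the Enterprise 9 output the downstream system expects."""
    with open(path) as f:
        lines = f.read().splitlines()
    with open(path, "w") as f:
        for line in lines:
            f.write(line + delim + "\n")
```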

  • Don't Import Last Line In CSV

I'm using PowerShell and Log Parser 2.2 to import CSV data into SQL Server. I've encountered an issue whereby a batch of files has an extra line at the very end (totals) and I don't want it imported. Is it possible to either remove the very last line of the CSVs using PowerShell, or tell Log Parser not to import the last row? I've looked for a switch to do this in Log Parser but couldn't find anything.
    Thanks in advance
    Adam

    Below is a straightforward way to code it.  I'm sure there are more efficient ways.
    (get-content testfile.txt)[0..((get-content testfile.txt).count-2)] | set-content testfile.txt
    I had something very similar initially, but didn't like the idea of having to read the file twice.
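A single-pass variant (sketched here in Python rather than PowerShell) avoids reading the file twice by holding one line of lookbehind:

```python
def copy_without_last_line(src, dst):
    """Copy src to dst, dropping the final (totals) line.
    Reads the input only once by buffering a single line."""
    prev = None
    with open(src) as fin, open(dst, "w") as fout:
        for line in fin:
            if prev is not None:
                fout.write(prev)
            prev = line
```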

  • Xfa.resolveNode is not writing values to the last line

    Hi,
I have an issue with the xfa.resolveNode function. I have marked with a // comment, in the code below, the line where the issue occurs. Please advise what can be done to make the loop work and enter it for the last line.
Any help/comment is highly appreciated!
FormQuoteNotification.bdyMain.frmTableBlock.tblTable.rowTableSection.rowTableItem.colNetValue::initialize - (JavaScript, client)
    var rowCount = tblTable._rowTableSection.count;
    var rowNo = new Array(rowCount);
    var i = 0;
    var j = 0;
    var k = 0;
    var comboType = new Array(rowCount);
    var comboPrice = new Array(rowCount);
    var productID = new Array(rowCount);
    var finalType = new Array();
    var finalPrice = new Array();
    var finalRow = new Array();
    var datasum = 0.000;
for (i = 0; i < rowCount; i++) {
    rowNo[i] = xfa.resolveNode("tblTable.rowTableSection[" + i + "]").rowRemarkRow.frmItem.txtLine.rawValue;
    productID[i] = xfa.resolveNode("tblTable.rowTableSection[" + i + "]").rowRemarkRow.frmItem.txtProduct.rawValue;
    comboPrice[i] = xfa.resolveNode("tblTable.rowTableSection[" + i + "]").rowRemarkRow.frmItem.frmNetValue.decNetValue.rawValue;
    comboType[i] = xfa.resolveNode("tblTable.rowTableSection[" + i + "]").rowRemarkRow.frmItem.txtCombo.rawValue;
}
for (j = 0; j < rowCount; j++) {
    if (j == (rowCount - 1)) { // check if this is the last row
        if (comboType[j] == comboType[j - 1]) {
            finalType[k] = comboType[j];
            datasum += parseFloat(comboPrice[j]);
            finalPrice[k] = datasum;
        } else {
            if (comboType[j] == null || comboType[j] == "") {
                finalType[k] = "NotCombo";
                datasum += parseFloat(comboPrice[j]);
                finalPrice[k] = datasum;
                finalRow[k] = rowNo[j];
                k = k + 1;
                datasum = 0.000;
            } else {
                finalType[k] = comboType[j];
            }
        }
    } else {
        if (comboType[j] == comboType[j + 1]) {
            finalType[k] = comboType[j];
            datasum += parseFloat(comboPrice[j]);
            finalPrice[k] = datasum;
            if (finalRow[k] == "" || finalRow[k] == null) {
                finalRow[k] = rowNo[j];
            }
        } else {
            if (comboType[j] == null || comboType[j] == "") {
                datasum += parseFloat(comboPrice[j]);
                finalType[k] = "NotCombo";
                finalRow[k] = rowNo[j];
                k = k + 1;
                datasum = 0.000;
            } else {
                finalPrice[k] += parseFloat(comboPrice[j]);
                k = k + 1;
                datasum = 0.000;
            }
        }
    }
}
if (finalType[0] != null && finalType[0] != "") {
    for (var n = 0; n < rowCount; n++) {
        for (var m = 0; m < rowCount; m++) {
            // At this line, for the last row, even though it matches the if statement,
            // the script is not entering the loop for the last match.
            // Example: if rowCount = 2, when m = 1 and n = 1 the if statement matches,
            // but "XXX" is simply not displayed for the line n = 2.
            if (xfa.resolveNode("tblTable.rowTableSection[" + n + "]").rowTableItem.colCombo.rawValue == finalType[m]
                && xfa.resolveNode("tblTable.rowTableSection[" + n + "]").rowTableItem.colCombo.rawValue != ""
                && xfa.resolveNode("tblTable.rowTableSection[" + n + "]").rowTableItem.colCombo.rawValue != null) {
                xfa.resolveNode("tblTable.rowTableSection[" + n + "]").rowTableItem.colProduct.rawValue = "XXX";
            }
            // display comboPrice
            if (xfa.resolveNode("tblTable.rowTableSection[" + n + "]").rowTableItem.colLine.rawValue == finalRow[m]
                && xfa.resolveNode("tblTable.rowTableSection[" + n + "]").rowTableItem.colCombo.rawValue == finalType[m]) {
                var pricetodisplay = finalPrice[m].toFixed(3) + " KWD";
                xfa.resolveNode("tblTable.rowTableSection[" + n + "]").rowTableItem.colNetValue.rawValue = pricetodisplay;
                if (xfa.resolveNode("tblTable.rowTableSection[" + n + "]").rowTableItem.colNetValue.rawValue == null
                    || xfa.resolveNode("tblTable.rowTableSection[" + n + "]").rowTableItem.colNetValue.rawValue == " "
                    || isNaN(xfa.resolveNode("tblTable.rowTableSection[" + n + "]").rowTableItem.colNetValue.rawValue)) {
                    this.rawValue = null;
                }
            }
        }
    }
}

    Wouldn't this be better posted in a developers forum? This forum is for Using an iPad, not writing programs for one.
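Leaving the XFA plumbing aside, the grouping the script is attempting — summing prices over consecutive rows that share a combo type, with blank types treated as stand-alone — can be sketched in plain Python (all names are illustrative):

```python
def group_combo_prices(combo_types, prices):
    """Sum prices over runs of consecutive rows sharing a combo type.
    Rows with an empty/None type are treated as stand-alone ("NotCombo")."""
    groups = []            # one (type, total) pair per run
    run_type, run_sum = None, 0.0
    for t, p in zip(combo_types, prices):
        t = t or "NotCombo"
        if t == run_type and t != "NotCombo":
            run_sum += p   # extend the current combo run
        else:
            if run_type is not None:
                groups.append((run_type, run_sum))
            run_type, run_sum = t, p
    if run_type is not None:
        groups.append((run_type, run_sum))  # don't drop the final run
    return groups
```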

  • Data not exporting to CSV

I know this is going to be stupid, and I'm kicking myself for not seeing why it isn't working. But in my ELSE statement, when I pipe the output to a .csv, I am getting blank lines.
It outputs to the console fine; there's just nothing in the .csv file.
Thoughts?
Function get-mdtbuiltcomputers {
    param (
        [string[]]$computername = 'localhost'
    )
    foreach ($comp in $computername) {
        $path = Test-Path "\\$comp\c$\Windows\smsts.ini"
        $connected = Test-Connection -Quiet -Count 2 -Delay 2 -ComputerName $comp
        if ($path -and $connected) {
            $file = Get-ChildItem \\$comp\c$\Windows\smsts.ini
            $Output = [ordered]@{
                'Computername'   = $comp
                'MDT Image Date' = $file.CreationTime
            }
            $obj = New-Object -TypeName psobject -Property $Output
            Write-Output $obj
        }
        else {
            Write-Output "$comp is unavailable or not a Standard MDT Image"
        }
    }
}
    Rich Thompson

    FYI:
    Here is a good way to guarantee consistent object generation.
Function get-mdtbuiltcomputers {
    param (
        [string[]]$computername = 'localhost'
    )
    foreach ($comp in $computername) {
        $p = [ordered]@{
            Computername = $comp
            Status       = 'Unavailable or not a Standard MDT Image'
            MDTImageDate = $null
        }
        $path = Test-Path "\\$comp\c$\Windows\smsts.ini"
        $connected = Test-Connection -Quiet -Count 2 -Delay 2 -ComputerName $comp
        if ($path -and $connected) {
            $file = Get-ChildItem \\$comp\c$\Windows\smsts.ini
            $p.Status = 'Available'
            $p.MDTImageDate = $file.CreationTime
        }
        New-Object PsObject -Property $p
    }
}
    Avoid adding spaces in Property names.
    ¯\_(ツ)_/¯
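The same consistent-object idea — every record carries the full set of columns, filled in or not — looks like this in a Python sketch, where the probe callback stands in for the Test-Path/Test-Connection checks:

```python
import csv
import io

def rows_for(computers, probe):
    """Yield a dict with the SAME keys for every computer, so the CSV
    writer always has complete columns. probe() returns an image date,
    or None when the machine is unreachable."""
    for comp in computers:
        row = {"Computername": comp,
               "Status": "Unavailable or not a Standard MDT Image",
               "MDTImageDate": None}
        stamp = probe(comp)
        if stamp is not None:
            row["Status"] = "Available"
            row["MDTImageDate"] = stamp
        yield row

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["Computername", "Status", "MDTImageDate"])
writer.writeheader()
writer.writerows(rows_for(["pc1", "pc2"], lambda c: "2024-01-01" if c == "pc1" else None))
```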

  • How to remove the last line input into a string indicator??

    I am currently working on a program where the user has the ability to create a profile for one of our lab machines to run.  When the user selects what parameters they would like, a string indicator shows the user the last input they set followed by a comma.  For example, if the user selects which profile they want, and the times they would like to set, the following string would be displayed:
    As you can see in the picture above, the profile is updated in the string indicator after every set button is hit.  In the event the user makes a mistake, I want them to have the ability to delete the last line that was added.
    This is the code I developed so far.  The string coming in and leaving is attached to a shift register so it continues to append to the indicator.  I tried in this case to count how many lines have been written in the string so far, and I built an array with the current string.  Using the delete from array.VI I indexed it to the number of lines that was created so that the last line is deleted.  I unfortunately had no luck.  Any suggestions?

    tbob wrote:
    Fonzie1104 wrote:
    Hey Tbob,
That almost worked. The only issue is that it leaves a null string in its place without moving everything after it up one line. Also, if I have a parameter repeating multiple times, it doesn't seem to work.
    I'm confused.  If you are deleting the last line, how can there be anything after?  What do you mean by a parameter repeating multiple times?
    If you look at my example front panel, you will see that the last line is deleted.  Isn't this what you asked for?
It's due to the fact that your constant doesn't have a comma in the last line. If you add that in, you should be able to reproduce his problem. This is just a guess, though; I didn't actually test it.
    Message Edited by for(imstuck) on 06-08-2010 01:24 PM
    CLA, LabVIEW Versions 2010-2013
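Outside LabVIEW, the string operation being attempted — dropping the last comma-terminated entry — reduces to finding the second-to-last separator. A Python sketch of the same idea:

```python
def remove_last_entry(profile, sep=","):
    """Drop the final entry from a string of sep-terminated entries,
    e.g. "Profile1,30s,60s," -> "Profile1,30s,".
    Assumes every entry, including the last one, ends with sep."""
    trimmed = profile.rstrip(sep)
    idx = trimmed.rfind(sep)
    return profile[: idx + 1] if idx >= 0 else ""
```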

  • 10g exporting to CSV using client_text_io is not working correctly.

I have an odd issue which I could do with some help on. I run a function that exports to CSV based on a predefined record group.
This has been working fine for many months with various customers. Recently a new customer used it; they have 28k rows in the record group, and the export is not exporting correctly.
The record group has a record count of 28331.
The CSV produced has only 3756 rows. These are the last 3756 records in the record group, so it is as if the data is being overwritten as it goes, yet all smaller datasets work.
FUNCTION fun_export_csv (vgraphid NUMBER, p_filename VARCHAR2) RETURN BOOLEAN IS
  out_file  client_text_io.file_type;
  i         NUMBER;
  rg        NUMBER;
  lv_line   VARCHAR2(5000);
BEGIN
  rg := populate_group('RG11_EXP');
  synchronize;
  out_file := client_text_io.fopen(p_filename, 'w');  -- open output file (omitted in the forum paste)
  lv_line := '"GIN","Gin Date","PO Num","PO Required Date","Mat Num","Mat Description","Supplier Part No","On Time Delivery(Yes-1, No-0)"';
  client_text_io.put(out_file, lv_line);
  client_text_io.new_line(out_file, 1);
  FOR i IN 1 .. get_group_row_count('RG11_EXP') LOOP  -- this count is 28331
    lv_line := '"' || get_group_number_cell('RG11_EXP.col1', i)   || '","' ||
               get_group_date_cell('RG11_EXP.grn_date', i)        || '","' ||
               get_group_number_cell('RG11_EXP.po', i)            || '","' ||
               get_group_date_cell('RG11_EXP.daterqd', i)         || '","' ||
               get_group_char_cell('RG11_EXP.item_no', i)         || '","' ||
               get_group_char_cell('RG11_EXP.desc', i)            || '","' ||
               get_group_char_cell('RG11_EXP.part_no', i)         || '","' ||
               get_group_number_cell('RG11_EXP.ontime', i)        || '"';
    client_text_io.put(out_file, lv_line);
    client_text_io.new_line(out_file, 1);
  END LOOP;
  client_text_io.fclose(out_file);
  RETURN TRUE;
END;

    Hello,
    Try to insert a "synchronize" instruction from time to time:
    i  pls_integer := 1;
    Loop
      If mod(i, 500) = 0 Then
         synchronize;
      End if ;
      i := i + 1 ;
    End loop;
But keep in mind that CLIENT_TEXT_IO generates a lot of network traffic, so it is better and faster to generate the file on the application server, then transfer it to the client machine afterwards.
    Francois
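The periodic-flush idea generalizes beyond Forms. A Python sketch of the same export loop, flushing every 500 rows (the header is the one from the question; everything else is illustrative):

```python
import csv

def export_csv(rows, path, flush_every=500):
    """Write fully quoted CSV rows, flushing every N rows -- the same
    idea as calling synchronize periodically inside the Forms loop."""
    with open(path, "w", newline="") as f:
        w = csv.writer(f, quoting=csv.QUOTE_ALL)
        w.writerow(["GIN", "Gin Date", "PO Num", "PO Required Date",
                    "Mat Num", "Mat Description", "Supplier Part No",
                    "On Time Delivery(Yes-1, No-0)"])
        for i, row in enumerate(rows, start=1):
            w.writerow(row)
            if i % flush_every == 0:
                f.flush()  # push buffered rows to disk before continuing
```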

  • Last line is still not read

    Hi,
    I fixed error in last post and rewrote the code using ProcessBuilder instead.
The output from my Unix tool is below.
The command is a shell application, '/home/myid/test/myshell/bin/run', that starts an ASCII GUI that looks like the following. It waits for input from the user.
    *   My shell Application                   *
    HELP: h
    COMMAND: c
    QUIT:q
    135.19.45.18>
    And still the last line is not showing when I read the text.
    I have tried with readLine() and I get the same error.
Note: When I debug, after the characters in the line
QUIT:q
have been read, I can see that the debugger hangs on
    br.read()
    And now the code looks like:
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

public class RunShellCmd {

    private BufferedReader processInputStream;
    private BufferedWriter processWriter;

    public static void main(String args[]) throws Exception {
        RunShellCmd cmd = new RunShellCmd();
        cmd.runCmd();
    }

    public void runCmd() throws Exception {
        Process process = null;
        List<String> cmd = new ArrayList<String>();
        cmd.add("/bin/bash");
        cmd.add("/home/myid/test/myshell/bin/run");
        ProcessBuilder processBuilder = new ProcessBuilder(cmd);
        processBuilder.redirectErrorStream(true);
        try {
            System.out.println("start process ....");
            process = processBuilder.start();
            InputStream is = process.getInputStream();
            InputStreamReader isr = new InputStreamReader(is);
            BufferedReader br = new BufferedReader(isr);
            StringBuilder sb = new StringBuilder();
            int intch;
            while ((intch = br.read()) != -1) {
                char ch = (char) intch;
                sb.append(ch);
                System.out.println(sb.toString());
            }
        } catch (IOException e) {
            System.out.println("An error occurred: " + e.toString());
        } finally {
            processInputStream.close();
            process.destroy();
            System.out.println("exit value: " + process.exitValue());
            processWriter.close();
        }
    }
}

    DUPLICATE THREAD!
    Please don't create duplicate threads. You already have a thread for this same issue.
    I fixed error in last post and rewrote the code using ProcessBuilder instead.
    Good - then post your progress in the original thread and keep using it until the problem is resolved. That way the people that tried to help you before can see the entire context of what you are trying.
    https://forums.oracle.com/thread/2611844
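For comparison, here is a Python sketch of the underlying symptom: a prompt with no trailing newline never completes a line-oriented read, so the reader has to grab whatever bytes are available instead of waiting for a full line (POSIX-only; the prompt text just mimics the tool above):

```python
import os
import select
import subprocess

def read_available(proc, timeout=2.0):
    """Drain whatever the child has written so far, without blocking
    forever on a prompt that is not newline-terminated."""
    fd = proc.stdout.fileno()
    chunks = []
    while True:
        ready, _, _ = select.select([fd], [], [], timeout)
        if not ready:
            break  # nothing new arrived within the timeout
        data = os.read(fd, 4096)
        if not data:
            break  # EOF: the child closed its end of the pipe
        chunks.append(data)
    return b"".join(chunks).decode()

# The child emits a banner ending in a prompt with no newline.
proc = subprocess.Popen(
    ["sh", "-c", "printf 'HELP: h\\nQUIT:q\\n135.19.45.18> '"],
    stdout=subprocess.PIPE,
)
banner = read_available(proc)
proc.wait()
```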

  • After installed SP1 for SQL Server 2012, can no longer export to csv

    After installing SP1 today via Windows Update, I am no longer able to export data to csv using the SQL Server Import and Export wizard. I get the following error message:
    "Column information for the source and the destination data could not be retrieved, or the data types of source columns were not mapped correctly to those available on the destination provider."
    "Column "col1": Source data type "200" was not found in the data type mapping file."...
    (The above line repeats for each column)
The work-around I have is to manually map each column in the "Edit Mappings..." option from the "Configure Flat File Destination" page of the wizard. It is an extreme inconvenience to have to edit the mappings and change each column from "byte stream [DT_BYTES]" to "string [DT_STR]" every time I want to export to csv. I did not have to do this before installing SP1; it worked perfectly for months, with hundreds of exports prior to this update and no need to modify mappings.

    I am running Windows 7 64-bit, SQL Server 2012 Express edition. Again, just yesterday from Windows Update, I installed SQL Server 2012 Service Pack 1 (KB2674319), followed by Update Rollup for SQL Server 2012 Service Pack 1 (KB2793634). This situation was
    not occurring before these updates were installed, and I noticed it immediately after they were installed (and of course I restarted my computer after the updates).
    In SSMS I just now created a test DB and table to provide a step-by-step with screenshots.
    Here is the code I ran to create the test DB and table:
    CREATE DATABASE testDB;
    GO
    USE testDB;
    GO
CREATE TABLE testTable
(
    id int,
    lname varchar(50),
    fname varchar(50),
    address varchar(50),
    city varchar(50),
    state char(2),
    dob date
);
GO
    INSERT INTO testTable VALUES
    (1,'Smith','Bob','123 Main St.','Los Angeles','CA','20080212'),
    (2,'Doe','John','555 Rainbow Ln.','Chicago','IL','19580530'),
    (3,'Jones','Jane','999 Somewhere Pl.','Washington','DC','19651201'),
    (4,'Jackson','George','111 Hello Cir.','Dallas','TX','20010718');
    GO
    SELECT * FROM testTable;
    Results look good:
    id    lname    fname    address    city    state    dob
    1    Smith    Bob    123 Main St.    Los Angeles    CA    2008-02-12
    2    Doe    John    555 Rainbow Ln.    Chicago    IL    1958-05-30
    3    Jones    Jane    999 Somewhere Pl.    Washington    DC    1965-12-01
    4    Jackson    George    111 Hello Cir.    Dallas    TX    2001-07-18
    In Object Explorer, I right-click on the [testDB] database, choose "Tasks", then "Export Data..." and the SQL Server Import and Export Wizard appears. I click Next to leave all settings as-is on the "Choose a Data Source" page, then on the "Choose a Destination"
    page, under the "Destination" drop-down I choose "Flat File Destination" then browse to the desktop and name the file "table_export.csv" then click Next. On the "Specify Table Copy or Query" page I choose "Write a query to specify the data to transfer" then
    click Next. I type the following SQL statement:
    SELECT * FROM testTable;
    When clicking the "Parse" button I get the message "This SQL statement is valid."
    On to the next page, "Configure Flat File Destination" I try leaving the defaults then click Next. This is where I am getting the error message (see screenshot below):
Then going to the "Edit Mappings..." option on the "Configure Flat File Destination" page, I see that all columns which were defined as varchar in the table are showing as type "byte stream [DT_BYTES]", size "0". The state column, which is defined as char(2), shows correctly, however, with type "string [DT_STR]", size "2" (see screenshot below):
    So what I have to do is change the type for the lname, fname, address and city columns to "string [DT_STR]", then I am able to proceed with the export successfully. Again, this just started happening after installing these updates. As you can imagine, this
    is very frustrating, as I do a lot of exports from many tables, with a lot more columns than this test table.
    Thanks for your help.

  • Gui_download - last line

    Hi,
The function module GUI_DOWNLOAD always creates a blank line at the end of the file it writes.
But I need a file without this last line.
How can I do that?
    Thanks !

    I solved this problem with BIN mode.
    See this code example
DATA: filesize TYPE i.

CALL FUNCTION 'SCMS_TEXT_TO_BINARY'
  IMPORTING
    output_length = filesize
  TABLES
    text_tab      = i_record
    binary_tab    = i_record.

filesize = filesize - 1. "truncate the last line!

CALL FUNCTION 'GUI_DOWNLOAD'
  EXPORTING
    bin_filesize              = filesize
    filename                  = l_file
    filetype                  = 'BIN'
    trunc_trailing_blanks     = 'X'
    trunc_trailing_blanks_eol = 'X'
    write_lf_after_last_line  = ' '
  TABLES
    data_tab                  = i_record.
    Hope this can help you.
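The BIN-mode trick amounts to "write the line feed between records, not after the last one". The same idea in Python terms (illustrative only, not ABAP):

```python
def save_without_final_newline(path, lines):
    """Join records with newlines BETWEEN them only, so the file does
    not end with a blank last line."""
    with open(path, "w") as f:
        f.write("\n".join(lines))
```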
