CSV file template - values in columns

Hi experts,
I have a requirement to upload transactional data from a CSV file. It is working, but with the values in rows, like below:
CostCenter    P_ACCT        Time        Amount
1101450       341010101     2012.DEZ    56000
1101450       341010102     2012.DEZ    4000
1101450       341010103     2012.DEZ    13000.99
To make the user's life easier, considering that this way the file will have thousands of lines, I would like to know if it is possible to upload data from a CSV file with the key figures in columns. Something like:
CostCenter    P_ACCT        JAN       FEB       MAR       APR       MAY       JUN       JUL ........
1101450       341010101     56000     56000     56000     56000     56000     56000     56000........
1101450       341010102     4000      56000     56000     56000     56000     56000     56000........
1101450       341010103     13000.99  56000     56000     56000     56000     56000     56000........
If I can't manage this in BPC, I'll have to create a macro to put the values into rows.
Regards!
Lucas

You can use a transformation file to get this in a couple of ways, I think.
See the help file topic on *MVAL (NW 7.5 version)
http://help.sap.com/saphelp_bpc75_nw/helpdata/en/b8/a76a1ca9ac4ca698259a8ff397bb61/frameset.htm
Also you could try this (from the help file)
*<Dimension>=<value1>,<value2> has a similar effect to an *MVAL command in the mapping section. This type of header occurs only at the beginning of the data file, not in the middle.
*CATEGORY=ACTUAL
*TIME=1999.JAN, 1999.FEB, 1999.MAR,1999.APR
*ENTITY, ACCOUNT, PRODUCT, *AMOUNT
UK, SALES, SEDAN, 100, 200, 300, 400
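If the transformation file approach doesn't pan out, the macro Lucas mentions doesn't have to live in Excel. Below is a minimal sketch in Python (purely illustrative; the file names, the fixed year, and the column layout are assumptions based on the sample above) that unpivots the month columns back into one row per month, which the existing row-based upload already handles:
import csv

# Unpivot: CostCenter, P_ACCT, then one column per month -> one row per month.
YEAR = "2012"  # assumption: every month column belongs to the same year

with open("wide.csv", newline="") as src, open("long.csv", "w", newline="") as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    header = next(reader)
    months = header[2:]  # e.g. JAN, FEB, MAR, ...
    writer.writerow(["CostCenter", "P_ACCT", "Time", "Amount"])
    for row in reader:
        for month, amount in zip(months, row[2:]):
            if amount:  # skip empty cells
                writer.writerow([row[0], row[1], YEAR + "." + month, amount])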
Good luck!

Similar Messages

  • How to generate a second csv file with different report columns selected?

    Hi everybody,
    The first csv file is easy (report attributes -> report export -> enable CSV output Yes). However, our users demand 2 csv files with different report columns selected to meet their different needs.
    (The users don't want to have one csv file with all report columns included. They just want to get whatever they need directly, no extra columns)
    Thank you for any help!
    MZ

    Hello,
    I do this regularly. A typical example would be: the report displays only the columns "FIRST_NAME" and "LAST_NAME", whereas
    the csv exported with UTL_FILE contains the complete address (street, house number, additions, zip, town, state ...); these extra fields are needed e.g. for form letters.
    You do not need another page, just an additional button named e.g. "export_to_csv" on your report page.
    The csv export itself is handled by a PL/SQL stored procedure (I like to keep business logic outside of APEX), which is invoked by pressing the "export_to_csv" button. Of course, the stored procedure can also take parameters.
    Example code would look something like this:
    PROCEDURE srn_brief_mitglieder (
         p_start_mg_nr IN NUMBER,
         p_ende_mg_nr IN NUMBER
    ) AS
    export_file          UTL_FILE.FILE_TYPE;
    l_line               VARCHAR2(20000);
    l_lfd               NUMBER;
    l_dateiname          VARCHAR2(100);
    l_datum               VARCHAR2(20);
    l_hilfe               VARCHAR2(20);
    CURSOR c1 IS
    SELECT
    MG_NR
    ,TO_CHAR(MG_BEITRITT,'dd.mm.yyyy') AS MG_BEITRITT ,TO_CHAR(MG_AUFNAHME,'dd.mm.yyyy') AS MG_AUFNAHME
    ,MG_ANREDE ,MG_TITEL ,MG_NACHNAME ,MG_VORNAME
    ,MG_STRASSE ,MG_HNR ,MG_ZUSATZ ,MG_PLZ ,MG_ORT
    FROM MITGLIEDER
    WHERE MG_NR >= p_start_mg_nr
    AND MG_NR <= p_ende_mg_nr
    --WHERE ROWNUM < 10
    ORDER BY MG_NR;
    BEGIN
    SELECT TO_CHAR(SYSDATE, 'yyyy_mm_dd' ) INTO l_datum FROM DUAL;
    SELECT TO_CHAR(SYSDATE, 'hh24miss' ) INTO l_hilfe FROM DUAL;
    l_datum := l_datum||'_'||l_hilfe;
    --DBMS_OUTPUT.PUT_LINE ( l_datum);
    l_dateiname := 'SRNBRIEF_MITGLIEDER_'||l_datum||'.CSV';
    --DBMS_OUTPUT.PUT_LINE ( l_dateiname);
    export_file := UTL_FILE.FOPEN('EXPORTDIR', l_dateiname, 'W');
    l_line := '';
    --HEADER
    l_line := '"NR"|"BEITRITT"|"AUFNAHME"|"ANREDE"|"TITEL"|"NACHNAME"|"VORNAME"';
    l_line := l_line||'|"STRASSE"|"HNR"|"ZUSATZ"|"PLZ"|"ORT"';
         UTL_FILE.PUT_LINE(export_file, l_line);
    FOR rec IN c1
    LOOP
         l_line :=  '"'||rec.MG_NR||'"';     
         l_line := l_line||'|"'||rec.MG_BEITRITT||'"|"' ||rec.MG_AUFNAHME||'"';
         l_line := l_line||'|"'||rec.MG_ANREDE||'"|"'||rec.MG_TITEL||'"|"'||rec.MG_NACHNAME||'"|"'||rec.MG_VORNAME||'"';     
         l_line := l_line||'|"'||rec.MG_STRASSE||'"|"'||rec.MG_HNR||'"|"'||rec.MG_ZUSATZ||'"|"'||rec.MG_PLZ||'"|"'||rec.MG_ORT||'"';          
    --     DBMS_OUTPUT.PUT_LINE (l_line);
    -- write the line to the file
         UTL_FILE.PUT_LINE(export_file, l_line);
    END LOOP;
    UTL_FILE.FCLOSE(export_file);
    END srn_brief_mitglieder;

  • CSV file template for Uploading PO confirmation

    Dear all,
    We have activated Badi for Upload PO confirmations for a supplier in SNC 7.0.
    Could any of you please let me know where I can find the CSV upload file template?
    Thanks,
    mahesh.

    Hi Mahesh,
    You can download the CSV file from Tools --> Download Center (select Purchase Order Confirmation).
    To create a purchase order confirmation, you enter X in the To Be Confirmed column of the schedule line. If the schedule line has an X, you can change the following data:
    ■ Quantity
    ■ Delivery date (Note: depending on the system set-up, for example if you have set up Customizing for POs to allow shipping dates to be used instead of delivery dates, you can change the shipping date.)
    ■ Confirmed price (Note: if you leave the Confirmed Price field empty in all schedule lines of an item, the requested price is used. If you enter a value in the Confirmed Price field of one schedule line, or in more than one schedule line but those values are the same, the system uses that value as the confirmed price for the item. However, if you enter two or more different confirmed price values in the schedule lines of an item, the system regards this as an error, and the item is not processed.)
    ■ Confirmed MPN
    ■ Confirmed Mfr
    To reject the PO item, you enter an X in the To Be Rejected column of the schedule line that has an X in the Requested column.
    Now upload this file in Tools --> Upload Center.
    See the below link for more information.
    http://help.sap.com/saphelp_snc70/helpdata/EN/b4/79223dc5b54b36899ea4f731a712f6/frameset.htm
    Regards,
    Nikhil

  • BULK INSERT from a text (.csv) file - read only specific columns.

    I am using Microsoft SQL 2005 and I need to do a BULK INSERT from a .csv I just downloaded from PayPal. I can't edit some of the columns that are given in the report. I am trying to load specific columns from the file.
    bulk insert Orders
    FROM 'C:\Users\*******\Desktop\DownloadURL123.csv'
    WITH (
        FIELDTERMINATOR = ',',
        FIRSTROW = 2,
        ROWTERMINATOR = '\n'
    )
    So where would I state which column names (from row #1 of the .csv file) map to which specific columns in the table?
    I saw this on one of the sites, and it seemed to guide me towards the answer, but I failed. Here is the relevant excerpt, in case it helps:
    FORMATFILE [ = 'format_file_path' ]
    Specifies the full path of a format file. A format file describes the data file that contains stored responses created using the bcp utility on the same table or view. The format file should be used in cases in which:
    The data file contains greater or fewer columns than the table or view.
    The columns are in a different order.
    The column delimiters vary.
    There are other changes in the data format. Format files are usually created by using the bcp utility and modified with a text editor as needed. For more information, see bcp Utility.

    Date, Time, Time Zone, Name, Type, Status, Currency, Gross, Fee, Net, From Email Address, To Email Address, Transaction ID, Item Title, Item ID, Buyer ID, Item URL, Closing Date, Reference Txn ID, Receipt ID,
    "04/22/07", "12:00:21", "PDT", "Test", "Payment Received", "Cleared", "USD", "321", "2.32", "3213', "[email protected]", "[email protected]", "", "testing", "392302", "jdal32", "http://ddd.com", "04/22/03", "", "",
    "04/22/07", "12:00:21", "PDT", "Test", "Payment Received", "Cleared", "USD", "321", "2.32", "3213', "[email protected]", "[email protected]", "", "testing", "392932930302", "jejsl32", "http://ddd.com", "04/22/03", "", "",
    Do you need more than 2 rows? I did not include all the columns from the actual csv file, but most of them. I am planning on taking these specific columns to the first table: date, to email address, transaction ID, item title, item ID, buyer ID, item URL.
    For the other table I don't have any values here because I did not list them, but if you show me how to do this I could probably figure the other table out.
    Thank you very much.
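    For what it's worth, an alternative to a bcp format file is to pre-process the CSV so it contains only the columns the table expects, then BULK INSERT the filtered file as usual. A minimal Python sketch (the file names are assumptions; the column names come from the header row above):
    import csv

    # Columns the first table should receive, as named in the CSV header row.
    WANTED = ["Date", "To Email Address", "Transaction ID", "Item Title",
              "Item ID", "Buyer ID", "Item URL"]

    with open("DownloadURL123.csv", newline="") as src, \
         open("orders_filtered.csv", "w", newline="") as dst:
        reader = csv.DictReader(src, skipinitialspace=True)
        writer = csv.writer(dst)
        writer.writerow(WANTED)
        for row in reader:
            writer.writerow([row[c] for c in WANTED])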

  • Read from csv file and plot particular columns

    Hello,
    I'm a new user of LabVIEW, and here it comes... my first major problem.
    Maybe this has been discussed before. I've searched to solve my problem first, but I couldn't find anything helpful, so I've decided to post a new message.
    So here is my problem:
    I'm working in a small semiconductor lab where different types of nitrides are grown using a proprietary reactor. The goal is to read the csv files collected from each growth in LabVIEW and plot the acquired data in appropriate graphs.
    I have a bunch of csv files and I have to make a LabVIEW program to read them.
    For the first part of my project I've decided to display the csv file (growth log file) in LabVIEW (which I think works fine).
    The second part is to be able to plot particular columns from the recipe in graphs in LabVIEW (that one actually gives me a lot of trouble):
    1. Timestamp vs Temperature /columns B and D/
    2. Timestamp vs Gas flow /columns L to S/
    3. Timestamp vs Pressure /columns E,K,T,U,V/
    I've got one more problem. How can I convert the timestamp shown in the csv file to a human-readable date in LabVIEW? This actually is a big problem, because the timestamp is my x axis and I want to know at what time a particular process took place, and I also want to be able to see the converted timestamp when displaying the csv file in the first place. I've read a lot about timestamping in Excel and timestamps in LabVIEW, but I'm still confused about how to convert it in my case.
    I don't have problems displaying the csv file in LabVIEW. My problems are with the timestamp and the graphs.
    Sorry for my awful English. I hope you can understand my problems, since English is not my mother tongue.
    Please find the attached files.
    If you have any ideas or suggestions I'll be more than happy to discuss them.
    Thank you in advance.
    Have a nice day! 
    Attachments:
    growth log.csv 298 KB
    Read from growth log.vi 33 KB

    Hello again,
    I'm having problems with converting the first column in the file Growth Log.csv attached above.
    I have code converting the Excel timestamp to time, and I'm using Index Array trying to grab a particular column out of it, but the attached file is read in as strings, so I guess I have to redo it as an array, but I don't know how. Would you help me with this one?
    Attachments:
    Xl Timestamp to Time.vi 21 KB
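    On the timestamp question: if the first column holds Excel-style serial timestamps (days since the 1900 date system's epoch), the conversion is a fixed offset from 1899-12-30, the effective epoch once Excel's well-known leap-year quirk is accounted for. A minimal Python sketch of the same arithmetic (the input value is just an example); in LabVIEW the equivalent is to treat the column value as a number of days and add it to that epoch:
    from datetime import datetime, timedelta

    def excel_serial_to_datetime(serial):
        # Excel 1900 date system: the effective epoch is 1899-12-30.
        return datetime(1899, 12, 30) + timedelta(days=float(serial))

    print(excel_serial_to_datetime(41236.5))  # -> 2012-11-23 12:00:00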

  • Parse CSV file with some dynamic columns

    I have a CSV file that I receive once a week that is in the following format:
    "Item","Supplier Item","Description","1","2","3","4","5","6","7","8" ...Linefeed
    "","","","Past Due","Day 13-OCT-2014","Buffer 14-OCT-2014","Week 20-OCT-2014","Week 27-OCT-2014", ...LineFeed
    "Part1","P1","Big Part","0","0","0","100","50", ...LineFeed
    "Part4","P4","Red Part","0","0","0","35","40", ...LineFeed
    "Part92","P92","White Part","0","0","0","10","20", ...LineFeed
    An explanation of the data: row 2 is dynamic data signifying the dates parts are due. Row 3 begins the part numbers, with description and number of parts due on a particular date. So looking at the above data: row 3, column 7 shows that Part1 has 100 parts
    due in the week of OCT 20 2014 and 50 due in the week of OCT 27 2014.
    How can I parse this csv to show the data like this:
    Item, Supplier Item, Description, Past Due, Due Date, Amount Due
    Part1 P1 Big Part 0 20 OCT 2014 100
    Part1 P1 Big Part 0 27 OCT 2014 50
    Part4 P4 Red Part 0 20 OCT 2014 35
    Part4 P4 Red Part 0 27 OCT 2014 40
    Is there a way to manipulate the format to rearrange the data like I need, or what is the best method to resolve this? And how do I go about doing it?

    Hello,
    If the files have the same structure you can create an Integration Services (SSIS) package.
    See this article:
    http://www.mssqltips.com/sqlservertip/2923/configure-the-flat-file-source-in-sql-server-integration-services-2012-to-read-csv-files/
    Javier Villegas |
    @javier_vill | http://sql-javier-villegas.blogspot.com/
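    For what it's worth, outside of SSIS the reshaping itself is easy to script. A minimal Python sketch of the unpivot described above (the file names are assumptions; it pairs the date labels in row 2 with the counts in each part row):
    import csv

    with open("schedule.csv", newline="") as src, \
         open("unpivoted.csv", "w", newline="") as dst:
        reader = csv.reader(src)
        writer = csv.writer(dst)
        next(reader)          # row 1: "Item","Supplier Item","Description","1","2",...
        dates = next(reader)  # row 2: "","","","Past Due","Day 13-OCT-2014",...
        writer.writerow(["Item", "Supplier Item", "Description",
                         "Past Due", "Due Date", "Amount Due"])
        for row in reader:
            for label, qty in zip(dates[4:], row[4:]):
                if qty and qty != "0":  # keep only non-zero amounts due
                    writer.writerow(row[:4] + [label, qty])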

  • Download int table into csv file with each column in separate column in csv

    Hi All,
    I want to download the data in an internal table to a CSV file, but each column of the internal table should appear as a separate column in the CSV.
      CALL FUNCTION 'GUI_DOWNLOAD'
        EXPORTING
          FILENAME                = GD_FILE
          FILETYPE                = 'ASC'
          WRITE_FIELD_SEPARATOR   = 'X'
        tables
          DATA_TAB                = I_LINES_NEW
        EXCEPTIONS
          FILE_OPEN_ERROR         = 1
          FILE_READ_ERROR         = 2
          NO_BATCH                = 3
          GUI_REFUSE_FILETRANSFER = 4
          INVALID_TYPE            = 5
          NO_AUTHORITY            = 6
          UNKNOWN_ERROR           = 7
          BAD_DATA_FORMAT         = 8
          HEADER_NOT_ALLOWED      = 9
          SEPARATOR_NOT_ALLOWED   = 10
          HEADER_TOO_LONG         = 11
          UNKNOWN_DP_ERROR        = 12
          ACCESS_DENIED           = 13
          DP_OUT_OF_MEMORY        = 14
          DISK_FULL               = 15
          DP_TIMEOUT              = 16
          OTHERS                  = 17.
      IF SY-SUBRC NE 0.
        WRITE: 'Error ', SY-SUBRC, 'returned from GUI_DOWNLOAD SAP OUTBOUND'.
        SKIP.
      ENDIF.
    With the above values passed, I am getting a csv file, but all the columns end up in one column, separated by some square symbol.
    How do I separate them into different columns?
    Thanks in advance
    rgds,
    Madhuri

    The square symbol is the tab character that GUI_DOWNLOAD inserts as the field separator. The example below might help you understand how to download a comma-separated CSV file:
    TYPE-POOLS: truxs.
    DATA: i_t001 TYPE STANDARD TABLE OF t001,
          i_data TYPE truxs_t_text_data.
    SELECT * FROM t001 INTO TABLE i_t001 UP TO 20 ROWS.
    CALL FUNCTION 'SAP_CONVERT_TO_TEX_FORMAT'
      EXPORTING
        i_field_seperator    = ','  " note: the parameter name really is spelled 'seperator'
    *   i_line_header        =
    *   i_filename           =
    *   i_appl_keep          = ' '
      TABLES
        i_tab_sap_data       = i_t001
      CHANGING
        i_tab_converted_data = i_data
      EXCEPTIONS
        conversion_failed    = 1
        OTHERS               = 2.
    IF sy-subrc <> 0.
      MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
              WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
    ENDIF.
    DATA: file TYPE string VALUE 'C:\testing.csv'.
    CALL METHOD cl_gui_frontend_services=>gui_download
      EXPORTING
        filename                = file
      CHANGING
        data_tab                = i_data[]
      EXCEPTIONS
        file_write_error        = 1
        no_batch                = 2
        gui_refuse_filetransfer = 3
        invalid_type            = 4
        no_authority            = 5
        unknown_error           = 6
        header_not_allowed      = 7
        separator_not_allowed   = 8
        filesize_not_allowed    = 9
        header_too_long         = 10
        dp_error_create         = 11
        dp_error_send           = 12
        dp_error_write          = 13
        unknown_dp_error        = 14
        access_denied           = 15
        dp_out_of_memory        = 16
        disk_full               = 17
        dp_timeout              = 18
        file_not_found          = 19
        dataprovider_exception  = 20
        control_flush_error     = 21
        not_supported_by_gui    = 22
        error_no_gui            = 23
        OTHERS                  = 24.
    IF sy-subrc <> 0.
      MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                 WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
    ENDIF.
    Regards
    Eswar

  • How to import from CSV file into INTERVAL DAY TO SECOND column?

    Hello,
    I need to import data from a csv file that has a column storing time intervals (e.g. 2:06:02, hours:minutes:seconds) into a table column of type INTERVAL DAY TO SECOND.
    Do any of you know what format I should apply?
    I tried HH:MI:SS but it didn't work.
    Any suggestion is welcome (including changing the column type to something else that might hold time intervals...).
    Thanks,
    Andrei

    Andrei,
    There is no native support for the INTERVAL datatype in the Text loading facilities of Application Express.
    To work around this known limitation, I suggest either creating a temporary table or a shadow column in your existing table. You could then upload your data for the interval as a VARCHAR2 and then use SQL to convert to an appropriate interval value.
    For example:
    1) Create a table with:
    create table foo (id number primary key, i interval day to second, shadowi varchar2(4000))
    2) Using your sample data set, load this data into table FOO, loading the second column of this data into column SHADOWI:
    1     01:06:02
    2     02:06:02
    3     03:01:01
    3) Issue the following SQL to convert the value in your shadow interval column to your actual interval column:
    update foo set i = to_dsinterval( '0 ' || shadowi )
    Note that you'll need to prepend the '0' to formulate a valid interval literal value.
    Joel

  • How can I read, millions of records and write as *.csv file

    I have to return some set of column values (based on the current date) from the database (could be millions of records, too). DBMS_OUTPUT can accommodate only 20,000 records. (I am retrieving through a procedure using a cursor.)
    I should write these values to a file with the extension .csv (comma-separated file). I thought of using UTL_FILE, but I heard there is some restriction on the number of records even in UTL_FILE.
    If so, what is the restriction? Is there any other way I can achieve this? (BLOB or CLOB??)
    Please help me in solving this problem.
    I have to write to the .csv file the values from the cursor, which I have concatenated with ","; right now it returns the values to the screen (using DBMS_OUTPUT, temporarily) and I have to redirect the output to the .csv file.
    The .csv file should be in some physical directory, and I have to upload (ftp) the file from the directory to the website.
    Please help me out.

    Jimmy,
    Make sure that utl_file is properly installed. Make sure that the utl_file_dir parameter is set in the init.ora file and that the database has been restarted so that it takes effect. Make sure that you have sufficient privileges granted directly, not through roles, including privileges on the file and directory that you are trying to write to. Then add the exception block below to your procedure to narrow down the source of the exception, and test again. If you still get an error, please post a cut and paste of the exact code that you run and any messages that you received.
    exception
        when utl_file.invalid_path then
            raise_application_error(-20001,
                'INVALID_PATH: File location or filename was invalid.');
        when utl_file.invalid_mode then
            raise_application_error(-20002,
                'INVALID_MODE: The open_mode parameter in FOPEN was invalid.');
        when utl_file.invalid_filehandle then
            raise_application_error(-20003,
                'INVALID_FILEHANDLE: The file handle was invalid.');
        when utl_file.invalid_operation then
            raise_application_error(-20004,
                'INVALID_OPERATION: The file could not be opened or operated on as requested.');
        when utl_file.read_error then
            raise_application_error(-20005,
                'READ_ERROR: An operating system error occurred during the read operation.');
        when utl_file.write_error then
            raise_application_error(-20006,
                'WRITE_ERROR: An operating system error occurred during the write operation.');
        when utl_file.internal_error then
            raise_application_error(-20007,
                'INTERNAL_ERROR: An unspecified error in PL/SQL.');

  • Problem in creating csv file

    I have a problem creating a csv file.
    If a column has a single-line value, it comes out in a single cell. But if the column has several lines (carriage returns entered while entering data into the table),
    I am not able to create the csv file properly: that one column value takes more than one cell in the csv.
    For example the column "Issue" has following value:
    "Dear
    I hereby updated the Human Resources: New User Registration Form Request.
    And sending the request for your action.
    Regards
    Karthik".
    If I try to create the csv file, that particular record comes out as follows:
    0608001,AEGIS USERID,SINGAPORE,Dear
    I hereby updated the Human Resources: New User Registration Form Request.
    And sending the request for your action.
    Regards
    Karthik,closed.
    If we try to load the data into a table, it gives an error, since that one record spans more than one line. How can I store that value on a single line in the csv file?
    Please help.

    I have tried using chr(10) and chr(13) like this... it is still not solved.
    select REQNO ,
    '"'||replace(SUBJECT,chr(13),' ')||'"' subject,
    AREA ,
    REQUESTOR ,
    DEPT ,
    COUNTRY ,
    ASSIGN_TO ,
    to_Date(START_DT) ,
    START_TIME ,
    PRIORITY ,
    '"'||replace(issues, chr(13), ' ')||'"' issues,
    '"'||replace(SOLUTIONS,chr(13),' ')||'"' SOLUTIONS ,
    '"'||replace(REMARKS,chr(13),' ')||'"' REMARKS ,
    to_date(CLOSED_DT) ,
    CLOSED_TIME ,
    MAN_DAYS ,
    MAN_HRS ,
    CLOSED_BY ,
    STATUS from asg_log_mstr
    Please help.
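    Note that the SELECT above replaces only chr(13), so any chr(10) remaining in the value will still break the record across lines; a second, nested replace for chr(10) would be needed on the database side. Alternatively, if the export can be post-processed, here is a minimal Python sketch (the file names are assumptions) that flattens both characters in every field, relying on the multi-line fields being enclosed in double quotes as in the query above:
    import csv

    # Rewrite the export so embedded CR/LF inside quoted fields become spaces
    # and every record ends up on a single physical line.
    with open("asg_log_mstr.csv", newline="") as src, \
         open("asg_log_mstr_flat.csv", "w", newline="") as dst:
        writer = csv.writer(dst, quoting=csv.QUOTE_ALL)
        for row in csv.reader(src):
            writer.writerow([f.replace("\r", " ").replace("\n", " ") for f in row])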

  • 2.5 GB CSV file as data source for Crystal report

    Hi Experts,
    I was asked to create a Crystal Report using a CSV file as the data source (it is pretty huge, 2.4 GB). Could you help me with any doc that explains the steps, mainly the data connectivity?
    The objective is to create the Crystal Report using that csv file as the data source, save the report as .rpt with the data, and send the results to the customer to be read with Crystal Reports Viewer, or save the results to PDF.
    Please help and suggest the steps, as I am new to Crystal Reports and to CSV as a source.
    BR, Nanda Kishore

    Nanda,
    The issue of having some records with a comma and some with a semicolon will need to be resolved before you can do an import. Assuming that there are no semicolons in any of the text values of the report, you could do a "Find & Replace" to convert the semicolons to commas.
    If find & replace isn't an option, you'll need to get the files separately.
    I've never used the Import Export Wizard myself. I've always used the BULK INSERT command.
    It would look something like this...
    BULK INSERT SQLServerTableName
    FROM 'c:\My_CSV_File.csv'
    WITH (FIELDTERMINATOR = ',')
    This of course implies that your table has the same columns, in the same order as the csv files and that each column is the correct data type to accept the incoming data.
    If you continue to have issues getting your data into SQL Server Express, please post in one of these two forums
    [Transact-SQL|http://social.msdn.microsoft.com/Forums/en-US/transactsql/threads]
    [SQL Server Express|http://social.msdn.microsoft.com/Forums/en-US/sqlexpress/threads]
    The Transact-SQL forum has some VERY knowledgeable people (including MVPs and book authors) posting answers.
    I've never posted to the SQL Server Express forum, but I'm sure they can troubleshoot your issues with the Import Export Wizard.
    If you post in one of them, please copy the post link back to this thread so I can continue to help.
    Jason

  • How to align the CSV file on upload?

    Hi All,
    I have to upload a CSV file as an attachment to a mail. The data in the internal table that has to be written to the CSV file is separated by commas, but on upload it all appears in the same column of the CSV file. I need the data separated into different columns and different lines...
    Please help, it is very urgent.
    Thanks in Advance...

    Hi
    As I understand it, you are talking about download.
    For that you have to concatenate the fields of the final internal table (gt_itab, for example) using a comma, as below.
    TYPES: BEGIN OF ty_lines,
             line(1023) TYPE c,
           END OF ty_lines.
    DATA: l_filename TYPE string VALUE 'C:\temp\abcd.csv',
          gt_lines  TYPE TABLE OF ty_lines,
          gw_lines  TYPE ty_lines.
      LOOP AT gt_itab INTO gw_itab.
        CONCATENATE gw_itab-f1
                    gw_itab-f2 .....
               INTO gw_lines-line
               SEPARATED BY ','.
         APPEND gw_lines TO gt_lines.
      ENDLOOP.
      CALL FUNCTION 'GUI_DOWNLOAD'
        EXPORTING
          filename                = l_filename
          filetype                = 'ASC'
          confirm_overwrite       = 'X'
          no_auth_check           = 'X'
        TABLES
          data_tab                = gt_lines.

  • Read .csv File and Update DB

    I have a .csv file with about 11 columns and multiple rows.
    I need to read this file and from each row extract the first
    4 columns and update my local database.
    If the value of the 1st column (unique #) is already in the DB, do an Update; else do an Insert.
    How do I read this .csv file and accomplish this goal?

    You read the file with <cffile>.
    It then becomes nested lists. For the outer list, your delimiter is chr(10); each list item is a row from your file. For the inner list, the delimiter is a comma.
    That should get you started. Details on list functions and <cffile> are in the CFML reference manual. If you don't have one, the internet does.
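    The same read-and-upsert pattern, sketched in Python with sqlite3 purely for illustration (the table, column names, and file name are invented; the first CSV column is assumed to be the unique #, and the upsert syntax needs a reasonably recent SQLite):
    import csv
    import sqlite3

    conn = sqlite3.connect("local.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS records
                    (unique_no TEXT PRIMARY KEY, col2 TEXT, col3 TEXT, col4 TEXT)""")

    with open("data.csv", newline="") as f:
        for row in csv.reader(f):
            # Insert the first 4 columns; on a duplicate key, update instead.
            conn.execute(
                """INSERT INTO records (unique_no, col2, col3, col4)
                   VALUES (?, ?, ?, ?)
                   ON CONFLICT(unique_no) DO UPDATE SET
                     col2 = excluded.col2, col3 = excluded.col3, col4 = excluded.col4""",
                row[:4])
    conn.commit()
    conn.close()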

  • Sort records in a csv file in the descending order of time created attribute

    I have an excel (.csv) file with the following column headers:
    Client, SourceNetworkAddress, TimeCreated, LogonType, User, Message
    Values like: ABC, 10.5.22.27, 11/23/2014 9:02:21 PM, 10, testuser
    The file is a combination of a report generated every day using multiple scripts. The data is appended each day; therefore, I would like to sort the final output file in descending order of TimeCreated (a combination of date and time) fetched from the events,
    i.e. the record with the latest date and time should be at the top of the list.
    I tried using the following command; however, I get a list sorted by the date but not the time. The command does not consider the AM/PM in the time; instead it simply sorts the values as text.
    Import-Csv "C:\Users\a\Desktop\report.csv" | sort Timecreated -Descending | Export-csv "C:\Users\a\Desktop\report_sorted.csv" -force -NoTypeInformation
    So if I have a record with 9:02:21 PM (latest) and a record with 10:44:10 AM on the same date, the command will sort the list with the 10:44:10 AM record first and then the 9:02:21 PM record, but it should be the opposite in descending order.
    Kindly help!

    Hi jrv,
    Thanks for your response. However, I get errors when I run this command in PowerShell:
    Import-Csv <file> | Select Client,SourceNetworkAddress,LogonType,User,Message,@{N='TimeCreated';E={[datetime]($_.TimeCreated)} | Sort TimeCreated -Descending | Export-csv <file> -force -NoTypeInformation
    Missing expression after ','.
    At line:1 char:150
    Unexpected token 'LogonType' in expression or statement.
    At line:1 char:151
    Unexpected token ',' in expression or statement.
    At line:1 char:160
    Unexpected token 'User' in expression or statement.
    At line:1 char:161
    Unexpected token ',' in expression or statement.
    At line:1 char:165
    Unexpected token 'Message' in expression or statement.
    At line:1 char:166
    The hash literal was incomplete.
    At line:1 char:174
    Please help!
    You are missing a second curly brace - 
    Import-Csv <file> | Select Client,SourceNetworkAddress,LogonType,User,Message,@{N='TimeCreated';E={[datetime]($_.TimeCreated)}} | Sort TimeCreated -Descending | Export-csv <file> -force -NoTypeInformation

  • Sqlloader with same csv files - 1 fails the other works fine!

    Hi,
    I am using sqlldr to load data from a csv file into a table. The table has 23 columns in it. The last column is nullable. The CSV has 23 column values for all the records except a couple of records, which have 22 values, leaving the 23rd field blank (the 23rd column in the table is nullable). Sqlloader loaded everything fine in the Dev env, but the records failed in the Live env with the error 'Rejected - Error on table ...column not found before end of logical record (use TRAILING NULLCOLS)'.
    Surprisingly, when I copied the live csv file to the Dev env and ran it, it rejected the same records with the same errors, but the earlier CSV file with the same 22 column values worked fine. So I have 2 copies of the same CSV file with different names: one works fine and the other rejects a few records.
    Could anyone please let me know where and what to look for? Thanks in advance for the expert advice of the esteemed people on this forum.
    Regards,
    Ash

    Ash -
    Are you saying you had one file, copied to two servers, and the copy you sent to live never works on either server, but the copy you have always had on DEV works fine? This sounds like an operating system or network issue, where moving the file caused a change you cannot see. Typically happens when you go between Windows and Unix, because the end-of-line sequences are different. You could try using file comparison tools to see if they really are the same (diff on Unix, fc on Windows). You could also use the Unix utility od (octal dump, e.g. od -c file1) to see what you cannot see.
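    If od or fc aren't to hand, a quick way to expose invisible differences between the two copies (the file names are hypothetical):
    # Dump the raw bytes of the first line of each copy; differences such as
    # \r\n vs \n line endings or a leading BOM show up immediately.
    for name in ("dev_copy.csv", "live_copy.csv"):
        with open(name, "rb") as f:
            print(name, f.readline())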
    Good luck,
    Andy
