CSV file reading and plotting voltage and current with respect to time samples (XY plotting)

Hello,
         I've been struggling with reading a comma-separated value (CSV) file from another instrument (attached). I need to plot this data for analysis. I have 5 columns of data with a number of rows; the first three rows contain information about the measurement. I want to read the 4th row as strings and the remaining rows as numbers. I want to plot the 2nd column (i1) with respect to TIMESTAMP, and the 4th column (u2) with respect to TIMESTAMP. And finally I want to plot i1 (x-axis) vs. u2 (y-axis) in LabVIEW. Could anyone help me?
In Excel it's easy to plot, but I don't know how it's done in LabVIEW.
Attachments:
labview forum test.csv ‏30 KB
excel plot.jpg ‏88 KB

Start by opening the file.  Then use the Read Text File function.  You can right-click on it and configure it to read lines.  First make it read 3 lines (this is your extra header data).  Then make it read a single line.  That will give you your channel names.  Then read the rest of the file (disable the read by line and wire a -1 into the number of bytes to read).  Then use the Spreadsheet String to Array function to give you your data.
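For readers who want to prototype the parse outside LabVIEW, the sequence described above (skip three info lines, take the fourth line as channel names, parse the rest as numbers) can be sketched in Python. The inline sample data and the column order (TIMESTAMP first) are assumptions based on the question, not the actual attached file:

```python
import csv

# Made-up stand-in for the attached file: 3 info lines, a line of
# channel names, then numeric rows.  Column order is an assumption.
sample = """instrument: example
date: 2014-01-01
mode: sweep
TIMESTAMP,i1,u1,u2,u3
0.0,0.1,1.0,2.0,3.0
0.1,0.12,1.1,2.1,3.1
"""

lines = sample.splitlines()
info  = lines[:3]                              # measurement information
names = lines[3].split(",")                    # channel names (strings)
rows  = [[float(v) for v in r] for r in csv.reader(lines[4:]) if r]

t  = [r[0] for r in rows]                      # TIMESTAMP
i1 = [r[names.index("i1")] for r in rows]      # 2nd column
u2 = [r[names.index("u2")] for r in rows]      # 4th column
# The three requested plots are then (t, i1), (t, u2) and (i1, u2).
```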
I would recommend going through the LabVIEW tutorials if you are really new at this.
LabVIEW Basics
LabVIEW 101

Similar Messages

  • CSV file read

    Is there a standard FM which handles CSV file reading? I am currently using a 'SPLIT AT' to separate values, but this fails when some strings (within quotes) contain commas.
    Eg:
    333,khdfs, "Company name", 87348, " Name1, Name2"
    In this scenario, the last field in quotes gets split into two. I cannot handle this in the program, because the last field does not always contain a comma. Any suggestions?

    Hi Suker,
    First remove all the quotes, then split at the comma (,).
    I mean to say --
    REPLACE   ALL   OCCURRENCES OF '"'  IN <string_name> WITH SPACE.
    Now split the string at the comma --
    SPLIT  AT  ....
    Regards
    Pinaki
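Note that stripping the quotes before splitting still leaves the embedded comma in " Name1, Name2", so that field will still be split in two. A quote-aware parser avoids the problem; as an illustration of the general approach (shown in Python rather than ABAP), the standard csv module handles this line directly:

```python
import csv
import io

line = '333,khdfs, "Company name", 87348, " Name1, Name2"'
# skipinitialspace makes the parser treat the space after each comma as
# padding, so the quote that follows still opens a quoted field.
fields = next(csv.reader(io.StringIO(line), skipinitialspace=True))
# fields -> ['333', 'khdfs', 'Company name', '87348', ' Name1, Name2']
```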

  • How to measure current, voltage and RPM with a DAQ device

    I am measuring current and voltage and am wondering whether I should use shunt resistors or Hall-effect current sensors. I have a PCI-6221 and an SCC-68 breakout box. What specification or size should the shunt resistor / current sensor be, or should I use another device to measure the current and voltage? Do I need any other safety device between the resistors or current sensors and the SCC-68 breakout box when measuring the current and voltage? Will the PCI-6221 DAQ card pick up a signal from the resistor? Which terminals in the SCC-68 should the wires from the resistors and current sensors be connected to? I am also using a proximity switch to measure the RPM of a motor. Should the proximity switch have a 2- or 3-wire connection? Should it have an analog or a digital connection? Is a power supply required to power the proximity switch, and should it have an NPN or a PNP connection?

    Hello,
    I was going to use a 20 A / 50 mV or a 20 A / 100 mV current shunt and connect wires from the shunt directly into the AI input terminals of the SCC-68. Would these be suitable? For example: http://uk.farnell.com/elc/sh10020/shunt-sh10020-20a-100mv-1-class/dp/1319576 or http://uk.farnell.com/datel/3020-01098-0/shunt-50mv-20a/dp/1339338
    Is it OK to use a current shunt, or should a Hall-effect sensor such as this be used instead: http://ie.farnell.com/honeywell-s-c/csla2cd/sensor-hall-effect/dp/1082269 ? Which of them would be more accurate, or are both fairly accurate?
    For the voltage: can I connect two resistors between the positive and negative wires going to the battery, run two wires from either side of one resistor directly into the analog inputs of the SCC-68, and measure the 12-15 volts that way? Would the PCI-6221 and SCC-68 be able to measure the voltage drop across the resistor?
    I also want to measure RPM. Does it matter whether the proximity switch has 2 or 3 wires, and should it have an analog or a digital/frequency output for connecting to the SCC-68?
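On the voltage question: the PCI-6221's analog inputs accept at most ±10 V, which is why a resistor divider across the battery is needed for 12-15 V. A quick sanity check of the divider arithmetic (the 10 kΩ values below are illustrative, not a sizing recommendation):

```python
# Voltage divider: Vout = Vin * R2 / (R1 + R2).
# With R1 = R2 (illustrative 10 kΩ each), the battery voltage is halved,
# bringing 12-15 V inside the 6221's ±10 V input range.
def divider(vin, r1, r2):
    return vin * r2 / (r1 + r2)

print(divider(15.0, 10e3, 10e3))  # 7.5
print(divider(12.0, 10e3, 10e3))  # 6.0
```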

  • My waveform chart is supposed to be plotted with respect to time, but I've noticed it is plotted with respect to the number of times through my VI

    I was under the assumption that waveform charts in LabVIEW 6i automatically displayed wired data with respect to time in seconds. However, I noticed that my chart plots data against the number of data points my VI has generated. Is there any way to fix this so that the chart actually plots vs. time? I've attached the VI. The CIN is for a ComputerBoards Inc. thermocouple. Any help would be appreciated.
    Attachments:
    tepTInScan.VI ‏73 KB

    It appears that the chart is set to display relative time, but since you don't have the acquisition in any kind of loop, your relative time is pretty meaningless. You've got the dX of the chart set to one, so at the beginning the x-scale numbers are going to be the same as the number of acquisitions until you've run it more than 60 times. At that point, you'll see the x scale show minutes and seconds. If you want absolute time for the x scale, look at the examples Real-Time Chart.vi or Variable-Time Chart XY.vi.
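The general fix, in any environment, is to attach a time base to the samples rather than plotting them against the acquisition count; a minimal sketch (Python, with an assumed sample interval):

```python
# Sketch: convert sample indices to relative time using an assumed dt.
# In LabVIEW chart terms, this is what setting x0 and dX (rather than
# leaving dX = 1) accomplishes.
dt = 0.5                      # seconds per sample (assumption)
samples = [20.1, 20.3, 20.2]  # e.g. thermocouple readings
t = [i * dt for i in range(len(samples))]
pairs = list(zip(t, samples)) # (time, value) points for an XY plot
```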

  • Trying to save data with a Write To Measurement File VI on an NI PXI-1042Q in real-time mode; it does not work, but when I run it without uploading it to the PXI it saves to the file

    Hi, I am trying to save data with a Write To Measurement File VI using an NI PXI-1042Q and an NI PXI-6229 DAQ in real-time mode, but it is not working. When I run it without uploading it to the PXI, it does save to the file. Please find my VI attached.
    Attachments:
    PWMs.vi ‏130 KB

     The other problem is that the DAQmx channel only works in real-time mode, not in a stand-alone VI, using LabVIEW 8.2 and Real-Time 8.2.

  • Powershell read CSV file line by line and only first header

    I have a CSV file, which reads like this:
    read, book
    read1, book1
    read2, book2
    I want to read only the first value (before the comma), one line at a time,
    meaning the first time it will be
    read
    read1
    read2
    no headers, and I want to pass that value to a variable for an if condition.
    I have a script that reads line by line, but it reads both values, before and after the comma, so instead of three values I'm ending up with six.
    $file = Import-CSV c:\script\server.csv
    $file | ForEach-Object {
        foreach ($property in $_.PSObject.Properties) {
            $property.Name
            $property.Value
            #replace = $property.Value
        }
    }

    If you can work with headers then do so as it will make your life easier.
    with a csv file called server.csv like this:
    headername1, headername2
    read, book
    read1, book1,
    read2, book2
    and this bit of code 
    $file = Import-CSV c:\script\server.csv
    #output to host, file or directly pipe the command above.
    foreach($cell in $file.headername1){ if($cell -eq $something){ }}
    will evaluate the content of each cell against $something.
    This is because PowerShell will grab the first row and claim that as a header name.
    So whatever you put in cell A1 in Excel will end up as the name of the first column and its corresponding property (e.g. $file.A1, or $file.headername1 if you will).

  • Program to upload csv file to internal table and insert into database table

    Hi, I'm writing a program where I need to upload a CSV file into an internal table using GUI_UPLOAD, and I also need the program to insert the data into my custom database table using the SPLIT command. Does anybody have any samples to help? It's urgent!

    Hi,
    Check this program, maybe it will give you a hint...
    REPORT z_table_upload LINE-SIZE 255.
    * Data
    DATA: it_dd03p TYPE TABLE OF dd03p,
          is_dd03p TYPE dd03p.
    DATA: it_rdata  TYPE TABLE OF text1024,
          is_rdata  TYPE text1024.
    DATA: it_fields TYPE TABLE OF fieldname.
    DATA: it_file  TYPE REF TO data,
          is_file  TYPE REF TO data.
    DATA: w_error  TYPE text132.
    * Macros
    DEFINE write_error.
      concatenate 'Error: table'
                  p_table
                  &1
                  &2
             into w_error
             separated by space.
      condense w_error.
      write: / w_error.
      stop.
    END-OF-DEFINITION.
    * Field symbols
    FIELD-SYMBOLS: <table> TYPE STANDARD TABLE,
                   <data>  TYPE ANY,
                   <fs>    TYPE ANY.
    * Selection screen
    SELECTION-SCREEN: BEGIN OF BLOCK b01 WITH FRAME TITLE text-b01.
    PARAMETERS: p_file  TYPE localfile DEFAULT 'C:\temp\' OBLIGATORY,
                p_separ TYPE c DEFAULT ';' OBLIGATORY.
    SELECTION-SCREEN: END OF BLOCK b01.
    SELECTION-SCREEN: BEGIN OF BLOCK b02 WITH FRAME TITLE text-b02.
    PARAMETERS: p_table TYPE tabname OBLIGATORY
                                     MEMORY ID dtb
                                     MATCHCODE OBJECT dd_dbtb_16.
    SELECTION-SCREEN: END OF BLOCK b02.
    SELECTION-SCREEN: BEGIN OF BLOCK b03 WITH FRAME TITLE text-b03.
    PARAMETERS: p_create TYPE c AS CHECKBOX.
    SELECTION-SCREEN: END OF BLOCK b03,
                      SKIP.
    SELECTION-SCREEN: BEGIN OF BLOCK b04 WITH FRAME TITLE text-b04.
    PARAMETERS: p_nodb RADIOBUTTON GROUP g1 DEFAULT 'X'
                                   USER-COMMAND rg1,
                p_save RADIOBUTTON GROUP g1,
                p_dele RADIOBUTTON GROUP g1.
    SELECTION-SCREEN: SKIP.
    PARAMETERS: p_test TYPE c AS CHECKBOX,
                p_list TYPE c AS CHECKBOX DEFAULT 'X'.
    SELECTION-SCREEN: END OF BLOCK b04.
    * At selection screen
    AT SELECTION-SCREEN.
      IF sy-ucomm = 'RG1'.
        IF p_nodb IS INITIAL.
          p_test = 'X'.
        ENDIF.
      ENDIF.
    * At selection screen
    AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_file.
      CALL FUNCTION 'F4_FILENAME'
           EXPORTING
                field_name = 'P_FILE'
           IMPORTING
                file_name  = p_file.
    * Start of selection
    START-OF-SELECTION.
      PERFORM f_table_definition USING p_table.
      PERFORM f_upload_data USING p_file.
      PERFORM f_prepare_table USING p_table.
      PERFORM f_process_data.
      IF p_nodb IS INITIAL.
        PERFORM f_modify_table.
      ENDIF.
      IF p_list = 'X'.
        PERFORM f_list_records.
      ENDIF.
    * End of selection
    END-OF-SELECTION.
    *       FORM f_table_definition                                       *
    * -->  VALUE(IN_TABLE)                                               *
    FORM f_table_definition USING value(in_table).
      DATA: l_tname TYPE tabname,
            l_state TYPE ddgotstate,
            l_dd02v TYPE dd02v.
      l_tname = in_table.
      CALL FUNCTION 'DDIF_TABL_GET'
           EXPORTING
                name          = l_tname
           IMPORTING
                gotstate      = l_state
                dd02v_wa      = l_dd02v
           TABLES
                dd03p_tab     = it_dd03p
           EXCEPTIONS
                illegal_input = 1
                OTHERS        = 2.
      IF l_state NE 'A'.
        write_error 'does not exist or is not active' space.
      ENDIF.
      IF l_dd02v-tabclass NE 'TRANSP' AND
         l_dd02v-tabclass NE 'CLUSTER'.
        write_error 'is type' l_dd02v-tabclass.
      ENDIF.
    ENDFORM.
    *       FORM f_prepare_table                                          *
    * -->  VALUE(IN_TABLE)                                               *
    FORM f_prepare_table USING value(in_table).
      DATA: l_tname TYPE tabname,
            lt_ftab TYPE lvc_t_fcat.
      l_tname = in_table.
      CALL FUNCTION 'LVC_FIELDCATALOG_MERGE'
           EXPORTING
                i_structure_name = l_tname
           CHANGING
                ct_fieldcat      = lt_ftab
           EXCEPTIONS
                OTHERS           = 1.
      IF sy-subrc NE 0.
        WRITE: / 'Error while building field catalog'.
        STOP.
      ENDIF.
      CALL METHOD cl_alv_table_create=>create_dynamic_table
        EXPORTING
          it_fieldcatalog = lt_ftab
        IMPORTING
          ep_table        = it_file.
      ASSIGN it_file->* TO <table>.
      CREATE DATA is_file LIKE LINE OF <table>.
      ASSIGN is_file->* TO <data>.
    ENDFORM.
    *       FORM f_upload_data                                            *
    * -->  VALUE(IN_FILE)                                                *
    FORM f_upload_data USING value(in_file).
      DATA: l_file    TYPE string,
            l_ltext   TYPE string.
      DATA: l_lengt   TYPE i,
            l_field   TYPE fieldname.
      DATA: l_missk   TYPE c.
      l_file = in_file.
      l_lengt = strlen( in_file ).
      FORMAT INTENSIFIED ON.
      WRITE: / 'Reading file', in_file(l_lengt).
      CALL FUNCTION 'GUI_UPLOAD'
           EXPORTING
                filename = l_file
                filetype = 'ASC'
           TABLES
                data_tab = it_rdata
           EXCEPTIONS
                OTHERS   = 1.
      IF sy-subrc <> 0.
        WRITE: /3 'Error uploading', l_file.
        STOP.
      ENDIF.
    * File not empty
      DESCRIBE TABLE it_rdata LINES sy-tmaxl.
      IF sy-tmaxl = 0.
        WRITE: /3 'File', l_file, 'is empty'.
        STOP.
      ELSE.
        WRITE: '-', sy-tmaxl, 'rows read'.
      ENDIF.
    * File header on first row
      READ TABLE it_rdata INTO is_rdata INDEX 1.
      l_ltext = is_rdata.
      WHILE l_ltext CS p_separ.
        SPLIT l_ltext AT p_separ INTO l_field l_ltext.
        APPEND l_field TO it_fields.
      ENDWHILE.
      IF sy-subrc = 0.
        l_field = l_ltext.
        APPEND l_field TO it_fields.
      ENDIF.
    * Check all key fields are present
      SKIP.
      FORMAT RESET.
      FORMAT COLOR COL_HEADING.
      WRITE: /3 'Key fields'.
      FORMAT RESET.
      LOOP AT it_dd03p INTO is_dd03p WHERE NOT keyflag IS initial.
        WRITE: /3 is_dd03p-fieldname.
        READ TABLE it_fields WITH KEY table_line = is_dd03p-fieldname
                             TRANSPORTING NO FIELDS.
        IF sy-subrc = 0.
          FORMAT COLOR COL_POSITIVE.
          WRITE: 'ok'.
          FORMAT RESET.
        ELSEIF is_dd03p-datatype NE 'CLNT'.
          FORMAT COLOR COL_NEGATIVE.
          WRITE: 'error'.
          FORMAT RESET.
          l_missk = 'X'.
        ENDIF.
      ENDLOOP.
    * Log other fields
      SKIP.
      FORMAT COLOR COL_HEADING.
      WRITE: /3 'Other fields'.
      FORMAT RESET.
      LOOP AT it_dd03p INTO is_dd03p WHERE keyflag IS initial.
        WRITE: /3 is_dd03p-fieldname.
        READ TABLE it_fields WITH KEY table_line = is_dd03p-fieldname
                             TRANSPORTING NO FIELDS.
        IF sy-subrc = 0.
          WRITE: 'X'.
        ENDIF.
      ENDLOOP.
    * Missing key field
      IF l_missk = 'X'.
        SKIP.
        WRITE: /3 'Missing key fields - no further processing'.
        STOP.
      ENDIF.
    ENDFORM.
    *       FORM f_process_data                                           *
    FORM f_process_data.
      DATA: l_ltext TYPE string,
            l_stext TYPE text40,
            l_field TYPE fieldname,
            l_datat TYPE c.
      LOOP AT it_rdata INTO is_rdata FROM 2.
        l_ltext = is_rdata.
        LOOP AT it_fields INTO l_field.
          ASSIGN COMPONENT l_field OF STRUCTURE <data> TO <fs>.
          IF sy-subrc = 0.
    * Field value comes from file, determine conversion
            DESCRIBE FIELD <fs> TYPE l_datat.
            CASE l_datat.
              WHEN 'N'.
                SPLIT l_ltext AT p_separ INTO l_stext l_ltext.
                WRITE l_stext TO <fs> RIGHT-JUSTIFIED.
                OVERLAY <fs> WITH '0000000000000000'.           "max 16
              WHEN 'P'.
                SPLIT l_ltext AT p_separ INTO l_stext l_ltext.
                TRANSLATE l_stext USING ',.'.
                <fs> = l_stext.
              WHEN 'F'.
                SPLIT l_ltext AT p_separ INTO l_stext l_ltext.
                TRANSLATE l_stext USING ',.'.
                <fs> = l_stext.
              WHEN 'D'.
                SPLIT l_ltext AT p_separ INTO l_stext l_ltext.
                TRANSLATE l_stext USING '/.-.'.
                CALL FUNCTION 'CONVERT_DATE_TO_INTERNAL'
                     EXPORTING
                          date_external = l_stext
                     IMPORTING
                          date_internal = <fs>
                     EXCEPTIONS
                          OTHERS        = 1.
              WHEN 'T'.
                CALL FUNCTION 'CONVERT_TIME_INPUT'
                     EXPORTING
                          input  = l_stext
                     IMPORTING
                          output = <fs>
                     EXCEPTIONS
                          OTHERS = 1.
              WHEN OTHERS.
                SPLIT l_ltext AT p_separ INTO <fs> l_ltext.
            ENDCASE.
          ELSE.
            SHIFT l_ltext UP TO p_separ.
            SHIFT l_ltext.
          ENDIF.
        ENDLOOP.
        IF NOT <data> IS INITIAL.
          LOOP AT it_dd03p INTO is_dd03p WHERE datatype = 'CLNT'.
    * This field is mandant
            ASSIGN COMPONENT is_dd03p-fieldname OF STRUCTURE <data>
                                                          TO <fs>.
            <fs> = sy-mandt.
          ENDLOOP.
          IF p_create = 'X'.
            IF is_dd03p-rollname = 'ERDAT'.
              ASSIGN COMPONENT is_dd03p-fieldname OF STRUCTURE <data>
                                                            TO <fs>.
              <fs> = sy-datum.
            ENDIF.
            IF is_dd03p-rollname = 'ERZET'.
              ASSIGN COMPONENT is_dd03p-fieldname OF STRUCTURE <data>
                                                            TO <fs>.
              <fs> = sy-uzeit.
            ENDIF.
            IF is_dd03p-rollname = 'ERNAM'.
              ASSIGN COMPONENT is_dd03p-fieldname OF STRUCTURE <data>
                                                            TO <fs>.
              <fs> = sy-uname.
            ENDIF.
          ENDIF.
          IF is_dd03p-rollname = 'AEDAT'.
            ASSIGN COMPONENT is_dd03p-fieldname OF STRUCTURE <data>
                                                          TO <fs>.
            <fs> = sy-datum.
          ENDIF.
          IF is_dd03p-rollname = 'AETIM'.
            ASSIGN COMPONENT is_dd03p-fieldname OF STRUCTURE <data>
                                                          TO <fs>.
            <fs> = sy-uzeit.
          ENDIF.
          IF is_dd03p-rollname = 'AENAM'.
            ASSIGN COMPONENT is_dd03p-fieldname OF STRUCTURE <data>
                                                          TO <fs>.
            <fs> = sy-uname.
          ENDIF.
          APPEND <data> TO <table>.
        ENDIF.
      ENDLOOP.
    ENDFORM.
    *       FORM f_modify_table                                           *
    FORM f_modify_table.
      SKIP.
      IF p_save = 'X'.
        MODIFY (p_table) FROM TABLE <table>.
      ELSEIF p_dele = 'X'.
        DELETE (p_table) FROM TABLE <table>.
      ELSE.
        EXIT.
      ENDIF.
      IF sy-subrc EQ 0.
        FORMAT COLOR COL_POSITIVE.
        IF p_save = 'X'.
          WRITE: /3 'Modify table OK'.
        ELSE.
          WRITE: /3 'Delete table OK'.
        ENDIF.
        FORMAT RESET.
        IF p_test IS INITIAL.
          COMMIT WORK.
        ELSE.
          ROLLBACK WORK.
          WRITE: '- test only, no update'.
        ENDIF.
      ELSE.
        FORMAT COLOR COL_NEGATIVE.
        WRITE: /3 'Error while modifying table'.
        FORMAT RESET.
      ENDIF.
    ENDFORM.
    *       FORM f_list_records                                           *
    FORM f_list_records.
      DATA: l_tleng TYPE i,
            l_lasti TYPE i,
            l_offst TYPE i.
    * Output width
      l_tleng = 1.
      LOOP AT it_dd03p INTO is_dd03p.
        l_tleng = l_tleng + is_dd03p-outputlen.
        IF l_tleng LT sy-linsz.
          l_lasti = sy-tabix.
          l_tleng = l_tleng + 1.
        ELSE.
          l_tleng = l_tleng - is_dd03p-outputlen.
          EXIT.
        ENDIF.
      ENDLOOP.
    * Output header
      SKIP.
      FORMAT COLOR COL_HEADING.
      WRITE: /3 'Contents'.
      FORMAT RESET.
      ULINE AT /3(l_tleng).
    * Output records
      LOOP AT <table> ASSIGNING <data>.
        LOOP AT it_dd03p INTO is_dd03p FROM 1 TO l_lasti.
          IF is_dd03p-position = 1.
            WRITE: /3 sy-vline.
            l_offst = 3.
          ENDIF.
          ASSIGN COMPONENT is_dd03p-fieldname OF STRUCTURE <data> TO <fs>.
          l_offst = l_offst + 1.
          IF is_dd03p-decimals LE 2.
            WRITE: AT l_offst <fs>.
          ELSE.
            WRITE: AT l_offst <fs> DECIMALS 3.
          ENDIF.
          l_offst = l_offst + is_dd03p-outputlen.
          WRITE: AT l_offst sy-vline.
        ENDLOOP.
      ENDLOOP.
    * Output end
      ULINE AT /3(l_tleng).
    ENDFORM.
    Regards,
    Joy.

  • Importing CSV file into AddressBook: Major and Immediate Crash

    I'll start with what I've already tried to solve this problem. I have read through several posts regarding other address book problems. I wanted to try the recommended deletion of the file "homedirectory/library/application support/address book/AddressBook-v22.abcddb". That file does not seem to exist on my computer for me to even try deleting it. I was able to locate the "library" folder and get seamlessly to the "application support" folder, but there is no "address book" folder in the "application support" folder.
    The problem is with importing a CSV file (created/saved from Outlook Express) into my Mac AddressBook. I've made 10 attempts and have gotten 10 crash messages. The first part of the crash message is posted below.
    Is there anything I can do to solve this problem? Is it because I've created a CSV file from Outlook Express that it won't work? I have successfully imported other CSV files (like an old, ancient CSV file that I created from a nextel phone years ago and a family address file that I recently created on my MacBook in Excel.)
    Here's the first page or so of the crash message... the entire report is about 28 pages:
    Process: Address Book [254]
    Path: /Applications/Address Book.app/Contents/MacOS/Address Book
    Identifier: com.apple.AddressBook
    Version: 5.0.1 (864)
    Build Info: AddressBook-8640000~4
    Code Type: X86-64 (Native)
    Parent Process: launchd [144]
    Date/Time: 2009-12-16 15:47:00.744 -0500
    OS Version: Mac OS X 10.6.2 (10C540)
    Report Version: 6
    Interval Since Last Report: 994434 sec
    Crashes Since Last Report: 10
    Per-App Interval Since Last Report: 8923 sec
    Per-App Crashes Since Last Report: 10
    Anonymous UUID: 9EDF5817-9DD1-4C21-9CE5-F6882E750B54
    Exception Type: EXC_CRASH (SIGABRT)
    Exception Codes: 0x0000000000000000, 0x0000000000000000
    Crashed Thread: 11 Dispatch queue: com.apple.root.default-priority
    Application Specific Information:
    abort() called
    * Terminating app due to uncaught exception 'NSRangeException', reason: '* -[NSCFArray objectAtIndex:]: index (7) beyond bounds (7)'
    * Call stack at first throw:
    0 CoreFoundation 0x00007fff82bf7444 __exceptionPreprocess + 180
    1 libobjc.A.dylib 0x00007fff82d3b0f3 objcexceptionthrow + 45
    2 CoreFoundation 0x00007fff82bf7267 +[NSException raise:format:arguments:] + 103
    3 CoreFoundation 0x00007fff82bf71f4 +[NSException raise:format:] + 148
    4 Foundation 0x00007fff83fd5080 _NSArrayRaiseBoundException + 122
    5 Foundation 0x00007fff83f37b81 -[NSCFArray objectAtIndex:] + 75
    6 AddressBook 0x00007fff833af1df -[ABImportMappingModel(PrivateMappingToPersonConversion) addressDictionaryForMapping:rowData:localizedAddressMappings:] + 348
    7 AddressBook 0x00007fff833af4d3 -[ABImportMappingModel(MappingToPersonConversion) personWithRowData:localizedAddressMappings:addressBook:] + 597
    8 AddressBook 0x00007fff833ace87 __-[ABNewTextFileImportController import:]block_invoke1 + 153
    9 Foundation 0x00007fff83f7a7d9 -[NSBlockOperation main] + 140
    10 Foundation 0x00007fff83f6b06d -[__NSOperationInternal start] + 681
    11 Foundation 0x00007fff83f6ad23 ___startOperations_block_invoke2 + 99
    12 libSystem.B.dylib 0x00007fff8617ace8 dispatch_call_block_andrelease + 15
    13 libSystem.B.dylib 0x00007fff86159279 dispatch_workerthread2 + 231
    14 libSystem.B.dylib 0x00007fff86158bb8 pthreadwqthread + 353
    15 libSystem.B.dylib 0x00007fff86158a55 start_wqthread + 13

    Pamela MacBeginner wrote:
    I'll start with what I've already tried to solve this problem. I have read through several posts regarding other address book problems. I wanted to try the recommended deletion of file "homedirectory/library/application support/address book/AddressBook-v22.abcddb". That file does not seem to exist on my computer for me to even try delete it.
    It does exist on your computer; you are looking in the wrong library folder. You need the library folder in your home directory, not the one at the top level of the drive. Click on the house icon in the sidebar of any Finder window; that's your home directory. You can also get to it from the top level of the drive by going to /users/yourusername.
    If deleting this file does not help, try converting the CSV file to a vCard before importing it into Address Book. You can use this converter, for example:
    http://homepage.mac.com/phrogz/CSV2vCard_v2.html

  • Csv files for OBIA procurement and spend analytics

    hi,
    Which CSV files do we need to configure in order to install OBIA 7.9.6.3 Procurement and Spend Analytics with EBS R12.1.3 as the source?
    In the documentation, Oracle gives sets of files for both 11i and R12. Do we need to configure the CSV files for both versions?
    Also, as per the documentation, some CSV files required for an 11i source have no counterparts in R12.
    Please advise.

    Hi,
    To be safe, configure all of the domain value files documented in the section http://docs.oracle.com/cd/E20490_01/bia.7963/e19039/configsupplychain.htm#CACIEBBH
    Do this regardless of whether they state 11i or R12 in the filename. Some of the files have not changed from 11i adapters, so the filenames have not been updated e.g. domainValues_Employee_Expense_Type_ora11i.csv
    Please mark if helpful / correct,
    Andy
    www.project.eu.com

  • Transferring from a CSV file to LabVIEW 2011 and viewing it in an XY Graph

    Hi, I would like to ask for your assistance with my project.
    I have an Excel file in CSV format and intend to load it into LabVIEW 2011.
    I want to create a storage area to keep all the files for viewing purposes, and in the meantime I want to view the data in graph form.
    I hope you will be able to help me with this. Thank you.
    Attachments:
    History_power2013-10-27 12-53-32 ch1 tv ch 2 fan.csv ‏209 KB

    Thank you for replying.
    But I have a problem displaying the whole CSV file.
    Also, I have a batch of CSV files to monitor, so is there a way to consolidate all the data on one page?
    Attachments:
    Open CSV file and history files.vi ‏33 KB

  • CSV file to database tables, including foreign-key-related columns, directly

    I have created dimension tables in SSIS and I need to load the data into those tables from my given CSV files. I also need to load data into the foreign key columns of the fact table.

    We definitely have primary key relations: the tables contain primary keys and foreign keys. I have created nearly 20 tables in SQL Server; some of them are dimensions and some are facts. I have CSV files of data, so I need to load the data into those tables using an SSIS package.
    My idea is to take one Data Flow Task in the Control Flow, with one OLE DB destination for each and every table, and a CSV file as the source for each one; by connecting the two, we can load the data.
    But I need to load data into 20 tables using one Data Flow Task. How is that possible? Are there any solutions or different ways to load data from the CSV files into the tables with an SSIS package?

  • CSV file reading

    Hi all,
    I got a CSV parser from the net. It is giving the runtime error "IO FILE Exception".
    There are actually 3 files included in it.
    CSVFile
    import java.util.ArrayList;
    import java.io.BufferedReader;
    import java.io.FileReader;
    /** holds the file object of records */
    public class CSVFile
    {
        /** arraylist of records, each one containing a single record */
        private ArrayList records = new ArrayList();
        /** What to replace a row delimiter with, on output. */
        private String replacementForRowDelimiterInTextField = " "; // Change if needed.
        /** debug, > 0 for output. */
        public int debug = 5;
        private boolean debugLoading = true; // true when debugging load cycle

        /**
         * Return the required record
         * @param index the index of the required record
         * @return a CSVRecord, see #CSVRecord
         */
        public CSVRecord getRecord (int index)
        {
            if (this.debug > 3 && !debugLoading) {
                System.err.println("CSVFile getRecord ["+index+"]"+ ((CSVRecord)this.records.get(index)).getFields(3));
            }
            return (CSVRecord)this.records.get(index);
        }

        /**
         * Get the number of records in the file
         * @return 1 based count of records.
         */
        public int count()
        {
            return this.records.size();
        }

        // ----- Constructor -----
        /**
         * Constructor; create a file object
         * @param details  a propertyFile object, see #propertyFile
         * @param csvFile filename of csv file
         */
        public CSVFile(propertyFile details, String csvFile)
             try{
              BufferedReader reader = new BufferedReader (new FileReader (csvFile));
              //StringBuilder sbBuffer = new StringBuilder( reader.ReadToEnd() );
              StringBuffer buf=new StringBuffer();
              String text;
              try {
                  while ((text=reader.readLine()) != null)
                   buf.append(text + "\n");
                  reader.close();
              }catch (java.io.IOException e) {
                  System.err.println("Unable to read from csv file "+ csvFile);
                  System.exit(2);
              String buffer;
              buffer = buf.toString();
              buffer = buffer.replaceAll("&","&");
              buffer = buffer.replaceAll("<","<");
              boolean inQuote = false;
              String savedRecord = "";
              String curRecord = "";
              if (debug > 2) {
                  System.err.println("csvFile: setup");
                  System.err.println("Read int from src CSV file");
              //Split entire input file into array records, using row delim.
              String records[] =  buffer.split( details.rowDelimiter() );
              //Iterate over each split, looking for incomplete quoted strings.
              for (int rec=0; rec <records.length; rec++)
                   curRecord = savedRecord + records[rec];
                   if (debug > 4) {
                       System.out.println("csvFile: saved rec" + savedRecord);
                       System.out.println("csvFile: current rec " + curRecord);
                       System.out.println("csvFile: currRecLth: " + curRecord.length());
                   for (int i = 0; i < curRecord.length(); i ++ )
                        char ch = curRecord.charAt(i);
                        char prev = ( i != 0? curRecord.charAt(i-1): ' ');
                        char nxt = ( i < (curRecord.length()-2)? curRecord.charAt(i+1): ' ');
                        if ( !inQuote && ch == '"' )
                            inQuote = true;
                        else
                            if ( inQuote && ch == '"' )
                             if ( i + 1 < curRecord.length() )
                                 inQuote = (nxt == '"')
                                  || (prev == '"');
                             else
                                 inQuote = false;
                   if ( inQuote )
                        // A space is currently used to replace the row delimiter
                        //when found within a text field
                        savedRecord = curRecord + replacementForRowDelimiterInTextField;
                        inQuote = false;
                   else
                        this.records.add( new CSVRecord(details, curRecord) );
                        savedRecord = "";
              catch (java.io.FileNotFoundException e) {
                  System.out.println("Unable to read CSV file, quitting");
                  System.exit(2);
         // ----- Private Methods -----
         private String[] SplitText(String textIn, String splitString)
              String [] arrText = textIn.split(splitString);
              return arrText;
    *Get all records in the csvfile
    *@return array of CSVRecords, see #CSVRecord
    public CSVRecord[] GetAllRecords()
    CSVRecord[] allRecords = new CSVRecord[ this.records.size() ];
    for (int i = 0; i < this.records.size(); i++ )
         allRecords[i] = (CSVRecord)this.records.get(i);
    return allRecords;
      public static void main(String args[])
         propertyFile path=new propertyFile("C:\\bea\\jdk142_05\\bin");
        CSVFile  a=new CSVFile(path,"C:\\bea\\jdk142_05\\bin\\xxx.csv");
    CSVRecord

    import java.util.ArrayList;

    /**
     * Represents a single record of a CSV file.
     */
    public class CSVRecord
    {
        /** Debug. */
        private int debug = 0;

        /** ArrayList of the fields of the record. */
        private ArrayList fields = new ArrayList();

        /**
         * Get the field at the given index.
         * @param index index of the field required
         * @return String value of that field
         */
        public String getFields(int index)
        {
            if (index < fields.size())
                return (String) this.fields.get(index);
            else
                return "";
        }

        /**
         * Get the number of fields.
         * @return int number of fields in this record
         */
        public int count()
        {
            return this.fields.size();
        }

        /**
         * Create a csv record from the input String, using the property file.
         * @param details the property file
         * @see <a href="propertyFile.html">propertyFile</a>
         * @param recordText the record to be added to the arraylist of records
         */
        public CSVRecord(propertyFile details, String recordText)
        {
            boolean inQuote = false;  // true if within a quote
            String savedField = "";   // temp saved field value
            String curField = "";     // current field value
            String field = "";        // field being built

            // Split the record according to the field delimiter.
            // The default String.split() is not accurate according to the M$ view.
            String records[] = recordText.split(details.fieldDelimiter());
            for (int rec = 0; rec < records.length; rec++) {
                field = records[rec];
                // Add this field to the currently saved field.
                curField = savedField + field;
                // Iterate over the current field.
                for (int i = 0; i < curField.length(); i++) {
                    char ch = curField.charAt(i);                                            // current char
                    char nxt = (i == curField.length() - 1) ? ' ' : curField.charAt(i + 1);  // next char
                    char prev = (i == 0 ? ' ' : curField.charAt(i - 1));                     // previous char
                    if (!inQuote && ch == '"')
                        inQuote = true;
                    else if (inQuote && ch == '"') {
                        if ((i + 1) < curField.length())
                            inQuote = (nxt == '"') || (prev == '"');
                        else
                            inQuote = (prev == '"');
                    }
                } // end of current field
                if (inQuote) {
                    savedField = curField + details.fieldDelimiter() + " ";
                    inQuote = false;
                } else if (curField.length() > 0) {
                    char ch = curField.charAt(0);                      // first char
                    char lst = curField.charAt(curField.length() - 1); // last char
                    if (ch == '"' && lst == '"') {
                        // Strip the leading and trailing quotes. Note substring's end
                        // index is exclusive, so length()-1 drops only the final quote.
                        curField = curField.substring(1, curField.length() - 1);
                        // Collapse doubled quotes to a single quote.
                        curField = curField.replaceAll("\"\"", "\"");
                    }
                    this.fields.add(curField);
                    savedField = "";
                } else if (curField.length() == 0) {
                    this.fields.add("");
                }
                if (debug > 2)
                    System.out.println("csvRec  Added:" + curField);
            } // end of for each record
        }
    }
    propertyFile

    import java.util.ArrayList;
    import java.io.BufferedReader;
    import java.io.FileReader;

    /**
     * This class holds the data from a Property file.
     */
    public class propertyFile
    {
        // ----- Private Fields -----

        /** Comments from the file. */
        private String comment;

        /** Delimiter for individual fields. */
        private String fieldDelimiter; // was char

        /** Delimiter for each row. */
        private String rowDelimiter;

        /** Root element to use for output XML. */
        private String xmlRootName;

        /** Element to use for each row. */
        private String recordName;

        /** How many fields there are. Note: this is 1-based, not zero-based. */
        private int fieldCount;

        /** Array of fields. */
        private ArrayList fields = new ArrayList(88);

        /** Set to an int > 0 for debug output. */
        private int debug = 0;

        /**
         * A single instance of this will hold all the relevant details for ONE PropertyFile.
         * @param filePath String name of the property file.
         */
        public propertyFile(String filePath)
        {
            //StreamReader reader = new StreamReader( filePath );
            try {
                BufferedReader reader = new BufferedReader(new FileReader(filePath));
                String line = null;
                while ((line = reader.readLine()) != null) {
                    if (line.length() != 0) { // was != ""
                        if (debug > 0)
                            System.err.println("String is: " + line + " lth: " + line.length());
                        if (line.charAt(0) != '[' && !(line.startsWith("//"))) {
                            String propertyValue = line.split("=")[1];
                            // Assign Comment
                            if (line.toUpperCase().startsWith("COMMENT="))
                                this.comment = propertyValue;
                            // Assign Field Delimiter
                            if (line.toUpperCase().startsWith("FIELDDELIMITER"))
                                this.fieldDelimiter = propertyValue.substring(0);
                            // Assign Row Delimiter
                            if (line.toUpperCase().startsWith("ROWDELIMITER")) {
                                // Use equals(), not ==, to compare String contents.
                                if (propertyValue.substring(0, 1).equals("\\")
                                        && propertyValue.toUpperCase().charAt(1) == 'N')
                                    this.rowDelimiter = "\r\n";
                                else
                                    this.rowDelimiter = propertyValue;
                            }
                            // Assign Root Document Name
                            if (line.toUpperCase().startsWith("ROOTNAME"))
                                this.xmlRootName = propertyValue;
                            // Assign Record Name
                            if (line.toUpperCase().startsWith("RECORDNAME"))
                                this.recordName = propertyValue;
                            // Assign Field Count
                            if (line.toUpperCase().startsWith("FIELDS"))
                                this.fieldCount = Integer.parseInt(propertyValue);
                        } else {
                            if (line.toUpperCase().startsWith("[FIELDS]")) {
                                while ((line = reader.readLine()) != null) {
                                    if (line.length() == 0)
                                        break;
                                    else {
                                        if (debug > 0)
                                            System.err.println("Adding: " + line.split("=")[1]);
                                        this.fields.add(line.split("=")[1]);
                                    }
                                }
                                break;
                            }
                        }
                    }
                }
                reader.close();
            } catch (java.io.IOException e) {
                System.out.println("**** IO Error on input file. Quitting");
                System.exit(2);
            }
        }

        /**
         * Return the comment in the property file.
         * @return String, the comment value, if any
         */
        public String comment()
        {
            return this.comment;
        }

        /**
         * The delimiter to be used for each field, often comma.
         * @return String, the character(s)
         */
        public String fieldDelimiter()
        {
            return this.fieldDelimiter;
        }

        /**
         * Row Delimiter - often '\n'.
         * @return String, the character(s)
         */
        public String rowDelimiter()
        {
            return this.rowDelimiter;
        }

        /**
         * The XML document root node.
         * @return String, the element name
         */
        public String XMLRootName()
        {
            return this.xmlRootName;
        }

        /**
         * The node name for each record.
         */
        public String recordName()
        {
            return this.recordName;
        }

        /**
         * Number of fields per record/node.
         * @return integer count of the number of fields, 1-based.
         */
        public int fields()
        {
            return this.fieldCount;
        }

        // ----- Public Methods -----

        /**
         * The value of the nth field, 0-based.
         * @param index which field to return
         * @return String the field value
         */
        public String fieldNames(int index)
        {
            if (index < this.fields.size())
                return (String) this.fields.get(index); // was .toString()
            else {
                System.err.println("PropertyFile: Trying to get idx of: "
                        + index
                        + "\n when only "
                        //+ (this.fields.size() -  1)
                        + this.fieldCount
                        + " available");
                System.exit(2);
            }
            return "";
        }

        /**
         * Test entry point, this class.
         * @param argv cmd line arg of the property file
         */
        public static void main(String argv[])
        {
            if (argv.length != 1) {
                System.out.println("Usage: propertyFile <file>");
                System.exit(1);
            }
            propertyFile p = new propertyFile(argv[0]);
        }
    }
    Please help, as I am a novice at file handling, especially CSV files.

    > **** IO Error on input file. Quitting
    Press any key to continue . . .
    OK, there is no compiler error, but it seems that the property file named by the filePath String isn't there, so opening it in propertyFile's constructor fails with the IO error.
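    For the original problem in this thread - fields whose quoted text contains commas - a much smaller, self-contained parser is enough. Below is a minimal sketch (the class name CsvSplit is mine, not part of the code above): it walks the record character by character, tracks whether it is inside quotes, treats commas inside quotes as ordinary text, and collapses doubled quotes to a literal quote.

    ```java
    import java.util.ArrayList;
    import java.util.List;

    public class CsvSplit {
        /** Split one CSV record into fields, honouring quoted fields:
         *  commas inside quotes are kept, and "" becomes a literal ". */
        public static List<String> split(String line) {
            List<String> fields = new ArrayList<>();
            StringBuilder field = new StringBuilder();
            boolean inQuote = false;
            for (int i = 0; i < line.length(); i++) {
                char ch = line.charAt(i);
                if (inQuote) {
                    if (ch == '"') {
                        if (i + 1 < line.length() && line.charAt(i + 1) == '"') {
                            field.append('"'); // escaped quote ""
                            i++;
                        } else {
                            inQuote = false;   // closing quote
                        }
                    } else {
                        field.append(ch);      // commas in here are kept verbatim
                    }
                } else if (ch == '"') {
                    inQuote = true;            // opening quote
                } else if (ch == ',') {
                    fields.add(field.toString());
                    field.setLength(0);        // start the next field
                } else {
                    field.append(ch);
                }
            }
            fields.add(field.toString());      // last field has no trailing comma
            return fields;
        }

        public static void main(String[] args) {
            // A record like the one from the thread: the quoted comma stays in one field.
            System.out.println(split("333,khdfs,\"Company name\",87348,\" Name1, Name2\""));
        }
    }
    ```

    Compared with a plain String.split(","), this never breaks a quoted field apart, which is exactly the case the posted parser's saved-field bookkeeping is trying to handle.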

  • BULK INSERT from a text (.csv) file - read only specific columns.

    I am using Microsoft SQL 2005 and I need to do a BULK INSERT from a .csv file I just downloaded from PayPal. I can't edit some of the columns that are given in the report, so I am trying to load only specific columns from the file.
    BULK INSERT Orders
    FROM 'C:\Users\*******\Desktop\DownloadURL123.csv'
    WITH
    (
        FIELDTERMINATOR = ',',
        FIRSTROW = 2,
        ROWTERMINATOR = '\n'
    );
    So where would I state which column names (from row #1 of the .csv file) map to which specific columns in the table?
    I saw this on one of the sites, and it seemed to guide me towards the answer, but I failed. Here is what it said; it might help you:
    FORMATFILE [ = 'format_file_path' ]
    Specifies the full path of a format file. A format file describes the data file that contains stored responses created using the bcp utility on the same table or view. The format file should be used in cases in which:
    The data file contains greater or fewer columns than the table or view.
    The columns are in a different order.
    The column delimiters vary.
    There are other changes in the data format. Format files are usually created by using the bcp utility and modified with a text editor as needed. For more information, see bcp Utility.

    Date, Time, Time Zone, Name, Type, Status, Currency, Gross, Fee, Net, From Email Address, To Email Address, Transaction ID, Item Title, Item ID, Buyer ID, Item URL, Closing Date, Reference Txn ID, Receipt ID,
    "04/22/07", "12:00:21", "PDT", "Test", "Payment Received", "Cleared", "USD", "321", "2.32", "3213', "[email protected]", "[email protected]", "", "testing", "392302", "jdal32", "http://ddd.com", "04/22/03", "", "",
    "04/22/07", "12:00:21", "PDT", "Test", "Payment Received", "Cleared", "USD", "321", "2.32", "3213', "[email protected]", "[email protected]", "", "testing", "392932930302", "jejsl32", "http://ddd.com", "04/22/03", "", "",
    Do you need more than 2 rows? I did not include all the columns from the actual csv file, but most of them. I am planning on taking these specific columns into the first table: date, to email address, transaction ID, item title, item ID, buyer ID, item URL.
    For the other table I haven't listed any values here, but if you show me how to do this one I can probably figure the other table out.
    Thank you very much.

  • CSV file reading using UTL_FILE at run time

    Hi,
    I have to read a CSV file using UTL_FILE, but the folder contains many CSV files and I don't know their names, so I have to discover which CSV files to read at run time.
    Please let me know how we should achieve this.
    Thanks

    place the following in a shell script, say "list_my_files.ksh"
    ls -l > my_file_list.dat
    then run the shell script using dbms_scheduler:
    begin
    dbms_scheduler.create_program (program_name   => 'a_test_proc'
                                  ,program_type   => 'EXECUTABLE'
                                  ,program_action => '/home/bluefrog/list_my_files.ksh'
                                  ,number_of_arguments => 0
                                  ,enabled => true);
    end;
    /
    then open "my_file_list.dat" using UTL_FILE, read all the file names, and choose the one you require.
    P;
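    As an aside: if the program consuming the files were Java rather than PL/SQL, no shell script or scheduler job would be needed, because the directory can be listed directly. A minimal sketch under that assumption (the class name CsvFinder and the folder argument are mine); from inside the database itself you would still use the external-job approach above.

    ```java
    import java.io.File;

    public class CsvFinder {
        /** Return the names of all *.csv files in a folder, discovered at run time. */
        public static String[] listCsvFiles(String folder) {
            File dir = new File(folder);
            // list(...) returns null when the folder does not exist or is not a directory
            String[] names = dir.list((d, name) -> name.toLowerCase().endsWith(".csv"));
            return names == null ? new String[0] : names;
        }

        public static void main(String[] args) {
            for (String name : listCsvFiles(args.length > 0 ? args[0] : "."))
                System.out.println(name);
        }
    }
    ```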

  • SSIS CSV FILE READING ISSUE

    hi, can you reply to the post below?
    Actually I am using a Flat File connection manager to read the csv file, with the row delimiter set to {CR}{LF}.
    Suddenly, while looping through the files, the package failed because it was unable to read the csv file.
    I then changed the row delimiter to {LF}, and it worked for the file that had failed with the {CR}{LF} delimiter.
    Now I want to know why the package fails over the row delimiter.
    Can anyone help me with this?
    Please share what the actual difference is between the two.

    Please share me what actually the difference between those
    CR = Carriage Return = CHAR(13) in SQL
    This character is used in Mac OS as new line.
    When this character is used, cursor goes to first position of the line
    LF = Line Feed = CHAR(10) in SQL
    This character is used in Unix as new line.
    When this character is used, the cursor goes to the next line (as in the old typewriter days, when the paper moved up).
    CR LF
    New line in Windows system. Combination of CR and LF.
    The best thing is to open the test flat file in Notepad++ and enable Show Symbols - Show All Characters to see exactly what you have as the row delimiter.
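    The connection manager looks for the exact byte sequence you configure, which is why a {LF}-only file breaks a {CR}{LF} setting. If you pre-process the files in your own code before the package reads them, one split pattern covers all three conventions. A minimal sketch in Java (the class name LineEndings is hypothetical; this is not an SSIS API):

    ```java
    public class LineEndings {
        /** Split text into rows whatever the row delimiter is:
         *  \r\n (Windows), \n (Unix), or \r (classic Mac OS).
         *  \r\n must come first in the pattern so it is not
         *  matched as two separate delimiters. */
        public static String[] splitRows(String text) {
            return text.split("\r\n|\n|\r");
        }

        public static void main(String[] args) {
            System.out.println(splitRows("a,1\r\nb,2\r\n").length); // 2 rows either way
            System.out.println(splitRows("a,1\nb,2\n").length);
        }
    }
    ```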
    Cheers,
    Vaibhav Chaudhari
    [MCTS],
    [MCP]
