Importing CSV file with Data Merge Fails

Specs
See pasted text from CSV at http://pastebin.com/mymhugpN
I am using InDesign CS6 (8.0.1)
I created the CSV by downloading it from a Google Spreadsheet as a CSV. I confirmed in Terminal, using the file command, that the character encoding is UTF-8.
Problem in detail
I am trying to import a CSV file (UTF-8) with Data Merge via the Select Data Source... command with Show Import Options checked. In the Data Source Import Options dialog, I set the following options: Delimiter: Comma, Encoding: Unicode, Platform: Macintosh. I leave Preserve Spaces in Data Source unchecked. It fails to import any variables and produces no error message. I have tried other CSV files as well (created in TextEdit, Espresso, etc.), and it seems that InDesign will not import any file if Unicode is specified as the encoding, no matter which other options are specified.
Can anyone else confirm this?
Importing as ASCII works, but obviously does not display my content correctly.

Mike is having some trouble posting in this thread (and I am too), but he sent me a PM with what he wanted to say:
OK. I think I might have a positive answer for you.
I was getting lost in the upper ASCII characters you showed. In your test file I never could see any: a case of not seeing the trees for the forest.
Your quote marks are getting dropped in your test file. This may or may not affect other factors, but it did in some further testing. I believe ID has an issue with dropping quote marks, even in a plain ASCII file, if the marks are at the beginning of a sentence and the file is tab delimited. Call it a bug.
Because of all the commas and quote marks in your simple file, I think you should be exporting from Google Docs' spreadsheet as a tab-delimited file. This exported file has to be opened in a text editor capable of saving it out as a UTF-16 BE (Big Endian) type of file.
Also, I think you are going to have to use proper quote marks throughout, or change them in the exported tab-delimited file. Best to have a correct source, though.
Here is your sample ZIPped up. I think it works properly. But then again, I think I might be bleary-eyed by now.
http://www.wenzloffandsons.com/temp/merge_psalms_utf-16.zip
Take care, Mike
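
For anyone who wants to script Mike's fix rather than round-trip through a text editor, here is a minimal Python sketch of the same conversion: comma-delimited UTF-8 in, tab-delimited UTF-16 BE out. The filenames are placeholders, and it assumes the source is a well-formed CSV with no tabs inside fields.

import csv

# Read the comma-delimited UTF-8 export (filename is an example).
with open("merge_source.csv", encoding="utf-8", newline="") as src:
    rows = list(csv.reader(src))

# Write it back out tab-delimited as UTF-16 BE with an explicit BOM,
# which is the form InDesign's Data Merge reportedly accepts.
with open("merge_source.txt", "w", encoding="utf-16-be", newline="") as dst:
    dst.write("\ufeff")  # byte order mark (encodes to FE FF)
    for row in rows:
        dst.write("\t".join(row) + "\r\n")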

Similar Messages

  • How to import csv file with multiple tables into sql server

    I have multiple csv files; each has one sheet but contains 130 headers, with each header having different data.
    I'd like to import each of these header rows, with its data, into its own file in SQL Server.
    I know very basic SSIS but am not familiar with its scripting, which is what I assume I'd have to use.
    Each header in the csv file is structured as such (also see example pic):
    The first header would be this:
    ITEM = ORG_V
    DATE = 2013-07-22 10:00 ~ 2013-07-22 10:15
    column names
    data
    The second header would be this:
    ITEM = TER_V
    DATE = 2013-07-22 10:00 ~ 2013-07-22 10:15
    column names
    data
    The headers can appear at any row number, and the amount of data in each file differs, but each section always starts with "ITEM ="
    and then, in the next row, "DATE ="
    I could also convert these to excel files if it makes this process easier. 

    Why don't you put a filter on D3, filter out the blanks, copy/paste to a new CSV file, save it, and import that?
    There's no way you're going to get SQL to do that kind of thing for you. The language is for set-based operations, not for complex data manipulation tasks.
    Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.
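
    If SSIS scripting feels like overkill, a short standalone script can split the file before it ever reaches SQL Server. A hedged Python sketch, assuming each section begins with a row whose first cell starts with "ITEM =" (the filename and the per-section naming scheme are made up):

    import csv

    out, writer = None, None
    with open("multi_table.csv", newline="") as src:  # example filename
        for row in csv.reader(src):
            # A row starting with "ITEM =" opens a new section file.
            if row and row[0].strip().startswith("ITEM ="):
                if out:
                    out.close()
                name = row[0].split("=", 1)[1].strip()
                out = open(f"{name}.csv", "w", newline="")
                writer = csv.writer(out)
            if writer:  # rows before the first "ITEM =" are skipped
                writer.writerow(row)
    if out:
        out.close()

    Each resulting single-header file can then be loaded with a plain flat-file source or BULK INSERT.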

  • Problem import csv file with SQL*loader and control file

    I have a *.csv file looking like this:
    E0100070;EKKJ 1X10/10 1 KV;1;2003-06-16;01C;75
    E0100075;EKKJ 1X10/10 1 KV;500;2003-06-16;01C;67
    E0100440;EKKJ 2X2,5/2,5 1 KV;1;2003-06-16;01C;37,2
    E0100445;EKKJ 2X2,5/2,5 1 KV;500;2003-06-16;01C;33,2
    E0100450;EKKJ 2X4/4 1 KV;1;2003-06-16;01C;53
    E0100455;EKKJ 2X4/4 1 KV;500;2003-06-16;01C;47,1
    I want to import this csv file to this table:
    create table artikel (artnr varchar2(10), namn varchar2(25), fp_storlek number, datum date, mtrlid varchar2(5), pris number);
    My controlfile looks like this:
    LOAD DATA
    INFILE 'e:\test.csv'
    INSERT
    INTO TABLE ARTIKEL
    FIELDS TERMINATED BY ';'
    TRAILING NULLCOLS
    (ARTNR, NAMN, FP_STORLEK char "to_number(:fp_storlek,'99999')", DATUM date 'yyyy-mm-dd', MTRLID, pris char "to_number(:pris,'999999D99')")
    I can't get SQL*Loader to import the last column (pris) the way I want. It ignores my decimal separator, which in this case is "," and not "."; maybe this is the problem. If the decimal separator is the problem, how can I get Oracle to recognize "," as a decimal separator?
    As it stands, the result of the import is that a decimal number (37,2) becomes 372 in the table.

    Set the NLS_NUMERIC_CHARACTERS environment variable at the OS level before running SQL*Loader:
    $ cat test.csv
    E0100070;EKKJ 1X10/10 1 KV;1;2003-06-16;01C;75
    E0100075;EKKJ 1X10/10 1 KV;500;2003-06-16;01C;67
    E0100440;EKKJ 2X2,5/2,5 1 KV;1;2003-06-16;01C;37,2
    E0100445;EKKJ 2X2,5/2,5 1 KV;500;2003-06-16;01C;33,2
    E0100450;EKKJ 2X4/4 1 KV;1;2003-06-16;01C;53
    E0100455;EKKJ 2X4/4 1 KV;500;2003-06-16;01C;47,1
    $ cat artikel.ctl
    LOAD DATA
    INFILE 'test.csv'
    replace
    INTO TABLE ARTIKEL
    FIELDS TERMINATED BY ';'
    TRAILING NULLCOLS
    (ARTNR, NAMN, FP_STORLEK char "to_number(:fp_storlek,'99999')", DATUM date 'yyyy-mm-dd', MTRLID, pris char "to_number(:pris,'999999D99')")
    $ sqlldr scott/tiger control=artikel
    SQL*Loader: Release 10.1.0.3.0 - Production on Sat Nov 12 15:10:01 2005
    Copyright (c) 1982, 2004, Oracle.  All rights reserved.
    Commit point reached - logical record count 6
    $ sqlplus scott/tiger
    SQL*Plus: Release 10.1.0.3.0 - Production on Sat Nov 12 15:10:11 2005
    Copyright (c) 1982, 2004, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.1.0.3.0 - Production
    With the Partitioning, OLAP and Data Mining options
    SQL> select * from artikel;
    ARTNR      NAMN                      FP_STORLEK DATUM      MTRLI       PRIS
    E0100070   EKKJ 1X10/10 1 KV                  1 16/06/2003 01C           75
    E0100075   EKKJ 1X10/10 1 KV                500 16/06/2003 01C           67
    E0100440   EKKJ 2X2,5/2,5 1 KV                1 16/06/2003 01C          372
    E0100445   EKKJ 2X2,5/2,5 1 KV              500 16/06/2003 01C          332
    E0100450   EKKJ 2X4/4 1 KV                    1 16/06/2003 01C           53
    E0100455   EKKJ 2X4/4 1 KV                  500 16/06/2003 01C          471
    6 rows selected.
    SQL> exit
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.1.0.3.0 - Production
    With the Partitioning, OLAP and Data Mining options
    $ export NLS_NUMERIC_CHARACTERS=',.'
    $ sqlldr scott/tiger control=artikel
    SQL*Loader: Release 10.1.0.3.0 - Production on Sat Nov 12 15:10:41 2005
    Copyright (c) 1982, 2004, Oracle.  All rights reserved.
    Commit point reached - logical record count 6
    $ sqlplus scott/tiger
    SQL*Plus: Release 10.1.0.3.0 - Production on Sat Nov 12 15:10:45 2005
    Copyright (c) 1982, 2004, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.1.0.3.0 - Production
    With the Partitioning, OLAP and Data Mining options
    SQL> select * from artikel;
    ARTNR      NAMN                      FP_STORLEK DATUM      MTRLI       PRIS
    E0100070   EKKJ 1X10/10 1 KV                  1 16/06/2003 01C           75
    E0100075   EKKJ 1X10/10 1 KV                500 16/06/2003 01C           67
    E0100440   EKKJ 2X2,5/2,5 1 KV                1 16/06/2003 01C         37,2
    E0100445   EKKJ 2X2,5/2,5 1 KV              500 16/06/2003 01C         33,2
    E0100450   EKKJ 2X4/4 1 KV                    1 16/06/2003 01C           53
    E0100455   EKKJ 2X4/4 1 KV                  500 16/06/2003 01C         47,1
    6 rows selected.
    SQL>
    The control file is exactly the same as yours; I just put replace instead of insert.

  • How to import flat file with date in filename on a regular basis

    Hi,
    Using OWB 11gR1
    I have a file that will be delivered to an FTP each night with the date in the filename having the form YYYYMMDD-FILE.txt (ex: 20100326-FILE.txt) that I want to import to an external table.
    Now I've set up the import to the external table but am only able to import files that I specify the name for exactly. I've tried pointing to filenames such as "*-FILE.txt" and "%-FILE.txt" but that only results in errors.
    It must be possible to automatically import files with different filenames but the same structure, isn't it? If anyone could help me solve this, it'd be greatly appreciated.
    Thank you in advance.

    Hi
    For dynamic files you can:
    1. alter the DDL on the external table to point to the file with the changing name,
    2. copy the file to a fixed name before using the external table/maps, or
    3. use the preprocessor to cat/pipe these files for the external table. See the post at http://blogs.oracle.com/warehousebuilder/2009/06/file_staging_using_external_table_preprocessor.html; it shows gunzip, but it could simply be doing 'cat' on a bunch of files to standard output.
    Cheers
    David
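
    For what it's worth, option 2 above is often the simplest to automate. A minimal Python sketch of the nightly copy step, with made-up paths:

    import shutil
    from datetime import date

    # Copy today's dated drop (e.g. 20100326-FILE.txt) onto the fixed
    # name that the external table's LOCATION clause points at.
    dated = date.today().strftime("%Y%m%d") + "-FILE.txt"
    shutil.copy(f"/ftp/incoming/{dated}", "/ftp/incoming/FILE.txt")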

  • Error importing CSV files with 'hidden' characters using External Table

    Hi Folks
    Bit of a strange one here.
    We're well used to using the External Table method of loading data from CSV files into the database but a recent event has presented us with a problem.
    We have received some CSV files that 'look' like regular CSV files but Oracle will not load them.
    When we examined the CSV file using VIM on a UNIX box, we saw the 'hidden' characters ^@ between every regular character in the file.
    So a string that looks like this when opened in Excel/WordPad etc.:
    "TEST","TEXT"
    looks like this when examined with VIM:
    ^@"^@T^@E^@S^@T^@"^@,^@"^@T^@E^@X^@T^@"
    Has anyone come across this before?
    Many thanks
    Simon Gadd
    Oracle 11g 11.2.0.1.0

    Hi Simon,
    ^@ represents the NUL character (0x00).
    So, most likely, you've got a Unicode-encoded file.
    You'll have to specify the character set in the record specification (and if necessary the byte order mark), for instance:
    CREATE TABLE ext_table (
      col1 VARCHAR2(10),
      col2 VARCHAR2(10)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY dump_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE CHARACTERSET 'UTF16'
        FIELDS TERMINATED BY ','
      )
      LOCATION ('dump.csv')
    )
    REJECT LIMIT UNLIMITED;
    http://download.oracle.com/docs/cd/E11882_01/server.112/e16536/et_params.htm#i1009499

  • How to import data from CSV file with columns separated by semicolon?

    I migrate database from MS SQL 2008 to ORACLE 11g
    I export data to CSV file from MS SQL
    then I try to import it to Oracle
    several tables goes fine using Import data option in the SQL Developer
    Standard CSV files with data separated by comma were imported.
    chars, date (with format string), and integer data are imported via import wizard without problems
    the problems came when I tried to import a table with non-integer numbers whose decimal part is separated by a comma, not a dot
    the comma is the standard separator for columns in a CSV file
    so I must change the standard separator to a semicolon
    but then the import wizard has problems recognizing the column data correctly, because it uses only the standard CSV comma separator :-/
    In SQL Developer 1.5.3, Tools -> Preferences -> Migration -> Data Move Options
    I change "End of Column Delimiter" to ; but it doesn't work
    Is it possible to change the standard column separator for the import data wizard in SQL Developer 1.5.3?
    Or maybe someone knows how to import data in SQL Developer 1.5.3 from a CSV when the CSV column separator is set to semicolon?

    A new preference has been added to customize the import delimiter in the main code line. This should be available as part of a future release.
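
    Until that preference ships, one workaround is to convert the file outside SQL Developer. A hedged Python sketch (the filenames are examples): the csv module quotes any field that itself contains a comma, so decimal values like 37,2 survive the switch to a comma delimiter.

    import csv

    with open("export_semicolon.csv", newline="") as src, \
         open("export_comma.csv", "w", newline="") as dst:
        writer = csv.writer(dst)
        for row in csv.reader(src, delimiter=";"):
            writer.writerow(row)  # fields containing commas get quoted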

  • Import Comments data and Dimension Members from csv file via Data Manager

    Dear Experts,
    I have two questions regarding the data manager.
    Q1.Is it possible to import "Comments" from the csv file via Data Manager?
    We'd like to import the amount with "Comments".
    My image of the csv file is like the one below:
    ACCOUNT,CATEGORY,TIME,ENTITY,INPUTCURRENCY,AMOUNT,COMMENTS
    1100000,ACTUAL,2010/06,LC,30000,This is comment
    Q2.Is it possible to import the dimension "members" from the csv file via Data Manager?
    We have a user-defined dimension named "Project"
    and would like to import the members, instead of maintaining them in BPC administration manually.
    I found an online help information which says "Import Master Data from a Data File Example",
    but I could not find any relevant sample package for this.
    (I tried to import the members by using "Import" package, but it failed...)
    reference: http://help.sap.com/saphelp_bpc75/helpdata/en/86/8b1bfc12c94fb0b585cca70d6f1b61/content.htm
    Thanks in advance for your help.
    Fumi

    Hi Fumi,
    In this case, I would suggest you create a customized SSIS package that fills in the "Comment<APP>" table according to the csv file you have. I do not know of any standard package that allows you to import comments the way you would like...
    Best Regards,
    Patrick

  • Performance issue with big CSV files as data source

    Hi,
    We are creating Crystal reports for a large banking corporation with CSV files as the data source. For some reports, we need to join 2 csv files. The problem we have met is that when the 2 csv files are very large (both >200 MB), performance is very bad and it takes an hour or so to refresh the data in the Crystal Reports designer. The same is the case for either CR 11.5 or CR 2008.
    And my question is, is there any way to improve performance in such situations? For example, can we create index on the csv files? If you have ever created reports connecting to CSV, your suggestions will be highly appreciated.
    Thanks,
    Ray

    Certainly a reasonable concern...
    The question at this point is: how are the reports going to be used and deployed once they are in production?
    I'd look at it from that direction.
    For example... They may be able to dump the data directly to another database on a separate server that would insulate the main enterprise server. This would allow the main server to run the necessary queries during off peak hours and would isolate any reporting activity to a "reporting database".
    This would also keep the data secure and encrypted (it would continue to enjoy the security provided by an RDBMS). Text & csv files can be copied, emailed, altered & deleted by anyone who sees them. Placing them in encrypted .zip folders prevents them from being read by external applications.
    <Hope you liked the sales pitch I wrote for you to give to the client... =^)
    If all else fails and you're stuck using the csv files, at least see if they can get it all on one file. Joining the 2 files is killing you performance wise... More so than using 1 massive file.
    Jason

  • How to search a .csv file for data using its timestamp, then import to labview

    Hi, I'm currently obtaining density, viscosity and temperature data from an instrument, adding a timestamp and writing it to a .csv file which I can view in Excel. This works fine (see attached code), but what I need to do now is search that csv file for data which was obtained at a certain time and import the temperature, density & viscosity values at that time back into LabVIEW to do some calculations with them, while the data acquisition process is still ongoing.
    I've found various examples of how to import an entire csv file into LabVIEW, but none on how to extract data at a specific time. Also, whenever I try to do anything with the .csv file while my data acquisition VI is running, I receive error messages (presumably because I'm trying to write to and import data from the .csv file simultaneously). Is there some way around this, maybe using case structures?
    If you need to know my skill level, I've been using Labview for a few weeks and prior to that have basically no experience of writing code, so any help would be great. Thanks!
    Solved!
    Go to Solution.
    Attachments:
    Lemis VDC-30 read registers MODBUS v5.vi (56 KB)

    It sounds as if you are going about this a little backwards, writing to a data file and then extracting from the file, but it's the weekend so I can't think of an improved way to do it at the moment.
    Searching for a specific time with those specific values is quite easy. If you want to select an arbitrary time, you can interpolate the values to find any value that you want. (This is where the contiguous measurement comes in: as you have readings at discrete times, you will have to interpolate if you want the 'measured value' at a time point that is not exactly one of your measured points.)
    If you can extract the TDMS time column and the T, D & V then simply thresholding and interpolating each of your array/data sets should allow readings at your desired times.
    Attachments:
    Interpolate.png (301 KB)
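
    The thresholding-and-interpolation idea translates directly if you ever need it outside LabVIEW. A rough Python sketch, assuming the log's first column is an ISO timestamp followed by temperature, density and viscosity (the layout and filename are assumptions):

    import csv
    from datetime import datetime

    def value_at(path, target, col=2):
        # Linearly interpolate column `col` (default: density) at `target`.
        rows = []
        with open(path, newline="") as f:
            for r in csv.reader(f):
                rows.append((datetime.fromisoformat(r[0]), float(r[col])))
        rows.sort()
        for (t0, v0), (t1, v1) in zip(rows, rows[1:]):
            if t0 <= target <= t1:
                if t0 == t1:  # duplicate timestamps: no span to interpolate
                    return v0
                return v0 + (target - t0) / (t1 - t0) * (v1 - v0)
        raise ValueError("target time outside the logged range")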

  • Problem importing a CSV file with forward slashes in a column

    I have an Excel csv file of a product database (containing about 6500 products) that contains product codes with forward slashes, such as 499/1, 499/3, 499/5.
    These are different sizes of a product and as such have different prices etc.
    When I import the file into Numbers these numbers appear as 499, 166 1/3, and 99 4/5 respectively.
    What seems to be happening is that Numbers interprets the forward slash (/) as a divide command and performs a calculation on the number on import, totally changing the value of the cell, so that it is impossible to look up the price related to a product because the product code no longer exists.
    Excel can import these files with no problems; why can't Numbers treat each cell as text and leave it alone on import?
    Is there any way round this, or do I have to revert to using Excel for the import of csv files?
    Thanks
    Steve

    I know I'm a bit(!) late (a year) coming to this party, but there is a simple solution, that worked for me, enclose the field in double quotes, and add a single quote before the number:
    Instead of
    499/1,"Super Widget 3",12.34
    do
    "'499/1","Super Widget 3",12.34
    In fact the single (unclosed) quote without double-quotes works as well:
    '499/1,"Super Widget 3",12.34
    I've always found it better to enclose strings in double quotes. This works when loading the file into Mac Numbers and should work with Excel too, if that helps. It opens with OpenOffice 4 on the Mac too, if you select "comma" in the "Separated by" checkbox.
    I can't remember where I picked this info up....
    Hope this helps someone, albeit late.
    Andy
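
    Andy's fix is easy to apply in bulk rather than by hand across 6500 rows. A small Python sketch (the filenames, and the assumption that the product code is the first column, are mine):

    import csv

    with open("products.csv", newline="") as src, \
         open("products_fixed.csv", "w", newline="") as dst:
        writer = csv.writer(dst, quoting=csv.QUOTE_ALL)
        for row in csv.reader(src):
            if not row:
                continue
            row[0] = "'" + row[0]  # force Numbers to treat 499/1 as text
            writer.writerow(row)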

  • How to import csv-file in Numbers 3.2.2.

    I have started using Numbers instead of Excel. I would like to import csv files from my bank, but when I open a csv file in Numbers, everything is imported into the same cell. I composed a test file in Pages, 01/08/2014,”text”,”more text”,”even more text”, exported it to a text file, and changed the extension from .txt to .csv. It did not help; everything was still in the same cell. What must be changed to import csv files successfully? I am using Numbers 3.2.2 on an iMac with a 2.8 GHz Intel Core i7 processor and 8 GB of 1067 MHz DDR3 memory, running OS X 10.9.4.
    Thanks, Joan Voormolen

    You can do this using Pages. Without using outside scripts or functions. The Pages Find/Replace function will let you change the delimiter on the data in your file.
    Open the file in Pages. Click Show Invisibles. (this will show you the delimiter used in the file)
    If you see a * as the delimiter, that is a space. Some data files are space delimited. This is a really poor way to delimit numerical data files.
    If you see a fat arrow pointing to the right, the file is tab delimited.
    Obviously, a comma is not a hidden character. Some files are comma delimited.
    Whatever else might have been used as a delimiter (for example, a semicolon is sometimes used) will be apparent.
    The delimiter should be something that is not used anywhere else in the "data" (text, numbers, etc.) you want to delimit. Numbers considers a comma a valid delimiter for files with the suffix .csv. It considers a tab a valid delimiter for files with the suffix .txt. It does not consider a space a valid delimiter with any file suffix. But some programs use odd delimiters (semicolon, colon, double spaces, etc.).
    Use the Find command, then Find/Replace, to create a delimiter Numbers recognizes. Let's say a semicolon was used as the delimiter. Enter the current delimiter (semicolon) into the Find box. Pages should highlight all instances of your entry. Enter a comma (to create a comma-delimited data file) in the Replace box. You should now see a comma as the delimiter.
    Important: don't forget that any other comma used in the file will also be considered a delimiter (a comma in 1,000, for example). So check the data. If you see a comma used another way, you will want to eliminate it BEFORE you do the "comma as delimiter" replacement. If you have 1,000, first do a find/replace with a comma as the find and nothing as the replace. THEN do the replacement of the semicolon.
    Now comes the "tricky" part, from what I could see. You want to save this new file with a suffix of .csv (export the file). Numbers will only open a comma-delimited file into separate columns if its suffix is .csv. Pages gives you only limited export options and puts the file suffix on for you automatically, and CSV is not one of the options!
    Choose Text. Pages will name the file .txt. Quit Pages. Go to the file on your desktop (or wherever you saved it). Change the file suffix from .txt to .csv.
    That's it. Open the file with Numbers. Numbers will create a separate column for everything between the commas.
    You can use this same method to alter your data file before you import it into Numbers. For example, one file I wanted to import had time=xxx . I only wanted the actual time, not the text attached to it, in my spreadsheet. I did a find/replace with "time=" as the find. A comma as the replace. Even though "time=xxx" is one "word", Pages identified the "time=" within the word to allow the replacement.
    Numbers does not provide a "choose delimiter" function when opening a file. Instead it automatically uses the standard delimiter based on the file suffix. CSV means Comma, so if the file is named .csv it will only look for and use a comma as the delimiter to put the data into separate columns. I believe .txt uses only a tab as the delimiter. In the above example you could find/replace to a Tab. Then Export to Text. And numbers will open the data into columns the way you want, without the extra step of renaming the file on your desktop.
    While some files use a second space (i.e. two in a row) as a delimiter, that's a nasty way to delimit. You always want a specific delimiter that is not used within the data elements.
    The above is for importing numerical data into separate columns. You could use the same method to manipulate a file that contains text. Let's say you had a file with the suffix .txt containing names and addresses: John Smith 246 Rose Road. You want the name in one column, the address in another. Look at all the spaces: which ones should be delimiters and which not? Are there any delimiters in the file at all?
    If you open it with Pages and choose Show Invisibles, you can see. You might see John Smith --> 246 Rose Road (the --> will look like a fat arrow in Pages). Numbers will open this file, IF it has .txt as the suffix, based on the tab, with the name in one column and the address in another.
    Or you might see John*Smith**246*Rose*Road. Even though the creator intended two spaces to be a delimiter, Numbers does not recognize that and will put everything into one column. The fix? In Pages, find/replace two spaces with a tab so there is a tab between name and address. Export as Text.
    Based on what you see (with Show Invisibles active) in Pages, you can use the Find/Replace function to create the specific delimiter you want (tab or comma). You can use that function to manipulate the file easily so the data you want shows up in separate columns. You may need to get clever to accomplish unique delimiters. You might even need to do two passes with Find/Replace.
    In the instance above, if there was only one space between each element (not two as a pseudo-delimiter), you could replace all spaces with a tab in Pages and export as Text. Numbers will open that file with a column for each word (one for John, one for Smith). Then "Merge" the two cells (columns) you want to put back together.
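
    The whole find/replace-then-rename dance can also be done in one scripted step. A hedged Python sketch that reads a semicolon-delimited bank export (the delimiter and filenames are examples) and writes the tab-delimited .txt that Numbers opens straight into columns:

    import csv

    with open("bank_export.csv", newline="") as src, \
         open("bank_export.txt", "w", newline="") as dst:
        writer = csv.writer(dst, delimiter="\t")
        for row in csv.reader(src, delimiter=";"):
            writer.writerow(row)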

  • Strange Behavior with Data Merge (CS4) - Advice?

    This is the most bizarre behavior, and I may have to chalk it up to a full moon or something, since I can't reproduce it and cannot find any hints as to why it happened. This is my last attempt at figuring out a solution... maybe someone has experienced this??? Anyway, here's what happened...
    I do a data merge for addressing envelopes. Been using the InDesign data merge feature for a couple of years now, no problems.
    My address list is approximately 1200. It's in a .csv file with the following fields: fname, lname, company, address1, address2, city, state, zip
    For my last mailing, approximately 300 addresses were merged wrong. Here's the crazy thing it did: instead of using the address1 field for the current record, it used the address1 field from the NEXT record, and ONLY the address1 field.
    Here's more bizarre-ness. The NEXT record was merged correctly.
    For example:
    Record 1 (address1 should be 555 Cherry Lane, Apt 5)
    John Smith
    Smith Company
    123 Main Street, Apt 5
    Small Town, CA 12345
    Record 2 (John Smith's address used THIS address1, but NOT address2 or any other field! Jane Doe's address is correct)
    Jane Doe
    Doe Company
    123 Main Street, Suite 300
    Big Town, TX 56789
    I've tested. I've looked for commonality in the records: where each one is placed, whether there is something unusual about its format. I come up empty. The incorrect data seems randomly interspersed.
    One thing that I have to do is break up the actual merge. Either my computer's memory or InDesign can't handle 1200 merge records, so I merge approximately 200 records at a time (I make 6 print files). I don't know if this has anything to do with it (like I said, the incorrect data is interspersed all throughout), but I thought I'd mention it.
    Any advice is appreciated. I'm hesitant to use it again until I can understand why this happened.
    thanks much,
    Kia

    CS was sometimes flaky in this way with Data Merge.
    Are you updated to the latest patch? Most issues were resolved at some point.
    That said, I recall another user with this problem, and it was never resolved for his particular file. I examined the data file and couldn't find any issues, but it behaved identically on my system to the way it did on his, so I can only conclude there was still some bug, or something about the data file that we were unable to detect. I seem to recall that the same data file merged correctly in CS3, but I don't recall whether I kept it long enough to test in CS5. I've not had any issues with my own files in any version.
    Open the file in a plain text editor and look for any odd characters (I'd start the search where the merge first fails). Also do a Save As from the text editor to make sure there is no possibility of formatting from your spread sheet having found its way in.

  • How to get hebrew characters to work with data merge?

    I'm trying to work with Data Merge with Hebrew characters and get gibberish in the panel, on merging, and on export.
    I've tried changing the CSV file to Unicode and changing the language settings, but it still doesn't work.
    I've worked with Data Merge before in English, and it worked perfectly.
    Any ideas? Is this a bug? A software constraint?

    Try this:
    Save your file as a UTF-16 BE (Big Endian) file.
    Import it in ID with Show Import Options checked. I'm on a PC, but choose the same options regardless.
    It should then come in correctly in ID.
    Apologies to Farsi-speaking people everywhere. I pulled some text out of a Farsi text file to make up this tab-delimited merge file. I don't speak it, so it is likely severely nonsensical.
    The text editor being used here (first screen shot) is the open-source Notepad++. I am also not using the ME version of ID.
    Take care, Mike

  • 2.5 GB CSV file as data source for Crystal report

    Hi Experts,
    I was asked to create a Crystal report using a CSV file that is pretty huge (2.4 GB) as the data source. Could you help me with any doc that explains the steps, mainly the data connectivity?
    The objective is to create the Crystal report using that csv file as the data source, save the report as .rpt with the data, and send the results to the customer to be read with Crystal Reports Viewer, or save the results to PDF.
    Please help and suggest the steps, as I am new to Crystal Reports and to CSV as a source.
    BR, Nanda Kishore

    Nanda,
    The issue of some records having a comma and some a semicolon will need to be resolved before you can do an import. Assuming that there are no semicolons in any of the text values of the report, you could do a "Find & Replace" to convert the semicolons to commas.
    If find & replace isn't an option, you'll need to get the files separately.
    I've never used the Import Export Wizard myself. I've always used the BULK INSERT command.
    It would look something like this...
    BULK INSERT SQLServerTableName
    FROM 'c:\My_CSV_File.csv'
    WITH (FIELDTERMINATOR = ',')
    This of course implies that your table has the same columns, in the same order, as the csv file, and that each column is of the correct data type to accept the incoming data.
    If you continue to have issues getting your data into SQL Server Express, please post in one of these two forums
    [Transact-SQL|http://social.msdn.microsoft.com/Forums/en-US/transactsql/threads]
    [SQL Server Express|http://social.msdn.microsoft.com/Forums/en-US/sqlexpress/threads]
    The Transact-SQL forum has some VERY knowledgeable people (including MVPs and book authors) posting answers.
    I've never posted to the SQL Server Express forum, but I'm sure they can troubleshoot your issues with the Import Export Wizard.
    If you post in one of them, please copy the post link back to this thread so I can continue to help.
    Jason

  • Import CSV file: column count mismatch by always 1

    Hello,
    I am trying to import data from a CSV file into a HANA db table with an .hdbti file.
    I have a large structure of 332 fields in the table. When I activate the files to start the import, I always get the following message:
    CSV table column count mismatch. The CSV-file: Test.data:data.csv does not match to target table: Test.data::test. The number of columns (333) in csv record 1 is higher than the number of columns (332) in the table.
    But even if I delete the last column in the CSV file, the message stays the same. I also tried adding an additional field in the table on HANA, but then the column numbers in the error message just increase by 1. Then the message is: The number of columns (334) in csv record 1 is higher than the number of columns (333) in the table.
    So it seems that, whatever I do, the system always thinks the csv file has one column too many.
    With another, smaller structure of 5 fields, the same import method worked.
    Do you have an idea what the problem could be?
    Regards,
    Michael

    Hi Michael,
    It may be a delimiter problem or something similar. Can you paste the control file content here so that I can check it out? The issue may be in your control file.
    Also, if you can show the contents of your *.err error log file, we can get a broader picture.
    Regards,
    Safiyu
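
    One quick way to pin this down before touching the .hdbti file is to count the fields per record yourself; a trailing delimiter at the end of each line is a common way to end up with exactly one phantom column. A minimal Python sketch (the filename and comma delimiter are assumptions):

    import csv
    from collections import Counter

    counts = Counter()
    with open("data.csv", newline="") as f:
        for row in csv.reader(f):
            counts[len(row)] += 1
    print(counts)
    # e.g. Counter({333: 1200}) would mean every record really has
    # 333 fields, pointing at a trailing separator on each line.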
