Can GhostScript handle UTF-8 encoded files?

Is there a way to tell GhostScript that its input file is encoded in UTF-8?

I don't know much about this, so this is more curiosity than anything, but wouldn't that usually be handled by a process run prior to gs? That is, by whatever produces the PostScript which is to be passed to gs? In TeX, for example, the flow would usually go input -> TeX/LaTeX/whatever -> DVI output -> DVI-to-PS processor -> PS output -> gs -> final output (dependent on device). It's the initial processor which usually needs to know whether the input is UTF-8 encoded or not.
- cfr

Similar Messages

  • CONVERSION FROM ANSI ENCODED FILE TO UTF-8 ENCODED FILE

    Hi All,
    I have some issues with conversion of an ANSI encoded file to a UTF-8 encoded file. Let me tell you in detail:
    I have installed Language Support for Thai on my operating system.
    Now, when I open Notepad, add Thai characters to a file and save it with ANSI encoding, it saves perfectly and I am also able to see the characters on opening the file again.
    This file needs to be read by my application, stored in a database, and the Thai characters displayed on a JSP after fetching the data from the database. Currently the JSP shows junk characters, the reason being that my database (a UTF-8 compliant database) has junk data; it has junk data because my application is not able to read the file correctly.
    If I save the file with UTF-8 encoding it works fine, but my business requirement is such that the file is system generated and by default it is encoded in ANSI format, so I need to convert the encoding from ANSI to UTF-8. Can any of you guide me on how to do this conversion?
    Regards
    Gaurav Nigam

    Guessing the encoding of a text file by examining its contents is tricky at best, and should only be done as a last resort. If the file is auto-generated, I would first try reading it using the system default encoding. That's what you're doing whenever you read a file with a FileReader. If that doesn't work, try using an InputStreamReader and specifying a Thai encoding like TIS-620 or cp838 (I don't really know anything about Thai encodings; I just picked those out of a quick Google search). Once you've read the file correctly, you can write the text to a new file using an OutputStreamWriter and specifying UTF-8 as the encoding. It shouldn't really be necessary to transcode files like this, but without knowing a lot more about your situation, that's all I can suggest.
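    For what it's worth, here is a minimal sketch of that transcoding step (not from the original thread; the file names and the TIS-620 guess are placeholders, so substitute whatever source encoding turns out to be correct):
    import java.io.*;
    public class TranscodeToUtf8 {
        public static void main(String[] args) throws IOException {
            // Guessing TIS-620 as the source encoding; adjust if the system
            // default turns out to be the right one.
            BufferedReader in = new BufferedReader(new InputStreamReader(
                    new FileInputStream("generated_ansi.txt"), "TIS-620"));
            Writer out = new BufferedWriter(new OutputStreamWriter(
                    new FileOutputStream("converted_utf8.txt"), "UTF-8"));
            int ch;
            while ((ch = in.read()) != -1) {
                out.write(ch); // characters are re-encoded as UTF-8 on write
            }
            in.close();
            out.close();
        }
    }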
    As for native2ascii, it isn't for encoding conversions. All it does is replace each non-ASCII character with its six-character Unicode escape, so "voilá" becomes "voil\u00e1". In other words, it avoids the problem of character encodings by converting the file's contents to a form that can be stored as ASCII. It's mainly used for converting property or resource files to a form that can be read by the Properties and ResourceBundle classes.

  • Can't use UTF-16 encoding with XML Parser for Java v2.

    This is my XML Document:
    <?xml version="1.0" encoding="UTF-16" ?>
    <Content>
    <Title>Documento de Prueba de gestión de contenidos.</Title>
    <Creator>Roberto Pérez Lita</Creator>
    </Content>
    This is the way in which I parse the document:
    DOMParser parser=new DOMParser();
    parser.setPreserveWhitespace(true);
    parser.setErrorStream(System.err);
    parser.setValidationMode(false);
    parser.showWarnings(true);
    parser.parse(
    new FileInputStream(new File("PruebaA3Ingles.xml")));
    I've got this error:
    XML-0231 : (Error) Encoding 'UTF-16' is not currently supported.
    I am using the XML Parser for Java v2_0_2_5 and I am a little
    confused because the documentation says that the UTF-16 encoding
    is supported in this version of the Parser.
    Does anybody know how I can parse documents containing Spanish
    accents?
    Thanks in advance.
    Roberto Pérez.

    Oracle just uploaded a new release of the V2 parser. It should
    support UTF-16.
    Yet other utilities still have some problems with UTF-16 encoding.
    Seems we just have to wait this one out.
    BTW, I'm trying to use Japanese. We also have some problems with
    JServer.
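    If the parser build you have still rejects the UTF-16 label, one possible workaround (just a sketch, assuming this parser version accepts an org.xml.sax.InputSource wrapping a Reader) is to decode the bytes yourself, so the parser only ever sees characters:
    import java.io.*;
    import org.xml.sax.InputSource;
    import oracle.xml.parser.v2.DOMParser;
    public class ParseUtf16Doc {
        public static void main(String[] args) throws Exception {
            DOMParser parser = new DOMParser();
            parser.setPreserveWhitespace(true);
            // "UTF-16" is BOM-aware; the parser receives decoded characters,
            // so it never has to support the UTF-16 byte encoding itself.
            Reader reader = new InputStreamReader(
                    new FileInputStream("PruebaA3Ingles.xml"), "UTF-16");
            parser.parse(new InputSource(reader));
        }
    }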

  • [SOLVED] Can't save UTF-8 encoded source file in IDE

    I had a problem where I could not save a C++ source file from Anjuta using the gtksourceview plugin, but could using the scintilla plugin. I would get an error message with "invalid byte sequence in conversion input" in the text. Eclipse CDT also would not save a modified file, with a similar error message.
    The fix was to uncomment the proper lines in the file /etc/locale.gen for my locale and run locale-gen, as described in the Arch Linux wiki article "Configuring locales."
    I did not find any help for this problem using internet searches, so I thought I would document it here, just in case someone else fails to set up their locale and runs into the same trouble.

    I think so (haven't checked), but it is a really simple test XML which is not really error prone.
    But the problem is a different one, because I also just tried to read a txt file with some Japanese characters into an NSString using initWithContentsOfURL.
    When I print the string in the console, I only get messed-up characters (the Latin characters next to the Japanese are displayed fine).
    It is a general problem of reading a UTF-8 file from a URL.
    I spent the whole of last night googling for something helpful but couldn't find anything. Now I'm tired at work.
    Thomas

  • UTF-8 csv file generated by Java can't be displayed correctly on Excel 2000

    Hi, I have a question about files generated by Java in UTF-8 format. I have a file named "foo.csv" generated by my Java code like this:
    writer = new OutputStreamWriter(new FileOutputStream("foo.csv"), "UTF-8");
    Then I write some data loaded from an MS SQL 2000 server to this file. Some of the data contains German or French characters. After the file is generated, if I open it in Notepad, all the characters in the file look good. The file is in UTF-8 format. The only problem is that when I open this csv file in MS Excel 2000, all the non-English characters display incorrectly. For example:
    "VERKTYGSTEKNIK I VÄXJÖ AB" displays correctly in Notepad, but it displays as "VERKTYGSTEKNIK I VÃ„XJÃ– AB" in Excel. I don't know the reason. If I don't specify the encoding in the writer, then the generated file looks fine in Excel (even the non-English characters); in that case the file is generated in ANSI format. But my client wants the file in UTF-8 format and also wants to view it in Excel.
    Why can't the UTF-8 file generated by Java be displayed correctly in Excel 2000?
    My environment is:
    OS: Window 2000 Server English Version
    JDK: Sun jdk1.3.1
    Thanks

    - Does MS Excel actually support UTF-8?
    Yes. I believe we installed some international add-on which is not in the default installation. Anyway, other UTF-8 or UTF-16 files can be opened and viewed in Excel without any problem.
    - Have you verified that the file is viewable as a UTF-8 encoded file?
    I think so. If I open it in Notepad and choose "Save as", the file type shown is UTF-8.
    - Try opening the file in a program you are confident
    supports UTF-8 - e.g. Mozilla...
    I will try that.
    - Check that your UTF-8 encoded file has a UTF-8 identifier (0xFEFF?)
    as the first character.
    The Unicode-16 (LE or BE) files I got from the internet always have two bytes at the front (0xFEFF or 0xFFFE). My UTF-8 file generated by Java doesn't have that. But should a UTF-8 file also have this kind of special bytes at the front? I manually added bytes like these at the front of my file using UltraEdit and opened it in Excel 2000, but it didn't help.
    - Try using another spreadsheet program that supports UTF-8.
    Do you know any other spreadsheet program that supports csv files and UTF-8?
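    For the record, what usually makes Excel detect a UTF-8 csv is the three-byte UTF-8 signature EF BB BF at the very start of the file (not the two-byte 0xFEFF/0xFFFE forms, which mark UTF-16), and Java's OutputStreamWriter never writes it on its own. A minimal sketch (the file name and sample row are made up; I can't promise Excel 2000 itself honors the signature, though later versions do):
    import java.io.*;
    public class Utf8CsvWithBom {
        public static void main(String[] args) throws IOException {
            FileOutputStream fos = new FileOutputStream("foo.csv");
            // Write the UTF-8 BOM (EF BB BF) before attaching the writer.
            fos.write(0xEF);
            fos.write(0xBB);
            fos.write(0xBF);
            Writer writer = new OutputStreamWriter(fos, "UTF-8");
            writer.write("VERKTYGSTEKNIK I VÄXJÖ AB\r\n"); // sample non-ASCII row
            writer.close();
        }
    }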

  • How to handle the layout of file in OWB?

    Hi all,
    I am facing one problem, which is this:
    my file arrives monthly
    and I need to load it into the datamart.
    But the problem is the layout is not always the same.
    At the end of the file one field may be added or one field may be deleted.
    The procedure I am following to load data into the datamart is: "First I create an external table, and then load from the external table into a table."
    How can I handle this type of file in OWB?
    Please give me some inputs.
    Thanks in advance
    Srinivas

    hi,
    Define the external table in the following way:
    CREATE TABLE sample_extbl
    ( col1 NUMBER
    , col2 NUMBER
    , col3 NUMBER
    )
    ORGANIZATION EXTERNAL
    (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY LOG_DIR
      ACCESS PARAMETERS
      (
        RECORDS DELIMITED BY NEWLINE
        nobadfile
        nologfile
        FIELDS TERMINATED BY ',' LDRTRIM
        MISSING FIELD VALUES ARE NULL
      )
      LOCATION (LOG_DIR:'a.csv')
    )
    REJECT LIMIT UNLIMITED;
    Case 1:
    Now, for example if the file a.csv contains the data as follows.
    col1,col2,col3,col4,col5
    abc,28,xyz,mno,1000
    abc2,38,xyz,mno,2000
    abc3,28,xyz,mno,3000
    Then, if we query the external table "sample_extbl"
    select * from sample_extbl
    Then you get the data as follows
    col1 col2 col3
    abc 28 xyz
    abc2 38 xyz
    abc3 28 xyz
    Case 2:
    Now, for example if the file a.csv contains the data as follows.
    col1,col2,col3,col4,col5
    abc,28,xyz,mno,1000
    abc2,38
    abc3,28,xyz,mno
    Then, if we query the external table "sample_extbl"
    select * from sample_extbl
    Then you get the data as follows
    col1 col2 col3
    abc 28 xyz
    abc2 38
    abc3 28 xyz
    Regards,
    Gowtham Sen.

  • Handling FORM - input type="File"...

    How can I handle the <input type="FILE"...> in a servlet if I'd like to save the uploaded file to the server?
    HELP !
    Thanks...

    Take a quick look at servletsuite.com. They have an upload servlet.
    Andreas

  • Photoshop CS5 can't handle lots of text..?

    Recently migrated to CS5.
    Was running Photoshop CS5 just fine for a week, working mainly on images... but now that I need to work on something that has more text in it (like a graphical CV) the screen isn't reflecting the changes I'm making.
    I can click inside a text box, but I'll never see the cursor. I can type, but I'll never see what I'm typing. What was there before I clicked the text box is still there. It's like the viewport's frozen.
    However, if I exit the text box, zoom out and zoom in... I see the changes I made. Ditto if I move the layer around.
    There's something wrong with the viewport refreshing. I even turned off OpenGL, reloaded the whole thing... and it'll only work properly for a minute or two before nothing I do in this document gets immediately displayed. I always have to move the layer around, or do the zoom trick, to see what I did.
    What's even more peculiar is that this file is only 5-6 megs in size. I've worked with 6 GB (that's right, GB) files in the past week without any problems, because the text was minimal. But the program can't handle a peanut-sized file if it's got a lot of text in it. It's the weirdest thing.
    Has anyone else encountered this problem, and if so, what have you done to remedy it?
    Thanks.

    Also, does this mean I can never have font previews turned back on? Or
    is this a temporary solution until you guys fix this bug with an
    eventual update?
    Unless you can identify which of your corrupt fonts is causing the problem, we won't be able to fix it.
    It might be a bug in our code, or in Windows.  But it is almost certainly going to be due to a corrupt font file.
    We haven't seen that particular problem, or we would have already fixed it.
    So we need to know which of your fonts is corrupt and causing the problem.
    Font previews are created before you show them, in the background -- a corrupt font file can cause problems when creating the previews and slow things down for a while.

  • Encoding Problem - can't read UTF-8 file correctly

    Windows XP, JDK 7, same with JDK 6
    I can't read a UTF-8 file correctly:
    Content of File (utf-8, thai string):
    เม็ดเลือดขาว
    When opened in Editor and copy pasted to JTextField, characters are displayed correctly:
    String text = jtf.getText();
    text.getBytes("utf-8");
    -32 -71 -128 -32 -72 -95 -32 -71 -121 -32 -72 -108 -32 -71 -128 -32 -72 -91 -32 -72 -73 -32 -72 -83 -32 -72 -108 -32 -72 -126 -32 -72 -78 -32 -72 -89
    Read file with FileReader/BufferedReader:
    line = br.readLine();
    buffs = line.getBytes("utf-8"); //get bytes with UTF-8 encoding
    -61 -65 -61 -66 32 0 64 14 33 14 71 14 20 14 64 14 37 14 55 14 45 14 20 14 2 14 50 14 39 14
    buffs = line.getBytes(); // get bytes with default encoding
    -1 -2 32 0 64 14 33 14 71 14 20 14 64 14 37 14 55 14 45 14 20 14 2 14 50 14 39 14
    Read file with:
    FileInputStream fis...
    InputStreamReader isr = new InputStreamReader(fis,"utf-8");
    BufferedReader brx = new BufferedReader(isr);
    line = brx.readLine();
    buffs = line.getBytes("utf-8");
    -17 -65 -67 -17 -65 -67 32 0 64 14 33 14 71 14 20 14 64 14 37 14 55 14 45 14 20 14 2 14 50 14 39 14
    buffs = line.getBytes();
    63 63 32 0 64 14 33 14 71 14 20 14 64 14 37 14 55 14 45 14 20 14 2 14 50 14 39 14
    Does anybody have an idea? The file seems to be UTF-8 encoded. What could be wrong here?

    akeiser wrote:
    : When opened in Editor and copy pasted to JTextField, characters are displayed correctly:
    : text.getBytes("utf-8");
    : -32 -71 -128 -32 -72 -95 -32 -71 -121 -32 -72 -108 -32 -71 -128 -32 -72 -91 -32 -72 -73 -32 -72 -83 -32 -72 -108 -32 -72 -126 -32 -72 -78 -32 -72 -89
    These values are the bytes of your original string "เม็ดเลือดขาว" UTF-8 encoded with no BOM (Byte Order Mark) prefix.
    : Read file with FileReader/BufferedReader:
    : buffs = line.getBytes(); // get bytes with default encoding
    : -1 -2 32 0 64 14 33 14 71 14 20 14 64 14 37 14 55 14 45 14 20 14 2 14 50 14 39 14
    These values are the bytes of your original string UTF-16LE encoded with a UTF-16LE BOM prefix (0xFF 0xFE) - in other words, the raw bytes of the file itself.
    This means that there is nothing wrong with the code (the String has been read correctly); the file itself is UTF-16LE encoded, not UTF-8.
    Edited by: sabre150 on Aug 1, 2008 5:48 PM
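    In other words, the fix on the reading side is to decode the file as UTF-16 rather than UTF-8. A minimal sketch (the file name is hypothetical; the "UTF-16" charset consumes the BOM and picks the right byte order automatically):
    import java.io.*;
    public class ReadUtf16File {
        public static void main(String[] args) throws IOException {
            BufferedReader br = new BufferedReader(new InputStreamReader(
                    new FileInputStream("thai.txt"), "UTF-16"));
            String line = br.readLine(); // decoded correctly, BOM stripped
            System.out.println(line);
            br.close();
        }
    }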

  • How to read UTF-8 encoded text file randomly?

    I am trying to read a text file which has been encoded in UTF-8. The problem is that I need to access the file randomly. RandomAccessFile is a low-level class and there seems to be no way to wrap it in an InputStreamReader so that UTF-8 decoding can be done on the fly. Is there any easy way to do that? Below is a simplified version of my program.
    import java.io.*;
    public class Test {
        public Test(String filename) {
            try {
                RandomAccessFile rafTemIn = new RandomAccessFile(new File(filename), "r");
                while (true) {
                    char chr = rafTemIn.readChar();
                    System.err.println(chr);
                }
            } catch (EOFException e) {
                System.err.println("File read.");
            } catch (IOException e) {
                System.err.println("File input error");
            }
        }
        public static void main(String[] args) {
            Test t = new Test("template.idx");
        }
    }

    The file that I am going to read could be a few hundred MBs or even GBs. Hence, I will index interesting items in the file. The index file contains the keyword and the byte offset in the file. So I will need to seek to an arbitrary byte and read from there. The file could be UTF-8 encoded XML or UTF-8 encoded plain text.
    I would also like to add that in the sample program above I am reading the file sequentially. The class concerned has another method which actually does the reading randomly. In case it helps, I am pasting a simplified version of the code again, this time including the said method (a sketch of an alternative approach follows the code below).
    import java.io.*;
    public class Test {
        long bloc;
        long eloc;
        RandomAccessFile rafTemIn;
        public Test(String filename) {
            bloc = 0L;
            eloc = 0L;
            try {
                rafTemIn = new RandomAccessFile(new File(filename), "r");
                while (true) {
                    char chr = rafTemIn.readChar();
                    System.err.println(chr);
                }
            } catch (EOFException e) {
                System.err.println("File read.");
            } catch (IOException e) {
                System.err.println("File input error");
            }
        }
        public String getVal(String templateName) {
            String stemval = null;
            try {
                rafTemIn.seek(bloc); // bloc is the long offset to begin reading from. It changes.
                byte[] b = new byte[(int) (eloc - bloc + 1L)];
                rafTemIn.read(b, 0, (int) (eloc - bloc + 1L));
                stemval = new String(b, "UTF-8");
            } catch (IOException eio) {
                System.err.println("Template dump file IO error.");
            }
            return stemval;
        }
        public static void main(String[] args) {
            Test t = new Test("template.idx");
            System.out.println(t.getVal("wikipedia"));
        }
    }
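    One way to get on-the-fly UTF-8 decoding on top of RandomAccessFile (a sketch, not from the thread: it leans on java.nio.channels.Channels, and it assumes the index offsets always land on UTF-8 character boundaries, never in the middle of a multi-byte sequence):
    import java.io.*;
    import java.nio.channels.Channels;
    public class RandomUtf8 {
        // Decode a known byte range, as getVal() above already does.
        public static String readRange(RandomAccessFile raf, long begin, long end)
                throws IOException {
            raf.seek(begin);
            byte[] b = new byte[(int) (end - begin + 1L)];
            raf.readFully(b); // unlike read(), this fills the whole array
            return new String(b, "UTF-8");
        }
        // Or stream characters from an offset: wrap the file's channel in a
        // Reader so UTF-8 decoding happens on the fly.
        public static BufferedReader readerAt(RandomAccessFile raf, long offset)
                throws IOException {
            raf.seek(offset);
            return new BufferedReader(new InputStreamReader(
                    Channels.newInputStream(raf.getChannel()), "UTF-8"));
        }
    }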

  • Export SQL View to Flat File with UTF-8 Encoding

    I've setup a package in SSIS to export a SQL view to a flat file and it's working fine.  I now need to make that flat file UTF-8 encoded.  The package executes but still shows the files as ANSI encoded.
    My package consists of a Source (SQL View) -> Derived Column (casts the fields to DT_WSTR) -> Destination Flat File (Set to output UTF-8 file).
    I don't get any errors to help me troubleshoot further.  I'm running SQL Server 2005 SP2.

    Unless there is a Byte-Order-Marker (BOM - hex file prefix: EF BB BF) at the beginning of the file, and unless your data contains non-ASCII characters, I'm unsure there is a technical difference in the files, Paul.
    That is, even if the file is "encoded" UTF-8, if your data is only ASCII values (decimal values 0-127, hex 00-7F), UTF-8 doesn't really serve a purpose over ANSI encoding.  Now if you're looking for UTF-8 with specifically the BOM included, and your data is all standard ASCII, the Flat File Connection Manager can't do that, it seems.
    What the flat file connection manager is doing correctly though, is encoding values that are over decimal 127/hex 7F in UTF-8 when the encoding of the connection manager is set to 65001 (UTF-8).
    Example:
    Input data built with a script component as a source (code at the bottom of this post) and with only one WSTR output column hooked to a flat file destination component:
    a string containing only decimal character value 225 (á, LATIN SMALL LETTER A WITH ACUTE)
    Encoding set to ANSI 1252 looks like:
    E1 0D 0A (which is the ANSI encoding of the decimal character value 225 (E1) and a CR-LF (0D 0A))
    Encoding set to UTF-8 65001 looks like:
    C3 A1 0D 0A (which is the UTF-8 encoding of the decimal character value 225 (C3 A1) and a CR-LF (0D 0A))
    Note that for values over decimal 127, UTF-8 takes at least two bytes and up to four for the remaining values available.
    So, I'm comfortable now, after sitting down and going through this, that the flat file connection manager is working correctly, unless you need a BOM.
    Imports System
    Imports System.Data
    Imports System.Math
    Imports Microsoft.SqlServer.Dts.Pipeline.Wrapper
    Imports Microsoft.SqlServer.Dts.Runtime.Wrapper

    Public Class ScriptMain
        Inherits UserComponent

        Public Overrides Sub CreateNewOutputRows()
            Output0Buffer.AddRow()
            Output0Buffer.col1 = ChrW(225)
        End Sub

    End Class
    Phil

  • Can't drag and drop Quicktime files into Adobe Media Encoder CS6

    For some reason I can't drag Quicktime wrapped files into Media Encoder.  I get this error message:
    "The file [FILE PATH] could not be imported. Could not read from source. Please check the setting and try again."
    I have the latest version of Quicktime installed on my system and the problem files open in QT Player without issue.  Has anyone encountered this before or know of a quick fix?
    I'm running CS6 on a Windows 7 64 bit system.

    What's the path to the file? Is it on an external drive? A network drive? A mapped drive? Can you try copying the file to your desktop and dragging it from there?
    It's on an external e-sata drive. No problem accessing the file from within Premiere.  Just can't drop it into Encoder. Doesn't work if I drag to desktop and try from there either.
    Are you dragging it in from Windows Explorer? If not, then what app are you dragging from?
    Yes, from Explorer.
    Are you dragging the actual clip or a shortcut to it?
    Actual clip.
    Have you tried importing via File > Add Source...? [Double-clicking a blank spot in the Queue gets you to the same Open dialog.]
    Yep, tried that.  No luck.  Double clicking and adding didn't work either.
    Definitely only seems to be QT files.  Have tried H264 files from 7D camera and ProRes files.  No luck.  Files that are MP4 work fine.

  • How to write csv or txt file through utl_file with UTF-8 Encoding

    Hi All,
    I need your help writing data from the DB to a csv or txt file with UTF-8 encoding through utl_file.
    Database character set: AL32UTF8
    Database version: 10g
    All the columns in the DB are of varchar2 type.
    Please let me know if there is any way of doing it.

    What was wrong with the info provided in the link(s) given?
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/functions027.htm#SQLRF00620

  • How to save a UTF-8 encoded text file ?

    hi People
    I have a little script which reads the source text from a layer and saves it to a .txt file. This is on a Mac and all was good until recently when I tried opening the .txt file on a PC in Notepad and found my ˚ degree symbols all whack.
    Resaving the .txt file in TextEdit as Unicode (UTF-8) encoding solved the problem, now opens fine in Notepad.
    But ideally I'd like the script to output the .txt as UTF-8 in the first place. It's currently Western (Mac OS Roman). I've tried adding in myfile.encoding = "UTF8" but the resulting file is still Western (and the special characters have wigged out again).
    any help greatly appreciated../daniel
    var theComp = app.project.activeItem;
    var dataRO = theComp.layer("dataRO").sourceText;
    // prompt user to save file
    var theFile = new File("~/Desktop/" + theComp.name + "_output.txt");
    theFile = theFile.saveDlg("Save an ASCII export file.");
    if (theFile != null) {          // check user didn't cancel dialog
        theFile.lineFeed = "windows";
        //theFile.encoding = "UTF8";
        theFile.open("w", "TEXT", "????");
        theFile.writeln("move details:");
        theFile.writeln(dataRO.value.toString());
        theFile.close();
    }

    Hi,
    Got it. It seems the UTF-8 standard uses two-byte (and longer) encodings for accents and special characters.
    I found some info with some code at http://ivoronline.com/Coding/Theory/Tutorials/Encoding%20-%20Text%20-%20UTF%208.php
    However there were some errors, so I fixed them. (I didn't test the 3- and 4-byte branches, though.)
    So here is the code.
    function convertCharToUTF(character){
        var utfBytes = "";
        var c = character.charCodeAt(0); // a UTF-16 code unit, so supplementary
                                         // characters would need surrogate handling
        if (c < 0x80) {
            utfBytes = String.fromCharCode(c);
        }
        else if (c < 0x800) {
            utfBytes = String.fromCharCode(0xC0 | c >> 6);
            utfBytes += String.fromCharCode(0x80 | c & 0x3F);
        }
        else if (c < 0x10000) {
            utfBytes = String.fromCharCode(0xE0 | c >> 12);
            utfBytes += String.fromCharCode(0x80 | c >> 6 & 0x3F);
            utfBytes += String.fromCharCode(0x80 | c & 0x3F);
        }
        else if (c < 0x200000) {
            utfBytes = String.fromCharCode(0xF0 | c >> 18);
            utfBytes += String.fromCharCode(0x80 | c >> 12 & 0x3F);
            utfBytes += String.fromCharCode(0x80 | c >> 6 & 0x3F);
            utfBytes += String.fromCharCode(0x80 | c & 0x3F);
        }
        return utfBytes;
    }
    function convertStringToUTF(stringToConvert){
        var utfString = "";
        for (var i = 0; i < stringToConvert.length; i++){
            utfString += convertCharToUTF(stringToConvert.charAt(i));
        }
        return utfString;
    }
    var theFile = new File("~/Desktop/_output.txt");
    theFile.open("w", "TEXT");
    theFile.encoding = "BINARY";
    theFile.lineFeed = "Unix";
    // write the UTF-8 BOM first: EF BB BF
    theFile.write(String.fromCharCode(0xEF) + String.fromCharCode(0xBB) + String.fromCharCode(0xBF));
    theFile.write(convertStringToUTF("Your stuff éàçËôù"));
    theFile.close();

  • External Table which can handle appending multiple csv files dynamic

    I need an external table which can handle appending multiple csv files' values.
    But the problem I am having is that the number of csv files is not fixed.
    I can have between 2 and 6-7 files, with the current date as a suffix. It will be like my_file1_aug_08_1.csv, my_file1_aug_08_2.csv, my_file1_aug_08_3.csv and so on.
    I could do the following hardcoding if I knew the number of files, but unfortunately the number is not fixed, so I need to do something dynamic with a wildcard search of the file pattern.
    CREATE TABLE my_et_tbl
    (
      my_field1 varchar2(4000),
      my_field2 varchar2(4000)
    )
    ORGANIZATION EXTERNAL
      (  TYPE ORACLE_LOADER
         DEFAULT DIRECTORY my_et_dir
         ACCESS PARAMETERS
           ( RECORDS DELIMITED BY NEWLINE
            FIELDS TERMINATED BY ','
            MISSING FIELD VALUES ARE NULL  )
         LOCATION (UTL_DIR:'my_file2_5_aug_08.csv','my_file2_5_aug_08.csv')
      )
    REJECT LIMIT UNLIMITED
    NOPARALLEL
    NOMONITORING;
    Please advise me with your ideas. Thanks.
    Joshua..

    Well, you could do it dynamically by constructing the location value:
    SQL> CREATE TABLE emp_load
      2      (
      3       employee_number      CHAR(5),
      4       employee_dob         CHAR(20),
      5       employee_last_name   CHAR(20),
      6       employee_first_name  CHAR(15),
      7       employee_middle_name CHAR(15),
      8       employee_hire_date   DATE
      9      )
    10    ORGANIZATION EXTERNAL
    11      (
    12       TYPE ORACLE_LOADER
    13       DEFAULT DIRECTORY tmp
    14       ACCESS PARAMETERS
    15         (
    16          RECORDS DELIMITED BY NEWLINE
    17          FIELDS (
    18                  employee_number      CHAR(2),
    19                  employee_dob         CHAR(20),
    20                  employee_last_name   CHAR(18),
    21                  employee_first_name  CHAR(11),
    22                  employee_middle_name CHAR(11),
    23                  employee_hire_date   CHAR(10) date_format DATE mask "mm/dd/yyyy"
    24                 )
    25         )
    26       LOCATION ('info*.dat')
    27      )
    28  /
    Table created.
    SQL> select * from emp_load;
    select * from emp_load
    ERROR at line 1:
    ORA-29913: error in executing ODCIEXTTABLEOPEN callout
    SQL> set serveroutput on
    SQL> declare
      2      v_exists      boolean;
      3      v_file_length number;
      4      v_blocksize   number;
      5      v_stmt        varchar2(1000) := 'alter table emp_load location(';
      6      i             number := 1;
      7  begin
      8      loop
      9        utl_file.fgetattr(
    10                          'TMP',
    11                          'info' || i || '.dat',
    12                          v_exists,
    13                          v_file_length,
    14                          v_blocksize
    15                         );
    16        exit when not v_exists;
    17        v_stmt := v_stmt || '''info' || i || '.dat'',';
    18        i := i + 1;
    19      end loop;
    20      v_stmt := rtrim(v_stmt,',') || ')';
    21      dbms_output.put_line(v_stmt);
    22      execute immediate v_stmt;
    23  end;
    24  /
    alter table emp_load location('info1.dat','info2.dat')
    PL/SQL procedure successfully completed.
    SQL> select * from emp_load;
    EMPLO EMPLOYEE_DOB         EMPLOYEE_LAST_NAME   EMPLOYEE_FIRST_ EMPLOYEE_MIDDLE
    EMPLOYEE_
    56    november, 15, 1980   baker                mary            alice     0
    01-SEP-04
    87    december, 20, 1970   roper                lisa            marie     0
    01-JAN-99
    SQL>
    SY.
    P.S. Keep in mind that changing location will affect all sessions referencing external table.
