Htmldb_mail and charset

Hi all,
I'm using the htmldb_mail package to send plain-text e-mails. Is there some way to change the default (I guess) UTF-8 charset to ISO-8859-1?
Some e-mail clients, like Hotmail, are not displaying my e-mails correctly.
Thanks.

Hey Scott, thanks for the thread.
It's been a while since I posted this thread, but we had put the e-mail problem aside for a while...
Now I'm facing another problem...
I can't get any of the procedures invoked from the job to run correctly.
I keep getting the same errors: either ORA-00942 or ORA-06576...
However, I get no compilation errors on the procedures in SQL Developer.
All the problems started when we moved from APEX 2.2 to APEX 3.0...
Thanks in advance,
Tsveti

Similar Messages

  • Problem with encoding and charset for downloading a file

    Hi guys, I have a problem and I beg for your help; I am 1000% frustrated at this point.
    I have a servlet which has to do some work and give a log file to the final user. Coding the logic for that took me about 5 minutes; the problem is that the resulting file doesn't display properly in Notepad (the default app for opening .txt files). I have tried every way I have read about on the internet and absolutely nothing works.
    After trying about 20 different ways of doing this without success, this is my actual code:
    Charset def = Charset.defaultCharset();
    OutputStreamWriter out = new OutputStreamWriter(servletOutputStream, def);
    for (String registry : regList) {
        out.write(registry + "\n");
    }
    out.close();
    The page gives the file to the user; I can download or open it, but when I open it this is the result:
    registry1registry2registry3registry4registry5registry6registry7...
    and I am expecting:
    registry1
    registry2
    registry3
    registry4
    registry5
    If I open it with WordPad or Notepad++ the file looks fine, but I can't get Notepad to read it correctly. I have spent about 10 hours on this and at this point I just don't know what to do. I have tried Windows-1252, UTF-8, UTF-16, and the default one. I have tried setting the encoding on the response header with no luck. Any help will be very appreciated.
    Thanks in advance.

    > ... the given file doesn't display properly in Notepad ... If I open it with WordPad or Notepad++ the file looks fine ... I have tried Windows-1252, UTF-8, UTF-16, the default one ...
    Your file likely uses *nix-style line endings, with a single LF (0x0A) at the end of each line.
    Notepad doesn't recognize a single LF as an end of line; it expects CRLF (0x0D 0x0A). The encoding isn't the issue.
    If you have to support Notepad, you will need to write a CR before each LF, i.e. use "\r\n" as the line separator.
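    A minimal sketch of that fix applied to the code above (servletOutputStream and regList are the poster's variables; everything else stays as posted):

        // Write CRLF ("\r\n") line endings so Notepad renders the line breaks.
        Charset def = Charset.defaultCharset();
        OutputStreamWriter out = new OutputStreamWriter(servletOutputStream, def);
        for (String registry : regList) {
            out.write(registry + "\r\n");  // CR + LF instead of a bare "\n"
        }
        out.flush();
        out.close();

    System.getProperty("line.separator") would give the platform's separator instead, but for files that must open cleanly in Notepad the explicit "\r\n" is the safer choice.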

  • How to set default encoding and charsets for JSPs and servlets

    Hi,
    Is there any possibility to set a default encoding or charset for JSPs and servlets (for both request and response)?
    For example in Weblogic such parameters can be set in weblogic specific configuration files (weblogic.xml).
    Thanks in advance.

    Hi,
    I created one request with a logo in the header and a page number in the footer, etc., and called it StyleSheet. After that you can import these formats for each request.
    You can do this in the compound layout.
    Regards,
    Stefan
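    For the question as asked, a portable way to apply a default encoding to both request and response is a servlet filter mapped to /*. A minimal sketch, assuming Servlet 2.4 (the class name and init-param are illustrative, not from the original thread):

        import java.io.IOException;
        import javax.servlet.*;

        // Forces a default character encoding before any JSP or servlet
        // reads request parameters or obtains the response Writer.
        public class DefaultEncodingFilter implements Filter {
            private String encoding = "UTF-8";

            public void init(FilterConfig config) {
                // Optionally configured via an init-param in web.xml.
                String param = config.getInitParameter("encoding");
                if (param != null) encoding = param;
            }

            public void doFilter(ServletRequest req, ServletResponse resp,
                    FilterChain chain) throws IOException, ServletException {
                req.setCharacterEncoding(encoding);   // affects getParameter()
                resp.setCharacterEncoding(encoding);  // affects the Writer (Servlet 2.4+)
                chain.doFilter(req, resp);
            }

            public void destroy() { }
        }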

  • Languages and charset

    Hello,
    I know that <meta charset="UTF-8"> will suffice for most common languages, but what about mixing two languages on the very same webpage? I will have one webpage that is predominantly Arabic with some English words. Can UTF-8 handle that and not try to mirror the English text to match the Arabic, given that Arabic reads right to left rather than left to right?
    I used to use <meta http-equiv="content-language" content="ar-en">, but HTML5 doesn't like that now.
    Lee

    In your head region, you want your meta tag to be "en"
    I do realize that is completely counterintuitive.
    I created two pages for an organization in two foreign languages (Spanish and Polish). Here is what I did:
    In HTML5, I just left things alone.
    <!DOCTYPE HTML>
    <html>
    <head>
    <meta charset="UTF-8">…
    In the div with the Spanish language: <div lang="es">
    When I am using Google's Chrome, it correctly identifies that region as Spanish and offers to translate the language.
    So, put your language change in the div, not on the page.
    Now, for Arabic.
    Your web pages should have the following <meta> tags set:
    <meta http-equiv="Content-Type" CONTENT="text/html; charset=windows-1256"> or
    <meta http-equiv="Content-Type" CONTENT="text/html; charset=iso-8859-6">
    Frankly, I would do both, because I fear that older Microsoft browsers may not correctly display the Arabic unless you define it properly for Windows.
    In HTML the base direction is either set explicitly by the nearest parent element that uses the dir attribute, or, in the absence of such an attribute, the base direction is inherited from the default direction of the document, which is left-to-right (LTR).
    Now, if you need to set the entire page up as Arabic and have no other language, here is what you do:
    <!DOCTYPE html>
    <html dir="rtl" lang="ar">
    <head>
    <meta charset="utf-8">…
    (this is HTML5). One of the things you'll see is the dir attribute set to "rtl", which means your text direction will be right to left for the whole document. In Internet Explorer and Opera, applying a right-to-left direction in the html or body tag will affect the user interface, too: on both of these browsers the scroll bar will appear on the left side of the window.
    But you can do this on the block level as well, just like I defined a div as Spanish:
    <div dir="ltr" lang="ar"> … </div>
    HTML 5 offers the auto direction control. The auto value tells the browser to look at the first strongly typed character in the element and work out from that what the base direction of the element should be. If it's a Hebrew (or Arabic, etc.) character, the element will get a direction of rtl. If it's, say, a Latin character, the direction will be ltr. This can be very useful for blogs, where you receive input that is submitted by readers.
    Lastly, when you specify your typefaces, you need to build a type stack that will have the Arabic characters, but not assume that everyone is using Windows, or Mac, or Linux:
    font-family: "Geeza Pro", "Nadeem", "Al Bayan", "DecoType Naskh", "DejaVu Serif", "STFangsong", "STHeiti", "STKaiti", "STSong", "AB AlBayan", "AB Geeza", "AB Kufi", "DecoType Naskh", "Aldhabi", "Andalus", "Sakkal Majalla", "Simplified Arabic", "Traditional Arabic", "Arabic Typesetting", "Urdu Typesetting", "Droid Naskh", "Droid Kufi", "Roboto", "Tahoma", "Times New Roman", "Arial", serif;
    Alan Wood has an Arabic Test page: http://www.alanwood.net/unicode/arabic.html
    You should also Google for "font survey" to find percentage of computers that have been found to have various fonts installed.
    -Mark

  • JTable, Clipboard and charset encoding...

    I'm trying to paste data into a JTable. Here's part of the code:
    private BufferedReader getReaderFromTransferable(Transferable t)
            throws IOException, UnsupportedFlavorException {
        if (t == null)
            throw new NullPointerException();
        DataFlavor[] dfs = t.getTransferDataFlavors();
        for (int i = 0; i < dfs.length; i++)
            System.out.println(dfs[i]);
        DataFlavor df = DataFlavor.selectBestTextFlavor(dfs);
        df = DataFlavor.getTextPlainUnicodeFlavor();  // static lookup of the platform's plain-text Unicode flavor
        Reader r = df.getReaderForText(t);
        return new BufferedReader(r);
    }
    When I'm copying data from Excel everything is fine because
    DataFlavor of mimetype=text/plain...charset=utf-16le is supported.
    However, if I try to copy and paste data only inside my JTable,
    I'm getting UnsupportedFlavorException. It happens because only two
    text/plain flavors are offered, neither of which has charset=utf-16le.
    The API says that utf-16le is used as the default Unicode encoding on
    Windows. What am I supposed to do? How can I set the utf-16le encoding
    for my JTable? Or maybe I should do something different.

    Hi,
    You don't have to set utf-16le encoding on the JTable; instead you have to create your own flavor type which supports the encoding you want. I have a code example somewhere on my HD, but I'm too lazy to dig it out. You can find examples of creating your own data flavors by putting "creating own data flavors" in the search field on the java.sun.com web site. This can be a real bloodbath, but try to survive.
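    A minimal sketch of the idea (the class name is made up; the assumption is that you want to expose the text both as the standard string flavor and as text/plain;charset=utf-16le):

        import java.awt.datatransfer.*;
        import java.io.*;
        import java.nio.charset.Charset;

        // A Transferable that offers its text both as the standard string
        // flavor and as text/plain;charset=utf-16le.
        class Utf16leSelection implements Transferable {
            private static final DataFlavor UTF16LE_FLAVOR;
            static {
                try {
                    UTF16LE_FLAVOR = new DataFlavor(
                        "text/plain; charset=utf-16le; class=java.io.InputStream");
                } catch (ClassNotFoundException e) {
                    throw new ExceptionInInitializerError(e);
                }
            }

            private final String text;

            Utf16leSelection(String text) { this.text = text; }

            public DataFlavor[] getTransferDataFlavors() {
                return new DataFlavor[] { UTF16LE_FLAVOR, DataFlavor.stringFlavor };
            }

            public boolean isDataFlavorSupported(DataFlavor flavor) {
                return UTF16LE_FLAVOR.equals(flavor)
                    || DataFlavor.stringFlavor.equals(flavor);
            }

            public Object getTransferData(DataFlavor flavor)
                    throws UnsupportedFlavorException, IOException {
                if (DataFlavor.stringFlavor.equals(flavor)) {
                    return text;
                }
                if (UTF16LE_FLAVOR.equals(flavor)) {
                    // Encode the text as UTF-16LE bytes, as the flavor promises.
                    return new ByteArrayInputStream(
                        text.getBytes(Charset.forName("UTF-16LE")));
                }
                throw new UnsupportedFlavorException(flavor);
            }
        }

    A TransferHandler on the JTable can then hand out this Transferable when exporting to the clipboard, so the paste side finds the flavor it is looking for.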

  • Oc4j and Charset

    Hi,
    we have a little problem with national characters.
    The problem occurs when I use:
    Java 1.4, OC4J 903 '020927' (or AS 903 with embedded OC4J).
    - "Our app doesn't show some national characters well."
    I tried setting the param default-charset="windows-1250" in the file orion-web.xml (I also tried setting it in ..home/config/global-web-application.xml).
    (If I use Java 1.4 with OC4J 904 and set the param as described above, everything is fine.)
    Thanks for any advice.
    Zajo

    Well, in our case the difference between OC4J 903 and 904 is in the JSP.
    Our JSPs start with:
    <%@ page contentType="text/html;windows-1250" language="java" %>
    If you are using OC4J 904, everything is OK.
    But if you are using OC4J 903, you have to use this syntax:
    <%@ page contentType="text/html;charset=windows-1250" language="java" %>
    You also have to set up global-web-application.xml (or orion-web-app.xml) with your default charset, for example:
    <orion-web-app
        default-charset="windows-1250"
        ...
    >
    Zajo

  • SetContentType and charset

    Is there any possible way to set the Content-Type header of a servlet response so that it does not append the "charset=ISO-8859-1" on the end of the header value?
    I would simply like to send a response that contains the header:
    Content-Type: application/pdf
    However, using setContentType("application/pdf") creates a header that looks like this:
    Content-Type: application/pdf;charset=ISO-8859-1
    I NEED to chop off that charset value completely. Is there any way of doing this?
    All help will be greatly appreciated.

    The following servlet:
    import java.io.IOException;
    import javax.servlet.*;
    import javax.servlet.http.*;

    public class OTest extends HttpServlet {
        public void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            ServletOutputStream sos = response.getOutputStream();
            sos.print("Test-Header: testing\n\n");
            sos.print("Here is the first line of the HTTP response.\n");
        }

        public void doPost(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            doGet(request, response);
        }
    }
    ... produces the following server response:
    HTTP/1.1 200 OK
    Content-Length: 67
    Date: Thu, 22 Jan 2004 12:54:16 GMT
    Server: Apache-Coyote/1.1
    Test-Header: testing
    Here is the first line of the HTTP response.
    It appears that calling getOutputStream() flushes a default set of headers. Is there any way to intercept this output from the server and manually write header information? If not, is there at least a way to address the Content-Type problem I mentioned in my first post?
    I am at a loss for direction on this problem. Any/all help is greatly appreciated.
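    For the Content-Type question specifically: many containers only append ";charset=..." when the response goes through the character-oriented Writer path, so writing through getOutputStream() often avoids it. A minimal sketch under that assumption (behavior varies by container and version, and loadPdf() is a hypothetical helper standing in for however the PDF bytes are produced):

        import java.io.IOException;
        import java.io.OutputStream;
        import javax.servlet.ServletException;
        import javax.servlet.http.*;

        public class PdfServlet extends HttpServlet {
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                // Set the type before the response is committed; containers
                // typically only append ";charset=..." on the Writer path.
                resp.setContentType("application/pdf");

                byte[] pdfBytes = loadPdf();  // hypothetical helper
                resp.setContentLength(pdfBytes.length);

                OutputStream out = resp.getOutputStream();  // byte path, no charset
                out.write(pdfBytes);
                out.flush();
            }

            private byte[] loadPdf() {
                // Placeholder: real code would read the PDF from disk or a DB.
                return new byte[0];
            }
        }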

  • I/O streams and charsets...

    Sorry for posting this in two forums, but I got no response in the regular Java forum... either my question is too advanced or no one knows?
    I was wondering if anyone could help clarify something for me. I don't really know how the character sets work, so by all means let me know if my thinking is all wrong. Sorry in advance for the long post.
    Say user 1 has a system default charset of ASCII. They write a message in a JTextArea and hit the save button. The program calls JTextArea.write(myFileWriter), which saves the text to a file (using the system default charset). They send the file to another user whose default charset is UTF-16. If the program simply loads the file into the JTextArea using JTextArea.read(myFileReader), wouldn't the text message get jumbled up? The UTF-16 machine would be reading two bytes per character when in fact the file was written out as one byte per character. The same is true the other way around: when the ASCII user loaded a UTF-16 file, it would treat each byte as one character when in fact two bytes represent one character. That is where the confusion is.
    The only way I could see to control this was to have a rule that says the files will always be in a specific format, say ASCII. Then before writing the contents to the file, I would call String.getBytes("ASCII") on the text of the JTextArea -- when doing this on the UTF-16 machine, I assume if it encountered a char whose value was > 255 it would simply convert it to some char like "?" whose value was <= 255 so it would fit in 8 bits? Then write that byte[] to the output stream.
    Then to load the file, instead of using JTextArea.read(), I would have to read the bytes into a byte array, then create a new String using String(byte[], "ASCII") and pass that to the JTextArea?
    Any dropped information from the UTF-16 file would simply show up as "?" on the ASCII machine. On the UTF-16 machine, everything would look fine? No double-spaced characters or such?
    Is there another way? Anyone?
    Jim

    > wouldn't the text message get jumbled up?
    Yes, I agree.
    > I assume if it encountered a char whose value was > 255 it would simply convert it to some char like "?" whose value was <= 255 so it would fit in 8 bits?
    Values in the ASCII character set are 7 bits. When a Unicode character is converted to an ASCII character, a value > 127 is converted to 63. 63 == '?'.
    > Any dropped information from the UTF-16 file would simply show up as "?" on the ASCII machine.
    Yes.
    > I assume then that my thinking is correct... unless there is some conversion going on, the text will not show up correctly on the different machines?
    Yes.
    > files will be written out/saved in a predefined format: 8 bits per character.
    ASCII characters are 7 bits, whereas ISO-8859-1 characters (ISO Latin Alphabet No. 1, a.k.a. ISO-LATIN-1) are 8 bits. You might consider using the ISO-8859-1 character set. According to the Charset API documentation, every implementation of the Java platform is required to support US-ASCII and ISO-8859-1.
    FYI, I found the following document called Supported Encodings in the SDK: C:\j2sdk1.4.0\docs\guide\intl\encoding.doc.html
    > but I am also curious about the actual conversion process from unicode to bytes? Is my thinking correct in that if the unicode character is too "large" to fit in 8 bits (byte) the system just defaults to some byte value?
    I haven't read the source code, but according to my experiments, when I convert from Unicode to ASCII, values > 127 are converted to 63.
    Also, according to my experiments, if a byte has a value between 0x80 and 0xff (unsigned 128 and 255), an attempt to convert from ASCII to Unicode results in a value of 65533 (no typo here, not 65535).
    When responding to this thread, I had to write some simple programs to simulate writing/reading, sending/receiving, and converting character sets to make sure what I am saying is correct. Here is an idea for your own inquiries.
    import java.nio.charset.Charset;
    import java.io.*;

    class Test {
        Charset cs = Charset.forName("ASCII");

        void m() {
            char c = (char)('x' + 20);  // 120 + 20 > 127
            System.out.println((int)c);
            ByteArrayOutputStream bout = new ByteArrayOutputStream();
            OutputStreamWriter out = new OutputStreamWriter(bout, cs);
            try {
                out.write(c);
                out.flush();
            } catch (IOException e) {
                return;
            }
            byte[] b = bout.toByteArray();
            System.out.println(b[0]);
            ByteArrayInputStream bin = new ByteArrayInputStream(b);
            InputStreamReader in = new InputStreamReader(bin);
            try {
                int i = in.read();
                System.out.println((char)i);
            } catch (IOException e) {
                return;
            }
        }

        public static void main(String[] args) {
            new Test().m();
        }
    }
    Also, I found this helpful.
    import java.nio.charset.Charset;
    import java.util.SortedMap;
    import java.util.Set;
    import java.util.Iterator;

    class Test {
        void m() {
            SortedMap m = Charset.availableCharsets();
            Set s = m.keySet();
            Iterator it = s.iterator();
            while (it.hasNext()) {
                System.out.println((String)it.next());
            }
        }

        public static void main(String[] args) {
            new Test().m();
        }
    }
    And this.
    import java.nio.charset.Charset;
    import java.util.Set;
    import java.util.Iterator;

    class Test {
        void m(String name) {
            Charset s = Charset.forName(name);
            System.out.println("display name= " + s.displayName());
            Set aliases = s.aliases();
            Iterator it = aliases.iterator();
            while (it.hasNext()) {
                String x = (String)it.next();
                System.out.println("alias= " + x);
            }
        }

        public static void main(String[] args) {
            if (args.length > 0) new Test().m(args[0]);
        }
    }

  • Runtime and charset

    Hello,
    Technical info: Windows 2000, IIS5, Tomcat 4.1.24, JDK 1.4.1, (.net)
    I wrote the following Java code:
    Process p = null;
    Runtime r = null;
    try {
        // Run the conversion batch program
        r = Runtime.getRuntime();
        p = r.exec("NEAT_converting.exe");
        // Wait for the end of the process
        p.waitFor();
    } catch (Exception e) {
        System.out.println("Error executing the converter : " + e.toString());
        e.printStackTrace();
        throw new IOException();
    } finally {
        p.destroy();
        r.gc();
    }
    As you can see, I run an executable from my Java code. The Java code is in a bean run from a JSP.
    If I connect to an Oracle DB from the bean, I can get the data with the French accents. When I try to get the same data from the exe (.net, sorry!), I get the wrong charset!
    I think that the exe runs under another user than the bean.
    Can anyone tell me:
    - When I run a cmd from java, can I choose the charset?
    - What is the user used by the exe?
    - Any other idea..
    Thanks a lot. Hope I was clear enough ;-)
    JDS

    > When I run a cmd from java, can I choose the charset?
    That entirely depends on the application you are executing and how it chooses the charset it uses.
    > What is the user used by the exe?
    Probably the same user that ran your program -- anything else could be a security exposure. Why would that make a difference?
    > Any other idea..
    Basically you are asking why your other application doesn't work the way you want. And based on what that other application is, I don't think this is really the right forum for that.
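    One thing you can control from the Java side is the charset used to read whatever the child process writes to its standard output. A minimal sketch (the executable name is the poster's; Cp1252 is an illustrative choice for a Windows-1252 console):

        import java.io.*;

        public class RunConverter {
            public static void main(String[] args) throws Exception {
                Process p = Runtime.getRuntime().exec("NEAT_converting.exe");

                // Decode the child's stdout with an explicit charset instead
                // of the JVM default, so accented characters survive.
                BufferedReader out = new BufferedReader(
                    new InputStreamReader(p.getInputStream(), "Cp1252"));
                String line;
                while ((line = out.readLine()) != null) {
                    System.out.println(line);
                }
                p.waitFor();
            }
        }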

  • Htmldb_mail and multiple rows

    I would like to send the result of a query that returns multiple rows via e-mail. How can I achieve that? The following code example explains in detail what I would like to have - but it doesn't work like that.
    DECLARE
        remp varchar2(4000);
    BEGIN
        remp := 'select name from employee';
        htmldb_mail.send(
            p_to   => '[email protected]',
            p_from => '[email protected]',
            p_body => 'These are the employees: '||remp,
            p_subj => 'mail test');
    END;
    Thanks Tobias

    Hi Tobias,
    You need to use a cursor to collect all your employee names into one variable, and then include that variable in your mail body.
    DECLARE
        remp varchar2(4000);
        CURSOR cur IS
            SELECT name
            FROM employee;
        rec cur%ROWTYPE;
    BEGIN
        OPEN cur;
        LOOP
            FETCH cur INTO rec;
            EXIT WHEN cur%NOTFOUND;
            remp := remp||', '||rec.name;
        END LOOP;
        CLOSE cur;
        htmldb_mail.send(
            p_to   => '[email protected]',
            p_from => '[email protected]',
            p_body => 'These are the employees: '||remp,
            p_subj => 'mail test');
    END;
    Hope this helps.

  • 10g upgrade and charset conversion

    Hello,
    I'm planning to upgrade my 9i DB to 10.2.0.4, and I also want to change my database character set to UTF8.
    Is there any possibility that both of these objectives can be achieved in one go?
    Maybe using import/export or something?
    Appreciate your help!
    Thanks,
    Nik

    I don't think there is one Oracle tool that allows you to do a database upgrade and a character set conversion in one go.
    I would recommend first upgrading the database to 10g and then changing the character set, because 9i has limited support now.
    Database character set changes are documented in the Globalization Guide: http://download.oracle.com/docs/cd/B19306_01/server.102/b14225/ch11charsetmig.htm#sthref1476

  • Oracle Text and Cyrillic charsets

    I'm trying to index a column containing a mix of Word documents and text documents (in a combination of koi8-r, iso-8859-5, and utf-8) stored in a BLOB column via Oracle Text.
    (The database has a native charset of AL32UTF8.)
    The table looks like this:
    BLOB_TEST (
        ID   NUMBER,
        DATA BLOB,
        FMT  VARCHAR2,
        CSET VARCHAR2,
        LANG VARCHAR2(10)
    )
    I set up my index to use INSO_FILTER with the format and charset options:
    CREATE INDEX blob_ling ON blob_test(data)
    indextype IS ctxsys.CONTEXT
    PARAMETERS('datastore ctxsys.direct_datastore
    lexer ctxsys.world_lexer
    filter ctxsys.inso_filter
    stoplist ctxsys.default_stoplist
    language column lang
    format column fmt
    charset column cset' );
    However, when I run full-text queries using non-ASCII (in this case Cyrillic) search terms, only the Word documents ever get hits.
    I.e., given a Word document and a text document containing the exact same Cyrillic string, a query for that exact string using CONTAINS only returns the Word document.
    What could be causing this behavior, and how do I actually enable character set filtering?

    When I used BASIC_LEXER in the same environment with Cyrillic documents, everything worked perfectly.

  • Decoding the content (text/plain) for charset "iso-2022-jp"

    Hi there,
    I am using the IMAP protocol to receive new e-mail messages and retrieving the contents using the JavaMail API: InputStream is = p.getInputStream().
    The part content type is text/plain or text/html, and the charset is "iso-2022-jp".
    The contents contain Japanese characters as well. In my code, I am decoding the contents using the charset "iso-2022-jp" to get the actual content (verified by sending the same content using an SFTP client built on JavaMail APIs). The contents are decoded properly on Linux but not on Windows.
    Can anybody help me find the root cause of this problem? The code snippet is as follows:
    InputStream is = p.getInputStream();  // p is a Part object
    byte[] bytesJap = IOUtils.toByteArray(is);  // org.apache.commons.io.IOUtils
    String decodedBytesJap = decodeBytes(charset, bytesJap);
    This is the content I get when running the program:
    on a Linux box:
    Subject: FW: Japanese characters
    聖書に示された神の純粋なみ言葉に基づいて
    on a Windows box:
    Subject: FW: Japanese characters
    $B@;=q$K<($5$l$??@$N=c?h$J$_8@MU$K4p$E$$$F(B&
    ----------------------------------------------

    The decodeBytes() method implementation is as follows:
    protected String decodeBytes(String characterset, byte[] bytes)
            throws CharacterCodingException {
        CharsetDecoder decoder = null;
        try {
            decoder = Charset.forName(characterset).newDecoder();
        } catch (IllegalCharsetNameException e) {
        } catch (UnsupportedCharsetException e) {
        }
        decoder.onMalformedInput(CodingErrorAction.REPORT);
        decoder.onUnmappableCharacter(CodingErrorAction.REPORT);
        try {
            return decoder.decode(ByteBuffer.wrap(bytes)).toString();
        } catch (CharacterCodingException e) {
            throw e;
        }
    }
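    As an aside, JavaMail can usually do this decoding for you: for a text/* part, Part.getContent() returns a String already decoded with the charset declared in the part's Content-Type header, which sidesteps the manual CharsetDecoder handling. A minimal sketch (textOf is a hypothetical helper; p is the poster's Part variable):

        import javax.mail.Part;

        // Let JavaMail decode a text part using the charset from its headers.
        static String textOf(Part p) throws Exception {
            Object content = p.getContent();  // a String for text/plain and text/html parts
            if (content instanceof String) {
                return (String) content;
            }
            throw new IllegalStateException("not a text part: " + p.getContentType());
        }

    If Linux and Windows still disagree after that, the difference is usually a default-charset dependency elsewhere in the pipeline, for example printing the decoded String to a console that isn't UTF-8.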

  • JSF RI 1.2_4 and earlier: encoding bug, I think

    I'm from Bulgaria and I'm using Cyrillic characters on one JSF page using the Sun JSF RI implementation.
    The problem is that sometimes (1 out of 30 openings of a page) the Cyrillic characters look like "???"; only when I refresh is it OK.
    I'm using Facelets, but I tried without them and the result is the same.
    Every character is fetched from a resource bundle in UTF-8 encoding.
    Keys are something like:
    steptwo_livein_rent=\u043F\u043E\u0434 \u043D\u0430\u0435\u043C
    From time to time everything looks like ????, or not all fields, just one of them; not only input fields, not only output fields; random fields, I think, look like ????.
    That is really very strange. Does somebody have a solution?
    What I have tried:
    1) using <f:view locale="bg_BG">
    2) using charset=utf-8 and charset=1251
    The result is the same.
    I see something like this:
    http://isy-dc.com/~naiden/JSFBug.JPG
    Can I use some phase listener to be able to set the encoding on the response? Or can I do something else?
    JOKe

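    The phase-listener idea the poster mentions can work for the request side; a minimal sketch, assuming JSF 1.2 (registered via <phase-listener> in faces-config.xml; forcing UTF-8 is an illustrative choice):

        import java.io.UnsupportedEncodingException;
        import javax.faces.context.FacesContext;
        import javax.faces.event.PhaseEvent;
        import javax.faces.event.PhaseId;
        import javax.faces.event.PhaseListener;
        import javax.servlet.http.HttpServletRequest;

        public class Utf8PhaseListener implements PhaseListener {

            public PhaseId getPhaseId() {
                return PhaseId.RESTORE_VIEW;  // run before parameters are decoded
            }

            public void beforePhase(PhaseEvent event) {
                FacesContext ctx = event.getFacesContext();
                Object req = ctx.getExternalContext().getRequest();
                if (req instanceof HttpServletRequest) {
                    try {
                        ((HttpServletRequest) req).setCharacterEncoding("UTF-8");
                    } catch (UnsupportedEncodingException e) {
                        // UTF-8 is always supported, so this should not happen.
                    }
                }
            }

            public void afterPhase(PhaseEvent event) {
                // nothing to do after the phase
            }
        }

    A servlet filter that calls setCharacterEncoding before the FacesServlet runs achieves the same thing and fires earlier, which is often the more reliable place for it.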

  • USER_FILTER and database character set

    Hello,
    I'm currently working on integrating a tool into Oracle Text for filtering PDFs. My current approach is to call a command-line tool via a USER_FILTER preference, and this works fine as long as the database character set is AL32UTF8. The tool creates the filtered text as UTF-8.
    I'm struggling now with the case where the database character set is not Unicode, for example LATIN1. I had hoped that I could specify a chain of filters for this situation when creating the index: first a USER_FILTER to get the text out of the document, and then a CHARSET_FILTER to convert the filtered text from UTF-8 into the database character set. This is my attempt to set this up:
    execute ctx_ddl.create_preference ('my_pdf_datastore', 'file_datastore')
    execute ctx_ddl.create_preference ('my_pdf_filter', 'user_filter')
    execute ctx_ddl.set_attribute ('my_pdf_filter', 'command', 'tetfilter.bat')
    execute ctx_ddl.create_preference('my_cs_filter', 'CHARSET_FILTER');
    execute ctx_ddl.set_attribute('my_cs_filter', 'charset', 'UTF8');
    create index tetindex on pdftable (pdffile) indextype is ctxsys.context parameters ('datastore my_pdf_datastore filter my_pdf_filter filter my_cs_filter');
    These are the error messages I'm getting (translated from my German Windows installation):
    ERROR at line 1:
    ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
    ORA-20000: Oracle Text error:
    DRG-11004: duplicate or incompatible value for FILTER
    ORA-06512: at "CTXSYS.DRUE", line 160
    ORA-06512: at "CTXSYS.TEXTINDEXMETHODS", line 364
    The relevant message is DRG-11004, "duplicate or incompatible value for FILTER".
    So here is my question:
    Do I understand correctly that with USER_FILTER the text is always expected in the database character set, i.e. that it is mandatory for the filter to produce its output in the database character set, or are there any alternatives?
    Thanks
    Stephan

    The previous experiments were performed with Oracle 10g. I just saw that in Oracle 11.1.0.7 there is this new feature: "USER_FILTER is now sensitive to FORMAT and CHARSET columns for better indexing performance."
    This seems to be exactly what I was looking for.
    Regards
    Stephan
