Converting to BLOB in AL32UTF8 destroys German Umlaut characters

Test case.
Two databases:
(A) NLS_CHARACTERSET = WE8MSWIN1252
(B) NLS_CHARACTERSET = AL32UTF8
I have used the following function to convert CLOB data to BLOB:
create or replace function clob_to_blob (p_clob_in in clob)
return blob
is
v_blob blob;
v_offset integer;
v_buffer_varchar varchar2(32000);
v_buffer_raw raw(32000);
v_buffer_size binary_integer := 32000;
begin
  if p_clob_in is null then
    return null;
  end if;
  DBMS_LOB.CREATETEMPORARY(v_blob, TRUE);
  v_offset := 1;
  FOR i IN 1..CEIL(DBMS_LOB.GETLENGTH(p_clob_in) / v_buffer_size)
  loop
    dbms_lob.read(p_clob_in, v_buffer_size, v_offset, v_buffer_varchar);
    v_buffer_raw := utl_raw.cast_to_raw(v_buffer_varchar);
    dbms_lob.writeappend(v_blob, utl_raw.length(v_buffer_raw), v_buffer_raw);
    v_offset := v_offset + v_buffer_size;
  end loop;
  return v_blob;
end clob_to_blob;
If I input ÄÖÜ to the function, in WE8MSWIN1252 the returned BLOB looks OK, but in AL32UTF8 the BLOB's characters come out like Ã„Ã–Ãœ.
Now if I save the values (CLOB and BLOB) in both cases, the CLOB is stored without problem, but the BLOB is saved incorrectly only in the AL32UTF8 environment.
The only difference between the two databases is the character set.
Is this a known limitation, or am I doing something wrong?
PS. I have seen similar behaviour when using dbms_lob.converttoblob
thank you

When describing the problem you missed one very important point: how do you look at the content of the BLOB to tell if it is OK or not? Remember that observation usually influences the experiment results ;-)
A sequence like Ã„Ã–Ãœ looks like the AL32UTF8 encoding of the umlauts viewed with a WE8MSWIN1252 viewer. If you look at a BLOB that contains AL32UTF8 text (which is the case in an AL32UTF8 database) with a viewer that expects WE8MSWIN1252, then this is exactly what you should expect.
-- Sergiusz
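For what it's worth, two things that can help here. First, look at the raw bytes of the BLOB instead of trusting a viewer, e.g. with RAWTOHEX. Second, if you need the BLOB bytes in a particular character set no matter what the database character set is, DBMS_LOB.CONVERTTOBLOB accepts an explicit target character set id (blob_csid). A minimal sketch of both ideas; the choice of WE8MSWIN1252 as the target is only an example, not something the original post asked for:
declare
  v_clob         clob := 'ÄÖÜ';
  v_blob         blob;
  v_dest_offset  integer := 1;
  v_src_offset   integer := 1;
  v_lang_ctx     integer := dbms_lob.default_lang_ctx;
  v_warning      integer;
begin
  dbms_lob.createtemporary(v_blob, true);
  -- convert the CLOB, asking explicitly for WE8MSWIN1252 bytes in the BLOB
  dbms_lob.converttoblob(
    dest_lob     => v_blob,
    src_clob     => v_clob,
    amount       => dbms_lob.lobmaxsize,
    dest_offset  => v_dest_offset,
    src_offset   => v_src_offset,
    blob_csid    => nls_charset_id('WE8MSWIN1252'),
    lang_context => v_lang_ctx,
    warning      => v_warning);
  -- inspect the bytes: C3 84 C3 96 C3 9C would be UTF-8, C4 D6 DC is WE8MSWIN1252
  dbms_output.put_line(rawtohex(dbms_lob.substr(v_blob, 16, 1)));
end;
/
With blob_csid => dbms_lob.default_csid (or with UTL_RAW.CAST_TO_RAW as in the function above) the BLOB simply receives the bytes in the database character set, which is exactly the behaviour described in the PS about dbms_lob.converttoblob.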

Similar Messages

  • Pages 2.0.2 has problems with German umlaut characters and hangs.

    The problem occurred when I had the idea to rename the style names so that exported HTML files would be more widely accepted by other browsers.
    Therefore I tried to rename the used style names by replacing umlaut letters with their transcriptions, e.g. "Überschrift 1" with "Ueberschrift 1" and so on. But when doing so, the program froze while the last changed style name vanished and did not come back at all.
    Reinhard.
    iMac   Mac OS X (10.4.9)   computer chess programming

    I cannot reproduce your problem. I rename überschrift 1 to Ueberschrift. Logically it moves to the bottom of the list, but it is still there and nothing hangs.
    Does this happen in all your documents or only one?
    Does it happen using all templates or only one?
    Are you working in German all the time, or have you switched to English in the mean time?
    Do you have access to other computers with the problem, or have you seen it only on yours?
    Have you renamed other styles like "Text" or "Titel" without problem?
    If you start Pages in English, do you have the same problem?

  • Logging German umlauts?

    Is there a logging package that handles German umlauts - vowels with 2 dots above ? log4j? java logging?

    Many years ago I worked on a computer system where the missing German umlaut characters (äöü and their upper-case versions) were "sharing" the same code values with good old [], {} and |.
    So you had to write scripts that had the umlauted letters in place of those brackets, braces and the pipe. It looked funny!
    Boy, those were the days!

  • German umlauts into question marks

    Hi,
    The problem is that the German umlauts are converted to question marks. I create a mail message (see below). After sending it, the German umlauts show up as question marks in Outlook 2010:
    Gesch?ftsf?hrer: ....
    RG M?nchen ....
    mailmessage obj_message
    s_mapi_connector = 'SMTP:'
    obj_message.notetext = struct_drucker.s_email_message
    obj_message.recipient[l].address = s_mapi_connector + s_email_adress
    obj_message.recipient[l].recipienttype = mailto!
    obj_message.recipient[l].name = s_email_recipient
    // Email per MAPI-Client
    obj_session = CREATE mailSession
    en_result = obj_session.mailLogon(mailNewSession!)
    en_result = obj_session.mailsend(obj_message)
    en_result = obj_session.mailLogoff()
    DESTROY obj_session
    Does anyone know a solution?
    Thank You
    André Rust

    Hi André,
    Here is my code (also tested successfully with PB12.5.2 build 5703):
    mailSession l_mailSession
    mailMessage l_mailMessage
    l_mailMessage.Recipient [1].Name = "[email protected]"
    l_mailMessage.Recipient [1].RecipientType = mailTo!
    l_mailMessage.receiptrequested=TRUE
    l_mailMessage.subject = "PB E-mail example from PB 12.6 Einflußgröße Gefäßschädigung"
    l_mailMessage.noteText = "Bla,~nbla, bla, ...~n"
    l_mailMessage.noteText += "~n"
    l_mailMessage.noteText += "Meßgröße"
    l_mailMessage.noteText += "~n"
    l_mailMessage.noteText += "Rückäußerung"
    l_mailMessage.noteText += "~n"
    l_mailMessage.noteText += "Rückstöße"
    l_mailMessage.noteText += "~n"
    l_mailMessage.noteText += "flächenmäßig"
    l_mailMessage.noteText += "~n"
    l_mailMessage.noteText += "rüstungsmäßig"
    l_mailMessage.noteText += "~n"
    l_mailMessage.noteText += "überläßt"
    l_mailMessage.noteText += "~n"
    l_mailMessage.noteText += "äußerst"
    l_mailSession = CREATE mailSession
    if l_mailSession.mailLogon () <> mailReturnSuccess! then
        MessageBox ("E-mail error!", "Login failed!", Exclamation!)
    else
        if l_mailSession.mailSend (l_mailMessage) <> mailReturnSuccess! then
            MessageBox ("E-mail error!", "Sending failed!", Exclamation!)
        else
            MessageBox ("E-mail report!", "Message sent!", Information!)
        end if
    end if
    l_mailSession.mailLogoff ()
    DESTROY l_mailSession

  • Numbers don't show german umlaut but quick view does

    I do have a problem opening CSV files in Numbers.
    I have a shared folder with VirtualBox. The virtual HD is formatted in FAT32. With Windows I created a CSV file which contains german umlauts like ÄÖÜ.
    When I open the CSV with the "quick view" (press space when the file is selected in Finder), the CSV is displayed as a nice table and all umlauts are OK.
    When I double-click the CSV file to open it in Numbers, the umlauts are not displayed. Opening it in TextEdit gives the same result, no umlauts:
    ö = ^
    ß = fl
    ü = ¸
    ä = ‰
    I think this is an encoding problem, but why does "quick view" display it correctly? I'm confused.
    Thanks for any help.

    Hello
    I cannot find any encoding settings for opening files in Numbers 09.
    A simple and obvious solution would be to save the CSV file as UTF-8 on the FAT32 volume in VirtualBox, if possible.
    One other method is to copy the CSV file to an HFS+ volume and set its extended attributes to specify its text encoding. This method only changes the metadata of the file and so is less intrusive than converting the data itself to a different text encoding. When the text encoding extended attribute is properly set, Numbers 09 (and TextEdit.app set to use the "Automatic" encoding option when opening files) will open it in the specified text encoding.
    The following AppleScript script might help to set the extended attributes (OSX 10.5 or later only). Recipe is as follows.
    A1) Open /Applications/Utilities/AppleScript Editor.app, copy the code of set_latin1_xattr.applescript listed below and save it as application (bundle) in the folder where target CSV files reside.
    A2) Double click the saved applet and it will set the com.apple.TextEncoding extended attribute of the *.csv and *.txt files in the same folder where the applet resides to Latin 1 when they are recognised as 'ISO-8859 text' by file(1) command.
    -- set_latin1_xattr.applescript
    _main()
    on _main()
        set p2m to (path to me)'s POSIX path
        if p2m ends with "/" then set p2m to p2m's text 1 thru -2
        set sh to "
    # set current directory to parent directory of script (or die)
    cd \"${0%/*}\" || exit 1
    # add com.apple.TextEncoding xattr for Latin-1
    # for *.csv and *.txt files, of which file info contains 'ISO-8859 text', in current directory
    for f in *.{csv,txt}
    do
        if [[ $(file \"$f\") =~ 'ISO-8859 text' ]]
        then
            xattr -w com.apple.TextEncoding \"WINDOWS-1252;$((0x0500))\" \"$f\"
        fi
    done
    "
        do shell script "/bin/bash -c " & sh's quoted form & " " & p2m's quoted form
    end _main
    Another method is to convert the text encoding itself as you have already done using TextEdit.app.
    The following AppleScript might help to convert text encoding from latin-1 to utf-8 in bulk. Recipe is basically the same as the above.
    B1) Open /Applications/Utilities/AppleScript Editor.app, copy the code of convert_latin1_to_utf8.applescript listed below and save it as application (bundle) in the folder where target CSV files reside.
    B2) Double click the saved applet and it will convert the text encoding of the *.csv and *.txt files in the same folder where the applet resides to UTF-8 when they are recognised as 'ISO-8859 text' by file(1) command. Also it sets the extended attribute for text encoding accordingly.
    -- convert_latin1_to_utf8.applescript
    _main()
    on _main()
        set p2m to (path to me)'s POSIX path
        if p2m ends with "/" then set p2m to p2m's text 1 thru -2
        set sh to "
    # set current directory to parent directory of script (or die)
    cd \"${0%/*}\" || exit 1
    # make temporary directory
    temp=$(mktemp -d /tmp/\"${0##*/}\".XXXXXX) || exit 1
    # convert text encoding from ISO-8859-1 to UTF-8
    # for *.csv and *.txt files, of which file info contains 'ISO-8859 text', in current directory
    for f in *.{csv,txt}
    do
        if [[ $(file \"$f\") =~ 'ISO-8859 text' ]]
        then
            iconv -f ISO-8859-1 -t UTF-8 \"$f\" > \"$temp/$f\" \\
            && xattr -w com.apple.TextEncoding \"UTF-8;$((0x08000100))\" \"$temp/$f\" \\
            && mv -f \"$temp/$f\" \"$f\"
        fi
    done
    # clean up temporary directory
    rm -rf \"$temp\"
    "
        do shell script "/bin/bash -c " & sh's quoted form & " " & p2m's quoted form
    end _main
    Note that the extended attribute is not supported in FAT32 and so the above methods only work in HFS+ formatted volume.
    Scripts are briefly tested with Numbers 2.0.5 under OSX 10.5.8 and 10.6.5. Please make sure you have backup CSV files before applying the above scripts.
    Good luck,
    H

  • German umlaut in idoc to file scenario

    Hi,
    in our scenario we send MATMAS idocs to XI, map them and create a file using file adapter.
    Settings in file receiver communication channel: file type = text and  file encoding = UTF8. I also tried file encoding = ISO-8859-1 - both with the same result:
    German umlauts are not converted. E.g. the material short text "Hängematte" is shown as "HÃ¤ngematte" in the created file. The receiving system errors out. Acceptable would be
    "H& #228;ngematte" (of course without the blank, but I have to add it here or else the forum would replace this with "ä"). Any help is appreciated.
    Regards,
    Philipp

    Hi,
    Follow the steps:-
    1>In your adapter -> Set File type to TEXT -> use Encoding and provide ISO-8859-1 (http://en.wikipedia.org/wiki/ISO/IEC_8859-1)
    To know more about other encoding standards refer -
    http://en.wikipedia.org/wiki/Character_encoding
    2>file type==>binary
    Regards,
    AshwinM
    Reward If helpful

  • Encoding Problem: Losing German Umlaute from gathering data from Oracle 8i

    My problem concerns the display of German umlauts such as äöüß etc. The OS is NW65 out of the box with Apache 2.0.49 and Tomcat 4.1.28, JVM 1.4.2_02 and the Oracle 8i 8.1.7 JDBC driver.
    The data containing umlauts which is retrieved from the database somehow loses its umlauts. The umlauts which are coded directly in the servlet in order to generate the regular HTML are displayed without any problem.
    The same servlet and request work fine from a Unix environment. We have checked all code page settings (Java, NetWare, Tomcat etc).

    Hi Sven and Ingmar,
    I will try to kill 2 birds with one stone here. First of all you should check the definition of your current database character set and make sure that it can support all the characters that you need to store in your database.
    Please check out
    http://www.microsoft.com/globaldev/reference/iso.asp
    WE8ISO8859P9 is for Turkish and WE8ISO8859P1 is for western European (without Euro support).
    Next you need to set your client NLS_LANG character set (e.g. for your SQL*Plus session), so that Oracle can convert your client operating system characters correctly into your database character set. The NLS_LANG character set also determines how SQL*Plus interprets/displays your characters upon retrieval from the db.
    In both of your cases, I believe that the client NLS_LANG setting was not defined, hence this was defaulting to AMERICAN_AMERICA.US7ASCII. This tells SQL*Plus that your client operating system can handle ASCII data only, hence all the accented characters are converted to ASCII during insertion into the database. Likewise upon data retrieval.
    If you are running the SQL*Plus client on English Windows then your NLS_LANG character set should be WE8MSWIN1252, and for Turkish Windows it should be set to TR8MSWIN1254.
    If the client character set and the database character set are the same, Oracle does not perform any data conversion; that's why if you use a US7ASCII database and set your NLS_LANG character set to US7ASCII, you can insert all the accented Latin characters, Japanese, Chinese etc. This configuration is not supported, and it can cause many issues.
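    A quick way to see what is really stored, independent of the client NLS_LANG, is to dump the stored bytes on the server side. A minimal sketch (table and column names are placeholders):
    SELECT col, DUMP(col, 1016) FROM your_table;
    -- 1016 = hexadecimal dump plus the name of the character set the data is stored in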
    For more information on character set configuration and possible problems.
    Please check out the white paper Database Character set migration on http://otn.oracle.com/products/oracle8i/content.html#nls
    And the Globalization Support FAQ at:
    http://technet.oracle.com/products/oracle8i/
    Regards
    Nat

  • German Umlauts OK in Test Environment, Question Marks (??) in production

    Hi Sun Forums,
    I have a simple Java application that uses JFrame for a window, a JTextArea for console output. While running my application in test mode (that is, run locally within Eclipse development environment) the software properly handles all German Umlauts in the JTextArea (also using Log4J to write the same output to file-- that too is OK). In fact, the application is flawless from this perspective.
    However, when I deploy the application to multiple environments, the Umlauts are displayed as ??. Deployment is destined for Mac OS X (10.4/10.5) and Windows-based computers. (XP, Vista) with a requirement of Java 1.5 at the minimum.
    On the test computer (Mac OS X 10.5), the test environment is OK, but running the application as a runnable jar, german umlauts become question marks ??. I use Jar Bundler on Mac to produce an application object, and Launch4J to build a Windows executables.
    I am setting the default encoding to UTF-8 at the start of my app. Other international characters are treated OK after deployment (e and a with accents). The failure seems to be limited to German umlaut type characters.
    I have encoded my source files as UTF-8 in Eclipse. I am having a hard time understanding what the root cause is. I suspect it is the default encoding on the computer the software is running on. If this is true, then how do I force the application to honor german umlauts?
    Thanks very much,
    Ryan Allaby
    RA-CC.COM
    J2EE/Java Developer
    Edited by: RyanAllaby on Jul 10, 2009 2:50 PM

    So you start with a string called "input"; where did that come from? As far as we know, it could already have been corrupted.
    ByteBuffer inputBuffer = ByteBuffer.wrap( input.getBytes() );
    Here you convert the string to a byte array using the default encoding. You say you've set the default to UTF-8, but how do you know it worked on the customer's machine? When we advise you not to rely on the default encoding, we don't mean you should override that system property, we mean you should always specify the encoding in your code. There's a getBytes() method that lets you do that.
    CharBuffer data = utf8charset.decode( inputBuffer );
    Now you decode the byte[] that you think is UTF-8, as UTF-8. If getBytes() did in fact encode the string as UTF-8, this is a wash; you just wasted a lot of time and ended up with the exact same string you started with. On the other hand, if getBytes() used something other than UTF-8, you've just created a load of garbage.
    ByteBuffer outputBuffer = iso88591charset.encode( data );
    Next you create yet another byte array, this time using the ISO-8859-1 encoding. If the string was valid to begin with, and the previous steps didn't corrupt it, there could be characters in it that can't be encoded in ISO-8859-1. Those characters will be lost.
    byte[] outputData = outputBuffer.array();
    return new String( outputData );
    Finally, you decode the byte[] once more, this time using the default encoding. As with getBytes(), there's a String constructor that lets you specify the encoding, but it doesn't really matter. For the previous steps to have worked, the default had to be UTF-8. That means you have a byte[] that's encoded as ISO-8859-1 and you're decoding it as UTF-8. What's wrong with this picture?
    This whole sequence makes no sense anyway; at best, it's a huge waste of clock cycles. It looks like you're trying to change the encoding of the string, which is impossible. No matter what platform it runs on, Java always uses the same encoding for strings. That encoding is UTF-16, but you don't really need to know that. You should only have to deal with character encodings when your app communicates with something outside itself, like a network or a file system.
    What's the real problem you're trying to solve?

  • Mail Adapter - PayloadSwapBean - MessageTransformBean - German umlauts

    Hi there,
    I'm receiving mails with an attachment (.csv / .txt) that I want to process to get IDocs. Everything works fine but the conversion of German umlauts. I tried to apply several charsets (i.e. iso-8859-1, iso-8859-2, utf-8) in the contentType parameter without success. The result in my payload after swapping and transforming is a message without umlauts. All these characters have been replaced by the same 'character' that looks like a quadrangle. Therefore even the earliest possible mapping comes too late to convert this character back into umlauts, because I don't know anymore the original ones.
    When I process the same attachment with a file adapter in the same manner (until getting an IDoc) there are no problems with umlauts; the payload looks fine!
    I even checked the note 881308 (although it's said to be for the mail receiver) but it's already in the system (XI 3.0, SP 14)
    Anyone an idea to solve my problem?
    Regards,
    Ralph

    Hi Ralph,
    now I got the solution:
    Apply the MessageTransformBean twice.
    First you set the code page of the mail attachment as it arrives in the system.
    Then you do the conversion and set the code page the target XML should have.
    Make two entries in Module configuration:
    localejbs/AF_Modules/MessageTransformBean - contenttype
    localejbs/AF_Modules/MessageTransformBean - transform
    as parameters you set:
    contentType - Transform.ContentType - text/plain; charset=iso-8859-1
    transform   - Transform.ContentType - text/xml; charset=UTF-8
    transform   - Transform.Class       - com.sap.aii.messaging.adapter.Conversion
    and so on.
    The problem is that outlook does not provide the content type for the attachment, so the MailTransformBean assumes UTF-8, but the attachment has iso-8859-1, so you have to set this before the conversion.
    I have tested this with XI 3.0 SP17 with note 960501 included.
    Regards
    Stefan

  • Characters With German Umlaut in webservice

    Hello,
    I have a requirement wherein I have to pass characters with German umlauts in my web service, but when I look at the trace they are getting converted into dots.
    Does anyone have a solution for this?
    Regards,
    Gunjan

    Hi,
    Ideally this should not be a problem. Please provide more information.
    Where are you checking the trace? (PI?)
    Are you getting the dots in the request or in the response?
    Can you check encoding?
    Check this out, to me it seems problem related to encoding: http://www.sqldbu.com/eng/sections/tips/utf8.html
    The problem mainly arises because of a mismatch of encodings: ISO-8859-1 and UTF-8.
    So double-check the encoding type provided in the request and response.
    Regards,
    Gourav

  • AS2 Sender problem with German "Umlaute"

    Hello experts,
    I have an AS2 sender adapter sending orders into my SAP system. My problem is though that it deletes all German "Umlaute" before converting it into XML.
    I used to use the "CallBicXIRaBean" to do the Encoding with ISO-8859-1 and it did not work. So now I am using the "CharsetConversion" to do that but it still does not work.
    In the Module of the AS2 sender I am using:
    1) CharsetConversion
    - sourceDest --> MainDocument
    - targetDest --> MainDocument
    - sourceEnc --> ISO-8859-1
    - targetEnc --> ISO-8859-1
    2) CallBicXIRaBean
    - mappingName --> E2X_ORDERS_UN_D93A
    3) localejbs/CallSapAdapter
    - 0
    Does anyone have an idea what I have to change?
    Thank you very much for your help!
    Best regards,
    Peter

    Hello Iddo,
    Thank you for your answer.
    When I expect Umlaute in a message I always use ISO-8859-1 and not UTF-8.
    The Umlaute are actually deleted. For example the German "für" looks like this "f". Or "Präferenzsituation" looks like this: "Prerenzsituation". So it kills the Umlaut and the following character.
    The sender insists that he sends the messages with Umlaute. Now I activated the AS2 Message Dumping and hopefully will see what the message really looks like. Maybe the Umlaute are already deleted when they get to the AS2 adapter. I hope to find out soon.
    Best regards,
    Peter

  • Apex 4.0 Cascading Select List: ajax problem with german umlaute

    Hi everybody,
    Apex 4.0
    Dad PlsqlNLSLanguage: GERMAN_GERMANY.WE8MSWIN1252
    I have problems with german umlaute and ajax cascading select lists (Cascading LOV Parent Item).
    The data is populated without a page refresh in the select list when the parent select list changes but special signs like german umlaute are shown as weird characters.
    Seems like there is some charset problem with ajax.
    This is the only part of the application where special signs like umlaute are messed up. Everything else is fine.
    I already tried to figure out whether I can escape the umlauts in the JavaScript (file apex_widget_4_0.js), but no success here.
    Can anybody help me with this issue?
    Thanks in advance,
    Markus

    Hi Markus,
    your specified character set in your DAD is wrong. As mentioned in the installation instructions at http://download.oracle.com/docs/cd/E17556_01/doc/install.40/e15513/otn_install.htm#CHDHCBGI , Oracle APEX always requires AL32UTF8.
    >
    3. Locate the line containing PlsqlNLSLanguage.
    The PlsqlNLSLanguage setting determines the language setting of the DAD. The character set portion of the PlsqlNLSLanguage value must be set to AL32UTF8, regardless of whether or not the database character set is AL32UTF8. For example:
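    For the DAD in this thread that means changing only the character set portion of the value; the relevant dads.conf line would then look something like this sketch (the rest of the DAD stays as it is):
    PlsqlNLSLanguage GERMAN_GERMANY.AL32UTF8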
    Regards
    Patrick
    My Blog: http://www.inside-oracle-apex.com
    APEX 4.0 Plug-Ins: http://apex.oracle.com/plugins
    Twitter: http://www.twitter.com/patrickwolf

  • PDF printing German umlauts

    Hello all,
    I'm trying to use the new "Report Query" and "Report Layout" feature. First of all I have to say that my PDF printing setup works generally. My problem has nothing to do with that. Here's what I'm trying to do and what my problem is:
    I'm using a report query on a table which has data with German umlauts (äöüÄÖÜ) and special characters (ß) in it. Trying to print that report with the standard layout works; I see my German NLS characters. If I try to apply an XSL-FO stylesheet I get # signs for my German NLS characters. I played with the encoding of my stylesheet, but that has no effect. TRANSLATE ... USING or CONVERT doesn't have any effect either. I also changed the PlsqlNLSLanguage setting in my dads.conf to GERMAN_GERMANY, but that also has no effect. Now I'm pretty lost.
    Does anybody out there have some tips on using German special characters in report queries with self-written XSL-FO stylesheets?
    Regards Markus

    Hi Jes,
    this is the query:
    select name, title, original_title,
    to_char ( purchase_date, 'dd.mm.yyyy' ) as purchase_date,
    to_char ( price, '990D00' ) as price,
    currency
    from authors a,
    books b
    where a.id = b.author_id
    order by nlssort ( name, 'nls_sort = generic_m' ),
    nlssort ( title, 'nls_sort = generic_m' )
    here's an excerpt from the XML:
    <?xml version="1.0" encoding="UTF-8" ?>
    - <ROWSET>
    - <ROW>
    <NAME>Buchheim, Lothar Günter</NAME>
    <TITLE>Das Boot</TITLE>
    <ORIGINAL_TITLE />
    <PURCHASE_DATE>21.05.1997</PURCHASE_DATE>
    <PRICE>0.00</PRICE>
    <CURRENCY>DM</CURRENCY>
    </ROW>
    - <ROW>
    <NAME>Stroustrup, Bjarne</NAME>
    <TITLE>The C Programming Language</TITLE>
    <ORIGINAL_TITLE />
    <PURCHASE_DATE>21.05.1997</PURCHASE_DATE>
    <PRICE>0.00</PRICE>
    <CURRENCY>DM</CURRENCY>
    </ROW>
    </ROWSET>
    as you can see the umlauts (ü) are there but the + signs are missing. The XSL-FO is indeed too big to post here. If you give me your email address, I'll mail it in zip format to you.
    Regards Markus

  • [fixed] German Umlauts are broken on an external disk

    Hi!
    I am having the problem that the german Umlauts (üäöß) are broken on my external hard drive.
    That's for example the folder "Hörbücher" -> "HÃ¶rbÃ¼cher" or "Herbert Grönemeyer" -> "Herbert GrÃ¶nemeyer"
    I had this problem before (when i switched on SuSE from KDE3 to KDE4), the solution back then was to convert all filenames on the disk to UTF-8 (they were ISO-8859-1 before).
    Problem now is: they already are, yet the names are still broken (after I installed Arch but kept KDE4).
    When i try to convert the files with
    convmv -f ISO-8859-1 -t UTF-8 -r /media/disk/*
    I get back:
    Skipping, already UTF-8: /media/disk/videos/musikvideos/Hannes_Wader___Viel_zu_schade_für_mich1972.flv
    Which is logical, because they already ARE UTF-8.
    So - how can i repair the filenames now?
    When i start convmv it says this:
    bash-3.2# convmv -f ISO-8859-1 -t UTF-8 -r /media/disk/*
    perl: warning: Setting locale failed.
    perl: warning: Please check that your locale settings:
    LANGUAGE = "de",
    LC_ALL = (unset),
    LC_COLLATE = "C",
    LANG = "de_DE.utf8"
    are supported and installed on your system.
    perl: warning: Falling back to the standard locale ("C").
    Your Perl version has fleas #37757 #49830
    Starting a dry run without changes...
    Skipping, already UTF-8: /media/disk/musik/Hörbücher
    No changes to your files done. Use --notest to finally rename the files.
    bash-3.2#
    The part of my rc.conf concerning my locale is this:
    LOCALE="de_DE.utf8"
    HARDWARECLOCK="localtime"
    USEDIRECTISA="no"
    TIMEZONE="Europe/Berlin"
    KEYMAP="de"
    CONSOLEFONT=
    CONSOLEMAP=
    USECOLOR="yes"
    Last edited by haukew (2009-01-07 12:26:20)

    Hi!
    Thanks for your answer.
    First, this is the output:
    $ echo $LANG
    de_DE.utf8
    $ set | grep LC_
    LC_COLLATE=C
    $
    And - yes, the lines were (actually all lines were...) commented out, i must have forgotten this when i set up the machine
    I just changed it and ran locale-gen and will now restart - let's see...
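    For reference, the lines in question are presumably in /etc/locale.gen; a minimal sketch of the entry that has to be uncommented there before running locale-gen as root:
    de_DE.UTF-8 UTF-8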
    @filesystem: Sorry, i just didn't see that, i was quite tired when i posted my second post
    It's ext3
    [edit] Yep, that did it - everything works now - thanks a lot!
    Last edited by haukew (2009-01-07 12:27:38)

  • German Umlaute don't work after upgrading to Yosemite with Japanese Romaji Layout set on U.S.

    Hi everyone,
    previously i could just use [ALT] + [U] and then press [A] / [a] / [U] / [u] / [O] / [o] to generate the german umlaut version.
    After installing Yosemite this does not work anymore, which is kind of frustrating, because it forces me to add another keyboard layout and switch whenever I have to write a German umlaut.
    I was previously using 'Input Source' > 'Japanese' > Enabled Romaji (Romaji layout was set to U.S.) & Hiragana.
    For testing purpose i added the Standard U.S. Layout and writing the Umlaut with the keyboard shortcut still works there. So that's why i think this is a Romaji specific bug.
    Has anyone any idea what's wrong ? Did Apple change something ?
    Best Regards

    Before I stumbled upon this thread, there was no flag at all. I had the character viewer there because pre-Yosemite it was the only way to access emoji (and things like arrows and other special symbols) in many apps (and still is for my Adobe CS5 apps). Additionally, I would use the keyboard viewer from time to time as a way to learn which keys to use for other special symbols (bring up the keyboard viewer and press option or shift-option or other combinations and it would show you a preview of what each key would generate).
    While reading this thread, I added more input sources (US International and US Extended) and would see the flags. The flags replaced the previous character/keyboard viewer menu icon. I switched between each flag to test my issue under each, to no avail.  Then I tried adding the Spanish keyboard but still had the same issue.
    Now I've removed all the input sources except the primary US keyboard. There is no longer a flag in my menu, it's gone back to the character viewer icon.
    The issue is still there. When I type option-r, u, or g, in any app nothing happens at all.
    Unfortunately, neither auto-substitution nor press-and-hold work in Adobe CS5 apps, which is where I need to use these special characters the most.
    Thank you for reaching out
