Handling of international char in WTC

Hi,
we have a problem with our Swedish letters when we send them (as 8-bit ASCII strings) from a TUXEDO application to WLS. The strings show 'funny' values upon reception (we send the strings in FML buffers). We reckon this has to do with encodings in some way. Does anyone have a pointer to a resource that describes how to handle this matter?
Furthermore, it seems that if our bean handling the Tuxedo call just returns the FML buffer (containing Swedish 8-bit letters), the buffer is modified upon return to the caller. Is that correct behaviour?
Regards, Ola

Hi Ola,
Are these multi-byte strings? Also, what sort of field are you placing them in on the Tuxedo side? WTC does not currently support the MBSTRING buffer type or the FLD_MBSTRING field type. This support is expected in the next major release of WLS.
As for modifying the FML buffer, this is probably a side effect of the encoding/decoding process. For supported buffer types and field types, the encoding/decoding should be symmetric. If you are seeing something that modifies WTC-supported buffer/field types, please contact BEA Customer Support and raise a case with them.
Regards,
Todd Little
BEA Tuxedo Engineering

Similar Messages

  • Char, UTF, Unicode, International chars

    Hi all,
    Could anybody make a brief summary of the connections among the char data type, UTF, Unicode, International chars?
    Thanks!

    The Java char data type usually contains one Unicode character. However, some Unicode characters are larger than 16 bits (the size of char); those characters are represented by multiple chars (a surrogate pair). There are multiple byte encodings for Unicode; two of the most common are UTF-8 and UTF-16. The char data type stores characters in the UTF-16 encoding. UTF-8 is a common format for serializing Unicode, since it takes up half the space of UTF-16 for the most common characters. Also, UTF-8 deals in bytes, so byte order is not an issue when transferring between different platforms. Finally, one of the best reasons to use UTF-8 is that for characters with code < 128, UTF-8 is identical to ASCII.
    Every character is an international character.
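    The points above can be sketched with the standard library alone (the particular strings are just illustrative examples):

    ```java
    import java.nio.charset.StandardCharsets;

    public class EncodingDemo {
        public static void main(String[] args) {
            String ascii = "A";                 // code point < 128
            String accented = "é";              // code point U+00E9
            String emoji = "\uD83D\uDE00";      // U+1F600, needs a surrogate pair

            // char count vs. code point count for a supplementary character
            System.out.println(emoji.length());                          // 2 chars
            System.out.println(emoji.codePointCount(0, emoji.length())); // 1 code point

            // UTF-8 is 1 byte for ASCII and grows as needed; UTF-16 is 2 bytes minimum
            System.out.println(ascii.getBytes(StandardCharsets.UTF_8).length);    // 1
            System.out.println(accented.getBytes(StandardCharsets.UTF_8).length); // 2
            System.out.println(emoji.getBytes(StandardCharsets.UTF_8).length);    // 4
        }
    }
    ```

    Note how the emoji is one logical character but two Java chars, which is exactly the "larger than 16 bits" case described above.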

  • International Chars Support in nio

    Hello Gurus
    After making our server scalable, we are now ready to deploy it world wide. I have a question regarding encoding/decoding international characters.
    We want the server to support as many languages as possible. Presently, I try to decode the byte buffer using a few constant (English-based) charsets. If unsuccessful, I find all the charsets on the machine and try them one by one until one decodes successfully. For text with French characters, after I call toString on the char buffer, all the English characters are stripped off, leaving only the French characters (characters not found in English but only in French). Let me know how to proceed. Any help would be much appreciated.

    double post.
    NIO supports the same encodings as blocking I/O.

  • International Chars Support

    Hello Gurus
    After making our server scalable, we are now ready to deploy it world wide. I have a question regarding encoding/decoding international characters.
    We want the server to support as many languages as possible. Presently, I decode the byte buffer using a few constant (English-based) charsets. If unsuccessful, I find all the charsets on the machine and try them one by one until one decodes successfully. For text with French + English characters, after I call toString on the char buffer, all the English characters are stripped off, leaving only the French characters (characters not found in English but only in French). The type of encoding used by the client is not known.
    Let me know how to proceed. Any help would be very appreciated.

    It really isn't possible to confidently work out the character encoding of an incoming byte stream from the bytes sent. Different encodings may give different, but equally valid characters for the same bytes.
    Are we talking about a web service here? If so, the encoding a browser uses when submitting a form is the same one you sent the form in. Don't use new String(bytes, encoding) for this; use HttpServletRequest.setCharacterEncoding(). For internationalised web sites, it's best to use UTF-8 for your pages.
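    To see why charset sniffing can't be reliable, here is a minimal sketch: the very same byte is a perfectly valid character in two different charsets, so no amount of inspection can tell you which one the sender meant.

    ```java
    import java.nio.charset.Charset;
    import java.nio.charset.StandardCharsets;

    public class AmbiguousBytes {
        public static void main(String[] args) {
            // The byte 0xE9 is 'é' in ISO-8859-1 but 'щ' in ISO-8859-5.
            // Both decodings succeed, so guessing the charset from bytes is guesswork.
            byte[] bytes = { (byte) 0xE9 };
            System.out.println(new String(bytes, StandardCharsets.ISO_8859_1));      // é
            System.out.println(new String(bytes, Charset.forName("ISO-8859-5")));    // щ
        }
    }
    ```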

  • Receiver File adapter not able to handle the special Char

    We have an asynchronous interface from an MDM file to a CRM (Siebel) file. XI picks up the file from MDM and processes it with a transformation. We have observed that if the file contains a special character such as ®, the output file is not written. The trace was checked: the message reaches the adapter framework to do the FCC, but the file is not written, and the process is marked as successful in SXMB_MONI. Any pointer will be helpful.
    Thanks,
    Samir

    Hi,
    Try with ISO-8859-2:
    http://en.wikipedia.org/wiki/Windows-1250
    See the last row in the above link.
    Secondly, to make sure whether you are getting the correct character or not, just open the file in Internet Explorer and choose the correct code page; then you will see the characters correctly.
    Regards,
    Sarvesh

  • Event Handler with Internal Loops

    Hi...
    I'm trying to update a basic program to handle control events more efficiently.  The program needs to perform the following functions on a start-button press:
    1)  Import a data file and parse instrument settings from multiple (X) rows
    2)  Loop over the rows with a case for each, changing input settings, then read the test equipment and store the data in a new output data file
    I've looked at the producer/consumer example and the continuous measurement and logging example, but I'm not sure whether either (or neither) of the following two options is the best way to handle the looping over the file...
    A)  Use events to trigger looping over all input cases within a consumer (I've had a problem with this because I can't terminate the loop within the consumer with an abort button)
    B)  Make the consumer only a single data acquisition, load the loop inputs as queues, and generate output queues to be handled by another parallel logging loop
    Any advice?

    Hi bripple,
    Based on what you've described above, a Producer/Consumer Design Pattern (Events) might work. There is a template for this design pattern that ships with LabVIEW which you can access by going to File > New > VI > From Template > Frameworks > Design Patterns. When the user clicks the start button on your front panel, you can queue up a command that will trigger your consumer loop to read the file and loop over each instrument setting. Within that loop, you should be able to queue up additional events corresponding to each instrument setting and reading.
    In terms of error handling, you can conditionally stop a loop if you detect an error. If your user decides to push a button on the front panel to stop the entire process, you can use Enqueue Element at Opposite End to put a stop command at the beginning of the queue. When your consumer loop encounters this event, it can flush the queue and do any cleanup it needs to perform.
    One additional thing to be cautious of is that a queue can only handle one data type. Because of that, you may also want to consider the Queued Message Handler design pattern, which allows you to send both a command and data along with that command. I think that would be ideally suited to your case, since you could send a "Read Instrument" command along with the data for its settings. You can access this design pattern from within LabVIEW as well. If you have LabVIEW 2012 or LabVIEW 2013, see these instructions. Everything I've said above also holds true for the Queued Message Handler.
    Let me know if you have any questions or if this is helpful.
    Regards,
    Matthew B.
    Applications Engineer
    National Instruments

  • International Char. Sets

    Hello,
    Does anyone know how to force the browser to use a particular charset with JSP? I am trying to write a page in Japanese, but the browser keeps switching to the default ISO charset. I have already tried setting the contentType in the page directive, but I cannot find a listing anywhere of legal charsets. The funny thing is, up until a few days ago, my browser automatically used the correct charset to display the Japanese characters. Any help would be great. I have been scavenging the Internet for several days and have come up with nothing.
    Thanks!
    Phil

    Thanks for the reply. Very strange results, though. Before, I was adding the content type in the page directive, but that did not work. Then I tried adding the meta tag, as you just suggested, but I did not know the correct charset name. After adding your meta tag, nothing happened. But then I also added the charset in the page directive, and now it works!
    As background, I am running Tomcat on WinME. A possible problem is that to get it running I have to start java on the main class directly, because the batch files will not work due to static environment vars and clearing my classpath. Also strange. So I am wondering if part of the original problem is my server. Another background point: a few weeks ago it worked; only in the last couple of days did it stop working.
    Thanks for the input!
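    For reference, the working combination described here would look something like the sketch below. Shift_JIS is just one example of a Japanese charset name; substitute whichever encoding your pages actually use.

    ```jsp
    <%@ page contentType="text/html; charset=Shift_JIS" %>
    <meta http-equiv="Content-Type" content="text/html; charset=Shift_JIS">
    ```

    The page directive tells the container which charset to encode the response in (and sets the Content-Type header), while the meta tag is a hint for browsers and tools reading the saved file; using both keeps them consistent.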

  • How does the (ABAP) AS handle unused internal tables? (garbage collection?)

    Hi,
    Let's suppose I have a class which contains a simple method, as follows:
    method do_this.
      data lt_table type something_tab.
      "do some manipulation on the lt_table.
    endmethod.
    So at runtime, at the moment we leave the method do_this, will the lt_table memory be freed, or is it still allocated? Do we need an explicit "FREE lt_table" statement?
    I want to optimise memory usage in my code, and was wondering whether such explicit calls to FREE are necessary.
    Thank you,
    Edited by: Huynh Van Du Tran on Mar 16, 2009 6:38 PM

    Hi,
    I would say it is better to use FREE itab at the end of the processing in your code. At END-OF-SELECTION you can FREE all the internal tables that were used in the program. This is one good approach to optimizing memory.
    Regards
    Vimal

  • Typed international chars are lost over ssh

    Hi!
    I have implemented a bash script to make it easier to add new users to the system. I planned to run it over ssh. The problem is that many names have non-ASCII characters in them (öäü and the like). When I connect to the server from my workplace over ssh, these characters are lost no matter what I have tried. I have used PuTTY on Windows, and Terminal and iTerm on an OS X laptop; the problem remains. I can only enter these characters when I log on to the server locally and open Terminal or iTerm. It is also curious that when I run the nano text editor over ssh, it does display those characters, so it seems to have something to do with bash itself.
    Thanks for any help!

    That's likely a character-encoding mismatch, potentially with PuTTY or within your bash shell script. While it's possibly PuTTY (check your settings), I'd probably look at the bash script in detail.
    Have a look around for existing discussions of Unicode and bash, as a start. The character conversion usually involves some combination of iconv (conversion) and the bash LC_CTYPE knob.
    It's common to see the latter set to en_US.UTF-8 to get UTF-8 processing, though your question and your forum name imply you're probably not looking to use a US English locale here.
    As for more details, read this topic, read the comments of [this thread|http://blogamundo.net/dev/2008/02/16/dont-sort-stuff-in-unicode-with-bash], and most definitely go read and grok [this|http://www.joelonsoftware.com/articles/Unicode.html].

  • How to handle all UTF-8 char set in BizTalk?

    Can anyone let me know how to handle the UTF-8 character set in BizTalk?
    My receive file can contain any character set, with characters like ÿÑÜÜŒöäåüÖÄÅÜ. I have to support all characters under the umbrella of UTF-8.
    But when I try to convert the flat-file data to XML, it converts the special characters to ??????????.
    Thanks,

    That won't work, because the content has been modified simply by posting it.
    Let's start from the beginning:
    No component will ever replace any character with '?'; that just doesn't happen. Some programs will display '?' if a byte value does not fall within the current character set (UTF-x, ANSI, ANSI + code page, etc.).
    You need to open the file with an advanced text editor such as Notepad++. Please tell us exactly where you are seeing the '?'.
    The code page is not an encoding itself; it is a special way of interpreting ANSI single-byte char values 0-255 that supports characters beyond the traditional extended character set.
    You need to be absolutely sure what encoding, and possibly code page, the source app is sending. Notepad++ is pretty good at sniffing this out, or just ask the sender. If you determine that it's really UTF-8, you must leave the Code Page property blank.
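    One way to check the "is it really UTF-8?" question programmatically is a strict decode that fails loudly instead of silently substituting replacement characters. A sketch (shown in Java for illustration; this is not a BizTalk API):

    ```java
    import java.nio.ByteBuffer;
    import java.nio.charset.CharacterCodingException;
    import java.nio.charset.CharsetDecoder;
    import java.nio.charset.CodingErrorAction;
    import java.nio.charset.StandardCharsets;

    public class Utf8Check {
        // Strictly decode bytes as UTF-8; throws CharacterCodingException on
        // invalid input instead of substituting replacement characters.
        static String decodeUtf8Strict(byte[] bytes) throws CharacterCodingException {
            CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder()
                    .onMalformedInput(CodingErrorAction.REPORT)
                    .onUnmappableCharacter(CodingErrorAction.REPORT);
            return decoder.decode(ByteBuffer.wrap(bytes)).toString();
        }

        public static void main(String[] args) throws Exception {
            byte[] valid = "ÖÄÅÜ".getBytes(StandardCharsets.UTF_8);
            System.out.println(decodeUtf8Strict(valid)); // decodes cleanly

            // The same text in Latin-1 single-byte form is NOT valid UTF-8
            byte[] latin1 = "ÖÄÅÜ".getBytes(StandardCharsets.ISO_8859_1);
            try {
                decodeUtf8Strict(latin1);
            } catch (CharacterCodingException e) {
                System.out.println("not valid UTF-8");
            }
        }
    }
    ```

    If a file from the sender fails this kind of strict decode, it is not UTF-8 no matter what the sender claims, and the code-page settings need another look.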

  • External Web Server links to internal web server on LAN - how to configure?

    I'm hoping someone can give me a bit of assistance with some routing configurations:
    Currently, I have a Cisco PIX 515E that's handling my VPN and routing/DNS, etc. I'm dumping the PIX (it's overkill for my organization and it's costing too much money for Cisco-certified techs to come in and still not configure it correctly for my needs - long story).
    Furthermore, an external website hosted with our ISP links to a public IP (let's say 192.x.x.1) that points through the current PIX firewall, through a DMZ, and then to a webserver hosted locally behind our firewall.
    I'd like our Xserve to take over for the PIX, providing VPN access, DNS, etc., and to properly route calls from the web to 192.x.x.1 to the correct server behind our network.
    The Xserve has two NICs, one on a public IP 192.x.x.2 (for the sake of this discussion) and one with its internal address of 10.1.0.2 for file sharing, etc.
    The internal web server also has two NICs, one that listens for the links to 192.x.x.1, and one that listens locally on 10.1.0.80 for LAN application services.
    How do I configure DNS etc. on the Xserve to properly channel the incoming calls to 192.x.x.1 so they reach the server they're supposed to reach?
    Any help is appreciated. If more info is needed, I'm happy to provide.
    Thanks in advance!

    I've read your post several times and I'm pretty sure I understand what you're saying, until the line:
    >How do I configure DNS/etc. on the Xserve to properly channel the incoming calls to 192.x.x.1 to properly reach the server they're supposed to reach?
    Assuming that the 192.x.x.1 address is a real-world, public IP address that the web server is using, you want all requests from the outside world to go to this address, correct? But requests from the inside world should go to the 10.1.0.80 address on that server?
    That part I get - you want split DNS, which is not trivial to set up, but is manageable. The part I don't get is where the firewall comes in - you're removing the PIX and replacing it with an Xserve, but the web server has a public IP address in the same range as the Xserve's public IP address, and on that basis no traffic is going to flow through the firewall.
    So I'm not sure if this is a firewall or a DNS question.
    Split DNS will handle the internal vs. external traffic going to the different IP addresses of your server. You can't use Server Admin to do this (it can't handle multiple views of the DNS), but it is possible to do by hand.
    The firewall element stumps me, though - but if the XServe is going to run as the firewall you might just find it easier to put the web server behind the firewall and forget the whole DMZ concept.
    Then again, you could get the PIX operating correctly - it's a viable firewall appliance and I'd be surprised if it couldn't do what you want here.
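    As a rough sketch of the hand-built split-DNS idea, BIND (which Mac OS X Server uses underneath) supports views; the zone name, file names, and network ranges below are placeholders for illustration:

    ```
    // named.conf sketch: internal clients get the LAN address,
    // everyone else gets the public one.
    view "internal" {
        match-clients { 10.1.0.0/16; localhost; };
        zone "example.com" {
            type master;
            file "db.example.com.internal";   // www -> 10.1.0.80
        };
    };

    view "external" {
        match-clients { any; };
        zone "example.com" {
            type master;
            file "db.example.com.external";   // www -> 192.x.x.1
        };
    };
    ```

    The two zone files hold the same names with different addresses; which answer a client sees depends solely on which match-clients list it falls into.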

  • Memory space issue in internal table

    Hi ,
    My report is dumping because there is no memory space available for extending an internal table, after it is filled with about 2,500,000 records.
    the dump analysis is as follows :-
    Error analysis
    The internal table (with the internal identifier "IT_317") could not be
    enlarged any further. To enable error handling, the internal table had
    to be deleted before this error log was formatted. Consequently, if you
    navigate back from this error log to the ABAP Debugger, the table will
    be displayed there with 0 lines.
    When the program was terminated, the internal table concerned returned
    the following information:
    Line width: 1700
    Number of lines: 106904
    Allocated lines: 106904
    New no. of requested lines: 8 (in 1 blocks)
    How to correct the error
    The amount of storage space (in bytes) filled at termination time was:
    Roll area...................... 7272944
    Extended memory (EM)........... 603339264
    Assigned memory (HEAP)......... 396390176
    Short area..................... " "
    Paging area.................... 40960
    Maximum address space.......... 529887568
    You may able to find an interim solution to the problem
    in the SAP note system. If you have access to the note system yourself,
    use the following search criteria:
    Please suggest what can be done.
    Regards,
    Vikas Arya

    Hi,
    This solution might not sound good, but give it a thought.
    While appending data you could use more than one internal table: append the first 10 lakh records to the first table, the second 10 lakh to the second table, and so on.
    But where are you getting the source data from? It should already be present in some internal table, correct?
    Probably you can use the dynamic internal tables concept.
    Also check your code carefully: after the point of appending, if you are not going to use an internal table any more, use the FREE itab statement to free the memory allocated.
    Also reduce global declarations as much as possible.
    Thanks,
    Vinod.

  • How to handle blob/clob?

    In my process I can successfully write to a BLOB or CLOB field (via the database adapter). But I'm no longer able to get the value out of the database (Oracle Lite). I always get the following error:
    <remoteFault xmlns="http://schemas.oracle.com/bpel/extension">
    <part name="code">
    <code>0</code>
    </part>
    <part name="summary">
    <summary>file:/E:/OraBPELPM_1/integration/orabpel/domains/default/tmp/.bpel_GetStatus_1.0.jar/SelectJobStates.wsdl [ SelectJobStates_ptt::SelectJobStatesSelect_key_typeFrom_typeTo(SelectJobStatesSelect_key_typeFrom_typeTo_inparameters,MainCollection) ] - WSIF JCA Execute of operation 'SelectJobStatesSelect_key_typeFrom_typeTo' failed due to: Exception: Unsuccessful execution of DBReadInteractionSpec. Query name: [SelectJobStatesSelect], descriptor name: [GetStatus.Main]. ; nested exception is: ORABPEL-11614 Exception: Unsuccessful execution of DBReadInteractionSpec. Query name: [SelectJobStatesSelect], descriptor name: [GetStatus.Main]. See the root exception for further information. Caused by exception [TOPLINK-4002] (OracleAS TopLink - 10g (9.0.4.5) (Build 040930)): oracle.toplink.exceptions.DatabaseException Exception description: java.sql.SQLException: Internal Error:LiteThinJDBCLob: One or More Invalid Handle(s) Internal exception: java.sql.SQLException: Internal Error:LiteThinJDBCLob: One or More Invalid Handle(s) Error code: 0.</summary>
    </part>
    <part name="detail">
    <detail>Exception description: java.sql.SQLException: Internal Error:LiteThinJDBCLob: One or More Invalid Handle(s) Internal exception: java.sql.SQLException: Internal Error:LiteThinJDBCLob: One or More Invalid Handle(s) Error code: 0</detail>
    </part>
    </remoteFault>
    Trying on the console with a polsql script I got the following:
    [POL-3006] null handle is not allowed
    Any ideas?
    Yves
    PS. Is there a GUI client which can browse data on an Oracle Lite database?

    Hi Pete,
    The problem is with using Oracle's CLOB and BLOB implementation. I haven't even started using BLOB and CLOB. I want to collect BLOB and CLOB data from Oracle and store it in an XML file, and after that, if required, collect the same data from the XML file and store it back into the Oracle table. I can do these things for all the other datatypes, but not for BLOB and CLOB. How do I proceed?
    Thanks -
    Samit

  • Cannot type/paste normal international text when non-Unicode in CS6

    Hello,
    In all versions of DW (up to CS3, which I have used) I had no problem pasting or typing HTML or text with international characters in Design view when the page uses a non-Unicode, yet international, encoding (like Windows-Greek or ISO-Greek).
    Now, DW CS6 auto-converts all international chars typed/pasted in Design view to HTML entities (unless the page encoding is Unicode).
    For example, when the document has an encoding of:
    <meta http-equiv="Content-Type"  content="text/html;  charset=windows-1253">
    [ This is equal to Modify / Page Properties / Title/Encoding / Document Type (DTD): None & Encoding: Greek (Windows) ]
    ...in the past I was able to type/paste Greek characters/text in Design view and they were retained as such (simple text) in Code view (and this is what we need now as well).
    Yet, in DW CS6 such international chars/text (typed or pasted in Design view) are auto-converted to "&somechar;" entities, which is not what should happen; this messes up all the text. Design view shows the text correctly, but the HTML source (Code view) does not retain the international characters themselves, although it should, as long as the HTML page uses a proper encoding/charset compatible with the text (e.g. a Greek encoding is compatible with Greek characters). I repeat that this worked correctly at least until DW CS3.
    Directly typing/pasting in DW CS6 Design view works correctly (i.e. retains the original chars in Code view) ONLY when using Unicode.
    However, if we type/paste Greek text (with HTML tags or not) directly in Code view, then DW CS6 retains the chars/text properly and Design view displays everything properly too. Consequently, as a workaround, we can use Code view to type/paste international text/HTML when not using Unicode (UTF-8) as the page encoding. But this makes our life more difficult.
    So, has CS6 dropped support for typing/pasting international text/HTML directly in Design view for non-Unicode international encodings?
    Or has something changed, and we need to configure some setting(s) so that the feature works properly? (I haven't been able to find any setting that might affect this behavior. I also played with the Document Type (DTD) settings, but found they did not affect the behavior described.)
    Please advise. This is very important.
    Thanks,
    Nick

    Thanks for the reply.
    As I have already mentioned, typing/pasting in Code View works properly.
    However, in previous versions of DW, pasting/typing in Design View was working fine, whatever the page encoding.
    I agree that pasting in Code view is not really a big deal. But having to do all editing/typing in Code view definitely is! What is the point of using a WYSIWYG editor if it can't produce correct source code (except in Unicode pages)? If we are going to do all editing in Code view, then we could simply use Notepad (just an exaggeration) or another programming-oriented tool.
    I hope other people can confirm the problem and suggest solutions or Adobe fixes it.

  • International Characters Turn to ?'s

    I'm using Kodo 2.5.4 with MySQL. Any international characters (such as "__") in MySQL get translated to a "?". Thanks for any clues on how to prevent this -
    Sam

    Sam-
    There are 3 possible places where something might be going wrong:
    1. the database might not be storing the international characters correctly
    2. the JDBC driver might not be handling the international characters properly
    3. Kodo might be doing something wrong with the international characters
    Since Kodo just uses the JDBC driver's String handling, I think #1 and #2 are more likely.
    Some quick searching on the internet reveals that MySQL did not support Unicode until version 4.1:
    http://www.mysql.com/doc/en/Charset-Unicode.html
    You might want to try upgrading to see if your problems are magically solved.
    Otherwise, you are going to be limited by the capabilities of the JDBC driver, and Kodo doesn't do any special handling for Unicode characters (beyond what is provided by the Java language itself). One solution would be to perform your own encoding into the String field, and then perform the decoding when you retrieve the field.
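    That encode/decode workaround could be sketched like this. Hedged: Base64 over UTF-8 bytes is just one arbitrary ASCII-safe scheme, and java.util.Base64 is a modern convenience that did not exist in the Java versions of that era.

    ```java
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class StringFieldCodec {
        // Encode arbitrary Unicode text into an ASCII-only string that survives
        // a driver/connection that mangles non-ASCII characters into '?'.
        static String encodeForStorage(String text) {
            return Base64.getEncoder().encodeToString(text.getBytes(StandardCharsets.UTF_8));
        }

        static String decodeFromStorage(String stored) {
            return new String(Base64.getDecoder().decode(stored), StandardCharsets.UTF_8);
        }

        public static void main(String[] args) {
            String original = "Grüße från Stockholm";
            String stored = encodeForStorage(original);
            System.out.println(stored);                    // ASCII-only, safe to store
            System.out.println(decodeFromStorage(stored)); // round-trips intact
        }
    }
    ```

    The obvious trade-off is that the stored column is no longer human-readable or searchable with SQL, which is why upgrading the database is the better fix when possible.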
    In article <bn0bdj$eec$[email protected]>, Sam wrote:
    I forgot to mention that I'm also using MySQL version 3.23.58.
    Marc Prud'hommeaux [email protected]
    SolarMetric Inc. http://www.solarmetric.com
