Character conversion
Hello,
I'm working on a conversion problem between Windows and Unicode representations of characters.
I would like to get, for instance for the euro character, the Windows encoding value (128) from its Unicode encoding value (8364), and vice versa.
€ 8364 128 (unicode / windows)
‚ 8218 130
ƒ 402 131
„ 8222 132
… 8230 133
† 8224 134
I've found on the internet a way to get 128 from 8364:
String s = "€";
byte b[] = s.getBytes();
int code = (int) (b[0] & 0xff);
By the way, if someone could explain to me how it works... ;)
I'm looking now for a way to do the opposite, get 8364 from 128...
Thank you a lot in advance.
Bye!
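An aside on why that snippet works: getBytes() with no argument encodes the string using the platform's default charset, so it yields 128 only when that default happens to be windows-1252; and since Java bytes are signed, the euro's byte 0x80 comes back as -128, which the mask & 0xff widens to the unsigned value 128. A small sketch (the class and method names are mine, chosen for illustration) that names the charset explicitly instead of relying on the default:

```java
import java.nio.charset.Charset;

public class ByteMask {
    // Mask off the sign extension: byte 0x80 is -128 in Java,
    // but 128 as an unsigned windows-1252 code.
    public static int unsignedValue(byte b) {
        return b & 0xff;
    }

    public static void main(String[] args) {
        // Name the charset explicitly instead of relying on the platform default
        byte[] bytes = "\u20ac".getBytes(Charset.forName("windows-1252"));
        System.out.println(bytes[0]);                // -128 (signed byte value)
        System.out.println(unsignedValue(bytes[0])); // 128  (windows-1252 code for the euro)
    }
}
```

Passing the charset to getBytes() makes the result independent of whatever default the JVM happens to run with.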
Thank you for your answers.
hunter90000, you're right, the "windows" charset is windows-1252. But what I'm looking for is a way to get 8364 from 128. If someone could help me understand the code below, I think I could work it out:
String s = "€";
System.out.println((int) s.charAt(0)); --> 8364
byte b[] = s.getBytes();
int code = (int) (b[0] & 0xff);
System.out.println(code); --> 128
System.out.println((char) code); --> an unprintable control character (U+0080)
The reason why I want to get 8364:
I'm manipulating an XML file to send data to a web browser via an Ajax function. The data comes from an Oracle database, in which the euro character has the value 128.
The only way I've found to display the euro character correctly in the browser is to encode it as &#8364; in the XML file, even though the charset of both the XML file and the JSP is 'ISO-8859-15'...
The problem is not limited to the euro character, but applies to all the characters in the following list:
(8364 . 128)
(8218 . 130)
(402 . 131)
(8222 . 132)
(8230 . 133)
(8224 . 134)
(8225 . 135)
(710 . 136)
(8240 . 137)
(352 . 138)
(8249 . 139)
(338 . 140)
(381 . 142)
(8216 . 145)
(8217 . 146)
(8220 . 147)
(8221 . 148)
(8226 . 149)
(8211 . 150)
(8212 . 151)
(732 . 152)
(8482 . 153)
(353 . 154)
(8250 . 155)
(339 . 156)
(382 . 158)
(376 . 159)
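For the record, the opposite direction (getting 8364 back from 128) is a decode rather than an encode: build a one-byte array and decode it with the windows-1252 charset. A minimal sketch — the class and helper names are mine, not from the thread:

```java
import java.nio.charset.Charset;

public class Cp1252RoundTrip {
    static final Charset CP1252 = Charset.forName("windows-1252");

    // windows-1252 byte value (0..255) -> Unicode code point:
    // decode a one-byte array with the windows-1252 charset
    public static int toUnicode(int cp1252Byte) {
        byte[] b = { (byte) cp1252Byte };
        return new String(b, CP1252).charAt(0);
    }

    // Unicode code point -> windows-1252 byte value:
    // encode the character and mask off the sign extension
    // (code points with no windows-1252 mapping come back as '?' = 63)
    public static int toCp1252(int codePoint) {
        byte[] b = new String(Character.toChars(codePoint)).getBytes(CP1252);
        return b[0] & 0xff;
    }

    public static void main(String[] args) {
        System.out.println(toUnicode(128));  // 8364 (EURO SIGN)
        System.out.println(toCp1252(8364)); // 128
    }
}
```

Looping cp1252Byte from 0 to 255 through toUnicode reproduces the whole table above (bytes left undefined by windows-1252, such as 0x81, decode in Java to the matching C1 control characters).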
Similar Messages
-
Non-English character conversion issue in LSMW BAPI inbound IDocs
Hi Experts,
We have some fields in a customer master LSMW data load program which can contain non-English characters. We are facing issues with non-English character conversion in the LSMW BAPI method. The LSMW read and conversion steps show the non-English characters properly without any issue, but while creating inbound IDocs most of the non-English characters are replaced with '#', which causes issues in creating the customer master data in the system. In our scenario the customer data has non-English characters in the first name, last name and address details. Does any specific setting need to be done on our side? Please suggest how to resolve this issue.
Thanks
Rajesh Yadla

If your language is Unicode then you need to change the options: in SAP you need to change it to Unicode in the initial screen via Customize Local Layout (Alt+F12), options 118 --> Encoding ....
-
ASCII (SYMBOLS & CHARACTERS) Conversion to Telephone numbers
I am using Oracle Release 9.2.0.7.0 and I am trying to access a database containing telephone numbers and codes. The table contains two columns named no and code, both of VARCHAR2 datatype. As you can see below, the values in these two columns are ASCII characters and symbols. I tried to save the SQL*Plus query (select no, code from subdata) into a file using the spool command, but the ASCII symbols and characters were changed into some other format I am not familiar with, which I show below. Is there a function that can help me convert the ASCII codes into the telephone numbers without doing any import or export? An embedded SQL*Plus function would help me very much.
This is a sample of the current table containing the ASCII codes:
no code
%)öFƒ cs☺ ☻YO
%)ûe_ cs☺ ☻Y_
%)ö►_ cs☺ ☻Y
%)ö♥▼ cs☺ ☻YÅ
%)ö"_ cs☺ ☻Yƒ
%)ö☻o cs☺ ☻`
%)ô☺▼ cs☺ ☻`▼
%)öt_ cs☺ ☻`/
%)öto cs☺ ☻`?
%)æ_ cs☺ ☻a
%)òS▼ cs☺ ☻h_
%)öv cs☺ ☻pÅ
%)öp? cs☺ ☻pƒ
%)Æ♠_ cs☺ ☻q
%)öp/ cs☺ ☻q▼
%)öuo cs☺ ☻q/
%)Æ _ cs☺ ☻q?
%)ö↨O cs☺ ☻qƒ
%)öu cs☺ ☻r
%)ÿ☺O cs☺ ☻r▼
%)öQ_ cs☺ ☻r/
%)öB? cs☺ ☻r?
24 rows selected.
SQL>
This is the format I got after I saved the query results of the sub_data table into a text file using SPOOL command:
no Code
%)^G\2223?\377\377 cs 5_
%)^G\225^X_\377\377 cs 5\217
%)^G\2238\217\377\377 cs 3/
%)^G\225^Q\217\377\377 cs 6O
%)^G\223^Q^O\377\377 cs 3?
%)^G\223SO\377\377 cs 3o
%)^G\2231^O\377\377 cs 3^?
%)^G\223\225/\377\377 cs 3\217
%)^G\223^W^_\377\377 cs 3\237
Abdel Moalim
Bosaso, Somalia

The two column names are MSISDN and IMSI
MSISDN IMSI
%)VXO cs☻ ☺sA/
%)sq cs☻ ☺åo
%)ÖÇ cs☺ ♥y_
%)ÖÇ_ cs☺ ♥y?
%)ÖÇo cs☺ ♥yO
%)ÖÇÅ cs ► y?
%)Öü▼ cs☺ ☻io
%)Öü? cs☺ ☻iO
%)ÿâO cs☺ ☻iÅ
%)Öü cs☺ ♥yo
%)û↕_ cs☺ ☻i
%)ûƃ cs☺ ☻i
%)ûT▼ cs☺ ☻p▼
%)ûw? cs☺ ☻po
%)Æ▼ cs☺ ☻p
%)ÿêO cs☺ ☻hƒ
%)ûR_ cs☺ ♥y/
%)ùFÅ cs☺ ♥s
%)ÿ1_ cs ► i?
%)ÿü▼ cs☺ ☻p/
%)ùöƒ cs☺ ☻p
%)ÿ1o cs☺ ☻hÅ
%)òE_ cs☺ ☻É
%)ÿ cs☺ ♥sÅ
%)û0Å cs ► dƒ
%)Ö♥Å cs☺ ♥sƒ
%)Öbo cs☺ ♥t/
%)ÿ cs☺ ♥t?
%)Ö♠Å cs☺ ♥tO
%)û%O cs☺ ♥t_
%)æ? cs !_
%)Öö cs☺ ♥to
%)ûV cs☺ ♥tÅ
%)òô/ cs☺ ☻cƒ
%)òSÅ cs☺ ☻d?
%)òSƒ cs☺ ☻hO
%)òV▼ cs☺ ☻do
%)ò3 cs☺ ☻"ƒ
%)òT▼ cs☺ ☻h
%)ò3o cs☺ ☻ho
%)ÖG cs☺ ♥tƒ
%)Ö1? cs☺ ♥u
%)Ö`_ cs☺ ♥u▼
%)Ö2? cs☺ ♥u/
%)ùù/ cs☺ ♥u?
%)Ö►o cs☺ ♥w?
%)D☺▼ cs☻ ☺Eÿ?
%)ÿY cs☺ ♥w_
%)ÖDo cs☺ ♥wO
2469 rows selected.
SQL> -
Hi all,
I have Frame documents which contain anchored frames with callouts. The callouts are created with Frame and contain German special characters like ä, ö, ü. These characters are not converted correctly.
Thanks for help.
Regards,
Rainer -
Need Help UTF8, Special Characters Conversion/Translate
Hi All,
I have a requirement to clean up special characters in a field.
The data is retrieved from the field and written to a text file.
So here:
HÃ-re - I need the value Hire (the actual value) from the field.
Ã- has been written into the file instead of i.
In database i have
NLS_NCHAR_CHARACTERSET - AL16UTF16
NLS_CHARACTERSET - UTF8
I tried
select convert('HÃ-re','UTF8') from dual
Output: H??re
Can any one please provide me a solution.
Thanks,
Edited by: user8822881 on Nov 3, 2010 6:33 PM

user8822881 wrote:
Hi All,
I have requirement to cleanup special character of field.
The data which is retrieved from the field and written to text file.
So Here
HÃ-re - I need value Hire(actual value) from the filed.
Ã- this has written into file instead of i
In database i have
NLS_NCHAR_CHARACTERSET - AL16UTF16
NLS_CHARACTERSET - UTF8
I tried
select convert('HÃ-re','UTF8') from dual
Output: H??re
Can any one please provide me a solution.
Thanks,
Edited by: user8822881 on Nov 3, 2010 6:33 PM

Use TRANSLATE() or REPLACE()? -
ANSI X12 meta data availability in conversion agent??
Hello,
Is there a provision for availability of ANSI X12 meta data in SAP conversion agent. We are trying to process EDI orders using the X12 INDUSTRY standard. Please advise.
-Krishi,
SAP Conversion Agent, together with SAP NetWeaver, enables you to automate complex integration activities for a multitude of data and document formats. These include unstructured and partially structured documents such as Office files, data streams, printing applications, and other application-specific formats.
The Conversion Agent lets you easily integrate unstructured and semi-structured data into SAP NetWeaver Process Integration.
It dynamically converts unstructured messages (Microsoft Word, Excel, PDF, plain text) and semi-structured formats (such as HL7, SWIFT, HIPAA, ANSI X12 and COBOL) into PI-understandable SOAP XML.
This makes it easy to integrate the information into back-end systems. Conversion can also run in the reverse direction, from XML to unstructured or semi-structured formats.
https://www.sdn.sap.com/irj/sdn/wiki?path=/display/xi/sap%2bconversion%2bagent%2b-%2bthe%2blong%2bjourney
Conversion Agent - Handling EDI termination characters
Conversion Agent a Free Lunch?
Integrate SAP Conversion Agent by Itemfield with SAP XI
regards
sr -
SAP Conversion Agent - Performance Tuning
Hi Experts,
We are working with the SAP Conversion Agent and have developed several scenarios with it.
But now we are worried about its performance.
Can anybody share their experience with the SAP Conversion Agent, and any performance tuning guide for it?
-S

Hi,
/people/bla.suranyi/blog/2006/09/29/conversion-agent--handling-edi-termination-characters
Conversion Agent
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/da1e7c16-0c01-0010-278a-eaed5eae5a5f - conversion agent
Thanks,
Madhu -
Hi experts,
Our scenario is EDI-X12 data to XI to R/3 (both directions).
For conversion purposes we want to use the Conversion Agent by Itemfield, but there are also other options on the market, like the Seeburger and iWay adapters.
We have approximately 50-60 EDI scenarios.
Now we want to justify why we are using the Conversion Agent and not other adapters.
Please help us.
Regards,
Study SAP

Hi,
SAP Conversion Agent, together with SAP NetWeaver, enables you to automate complex integration activities for a multitude of data and document formats. These include unstructured and partially structured documents such as Office files, data streams, printing applications, and other application-specific formats.
The Conversion Agent lets you easily integrate unstructured and semi-structured data into SAP NetWeaver Process Integration.
It dynamically converts unstructured messages (Microsoft Word, Excel, PDF, plain text) and semi-structured formats (such as HL7, SWIFT, HIPAA, ANSI X12 and COBOL) into PI-understandable SOAP XML.
This makes it easy to integrate the information into back-end systems. Conversion can also run in the reverse direction, from XML to unstructured or semi-structured formats.
https://www.sdn.sap.com/irj/sdn/wiki?path=/display/xi/sap%2bconversion%2bagent%2b-%2bthe%2blong%2bjourney
Conversion Agent - Handling EDI termination characters
Conversion Agent a Free Lunch?
Integrate SAP Conversion Agent by Itemfield with SAP XI
regards
sr -
Export from table with SQLDeveloper 4
My Oracle 10g instance is Unicode (AL32UTF8) and I have some tables with multilingual content in columns (Greek and Polish, for example).
SQL Developer accesses the instance through JDBC thin.
With the previous version of SQL Developer (before 4) I made table exports as XML, CSV and SQL files. As a result I had the right characters in the export files and I was happy...
Now, with SQL Developer version 4, all multilingual characters are replaced by ?. That means wrong character conversions are occurring.
Bug?

I don't think this is a good protection: you can have these characters in VARCHAR anyway, and this way you "throw the baby out with the bath water". Not to mention that right now even immune formats like HTML are also limited.
-
Import WE8ISO8859P1 to IW8ISO8859P8 or IW8MSWIN1255
Hello,
I have an Oracle 8I database with charset defined to WE8ISO8859P1.
It contains Hebrew characters. Is there a way to convert it to IW8ISO8859P8 or IW8MSWIN1255, directly or by export/import? What about importing into 10g?
OS: WinXP/2003.

The basic problem is that you have a character set (WE8ISO8859P1) which cannot store Hebrew characters, because it is a Western European character set. Despite this it seems to work, because as long as your client character set is also WE8ISO8859P1, no character set conversion happens and you get back what you typed in. But the internal storage in the database is wrong.
When you now try to import your database into another one with character set IW8MSWIN1255 (the correct character set for Hebrew characters), conversion will happen and will fail. Very likely you will see '?'. You can get the internal coding with 'select dump(column_name) from table_name;'; correct Hebrew characters are in the range 224-250.
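As an illustrative aside, the same limitation can be observed from Java with the corresponding IANA charsets (ISO-8859-1 for WE8ISO8859P1, ISO-8859-8 for IW8ISO8859P8); this sketch assumes the JRE ships the ISO-8859-8 charset, which standard JDKs do:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class HebrewCheck {
    // A character is storable in a charset iff the charset's encoder can encode it
    public static boolean encodable(Charset cs, char c) {
        return cs.newEncoder().canEncode(c);
    }

    public static void main(String[] args) {
        char alef = '\u05D0'; // HEBREW LETTER ALEF
        // ISO-8859-1 (Western European, like WE8ISO8859P1) has no Hebrew range:
        System.out.println(encodable(StandardCharsets.ISO_8859_1, alef)); // false
        // ISO-8859-8 (like IW8ISO8859P8) does:
        System.out.println(encodable(Charset.forName("ISO-8859-8"), alef)); // true
    }
}
```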
Werner -
The command "describe test_table", displays different datatype
Hello All,
I have a strange issue in my Oracle 12c: I have a database gateway set up to connect to Microsoft SQL Server through a dblink, using a 64-bit ODBC driver.
All works fine, but I noticed that some columns in the Oracle tables that I'm merging from Microsoft SQL don't display their values, even though in Microsoft SQL they are there.
I noticed that in SQL Server the datatype of the column is VARCHAR, while if I describe the table from within Oracle I see that column as NVARCHAR2.
I also noticed that some columns don't even display in Oracle, and they are NVARCHAR in Microsoft SQL.
I was thinking it may be due to character conversion, but I'm not sure.
Has anyone ever had this problem?
Thank you

Hi Matt,
Thanks for your reply. I have checked and here is the output:
SQL> select * from v$nls_parameters;
PARAMETER                 VALUE                          CON_ID
NLS_LANGUAGE              AMERICAN                       0
NLS_TERRITORY             AMERICA                        0
NLS_CURRENCY              $                              0
NLS_ISO_CURRENCY          AMERICA                        0
NLS_NUMERIC_CHARACTERS    .,                             0
NLS_CALENDAR              GREGORIAN                      0
NLS_DATE_FORMAT           DD-MON-RR                      0
NLS_DATE_LANGUAGE         AMERICAN                       0
NLS_CHARACTERSET          WE8MSWIN1252                   0
NLS_SORT                  BINARY                         0
NLS_TIME_FORMAT           HH.MI.SSXFF AM                 0
NLS_TIMESTAMP_FORMAT      DD-MON-RR HH.MI.SSXFF AM       0
NLS_TIME_TZ_FORMAT        HH.MI.SSXFF AM TZR             0
NLS_TIMESTAMP_TZ_FORMAT   DD-MON-RR HH.MI.SSXFF AM TZR   0
NLS_DUAL_CURRENCY         $                              0
NLS_NCHAR_CHARACTERSET    AL16UTF16                      0
NLS_COMP                  BINARY                         0
NLS_LENGTH_SEMANTICS      BYTE                           0
NLS_NCHAR_CONV_EXCP       FALSE                          0
19 rows selected. -
Unexpected keyboard codes replacements
Hi,
It seems that Adobe, and earlier Macromedia, have been nursing a serious bug well known since version MX. It lies in keyboard characters being converted to strange codes when the right Alt key is pressed and the <PARAM> "wmode" is used (most likely it happens in transparent and opaque modes only). In consequence it is not possible to enter Polish (and maybe other non-ASCII) characters into input fields. It works fine in the standalone Flash player.
There is no point counting on Adobe to fix it, so I have to do something myself. Is it possible to build e.g. a component or class which will convert on the fly the key codes of pressed buttons for TextField and components such as TextArea and TextInput? How do I get started on that? Any suggestions?
Regards,
Marek

Hi DmR,
You can see all the characters for any font installed on your system by using the Character Palette. To enable it, go to System Preferences and choose International. Then, select Input Menu in the bar at the top and look at the bottom of the window. There will be a little option box that says "show input menu in menu bar." Check it.
You will see a flag show up in the menu bar. Click on this flag and choose Character Palette. From there you can see all the "hidden" characters available. -
File XML Content Conversion: Problem with special characters
Hello,
in a file sender communication channel, content conversion is used to transform a flat structure to XML. What we experienced is that the message mapping failed due to a character that is not allowed in XML.
I was assuming that the file content conversion would just create XML messages with allowed characters. Is there any way to configure content conversion to remove control characters which are not allowed in XML? Unfortunately the sender system cannot be modified.
Thank you.

Hi Florian,
Please use this UDF to escape the special characters that prevent XML messages from forming properly:
public static String removeSpecialChar(String s) {
    try {
        // escape & first, otherwise the entities added below would themselves be re-escaped
        s = s.replaceAll("&", "&amp;");
        s = s.replaceAll("<", "&lt;");
        s = s.replaceAll(">", "&gt;");
        s = s.replaceAll("'", "&apos;");
        s = s.replaceAll("\"", "&quot;");
    } catch (Exception e) {
        e.printStackTrace();
    }
    return s;
}
Please also check the link below:
http://support.microsoft.com/kb/316063
regards
Anupam
Edited by: anupamsap on Jul 7, 2011 4:22 PM
Edited by: anupamsap on Jul 7, 2011 4:23 PM -
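The UDF above escapes markup characters; the original question, though, was about control characters that XML 1.0 forbids entirely and that no amount of escaping can save. A sketch of a filter for those (the helper name is mine; the ranges follow the XML 1.0 Char production):

```java
public class XmlClean {
    // Keep only characters allowed by the XML 1.0 Char production:
    // #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF]
    public static String stripInvalidXmlChars(String s) {
        StringBuilder out = new StringBuilder(s.length());
        for (int i = 0; i < s.length(); ) {
            int cp = s.codePointAt(i);
            boolean valid = cp == 0x9 || cp == 0xA || cp == 0xD
                    || (cp >= 0x20 && cp <= 0xD7FF)
                    || (cp >= 0xE000 && cp <= 0xFFFD)
                    || (cp >= 0x10000 && cp <= 0x10FFFF);
            if (valid) out.appendCodePoint(cp);
            i += Character.charCount(cp); // step over surrogate pairs correctly
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(stripInvalidXmlChars("ok\u0000\u0001text")); // oktext
    }
}
```

Running the payload through such a filter before escaping would let the mapping survive sender data that cannot be fixed at the source.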
File content conversion: only 100 characters read from source
Hi,
In my case I have a sender channel with file content conversion set as follows;
Recordset structure : Record, 1
sequence: ascending
Key field type : String (case sensitive)
record.fieldseparator : 'nl'
record.fieldnames : Data
ignorerecordsetname: true
The idea is that in ECC, when a custom program is run, it reads the shipment data and builds an XML file with data in various nodes like:
<shipmentId>6767667</shipmentId>
<DelvText>hjysks sag fhdososlhfiof </DelvText>
Now all this data is converted to a single string entry under a tag called <Data> and passed on to the third-party system by PI using the above conversion.
The resulting file will have all the data like this:
- <Record>
<Data><?xml version="1.0" encoding="UTF-8"?></Data>
</Record>
- <Record>
<Data><ShipmentId>6767667</ShipmentId>.........</Data>
It so happens that the data populated in <DelvText> by the program is lost during conversion. I get only the first 100 characters in the resulting XML after the file content conversion happens; the rest of the string is lost. I can see all the other data perfectly except for this long text.
This is the data I enter in the delivery's header text under the shipment instruction field. I debugged the program and saw that the entire text is indeed filled, but it gets lost after the file conversion happens!
What can be the reason?
Thanks

Stefan, I appreciate your concern, thanks. But this is an already working interface; I cannot change it and can only assist with minor data mapping changes and troubleshooting such issues.
The scenario is simple: ECC has to send shipment data to a third party via PI.
The shipment data has to be sent as an XML file with a single <Data> tag, as I showed you earlier.
It is so weird: when I type delivery text of less than 100 characters I see the full text in my XML file, but when the text is more than 100 characters the XML has only 100 characters, and it is passed to the third party like that, so the third party considers it incomplete.
It would help if you could think about what may create this kind of issue, or point me to where to look.
Thanks
Content Conversion - Special Characters
How does content conversion behave with special characters?
Do we need to avoid any text in the input string that needs to be converted?
Thanks

How does content conversion behave with special characters? Do we need to avoid any text in the input string that needs to be converted?
>>>
There is no direct link between content conversion and special characters. The dependency is actually on the encoding standard used. You can set your encoding standard in the file adapter.
Option:
File Type
Specify the document data type.
1. Binary
2. Text
Under File Encoding, specify a code page.
For encoding standards, refer to the post by another SDNer in this thread.
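The encoding dependency described above is the same one plain Java file I/O has: whether the bytes decode into the intended characters is decided by the code page named when reading, not by the conversion step itself. A small illustrative sketch (the class name, file name and charset choice are mine):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.file.Files;
import java.nio.file.Path;

public class ReadWithCharset {
    // Read the first line of a text file using an explicitly named code page
    public static String firstLine(Path file, String charsetName) throws IOException {
        try (BufferedReader r = Files.newBufferedReader(file, Charset.forName(charsetName))) {
            return r.readLine();
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        // ISO-8859-15 encodes the euro sign as the single byte 0xA4
        Files.write(tmp, "\u20ac price".getBytes(Charset.forName("ISO-8859-15")));
        // Reading with the matching charset recovers the euro sign intact
        System.out.println(firstLine(tmp, "ISO-8859-15")); // € price
        Files.delete(tmp);
    }
}
```

Reading the same file with a mismatched code page would silently yield different characters, which is exactly the failure mode discussed in these threads.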