Problem with language specific characters on e-mail sending
Hi,
I have a problem with language-specific characters when sending e-mail.
How can it be fixed?
Thanks.
Hi,
try setting the character code set to UTF-8 or UTF-16. You can define this in the HTML.
Or encode the characters using JavaScript.
Hope this helps.
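As an illustration of the UTF-8 advice above (a minimal sketch only; the class name and message text are made up, and a real mailer such as JavaMail would build the headers for you), the key point is to encode the body with an explicit charset and declare that same charset in the MIME headers:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class MailCharsetSketch {
    public static void main(String[] args) {
        String body = "Grüße aus Köln";  // text with language-specific characters

        // Encode the body with an explicit charset and declare that charset
        // in the MIME headers, so the receiving client decodes it the same way.
        byte[] utf8 = body.getBytes(StandardCharsets.UTF_8);
        String encoded = Base64.getMimeEncoder().encodeToString(utf8);
        String message =
              "Content-Type: text/html; charset=UTF-8\r\n"
            + "Content-Transfer-Encoding: base64\r\n"
            + "\r\n"
            + encoded;
        System.out.println(message);

        // Round trip: decoding with the declared charset restores the text.
        String decoded = new String(Base64.getMimeDecoder().decode(encoded),
                                    StandardCharsets.UTF_8);
        System.out.println(decoded.equals(body));  // prints "true"
    }
}
```

Characters get mangled when the encoder and the declared charset disagree, so the declaration and the `getBytes` call must name the same charset.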
Deepak!!!
Similar Messages
-
Issue with language specific characters combined with AD-Logon to BO platform and client tools
We are using SSO via Win AD to log on to BO Launchpad. Generally this works, which means no manual logon is needed for the Launch Pad. But it does not work for users who have language-specific letters in their AD name (e.g. öäüéèê...).
What we have tried up to now:
If the AD user name is Test-BÖ, the logon works with the user name Test-BO with logon type AD
If the logon type "SAP" is used, then it is possible to use the name Test-BÖ as the username
Generally it is no problem in AD to use language-specific letters (which means it is possible, e.g., to log on to Windows with the user Test-BÖ)
It is possible to read out the AD attributes from the BO side and add them to the user. This means that in the user attributes the AD name Test-BÖ is shown via automatic import from AD. So the problem is not that the character does not reach BO.
I have opened a ticket concerning this. SAP first-level support is telling me that this is not a BO problem; they say it is a problem of Tomcat. I don't believe that, because the logon with authentication type SAP works.
I have set up the same combination (AD user Test-BÖ with SAP user Test-BÖ) as single sign-on authentication in SAP BW, and there it works without problems.
Which leads me to the conclusion: it is not a problem of AD. It is something connected to the BO platform, but only in combination with logon type AD, because SAP logon works with language-specific characters.
I have found this article with BO support:
You cannot add a user name or an object name that only differs by a character with a diacritic mark
Basically this means AD stores the country specific letters as a base letter internally. Which means that if you have created a user with a country specific letter in the name you can also logon with the Base letter to Windows.
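The base-letter behaviour described above can be illustrated in Java with java.text.Normalizer (a sketch of the general technique only, not of what AD or BO actually do internally):

```java
import java.text.Normalizer;

public class BaseLetterSketch {
    public static void main(String[] args) {
        String adName = "Test-BÖ";

        // Decompose each character into base letter + combining marks (NFD),
        // then strip the combining marks to obtain the base-letter form.
        String base = Normalizer.normalize(adName, Normalizer.Form.NFD)
                                .replaceAll("\\p{M}", "");

        System.out.println(base);  // prints "Test-BO"
    }
}
```

A logon layer that applies such a mapping would accept both Test-BÖ and Test-BO, which matches the behaviour observed with SAP-GUI.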
SAP-GUI and Windows are maybe replacing the country-specific letters with the base letter; due to that, SSO works. BO seems not to be able to do that. Up to now the supporter from BO is telling me that this is not a BO problem.
Seems to be magic that the colleagues of SAP-GUI are able to do it. -
Problem with language specific letters in Translation Builder editor
Hello,
I'm trying to translate some reports from Slovenian to Croatian using OTB, but as soon as I scroll up or down through the translation form, some Croatian language-specific letters (čćžšđ) either convert to c (čć) or d (đ) or become "unreadable" (šž). The latter (šž) are displayed correctly on the report when the strings are exported back to the RDF file.
According to the Troubleshooting section in the OTB help, I tried to change both the base and the translation font, but with no success.
Any experience, any hint or trick?
Thanks in advance.
Dev6i patch10
RDBMS=Oracle10g
WinXPsp2
NLS_LANG=CROATIAN_CROATIA.EE8MSWIN1250
Naveen,
This is more of a portal problem.
First, you should submit an OSS message to get the <b>best support possible</b> from SAP.
Second, if you don't like that solution, THEN come back and post it on SDN. You will get better answers in the Enterprise Portal forum here on SDN.
Regards,
Greg -
Issues with language-specific characters and Multi Lexer
I want to create a text index with a global lexer and different languages. But how do I create the index so that it satisfies all languages?
Oracle EE 10.2.0.4 (UTF8) on Solaris 10
1.) Create a global lexer with German as default and Czech and Turkish as additional languages.
begin
ctx_ddl.drop_preference('global_lexer');
ctx_ddl.drop_preference('german_lexer');
ctx_ddl.drop_preference('turkish_lexer');
ctx_ddl.drop_preference('czech_lexer');
end;
/
begin
ctx_ddl.create_preference('german_lexer','basic_lexer');
ctx_ddl.create_preference('turkish_lexer','basic_lexer');
ctx_ddl.create_preference('czech_lexer','basic_lexer');
ctx_ddl.create_preference('global_lexer', 'multi_lexer');
end;
/
begin
ctx_ddl.set_attribute('german_lexer','composite','german');
ctx_ddl.set_attribute('german_lexer','mixed_case','no');
ctx_ddl.set_attribute('german_lexer','alternate_spelling','german');
ctx_ddl.set_attribute('german_lexer','base_letter','yes');
ctx_ddl.set_attribute('german_lexer','base_letter_type','specific');
ctx_ddl.set_attribute('german_lexer','printjoins','_');
ctx_ddl.set_attribute('czech_lexer','mixed_case','no');
ctx_ddl.set_attribute('czech_lexer','base_letter','yes');
ctx_ddl.set_attribute('czech_lexer','base_letter_type','specific');
ctx_ddl.set_attribute('czech_lexer','printjoins','_');
ctx_ddl.set_attribute('turkish_lexer','mixed_case','no');
ctx_ddl.set_attribute('turkish_lexer','base_letter','yes');
ctx_ddl.set_attribute('turkish_lexer','base_letter_type','specific');
ctx_ddl.set_attribute('turkish_lexer','printjoins','_');
ctx_ddl.add_sub_lexer('global_lexer', 'default', 'german_lexer');
ctx_ddl.add_sub_lexer('global_lexer', 'czech', 'czech_lexer', 'CZH');
ctx_ddl.add_sub_lexer('global_lexer', 'turkish', 'turkish_lexer', 'TRH');
end;
/
2.) Create table and insert data
drop table text_search;
create table text_search (
lang varchar2(5)
, name varchar2(100)
);
insert into text_search(lang, name) values ('DEH', 'Strauß');
insert into text_search(lang, name) values ('DEH', 'Möllbäck');
insert into text_search(lang, name) values ('TRH', 'Öğem');
insert into text_search(lang, name) values ('TRH', 'Öger');
insert into text_search(lang, name) values ('CZH', 'Tomáš');
insert into text_search(lang, name) values ('CZH', 'Černínová');
commit;
3.) The index creation now produces different results depending on the language settings:
-- *Option A)*
alter session set nls_language=german;
drop index i_text_search;
create index i_text_search on text_search (name)
indextype is ctxsys.context
parameters ('
section group CTXSYS.AUTO_SECTION_GROUP
lexer global_lexer language column lang
memory 300000000'
);
select * from dr$i_text_search$I;
-- *Option B)*
alter session set nls_language=turkish;
drop index i_text_search;
create index i_text_search on text_search (name)
indextype is ctxsys.context
parameters ('
section group CTXSYS.AUTO_SECTION_GROUP
lexer global_lexer language column lang
memory 300000000'
);
select * from dr$i_text_search$I;
-- *Option C)*
alter session set nls_language=czech;
drop index i_text_search;
create index i_text_search on text_search (name)
indextype is ctxsys.context
parameters ('
section group CTXSYS.AUTO_SECTION_GROUP
lexer global_lexer language column lang
memory 300000000'
);
select * from dr$i_text_search$I;
And now I get different results:
Option A)
dr$i_text_search$I with nls_language=german:
STRAUß
STRAUSS
MOLLBACK
OĞEM
OGER
TOMAŠ
ČERNINOVA
Problems, e.g.:
A Turkish client now does not find his data (the select returns 0 rows)
alter session set nls_language=turkish;
select * from text_search
where contains (name, 'Öğem') > 0;
Option B)
dr$i_text_search$I with nls_language=turkish:
STRAUß
STRAUSS
MÖLLBACK
ÖĞEM
ÖGER
TOMAŠ
ČERNINOVA
Problems, e.g.:
A Czech client now does not find his data (the select returns 0 rows)
alter session set nls_language=czech;
select * from text_search
where contains (name, 'Černínová') > 0;
Option C)
dr$i_text_search$I with nls_language=czech:
STRAUß
STRAUSS
MOLLBACK
OĞEM
OGER
TOMAS
CERNINOVA
Problems, e.g.:
A Turkish client now does not find his data (the select returns 0 rows)
alter session set nls_language=turkish;
select * from text_search
where contains (name, 'Öğem') > 0;
----> How can these problems be avoided? What am I doing wrong?
You need to change your base_letter_type from specific to generic. Also, if you are going to use both alternate_spelling and base_letter in your german_lexer, then you might want to set override_base_letter to true. Please see the run of your code below, with those changes applied. The special characters got mangled in my spool file, but hopefully you get the idea.
SCOTT@orcl_11gR2> begin
2 ctx_ddl.drop_preference('global_lexer');
3 ctx_ddl.drop_preference('german_lexer');
4 ctx_ddl.drop_preference('turkish_lexer');
5 ctx_ddl.drop_preference('czech_lexer');
6 end;
7 /
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> begin
2 ctx_ddl.create_preference('german_lexer','basic_lexer');
3 ctx_ddl.create_preference('turkish_lexer','basic_lexer');
4 ctx_ddl.create_preference('czech_lexer','basic_lexer');
5 ctx_ddl.create_preference('global_lexer', 'multi_lexer');
6 end;
7 /
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> begin
2 ctx_ddl.set_attribute('german_lexer','composite','german');
3 ctx_ddl.set_attribute('german_lexer','mixed_case','no');
4 ctx_ddl.set_attribute('german_lexer','alternate_spelling','german');
5 ctx_ddl.set_attribute('german_lexer','base_letter','yes');
6 ctx_ddl.set_attribute('german_lexer','base_letter_type','generic');
7 ctx_ddl.set_attribute('german_lexer','override_base_letter', 'true');
8 ctx_ddl.set_attribute('german_lexer','printjoins','_');
9
10 ctx_ddl.set_attribute('czech_lexer','mixed_case','no');
11 ctx_ddl.set_attribute('czech_lexer','base_letter','yes');
12 ctx_ddl.set_attribute('czech_lexer','base_letter_type','generic');
13 ctx_ddl.set_attribute('czech_lexer','printjoins','_');
14
15 ctx_ddl.set_attribute('turkish_lexer','mixed_case','no');
16 ctx_ddl.set_attribute('turkish_lexer','base_letter','yes');
17 ctx_ddl.set_attribute('turkish_lexer','base_letter_type','generic');
18 ctx_ddl.set_attribute('turkish_lexer','printjoins','_');
19
20 ctx_ddl.add_sub_lexer('global_lexer', 'default', 'german_lexer');
21 ctx_ddl.add_sub_lexer('global_lexer', 'czech', 'czech_lexer', 'CZH');
22 ctx_ddl.add_sub_lexer('global_lexer', 'turkish', 'turkish_lexer', 'TRH');
23 end;
24 /
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> drop table text_search;
Table dropped.
SCOTT@orcl_11gR2> create table text_search (
2 lang varchar2(5)
3 , name varchar2(100)
4 );
Table created.
SCOTT@orcl_11gR2> insert into text_search(lang, name) values ('DEH', 'Strauß');
1 row created.
SCOTT@orcl_11gR2> insert into text_search(lang, name) values ('DEH', 'Möllbäck');
1 row created.
SCOTT@orcl_11gR2> insert into text_search(lang, name) values ('TRH', 'Öğem');
1 row created.
SCOTT@orcl_11gR2> insert into text_search(lang, name) values ('TRH', 'Öger');
1 row created.
SCOTT@orcl_11gR2> insert into text_search(lang, name) values ('CZH', 'Tomáš');
1 row created.
SCOTT@orcl_11gR2> insert into text_search(lang, name) values ('CZH', 'Černínová');
1 row created.
SCOTT@orcl_11gR2> commit;
Commit complete.
SCOTT@orcl_11gR2>
SCOTT@orcl_11gR2> -- *Option A)*
SCOTT@orcl_11gR2> alter session set nls_language=german;
Session altered.
SCOTT@orcl_11gR2> drop index i_text_search;
drop index i_text_search
ERROR at line 1:
ORA-01418: Angegebener Index ist nicht vorhanden
SCOTT@orcl_11gR2> create index i_text_search on text_search (name)
2 indextype is ctxsys.context
3 parameters ('
4 section group CTXSYS.AUTO_SECTION_GROUP
5 lexer global_lexer language column lang
6 memory 300000000'
7 );
Index created.
SCOTT@orcl_11gR2> select token_text from dr$i_text_search$I;
TOKEN_TEXT
AYEM
AŒERNA
CK
GER
LLBA
MA
NOVA
STRAUAY
TOMA
9 rows selected.
SCOTT@orcl_11gR2> alter session set nls_language=turkish;
Session altered.
SCOTT@orcl_11gR2> select * from text_search
2 where contains (name, 'Öğem') > 0;
LANG
NAME
TRH
Öğem
1 row selected.
SCOTT@orcl_11gR2>
SCOTT@orcl_11gR2> -- *Option B)*
SCOTT@orcl_11gR2> alter session set nls_language=turkish;
Session altered.
SCOTT@orcl_11gR2> drop index i_text_search;
Index dropped.
SCOTT@orcl_11gR2> create index i_text_search on text_search (name)
2 indextype is ctxsys.context
3 parameters ('
4 section group CTXSYS.AUTO_SECTION_GROUP
5 lexer global_lexer language column lang
6 memory 300000000'
7 );
Index created.
SCOTT@orcl_11gR2> select token_text from dr$i_text_search$I;
TOKEN_TEXT
AYEM
AŒERNA
CK
GER
LLBA
MA
NOVA
STRAUAY
TOMA
9 rows selected.
SCOTT@orcl_11gR2> alter session set nls_language=czech;
Session altered.
SCOTT@orcl_11gR2> select * from text_search
2 where contains (name, 'Černínová') > 0;
LANG
NAME
CZH
Černínová
1 row selected.
SCOTT@orcl_11gR2>
SCOTT@orcl_11gR2> -- *Option C)*
SCOTT@orcl_11gR2> alter session set nls_language=czech;
Session altered.
SCOTT@orcl_11gR2> drop index i_text_search;
Index dropped.
SCOTT@orcl_11gR2> create index i_text_search on text_search (name)
2 indextype is ctxsys.context
3 parameters ('
4 section group CTXSYS.AUTO_SECTION_GROUP
5 lexer global_lexer language column lang
6 memory 300000000'
7 );
Index created.
SCOTT@orcl_11gR2> select token_text from dr$i_text_search$I;
TOKEN_TEXT
AYEM
AŒERNA
CK
GER
LLBA
MA
NOVA
STRAUAY
TOMA
9 rows selected.
SCOTT@orcl_11gR2> alter session set nls_language=turkish;
Session altered.
SCOTT@orcl_11gR2> select * from text_search
2 where contains (name, 'Öğem') > 0;
LANG
NAME
TRH
Öğem
1 row selected.
SCOTT@orcl_11gR2> -
Runtime.exec() with language specific chars (umlauts)
Hello,
my problem is as follows:
I need to run the glimpse search engine from a Java application on Solaris using JRE 1.3.1, with a search pattern containing special characters.
Glimpse has indexed UTF-8 coded XML files that can contain text with language-specific characters in different languages (e.g. German umlauts, Spanish, Chinese). The following code works fine on Windows, and with JRE 1.2.2 on Solaris too:
String sSearchedFreeText = "Tür";
String sEncoding = "UTF8";
// Convert UTF8 search free text
ByteArrayOutputStream osByteArray = new ByteArrayOutputStream();
Writer w = new OutputStreamWriter(osByteArray, sEncoding);
w.write(sSearchedFreeText);
w.close();
// Generate process
String commandString = "glimpse -y -l -i -H /data/glimpseindex -W -L 20 {" + osByteArray.toString() + "}";
Process p = Runtime.getRuntime().exec(commandString);
One of the XML files contains:
<group topic="service-num">
<entry name="id">7059</entry>
<entry name="name">Türverkleidung</entry>
</group>
Running the Java code with JRE 1.2.2 on Solaris, I get the following correct command line:
glimpse -y -l -i -H /data/glimpseindex -W -L 20 {Türverkleidung}
--> glimpse finds correct filenames
Running it with JRE 1.3.1, I get the following incorrect command line:
glimpse -y -l -i -H /data/glimpseindex -W -L 20 {T??rverkleidung}
--> glimpse finds nothing
JRE 1.2.2 uses ISO-8859-1 as its default charset, but JRE 1.3.1 uses ASCII on Solaris.
Is it possible to change the default charset for the JVM in solaris environment?
Or is there a way to force encoding used by Runtime.exec() with java code?
Thanks in advance for any hints.
Karsten
osByteArray.toString()
Yes, there's a way to force the encoding. You provide it as a parameter to the toString() method.
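A minimal sketch of that fix (the strings are taken from the question; the class name is made up): passing the encoding to toString() decodes the buffered bytes with the charset they were written in, instead of the platform default.

```java
import java.io.ByteArrayOutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;

public class ToStringEncoding {
    public static void main(String[] args) throws Exception {
        String sSearchedFreeText = "Tür";
        String sEncoding = "UTF8";

        // Write the search text into the buffer as UTF-8 bytes.
        ByteArrayOutputStream osByteArray = new ByteArrayOutputStream();
        Writer w = new OutputStreamWriter(osByteArray, sEncoding);
        w.write(sSearchedFreeText);
        w.close();

        // toString() with no argument uses the platform default charset
        // (ASCII on the poster's Solaris JRE 1.3.1, which yields "T??r").
        // Passing the encoding decodes the bytes correctly:
        String decoded = osByteArray.toString(sEncoding);
        System.out.println(decoded);  // prints "Tür"
    }
}
```

Note that Runtime.exec() itself still encodes the argument strings with the platform default when spawning the process, so the umlaut must also survive that step (e.g. via the file.encoding system property).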
-
Language specific characters with JDBC
Does anybody know how to insert language-specific characters into Oracle tables using JDBC, without the overhead of Unicode conversion back and forth?
At the moment, all we can do is convert those characters to Unicode when inserting, and perform the reverse conversion when reading them back from a result set. This is cumbersome for large text data.
Is there a way to configure the RDBMS and/or the operating system for this purpose? We are using Oracle 7.3.4 on Windows NT 4.0 SP5, Oracle JDBC Driver 8.1.6, and Java Web Server 2.0 (JDBC 1.0 compliant). Suggestions for Oracle 8.1.6 and Solaris 2.6 will also be appreciated.
Ozan & Serpil
Hi Jeremy,
Below is meta tags for Turkish
<meta http-equiv="Content-Type" content="text/html; charset=windows-1254" />
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-9" />
<meta http-equiv="Content-Language" content="tr" />
I tried, but the result is the same.
I think .irpt has no Turkish support.
Thanks. -
Problems with non-ASCII characters on Linux Unit Test Import
I found a problem with non-ASCII characters in the Unit Test Import for Linux. This problem does not appear in the Unit Test Import for Windows.
I have attached a Unit Test export called PROC1.XML. It tests a procedure that is included in another attachment called PROC1.txt. The unit test includes 2 implementations. Both implementations pass non-ASCII characters to the procedure and return them unchanged.
In Linux, the unit test import changes the non-ASCII characters in the XML file to xFFFD. If I copy/paste the non-ASCII characters into the Unit Test after the import, they are stored and executed correctly.
Amazon Ubuntu 3.13.0-45-generic / lubuntu-core
Oracle 11g Express Edition - AL32UTF8
SQL*Developer 4.0.3.16 Build MAIN-16.84
Java(TM) SE Runtime Environment (build 1.7.0_76-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.76-b04, mixed mode)
In Windows, the unit test will import the non-ASCII characters unchanged from the XML file.
Windows 7 Home Premium, Service Pack 1
Oracle 11g Express Edition - AL32UTF8
SQL*Developer 4.0.3.16 Build MAIN-16.84
Java(TM) SE Runtime Environment (build 1.8.0_31-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.31-b07, mixed mode)
If SQL*Developer is coded the same between Windows and Linux, the JVM must be causing the problem.
Set the System property "mail.mime.decodeparameters" to "true" to enable the RFC 2231 support.
See the javadocs for the javax.mail.internet package for the list of properties.
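For example (a sketch only; on its own this just sets the property — it is the javax.mail.internet classes that read it when decoding headers), the property must be set before any MIME parsing happens:

```java
public class EnableRfc2231 {
    public static void main(String[] args) {
        // Must be set before javax.mail.internet classes decode any headers.
        System.setProperty("mail.mime.decodeparameters", "true");

        System.out.println(System.getProperty("mail.mime.decodeparameters"));  // prints "true"
    }
}
```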
Yes, the FAQ entry should contain those details as well. -
Used Visual Studio 2012. In our project the Microsoft.Reporting.WinForms.ReportViewer control is used. The report handled by the control contains TextBoxes with text containing Czech-specific characters, e.g. (ř, ě, ...). When the report is exported to PDF, the characters are displayed correctly. However, when the text with Czech characters is copied from the PDF and pasted into the search box of the PDF document, only box characters are displayed. The TextBoxes in the report use the default font Arial. When the report is exported to Word, and the Word document is then saved as a PDF document, it is OK: copying text with Czech characters from the resulting PDF document and pasting it into the search box again displays Czech characters, not box characters.
Also, when the report handled by the ReportViewer control contains several TextBoxes and some of the boxes contain Czech characters and some do not, after exporting to a PDF document there is a problem with text selection. When I try to select several paragraphs in the PDF document, some with Czech characters and some without, the selection behaves strangely and jumps from one paragraph to another unexpectedly.
Hi,
did you manage to avoid those squares?
BTW: if any such character is encountered in a line, the entire line of text is garbled.
I've even tried the ReportViewer from MSSQL 2014, but got the same problem. When I tried ILSpy, I found code where it is checked whether the PDF font is composite; depending on that, a glyph is created. But that is still only a guess.
I've tried Telerik's reporting; they have a similar problem (besides others), but not with the special characters. They produced squares for some sequences like: ft, fi, tí.
Please share any info you get.
Until then, my advice for you:
a) try JasperReports (they seem the most advanced, although it is Java)
b) DevExpress has quite high-quality reports, and it seems they got those special characters right :D
c) I created a ticket and am waiting for Telerik's response (but if I had to choose reporting, I would stick with a) or b)
Problem with Icelandic special characters on Mac
Hello
I am working on a Flash publication for students, and I want it to run on Mac as well as PC. Everything goes fine, except a problem with three special characters in my language, Icelandic. I am working on a registration and login page where I am using text boxes and text input boxes. Everything looks correct on PC, but on Mac the characters Þ Ð Ý are lost.
I have tried different fonts etc.
Any idea what is wrong?
Jónas Helgason
Hello Jónas,
Did you ever figure this out ?
I have a similar problem, except with only two letters (both upper and lower case). These two Icelandic letters can't be entered into a Flex TextInput box in the Flex apps I am creating when they are loaded on a Mac. The letters are eth (ð) and thorn (þ), known as &Eth; and &Thorn; in HTML terminology. Typing these characters on the keyboard results in the following: { [ ? /
However I can copy the characters in question from some other app like TextEdit and paste them into a TextInput box in my Flex app and all is well, they show up correctly.
This happens regardless of the Mac browser used and the Flash plugin version used (have tried both 9 and 10) and also happens in the standalone Flash Player application.
Does anyone have any idea how to fix this or is this a bug in Flash Player ? This is really annoying as it makes text input into Flex apps on Icelandic Macs very difficult.
There must be something wrong with the mapping of keyboard key codes into character codes on the Mac that is causing this.
Btw, I just heard from a friend that this problem does not exist in MacOS 10.6. I am running 10.4 and have tested this on 10.5 and it exists on both of those OS versions.
Rgds,
Hordur Thordarson
Lausn hugbunadur
http://lausn.is -
Problems with Greek accented characters
After the update to AIR 2.0.2 I cannot input into any application greek with accented characters.
Tried TweetDeck and Twhirl and neither work (used to before the update)
Is this a bug or it needs some configuration
I am working on Fedora13 but heard the same problem reported on Ubuntu.
Have not tried on MS Windows or MacOSX.
Hi,
I'm using Adobe AIR 2.0.3 on a Windows machine. I wrote an app in Aptana Studio (build: 2.0.5.1278522500) with the ExtJS library, and I found a problem with the Polish national characters ż and Ż (all the other national characters, like ą, ę, ń, can be input).
In order to reproduce, here you have the sample code in ExtJS:
http://dev.sencha.com/deploy/dev/examples/form/anchoring.html
As you will see - it is possible to input ż and Ż in text fields.
Now, use the same code to build an AIR application and then run it. It is not possible to input those characters in the AIR window. Right Alt+Z acts like the undo operation: it removes the last entered text. All the other characters work fine.
Here is the code I used:
<html>
<head>
<title>New Adobe AIR Project</title>
<link rel="stylesheet" type="text/css" href="lib/ext/resources/css/ext-all.css" />
<link rel="stylesheet" type="text/css" href="lib/ext/air/resources/ext-air.css" />
<script type="text/javascript" src="lib/air/AIRAliases.js"></script>
<script type="text/javascript" src="lib/ext/adapter/ext/ext-base.js"></script>
<script type="text/javascript" src="lib/ext/ext-all.js"></script>
<script type="text/javascript" src="lib/ext/air/ext-air.js"></script>
<script type="text/javascript">
Ext.onReady(function(){
var form = new Ext.form.FormPanel({
baseCls: 'x-plain',
labelWidth: 55,
defaultType: 'textfield',
items: [{
fieldLabel: 'Send To',
name: 'to',
anchor: '100%' // anchor width by percentage
}, {
fieldLabel: 'Subject',
name: 'subject',
anchor: '100%' // anchor width by percentage
}, {
xtype: 'textarea',
hideLabel: true,
name: 'msg',
anchor: '100% -53' // anchor width by percentage and height by raw adjustment
}]
});
var window = new Ext.Window({
title: 'Resize Me',
width: 500,
height: 300,
minWidth: 300,
minHeight: 200,
layout: 'fit',
plain: true,
bodyStyle: 'padding:5px;',
buttonAlign: 'center',
items: form,
buttons: [{
text: 'Send'
}, {
text: 'Cancel'
}]
});
window.show();
});
</script>
</head>
<body>
</body>
</html>
Is it possible to input those characters, or is there a workaround for this (disable the undo operation or so)?
I really appreciate any help.
Kind regards,
Marcin. -
I am having problems with text size when using AOL Mail. I am using version 25.0 of Firefox. I did not have this problem in the past, but I suspect a change occurred with one of the Firefox upgrades. I do not have the same problem when using Internet Explorer.
Could you start by resetting the zoom level on the page? To do that, you can either:
* Press Ctrl+0 (that's a zero) on the keyboard
* View menu > Zoom > Reset
You also can use zoom to increase/decrease the size from there. This article describes the various mouse, keyboard, and menu methods: [[Font size and zoom - increase the size of web pages]].
Any luck? -
Language specific characters changes when .irpt executed.
Hi,
When I execute an .irpt page, language-specific characters change to strange signs.
How can it be solved?
Thanks.
Hi Jeremy,
Below is meta tags for Turkish
<meta http-equiv="Content-Type" content="text/html; charset=windows-1254" />
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-9" />
<meta http-equiv="Content-Language" content="tr" />
I tried, but the result is the same.
I think .irpt has no Turkish support.
Thanks. -
I have a problem with my contacts. I can't send a message if the contact didn't send me a message before. How can I add a new address?
Hi tizac,
You will want to add them to your Contacts.
There's a "+" button at the bottom of the contact list on the left to add a contact.
If you need to add the email address or phone number to an existing contact, there's an Edit button at the bottom of the contact info (on the right).
Once you have the contact email address or phone number you want to message to, scroll all the way down to the bottom of their information and there are options to:
Send Message
FaceTime
Share Contact
Add to Favorites
Of course choose "Send Message" to send an iMessage to them and you will start a new chat. You will be given the options of how to contact them, via phone number (if they are on an iPhone) or email (if they are on an iPod touch or iPad). They do need to be signed up for iMessage or it won't allow you to send.
ivan -
Please show us the video tutorial for this, because I am having so many problems with my iPhone 5 Bluetooth and cannot send any file. So please help me.
You have to use AirDrop, which is on iOS 7
-
Problem with labelprinter POLISH CHARACTERS
Hi,
I have a problem with a label printer. When I try to print a label (Intermec 4e) in Polish, there are no Polish characters on the printout. The print preview looks good, but that's all. Maybe someone has had a similar problem?
best regards
Hi!
In which way is the output transferred?
Something like:
a)
ESC 'times new roman'
My text
b) already converted into a picture with a specific resolution (e.g. of 300dpi)?
Case a) -> Your printer needs an installed font Times New Roman / it's using the first installed font, or something along those lines. Then you need to download a special (user) font with Polish characters. (That's a quite common way. Sometimes the vendor already installs a user font as preparation for some countries.)
Case b) -> No idea; when the print preview is already OK, then the output should be OK, too.
Regards,
Christian