Dreamweaver - Special/Extended Characters
Hello,
Does anyone know if it's possible to have a list of special characters as part of the DW interface? I've used HomeSite for years, which has a panel listing all the special characters, making it very easy to insert a character or to highlight and replace an existing one.
For example, to replace "&" with "&amp;" or "-" with "&ndash;", etc.
Here's what it looked like in Homesite:
Thanks,
Bob
Hello Mylenium,
Hmmm, that works but is somewhat limited. To get a full listing, all I can find is a dialog box that opens. Is there no way to keep this list open as part of the UI? The current method is clunky.
Similar Messages
-
Java app is writing question marks instead of extended characters
People,
since I've installed a Sun E3000 with Solaris 9, I'm having problems with accents (extended characters) in my Java application: it writes question marks instead of accents.
This machine used to run Solaris 2.6; we formatted and installed Solaris 9 and have had the problem since then.
My application uses a thin driver to connect to an Oracle9i database, and via Unix I can call sqlplus, insert chr(199) and SELECT it back (Ç) with no problems, but the application writes ? in nohup.out.
If you can point me in any direction, please let me know.
Regards, Orlando.
I'm pretty sure that the default locale "C" corresponds to the 7-bit US-ASCII character set. Again, this is all a metaphorical shot in the dark, but it's the only time that I've seen this happen. Sun's own documentation says to use something other than C or US-ASCII if international or special characters will be needed.
Here is my /etc/default/init for ISO 8859-15.
TZ=US/Eastern
CMASK=022
LC_COLLATE=en_US.ISO8859-15
LC_CTYPE=en_US.ISO8859-1
LC_MESSAGES=en_US.ISO8859-1
LC_MONETARY=en_US.ISO8859-15
LC_NUMERIC=en_US.ISO8859-15
LC_TIME=en_US.ISO8859-15
If the system that you're using can be rebooted, try changing the /etc/default/init file to something like the above, but with the specifics for your locale. Obviously, en_US is not your area, but I have no idea what Brazil's would be. -
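Once the system is rebooted with the new locale, you can verify from Java itself which charset the JVM actually picked up; a minimal sketch (class name is mine):

```java
import java.nio.charset.Charset;
import java.util.Locale;

public class CharsetCheck {
    public static void main(String[] args) {
        // The JVM derives its default charset from the OS locale
        // (LANG / LC_CTYPE) unless overridden with -Dfile.encoding=...
        System.out.println("default locale:  " + Locale.getDefault());
        System.out.println("default charset: " + Charset.defaultCharset());
    }
}
```

If this still reports an ASCII charset after the reboot, the application is not seeing the new locale (for example, it is started from an environment that does not source it).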
How to input extended characters in textfields.
Hi,
I wanted to know how Java handles extended characters such as Alt+130, Alt+137, etc. in text fields. Is the only way to input these by Alt codes, or is there another way?
Much thanks,
Hugo Hendriks
See this site for full info on Chinese input:
http://www.yale.edu/chinesemac
There is no need for a special keyboard for any kind of input. -
Entering extended characters?
As a recentish switcher I'm completely stumped by how I can enter extended characters. I need to enter the character code 0239 to get a special character in a font I'm using. On a PC I'd simply type 0239 on the numeric pad while holding down the Alt key. But that doesn't work on the Mac.
Any help much appreciated. GeoffT.
Most apps have an item in the "Edit" menu called "Special Characters..." which brings up a palette with just about every conceivable character set you could want. That should do the trick, unless the app is so old that it doesn't enable this system-level item. Assuming it works, once you've found the character you want, just click the "Insert" button in the lower-right of the palette to insert it into the active text field.
Here's Apple's description of the same feature:
http://docs.info.apple.com/article.html?path=Mac/10.5/en/8164.html
There are also many longstanding tricks to enter the more common special characters such as curly quotes, em dashes, and accented characters. Here's a very old but still valid cheat sheet:
http://home.earthlink.net/~awinkelried/keyboard_shortcuts.html -
Error with extended characters
Hi, I have a problem with extended characters, and it only happens when I have an attachment with the mail.
MimeBodyPart bp = new MimeBodyPart();
String content = "� � � � � � � � � � � � �� � � � � � � � � � � � �";
bp.setContent( content, "text/plain; charset=UTF-8" );
mp.addBodyPart( bp );
// code for adding an attachment to the mp using MimeBodyPart
msg.setContent( mp ); // add Multipart
msg.saveChanges(); // generate appropriate headers
ByteArrayOutputStream baos = new ByteArrayOutputStream();
msg.writeTo( baos );
msg is a MimeMessage.
Now this msg is stored in a cache and later retrieved using
String s = "";
java.lang.Object o = msg.getContent();
if ( o instanceof String ) {
    s = (String) o;
} else if ( o instanceof Multipart ) {
    try {
        Multipart mp = (Multipart) o;
        MimeBodyPart bp = (MimeBodyPart) mp.getBodyPart( 0 );
        s = (String) bp.getContent();
    } catch ( Exception e ) {
        cLog.error(e.getMessage(), e);
    }
}
Now s contains "�€ �� �‚ �ƒ �„ �… �† �‡ �ˆ �‰ �Š �‹ �Œ �� �� �� �� �� �� �� �� �� �� �� �� ��", which is wrong.
This has been bugging me for a couple of days now.
Could anyone please give a solution?
Many thanks in advance
Using these literal non-ASCII characters in the string constant might
look great in your editor but might not be giving the correct Unicode
values. Try replacing the special characters with Unicode escapes
(\u####).
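For example, é (U+00E9) written as an escape is immune to whatever encoding the editor or compiler assumes for the source file; a small illustrative sketch (class name is mine):

```java
public class EscapeDemo {
    public static void main(String[] args) {
        // "\u00e9" always yields the single character é (code point 233),
        // regardless of the source file's encoding.
        String s = "\u00e9";
        for (int i = 0; i < s.length(); i++) {
            // dump each character as an integer for comparison
            System.out.println((int) s.charAt(i)); // prints 233
        }
    }
}
```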
If that doesn't work, step through the original string and print out
each character as an integer. Then do the same for the string you
fetch from the message. Compare the integer values. If they're
not the same, post the details. If they are the same, the problem
is most likely in how you're displaying the characters you read
from the message. -
I previously used Dreamweaver 4 without CSS and then moved on to Dreamweaver 8, again without CSS. I am trialing Dreamweaver CS5 and have updated my site content to <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"> from just <HTML>.
Extended characters such as e-acute, which used to appear correctly in earlier versions of Dreamweaver and which appear correctly in Dreamweaver Live View, now appear rather differently in Internet Explorer 8: e.g. an e-acute appears as an A with a tilde followed by a copyright symbol. How can I correct this please?
Thanks John,
Since reading your message I discovered that the error occurred only where I had copied text with extended characters from another website and pasted it into mine.
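That symptom (an e-acute turning into an A-tilde plus a copyright sign) is the classic signature of UTF-8 bytes being decoded as Latin-1/windows-1252, which is what happens when pasted text carries a different encoding than the page declares. A small Java sketch reproduces it (class name is mine):

```java
import java.nio.charset.StandardCharsets;

public class MojibakeDemo {
    public static void main(String[] args) throws Exception {
        String original = "\u00e9";                              // é (e-acute)
        byte[] utf8 = original.getBytes(StandardCharsets.UTF_8); // 0xC3 0xA9
        // Decoding those UTF-8 bytes with the wrong charset yields "Ã©":
        String garbled = new String(utf8, "windows-1252");
        System.out.println(garbled); // prints Ã©
    }
}
```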
Best wishes,
Brian -
Problem inserting text with special Hungarian characters into MySQL database
When I insert text into my MySQL db, the special Hungarian
characters (ő, ű) change into "?".
When I check the
<cfoutput>#FORM.special_character#</cfoutput> it gives
me the correct text, things go wrong just when writing it into the
db. My hosting provider said the following: "please try to
evidently specify "latin2" charset with "latin2_hungarian_ci"
collation when performing any operations with tables. It is
supported by the server but not used by default." At my former
hosting provider I had no such problem. Anyway, how could I do what
my hosting provider has suggested? I read a PHP-related article
that said to use "SET NAMES latin2". How could I do such a thing in
ColdFusion? Any suggestion? Besides I've tried to use UTF8 and
Latin2 character encoding both on my pages and in the db but with
not much success.
I've also read a French language message here in this forum
that suggested to use:
<cfscript>
setEncoding("form", "utf-8");
setEncoding("url", "utf-8");
</cfscript>
<cfcontent type="text/html; charset=utf-8">
I've changed the utf-8 to latin2 and even to iso-8859-2, but it
didn't help.
Thanks, Aron
I read that the most straightforward way is to do
everything in UTF-8, because it handles special characters well, so
I've tried to set up a simple testing environment. Besides, I use CF
MX7 and my hosting provider creates the dsn for me so I think the
db driver is JDBC but not sure.
1.) In Dreamweaver I created a page with UTF-8 encoding set
the Unicode Normalization Form to "C" and checked the include
unicode signature (BOM) checkbox. This created a page with the meta
tag: <meta http-equiv="Content-Type" content="text/html;
charset=utf-8" />. I've checked the HTTP header with an online
utility at delorie.com and it gave me the following info:
HTTP/1.1, Content-Type: text/html; charset=utf-8, Server:
Microsoft-IIS/6.0
2.) Then I put the following codes into the top of my page
before everything:
<cfprocessingdirective pageEncoding = "utf-8">
<cfset setEncoding("URL", "utf-8")>
<cfset setEncoding("FORM", "utf-8")>
<cfcontent type="text/html; charset=utf-8">
3.) I wrote some special Hungarian chars
(<p>őű</p>) into the page and they displayed
well all the time.
4.) I've created a simple MySQL db (MySQL Community Edition
5.0.27-community-nt) on my shared hosting server with phpMyAdmin
with default charset of UTF-8 and choosing utf8_hungarian_ci as
default collation. Then I created a MyISAM table and the collation
was automatically applied to my varchar field into which I stored
data with special chars. I've checked the properties of the MySQL
server in MySQL-Front prog and found the following settings under
the Variables tab: character_set_client: utf8,
character_set_connection: utf8, character_set_database: latin1,
character_set_results: utf8, character_set_server: latin1,
character_set_system: utf8, collation_connection: utf8_general_ci,
collation_database: latin1_swedish_ci, collation_server:
latin1_swedish_ci.
5.) I wrote a simple insert form into my page and tried it
using both the content of the form field and a hardcoded string
value and even tried to read back the value of the
#FORM.special_char# variable. In each case the special Hungarian
chars changed to "q" or "p" letters.
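One more thing worth ruling out: if the DSN uses MySQL Connector/J, the connection character set can be forced in the JDBC URL instead of relying on the server defaults (character_set_server is latin1 above). A sketch with placeholder host, database, table, and credentials (none of these names are from the post); whether the hosting provider lets you set this on a managed DSN is another question:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class Utf8Insert {
    public static void main(String[] args) throws Exception {
        // useUnicode/characterEncoding make the driver talk UTF-8 to the
        // server regardless of character_set_server.
        String url = "jdbc:mysql://localhost/testdb"
                   + "?useUnicode=true&characterEncoding=UTF-8";
        try (Connection con = DriverManager.getConnection(url, "user", "pass");
             PreparedStatement ps =
                 con.prepareStatement("INSERT INTO t (txt) VALUES (?)")) {
            ps.setString(1, "\u0151\u0171"); // ő ű as Unicode escapes
            ps.executeUpdate();
        }
    }
}
```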
Can anybody see something wrong in the above mentioned or
have an idea to test something else?
I am thinking about trying this same page against a db on my
other hosting provider's MySQL server.
Here is the link to the form:
http://209.85.117.174/pages/proba/chartest/utf8_1/form.cfm
Thanks, Aron -
Send purchase order via email (external send) with special Czech characters
Hi all,
I am sending a purchase order created with ME21N via email to the vendor using "external send".
The mail is delivered without any problems, PO is attached to the mail as PDF file.
Problem is that special Czech characters such as "ž" or "š" are not displayed; a "#" (hash) appears instead.
This problem occurs when the language for PO output = EN.
Tests with language = CS worked out fine, but then the whole form incl. all texts is in Czech as well; no valid solution, since it needs to be in English.
We checked the SAPconnect configuration and applied note 665947; this is working properly.
When displaying the PO (ME23N), the special characters are shown correctly as well.
Could you please let me know how to proceed with this issue?
Thanks.
Florian
Hi!
No, it's not a Unicode system.
It is maintained as:
Tar. Format   Lang.   Lang.     Output Format   Dev. type
PDF           EN      English                   PDF1
Using this option, the character "ž" was not displayed correctly, but "Ú" was OK.
All other Czech special characters have not been tested so far.
Thanks,
Florian
Edited by: S. SCE - Stock Mngmnt on Aug 14, 2008 10:19 AM -
Importing WORD document with special regional characters in RoboHelp X5
Hello,
I have a problem when I'm importing a *.doc document. The
document is written in Slovene and it contains special regional
characters. Here is the deal:
I was using RoboHelp 4.0 before and I had this same problem.
What I did was: when I added a new topic (imported *.doc file) into
an existing project and the WebHelp was generated, in the web browser
I clicked the last added topic and opened its source code, where I
changed the charset from 1252 to 1250. That enabled the special
characters to be viewed correctly. When I imported some additional
topics and generated the WebHelp again, the program somehow "saved"
the 1250 setting in the previous topics and the characters were
correctly shown. I had to adjust 1250 only in the new topics that
were added before the last generate.
When I try to do the same in RoboHelp X5, this doesn't work.
The program doesn't "remember" the 1250 setting and always generates
with the 1252 character setting. This is a problem, because there are
a lot of topics and I would have to change the character setting
for every topic/doc document I added.
What can I do?
Thanks in advance
Hello Tiesto_ZT,
Welcome to the Forum.
I have no experience of using other languages in RH, but this
problem was discussed in
Thread.
Check it out and post back if it doesn't fit your needs.
Hope this helps (at least a bit),
Brian -
Special Unicode characters in RSS XML
Hi,
I'm using an adapted version of Husnu Sensoy's solution (http://husnusensoy.wordpress.com/2007/11/17/o-rss-11010-on-sourceforgenet/ - thanks, Husnu) to consume RSS feeds in an Apex app.
It works a treat, except in cases where the source feeds contain special Unicode characters such as [right double quotation mark - 0x92 0x2019] (thank you, http://www.nytimes.com/services/xml/rss/nyt/GlobalBusiness.xml)
These cases fail with
ORA-31011: XML parsing failed
ORA-19202: Error occurred in XML processing
LPX-00217: invalid character 8217 (U+2019) Error at line 19
Any ideas on how to translate these characters, or replace them with something innocuous (UNISTR?), so that the XML transformation succeeds?
Many thanks,
jd
The relevant code snippet is:
procedure get_rss
( p_address in httpuritype
, p_rss out t_rss
)
is
l_sqlerrm varchar2(4000);
function oracle_transformation
return xmltype is
l_result xmltype;
begin
select xslt
into l_result
from rsstransform
where rsstransform = 0;
return l_result;
exception
when no_data_found then
raise_application_error(-20000, 'Transformation XML not found');
when others then
l_sqlerrm := sqlerrm;
insert into errorlog...
end oracle_transformation;
begin
xmltype.transform(p_address.getXML()
,oracle_transformation
).toobject(p_rss);
exception
when others then
l_sqlerrm := sqlerrm;
insert into errorlog....
end get_rss;
My environment:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
PARAMETER VALUE
NLS_LANGUAGE AMERICAN
NLS_CHARACTERSET WE8ISO8859P1
NLS_NCHAR_CHARACTERSET AL16UTF16
NLS_COMP BINARY
NLS_LENGTH_SEMANTICS BYTE
NLS_NCHAR_CONV_EXCP FALSE
Environment:
Oracle 10g R2 x86 10.2.0.4 on RHEL4U8 x86.
db NLS_CHARACTERSET WE8ISO8859P1
After following the following note:
Changing US7ASCII or WE8ISO8859P1 to WE8MSWIN1252 [ID 555823.1]
the nls_charset was changed:
Database character set WE8ISO8859P1
FROMCHAR WE8ISO8859P1
TOCHAR WE8MSWIN1252
And the error:
ORA-31011: XML parsing failed
ORA-19202: Error occurred in XML processing
LPX-00217: invalid character 8217 (U+2019)
was no longer generated.
A Unicode database character set was not required in this case.
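If changing the database character set is not an option, another route (as the original question suggested) is to map the offending code points to something innocuous before the transform. A client-side sketch in Java; class and method names are mine, and the same mapping could be done in PL/SQL with TRANSLATE:

```java
public class SmartQuoteFilter {

    // Map common Windows "smart" punctuation to ASCII so a WE8ISO8859P1
    // database can parse the feed. Extend the list as needed.
    public static String toInnocuous(String xml) {
        return xml.replace('\u2018', '\'')  // left single quote
                  .replace('\u2019', '\'')  // right single quote (the LPX-00217 culprit)
                  .replace('\u201C', '"')   // left double quote
                  .replace('\u201D', '"')   // right double quote
                  .replace('\u2013', '-')   // en dash
                  .replace('\u2014', '-');  // em dash
    }

    public static void main(String[] args) {
        System.out.println(toInnocuous("it\u2019s")); // prints it's
    }
}
```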
hth.
Paul -
Hi,
I encountered a problem with regard to the display of special HTML characters (chr 155). Crystal was not able to display the characters correctly. Instead, a blank space was displayed. In addition, when the report is exported to PDF, it is displayed as boxes.
Is there a way to handle display of special HTML chars in Crystal?
Thanks
Crystal's HTML interpreter is very limited and has been the same for years, so it seems unlikely it will change any time soon.
As it's a specific character that is failing, use a replace formula to remove the long-dash HTML and replace it with a short-dash HTML, which I guess Crystal will recognise.
Replace(yourfield, 'longdashhtml', 'shortdashhtml')
Ian -
OVD - special/national characters in LDAP context
Hi all,
I created an integration between Active Directory and Oracle 10g via Oracle Virtual Directory 10g. All works correctly, but some users have national characters in their AD context. For example, Thomas Bjørne (cn=Thomas Bjørne,cn=Users,dc=media,dc=local). In this case the user cannot log in to the database. I know that the problem is with special national characters in the AD context, but I don't know how to solve it. It is not possible to change the AD context :-(
Can somebody help me with it?
Let's first verify that you can bind to OID using the command line
commands with an existing user in OID.
Let's assume for a moment that your user's password is welcome and
their DN in OID is cn=jdoe,c=US
Try the following command and tell me what the results are.
ldapsearch -p port_num -h host_name -b "c=US" -s sub -v "cn=*"
It should return all users under c=US. If not let me know the
error message you get. -
Problem with special national characters
Hi,
How can I configure Oracle Application Server 10g to correctly expose special national characters (ANSI 1250 Central European code page)?
It is hosted on Windows Server 2003, where the appropriate character resources exist.
Thanks in advance
KM
Check the available languages in SMLT (trn). In the example stated below, the characters coming from DI are Spanish characters, which are getting converted to Swedish ones.
Please go through the following:
Re: Japanese characters -
Hello Team,
Here we are facing issues while converting SAP table data to an XML file.
The description is not converting properly for the special German characters like ü, ö, ä.
Actual output should be :Überprüfung nach § 29 STVZO
Output Displayed :Ã#berprüfung nach § 29 STVZO
Can you please look into this and help me to get the correct output.
Thank you.
Hi,
Unicode or non-Unicode system?
Displayed where? SAPGUI? Print preview? Spool display?
And how is the XML file written? OPEN DATASET? BAPI?
At all of these stages it might be either that it is only a display issue, like the selected display CHARSET in SAPGUI when it is a non-Unicode system, or simply not coded correctly, like OPEN DATASET without specifying the codepage when necessary.
And it might even be that "displaying" the XML file is simply done with the incorrect codepage while the data inside the file is correct.
If you are on Windows, you might even face funny results when saving a simple text file from Notepad with ANSI/DOS and both Unicode variants and then going to CMD.EXE and simply "type"-ing the content.
All four results will be different, although Notepad will display the same stuff.
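The Notepad/CMD.EXE point is easy to demonstrate: the same eleven-character string produces different byte sequences depending on the codepage it is saved with, so any viewer that guesses the wrong one shows garbage. A small sketch (class name is mine):

```java
import java.nio.charset.StandardCharsets;

public class BytesDemo {
    public static void main(String[] args) {
        String s = "\u00dcberpr\u00fcfung"; // "Überprüfung", 11 characters
        // One byte per character in ISO-8859-1, but Ü and ü each take
        // two bytes in UTF-8:
        System.out.println(s.getBytes(StandardCharsets.ISO_8859_1).length); // 11
        System.out.println(s.getBytes(StandardCharsets.UTF_8).length);      // 13
    }
}
```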
So first of all, make sure which codepage is relevant at all stages from DB table to "display":
- DB-Charset
- SAP system type (unicode/non-unicode)
- SAP codepage (1100 / 410x )
- crosscheck the test from report RSCPINST
- Codepage on Windows running SAPGUI
- Selected codepage for SAPGUI
Good hunting
Volker -
Fail to pass special Hungarian characters using WSDL
Dear All,
I'm using WLS 7.0SP1, the webservice is generated with the ant task in rpc-style.
The client is written in VB6 with MS SoapToolkit3.
The simple method that receives and returns a
string fails when the input or output
contains special Hungarian characters.
Can anyone help how to solve the problem?
Thank you,
Peter
Hello,
Thank you for the help, I set the property and it's working fine,
the characters appear as they should!
Meanwhile I realized that the failure was because of some
unneeded '\0' characters at the end of the strings.
Thank you again,
Peter
Bruce Stephens <[email protected]> wrote:
Hello,
On the server, is the VM locale set to "en" ?
Try setting the following system property on the server startup:
weblogic.webservice.i18n.charset="utf-8"
Could you post a SOAP trace?
Thanks,
Bruce
Peter Dobszai wrote:
Dear All,
I'm using WLS 7.0SP1, the webservice is generated with the ant task in rpc-style.
The client is written in VB6 with MS SoapToolkit3.
The simple method that receives and returns a
string fails when the input or output
contains special Hungarian characters.
Can anyone help how to solve the problem?
Thank you,
Peter