Dreamweaver warning UTF encoding

Help!! I am an intermediate web designer and have been using
Dreamweaver MX 2004 on Mac OS X. I have been working on a page named
services.html for many days; it is a lot of text laid out in a table.
It has been loading fine throughout the past week, as I've been
editing it daily. Today I made minor changes to the navigation, and
when I went to upload I got a strange warning that I've never seen
before, and I don't know what it means. Since my page had been fine on
the internet up to this point, I chose to ignore the warning and
uploaded as usual--big mistake. The content has fallen apart, with
huge gaps between paragraphs, like inches of blank space. All of the
content is in the same <td>, so I know that I can't blame it on the
table. The warning with the big yellow triangle said, "The document's
current encoding can not correctly save all of the characters within
the document. You may want to change to UTF or an encoding that
supports the special characters in this document." What the heck does
that mean?? Hopeless in California, Deborah. Here is the link to the
page, so that you can look at it and my code. Thanks to anyone who can
come to my rescue.
http://www.dnorthphoto.com/bloomful_web/services.html

On 18 Nov 2006 in macromedia.dreamweaver, David Powers wrote:
> dnorth wrote:
>> Today I made minor changes to the navigation, and when I went to
>> upload I got this strange warning that I've never seen before,
>> and I don't know what it means. Since my page had been fine on the
>> internet up to this point I chose to ignore the warning and
>> uploaded as usual--big mistake. the content has fallen apart with
>> huge gaps between paragraphs, like inches of blank space.
>
> It has nothing to do with the warning about character encoding. The
> huge gaps are caused by this style rule in the services_headings
> class in your stylesheet:
>
> padding-bottom: 150%;
>
> Remove it, and the gaps disappear.

Or use a heading tag (<h1>, <h2>, <hX>), which would be more
semantically correct anyway.

Joe Makowiec
http://makowiec.net/
Email: http://makowiec.net/email.php

Similar Messages

  • Validator warning: Character Encoding mismatch!

    I have been following the discussion on favicons with interest. A
    few days ago I added a favicon to the page http://www.corybas.com/,
    and eventually persuaded IE6 to show the favicon, provided I loaded
    it by clicking the icon. It did not show it when the page reloaded
    itself, and now it has forgotten all about it.
    Following some discussions here this morning I ran the Validator
    over the page and got the diagnostic "The character encoding
    specified in the HTTP header (utf-8) is different from the value in
    the <meta> element (iso-8859-1). I will use the value from the HTTP
    header (utf-8) for this validation."
    As far as I can work out, this is caused by an incompatibility in
    the witchcraft Dreamweaver includes in a basic HTML page, which is
    as follows:
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
    <meta http-equiv="Content-Type" content="text/html;
    charset=iso-8859-1" />
    <title>Untitled Document</title>
    </head>
    Should I worry about this warning?
    (I removed a few other insignificant errors, but IE6 still can't
    see the favicon.)
    Clancy

    On 22 Apr 2008 in macromedia.dreamweaver, Clancy wrote:
    > Following some discussions here this morning I ran the Validator
    > over the page and got the diagnostic "The character encoding
    > specified in the HTTP header (utf-8) is different from the value
    > in the <meta> element (iso-8859-1). I will use the value from the
    > HTTP header (utf-8) for this validation."
    >
    > As far as I can work out, this is caused by an incompatibility in
    > the witchcraft Dreamweaver includes in a basic HTML page, which is
    > as follows:
    >
    > <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    > "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    > <html xmlns="http://www.w3.org/1999/xhtml">
    > <head>
    > <meta http-equiv="Content-Type" content="text/html;
    > charset=iso-8859-1" />
    > <title>Untitled Document</title>
    > </head>
    >
    > Should I worry about this warning?

    At an offhand guess, you're on an Apache 2.x server. In its default
    setup, it sends a UTF-8 charset header. That is sufficient for the
    browser. It also conflicts with the iso-8859-1 charset in the
    document. The fastest cure is to remove the charset meta from the
    page, or change it to UTF-8. But it has no bad effects that I know
    of on a browser.
    Joe Makowiec
    http://makowiec.net/
    Email: http://makowiec.net/contact.php

  • UTF encoding issues on file adapters and mappings

    Hi,
    We did some tests regarding UTF-8 and UTF-16 encoding using file adapters. Our conclusions so far (when using Windows OS) are:
    1. The inbound adapter can handle UTF-8 and UTF-16 correctly, but do not specify the encoding!
    2. XI mappings will set the XML encoding to UTF-8 correctly when sending a UTF-16 file to XI.
    3. The outbound adapter can only handle UTF-8 (and US-ASCII and ISO-8859-1) correctly.
    The exact test results are:
    >>Outbound file adapter bug
    If no encoding is specified in the outbound file adapter, UTF-8 and UTF-16 are handled correctly. However, if the encoding is set to UTF-16, the XI mapping will fail with the error:
    During the application mapping com/sap/xi/tf/_CHRIS_OUTBOUND_TO_INBOUND_ a com.sap.aii.utilxi.misc.api.BaseRuntimeException was thrown: Fatal Error: com.sap.engine.lib.xml.parser.Parser~
    Part of the trace:
    com.sap.aii.ibrun.server.mapping.MappingRuntimeException: Runtime exception occurred during execution of application mapping program com/sap/xi/tf/_CHRIS_OUTBOUND_TO_INBOUND_: com.sap.aii.utilxi.misc.api.BaseRuntimeException; Fatal Error: com.sap.engine.lib.xml.parser.ParserException: XMLParser: No data allowed here: (hex) a0d, a0d, 6e3c(:main:, row:3, col:2) at com.sap.aii.ibrun.server.mapping.JavaMapping.executeStep(JavaMapping.java:72) at com.sap.aii.ibrun.server.mapping.Mapping.execute(Mapping.java:91) at com.sap.aii.ibrun.server.mapping.MappingHandler.run(MappingHandler.java:78) at com.sap.aii.ibrun.sbeans.mapping.MappingRequestHandler.handleMappingRequest
    >>Inbound file adapter bug
    If the encoding of an inbound file adapter is set to UTF-16, everything works OK (except that the XML encoding is not set correctly, but this may be a mapping issue rather than an adapter issue). However, the default UTF-16 encoding seems to be UTF-16BE, where I would expect UTF-16LE, since this is the most commonly used encoding.
    If the encoding is UTF-16LE or UTF-16BE, the character set used in the message is correct, except for the BOM of the file. The BOM is empty, which indicates a UTF-8 encoded file. Since the file is UTF-16BE or UTF-16LE encoded, this is wrong, and the correct BOM should be added by the adapter.
    Encodings like US-ASCII and ISO-8859-1 are handled correctly.
    >>Mapping bug
    When we send in a message encoded in UTF-8 and want to send it out as a UTF-16 encoded message, we need to set the XML encoding to UTF-16. Normally this is done by an XSLT mapping using the <xsl:output encoding="UTF-16"/> command.
    The UTF-8 message will get processed by the XSLT and any special character will be converted to its UTF-16 value. However, the output message is not UTF-16 encoded (1 byte instead of 2 bytes).
    When this 1-byte message is sent to the inbound adapter (with encoding set to UTF-16), the message will be translated from 1 byte to 2 bytes (UTF-8 to UTF-16). The characters that were already converted from UTF-8 to UTF-16 will be read as single-byte characters and converted again. This results in an incorrect message with illegal characters.
    So basically the characters are converted to UTF-16 twice, which is incorrect.
    Maybe someone can confirm this on another XI system (maybe different OS). If you need test files or mapping, please let me know.
    Kind regards,
    Christiaan Schaake.

    Update after carefully reading all the UTF-related documents on the internet:
    For UTF-16, the BOM is required, and the adapter handles this correctly (encoding=UTF-16 will create the BOM).
    For UTF-16LE and UTF-16BE, the BOM must not be set; the application should be able to handle the conversion. The adapter is working correctly here as well.
    If the adapter is set to binary mode instead of text mode, the file will always be read correctly.
    About the mapping issue, I'm still experimenting with that one.
    Kind regards,
    Christiaan Schaake.
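The BOM behavior described above mirrors how Java's own standard charsets work: the generic UTF-16 charset writes a byte order mark when encoding, while UTF-16LE and UTF-16BE do not. A small stand-alone sketch (plain JDK, nothing XI-specific) illustrates the difference:

```java
import java.nio.charset.StandardCharsets;

public class BomCheck {
    // Returns true if the encoded bytes begin with a UTF-16 byte order mark.
    public static boolean startsWithBom(byte[] b) {
        return b.length >= 2
            && ((b[0] == (byte) 0xFE && b[1] == (byte) 0xFF)   // big-endian BOM
             || (b[0] == (byte) 0xFF && b[1] == (byte) 0xFE)); // little-endian BOM
    }

    public static void main(String[] args) {
        String s = "test";
        // Generic UTF-16: the encoder prepends a BOM.
        System.out.println(startsWithBom(s.getBytes(StandardCharsets.UTF_16)));   // true
        // Explicit endianness: no BOM is written; the consumer is expected to know the byte order.
        System.out.println(startsWithBom(s.getBytes(StandardCharsets.UTF_16LE))); // false
        System.out.println(startsWithBom(s.getBytes(StandardCharsets.UTF_16BE))); // false
    }
}
```

This matches the Unicode convention Christiaan quotes: a BOM for plain "UTF-16", none for the endianness-specific variants.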

  • Flying Saucer and utf encoding

    Hello,
    This post is on the heels of a former post by another user:
    http://forum.java.sun.com/thread.jspa?threadID=5265104&messageID=10105301
    I am using Flying Saucer in a J2EE application. The server is configured to encode using UTF-8, and I am required to keep this setting. I would like to know if there is a way to change the default character encoding from Latin-1 to UTF-8. It seems like there should be a method to set the encoding.
    I am aware of this instruction on https://xhtmlrenderer.dev.java.net/r7/users-guide-r7.html#configuration
    import com.lowagie.text.pdf.BaseFont;
    ITextRenderer renderer = new ITextRenderer();
    FontResolver resolver = renderer.getFontResolver();
    resolver.addFont(
        "C:\\WINNT\\Fonts\\ARIALUNI.TTF",
        BaseFont.IDENTITY_H,
        BaseFont.NOT_EMBEDDED);
    However, this does not work for me, because the instantiation of the ITextRenderer object is what gives me runtime exceptions.
    I commented everything out and left just this line: ITextRenderer renderer = new ITextRenderer(); and I still get runtime exceptions.
    Any help would be greatly appreciated,
    Thanks.

    ok, I have hurdled one obstacle: I forgot to put /resources/conf/xhtmlrenderer.conf into the classpath.
    Now I have another problem.
    The following code will not work for me, because I do not have ARIALUNI.TTF:
    ITextRenderer renderer = new ITextRenderer();
    FontResolver resolver = renderer.getFontResolver();
    resolver.addFont(
        "C:\\WINNT\\Fonts\\ARIALUNI.TTF",
        BaseFont.IDENTITY_H,
        BaseFont.NOT_EMBEDDED);
    I did try using ARIAL.TTF, but that did not work. It gives me the same error as if I did not use the FontResolver.
    In other words:
    =================================================================
    CodeSnippet1
    =================================================================
    String inputFile = "test.xhtml";
    String url = new File(inputFile).toURI().toURL().toString();
    String outputFile = "firstdoc.pdf";
    OutputStream os = new FileOutputStream(outputFile);
    ITextRenderer renderer = new ITextRenderer();
    ITextFontResolver resolver = renderer.getFontResolver();
    resolver.addFont ("C:\\WINDOWS\\Fonts\\ARIAL.TTF", BaseFont.IDENTITY_H, BaseFont.NOT_EMBEDDED);
    renderer.setDocument(url);
    renderer.layout();
    renderer.createPDF(os);
    os.close();
    =================================================================
    =================================================================
    CodeSnippet2
    =================================================================
    String inputFile = "test.xhtml";
    String url = new File(inputFile).toURI().toURL().toString();
    String outputFile = "firstdoc.pdf";
    OutputStream os = new FileOutputStream(outputFile);
    ITextRenderer renderer = new ITextRenderer();
    renderer.setDocument(url);
    renderer.layout();
    renderer.createPDF(os);
    os.close();
    =================================================================
    CodeSnippet1 and CodeSnippet2 both give this exception:
    org.xhtmlrenderer.util.XRRuntimeException: Can't load the XML resource (using TRaX transformer). java.io.IOException: Stream closed
         at org.xhtmlrenderer.resource.XMLResource$XMLResourceBuilder.createXMLResource(XMLResource.java:191)
         at org.xhtmlrenderer.resource.XMLResource.load(XMLResource.java:71)
         at org.xhtmlrenderer.swing.NaiveUserAgent.getXMLResource(NaiveUserAgent.java:205)
         at org.xhtmlrenderer.pdf.ITextRenderer.loadDocument(ITextRenderer.java:102)
         at org.xhtmlrenderer.pdf.ITextRenderer.setDocument(ITextRenderer.java:106)
    Again any help will be greatly appreciated.

  • JAXB: Encoding error (Malformed UTF-8, trying to set ISO)

    Hi, I get this error when doing a Unmarshaller.unmarshal( anURL );
    DefaultValidationEventHandler: [WARNING]: Declared encoding "ISO-8859-1" does not match actual one "UTF-8"; this might not be an error.
    DefaultValidationEventHandler: [FATAL_ERROR]: Character conversion error: "Malformed UTF-8 char -- is an XML encoding declaration missing?" (line number may be too low).
    org.xml.sax.SAXParseException: Character conversion error: "Malformed UTF-8 char -- is an XML encoding declaration missing?" (line number may be too low).
         at org.apache.crimson.parser.InputEntity.fatal(InputEntity.java:1100)
    (...) etc. So far, I have set the ISO-8859-1 encoding in 4 places:
    1. The XML response from the servlet: xml.append("<?xml version=\"1.0\" encoding=\"ISO-8859-1\"?>\n\n")
    2. The original XSD from which the JAXB classes were generated
    3. In the client, after receiving the string containing the XML: byte b[] = xmlString.getBytes("ISO-8859-1");
    4. In the client, when posting to the server: httpConn.setRequestProperty("Content-Type","text/xml;charset=ISO-8859-1");
    Yet JAXB still thinks I'm doing UTF-8 here! Did I forget something?
    thanks,
    bjorn

    bump .. no one knows why this happens?
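The warning bjorn quotes ("Declared encoding does not match actual one") can be reproduced with the plain JDK StAX parser, no JAXB needed. A minimal sketch, assuming a hypothetical element name and content, showing that an XML declaration claiming ISO-8859-1 over bytes that are actually UTF-8 garbles the text, while a matching declaration round-trips cleanly:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class EncodingMismatch {
    // Parses the XML bytes (honoring the declared encoding) and returns the first text node.
    public static String firstText(byte[] xml) throws Exception {
        XMLInputFactory f = XMLInputFactory.newInstance();
        f.setProperty(XMLInputFactory.IS_COALESCING, Boolean.TRUE); // one CHARACTERS event per text run
        XMLStreamReader r = f.createXMLStreamReader(new ByteArrayInputStream(xml));
        while (r.hasNext()) {
            if (r.next() == XMLStreamConstants.CHARACTERS) return r.getText();
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        // Declaration says ISO-8859-1, but the bytes are actually UTF-8: "ö" is mis-decoded.
        byte[] mismatched = "<?xml version=\"1.0\" encoding=\"ISO-8859-1\"?><name>björn</name>"
                .getBytes(StandardCharsets.UTF_8);
        System.out.println(firstText(mismatched));
        // Fix: make the declaration match the actual byte encoding.
        byte[] consistent = "<?xml version=\"1.0\" encoding=\"UTF-8\"?><name>björn</name>"
                .getBytes(StandardCharsets.UTF_8);
        System.out.println(firstText(consistent)); // björn
    }
}
```

This is why setting ISO-8859-1 in four places doesn't help if the bytes on the wire are produced with a UTF-8 writer: the parser believes the bytes, or the declaration, whichever it is told to trust, and any disagreement between them corrupts non-ASCII characters.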

  • CONVERSION FROM ANSI ENCODED FILE TO UTF-8 ENCODED FILE

    Hi All,
    I have some issues with converting an ANSI encoded file to a UTF-8 encoded file. Let me tell you in detail.
    I have installed language support for Thai on my operating system.
    Now, when I open Notepad, add Thai characters to the file and save it with ANSI encoding, it saves perfectly, and I am also able to see the characters on opening the file again.
    This file needs to be read by my application, stored in the database, and the Thai characters should be displayed on a JSP after fetching the data from the database. Currently it is showing junk characters on the JSP, the reason being that my database (a UTF-8 compliant database) has junk data. It has junk data because my application is not able to read the file correctly.
    If I save the file with UTF-8 encoding it works fine, but my business requirement is that the file is system generated and by default it is encoded in ANSI format. So I need to convert the encoding from ANSI to UTF-8. Can any of you guide me on how to do this conversion?
    Regards
    Gaurav Nigam

    Guessing the encoding of a text file by examining its contents is tricky at best, and should only be done as a last resort. If the file is auto-generated, I would first try reading it using the system default encoding. That's what you're doing whenever you read a file with a FileReader. If that doesn't work, try using an InputStreamReader and specifying a Thai encoding like TIS-620 or cp838 (I don't really know anything about Thai encodings; I just picked those out of a quick Google search). Once you've read the file correctly, you can write the text to a new file using an OutputStreamWriter and specifying UTF-8 as the encoding. It shouldn't really be necessary to transcode files like this, but without knowing a lot more about your situation, that's all I can suggest.
    As for native2ascii, it isn't for encoding conversions. All it does is replace each non-ASCII character with its six-character Unicode escape, so "voilá" becomes "voil\u00e1". In other words, it avoids the problem of character encodings by converting the file's contents to a form that can be stored as ASCII. It's mainly used for converting property or resource files to a form that can be read by the Properties and ResourceBundle classes.
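The InputStreamReader/OutputStreamWriter transcoding described above can be sketched like this. The TIS-620 charset name is taken from the reply and is an assumption (its availability varies by JRE); the core idea is simply: decode with the source charset, re-encode as UTF-8:

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Reader;
import java.io.Writer;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class Transcode {
    // Reads the input file using the given source charset and rewrites it as UTF-8.
    public static void toUtf8(File in, Charset from, File out) throws IOException {
        try (Reader r = new BufferedReader(new InputStreamReader(new FileInputStream(in), from));
             Writer w = new OutputStreamWriter(new FileOutputStream(out), StandardCharsets.UTF_8)) {
            char[] buf = new char[4096];
            int n;
            while ((n = r.read(buf)) != -1) {
                w.write(buf, 0, n); // characters are re-encoded as UTF-8 on the way out
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // For the Thai "ANSI" file, the source charset might be something like
        // Charset.forName("TIS-620") -- name taken from the reply above, support varies.
        toUtf8(new File(args[0]),
               Charset.forName(args.length > 2 ? args[2] : "TIS-620"),
               new File(args[1]));
    }
}
```

The key point is that the source charset must be the one the file was actually written in; guessing it from the bytes, as the reply notes, is a last resort.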

  • Encoding Fonts from InDesign to UTF-8 for PDF SEO

    I export documents from InDesign CS3 to PDF files. Does anybody know how to adjust the font encoding to UTF-8?
    At the moment I always get ANSI coding in each PDF. The result is that the PDFs can't be accessed and indexed by the search engine spiders.
    Who is able to help me?
    Thanks ahead.
    Thomas

    How do you find the UTF encoding?
    I took an InDesign-created PDF and saved it as PDFXML. When I unzipped it, the container file looks like this:
    <?xml version="1.0" encoding="UTF-8"?>
    <container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
      <relationships xmlns:pdf="http://ns.adobe.com/pdf/2006">
        <relationship type="metadata" target="$path.xmp"/>
        <relationship type="pdf:annotation" target="$path.ann"/>
      </relationships>
    </container>
    Then I resaved the file as PDF and it grew from 4.1 to 4.3 MB. I've no idea if saving as xml changed the encoding, but I'd be curious to find out.

  • IDOC_AAE sender adapter XML : missing  encoding="UTF-8"?

    Hi All,
    We have an IDoc-to-JMS scenario using IDOC_AAE as the sender channel.
    Symptom
    The IDoc XML file created from the IDoc we receive from ECC should include the XML version and encoding information.
    Other Terms
    It should be:
    <?xml version="1.0" encoding="UTF-8"?>
    But we are getting: <?xml version="1.0" >
    Can you please share your views on this?
    Regards.,
    Siva

    Shiva,
    Add the XMLAnonymizerBean to the target channel's module chain to replace/add the UTF encoding declaration:
    AF_Modules/XMLAnonymizerBean
    Regards
    Aashish Sinha

  • Parsing a UTF-8 encoded XML Blob object

    Hi,
    I am having a really strange problem. I am fetching a database BLOB object containing XML and then parsing the XML. The XML has some UTF-8 encoded characters, and when I read the XML from the BLOB, these characters lose their encoding. I have tried several things, but by no means am I able to retain the UTF encoding. The characters causing real problems are mainly double quotes, inverted commas, and apostrophes. I am attaching the piece of code below, and you can see certain things I ended up doing. What else can I try? I am using the JAXP parser, but I don't think changing the parser would help, because I am storing the XML file as I get it from the database, and at the very first stage it gets corrupted, and I have to retain the UTF encoding. I tried to get the encoding info from the XML, and it tells me cp1252 encoding; where did this come into the picture, and how can I get it back to UTF-8?
    Here the temp.xml itself gets corrupted. I have spent some 3 days on this issue. Help needed!!!
    ResultSet rs = null;
        Statement stmt = null;
        Connection connection = null;
        InputStream inputStream = null;
        long cifElementId = -1;
        //Blob xmlData = null;
        BLOB xmlData=null;
        String xmlText = null;
        RubricBean rubricBean = null;
        ArrayList arrayBean = new ArrayList();
          rs = stmt.executeQuery(strQuery);
         // Iterate till result set has data
          while (rs.next()) {
            rubricBean = new RubricBean();
            cifElementId = rs.getLong("CIF_ELEMENT_ID");
                    // get xml data which is in Blob format
            xmlData = (oracle.sql.BLOB)rs.getBlob("XML");
            // Read Input stream from blob data
             inputStream =(InputStream)xmlData.getBinaryStream(); 
            // Reading the inputstream of data into an array of bytes.
            byte[] bytes = new byte[(int)xmlData.length()];
             inputStream.read(bytes);  
           // Get the String object from byte array
             xmlText = new String(bytes);
           // xmlText=new String(szTemp.getBytes("UTF-8"));
            //xmlText = convertToUTF(xmlText);
            File file = new File("C:\\temp.xml");
            file.createNewFile();
            // Write to temp file
            java.io.BufferedWriter out = new java.io.BufferedWriter(new java.io.FileWriter(file));
            out.write(xmlText);
            out.close();

    What the code you posted is doing:
    // Read Input stream from blob data
    inputStream =(InputStream)xmlData.getBinaryStream();
    Here you have a stream containing binary octets which encode some text in UTF-8.
    // Reading the inputstream of data into an array of bytes.
    byte[] bytes = new byte[(int)xmlData.length()];
    inputStream.read(bytes);
    Here you are reading between zero and xmlData.length() octets into a byte array. read(byte[]) returns the number of bytes read, which may be less than the size of the array, and you don't check it.
    xmlText = new String(bytes);
    Here you are creating a string with the data in the byte array, using the platform's default character encoding. Since you mention cp1252, I'm guessing your platform is Windows.
    // xmlText=new String(szTemp.getBytes("UTF-8"));
    I don't know what szTemp is, but xmlText = new String(bytes, "UTF-8"); would create a string from the UTF-8 encoded characters; but you don't need to create a string here anyway.
    //xmlText = convertToUTF(xmlText);
    File file = new File("C:\\temp.xml");
    file.createNewFile();
    // Write to temp file
    java.io.BufferedWriter out = new java.io.BufferedWriter(new java.io.FileWriter(file));
    This creates a Writer to write to the file using the platform's default character encoding, i.e. cp1252.
    out.write(xmlText);
    This writes the string to out using cp1252.
    So you have created a string treating UTF-8 as cp1252, then written that string to a file as cp1252, which is to be read as UTF-8. So it gets mis-decoded twice.
    As the data is already UTF-8 encoded, and you want UTF-8 output, just write the binary data to the output file without trying to convert it to a string and then back again:
    // not tested, as I don't have your Oracle classes
    final InputStream inputStream = new BufferedInputStream((InputStream) xmlData.getBinaryStream());
    final long length = xmlData.length();
    final int BUFFER_SIZE = 1024;                  // these two can be
    final byte[] buffer = new byte[BUFFER_SIZE];   // allocated outside the method
    final OutputStream out = new BufferedOutputStream(new FileOutputStream(file));
    for (long count = 0; count < length; ) {
        final int bytesRead = inputStream.read(buffer, 0, (int) Math.min(BUFFER_SIZE, length - count));
        if (bytesRead == -1) break;                // unexpected end of stream
        out.write(buffer, 0, bytesRead);
        count += bytesRead;
    }
    out.close();
    Pete

  • c:import character encoding problem (utf-8)

    Aloha @ all,
    I am currently importing a file using the <c:import> functionality (<c:import url="module/item.jsp" charEncoding="UTF-8">), but it seems that the returned data is not encoded with UTF-8 and hence is not displayed correctly. The overall file header is:
    HTTP/1.1 200 OK
    Server: Apache-Coyote/1.1
    Set-Cookie: JSESSIONID=E67F9DAF44C7F96C0725652BEA1713D8;
    Content-Type: text/html;charset=UTF-8
    Content-Length: 6861
    Date: Thu, 05 Jul 2007 04:18:39 GMT
    Connection: close
    I've set the file-encoding on all pages to :
    <%@ page contentType="text/html;charset=UTF-8" %>
    <%@ page pageEncoding="UTF-8"%>
    but the error remains... is this a known bug and is there a workaround?

    Partially, yes. It turns out that I had created the documents in Eclipse with a different character encoding, so the entire document was actually not UTF-encoded...
    So I changed each document's encoding in Eclipse to UTF and got it working just fine...

  • DW destroys file encoding while saving them

    hello there!
    I wanted to modify the latest WordPress file "wp-config-sample.php" by opening it and saving it without any changes, and guess what: Dreamweaver CS3 modifies the encoding of this file, making it unusable to the WordPress installation.
    I tried to open this file in Notepad and save it in different encodings, and here are the results:
    save as --> ANSI --> works
    save as --> Unicode --> does not work
    save as --> Unicode Big Endian --> does not work
    save as --> UTF --> works
    So at least saving in ANSI or UTF encoding works, but in Dreamweaver whatever I try doesn't work. Does that mean Notepad is more advanced than DW?!
    The file "wp-config-sample.php" doesn't contain any header information, just some PHP variables to define.
    Dreamweaver CS3 maybe acts strangely when it doesn't find any encoding information in the file header.
    any help?
    NB: I checked the settings in the preferences and tried saving with the BOM signature and without it.. but always the same problem

    Well, I discovered what was messing with my code: the line break!
    If you go to Edit --> Preferences --> Code format, you have to set the line break type to CR LF (Windows).
    It was set to CR (Macintosh).
    have a nice day.

  • Change of Encoding in Sender JMS Adapter

    Hi,
    My scenario is like this:
    FTP -> MQ Queue -> JMS Queue -> XI -> R/3
    From the JMS queue, IDoc XML is coming into XI in UTF-8 encoding. In that IDoc XML there are certain special characters, say, some Latin or European characters. But in the XI -> R/3 scenario, the data is not getting posted to R/3. On the XI side, it is not giving any error, but it is setting a flag (in the qRFC Monitor) which says "Error between two Character Sets".
    I am unable to rectify this error. One solution I have guessed at: it may be possible to resolve this issue if I can change the encoding in XI to ISO-8859-1. But I don't know how to change the encoding in the sender JMS adapter in XI. Could you please help me to resolve this issue?
    BR
    Soumya B

    Hi,
    Check following:
    1. In SXMB_MONI, check the XML structure generated for the inbound and outbound messages, and the encoding used in each. This can be checked by looking at the first line of the generated XML. For UTF encoding, the first line usually looks as follows:
    <?xml version="1.0" encoding="UTF-8" ?>
    2. If the encodings of the two differ, try to figure out which encoding is used for the Message Type in XI. To match the encodings, you could change the XSD used for creating the message type in XI. This way, the character encoding can be changed. This solution should suffice if the problem occurred in the XI-to-R/3 scenario.
    Also, for learning more about character encodings, you could visit following link:
    http://www.cs.tut.fi/~jkorpela/chars.html
    Hope it helps.
    Bhavish.
    Reward points if comments found useful:-)

  • [CS3 - JS - Mac] Problem with encoding

    Hi,
    I made a script that performs a lot of actions in ID.
    Every time this script performs an action, it writes a line to a global variable, and at the end of the script it writes this var into a text file (in the Documents folder).
    Yes, it's a log file...
    While I was testing this script through the Toolkit, everything went right.
    Then I added a menu action inside the script to call the script from the application, and something strange happened in the log file (which is a UTF-8 encoded text file) and also in the alerts ID shows. Both display text changed from this:
    "È necessario effettuare una selezione" (running the script from the toolkit)
    to this:
    "È necessario effettuare una selezione" (running the script from the ID menu)
    So I think it's an encoding problem...
    I just added this code:
    #targetengine "Lele";
    var Lele_menu = app.menus.item("Main").submenus.add("Lele");
    //     Menu
    var main_action = app.scriptMenuActions.add("Update");
    var main_event_listener = main_action.eventListeners.add("onInvoke", function(){main();});
    var main_menu = Lele_menu.menuItems.add(main_action);
    //     Functions
    What's the point?
    Hope you understood.
    Thanks!
    Lele

    I had problems with UTF encoding so I use this function to write the log file:
    var log_file = new File(file_path);
    log_file.encoding = "UTF8";
    log_file.lineFeed = "unix";
    log_file.open("w");
    log_file.write("\uFEFF" + text_var);
    log_file.close();
    Where text_var is the log string.
    When it's written from the ESTK everything is right; when called from the menu it isn't.
    It's strange that it also involves the alert text, innit?

  • Import export in different encoding

    Is there a way to change the Import/Export Wizard's output to UTF encoding? Please help.

    please check this:
    https://social.msdn.microsoft.com/Forums/sqlserver/en-US/fef48cda-dfb9-456e-9a54-ec2d989a47c3/output-utf8-file?forum=sqlexpress

  • Invalid Characters shown in UTF-8 character set

    There is an XMLP report whose template output character set is ISO-8859-1. The character set ISO-8859-1 is required for this report as per the Spanish authorities. When the report is run, output gets generated in the output directory of the application server. This output file doesn't contain any invalid characters.
    But when the output is opened from the SRS window, which opens it in a browser, invalid characters are shown for characters like Ñ, É etc.
    Investigation done:
    Found that the output generated on the server has ISO encoding and hence doesn't contain any invalid characters, whereas the output generated from the SRS window is in UTF encoding, so it seems the invalid characters are displayed when conversion takes place from ISO to UTF-8 format.
    I created the eText output using the data XML and template with the BI Publisher tool; the output is in ISO encoding. So if I go and change the encoding to UTF-8 by opening it in Explorer or Notepad++, invalid characters are shown for Ñ, É etc.
    Is there any limitation that output from the SRS window will show only in UTF-8 encoding? If not, then please suggest.
    Thanks,
    Saket
    Edited by: 868054 on Aug 2, 2012 3:05 AM

    Hi Srini,
    When the customer views the output from the SRS window, it contains invalid characters because it is in the UTF-8 character set. The customer is on Oracle OnDemand, so they cannot take the output generated on the server; every time, they have to raise a request to Oracle for the output file. So the concern here is: why doesn't the output from the SRS window show valid characters?
    The reason could be the conversion from ISO format to UTF-8. How can this be resolved? Can the SRS window not generate output in ISO format?
    A quick reply will be appreciated as customer is chasing for an update.
    Thanks,
    Saket
    Edited by: 868054 on Aug 7, 2012 11:08 PM
