XDODTEXE forces output to be UTF-8?

I am using a data template as my XML generation source. The database character set is ISO-8859-1. I have some data that contains French characters - accents, graves, etc. When I use SQL or PL/SQL to generate the XML there is no issue - the output contains the French characters as expected. When I use the data template, I get gobbledygook characters - inverted question marks and other glyphs normally associated with cussing in comic books.
I looked at the data processor engine Java code and saw a line that writes to the output after encoding the data to UTF-8. Is this by design, or am I missing something?
Thanks,
Sunder
Edited by: SIyer on Nov 21, 2009 9:20 AM

Your SQL session or Unix session has the wrong encoding. Follow the instructions at the link below to find the NLS values; there is an order of precedence.
http://www.oracle.com/technology/tech/globalization/htdocs/nls_lang%20faq.htm
I would bet it's because the database is not set up correctly - it is not set up as UTF-8. UTF-8 is the character set you want, not ISO-8859-1; it covers many more characters than ISO-8859-1 does.
Also, make sure the data template declares the correct encoding:
<?xml version="1.0" encoding="UTF-8"?>
Ike Wiggins
http://bipublisher.blogspot.com
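For what it's worth, the symptom can be reproduced outside BI Publisher with a few lines of Java. This is an illustration only (the strings and class name are invented; it is not XDODTEXE source) of what happens when ISO-8859-1 and UTF-8 get mixed up on the same bytes:

    import java.nio.charset.StandardCharsets;

    // Illustration only (not XDODTEXE code): mixing up ISO-8859-1 and UTF-8
    // on the same bytes produces the garbage glyphs described above.
    public class CharsetMismatchDemo {
        public static void main(String[] args) {
            String source = "Québec, déjà, garçon";

            // ISO-8859-1 bytes decoded as UTF-8: accents become replacement characters.
            byte[] latin1 = source.getBytes(StandardCharsets.ISO_8859_1);
            System.out.println(new String(latin1, StandardCharsets.UTF_8));

            // UTF-8 bytes decoded as ISO-8859-1: each accent becomes a two-character sequence.
            byte[] utf8 = source.getBytes(StandardCharsets.UTF_8);
            System.out.println(new String(utf8, StandardCharsets.ISO_8859_1));
        }
    }

If you see the same kind of garbage, the mismatch is happening somewhere between the engine's UTF-8 output and whatever reads it, which is why the NLS_LANG precedence above matters.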

Similar Messages

  • Required output file in UTF-8 NO BOM format

    Hi,
    I am working with an IDoc-to-File scenario.
    Some of the data for the file will be in Chinese, so I used GB18030 as the file encoding parameter in the file receiver.
    But for the legacy system to process it, the file has to be in UTF-8 NO BOM format.
    So what encoding parameter should be used to get the output file in UTF-8 NO BOM format with the Chinese data intact?
    Regards,
    Manoj

    This might be helpful - check the following:
    Under File Encoding, specify a code page.
    The default setting is to use the system code page that is specific to the configuration of the installed operating system. The file content is converted to the UTF-8 code page before it is sent.
    Permitted values for the code page are the existing Charsets of the Java runtime. According to the SUN specification for the Java runtime, at least the following standard character sets must be supported:
    ■ US-ASCII
    Seven-bit ASCII, also known as ISO646-US, or the Basic Latin block of the Unicode character set
    ■ ISO-8859-1
    ISO character set for Western European languages (Latin Alphabet No. 1), also known as ISO-LATIN-1
    ■ UTF-8
    8-bit Unicode character format
    ■ UTF-16BE
    16-bit Unicode character format, big-endian byte order
    ■ UTF-16LE
    16-bit Unicode character format, little-endian byte order
    ■ UTF-16
    16-bit Unicode character format, byte order identified by an optional byte-order mark
    More at - http://help.sap.com/saphelp_nw04/helpdata/en/e3/94007075cae04f930cc4c034e411e1/content.htm
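    On the Java side (outside the PI file adapter configuration), it may help to know that a plain UTF-8 OutputStreamWriter does not emit a byte-order mark, so "UTF-8 NO BOM" is simply what you get when nothing adds a BOM explicitly. A rough sketch, with an invented file name and payload:

        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.OutputStreamWriter;
        import java.io.Writer;
        import java.nio.charset.StandardCharsets;

        // Sketch: writing through a UTF-8 OutputStreamWriter produces UTF-8 without a BOM.
        // "target.xml" and the payload are placeholders.
        public class Utf8NoBomWriter {
            public static void main(String[] args) throws IOException {
                String payload = "<doc>中文内容</doc>";
                try (Writer out = new OutputStreamWriter(
                        new FileOutputStream("target.xml"), StandardCharsets.UTF_8)) {
                    out.write(payload);
                }
            }
        }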

  • Is there a way to force the charset to utf-8 with the IIS plug-in?

    We're using AJAX. The initial request for the HTML has charset=utf-8 set on the HTTP header as seen in this Live HTTP Headers capture:
    http://plper.mysite.com/mysupport/index.jsf
    GET /mysupport/index.jsf HTTP/1.1
    Host: plper.mysite.com
    User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.14) Gecko/20080404 Firefox/2.0.0.14
    Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
    Accept-Language: en-us,en;q=0.5
    Accept-Encoding: gzip,deflate
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
    Keep-Alive: 300
    Connection: keep-alive
    Cookie: mysiteSiteUrl=http://www.mysite.com; Amysession=aHR0cDovL3BscGVyLmVtYy5jb206ODAv; JSESSIONID=FN3WLTNJFJCfYhHHVrwKvLHF2gGdnnTb11DrCyZqR9YbGhcG28lK!-1728721171; mysession=AAAAAgABAFBy5LRMDmjSRCN%2FByvfquVwFeKCpmES4x9lReRava35fxKfwcbJimb3YyPhEd0vBq7ZxgJVecL475TFZwQuSphLOwRWAQw2t7PEW%2BrxsfxgnQ%3D%3D
    HTTP/1.x 200 OK
    Date: Tue, 10 Jun 2008 18:53:01 GMT
    Server: Microsoft-IIS/6.0
    Cache-Control: no-store,no-cache,must-revalidate, no-cache="set-cookie"
    Pragma: No-cache
    Transfer-Encoding: chunked
    Content-Type: text/html;charset=UTF-8
    Expires: Thu, 01 Jan 1970 00:00:00 GMT
    Set-Cookie: JSESSIONID=09VTLTNWT07LlqnK22jTWwM8y5L9v1rmPf9CTW5TnGGKBvWvjJpP!-1728721171; path=/
    Content-Language: en-US
    X-Powered-By: Servlet/2.5 JSP/2.1
    Subsequent requests do not:
    http://plper.mysite.com/mysupport/index.jsf
    POST /mysupport/index.jsf HTTP/1.1
    Host: plper.mysite.com
    User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.14) Gecko/20080404 Firefox/2.0.0.14
    Accept: application/x-backbase+xml,application/xhtml+xml,application/xml,text/xml,application/x-www-form-urlencoded,*/*;q=0.5
    Accept-Language: en-us,en;q=0.5
    Accept-Encoding: gzip,deflate
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
    Keep-Alive: 300
    Connection: keep-alive
    Content-Type: application/x-www-form-urlencoded
    Referer: http://plper.mysite.com/mysupport/index.jsf
    Content-Length: 122
    Cookie: mysiteSiteUrl=http://www.mysite.com; Amysession=aHR0cDovL3BscGVyLmVtYy5jb206ODAv; JSESSIONID=09VTLTNWT07LlqnK22jTWwM8y5L9v1rmPf9CTW5TnGGKBvWvjJpP!-1728721171; mysession=AAAAAgABAFBRtE5lAyr85YM0aIap%2Bekf1Qu8FoA6BNh4JVl1JgvDNDQgYrQm5m9W%2FQa4HLK767CtXV5c%2FhtXchbug9%2BE1zoCmqSBqqYmqXE9VG1lXi%2F%2Brg%3D%3D
    Pragma: no-cache
    Cache-Control: no-cache
    BackbaseClientDelta=%5Bevt%3DsrQuery%3AsiteList%7Cevent%7Csubmit%5D%5Bvalue%3DsrQuery%3AsiteList%7Cmultivalue%7C3971957%5D
    HTTP/1.x 200 OK
    Date: Tue, 10 Jun 2008 18:58:17 GMT
    Server: Microsoft-IIS/6.0
    Content-Length: 1720
    Content-Type: text/xml
    X-Powered-By: Servlet/2.5 JSP/2.1
    Is there a way to force requests going through the proxy plug-in to get a charset=utf-8 set in the HTTP header for all requests?
    Thanks!
    Edited by f2racer at 06/10/2008 12:01 PM
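    Not a plug-in parameter, but one common workaround is to force the charset on the application server itself with a servlet filter, so every response (including the AJAX ones) carries charset=UTF-8. A sketch only - the class name is invented and this is not a WebLogic-specific API:

        import java.io.IOException;
        import javax.servlet.*;

        // Sketch of a servlet filter that forces UTF-8 on every request/response.
        // The class name is invented; register it in web.xml for the URL patterns you need.
        public class ForceUtf8Filter implements Filter {
            public void init(FilterConfig config) {}
            public void destroy() {}

            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                req.setCharacterEncoding("UTF-8");   // decode POST bodies as UTF-8
                res.setCharacterEncoding("UTF-8");   // adds charset=UTF-8 to the Content-Type
                chain.doFilter(req, res);
            }
        }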


  • Can I force JDBC Driver use UTF-8 Charset to encode?

    A similar option exists in MySQL, e.g.:
    jdbc:mysql://localhost:3306/test?useUnicode=true&characterEncoding=UTF-8
    Thanks,

    You must describe your requirements in more details. There is generally nothing special in reading/writing into a database that has UTF-8 (AL32UTF8) as its database character set. Data is read into/written from String variables, which are encoded in UTF-16 by Java design. JDBC transparently converts between UTF-16 and UTF-8.
    If you want to output a string into a file in UTF-8 encoding, it is no longer an Oracle problem but a normal Java programming issue. You need to create an appropriate OutputStreamWriter for your FileOutputStream.
    new OutputStreamWriter( new FileOutputStream(file), "UTF-8" );
    -- Sergiusz
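    A short sketch of what that looks like end to end (the table and column names are invented; the only encoding-specific piece is the OutputStreamWriter):

        import java.io.*;
        import java.nio.charset.StandardCharsets;
        import java.sql.*;

        // Sketch: nothing charset-specific happens on the JDBC side; the encoding is
        // chosen only where the String leaves the JVM. Table/column names are invented.
        public class ExportUtf8 {
            static void export(Connection conn, File target) throws SQLException, IOException {
                try (Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery("SELECT description FROM products");
                     Writer out = new OutputStreamWriter(
                             new FileOutputStream(target), StandardCharsets.UTF_8)) {
                    while (rs.next()) {
                        out.write(rs.getString(1));
                        out.write('\n');
                    }
                }
            }
        }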

  • Forcing output via VGA on an MBP

    So I have four monitors hooked up via an active VGA splitter, only problem is that while my MBP recognizes the VGA adapter (screen fades blue for a second) it refuses to actually output any image to the VGA port.

    I am having the same problem. It seems to be an Apple hardware problem. I find it very frustrating that none of the Windows machines I have used over the years have ever had any problem with output to an external VGA device.
    How on earth can you use this computer for presentations when it simply doesn't work? (Using DVI works without any problem at all; it's only related to VGA.)
    I have searched the internet and found that there seem to be some devices it works on and some it doesn't. Maybe it outputs too little power on the VGA? Apple, can you fix this problem? It's very frustrating, and it seems there are several people having this problem. If it persists, I will have to stop using MBPs in the company I work for and move back to HP or Lenovo.

  • [SOLVED] apache forces UTF-8 charset

    Hello all!
    After a recent update, Apache began to force a UTF-8 charset in the Content-Type header. AddDefaultCharset seems to be ignored both in .htaccess and in the virtual host configuration. Does anybody know where this issue comes from? Not all parts of the world are ready for UTF-8 yet because of tons of legacy code.
    Thanks in advance.
    UPDATE: Not related to Apache, but to PHP. default_charset in php.ini needed to be set to an empty string.
    Last edited by dmiceman (2014-09-24 05:21:09)

    To answer myself:
              CR216621
              Changes made by mod_rewrite to the URI are reflected only to request_rec->uri. Hence, the Apache plug-in now uses request_rec->uri by default.
              Also, a new property, WLForwardUriUnparsed, has been added. When it is set to ON, the Apache plug-in uses request_rec->unparsed_uri instead of request_rec->uri.
              Note: The Apache plug-in with WLForwardUriUnparsed set to ON does not work correctly with mod_rewrite.

  • Unicode, UTF-8 and java servlet woes

    Hi,
    I'm writing a content management system for a website about russian music.
    One problem I'm having is trying to get a Java servlet to talk Unicode to the content management client.
    The client makes a request for a band, and the server then sends the XML to the client.
    The XML reading works fine and the client displays the Unicode fine from an XML file read locally (so the XMLReader class works fine).
    The servlet unmarshals the request perfectly (its just a filename).
    I then find the correct class and pass it through the XML writer. That returns the XML as a string, which I simply put into the output stream:
        out.write(XMLWrite(selectedBand));
    I have set the correct header property:
        response.setContentType("text/xml; charset=UTF-8");
    And to read it I do:
        //Make our URL
        URL url = new URL(pageURL);
        HttpURLConnection conn = (HttpURLConnection)url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true); // want to send
        conn.setRequestProperty( "Content-type", "application/x-www-form-urlencoded" );
        conn.setRequestProperty( "Content-length", Integer.toString(request.length()));
        conn.setRequestProperty("Content-Language", "en-US");
        //Add our parameters
        OutputStream ost = conn.getOutputStream();
        PrintWriter pw = new PrintWriter(ost);
        pw.print("myRequest=" + URLEncoder.encode(request, "UTF-8")); // here we "send" our body!
        pw.flush();
        pw.close();
        //Get the input stream
        InputStream ois = conn.getInputStream();
        InputStreamReader read = new InputStreamReader(ois);
        //Read
        int i;
        String s = "";
        Log.Debug("XMLServerConnection", "Response follows:");
        while((i = read.read()) != -1 ){
            System.out.print((char)i);
            s += (char)i;
        }
        return s;
    Now when I print
        read.getEncoding()
    it claims ISO8859_1. Something's wrong there, so if I force it to accept UTF-8:
        InputStreamReader read = new InputStreamReader(ois, "UTF-8");
    it now claims UTF8. However, all of the data has lost its Unicode; any Unicode character is replaced with a question mark character! This happens even when I don't force the input stream to be UTF-8.
    More so, if I view the page in my browser, it does the same thing.
    I've had a look around and I can't see a solution to this. Have I set something up wrong?
    I've set "-encoding utf8" as a compiler flag, but I don't think this would affect it.

    I don't know what your problem is but I do have a couple of comments -
    1) In conn.setRequestProperty( "Content-length", Integer.toString(request.length())); the length of your content is not request.length(). It is the length of the URL-encoded data.
    2) Why do you need to send URL-encoded data? Why not just send the bytes?
    3) If you send bytes then you can write straight to the OutputStream and you won't need to convert to characters to write to a PrintWriter.
    4) Since you are reading from the connection you need to setDoInput() to true.
    5) You need to get the character encoding from the response so that you can specify the encoding in InputStreamReader read = new InputStreamReader(ois, characterEncoding);
    6) Reading a single char at a time from an InputStream is very inefficient. (A rough sketch of points 4-6 follows below.)
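    Something along these lines, as a sketch only (method and variable names are mine, not from the original code):

        import java.io.*;
        import java.net.HttpURLConnection;

        // Rough illustration of points 4-6 above; names are invented.
        class ResponseReader {
            static String read(HttpURLConnection conn) throws IOException {
                conn.setDoInput(true);                       // point 4: we read from the connection
                String contentType = conn.getContentType();  // e.g. "text/xml; charset=UTF-8"
                String enc = "UTF-8";                        // fallback if no charset is declared
                int idx = (contentType == null) ? -1
                        : contentType.toLowerCase().indexOf("charset=");
                if (idx >= 0) {
                    enc = contentType.substring(idx + "charset=".length()).trim();  // point 5
                }
                StringBuilder sb = new StringBuilder();
                try (Reader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream(), enc))) {
                    char[] buf = new char[4096];             // point 6: read in chunks
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        sb.append(buf, 0, n);
                    }
                }
                return sb.toString();
            }
        }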

  • Custom Component Creates Unicode Files instead of UTF-8

    We have a custom component running in our SAP Enterprise Portal. The component creates an XML output file encoded in UTF-8, since we force the component to do so. In the environment we're using right now, the output is in "Unicode" (UTF-16), which is obviously wrong. In what way can we amend the settings of our WebAS to switch the output format back to UTF-8? Where/with what means do we have to do that?
    We are using Enterprise Portal 6.0 SP2, therefore WebAS 6.2.

    Alexander,
    Were you able to find the answer to your question?  I am experiencing the same issue and need to know how to create an XML output file in UTF-8 encoding.
    Thanks,
    Shannon
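    For the generic Java part of the question - producing an XML file that is genuinely UTF-8 - a plain JAXP sketch looks like this (nothing WebAS-specific; class, element and file names are invented):

        import java.io.File;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.transform.*;
        import javax.xml.transform.dom.DOMSource;
        import javax.xml.transform.stream.StreamResult;
        import org.w3c.dom.Document;

        // Sketch: serialize a DOM to a file as UTF-8 (both the bytes and the XML declaration).
        public class Utf8XmlExport {
            public static void main(String[] args) throws Exception {
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder().newDocument();
                doc.appendChild(doc.createElement("root"));

                Transformer t = TransformerFactory.newInstance().newTransformer();
                t.setOutputProperty(OutputKeys.ENCODING, "UTF-8");
                t.transform(new DOMSource(doc), new StreamResult(new File("export.xml")));
            }
        }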

  • Creation of Note in PDF file generated from Output Server

    I would like to be able to force Output Server to create a Note (in expanded format) on a PDF file. I see the bookmark information in the manual, however I was not able to locate any documentation referencing the note tool. Is this possible? If so, can someone provide a brief example of a command that would need to be passed through the data stream to generate a note on a PDF document?

    Unfortunately, Output Server doesn't allow you to create PDF comments. The best you could probably do is generate a PDF and then create an XFDF that references that PDF and also contains your comments. Check the Adobe PDF Developer's site for information on XFDF.
    Regards,
    Rob McDougall
    Indigo Pacific Ltd.

  • [SOLVED] Is full UTF-8 supported in unicode-rxvt?

    Does urxvt have full UTF-8 support, or does it only support 16-bit Unicode characters?
    I have set the Inconsolata font for urxvt by adding this line to .Xdefaults: URxvt*font: xft:Inconsolata:pixelsize=17:antialias=True:hinting=True
    I can write most characters, like á, ñ, etc., but it looks like characters with code points above 16 bits are not printed. E.g.:
    - If I open XFCE Terminal (also configured to use Inconsolata) and input [CTRL]+[SHIFT]+U,1F600, it outputs the smiling face.
    - If I open urxvt and, while holding [CTRL]+[SHIFT], type 1F600, only an empty rectangle is shown.
    The same happens for other characters I have tested with code points above 16 bits; it works flawlessly with 16-bit code points. Is there a way to make these characters work? I need the smileys because of the Telegram CLI client...
    Last edited by doragasu (2014-03-08 20:41:45)

    locale output:
    LANG=es_ES.UTF-8
    LC_CTYPE="es_ES.UTF-8"
    LC_NUMERIC="es_ES.UTF-8"
    LC_TIME="es_ES.UTF-8"
    LC_COLLATE="es_ES.UTF-8"
    LC_MONETARY="es_ES.UTF-8"
    LC_MESSAGES="es_ES.UTF-8"
    LC_PAPER="es_ES.UTF-8"
    LC_NAME="es_ES.UTF-8"
    LC_ADDRESS="es_ES.UTF-8"
    LC_TELEPHONE="es_ES.UTF-8"
    LC_MEASUREMENT="es_ES.UTF-8"
    LC_IDENTIFICATION="es_ES.UTF-8"
    LC_ALL=
    But I don't think this is a locale problem because, as I already wrote, I can write most characters, including Spanish (es_ES) ones like á, Á, ñ, etc.

  • Why are newer versions of Firefox having problems with UTF-16le (Windows default unicode set)

    I have a website that has multiple languages on it, so I've been using UTF-16LE for it. Everything was working well on multiple browsers until the last few months, when Firefox alone stopped displaying it properly. I can force the page into UTF-16LE, but then some of my graphical links no longer work and I cannot navigate through the pages unless I force every single page to UTF-16LE EVERY SINGLE TIME. This problem is not unique to my computer, either, as it has happened with every computer I have tried in the last few months.

    As answered a few weeks back (/questions/770955): the server sends the pages as UTF-8 and that is what Firefox uses to display the pages. You need to reconfigure the server and make it send the pages with the correct content type (UTF-16), or with no content type at all if you want Firefox to use the content type (BOM) in the file.
    A good place to ask questions and advice about web development is the mozillaZine Web Development/Standards Evangelism forum. The helpers at that forum are more knowledgeable about web development issues. You need to register at the mozillaZine forum site in order to post there.
    See http://forums.mozillazine.org/viewforum.php?f=25

  • Transforming XML source to (X)HTML StreamResult outputs long comment

    Hi,
    I'm using javax.xml.transform.* to transform XML data into an XHTML page. The XML data represents a document (contains information about headers, paragraphs, related pages, etc.) and an XSL file is used to properly transform that data into an XHTML-compatible file. All transformations go just as they should; however, the XHTML output contains a very lengthy comment about elements and attributes concerning HTML. Obviously, I do not want this comment in my final output. As a side note: I'm using a StreamSource(File) for the XML data source, a StreamSource(File) for the XSL file and a StreamResult(HttpServletResponse.getOutputStream()) for the HTML output. I'm not sure whether the problem is with using a StreamResult directly (rather than a SAXResult), or because I'm outputting directly to the servlet output stream. All input and output is in UTF-8, and the HttpServletResponse has been configured using setCharacterEncoding("UTF-8") and setContentType("text/html").
    Everything works fine, except I get this stinkin' long comment, which I want gone.
    Thanks,
    Yuthura
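    For reference, the setup described above boils down to roughly the following sketch (file names and the wrapper method are placeholders, not the original code):

        import java.io.File;
        import javax.servlet.http.HttpServletResponse;
        import javax.xml.transform.*;
        import javax.xml.transform.stream.*;

        // Sketch of the pipeline described above; "page.xsl" and "document.xml" are placeholders.
        class PageRenderer {
            static void render(HttpServletResponse response) throws Exception {
                response.setCharacterEncoding("UTF-8");
                response.setContentType("text/html");
                Transformer t = TransformerFactory.newInstance()
                        .newTransformer(new StreamSource(new File("page.xsl")));
                t.transform(new StreamSource(new File("document.xml")),
                            new StreamResult(response.getOutputStream()));
            }
        }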

    I get the following result.html file (properly transformed and displayed in my browser, except for the comment):
    <!--================== Imported Names ====================================-->
    <!-- media type, as per [RFC2045] -->
    <!-- comma-separated list of media types, as per [RFC2045] -->
    <!-- a character encoding, as per [RFC2045] -->
    <!-- a space separated list of character encodings, as per [RFC2045] -->
    <!-- a language code, as per [RFC3066] -->
    <!-- a single character, as per section 2.2 of [XML] -->
    <!-- one or more digits -->
    <!-- space-separated list of link types -->
    <!-- single or comma-separated list of media descriptors -->
    <!-- a Uniform Resource Identifier, see [RFC2396] -->
    <!-- a space separated list of Uniform Resource Identifiers -->
    <!-- date and time information. ISO date format -->
    <!-- script expression -->
    <!-- style sheet data -->
    <!-- used for titles etc. -->
    <!-- nn for pixels or nn% for percentage length -->
    <!-- pixel, percentage, or relative -->
    <!-- integer representing length in pixels -->
    <!-- these are used for image maps -->
    <!-- comma separated list of lengths -->
    <!--=================== Generic Attributes ===============================-->
    <!-- core attributes common to most elements
    id document-wide unique id
    class space separated list of classes
    style associated style info
    title advisory title/amplification
    -->
    <!-- internationalization attributes
    lang language code (backwards compatible)
    xml:lang language code (as per XML 1.0 spec)
    dir direction for weak/neutral text
    -->
    <!-- attributes for common UI events
    onclick a pointer button was clicked
    ondblclick a pointer button was double clicked
    onmousedown a pointer button was pressed down
    onmouseup a pointer button was released
    onmousemove a pointer was moved onto the element
    onmouseout a pointer was moved away from the element
    onkeypress a key was pressed and released
    onkeydown a key was pressed down
    onkeyup a key was released
    -->
    <!-- attributes for elements that can get the focus
    accesskey accessibility key character
    tabindex position in tabbing order
    onfocus the element got the focus
    onblur the element lost the focus
    -->
    <!--=================== Text Elements ====================================-->
    <!-- these can occur at block or inline level -->
    <!-- these can only occur at block level -->
    <!-- %Inline; covers inline or "text-level" elements -->
    <!--================== Block level elements ==============================-->
    <!-- %Flow; mixes block and inline and is used for list items etc. -->
    <!--================== Content models for exclusions =====================-->
    <!-- a elements use %Inline; excluding a -->
    <!-- pre uses %Inline excluding big, small, sup or sup -->
    <!-- form uses %Block; excluding form -->
    <!-- button uses %Flow; but excludes a, form and form controls -->
    <!--================ Document Structure ==================================-->
    <!-- the namespace URI designates the document profile -->
    <!--================ Document Head =======================================-->
    <!-- content model is %head.misc; combined with a single
    title and an optional base element in any order -->
    <!-- The title element is not considered part of the flow of text.
    It should be displayed, for example as the page header or
    window title. Exactly one title is required per document.
    -->
    <!-- document base URI -->
    <!-- generic metainformation -->
    <!--
    Relationship values can be used in principle:
    a) for document specific toolbars/menus when used
    with the link element in document head e.g.
    start, contents, previous, next, index, end, help
    b) to link to a separate style sheet (rel="stylesheet")
    c) to make a link to a script (rel="script")
    d) by stylesheets to control how collections of
    html nodes are rendered into printed documents
    e) to make a link to a printable version of this document
    e.g. a PostScript or PDF version (rel="alternate" media="print")
    -->
    <!-- style info, which may include CDATA sections -->
    <!-- script statements, which may include CDATA sections -->
    <!-- alternate content container for non script-based rendering -->
    <!--=================== Document Body ====================================-->
    <!-- generic language/style container -->
    <!--=================== Paragraphs =======================================-->
    <!--=================== Headings =========================================-->
    <!--
    There are six levels of headings from h1 (the most important)
    to h6 (the least important).
    --> ... (more to come)

  • UTF8, UTF-16 & Cloudscape

    Hi everyone,
    I have a problem when I'm trying to store and retrieve Big5 chars with Cloudscape
    (which claims to support Unicode just as Java does). These are the steps:
    1. Prepare my Big5 chars
    2. Run native2ascii -encoding UTF-16 (UTF8 gives MalformedInputException)
    3. Running native2ascii -reverse perfectly reverses everything and I could see the Big5 chars.
    So I guess this should be right.
    4. Inserted into Cloudscape by SQL: insert into theuser values('\uXXXX', 'elite02'); just to try out.
    SQL select shows ? though.
    5. I retrieve the String with normal rs.getString();
    6. I output the string to a file to check if it is correct by doing:
    new OutputStreamWriter() with UTF-16 (I got some other character), UTF8 (gives a ?) and even Big5 and I got crap.
    Correct me if I'm wrong but my rationale is:
    Java stores String as Unicode UTF-16 and since Cloudscape claims that I can just get
    the Unicode String without any manual conversion, logically my String should contain
    the right thing. Now when I output the String to file, wouldn't using the UTF-16 solve my
    problem? I mean since String is UTF-16 and my output is also UTF-16, right?
    Or do I still need to convert the String from Cloudscape to UTF8 (bear in mind this actually
    gives exception with native2ascii tool) or UTF-16 before I can output it to UTF-16 file?
    Or am I totally goofed up with this encoding stuff?
    thank you guys in advance!

    Hi,
    the -encoding parameter of native2ascii is the encoding of your source file (the target file is ASCII-encoded).
    So step 2 should be native2ascii -encoding Big5. Running native2ascii -reverse with the same -encoding as in step 2 should always lead back to your original input, regardless of the -encoding parameter used (as long as it is the same).
    I don't know Cloudscape, but I guess inserting your Strings from step 2 with a Java program should work (see the sketch below).
    But you can't insert the values directly via SQL (because \uXXXX only has meaning in Java).
    Regards
    Jan
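    A minimal sketch of such an insert (the table name and the 'elite02' value come from step 4 above; everything else is illustrative):

        import java.sql.*;

        // Sketch: a bind variable carries the String as Java chars, so no Unicode escape
        // literals and no manual charset conversion are needed on the way into the database.
        public class Big5Insert {
            static void insertRow(Connection conn, String big5Text) throws SQLException {
                try (PreparedStatement ps =
                         conn.prepareStatement("INSERT INTO theuser VALUES (?, ?)")) {
                    ps.setString(1, big5Text);
                    ps.setString(2, "elite02");
                    ps.executeUpdate();
                }
            }
        }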

  • UTF-16 required by "something"

    Hello everyone,
    I have what looks like a locale problem. First, some information about the environment:
    - DS 3.2 12.2.0.0000
    - Installed on SunOS 5.10
    - Source data is on .csv Files
    - Destination is a Oracle 10g DB on SunOS
    We get the following message when we load the files 1:1 from the .csv to the Oracle DB:
    "The initial environment locae  ()."
    The result is that we can't display "ü, ä, ö"; instead we get "??".
    - The locale of the DS Server is UTF-8
    - The locale of the Oracle Host is UTF-8
    - The locale of the Oracle Schema is UTF-8
    Something in between is forcing us to use UTF-16. We tried to nail down the datastore and the flat files in DS to UTF-8, but that didn't help.
    Hope you guys have any ideas.
    Thanks and Regards
    Sebastian

    Setting "NLS_LANG" to "AMERICAN_AMERICA.UTF8" in "AL_env.sh" solved the issue.
    Regards
    -Seb.

  • Tty on HDMI output?

    Hi, I have an ASRock nettop ION 330 which provides VGA and HDMI output; it is connected to a TV via HDMI.
    If it is powered on before the TV, I can't see BIOS POST messages, kernel messages, or any tty (it works if I switch the TV on before the nettop).
    When Xorg takes control, it correctly initializes the HDMI output, but even switching to a tty leads to a black screen.
    I suspect that it defaults to VGA at boot if it can't see any HDMI device connected.
    Is it possible to force Linux to output to HDMI even if it wasn't seen by the BIOS?
    Something makes me think that it should be possible, because if I turn on the TV, then turn on the box and then hibernate it, when it resumes it still outputs to HDMI (ttys + Xorg) even if the TV was switched off at resume time.
    Thanks
    Last edited by kokoko3k (2010-12-19 18:35:11)

    Which driver?
    With KMS drivers you can force outputs on/off or force resolutions with 'video=' in the kernel line.
    For example
    video=HDMI-1:e video=LVDS-1:d
    or
    video=HDMI-1:1920x1080@60
    With the nvidia driver you're probably out of luck.
    Last edited by thestinger (2010-12-19 19:03:13)
