CONVERSION FROM ANSI ENCODED FILE TO UTF-8 ENCODED FILE

Hi All,
I'm having trouble converting an ANSI-encoded file to a UTF-8-encoded file. Let me explain in detail.
I have installed language support for Thai on my operating system.
Now, when I open Notepad, add Thai characters to a file, and save it with ANSI encoding, it saves correctly and I can see the characters when I reopen the file.
This file needs to be read by my application, stored in the database, and the Thai characters should be displayed on a JSP after fetching the data from the database. Currently the JSP shows junk characters because my database (a UTF-8 compliant database) holds junk data, and it holds junk data because my application is not reading the file correctly.
If I save the file with UTF-8 encoding, everything works fine, but my business requirement is that the file is system-generated and encoded in ANSI by default. So I need to convert the encoding from ANSI to UTF-8. Can any of you guide me on how to do this conversion?
Regards
Gaurav Nigam

Guessing the encoding of a text file by examining its contents is tricky at best, and should only be done as a last resort. If the file is auto-generated, I would first try reading it using the system default encoding. That's what you're doing whenever you read a file with a FileReader. If that doesn't work, try using an InputStreamReader and specifying a Thai encoding like TIS-620 or cp838 (I don't really know anything about Thai encodings; I just picked those out of a quick Google search). Once you've read the file correctly, you can write the text to a new file using an OutputStreamWriter and specifying UTF-8 as the encoding. It shouldn't really be necessary to transcode files like this, but without knowing a lot more about your situation, that's all I can suggest.
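A minimal sketch of that read-then-rewrite step, assuming the source file really is TIS-620 (swap in whichever charset turns out to be right, and your own file names):
import java.io.*;
public class AnsiToUtf8 {
    public static void main(String[] args) throws IOException {
        // decode the legacy file using an explicit charset (TIS-620 is an assumption)
        Reader in = new BufferedReader(
                new InputStreamReader(new FileInputStream("thai-ansi.txt"), "TIS-620"));
        // re-encode the same characters as UTF-8
        Writer out = new BufferedWriter(
                new OutputStreamWriter(new FileOutputStream("thai-utf8.txt"), "UTF-8"));
        int c;
        while ((c = in.read()) != -1) {
            out.write(c);
        }
        in.close();
        out.close();
    }
}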
As for native2ascii, it isn't for encoding conversions. All it does is replace each non-ASCII character with its six-character Unicode escape, so "voilá" becomes "voil\u00e1". In other words, it avoids the problem of character encodings by converting the file's contents to a form that can be stored as ASCII. It's mainly used for converting property or resource files to a form that can be read by the Properties and ResourceBundle classes.
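For reference, native2ascii is invoked like this (the -encoding flag names the input file's charset, and -reverse converts the escapes back into characters):
native2ascii -encoding UTF-8 input.properties output.properties
native2ascii -reverse -encoding UTF-8 escaped.properties restored.properties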

Similar Messages

  • [svn:fx-trunk] 7661: Change from charset=iso-8859-1" to charset=utf-8" and save file with utf-8 encoding.

    Revision: 7661
    Author:   [email protected]
    Date:     2009-06-08 17:50:12 -0700 (Mon, 08 Jun 2009)
    Log Message:
    Change from charset=iso-8859-1" to charset=utf-8" and save file with utf-8 encoding.
    QA Notes:
    Doc Notes:
    Bugs: SDK-21636
    Reviewers: Corey
    Ticket Links:
        http://bugs.adobe.com/jira/browse/SDK-21636
    Modified Paths:
        flex/sdk/trunk/templates/swfobject/index.template.html

    same problem here with wl8.1
    have you solved it and if yes, how?
    thanks

  • Can't save file in UTF-8 encoding

    Hi,
    I've read everything I can find on this subject, from these forums to Google to newsgroups. Still no success.
    I am simply trying to save a file in UTF-8 format.
    This code depicts the methods I'm using:
    File file = new File("myFile");
    FileOutputStream fos = new FileOutputStream(file);
    OutputStreamWriter osw = new OutputStreamWriter(fos, "UTF-8");
    System.out.println("Encoding is " + osw.getEncoding());
    osw.write(myStringBuffer.toString());
    osw.close(); // flush and close, or the buffered bytes may never reach the file
    When this code is run, this is printed:
    Encoding is UTF8
    However, when I check the output file itself, it is ANSI 1252. I've tried compiling and running in both 1.4.2 and 1.5.0. Same results.
    Thanks for your help...

    I have checked it with two separate tools.
    1. I have opened up the file in TextPad and viewed the file properties. TextPad reports this: "code set: ANSI"
    2. I have a command-line utility that is bundled with a 3rd party application (commercial search engine). The command-line utility reports the character set and the language of a text file. The command-line utility reports this: "CHARSET: 1252"
    Is there something else to try?
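    One thing worth checking (a sketch; the file name below is mine): if the text you write contains only ASCII characters, a UTF-8 file with no BOM is byte-for-byte identical to an ANSI/1252 file, so tools like TextPad will report it as ANSI. Writing a non-ASCII character and dumping the raw bytes shows whether the writer really emits UTF-8:
    import java.io.*;
    import java.util.Arrays;
    public class Utf8Check {
        public static void main(String[] args) throws IOException {
            File file = new File("utf8check.txt");
            OutputStreamWriter osw = new OutputStreamWriter(new FileOutputStream(file), "UTF-8");
            osw.write("\u00E1"); // á: one char, but two bytes (C3 A1) in UTF-8
            osw.close();         // without close/flush the file may stay empty
            byte[] bytes = new byte[(int) file.length()];
            FileInputStream fis = new FileInputStream(file);
            fis.read(bytes);
            fis.close();
            System.out.println(Arrays.toString(bytes)); // [-61, -95] == 0xC3 0xA1 confirms UTF-8
        }
    }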

  • Export SQL View to Flat File with UTF-8 Encoding

    I've setup a package in SSIS to export a SQL view to a flat file and it's working fine.  I now need to make that flat file UTF-8 encoded.  The package executes but still shows the files as ANSI encoded.
    My package consists of a Source (SQL View) -> Derived Column (casts the fields to DT_WSTR) -> Destination Flat File (Set to output UTF-8 file).
    I don't get any errors to help me troubleshoot further.  I'm running SQL Server 2005 SP2.

    Unless there is a Byte-Order-Marker (BOM - hex file prefix: EF BB BF) at the beginning of the file, and unless your data contains non-ASCII characters, I'm unsure there is a technical difference in the files, Paul.
    That is, even if the file is "encoded" UTF-8, if your data is only ASCII values (decimal values 0-127, hex 00-7F), UTF-8 doesn't really serve a purpose over ANSI encoding.  Now if you're looking for UTF-8 with specifically the BOM included, and your data is all standard ASCII, the Flat File Connection Manager can't do that, it seems.
    What the flat file connection manager is doing correctly though, is encoding values that are over decimal 127/hex 7F in UTF-8 when the encoding of the connection manager is set to 65001 (UTF-8).
    Example:
    Input data built with a script component as a source (code at the bottom of this post) and with only one WSTR output column hooked to a flat file destination component:
    a string containing only decimal value 225 (Latin small letter a with acute - á)
    Encoding set to ANSI 1252 looks like:
    E1 0D 0A (the ANSI encoding of decimal character value 225 (E1) plus a CR-LF (0D 0A))
    Encoding set to UTF-8 65001 looks like:
    C3 A1 0D 0A (the UTF-8 encoding of decimal character value 225 (C3 A1) plus a CR-LF (0D 0A))
    Note that for values over decimal 127, UTF-8 takes at least two bytes and up to four for the remaining values available.
    So, I'm comfortable now, after sitting down and going through this, that the flat file connection manager is working correctly, unless you need a BOM.
    Imports System
    Imports System.Data
    Imports System.Math
    Imports Microsoft.SqlServer.Dts.Pipeline.Wrapper
    Imports Microsoft.SqlServer.Dts.Runtime.Wrapper

    Public Class ScriptMain
        Inherits UserComponent

        Public Overrides Sub CreateNewOutputRows()
            Output0Buffer.AddRow()
            Output0Buffer.col1 = ChrW(225)
        End Sub

    End Class
    Phil

  • Writing plain-text files in UTF-8 encoding under MacOSX

    Hello forums,
    I've run into a problem writing text files under MacOSX. I've tried several methods of writing; the one I'm currently using is as follows:
    private void stringToFile(File file, String string) throws IOException {
        OutputStream fout = new FileOutputStream(file);
        OutputStream bout = new BufferedOutputStream(fout);
        OutputStreamWriter out = new OutputStreamWriter(bout, "UTF-8");
        out.write(string);
        out.close();
    }
    However, when I open the file, letters other than A-Z appear corrupted in TextEdit, though BBEdit identifies the file as UTF-8 without BOM (still corrupt).
    The application uses some components I am not so familiar with, which makes trouble-shooting less of a breeze.
    It is a spring-framework web-app, and the string to be written is passed to the application through a HTTPClient.
    The string itself is constructed by
    MultipartFile content = multipartRequest.getFile(CONTENT_PARAM_NAME);
    String contentStr = (content != null) ? new String(content.getBytes(), "UTF-8") : null;
    and is created client-side by
    new FilePart("content", new ByteArrayPartSource("content", strContent.getBytes()), "", "UTF-8")
    I would appreciate any clues you have hinting towards a solution.
    I have tried to isolate the parts, for example by writing a fixed string (which still does not work properly, which leads me to think that the HTTPClient/Spring part is not to blame).

    Good idea,
    I'm now ensuring that the hashcodes of the client-side and server-side Strings match, supplying the bytes as UTF-8, and writing them properly with
    private void stringToFile(File file, String string) throws IOException {
        BufferedWriter out = new BufferedWriter(
                    new OutputStreamWriter(
                                new FileOutputStream(file), "UTF8"));
        out.write(string);
        out.close();
    }
    I adjusted stringToFile earlier, so I'm not sure whether the old code still works.
    TextEdit under MacOSX still views the files as corrupt, but BBEdit and EditPlus under Windows view the result fine.
    Lessons learned? Being very careful about identifying sub-tasks and dealing with them separately.
    Of course, my job is not done, since the damned AppleScripts dealing with the output treat the files as TextEdit does, but that's a task for tomorrow.
    Thank you for your assistance!
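    In case it helps anyone hitting the same TextEdit behaviour: one possible workaround (an assumption on my part, not something verified in this thread) is to write a UTF-8 byte-order mark as the very first character, since some editors only detect UTF-8 when a BOM is present:
    import java.io.*;
    public class Utf8BomWriter {
        private static void stringToFile(File file, String string) throws IOException {
            Writer out = new BufferedWriter(
                        new OutputStreamWriter(new FileOutputStream(file), "UTF-8"));
            out.write('\uFEFF'); // the writer encodes this as the UTF-8 BOM: EF BB BF
            out.write(string);
            out.close();
        }
        public static void main(String[] args) throws IOException {
            stringToFile(new File("bom-test.txt"), "éàçËôù");
        }
    }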

  • How to write csv or txt file through utl_file with UTF-8 Encoding

    Hi All,
    I need your help writing data from the DB to a csv or txt file with UTF-8 encoding through utl_file.
    Database character set: AL32UTF8
    Database version: 10g
    All the columns in the DB are of varchar2 type.
    Please let me know if there is any way of doing it.

    What was wrong with the info provided in the link(s) given?
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/functions027.htm#SQLRF00620

  • How to read UTF-8 encoded text file randomly?

    I am trying to read a text file which has been encoded in UTF-8. The problem is that I need to access the file randomly. RandomAccessFile is a low-level class, and there seems to be no way to wrap it in an InputStreamReader so that UTF-8 decoding can be done on the fly. Is there any easy way to do that? Below is a simplified version of my program.
    import java.io.*;
    public class Test {
        public Test(String filename) {
            try {
                RandomAccessFile rafTemIn = new RandomAccessFile(new File(filename), "r");
                while (true) {
                    char chr = rafTemIn.readChar();
                    System.err.println(chr);
                }
            } catch (EOFException e) {
                System.err.println("File read.");
            } catch (IOException e) {
                System.err.println("File input error");
            }
        }
        public static void main(String[] args) {
            Test t = new Test("template.idx");
        }
    }

    The file that I am going to read could be a few hundred MBs or even GBs. Hence, I will index interesting items in the file. The index file contains the keyword and the byte offset into the file, so I will need to seek to any byte to read it. The file could be UTF-8 encoded XML or UTF-8 encoded plain text.
    I would also like to add that in the sample program above I am reading the file sequentially. The concerned class has another method which actually does the reading randomly. If this helps, I am pasting the simplified version of the code again, but this time it also includes the said method.
    import java.io.*;
    public class Test {
        long bloc;
        long eloc;
        RandomAccessFile rafTemIn;
        public Test(String filename) {
            bloc = 0L;
            eloc = 0L;
            try {
                rafTemIn = new RandomAccessFile(new File(filename), "r");
                while (true) {
                    char chr = rafTemIn.readChar();
                    System.err.println(chr);
                }
            } catch (EOFException e) {
                System.err.println("File read.");
            } catch (IOException e) {
                System.err.println("File input error");
            }
        }
        public String getVal(String templateName) {
            String stemval = null;
            try {
                rafTemIn.seek(bloc); // bloc is the byte offset to begin reading from; it changes
                byte[] b = new byte[(int) (eloc - bloc + 1L)];
                rafTemIn.read(b, 0, (int) (eloc - bloc + 1L));
                stemval = new String(b, "UTF-8");
            } catch (IOException eio) {
                System.err.println("Template Dump file IO error.");
            }
            return stemval;
        }
        public static void main(String[] args) {
            Test t = new Test("template.idx");
            System.out.println(t.getVal("wikipedia"));
        }
    }
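    For what it's worth, the getVal approach above (seek, read the raw bytes between two offsets, then decode with new String(bytes, "UTF-8")) is the sound part; readChar() is the problem, since it reads each character as a 2-byte UTF-16 code unit rather than decoding UTF-8. A tidied sketch of the seek-then-decode pattern (file name, offsets, and helper name are mine; it assumes the indexed offsets always fall on UTF-8 character boundaries):
    import java.io.*;
    public class RandomUtf8 {
        // hypothetical helper: read byteLen bytes starting at offset, decode as UTF-8
        static String readAt(File file, long offset, int byteLen) throws IOException {
            RandomAccessFile raf = new RandomAccessFile(file, "r");
            try {
                raf.seek(offset);                // offset must land on a character boundary
                byte[] buf = new byte[byteLen];
                raf.readFully(buf);              // read exactly byteLen bytes
                return new String(buf, "UTF-8"); // decode after reading
            } finally {
                raf.close();
            }
        }
        public static void main(String[] args) throws IOException {
            System.out.println(readAt(new File("template.idx"), 0L, 16));
        }
    }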

  • Non UTF-8 xml file by Email channel

    Hi there,
    I am sending US-ASCII XML file content to a B2B receiver. B2B does not identify the doc and gives a doc identification error. The issue is that the encoding is not mentioned in the XML.
    It works if I send it as UTF-8 or specify the encoding in the XML file (<?xml version="1.0" encoding="US-ASCII" ?>).
    My question is:
    Is it possible for B2B to read a US-ASCII XML file over the email channel and identify the associated custom document when the encoding is not configured in the XML file?
    Will changing tip.properties to oracle.tip.adapter.b2b.encoding = <character set name> work? Also, where is the tip.properties file in 11g?
    Can you help me with a solution for this?
    Thanks
    Ganesh

    Hello Ganesh,
    > Is it possible for B2B to read a US-ASCII XML file over the email channel and identify the associated custom document when the encoding is not configured in the XML file?
    If the file is UTF-8 encoded, then without an encoding declaration in the XML, B2B should be able to accept and identify the document.
    > Will changing tip.properties to oracle.tip.adapter.b2b.encoding = <character set name> work? Also, where is the tip.properties file in 11g?
    The tip.properties file is used in Oracle B2B 10g. In 11g, there is no such file. A few of the properties are part of the product in 11g, and the rest can be set using B2B system parameters on the Administration --> Configuration tab. A few properties can also be set using Fusion Middleware Control. Please refer to -
    http://download.oracle.com/docs/cd/E14571_01/integration.1111/e10229/bb_config.htm#CEGEADFJ
    http://download.oracle.com/docs/cd/E14571_01/integration.1111/e10229/app_isags.htm#CIHDFDIC
    Regards,
    Anuj

  • Text file attachment in UTF-8 encoding

    Hi
    I have written a program which sends mails to users with a text file attached. The problem is that when the user saves the text file to the local desktop (by clicking Save As), the encoding is ANSI by default. I want to make the encoding UTF-8. Is it possible to change this in the program?
    thanks
    sankar

    OPEN DATASET - encoding
    Syntax
    ... ENCODING { DEFAULT
                 | {UTF-8 [SKIPPING|WITH BYTE-ORDER MARK]}
                 | NON-UNICODE } ... .
    Alternatives:
    1. ... DEFAULT
    2. ... UTF-8 [SKIPPING|WITH BYTE-ORDER MARK]
    3. ... NON-UNICODE
    Effect: The additions after ENCODING determine the character representation in which the content of the file is handled. The addition ENCODING must be specified in Unicode programs and may only be omitted in non-Unicode programs. If the addition ENCODING is not specified in non-Unicode programs, the addition NON-UNICODE is used implicitly.
    Note: It is recommended that files are always written in UTF-8, if all readers can process this format. Otherwise, the code page can depend on the text environment and it is difficult to identify the code page from the file content.
    Alternative 1
    ... DEFAULT
    Effect: In a Unicode system, the specification DEFAULT corresponds to UTF-8, and in a non-Unicode system, it corresponds to NON-UNICODE.
    Alternative 2
    ... UTF-8 [SKIPPING|WITH BYTE-ORDER MARK]
    Addition:
    ... SKIPPING|WITH BYTE-ORDER MARK
    Effect: The characters in the file are handled according to the Unicode character representation UTF-8.
    Notes: The class CL_ABAP_FILE_UTILITIES contains the method CHECK_UTF8 for determining whether a file is a UTF-8 file. A UTF-16 file can only be opened as a binary file.
    Addition
    ... SKIPPING|WITH BYTE-ORDER MARK
    Effect: This addition defines how the byte order mark (BOM), with which a file encoded in the UTF-8 format can begin, is handled. The BOM is a sequence of 3 bytes that indicates that a file is encoded in UTF-8.
    SKIPPING BYTE-ORDER MARK is only permitted if the file is opened for reading or changing using FOR INPUT or FOR UPDATE. If there is a BOM at the start of the file, it is ignored and the file pointer is set after it. Without the addition, the BOM is handled as normal file content.
    WITH BYTE-ORDER MARK is only permitted if the file is opened for writing using FOR OUTPUT. When the file is opened, a BOM is inserted at the start of the file. Without the addition, no BOM is inserted.
    The addition BYTE-ORDER MARK cannot be used together with the addition AT POSITION.
    Notes: When opening UTF-8 files for reading, it is recommended to always specify the addition SKIPPING BYTE-ORDER MARK so that a BOM is not handled as file content. It is recommended to always open a file for writing as UTF-8 with the addition WITH BYTE-ORDER MARK, if all readers can process this format.
    Alternative 3
    ... NON-UNICODE
    Effect: In a non-Unicode system, the data is read or written without conversion. In a Unicode system, the characters of the file are handled according to the non-Unicode code page that would be assigned at the time of reading or writing in a non-Unicode system, according to the entry in database table TCP0C of the current text environment.

  • Problem in file content conversion from XML to CSV

    Hi Experts,
    I am having a problem with file content conversion. I need to convert the following XML file into a CSV file:
    <?xml version="1.0" encoding="UTF-8" ?>
    <ns0:MT_CROSS_REF xmlns:ns0="urn:dabur:idoc2file:pos">
        <Update_type>2</Update_type>
        <PLU>00000000</PLU>
        <Cross_ref_PLU>7777777</Cross_ref_PLU>
        <Capture_PLU />
        <Package_size />
        <Package_desc />
    </ns0:MT_CROSS_REF>
    The output file data has to be like:
    2,00000000,7777777,,,,
    The problem I am facing is that while specifying the content conversion parameters in the communication channel, I don't know what recordset structure I should mention, as all the records are directly under the root. If I mention the recordset structure as "ns0:MT_CROSS_REF" and the parameters as
    ns0:MT_CROSS_REF.fieldSeparator   ,
    ns0:MT_CROSS_REF.endSeparator    'nl'
    I get an error in communication channel monitoring and no file is posted.
    Please help me as to what correct parameters I should mention in my case.
    Thanks,
    Regards,
    Yash

    Hi Chirag,
    I cannot change the XML file, as it comes from mapping an IDoc to a message type. How can I add ROOT in the XML? My message type is MT_CROSS_REF and it has those 6 fields as in the XML (Update_type, PLU, etc.). I map these fields from an IDoc and get the XML.
    I hope you got my point.
    Thanks,
    Yash

  • Reading UTF-8 Encoding xml file sqlserver

    Hi,
    I am receiving an XML file from a third-party vendor. It is encoded in UTF-8, and while reading it I am getting the error below.
    Msg 9420, Level 16, State 1, Line 3
    XML parsing: line 30117390, character 33, illegal xml character
    The characters causing the problem are ones like è, Ö, è.
    My database default collation is 'SQL_Latin1_General_CP1_CI_AS'.
    I am using the below query to read the XML file.
    declare @xml xml
    SELECT @xml = CAST(x AS XML)
    FROM OPENROWSET(BULK 'D:\sample.xml', SINGLE_BLOB) AS T(x)

    SELECT
        X.product.value('(ID/text())[1]', 'varchar(50)') AS ID,
        X.product.value('(Name/text())[1]', 'varchar(50)') AS Name
    FROM @xml.nodes('Students/Student') AS X(product)
    How can I read the file successfully? Any help is appreciated.
    Thanks in advance.

    This issue normally happens when the XML file is not in the correct format. To save it in the correct format, open the XML file and click Save As, choosing "UTF-8" as the encoding option.
    Regards, RSingh

  • How to save a UTF-8 encoded text file ?

    hi People
    I have a little script which reads the source text from a layer and saves it to a .txt file. This is on a Mac, and all was good until recently, when I tried opening the .txt file on a PC in Notepad and found my ˚ degree symbols all whack.
    Resaving the .txt file in TextEdit as Unicode (UTF-8) encoding solved the problem; it now opens fine in Notepad.
    But ideally I'd like the script to output the .txt as UTF-8 in the first place. It's currently Western (Mac OS Roman). I've tried adding in myfile.encoding = "UTF8" but the resulting file is still Western (and the special characters have wigged out again).
    any help greatly appreciated../daniel
    var theComp = app.project.activeItem;
    var dataRO = theComp.layer("dataRO").sourceText;
    // prompt user to save file
    var theFile = new File("~/Desktop/" + theComp.name + "_output.txt");
    theFile = theFile.saveDlg("Save an ASCII export file.");
    if (theFile != null) {          // check user didn't cancel dialog
        theFile.lineFeed = "windows";
        //theFile.encoding = "UTF8";
        theFile.open("w", "TEXT", "????");
        theFile.writeln("move details:");
        theFile.writeln(dataRO.value.toString());
        theFile.close();
    }

    Hi,
    Got it, it seems: the UTF-8 standard uses two or more bytes to encode accents and special characters.
    I found some info, with some code, at http://ivoronline.com/Coding/Theory/Tutorials/Encoding%20-%20Text%20-%20UTF%208.php
    However, there were some errors, so I fixed them. (I didn't test the 3- and 4-byte characters, so maybe you'll have to change the 0xBF back to 0x3F or something else.)
    So here is the code.
    function convertCharToUTF(character) {
        var utfBytes = "";
        var c = character.charCodeAt(0);
        if (c < 0x80) {
            utfBytes = String.fromCharCode(c);
        } else if (c < 0x800) {
            utfBytes = String.fromCharCode(0xC0 | c >> 6);
            utfBytes += String.fromCharCode(0x80 | c & 0xBF);
        } else if (c < 0x10000) {
            utfBytes = String.fromCharCode(0xE0 | c >> 12);
            utfBytes += String.fromCharCode(0x80 | c >> 6 & 0xBF);
            utfBytes += String.fromCharCode(0x80 | c & 0xBF);
        } else if (c < 0x200000) {
            utfBytes = String.fromCharCode(0xF0 | c >> 18);
            utfBytes += String.fromCharCode(0x80 | c >> 12 & 0xBF);
            utfBytes += String.fromCharCode(0x80 | c >> 6 & 0xBF);
            utfBytes += String.fromCharCode(0x80 | c & 0xBF); // was "=+", a bug that discarded the earlier bytes
        }
        return utfBytes;
    }
    function convertStringToUTF(stringToConvert) {
        var utfString = "";
        for (var i = 0; i < stringToConvert.length; i++) {
            utfString = utfString + convertCharToUTF(stringToConvert.charAt(i));
        }
        return utfString;
    }
    var theFile = new File("~/Desktop/_output.txt");
    theFile.open("w", "TEXT");
    theFile.encoding = "BINARY";
    theFile.linefeed = "Unix";
    // to write a BOM first: theFile.write(String.fromCharCode(0xEF) + String.fromCharCode(0xBB) + String.fromCharCode(0xBF));
    theFile.write(convertStringToUTF("Your stuff éàçËôù"));
    theFile.close();

  • Encoding from UTF-8 encoded String to Microsoft Project default encode

    Hi Expert ...
    I have a problem encoding a String from UTF-8 in order to write an MPX (Microsoft Project) file. I use UTF-8 as my database encoding, and I want to write an MPX file using the MPXJ library, but the result is the (?) character. I think it's because I haven't yet encoded to Shift_JIS (the Microsoft Project default encoding). I then tried encoding the String with Shift_JIS, but the same result appeared. I looked for another way, but found nothing.
    I hope some expert can help me solve this problem.
    Thank you,
    Alfian B.

    Totally wrong. A String doesn't have an encoding.
    Now if you had an array of bytes, which were encoded using one charset, and you wanted to convert that to an array of bytes encoded using a second charset, you would use code like this:
    byte[] bytes = ...; // the bytes encoded in UTF-8, let's say
    String s = new String(bytes, "UTF-8"); // make that into a String
    byte[] newbytes = s.getBytes("windows-31j"); // encode the String into windows-31j

  • Character encoding (unicode to utf-8) conversion problem

    I have run into a problem that I can't seem to find a solution to.
    My users are copying and pasting from MS Word. My DB is Oracle, with its encoding set to "UTF-8".
    Using Oracle's thin driver, it automatically converts to the DB's default character set.
    When Java tries to encode Unicode to UTF-8 and runs into an unknown character (typically a character in the high-ASCII range), it substitutes it with '?' or some other weird character.
    How do I prevent this?

    > My users are copying and pasting from MS Word. My DB is Oracle, with its encoding set to "UTF-8".
    Pasting where? Into the database? If they are pasting into the database (however they might do that) and getting bad results, then that's nothing to do with Java.
    > Using Oracle's thin driver, it automatically converts to the DB's default character set.
    Okay, I will assume that is correct.
    > When Java tries to encode Unicode to UTF-8 and runs into an unknown character (typically a character in the high-ASCII range), it substitutes it with '?' or some other weird character.
    This is false. When converting from Unicode to UTF-8 there are no "unknown characters". I don't know what you mean by the "high-ASCII range", but if your users are pasting MS stuff into your Java program somehow, then a conversion from something into Unicode is done at that time. If "something" isn't the right encoding, then you have the problems already, before you try to write to the DB.
    > How do I prevent this?
    First identify the problem. You have input coming from somewhere, then you are writing to the database. Those are two different steps, and either of them could have a problem. Test them separately so you know which one is the problem.
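    A short sketch of that distinction (class name and sample string are mine): encoding Unicode to UTF-8 never produces '?', because every Unicode character is representable in UTF-8; the substitution happens when encoding to a charset that lacks the character:
    import java.util.Arrays;
    public class EncodeDemo {
        public static void main(String[] args) throws Exception {
            String s = "\u201Chello\u201D"; // Word's curly quotes, U+201C and U+201D
            // UTF-8 can represent every Unicode character: no substitution
            System.out.println(Arrays.toString(s.getBytes("UTF-8")));
            // ISO-8859-1 has no curly quotes: each becomes '?' (byte 0x3F, printed as 63)
            System.out.println(Arrays.toString(s.getBytes("ISO-8859-1")));
        }
    }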

  • ITunes will no longer play nor add to the library any mp3 files which I have created by conversion from a wav file.

    I am running iTunes 10 on XP Pro and it will no longer play the mp3 files which I created by conversion from wav files. I tried deleting some from the library and then re-adding them, but iTunes will not add them back to the library.
    Can anyone help solve this, please?

    Back up your purchased music before you upgrade; I've read at least 5 cases where purchased music has just gone... missing.
    That is not supposed to happen, and it didn't happen to me or thousands of other iTunes users, but you don't want to be the 6th person, do you?
    There are two things to try before you upgrade:
    Barbara Hall, "Unknown Error (-208)" #1, 07:42pm Oct 19, 2005 CDT
    http://docs.info.apple.com/article.html?artnum=302478
