UTF-8 filename characters in Motif applications

Hello,
1) My Xpdf/Xdvi apps work quite well, but I am unable to save files whose names contain non-Latin characters.
2) In addition, Xpdf fails to search for non-Latin characters/words in PDF files.
Unfortunately I can't find an appropriate solution by googling. Any clue?
Thanks.

Hello
It's a long shot, but if they rely on properly configured Linux console/locale settings, directly or indirectly, this might help you.
Good luck

Similar Messages

  • We cannot type Polish (non-latin) characters in WebDynpro applications

    We cannot type Polish (non-Latin) characters in a WebDynpro application (at runtime) because 'Browser Help Shortcuts' are fired.
    To type a Polish character on a Polish keyboard you need to press AltGr + letter (i.e. AltGr + a/c/e/s/o/l/z/x/n). To type an uppercase Polish character you need to press AltGr + Shift + letter. This combination is in fact the same as pressing Alt + Ctrl + Shift + letter (because AltGr produces Alt + Ctrl), and it fires some of the 'Browser Help Shortcuts'. For example, AltGr + Shift + O should produce the letter Ó (O with an acute accent), but instead it fires 'Show nesting of HTML containers'.
    We tried to turn off sap-wd-lightspeed, but then other key combinations are reserved for 'Browser Help Shortcuts'.
    We need to be able to use AltGr + Shift + a/c/e/s/o/l/z/x/n at runtime.
    Product: SAP NW 7.11 SP04
    WebDynpro for Java
    I hope there is a hidden parameter somewhere that solves our problem. Maybe we're in some kind of debug mode?
    Thanks for your help!!

    The funny thing is that the bold font [when the message is unread in the message list] shows OK, i.e. in Greek, but when I click on an unread message it is assumed to have been read, so it changes over to medium [non-bold] and the encoding changes as well into one that is not Greek and is thus unreadable. In ~/.sylpheed/sylpheedrc the fonts are:
    widget_font=
    message_font=-microsoft-sylfaenarm-medium-r-normal-*-*-160-*-*-p-*-iso8859-7
    normal_font=-monotype-arial-medium-r-normal-*-12-*-*-*-*-*-iso8859-7
    bold_font=-monotype-arial-bold-r-normal-*-12-*-*-*-*-*-iso8859-7
    small_font=-monotype-arial-medium-r-normal-*-12-*-*-*-*-*-iso8859-7
    In /etc/gtk, for GTK 1.2 apps, the file referring to the Greek encoding [el] seems to be fine [exactly the same as in Slackware 9.1].

  • Designer6i: About Page template filename Web PL/SQL Applications

    How can I change the About Page template filename for Web PL/SQL Applications?
    Thanks!!!
    my e-mail is: [email protected]

    Hi,
    Have you upgraded Apex?
    I assume you use XE EPG.
    Have you granted execute privilege on the procedure to the DAD user ANONYMOUS?
    GRANT EXECUTE ON WOLF_22.HELLO_WORLD TO ANONYMOUS;
    Have you changed wwv_flow_epg_include_mod_local so that it allows executing WOLF_22.HELLO_WORLD?
    And write schema.procedure in upper case in the function.
    If you have not upgraded Apex, run this as SYS or SYSTEM:
    CREATE OR REPLACE function FLOWS_020100.wwv_flow_epg_include_mod_local(
        procedure_name in varchar2)
    return boolean
    is
    begin
        -- Administrator note: the procedure_name input parameter may be in the format:
        --    procedure
        --    schema.procedure
        --    package.procedure
        --    schema.package.procedure
        -- If the expected input parameter is a procedure name only, the IN list code shown below
        -- can be modified to itemize the expected procedure names. Otherwise you must parse the
        -- procedure_name parameter and replace the simple code below with code that will evaluate
        -- all of the cases listed above.
        if upper(procedure_name) in (
              'WOLF_22.HELLO_WORLD'
        ) then
            return TRUE;
        else
            return FALSE;
        end if;
    end wwv_flow_epg_include_mod_local;
    /
    Regards,
    Jari

  • [SOLVED] Problems opening folders with UTF-8 encoded characters

    Hello everyone, I'm having an issue when I access folders in all my programs (except the Dolphin file manager). Every time I open the folder navigation window in my programs, folders with UTF-8 encoded characters (such as "ç", "á", "ó", "í", etc.) are either not shown or their names do not display these characters, and therefore I cannot open documents inside these folders.
    However, as you saw, I can type these characters normally. Here's my "locale.conf":
    LANG="en_US.UTF-8:ISO-8859-1"
    LC_TIME="pt_BR.UTF-8:ISO-8859-1"
    And here's the output of the command "locale -a" :
    C
    en_US.utf8
    POSIX
    Last edited by regmoraes (2015-04-17 12:55:19)

    Thing is, when I run locale -a, I get
    $ locale -a
    C
    de_DE@euro
    de_DE.iso885915@euro
    de_DE.utf8
    en_US
    en_US.iso88591
    en_US.utf8
    ja_JP
    ja_JP.eucjp
    ja_JP.ujis
    ja_JP.utf8
    japanese
    japanese.euc
    POSIX
    So there is an entry for every locale I have uncommented in /etc/locale.gen. Just making sure: by "following the steps in the beginner's guide", do you also mean running locale-gen?
    Are those folders on a Linux filesystem like ext4, or on a Windows one (NTFS)?
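    (As an aside, any runtime that has to decode on-disk filename bytes shows the same locale dependency. Purely as an illustration, and with a made-up path, here is a tiny Java sketch that prints the charset the JVM picked up from the environment and then lists a directory; with a plain "C" or ISO-8859-1 locale the UTF-8 names come out mangled, just like in the file dialogs.)
    import java.io.File;
    import java.nio.charset.Charset;

    public class ListFolderNames {
        public static void main(String[] args) {
            // Filenames on ext4 are just bytes; how they are decoded for display
            // depends on the locale inherited from LANG/LC_ALL.
            System.out.println("default charset: " + Charset.defaultCharset());

            File dir = new File("/home/user/Documents");   // made-up path
            String[] names = dir.list();
            if (names != null) {
                for (String name : names) {
                    System.out.println(name);
                }
            }
        }
    }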

  • Japanese characters alone are not passing correctly (passing like ??? or some unreadable characters) to Adobe application when we create input variable as XML data type. The same solution works fine if we change input variable data type to document type a

    Dear Team,
    Japanese characters alone are not passed correctly (they arrive as ??? or some unreadable characters) to the Adobe application when we create the input variable with the XML data type. The same solution works fine if we change the input variable data type to document type. Could you please do the needful? Thank you.

    Hello,
    We installed the most recent patches for IGS and the kernel. Now it works.

  • UTF/Japanese character set and my application

    Blankfellaws...
    a simple query about the internationalization of an enterprise application..
    I have a considerably large application running as 4 layers.. namely..
    1) presentation layer - I have a servlet here
    2) business layer - I have an EJB container here with EJBs
    3) messaging layer - I have either Weblogic JMS here in which case it is an
    application server or I will have MQSeries in which case it will be a
    different machine all together
    4) adapter layer - something like a connector layer with some specific or
    rather customized modules which can talk to enterprise repositories
    The database has a few messages in UTF format, and they are Japanese
    characters.
    My requirement: I need those messages to be picked up from the database by
    the business layer and passed on to the client screen, which is a web browser,
    through the presentation layer.
    What are the various points to be noted to get this done?
    Where all do I need to set the character set, and what would be the ideal
    character set to use to support the maximum range of characters?
    Is there anything specific to be done in my application code regarding
    this?
    Or is it just a matter of setting the character sets in the application
    servers / web servers / web browsers?
    Please enlighten me on these areas, as I am into something similar to this and
    trying to figure out what's wrong in my current application. When the data
    comes to the screen through my application, it looks corrupted. But the same
    message, when read through a simple servlet, displays without a problem.
    I'm confused!!
    Thanks in advance
    Manesh

    Hello Manesh,
    For the database I would recommend using UTF-8.
    As for the character problems, could you elaborate on which version of WebLogic
    you are using and what the nature of the problem is?
    If your problem is that of displaying the characters from the db and you are
    using JSP, you could try putting
    <%@ page language="java" contentType="text/html; charset=UTF-8"%> on the
    first line,
    or, for a servlet: response.setContentType("text/html; charset=UTF-8");
    Also to automatically select the correct charset by the browser, you will
    have to include
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> in the
    jsp.
    You could replace the "UTF-8" with other charsets you are using.
    I hope this helps...
    David.
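    For example, a minimal servlet along the lines David describes might look like this (the class name is made up, and the Japanese string is hard-coded here in place of the value that would really come from the EJB layer):
    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class MessageServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws IOException {
            // Declare the charset before obtaining the writer so the container
            // encodes the response body as UTF-8.
            response.setContentType("text/html; charset=UTF-8");
            PrintWriter out = response.getWriter();

            // In the real application this would come from the business layer.
            String message = "\u3053\u3093\u306b\u3061\u306f";   // Japanese "hello"

            out.println("<html><head>");
            out.println("<meta http-equiv=\"Content-Type\" content=\"text/html; charset=UTF-8\">");
            out.println("</head><body>" + message + "</body></html>");
        }
    }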
    "m a n E s h" <[email protected]> wrote in message
    news:[email protected]...
    Blankfellaws...
    a simple query about the internationalization of an enterpriseapplication..
    >
    I have a considerably large application running as 4 layers.. namely..
    1) presentation layer - I have a servlet here
    2) business layer - I have an EJB container here with EJBs
    3) messaging layer - I have either Weblogic JMS here in which case it isan
    application server or I will have MQSeries in which case it will be a
    different machine all together
    4) adapter layer - something like a connector layer with some specific or
    rather customized modules which can talk to enterprise repositories
    The Database has few messages in UTF format.. and they are Japanese
    characters
    My requirement : I need thos messages to be picked up from the database by
    the business layer and passed on to the client screen which is a webbrowser
    through the presentation layer.
    What are the various points to be noted to get this done?
    Where and all I need to set the character set and what should be the ideal
    character set to be used to support maximum characters?
    Are there anything specifically to be done in my application coderegarding
    this?
    Are these just the matter of setting the character sets in the application
    servers / web servers / web browsers?
    Please enlighten me on these areas as am into something similar to thisand
    trying to figure out what's wrong in my current application. When the data
    comes to the screen through my application, it looks corrupted. But theasme
    message when read through a simple servlet, displays them without aproblem.
    Am confused!!
    Thanks in advance
    Manesh

  • Cannot display BIG5 characters for web applications deployed to 9iAS

    I have just installed the J2EE and Web Cache modules of Oracle9iAS Release 2 on
    my Windows NT Server 4.0 and deployed a simple web application to it. However,
    I found that the JSP cannot display Chinese (Big5) characters correctly. My JSP
    is something like:
    <%@ page contentType="text/html; charset=BIG5" %>
    <HTML><BODY>
    <% String s = SOME_BIG5_CHARACTERS; %>
    <%= s %>
    </BODY></HTML>
    On the other hand, I tried to redirect the standard output to a log file and
    do the following in my servlet:
    System.out.println(SOME_BIG5_CHARACTERS);
    Now, the Big5 characters CAN be displayed correctly in the log file. So, I am
    confused about where the problem is.
    Here are my settings to my 9iAS:
    1) Using regedit, I have set the NLS_LANG variable of the corresponding
    ORACLE_HOME to TRADITIONAL CHINESE_TAIWAN.ZHT16BIG5
    2) In the file %ORACLE_HOME%\Apache\Jserv\conf\jserv.properties, I have
    inserted the following line:
    wrapper.env=NLS_LANG=TRADITIONAL CHINESE_TAIWAN.ZHT16BIG5
    3) In the file %ORACLE_HOME%\Apache\Apache\conf\httpd.conf, I have added the
    following line:
    PassEnv NLS_LANG
    4) In the file %ORACLE_HOME%\opmn\conf\opmn.xml, I have added the following
    line to the corresponding OC4J instance:
    <environment>
    <prop name="NLS_LANG" value="TRADITIONAL CHINESE_TAIWAN.ZHT16BIG5"/>
    </environment>
    5) For my application server, I set the java option with -Dfile.encoding=Big5
    6) I have replaced the file font.properties with font.properties.zh_TW under
    %ORACLE_HOME%\jdk\jre\lib.
    7) I have set the following in the file orion-web.xml of my web application:
    <orion-web-app
    deployment-version="9.0.2.0.0"
    default-charset="Big5"
    jsp-cache-directory="./persistence"
    temporary-directory="./temp"
    internationalize-resources="false"
    default-mime-type="application/octet-stream"
    servlet-webdir="/servlet/">
    </orion-web-app>
    Does anyone have an idea how to fix my problem? Thanks in advance.
    Regards,
    Kae

    I met a similar problem before, though not exactly your case. When I compiled the JSP with JDeveloper, it converted the Chinese characters into strange characters. It drove me crazy handling the Chinese characters ...
    Anyway, from my experience, you had better isolate the Chinese characters from your JSP or Java programs. Instead, put all language-dependent text in a properties file and then use native2ascii to convert your properties file into Unicode escapes. Of course, you need to change your page charset to UTF-8.
    You can get more ideas from the following sites.
    Brief Description of Internationalization:
    http://java.sun.com/products/jdk/1.2/docs/guide/internat/faq.html
    Detail Tutorial:
    http://java.sun.com/docs/books/tutorial/i18n/
    Native-to-ASCII converter:
    http://java.sun.com/products/jdk/1.2/docs/tooldocs/win32/native2ascii.html
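    As a small illustration of the properties-file approach (the bundle name "messages" and the key "greeting" are made up), the text converted by native2ascii is then loaded through a ResourceBundle, which hands back proper Unicode regardless of the platform default encoding:
    import java.util.Locale;
    import java.util.ResourceBundle;

    public class GreetingDemo {
        public static void main(String[] args) {
            // messages_zh_TW.properties would contain native2ascii output such as:
            //   greeting=\u60a8\u597d
            ResourceBundle bundle = ResourceBundle.getBundle("messages", Locale.TAIWAN);

            // The JSP/servlet then only has to declare charset=UTF-8 (or Big5)
            // when writing the response.
            System.out.println(bundle.getString("greeting"));
        }
    }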

  • [Solved] Greek accented characters and Qt Applications

    Hello.
    This is my first day with Arch linux.
    My problem has to do with Greek accented characters like ά, ή, ό, etc.
    I can type these characters in my browsers, the ROX file manager and other applications, but not in Qt-based applications like Skype and TeXmaker.
    At first, when I tried to type ά in Skype, I got nothing.
    Later I installed ibus-qt and now I get á instead of ά, é instead of έ, etc.
    Thanks in advance for any help.
    PS: I include the results of locale -a and locale commands in case it helps:
    # locale -a
    C
    POSIX
    el_GR
    el_GR.iso88597
    el_GR.utf8
    en_US
    en_US.iso88591
    en_US.utf8
    greek
    # locale
    LANG=C
    LC_CTYPE="C"
    LC_NUMERIC="C"
    LC_TIME="C"
    LC_COLLATE="C"
    LC_MONETARY="C"
    LC_MESSAGES="C"
    LC_PAPER="C"
    LC_NAME="C"
    LC_ADDRESS="C"
    LC_TELEPHONE="C"
    LC_MEASUREMENT="C"
    LC_IDENTIFICATION="C"
    LC_ALL=
    Last edited by Paris (2013-07-22 00:41:19)

    I just created a /etc/rc.conf file and added:
    HARDWARECLOCK="UTC"
    TIMEZONE="Europe/Athens"
    KEYMAP="us"
    CONSOLEFONT="ter-v16b" #it's Terminus font for console, just install terminus-font from community
    CONSOLEMAP=
    LOCALE="en_US.UTF-8"
    DAEMON_LOCALE="no"
    USECOLOR="yes"
    This fixed my problem.

  • JSPs, UTF-8 & multibyte characters

    In our project we have a situation where we must output some multibyte characters
    to a JSP page. The data is retrieved from an Oracle database using BEA ELink and
    XML (don't ask why). The XML data is UTF-8 encoded, and the data seems to be OK
    down to the JSP level, because I can output it to a file and it's properly UTF-8
    encoded.
    But when I try to write the data to the final reply (using <%=dataObject.getData()%>),
    the results definitely are not UTF-8 encoded. On the client browser they show
    up as garbage, occupying more than twice the actual length of the data. The response
    headers and META tags are all set to UTF-8 encoding, and the browser is set to
    use UTF-8.
    The funny part is that the string seems to be encoded twice or something similar,
    as shown by the next example:
    This is the correct UTF-8 byte sequence for the first two characters (they are
    just generated data for debugging purposes):
    C3 89 C3 A5
    which translates to the Unicode characters 00C9 and 00E5.
    But on the final page that is sent to the client, this sequence has been changed
    to:
    C3 83 E2 80 B0 C3 83 C2 A5
    which just doesn't make sense, since it shows up as five different garbage characters.
    Does anyone have any ideas what is causing the problem, and any suggestions? What
    are those extra characters in the final encoding?
    Pete.
              

    It sounds like the Object.toString is coming back already encoded in UTF-8,
    and thus the JSP writer encodes that UTF-8 using UTF-8 again, which is what
    you see. Try making the String value be:
    > ... characters 00C9 and 00E5.
    ... instead of:
    > C3 89 C3 A5
    Then it will be encoded correctly.
    Peace,
    Cameron Purdy
    Tangosol Inc.
    << Tangosol Server: How Weblogic applications are customized >>
    << Download now from http://www.tangosol.com/download.jsp >>
              "Petteri Räisänen" <[email protected]> wrote in message
              news:[email protected]...
              >
              > In our project we have a situation where we must output some multibyte
              characters
              > to a JSP page. The data is retrieved from an Oracle database using BEA
              ELink and
              > XML (don't ask why). The XML-data is UTF-8 encoded, and the data seems to
              be ok
              > down to the JSP level, because I can output it to a file and it's properly
              UTF-8
              > encoded.
              >
              > But when I try to write the data to the final reply (using
              <%=dataObject.getData()%>
              > the results definitely are not UTF-8 encoded. On the client browser they
              show
              > up as garbage, occupying more than twice the actual length of the data.
              The response
              > headers and META-tags are all set to UTF-8 encoding, and the browser is
              set to
              > use UTF-8.
              >
              > The funny part is, that the string seems to be encoded twice or something
              similar
              > as is shown by the next example:
              >
              > This is the correct UTF-8 byte sequence for the first twice characters
              (they are
              > just generated data for debugging purposes):
              >
              > C3 89 C3 A5
              >
              > Which translates to Unicode characters 00C9 and 00E5.
              >
              > But on the final page that is sent to the client this sequence has been
              changed
              > to:
              >
              > C3 83 E2 80 B0 C3 83 C2 A5
              >
              > Which just doesn't make sense since it shows up as five different garbage
              characters.
              >
              >
              > Does anyone have any ideas what is causing the problem and any
              suggestions? What
              > are those extra characters in the final encoding?
              >
              > Pete.
              

  • How to send non-latin unicode characters from Flex application to a web service?

    Hi,
    I am creating an XML containing data entered by the user into a TextInput. The XML is then sent to an HTTPService.
    I've tried this
    var xml : XML = <title>{_title}</title>;
    and this
    var xml : XML = new XML("<title>" + _title + "</title>");
    _title variable is filled with string taken from TextInput.
    When the user enters non-Latin characters (e.g. Russian), the web service responds that the XML contains characters that are not UTF-8.
    I ran a sniffer and found that non-printable characters are sent to the web service, like
    <title>����</title>
    How can I encode non-Latin characters as UTF-8?
    I have an idea to use a ByteArray and the pair of functions readMultiByte / writeMultiByte (to write in the current character set and read back as UTF-8), but I need to determine the current character set Flex (or TextInput) is using.
    Can anyone help convert the characters?
    Thanks in advance,
    best regards,
    Sergey.

    Found the answer myself: set System.useCodePage to false.

  • CSV file encoded as UTF - 8 loses characters when displayed with excel 2010

    Hello everybody,
    I have adapted a customer report to be able to send certain data via mail as a CSV attachment.
    For that purpose I am using class cl_bcs.
    Everything goes fine, but since the mail attachment contains certain German characters such as Ü, those characters appear corrupted when the file is displayed with Excel.
    It seems the problem is with Excel, because when I open the same file with Notepad, the Ü is there. If I import the file into Excel with the importer, it is correct too.
    Anyway, is there any solution to this problem?
    I have tried concatenating byte_order_mark_utf8 at the beginning of the file, but Excel still does not recognize it.
    Thanks in advance,
    Pablo.
    Edited by: katathema on Jan 31, 2012 2:05 PM

    - Does MS Excel actually support UTF-8?
    Yes. I believe we installed some international add-on which is not in the default installation. Anyway, other UTF-8 or UTF-16 files can be opened and viewed by Excel without any problem.
    - Have you verified that the file is viewable as a UTF-8-encoded file?
    I think so. If I open it in Notepad and choose "Save as", the file type is UTF-8.
    - Try opening the file in a program you are confident supports UTF-8, e.g. Mozilla...
    I will try that.
    - Check that your UTF-8-encoded file has a UTF-8 identifier (0xFEFF?)
    as the first character.
    The Unicode-16 (LE or BE) files I got from the internet always have two bytes at the front (0xFEFF or 0xFFFE). My UTF-8 file generated by Java doesn't have that. But should a UTF-8 file also have this kind of special bytes at the front? If I manually add these bytes at the front of my file using UltraEdit and open it in Excel 2000, it doesn't help.
    - Try using another spreadsheet program that supports UTF-8.
    Do you know any other spreadsheet program that supports CSV files and UTF-8?
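    For reference, the usual way to make Excel auto-detect a UTF-8 CSV is to write the UTF-8 byte order mark (the bytes EF BB BF) in front of the data. A minimal Java sketch of the idea (file name and contents are invented, and whether it helps a given Excel version is not guaranteed):
    import java.io.FileOutputStream;
    import java.io.OutputStreamWriter;
    import java.io.Writer;
    import java.nio.charset.StandardCharsets;

    public class Utf8CsvWithBom {
        public static void main(String[] args) throws Exception {
            try (Writer out = new OutputStreamWriter(
                    new FileOutputStream("report.csv"), StandardCharsets.UTF_8)) {
                // U+FEFF written through a UTF-8 writer becomes the bytes EF BB BF,
                // which Excel uses to recognise the file as UTF-8.
                out.write('\uFEFF');
                out.write("Name;City\n");
                out.write("Müller;Düsseldorf\n");
            }
        }
    }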

  • Report Script output in UTF-8 code with Non-Unicode Application

    Essbase Nation,
    Report Script output (.txt) files are being encoded as UTF-8 even though the application is set to non-Unicode. This encoding creates a signature character in the first line of the text file, which in turn shows up when we import the file into Microsoft Access. Does anyone know how to change the encoding of the output file, or how to remove the UTF-8 signature character?
    Any advice is greatly appreciated.
    Thank you.
    Concerned Admin

    You may be able to find a text editor that can do the conversion. Alternatively, I have converted from one encoding to another programmatically using Java as well.
    Tim Tow
    Applied OLAP, Inc
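    As a rough sketch of that kind of programmatic conversion (the file names and the target encoding are placeholders, not anything Essbase-specific), the code just strips a leading UTF-8 signature and rewrites the text in another encoding:
    import java.io.*;
    import java.nio.charset.StandardCharsets;

    public class StripBomAndConvert {
        public static void main(String[] args) throws IOException {
            try (BufferedReader in = new BufferedReader(new InputStreamReader(
                         new FileInputStream("report.txt"), StandardCharsets.UTF_8));
                 Writer out = new OutputStreamWriter(
                         new FileOutputStream("report_converted.txt"), "windows-1252")) {
                boolean first = true;
                String line;
                while ((line = in.readLine()) != null) {
                    if (first && !line.isEmpty() && line.charAt(0) == '\uFEFF') {
                        line = line.substring(1);   // drop the UTF-8 signature (BOM)
                    }
                    first = false;
                    out.write(line);
                    out.write(System.lineSeparator());
                }
            }
        }
    }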

  • Very strange problem. Missing characters in QT applications

    Hello all.
    I'm trying to deal with a very strange issue since some time, that is driving me crazy, because I have no clue where the problem may lay:
    In my Archlinux system, some, and only some QT applications (for example Skype, or Qtconfig) don't display properly certain characters in the unicode set (for example "ń" or "ł") when I use certain fonts (for example Terminus, Clean or Verdana), and those applications substitute those characters with the corresponding ones from the Bitstream Vera Sans font type.
    So, when I write, say, the Polish word "Toruń", the first four letters (T, o, r and u) are rendered in the desired font type (let's say Terminus). However, the last letter (ń) is rendered in Bitstream Vera Sans, thus making the writing appear really ugly and uneven.
    However, this doesn't happen in Opera (the browser), which is a QT application, but where I get the whole unicode character set displayed properly in the desired font type (Terminus in this example).
    Also, this doesn't happen in the rest of non-QT applications I have in my system: all of them render perfectly the whole unicode set of characters in the desired font type, no matter which.
    But! it's not a problem of those QT applications, because I have another partition with Ubuntu installed, and when I boot that partition I don't have this problem: all applications, including Skype and Qtconfig, display correctly all characters in the whole unicode set, no matter which font type I'm using.
    This is totally bewildering me.
    Any clue?
    Thank you.

    The problem is not only in Qt applications. I run Openbox, and this is the output I get when using the command-line script pdfmerge to merge PDF files:
    GPL Ghostscript 8.71: Missing glyph CID=48, glyph=0030 in the font VerdanaBold . The output PDF may fail with some viewers.
    GPL Ghostscript 8.71: Missing glyph CID=49, glyph=0031 in the font VerdanaBold . The output PDF may fail with some viewers.
    GPL Ghostscript 8.71: Missing glyph CID=31, glyph=001f in the font VerdanaItalic . The output PDF may fail with some viewers.
    GPL Ghostscript 8.71: Missing glyph CID=38, glyph=0026 in the font VerdanaItalic . The output PDF may fail with some viewers.
    GPL Ghostscript 8.71: Missing glyph CID=37, glyph=0025 in the font VerdanaItalic . The output PDF may fail with some viewers.
    GPL Ghostscript 8.71: Missing glyph CID=34, glyph=0022 in the font VerdanaItalic . The output PDF may fail with some viewers.
    GPL Ghostscript 8.71: Missing glyph CID=54, glyph=0036 in the font VerdanaBold . The output PDF may fail with some viewers.
    GPL Ghostscript 8.71: Missing glyph CID=44, glyph=002c in the font VerdanaBold . The output PDF may fail with some viewers.
    GPL Ghostscript 8.71: Missing glyph CID=50, glyph=0032 in the font VerdanaBold . The output PDF may fail with some viewers.
    GPL Ghostscript 8.71: Missing glyph CID=77, glyph=004d in the font Verdana . The output PDF may fail with some viewers.
    There are visible errors in the pdf output. For example, bold "C" is replaced by a square box, bold "b" is replaced by a space.
    It looks like there's a bug in the Verdana font, or perhaps in Ghostscript 8.71.

  • CALL TRANSFORMATION - UTF-8 - Result: characters é, è replaced by #

    hi,
    I'm trying to import XML data into an internal table in an ABAP program (6.20).
    But after the CALL TRANSFORMATION, all the accented characters (é, è, à) in the internal table are replaced by the character #.
    OPEN DATASET MyFile FOR INPUT IN TEXT MODE ENCODING UTF-8
      IGNORING CONVERSION ERRORS.
    DO.
      READ DATASET MyFile INTO xmlfield.
      IF sy-subrc <> 0.
        EXIT. " leave the loop at end of file
      ENDIF.
      APPEND xmlfield.
    ENDDO.
    LOOP AT xmlfield.
      CONCATENATE xmlfile xmlfield-fiel INTO xmlfile.
    ENDLOOP.
    CALL TRANSFORMATION MyXSLT
      SOURCE XML xmlfile
      RESULT tab2 = TAB2.
    Can you please help me with this?

    Hi,
    Try using ENCODING DEFAULT.
    Asvhen

  • How to get utf-8 Chinese Characters from Oracle DB by EJB

    We have found that after disabling the JIT in WebLogic, the Chinese characters can be displayed correctly; otherwise it doesn't work. Why does this happen?

    Thanks for all of your suggestions. It still refuses to work.
    I entered the following: ���^�E on the HTML form using the Chinese (PRC) keyboard on my Win2K box.
    I checked and verified the correct encoding in the servlet request (GB2312 for chinese characters)
    request.getParameter(xxx) yields ???
    new String(request.getParameter(xxx).getBytes("GB2312")) yields three boxes (values 20309, 27946 and 23380)
    new String(request.getParameter(xxx).getBytes("GB2312"), "UTF-8") yields nothing
    Any ideas?
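    Not sure it applies to this setup, but the classic cause of '???' parameters is the container decoding the request as ISO-8859-1 before the servlet ever sees it. A minimal illustrative servlet (the class name and the "username" field are made up):
    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class Gb2312EchoServlet extends HttpServlet {
        @Override
        protected void doPost(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            // Tell the container how the form bytes are encoded BEFORE reading
            // any parameter; otherwise most containers fall back to ISO-8859-1.
            request.setCharacterEncoding("GB2312");
            String name = request.getParameter("username");

            // If the container has already decoded the value as ISO-8859-1, the
            // original bytes can usually be recovered and re-decoded like this:
            //   String fixed = new String(name.getBytes("ISO-8859-1"), "GB2312");

            response.setContentType("text/html; charset=UTF-8");
            response.getWriter().println(name);
        }
    }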
