Wrong character set in RFC Response.

Hi all,
Interesting little problem: in calling an ERP system via the RFC Adapter, I am expecting English text in a string field; however, I end up getting a series of, I assume, Chinese characters. I don't know why. It has nothing to do with a language text lookup, as I am explicitly forcing a return of 'Call Made!' to one of the string fields as part of my debugging.
So the question is: where can I go to find out what the default character set is for the PI system?
Thanks.

Thanks Stefan,
the front-end code page character set is UTF-8 and the RFC user, in fact all users, are logging on with language 'EN'. It seems to be just this particular field. Another field, for example, is Material Description, and that comes through in English and is readable. However, this other field, as populated by an RFC call, regardless of whether I set it deliberately via code, e.g.
ev_sortf = 'Call Made!'.
or let it populate from a SELECT statement, comes through in a different character set. When I look at the field configuration and the WSDL, it is set as type STRING in both the data type and in the RFC definition, and I cannot see anything that stands out as a reason not to use the English character set.
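To isolate whether the garbling happens in the backend or in PI, one option is to call the RFC directly with SAP JCo and inspect the raw string. A minimal sketch, assuming JCo 3 and a configured destination; the destination name MY_BACKEND and function name Z_MY_RFC are hypothetical, only the field name EV_SORTF comes from the post:

    import com.sap.conn.jco.JCoDestination;
    import com.sap.conn.jco.JCoDestinationManager;
    import com.sap.conn.jco.JCoException;
    import com.sap.conn.jco.JCoFunction;

    // Calls the RFC directly (bypassing PI) and dumps the returned string plus
    // its code points, so you can see whether the backend already returns the
    // "Chinese" characters or PI introduces them during conversion.
    public class RfcEncodingCheck {
        public static void main(String[] args) throws JCoException {
            // "MY_BACKEND" must match a *.jcoDestination properties file on the classpath.
            JCoDestination dest = JCoDestinationManager.getDestination("MY_BACKEND");
            JCoFunction fn = dest.getRepository().getFunction("Z_MY_RFC"); // hypothetical RFC name
            fn.execute(dest);

            String value = fn.getExportParameterList().getString("EV_SORTF");
            System.out.println("Raw value: " + value);
            for (char c : value.toCharArray()) {
                System.out.printf("U+%04X ", (int) c); // code points reveal charset damage
            }
            System.out.println();
        }
    }

If the code points printed here are already CJK for a field forced to 'Call Made!', the problem is on the backend/destination side rather than in the PI mapping.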

Similar Messages

  • Commadmin create user inputfile wrong character set

    Hi!
    I'm adding several users with non-ASCII characters through an input file (-i) with commadmin:
    l recepciosavinyeta
    F Recepció
    L Sa Vinyeta
    W passtest
    d example.com
    S mail,cal
    E [email protected]
    [...] when I go to DA I can see Recepció is not displayed correctly, so I presume it is something related to the character set.
    these are my locales:
    # locale
    LANG=es_ES.ISO8859-15
    LC_CTYPE=es_ES.ISO8859-1
    LC_NUMERIC=es_ES.ISO8859-15
    LC_TIME="es_ES.ISO8859-15"
    LC_COLLATE=es_ES.ISO8859-15
    LC_MONETARY=es_ES.ISO8859-15
    LC_MESSAGES=es
    LC_ALL=
    ./imsimta version
    Sun Java(tm) System Messaging Server 6.3-0.15 (built Feb  9 2007)
    libimta.so 6.3-0.15 (built 19:17:24, Feb  9 2007)
    SunOS silmail 5.10 Generic_120012-14 i86pc i386 i86pc
    Thanks in advance.

    Hi,
    I couldn't find any record of anybody attempting to use non-ASCII characters with the commadmin command, so I cannot offer any steps to assist with this. I suggest you log a support case.
    Regards,
    Shane.

  • Xmldom.writeToClob gives wrong character set

    I use xmldom to create an XML CLOB. I create the document as follows:
    doc := xmldom.NewDomDocument;
    xmldom.setVersion(doc, '1.0');
    xmldom.setStandalone(doc, 'no');
    xmldom.setCharSet(doc, 'ISO-8859-1'); -- Should be the character set for Danish.
    I create the XML CLOB as follows:
    xmldom.writeToClob(root, out_xml);
    Problem: the Danish characters 'æ, ø, å' are now replaced with ??.
    When I use 'xmldom.WriteToFile(root, 'd:\test_document');' I get the Danish 'æ, ø, å' correctly!
    I have seen that you can use the procedure/function 'SetEncoding', but I cannot find it in my Oracle installation (8.1.7.1.1).

    The character set you specify via setCharset() procedure is ignored unless you use writeToFile() later.
    http://docs.oracle.com/cd/E11882_01/appdev.112/e25788/d_xmldom.htm#CHDCGDDB
    Usage Notes
    This is used for WRITETOFILE procedures if not explicitly specified at that time. You can also use something like this:
    SQL> set serveroutput on
    SQL>
    SQL> declare
      2 
      3   export_file  clob;
      4   prolog       clob := '<?xml version="1.0" encoding="UTF-8"?>';
      5 
      6  begin
      7 
      8    select prolog || chr(10) ||
      9           xmlserialize(document
    10             xmlelement("TextTranslation"
    11             , xmlattributes(
    12                 '1.0' as "version"
    13               , 'ja'  as "language"
    14               , 'DEMOAND' as "module"
    15               , 'VC' as "type"
    16               )
    17             )
    18             indent
    19           )
    20    into export_file
    21    from dual ;
    22 
    23    dbms_output.put_line ( export_file );
    24 
    25  end;
    26  /
    <?xml version="1.0" encoding="UTF-8"?>
    <TextTranslation version="1.0" language="ja" module="DEMOAND" type="VC"/>
    PL/SQL procedure successfully completed

  • Data load uses wrong character set, where to correct? APEX bug/omission?

    Hi,
    I created a set of Data Load pages in my application, so the users can upload a CSV file.
    But unlike the Load spreadsheet data (under SQL Workshop\Utilities\Data Workshop), where you can set the 'File Character Set', I didn't see where to set the Character set for Data Load pages in my application.
    Now there is a character set mismatch, "m³/h" and "°C" become "m�/h" and "�C"
    Where do I set it?
    This seems like an APEX bug, or at least an omission; IMHO the Data Load page should ask for the character set, as clients with different character sets could be uploading CSV.
    Apex 4.1 (testing on the apex.oracle.com website)

    Hello JP,
    Please give us some more details about your database version and its character set, and the character set of the CSV file.
    >> …But unlike the Load spreadsheet data (under SQL Workshop\Utilities\Data Workshop), where you can set the 'File Character Set', I didn't see where to set the Character set for Data Load pages in my application.
    It seems that you are right. I was not able to find any reference to the (expected/default) character set of the uploaded file in the current APEX documentation.
    >> If it's an APEX omission, where could I report that?
    Usually, an entry on this forum is enough as some of the development team members are frequent participants. Just to be sure, I’ll draw the attention of one of them to the thread.
    Regards,
    Arie.
    ♦ Please remember to mark appropriate posts as correct/helpful. For the long run, it will benefit us all.
    ♦ Author of Oracle Application Express 3.2 – The Essentials and More

  • Wrong character set display in IE?

    Hi,
    our BW system uses Unicode character encoding, and when I create a report with Web Application Designer and then look at the report in IE I don't see the right fonts.
    I would like to use a different character encoding than Unicode, because I don't see the right fonts.
    Can I change the character encoding, or what can I do to get the right fonts?
    Any ideas?
    Best regards,
    Uros

    Hello Uros,
    How are you?
    Open Internet Explorer, then in the menu bar select View -> Encoding and select what you want.
    Best Regards....
    Sankar Kumar
    +91 98403 47141

  • JSF and Character Sets (UTF-8)

    Hi all,
    This question might have been asked before, but I'm going to ask it anyway because I'm completely puzzled by how this works in JSF.
    Let's begin with the basics: I have an application running on an OC4J servlet container, and am using JSF 1.1 (MyFaces). The problem I am having with this setup is that the character encodings I want the server/client to use are not coming across correctly. I'm trying to force the application to be UTF-8, but after the response is rendered to my client, I've magically been reverted to ISO-8859-1, which is the main character set for the Netherlands. However, I'm building the application to support proper internationalization, which means I NEED to use UTF-8.
    I've executed the following steps to reach this goal:
    - All JSP files contain page directives, noting the character set:
    <%@ page pageEncoding="UTF-8" contentType="text/html; charset=UTF-8" %>
    I've checked the generated source that comes from the JSPs; it looks as expected.
    - I've created a servlet filter to set the character set directly on the request and response objects:
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain) throws IOException, ServletException {
            // Set the characterencoding for the request and response streams.
            req.setCharacterEncoding("UTF-8");
            res.setContentType("text/html; charset=UTF-8");       
            // Complete (continue) the processing chain.
            chain.doFilter(req, res); 
        }
    I've debugged the code, and this works fine, except for where JSF comes in. If I use the above situation, without going through JSF, my pages come back UTF-8. When I go through JSF, my pages come back as ISO-8859-1. I'm baffled as to what is causing this. On several forums, writing a filter was proposed as the solution; however, this doesn't do it for me.
    It looks like somewhere internally in JSF the character set is changed to ISO. I've been through the sources, and I've found several pieces of code that support that theory. I've seen portions of code where the character set for the response is set to that of the request. Which in my case coming from a dutch system, will be ISO.
    How can this be prevented? Can anyone give some good insight on the inner workings of JSF with regards to character sets in specific? Could this be a servlet container problem?
    Many thanks in advance for your assistance,
    Jarno

    Jarno,
    I've been investigating JSF and character encodings a bit this weekend. And I have to say it's more than a little confusing. But I may have a little insight as to what's going on here.
    I have a post here:
    http://forum.java.sun.com/thread.jspa?threadID=725929&tstart=45
    where I have a number of open questions regarding JSF 1.2's intended handling of character encodings. Please feel free to comment, as you're clearly struggling with some of the same questions I have.
    In MyFaces JSF 1.1 and JSF-RI 1.2 the handling appears to be dependent on the raw Content-Type header. Looking at the MyFaces implementation here -
    http://svn.apache.org/repos/asf/myfaces/legacy/tags/JSF_1_1_started/src/myfaces/org/apache/myfaces/application/jsp/JspViewHandlerImpl.java
    (which I'm not sure is the correct code, but it's the best I've found) it looks like the raw Content-Type header is being parsed in handleCharacterEncoding. The resulting value (if not null) is used to set the request character encoding.
    The JSF-RI 1.2 code is similar - calculateCharacterEncoding(FacesContext) in ViewHandler appears to parse the raw header, as opposed to using the CharacterEncoding getter on ServletRequest. This is understandable, as this code should be able to handle PortletRequests as well as ServletRequests. And PortletRequests don't have set/getCharacterEncoding methods.
    My first thought is that calling setCharacterEncoding on the request in the filter may not update the raw Content-Type header. (I haven't checked whether this is the case.) If it doesn't, then the raw header may be getting reparsed and the request encoding reset in the ViewHandler. I'd suggest that you check the state of the Content-Type header before and after your call to req.setCharacterEncoding("UTF-8"). If the header charset value is unset or unchanged after this call, you may want to update it manually in your Filter.
    If that doesn't work, I'd suggest writing a simple ViewHandler which prints out the request's character encoding and the value of the Content-Type header to your logs before and after the calls to the underlying ViewHandler for each major method (i.e. renderView, etc.)
    Not sure if that's helpful, but it's my best advice based on the understanding I've reached to date. And I definitely agree - documentation on this point appears to be lacking. Good luck
    Regards,
    Peter
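    A minimal sketch of the diagnostic Peter suggests, assuming a standard javax.servlet container; the class name and log wording are illustrative, not from the thread:

        import java.io.IOException;
        import javax.servlet.*;
        import javax.servlet.http.HttpServletRequest;

        // Logs the Content-Type header and request encoding before and after forcing
        // UTF-8, so you can see whether setCharacterEncoding() is reflected in the
        // raw header that JSF later re-parses.
        public class EncodingDiagnosticFilter implements Filter {
            public void init(FilterConfig config) { }

            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                HttpServletRequest httpReq = (HttpServletRequest) req;
                System.out.println("Before: Content-Type=" + httpReq.getHeader("Content-Type")
                        + ", encoding=" + httpReq.getCharacterEncoding());

                req.setCharacterEncoding("UTF-8");
                res.setContentType("text/html; charset=UTF-8");

                System.out.println("After: Content-Type=" + httpReq.getHeader("Content-Type")
                        + ", encoding=" + httpReq.getCharacterEncoding());
                chain.doFilter(req, res);
            }

            public void destroy() { }
        }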

  • Wrong RFC-response filled with '#' characters

    Hi all,
    we have an XI scenario in which a Report use a synchronous RFC call to run a query on a DB table and return the resultset to the calling Report.
    Sync RFC call -> PI 7.0 -> JDBC Query (Select … with ResultSet) -> PI 7.0 -> RFC Adapter
    We've some trouble with the data returning from the resultset of the query, which is filled correctly, and with the RFC-response structure, because there are some '#' characters as values of the response fields.
    We've loaded the instance of the resultset (correctly filled) into the Message Mapping and the test ran successfully, so perhaps the issue is on the RFC Adapter side.
    Now the mapping is set up with constant values, but not all the RFC-response fields are filled; in fact there are some '#' characters that break the values into multiple lines.
    We've already run the full cache refresh on XI on both the ABAP and Java side, and we've also restarted the XI instances, without results.
    Any ideas?
    Thanks in advance,
    GB

    If it's due to special characters popping up in the result of the query that are not valid for the RFC response:
    1) Try setting the RFC Destination communication setting to "Unicode".
    2) Check the data types of the RFC response structure; it could be that the fields the data is mapped to are date/numeric, in which case the RFC data type has to be changed.
    (you would have to reimport the RFC into PI, I guess)
    3) If you feel you do not require these special characters, you can filter them out in the mapping.
    4) After all this, if you are still stuck, and the data types are rightly defined and the RFC destination is rightly maintained for Unicode, then I feel the problem resides in the RFC Adapter converting the XML message into ABAP RFC format using its metadata; in that case I recommend changing the scenario from RFC to Proxy, whereby the messages are routed by the ABAP stack, not by Java.

  • Character sets and conversions

    Hi all,
    we're facing a quite complex problem, for which I'm not even able to specify where it is going wrong or what needs configuring, partly for lack of experience and partly because it combines different technical areas, only some of which I'm responsible for.
    So I'll sketch the situation briefly, and hopefully you can give me some guidelines or hints as to where to look.
    The setup: a web application (so clients access it with a browser) on a WebLogic/Linux platform, Tuxedo on iSeries, and as far as I understand some DB internal to the iSeries where data is stored.
    Data is entered in the DB by use of some data-entry application that comes with the iSeries.
    The problem: when consulting data via the web application, some characters don't show up correctly, e.g. @ in email addresses, e's with accents, ...
    For the chain "browser <-> WL <-> Tuxedo <-> DB", the problem might be at different points. But with tracing activated, we could see that the response going out of Tuxedo to WL is not correct...
    Any hint as to what to look for, or what configuration is important, would be welcome ...
    Some sub-questions:
    - I understand Tuxedo is always "installed" in English, with no other option. This means that e.g. logs are in English.
    But can/must I define some character set?
    - Between Tuxedo <-> DB, can one use some conversion tables?
    Any help would be appreciated; we're quite lost ..

    Hi,
    Given that you are running Tuxedo on iSeries, I'm guessing you are running Tuxedo 6.5 as the port for the current Tuxedo release on iSeries hasn't been released yet. Tuxedo 6.5 does not directly support multi-byte character strings. The two common buffer formats for string data in Tuxedo are STRING which doesn't support multi-byte characters, or CARRAY which does support multi-byte characters as a CARRAY is essentially a blob. Do you know what buffer type the Tuxedo application is using to send data to WebLogic Server?
    In Tuxedo 9.0 and later, direct support for multi-byte strings was added in the form of the MBSTRING buffer type. This buffer type supports multi-byte strings with a variety of character sets and encodings.
    Regards,
    Todd Little
    Oracle Tuxedo Chief Architect

  • UTF/Japanese character set and my application

    Blankfellaws...
    a simple query about the internationalization of an enterprise application..
    I have a considerably large application running as 4 layers.. namely..
    1) presentation layer - I have a servlet here
    2) business layer - I have an EJB container here with EJBs
    3) messaging layer - I have either WebLogic JMS here, in which case it is an application server, or I will have MQSeries, in which case it will be a different machine altogether
    4) adapter layer - something like a connector layer with some specific or rather customized modules which can talk to enterprise repositories
    The database has a few messages in UTF format, and they are Japanese characters.
    My requirement: I need those messages to be picked up from the database by the business layer and passed on to the client screen, which is a web browser, through the presentation layer.
    What are the various points to be noted to get this done?
    Where all do I need to set the character set, and what would be the ideal character set to use to support the maximum number of characters?
    Is there anything that needs to be done specifically in my application code regarding this?
    Or is it just a matter of setting the character sets in the application servers / web servers / web browsers?
    Please enlighten me on these areas, as I am working on something similar and trying to figure out what's wrong in my current application. When the data comes to the screen through my application, it looks corrupted. But the same message, when read through a simple servlet, displays without a problem.
    I am confused!!
    Thanks in advance
    Manesh

    Hello Manesh,
    For the database I would recommend using UTF-8.
    As for the character problems, could you elaborate on which version of WebLogic you are using and the nature of the problem?
    If your problem is that of displaying the characters from the db and are
    using JSP, you could try putting
    <%@ page language="java" contentType="text/html; charset=UTF-8"%> on the
    first line,
    or if a servlet .... response.setContentType("text/html; charset=UTF-8");
    Also, to have the browser automatically select the correct charset, you will have to include
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> in the JSP.
    You could replace the "UTF-8" with other charsets you are using.
    I hope this helps...
    David.
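    For what it's worth, a minimal sketch of the servlet variant David describes, assuming the Japanese text is already stored correctly; the class name and sample string are illustrative:

        import java.io.IOException;
        import java.io.PrintWriter;
        import javax.servlet.ServletException;
        import javax.servlet.http.*;

        // Serves a page as UTF-8 so Japanese text is not mangled on the way to the
        // browser. setContentType must be called before getWriter() so the writer
        // is created with the UTF-8 encoding.
        public class Utf8DemoServlet extends HttpServlet {
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                resp.setContentType("text/html; charset=UTF-8");
                PrintWriter out = resp.getWriter();
                out.println("<html><head>");
                out.println("<meta http-equiv=\"Content-Type\" content=\"text/html; charset=UTF-8\">");
                out.println("</head><body>");
                out.println("こんにちは"); // sample Japanese text; in practice this would come from the db
                out.println("</body></html>");
            }
        }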
    "m a n E s h" <[email protected]> wrote in message
    news:[email protected]...
    Blankfellaws...
    a simple query about the internationalization of an enterpriseapplication..
    >
    I have a considerably large application running as 4 layers.. namely..
    1) presentation layer - I have a servlet here
    2) business layer - I have an EJB container here with EJBs
    3) messaging layer - I have either Weblogic JMS here in which case it isan
    application server or I will have MQSeries in which case it will be a
    different machine all together
    4) adapter layer - something like a connector layer with some specific or
    rather customized modules which can talk to enterprise repositories
    The Database has few messages in UTF format.. and they are Japanese
    characters
    My requirement : I need thos messages to be picked up from the database by
    the business layer and passed on to the client screen which is a webbrowser
    through the presentation layer.
    What are the various points to be noted to get this done?
    Where and all I need to set the character set and what should be the ideal
    character set to be used to support maximum characters?
    Are there anything specifically to be done in my application coderegarding
    this?
    Are these just the matter of setting the character sets in the application
    servers / web servers / web browsers?
    Please enlighten me on these areas as am into something similar to thisand
    trying to figure out what's wrong in my current application. When the data
    comes to the screen through my application, it looks corrupted. But theasme
    message when read through a simple servlet, displays them without aproblem.
    Am confused!!
    Thanks in advance
    Manesh

  • Message uses a character set that is not supported by the internet service

    Does any one have any advice on how to fix this problem?
    E-mails sent from my iPhone 3G periodically arrive in an unreadable form at the recipient. The body of the e-mail has been replaced with the message "This message uses a character set that is not supported by the internet service...." The problem e-mails also include an attachment containing an unformatted text file with the original message surrounded by what appears to be lots of formatting data displayed as gibberish.
    This occurs sometimes, but not always, even with the same recipients. I am sending e-mail through a Gmail account that is configured on the iPhone using IMAP. I have tried setting the Gmail account to use the two available formatting options for mail, but neither fixes the problem.
    I have also upgraded to 2.01 and restored a few times, without impact.

    Hi,
    I have a somewhat similar problem with special characters (German umlauts ä, ö, ü, ...).
    I create a file with Java that has special characters in it. If I open this file I can view the special characters in it. But if I attach this file and send it using the following code, the receiver cannot see the umlaut characters; they get replaced by _ or ?
    MimeBodyPart mbp2 = new MimeBodyPart();
    FileDataSource fds = new FileDataSource(fileName);
    mbp2.setDataHandler(new DataHandler(fds)); // content type is guessed from the file; no charset is declared
    mbp2.setFileName(fds.getName());
    Multipart mp = new MimeMultipart();
    mp.addBodyPart(mbp2);
    msg.setContent(mp); // msg is a MimeMessage built earlier
    Transport.send(msg);
    From your message it looks like you are able to send the mail attachment correctly (preserving the special characters).
    Can you tell me what might be wrong in my code?
    I appreciate your efforts in advance.
    Prasad
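    One likely cause, offered as an assumption rather than a confirmed diagnosis: the FileDataSource above never declares a charset, so the receiving client has to guess. A sketch that attaches the text with an explicit UTF-8 content type, assuming JavaMail 1.4+ (javax.mail.util.ByteArrayDataSource); the class and method names are illustrative:

        import java.nio.file.Files;
        import java.nio.file.Paths;
        import javax.activation.DataHandler;
        import javax.mail.Message;
        import javax.mail.Multipart;
        import javax.mail.Transport;
        import javax.mail.internet.MimeBodyPart;
        import javax.mail.internet.MimeMultipart;
        import javax.mail.util.ByteArrayDataSource;

        // Attach a text file with an explicit charset so umlauts (ä, ö, ü) are not
        // reinterpreted by the receiving mail client.
        public final class Utf8Attachment {
            public static void attachAndSend(Message msg, String fileName) throws Exception {
                byte[] content = Files.readAllBytes(Paths.get(fileName)); // file must already be UTF-8 encoded
                ByteArrayDataSource ds = new ByteArrayDataSource(content, "text/plain; charset=UTF-8");

                MimeBodyPart part = new MimeBodyPart();
                part.setDataHandler(new DataHandler(ds));
                part.setFileName(fileName);

                Multipart mp = new MimeMultipart();
                mp.addBodyPart(part);
                msg.setContent(mp);
                Transport.send(msg);
            }
        }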

  • Changing Character set in SAP BODS Data Transport

    Hi Experts,
    I am facing an issue in extracting data from SAP.
    Job details: I am using an ABAP data flow which fetches the data from SAP and loads it into an Oracle table using a Data Transport.
    It's giving me the error below while executing my job:
    (12.2) 05-06-11 11:54:30 (W) (3884:2944) FIL-080102: |Data flow DF_SAP_EXTRACT_QMMA|Transform R3_QMMA_EXTRACT__AL_ReadFileMT_Process
                                                         End of file was found without reading a complete row for file <D:/DataService/SAP/Local/Z_R3_QMMA>. The expected number of
                                                         columns was <30> while the number of columns actually read was <10>. Please check the input file for errors or verify the
                                                         schema specification for the file format. The number of rows processed was <8870>.
    Reason: when I analyzed it, I found the reason for this is the presence of special characters in the data. While generating the data file in the SAP working directory available on the SAP application server, the SAP code page is 1100, due to which the delimiter of the file and the special characters are represented with #. So once the ABAP is executed and the data is read from the file, it treats the # as a delimiter and throws the above error.
    I tried to replace the special characters in the ABAP data flow, but the ABAP data flow does not support the replace_substr function. I also tried changing the Code Page value to UTF-8 in the SAP datastore properties, but this didn't work either.
    Please let me know what needs to be done to resolve this issue. Is there any way we can change the character set while reading from the generated data file in BODS, to convert code page 1100 to UTF-8?
    Thanks in advance.
    Regards,
    Sudheer.

    Unfortunately, I am no longer working on this particular project/problem. What I did discover, though, is that /127 actually refers to the character <control>+<backspace>. (http://en.wikipedia.org/wiki/Delete_character)
    In SAP this and any other unknown characters get converted to #.
    The conclusion I came to at the time, was that these characters made their way into the actual data and was causing the issue. In fact I think it is still causing the issue, since no one takes responsibility for changing the records, even after being told exactly which records need to be updated ;-)
    I think I did try to make the changes on the above mentioned file, but without success.

  • Character set AL32UTF8;

    Hello All,
    I have Oracle 10gR2 and I want to get support for the Portuguese character set; for that, I suppose AL32UTF8 is recommended, but when I try to modify the character set I get this error.
    SQL> select value from nls_database_parameters where parameter='NLS_CHARACTERSET';
    VALUE
    WE8ISO8859P1
    SQL> alter database character set AL32UTF8;
    alter database character set AL32UTF8
    ERROR at line 1:
    ORA-12712: new character set must be a superset of old character set
    What should I do now? Any idea?
    thanks

    here it is:
    SQL> select news_detail from news_tbl;
    NEWS_DETAIL
    <p>A Directora nacional-Adjunta do Patrim?nio ligado ao Minist?rio das Finan?as
    de Mo?ambique, Albertina Fruquia, esteve nesta quarta-feira, do dia 22 de Novemb
    ro, na Secretaria de Log?stica e Tecnologia de Informa??o ( SLTI )para conhecer
    o sistema de compras do Governo Federal Brasileiro e verificar a possibilidade d
    e estabelecer acordos de coopera??o nessa ?rea.</p>
    <p> </p>
    <p>Na opini?o da Directora Mo?ambicana, o Brasil tem uma experi?ncia importante
    em compras p?blicas que pode colaborar com Mo?ambique nesta ?rea. Ela lembrou qu
    e o pa?s Africano est? imlpementando um novo regulamento de compras que constitu
    i o preg?o presencial, modalidade regulamentada no Brasil desde 2000.</p>
    <p> </p>
    You can see that even in SQL*Plus the data does not show the Portuguese-language characters. Now, what do you think: is this data stored wrong, or can I recover it by changing the character set or language?

  • Character set error during startup

    Hi all
    This is a follow-up to the following problem: Parallel Install of Ora8i and Ora9i.
    Please read it before continuing.
    This seems to be a database problem, so I post it here:
    I'm making progress with my problem. It seems to be a problem with versions in the startup routine. When I try to start the instances with the 9i dbstart tool, all goes well.
    Now I have changed the oracle script in /etc/init.d in such a way that it explicitly starts the 9i dbstart tool. Now the instances come up normally, except for the following error during mount of the instance:
    ORA-12709: error while loading create database character set.
    To me it looks like a mistake made during the creation of the database. Or perhaps it's again a version problem.
    Any clue around for this?
    Greetings
    Salvatore

    OK. Here are the answers:
    1) Error during startup
    2) Error during startup
    3) Error during startup
    The Oracle environment seems to be OK. I can use all the variables in the oracle script during OS boot.
    I suspect that something went wrong during the database creation. But in the Oracle error message documentation I read "contact Support Services", so I'm unsure what to do. AFAIK we don't have a support contract, nor can I use any paid service.

  • Oracle11g: how do I change the character set to AL32UTF8?

    Hi, a piece of software requires a database with the AL32UTF8 character set.
    From what I understand, I have a database instance with
    nls_language=american
    I tried:
    SQL> alter database character set AL32UTF8;
    alter database character set AL32UTF8
    ERROR at line 1:
    ORA-12712: new character set must be a superset of old character set
    what's wrong? How can I achieve this?
    Thanks a lot.
    Warning: you are talking with a non-expert. :)

    802428 wrote:
    Hi schavali,
    I am a newbie to Oracle and BPM, so I cannot tell which database you are talking about dropping and recreating, nor how to do so.
    Any help with this will be highly appreciated.
    Regards,
    ITM Crazy
    We are referring to the OP's database (where the character set is set to WE8MSWIN1252)
    Srini

  • Ora-12704: character set mismatch

    I'm trying to insert string data into an NVARCHAR2 column using JDBC and I keep getting the error ORA-12704: character set mismatch. How do I avoid this without using the translate function? I believe my NLS variables are all set correctly. Our software will eventually be in Korean.

    Both my database and national character sets are UTF8.
    We are trying to design a system that supports internationalization. The majority of our customers will be using English, but we do have Korean customers, and all their data will be in Korean. It is my understanding (and I could be wrong) that the Korean character set is a varying multi-byte one. This understanding has led me to believe I need to define my "character" columns as NCHAR and NVARCHAR2. Is this assumption wrong?
    Also, my NLS_LANG environment variable is AMERICAN_AMERICA.UTF8. I'm currently running Oracle 8.1.6. We have received 8.1.7 but I haven't upgraded yet.
    Thanks in advance!
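    For reference, a minimal sketch of inserting Korean text into an NVARCHAR2 column with a modern driver's JDBC 4.0 setNString, which marks the bind as national-character data (the 8.1.6-era driver discussed above predates this; Oracle's older drivers used a vendor-specific form-of-use setting instead). The connection details, table, and sample data are hypothetical:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        // Binds the string as NCHAR data so the driver does not send it in the
        // database character set, which is the usual trigger for ORA-12704.
        public final class NVarchar2Insert {
            public static void main(String[] args) throws Exception {
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password");
                     PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO customers (id, name) VALUES (?, ?)")) {
                    ps.setInt(1, 1);
                    ps.setNString(2, "김철수"); // sample Korean name
                    ps.executeUpdate();
                }
            }
        }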
