National characters (code page) problem

I made a JSP page with code page 1250, using characters specific to this code page. In JDeveloper everything looks OK. The compiled page (the generated Java file) also looks good, but when I open the page in a web browser all national characters are lost (question marks instead of letters). Can anybody help me solve this problem?
Note: JDeveloper is configured to the mentioned code page.

Have you tried posting in the ABAP Web Dynpro forum?
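For readers with the same symptom: a common first check (a sketch, not a confirmed fix for this particular setup) is to declare the code page explicitly in the JSP itself, so the page compiler and the HTTP response agree on it:

    <%@ page contentType="text/html;charset=windows-1250" pageEncoding="windows-1250" %>

If the browser still shows question marks after that, the response is usually being re-encoded somewhere between the servlet container and the client.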

Similar Messages

  • National characters in pages, accessed through Application/URL

    Jambo all!
    I hope you can help me with this little problem:
    1. I've created an application and added a URL item to it
    2. The URL is pointing to an external ASP page (I hope the fact that this is an ASP page does not influence the behavior)
    3. I've published it as a portlet and added it to a page.
    Everything is working OK except our national accented characters - they are all converted to '?' sometime in the process of rendering the page. There really are question marks instead of the proper characters in the source code, so any use of browser encoding or meta tags in the HTML is useless ;(
    The question is - how can I convince the page (URL) rendering system to leave my national characters intact?
    TNX a lot in advance!

    A solution to this problem is shown here:
    Re: Error Message: print success message checksum content error in Apex 4.0

  • Regarding the File Adapter with Code Page problem

    Hi All,
    I have a scenario where I am processing a file at the receiver end. The code page of the file is Cp037, and with this I am facing a problem. Is there any way I can change the code page of the file that is to be processed by the receiver file adapter?
    One idea I have, but I don't know whether it is possible, is to use the XML Anonymizer Module.
    Please get back to me with your ideas.
    Regards,
    Achari

    Hi Achari,
    Cp037 (EBCDIC) is not a basic but an extended encoding set, which might not be supported by the file encoding parameter of the receiver file adapter.
    You can try the code page conversion using Java code, as mentioned in the post "Code page conversion" (see the sketch below).
    Also refer to the thread "Problem with EBCDIC"; Michal's reply and Sriram's reply talk about a workaround using .BAT files.
    Regards,
    Srinivas
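    For reference, a minimal sketch of such a code page conversion in Java (the class and method names are ours, for illustration only), assuming the input really is Cp037/EBCDIC and the JDK's extended charsets are available:

      import java.nio.charset.Charset;

      public class CodePageConversion {
          // Decode the EBCDIC (Cp037) bytes, then re-encode the text as UTF-8
          public static byte[] cp037ToUtf8(byte[] input) {
              String text = new String(input, Charset.forName("Cp037"));
              return text.getBytes(Charset.forName("UTF-8"));
          }
      }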

  • Code Page Problem while downloading to PC in DUF Format

    Hi,
    I have a problem where I try to download an XML structure built in ABAP as a .duf file to the PC frontend. The file is then used to upload the details to a web portal.
    The problem is that a text containing the Latvian special character ' ī ' comes out changed to '#' after download (as can be seen when the file is opened in Notepad). I tried passing code page 1900 (checked in table TCP0C) to the function module 'GUI_DOWNLOAD'; now it changes the special character to ' ļ ' instead.
    How do I overcome this problem with the code page, or do I need to make some other settings?
    Any help on this will be greatly appreciated. Let me know if further explanation or details are required.
    Thanks,
    Prashanth

    Hi,
    check whether the character set is Unicode with reference to the language key.

  • DB Tab RFCDES: Could not determine code page

    Hello,
    I have a simple scenario: File -> XI -> IDoc (ORDERS05).
    The R/3 system that receives the IDoc is a 4.6C system. When sending a file to XI, I get the following error message in sxmb_moni at the last step, "Call Adapter":
    DB Tab RFCDES: Could not determine code page with <myr3>
    Does anyone have an idea what's wrong?
    Regards, Bernd

    Hi Bernd,
    did you have a look at the SAP Notes for IDoc adapter code page problems? See Notes 747322 and 804570.
    Regards,
    michal

  • SSIS Error: Text was truncated or one or more characters had no match in the target code page

    I have the same issue, or something close:
    I have one field (27) that gets a truncation error.
    Error:
    Data conversion failed. The data conversion for column "Column 27" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.".
    The "output column "Column 27" (91)" failed because truncation occurred, and the truncation row disposition on "output column "Column 27" (91)" specifies failure on truncation. A truncation error occurred on the specified object of the specified component.
    The data looks like this (the field throwing the error was highlighted in red in the original post):
    00000412,
    0000000011411001,
    0273508793,
    01,
    "RUTH           ",
    "EDWARDS             ",
    19500415,20080401,
    "N",
    04488013,
    "1",
    "F",
    365094,
    20080401,
    000472162716,
    "1447203880    ",
    43995202341210,
    00120.000,
    0010,
    00008.26,
    00004.96,
    000.00,
    00002.70,
    00007.66,
    0,
    "PROMETH/COD  SYP 6.25-10 ",
    "Y",
    "Promethazine w/ Codeine Syrup 6.25-10 MG/5ML               ",
    0000,
    "001C",
    610020,"WELLP1537",
    "O",
    "N",
    00,
    "D",
    "S",
    "G",
    "ID01V012008782",
    "TOM AHL CHRYSLER              ",
    "M",
    "M",
    "PBD $20/10+40%/20%            ",
    00008.26,
    "1184641367"

    I have found four things that I always check when I run into this problem.  I have yet to find a time when one of these didn't work (specifically helps when reading data from flat files but I suppose most of the four would apply to any source).  Check out my blog post, content repeated below:
    1.  Make sure to properly configure the "Flat File Source".  When setting the connection properties to the flat file, take time to click on the advanced tab and ensure that the "Name", "DataType", and "OutputColumnWidth" properties are set properly.  I have found that if this is set up correctly when the initial connection is created, some if not all of the data type issues and errors can be alleviated.  The "Flat File Connection Manager Editor" can be accessed while initially creating the connection or by double clicking on a flat file connection within the "Connection Managers" for connections that have previously been created.
    2.  Depending on the order and steps that were used to create the connection to the flat file, sometimes the data types need to be updated in an additional area.  This can be found by right clicking on the "Flat File Source" and selecting "Show Advanced Editor...".  Once in the advanced editor, click on the "Input and Output Properties" tab.  Expand the "External Columns" folder.  For each field being loaded from the flat file there are some configurable properties.  Make sure that the "DataType" field is properly set for each field.
    3.  Something else that can be done if you are sure that the data type is set correctly in both of the two previously mentioned locations is to set the "Flat File Source" to essentially ignore those annoying truncation errors.  On the same "Input and Output Properties" tab, expand the "Output Columns" folder.  For those fields listed, there is a "TruncationRowDisposition" property.  By default this is set to "RD_FailComponent".  This can be switched to "RD_IgnoreFailure" in order to allow the data to successfully pass through the "Flat File Source" even if SSIS believes that truncation is going to occur.  Along with making this change, you can also check the "DataType" in the "Output Columns" as well.
    Caution: If you do set the "Flat File Source" to "RD_IgnoreFailure" as mentioned above, always take time to review the data loaded in the target table to ensure that the integrity of the data was not jeopardized.
    Note:  I have found that when the "DataType" for both the "External Columns" and "Output Columns" is manually updated that it does not remain the same when the advanced editor is reopened.  For this reason, try Steps 1 and 2 before setting the "Output Columns" manually.
    4.  The last thing to try, which applies specifically to loading data from Excel files as opposed to text or CSV, is to set the package to run in 32-bit mode.  Click on "Project" on the top menu and select "Data Imports Properties...".  Click on "Debugging" under the "Configuration Properties" and set the "Run64BitRuntime" to "False".
    Working with data from flat files can sometimes be difficult in SSIS.  By using one or many of the approaches I have listed above you should be able to create a repeatable process that is frequently needed within most SSIS packages.  Be very careful when setting data types within SSIS and make sure to do it upfront when necessary because it can be harder to debug later in the development process.  If the proper changes are made it should not be a surprise to feel a big SSIS developer sense of relief when the screen shows all green.
    Let me know if this works!
    Check out my blog!

  • Problem crawling filenames with national characters

    Hi
    I have a big problem with filenames containing national (Danish) characters.
    The documents get an entry in wk$url but have error code 404 (Not found).
    I'm running Oracle RDBMS 9.2.0.1 on Red Hat Advanced Server 2.1. The filesystem is mounted on the Oracle server using NFS.
    I configured UltraSearch to crawl the specific directory containing several files, two of which contain national characters in their filenames. (ls -l)
    <..>
    -rw-rw-r-- 1 user group 13 Oct 4 13:36 crawlertest_linux_2_æøåÆØÅ.txt
    -rw-rw-r-- 1 user group 19968 Oct 4 13:36 crawlertest_windows_æøåÆØÅ.doc
    <..>
    (Since the preview function is not working in my Mozilla browser, I'm unable to tell whether or not the national characters will display properly in this post. They represent the lower and upper cases of the three special Danish characters.)
    In the crawler log the following entries are added:
    <..>
    file://localhost/<DIR_PATH>/crawlertest_linux_2_B|C?C%C?C?.txt
    file://localhost/<DIR_PATH>/crawlertest_linux_2_B|C?C%C?C?.txt
    Processing file://localhost/<DIR_PATH>/crawlertest_linux_2_%e6%f8%e5%c6%d8%c5.txt
    WKG-30008: file://localhost/<DIR_PATH>/crawlertest_linux_2_%e6%f8%e5%c6%d8%c5.txt: Not found
    <..>
    file://localhost/<DIR_PATH>/crawlertest_windows_B|C?C%C?C?.doc
    file://localhost/<DIR_PATH>/crawlertest_windows_B|C?C%C?C?.doc
    Processing file://localhost/<DIR_PATH>/crawlertest_windows_%e6%f8%e5%c6%d8%c5.doc
    WKG-30008: file://localhost/<DIR_PATH>/crawlertest_windows_%e6%f8%e5%c6%d8%c5.doc: Not found
    <..>
    The 'file://' entries look somewhat UTF-encoded to me (some chars are missing because they are not printable) and the others look URL-encoded.
    All other files in the directory seem to process just fine!
    In the wk$url table the following entries are added:
    (select status, url from wk$url where url like '%crawlertest%';)
    404 file://localhost/<DIR_PATH>/crawlertest_linux_2_%e6%f8%e5%c6%d8%c5.txt
    404 file://localhost/<DIR_PATH>/crawlertest_windows_%e6%f8%e5%c6%d8%c5.doc
    Just for testing purposes, a
    SELECT utl_url.unescape('%e6%f8%e5%c6%d8%c5') FROM dual;
    actually produces the expected result: æøåÆØÅ
    To me this indicates that the filesystem-scanning part of the crawler can see the files, but the processing part of the crawler cannot open them for reading, and it therefore fails with error 404.
    Since the crawler (to my knowledge) is written in Java, I did some experiments with the following Java program:
    import java.io.*;

    class filetest {
        public static void main(String args[]) {
            try {
                String dirname = "<DIR_PATH>";
                File dir = new File(dirname);
                File[] fs = dir.listFiles();
                for (int idx = 0; idx < fs.length; idx++) {
                    if (fs[idx].canRead()) {
                        System.out.print("Can Read: ");
                    } else {
                        System.out.print("Can NOT Read: ");
                    }
                    System.out.println(fs[idx]);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    The behavior of this program depends heavily on the language settings of the current shell (under Linux). If LC_ALL is set to "C" (which is a common default), the program can only read files whose names do NOT contain national characters (just like the UltraSearch crawler). If LC_ALL is set to e.g. "en_US", it is capable of reading all the files.
    I therefore tried to set the LC_ALL environment for the oracle user on my Oracle server (using locale_config and .bash_profile), but that did not seem to fix the problem at hand (see the diagnostic sketch after this post).
    So (finally) my question is: is this a bug in the UltraSearch crawler, or simply a misconfiguration of my execution environment? If the latter, how do I configure my system correctly?
    Yours sincerely
    Martin Dahl Pedersen, Visanti ( mdp at visanti dot com )
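    As a quick diagnostic of the LC_ALL effect described above (a sketch of ours, not part of the original post), you can print the charset the JVM derived from the environment; older JVMs use this default when decoding filenames in java.io.File:

      import java.nio.charset.Charset;

      class ShowEncoding {
          public static void main(String[] args) {
              // Both values are derived from LC_ALL / LANG at JVM startup on Linux
              System.out.println("file.encoding   = " + System.getProperty("file.encoding"));
              System.out.println("default charset = " + Charset.defaultCharset());
          }
      }

    Running it under LC_ALL=C and then under LC_ALL=en_US should show the difference.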

    I posted my problem as a TAR on Metalink about a week ago.
    It turns out to be a new bug in UltraSearch.
    It is now filed under BUG:2673282
    -- mdp

  • National characters problem

    Hi.
    I'm using APEX on XE 10.2.0.1.0.
    I have a problem with typing national characters, e.g. in the updatable report attribute Column Heading (Custom). If I type the heading name "Ilość" and then push "Apply changes", the name is saved without the national characters, as "Ilosc".
    Why is this happening?
    Should I change settings in the application, or in the database?
    Should I use another browser (currently SeaMonkey)?
    I downloaded "Oracle Database 10g Express Edition (Western European)".
    Should I download and use "Oracle Database 10g Express Edition (Universal)" instead?
    My APP globalization parameters:
    Application Primary Language      : Polish (pl)
    Application Language Derived From: Application Preference (using FSP_LANGUAGE_PREFERENCE)
    Automatic CSV Encoding: no
    My DB NLS settings :
    NLS_CALENDAR     GREGORIAN
    NLS_CHARACTERSET     WE8MSWIN1252
    NLS_COMP     BINARY
    NLS_CURRENCY     zl
    NLS_DATE_FORMAT     RR/MM/DD
    NLS_DATE_LANGUAGE     POLISH
    NLS_DUAL_CURRENCY     zl
    NLS_ISO_CURRENCY     POLAND
    NLS_LANGUAGE     POLISH
    NLS_LENGTH_SEMANTICS     BYTE
    NLS_NCHAR_CHARACTERSET     AL16UTF16
    NLS_NCHAR_CONV_EXCP     FALSE
    NLS_NUMERIC_CHARACTERS     ,
    NLS_SORT     POLISH
    NLS_TERRITORY     POLAND
    NLS_TIME_FORMAT     HH24:MI:SSXFF
    NLS_TIMESTAMP_FORMAT     RR/MM/DD HH24:MI:SSXFF
    NLS_TIMESTAMP_TZ_FORMAT     RR/MM/DD HH24:MI:SSXFF TZR
    NLS_TIME_TZ_FORMAT     HH24:MI:SSXFF TZR

    N'<national symbols>', being part of an SQL statement, will be converted to the database character set (WE8ISO8859P1) before being parsed. Only if the client and the database are both 10.2 or higher, the client can encode the literal appropriately so that it survives this conversion.
    In earlier versions, you can do the encoding yourself. Instead of the N'<national symbols>' literal use the UNISTR function: UNISTR('\xxxx\yyyy\zzzz'), where U+xxxx, U+yyyy, U+zzzz are Unicode code points of your national characters.
    -- Sergiusz

  • How to send Oracle rowid to servlet? | Problem with national characters.

    Is there some way to send an Oracle rowid to a servlet?
    I have now definition like this:
    <af:image source="/imageservlet?Par1=#{bindings.Col1.inputValue}"/>
    But if the column contains national characters, the servlet methods receive these characters changed.
    My idea is to use the Oracle rowid for the row instead of the primary key. Is that simply possible?
    Use something like this:
    <af:image source="/imageservlet?Rowid=#{bindings.Rowid}"/
    Or do you have ideas on how to solve the problem with national characters?
    Thanks
    FiL

    Hi,
    Although your workaround works, I think this is a simple encoding problem.
    You simply need to make sure all parameters and pages are encoded with a character set which contains the national characters you mentioned.
    This is a bit dependent on the exact technology you're using, but most of it can be done via the web.xml:
      <jsp-config>
          <jsp-property-group>
              <url-pattern>*.jsp</url-pattern>
              <page-encoding>UTF-8</page-encoding>
          </jsp-property-group>
      </jsp-config>
    This forces all JSP pages to be encoded in UTF-8.
    You said you're using a servlet, so your servlet needs a similar block for its URL pattern. Adding the following parameter sometimes helps as well, although I think this one is a bit dated:
      <context-param>
        <param-name>PARAMETER_ENCODING</param-name>
        <param-value>UTF-8</param-value>
      </context-param>
    If you want to be 100% sure the encoding is set right, make sure the pages contain:
    <%@ page contentType="text/html;charset=utf-8"%>
    Depending on your view technology, the syntax can be a bit different.
    -Anton
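    To make the same guarantee for servlet request parameters, the usual approach is a small encoding filter along these lines (a sketch; the class name is made up, and the filter must be mapped in web.xml so it runs before anything reads a parameter):

      import java.io.IOException;
      import javax.servlet.*;

      public class EncodingFilter implements Filter {
          public void init(FilterConfig cfg) {}
          public void destroy() {}
          public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                  throws IOException, ServletException {
              // Must be set before any parameter is read, or the container default wins
              req.setCharacterEncoding("UTF-8");
              res.setCharacterEncoding("UTF-8");
              chain.doFilter(req, res);
          }
      }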

  • Problem with national characters on windows client

    Hello there,
    I'am having problem with national characters on windows client.
    All national data stored in NVARCHAR2 colums, applications (.net) works fine,
    but in sqlplus:
    select city from test_table;
    - everything ok, sqlplus shows national characters
    select dump(N'<national symbols>') from dual
    - returns
    Typ=96 Len=12: 0,191,0,191,0,191,0,191,0,191,0,191
    select * from test_table where city = N'<national symbols> '
    - always returns nothing
    As I understand it, the problem is in the conversion of the SQL query text (and its national literals) to the server's WE8ISO8859P1 encoding. Is it possible to solve the issue?
    Thanks in advance
    PS.
    The console is in the right mode (chcp 1251);
    sqlplus shows Russian messages well.
    Server (oracle 9 on solaris):
    select * from nls_database_parameters
    NLS_NCHAR_CHARACTERSET AL16UTF16
    NLS_SAVED_NCHAR_CS WE8ISO8859P1
    NLS_LANGUAGE AMERICAN
    NLS_TERRITORY AMERICA
    NLS_CURRENCY $
    NLS_ISO_CURRENCY AMERICA
    NLS_NUMERIC_CHARACTERS .,
    NLS_CHARACTERSET WE8ISO8859P1
    NLS_CALENDAR GREGORIAN
    NLS_DATE_FORMAT DD-MON-RR
    NLS_DATE_LANGUAGE AMERICAN
    NLS_SORT BINARY
    NLS_TIME_FORMAT HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZH:TZM
    NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZH:TZM
    NLS_DUAL_CURRENCY $
    NLS_COMP BINARY
    NLS_LENGTH_SEMANTICS BYTE
    NLS_NCHAR_CONV_EXCP FALSE
    NLS_RDBMS_VERSION 9.2.0.6.0
    Client (windows server 2003, oracle client 10):
    NLS_LANG = RUSSIAN_CIS.CL8MSWIN1251

    N'<national symbols>', being part of an SQL statement, will be converted to the database character set (WE8ISO8859P1) before being parsed. Only if the client and the database are both 10.2 or higher, the client can encode the literal appropriately so that it survives this conversion.
    In earlier versions, you can do the encoding yourself. Instead of the N'<national symbols>' literal use the UNISTR function: UNISTR('\xxxx\yyyy\zzzz'), where U+xxxx, U+yyyy, U+zzzz are Unicode code points of your national characters.
    -- Sergiusz
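    As a concrete illustration of the UNISTR workaround, a minimal Java/JDBC sketch (the connection details are placeholders, and the literal \041C\043E\0441\043A\0432\0430 encodes the example word 'Москва'; the table and column come from this thread):

      import java.sql.*;

      public class UnistrQuery {
          public static void main(String[] args) throws Exception {
              // Placeholder connection details
              try (Connection con = DriverManager.getConnection(
                      "jdbc:oracle:thin:@//dbhost:1521/orcl", "user", "password");
                   Statement st = con.createStatement();
                   // Each \xxxx is a Unicode code point, so the literal survives
                   // conversion to the database character set
                   ResultSet rs = st.executeQuery(
                       "SELECT city FROM test_table " +
                       "WHERE city = UNISTR('\\041C\\043E\\0441\\043A\\0432\\0430')")) {
                  while (rs.next()) {
                      System.out.println(rs.getString(1));
                  }
              }
          }
      }

    In SQL*Plus the same literal can be written directly: WHERE city = UNISTR('\041C\043E\0441\043A\0432\0430').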

  • SSIS - "Text was truncated or one or more characters had no match in the target code page"

    Hello everyone,
    SQL Server 2012, SSIS package: we are getting the following error for some of the mapped columns:
    "Text was truncated or one or more characters had no match in the target code page."
    We're fetching the data from a CSV file and loading it into a staging table in SQL Server 2012.
    Can anybody please advise how to resolve this error? It's urgent.
    Any help would be much appreciated.
    Thanks, Ankit Shah | Inkey Solutions, India | Microsoft Certified Business Management Solutions Professional | http://ankit.inkeysolutions.com

    You can enable the data viewer (right-click on the data flow connector --> Enable Data Viewer) before loading records to find out what's going on. Also, configure the error output to redirect rows, so you can analyse the data types and lengths.
    Also, try this:
    Ultimately, in the Advanced Editor of the source data file, on the Input and Output Properties tab, under External Columns, there is a Length property that defaults to 50. Changing that to match the target database file did the trick. [Source]
    Check this link: Add a Data Viewer to a Data Flow
    web: www.ronnierahman.com

  • Problem with special national characters

    Hi,
    How can I get Oracle Application Server 10g to correctly expose special national characters (the ANSI 1250 Central European code page)?
    It is hosted on Windows Server 2003, where the appropriate character resources are available.
    Thanks in advance
    KM

    Check the available languages in transaction SMLT. In the example stated below, the characters coming from DI are Spanish characters, which are getting converted to Swedish ones.
    Please go through the following:
    Re: Japanese characters

  • Code page translation problem

    Hi guys,
    I have the following scenario in XI:
      1. SAP R/3 creates a file in code page 1208 on AS/400.
      2. The file is transferred via XI to the client's AS/400 system. In XI this is a simple file-to-file scenario that reads from the FTP location where SAP R/3 is found and copies the file into the XI file system; it then runs an OS command after message processing (receiver file communication channel) that FTPs the file to the client's AS/400.
      3. Our client then runs a translation CLP (I am not sure local IT fully controls this CLP), expecting as a result the file in the correct code page. The correct code page is 813.
    The strange thing is that the ABAP program returns some files with code page 1208 and some others with 813. When the code page in SAP R/3 is 813, XI transfers the file correctly to the client's machine and all characters are displayed just fine.
    However, when the code page in the SAP R/3 file system is 1208, we cannot get the characters correctly in the target system.
    We cannot tell how the code page changes in the SAP R/3 system, since the ABAP program remains the same.
    In XI I have tried all the possible encodings: Text, Binary, and all code pages.
    Any ideas??

    If I understand this right, the ABAP downloads the file in CP1208, then a CLP converts it to CP813, and afterwards the file is processed by the file adapter.
    I guess that the file adapter is sometimes too fast and processes the file before it has been prepared by the CLP.
    Maybe it helps if the CLP renames the file after processing, and you set wildcards in the file adapter channel for the new name, so the adapter only picks up converted files (see the sketch below).
    If you set the file type to binary in sender and receiver and have no mapping, the file will be transferred unchanged.
    Regards
    Stefan
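    The rename handshake suggested above, sketched in Java (the paths and names are hypothetical): write under a temporary name the adapter's wildcard does not match, then rename only when the file is complete:

      import java.nio.file.*;

      public class SafeHandoff {
          public static void publish(byte[] data) throws Exception {
              // Temporary name: deliberately not matched by the file adapter's pattern
              Path tmp = Paths.get("/ftp/out/order.xml.tmp");
              Files.write(tmp, data);
              // Rename atomically, so the adapter only ever sees complete files
              Files.move(tmp, Paths.get("/ftp/out/order.xml"),
                         StandardCopyOption.ATOMIC_MOVE);
          }
      }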

  • Code page conversion for chinese characters

    Hi,
    we receive an XML via the JMS sender adapter, where the code page in the sending MQ system is cp850.
    One tag we receive contains Chinese characters, but they arrive encoded as below:
    <FAPIAO><Title>马么</Title><Remark>*æ¤,波特肉*</Remark></FAPIAO>
    We have tried the MessageTransformBean in the sender JMS adapter to convert to UTF-8, but that gives no change.
    If we use some other code page, e.g. BIG5, some of the characters are converted to Chinese characters, but we need to have it as UTF-8.
    Is this possible, or do we have to use some other code page?
    Best Regards
    Olof

    Olof Trönnberg wrote:
    Hi,
    we receive an XML via the JMS sender adapter, where the code page in the sending MQ system is cp850.
    One tag we receive contains Chinese characters, but they arrive encoded as below:
    <FAPIAO><Title>马么</Title><Remark>*æ¤,波特肉*</Remark></FAPIAO>
    XML always has to be transported as binary.
    Remove the encoding parameter in the communication channel.
    Besides: this is obviously UTF-8, so how can you say the code page of the sending system is cp850?
    It seems that you have wrong information.
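    To illustrate why binary transport matters (a sketch with a made-up payload): an XML parser determines the encoding from the document's own prolog or BOM, so it must be handed the raw bytes, not text that was already decoded with an assumed code page such as cp850:

      import java.io.ByteArrayInputStream;
      import javax.xml.parsers.DocumentBuilderFactory;
      import org.w3c.dom.Document;

      public class ParseRawBytes {
          public static void main(String[] args) throws Exception {
              byte[] payload = "<?xml version=\"1.0\" encoding=\"UTF-8\"?><Title>马么</Title>"
                      .getBytes("UTF-8");
              // Hand the parser raw bytes; it reads the encoding from the prolog itself
              Document doc = DocumentBuilderFactory.newInstance()
                      .newDocumentBuilder()
                      .parse(new ByteArrayInputStream(payload));
              System.out.println(doc.getDocumentElement().getTextContent()); // 马么
          }
      }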

  • Xterm font problem (national characters)

    Hello everyone!
    I have the following problem...
    I'm Slovene, and the national characters (š, č, ž) work in xterm by default, but when I change the font, even to a line like "XTerm*font: -*-fixed-medium-r-normal-*-16-*-iso8859-2", which is identical to the default font, they stop working. They show up as a space. If I set the font to something like "XTerm*faceName: terminus:pixelsize=14", they show up as 'box' characters, but when I try to use something like Monospace, they work again.
    How is it possible that the manual declaration of the -fixed- font does not work, when it's exactly like the default font used when no special font is specified?
    Thanks for the answers,
    — Nanthiel

    When I use Terminus, it shows the Š and š, Č and č, Ž and ž.
    Try adding this line to your xterm settings to make sure xterm is UTF-8 compatible:
    xterm*utf8:    2
    I have no trouble using Terminus to show the Slovenian characters with locale set to 'en_US.utf8'.
    Terminus is installed to '/usr/share/fonts/local/'.  For Xorg to be fully aware of Terminus, your '/etc/X11/xorg.conf' must contain a section like this:
    Section "Files"
    FontPath "/usr/share/fonts/local"
    EndSection
    If you aren't using an 'xorg.conf' file, I think you can create one containing just those three lines.  (I could be wrong.)  X has to be restarted for any changes in 'xorg.conf' to take effect.
    After restarting, Terminus will show up when you run 'xfontsel'.  Xfontsel is a small app that lets you display fonts and their names in the old Xorg format:   "-*-terminus-medium-*-*-*-12-*-*-*-*-*-iso8859-2".  The program won't show all the valid options, just some of them.
    Now you should be able to load Terminus in xterm with
    xterm*font:    -*-terminus-medium-*-*-*-12-*-*-*-*-*-iso8859-2
    Change the "12" to change the size of the font.
    Or,
    xterm*faceName:    xft:Terminus:size=12:hinting=true
    Again, change the "12" to change the size of the font.
    I hope this helps you.  I've learned much about xterm, and I now know that I prefer the font Inconsolata over Terminus.


    Hello everyone, I got issue with this UUID=a4334520-a3be-6004-b82d-b4042a6f1ade. Pls advise or suggest us in this case. TIME UUID SUNW-MSG-ID Apr 19 18:48:48.6895 a4334520-a3be-6004-b82d-b4042a6f1ade FMD-8000-2K 100% defect.sunos.fmd.module Problem i