NLS_LANG in case of Japanese data

Hi,
My database character set is AL32UTF8 and NLS_LENGTH_SEMANTICS is BYTE. I am working with Japanese Kanji and Kana data, storing it in VARCHAR2 columns, and I am using SQL*Loader to load the Japanese data into Oracle tables.
When I load data with SQL*Loader and NLS_LANG=AMERICAN_AMERICA.AL32UTF8, I get a "data too large" error. But when I load the same data with NLS_LANG=AMERICAN_AMERICA.JA16SJIS, it loads successfully.
Is AMERICAN_AMERICA.JA16SJIS the right character set to use for loading this data into Oracle tables?
If not, please tell me how to load Japanese data using the AMERICAN_AMERICA.AL32UTF8 character set.
My DB Table is given below:
CREATE TABLE TEST_KANJI
(
DATA_TEST1 VARCHAR2(5 CHAR),
DATA_TEST2 VARCHAR2(5 CHAR)
);

Thanks orafad.
Actually, our database character set is AL32UTF8. We are using SQL*Loader to load Japanese data into the database, and our database NLS_LENGTH_SEMANTICS is BYTE.
With the client NLS_LANG set to AMERICAN_AMERICA.AL32UTF8, we performed the following operations:
SQL> CREATE TABLE TEST_SJIS_CHAR
2 (
3 data_1 VARCHAR2(3 CHAR),
4 data_2 VARCHAR2(3 CHAR)
5 );
Table created.
SQL> CREATE TABLE TEST_SJIS_BYTE
2 (
3 data_1 VARCHAR2(3 BYTE),
4 data_2 VARCHAR2(3 BYTE)
5 );
Table created.
SQL> INSERT INTO test_sjis_char VALUES('私金魚','私金魚');
INSERT INTO test_sjis_char VALUES('私金魚','私金魚')
ERROR at line 1:
ORA-12899: value too large for column "ODIDEV"."TEST_SJIS_CHAR"."DATA_1"
(actual: 4, maximum: 3)
SQL> INSERT INTO test_sjis_byte VALUES('私金魚','私金魚');
INSERT INTO test_sjis_byte VALUES('私金魚','私金魚')
ERROR at line 1:
ORA-12899: value too large for column "ODIDEV"."TEST_SJIS_BYTE"."DATA_1"
(actual: 6, maximum: 3)
SQL> INSERT INTO test_sjis_byte VALUES('金魚魚','魚金魚');
INSERT INTO test_sjis_byte VALUES('金魚魚','魚金魚')
ERROR at line 1:
ORA-12899: value too large for column "ODIDEV"."TEST_SJIS_BYTE"."DATA_1"
(actual: 6, maximum: 3)
SQL> INSERT INTO test_sjis_char VALUES('金魚魚','魚金魚');
INSERT INTO test_sjis_char VALUES('金魚魚','魚金魚')
ERROR at line 1:
ORA-12899: value too large for column "ODIDEV"."TEST_SJIS_CHAR"."DATA_1"
(actual: 4, maximum: 3)
Now we change the client NLS_LANG to AMERICAN_AMERICA.JA16SJIS and perform the same operations:
SQL> CREATE TABLE TEST_SJIS_CHAR
2 (
3 data_1 VARCHAR2(3 CHAR),
4 data_2 VARCHAR2(3 CHAR)
5 );
Table created.
SQL> CREATE TABLE TEST_SJIS_BYTE
2 (
3 data_1 VARCHAR2(3 BYTE),
4 data_2 VARCHAR2(3 BYTE)
5 );
Table created.
SQL> INSERT INTO test_sjis_char VALUES('私金魚','私金魚');
1 row created.
SQL> INSERT INTO test_sjis_byte VALUES('私金魚','私金魚');
INSERT INTO test_sjis_byte VALUES('私金魚','私金魚')
ERROR at line 1:
ORA-12899: value too large for column "ODIDEV"."TEST_SJIS_BYTE"."DATA_1"
(actual: 9, maximum: 3)
SQL> INSERT INTO test_sjis_byte VALUES('金魚魚','魚金魚');
INSERT INTO test_sjis_byte VALUES('金魚魚','魚金魚')
ERROR at line 1:
ORA-12899: value too large for column "ODIDEV"."TEST_SJIS_BYTE"."DATA_1"
(actual: 9, maximum: 3)
SQL> INSERT INTO test_sjis_char VALUES('金魚魚','魚金魚');
1 row created.
SQL> SELECT length(data_1), length(data_2) FROM test_sjis_char;
LENGTH(DATA_1) LENGTH(DATA_2)
3 3
3 3
SQL> SELECT length(data_1), length(data_2) FROM test_sjis_byte;
no rows selected
SQL>
SQL> SELECT lengthb(data_1), lengthb(data_2) FROM test_sjis_char;
LENGTHB(DATA_1) LENGTHB(DATA_2)
9 9
9 9
SQL> SELECT lengthb(data_1), lengthb(data_2) FROM test_sjis_byte;
no rows selected
SQL> SELECT data_1 || data_2 from test_sjis_char;
DATA_1||DATA
私金魚私金魚
金魚魚魚金魚
SQL> SELECT data_1 || data_2 from test_sjis_byte;
no rows selected.
We actually want to make a final decision on the character set, but we need a strong reason to justify it. Please help me resolve this issue.
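For reference, a quick sanity check of what actually gets stored (nothing fancy, just standard LENGTH/LENGTHB/DUMP against the TEST_SJIS_CHAR table above) is to dump the raw bytes; three kanji stored correctly in an AL32UTF8 database should show a byte length of 9, three bytes per character:
-- Compare character vs. byte length and show the raw stored bytes.
-- DUMP(expr, 1016) prints the character set name plus the bytes in hex.
SELECT data_1,
       LENGTH(data_1)     AS char_len,
       LENGTHB(data_1)    AS byte_len,
       DUMP(data_1, 1016) AS raw_bytes
FROM   test_sjis_char;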

Similar Messages

  • Problem in working with Japanese data

    Hello,
    We have an application which should work for both English and Japanese languages. It works fine for English data, but in the case of Japanese we are facing problems. The application has a language setting where the user can select either English or Japanese; this changes the language of the labels, which we handle through ResourceBundles. The data that is entered, however, can be independent of this setting: even in the English setting, the data entered or displayed could be Japanese. We are now facing problems in various places:
    1. When we enter Japanese data and save it, it gets copied into the bean, and if there is any error during the save, the same data from the bean comes back on the screen. This data does not appear properly in Japanese.
    2. When we try to retrieve already saved data from the database, it does not show properly in Japanese.
    We are using our own 'toUnicode' function, which converts the data to Unicode before saving it to the database. We have also tried different encodings such as SJIS / Shift_JIS / ISO-2022-JP in various places, i.e. in the 'page' directive or in the HTML meta tag, or combinations of the two. Nothing works reliably. For some combinations the data retained in the first case is correct but cannot be saved; sometimes it works with Unicode, sometimes without. Some combinations affect the Japanese labels as well; otherwise at least the labels are displayed properly.
    The same code, with just the Shift_JIS setting in the 'page' directive, worked well on IIS Server for both English and Japanese.
    What could be the problem in WebLogic? Does it matter whether WebLogic is installed on an English or a Japanese machine?
    Kindly answer ASAP, since the project is getting delayed.
    Thanking in anticipation...
    -Medha

    Hi,
    is there a difference in handling Japanese and Chinese chars? If not I
    might be able to give you some hints.
    Daniel
    -----Original Message-----
    From: JSB [mailto:[email protected]]
    Posted At: Monday, February 12, 2001 6:22 PM
    Posted To: internationalization
    Conversation: Problem in working with Japanese data
    Subject: Re: Problem in working with Japanese data
    Have you solved this yet, and how?

  • NLS_LANG that supports both Japanese and Spanish

    Hi All,
    I need to insert Japanese text into my custom table, so I set up NLS_LANG as American_America.JA16SJIS to support the Japanese script.
    The Japanese data got inserted into the table.
    Now, however, the Spanish characters I have show up as junk characters.
    I tried setting NLS_LANG to UTF8. That did not help with inserting the Japanese characters, though it did insert the Spanish properly.
    Can you please let me know which setting supports both?
    Thanks,
    LR

    When you convert to Shift-JIS, any Spanish characters you have also need to be defined in that character set. Is the client side Windows? Which Shift-JIS standard does the code page you are using support, and does it include all the characters?
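    If it helps, a rough way to check whether particular characters survive a conversion to Shift-JIS is to round-trip them with CONVERT and compare; anything with no JA16SJIS mapping comes back as a replacement character. This is only a sketch (run it in the AL32UTF8 database; the sample text is made up):
    -- Round-trip a mixed Spanish/Japanese string through JA16SJIS and back.
    -- Characters that are not defined in JA16SJIS will not survive the trip.
    SELECT 'ñ año 金魚' AS original,
           CONVERT(CONVERT('ñ año 金魚', 'JA16SJIS', 'AL32UTF8'),
                   'AL32UTF8', 'JA16SJIS') AS round_tripped
    FROM   dual;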

  • Unable to unload correct Japanese data using the ODISqlUnloader tool

    Hi,
    I am unable to unload correct Japanese data to my local system using the ODISqlUnloader tool.
    The database NLS_LANG setting is AL32UTF8, and I can easily see the data in my SQL*Plus window, but when I unload it I get errors or junk characters for the Japanese text. Which character set should I use to unload Japanese characters properly?
    regards,
    Palash

    Hi all,
    Our problem is with SqlUnloader, which is unable to unload Japanese data, so we built a new open tool for unloading the data.
    In our case the ojdbc14.jar in the driver folder of the ODI installation has JDBC Implementation-Version "Oracle JDBC Driver version - 10.1.0.5.0". When we replaced it with one of Specification-Version "Oracle JDBC Driver version - 9.0.2.0.0" it works well, but with the 10.1.0.5.0 ojdbc14.jar, SQLUNLOADER shows java.sql.SQLException: Invalid character encountered in: failAL32UTF8Conv, as my database character set is AL32UTF8.
    Can you confirm whether this is a bug that prevents unloading Japanese data with OdiSqlUnloader, or am I doing something wrong?
    Note: my open tool works fine with the ojdbc14.jar of version 9.0.2.0.0 but fails with driver version 10.1.0.5.0. To test this I changed the ojdbc14.jar present in the driver folder of the ODI installation directory. Will this affect any other part of ODI execution? My ODI version is 10.1.3.2.0.

  • Problem in displaying dynamic Japanese data in JSP

    Hi all,
    I am trying to display Japanese data (an address) fetched from an RDBMS database, but only ???????? gets displayed. First the data is fetched from the database, then populated into XML, and a databean is created from the XML.
    Steps taken:
    I have put charset=shift-jis in the JSP response header.
    I made sure the encoding of the XML file is UTF-8 for parsing. I am using a DocumentBuilder object for parsing the XML.
    I am not sure what needs to be done to display Japanese text properly.
    I would appreciate a solution, as I have been stuck with this for a long time.
    Regards,
    prasad

    Your JSP pages need to contain a JSP directive that specifies the encoding for the pages (one that supports Japanese).
    I use:
    <%@ page contentType="text/html;charset=UTF-8" %>
    You could also use:
    <%@ page contentType="text/html;charset=SHIFT-JIS" %>
    When you view the page in your browser, also remember to set the page encoding and ensure your OS has the proper fonts installed.

  • Problem in searching japanese data in DB2

    Hi,
    I have Japanese data stored in my DB2 database, and I have to search for that data in DB2. For that I have written a test program.
    Following is the SQL query:
    "select adrnr from sapr3.sadr where land1='JP' and adrnr = '1000027051' and name1 like '"+sname+"%'"
    where sname is the Japanese data stored in DB2.
    Before running this query I retrieve sname from the database using:
    "select name1 from sapr3.sadr where adrnr='1000027051'";
    But it is not able to find the same record when I also provide name1.
    Please guide.
    Thanks in advance.

    Hi,
    I am working on three operating systems, but let's take the example of Linux.
    Oracle version: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Prod
    Character set: AL32UTF8
    NLS settings are:
    NLS_LANGUAGE               AMERICAN
    NLS_TERRITORY               AMERICA
    NLS_CURRENCY               $
    NLS_ISO_CURRENCY          AMERICA
    NLS_NUMERIC_CHARACTERS     .,
    NLS_CALENDAR               GREGORIAN
    NLS_DATE_FORMAT          DD-MON-RR
    NLS_DATE_LANGUAGE     AMERICAN
    NLS_CHARACTERSET          AL32UTF8
    NLS_SORT                         BINARY
    NLS_TIME_FORMAT          HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT     DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT     HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT     DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY     $
    NLS_NCHAR_CHARACTERSET     AL16UTF16
    NLS_COMP                    BINARY
    NLS_LENGTH_SEMANTICS     CHAR
    NLS_NCHAR_CONV_EXCP     FALSE
    Regards,
    Vikas Kumar

  • Japanese Data Corruption in SOAP Response

    Hello,
    I'm in an environment where I need to pass Japanese data through a SOAP service using an Element. When I return the Element in the response to my SOAP request, I receive corrupted data in the XML. The XML data of the Element on the server side looks perfect. Does anyone have any suggestions about what might be wrong?
    I'm using Oracle SOAP from iAS v1.0.2.2.
    My client code looks like this:
    Call call = new Call();
    call.setTargetObjectURI( serviceId );
    call.setMethodName( "select" );
    call.setEncodingStyleURI( Constants.NS_URI_LITERAL_XML );
    Vector params = new Vector();
    params.addElement( new Parameter( "max",
    int.class,
    new Integer( 50 ),
    Constants.NS_URI_SOAP_ENC ) );
    Any help would be greatly appreciated! Thanks!
    -ann

    Hello,
    Still no luck, but I did try sending the XML as a string. The results were greatly improved, but still not right. The client code looks like this:
    Locale.setDefault(new Locale("ja","JP"));
    Call call = new Call();
    call.setTargetObjectURI( serviceId );
    call.setMethodName( "select" );
    call.setEncodingStyleURI( Constants.NS_URI_SOAP_ENC );
    Vector params = new Vector();
    // Specify to return a maximum of 50 records
    params.addElement( new Parameter( "max",
    int.class,
    new Integer( 50 ),
    null ) );
    The resulting Japanese data has the correct characters, but includes extraneous ones as well.
    Can anyone help me with this?
    Thanks!
    -ann

  • Should be able to enter both Japanese data and English data into the database without

    Scenario 1:
    Database character set: UTF8
    National character set: UTF8
    String type: VARCHAR2
    Problem: unable to enter more than 1/3rd of the specified field length when entering Kanji (Japanese) data.
    Scenario 2:
    Database character set: JA16EUC/JA16SJIS
    National character set: JA16EUCFixed/JA16SJISFixed
    String type: NVARCHAR2
    Problem: unable to enter/retrieve English data written into the database, but it works fine with Japanese.
    Scenario 3:
    Database character set: UTF8
    National character set: JA16EUCFixed
    String type: NVARCHAR2
    Problem: unable to enter/retrieve English data written into the database, but it works fine with Japanese.
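    For Scenario 1, the "1/3rd of the field length" symptom matches byte length semantics with 3-byte UTF-8 kanji. A hedged sketch of one possible fix (the table and column names here are invented for illustration) is to declare lengths in characters rather than bytes:
    -- Count column lengths in characters instead of bytes; the session default is
    -- shown here, but VARCHAR2(10 CHAR) on an individual column works the same way.
    ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR;
    CREATE TABLE demo_kanji_address (
      address VARCHAR2(10)   -- 10 characters, i.e. up to 40 bytes in AL32UTF8
    );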

    You will not be able to display the process form, or edit those values from the view profile screen.
    You would need to create custom fields that are mapped from the process form to the user's profile. Then you would need to create user form update triggers from the User Defined Fields that when the user changes them, they get pushed to the target process form by adding those task names into the provisioning process definition. This would then trigger the updates to the target resource.
    -Kevin

  • How to use Japanese data in BO 6.5

    Post Author: sinha_ips
    CA Forum: Desktop Intelligence Reporting
    Hi, I am using BO 6.5 and trying to use Japanese data in my report. The database I am querying supports Japanese, and the Oracle client on my machine also supports Japanese. I have also installed the Japanese language set on my machine. But all this is not giving me the results: Japanese data is still coming out as junk. Any help is appreciated. Thanks & Regards, Ashish

    Are you able to log into InfoView then? And does the problem happen when you select a web report? Do you see any errors when you select the link? Does this happen with all reports, or is there a specific type of report that this happens to?
    With your general supervisor account, try to create a simple report from a simple database such as efashion, with one object.  Make sure that you save the report without refresh on open.  Then try again.
    Basically, you'll want to narrow the problem down to:
    1. report related
    2. deployment related
    3. browser related.
    4. authorization related.
    Run simple tests to narrow down what the problem is related to.

  • XML containing Japanese data

    Hello,
    We have an application which should work for both English and Japanese languages. We are using the MVC pattern, where the data coming from the browser is stored in a bean and saved into the database using a separate persistence manager (PM) class. When multiple records are retrieved from the database, we use XML to return the results from the PM class to the controller. This works fine if the records contain only English data. When we try to return records containing Japanese data, we get an 'Illegal XML character' exception while parsing the string. Can somebody tell us which encoding should be used in the XML to make it work? We are using parsers other than the ones available in WebLogic; we don't know how to override the WebLogic XML settings. Could the problem be because of that?
    Kindly answer ASAP, since the project is getting delayed.
    Thanking in anticipation...
    -Medha

    Hi Medha,
    OK, if you are writing to a file you have to specify the encoding as
    well, e.g.
    File f = new File(sFileName);
    FileOutputStream fos = new FileOutputStream(f);
    Writer w = new OutputStreamWriter(fos, "UTF8");
    w.write(s);
    w.close();
    fos.close();
    I was thinking that you are reading from a file. Is that correct? What
    I'm doing is parsing XML Files containing simplified Chinese and Thai
    characters. I store them in a UTF-8 encoded XML File. I read this as
    described in my first posting. I'm using a new version of the xerces dom
    parser, but I suppose that should not make a difference.
    Could you post the stacktrace? And maybe a short example xml file in
    UTF-8 encoding? I could try if my parser setup works with it.
    Daniel
    -----Original Message-----
    From: Medha [mailto:[email protected]]
    Posted At: Thursday, January 18, 2001 1:49 PM
    Posted To: xml
    Conversation: XML containing Japanese data
    Subject: Re: XML containing Japanese data
    Hi,
    We tried JISAutoDetect, UTF-8, UTF-16, ISO-2022-JP and various JIS***
    encodings in XML file.
    We are using following code to parse the XML string
    File filePointer=new File("/","test.xml");
    FileWriter fileWriter=new FileWriter(filePointer);
    fileWriter.write(xmlString);
    fileWriter.close();
    factory = DocumentBuilderFactory.newInstance();
    builder = factory.newDocumentBuilder();
    document = builder.parse(filePointer);
    We are using SAX parser version 1.0 from SUN. We are using jaxp.jar
    and parser.jar files. They are located in the classpath before weblogic
    stuff.
    Hope you can suggest some solution.
    Thanks,
    Medha
    Daniel Hoppe <[email protected]> wrote:
    Hi Medha,
    some questions in order to help:
    - which encoding are you currently using?
    - Did you specify the encoding in your XML file (e.g. <?xml
    version="1.0" encoding="UTF-8"?>)?
    - Are you reading the file from which you are building the strings
    correctly (specify the encoding as well?)
         e.g.:
         InputStream is = new FileInputStream(sFileName);
    InputStreamReader isr = new InputStreamReader(is,"UTF8");
    Reader in = new BufferedReader(isr);
    - which parser are you using and where in the classpath is it located
    (especially before or behind the weblogic stuff including servicepacks)
    Regards,
    Daniel

  • Problem in displaying dynamic Japanese data

    I am trying to display Japanese data (an address) fetched from an RDBMS database, but only ???????? gets displayed.
    First the data is fetched from the database, then populated into XML; the XML is in UTF-8 encoding, and a databean is created from the XML.
    Steps taken:
    I have put charset=shift-jis in the JSP response header.
    I made sure the encoding of the XML file is UTF-8.
    I am not sure what needs to be done to display Japanese text properly.
    cheers
    svp

    Tien,
    I tried this but it did not work.
    I was curious how Japanese static text is put into JSP code. I want to know whether I followed the right way, because if I change the encoding to UTF-8, my JSP displays question marks even for static content.

  • Case type Default data

    Hello Team,
    In the case type default data in GTS for the procedure CMCD - Customs Document, the system asks for the fields below:
    Foreign Trade Organization
    Document Type
    Goods Direction
    For the SPL and embargo checks in the customs document, I wanted to determine a separate case type, but the above input fields will be unique and the system does not allow me to assign a separate case type.
    Can you help me with how to determine a different case type based on the area?
    Note:
    I wanted to determine the case types as below:
    In the SPL check for a customs document, e.g. EXPORD - case type SPLE
    In the embargo check for a customs document, e.g. EXPORD - case type EMBE
    Thanks & Regards
    Rahul

    Some ideas:
    One method would be to place a before-insert trigger on the data and upper-case it.
    You could add another column, populate it via a trigger as UPPER(), and then build a unique index on that column.
    See the following Oracle documentation on case-insensitive searches and comparisons:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14225/ch5lingsort.htm#i1008800
    Oracle support documents:
    Develop Global Application: Case-insensitive searches #342960.1
    How to use Contains As Case-Sensitive And -Insensitive On The Same Column #739868.1
    How To Implement Case Insensitive Query in BC4J ? #337163.1
    HTH -- Mark D Powell --
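    A minimal sketch of the shadow-column idea described above (the table, column, and trigger names are invented for illustration):
    -- Shadow column kept in upper case by a trigger, with a unique index on it.
    ALTER TABLE customers ADD (customer_name_upper VARCHAR2(100 CHAR));
    CREATE OR REPLACE TRIGGER customers_name_upper_trg
    BEFORE INSERT OR UPDATE OF customer_name ON customers
    FOR EACH ROW
    BEGIN
      :NEW.customer_name_upper := UPPER(:NEW.customer_name);
    END;
    /
    CREATE UNIQUE INDEX customers_name_upper_ux
      ON customers (customer_name_upper);
    -- Case-insensitive lookups then compare against the shadow column.
    SELECT * FROM customers WHERE customer_name_upper = UPPER(:search_name);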

  • Can anybody explain me difference between test cases and test data

    Hi All,
    Can anybody explain to me the difference between test cases and test data, and the testing procedure for an FS (functional spec)?
    Thanks & Regards,
    Smitha

    Hi,
    A test case is a procedure for how to test a particular piece of functionality. It contains the data to be used for testing a particular requirement of the given functional spec, and it also records whether the desired functionality is fulfilled or not.
    Regards
    Pratap

  • Retrieve Japanese data and store into a file

    Hi,
    I have a table which contains Japanese data. My database character set is AL32UTF8. I am able to insert Japanese data into the Oracle table, and I am able to view the inserted data using SQL*Plus.
    I wrote a Java program to write the table's data to a file, fetching the data with a SQL query. When I use ojdbc14.jar version 9.0.2.0.0, I am able to write the Japanese data to the file, but when I use ojdbc14.jar version 10.2.0.3.0, I get junk data in the data file.
    Please help me solve this problem.

    The BIP Bursting APIs allow you to hook in code, in which you could achieve what you mention - but would require programming it!
    Regards,
    Gareth

  • Issue with LPAD/RPAD when used with Japanese data

    Hi,
    I am trying to apply LPAD/RPAD to Japanese data, but it gives different results than expected:
    LPAD/RPAD returns less data than the length passed.
    Database: Oracle 10g
    Could someone help me with this?
    SQL query: select length('アップリカ'), lpad('アップリカ',5,' '), length(lpad('アップリカ',5,' ')) from dual;
    Output:
    LENGTH('アップリカ') --> 5
    LPAD('アップリカ',5,' ') --> アッ
    LENGTH(LPAD('アップリカ',5,' ')) --> 3
    Thanks in advance!

    I have done a little research and it seems that your problem is connected with the way the RPAD function works.
    It actually counts 'display units' instead of 'real' characters, and in Japanese one 'real' character may occupy more than one display unit.
    Oracle gives some info about it in its official documentation:
    http://download.oracle.com/docs/cd/B19306_01/olap.102/b14346/dml_x_reserved010.htm
    Some more info is given here:
    http://www.rhinocerus.net/forum/databases-oracle-tools/426832-oracle-bug-rpad-japanese-kanji-character-oracle-10gr2-utf8database.html
    A suggestion is also given there:
    "If you really want the number of characters then you can use something like:
    RPAD(str, n - LENGTHC(str), 'c')
    Use LENGTHB if the requested width is in bytes, LENGTHC if in characters.
    Or:
    SUBSTR(str || RPAD('c', n, 'c'), 1, n)
    Use SUBSTRB if the requested width is in bytes, SUBSTR if in characters.
    In the above:
    * str is the string to be padded.
    * 'c' is the fill character (usually blank) -- we assume a single-byte char.
    * n is the requested width in bytes or characters."
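    To make the quoted workaround concrete, here is a hedged, runnable variant of the SUBSTR/RPAD form for the value from the original post, padding on the right to 8 characters with a single-byte space:
    -- Append enough pad characters first, then cut back to n characters (not display units).
    SELECT SUBSTR('アップリカ' || RPAD(' ', 8, ' '), 1, 8)         AS padded_right,
           LENGTH(SUBSTR('アップリカ' || RPAD(' ', 8, ' '), 1, 8)) AS char_len
    FROM   dual;
    LENGTH here comes back as 8 characters, whereas the plain LPAD/RPAD call stops at the display width.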
