Unicode datatype

Hi,
· Unicode database (changing the database character set to AL32UTF8) is working fine; we tested with an ASP.NET application and are able to see English, Japanese, Arabic and Urdu.
· Unicode datatype (database character set left at the default "WE8MSWIN1252") with the column datatype NVARCHAR2: we are able to enter any language, but when querying from the database the values are displayed as inverted question marks ("???????"). We tried the above as per the Oracle documentation (Globalization Support Guide, Chapter 5, Supporting Multilingual Databases with Unicode, a96529.pdf) but it still displays junk characters only.
Is there any client setting I am missing here?
Thanks in Advance.

There is no character set that supports both Arabic and Japanese data other than a Unicode character set. The restriction you are encountering should only apply to string literals that you are trying to load into Unicode datatypes. For literals in this scenario, where the database character set does not support the characters in the literal string, the only workaround is to use UNISTR. This problem with Unicode datatypes and literals was addressed in 10gR2.
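For illustration, a minimal sketch of the literal problem and the UNISTR workaround (the table is hypothetical; a WE8MSWIN1252 database character set is assumed):

CREATE TABLE msg (id NUMBER, txt NVARCHAR2(100));

-- A plain literal is interpreted in the database character set first,
-- so Japanese characters are lost before reaching the NVARCHAR2 column:
INSERT INTO msg VALUES (1, 'こんにちは');   -- arrives as ?????

-- UNISTR takes Unicode code points directly and bypasses that conversion:
INSERT INTO msg VALUES (2, UNISTR('\3053\3093\306B\3061\306F'));   -- こんにちは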

Similar Messages

  • CMP Bean's Field Mapping with Oracle Unicode Datatypes

    Hi,
    I have a CMP bean which maps to an RDBMS table, and the table has some Unicode datatypes such as NVARCHAR and NCHAR.
    Now I was wondering how OC4J / the Oracle EJB container handles queries with Unicode datatypes.
    What do I have to do in order to properly develop and deploy a CMP bean whose fields are mapped onto the database UNICODE fields?
    Regards
    atif

    Based on the sun-cmp-mapping file descriptor
    <schema>Rol</schema>
    a file called Rol.schema is expected to be packaged with the ejb.jar. Did you run capture-schema after you created your table?

  • Unicode datatypes vs. Unicode databases

    We have a legacy system that is implemented in PowerBuilder and C++. We are pretty sure about which columns we need to convert to support Unicode. Besides, some of our clients have a corporate standard (AMERICAN_AMERICA.WE8MSWIN1252) for NLS_LANG in their Oracle client setup.
    Therefore, we decided to use the Unicode datatypes approach and only change the identified columns to NVARCHAR2 and NCLOB, with AL16UTF16 as the national character set. Our understanding is that this is the safe and easy way for our situation, since both C++ and PowerBuilder support the UTF-16 standard by default. This will not require any change to the NLS_LANG setup.
    However, one of our clients seems to have strong opinions against the Unicode datatypes option and would rather migrate the entire database to Unicode. The client mentioned that "AL16UTF16 has to be used in a Unicode database with UTF8 or AL32UTF8 as the database character set in order to display characters correctly". To our knowledge we have not heard of this requirement, and I didn't see anything like it in the official Oracle documentation.
    Could anyone advise whether the Unicode database option is really better than the Unicode datatypes option?
    Thanks!

    "Besides, some of our clients have a corporate standard (AMERICAN_AMERICA.WE8MSWIN1252) for NLS_LANG on the Oracle client setup."
    This might even be a necessary requirement, since they are using the Windows-1252 code page.
    "AL16UTF16 has to be used in a Unicode database with UTF8 or AL32UTF8 as the database character set in order to display characters correctly."
    Hard to say without knowing what they refer to specifically.
    They might have been thinking about the requirement to use AL32UTF8, depending on how binds are done. If you insert string literals, which are interpreted in the database character set, into NCHAR columns, you obviously need a database character set that supports all the characters you are going to insert (i.e. AL32UTF8 in the Unicode case).
    This is described very clearly by Sergiusz Wolicki in Re: store/retrieve data in lang other than eng when CHARACTERSET is not UTF8.
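    As a minimal sketch of that literal-versus-bind distinction (the table and column here are hypothetical): a value bound from an N-typed PL/SQL variable travels in the national character set, whereas a plain literal is first interpreted in the database character set.

    DECLARE
      -- The literal itself would be damaged in a WE8MSWIN1252 database,
      -- so the value is built from Unicode code points with UNISTR.
      v_name NVARCHAR2(100) := UNISTR('caf\00E9');   -- 'café'
    BEGIN
      -- Binding the NVARCHAR2 variable into an NCHAR/NVARCHAR2 column
      -- stays in the national character set; no lossy conversion occurs.
      INSERT INTO products (name_n) VALUES (v_name);
    END;
    /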

  • To Determine Unicode Datatype encoding

    Hi,
    Going through the Oracle documentation, I found that the Oracle Unicode datatypes (NCHAR and NVARCHAR2) support the AL16UTF16 and UTF8 Unicode encodings.
    Is there a way to determine which encoding is being used by the Oracle Unicode datatypes through the OCI interface?
    Thanks,
    Sachin

    That's a rather hard problem. You would, realistically, either have to make a bunch of simplifying assumptions based on the data or buy a commercial tool that does character set detection.
    There are a number of different ways to encode Unicode (UTF-8, UTF-16, UTF-32, UCS-2, etc.) and a number of different versions of the Unicode standard. UTF-8 is one of the more common ways to encode Unicode. It is popular precisely because the first 128 characters (which is the majority of what you'd find in English text) are encoded identically to 7-bit ASCII. Depending on the size and contents of the document, it may not be possible to determine whether the data is encoded in 7-bit ASCII, UTF-8, or one of the various single-byte character sets that are built on 7-bit ASCII (ISO 8859-15, Windows-1252, ISO 8859-1, etc.).
    Depending on how many different character sets you are trying to distinguish between, you'd have to look for binary values that are valid in one character set and not in another.
    Justin
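    For what it's worth, the encoding configured for the Unicode datatypes can also be read straight from the data dictionary (a sketch; the same value is what an OCI client would need to look up):

    SELECT value
    FROM   nls_database_parameters
    WHERE  parameter = 'NLS_NCHAR_CHARACTERSET';
    -- returns AL16UTF16 or UTF8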

  • Unicode - DataType Currency error

    Hi experts.
    Please can you help me?
    I used the method below instead of a MOVE clause.
    I can transfer <wa_table> to buffer.
    But I found ##―ఀ###ఀ ###ఀ contents in the buffer.
    This field of the buffer is a CURR(15,2) datatype.
    Can you please tell me how to solve this problem?
    Thanks.
    DATA: buffer(30000) OCCURS 10 WITH HEADER LINE.
    DATA: st_table TYPE REF TO data,
          tb_table TYPE REF TO data.
    FIELD-SYMBOLS: <wa_table>  TYPE ANY,
                   <it_table>  TYPE STANDARD TABLE,
                   <wa_table2> TYPE ANY.
    CREATE DATA: tb_table TYPE TABLE OF (query_table), " object create
                 st_table TYPE (query_table).
    ASSIGN: tb_table->* TO <it_table>, " internal table
            st_table->* TO <wa_table>. " work area
    SELECT * FROM (query_table)
      INTO CORRESPONDING FIELDS OF TABLE <it_table>
      WHERE (options).
    LOOP AT <it_table> INTO <wa_table>.
      CLEAR buffer.
      CALL METHOD cl_abap_container_utilities=>fill_container_c
        EXPORTING
          im_value               = <wa_table>
        IMPORTING
          ex_container           = buffer
        EXCEPTIONS
          illegal_parameter_type = 1
          OTHERS                 = 2.
      APPEND buffer.
    ENDLOOP.

    Hello Kalidas
    Here is a simple "smoke test". Try to see if the system accepts the following statement:
    " NOTE: try to write the packed field only
    WRITE: / i_z008-packed_field.
    If you receive an error, you cannot WRITE packed values directly.
    Alternative solution: write your structure to a string.
    DATA:
      ls_z008  LIKE LINE OF i_z008,
      ld_string  TYPE string.
    LOOP AT i_z008 INTO ls_z008.
      CALL METHOD cl_abap_container_utilities=>fill_container_c
        EXPORTING
          im_value = ls_z008
        IMPORTING
          ex_container = ld_string.
      WRITE: / ld_string.
    ENDLOOP.
    Regards
      Uwe

  • Moving to unicode datatype for an entire database - SQL Server 2012

    Hi,
    I have a SQL Server 2012 database with many tables containing char and varchar columns.
    I'd like to quickly change the char columns to nchar and the varchar columns to nvarchar.
    Is there a way to do this? Many thanks.

    Hello,
    Creating a script could do it quickly as shown in the following article:
    http://blog.sqlauthority.com/2010/10/18/sql-server-change-column-datatypes/
    But creating the scripts may take you some time.
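    For illustration, a minimal sketch that generates the varchar-to-nvarchar ALTER statements (assuming the affected columns are not part of indexes, constraints or computed columns, each of which needs extra handling):

    SELECT 'ALTER TABLE ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
         + ' ALTER COLUMN ' + QUOTENAME(c.name)
         + ' NVARCHAR(' + CASE WHEN c.max_length = -1 THEN 'MAX'
                               ELSE CAST(c.max_length AS VARCHAR(10)) END + ')'
         + CASE WHEN c.is_nullable = 1 THEN ' NULL' ELSE ' NOT NULL' END + ';'
    FROM sys.columns c
    JOIN sys.tables  t  ON t.object_id     = c.object_id
    JOIN sys.schemas s  ON s.schema_id     = t.schema_id
    JOIN sys.types   ty ON ty.user_type_id = c.user_type_id
    WHERE ty.name = 'varchar';
    -- Run the equivalent with 'char'/NCHAR for the fixed-length columns.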
    You will find more options here:
    https://social.technet.microsoft.com/Forums/sqlserver/en-US/e7b70add-f390-45ee-8e3e-8ed6c6fa0f77/changing-data-type-to-the-fields-of-my-tables?forum=transactsql
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com

  • UTF-16 datatypes in Solaris 10

    Unicode datatypes in Solaris support UTF-32. The datatype wchar_t is 4 bytes long. Are there datatypes in Solaris that support 2 bytes instead of 4?
    I require this to support UTF-16 in my application. This is because my application "talks" to a Windows client (via an interface shared between Solaris and Windows) that supports Unicode datatypes 2 bytes long. (wchar_t is 2 bytes long in Windows).

    Ah, one of the guys I work with figured it out. /etc/services file was messed up. It's working now. :)

  • Reporting Services Unicode Parameters Cause Performance Issues

    When I create a report using string parameters, Reporting Services sends the SQL to SQL Server with an N prefix on the string parameters. This is the behavior even when the underlying data table has no Unicode datatypes. This causes SQL Server to do a scan instead of a seek on these queries. Can this behavior be modified to send the parameters as non-Unicode text?

    Workaround to overcome SSRS report performance problems caused by the Unicode conversion issue:
    I used a new parameter (of type Internal) which collects/duplicates the original parameter values as a comma-separated string.
    In the report dataset query, parse the comma-separated string into a variable table using the XML trick, then use the variable table in the WHERE ... IN clause.
    Steps:
    Create a new Internal parameter (call it InternalParameter1).
    Under Default Values -> Specify values, add the expression: =Join(Parameters!OrigParameter1.Value, ",")
    Pass/use InternalParameter1 in your dataset query.
    Example code:
    DECLARE @InternalParameter1 NVARCHAR(MAX)
    SET @InternalParameter1 = '100167600,100167601,4302853605,4030753556,4026938411'
    --- Load the comma-separated string into a temp variable table ---
    SET ARITHABORT ON
    DECLARE @T1 AS TABLE (PARALIST VARCHAR(100))
    INSERT @T1
    SELECT Split.a.value('.', 'VARCHAR(100)') AS CVS
    FROM ( SELECT CAST('<M>' + REPLACE(@InternalParameter1, ',', '</M><M>') + '</M>' AS XML) AS CVS ) AS A
    CROSS APPLY CVS.nodes('/M') AS Split(a)
    --- Report dataset query ---
    SELECT CONTRACT_NO, report fields… FROM mytable
    WHERE CONTRACT_NO IN (SELECT PARALIST FROM @T1) -- use the temp variable table in the WHERE clause
    Mahesh

  • Unicode in PHP/Oracle

    While using multiple languages in PHP/Oracle I am facing some issues.
    I gave input as: Portuguese: O próximo vôo à
    and got output as: Portuguese: O pr?ximo v?o ?
    Some special characters are replaced by a '?' mark.
    I am using PHP 5 and Oracle 9, with OCI for connecting.
    The NLS_CHARACTERSET is UTF8
    and the Unicode datatype is NVARCHAR2(100).
    Please can someone help me solve this?

    Hi,
    using ALTER SESSION in PHP I changed my NLS_LANGUAGE to PORTUGUESE,
    but the data is still stored as ? for some special characters.
    Session NLS parameters (PARAMETER => VALUE):
    NLS_LANGUAGE            => PORTUGUESE
    NLS_TERRITORY           => AMERICA
    NLS_CURRENCY            => $
    NLS_ISO_CURRENCY        => AMERICA
    NLS_NUMERIC_CHARACTERS  => .,
    NLS_CALENDAR            => GREGORIAN
    NLS_DATE_FORMAT         => DD-MON-RR
    NLS_DATE_LANGUAGE       => PORTUGUESE
    NLS_CHARACTERSET        => UTF8
    NLS_SORT                => WEST_EUROPEAN
    NLS_TIME_FORMAT         => HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT    => DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT      => HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT => DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY       => $
    NLS_NCHAR_CHARACTERSET  => AL16UTF16
    NLS_COMP                => BINARY
    NLS_LENGTH_SEMANTICS    => BYTE
    NLS_NCHAR_CONV_EXCP     => FALSE
    Input I gave: próximo vôo à
    Output: pr??ximo v??o ??

  • Migrate/upgrade to support unicode XML

    environment:
    DB: 9.2.0.4, 9.2.0.5
    XMLTYPE on CLOB
    We have a new requirement to support full Unicode in XML and data. In the Oracle9i Database Globalization Support Guide, Example 5-4, Unicode Solution with Unicode Datatypes, fits our case perfectly; it talks about putting data into NVARCHAR2 and NCLOB.
    1. Is it possible to set up XMLType to use NCLOB instead of CLOB (on the 9.2.0.5 platform)?
    2. It looks like (from reading past forum threads) there are still some Unicode issues with XMLType. Do you have any recommendations?
    (Using shredded types in 9.2.0.n is out because of issues with support for the W3C timezone specs for datetime ....)
    thanks in advance,

    Could you elaborate on this response? I am having trouble inserting an XML document into an XMLType table, and I'm using the AL32UTF8 character set. I receive:
    ORA-19202: Error occurred in XML processing
    LPX-00217: invalid character 169 (\u00A9)
    Error at line 2
    The input file has Unicode-encoded data (&#x00a9;). Here is the insert code:
    CREATE OR REPLACE FUNCTION getClobDocument(
      filename IN VARCHAR2,
      charset  IN VARCHAR2 DEFAULT NULL)
      RETURN CLOB DETERMINISTIC
    IS
      file        BFILE  := BFILENAME('JLEONXDB', filename);
      charContent CLOB   := ' ';
      targetFile  BFILE;
      lang_ctx    NUMBER := DBMS_LOB.default_lang_ctx;
      charset_id  NUMBER := 0;
      src_offset  NUMBER := 1;
      dst_offset  NUMBER := 1;
      warning     NUMBER;
    BEGIN
      IF charset IS NOT NULL THEN
        charset_id := NLS_CHARSET_ID(charset);
      END IF;
      targetFile := file;
      DBMS_LOB.fileopen(targetFile, DBMS_LOB.file_readonly);
      DBMS_LOB.loadclobfromfile(charContent, targetFile,
        DBMS_LOB.getlength(targetFile), src_offset, dst_offset,
        charset_id, lang_ctx, warning);
      DBMS_LOB.fileclose(targetFile);
      RETURN charContent;
    END;
    /

    INSERT INTO hdot VALUES (XMLType(getClobDocument('browningCUIM.xml','AL32UTF8')));

  • Issue exporting data from SQL to Excel

    I have MS SQL 2008 Developer version and Visual Studio 2008. I'm using the SSIS Import and Export Wizard in VS2008 to create a simple package that exports data from a table, via a SQL query, to an Excel file (.xlsx), but I got the following error messages:
    [Destination - Query [37]] Error: SSIS Error Code DTS_E_OLEDBERROR.  An OLE DB error has occurred. Error code: 0x80040E21.
    [Destination - Query [37]] Error: Cannot create an OLE DB accessor. Verify that the column metadata is valid.
    [SSIS.Pipeline] Error: component "Destination - Query" (37) failed the pre-execute phase and returned error code 0xC0202025.
    The SQL query is
    SELECT [BusinessEntityID]
          ,[PersonType]
          ,[NameStyle]
          ,[Title]
          ,[FirstName]
          ,[MiddleName]
          ,[LastName]
      FROM [AdventureWorks2008].[Person].[Person]
    Any help will be appreciated. Thanks.
    A Fan of SSIS, SSRS and SSAS

    Another way is to save the package created by the Import/Export Wizard, open it in BIDS, and add a Derived Column task before the Excel destination to explicitly cast the columns to the required Unicode datatypes.
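    The same conversion can also be done in the source query itself, so the Excel destination receives Unicode columns from the start; a minimal sketch (the NVARCHAR sizes are assumptions):

    SELECT CAST([BusinessEntityID] AS NVARCHAR(20)) AS [BusinessEntityID]
          ,CAST([PersonType]       AS NVARCHAR(2))  AS [PersonType]
          ,CAST([NameStyle]        AS NVARCHAR(1))  AS [NameStyle]
          ,CAST([Title]            AS NVARCHAR(8))  AS [Title]
          ,CAST([FirstName]        AS NVARCHAR(50)) AS [FirstName]
          ,CAST([MiddleName]       AS NVARCHAR(50)) AS [MiddleName]
          ,CAST([LastName]         AS NVARCHAR(50)) AS [LastName]
    FROM [AdventureWorks2008].[Person].[Person];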
    Visakh

  • NCHAR in Oracle 9i

    Hi,
    I'm using Oracle 9i and implementing internationalization for my application. The Oracle site says that Oracle 9i supports NCHAR, which is exclusively a Unicode datatype. But I found that VARCHAR2 also stores Unicode characters perfectly.
    Q: So why do we use the NCHAR datatypes?
    It also says that we do not need to change the database character set; we just need to set the national character set to UTF8 or AL16UTF16 and add a column of the NCHAR datatype to store Unicode in a non-Unicode database.
    But I tried this:
    Database character set: WE8MSWIN1252
    National character set: AL16UTF16 (default)
    and I still can't store my Unicode data.
    Please advise. Thanks

    Perhaps this information will clarify your question:
    NCHAR
    You use the NCHAR datatype to store fixed-length (blank-padded if necessary) national character data. How the data is represented internally depends on the national character set specified when the database was created, which might use a variable-width encoding (UTF8) or a fixed-width encoding (AL16UTF16). Because this type can always accommodate multibyte characters, you can use it to hold any Unicode character data.
    The NCHAR datatype takes an optional parameter that lets you specify a maximum size in characters. The syntax follows:
    NCHAR[(maximum_size)]
    Because the physical limit is 32767 bytes, the maximum value you can specify for the length is 32767/2 in the AL16UTF16 encoding, and 32767/3 in the UTF8 encoding.
    You cannot use a symbolic constant or variable to specify the maximum size; you must use an integer literal.
    If you do not specify a maximum size, it defaults to 1. The value always represents the number of characters, unlike CHAR which can be specified in either characters or bytes.
    my_string NCHAR(100); -- maximum size is 100 characters
    The maximum width of an NCHAR database column is 2000 bytes. So, you cannot insert NCHAR values longer than 2000 bytes into an NCHAR column.
    If the NCHAR value is shorter than the defined width of the NCHAR column, Oracle blank-pads the value to the defined width.
    You can interchange CHAR and NCHAR values in statements and expressions. It is always safe to turn a CHAR value into an NCHAR value, but turning an NCHAR value into a CHAR value might cause data loss if the character set for the CHAR value cannot represent all the characters in the NCHAR value. Such data loss can result in characters that usually look like question marks (?).
    NVARCHAR2
    You use the NVARCHAR2 datatype to store variable-length Unicode character data. How the data is represented internally depends on the national character set specified when the database was created, which might use a variable-width encoding (UTF8) or a fixed-width encoding (AL16UTF16). Because this type can always accommodate multibyte characters, you can use it to hold any Unicode character data.
    The NVARCHAR2 datatype takes a required parameter that specifies a maximum size in characters. The syntax follows:
    NVARCHAR2(maximum_size)
    Because the physical limit is 32767 bytes, the maximum value you can specify for the length is 32767/2 in the AL16UTF16 encoding, and 32767/3 in the UTF8 encoding.
    You cannot use a symbolic constant or variable to specify the maximum size; you must use an integer literal.
    The maximum size always represents the number of characters, unlike VARCHAR2 which can be specified in either characters or bytes.
    my_string NVARCHAR2(200); -- maximum size is 200 characters
    The maximum width of a NVARCHAR2 database column is 4000 bytes. Therefore, you cannot insert NVARCHAR2 values longer than 4000 bytes into a NVARCHAR2 column.
    You can interchange VARCHAR2 and NVARCHAR2 values in statements and expressions. It is always safe to turn a VARCHAR2 value into an NVARCHAR2 value, but turning an NVARCHAR2 value into a VARCHAR2 value might cause data loss if the character set for the VARCHAR2 value cannot represent all the characters in the NVARCHAR2 value. Such data loss can result in characters that usually look like question marks (?).
    Joel Pérez
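    As a minimal illustration of the data loss described above (hypothetical table; a WE8MSWIN1252 database character set is assumed):

    CREATE TABLE t (v VARCHAR2(20), nv NVARCHAR2(20));
    INSERT INTO t (nv) VALUES (UNISTR('\4E16\754C'));  -- two CJK characters
    UPDATE t SET v = nv;   -- NVARCHAR2 -> VARCHAR2 conversion
    SELECT v FROM t;       -- shows ?? because WE8MSWIN1252 cannot represent them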

  • Which ojdbc14.jar JDBC driver to use for Oracle 10g database

    When ODI is installed there is an Oracle JDBC driver in place in the drivers folder (ojdbc14.jar).
    When we connect to an Oracle datastore, point to a table, and use the 'reverse' function to populate the columns, it sort of works OK but does not bring back the datatypes properly. This happens when the Oracle table has the Unicode character datatypes NCHAR and NVARCHAR2. If a table has CHAR and VARCHAR2 it is all OK, but any table that has a Unicode datatype has a problem.
    Is this likely to be the JDBC driver?
    We have tried replacing this ojdbc14.jar with the older classes12 and this, as expected, did not resolve the issue.
    We then tried replacing it with the latest 10.2.0.4 ojdbc14.jar, but again no difference.
    Does anyone have experience with Oracle JDBC drivers, what release level to use, and using them against Unicode datatypes in tables?
    Regards

    Our problem is that when we use 'reverse' to populate the columns from a physical table in an Oracle database, ODI is obviously 'seeing' the Oracle table and correctly understanding the columns in the table and defining them in its model, but wherever a column in the Oracle database has a datatype of NCHAR or NVARCHAR2 it fails to populate the datatype or the length of these columns. If I manually try to specify the datatype, these two Unicode datatypes do NOT exist in the pull-down list of datatypes.
    I see what you are asking - whether these datatypes are actually defined as datatypes within the actual technology. I can't access my lab right now but will check as soon as I can. Thanks for the suggestion.

  • Is CL8MSWIN1251 a subset of AL16UTF16 (or other *UTF*) at all?

    DB: Oracle 9.2.0.1.0 on Linux
    Client: 9.2.0.1.0 on Windows 2000/XP, Bulgarian regional options
    Hi,
    I am having big trouble using CL8MSWIN1251 as the client charset, WE8ISO8859P1 as the database charset and AL16UTF16 as the national charset. I do this because I want to make only a few pieces of data multilingual, using NCHAR datatypes. Furthermore, I chose AL16UTF16 because some Oracle docs say this type is the most compatible with Windows clients, since it is fixed-width and a strict superset of UCS-2.
    1) It seems to me that SQL*Net converts all Cyrillic characters to ? (or whatever, but identical anyway) during the modification phase. Then on the fetch phase I get all ? for Cyrillic.
    2) When I set the client charset to WE8ISO8859P1, everything works fine, BUT I don't think it's a clever idea, since it could have implications in the future.
    3) When I insert/update using WE8ISO8859P1 and fetch using CL8MSWIN1251, I get incorrect characters, because of the recoding between the two.
    All this makes me think there is some bug with CL8MSWIN1251. Can someone help me?
    One more question: when I set the client charset to AL16UTF16, on DB login the Oracle client does not complain about an "invalid NLS parameter" (though it's not a valid client setting), but OEM 2.2 shows "Connection closed", dbExpress in Borland Delphi 7 raises a "Mapping failed" exception, etc. The UTF8 setting works for the client, but why recode between them if I decide to do it the hard way and work with raw UTF-16 in Delphi? For what reason is AL16UTF16 not a valid client charset, yet not reported as such?
    Please throw some light on these, especially the first question.
    With best regards,
    Anton Kolev

    Thank you for your response.
    So,
    In the SQL CHAR datatypes I store texts in plain English. In the NCHAR (Unicode) datatypes I need to store texts in English, French and other Western European languages with accented and other special characters, Cyrillic in the WIN1251 encoding, and probably some sort of Japanese in the future.
    The idea is to ship the final product to different worldwide customers without modifications; they would just need to set the proper charset on their clients (and it's usually set properly by the Oracle client installer on Windows). This also means there is a low probability that different languages will be stored in the DB at the same time. My only intent is to reduce the customizations for each customer.
    Please note that if and when I work out all the internals of this Unicode stuff, there is no problem switching the DB charset to UTF, but I have to be sure it will work properly with at least Cyrillic in the first place, which is not the case for now.
    The tools I use are Golden32 (latest) execute & edit, plain SQL, SQL*Plus on Linux, Borland Delphi 7 with ADO (MDAC 2.7) or dbExpress, and the table editor of DBA Studio (OEM 2.2).
    My tests are quite simple: just INSERT and SELECT on a single NVARCHAR2(500) column with different client charsets (e.g. insert with an 8-bit charset and select with UTF, etc.).
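    A client-charset-independent variant of that test (a sketch against a hypothetical table) is to insert the Cyrillic text as explicit code points and compare it with rows inserted as plain literals:

    CREATE TABLE nchar_test (id NUMBER, txt NVARCHAR2(500));
    -- 'тест' as explicit Unicode code points, bypassing the client
    -- character set and NLS_LANG conversion entirely:
    INSERT INTO nchar_test VALUES (1, UNISTR('\0442\0435\0441\0442'));
    SELECT id, txt FROM nchar_test;
    -- If this row comes back intact while literal inserts do not, the
    -- corruption is happening in the client/literal conversion path.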
    I hope this helps you to check out my problem.
    With best regards,
    Anton

  • Is it possible to assign a particular font to a table?

    I am building an archaeology-related database and have to store representations of cuneiform tablets in a table.
    The glyphs used to represent Akkadian and Babylonian cuneiform are non-alphabetic and are obtained from either a University of Chicago font or a Middle Eastern font.
    The table containing the tablet information would need to be able to use one of these two fonts, I believe. The rest of the database would use standard fonts.
    Is it possible to assign a font to only a single table? Is there some other way I can represent the cuneiform?
    One of the requirements is that the cuneiform must be queryable, so I can't use bitmaps.
    Thanks in advance for any suggestions or pointers to where I can find information about solving the problem.
    George Sundell

    A follow-up to my original message. I have been reading the Globalization manual. It seems that my solution should possibly be to use Unicode datatypes.
    It seems as if the only way to do this is by specifying the Unicode font as the national character set, and it appears that I must choose AL16UTF16.
    I have learned that a new Unicode font set incorporating the characters for the Akkadian, Sumerian, etc. transliterations is in preparation.
    Would it be possible to substitute the new font for the AL16UTF16 one? If so, can someone point me to the documentation on how to accomplish this?
    Many thanks.
    George Sundell

    hi, how can we dropped some of the segments in we02 according to my condition.can somebody tell me how to find out the program means where we can see the idoc data is populating in segments.in we02 i am finding 6 segments with E1EDP10.i want to drop