How to convert a particular string to the multi-byte character set (MBCS) in MFC (VC++)

I use the Unicode character set in my MFC application (VC++).
Right now I get output like ठ桔湡潹⁵潦⁲獵 and I want to convert those characters into readable English text (i.e. MBCS), but I still need Unicode in my application. When I switch the whole project to the multi-byte character set, the output is correct English, but other objects (e.g. the TreeCtrl selection) behave wrongly. So I need to convert only that particular string to MBCS.
How can I do that in MFC?

I assume the string read from your hardware device is a plain "C" string (an ANSI string). This type of string has one byte per character, while a Unicode (UTF-16) string on Windows has two bytes per code unit.
From the situation you explained, I'd convert the string returned by the hardware to a Unicode string, e.g. with MultiByteToWideChar and CP_ACP. You may also use mbstowcs or a similar function to convert your string to a Unicode string.
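For example (a minimal sketch, error handling omitted; in MFC you can also use the ATL conversion helpers CA2W / CW2A for the same purpose):

    #include <windows.h>
    #include <string>

    // ANSI ("C" string, one byte per character) -> UTF-16, using the system ANSI code page.
    std::wstring AnsiToWide(const char* ansi)
    {
        int len = MultiByteToWideChar(CP_ACP, 0, ansi, -1, NULL, 0); // length incl. terminator
        std::wstring wide(len, L'\0');
        MultiByteToWideChar(CP_ACP, 0, ansi, -1, &wide[0], len);
        wide.resize(len - 1); // drop the embedded null terminator
        return wide;
    }

    // The reverse direction, for handing one particular Unicode string to an MBCS consumer.
    std::string WideToAnsi(const wchar_t* wide)
    {
        int len = WideCharToMultiByte(CP_ACP, 0, wide, -1, NULL, 0, NULL, NULL);
        std::string ansi(len, '\0');
        WideCharToMultiByte(CP_ACP, 0, wide, -1, &ansi[0], len, NULL, NULL);
        ansi.resize(len - 1);
        return ansi;
    }

This way the rest of the application stays Unicode and only the one string is converted.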
Best regards
Bordon
Note: Posted code pieces may not have good programming style and may not be perfect. It is also possible that they do not work in all situations. Code pieces are only intended to explain something particular.

Similar Messages

  • Multi byte character set

    Hi,
    I am going to create an Oracle 8i database on Linux. I want it to support the English, Italian and Chinese languages.
    My questions are:
    1. What parameters have to be set at the OS level for this multi-byte character set?
    2. How do I set the character set at the database level at creation time?
    3. I am also going to migrate an existing database into the new one, but the old one contains only English and Italian. What parameters do we have to set in the new database for the migration?
    Kindly provide some solutions.
    rgds..

    1) I'm not sure what you're asking here.
    2) While creating the database, you would want to set the NLS_CHARACTERSET to UTF8 (I don't believe AL32UTF8 was available in 8i).
    3) How are you migrating the database? Via export & import? If so, you'd need to ensure that the NLS_LANG on the client machine(s) that do the actual export and import are set appropriately.
    Justin

  • Converting from Single Byte to Multi Byte character set

    Hello,
    I'm trying to migrate one schema, including data, from a 10g (10.1.0.2.0) DB with IW8ISO8859P8 character set, to a 10g (10.2.0.1.0) DB with AL32UTF8 character set.
    The original tables are using VARCHAR2 columns, including some VARCHAR2(1) columns.
    I'm trying to use exp and imp for the task, but during import I'm receiving errors like:
    IMP-00019: row rejected due to ORACLE error 12899
    IMP-00003: ORACLE error 12899 encountered
    ORA-12899: value too large for column "SHAMAUT"."TIKIM"."GAR_SET" (actual: 2, maximum: 1)
    These errors are not limited to the one-character columns only.
    Is there a way to export/import the data with AL32UTF8 in mind, so the system will automatically convert the data properly?
    Thanks for the help,
    Arie.

    What you have is not really a conversion problem but a space problem. By default, table columns are created with the length semantics given by the init parameter NLS_LENGTH_SEMANTICS:
    If NLS_LENGTH_SEMANTICS = BYTE,
    then 1 character = 1 byte, whatever the db character set.
    If NLS_LENGTH_SEMANTICS = CHAR,
    then a column holds its declared number of characters, whatever their byte size in the db character set.
    If this parameter is changed, it is only taken into account for newly created tables or columns: existing columns are not changed.
    See http://download-uk.oracle.com/docs/cd/B10501_01/server.920/a96529/ch2.htm#104327
    The only solution I see is to enlarge your VARCHAR2 columns before running the import...
    Message was edited by:
    Pierre Forstmann

  • Crystal XI R2 exporting issues with double-byte character sets

    NOTE: I have also posted this in the Business Objects General section with no resolution, so I figured I would try this forum as well.
    We are using Crystal Reports XI Release 2 (version 11.5.0.313).
    We have an application that can be run using multiple cultures/languages, chosen at login time. We have discovered an issue when exporting a Crystal report from our application while using a double-byte character set (Korean, Japanese).
    The original text when viewed through our application in the Crystal preview window looks correct:
    性能 著概要
    When exported to Microsoft Word, it also looks correct. However, when we export to PDF or even RPT, the characters are not being converted. The double-byte characters are rendered as boxes instead. It seems that the PDF and RPT exports are somehow not making use of the linked fonts Windows provides for double-byte character sets. This same behavior is exhibited when exporting a PDF from the Crystal report designer environment. We are using Tahoma, a TrueType font, in our report.
    I did discover some new behavior that may or may not have any bearing on this issue. When a text field containing double-byte characters is just sitting on the report in the report designer, the box characters are displayed where the Korean characters should be. However, when I double click on the text field to edit the text, the Korean characters suddenly appear, replacing the boxes. And when I exit edit mode of the text field, the boxes are back. And they remain this way when exported, whether from inside the design environment or outside it.
    Has anyone seen this behavior? Is SAP/Business Objects/Crystal aware of this? Is there a fix available? Any insights would be welcomed.
    Thanks,
    Jeff

    Hi Jeff
    I searched on the forums and got the following information:
    1) If font linking is enabled on your device, you can examine the registry by enumerating the subkeys of the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\FontLink\SystemLink to determine the mappings of linked fonts to base fonts. You can add links by using Regedit to create additional subkeys. Once you have located that registry key, highlight the face name of the font you want to link to and then, from the Edit menu, click Modify. On a new line in the "Value data" field of the Edit Multi-String dialog box, enter "path and file to link to", "face name of the font to link". (A small sketch for reading these mappings programmatically follows this reply.)
    2) "Fonts in general, especially TrueType and OpenType, are u201CUnicodeu201D.
    Since you are using a 'true type' font, it may be an Unicode type already.However,if Bud's suggestion works then nothing better than that.
    Also, could you please check the output from crystal designer with different version of pdf than the current one?
    Meanwhile, I will look out for any additional/suitable information on this issue.
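    For completeness, a small C++ sketch for reading one of the SystemLink mappings mentioned in 1) programmatically (error handling kept minimal; the value name "Tahoma" is only an example):

    #include <windows.h>
    #include <cwchar>
    #include <iostream>

    int main()
    {
        const wchar_t* path = L"SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\FontLink\\SystemLink";
        HKEY key;
        if (RegOpenKeyExW(HKEY_LOCAL_MACHINE, path, 0, KEY_READ, &key) != ERROR_SUCCESS)
            return 1;
        // Each value is a REG_MULTI_SZ list of "font file,face name" pairs.
        wchar_t data[4096];
        DWORD size = sizeof(data), type = 0;
        if (RegQueryValueExW(key, L"Tahoma", NULL, &type,
                             reinterpret_cast<BYTE*>(data), &size) == ERROR_SUCCESS &&
            type == REG_MULTI_SZ)
        {
            for (const wchar_t* p = data; *p != L'\0'; p += wcslen(p) + 1)
                std::wcout << p << L"\n";  // e.g. "MSGOTHIC.TTC,MS UI Gothic"
        }
        RegCloseKey(key);
        return 0;
    }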

  • How to add a new character set encoding?

    Hello,
    can anybody please explain to me, how to add a new character set encoding to Mac OS Tiger?
    I have two Mac laptops, a new one with Snow Leopard and an older one with Tiger, and on the old one I cannot enable the "Russian (DOS)" character set encoding anywhere, which I need in order to read some old text files.
    On the Snow Leopard, this encoding is present in the list of available encodings of TextWrangler, but not in TIger.
    If i have understood correctly, this is not a problem of TextWrangler, and the same encodings are available systemwide.
    So, the question is: how to add new encodings to Tiger (or to Mac OS in general)?
    Thanks.

    I think possibly that's in the Get Info window of Finder?
    I don't think either that or the input menu have any effect on available encoding choices. Adding languages to system prefs/international/languages can do that, but once you have added Russian there, I don't know of any way to add an additional Russian encoding (there are quite a number of them).

  • How to set or change character set for Oracle 10 XE

    Installing via RPM on Linux.
    I need to have my database set to use UTF8 and WE8ISO8859P15 as the character set and national character set. (Think those are in the right order. If not, it's the opposite.)
    If I do a standard "yum localinstall rpm-file-name," it installs Oracle. I then run the "/etc/init.d/oracle-xe configure" command to set my ports.
    Every time I do this, I end up with AL32/AL16 character sets.
    I finally hardcoded ISO-8859-15 as the Linux 'locale' character set and set this in the various bash profile config files. Now, I end up with WE8MSWIN1252 as the character set and AL16UTF16 as the national character set.
    I've tried editing the createdb.sh script to hard code the character set types and then copied that file over the original while the RPM is still installing. I've tried editing the nls_lang.sh script to hard code the settings there and copied over the original shell script while the RPM is still installing.
    Doesn't matter.
    HOW can I do this? If I wait until after the RPM is installed and try running the createdb.sh file, then it ends up creating a database but not doing everything properly. I end up missing pfiles or spfiles. Various errors crop up.
    If I try to change them from the sql command line, I am told that the new character set must be a superset of the old one. It fails.
    I'm new to Oracle, so I'm in uncharted waters. In short, I need the community's help. It's important for the app I'm running, and attempting to migrate from, to maintain these character sets.
    Thanks.

    I don't think you can change Oracle XE character set. When downloading Oracle XE you must choose to download:
    - either the Universal Edition using AL32UTF8
    - or the Western European Edition using WE8MSWIN1252.
    See http://download.oracle.com/docs/cd/B25329_01/doc/install.102/b25144/toc.htm#BABJACJJ
    If you really need UTF8 instead of AL32UTF8, you need to use Oracle Standard Edition or Oracle Enterprise Edition:
    these editions allow you to select the database character set at database creation time, which is not really possible with Oracle XE.
    Note that changing the environment variable NLS_LANG has nothing to do with changing the database character set:
    http://download.oracle.com/docs/cd/B25329_01/doc/install.102/b25144/toc.htm#BABBGFIC

  • Multi Language Character sets

    Does anyone know if the Oracle ODBC drivers support multi-language character sets?
    I am trying to retrieve Chinese (PRC) characters from the database (they are stored correctly and I have the Microsoft multilanguage service pack installed). ODBC won't retrieve them correctly (it actually stops after 1 row).
    If I use the OLE DB driver, it does retrieve them. Is there a converter inside the OLE DB driver that ODBC doesn't have, or is there a setting I'm missing? (The tool I want to use this with does not recognize OLE DB; is there a way to make it use OLE DB while defining an ODBC connection?)
    Cheers
    Chris

    The version number you're providing doesn't seem to make any sense to me. Oracle's ODBC drivers are versioned to match the version of the Oracle client they work with, i.e. 8.1.7.8 is the latest Oracle ODBC driver for the 8.1.7 Oracle client. In the Oracle 7 days, there was a 2.5x series of Oracle ODBC drivers. So far as I'm aware, there's never been a 4.x series of Oracle ODBC drivers.
    AMERICAN_AMERICAN.UTF8 would be the option I'd tend to prefer on the client, particularly if you'll be working with more than just Chinese data (i.e. English & Chinese). I'm not sure what AMERICAN_AMERICAN.<some Chinese character set> would end up doing. There's a lot of info out there about NLS settings (including an NLS discussion forum) that might be helpful to you.
    What OLE DB provider are you using that works?
    Justin

  • Multi-byte character

    If the database character set is UTF-8, can I use VARCHAR2 to store multi-byte characters, or do I still
    have to use NVARCHAR2?
    Also, how many bytes (max) can VARCHAR2(1) and NVARCHAR2(1) store in the case of a UTF-8 character set?

    If you create VARCHAR2(1) with byte semantics, you possibly cannot store anything at all, as your first character might be multi-byte (in Oracle's UTF8 a single character can occupy up to 3 bytes, up to 4 in AL32UTF8).
    My recommendation would be to consider defining columns by character rather than by byte:
    CREATE TABLE tbyte (
    testcol VARCHAR2(20));
    CREATE TABLE tchar (
    testcol VARCHAR2(20 CHAR));
    The second will always hold 20 characters, without regard to the byte count.
    Demos here:
    http://www.morganslibrary.org/library.html
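    To see concretely how many bytes a single character can occupy in UTF-8, here is a minimal standalone C++ sketch (assuming the source file itself is saved as UTF-8):

    #include <cstdio>
    #include <cstring>

    int main()
    {
        // UTF-8 storage per character: ASCII 1 byte, accented Latin 2,
        // CJK 3, supplementary characters such as emoji 4.
        const char* samples[] = { "a", "é", "中", "😀" };
        for (const char* s : samples)
            std::printf("%s -> %zu byte(s)\n", s, std::strlen(s));
        return 0;
    }

    This is why a VARCHAR2(1) column with byte semantics may be unable to hold even one non-ASCII character.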

  • SQL*Loader-282: Unable to locate character set handle for character set ID

    How do I fix this error that I'm getting when running SQL*Loader and connecting to an Oracle 10g database? I'm on the 10g client.
    SQL*Loader-282: Unable to locate character set handle for character set ID (46).
    Here are the NLS parameter settings in the database (select * from v$nls_parameters):
    PARAMETER     VALUE
    NLS_LANGUAGE     AMERICAN
    NLS_TERRITORY     AMERICA
    NLS_CURRENCY     $
    NLS_ISO_CURRENCY     AMERICA
    NLS_NUMERIC_CHARACTERS     .,
    NLS_CALENDAR     GREGORIAN
    NLS_DATE_FORMAT     DD-MON-RR
    NLS_DATE_LANGUAGE     AMERICAN
    NLS_CHARACTERSET     WE8ISO8859P15
    NLS_SORT     BINARY
    NLS_TIME_FORMAT     HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT     DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT     HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT     DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY     $
    NLS_NCHAR_CHARACTERSET     AL16UTF16
    NLS_COMP     BINARY
    NLS_LENGTH_SEMANTICS     CHAR
    NLS_NCHAR_CONV_EXCP     TRUE
    Message was edited by:
    evo

    Yep that's it, thanks, I found out about V$NLS_PARAMETERS:
    SQL> select * from v$nls_parameters;
    PARAMETER                  VALUE
    NLS_LANGUAGE               AMERICAN
    NLS_TERRITORY              AMERICA
    NLS_CURRENCY               $
    NLS_ISO_CURRENCY           AMERICA
    NLS_NUMERIC_CHARACTERS     .,
    NLS_CALENDAR               GREGORIAN
    NLS_DATE_FORMAT            DD-MON-RR
    NLS_DATE_LANGUAGE          AMERICAN
    NLS_CHARACTERSET           WE8ISO8859P1
    NLS_SORT                   BINARY
    NLS_TIME_FORMAT            HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT       DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT         HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT    DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY          $
    NLS_NCHAR_CHARACTERSET     AL16UTF16
    NLS_COMP                   BINARY
    NLS_LENGTH_SEMANTICS       BYTE
    NLS_NCHAR_CONV_EXCP        FALSE
    Given that 9i is not available for Solaris x86, how do I change NLS_NCHAR_CHARACTERSET
    to something that will work, like UTF-8?
    Thanks
    Ed

  • Multi-byte character encoding issue in HTTP adapter

    Hi Guys,
    I am facing a problem with multi-byte character conversion.
    Problem:
    I am posting data from SAP CRM to a third-party system, using XI as middleware. I am using the HTTP adapter to communicate from XI to the third-party system.
    I have set the XML encoding to UTF-8 in the XI payload manipulation block.
    When I try to post Chinese characters from SAP CRM to the third-party system, junk characters arrive on the other side. My assumption is that it is double encoding.
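    If that assumption is right, each non-ASCII byte of the already-encoded UTF-8 text is being encoded a second time. A small C++ sketch of the effect (illustration only, not our actual code):

    #include <cstdio>
    #include <string>

    // Re-encode every byte of an already-UTF-8 string as if it were a
    // Latin-1 code point: the classic "double encoding" corruption.
    std::string doubleEncode(const std::string& utf8)
    {
        std::string out;
        for (unsigned char b : utf8) {
            if (b < 0x80) {
                out += static_cast<char>(b);               // ASCII survives
            } else {                                       // one byte becomes two
                out += static_cast<char>(0xC0 | (b >> 6));
                out += static_cast<char>(0x80 | (b & 0x3F));
            }
        }
        return out;
    }

    int main()
    {
        std::string chinese = "\xE4\xB8\xAD";             // UTF-8 for U+4E2D (中)
        std::string garbled = doubleEncode(chinese);      // arrives as "ä¸­"
        std::printf("%zu bytes -> %zu bytes\n", chinese.size(), garbled.size());
        return 0;
    }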
    Can you please guide me how to proceed further.
    Please let me know if you need more info.
    Regards,
    Srini

    Srinivas,
    Can you go through the url:
    UTF-8 encoding problem in HTTP adapter
    ---Satish

  • Mapping trademark character ® from character set 1406 to character set 1505

    Hello!
    I have following problem.
    In character set 1505 I do not see the trademark character ®.
    Character set 1505 is for Russia. The character can be invisible on the screen, but it should be visible on paper.
    This character is at code 0+174 in character set 1406.
    How can I copy this character from character set 1406 to character set 1505?
    Is it possible?
    Please help me.
    Regards
    Bogdan

    Hello
    I have solved this problem myself.
    The solution is very simple. In SapScript the following string needs to be inserted: <347>
    This is the registered sign for the 1505 character set.
    Regards
    Bogdan

  • Conversion error, from character set 4102 to character set 4103

    Hi,
    We've developed a JCO server (in Java), with an ABAP report calling the function provided by the JCO server.
    MetaData:
         static {
              repository = new Repository("SMSRepository");
              fmeta = new JCO.MetaData("ZSMSSEND");
              fmeta.addInfo("TO", JCO.TYPE_CHAR, 255, 0, 0, JCO.IMPORT_PARAMETER, null);
              fmeta.addInfo("CONTENT", JCO.TYPE_CHAR, 255, 0, 0, JCO.IMPORT_PARAMETER, null);
              fmeta.addInfo("RETN", JCO.TYPE_CHAR, 255, 0, 0, JCO.EXPORT_PARAMETER, null);
              repository.addFunctionInterfaceToCache(fmeta);
         }
    Server parameters:
           Properties prop = new Properties();
           prop.put("jco.server.gwhost","shaw2k07");
           prop.put("jco.server.gwserv","sapgw01");
           prop.put("jco.server.progid","JCOSERVER01");
           prop.put("jco.server.unicode","1");
           srv = new SMSServer(prop,repository);
    If we run the JCO server on both my client machine (from Developer Studio) and on the WAS machine (as a stand-alone Java program), everything is OK. On the ABAP side, the SM59 Unicode test says the destination is a Unicode system, and the ABAP report calling the function runs smoothly.
    But when we packaged this JCO server into a web application and deployed it to WAS, a problem occurred. The SM59 Unicode test still says the destination is a Unicode system, but the ABAP report ends with an ABAP dump:
    Conversion error between two character set
    RFC_CONVERSION_FIELD
    Conversion error "RETN" from character set 4102 to character set 4103
    A conversion error occurred during the execution of a Remote Function
    Call. This happened either when the data was received or when it was
    sent. The latter case can only occur if the data is sent from a Unicode
    system to a non-Unicode system.
    I read the jrfc.trc log; it shows the server receives data in code page 4103 (that's OK) but sends data in 4102 (that's the problem). 4102 is UTF-16 big-endian and 4103 is UTF-16 little-endian. Our system is Windows on Intel 32-bit architecture, so based on Note 552464 it should be 4103.
    Why does it send data (the Java JCO server sending the output parameter to ABAP) in 4102?
    What's the problem? Thank you very much!
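    For reference, the only difference between the two code pages is the byte order within each 16-bit unit; a tiny C++ sketch:

    #include <cstdio>

    int main()
    {
        unsigned short ch = 0x0041;  // the character 'A'

        // UTF-16 LE (code page 4103): low byte first  -> 41 00
        unsigned char le[2] = { (unsigned char)(ch & 0xFF), (unsigned char)(ch >> 8) };
        // UTF-16 BE (code page 4102): high byte first -> 00 41
        unsigned char be[2] = { (unsigned char)(ch >> 8), (unsigned char)(ch & 0xFF) };

        std::printf("LE: %02X %02X   BE: %02X %02X\n", le[0], le[1], be[0], be[1]);
        // A byte order mark (U+FEFF) at stream start tells the receiver
        // which layout it is reading: FF FE = LE, FE FF = BE.
        return 0;
    }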
    Best Regards,
    Xiaoming Yang
    Message was edited by:
            Xiaoming Yang

    Hello Experts,
    Any replies on this?
    I am also getting a similar kind of error.
    Do you have any idea on this?
    Thanks and Best Regards,
    Suresh

  • Where is the Multi-Byte Character.

    Hello All
    While reading data from the DB, our middleware interface gave the following error:
    java.sql.SQLException: Fail to convert between UTF8 and UCS2: failUTF8Conv
    I understand that this failure is because of a multi-byte character, and that the 10g driver fixes this bug.
    I suggested that the integration admin team replace the current 9i driver with the 10g one, and they are on it.
    In addition to this, I wanted to show the data-entry team where exactly the failure occurred.
    I asked them for the dat file and downloaded it; my intention was to find out exactly where
    the multi-byte character that caused this failure is located.
    I wrote the following code to check this.
    import java.io.*;
    public class X {
        public static void main(String[] ar) {
            int linenumber = 1, columnnumber = 1;
            long totalcharacters = 0;
            try {
                File file = new File("inputfile.dat");
                FileInputStream fin = new FileInputStream(file);
                byte fileContent[] = new byte[(int) file.length()];
                fin.read(fileContent);
                for (int i = 0; i < fileContent.length; i++) {
                    columnnumber++; totalcharacters++;
                    // Note: a (signed) Java byte can never be > 300, so this
                    // test is always false and nothing is ever flagged.
                    if (fileContent[i] < 0 && fileContent[i] != 10 && fileContent[i] != 13 && fileContent[i] > 300) // if invalid
                    { System.out.println("failure at position: " + i); break; }
                    if (fileContent[i] == 10 || fileContent[i] == 13) // if new line
                    { linenumber++; columnnumber = 1; }
                }
                fin.close();
                System.out.println("Finished successfully, total lines : " + linenumber + " total file size : " + totalcharacters);
            } catch (Exception e) {
                e.printStackTrace();
                System.out.println("Exception at Line: " + linenumber + " columnnumber: " + columnnumber);
            }
        }
    }
    But this reports that the file is good, with no issue,
    whereas the middleware interface fails with the above exception while reading exactly the same input file.
    Am I doing anything wrong in locating that multi-byte character?
    Greatly appreciate any help everyone !
    Thanks.

    My challenge is to spot the multi-byte character hidden in this big dat file.
    This is because the data-entry team asked me to spot the record and column that have the issue, out of
    lakhs of records they sent inside this file.
    Let's have the validation code like this:
        if ((fileContent[i] < 0 && fileContent[i] != 10 && fileContent[i] != 13) || fileContent[i] > 300) // if invalid
        { System.out.println("failure at position: " + i); break; }
    Less than 0: I saw some negative values when I was testing with other files (Java bytes are signed, so any non-ASCII byte shows up as negative).
    Greater than 300: an attempt to find out if any character exceeds the normal character range.
    10 and 13 are for line feed / carriage return.
    With this, I randomly placed Chinese and Korean characters in the file and the program found them.
    Any alternative (better code of course) way to catch this black sheep?
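    One alternative, sketched in C++ (assuming the file is expected to be 7-bit ASCII, so any byte with the high bit set is the black sheep; it reports the line and column of the first suspect byte):

    #include <cstdio>

    int main(int argc, char* argv[])
    {
        const char* name = (argc > 1) ? argv[1] : "inputfile.dat";
        std::FILE* f = std::fopen(name, "rb");
        if (!f) { std::perror(name); return 1; }

        long line = 1, column = 1;
        int c;
        while ((c = std::fgetc(f)) != EOF) {
            if (c >= 0x80) {  // high bit set: not 7-bit ASCII
                std::printf("suspect byte 0x%02X at line %ld, column %ld\n", c, line, column);
                break;
            }
            if (c == '\n') { line++; column = 1; } else { column++; }
        }
        std::fclose(f);
        return 0;
    }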
    Edited by: Sanath_K on Oct 23, 2009 8:06 PM

  • Sqlldr v11 Unable to locate character set handle for character set ID

    Good day,
    Having recently migrated to an 11g Oracle database (11.2.0.1.0) from 9i on a test server, running SQL*Loader throws the above error.
    I've seen a few posts about this, but with pretty vague explanations. Below is what my bash profile looks like; I also compared the NLS parameters to another server that already runs version 11.
    # .bash_profile
    # Get the aliases and functions
    if [ -f ~/.bashrc ]; then
            . ~/.bashrc
    fi
    # User specific environment and startup programs
    export ORACLE_SID=mtctst
    # NOTE: ORACLE_HOME is only exported two lines below, so at this point
    # $ORACLE_HOME is still empty and ORA_NLS10 points at /nls/data.
    export ORA_NLS10=$ORACLE_HOME/nls/data
    export ORACLE_BASE=/oracle/app/product
    export ORACLE_HOME=/oracle/client/11.2.0/dbhome_1
    #export ORACLE_HOME=/oracle/client/9.2.0
    LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib/X11; export LD_LIBRARY_PATH
    PATH=$ORACLE_HOME/bin:$HOME/bin:/bin:/usr/bin:/usr/local/bin:/bin/X11;
    export PATH
    #export LD_LIBRARY_PATH=$ORACLE_HOME/lib
    #LD_LIBRARY_PATH=$ORACLE_HOME/lib32:$LD_LIBRARY_PATH
    #LD_LIBRARY_PATH_64=$ORACLE_HOME/lib
    #export LD_LIBRARY_PATH
    #export LD_LIBRARY_PATH_64
    #PATH=$ORACLE_HOME/bin:$HOME/bin:/bin:/usr/bin:/usr/local/bin:/usr/bin/X11; export PATH
    #PATH=$PATH:$HOME/bin
    #export PATH
    umask 002
    unset USERNAME
    Your assistance will be appreciated.
    Regards,

    The complete command is as follows:
    sqlldr userid=user/password@alias control=myctlfile.ctl log=$logfile discard=$CUR_DIR$oldfilename.discard bad=$CUR_DIR$oldfilename.bad silent=feedback errors=10000000
    I'm running the command on the database server, which gives the following error with these log entries:
    File found: gsmoly2012080612005072206
    SQL*Loader: Release 11.2.0.1.0 - Production on Mon Aug 6 16:36:09 2012
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    SQL*Loader-282: Unable to locate character set handle for character set ID (178).
    after main file sqlldr before test if sqlldr was successfull: gsmoly2012080612005072206
    in mystring_risk before assign can_continue = blank: PBXOG,,26492092,,,120806,115829,120806 115829,22,,811274127,,,,,,0,,,WORK/gsm/data/gsmoly2012080612005072206,9833407,OLY.GSM  ,
    SQL*Loader: Release 11.2.0.1.0 - Production on Mon Aug 6 16:36:18 2012
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    SQL*Loader-282: Unable to locate character set handle for character set ID (178).
    value of can_continue after sqlldr of risk file: F

  • [urgent] oracle character set and national character set !!(dictionary)

    Hi, everyone.
    Which Oracle dictionary view contains the information about the
    database character set and the national character set?
    I checked v$database, but the information was not there.
    It seems that there are some differences between the "nls_*" init parameters
    and the database character set.
    "Alter database backup controlfile to trace" gave me the character set of the DB,
    but I would like to know whether there is an Oracle dictionary view for this.
    Thanks in advance. Have a nice day.
    Best Regards.

    I found the dictionary view which contains the information about the character set and
    national character set of the database:
    select * from nls_database_parameters
    where parameter like '%CHARACTERSET';
    Thanks for reading.
    Have a good day.
    Best Regards.
