Problem changing NLS_LENGTH_SEMANTICS from BYTE to CHAR on Oracle 9i Release 2!

Hi,
I have created a new database on Oracle 9i Release 2, but I did not find the correct pfile parameter for the NLS_LENGTH_SEMANTICS setting.
So I created a standard UTF8 database and am now trying to change the default NLS_LENGTH_SEMANTICS=BYTE to CHAR. When I execute the following command in SQL*Plus: "ALTER SYSTEM SET NLS_LENGTH_SEMANTICS=CHAR SCOPE=BOTH",
the system tells me that the command was executed successfully.
But when I look at the NLS_DATABASE_PARAMETERS view, I do not see any change to the NLS_LENGTH_SEMANTICS parameter.
I have also restarted the instance, but everything is still the same as before.
Do you know what I am doing wrong?
Regards
RobH

Hi,
Yeah, you are right: in NLS_SESSION_PARAMETERS, NLS_LENGTH_SEMANTICS for the app user is set to CHAR.
Does this mean that NLS_DATABASE_PARAMETERS shows the view of the SYS or SYSTEM user?
Thanks a lot
Regards
RobH
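For reference, a minimal sketch (standard data dictionary views, nothing specific to this thread) showing where the changed setting becomes visible. NLS_DATABASE_PARAMETERS records the values fixed at database creation time, so it is expected to keep showing BYTE; the ALTER SYSTEM shows up at the instance and session level instead:

ALTER SYSTEM SET NLS_LENGTH_SEMANTICS = CHAR SCOPE = BOTH;

-- The instance-level value changes here:
SELECT value FROM nls_instance_parameters WHERE parameter = 'NLS_LENGTH_SEMANTICS';

-- And new sessions inherit it here:
SELECT value FROM nls_session_parameters WHERE parameter = 'NLS_LENGTH_SEMANTICS';

-- This view reflects database creation time and keeps showing BYTE:
SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_LENGTH_SEMANTICS';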

Similar Messages

  • NLS_LENGTH_SEMANTICS from BYTE TO CHAR

    Hi,
    For supporting multibyte characters, we need to change the NLS_LENGTH_SEMANTICS parameter from BYTE to CHAR. But this parameter setting only takes effect for database tables created thereafter. To change the storage characteristics of existing tables, we explicitly executed ALTER statements against the columns having datatype VARCHAR2 and CHAR.
    Problem:
    ======
    Since the number of database tables in PRODUCTION is very high - approximately 600 million records spread over 400 tables - we cannot afford the time that would be spent altering these tables in PRODUCTION.
    We ran the alter script in the System Test environment, and altering 150 tables holding 200 million records took almost 16-20 hours.
    APPROACHES WE HAVE IN MIND
    ==========================
    1. Alter all the table columns (we tried this and it is taking too much time).
    2. Export/import with NLS_LENGTH_SEMANTICS set to CHAR (we discussed this approach with our DBA and found that it will also take too much time, and there is a RISK of data inconsistency).
    3. Drop the indexes of the tables, run the alter script to change the storage type from BYTE to CHAR, and rebuild the indexes (this is also taking too much time).
    All the above approaches are too costly in terms of time for us to afford.
    If anyone has a better solution, please suggest it.
    thanks in advance
    Syed

    Hi
    We are also facing a similar problem.
    We ran the alter table scripts and are now compiling the objects,
    and that is taking a lot of time.
    We have around 4000 invalid objects; parallel recompilation brought that down to 2000, but these remaining 2000 - which are mostly packages - are giving us a hard time.
    If anyone has faced this issue or found a solution, please post, or mail directly to me.
    Sunil Choudhary
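    For the invalid-object backlog described above, a minimal sketch of a bulk recompile; UTL_RECOMP assumes Oracle 10g or later, while on 9i the equivalent is running $ORACLE_HOME/rdbms/admin/utlrp.sql as SYSDBA:

    -- Recompile all invalid objects, here with 8 parallel threads:
    EXEC UTL_RECOMP.RECOMP_PARALLEL(8);

    -- See what is still invalid afterwards:
    SELECT owner, object_type, COUNT(*)
    FROM   dba_objects
    WHERE  status = 'INVALID'
    GROUP  BY owner, object_type;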

  • Exp/imp, conversion from byte to char.

    Hi,
    I have a dump file exported from a database with nls_length_semantics=BYTE. When I import it into a database with nls_length_semantics=CHAR, it retains the BYTE characteristics. Is there any way to change it to CHAR while importing? This is for globalization support.
    Thanks
    Muneer

    Hi Muneer,
    No, import always preserves the length semantics of the original columns.
    The workaround is to create your schema objects in the target database first, with the desired semantics, prior to importing the data.
    Nat
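    A minimal sketch of Nat's workaround, using a hypothetical table T1: pre-create it in the target database with the desired CHAR semantics, then import the data only.

    -- In the target database, create the object with the desired semantics first:
    CREATE TABLE t1 (name VARCHAR2(50 CHAR));

    -- Then import with IGNORE=Y so imp keeps the existing definition and only loads rows:
    --   imp user/password@target FILE=export.dmp TABLES=T1 IGNORE=Y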

  • Changing home directories from AFP to NFS cures all Office problems?

    Hi all,
    well, a lot of us have a lot of problems using Microsoft Office. New files can't be copied to the server (with unknown error -50), saving files fails because of weird network errors or full disks, duplicating files causes complete loss of the file (both the copy and the original), Word files seem to be locked ("should I make a copy?") when trying to open them, over and over there are Word Work Files and countless zero-byte AutoRecovery files, and Word complains that too many files are open (with not a single file opened)...
    Well, this all seems to be a problem of Office combined with home directories shared through AFP. A friend of mine told me that he has no problems with Office. The only difference between his infrastructure and mine is that he uses NFS for sharing home directories.
    So I did the following:
    I created a new sharepoint for home directories and shared it through NFS, then created a new user using this sharepoint for his home directory. Then I worked with the NFS user on an AFP sharepoint (creating a Word file, making changes and saving it). Then I worked on the same document as a user whose home directory is shared through AFP. Every time I changed the document and saved it, a new Word Work File was created next to the original file. After several iterations with the AFP user, the error that the document is already opened came up...
    So my first finding from comparing NFS users with AFP users is that the Office problems are completely gone when using NFS for home directories.
    Now the questions for me are:
    1.) Is there anybody who can confirm my findings?
    2.) Are there any reasons not to use NFS for home directories?
    3.) Which protocol do you use for sharing home directories?
    4.) What would be the smartest way to change/move AFP-shared home directories to NFS?
    I already did the following to change an AFP-shared home directory to NFS:
    - Set the user's home dir from /Users/ to none (in Workgroup Manager)
    - As root via terminal: mv /Users/myuser /NFSUsers/
    - In Workgroup Manager, set the user's home dir from none to /NFSUsers/
    This worked well (at first glance). The only problem was that the subfolders in the sidebar of the Finder window did not work and had to be reset.
    Any other suggestions?
    Greetings
    MacSEK

    Hello,
    Today a friend reported this problem to me. The problem affected only Word, not Excel.
    In my case the culprit was the PDFCOMPLETE add-in in Word 2013.
    I saw it in the programs list in Control Panel. It belongs to PDF Complete, Inc.
    They have an HP computer. When I uninstalled this add-in from the programs list, the problem was solved!
    I did that today. Now they are able to change the language and also to put accents on Greek vowels.
    When I uninstalled PDFCOMPLETE, I saw the HP icon in the uninstallation window!
    It seems a file related to HP programs created the problem.

  • Change NLS_LENGTH_SEMANTICS to CHAR

    I need to change the length semantics of all the tables in an existing application schema from BYTE to CHAR.
    I have explored two methods
    1- Datapump export/import
    Due to large tables with numerous CLOB columns, the performance of the export/import is hardly acceptable for our production downtime window.
    2- ALTER TABLE OWNER.TABLE modify (C160 VARCHAR2(255 CHAR))
    This solution is to alter all table for all varchar2 or char columns.
    Questions :
    a) Does the ALTER TABLE solution modify only the data dictionary, or does it also modify/rearrange the blocks of existing rows?
    a.1) What happens to a row with a varchar(3) string previously stored in three bytes when you change the semantics to CHAR and then update that string with two-byte characters? Where does the "enlarged" field go in the row piece/block? Does it go at the end of the row piece?
    b) Are there performance or management benefits to using one method over the other?
    Thanks for any information you can provide.
    Serge Vedrine
    Edited by: [email protected] on 13-Mar-2009 11:12 AM
    Edited by: [email protected] on 13-Mar-2009 11:24 AM

    ## I have explored two methods
    ## 1- Datapump export/import
    This will not work. Export/import preserves the length semantics
    ## 2- ALTER TABLE OWNER.TABLE modify (C160 VARCHAR2(255 CHAR))
    This is the simplest approach. You can write a simple select on the view ALL_TAB_COLUMNS to generate the necessary ALTER TABLE statements.
    ## Questions :
    ## a) Does the alter table solution modify only the data dictionary or does it also modify/rearrange the blocks for existing rows.
    Only data dictionary.
    ## a.1) What happens to a row with a varchar(3) string previously stored in three bytes when you change the semantics to CHAR
    ## and then update that string with two-byte characters? Where does the "enlarged" field go in the row piece/block? Does it go at the end of the row piece?
    This is the standard behavior of UPDATE when the new value is longer than the old one. If there is room in the block, the new longer value will push itself into the old place, moving the rest of the block data to higher offsets. If there is no room, an extra block will be grabbed and inserted into the chain of blocks holding the row. This is called "row chaining".
    ## b) Are there performance or management benefits to use one methods over the other one ?
    As I said, export/import will not work anyway.
    -- Sergiusz
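    A minimal sketch of the generator query Sergiusz describes, assuming a hypothetical application schema named APP; it emits one ALTER TABLE statement per BYTE-semantics VARCHAR2/CHAR column:

    SELECT 'ALTER TABLE ' || owner || '.' || table_name
           || ' MODIFY (' || column_name || ' ' || data_type
           || '(' || char_length || ' CHAR));'
    FROM   all_tab_columns
    WHERE  owner = 'APP'                       -- hypothetical schema name
    AND    data_type IN ('VARCHAR2', 'CHAR')
    AND    char_used = 'B';                    -- only columns still using BYTE semantics

    Spool the output and run it as a script; since only the data dictionary is modified, this is fast even on large tables.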

  • Oracle Best practices for changing  Byte to Char on Varchar2 columns

    Dear Team,
    The application team wants to change BYTE to CHAR on VARCHAR2 columns to accommodate multibyte characters in a couple of production tables.
    I want to know whether it is safe to have a mixture of BYTE and CHAR semantics in the same table; I have read in a couple of documents that it is good practice to avoid using a mixture of BYTE and CHAR semantics columns in the same table.
    What happens if we have a mixture of BYTE and CHAR semantics columns in the same table?
    Do we need to gather stats and rebuild indexes on the table after these column changes?
    Thanks in Advance !!!
    SK

    ## The application team wants to change BYTE to CHAR on VARCHAR2 columns to accommodate multibyte characters in a couple of production tables.
    ## I want to know whether it is safe to have a mixture of BYTE and CHAR semantics in the same table; I have read in a couple of documents that it is good practice to avoid using a mixture of BYTE and CHAR semantics columns in the same table.
    No change is needed to 'accommodate multibyte characters'. That support has NOTHING to do with whether a column is specified using BYTE or CHAR.
    In 11g the limit for a VARCHAR2 column is 4000 bytes, period. If you specify CHAR and try to insert 1001 characters that each take 4 bytes, you will get an exception, since that would require 4004 bytes and the limit is 4000 bytes.
    In practice the use of CHAR is mostly a convenience to the developer when defining columns for multibyte characters. For example, for a NAME column you might want to make sure Oracle will allocate room for 50 characters REGARDLESS of the actual length in bytes.
    If you provide a name of 50 one-byte characters, then only 50 bytes will be used. Provide a name of 50 four-byte characters and 200 bytes will be used.
    So if that NAME column was defined using BYTE, how would you know what length to use for the column? Fifty BYTES will seldom be long enough, and 200 bytes SEEMS large since the business user wants a limit of FIFTY characters.
    That is why such columns would typically use CHAR; so that the length (fifty) defined for the column matches the logical length in characters.
    ## What happens if we have a mixture of BYTE and CHAR semantics columns in the same table?
    Nothing happens - Oracle couldn't care less.
    ## Do we need to gather stats & rebuild indexes on the table after these column changes?
    No - not if by 'need' you mean simply because you made ONLY that change.
    But that begs the question: if the table already exists, has data, and has been in use without there being any problems, then why bother changing things now?
    In other words: if it ain't broke, why try to fix it?
    So, back to your question of 'best practices':
    Best practice is to set the length semantics at the database level when the database is first created, and to then use that same setting (BYTE or CHAR) when you create new objects or make DDL changes.
    Best practice is also to not fix things that aren't broken.
    See the 'Length Semantics' section of the globalization support guide for more best practices
    http://docs.oracle.com/cd/E11882_01/server.112/e10729/ch2charset.htm#i1006683
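    A quick way to see the difference described above, as a sketch assuming an AL32UTF8 database and a hypothetical DEMO table:

    CREATE TABLE demo (name_char VARCHAR2(5 CHAR), name_byte VARCHAR2(5 BYTE));

    -- Five two-byte characters fit in the CHAR column (up to 10 bytes are reserved)...
    INSERT INTO demo (name_char) VALUES ('äääää');
    -- ...but the same value overflows the 5-BYTE column with ORA-12899:
    INSERT INTO demo (name_byte) VALUES ('äääää');

    -- Character length vs. byte length of what was stored:
    SELECT LENGTH(name_char) AS chars, LENGTHB(name_char) AS bytes FROM demo;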

  • Large_pool value changed from 0 to 8388608 after Oracle Patch 10g Release 2

    10g Release 2 (10.2.0.4) Patch Set 3 for HP-UX Itanium
    Recently we upgraded the Oracle software with patch set 6810189
    and upgraded the database using DBUA.
    Earlier we had set large_pool to 0, and after the database upgrade this large_pool changed to 8388608 bytes. Is this normal with DBUA?
    Also, DBUA keeps a backup of the spfile in the same location.

    This is the way I used to do it too. I am not sure whether it is the official way, but it works nevertheless.
    Do make sure you run AutoConfig afterwards, which will create a new environment file for you. Then stop the listener and database, source the new environment file, and restart the listener and database.
    I found another procedure for recreating the environment file:
    1. On the application tier, execute $AD_TOP/bin/admkappsutil.pl to generate appsutil.zip for the database tier.
    2. Copy this appsutil.zip to the database tier and unzip it into the ORACLE_HOME.
    3. Set the following environment variables:
       ORACLE_HOME=<10g ORACLE_HOME>
       LD_LIBRARY_PATH=<10g ORACLE_HOME/lib, 10g ORACLE_HOME/ctx/lib>
       ORACLE_SID=<instance name running on this database node>
       PATH=$PATH:$ORACLE_HOME/bin
       TNS_ADMIN=$ORACLE_HOME/network/admin/<context_name>
    4. Edit the $ORACLE_HOME/network/admin/tnsnames.ora file. Change the aliases for SID=<new RAC instance name>.
    5. Modify the listener.ora. Change the instance name and Oracle Home to match the environment.
    6. Start the listener.
    7. From the 10g ORACLE_HOME/appsutil/bin directory, create an instance-specific XML context file by executing the command:
       adbldxml.pl tier=db appsuser=<APPSuser> appspasswd=<APPSpwd>
    8. De-register the current configuration using the command:
       perl $ORACLE_HOME/appsutil/bin/adgentns.pl appspass=apps contextfile=$CONTEXT_FILE -removeserver
    9. Rename $ORACLE_HOME/dbs/init<rac instance>.ora to a new name (e.g. init<rac instance>.ora.old) in order to allow AutoConfig to regenerate the file using the RAC-specific parameters.
    10. From the 10g ORACLE_HOME/appsutil/bin directory, execute AutoConfig on the database tier by running the adconfig.pl script.
    Now you should have the officially created environment file.
    HTH.
    Arnoud Roth

  • V3.0, NLS_LENGTH_SEMANTICS byte vs. char for CREATE TABLE

    Hi,
    When creating a new table (via "New Table" or "Data Load ..."),
    the NLS_LENGTH_SEMANTICS of table columns is always BYTE, even when NLS_LENGTH_SEMANTICS
    is set to CHAR for the instance (and so as the default for all sessions) and has not been explicitly changed for the current session.
    Is there a way to change this behavior, or could this be a general problem in 3.0?
    I would like SQL Developer to use the standard settings of the instance as long as I don't set them differently for a session.
    Please give me a little hint how to solve this issue.
    Thanks in advance.
    Andre
    --2011/07/15-------------------------------------------------------------------------------------------
    Hello,
    I'm not sure whether this is too complicated or just a silly question.
    However, it's still an issue for me.
    It would be great if someone could at least confirm or refute it.
    Hoping for an answer - thank you again!
    Andre
    Edited by: andreml on Jul 15, 2011 2:10 AM
    Edited by: andreml on Jul 15, 2011 2:11 AM

    Hi,
    If you click on any given table in the Connections view, an object viewer opens for that table. Its Columns tab lists all columns with attributes for column_name, data_type, and so on. The data_type info includes the length semantics, e.g., VARCHAR2(30 BYTE).
    Regards,
    Gary Graham
    SQL Developer Team
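    One possible workaround (an assumption on my part, not confirmed by the SQL Developer team): set the semantics explicitly for the SQL Developer session before creating tables, and verify what the session actually uses:

    ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR;

    SELECT value
    FROM   nls_session_parameters
    WHERE  parameter = 'NLS_LENGTH_SEMANTICS';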

  • How to Change the data type of infoobject from NUMC to CHAR

    Hi Experts,
    I have created an InfoObject with data type NUMC, I used it in a DSO as a key field, and I have loaded data up to the PSA.
    Now I want to change the data type of this InfoObject from NUMC to CHAR.
    Can anyone suggest a solution for this issue?
    Regards,
    Venu Gopal.K

    Hi pavan/binu,
    I have not loaded data into the DSO. I just added the InfoObject in the key fields, and when I started editing this InfoObject I got errors like:
    1. Master Data Table /BIC/PZCUSTOMER contains data: Characteristic ZCUSTOMER cannot be activated
    2. SID Table /BIC/SZCUSTOMER contains data: Characteristic ZCUSTOMER cannot be activated
    I am also unable to edit the P & S tables in SE14.
    NOTE: I have data only in the PSA, and I am facing these issues when trying to load data into the DSO.

  • How to find out if a column is defined as VARCHAR2 in bytes or chars?

    Hello,
    I'd like to know if it is possible to find out whether a table (or view) column is defined as VARCHAR2 in bytes or in chars on Oracle 10g.
    When I do a DESC, it shows only VARCHAR2 with its length, but not whether it is bytes or chars. How can I know for sure?
    Thanks,

    SQL> create table t (
           id    varchar2 (10 char),
           id2   varchar2 (10 byte)
         );
    Table created.
    SQL> select column_name, data_type, char_used
           from cols
          where table_name = 'T';
    COLUMN_NAME                                   DATA_TYPE       CHAR_USED
    ID                                            VARCHAR2        C
    ID2                                           VARCHAR2        B
    2 rows selected.

  • Help needed in understanding the conversion algorithm from byte to hex

    Hi, I'm studying the following code:
    private static final char[] hexTable = "0123456789abcdef".toCharArray();  // hex digit lookup table

    public static char[] byteToHex(byte[] data) {
      char[] retValue = new char[data.length * 2];
      int value = 0;
      int highIndex = 0;
      int lowIndex = 0;
      for (int i = 0; i < data.length; i++) {
        value = (data[i] + 256) % 256;   // map the signed byte (-128..127) to 0..255
        highIndex = value >> 4;          // upper 4 bits of the byte
        lowIndex = value & 0x0f;         // lower 4 bits of the byte
        retValue[i * 2 + 0] = hexTable[highIndex];
        retValue[i * 2 + 1] = hexTable[lowIndex];
      }
      return retValue;
    }
    There are a few things (the most important ones) which I don't understand about the above code.
    I understand that what's returned is double the size of what's passed in, because a char takes 16 bits while a byte takes 8.
    1) I don't understand why 256 must first be added to each byte and then % 256 applied (returning the same value - is this to eliminate negative values?).
    2) I do understand that each byte is transformed into two hexadecimal values: one is the highIndex (the first 4 bits) and the second is the lowIndex (the last 4 bits), and that each value is transformed into its hexadecimal digit from the array of hex values.
    3) What I don't really understand is why the highIndex is calculated as value >> 4
    and the lowIndex is calculated as value & 0x0f (is this last one also to eliminate negative values?).
    If someone could clarify this for me, I'd be very grateful.
    Thanks.
    Marco

    > So, does this mean that we add 256 to eliminate the sign?
    No. You need the whole line to convert a signed byte into an int between 0 and 255. A simpler way to do this would be:
    value = data[i] & 0xFF;
    > This moves down the higher bits so that they turn into lower bits, i.e. we need it to be between 0 and 15. Is it shifted by 4 because Math.pow(2, 4) = 16.0?
    Doing it this way, x >> 4 is the same as x / 16. (value & 0x0f, by contrast, leaves only the lowest 4 bits.)
    > Is the following what happens?
    > Received as initial value:
    > byte: 0000 0000
    > What we need to obtain:
    > char: 0000 0000 0000 0000
    > The first 4 bits of the above byte are shifted by 4 positions to find the hexadecimal equivalent (if from 2 I want to get to 16, I have to do the opposite of raising to the power of 4); the last four bits of the byte are extracted because of the '&' operator with 0x0f (which in binary is 1111, therefore all the '1' bits are kept)?
    Yes - though note that no char values are produced at that point (the chars come from the hexTable lookup), and using 0000 as an example is not a good idea, as you can change it in many ways and it is still 0000.

  • How do I migrate views from MS SQL 2008 to Oracle 11g through SQL Developer

    Is there any way to migrate views from MS SQL 2008 to Oracle 11g through SQL Developer? Please give me some detailed steps. Thanks for your help.
    Kevin

    Hi Kevin,
    user13531850 wrote:
    > Hi Turloch,
    > When I use Migrate to Oracle, I have a problem: the migration tool creates a new schema for me, in my case AZTECA_KSMMS, and migrates all the objects under that schema (AZTECA_KSMMS). However, my application needs all the Oracle data under schema AZTECA instead of AZTECA_KSMMS. Is there any way to specify a particular schema (AZTECA) for the target Oracle database?
    Schema remapping is available: first Capture (separately), then during right-click Convert on the captured model there is a "Specify the conversion options" step with an Object Naming tab where the schema (and other) name changes are editable.
    I have not used this recently.
    > Also, during the migration process, when I choose the repository, there is a check box "truncate to reset repository to empty state". Do I need to check that truncate check box so the repository will be cleared from the last migration?
    The repository can hold multiple migration attempts. Check truncate to get rid of the information from previous attempts. This cleans up the repository - not the destination database.
    > There are also online database and offline database options during the migration process. What is the difference between these two choices? After I migrated to Oracle, all my views have a red cross icon next to them. Does that mean the view migration failed? Please give me your comments. Thanks for your help.
    > Kevin
    Offline: for big (amount of data) databases with simple data types; uses bcp + files + scripts + sqlldr.
    Online: for small (amount of data) databases (easier); uses (Java) JDBC.
    The view is likely to be broken - recompiling it may help.
    The Oracle schema is created using a .sql file - see under "generated" in the directory you gave originally in the wizard. There is a .out file that contains the result of running this script, including any errors. During conversion there are also likely to be warnings displayed in the UI.
    There may be a single issue causing multiple failures - if viewa depends on functionb, and functionb is broken, viewa will also fail.
    Turloch
    SQLDeveloper Team
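    For the red-cross views, a minimal sketch of the recompile-and-inspect step suggested above, using a hypothetical view name:

    -- Try to recompile the migrated view:
    ALTER VIEW azteca.parcel_view COMPILE;

    -- If it stays invalid, look at the underlying compilation errors:
    SELECT line, position, text
    FROM   all_errors
    WHERE  owner = 'AZTECA' AND name = 'PARCEL_VIEW';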

  • How Offline Data Loading is possible from MS Access DB to Oracle DB

    Hi All,
    I am new to Oracle.
    I am trying to migrate a few tables from an MS Access DB to an Oracle DB (10g).
    I have used the migration utility from the SQL Developer tool.
    I was successful in creating the schema and also in transferring the data for the tables.
    But if my data in the Access DB is updated every day, how am I supposed to link this to the newly created Oracle tables?
    Is there any way to do this?
    And if I want to load the Oracle tables offline using the "Generate Offline Data Move Scripts" option in SQL Developer, how do I do it?
    I tried using the "Generate Data Move Scripts" right-click option on the converted object, and the utility created a few files on my local machine.
    I am not able to find my way forward.
    Please help me.
    regards,
    Sushil

    Hi Sushil,
    I will try to address each of your questions:
    Q: "if my data in Access DB is updated everyday then how am i suppose to link this to newly created oracle tables? Is there any way to do this?"
    A: Depending on your reasons for maintaining the use of the MS Access database, you have two options.
    1. If you are continuing to use the MS Access database because the application front end consists of MS Access forms & reports, then you could just create linked tables pointing to the newly migrated Oracle tables. Doing so will mean that any data modified via the front-end forms & reports is saved directly in the Oracle database.
    2. If you wish to continue saving data to the MS Access database and then transfer it to the Oracle database, then you can just use the "Migrate Data" option in Oracle SQL Developer Migration Workbench to carry out this task. You just need to ensure that you have an open connection to your migration repository and have the converted model for the migrated database. You will then just transfer the data online, as you have already done.
    Q: "And if i want to load the Oracle tables offline using "Generate Offline Data Move Scripts" option in sql developer then how do i do it?"
    A: To migrate your MS Access data offline, you need to do the following:
    1. Run the Exporter tool for MS Access, and select the "Export for Oracle SQL Developer" option. On the second screen of the tool, browse to the location of your MDB file, provide a location for the output directory and ensure you select the "Export Table Data" option before you click the "Export" button. This option generates a .DAT file for each table containing data in your MDB file.
    2. In Oracle SQL Developer, once you have carried out your migration, select the Migration > Script Generation > Generate Data Move Scripts menu item and select the Converted Model to generate the scripts for. An "MSAccess" folder will be generated in the path specified in the "Generate Offline Data Move Files" dialog.
    3. Navigate to the "MSAccess" folder generated in step 2; this folder should contain an "oracle_ctl.bat" file generated for your converted model.
    4. Edit the "oracle_ctl.bat" file and update the script to replace <Username>/<Password> with the actual username/password combination required to connect to the migrated schema e.g. for a migrated Northwind database, this combination would be northwind/northwind. Save the changes to the file.
    5. Copy the .DAT files generated in step 1 to the /MSAccess/Oracle folder. The .ctl files refer to the .DAT file in order to load the data into Oracle using SQL*Loader.
    6. Open each of the .ctl files and check the file name referenced on the 2nd line of the file, e.g. infile '[Categories].null', where the file name is "Categories.null". The ".null" part must be updated to ".dat" (see the sketch after these steps). This is a known issue, and a fix for it will be available in a future release of the Oracle SQL Developer Migration Workbench. Save the changes to the .ctl files.
    7. Open up a Command Prompt, navigate to the /MSAccess folder, and run the "oracle_ctl.bat" file to load the data into the Oracle database tables. If you experience any issues during this process, check the output in the command prompt and try to resolve any reported issues. If you are unable to do so, please refer to the Migration Workbench forum - Database and Application Migrations. If you cannot find a solution in the existing threads, please post a new thread, giving the full syntax of any reported error messages.
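    For orientation, a minimal sketch of what one of the generated SQL*Loader control files might look like after the step-6 fix; the table and column names here are hypothetical:

    -- Categories.ctl (hypothetical example)
    -- (the INFILE line is the one fixed in step 6: '.null' changed to '.dat')
    LOAD DATA
    INFILE 'Categories.dat'
    INTO TABLE Categories
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    (CategoryID, CategoryName, Description)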
    I hope this helps.
    Regards,
    Hilary

  • Migration of database and app from SQL Server 2005 to Oracle 10g

    Hello everybody,
    Lately, I have been asked to migrate an in-house developed hub application from SQL Server 2005 to Oracle 10g. The objective is to move the application and also the database to Oracle 10g.
    Here is the current platform:
    OS Win 2008 64 bit
    JDK 1.4.2
    JBoss 3.2.5
    EJB 2.1
    My question is: what kind of issues do you see in the above upgrade? As far as I know, schema and database migration can be done with Oracle's migration workbench. Oracle 10g supports JDK 1.4. Oracle 10g also supports EJB 2.1.
    Is there anything I need to take into consideration, or any risks or problems? I would like to list all the risks, and accordingly I am thinking of starting the upgrade.
    Thanks in advance.
    Regards,
    Zeeshan Qureshi

    In general the Java/J2EE application needs work in the following categories:
    1. Connection settings: Use Oracle JDBC drivers, create new data sources, connection pools and so on. Disable AUTOCOMMIT for Oracle JDBC connections.
    2. For EJBs, you should regenerate the entity beans, because some object names/column names might have changed in Oracle.
    3. Any custom SQL that is in use in the EJBs will have to be ported to Oracle, just as you would do for stored procedures and other applications.
    4. For Java front ends, if you are using callouts to stored procedures and expecting result sets, then they will require some changes. You need to modify the stored procedure call signature to include the REF CURSOR variables and process them (see the sketch after this list).
    5. Changes to any SQL Server-specific database functions manipulating character/date data will be required.
    6. CLOB/BLOB/XML APIs are different across databases, so if you are using those then focus on them as well.
    7. Retrieving auto-generated keys from the database may also need changes, from what I have seen.
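    For item 4, a minimal sketch of the Oracle-side pattern, using a hypothetical GET_ORDERS procedure: the result set comes back through an OUT SYS_REFCURSOR parameter instead of an implicit result set.

    -- Hypothetical procedure returning a result set via REF CURSOR:
    CREATE OR REPLACE PROCEDURE get_orders (p_result OUT SYS_REFCURSOR) AS
    BEGIN
      OPEN p_result FOR
        SELECT order_id, order_date FROM orders;  -- hypothetical table
    END;
    /

    On the Java side, the corresponding call registers the OUT parameter as OracleTypes.CURSOR and reads it as a ResultSet.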
    Hope this helps..
    Regards
    Prakash
    NOTE: Not sure why, but my posts from yesterday are not visible today. Even worse, I can see them in one browser (Firefox) but not in Internet Explorer. What a crazy browser day I am having.

  • Can I change character set WE8ISO8859P1 to AR8MSWIN1256 in Oracle 8i?

    I tried to change the character set from WE8ISO8859P1 to AR8MSWIN1256 on an Oracle 8i database.
    I am getting the following error for both the character set and the national character set:
    ORA-12712: New character set must be a superset of old character set.
    My question: can I change it in place, or do I have to do an export and import into an Arabic character set DB?
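    As background (beyond what the thread states): ORA-12712 is expected here, because AR8MSWIN1256 is not a superset of WE8ISO8859P1, so on 8i the usual path is a full export, re-creation of the database with the new character set, and an import. A minimal sketch of checking the current settings first:

    SELECT parameter, value
    FROM   nls_database_parameters
    WHERE  parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');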

    Hello Sarath,
    There is a CODE PAGE extension to the OPEN DATASET statement.
    Can you please elaborate on which character set you want to write to the application server?
    BR,
    Suhas
