umem_cache semantics
[Not sure which forum this question really belongs in...]
I'm trying to find out the semantics of umem_cache_alloc/umem_cache_free. In general I expect that buffers allocated from a umem_cache and then returned to the cache with umem_cache_free remain in the cache and will possibly be returned from subsequent calls to umem_cache_alloc; but under what (if any) circumstances will a buffer which is 'in the cache' as a result of an alloc/free pair of calls be released from my cache back to the system?
Some explanation: for use in a lock-free queue implementation I need an object 'pool' where freed objects remain available in the pool (as they may still be accessed by other threads after one thread has freed them), and I was hoping to build this facility using a umem_cache...
This question is not related to Sun C++ or Sun Studio. I suggest one of the Solaris forums at
http://forum.sun.com/jive/index.jspa?tab=solaris
or the Open Solaris community forum at
http://www.opensolaris.org
Similar Messages
-
Semantics and its' role in Business Services
Role and importance of semantics in the context of services and SOA:
Semantics refers to the interpretation of information, not the literal definition of information/data. Applying semantics to information turns it into knowledge. Semantics is the act of applying references and drawing conclusions given a set of more scientific informational constructs. Typically, semantics are derived using the context in which information is presented. Transposition, on the other hand, applies the rule of inference whereby one can draw conclusions about the implication of truth based on some set of facts.
Read more about this at <a href="http://entarch.blogspot.com/2007/10/semantics-and-its-role-in-business.html">Surekha Durvasula's</a> blog.
Surekha is an Enterprise Architect at a large retail company.
Hi Shalini,
Thanks for the reply. Can you please tell me the menu path for T.code BUSD?
And can you please explain the difference between the 4.0 and 5.0 versions?
Regards
Narayana
Message was edited by:
manam narayana
-
SQL Loader Multibyte character error, LENGTH SEMANTICS CHARACTER
Hi,
I started a thread about a SQL*Loader multibyte character error:
{thread:id=2340726}
some mod locked the thread, why?
the solution for others:
add LENGTH SEMANTICS CHARACTER to the controlfile
LOAD DATA characterset UTF8 LENGTH SEMANTICS CHARACTER
TRUNCATE
INTO TABLE utf8file_to_we8mswin1252
(
  ID CHAR(1)
, TEXT CHAR(40)
)
Regards
Michael
Hi Werner,
on my linux desktop:
$ file test.dat
test.dat: UTF-8 Unicode text, with very long lines
my colleague is working on a windows system.
On both systems, exactly the same error from SQL*Loader.
Btw, I tried with different numbers of special characters (German umlauts and the euro sign): there is no way to load without the error when there are too many (?) special characters, or when the data is as long as the column length and special characters are included.
Regards
Michael -
Semantics of the File Sender Adapter to determine which files to retrieve
Hello,
does anybody have experience with these semantics?
Thanks in advance,
Frank
From the File Adapter FAQ list (Note 821267, Question 27):
Q: Which semantics does the File Sender Adapter apply to determine which files to retrieve from an FTP server?
A: Up to and including SP13 the File Adapter will issue an NLST command (see RFC 959) with the configured source file name scheme as first parameter to the FTP server. It will try to retrieve any of the files returned by the FTP server in response to this command.
Starting with SP14, the File Adapter will issue a LIST command with the configured source file name scheme as first parameter to the FTP server. If it is able to parse the returned result (which is the case for UNIX- and Windows-style directory listings), it will retrieve any of the returned files. If it cannot parse the result, it will fall back to the pre-SP14 implementation and use the NLST command as described above.
Frank
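The distinction above hinges on whether the adapter can "parse the returned result". As a rough illustration (a hypothetical parser for Unix-style LIST lines, not SAP's actual implementation), extracting a file name might look like:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ListLineParser {
    // Matches a Unix-style LIST line, e.g.
    // "-rw-r--r--   1 ftp      ftp          1024 Jan 10 12:00 order_42.xml"
    // Group 1 captures the file name at the end of the line.
    private static final Pattern UNIX_LIST = Pattern.compile(
        "^[-dlbcps][rwxsStT-]{9}\\s+\\d+\\s+\\S+\\s+\\S+\\s+\\d+\\s+" +
        "\\S+\\s+\\d+\\s+[\\d:]+\\s+(.+)$");

    // Returns the file name, or null if the line cannot be parsed
    // (the cue to fall back to the plain NLST behaviour).
    public static String fileName(String listLine) {
        Matcher m = UNIX_LIST.matcher(listLine);
        return m.matches() ? m.group(1) : null;
    }
}
```

A real adapter would also need to handle Windows-style listings, symlinks, and year-instead-of-time date fields, which is why falling back to NLST on a parse failure is the safer default.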
We are on SP12 and have had problems with this and specifically, the FTP server on Solaris, not being fully RFC959 compliant.
We've had to get a different FTP server, as ours returned "*.xml no file or folder found" instead of "550 *.xml no file or folder found".
This increased the polling to the server and filled up the FTP logging file, which in turn filled up the partition and crashed the server.
We got around this by installing ProFTPD (http://www.proftpd.org/). -
How to define transaction semantics for JMS Queues
Hi,
My scenario in Oracle Fusion Middleware 11gR3, SOA Suite, IDE: JDeveloper 11g is as follows:
First, receive a message from QueueA. Second, using that message, call serviceABC and send the same message to QueueB.
I created a synchronous BPEL process, configured two JMS adapters (for accessing Queues A and B) and one web service binding for calling serviceABC (a proxy service). Note: I created the two queues and a connection factory under a JMS system module, as two different external systems have to access QueueA and QueueB.
Now, when I execute the BPEL process and serviceABC gets faulted, the message is consumed from QueueA and the same message is produced in QueueB.
My goal is to define transaction semantics for my composite (the synchronous BPEL process) such that, when serviceABC fails or producing the message in QueueB fails, I retain the message in QueueA (basically, rolling back the consumption).
Options tried so far: 1) created a scope for the three operations and added a compensation handler to it; 2) enabled an XA connection factory. Result: no luck :( The message from QueueA is still consumed, whereas per my goal it should be rolled back.
Please suggest your approach; I'll try it out and post the outcome back.
What's your architecture? Are you running your app on an application server? If so, are you in a servlet or EJB? You should be able to get the application server to manage your transactions for you in that case, e.g. wrap all the JMS calls in a stateless session bean and give it the Required transaction attribute in its deployment descriptor.
If you're not on an application server, you need to get a transaction manager, either by putting your code in an app server (with JBoss you can use its transaction manager without necessarily having to use EJBs or servlets, but that's not tested as thoroughly as container-managed transactions, and you will have to write some of the transaction code yourself) or by using a stand-alone one. -
Offline Data Transaction Semantics and Attachments
I'm researching migrating an existing web and mobile application to Azure. Currently, we're using a NoSQL solution that syncs to SQLite on the client and maps local documents to model subclasses, similar to Core Data in iOS. Offline support is a key differentiator in our market, so I'm focused on the detailed mechanics of sync.
One of our current pain points is that documents synced on the server and the client do not arrive in a transaction. To use an example: if the iOS client created 10 Projects, each with 5 related Tasks, the first Project might be synced to the server and trigger an event to process its related Tasks before the remaining Projects were inserted and the Tasks table even started syncing. This would cause processing to fail, as the related Tasks were not yet available in the database. Since it appears that syncing rows, some of which had conflicts, can result in only some of the rows being inserted, I'm under the impression that server-side inserts do not occur in a transaction, even at the scope of a table. Would this be an accurate conclusion? If not, what are the transaction semantics with offline sync?
Also, is there a strategy for syncing related binary content offline? For example, in our current system, each document can have n media attachments that are synced as part of a multi-part request. However, it appears we would either need to store related media as blobs in a table column or roll our own method of synchronizing related files as rows are pushed and pulled, if they were created on the device while offline.
Thanks,
- Scott
Hi,
Thanks for the update.
Good to know the different ways to do it.
Not sure if this fits in the workflow forum too, as the question is primarily about data handling and storage of attachments and documents.
Hence I am closing this thread in this forum and have created a thread in the Workflow forum to get responses.
New thread: Webforms - data handling and storage
Please post your thoughts in the new thread. I appreciate your time and effort.
Thanks a lot.
Best Regards
Saujanya
Edited by: Saujanya on Nov 17, 2011 12:20 PM -
Querying CHAR columns with character length semantics unreliable
Hi again,
It appears that there is a bug in the JDBC drivers whereby it is highly unlikely that the values of CHAR columns that use character length semantics can be accurately queried using ResultSet.getString(). Instead, the drivers return the value padded with space (0x20) characters out to a number of bytes equal to the number of characters multiplied by 4. The number of bytes varies depending on the number and size of any non-ASCII characters stored in the column.
For instance, if I have a CHAR(1) column, a value of 'a' will return 'a' followed by three spaces (4 characters/bytes are returned), a value of '\u00E0' will return '\u00E0' followed by two spaces (3 characters / 4 bytes), and a value of '\uE000' will return '\uE000' followed by one space (2 characters / 4 bytes).
I'm currently using version 9.2.0.3 of the standalone drivers (ojdbc.jar) with JDK 1.4.1_04 on Redhat Linux 9, connecting to Oracle 9.2.0.2.0 running on Solaris.
The following sample code can be used to demonstrate the problem (where the DDL at the top of the file must be executed first):
/*
 * This sample generates another bug in the Oracle JDBC drivers where it is not
 * possible to query the values of CHAR columns that use character length
 * semantics and are NOT full of non-ascii characters. The inclusion of the
 * VARCHAR2 column is just a control.
 *
 * CREATE TABLE TMP2 (
 *     TMP_ID NUMBER(10) NOT NULL PRIMARY KEY,
 *     TMP_CHAR CHAR(10 CHAR),
 *     TMP_VCHAR VARCHAR2(10 CHAR)
 * );
 */
import java.sql.*;
import java.util.*;

public class ClsCharSelection
{
    private static String createString(char character, int length)
    {
        char characters[] = new char[length];
        Arrays.fill(characters, character);
        return new String(characters);
    } // private static String createString(char, int)

    private static void insertRow(PreparedStatement ps, int key, char character)
        throws SQLException
    {
        ps.setInt(1, key);
        ps.setString(2, createString(character, 10));
        ps.setString(3, createString(character, 10));
        ps.executeUpdate();
    } // private static void insertRow(PreparedStatement, int, char)

    private static void analyseResults(PreparedStatement ps, int key)
        throws SQLException
    {
        ps.setInt(1, key);
        ResultSet results = ps.executeQuery();
        results.next();
        String tmpChar = results.getString(1);
        String tmpVChar = results.getString(2);
        System.out.println(key + ", " + tmpChar.length() + ", '" + tmpChar + "'");
        System.out.println(key + ", " + tmpVChar.length() + ", '" + tmpVChar + "'");
        results.close();
    } // private static void analyseResults(PreparedStatement, int)

    public static void main(String argv[])
        throws Exception
    {
        Driver driver = (Driver) Class.forName(
            "oracle.jdbc.driver.OracleDriver").newInstance();
        DriverManager.registerDriver(driver);
        Connection connection = DriverManager.getConnection(
            argv[0], argv[1], argv[2]);
        PreparedStatement ps = null;
        try
        {
            ps = connection.prepareStatement("DELETE FROM tmp2");
            ps.executeUpdate();
            ps.close();
            ps = connection.prepareStatement(
                "INSERT INTO tmp2 ( tmp_id, tmp_char, tmp_vchar " +
                ") VALUES ( ?, ?, ? )");
            insertRow(ps, 1, 'a');
            insertRow(ps, 2, '\u00E0');
            insertRow(ps, 3, '\uE000');
            ps.close();
            ps = connection.prepareStatement(
                "SELECT tmp_char, tmp_vchar FROM tmp2 WHERE tmp_id = ?");
            analyseResults(ps, 1);
            analyseResults(ps, 2);
            analyseResults(ps, 3);
            ps.close();
            connection.commit();
        }
        catch (SQLException e)
        {
            e.printStackTrace();
        }
        connection.close();
    } // public static void main(String[])
} // public class ClsCharSelection
FYI, this has been mentioned as early as November last year:
String with length 1 became 4 when nls_lang_semantics=CHAR
and was also brought up in February:
JDBC thin driver pads CHAR col to byte size when NLS_LENGTH_SEMANTICS=CHAR -
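Until a fixed driver is available, a common client-side workaround (a sketch, not an official Oracle fix) is to strip the trailing space padding from getString() results yourself. Caveat: this also removes the blank padding a CHAR column legitimately carries up to its declared length, and any genuine trailing spaces in the data.

```java
public class CharColumnTrim {
    // Removes trailing ASCII spaces, such as the padding the buggy
    // driver appends to CHAR values read via ResultSet.getString().
    public static String rtrim(String s) {
        if (s == null) {
            return null;
        }
        int end = s.length();
        while (end > 0 && s.charAt(end - 1) == ' ') {
            end--;
        }
        return s.substring(0, end);
    }
}
```

Applied to the sample above, each value printed by analyseResults() would come back without the driver's extra padding.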
How to find the Model size in Semantics
Hi All,
Please can you tell me how to find the model size in Semantics.
Thanks,
Indu
Hi,
Instead of looking up the ID for the entailment in DB_VIEWS, it is better to use MDSYS.SEM_NETWORK_INDEX_INFO.
For example:
select name, type, id from MDSYS.SEM_NETWORK_INDEX_INFO;
NAME TYPE ID
MYMODEL MODEL 60
MYMODEL_INF ENTAILMENT 69
*** Space used by the B-tree indexes on models and entailments ***
Indexes created in RDF_LINK$ for a specific model:
SQL> select name, type, id, index_name from MDSYS.SEM_NETWORK_INDEX_INFO;
NAME TYPE ID INDEX_NAME
MYMODEL MODEL 60 RDF_LNK_PCSGM_IDX
FAMILY100 MODEL 59 RDF_LNK_PSCGM_IDX
FAMILY100 MODEL 59 RDF_LNK_PCSGM_IDX
FAMILY2 MODEL 56 RDF_LNK_PCSGM_IDX
FAMILY2 MODEL 56 RDF_LNK_PSCGM_IDX
FAMILY MODEL 57 RDF_LNK_PCSGM_IDX
FAMILY MODEL 57 RDF_LNK_PSCGM_IDX
OTHERMODEL MODEL 58 RDF_LNK_PSCGM_IDX
OTHERMODEL MODEL 58 RDF_LNK_PCSGM_IDX
0 NETWORK 0 RDF_LNK_PCSGM_IDX
0 NETWORK 0 RDF_LNK_PSCGM_IDX
MYMODEL_INF ENTAILMENT 69 RDF_LNK_PCSGM_IDX
MYMODEL_INF ENTAILMENT 69 RDF_LNK_PSCGM_IDX
Then get the size from DBA_SEGMENTS
select bytes/1024/1024 MB, partition_name from dba_segments where segment_name in ('RDF_LNK_PSCGM_IDX','RDF_LNK_PCSGM_IDX');
Specifically for our model 60 and Entailment 69:
select bytes/1024/1024 MB, partition_name from dba_segments where segment_name in ('RDF_LNK_PSCGM_IDX','RDF_LNK_PCSGM_IDX') and partition_name in ('MODEL_60','MODEL_69');
MB PARTITION_NAME
192 MODEL_60
192 MODEL_60
60.0625 MODEL_69
62.375 MODEL_69
You would add them all up to get the total size of the indexes. -
How to determine column length semantics through ANSI Dynamic SQL ?
I am looking for a way to determine the length semantics used for a column through ANSI Dynamic SQL.
I have a database with NLS_CHARACTERSET=AL32UTF8.
In this database I have the following table:
T1(C1 varchar2(10 char), C2 varchar2(40 byte))
When I describe this table in SQL*Plus, I get:
C1 VARCHAR2(10 CHAR)
C2 VARCHAR2(40)
In my Pro*C program (mode=ansi), I get the select statement on input, use the PREPARE method to prepare it, and then use the GET DESCRIPTOR method to obtain column information for output:
GET DESCRIPTOR 'output_descriptor' VALUE :col_num
:name = NAME, :type = TYPE,
:length = LENGTH, :octet_length = OCTET_LENGTH
For both C1 and C2 I get the following:
:type=12
:length=40
:octet_length=40
So, even if I know that my database is AL32UTF8, there doesn't seem to be a way for me to determine whether char or byte length semantics were used in C1 and C2 column definitions.
Does anybody know how I can obtain this information through ANSI Dynamic SQL?
Note: the use of system views such as ALL_TAB_COLUMNS is not an option, since we wish to obtain this information even for columns in a complex select statements which may involve multiple tables.
Note: I believe OCI provides the information that we need through OCI_ATTR_DATA_SIZE (which is in bytes) and OCI_ATTR_CHAR_SIZE (which is in chars). However, switching to OCI is something we would like to avoid at this point.
Yes, I was wondering which forum would be the best for my question. I see similar questions in various forums: Call Interface, SQL and PL/SQL, and Database - General. Unfortunately there is no Pro*C or Dynamic SQL forum, which would be my first choice for posting this question.
Anyway I now posted the same question (same subject) in the Call Interface forum, so hopefully I'll get some answers there.
Thank you for the suggestion. -
Oracle length semantics migration documentation
According to the Oracle documentation on migrating length semantics:
To convert an existing schema and its associated data from byte semantics and a single-byte character set to character semantics and a multibyte character set, such as UTF-8, you need only follow these steps: [The following steps have been corrected since the magazine was printed.]
1. Export the schema.
2. Issue an ALTER SYSTEM SET NLS_LENGTH_SEMANTICS=CHAR SCOPE=BOTH command on the target database.
3. Stop and restart the instance so that the parameter change takes effect.
4. Drop the original schema.
5. Recreate the original schema and its tables (you can use import's show=Y option to get the CREATE TABLE statements). Columns in the recreated tables will use character semantics, because that's now the default.
6. Import the schema into the target database using the IGNORE=Y import option.
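The motivation behind these steps: under BYTE semantics a VARCHAR2(10) reserves 10 bytes, but in a multibyte character set such as UTF-8 a 10-character value can need more than 10 bytes, so columns must be redefined in characters. A small illustration of the difference (plain Java, independent of Oracle):

```java
import java.nio.charset.StandardCharsets;

public class LengthSemanticsDemo {
    // Character count: what CHAR length semantics measure.
    public static int charLength(String s) {
        return s.length();
    }

    // UTF-8 byte count: what BYTE length semantics measure
    // in a UTF-8 database.
    public static int byteLength(String s) {
        return s.getBytes(StandardCharsets.UTF_8).length;
    }
}
```

For "m\u00FCller" (six characters), the UTF-8 byte length is seven, because '\u00FC' encodes to two bytes; a VARCHAR2(6 BYTE) column would reject that value while a VARCHAR2(6 CHAR) column accepts it.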
What is the meaning of the terms target and original?
Suppose there is a (source) database with length semantics byte. If a (target) database
is to be created as a clone of the (source) database, except for the length semantic char, does one have to migrate the (source) database first?
Or rather, why is it not possible to
1. Export the data from the source database with length semantic byte,
2. Create a target database with length semantics char,
3. Import the data from the source database?
This documentation is, unfortunately, poorly written.
If you want to migrate data from one database to another, with both databases having different character sets, you can avoid some data expansion issues, if you migrate to character set semantics at the same time.
You cannot just export data, and import it into a character semantics database, because export/import preserves original semantics of the exported tables.
Note: there is actually no such thing as a character semantics database. Character semantics is a column/variable property. The "character semantics database" is a confusing, though commonly used, term that actually means an instance which has NLS_LENGTH_SEMANTICS set to CHAR in its initialization file. The only significant meaning of this parameter is to be the default for session-level NLS_LENGTH_SEMANTICS. It is used for sessions that do not set this parameter explicitly (through environment variable or ALTER SESSION). The session-level parameter is significant for CREATE TABLE/PROCEDURE/FUNCTION/PACKAGE [BODY] statements and tells the default for column and variable declarations that do not specify BYTE or CHAR explicitly.
To migrate the semantics of an original schema you need to create a script that will contain all CREATE statements needed to recreate this schema (at least CREATE {TYPE | TABLE | MATERIALIZED VIEW | PROCEDURE | FUNCTION | PACKAGE [BODY]}). Then, you can just add ALTER SESSION SET NLS_LENGTH_SEMANTICS=CHAR after any CONNECT command in this script. You can then run the script in the target database. How you create the script is irrelevant. You can use any reverse-engineering tool available (e.g. the SHOW=Y option of import, the DBMS_METADATA package, etc.).
After you pre-create the schema with the new semantics, you can import the data from the original (source) database with IGNORE=Y. The original semantics saved in the export file will be ignored for pre-created objects.
Note: PL/SQL may need manual corrections to migrate to character semantics. For example, SUBSTRB used to trim values before assignment may need to be replaced with SUBSTR.
-- Sergiusz -
Convert all VARCHAR2 data types to character length semantics
Hi,
I am wondering if there is an easy way to convert all columns in the database of data type VARCHAR2(x BYTE) to VARCHAR2(x CHAR)?
Regards
Håkan
The DMU does not allow character length semantics migration for the following types of objects:
- Columns already in character length semantics
- Data dictionary columns
- Columns under Oracle-supplied application schemas
- CHAR attribute columns of ADT
- Columns in clusters
- Columns on which partition keys are defined
Please check if the disabled nodes you observed in the wizard fall under one of these categories. -
Export with data length semantics
Hello,
I've following problem.
I have a table abcd which contains two VARCHAR2 columns with different data length semantics (one with BYTE, one with CHAR). The character set is single-byte, let's say WE8MSWIN1252, so data length semantics should not be a problem. Should not; details later.
So this would be:
create table abcd (a_char VARCHAR2(2 CHAR), a_byte VARCHAR2(2 BYTE));
After that I export the table via exp. I'm not setting the NLS_LENGTH_SEMANTICS environment variable, so BYTE is used.
In the dump file the data length semantics for the byte col is omitted, as I exported it with BYTE:
create table abcd (a_char VARCHAR2(2 CHAR), a_byte VARCHAR2(2));
After that, I "accidentally" import it with data length semantics set to CHAR, and the table looks like this now:
abcd
a_char VARCHAR2(2 CHAR)
a_byte VARCHAR2(2 CHAR)
The same happens vice versa when using CHAR for export and BYTE for import...
In single byte charsets this might not be so much of a problem, as one CHAR is equal to one BYTE, but...
If I compile PL/SQL against the original table and run it against the resulting table after export/import, I get an ORA-4062 and I have to recompile...
It would not be a problem if the PL/SQL I compile were on the database... The big problem is that the ORA-4062 occurs in Forms, where it's difficult for me to recompile (I would have to transfer all the sources to the customer and compile there).
Is there any possibility to export data length semantics regardless of which environment variable is set?
The database version is 9.2.0.6, but if a solution exists in higher versions I would also be happy to hear about it...
many thanks,
regards
I can't reproduce your problem:
SQL> show parameter nls_length_semantics
NAME TYPE VALUE
nls_length_semantics string BYTE
SQL> create table scott.demo( col1 varchar2(10 byte), col2 varchar2(10 char) );
SQL> describe scott.demo
Name Null? Type
COL1 VARCHAR2(10)
COL2 VARCHAR2(10 CHAR)
$ export NLS_LENGTH_SEMANTICS=BYTE
$ exp scott/tiger file=scott.dmp tables=demo
SQL> drop table scott.demo;
$ export NLS_LENGTH_SEMANTICS=CHAR
$ imp scott/tiger file=scott.dmp
SQL> describe scott.demo
Name Null? Type
COL1 VARCHAR2(10 BYTE)
COL2 VARCHAR2(10)
SQL> alter session set nls_length_semantics=byte;
SQL> describe scott.demo
Name Null? Type
COL1 VARCHAR2(10)
COL2 VARCHAR2(10 CHAR)
Can you post a test like mine?
Enrique
PS If you have access to Metalink, read Note 144808.1, "Examples and limits of BYTE and CHAR semantics usage". From 9i and up, imp doesn't read nls_length_semantics from the environment.
Edited by: Enrique Orbegozo on Dec 16, 2008 12:50 PM
Edited by: Enrique Orbegozo on Dec 16, 2008 12:53 PM -
Changing nls_length_semantics
hi guys,
i have unicode database (10g) with
nls_length_semantics = 'BYTE'
i try alter system set nls_length_semantics = 'CHAR';
then i try a select * from nls_instance_parameters;
now it is showing that the parameter value is CHAR.
However, when i create a new table, i can see that the column sizes are still in bytes.
q1) how to change the whole database from byte to char
q2) will the change affect existing table columns in bytes ? and its data
*q3) if i export a schema from a database with tables define in BYTE, then i do a import into a database with nls_length_semantics = CHAR
will the tables imported be redefined as CHAR
or will it still be in bytes ?
q4) is there any sql statement whereby i can see if a table column is in byte or char.
doing a describe doesn't show it
q5) i try to recreate a new session
and do a
select * from nls_session_parameters
the nls_length_semantics are still in BYTES
but from nls_instance_parameters, it is in CHAR
why is it so? do i need to bounce the db?
thanks guys!
Message was edited by:
OracleWannabe
"However when i create a new table, i can see that the column sizes are still in bytes."
How did you determine this?
A few points: if NLS_LENGTH_SEMANTICS=CHAR, then DESCRIBE will not flag CHAR/VARCHAR/VARCHAR2 columns as CHAR; try changing it to BYTE and it will then display CHAR. The same applies to NLS_LENGTH_SEMANTICS=BYTE: only columns with CHAR semantics will be displayed as CHAR, while those with BYTE will carry no qualifier. This behaviour also changes from version to version.
q1) how to change the whole database from byte to char
A1) Oracle does not support CHAR semantics for its own components, hence the SYS schema will always remain BYTE; there is no option to change this at the entire database level. However, if you set NLS_LENGTH_SEMANTICS to CHAR with the following command, the SYS schema and the database dictionary will continue to operate under BYTE semantics.
SQL> alter system set NLS_LENGTH_SEMANTICS=CHAR scope=both;
q2) will the change affect existing table columns in bytes? and its data
A2) No; you may choose to convert them manually using ALTER TABLE statements, e.g.:
SQL> alter table emp modify ename varchar2(10 CHAR);
q3) if i export a schema from a database with tables defined in BYTE, then i do an import into a database with nls_length_semantics = CHAR,
will the tables imported be redefined as CHAR
or will they still be in bytes?
A3) No, by default they will be in BYTE unless you choose to modify them manually.
q4) is there any sql statement whereby i can see if a table column is in byte or char?
doing a describe doesn't show it
A4) Change NLS_LENGTH_SEMANTICS to BYTE at session level and then describe the table; the columns that display CHAR are CHAR.
SQL> alter session set nls_length_semantics=byte;
Session altered.
SQL> desc emp
Name Null? Type
EMPNO NOT NULL NUMBER(4)
ENAME VARCHAR2(10 CHAR)
JOB VARCHAR2(9)
MGR NUMBER(4)
HIREDATE DATE
SAL NUMBER(7,2)
COMM NUMBER(7,2)
DEPTNO NUMBER(2)
q5) i try to recreate a new session
and do a
select * from nls_session_parameters
the nls_length_semantics are still in BYTES
but from nls_instance_parameters, it is in CHAR
A5) Are you connected as SYSDBA? The database's own objects will always remain BYTE, hence if it is the SYS schema, nls_session_parameters will always return BYTE for SYS.
To verify the value of the parameter just type the following command
show parameter nls_length_semantics
Cheers,
Manoj -
Character semantics when setting the length of NVARCHAR2 field (Oracle 10g)
SQL> CREATE TABLE viv_naseleni_mesta (
2 naseleno_mjasto_id INTEGER NOT NULL PRIMARY KEY,
3 ime NVARCHAR2(15 CHAR) NOT NULL,
4 oblast_id INTEGER NOT NULL REFERENCES viv_oblasti(oblast_id),
5 grad CHAR NOT NULL CHECK (grad IN (0,1)),
6 de_timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
7 de_update_time TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
8 de_operator INTEGER DEFAULT NULL,
9 de_versija NUMBER(5,2) DEFAULT NULL
10 );
ime NVARCHAR2(15 CHAR) NOT NULL,
ERROR at line 3:
ORA-00907: missing right parenthesis
It seems that the DB does not accept the length of a field given in character semantics. I have several sources that say it is possible, but is that only possible in 11g?
According to the docs, NCHAR and NVARCHAR2 types are always character-based, so explicitly specifying CHAR doesn't make sense.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14225/ch3globenv.htm#sthref385
Cheers
Sarma. -
Incorrect data_length for columns with char semantics in 10g
Hi,
I was going through a few databases at my work place and I noticed something unusual.
Database Server - Oracle 10g R2
Database Client - Oracle 11g R1 (11.1.0.6.0 EE)
Client OS - Win XP
SQL>
SQL> @ver
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
5 rows selected.
SQL> --
SQL> drop table t;
Table dropped.
SQL> create table t (
2 a char(3 char),
3 b char(3 byte),
4 c char(3),
5 d varchar2(3 char),
6 e varchar2(3 byte),
7 f varchar2(3)
8 );
Table created.
SQL> --
SQL> desc t
Name Null? Type
A CHAR(3 CHAR)
B CHAR(3)
C CHAR(3 CHAR) <= why does it show "CHAR" ? isn't "BYTE" semantics the default i.e. CHAR(3) = CHAR(3 BYTE) ?
D VARCHAR2(3 CHAR)
E VARCHAR2(3)
F VARCHAR2(3 CHAR) <= same here; this should be VARCHAR2(3)
SQL> --
SQL> select table_name,
2 column_name,
3 data_type,
4 data_length,
5 data_precision,
6 data_scale
7 from user_tab_columns
8 where table_name = 'T';
TABLE_NAME COLUMN_NAME DATA_TYPE DATA_LENGTH DATA_PRECISION DATA_SCALE
T A CHAR 12 <= why 12 and not 3 ? why multiply by 4 ?
T B CHAR 3
T C CHAR 12 <= same here
T D VARCHAR2 12 <= and here
T E VARCHAR2 3
T F VARCHAR2 12 <= and here
6 rows selected.
SQL>
SQL>
I believe it multiplies the size by 4, because it shows 16 in user_tab_columns when the size is changed to 4.
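The factor of 4 is consistent with AL32UTF8 needing at most 4 bytes per character: with CHAR semantics the 10g dictionary apparently reports the worst-case byte allocation (declared length × 4). The per-character UTF-8 widths can be checked directly (assuming AL32UTF8 encodes like standard UTF-8, which it does for practical purposes):

```java
import java.nio.charset.StandardCharsets;

public class Utf8Width {
    // Number of UTF-8 bytes needed for a single code point,
    // passed in as a (possibly surrogate-pair) String.
    public static int utf8Bytes(String oneCodePoint) {
        return oneCodePoint.getBytes(StandardCharsets.UTF_8).length;
    }
}
```

ASCII takes 1 byte, Latin-1 accents 2, most BMP characters 3, and supplementary characters (encoded as surrogate pairs in Java) 4, which is the worst case DATA_LENGTH appears to budget for.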
When I try this on 11g R1 server, it looks good -
Database Server - Oracle 11g R1
Database Client - Oracle 11g R1 (11.1.0.6.0 EE)
Client OS - Win XP
SQL>
SQL> @ver
BANNER
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
PL/SQL Release 11.1.0.6.0 - Production
CORE 11.1.0.6.0 Production
TNS for 32-bit Windows: Version 11.1.0.6.0 - Production
NLSRTL Version 11.1.0.6.0 - Production
5 rows selected.
SQL> --
SQL> drop table t;
Table dropped.
SQL> create table t (
2 a char(3 char),
3 b char(3 byte),
4 c char(3),
5 d varchar2(3 char),
6 e varchar2(3 byte),
7 f varchar2(3)
8 );
Table created.
SQL> --
SQL> desc t
Name Null? Type
A CHAR(3 CHAR)
B CHAR(3)
C CHAR(3)
D VARCHAR2(3 CHAR)
E VARCHAR2(3)
F VARCHAR2(3)
SQL> --
SQL> select table_name,
2 column_name,
3 data_type,
4 data_length,
5 data_precision,
6 data_scale
7 from user_tab_columns
8 where table_name = 'T';
TABLE_NAME COLUMN_NAME DATA_TYPE DATA_LENGTH DATA_PRECISION DATA_SCALE
T A CHAR 3
T B CHAR 3
T C CHAR 3
T D VARCHAR2 3
T E VARCHAR2 3
T F VARCHAR2 3
6 rows selected.
SQL>
SQL>
Is it a known bug? Unfortunately, I do not have access to Metalink.
Thanks,
isotope
Edited by: isotope on Mar 3, 2010 6:46 AM
Anurag Tibrewal wrote:
It is just because you have different NLS_LENGTH_SEMANTICS in v$nls_parameter for both the database. It is BYTE in R10 and CHAR in R11.
I cannot query v$nls_parameters in the 10g database. I tried this testcase with ALTER SESSION, checking nls_session_parameters in both 10g and 11g. The client is 11g in each case.
The DESCRIBE output looks OK, but user_tab_columns shows size × 4.
Testcase -
cl scr
select * from v$version;
-- Try CHAR semantics
alter session set nls_length_semantics=char;
select * from nls_session_parameters where parameter = 'NLS_LENGTH_SEMANTICS';
drop table t;
create table t (
a char(3 char),
b char(3 byte),
c char(3),
d varchar2(3 char),
e varchar2(3 byte),
f varchar2(3)
);
desc t
select table_name,
column_name,
data_type,
data_length,
data_precision,
data_scale
from user_tab_columns
where table_name = 'T';
-- Try BYTE semantics
alter session set nls_length_semantics=byte;
select * from nls_session_parameters where parameter = 'NLS_LENGTH_SEMANTICS';
drop table t;
create table t (
a char(3 char),
b char(3 byte),
c char(3),
d varchar2(3 char),
e varchar2(3 byte),
f varchar2(3)
);
desc t
select table_name,
column_name,
data_type,
data_length,
data_precision,
data_scale
from user_tab_columns
where table_name = 'T';
In the 10g R2 server -
SQL>
SQL> select * from v$version;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
SQL>
SQL> -- Try CHAR semantics
SQL> alter session set nls_length_semantics=char;
Session altered.
SQL> select * from nls_session_parameters where parameter = 'NLS_LENGTH_SEMANTICS';
PARAMETER VALUE
NLS_LENGTH_SEMANTICS CHAR
SQL> --
SQL> drop table t;
Table dropped.
SQL> create table t (
2 a char(3 char),
3 b char(3 byte),
4 c char(3),
5 d varchar2(3 char),
6 e varchar2(3 byte),
7 f varchar2(3)
8 );
Table created.
SQL> --
SQL> desc t
Name Null? Type
A CHAR(3)
B CHAR(3 BYTE)
C CHAR(3)
D VARCHAR2(3)
E VARCHAR2(3 BYTE)
F VARCHAR2(3)
SQL> --
SQL> select table_name,
2 column_name,
3 data_type,
4 data_length,
5 data_precision,
6 data_scale
7 from user_tab_columns
8 where table_name = 'T';
TABLE_NAME COLUMN_NAME DATA_TYPE DATA_LENGTH DATA_PRECISION DATA_SCALE
T A CHAR 12 <==
T B CHAR 3
T C CHAR 12 <==
T D VARCHAR2 12 <==
T E VARCHAR2 3
T F VARCHAR2 12 <==
6 rows selected.
SQL>
SQL> -- Try BYTE semantics
SQL> alter session set nls_length_semantics=byte;
Session altered.
SQL> select * from nls_session_parameters where parameter = 'NLS_LENGTH_SEMANTICS';
PARAMETER VALUE
NLS_LENGTH_SEMANTICS BYTE
SQL> --
SQL> drop table t;
Table dropped.
SQL> create table t (
2 a char(3 char),
3 b char(3 byte),
4 c char(3),
5 d varchar2(3 char),
6 e varchar2(3 byte),
7 f varchar2(3)
8 );
Table created.
SQL> --
SQL> desc t
Name Null? Type
A CHAR(3 CHAR)
B CHAR(3)
C CHAR(3)
D VARCHAR2(3 CHAR)
E VARCHAR2(3)
F VARCHAR2(3)
SQL> --
SQL> select table_name,
2 column_name,
3 data_type,
4 data_length,
5 data_precision,
6 data_scale
7 from user_tab_columns
8 where table_name = 'T';
TABLE_NAME COLUMN_NAME DATA_TYPE DATA_LENGTH DATA_PRECISION DATA_SCALE
T A CHAR 12 <==
T B CHAR 3
T C CHAR 3
T D VARCHAR2 12 <==
T E VARCHAR2 3
T F VARCHAR2 3
6 rows selected.
SQL>
SQL>

In 11g R1 server -
SQL>
SQL> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
PL/SQL Release 11.1.0.6.0 - Production
CORE 11.1.0.6.0 Production
TNS for 32-bit Windows: Version 11.1.0.6.0 - Production
NLSRTL Version 11.1.0.6.0 - Production
5 rows selected.
SQL>
SQL> -- Try CHAR semantics
SQL> alter session set nls_length_semantics=char;
Session altered.
SQL> select * from nls_session_parameters where parameter = 'NLS_LENGTH_SEMANTICS';
PARAMETER VALUE
NLS_LENGTH_SEMANTICS CHAR
1 row selected.
SQL> --
SQL> drop table t;
Table dropped.
SQL> create table t (
2 a char(3 char),
3 b char(3 byte),
4 c char(3),
5 d varchar2(3 char),
6 e varchar2(3 byte),
7 f varchar2(3)
8 );
Table created.
SQL> --
SQL> desc t
Name Null? Type
A CHAR(3)
B CHAR(3 BYTE)
C CHAR(3)
D VARCHAR2(3)
E VARCHAR2(3 BYTE)
F VARCHAR2(3)
SQL> --
SQL> select table_name,
2 column_name,
3 data_type,
4 data_length,
5 data_precision,
6 data_scale
7 from user_tab_columns
8 where table_name = 'T';
TABLE_NAME COLUMN_NAME DATA_TYPE DATA_LENGTH DATA_PRECISION DATA_SCALE
T A CHAR 3
T B CHAR 3
T C CHAR 3
T D VARCHAR2 3
T E VARCHAR2 3
T F VARCHAR2 3
6 rows selected.
SQL>
SQL> -- Try BYTE semantics
SQL> alter session set nls_length_semantics=byte;
Session altered.
SQL> select * from nls_session_parameters where parameter = 'NLS_LENGTH_SEMANTICS';
PARAMETER VALUE
NLS_LENGTH_SEMANTICS BYTE
1 row selected.
SQL> --
SQL> drop table t;
Table dropped.
SQL> create table t (
2 a char(3 char),
3 b char(3 byte),
4 c char(3),
5 d varchar2(3 char),
6 e varchar2(3 byte),
7 f varchar2(3)
8 );
Table created.
SQL> --
SQL> desc t
Name Null? Type
A CHAR(3 CHAR)
B CHAR(3)
C CHAR(3)
D VARCHAR2(3 CHAR)
E VARCHAR2(3)
F VARCHAR2(3)
SQL> --
SQL> select table_name,
2 column_name,
3 data_type,
4 data_length,
5 data_precision,
6 data_scale
7 from user_tab_columns
8 where table_name = 'T';
TABLE_NAME COLUMN_NAME DATA_TYPE DATA_LENGTH DATA_PRECISION DATA_SCALE
T A CHAR 3
T B CHAR 3
T C CHAR 3
T D VARCHAR2 3
T E VARCHAR2 3
T F VARCHAR2 3
6 rows selected.
SQL>
SQL>

isotope
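If the goal is to see the declared length in characters rather than bytes, USER_TAB_COLUMNS also exposes the CHAR_LENGTH and CHAR_USED columns, which are independent of the database character set. A sketch of the query, assuming the same test table T as above:

```sql
-- CHAR_LENGTH reports the declared length in characters;
-- CHAR_USED shows whether the column was declared with
-- BYTE ('B') or CHAR ('C') semantics.
select column_name,
       data_type,
       data_length,   -- always in bytes
       char_length,   -- declared length in characters
       char_used      -- 'B' = byte semantics, 'C' = char semantics
  from user_tab_columns
 where table_name = 'T';
```

This avoids having to infer the semantics from DATA_LENGTH, whose value depends on the character set of the particular server.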