Character sets and conversions
Hi all,
We're facing a quite complex problem, and I'm not even able to pinpoint where it is going wrong or what needs configuring, partly for lack of experience and partly because it combines different technical areas, of which I'm only responsible for some.
So I'll sketch the situation briefly, and hopefully you can give me some guidelines or hints as to where to look.
The setup: a web application (so clients access via browser) on WebLogic on Linux, Tuxedo on iSeries, and, as far as I understand, a database internal to the iSeries where the data is stored.
Data is entered into the DB by a data-entry application that comes with the iSeries.
The problem: when consulting data through the web application, some characters don't show up correctly, e.g. @ in email addresses, e's with accents, ...
The chain being "browser <-> WL <-> Tuxedo <-> DB", the problem might lie at different points. But with tracing activated, we could see that the response going out of Tuxedo to WL is already incorrect...
Any hint as to what to look for, or what configuration is important, would be welcome ...
Some sub-questions:
- I understand Tuxedo is always "installed" in English, with no other option. This means that e.g. logs are in English.
But can/do I need to define some character set?
- Between Tuxedo <-> DB, can you use some conversion tables?
Any help would be appreciated, we're quite lost ..
Hi,
Given that you are running Tuxedo on iSeries, I'm guessing you are running Tuxedo 6.5, as the port of the current Tuxedo release for iSeries hasn't been released yet. Tuxedo 6.5 does not directly support multi-byte character strings. The two common buffer types for string data in Tuxedo are STRING, which doesn't support multi-byte characters, and CARRAY, which does, since a CARRAY is essentially a blob. Do you know what buffer type the Tuxedo application is using to send data to WebLogic Server?
In Tuxedo 9.0 and later, direct support for multi-byte strings was added in the form of the MBSTRING buffer type. This buffer type supports multi-byte strings with a variety of character sets and encodings.
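To see why the buffer type matters, here is a minimal Java sketch (illustrative only, not Tuxedo API code): bytes carried opaquely, as a CARRAY carries them, survive intact, while decoding multi-byte data under a single-byte charset assumption mangles it.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class CarrayVsString {
    public static void main(String[] args) {
        // "é" (U+00E9) encoded as UTF-8 is two bytes: 0xC3 0xA9
        byte[] payload = "\u00e9".getBytes(StandardCharsets.UTF_8);

        // CARRAY-style transport: the buffer is an opaque byte array,
        // so the bytes arrive exactly as they were sent
        byte[] received = Arrays.copyOf(payload, payload.length);
        System.out.println(Arrays.equals(payload, received)); // true

        // STRING-style handling under a single-byte charset assumption:
        // decoding the two UTF-8 bytes as ISO-8859-1 yields "Ã©"
        String mangled = new String(payload, StandardCharsets.ISO_8859_1);
        System.out.println(mangled); // Ã©
    }
}
```
This is exactly the kind of corruption that shows up as wrong accented characters in a trace between Tuxedo and WebLogic.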
Regards,
Todd Little
Oracle Tuxedo Chief Architect
Similar Messages
-
UTF/Japanese character set and my application
Blankfellaws...
a simple query about the internationalization of an enterprise application..
I have a considerably large application running as 4 layers.. namely..
1) presentation layer - I have a servlet here
2) business layer - I have an EJB container here with EJBs
3) messaging layer - I have either Weblogic JMS here in which case it is an
application server or I will have MQSeries in which case it will be a
different machine all together
4) adapter layer - something like a connector layer with some specific or
rather customized modules which can talk to enterprise repositories
The Database has few messages in UTF format.. and they are Japanese
characters
My requirement: I need those messages to be picked up from the database by
the business layer and passed on to the client screen, which is a web browser,
through the presentation layer.
What are the various points to be noted to get this done?
Where all do I need to set the character set, and what would be the ideal
character set to use to support the maximum range of characters?
Is there anything to be done specifically in my application code regarding
this?
Or is it just a matter of setting the character sets in the application
servers / web servers / web browsers?
Please enlighten me on these areas, as I am working on something similar and
trying to figure out what's wrong in my current application. When the data
comes to the screen through my application, it looks corrupted. But the same
message, when read through a simple servlet, displays without a problem.
I am confused!!
Thanks in advance
Manesh
Hello Manesh,
For the database I would recommend using UTF-8.
As for the character problems, could you elaborate on which version of WebLogic
you are using and on the nature of the problem.
If your problem is that of displaying the characters from the db and you are
using JSP, you could try putting
<%@ page language="java" contentType="text/html; charset=UTF-8"%> on the
first line,
or if a servlet .... response.setContentType("text/html; charset=UTF-8");
Also to automatically select the correct charset by the browser, you will
have to include
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> in the
jsp.
You could replace the "UTF-8" with other charsets you are using.
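The effect of those directives can be sketched in plain Java (a hedged illustration, not actual servlet code; the `render` helper below is a hypothetical stand-in): the bytes written to the response and the charset declared to the browser have to agree.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class CharsetResponse {
    // Hypothetical stand-in for what the container does after
    // response.setContentType("text/html; charset=UTF-8"): encode the
    // page text with the declared charset before sending it.
    static byte[] render(String body, Charset charset) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        Writer w = new OutputStreamWriter(out, charset);
        w.write(body);
        w.close();
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        String page = "\u65e5\u672c\u8a9e"; // "Japanese" in Japanese characters
        byte[] utf8 = render(page, StandardCharsets.UTF_8);

        // A browser honoring the declared UTF-8 charset round-trips cleanly
        System.out.println(new String(utf8, StandardCharsets.UTF_8).equals(page)); // true

        // A browser falling back to ISO-8859-1 sees mojibake instead
        System.out.println(new String(utf8, StandardCharsets.ISO_8859_1).equals(page)); // false
    }
}
```
This is why the contentType directive and the meta tag should both declare the same charset the bytes were actually encoded with.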
I hope this helps...
David.
-
Hi. everyone.
Which Oracle dictionary view contains information about the
database character set and the national character set?
I checked v$database, but there was not the information.
It seems that there are some differences between "nls_* " init parameters
and the database character set.
"Alter database backup controlfile to trace" gave me the character set of db,
but I would like to know whether there are oracle dictionary regarding them.
Thanks in advance. Have a nice day.
Best Regards.
I found the dictionary view that contains the character set and
national character set of the database:
select * from nls_database_parameters
where parameter like '%CHARACTERSET';
Thanks for reading.
Have a good day.
Best Regards. -
I am trying to install Oracle 8i on Linux and it does not work : once the install is finished, I have a message saying that "Character Set not found".
I am running a French version of Linux (fr-latin1) and I try to install Oracle with French and English as languages.
Another problem with this install: Oracle does not seem to recognize that I have 6.9 GB available for it, and says that I don't have enough space for the install...
And at the end of the install, it takes ages (about 15 minutes) during which nothing seems to happen. On one machine I got past this phase, but on the other I never saw it finish; it looks as if the computer crashed. Is that normal?
I went through all the initialization phases, set the correct environment variables...
thanks
Solange
I've been dealing with the same problems in the English version but could bypass this by doing the following:
- Just ignore the disk space stuff.
- Ignore the charset message, too.
- When creating a database, choose custom and then select the WE8ISO8859P1 character set. It worked for Portuguese, so it should work for French also.
- Everyone here recommended, and I do the same: leave the database creation for later, not during installation.
Good Luck! -
I have a table with a clob field on an Oracle 8.1.7.4 database. When querying the clob field via odbc and ado the value is truncated. The Oracle server and client are using a WE8ISO8859P1 character set. Has anyone come across this before.
Thanks.
I believe the data should be able to be represented by ISO-8859. The data is a long random string of characters that represents a fingerprint image.
We seem to only get 996 characters back from the database. If I do a getchunk on the data then I get 996 characters of data, then 996 NULLS, then 996 characters of data and so on. The 996 NULLS should be data.
The data is in the database because I can do a dbms_lob.substr and get the correct info back. -
Oracle Database Character set and DRM
Hi,
I see the below context in the Hyperion EPM Installation document.
We need to install only Hyperion DRM and not the entire Hyperion product suite. Do we really have to create the database with one of the UTF-8 character sets?
Why does it say that we must create the database this way?
Any help is appreciated.
Oracle Database Creation Considerations:
The database must be created using Unicode Transformation Format UTF-8 encoding
(character set). Oracle supports the following character sets with UTF-8 encoding:
l AL32UTF8 (UTF-8 encoding for ASCII platforms)
l UTF8 (backward-compatible encoding for Oracle)
l UTFE (UTF-8 encoding for EBCDIC platforms)
Note: The UTF-8 character set must be applied to the client and to the Oracle database.
Edited by: 851266 on Apr 11, 2011 12:01 AM
Srini,
Thanks for your reply.
I would assume that the ConvertToClob function would understand the byte order mark for UTF-8 in the blob and not include any part of it in the clob. The byte order mark for UTF-8 consists of the byte sequence EF BB BF. The last byte, BF, corresponds to the inverted question mark '¿' in ISO-8859-1. To me, it seems as if ConvertToClob is not converting correctly.
Am I missing something?
BTW, the database version is 10.2.0.3 on Solaris 10 x86_64
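The byte arithmetic can be checked with a small Java sketch (illustrative only; it does not call ConvertToClob): the UTF-8 BOM bytes EF BB BF decode to a single U+FEFF under UTF-8, while the trailing 0xBF alone is '¿' under ISO-8859-1.

```java
import java.nio.charset.StandardCharsets;

public class BomCheck {
    public static void main(String[] args) {
        byte[] bom = {(byte) 0xEF, (byte) 0xBB, (byte) 0xBF}; // UTF-8 byte order mark

        // Decoded as UTF-8, the three bytes form one U+FEFF character
        String asUtf8 = new String(bom, StandardCharsets.UTF_8);
        System.out.println(asUtf8.length() == 1 && asUtf8.charAt(0) == '\uFEFF'); // true

        // Decoded as ISO-8859-1, each byte is its own character: ï » ¿
        String asLatin1 = new String(bom, StandardCharsets.ISO_8859_1);
        System.out.println(asLatin1.charAt(2) == '\u00BF'); // true: inverted question mark
    }
}
```
So a '¿' appearing at the start of converted text is a strong hint that a UTF-8 BOM was passed through byte-for-byte instead of being consumed.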
Kind Regards,
Eyðun
Edited by: Eyðun E. Jacobsen on Apr 24, 2009 8:26 PM -
MySQL Character Set and Collation
Hey There,
Can somebody please tell me why MySQL's PKGBUILD contains:
--with-charset=latin1 --with-collation=latin1_general_ci
line? I mean, why not utf8 and utf8_general_ci instead of latin1?
Non latin character sets and accented latin character with refind
I need to use refind to deal with strings containing accented
characters like žittâ lísu, but it doesn't seem to
find them. Also when using it with cyrillic characters , it won't
find individual characters, but if I test for [\w] it'll work.
I found a livedocs that says cf uses the Java unicode
standard for characters. Is it possible to use refind with non
latin characters or accented characters or do I have to write my
own Java?
works fine for me using unicode data:
<cfprocessingdirective pageencoding="utf-8">
<cfscript>
t="Tá mé in ann gloine a ithe;
Nà chuireann sé isteach nó amach
orm";
s="á";
writeoutput("search:=#t#<br>for:=#s#<br>found
at:=#reFind(s,t,1,false)#");
</cfscript>
what's the encoding for your data? -
Conversions between character sets when using exp and imp utilities
I use EE8ISO8859P2 character set on my server,
when exporting database with NLS_LANG not set
then conversion should be done between
EE8ISO8859P2 and US7ASCII charsets, so some
characters not present in US7ASCII should not be
successfully converted.
But when I import such a dump, all characters not
present in US7ASCII charset are imported to the database.
I thought that some characters should be lost when
doing such a conversion. Can someone tell me why this is not so?
Not exactly. If the import is done into a database with the same character set, then it does not matter how it was exported. Conversion (corruption) may happen if the destination DB has a different character set. See this example:
[ora102 work db102]$ echo $NLS_LANG
AMERICAN_AMERICA.WE8ISO8859P15
[ora102 work db102]$ sqlplus test/test
SQL*Plus: Release 10.2.0.1.0 - Production on Tue Jul 25 14:47:01 2006
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
TEST@db102 SQL> create table test(col1 varchar2(1));
Table created.
TEST@db102 SQL> insert into test values(chr(166));
1 row created.
TEST@db102 SQL> select * from test;
C
¦
TEST@db102 SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
[ora102 work db102]$ export NLS_LANG=AMERICAN_AMERICA.EE8ISO8859P2
[ora102 work db102]$ sqlplus test/test
SQL*Plus: Release 10.2.0.1.0 - Production on Tue Jul 25 14:47:55 2006
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
TEST@db102 SQL> select col1, dump(col1) from test;
C
DUMP(COL1)
©
Typ=1 Len=1: 166
TEST@db102 SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
[ora102 work db102]$ echo $NLS_LANG
AMERICAN_AMERICA.EE8ISO8859P2
[ora102 work db102]$ exp test/test file=test.dmp tables=test
Export: Release 10.2.0.1.0 - Production on Tue Jul 25 14:48:47 2006
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Export done in EE8ISO8859P2 character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P15 character set (possible charset conversion)
About to export specified tables via Conventional Path ...
. . exporting table TEST 1 rows exported
Export terminated successfully without warnings.
[ora102 work db102]$ sqlplus test/test
SQL*Plus: Release 10.2.0.1.0 - Production on Tue Jul 25 14:48:56 2006
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
TEST@db102 SQL> drop table test purge;
Table dropped.
TEST@db102 SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
[ora102 work db102]$ imp test/test file=test.dmp
Import: Release 10.2.0.1.0 - Production on Tue Jul 25 14:49:15 2006
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Export file created by EXPORT:V10.02.01 via conventional path
import done in EE8ISO8859P2 character set and AL16UTF16 NCHAR character set
import server uses WE8ISO8859P15 character set (possible charset conversion)
. importing TEST's objects into TEST
. importing TEST's objects into TEST
. . importing table "TEST" 1 rows imported
Import terminated successfully without warnings.
[ora102 work db102]$ export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P15
[ora102 work db102]$ sqlplus test/test
SQL*Plus: Release 10.2.0.1.0 - Production on Tue Jul 25 14:49:34 2006
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
TEST@db102 SQL> select col1, dump(col1) from test;
C
DUMP(COL1)
¦
Typ=1 Len=1: 166
TEST@db102 SQL> -
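The session above can be reproduced outside Oracle with a short Java sketch. It uses ISO-8859-1 and ISO-8859-2 as rough stand-ins for WE8ISO8859P15 and EE8ISO8859P2 (an assumption; the Oracle sets differ from these in a few code points): the stored byte 166 never changes, only the client's interpretation of it does.

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class SameByteTwoCharsets {
    public static void main(String[] args) {
        // the single byte written by: insert into test values(chr(166))
        byte[] stored = {(byte) 166};

        // A Western European (ISO-8859-1) client sees the broken bar
        System.out.println(new String(stored, StandardCharsets.ISO_8859_1)); // ¦

        // An Eastern European (ISO-8859-2) client sees 'Ś' for the very same byte
        System.out.println(new String(stored, Charset.forName("ISO-8859-2"))); // Ś
    }
}
```
This matches the DUMP output in the session: Typ=1 Len=1: 166 both times, with only the displayed glyph changing with NLS_LANG.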
CHARACTER SET CONVERSION PROBLEM BETWEEN WIN XP (SOURCE EXPORT) AND WIN 7
Hi colleagues, please assist:
I have a laptop running Windows 7 Professional, with Oracle Database 10g Release 10.2.0.3.0. I need to import a dump into this database. The dump originates from a client PC running Windows XP and Oracle 10g Release 10.2.0.1.0. When I use the import utility in my database (on the laptop), the following happens:
Import: Release 10.2.0.3.0 - Production on Tue Nov 9 17:03:16 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Username: system/password@orcl
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options
Import file: EXPDAT.DMP > F:\uyscl.dmp
Enter insert buffer size (minimum is 8192) 30720>
Export file created by EXPORT:V08.01.07 via conventional path
Warning: the objects were exported by UYSCL, not by you
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
export client uses WE8ISO8859P1 character set (possible charset conversion)
export server uses WE8ISO8859P1 NCHAR character set (possible ncharset conversion)
List contents of import file only (yes/no): no >
When I press enter, the import window terminates prematurely without completing the process. What should I do to fix this problem?
Import: Release 10.2.0.3.0 - Production on Fri Nov 12 14:57:27 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Username: system/password@orcl
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options
Import file: EXPDAT.DMP > F:\Personal\DPISIMBA.dmp
Enter insert buffer size (minimum is 8192) 30720>
Export file created by EXPORT:V10.02.01 via conventional path
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
List contents of import file only (yes/no): no >
Ignore create error due to object existence (yes/no): no >
Import grants (yes/no): yes >
Import table data (yes/no): yes >
Import entire export file (yes/no): no >
Username: -
I have had problems with extended charaters
in a database table not being represented
correctly by clients. I believe that the FAQ
concerning "Why do I see questions marks..."
identified the problems. The database is set
to Latin4 and clients at Latin1. I am seeing
inverted questions marks for characters that
don't match when displaying the table on
a client, whether SQLPLUS under NT4.0, SQLPLUS under Solaris 8, even ODBC to MS Access.
My questions are
1) How do the database and clients know
that the character sets are different? We
at first assumed that only the bit patterns
were seen so we might see different characters for the same 8 bits.
2) How are the character sets compared?
3) If a character is moved to a different
bit pattern, is this recognized and handled
properly? Or does it only match characters
with the same bit pattern?
Answers will be greatly appreciated after
weeks of asking questions outside this forum
and searching the WWW.
Thanks,
Dave
Hi,
You didn't mention what your nls_lang setting on your client is set to. Your NLS_LANG setting for Windows should reflect your current code page. In general two scenarios can occur
when data is sent from client to the database. If the database character set and client NLS_LANG match then no conversion takes place. Otherwise the data is automatically converted converted from the client code page to the database character set and vice versa. In either of these two scenarios if the NLS_LANG is set improperly (not reflecting current client OS code) corruption can occur. In the scenario you are describing have you entered non Latin1 data into the database? If so how? If you have, and it was entered properly you will still have difficulties displaying the data in SQL*PLUS on a Latin1 client as it will not know about these characters. Another tactic that would be useful is to use the dump command to see if your latin4 characters are stored properly on the database. An example would be something like: SELECT DUMP(col,1016)FROM table ;
Import error character set conversion
import done in US7ASCII character set and AL16UTF16 NCHAR character set
import server uses AL32UTF8 character set (possible charset conversion)
export client uses WE8ISO8859P1 character set (possible charset conversion)
Import terminated successfully without warnings.
How do I resolve this? I am importing into a Linux machine.
import done in US7ASCII character set and AL16UTF16 NCHAR character set
import server uses AL32UTF8 character set (possible charset conversion)
export client uses WE8ISO8859P1 character set (possible charset conversion)
Import terminated successfully without warnings.
Your import was successful. However, there are a few things you need to note from the top of the import log. It is advisable to use correct NLS settings when doing export/import to avoid data corruption or loss. The idea is to set the correct NLS setting (NLS_LANG) at the client before running exp or imp. In some cases the setting does not matter, because the character sets involved are compatible.
In this case, I would not worry about it. -
Conversion error, from character set 4102 to character set 4103
Hi,
We've developed a JCO server (in Java), with an ABAP report calling the function provided by the JCO server.
MetaData:
static {
    repository = new Repository("SMSRepository");
    fmeta = new JCO.MetaData("ZSMSSEND");
    fmeta.addInfo("TO", JCO.TYPE_CHAR, 255, 0, 0, JCO.IMPORT_PARAMETER, null);
    fmeta.addInfo("CONTENT", JCO.TYPE_CHAR, 255, 0, 0, JCO.IMPORT_PARAMETER, null);
    fmeta.addInfo("RETN", JCO.TYPE_CHAR, 255, 0, 0, JCO.EXPORT_PARAMETER, null);
    repository.addFunctionInterfaceToCache(fmeta);
}
Server parameters:
Properties prop = new Properties();
prop.put("jco.server.gwhost","shaw2k07");
prop.put("jco.server.gwserv","sapgw01");
prop.put("jco.server.progid","JCOSERVER01");
prop.put("jco.server.unicode","1");
srv = new SMSServer(prop,repository);
If we run the JCO server both on my client machine (from Developer Studio) and on the WAS machine (as a stand-alone Java program), everything is OK. On the ABAP side, the SM59 Unicode test returns that the destination is a Unicode system, and the ABAP report calling the function runs smoothly.
But when we package this JCO server into a web application and deploy it to WAS, a problem occurs. The SM59 Unicode test still says the destination is a Unicode system, but the ABAP report fails with an ABAP dump:
Conversion error between two character set
RFC_CONVERSION_FIELD
Conversion error "RETN" from character set 4102 to character set 4103
A conversion error occurred during the execution of a Remote Function
Call. This happened either when the data was received or when it was
sent. The latter case can only occur if the data is sent from a Unicode
system to a non-Unicode system.
I read the jrfc.trc log; it shows that the server receives data in code page 4103 (that's OK), but sends data in code page 4102 (that's the problem). 4102 is UTF-16 big-endian and 4103 is UTF-16 little-endian. Our system is Windows on Intel 32-bit architecture, so based on Note 552464 it should be 4103.
Why does it send data (the Java JCO server sending the output parameter to ABAP) in 4102?
What's the problem??? Thank you very much!!
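The difference between the two code pages is just byte order, which a short Java sketch makes visible (4102/4103 are SAP's code page identifiers; the mapping to UTF-16BE/LE follows the note cited above):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class Utf16Endianness {
    public static void main(String[] args) {
        String s = "A"; // U+0041

        // Code page 4102: UTF-16 big-endian, high byte first
        byte[] be = s.getBytes(StandardCharsets.UTF_16BE);
        System.out.println(Arrays.toString(be)); // [0, 65]

        // Code page 4103: UTF-16 little-endian, low byte first
        byte[] le = s.getBytes(StandardCharsets.UTF_16LE);
        System.out.println(Arrays.toString(le)); // [65, 0]
    }
}
```
Every 16-bit unit read under the wrong endianness decodes to a completely different character, which is consistent with the RFC_CONVERSION_FIELD dump rather than mere accent corruption.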
Best Regards,
Xiaoming Yang
Message was edited by:
Xiaoming Yang
Hello Experts,
Any replies on this?
I am also getting a similar kind of error.
Do you have any idea on this?
Thanks and Best Regards,
Suresh -
Character Set conversion fails
Hi,
We have 2 databases, both are UTF8. One database fetches the data from other via a DB Link. Say there are 2 databases A and B. A fetches data from B. A is in WE8ISO8859P1 character set and B is in EEC8EUROPA3 . While fetching the data european characters come as junk in database A. I tried using convert but that too fails.
Can you please suggest if there is any way where i can have the characters presereved?
Thanks.Hi Deng,
I notice you had a similar problem to James Chen regarding
character set conversion errors, and that you did indeed fix
this problem.
Re: DB Link not working
I'd really appreciate it if you could post
on James' query and let him and other
members know your solution.
Given that members of the Designer community use the forum
to work together and help other members, I think they would
also appreciate this information!
Thanks for your help.
Regards,
Dominic
Designer Prod Mgt
Oracle Corp -
Server uses WE8ISO8859P15 character set (possible charset conversion)
Hi,
when EXP in 9i I receive :
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in WE8PC850 character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P15 character set (possible charset conversion)
What is the problem?
Thank you.
I exported just a table; how do I see whether it was exported?
Dear user522961,
You have either not defined or misdefined the NLS_LANG environment variable before running the export command.
Here is a little illustration;
*$ echo $NLS_LANG*
*AMERICAN_AMERICA.WE8ISO8859P9*
$ exp system/password@opttest file=ogan.dmp owner=OGAN
Export: Release 10.2.0.4.0 - Production on Mon Jul 12 18:10:47 2010
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
*Export done in WE8ISO8859P9 character set and AL16UTF16 NCHAR character set*
About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user OGAN
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user OGAN
About to export OGAN's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export OGAN's tables via Conventional Path ...
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.
*$ export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P15*
$ exp system/password@opttest file=ogan.dmp owner=OGAN
Export: Release 10.2.0.4.0 - Production on Mon Jul 12 18:12:41 2010
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
*Export done in WE8ISO8859P15 character set and AL16UTF16 NCHAR character set*
*server uses WE8ISO8859P9 character set (possible charset conversion)*
About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user OGAN
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user OGAN
About to export OGAN's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export OGAN's tables via Conventional Path ...
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.
Hope it helps,
Ogan