Character Set Settings Affecting CSV Upload
Hi,
We have a production issue that I need a little help with. Using one of the examples listed here I created a CSV upload. It works fine on our dev box, but we have a problem on production.
The database is set to the WE8ISO8859P1 character set, but for some reason the web browser on production keeps changing it to UTF8, so some of the characters it uploads are incorrect, e.g. £ becomes £.
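The corrupted pound sign is the classic UTF-8/Latin-1 double-encoding symptom. A minimal Python sketch, purely illustrative and nothing to do with APEX itself, shows where the stray byte comes from:

```python
# A Latin-1 (WE8ISO8859P1) pound sign, encoded as UTF-8 and then
# re-read as Latin-1, gains a stray leading byte.
pound = "\u00a3"                        # '£'
utf8_bytes = pound.encode("utf-8")      # b'\xc2\xa3': two bytes in UTF-8
garbled = utf8_bytes.decode("latin-1")  # the same bytes read as Latin-1
print(garbled)                          # -> 'Â£'
```

Seeing exactly this pattern usually means one side of the transfer treats the data as UTF-8 while the other side expects single-byte Latin-1.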
I'm guessing this may be a setup issue, as this does not happen on dev, but I've run out of things to check.
Automatic CSV Encoding is set to Yes in both environments.
The following select returns WE8ISO8859P1 on both environments
select value$ charset, 'sys.props$' source from sys.props$ where name='NLS_CHARACTERSET' ;
select property_value, 'database_properties' from database_properties where property_name='NLS_CHARACTERSET' ;
select value, 'nls_database_parameters' from nls_database_parameters where parameter='NLS_CHARACTERSET';
I have limited access to the system, but what else can I check?
Thanks in advance
Hi Andy,
1) You say that "you have created a csv upload". What does this mean?
2) How are you loading data? Is it through Application Express -> Utilities -> Data Load? Or is this something you've built into your own application?
3) What is the encoding of the CSV file you're using to upload data?
Automatic CSV Encoding only impacts CSV download from reports. It has nothing to do with data loading.
Joel
Similar Messages
-
Change character set settings & character set in infopackages
hello Experts
Because of Unicode, I need to change in all of the BW infopackages : character set settings & character set.
Do you know a way to customize all the infopackages globally... or do I have to do it manually on all of them?
thanks for help!
Dev
Hi,
Following links may help in UNICODE conversion of SAP BW :-
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/fdbedb90-0201-0010-4483-b64ac397986a?QuickLink=index&overridelayout=true
http://wiki.sdn.sap.com/wiki/display/unicode/ChallengesinBIUnicodeconversion
Also go through SAP Note 888210.
In short you can change the UNICODE settings globally for all the source systems through SM59.
Double click on the source system name, go to the MDMP & UNICODE tab, select the UNICODE radio button there, and save it. Do it for all the source systems connected with your BW system which are UNICODE compliant, even the self-connecting source system.
After that you will not need to do any changes in the IP.
Navesh
Edited by: navsamol on Dec 12, 2011 1:55 PM -
Character set is lost during uploading file to blob?
Hi!
When I upload a file from disk to htmldb_application_files it is stored there as a blob. Then when I convert this blob to a clob, with the proper character set id as a parameter, the characters in the clob are not correct. They lose serifs etc.
The question is: what happens to the characters during upload to the blob?
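For what it's worth, a BLOB round-trip itself is byte-preserving, so the upload step cannot garble characters on its own; the decode step can. A small Python sketch (cp1252 stands in here for a typical Windows client charset, chosen only for illustration):

```python
# A "BLOB" is just bytes: storage is a passthrough. Corruption appears
# only when the bytes are decoded with the wrong character set id.
original = "café – naïve"
blob = original.encode("cp1252")        # what lands in the BLOB, byte-for-byte

right = blob.decode("cp1252")                    # correct charset: round-trips
wrong = blob.decode("utf-8", errors="replace")   # wrong charset: garbage

assert right == original
assert "\ufffd" in wrong                # replacement characters appear
```

So if the clob comes out wrong, the charset id passed to the conversion does not match the encoding the file actually had on disk.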
Thanks! -
Data load uses wrong character set, where to correct? APEX bug/omission?
Hi,
I created a set of Data Load pages in my application, so the users can upload a CSV file.
But unlike the Load spreadsheet data (under SQL Workshop\Utilities\Data Workshop), where you can set the 'File Character Set', I didn't see where to set the Character set for Data Load pages in my application.
Now there is a character set mismatch: "m³/h" and "°C" become "m�/h" and "�C".
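The � (U+FFFD) replacement character is the giveaway here. A minimal Python sketch of the mismatch, assuming the CSV is in a single-byte Western charset:

```python
# '³' is byte 0xB3 in ISO-8859-1/Windows-1252. 0xB3 on its own is not
# valid UTF-8, so a UTF-8 decoder substitutes U+FFFD ('�') for it.
data = "m³/h".encode("latin-1")                 # b'm\xb3/h'
print(data.decode("utf-8", errors="replace"))   # -> 'm�/h'
```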
Where to set?
Seems like an APEX bug or at least omission, IMHO the Data Load page should ask for the character set, as clients with different character sets could be uploading CSV.
Apex 4.1 (testing on the apex.oracle.com website)
Hello JP,
Please give us some more details about your database version and its character set, and the character set of the CSV file.
>> …But unlike the Load spreadsheet data (under SQL Workshop\Utilities\Data Workshop), where you can set the 'File Character Set', I didn't see where to set the Character set for Data Load pages in my application.
It seems that you are right. I was not able to find any reference to the (expected/default) character set of the uploaded file in the current APEX documentation.
>> If it's an APEX omission, where could I report that?
Usually, an entry on this forum is enough as some of the development team members are frequent participants. Just to be sure, I’ll draw the attention of one of them to the thread.
Regards,
Arie.
♦ Please remember to mark appropriate posts as correct/helpful. For the long run, it will benefit us all.
♦ Author of Oracle Application Express 3.2 – The Essentials and More -
Oracle to MySql character set problem
Dear Gurus,
My database is Oracle 11g R2 (11.2.0.1.0) on Sun Solaris 10. To get data from a MySQL database for reporting purposes, I used DG4ODBC and strictly followed the OMSC note "Detailed Overview of Connecting Oracle to MySQL Using DG4ODBC Database Link [ID 1320645.1]". Here are the main configuration steps:
- Check DG4ODBC 32/64-bit
- Install and configure ODBC Driver Manager unixodbc-2.2.14
- Install and configure MyODBC 5.1.8
- Configure tnsnames.ora and listener.ora
- Create db links
Oracle character set is AL32UTF8
MySQL character set is utf8
$ODBC_HOME/etc/odbc.ini
[ODBC Data Sources]
myodbc5 = MyODBC 5.1 Driver DSN
[myodbc5]
Driver = /opt/mysql/myodbc5/lib/libmyodbc5.so
Description = Connector/ODBC 5.1 Driver DSN
SERVER = <mysql server ip>
PORT = 3306
USER = <mysql_user>
PASSWORD = ****
DATABASE = <mysql db name>
OPTION = 0
TRACE = OFF
$ORACLE_HOME/hs/admin/initmyodbc5.ora
# HS init parameters
HS_FDS_CONNECT_INFO=myodbc5 # Data source name in odbc.ini
HS_FDS_TRACE_LEVEL=OFF
HS_FDS_SHAREABLE_NAME=/opt/unixodbc-2.2.14/lib/libodbc.so
HS_FDS_SUPPORT_STATISTICS=FALSE
HS_LANGUAGE=AMERICAN_AMERICA.WE8ISO8859P1
# ODBC env variables
set ODBCINI=$ODBC_HOME/etc/odbc.ini
My issue is that I can query data from the MySQL database tables, but the output is incorrect in character-type columns (VARCHAR columns): it just shows the first character in such columns. I tried reading through some OMSC notes but none was useful. If you have experience with such issues, please share your ideas / help me resolve it.
Thanks much in advance,
Hieu
S. Wolicki, Oracle wrote:
I have little experience with MySQL and the ODBC Gateway, but this setting looks weird to me: HS_LANGUAGE=AMERICAN_AMERICA.WE8ISO8859P1. Why do you configure WE8ISO8859P1 when both databases are Unicode UTF-8? Shouldn't the setting be AMERICAN_AMERICA.AL32UTF8 instead?
-- Sergiusz
If I set HS_LANGUAGE=AMERICAN_AMERICA.AL32UTF8, or omit the HS_LANGUAGE setting, the following error happens:
SQL> select count(*) from "nicenum_reserve"@ussd;
select count(*) from "nicenum_reserve"@ussd
ERROR at line 1:
ORA-28500: connection from ORACLE to a non-Oracle system returned this message:
I followed the Metalink note "Error Ora-28500 and Sqlstate I Issuing Selects From a Unicode Oracle RDBMS With Dg4odbc To Mysql or SQL*Server [ID 756186.1]" to resolve the above error. The note advises setting HS_LANGUAGE=AMERICAN_AMERICA.WE8ISO8859P1, and that resolved the above error.
The following are the output from original database (MySql) and Oracle via SQLPLUS and TOAD.
On MySQL database (Sorry because of the output format)
SQL> select ID, source_msisdn, target_msisdn, comment from nicenum_reserve where ID=91;
+----+---------------+---------------+--------------------------------------------+
| ID | source_msisdn | target_msisdn | comment                                    |
+----+---------------+---------------+--------------------------------------------+
| 91 | 841998444408  | 84996444188   | Close reservation becasue of swap activity |
+----+---------------+---------------+--------------------------------------------+
SQLRowCount returns 1
1 rows fetched
Via Sqlplus on Oracle server:
SQL> select "ID","source_msisdn","target_msisdn","comment" from "nicenum_reserve"@ussd where "ID"=91;
ID
source_msisdn
target_msisdn
comment
91
8 4 1 9 9 8 4 4 4 4 0 8
8 4 9 9 6 4 4 4 1 8 8
C l o s e r e s e r v a t i o n b e c a s u e o f s w a p a c t i v i
t y
Via TOAD connected to Oracle server:
ID source_msisdn target_msisdn comment
91 8 8 C
It's likely this issue is related to character set settings, but I don't know how to set them properly.
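One plausible explanation for both symptoms (spaced-out characters in SQL*Plus, only the first character in TOAD) is wide, NUL-padded data being read as a single-byte charset. A Python sketch of that effect, purely illustrative and not the actual DG4ODBC code path:

```python
# UTF-16LE stores each ASCII character as the byte itself followed by a
# NUL byte. Read as a single-byte charset, every other "character" is
# NUL: terminals render it as spaced text, and NUL-terminated C string
# handling stops after the first character.
raw = "841998444408".encode("utf-16-le")
misread = raw.decode("latin-1")

assert misread[0] == "8"
assert misread[1] == "\x00"             # the interleaved NUL bytes
assert len(misread) == 2 * len("841998444408")
```

If that theory holds, the fix is to make the gateway's HS_LANGUAGE/driver charset agree with what the MySQL ODBC driver actually returns, rather than decoding wide data as 8-bit.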
Brgds,
Hieu -
Character Set questions on setup
I am trying to determine the best setup recommendations for creating non-English Oracle 10g databases. I have not had much experience building databases for non-English locales, so this is getting a little overwhelming as I have been researching Oracle's Database Globalization Support Guide. Obviously it has a wealth of information, and I am trying to determine what applies to us at this point in time.
Generally when someone buys our product they create a new Oracle instance for our app. I need to be able to recommend proper database settings/parameters for potential global customers who purchase our software to run on Oracle.
Currently my biggest question is what to recommend for the database character set on db creation. Currently the DB character set we recommend (for standard U.S. installs on Windows) is the default WE8MSWIN1252 character set. Our application is non-Unicode. An outside consultant has recommended that we "must" use UTF-8 for the DB and National Character Set settings, as opposed to WE8MSWIN1252 or WE8ISO8859P1. I should mention that our focus at this point in time is getting a solution for French, German, and Spanish. We are also more concerned about a single-language setup than multilanguage, although that is a definite future consideration.
What impact can using UTF-8 as opposed to WE8MSWIN1252 or WE8ISO8859P1 have on a non-unicode application? I hope I am explaining the situation well enough as I am fairly new and still getting to know our application. I am kind of getting thrown into the i18n fire...
Any input is greatly appreciated. Thanks.
Your questions are certainly valid, but you have not given any details about your application: what it does, what technologies and access drivers are employed, and what client operating systems are supported. This determines how much effort is required to make the application Unicode-enabled and what risks come with each of the possible approaches.
As long as your application can work with single-byte character sets only and as long as it is not expected to contain multibyte data, and as long as it supports Windows only, the Oracle character set corresponding to relevant Windows ANSI code page is the correct choice. For English, French, German, Spanish, and other Western European languages, WE8MSWIN1252 is right one.
Processing of WE8MSWIN1252 is easier and somewhat faster than processing AL32UTF8 (i.e. UTF-8) data. One character corresponds to one byte, which simplifies some aspects of text processing.
On the other hand, the world becomes smaller and smaller in the Internet era. Companies that never did any business abroad start talking to customers around the world because somebody found their website. Western European companies take advantage of the European Union enlargement and start doing business in new countries. Therefore, it is dangerous to assume that a company currently interested in a monolingual, single-byte solution will not want to migrate to a multilingual, multibyte solution in a few years.
If you follow a few rules in database design and programming, you can run your single-byte application against an AL32UTF8 database, even if you do not get a multilingual system in this way. Such a configuration has the huge advantage of avoiding the complex and resource-consuming task of migrating the database character set to Unicode in the future, when your customer asks for multilingual support. Upgrading the binaries of your application to a Unicode-enabled version is usually fast; migrating the database character set is not.
The main rules you should follow are:
1) Use character length semantics to define column and PL/SQL variable lengths, i.e. say VARCHAR2(10 CHAR) instead of VARCHAR2(10 [BYTE]). If you do not want to modify all creation scripts to include the CHAR keyword, issue ALTER SESSION SET NLS_LENGTH_SEMANTICS=CHAR at the beginning of each script. I recommend modifying the scripts.
2) Do not use VARCHAR2 columns longer than 1000 characters, CHAR columns longer than 500 characters, and PL/SQL VARCHAR2/CHAR variables longer than 8190 characters. This guarantees that in the future no AL32UTF8 string will exceed the hard limit of 4000/2000/32760 bytes. Use CLOB for longer text.
3) Use SUBSTR/LENGTH/INSTR in place of SUBSTRB/LENGTHB/INSTRB. Use SUBSTRB/LENGTHB/INSTRB only when dealing with legacy stuff or Data Dictionary that still use byte length semantics.
4) Define the client setting - mainly NLS_LANG - to correctly correspond to the character set processed by your application.
5) Modify interfaces to other databases, if any, to cope with the character length semantics. You do not have to do much if the other databases follow the same rules.
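Rule 1 is easy to demonstrate outside the database. A short Python sketch of byte versus character length semantics:

```python
# In AL32UTF8 a string's byte length can exceed its character length,
# which is why VARCHAR2(10 BYTE) columns overflow where
# VARCHAR2(10 CHAR) would not.
name = "Müller"                           # 6 characters
assert len(name) == 6                     # character semantics
assert len(name.encode("utf-8")) == 7     # byte semantics: 'ü' takes 2 bytes
assert len(name.encode("latin-1")) == 6   # single-byte charset: both coincide
```

The same gap is why rule 3 prefers SUBSTR/LENGTH (character-based) over SUBSTRB/LENGTHB (byte-based) in a multibyte database.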
The cost of running the database in Unicode is not high for most languages, though languages that do not use Latin script, such as Russian, Greek, or Japanese, need significantly more storage for textual data (but only for textual data in those languages, which is only a fraction of all data in the database). Processing is slower by a few percent compared to single-byte character sets (unless a lot of textual processing is performed in the database, in which case the percentage may be higher; a benchmark is recommended). These costs can usually be compensated for by adding some more computing power (GHz and disks). Unless your application needs a VLDB (very large database) and almost saturates the system, you should not notice a big difference.
-- Sergiusz -
Utl_file and character sets
hello,
we are using the AL32UTF8 character set in our database. I have a PL/SQL routine reading a CSV file into the database with UTL_FILE; the client uses UTF-8 (Toad). When I have a UTF-8 CSV file, everything works fine. But I want to be able to read ANSI files as well. Is there a way to read ANSI files with UTL_FILE too?
ps: I used CONVERT(v_line, 'UTF8', 'WE8MSWIN1252') and it worked. Is there a way I can read out the character set of the CSV file with PL/SQL, so I don't need to use a parameter?
Ilja
Edited by: Ikrischer on Sep 25, 2009 2:32 PM
Too bad your Oracle installation doesn't have a version number or use any DDL, so we would know what you are doing.
Are you using GET_LINE or GET_LINE_NCHAR or something else? -
Flat File Load Issue - Cannot convert character sets for one or more charac
We have recently upgraded our production BW system to SPS 17 (BW SP19) and we have issues loading flat files (containing Chinese, Japanese and Korean characters) which worked fine before the upgrade. The Asian languages appear as invalid (garbled) characters, as we can see in the PSA. The Character Set Settings was previously set to Default Setting and it worked fine until the upgrade. We referred to note 1130965, and with code page 1100 the load went through (without the "Cannot convert character sets for one or more characters" error), however the Asian language characters still appear as invalid characters. We tried all the code pages suggested in the note, e.g. 4102, 4103 etc., on the info packages, but it did not work. Note that the flat files are encoded in UTF-8. I checked the lower case option for all IOs.
When I checked the PSA failed log, the number of records processed is "0 of 0". My question is: without processing a single record, why is the system throwing this error message?
When I load the same file from a local workstation, there is no error message.
I am thinking that when I FTP the file from AS/400 to the BI Application Server (Linux), some invalid characters are added, but how can we track those invalid characters?
Gurus, please share your thoughts on this; I will assign full points.
Thanks, -
Agent control character set problem
Hi,
here's my problem :
i've got the grid that's running on a RHES4 with an agent. On another RHES4, i've got 10g databases that run and another agent.
The repository database is configured like this :
nsl_language = AMERICAN
nls_territory = AMERICA
character set = AL32UTF8
all the uploads from the agent on the RHES4 where the grid is installed are ok.
On the other server, as soon as there is a UTF8 character in an xml file (like " é " or " ' "), the upload fails and the agent stops.
In the logs, it is clear it comes from this. I deleted all the occurrences of UTF8 characters in the xml file, restarted the upload, and it was ok...
I've tested different configurations but without success.
Any clue ?
Alivetu
Thanks for the reply,
NLS_LANG is set on the 2 machines to FRENCH_FRANCE.WE8ISO8859P15
I added the line '<?xml version="1.0" encoding="ISO-8859-1"?>' at the beginning of the xml file, which didn't work, and did an 'emctl upload'... it passed, and the agent stopped when it reached another UTF8 xml file...
So, it really is a character set problem but where to set it ???
Alivetu -
Database character set = UTF-8, but mismatch error on XML file upload
Dear experts,
I am having problems trying to upload an XML file into an XMLType table. The Database is 9.2.0.5.0, with the character set details:
SELECT *
FROM SYS.PROPS$
WHERE name like '%CHA%';
Query results:
NLS_NCHAR_CHARACTERSET UTF8 NCHAR Character set
NLS_SAVED_NCHAR_CS UTF8
NLS_NUMERIC_CHARACTERS ., Numeric characters
NLS_CHARACTERSET UTF8 Character set
NLS_NCHAR_CONV_EXCP FALSE NLS conversion exception
To upload the XML file into the XMLType table, I am using the command:
insert into XMLTABLE
values(xmltype(getClobDocument('ServiceRequest.xml','UTF8')));
However, I get the error:
ORA-31011: XML parsing failed
ORA-19202: Error occurred in XML processing
LPX-00200: could not convert from encoding UTF-8 to UCS2
Error at line 1
ORA-06512: at "SYS.XMLTYPE", line 0
ORA-06512: at line 1
Why does it mention UCS2, as can't see that on the Database character set?
Many thanks for your help,
Mark
UCS2 is known as AL16UTF16 (LE/BE) by Oracle...
Try using AL32UTF8 as the character set name
AFAIK the main difference between Oracle's UTF8 and AL32UTF8 character sets is that the UTF8 character set does not support those UTF-8 characters that require 4 bytes.
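Mark's point about 4-byte characters can be checked quickly. A Python sketch (the supplementary-plane character is just a convenient example; Python has no stock CESU-8 codec, so the surrogate arithmetic is done by hand):

```python
# Characters outside the Basic Multilingual Plane need 4 bytes in real
# UTF-8 (Oracle AL32UTF8). Oracle's older "UTF8" charset is CESU-8-like:
# it stores the two UTF-16 surrogate halves as 3 bytes each (6 total).
ch = "\U0001F600"                        # a supplementary-plane character

assert len(ch.encode("utf-8")) == 4      # AL32UTF8-style: one 4-byte sequence

surrogate_units = len(ch.encode("utf-16-le")) // 2
assert surrogate_units == 2              # two 16-bit surrogate code units
cesu8_len = surrogate_units * 3          # each half encoded as 3 bytes
assert cesu8_len == 6                    # CESU-8-style storage size
```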
-Mark -
NLS settings for a database link between DBs with different character sets
I am using a database link to move data from one database to another and I am seeing some strange data problems. The databases have different character sets and different NLS settings. I wonder if this could be causing my problem.
Here are the NLS parameters for the database where the database link exists. (the SOURCE database)
1 NLS_CALENDAR GREGORIAN
2 NLS_CHARACTERSET WE8MSWIN1252
3 NLS_COMP BINARY
4 NLS_CURRENCY $
5 NLS_DATE_FORMAT DD-MON-RR
6 NLS_DATE_LANGUAGE AMERICAN
7 NLS_DUAL_CURRENCY $
8 NLS_ISO_CURRENCY AMERICA
9 NLS_LANGUAGE AMERICAN
10 NLS_LENGTH_SEMANTICS BYTE
11 NLS_NCHAR_CHARACTERSET AL16UTF16
12 NLS_NCHAR_CONV_EXCP FALSE
13 NLS_NUMERIC_CHARACTERS .,
14 NLS_SORT BINARY
15 NLS_TERRITORY AMERICA
16 NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
17 NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
18 NLS_TIME_FORMAT HH.MI.SSXFF AM
19 NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
Here are the NLS parameters for the database that the database link connects to. (the TARGET database)
1 NLS_CALENDAR GREGORIAN
2 NLS_CHARACTERSET AL32UTF8
3 NLS_COMP BINARY
4 NLS_CURRENCY $
5 NLS_DATE_FORMAT DD-MON-RR
6 NLS_DATE_LANGUAGE AMERICAN
7 NLS_DUAL_CURRENCY $
8 NLS_ISO_CURRENCY AMERICA
9 NLS_LANGUAGE AMERICAN
10 NLS_LENGTH_SEMANTICS BYTE
11 NLS_NCHAR_CHARACTERSET AL16UTF16
12 NLS_NCHAR_CONV_EXCP FALSE
13 NLS_NUMERIC_CHARACTERS .,
14 NLS_SORT BINARY
15 NLS_TERRITORY AMERICA
16 NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
17 NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
18 NLS_TIME_FORMAT HH.MI.SSXFF AM
19 NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
The SOURCE database version is 10g Release 10.2.0.3.0 - Production
The TARGET database version is 11g Release 11.1.0.6.0 - 64bit Production
Do I need to modify the NLS settings in the SOURCE database before executing a script to insert data into the TARGET database?
Thanks, Jack
The difference in settings is not a problem by itself, especially since only the NLS_CHARACTERSET matters. Of course, this difference may lead to certain issues if not taken into consideration.
Please, describe symptoms of your problems.
-- Sergiusz -
Which Character Set is used in iWeb?
My web server uses utf8_unicode_ci (default). Symbols like ä, ö, ü or € cannot be displayed on the web pages, written in iWeb 09 v 3.0.1 (9833). These symbols are an essential part of speech and text in Europe; especially in German, these characters are necessary...
Does anybody know which character set is used in iWeb?
Can it be changed in iWeb? Otherwise I can change the character set of the server, but I have to know what to change it to!
Thanks for your help!Does anybody know which character set is used in iWeb?
Can it be changed in iWeb.
All iWeb pages are in UTF-8 and this cannot be changed. If your server is really using utf-8, you should not have a problem. It is quite common for servers to force ISO-8859-1 and thus botch iWeb pages. Some fixes for that can be found in the Server Settings section of
http://homepage.mac.com/thgewecke/iwebchars.html
Also some servers cannot handle special characters in page names, even though having them in the text is OK. That requires different fixes.
Another possible problem is when you use an old ftp app for uploading that doesn't do utf-8 right.
If you will provide the url of your page so I can see what is happening, I can perhaps provide better advice. -
Character set migration error to UTF8 urgent
Hi
when we migrated from AR8ISO8859P6 to the UTF8 character set, we are facing one error: when I try to compile one package through Forms, I get the error "program unit PU not found".
When I run the source code of that procedure directly from the database using SQL*Plus, it runs without any problem. How can I migrate these forms from AR8ISO8859P6 to the UTF8 character set? We migrated from a database with AR8ISO8859P6 on Oracle 8.1.7 to an Oracle 9.2 database with character set UTF8 (Windows 2000); export and import were done without any error.
I am using Oracle 11i, which internally calls Forms 6i and Reports 6i.
with regards
ramya
1) This is a server side program; when connecting with Forms I get the error. When I run this program using direct SQL it works; when I compile it I get this error.
3) Yes, I am using 11i (11.5.10), which calls Forms 6i and Reports. Why is this giving a problem through Forms? Is there any setting to change in the Forms NLS_LANG?
with regards
Hi Ramya
what i understand from your question is that you are trying to compile a procedure from a forms interface at client side?
if yes you should check the code in the forms that is calling the compilation package.
does it contain strings that might be affected by the character set change???
Tony G. -
ORA-12709: error while loading create database character set
I installed Oracle 8.0.5 on Linux successfully: I was able to log in
with SQL*Plus, start and stop the db with svrmgrl, etc.
During this install I chose WE8ISO8859P9 as the database
characterset when prompted.
After that I installed Oracle Application Server 3.02, and now
I'm getting the
ORA-12709: error while loading create database character set
message when I try to start up the database, and the database
won't mount.
Platform is RedHat Linux 5.2.
NLS_LANG set to different settings,
e.g. AMERICAN_AMERICA.WE8ISO8859P9
but without success.
Anyone any clue?
Thanks!
Jogchum Reitsma (guest) wrote:
: I installed Oracle 8.05 on Linux successfully: was able to
login
: whith SQLPlus, start and stop the db whith svrmgrl etc.
: During this install I chose WE8ISO8859P9 as the database
: characterset when prompted.
: After that I installed Oracle Application Server 3.02, and now
: I'm getting the
: ORA-12709: error while loading create database character set
: message when I try to start up the database, and the database
: won't mount.
: Platform is RedHat Linux 5.2.
: NLS_LANG set to different settings,
: e.g. AMERICAN_AMERICA.WE8ISO8859P9
: but without success.
: Anyone any clue?
: Thanks!
You can create the database with the WE8DEC character set
and use WE8ISO8859P9 on the client, or even on Linux.
The NLS_LANG setting doesn't affect the database, but the
interface with the database. The same setting can be used in the
Windows 95/98/NT registry.
-
ORA-12709: error while loading create database character set after upgrade
Dear All
I am getting "ORA-12709: error while loading create database character set" after upgrading the database from 10.2.0.3 to 11.2.0.3 in an E-Business Suite environment.
current application version 12.0.6
please help me to resolve it.
SQL> startup;
ORACLE instance started.
Total System Global Area 1.2831E+10 bytes
Fixed Size 2171296 bytes
Variable Size 2650807904 bytes
Database Buffers 1.0133E+10 bytes
Redo Buffers 44785664 bytes
ORA-12709: error while loading create database character set
-bash-3.00$ echo $ORA_NLS10
/u01/oracle/PROD/db/teche_st/11.2.0/nls/data/9idata
export ORACLE_BASE=/u01/oracle
export ORACLE_HOME=/u01/oracle/PROD/db/tech_st/11.2.0
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/perl/bin:$PATH
export PERL5LIB=$ORACLE_HOME/perl/lib/5.10.0:$ORACLE_HOME/perl/site_perl/5.10.0
export ORA_NLS10=/u01/oracle/PROD/db/teche_st/11.2.0/nls/data/9idata
export ORACLE_SID=PROD
-bash-3.00$ pwd
/u01/oracle/PROD/db/tech_st/11.2.0/nls/data/9idata
-bash-3.00$ ls -lh |more
total 56912
-rw-r--r-- 1 oracle oinstall 951 Jan 15 16:05 lx00001.nlb
-rw-r--r-- 1 oracle oinstall 957 Jan 15 16:05 lx00002.nlb
-rw-r--r-- 1 oracle oinstall 959 Jan 15 16:05 lx00003.nlb
-rw-r--r-- 1 oracle oinstall 984 Jan 15 16:05 lx00004.nlb
-rw-r--r-- 1 oracle oinstall 968 Jan 15 16:05 lx00005.nlb
-rw-r--r-- 1 oracle oinstall 962 Jan 15 16:05 lx00006.nlb
-rw-r--r-- 1 oracle oinstall 960 Jan 15 16:05 lx00007.nlb
-rw-r--r-- 1 oracle oinstall 950 Jan 15 16:05 lx00008.nlb
-rw-r--r-- 1 oracle oinstall 940 Jan 15 16:05 lx00009.nlb
-rw-r--r-- 1 oracle oinstall 939 Jan 15 16:05 lx0000a.nlb
-rw-r--r-- 1 oracle oinstall 1006 Jan 15 16:05 lx0000b.nlb
-rw-r--r-- 1 oracle oinstall 1008 Jan 15 16:05 lx0000c.nlb
-rw-r--r-- 1 oracle oinstall 998 Jan 15 16:05 lx0000d.nlb
-rw-r--r-- 1 oracle oinstall 1005 Jan 15 16:05 lx0000e.nlb
-rw-r--r-- 1 oracle oinstall 926 Jan 15 16:05 lx0000f.nlb
-rw-r--r-- 1 oracle oinstall 1.0K Jan 15 16:05 lx00010.nlb
-rw-r--r-- 1 oracle oinstall 958 Jan 15 16:05 lx00011.nlb
-rw-r--r-- 1 oracle oinstall 956 Jan 15 16:05 lx00012.nlb
-rw-r--r-- 1 oracle oinstall 1005 Jan 15 16:05 lx00013.nlb
-rw-r--r-- 1 oracle oinstall 970 Jan 15 16:05 lx00014.nlb
-rw-r--r-- 1 oracle oinstall 950 Jan 15 16:05 lx00015.nlb
-rw-r--r-- 1 oracle oinstall 1.0K Jan 15 16:05 lx00016.nlb
-rw-r--r-- 1 oracle oinstall 957 Jan 15 16:05 lx00017.nlb
-rw-r--r-- 1 oracle oinstall 932 Jan 15 16:05 lx00018.nlb
-rw-r--r-- 1 oracle oinstall 932 Jan 15 16:05 lx00019.nlb
-rw-r--r-- 1 oracle oinstall 951 Jan 15 16:05 lx0001a.nlb
-rw-r--r-- 1 oracle oinstall 944 Jan 15 16:05 lx0001b.nlb
-rw-r--r-- 1 oracle oinstall 953 Jan 15 16:05 lx0001c.nlb
Starting up:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options.
ORACLE_HOME = /u01/oracle/PROD/db/tech_st/11.2.0
System name: SunOS
Node name: proddb3.zakathouse.org
Release: 5.10
Version: Generic_147440-19
Machine: sun4u
Using parameter settings in server-side spfile /u01/oracle/PROD/db/tech_st/11.2.0/dbs/spfilePROD.ora
System parameters with non-default values:
processes = 200
sessions = 400
timed_statistics = TRUE
event = ""
shared_pool_size = 416M
shared_pool_reserved_size= 40M
nls_language = "american"
nls_territory = "america"
nls_sort = "binary"
nls_date_format = "DD-MON-RR"
nls_numeric_characters = ".,"
nls_comp = "binary"
nls_length_semantics = "BYTE"
memory_target = 11G
memory_max_target = 12G
control_files = "/u01/oracle/PROD/db/apps_st/data/cntrl01.dbf"
control_files = "/u01/oracle/PROD/db/tech_st/10.2.0/dbs/cntrl02.dbf"
control_files = "/u01/oracle/PROD/db/apps_st/data/cntrl03.dbf"
db_block_checksum = "TRUE"
db_block_size = 8192
compatible = "11.2.0.0.0"
log_archive_dest_1 = "LOCATION=/u01/oracle/PROD/db/apps_st/data/archive"
log_archive_format = "%t_%s_%r.dbf"
log_buffer = 14278656
log_checkpoint_interval = 100000
log_checkpoint_timeout = 1200
db_files = 512
db_file_multiblock_read_count= 8
db_recovery_file_dest = "/u01/oracle/fast_recovery_area"
db_recovery_file_dest_size= 14726M
log_checkpoints_to_alert = TRUE
dml_locks = 10000
undo_management = "AUTO"
undo_tablespace = "APPS_UNDOTS1"
db_block_checking = "FALSE"
session_cached_cursors = 500
utl_file_dir = "/usr/tmp"
utl_file_dir = "/usr/tmp"
utl_file_dir = "/u01/oracle/PROD/db/tech_st/10.2.0/appsutil/outbound"
utl_file_dir = "/u01/oracle/PROD/db/tech_st/10.2.0/appsutil/outbound/PROD_proddb3"
utl_file_dir = "/usr/tmp"
plsql_code_type = "INTERPRETED"
plsql_optimize_level = 2
job_queue_processes = 2
cursor_sharing = "EXACT"
parallel_min_servers = 0
parallel_max_servers = 8
core_dump_dest = "/u01/oracle/PROD/db/tech_st/10.2.0/admin/PROD_proddb3/cdump"
audit_file_dest = "/u01/oracle/admin/PROD/adump"
db_name = "PROD"
open_cursors = 600
pga_aggregate_target = 1G
workarea_size_policy = "AUTO"
optimizer_secure_view_merging= FALSE
aq_tm_processes = 1
olap_page_pool_size = 4M
diagnostic_dest = "/u01/oracle"
max_dump_file_size = "20480"
Tue Jan 15 16:16:02 2013
PMON started with pid=2, OS id=18608
Tue Jan 15 16:16:02 2013
PSP0 started with pid=3, OS id=18610
Tue Jan 15 16:16:03 2013
VKTM started with pid=4, OS id=18612 at elevated priority
VKTM running at (10)millisec precision with DBRM quantum (100)ms
Tue Jan 15 16:16:03 2013
GEN0 started with pid=5, OS id=18616
Tue Jan 15 16:16:03 2013
DIAG started with pid=6, OS id=18618
Tue Jan 15 16:16:03 2013
DBRM started with pid=7, OS id=18620
Tue Jan 15 16:16:03 2013
DIA0 started with pid=8, OS id=18622
Tue Jan 15 16:16:03 2013
MMAN started with pid=9, OS id=18624
Tue Jan 15 16:16:03 2013
DBW0 started with pid=10, OS id=18626
Tue Jan 15 16:16:03 2013
LGWR started with pid=11, OS id=18628
Tue Jan 15 16:16:03 2013
CKPT started with pid=12, OS id=18630
Tue Jan 15 16:16:03 2013
SMON started with pid=13, OS id=18632
Tue Jan 15 16:16:04 2013
RECO started with pid=14, OS id=18634
Tue Jan 15 16:16:04 2013
MMON started with pid=15, OS id=18636
Tue Jan 15 16:16:04 2013
MMNL started with pid=16, OS id=18638
DISM started, OS id=18640
ORACLE_BASE from environment = /u01/oracle
Tue Jan 15 16:16:08 2013
ALTER DATABASE MOUNT
ORA-12709 signalled during: ALTER DATABASE MOUNT...
Do you have any trace files generated at the time you get this error?
Please see these docs.
ORA-12709: WHILE STARTING THE DATABASE [ID 1076156.6]
Upgrading from 9i to 10gR2 Fails With ORA-12709 : Error While Loading Create Database Character Set [ID 732861.1]
Ora-12709 While Trying To Start The Database [ID 311035.1]
ORA-12709 when Mounting the Database [ID 160478.1]
How to Move From One Database Character Set to Another at the Database Level [ID 1059300.6]
Thanks,
Hussein