Listener setup when using multiple databases on same cluster
Hi,
In our 2-node RAC cluster, we have 5 databases running. Since we are using the SCAN listener on port 1521, does that mean the port for all five databases is 1521?
If I decide to use a unique port (other than the SCAN port), what are the options?
ENV: 11gR2, 2-node RH 5.x
Thanks!
The answer to your question depends on how the cluster was configured.
My recommendation: check the actual listener configuration and run some tests to confirm how it was done.
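For background (standard 11gR2 behavior as I understand it, not stated in the thread): the SCAN listener only hands connections off; each database registers with it through its REMOTE_LISTENER parameter, while LOCAL_LISTENER determines which node listener, and therefore which port, a given database actually uses. A hedged sketch, with hypothetical names and paths, of giving one database a non-default port:

```
# listener.ora on node1 -- a hypothetical extra listener on port 1525
LISTENER_1525 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1525)))

# Instance parameters (names are illustrative):
#   ALTER SYSTEM SET local_listener =
#     '(ADDRESS=(PROTOCOL=TCP)(HOST=node1-vip)(PORT=1525))' SID='MYDB1';
#   ALTER SYSTEM SET remote_listener = 'myscan.example.com:1521';
```

With that arrangement, clients still connect through the SCAN on 1521, but this one database accepts its connections on 1525.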
Similar Messages
-
Cannot use multiple database in my application
I have written a C++/CLI wrapper to use Berkeley DB without losing performance. The wrapper runs perfectly with one database, but when I open a second database (by creating a new instance of the wrapper) the application crashes.
I have also written a minimal pure C++ application that uses only the pure C++ class of my wrapper; with a single database the code runs perfectly, but with two databases the application goes haywire.
Info: compiler VC++ 2008, OS: Vista 32-bit.
This is the code of my Berkeley DB class:
#pragma comment (lib, "libdb47.lib")
#if defined(WIN32) || defined(WIN64)
#include <windows.h>
#include <list>
#endif
#ifndef ParamsStructCpp
#include "ParamsStruct.h"
#endif
#include <db_cxx.h>
#include "BerkeleyMethods.h"
using namespace std;
using namespace stdext;
// type
//typedef list<ParamsStructCpp>::iterator it;
typedef list<ParamsStructCpp> fetchbuffer;
// Db objects
Db * db; // Database object
DbEnv env(0); // Environment for transaction
u_int32_t oFlags = DB_CREATE|DB_AUTO_COMMIT|DB_READ_UNCOMMITTED; // Open flags
u_int32_t env_oFlags = DB_CREATE |
DB_THREAD |
DB_INIT_LOCK |
DB_INIT_LOG |
DB_INIT_MPOOL |
DB_INIT_TXN |
DB_MULTIVERSION; // Flags for the environment
// Constructors
BerkeleyMethods::BerkeleyMethods()
BerkeleyMethods::BerkeleyMethods(char * dbname, unsigned int db_cache_gbyte, unsigned int db_cache_size,
int db_cache_number, int db_type, char * dberr_file, char * envdir, unsigned int dbtxn_timeout,
unsigned int dbtxn_max)
strcpy_s(this->db_name, strlen(dbname)+1, dbname);
this->db_cache_gbyte = db_cache_gbyte;
this->db_cache_size = db_cache_size;
this->db_cache_number = db_cache_number;
this->db_type = db_type;
this->db_txn_timeout = dbtxn_timeout;
this->db_txn_max = dbtxn_max;
strcpy_s(this->db_err_file, strlen(dberr_file)+1, dberr_file);
strcpy_s(this->env_dir, strlen(envdir)+1, envdir);
this->Set_restoremode(false);
// ==========
// Functions
// http://www.codeproject.com/KB/string/UtfConverter.aspx
bool BerkeleyMethods::OpenDatabase()
try
std::cout << "Dbname " << this->db_name << std::endl;
if (strlen(this->db_name) < 2) {
throw std::exception("Database name is unset"); // throw by value so catch(std::exception&) below sees it
// Set database cache
env.set_cachesize(this->db_cache_gbyte, this->db_cache_size, this->db_cache_number);
// Set transaction timeout
if (this->db_txn_timeout > 0) {
env.set_timeout(this->db_txn_timeout, DB_SET_TXN_TIMEOUT);
// Set max opened transactions
if (this->db_txn_max > 0) {
env.set_tx_max(this->db_txn_max);
// Duplicate key support
// (note: DB_DUPSORT is a per-database flag; strictly it belongs in
// db->set_flags(DB_DUPSORT) before db->open(), not in the environment flags)
if (this->Get_dup_support()) {
env_oFlags = env_oFlags|DB_DUPSORT;
// Deadlock detection
env.set_lk_detect(DB_LOCK_MINWRITE);
// Set the error file
env.set_errfile(fopen(this->db_err_file, "w+"));
// Error prefix
env.set_errpfx("Error > ");
// Open environment
env.open(this->env_dir, env_oFlags, 0);
// Create database object
db = new Db(&env, 0);
// Open the database
switch(this->db_type)
case 1:
db->open(NULL, this->db_name, NULL, DB_BTREE, oFlags, 0);
break;
case 2:
db->open(NULL, this->db_name, NULL, DB_HASH, oFlags, 0);
break;
case 3:
db->open(NULL, this->db_name, NULL, DB_QUEUE, oFlags, 0);
break;
case 4:
db->open(NULL, this->db_name, NULL, DB_RECNO, oFlags, 0);
break;
default:
throw std::exception("Unknown database type");
break;
u_int32_t gbcacheSize = 0;
u_int32_t bytecacheSize=0;
int ncache=0;
env.get_cachesize(&gbcacheSize,&bytecacheSize,&ncache);
std::cerr << "Cache size: " << gbcacheSize << " GB plus " << bytecacheSize << " bytes." << std::endl;
std::cerr << "Number of caches : " << ncache << std::endl;
return true;
catch(DbException &e)
std::cout << e.what() << std::endl;
catch(std::exception &e)
std::cout << e.what() << std::endl;
return false;
bool BerkeleyMethods::CloseDatabase()
try
db->close(0);
env.close(0);
return true;
catch(DbException &e)
std::cout << e.what() << std::endl;
catch(std::exception &e)
std::cout << e.what() << std::endl;
return false;
bool BerkeleyMethods::AddData(char * key, unsigned long int value)
if (this->Get_restoremode())
return false;
DbTxn * txn;
try
env.txn_begin(NULL, &txn, 0); // Begin transaction
// Set data
Dbt _key(key, strlen(key)+1);
Dbt _value(&value, sizeof(unsigned long int));
env.txn_checkpoint(512, 2, 0);
int exist = db->put(txn, &_key, &_value, DB_NOOVERWRITE);
if (exist == DB_KEYEXIST) {
std::cout << "This record already exists" << std::endl;
txn->commit(0);
return true;
catch(DbException &e)
std::cout << e.what() << std::endl;
txn->abort();
catch(...)
std::cout << "Error" << std::endl;
txn->abort();
return false;
bool BerkeleyMethods::AddData(unsigned long int key, char * value)
if (this->Get_restoremode())
return false;
DbTxn * txn;
try
env.txn_begin(NULL, &txn, 0); // Begin transaction
Dbt _key(&key, sizeof(unsigned long int));
Dbt _value(value, strlen(value)+1);
env.txn_checkpoint(512, 2, 0);
int exist = db->put(txn, &_key, &_value, DB_NOOVERWRITE);
if (exist == DB_KEYEXIST) {
std::cout << "This record already exists" << std::endl;
txn->commit(0);
return true;
catch(DbException &e)
std::cout << e.what() << std::endl;
txn->abort();
catch(...)
txn->abort();
return false;
bool BerkeleyMethods::AddData(char * key, char * value)
if (this->Get_restoremode())
return false;
DbTxn * txn;
try
env.txn_begin(NULL, &txn, 0); // Begin transaction
Dbt _key(key, strlen(key)+1);
Dbt _value(value, strlen(value)+1);
env.txn_checkpoint(512, 2, 0);
int exist = db->put(txn, &_key, &_value, DB_NOOVERWRITE);
if (exist == DB_KEYEXIST) {
std::cout << "This record already exists" << std::endl;
txn->commit(0);
return true;
catch(DbException &e)
std::cout << e.what() << std::endl;
txn->abort();
catch(...)
txn->abort();
return false;
bool BerkeleyMethods::AddData(unsigned long int key, unsigned long int value)
if (this->Get_restoremode())
return false;
DbTxn * txn;
try
env.txn_begin(NULL, &txn, 0); // Begin transaction
Dbt _key(&key, sizeof(unsigned long int));
Dbt _value(&value, sizeof(unsigned long int));
env.txn_checkpoint(512, 2, 0);
int exist = db->put(txn, &_key, &_value, DB_NOOVERWRITE);
if (exist == DB_KEYEXIST) {
std::cout << "This record already exists" << std::endl;
txn->commit(0);
return true;
catch(DbException &e)
std::cout << e.what() << std::endl;
txn->abort();
catch(...)
txn->abort();
return false;
bool BerkeleyMethods::AddData(char * key, ParamsStructCpp value)
if (this->Get_restoremode())
return false;
DbTxn * txn;
try
env.txn_begin(NULL, &txn, 0); // Begin transaction
Dbt _key(key, strlen(key)+1);
Dbt _value(&value, sizeof(ParamsStructCpp));
env.txn_checkpoint(512, 2, 0);
int exist = db->put(txn, &_key, &_value, DB_NOOVERWRITE);
if (exist == DB_KEYEXIST) {
std::cout << "This record already exists" << std::endl;
txn->commit(0);
return true;
catch(DbException &e)
std::cout << e.what() << std::endl;
txn->abort();
catch(...)
txn->abort();
return false;
bool BerkeleyMethods::AddData(unsigned long int key, struct ParamsStructCpp value)
if (this->Get_restoremode())
return false;
DbTxn * txn;
try
env.txn_begin(NULL, &txn, 0); // Begin transaction
Dbt _key(&key, sizeof(unsigned long int));
Dbt _value(&value, sizeof(ParamsStructCpp));
env.txn_checkpoint(512, 2, 0);
int exist = db->put(txn, &_key, &_value, DB_NOOVERWRITE);
if (exist == DB_KEYEXIST) {
std::cout << "This record already exists" << std::endl;
txn->commit(0);
return true;
catch(DbException &e)
std::cout << e.what() << std::endl;
txn->abort();
catch(...)
txn->abort();
return false;
bool BerkeleyMethods::Exist(unsigned long int key)
if (this->Get_restoremode())
return true;
DbTxn * txn;
try
env.txn_begin(NULL, &txn, DB_TXN_SNAPSHOT); // Begin transaction
Dbt _key(&key, sizeof(unsigned long int));
int state = db->exists(txn, &_key, DB_READ_COMMITTED);
txn->commit(0);
if (state == 0) {
return true;
catch(DbException &e)
std::cout << e.what() << std::endl;
txn->abort();
catch(...)
txn->abort();
return false;
bool BerkeleyMethods::Exist(char * key)
if (this->Get_restoremode())
return true;
DbTxn * txn;
try
env.txn_begin(NULL, &txn, DB_TXN_SNAPSHOT); // Begin transaction
Dbt _key(key, strlen(key)+1);
int state = db->exists(txn, &_key,DB_READ_COMMITTED);
txn->commit(0);
if (state == 0) {
return true;
catch(DbException &e)
std::cout << e.what() << std::endl;
txn->abort();
catch(...)
txn->abort();
return false;
void BerkeleyMethods::GetData (char * pData, int nbr, unsigned long int key)
if (this->Get_restoremode())
return;
DbTxn * txn;
Dbc *dbcp;
try
env.txn_begin(NULL, &txn, DB_TXN_SNAPSHOT); // Begin transaction
db->cursor(txn, &dbcp, 0);
Dbt _key;
Dbt data;
_key.set_data(&key); // the original called setdata/setsize on the plain key, not the Dbt
_key.set_size(sizeof(unsigned long int));
dbcp->get(&_key, &data, DB_SET); // DB_SET positions at the requested key; DB_FIRST ignored it
char * temp = (char *)data.get_data();
strcpy_s(pData, strlen(temp)+1, temp);
dbcp->close();
txn->commit(0);
catch(DbException &e)
std::cout << e.what() << std::endl;
if (dbcp != NULL)
dbcp->close();
if (txn != NULL)
txn->abort();
catch(...)
if (dbcp != NULL)
dbcp->close();
if (txn != NULL)
txn->abort();
unsigned long int BerkeleyMethods::GetData(char * key)
if (this->Get_restoremode())
return 0;
DbTxn * txn;
Dbc *dbcp;
try
env.txn_begin(NULL, &txn, DB_TXN_SNAPSHOT); // Begin transaction
db->cursor(txn, &dbcp, 0);
Dbt _key;
Dbt data;
_key.set_data(key);
_key.set_size(strlen(key)+1);
dbcp->get(&_key, &data, DB_SET); // DB_SET positions at the requested key; DB_FIRST ignored it
unsigned long int xdata = *((unsigned long int *)data.get_data());
dbcp->close();
txn->commit(0);
return xdata;
catch(DbException &e)
std::cout << e.what() << std::endl;
dbcp->close();
txn->abort();
catch(...)
dbcp->close();
txn->abort();
return 0;
ParamsStructCpp * BerkeleyMethods::GetData(unsigned long int key, bool null)
if (this->Get_restoremode()) {
return new ParamsStructCpp();
DbTxn * txn;
Dbc *dbcp;
try
env.txn_begin(NULL, &txn, DB_TXN_SNAPSHOT); // Begin transaction
db->cursor(txn, &dbcp, 0);
Dbt _key;
Dbt data;
_key.set_data(&key);
_key.set_size(sizeof(unsigned long int));
dbcp->get(&_key, &data, DB_SET); // DB_SET positions at the requested key; DB_FIRST ignored it
// copy the record out of BDB-owned memory before the cursor is closed
ParamsStructCpp * temp = new ParamsStructCpp(*(ParamsStructCpp *)data.get_data());
dbcp->close();
txn->commit(0);
return temp;
catch(DbException &e)
std::cout << e.what() << std::endl;
dbcp->close();
txn->abort();
catch(...)
dbcp->close();
txn->abort();
return new ParamsStructCpp();
ParamsStructCpp * BerkeleyMethods::GetData(char * key, bool null)
if (this->Get_restoremode()) {
return new ParamsStructCpp();
DbTxn * txn;
Dbc *dbcp;
try
env.txn_begin(NULL, &txn, DB_TXN_SNAPSHOT); // Begin transaction
db->cursor(txn, &dbcp, 0);
Dbt _key;
Dbt data;
_key.set_data(key);
_key.set_size(strlen(key)+1);
dbcp->get(&_key, &data, DB_SET); // DB_SET positions at the requested key; DB_FIRST ignored it
// copy the record out of BDB-owned memory before the cursor is closed
ParamsStructCpp * xdata = new ParamsStructCpp(*(ParamsStructCpp *)data.get_data());
dbcp->close();
txn->commit(0);
return xdata;
catch(DbException &e)
std::cout << e.what() << std::endl;
dbcp->close();
txn->abort();
catch(...)
dbcp->close();
txn->abort();
return new ParamsStructCpp();
list<ParamsStruct> BerkeleyMethods::FetchAllDatabase ()
list<ParamsStruct> temp;
Dbc *dbcp;
try
db->cursor(NULL, &dbcp, 0);
Dbt _key;
Dbt data;
while (dbcp->get(&_key, &data, DB_NEXT) == 0) { // get() returns 0 on success; the original condition was inverted
unsigned long int key = *((unsigned long int *)_key.get_data());
char * datetime = (char *)data.get_data();
ParamsStruct p;
strcpy_s(p.lastaccess, strlen(datetime)+1, datetime);
p.downloaded = key; // NOTE: this line was truncated in the original post; the assignment is a guess
temp.push_back(p);
//temp.insert(Tuple(datetime, key));
}
catch(DbException &e)
std::cout << e.what() << std::endl;
catch(...)
return temp;
bool BerkeleyMethods::DeleteData(unsigned long int key)
if (this->Get_restoremode())
return true;
DbTxn * txn;
try
env.txn_checkpoint(128, 1, 0);
env.txn_begin(NULL, &txn, 0); // Begin transaction
Dbt _key;
_key.set_data(&key); // the original called setdata/setsize on the plain key, not the Dbt
_key.set_size(sizeof(unsigned long int));
db->del(txn, &_key, 0);
txn->commit(0);
return true;
catch(DbException &e)
std::cout << e.what() << std::endl;
txn->abort();
catch(...)
txn->abort();
return false;
bool BerkeleyMethods::DeleteData(char * key)
if (this->Get_restoremode())
return true;
DbTxn * txn;
try
env.txn_begin(NULL, &txn, 0); // Begin transaction
Dbt _key;
_key.set_data(key);
_key.set_size(strlen(key)+1);
db->del(txn, &_key, 0);
txn->commit(0);
return true;
catch(DbException &e)
std::cout << e.what() << std::endl;
txn->abort();
catch(...)
txn->abort();
return false;
int BerkeleyMethods::Sync()
if (this->Get_restoremode())
return -1;
try
return db->sync(0);
catch(...)
return -1;
int BerkeleyMethods::Count()
if (this->Get_restoremode())
return -1;
Dbc *dbcp;
int count = 0;
try
Dbt key;
Dbt data;
db->cursor(NULL, &dbcp, 0);
while (dbcp->get(&key, &data, DB_NEXT) == 0) {
count++;
dbcp->close();
return count;
catch(...)
return -1;
BerkeleyMethods::~BerkeleyMethods()
if (db) {
db->sync(0);
db->close(0);
env.close(0);
=====
The code that uses this class:
BerkeleyMethods db("test.db", 0, 524288000, 1, 1, "log.txt", "./Env_dir", 1000000 * 5, 600000);
BerkeleyMethods db1("test2.db", 0, 524288000, 1, 1, "log2.txt", "./Env_dir2", 1000000 * 5, 600000);
bool z = db.OpenDatabase();
db1.OpenDatabase();
if (z)
std::cout << "Database opened" << std::endl;
for (unsigned int i = 0; i < 1000; i++)
ParamsStructCpp p = { 10, "02/08/2008 14:46:23", 789 };
bool a = db.AddData(i, p);
db1.AddData(i, p);
if (a)
std::cout << "Data added OK" << std::endl;
for (unsigned int i = 0; i < 1000; i++)
ParamsStructCpp * c = db.GetData(i, false);
ParamsStructCpp * c1 = db1.GetData(i, false);
std::cout << "Data retrieved " << c->downloaded << " : " << c->lastaccess << " : " << c->waittime << std::endl;
std::cout << "Data retrieved " << c1->downloaded << " : " << c1->lastaccess << " : " << c1->waittime << std::endl;
====
The application output shows that when using two databases the data is not set correctly. It seems that db and db1 are the same object :|.
For example, I insert a key "toto" with value 4 into db, and then insert the same key into db1, which should normally be no problem. But Berkeley DB says the key "toto" already exists in db1 when it does not.
I don't understand.
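A likely culprit (my reading of the code above, not something confirmed in the thread): `Db * db;` and `DbEnv env(0);` are declared at file scope, so every BerkeleyMethods instance shares the same handles, and the second OpenDatabase() clobbers the first. A minimal sketch of that failure mode and the fix, using a plain `std::string` as a stand-in for the `Db*`/`DbEnv` handles so it needs no Berkeley DB at all:

```cpp
#include <cassert>
#include <string>

// Stand-in for a database handle; in the real wrapper this is Db*/DbEnv.
static std::string g_handle; // file-scope global, as in the original code

struct SharedWrapper {          // mimics the original: state is global
    void Open(const std::string& name) { g_handle = name; }
    std::string Name() const { return g_handle; }
};

struct PerInstanceWrapper {     // the fix: state is a per-instance member
    std::string handle_;
    void Open(const std::string& name) { handle_ = name; }
    std::string Name() const { return handle_; }
};
```

Moving `db` and `env` into the class as members, one environment and one `Db` per instance, each with its own environment directory, should let `db` and `db1` stay independent.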
NB: sorry for my English.
Michael Cahill wrote:
As a side note, it is unlikely that you want both
DB_READ_UNCOMMITTED and DB_MULTIVERSION to be set.
This combination pays a price during updates for
maintaining multiple versions, but still requires
(short term) locks to be held during reads.
The BDB/XML Transaction Processing Guide states the following:
[...]in addition to BDB XML's normal degrees of isolation, you can also use snapshot isolation. This allows you to avoid the read locks that serializable isolation requires.
http://www.oracle.com/technology/documentation/berkeley-db/xml/gsg_xml_txn/cxx/isolation.html
This seems to contradict what you're saying here.
Is there a general guideline on whether or not to use MVCC together with a relaxed isolation degree like DB_READ_UNCOMMITTED? Should the statement in the BDB/XML TP Guide rather have "as an alternative to" instead of "in addition to"?
Michael Ludwig -
When using Appleworks database for printing labels can I have columns of different widths?
When using Appleworks database for printing labels can I have a column of different width?
Case in point: the Avery labels supplied in AppleWorks do not include 8195. When I build one using custom design, I need a small column between columns 2 and 3 to line up the info in columns 3 and 4. All the labels are 1-3/4 inches wide, but I need to insert a column 1/4 inch wide to match up with the Avery page.
Any ideas???
Thanks......
---warren
Well, the issue is that when you create a custom width of 1-3/4 inches from the edge of the first label to the edge of label 2, all is good. The right and left margins were also set correctly in AW. The Avery page, for some reason, has a small 1/4-inch column between the columns of labels 2 and 3, making the distance between the left edge of the label in column 2 and the right edge of the label in column 3 a width of 2 inches, not 1-3/4.
I guess Avery wanted the page to look even.
I did this work around.....
I created a custom layout of 2 columns (not 4) with the correct width of 1-3/4 inches from label edge to label edge and the correct left-side margin. After printing the first page, I just turned the sheet 180 degrees and printed the 2nd page to use all the labels on the Avery sheet. Since the upper and lower margins were the same, it worked great.
"Great minds think alike"... thanks for your suggestion -
How to generate a single report using multiple Databases
Hi All
Is it possible to create a single report using multiple databases?
I usually work on Database A to generate reports, but now I have a second database, B, whose data comes from flat files, and I have to use a few tables from Database B in a single report. Can anyone help with the process?
Hi,
I didn't see this stated clearly in your post:
"but now i have a second database for which the data is coming from flat files"
If you have an ETL tool, make the flat files a source and create target tables in DB B itself, then import them into the RPD.
If not, import both sets of tables into the RPD with different connections.
Create physical joins by selecting those tables and perform the join operations across the databases in the physical layer. -
Why is quicktime slower when using multiple mdat atoms
Hi,
I've been trying to generate a MOV file, and I noticed that the more mdat atoms I put in the file, the longer it takes to load in QuickTime and iTunes.
Even worse, on the iPhone the file takes more than 3 minutes to load.
If there are too many mdat atoms quicktime even says that the file is invalid ( error -2004 or -2002, I don't remember exactly).
Why is quicktime/iTunes slower when using multiple mdat atoms ?
Thanks,
Yeah! Problem solved: it's a QT issue.
Cause: Mac Update Software downloaded a faulty QT.
Solution: Download QT from Apple's QT site.
Great to have the Video back -
ISE Not Identifying AD Group Attributes when using Multiple ISE Servers
So we have multiple ISE Servers with differing personas. I was having an issue with our new ISE setup not identifying AD Group Attributes when using them in Authorization rules.
We have two 3395 appliances running the Admin and Monitoring/Troubleshooting personas and two 3395 appliances running the Policy Service persona. We are running v1.1.1.268 with the latest two patches.
I was unable to pull Active Directory group attributes in any of my authorization rules. After resyncing all the boxes with the primary Administration box, I was able to do so. There are no bug listings for this occurrence, nor do we have SmartNet to call support. I thought this might be useful to someone who has the same issue and is unable to figure it out with TAC.
-CC
Absolutely. All units said in-sync after setting their personas.
Here is our layout:
ISE-ADM-01 Admin-Primary, Monitoring-Secondary
ISE-ADM-02 Admin-Secondary, Monitoring-Primary
ISE-PDP-01 Policy Only
ISE-PDP-02 Policy Only
I synced one at a time, starting with ADM-02, then completed the other two boxes. Active Directory attributes were then pulled down when using them in the external group within my authorization rules.
-CC -
Solve Drive self-ejects and System Hangs when using MULTIPLE external drives!
I've discovered what I think is the underlying reason for many folks reporting similar problems. I bet that I can predict the GUIDs of the drives/docks/cases you're having trouble with! I bet that if you look in Disk Utility, if you click on the drives, you'll find "Connection ID 13757101839304263"!
Oh and if you look in System Profiler and click on 'FireWire' or 'USB' you'll find the GUID 0x30E002E0454647 !
What are the odds? I shouldn't be able to predict them; they're supposed to be unique, after all, but google that number and you'll see that a great many devices all use that GUID. (13757101839304263 in decimal is the same thing as 30E002E0454647 in hex.)
Mac OS seems to rely on them to be unique, and has fits when they aren't. I haven't found a total solution yet, but in the meantime, running this in Terminal can help:
sudo kextload /System/Library/Extensions/IOFireWireFamily.kext/Contents/PlugIns/AppleFWOHCI.kext/
Sometimes it helps to run this first to unload before loading the kext (kernel extension):
sudo kextunload /System/Library/Extensions/IOFireWireFamily.kext/Contents/PlugIns/AppleFWOHCI.kext/
Enter your password if/when prompted, of course.
In most cases, if you contact your hard drive manufacturer, they'll provide a utility that will fix the drive so that it actually has a truly unique GUID.
I'm mad at Cavalry for hiding the problem and refusing to fix it, W.R.T. the two 1TB HDs I bought from them for around $500, when that was the going price.
I figured it'd be helpful to create a user tip off the discussion in this thread: Disk Drive ejecting itself - https://discussions.apple.com/thread/2151621?start=615&tstart=0 This is my first attempt to create a tip. I'm not sure I'm doing it right, but here goes.
Sorry but this is just a really bad idea.
1. The airport is ancient.. your speed will be terrible.
2. Hubs filled with USB drives are never reliable.. disks refuse to spin up.. become unavailable.
No.. don't do it.
Plug USB3 hub into the mini and use multiple decent hard drives .. that is fine.. use thunderbolt hub.. use thunderbolt drives if you can afford it.. but never use USB drives on the network.
3. Time Machine is not reliable to external drive on Airport Extreme.. Apple do not support it in any Extreme except the latest one.
If you want the material available to the network buy a NAS. You can buy a cheap NAS of 8TB or so and another 8TB of USB drives which you plug into the NAS for backup.. however.. iphoto is not supposed to ever be put on network drives..
Apple explicitly states you will corrupt it.
http://support.apple.com/kb/TS5168 Although mostly about FAT32 it adds network drives.
http://support.apple.com/kb/HT1198
And I quote
It's recommended that you store your iPhoto library on a locally mounted hard drive. Storing your iPhoto library on a network share can lead to poor performance, data corruption, or data loss. If you use both iPhoto and Aperture with the same library, using a Mac OS X Extended formatted volume is recommended. For more information, see Aperture: Use locally mounted Mac OS X Extended volumes for your Aperture library.
For backup of the drives.. use Carbon Copy Cloner.. it is much better than TM as you can setup different job.. TM is really a single function.. !! -
TDE Wallets & Multiple Databases on same Host
The Oracle TDE Best Practices (doc ID 130696) states this:
Multiple databases on the same host
If there are multiple Oracle Databases installed on the same server, they
must access their own individual TDE wallet. Sharing the same wallet between independent instances is not supported
and can potentially lead to the loss of encrypted data.
If the databases share the same ORACLE_HOME, they also share the same
sqlnet.ora file in $TNS_ADMIN . In order to access their individual wallet, the
DIRECTORY entry for the ENCRYPTION_WALLET_LOCATION
needs to point each database to its own wallet location:
DIRECTORY= /etc/ORACLE/WALLETS/$ORACLE_UNQNAME
The names of the subdirectories under /etc/ORACLE/WALLETS/ reflect
the ORACLE_UNQNAME names of the individual databases.
If the databases do not share the same ORACLE_HOME, they will also have their individual sqlnet.ora
files that have to point to the individual subdirectories.
What is the correct sqlnet.ora syntax to do this? I currently have what is below but it doesn't seem to be correct:
ENCRYPTION_WALLET_LOCATION =
(SOURCE = (METHOD = FILE)
(METHOD_DATA =
(DIRECTORY = /local/oracle/admin/wallet/DB#1)
(DIRECTORY = /local/oracle/admin/wallet/DB#2)
Hi,
You can check this :Setting ENCRYPTION_WALLET_LOCATION For Wallets Of Multiple Instances Sharing The Same Oracle Home (Doc ID 1504783.1)
I haven't done this for multiple databases, but per the Doc you can use syntax like:
ENCRYPTION_WALLET_LOCATION =
(SOURCE = (METHOD = FILE)
(METHOD_DATA =
(DIRECTORY = /local/oracle/admin/wallet/$ORACLE_UNQNAME)))
Whenever you set the environment with
export ORACLE_UNQNAME=DB#1
it will pick up the wallet from the corresponding directory, e.g. /local/oracle/admin/wallet/DB#1.
HTH -
I am trying to write LabVIEW Variants to long binary fields in a .mdb file using the Database Connectivity Toolset. I get errors when trying to convert the field back to a variant after reading it back from the database.
I next tried flattening the variant before writing it and ultimately wound up doing the following experiments:
1) If I use DB Tools Insert Data to write an ordinary string and read it back using a DB Tools Select Data, the string is converted from ASCII to Unicode.
2) If I use DB Tools Create Parameterized Query to do an INSERT INTO or an UPDATE operation, specifying that the data is BINARY, then read it back using a DB Tools Select Data,
the length of the string is prepended to the string itself as a big-endian four-byte integer.
I can't think of any way to do a parameterized read, although the mechanism exists to return data via parameters.
Presuming that this same problem affects Variants when they are written to the database and read back, I could see why I get an error. At least with flattened strings I have the option of discarding the length bytes from the beginning of the string.
Am I missing something here?
David,
You've missed the point. When a data item is flattened to a string, the first four bytes of the string are expected to be the total length of the string in big-endian binary format. What is happening here is that preceding this four-byte length code is another copy of the same four bytes. If an ordinary string, "abcdefg", is used in place of the flattened data item, it will come back as <00><00><00><07>abcdefg. Here I've used <nn> to represent a byte in hexadecimal notation. This problem has nothing to do with flattening and unflattening data items. It has only to do with the data channel consisting of writing to and reading from the database.
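The framing described above, a 4-byte big-endian length glued onto the front of the payload, can be stripped generically once the record is read back. A small C++ sketch (my own helper, not part of the LabVIEW toolset) that validates and removes such a prefix:

```cpp
#include <cstdint>
#include <string>

// Strip a 4-byte big-endian length prefix of the kind described above.
// Returns the payload, or an empty string if the prefix is missing or
// inconsistent with the data size. (Hypothetical helper; the name and
// behavior are mine, not the toolset's.)
std::string strip_length_prefix(const std::string& raw) {
    if (raw.size() < 4) return std::string();
    const uint32_t len =
        (uint32_t(uint8_t(raw[0])) << 24) |
        (uint32_t(uint8_t(raw[1])) << 16) |
        (uint32_t(uint8_t(raw[2])) << 8)  |
         uint32_t(uint8_t(raw[3]));
    if (raw.size() - 4 < len) return std::string(); // truncated record
    return raw.substr(4, len);
}
```

For the example above, the bytes <00><00><00><07>abcdefg come back as plain abcdefg; applied to a flattened item, the same call would remove only the duplicated outer length and leave the original flattened string intact.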
I am attaching three files that you can use to demonstrate the problem. The VI file contains an explanation of the problem and instructions for installing and operating the demonstration.
Ron Martin
Attachments:
TestLongBinaryFields.vi 132 KB
Sample.UDL 1 KB
Sample.mdb 120 KB -
Problem using multiple contexts in same thread
Hello,
I am having a problem using multiple contexts in the same thread. Here is the scenario:
The front end calls ejb1 as user1 with a password. Ejb1 then calls ejb2 as user2 with a password. I get a security exception when calling ejb2, with the message that user1 is not authorized. Looking at the documentation, context 2 should be pushed onto the stack on top of context 1, and context 2 should then be used until context.close() is called. It looks like this is not the case in this scenario?
Regards,
Jeba Bhaskaran
I have the GTX670. So pretty much the same.
When I go to Edit>Preferences>Playback I see:
When I select the monitor I am not currently using for Premiere Pro, the Program Monitor shows up full size at 1920X1080 in that monitor.
While that may not help you, at least you know a similar card can do the job and you know that it should work.. What happens if you drop down to two monitors? Will it work then?
Also, have you performed the hack that allows Premiere Pro to use the card since that card is not in the file? I have no idea if that is relevant at all, by the way. It is just an attempt at getting our systems to work the same way. -
** Is it not possible to use 2 database in same XI server
Hi friends,
We were able to connect to an Oracle database through the JDBC adapter previously. To connect to SQL Server, we installed the JDBC drivers for Microsoft SQL Server 2000 on the same XI server. Now we can connect to the SQL Server table but no longer to the Oracle table. When I change the communication channel and give the Oracle driver and connection string, it throws the error 'SAPClassnotFoundException'.
So, is it not possible to connect to different databases in the same XI system?
Kindly reply friends,
Kind Regards,
Jeg P.
1) Verify the DB URL by connecting to the database using command-line SQL*Plus or any SQL editor.
2) Check whether the deployed drivers are supported for the Oracle DB version and the JDK version on the XI server and the OS type (Solaris).
Check this URL for details on JDBC drivers support list -
Re: Different Version Oracle in a SLD
'SAPClassNotFoundException: oracle.jdbc.driver.OracleDriver' --> This error clearly implies that the OracleDriver has not been installed on your XI server.
Ask you admin team to do the needful and install the Oracle Drivers.
regards
kummari -
Multiple databases on same server
we're planning to consolidate our SAP R/3 databases on a 64bit based cluster running Windows 2003/SQL Server 2005.
the "SAP with Microsoft SQL Server 2005: Best Practices for High Availability, Maximum Performance, and Scalability" document states "SAP products support multiple instances of SQL Server 2005 on the same server. However, running the database of the SAP production system together with other application databases on one server is not recommended." and "An alternative is to run two or three SAP databases on one SQL Server 2005 instance.".
Given this, I'd like to know whether it is not recommended even if the 'other application databases' belong to other instances of the SAP R/3 product.
It would also be useful to know why one SQL Server instance with multiple SAP databases is better than multiple SQL Server instances with one SAP database each.
thank you.
Dear Matro,
Please review the following thread:
Multiple database instances on single server on SQL2005
I also note that you state your database server is clustered using MSCS. If this is the case you should not use a "Named Instance". If you download the latest Netweaver 2004s Installation guide (SR1) for Windows SQL you will note that on page 172 it specifically states that you must install a "default instance".
If you plan to consolidate several SAP database systems on one host and use a default instance, you need to consider that all databases running on this instance will share resources such as memory cache, etc.
Factors such as the size of the SAP systems and whether the systems are OLTP will influence your decision.
Please post if you have any additional questions.
Thanks
N.P.C -
Issues when running multiple apps on same JRE?
I've created an application that launches other Java applications, all running on the same JRE. I've noticed that this results in significant memory savings (60% or more). Performance doesn't seem to be measurably affected, but I've never run more than 4 applications at the same time on the same JRE.
Does anyone know of any performance issues in situations like this? JRE limitations? Any information would be appreciated.
"It's just not obvious from the 1.3 API docs that the classLoader is what's responsible for associating a set of classes with a set of threads."
No, that's not what I was implying. The default classloader caches the definition of previously loaded classes (ie, bytecode, static variables). By default, there is no way to unload a class. Static variables will also retain their values -- which could confuse an application that is run multiple times.
This is the suggested implementation defined in the JVM spec:
http://java.sun.com/docs/books/vmspec/html/ConstantPool.doc.html
Basically, if a class has already been loaded, it returns it. You can write your own custom classloader to prevent this (as the link you found suggests) or use multiple classloaders.
Imagine what happens when you write an application and then recompile a class WHILE running your application. Will the next instance of that class be updated? The answer depends on whether your classloader will reload the class from disk, or use a cached instance in memory (as the default one does). -
HELP! Oracle FailSafe - Listener fails when adding standalone database
Well, I have a cluster of two nodes with the following specs:
(1) an Oracle 10g database each
(2) Microsoft Cluster Service (MSCS)
(3) Windows Server 2003 64-bit edition
(4) Intel Itanium Processor
(5) Oracle Failsafe 3.3.3 for Windows 2003 64-bit
The 64-bit Oracle Failsafe doesn't come with Oracle Failsafe Manager, so I used a Failsafe Manager remotely from another clustered server. That manager is also version 3.3.3, but it runs on a Windows 2000 Advanced Server.
Well, after connecting to the 64-bit cluster, I added the standalone database to a cluster group. There are two cluster groups on the server:
(1) "Cluster Group" (the default cluster group created by MSCS), containing an IP address, a network name, Oracle Cluster Services, and the quorum hard drive.
(2) "ORACLE DB", a cluster group I created for the database, containing another IP address, a network name for that IP address, and all the hard drive volumes holding the database files.
The database currently resides on Node 2 (because I created it there). I have successfully verified the database (using the "Verify Standalone Database" option). BUT when I added the database into the cluster group ORACLE DB, it failed with the following message:
23 20:48:48 ** ERROR : FS-10066: Failed to start Windows service OracleOraDb10g_home1TNSListener for the Oracle Net listener
When I opened the Windows Event Viewer, apparently the Listener Service had started, but it soon "terminated unexpectedly":
At first, the Listener Service appeared to be started:
But this is what happened next; it seemed the Listener Service terminated abruptly after entering the running state for a very short time:
What happened? What should I do? What is the problem? Many thanks!
PS: the following are the messages from both Verifying Standalone Database and Adding Standalone Database. The verification was successful, but I just failed to add the database:
>
Versions: client = 3.3.3 server = 3.3.3 OS =
Operation: Verifying standalone database "PAYMENT"
Starting Time: May 11, 2005 19:50:11
Elapsed Time: 0 minutes, 4 seconds
1 19:50:11 Starting clusterwide operation
2 19:50:11 FS-10915: POSDB2 : Starting the verification of standalone resource PAYMENT
3 19:50:11 FS-10371: POSDB2 : Performing initialization processing
4 19:50:11 FS-10371: POSDB1 : Performing initialization processing
5 19:50:12 FS-10372: POSDB2 : Gathering resource owner information
6 19:50:12 FS-10372: POSDB1 : Gathering resource owner information
7 19:50:12 FS-10373: POSDB2 : Determining owner node of resource PAYMENT
8 19:50:12 FS-10374: POSDB2 : Gathering cluster information needed to perform the specified operation
9 19:50:12 FS-10374: POSDB1 : Gathering cluster information needed to perform the specified operation
10 19:50:12 FS-10375: POSDB2 : Analyzing cluster information needed to perform the specified operation
11 19:50:12 FS-10378: POSDB2 : Preparing for configuration of resource PAYMENT
12 19:50:12 ** WARNING : FS-10247: The database parameter file H:\PAYMENT\admin\pfile\pfilePAYMENT.ora specified for this operation will override the parameter file value in the registry
13 19:50:12 ** WARNING : FS-10248: At registry key SOFTWARE\ORACLE\KEY_OraDb10g_home1, value of ORA_PAYMENT_PFILE is H:\PAYMENT\admin\pfile
14 19:50:12 FS-10916: POSDB2 : Verification of the standalone resource
15 19:50:12 > FS-10341: Starting verification of database PAYMENT
16 19:50:13 > FS-10342: Starting verification of Oracle Net configuration information for database PAYMENT
17 19:50:13 > FS-10496: Generating the Oracle Net migration plan for PAYMENT
18 19:50:13 > FS-10491: Configuring the Oracle Net service name for PAYMENT
19 19:50:13 > FS-10343: Starting verification of database instance information for database PAYMENT
20 19:50:13 >> FS-10347: Checking the state of database PAYMENT
21 19:50:13 >> FS-10425: Querying the disks used by the database PAYMENT
22 19:50:15 > FS-10344: Starting verification of Oracle Intelligent Agent for database PAYMENT
23 19:50:15 > FS-10345: Verification of standalone database PAYMENT completed successfully
24 19:50:15 FS-10917: POSDB2 : Standalone resource PAYMENT was verified successfully
25 19:50:15 FS-10378: POSDB1 : Preparing for configuration of resource PAYMENT
26 19:50:15 FS-10916: POSDB1 : Verification of the standalone resource
27 19:50:15 > FS-10341: Starting verification of database PAYMENT
28 19:50:15 > FS-10342: Starting verification of Oracle Net configuration information for database PAYMENT
29 19:50:15 > FS-10496: Generating the Oracle Net migration plan for PAYMENT
30 19:50:15 > FS-10491: Configuring the Oracle Net service name for PAYMENT
31 19:50:15 > FS-10343: Starting verification of database instance information for database PAYMENT
32 19:50:15 > FS-10344: Starting verification of Oracle Intelligent Agent for database PAYMENT
33 19:50:15 > FS-10345: Verification of standalone database PAYMENT completed successfully
34 19:50:15 FS-10917: POSDB1 : Standalone resource PAYMENT was verified successfully
35 19:50:15 The clusterwide operation completed successfully, however, the server reported some warnings.
>
Versions: client = 3.3.3 server = 3.3.3 OS =
Operation: Adding resource "PAYMENT" to group "ORACLE DATABASE"
Starting Time: May 11, 2005 20:48:43
Elapsed Time: 0 minutes, 7 seconds
1 20:48:43 Starting clusterwide operation
2 20:48:44 FS-10370: Adding the resource PAYMENT to group ORACLE DATABASE
3 20:48:44 FS-10371: POSDB2 : Performing initialization processing
4 20:48:44 FS-10371: POSDB1 : Performing initialization processing
5 20:48:45 FS-10372: POSDB2 : Gathering resource owner information
6 20:48:45 FS-10372: POSDB1 : Gathering resource owner information
7 20:48:45 FS-10373: POSDB2 : Determining owner node of resource PAYMENT
8 20:48:45 FS-10374: POSDB2 : Gathering cluster information needed to perform the specified operation
9 20:48:45 FS-10374: POSDB1 : Gathering cluster information needed to perform the specified operation
10 20:48:45 FS-10375: POSDB2 : Analyzing cluster information needed to perform the specified operation
11 20:48:45 >>> FS-10652: POSDB2 has Oracle Database version 10.1.0 installed in ORADB10G_HOME1
12 20:48:45 >>> FS-10652: POSDB1 has Oracle Database version 10.1.0 installed in ORADB10G_HOME1
13 20:48:45 FS-10376: POSDB2 : Starting configuration of resource PAYMENT
14 20:48:45 FS-10378: POSDB2 : Preparing for configuration of resource PAYMENT
15 20:48:46 FS-10380: POSDB2 : Configuring virtual server information for resource PAYMENT
16 20:48:46 ** WARNING : FS-10247: The database parameter file H:\PAYMENT\admin\pfile\pfilePAYMENT.ora specified for this operation will override the parameter file value in the registry
17 20:48:46 ** WARNING : FS-10248: At registry key SOFTWARE\ORACLE\KEY_OraDb10g_home1, value of ORA_PAYMENT_PFILE is H:\PAYMENT\admin\pfile
18 20:48:46 > FS-10496: Generating the Oracle Net migration plan for PAYMENT
19 20:48:46 > FS-10490: Configuring the Oracle Net listener for PAYMENT
20 20:48:46 >> FS-10600: Oracle Net configuration file updated: F:\ORACLE\PRODUCT\10.1.0\DB_1\NETWORK\ADMIN\LISTENER.ORA
21 20:48:46 >> FS-10606: Listener configuration updated in database parameter file: H:\PAYMENT\admin\pfile\pfilePAYMENT.ora
22 20:48:47 >> FS-10605: Oracle Net listener Fslpos created
23 20:48:48 ** ERROR : FS-10066: Failed to start Windows service OracleOraDb10g_home1TNSListener for the Oracle Net listener
24 20:48:48 ** ERROR : FS-10065: Error trying to configure the Oracle Net listener
25 20:48:48 > FS-10090: Rolling back Oracle Net changes on node POSDB2
26 20:48:50 ** ERROR : FS-10784: The Oracle Database resource provider failed to configure the virtual server for resource PAYMENT
27 20:48:50 ** ERROR : FS-10890: Oracle Services for MSCS failed during the add operation
28 20:48:50 ** ERROR : FS-10497: Starting clusterwide rollback of the operation
29 20:48:50 FS-10488: POSDB2 : Starting rollback of operation
30 20:48:50 FS-10489: POSDB2 : Completed rollback of operation
31 20:48:50 ** ERROR : FS-10495: Clusterwide rollback of the operation has been completed
32 20:48:50 Please check your Windows Application log using the Event Viewer for any additional errors
33 20:48:50 The clusterwide operation failed !
umm... help? Anyone?
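For context on what failed here: the FS-10605/FS-10600 lines in the log show that FailSafe created a dedicated listener named Fslpos in LISTENER.ORA before the service failed to start (FS-10066). A common cause is a conflicting or stale address in that generated entry. The sketch below shows the general shape of the kind of entry FailSafe writes; the virtual host name and port are placeholders, not values taken from the log:

```
# Hypothetical LISTENER.ORA entry of the kind FailSafe generates.
# HOST must resolve to the cluster group's virtual IP, not a node name;
# "payment-vip" and port 1521 here are assumptions for illustration.
FSLPOS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = payment-vip)(PORT = 1521))
  )

SID_LIST_FSLPOS =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PAYMENT)
      (ORACLE_HOME = F:\ORACLE\PRODUCT\10.1.0\DB_1)
    )
  )
```

If the virtual IP resource in the group is not yet online, or another listener is already bound to the same address and port, the generated listener service will start and then terminate immediately, which matches the Event Viewer behavior described above.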
-
Ref: Multiple databases on same ASM Instance
I have an existing two-node RAC using the ASM file system. I want to add one more database to the existing RAC using the same ASM diskgroup. Can we create multiple databases using the same ASM instance?
You need one ASM instance per server. You can have multiple database instances, and therefore multiple databases, running on that server, all accessing the same diskgroup made available by that ASM instance.
Kind regards
Uwe Hesse
http://uhesse.wordpress.com
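As a sketch of what pointing a second database at an existing diskgroup looks like, an init.ora fragment using Oracle-Managed Files would resemble the following. The database name and the diskgroup names +DATA and +FRA are assumptions for illustration, not taken from the question:

```
# Hypothetical init.ora fragment for an additional database
# on diskgroups already served by the node's ASM instance.
db_name                    = orcl2
db_create_file_dest        = '+DATA'
db_recovery_file_dest      = '+FRA'
db_recovery_file_dest_size = 10G
```

With these parameters set, datafiles, controlfiles, and redo logs for the new database are created inside the shared diskgroups alongside the files of the existing databases; ASM keeps each database's files in its own directory tree within the diskgroup.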