Single instance of a package in memory
I heard a rumor (but cannot find it in the documentation) that in 9i one can create a package that is accessible and updatable by any session. That is, a public variable updated by one session would have the same value in all sessions, as opposed to each session getting its own copy in session memory.
Note that I'm not talking about FGAC/VPD (virtual private database, and fine grain access control).
Any idea?
There was a discussion on this topic on Tom Kyte's site. Click on the link below and scroll down for the more recent 9i stuff.
http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:1578344046713
Similar Messages
-
Single instance app with native packaging
Is there any way to allow only one instance of the app to run with JavaFX 2 native packaging? Like an attribute in build.xml or something?
I'm using the .exe for windows and the .dmg for mac.
Appreciate your help.
There is no simple deployment build switch that I know of for achieving a single instance app.
A couple of ideas (none of which I have tried).
Perhaps you could use the SingleInstanceService:
http://www.oracle.com/technetwork/articles/java/fxbest-1583679.html "Ensuring Only One Instance of the Application Is Started"
http://docs.oracle.com/javase/7/docs/technotes/guides/javaws/developersguide/examples.html#SingleInstanceService "SingleInstanceService"
It is a JNLP API based service though, and I'm not sure whether such a service would be available to a packaged app (maybe it would require including the jre/lib/javaws.jar file with your app or something like that).
You could write out a lock file when the app starts.
On Unix (i.e. Mac/Linux) the lock could include your app's process pid. On startup, check whether a process with the lock's pid is currently running; if so, don't start up, and perhaps send an interrupt signal to the existing app notifying it that the user tried to start a new instance.
On Windows you could read and write the lock value from the registry.
To get rid of the OS specific stuff surrounding this, perhaps this kind of lock logic could be implemented using the Java Preferences API:
http://docs.oracle.com/javase/7/docs/technotes/guides/preferences/index.html
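None of the class names below come from the thread; this is a minimal, hypothetical sketch of the lock-file idea using java.nio file locks instead of a hand-rolled pid file. An OS-level file lock is released automatically when the process dies, which sidesteps the stale-pid problem described above:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Guard app startup with an exclusive file lock. The first instance
// acquires it; any later instance fails and can exit immediately.
final class SingleInstanceLock {
    private FileChannel channel;
    private FileLock lock;

    // Returns true if this process now holds the lock (first instance),
    // false if another instance already holds it.
    boolean tryAcquire(Path lockFile) {
        try {
            channel = FileChannel.open(lockFile,
                    StandardOpenOption.CREATE, StandardOpenOption.WRITE);
            lock = channel.tryLock();
            return lock != null;
        } catch (OverlappingFileLockException | IOException e) {
            return false; // lock already held (possibly within this JVM)
        }
    }

    void release() {
        try {
            if (lock != null) lock.release();
            if (channel != null) channel.close();
        } catch (IOException e) {
            // best effort; the OS releases the lock on process exit anyway
        }
    }
}
```

In a JavaFX app you would call `tryAcquire` at the top of `main` (or `Application.init`) and exit if it returns false; the "notify the running instance" part would still need an extra channel such as a local socket.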
IntelliJ IDEA is open source and seems to have this kind of functionality, so you could check how they do it. -
Converting from RAC to Single Instance - Memory
hi,
We are moving forward with virtualizing our database environment and want to use 11g RACOne. We are currently using three-node 10g RAC. In coming up with specifications, I am wondering what general rule there is for sizing the SGA. For example, if one of my databases has three 500 MB SGAs, do I simply give the single instance a 1500 MB SGA? I'm not sure what the approach would be.
Any info appreciated. Thanks in advance...ron
RonHeeb wrote:
Thanks for the response. I have been regularly taking current sizes of each SGA from gv$sgastat to see what's being allocated. My thinking is that this is a minimum and that I should add to it for peak loads, ensuring it's not set below any minimum that RACOne requires.
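The per-instance allocation mentioned above can be pulled with a query along these lines (a sketch only; gv$sgastat is a standard view, but round and filter to taste):

```sql
-- Current SGA allocation per RAC instance, in MB
SELECT inst_id,
       ROUND(SUM(bytes) / 1024 / 1024) AS sga_mb
FROM   gv$sgastat
GROUP  BY inst_id
ORDER  BY inst_id;
```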
Beyond that, going to RACOne seems to be a direction for virtualized DB servers, and for us 24x7 is not needed (although more than a few minutes of outage would be an issue). In any case, if needed we could go RAC on our most critical environments. I'm attracted to how patching/server maintenance can be achieved with RACOne.
Okay, you either need high availability or you don't. If having a db go down for "more than a few minutes" is a problem, then don't you really need 24x7? And in that case, isn't the high availability offered by RAC your only option? For me, having a mission-critical database (and it looks like this qualifies) on anything "virtualized" is a disaster waiting to happen. I find it lunacy to have, say, 4 virtual failover servers (RACOne) on the same physical hardware. When that server crashes, so does your entire failover scenario.
Edited by: user10489842 on Sep 13, 2012 2:04 PM -
Multiple DNS Domain support in Single instance of Portal
Can BEA Portal support multiple DNS domains in a single instance of BEA Portal?
For example, can I set up the portal to respond as both www.xxx.com and www.yyy.com,
and keep those URLs throughout the entire portal?
Hi,
thanks for your quick response. You mean we should run only one copy of the package I mentioned and separate the plants and machines by logic implemented in the package? Well, I think this is critical when deploying a new version, since all machines at all sites won't have the system available at the same time. At the moment we do not have things in the system that are needed to continue production, but we have planned to implement some things that will be indispensable, and at that stage we need a clear separation of the plants to minimize the risk of a simultaneous standstill at all plants.
Thanks for your suggestion and best regards,
Matthias -
CREATING A SINGLE INSTANCE PHYSICAL STANDBY FOR A RAC PRIMARY
Hi
Creating a single instance physical standby database for a RAC Primary.
Getting this error.
sql statement: alter database mount standby database
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 01/17/2008 23:05:38
RMAN-03015: error occurred in stored script Memory Script
RMAN-03009: failure of sql command on clone_default channel at 01/17/2008 23:05:38
RMAN-11003: failure during parse/execution of SQL statement: alter database mount standby database
ORA-01103: database name 'PROD' in control file is not 'DPROD'
Any help on this.
Regards
Satish
The problem here is probably with your standby init.ora file.
When you create a standby database, the db_name parameter must NOT change; it has to match the primary database. So in your case, db_name='PROD' and db_unique_name='DPROD'...
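A minimal standby init.ora sketch under that assumption (only the two parameters discussed here; all other required parameters omitted):

```
# Standby init.ora fragment: db_name matches the primary,
# db_unique_name identifies the standby.
*.db_name='PROD'
*.db_unique_name='DPROD'
```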
-peter -
Single instance standby for 2-node RAC
Hi,
Oracle Version:11.2.0.1
Operating system:Linux
Here I am planning to create a single instance standby for my 2-node RAC database. I am creating the single instance standby database on node 1 of my 2-node RAC DB.
1.) Do I need to configure a separate listener for my single instance standby in $ORACLE_HOME/network/admin as the oracle user, or does this need to change under the grid user login?
2.) Below is the error I get when duplicating my primary 2-node RAC to the single instance DB. It shuts down my auxiliary instance.
[oracle@rac1 ~]$ rman target / auxiliary sys/racdba123@stand
Recovery Manager: Release 11.2.0.1.0 - Production on Sun Aug 28 13:32:29 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
connected to target database: RACDB (DBID=755897741)
connected to auxiliary database: RACDB (not mounted)
RMAN> duplicate database racdba to stand
2> ;
Starting Duplicate Db at 28-AUG-11
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=6 device type=DISK
contents of Memory Script:
sql clone "create spfile from memory";
executing Memory Script
sql statement: create spfile from memory
contents of Memory Script:
shutdown clone immediate;
startup clone nomount;
executing Memory Script
Oracle instance shut down
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 08/28/2011 13:33:55
RMAN-03015: error occurred in stored script Memory Script
RMAN-04006: error from auxiliary database: ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
Also find my listener services.
[oracle@rac1 ~]$ lsnrctl status
LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 29-AUG-2011 10:56:24
Copyright (c) 1991, 2009, Oracle. All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
Alias LISTENER
Version TNSLSNR for Linux: Version 11.2.0.1.0 - Production
Start Date 18-AUG-2011 10:35:07
Uptime 11 days 0 hr. 21 min. 17 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/11.2.0/grid/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/rac1/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.8.123)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.8.127)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "RACDB" has 1 instance(s).
Instance "RACDB1", status READY, has 1 handler(s) for this service...
Service "RACDBXDB" has 1 instance(s).
Instance "RACDB1", status READY, has 1 handler(s) for this service...
Service "stand" has 2 instance(s).
Instance "stand", status UNKNOWN, has 1 handler(s) for this service...
Instance "stand", status BLOCKED, has 1 handler(s) for this service...
Service "testdb" has 1 instance(s).
Instance "RACDB1", status READY, has 1 handler(s) for this service...
Service "testdb1" has 1 instance(s).
Instance "RACDB1", status READY, has 1 handler(s) for this service...
The command completed successfully
[oracle@rac1 ~]$ lsnrctl services
LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 29-AUG-2011 10:56:35
Copyright (c) 1991, 2009, Oracle. All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "RACDB" has 1 instance(s).
Instance "RACDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:3 refused:0 state:ready
LOCAL SERVER
Service "RACDBXDB" has 1 instance(s).
Instance "RACDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"D000" established:0 refused:0 current:0 max:1022 state:ready
DISPATCHER <machine: rac1.qfund.net, pid: 3975>
(ADDRESS=(PROTOCOL=tcp)(HOST=rac1.qfund.net)(PORT=43731))
Service "stand" has 2 instance(s).
Instance "stand", status UNKNOWN, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0
LOCAL SERVER
Instance "stand", status BLOCKED, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:669 refused:0 state:ready
LOCAL SERVER
Service "testdb" has 1 instance(s).
Instance "RACDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:3 refused:0 state:ready
LOCAL SERVER
Service "testdb1" has 1 instance(s).
Instance "RACDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:3 refused:0 state:ready
LOCAL SERVER
The command completed successfully
[oracle@rac1 ~]$
Tnsnames.ora file content.
RACDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = racdb-scan.qfund.net)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = RACDB)
    )
  )
#QFUNDRAC =
stand =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = racdb-scan.qfund.net)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = stand)
      (UR = A)
    )
  )
Please help me how to solve this problem.
Thanks & Regards,
Poorna Prasad.S
Hi,
Please find the output from v$dataguard_status from primary and standby
Primary
SQL> select message from v$dataguard_status;
MESSAGE
ARC0: Archival started
ARCH: LGWR is scheduled to archive destination LOG_ARCHIVE_DEST_2 after log swit
ch
ARCH: Beginning to archive thread 1 sequence 214 (4604093-4604095)
Error 12514 received logging on to the standby
ARCH: Error 12514 Creating archive log file to 'stand'
ARCH: Completed archiving thread 1 sequence 214 (4604093-4604095)
ARC1: Archival started
ARC2: Archival started
ARC3: Archival started
ARC4: Archival started
ARC5: Archival started
ARC6: Archival started
ARC7: Archival started
ARC8: Archival started
ARC9: Archival started
ARCa: Archival started
ARCb: Archival started
ARCc: Archival started
ARCd: Archival started
ARCe: Archival started
ARCf: Archival started
ARCg: Archival started
ARCh: Archival started
ARCi: Archival started
ARCj: Archival started
ARCk: Archival started
ARCl: Archival started
ARCm: Archival started
ARCn: Archival started
ARCo: Archival started
ARCp: Archival started
ARCq: Archival started
ARCr: Archival started
ARCs: Archival started
ARC1: Becoming the 'no FAL' ARCH
ARC1: Becoming the 'no SRL' ARCH
ARC2: Becoming the heartbeat ARCH
ARC7: Beginning to archive thread 1 sequence 215 (4604095-4604191)
ARC7: Completed archiving thread 1 sequence 215 (4604095-4604191)
ARC5: Beginning to archive thread 1 sequence 216 (4604191-4604471)
ARC5: Completed archiving thread 1 sequence 216 (4604191-4604471)
ARCt: Archival started
ARC3: Beginning to archive thread 1 sequence 217 (4604471-4605358)
ARC3: Completed archiving thread 1 sequence 217 (4604471-4605358)
LNS: Standby redo logfile selected for thread 1 sequence 217 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 1 thread 1 sequence 217
LNS: Completed archiving log 1 thread 1 sequence 217
LNS: Standby redo logfile selected for thread 1 sequence 218 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 2 thread 1 sequence 218
LNS: Completed archiving log 2 thread 1 sequence 218
ARC4: Beginning to archive thread 1 sequence 218 (4605358-4625984)
ARC4: Completed archiving thread 1 sequence 218 (4605358-4625984)
LNS: Standby redo logfile selected for thread 1 sequence 219 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 1 thread 1 sequence 219
LNS: Completed archiving log 1 thread 1 sequence 219
ARC5: Beginning to archive thread 1 sequence 219 (4625984-4641358)
ARC5: Completed archiving thread 1 sequence 219 (4625984-4641358)
LNS: Standby redo logfile selected for thread 1 sequence 220 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 2 thread 1 sequence 220
LNS: Completed archiving log 2 thread 1 sequence 220
ARC6: Beginning to archive thread 1 sequence 220 (4641358-4644757)
ARC6: Completed archiving thread 1 sequence 220 (4641358-4644757)
LNS: Standby redo logfile selected for thread 1 sequence 221 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 1 thread 1 sequence 221
LNS: Completed archiving log 1 thread 1 sequence 221
ARC7: Beginning to archive thread 1 sequence 221 (4644757-4648306)
ARC7: Completed archiving thread 1 sequence 221 (4644757-4648306)
LNS: Standby redo logfile selected for thread 1 sequence 222 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 2 thread 1 sequence 222
LNS: Completed archiving log 2 thread 1 sequence 222
ARC8: Beginning to archive thread 1 sequence 222 (4648306-4655287)
ARC8: Completed archiving thread 1 sequence 222 (4648306-4655287)
LNS: Standby redo logfile selected for thread 1 sequence 223 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 1 thread 1 sequence 223
LNS: Completed archiving log 1 thread 1 sequence 223
ARC9: Beginning to archive thread 1 sequence 223 (4655287-4655307)
ARC9: Completed archiving thread 1 sequence 223 (4655287-4655307)
LNS: Standby redo logfile selected for thread 1 sequence 224 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 2 thread 1 sequence 224
LNS: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (3135)
LNS: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned
Error 3135 for archive log file 2 to 'stand'
LNS: Failed to archive log 2 thread 1 sequence 224 (3135)
ARC3: Beginning to archive thread 1 sequence 224 (4655307-4660812)
ARC3: Completed archiving thread 1 sequence 224 (4655307-4660812)
LNS: Standby redo logfile selected for thread 1 sequence 224 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 2 thread 1 sequence 224
LNS: Completed archiving log 2 thread 1 sequence 224
LNS: Standby redo logfile selected for thread 1 sequence 225 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 1 thread 1 sequence 225
LNS: Completed archiving log 1 thread 1 sequence 225
ARC4: Beginning to archive thread 1 sequence 225 (4660812-4660959)
ARC4: Completed archiving thread 1 sequence 225 (4660812-4660959)
LNS: Standby redo logfile selected for thread 1 sequence 226 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 2 thread 1 sequence 226
LNS: Completed archiving log 2 thread 1 sequence 226
ARC5: Beginning to archive thread 1 sequence 226 (4660959-4664925)
LNS: Standby redo logfile selected for thread 1 sequence 227 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 1 thread 1 sequence 227
ARC5: Completed archiving thread 1 sequence 226 (4660959-4664925)
LNS: Completed archiving log 1 thread 1 sequence 227
LGWR: Error 1089 closing archivelog file 'stand'
ARC6: Beginning to archive thread 1 sequence 227 (4664925-4668448)
ARC6: Completed archiving thread 1 sequence 227 (4664925-4668448)
ARC5: Beginning to archive thread 1 sequence 228 (4668448-4670392)
ARC5: Completed archiving thread 1 sequence 228 (4668448-4670392)
LNS: Standby redo logfile selected for thread 1 sequence 228 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 2 thread 1 sequence 228
LNS: Completed archiving log 2 thread 1 sequence 228
ARC4: Standby redo logfile selected for thread 1 sequence 227 for destination LO
G_ARCHIVE_DEST_2
LNS: Standby redo logfile selected for thread 1 sequence 229 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 1 thread 1 sequence 229
LNS: Completed archiving log 1 thread 1 sequence 229
ARC3: Beginning to archive thread 1 sequence 229 (4670392-4670659)
ARC3: Completed archiving thread 1 sequence 229 (4670392-4670659)
LNS: Standby redo logfile selected for thread 1 sequence 230 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 2 thread 1 sequence 230
LNS: Completed archiving log 2 thread 1 sequence 230
ARC4: Beginning to archive thread 1 sequence 230 (4670659-4670679)
ARC4: Completed archiving thread 1 sequence 230 (4670659-4670679)
LNS: Standby redo logfile selected for thread 1 sequence 231 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 1 thread 1 sequence 231
LNS: Completed archiving log 1 thread 1 sequence 231
ARC5: Beginning to archive thread 1 sequence 231 (4670679-4690371)
ARC5: Completed archiving thread 1 sequence 231 (4670679-4690371)
LNS: Standby redo logfile selected for thread 1 sequence 232 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 2 thread 1 sequence 232
LNS: Completed archiving log 2 thread 1 sequence 232
ARC6: Beginning to archive thread 1 sequence 232 (4690371-4712566)
ARC6: Completed archiving thread 1 sequence 232 (4690371-4712566)
LNS: Standby redo logfile selected for thread 1 sequence 233 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 1 thread 1 sequence 233
LNS: Completed archiving log 1 thread 1 sequence 233
ARC7: Beginning to archive thread 1 sequence 233 (4712566-4731626)
LNS: Standby redo logfile selected for thread 1 sequence 234 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 2 thread 1 sequence 234
ARC7: Completed archiving thread 1 sequence 233 (4712566-4731626)
LNS: Completed archiving log 2 thread 1 sequence 234
ARC8: Beginning to archive thread 1 sequence 234 (4731626-4753780)
LNS: Standby redo logfile selected for thread 1 sequence 235 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 1 thread 1 sequence 235
ARC8: Completed archiving thread 1 sequence 234 (4731626-4753780)
LNS: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (3135)
LNS: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned
Error 3135 for archive log file 1 to 'stand'
LNS: Failed to archive log 1 thread 1 sequence 235 (3135)
ARC9: Beginning to archive thread 1 sequence 235 (4753780-4765626)
ARC9: Completed archiving thread 1 sequence 235 (4753780-4765626)
LNS: Standby redo logfile selected for thread 1 sequence 235 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 1 thread 1 sequence 235
LNS: Completed archiving log 1 thread 1 sequence 235
LNS: Standby redo logfile selected for thread 1 sequence 236 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 2 thread 1 sequence 236
LNS: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (3135)
LNS: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned
Error 3135 for archive log file 2 to 'stand'
LNS: Failed to archive log 2 thread 1 sequence 236 (3135)
ARCa: Beginning to archive thread 1 sequence 236 (4765626-4768914)
ARCa: Completed archiving thread 1 sequence 236 (4765626-4768914)
LNS: Standby redo logfile selected for thread 1 sequence 236 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 2 thread 1 sequence 236
LNS: Completed archiving log 2 thread 1 sequence 236
LNS: Standby redo logfile selected for thread 1 sequence 237 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 1 thread 1 sequence 237
LNS: Completed archiving log 1 thread 1 sequence 237
ARCb: Beginning to archive thread 1 sequence 237 (4768914-4770603)
ARCb: Completed archiving thread 1 sequence 237 (4768914-4770603)
LNS: Standby redo logfile selected for thread 1 sequence 238 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 2 thread 1 sequence 238
LNS: Completed archiving log 2 thread 1 sequence 238
ARCc: Beginning to archive thread 1 sequence 238 (4770603-4770651)
ARCc: Completed archiving thread 1 sequence 238 (4770603-4770651)
LNS: Standby redo logfile selected for thread 1 sequence 239 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 1 thread 1 sequence 239
LNS: Completed archiving log 1 thread 1 sequence 239
ARCd: Beginning to archive thread 1 sequence 239 (4770651-4773918)
ARCd: Completed archiving thread 1 sequence 239 (4770651-4773918)
LNS: Standby redo logfile selected for thread 1 sequence 240 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 2 thread 1 sequence 240
LNS: Completed archiving log 2 thread 1 sequence 240
ARCe: Beginning to archive thread 1 sequence 240 (4773918-4773976)
ARCe: Completed archiving thread 1 sequence 240 (4773918-4773976)
LNS: Standby redo logfile selected for thread 1 sequence 241 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 1 thread 1 sequence 241
LNS: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (3135)
LNS: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned
Error 3135 for archive log file 1 to 'stand'
LNS: Failed to archive log 1 thread 1 sequence 241 (3135)
ARC3: Beginning to archive thread 1 sequence 241 (4773976-4774673)
ARC3: Completed archiving thread 1 sequence 241 (4773976-4774673)
LNS: Standby redo logfile selected for thread 1 sequence 241 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 1 thread 1 sequence 241
LNS: Completed archiving log 1 thread 1 sequence 241
LNS: Standby redo logfile selected for thread 1 sequence 242 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 2 thread 1 sequence 242
LNS: Completed archiving log 2 thread 1 sequence 242
ARC4: Beginning to archive thread 1 sequence 242 (4774673-4776045)
ARC4: Completed archiving thread 1 sequence 242 (4774673-4776045)
LNS: Standby redo logfile selected for thread 1 sequence 243 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 1 thread 1 sequence 243
LNS: Completed archiving log 1 thread 1 sequence 243
ARC5: Beginning to archive thread 1 sequence 243 (4776045-4776508)
ARC5: Completed archiving thread 1 sequence 243 (4776045-4776508)
LNS: Standby redo logfile selected for thread 1 sequence 244 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 2 thread 1 sequence 244
LNS: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (3135)
LNS: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned
Error 3135 for archive log file 2 to 'stand'
LNS: Failed to archive log 2 thread 1 sequence 244 (3135)
ARC6: Beginning to archive thread 1 sequence 244 (4776508-4778741)
ARC6: Completed archiving thread 1 sequence 244 (4776508-4778741)
ARC7: Beginning to archive thread 1 sequence 245 (4778741-4778781)
ARC7: Completed archiving thread 1 sequence 245 (4778741-4778781)
ARC8: Beginning to archive thread 1 sequence 246 (4778781-4778787)
ARC8: Completed archiving thread 1 sequence 246 (4778781-4778787)
ARC9: Standby redo logfile selected for thread 1 sequence 244 for destination LO
G_ARCHIVE_DEST_2
ARC3: Beginning to archive thread 1 sequence 247 (4778787-4778934)
LNS: Standby redo logfile selected for thread 1 sequence 247 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 1 thread 1 sequence 247
ARC3: Completed archiving thread 1 sequence 247 (4778787-4778934)
LNS: Completed archiving log 1 thread 1 sequence 247
LNS: Standby redo logfile selected for thread 1 sequence 248 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 2 thread 1 sequence 248
ARC4: Beginning to archive thread 1 sequence 248 (4778934-4781018)
LNS: Completed archiving log 2 thread 1 sequence 248
ARC4: Completed archiving thread 1 sequence 248 (4778934-4781018)
LNS: Standby redo logfile selected for thread 1 sequence 249 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 1 thread 1 sequence 249
LNS: Completed archiving log 1 thread 1 sequence 249
ARC5: Beginning to archive thread 1 sequence 249 (4781018-4781033)
ARC5: Completed archiving thread 1 sequence 249 (4781018-4781033)
LNS: Standby redo logfile selected for thread 1 sequence 250 for destination LOG
_ARCHIVE_DEST_2
LNS: Beginning to archive log 2 thread 1 sequence 250
233 rows selected.
SQL>
Standby
SQL> select message from v$dataguard_status;
MESSAGE
ARC0: Archival started
ARC1: Archival started
ARC2: Archival started
ARC3: Archival started
ARC4: Archival started
ARC5: Archival started
ARC6: Archival started
ARC7: Archival started
ARC8: Archival started
ARC9: Archival started
ARCa: Archival started
ARCb: Archival started
ARCc: Archival started
ARCd: Archival started
ARCe: Archival started
ARCf: Archival started
ARCg: Archival started
ARCh: Archival started
ARCi: Archival started
ARCj: Archival started
ARCk: Archival started
ARCl: Archival started
ARCm: Archival started
ARCn: Archival started
ARCo: Archival started
ARCp: Archival started
ARCq: Archival started
ARCr: Archival started
ARCs: Archival started
ARC1: Becoming the 'no FAL' ARCH
ARC2: Becoming the heartbeat ARCH
Error 1017 received logging on to the standby
FAL[client, ARC2]: Error 16191 connecting to RACDB for fetching gap sequence
ARCt: Archival started
Attempt to start background Managed Standby Recovery process
MRP0: Background Managed Standby Recovery process started
Managed Standby Recovery starting Real Time Apply
Media Recovery Log /u02/stand/archive/1_119_758280976.arc
Media Recovery Waiting for thread 2 sequence 183
RFS[1]: Assigned to RFS process 30110
RFS[1]: Identified database type as 'physical standby': Client is ARCH pid 25980
RFS[2]: Assigned to RFS process 30118
RFS[2]: Identified database type as 'physical standby': Client is ARCH pid 26008
RFS[3]: Assigned to RFS process 30124
RFS[3]: Identified database type as 'physical standby': Client is ARCH pid 26029
RFS[4]: Assigned to RFS process 30130
RFS[4]: Identified database type as 'physical standby': Client is ARCH pid 26021
ARC4: Beginning to archive thread 1 sequence 244 (4776508-4778741)
ARC4: Completed archiving thread 1 sequence 244 (0-0)
RFS[5]: Assigned to RFS process 30144
RFS[5]: Identified database type as 'physical standby': Client is LGWR ASYNC pid
26128
Primary database is in MAXIMUM PERFORMANCE mode
ARC5: Beginning to archive thread 1 sequence 247 (4778787-4778934)
ARC5: Completed archiving thread 1 sequence 247 (0-0)
ARC6: Beginning to archive thread 1 sequence 248 (4778934-4781018)
ARC6: Completed archiving thread 1 sequence 248 (0-0)
ARC7: Beginning to archive thread 1 sequence 249 (4781018-4781033)
ARC7: Completed archiving thread 1 sequence 249 (0-0)
58 rows selected.
SQL>
Also find the output for the primary alert log file.
Tue Aug 30 10:45:41 2011
LNS: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (3135)
LNS: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned
Errors in file /u01/app/oracle/diag/rdbms/racdb/RACDB1/trace/RACDB1_nsa2_26128.trc:
ORA-03135: connection lost contact
Error 3135 for archive log file 2 to 'stand'
Errors in file /u01/app/oracle/diag/rdbms/racdb/RACDB1/trace/RACDB1_nsa2_26128.trc:
ORA-03135: connection lost contact
LNS: Failed to archive log 2 thread 1 sequence 244 (3135)
Errors in file /u01/app/oracle/diag/rdbms/racdb/RACDB1/trace/RACDB1_nsa2_26128.trc:
ORA-03135: connection lost contact
Tue Aug 30 10:50:25 2011
Thread 1 advanced to log sequence 245 (LGWR switch)
Current log# 1 seq# 245 mem# 0: +ASM_DATA1/racdb/onlinelog/group_1.268.758280977
Current log# 1 seq# 245 mem# 1: +ASM_DATA2/racdb/onlinelog/group_1.265.758280979
Tue Aug 30 10:50:25 2011
Archived Log entry 612 added for thread 1 sequence 244 ID 0x2d0e0689 dest 1:
Thread 1 cannot allocate new log, sequence 246
Checkpoint not complete
Current log# 1 seq# 245 mem# 0: +ASM_DATA1/racdb/onlinelog/group_1.268.758280977
Current log# 1 seq# 245 mem# 1: +ASM_DATA2/racdb/onlinelog/group_1.265.758280979
Thread 1 advanced to log sequence 246 (LGWR switch)
Current log# 2 seq# 246 mem# 0: +ASM_DATA1/racdb/onlinelog/group_2.269.758280979
Current log# 2 seq# 246 mem# 1: +ASM_DATA2/racdb/onlinelog/group_2.266.758280981
Tue Aug 30 10:50:27 2011
Archived Log entry 613 added for thread 1 sequence 245 ID 0x2d0e0689 dest 1:
Thread 1 cannot allocate new log, sequence 247
Checkpoint not complete
Current log# 2 seq# 246 mem# 0: +ASM_DATA1/racdb/onlinelog/group_2.269.758280979
Current log# 2 seq# 246 mem# 1: +ASM_DATA2/racdb/onlinelog/group_2.266.758280981
Thread 1 advanced to log sequence 247 (LGWR switch)
Current log# 1 seq# 247 mem# 0: +ASM_DATA1/racdb/onlinelog/group_1.268.758280977
Current log# 1 seq# 247 mem# 1: +ASM_DATA2/racdb/onlinelog/group_1.265.758280979
Tue Aug 30 10:50:30 2011
Archived Log entry 614 added for thread 1 sequence 246 ID 0x2d0e0689 dest 1:
Tue Aug 30 10:51:37 2011
ARC9: Standby redo logfile selected for thread 1 sequence 244 for destination LOG_ARCHIVE_DEST_2
Tue Aug 30 10:51:39 2011
Thread 1 advanced to log sequence 248 (LGWR switch)
Current log# 2 seq# 248 mem# 0: +ASM_DATA1/racdb/onlinelog/group_2.269.758280979
Current log# 2 seq# 248 mem# 1: +ASM_DATA2/racdb/onlinelog/group_2.266.758280981
Tue Aug 30 10:51:39 2011
Archived Log entry 620 added for thread 1 sequence 247 ID 0x2d0e0689 dest 1:
Tue Aug 30 10:51:39 2011
LNS: Standby redo logfile selected for thread 1 sequence 247 for destination LOG_ARCHIVE_DEST_2
LNS: Standby redo logfile selected for thread 1 sequence 248 for destination LOG_ARCHIVE_DEST_2
Tue Aug 30 11:08:27 2011
Thread 1 advanced to log sequence 249 (LGWR switch)
Current log# 1 seq# 249 mem# 0: +ASM_DATA1/racdb/onlinelog/group_1.268.758280977
Current log# 1 seq# 249 mem# 1: +ASM_DATA2/racdb/onlinelog/group_1.265.758280979
Tue Aug 30 11:08:27 2011
Archived Log entry 622 added for thread 1 sequence 248 ID 0x2d0e0689 dest 1:
Tue Aug 30 11:08:27 2011
LNS: Standby redo logfile selected for thread 1 sequence 249 for destination LOG_ARCHIVE_DEST_2
Thread 1 cannot allocate new log, sequence 250
Checkpoint not complete
Current log# 1 seq# 249 mem# 0: +ASM_DATA1/racdb/onlinelog/group_1.268.758280977
Current log# 1 seq# 249 mem# 1: +ASM_DATA2/racdb/onlinelog/group_1.265.758280979
Thread 1 advanced to log sequence 250 (LGWR switch)
Current log# 2 seq# 250 mem# 0: +ASM_DATA1/racdb/onlinelog/group_2.269.758280979
Current log# 2 seq# 250 mem# 1: +ASM_DATA2/racdb/onlinelog/group_2.266.758280981
Tue Aug 30 11:08:31 2011
Archived Log entry 624 added for thread 1 sequence 249 ID 0x2d0e0689 dest 1:
LNS: Standby redo logfile selected for thread 1 sequence 250 for destination LOG_ARCHIVE_DEST_2
Thanks & Regards,
Poorna Prasad.S -
2012 R2: Use of Single Instance Store (SCCMContentLib) only
Hi all,
We have a System Center 2012 R2 Configuration Manager already running with almost every role (standalone, besides from wsus on another server as Software Update Point).
The SC2012R2 server has several HDDs C:,D:,E:,S: with SCCMContentLib
directories on C: & D: and a Sources shared resource in drive S: which is accessed through UNC
\\SC2012R2\Sources\
The fact is that we think this configuration is wasting space, since it seems that both
SCCMContentLib and the Sources shared directory serve the same contents in different ways, thus doubling our disk usage.
As per
Howard Hoy post: By default, the content is single storage and browsing to a DP does not store all the files in a package id folder where you could find an install and execute it. You can enable the older method which will create a separate folder share.
This will require double the disk space!
So here is our question: Is there any way to get rid of the Sources
shared directory and serve contents to clients using SCCMContentLib only? In other words, besides of the Content Library at the Central Administration Site, are we forced to use the Sources in S: to deliver content or can we get rid of
it somehow?
It seems that Automatic Deployment Rules, at the end of the wizard force you to select a UNC to store the Deployment Package. Is this the correct set up and we simply have to deal with it and keep adding more storage to the server?
As per Microsoft documentation about the
Content Library: "The content library is located on each site server and on each distribution point and provides a single instance store for content files"
However, if we are forced to add extra hdd space for the Sources directory to be accessed via UNC, it seems a waste of space.
Thanks in advance for any clarification on this subject.
Regards.
"So here is our question: Is there any way to get rid of the Sources
shared directory and serve contents to clients using SCCMContentLib only?"
No. There is no way to do this. ConfigMgr is an enterprise content deployment system, and this design is a result of efficiently supporting larger enterprises. It also doesn't make sense for it to directly use source files that it does not control.
Yes, this does create some additional disk space overhead; however, disk space is cheap. You should be storing your source files on cheap disk and also note that there is no reason this has to be directly attached to your ConfigMgr site server. All source
content can be accessed using a UNC so putting the source files on a file server or alternate system that has cheap storage available to it is an easy "solution".
"It seems that Automatic Deployment Rules, at the end of the wizard force you to select a UNC to store the Deployment Package. Is this the correct set up and we simply have to deal with it and keep adding more storage to the server?"
Yes, all applications, drivers, OS image packages, OS install packages, and software update packages must have their content accessed using a UNC. Good practice also dictates that even your package files use a UNC for their source files. That doesn't mean
the UNC can't be local to the site server, although see my comment above about using cheap disk for this. Here's a related blog post that you may want to review:
http://blog.configmgrftw.com/configmgr-folder-structure/
"it seems a waste of space."
Ultimately, yes, you end up with duplicate copies of all of your source files. But, to be blunt, use cheap disk for this and move on as you can't change this.
Jason | http://blog.configmgrftw.com | @jasonsandys -
Query performance on RAC is a lot slower than single instance
I simply followed the steps provided by Oracle to install a 2-node RAC database.
The performance of insertion (Java, thin OJDBC) is pretty much the same compared to a single instance on NFS.
However, the performance of select queries is very slow compared to a single instance.
I have tried using different methods for the storage configuration (asm with raw, ocfs2) but the performance is still slow.
When I shut down one instance, leaving only one instance up, the query performance is very fast (as fast as one single instance)
I am using rhel5 64 bit (16G of physical memory) and oracle 11.1.0.6 with patchset 11.1.0.7
Could someone help me how to debug this problem?
Thanks,
Chau
Edited by: user638637 on Aug 6, 2009 8:31 AM
Top 5 timed foreground events:
DB CPU: time 943(s), %db time (47.5%)
cursor: pin S wait on X: waits (13,940), time (321s), avg wait (23ms), %db time (16.15%)
direct path read: waits (95,436), time (288s), avg wait (3ms), %db time (14.51%)
IPC send completion sync: waits (546,712), time (149s), avg wait (0), %db time (7.49%)
gc cr multi block request: waits (7,574), time (78s), avg wait (10ms), %db time (4.0%)
Another thing I see is that the "avg global cache cr block flush time (ms)" is 37.6 ms.
The DB CPU Oracle metric is the amount of CPU time (in microseconds) spent on database user-level calls.
You should check the SQL statements from the report and tune them.
- Check the execution plan.
- If no index is used, consider adding one.
SQL> set autot trace explain
SQL> sql statement;
cursor: pin S wait on X:
A session waits on this event when requesting a mutex for sharable operations related to pins (such as executing a cursor), but the mutex cannot be granted because it is being held exclusively by another session (which is most likely parsing the cursor).
Use bind variables in your SQL; avoid dynamic SQL built with literals.
http://blog.tanelpoder.com/2008/08/03/library-cache-latches-gone-in-oracle-11g/
Also check the MEMORY_TARGET initialization parameter.
By the way, you have a high "DB CPU" (47.5%), so you should tune your SQL statements (check the SQL in the report and tune it).
Good Luck -
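The bind-variable advice above can be illustrated with a language-neutral sketch (plain Python, no Oracle connection; the emp/empno names are invented for the example): with literals, every execution produces a distinct SQL text that must be hard-parsed, while a bind placeholder lets all executions share one cursor in the shared pool.

```python
# Illustration only (not Oracle client code): why bind variables reduce
# hard parses and "cursor: pin S wait on X" contention. Table/column
# names here are hypothetical.

def literal_sql(emp_id):
    # Each value yields a different statement text.
    return f"SELECT ename FROM emp WHERE empno = {emp_id}"

def bind_sql():
    # :id is an Oracle-style bind placeholder; the value travels separately.
    return "SELECT ename FROM emp WHERE empno = :id"

literal_texts = {literal_sql(i) for i in range(1000)}
bind_texts = {bind_sql() for _ in range(1000)}

print(len(literal_texts))  # 1000 distinct statements to parse
print(len(bind_texts))     # 1 shared statement
```

One thousand lookups with literals mean one thousand library-cache entries competing for mutexes; with a bind variable they all pin the same child cursor.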
Single Instance Storage - WIMs
Consider this:
You update a WIM file which is fairly large. You have 300 DP's, and until any particular DP receives the changes, you'll get a "hash value is not correct" on any OSD task sequences using that WIM.
Someone came up with the idea:
Have two different WIMs that you rotate. Have Image A assigned to the task sequence, and update Image B when you have a change. Once Image B is everywhere, change the task sequence to use Image B. Next change to the image goes to Image
A.
Well, it seems that neither single-instance storage nor re-transmittal avoidance works. Here's what I did:
Add a new OS image with a path such as this:
\\SCCM\Share\Win7.wim
Add this file to a distribution point, wait for it to appear
Create a new OS image with the *exact* same path
Add it to the same DP.
Both the file transmits (even though it's already there) and the file is stored in SCCMContentLib *again*.
So, I'm really trying to prevent the WIM from transmitting over the WAN again, and I certainly don't want it stored twice. Is deduplication not applicable to WIM files? Just applications/packages/drivers? Does it de-dup after the fact?
From the blog post Torsten linked:
"For the scenario where the admin creates and distributes a new OS image package by updating some properties of an already existing OS image (WIM format), although only changed file blocks will be sent to the remote server when the “Enable Binary Differential Replication” (BDR)
option is enabled for the package, the entire changed file will be stored in the content library, even though the original and the revised WIM file differ only by a few blocks. When Data Deduplication is used in the same scenario, only a few
extra blocks will be used to store the new data instead of the entire file."
With that in mind, it seems that even though your two WIM files are from the exact same source path, what gets transmitted to the DP is unique enough that it must store two copies. -
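The block-level point in the quoted passage can be illustrated with a generic fixed-size-block deduplication sketch (plain Python; this is not ConfigMgr's actual on-disk format): two files that differ in a single block share almost all of their storage under dedup, whereas whole-file storage keeps two full copies.

```python
# Fixed-size-block dedup sketch: an "original WIM" and a "revised WIM"
# differing in one 4 KiB block. A dedup store keeps unique blocks only.
BLOCK = 4096

def blocks(data):
    # Split a byte string into fixed-size blocks.
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

# 1024 distinct blocks (each block is a repeated 2-byte index prefix).
original = b"".join(i.to_bytes(2, "big") * (BLOCK // 2) for i in range(1024))
revised = bytearray(original)
revised[0:BLOCK] = b"\xff" * BLOCK  # change just the first block

store = set(blocks(original)) | set(blocks(bytes(revised)))
print(len(blocks(original)))  # 1024 blocks per file
print(len(store))             # 1025 unique blocks stored, not 2048
```

Whole-file storage would hold 2048 blocks; block-level dedup holds 1025, which is the behavior the blog post attributes to Data Deduplication versus the content library.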
Scalability of a single instance of Sun One 7
Up to how many concurrent users can a single instance of Sun ONE 7 scale (let's say on an E10K with six CPUs and 6-8 GB of memory), considering that a single JVM runs everything (HTTP server, web & EJB containers, messaging server, etc.)? Are there any numbers available? Is there an option to have multiple JVMs within a single app server instance (like multiple KJS processes in iPlanet, for scalability)? If not, is clustering (in Enterprise Edition) the only option to achieve the equivalent of multiple KJS?
Our performance engineers can comment on the core of your question, but here are some answers to other aspects of your post:
- The Message Broker always runs out of process. By default, each app server instance starts/stops its own broker process. After creating an app server instance, you can disable the auto start/stop of the broker and map your JMS resources to a broker that could be shared by multiple app server instances.
- In AS 7.0 Platform Edition and Standard Edition, there is one JVM process per app server instance. You can configure multiple app server instances to house the same apps and resources, but you will have to keep these configs in sync on your own and you will need to provide your own front end load balancer that routes traffic across these instances.
In the Enterprise Edition, configuration of cloned instances and built in load balancing across such instances will be available in the product.
- Keep in mind that the built-in HTTP server is a highly refined and efficient subsystem. You might be thinking that its presence incurs a lot of overhead, when in fact it is very lean.
We'll have our perf engineers comment on the scalability characteristics of app server 7.
Thanks,
Chris -
Are 2 separate Apple TV's capable of linking to a single instance of iTunes
Are 2 separate Apple TV's capable of linking to a single instance of iTunes?
You're correct that they don't count as authorised computers, but there is a limit on the number you can connect. I think it's five.
Most people only run one or a couple.
Winston had 5 from memory, and I seem to recall another poster had tried to add a 6th AppleTV but that either didn't work or one of the initial ones was replaced.
I agree in theory you could add countless devices but in practice you can't.
AC -
Data Guard configuration from 2-node RAC to single instance without ASM
Hi Gurus,
Oracle Version : 11.2.0.1
Operating system:linux.
Here I am trying to configure Data Guard from a 2-node RAC to a single-instance standby database. I have made all the changes in the parameter files for both the primary and standby databases, and when I try to duplicate my target database it gives the error shown below.
[oracle@rac1 dbs]$ rman target / auxiliary sys/qfundracdba@poorna
Recovery Manager: Release 11.2.0.1.0 - Production on Thu Jul 21 14:49:01 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
connected to target database: QFUNDRAC (DBID=3138886598)
connected to auxiliary database: QFUNDRAC (not mounted)
RMAN> duplicate target database for standby from active database;
Starting Duplicate Db at 21-JUL-11
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=63 device type=DISK
contents of Memory Script:
backup as copy reuse
targetfile '/u01/app/oracle/product/11.2.0/db_1/dbs/orapwqfundrac1' auxiliary format
'/u01/app/oracle/product/11.2.0/db_1//dbs/orapwpoorna' ;
executing Memory Script
Starting backup at 21-JUL-11
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=10 instance=qfundrac1 device type=DISK
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 07/21/2011 14:49:29
RMAN-03015: error occurred in stored script Memory Script
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 07/21/2011 14:49:29
ORA-17629: Cannot connect to the remote database server
ORA-17627: ORA-01017: invalid username/password; logon denied
ORA-17629: Cannot connect to the remote database server
Here I was able to connect to my auxiliary database, as shown below:
[oracle@rac1 dbs]$ rman target /
Recovery Manager: Release 11.2.0.1.0 - Production on Thu Jul 21 15:00:10 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
connected to target database: QFUNDRAC (DBID=3138886598)
RMAN> connect auxiliary sys/qfundracdba@poorna
connected to auxiliary database: QFUNDRAC (not mounted)
Can anyone please help me.
Thanks & Regards
Poorna Prasad.S
Hi All,
Can anyone please check both of my parameter files and tell me if anything is wrong.
Primary Database parameters.
qfundrac1.__db_cache_size=2818572288
qfundrac2.__db_cache_size=3372220416
qfundrac1.__java_pool_size=16777216
qfundrac2.__java_pool_size=16777216
qfundrac1.__large_pool_size=16777216
qfundrac2.__large_pool_size=16777216
qfundrac1.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
qfundrac2.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
qfundrac1.__pga_aggregate_target=4294967296
qfundrac2.__pga_aggregate_target=4294967296
qfundrac1.__sga_target=4294967296
qfundrac2.__sga_target=4294967296
qfundrac1.__shared_io_pool_size=0
qfundrac2.__shared_io_pool_size=0
qfundrac1.__shared_pool_size=1375731712
qfundrac2.__shared_pool_size=855638016
qfundrac1.__streams_pool_size=33554432
qfundrac2.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/qfundrac/adump'
*.audit_trail='db'
*.cluster_database=true
*.compatible='11.2.0.0.0'
*.control_files='+ASM_DATA2/qfundrac/controlfile/current.256.754410759'
*.db_block_size=8192
*.db_create_file_dest='+ASM_DATA1'
*.db_create_online_log_dest_1='+ASM_DATA2'
*.db_domain=''
*.DB_FILE_NAME_CONVERT='/u02/poorna/oradata/','+ASM_DATA1/','/u02/poorna/oradata','+ASM_DATA2/'
*.db_name='qfundrac'
*.db_recovery_file_dest_size=40770732032
*.DB_UNIQUE_NAME='qfundrac'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=qfundracXDB)'
*.fal_client='QFUNDRAC'
*.FAL_SERVER='poorna'
qfundrac2.instance_number=2
qfundrac1.instance_number=1
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(qfundrac,poorna)'
*.LOG_ARCHIVE_DEST_1='LOCATION=+ASM_FRA VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=qfundrac'
*.LOG_ARCHIVE_DEST_2='SERVICE=boston ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=poorna'
*.LOG_ARCHIVE_DEST_STATE_1='ENABLE'
*.LOG_ARCHIVE_DEST_STATE_2='ENABLE'
*.LOG_ARCHIVE_FORMAT='%t_%s_%r.arc'
*.LOG_ARCHIVE_MAX_PROCESSES=30
*.LOG_FILE_NAME_CONVERT='/u02/poorna/oradata/','+ASM_DATA1/','/u02/poorna/oradata','+ASM_DATA2/'
*.open_cursors=300
*.pga_aggregate_target=4294967296
*.processes=300
*.remote_listener='racdb-scan.qfund.net:1521'
*.REMOTE_LOGIN_PASSWORDFILE='EXCLUSIVE'
*.sec_case_sensitive_logon=FALSE
*.sessions=335
*.sga_target=4294967296
*.STANDBY_FILE_MANAGEMENT='AUTO'
qfundrac2.thread=2
qfundrac1.thread=1
qfundrac1.undo_tablespace='UNDOTBS1'
qfundrac2.undo_tablespace='UNDOTBS2'
And my standby database parameter file:
poorna.__db_cache_size=314572800
poorna.__java_pool_size=4194304
poorna.__large_pool_size=4194304
poorna.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
poorna.__pga_aggregate_target=343932928
poorna.__sga_target=507510784
poorna.__shared_io_pool_size=0
poorna.__shared_pool_size=176160768
poorna.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/poorna/adump'
*.audit_trail='db'
*.compatible='11.2.0.0.0'
*.control_files='/u01/app/oracle/oradata/poorna/control01.ctl','/u01/app/oracle/flash_recovery_area/poorna/control02.ctl'
*.db_block_size=8192
*.db_domain=''
#*.db_name='poorna'
#*.db_recovery_file_dest='/u01/app/oracle/flash_recovery_area'
*.db_recovery_file_dest_size=4039114752
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=poornaXDB)'
*.local_listener='LISTENER_POORNA'
*.memory_target=849346560
*.open_cursors=300
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sec_case_sensitive_logon=FALSE
*.undo_tablespace='UNDOTBS1'
############### STAND By PARAMETERS ########
DB_NAME=qfundrac
DB_UNIQUE_NAME=poorna
LOG_ARCHIVE_CONFIG='DG_CONFIG=(poorna,qfundrac)'
#CONTROL_FILES='/arch1/boston/control1.ctl', '/arch2/boston/control2.ctl'
DB_FILE_NAME_CONVERT='+ASM_DATA1/','/u02/poorna/oradata/','+ASM_DATA2/','/u02/poorna/oradata'
LOG_FILE_NAME_CONVERT= '+ASM_DATA1/','/u02/poorna/oradata/','+ASM_DATA2/','/u02/poorna/oradata'
LOG_ARCHIVE_FORMAT=log%t_%s_%r.arc
LOG_ARCHIVE_DEST_1= 'LOCATION=/u02/ARCHIVE/poorna VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=poorna'
LOG_ARCHIVE_DEST_2= 'SERVICE=qfundrac ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=qfundrac'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
STANDBY_FILE_MANAGEMENT=AUTO
FAL_SERVER=qfundrac
FAL_CLIENT=poorna
Thanks & Regards,
Poorna Prasad.S -
Streams Setup from RAC to Single instance
Does anyone have a document on setting up Streams from RAC to non-RAC? I successfully set up Streams between two single instances, but I am having issues replicating here: Streams is set up on node 1 of the RAC and the apply process is set up on the single-instance node, but data is not replicating.
Appreciate any suggestions.
From Metalink Note 418755.1:
Additional Configuration for RAC Environments for a Source Database
Archive Logs
The archive log threads from all instances must be available to any instance
running a capture process. This is true for both local and downstream capture.
Queue Ownership
When Streams is configured in a RAC environment, each queue table has an
"owning" instance. All queues within an individual queue table are owned by
the same instance. The Streams components (capture/propagation/apply) all
use that same owning instance to perform their work. This means that
+ a capture process is run at the owning instance of the source queue.
+ a propagation job must run at the owning instance of the queue
+ a propagation job must connect to the owning instance of the target queue.
Ownership of the queue can be configured to remain on a specific instance,
as long as that instance is available, by setting the PRIMARY_INSTANCE
and/or SECONDARY_INSTANCE parameters of DBMS_AQADM.ALTER_QUEUE_TABLE.
If the primary_instance is set to a specific instance (ie, not 0), the queue
ownership will return to the specified instance whenever the instance is up.
Capture will automatically follow the ownership of the queue. If the ownership
changes while capture is running, capture will stop on the current instance
and restart at the new owner instance.
For queues created with Oracle Database 10g Release 2, a service will be
created with the service name= schema.queue and the network name
SYS$schema.queue.global_name for that queue. If the global_name of the
database does not match the db_name.db_domain name of the database, be sure
to include the global_name as a service name in the init.ora.
For propagations created with the Oracle Database 10g Release 2 code with
the queue_to_queue parameter to TRUE, the propagation job will deliver only
to the specific queue identified. Also, the source dblink for the target
database connect descriptor must specify the correct service (global name of
the target database ) to connect to the target database. For example, the
tnsnames.ora entry for the target database should include the CONNECT_DATA
clause in the connect descriptor for the target database. This clause should
specify (CONNECT_DATA=(SERVICE_NAME='global_name of target database')).
Do NOT include a specific INSTANCE in the CONNECT_DATA clause.
For example, consider the tnsnames.ora file for a database with the global name
db.mycompany.com. Assume that the alias name for the first instance is db1 and
that the alias for the second instance is db2. The tnsnames.ora file for this
database might include the following entries:
db.mycompany.com=
(description=
(load_balance=on)
(address=(protocol=tcp)(host=node1-vip)(port=1521))
(address=(protocol=tcp)(host=node2-vip)(port=1521))
(connect_data=
(service_name=db.mycompany.com)))
db1.mycompany.com=
(description=
(address=(protocol=tcp)(host=node1-vip)(port=1521))
(connect_data=
(service_name=db.mycompany.com)
(instance_name=db1)))
db2.mycompany.com=
(description=
(address=(protocol=tcp)(host=node2-vip)(port=1521))
(connect_data=
(service_name=db.mycompany.com)
(instance_name=db2)))
Use the load-balanced tnsnames.ora alias (db.mycompany.com above) in the target database link USING clause.
DBA_SERVICES lists all services for the database. GV$ACTIVE_SERVICES identifies
all active services for the database. In non-RAC configurations, the service
name will typically be the global_name. However, it is possible for users to
manually create alternative services and use them in the TNS connect_data
specification. For RAC configurations, the service will appear in these views
as SYS$schema.queue.global_name.
Propagation Restart
Use the procedures START_PROPAGATION and STOP_PROPAGATION from
DBMS_PROPAGATION_ADM to enable and disable the propagation schedule.
These procedures automatically handle queue_to_queue propagation.
Example:
exec DBMS_PROPAGATION_ADM.stop_propagation('name_of_propagation'); or
exec DBMS_PROPAGATION_ADM.stop_propagation('name_of_propagation',force=>true);
exec DBMS_PROPAGATION_ADM.start_propagation('name_of_propagation');
If you use the lower level DBMS_AQADM procedures to manage the propagation schedule,
be sure to explicitly specify the destination_queue name when queue_to_queue propagation has been configured.
Example:
DBMS_AQADM.UNSCHEDULE_PROPAGATION('source_queue_name','destination',destination_queue=>'specific_queue');
DBMS_AQADM.SCHEDULE_PROPAGATION('source_queue_name','destination',destination_queue=>'specific_queue');
DBMS_AQADM.ENABLE_PROPAGATION_SCHEDULE('source_queue_name','destination',destination_queue=>'specific_queue');
DBMS_AQADM.DISABLE_PROPAGATION_SCHEDULE('source_queue_name','destination',destination_queue=>'specific_queue');
DBMS_AQADM.ALTER_PROPAGATION_SCHEDULE('source_queue_name','destination',destination_queue=>'specific_queue');
Changing the GLOBAL_NAME of the Source Database
See the OPERATION section on Global_name below. The following are some
additional considerations when running in a RAC environment.
If the GLOBAL_NAME of the database is changed, ensure that any propagations
are dropped and recreated with the queue_to_queue parameter set to TRUE.
In addition, if the GLOBAL_NAME does not match the db_name.db_domain of the
database, include the global_name for the queue (NETWORK_NAME in DBA_QUEUES)
in the list of services for the database in the database parameter
initialization file.
Section 4. Target Site Configuration
The following recommendations apply to target databases, ie, databases in which
Streams apply is configured.
1. Privileges
Grant Explicit Privileges to APPLY_USER for the user tables
Examples:
Privileges for table level DML: INSERT/UPDATE/DELETE,
Privileges for table level DDL: CREATE (ANY) TABLE , CREATE (ANY) INDEX,
CREATE (ANY) PROCEDURE
2. Instantiation
Set Instantiation SCNs manually if not using export/import. If manually
configuring the instantiation scn for each table within the schema, use the
RECURSIVE=>TRUE option on the DBMS_STREAMS_ADM.SET_SCHEMA_INSTANTIATION_SCN
procedure
For DDL Set Instantiation SCN at next higher level(ie,SCHEMA or GLOBAL level).
3. Conflict Resolution
If updates will be performed in multiple databases for the same shared
object, be sure to configure conflict resolution. See the Streams
Replication Administrator's Guide Chapter 3 Streams Conflict Resolution,
for more detail.
To simplify conflict resolution on tables with LOB columns, create an error
handler to handle errors for the table. When registering the handler using
the DBMS_APPLY_ADM.SET_DML_HANDLER procedure, be sure to specify the
ASSEMBLE_LOBS parameter as TRUE.
In Streams Concepts manual 10.2 chapter 22: Monitoring Apply
Displaying detailed information about Apply errors.
4. Apply Process Configuration
A. Rules
If the maintain_* procedures are not suitable for your environment,
please use the ADD_RULES procedures (ADDTABLE_RULES , ADD_SCHEMA_RULES ,
ADD_GLOBAL_RULES (for DML and DDL), ADD_SUBSET_RULES (DML only).
These procedures minimize the number of steps required to configure Streams
processes. Also, it is possible to create rules for non-existent objects,
so be sure to check the spelling of each object specified in a rule carefully.
APPLY can be configured with or without a ruleset. The ADD_GLOBAL_RULES can
be used to apply all changes in the queue for the database. If no ruleset is
specified for the apply process, all changes in the queue are processed by the apply process.
A single Streams apply can process rules for multiple tables or schemas
located in a single queue that are received from a single source database .
For best performance, rules should be simple. Rules that include LIKE clauses are
not simple and will impact the performance of Streams.
To eliminate changes for particular tables or objects, specify the
include_tagged_lcr clause along with the table or object name in the
negative rule set for the Streams process. Setting this clause will
eliminate all changes, tagged or not, for the table or object.
B. Parameters
Set the following parameters after a apply process is created:
+ DISABLE_ON_ERROR=N Default: Y
If Y, then the apply process is disabled on the first unresolved error,
even if the error is not fatal.
If N, then the apply process continues regardless of unresolved errors.
+ PARALLELISM=3* Number of CPU Default: 1
Apply parameters can be set using the SET_PARAMETER procedure from the
DBMS_APPLY_ADM package. For example, to set the DISABLE_ON_ERROR parameter
of the streams apply process named APPLY_EX, use the following syntax while
logged in as the Streams Administrator:
exec dbms_apply_adm.set_parameter('apply_ex','disable_on_error','n');
Change the apply parallelism parameter recommendation to a lower number.
In general, try 4 or 8 and increase or decrease as necessary for your workload.
In some cases, performance can be improved by setting the following hidden
parameter. This parameter should be set when the major workload is UPDATEs
and the updates are performed on just a few columns of a many-column table.
+ _DYNAMIC_STMTS=Y Default: N
If Y, then for UPDATE statements, the apply process will optimize the
generation of SQL statements based on required columns.
+ _CHECKPOINT_FREQUENCY=1000
Increase the frequency of logminer checkpoints especially in a
database with significant LOB or DDL activity.
exec dbms_capture_adm.set_parameter('capture_ex','_checkpoint_frequency','1000');
5. Additional Configuration for RAC Environments for a Apply Database
Queue Ownership
When Streams is configured in a RAC environment, each queue table has an
"owning" instance. All queues within an individual queue table are owned
by the same instance. The Streams components (capture/propagation/apply)
all use that same owning instance to perform their work. This means that
the database link specified in the propagation must connect to the owning
instance of the target queue. the apply process is run at the owning instance
of the target queue
Ownership of the queue can be configured to remain on a specific instance,
as long as that instance is available, by setting the PRIMARY _INSTANCE and
SECONDARY_INSTANCE parameters of DBMS_AQADM.ALTER_QUEUE_TABLE. If the
primary_instance is set to a specific instance (ie, not 0), the queue
ownership will return to the specified instance whenever the instance is up.
Apply will automatically follow the ownership of the queue. If the ownership
changes while apply is running, apply will stop on the current instance and
restart at the new owner instance.
Changing the GLOBAL_NAME of the Database
See the OPERATION section on Global_name below. The following are some
additional considerations when running in a RAC environment.
If the GLOBAL_NAME of the database is changed, ensure that the queue is
empty before changing the name and that the apply process is dropped and
recreated with the apply_captured parameter = TRUE. In addition, if the
GLOBAL_NAME does not match the db_name.db_domain of the database, include
the GLOBAL_NAME in the list of services for the database in the database
parameter initialization file. -
SharePoint Designer workflow executing the same lines\steps multiple times in a single instance
I am working on a SharePoint 2010 workflow and I am facing an issue.
Problem:
1) I have few workflows running on different situation
2) Out of all workflows one is set to execute on "Item Created"
3) This workflow having some lines to execute and then create an item and then wait for a change in field value.
4) Expectation: This should create one instance and execute the lines\steps only once and wait for field change.
5) Actual Out-Come: The workflow is executing the lines\steps three times in a single instance.
6) The workflow history is showing the three time execution of all lines\steps using log to history.
7) It is an SPD workflow, so there is no while loop, but it is behaving like a while loop.
8) On create Item, we are creating task, so it is creating three tasks. It is sending three emails and also we are appending text in title so text is appended three times.
I did a lot of research but not able to find the solution.
Please help.
- Khan Abubakar
Disclaimer: The opinions expressed herein are my own personal opinions and do not represent any others' views in any way.
Hi Khan,
According to your description, my understanding is that your workflow executed some actions three times.
Which actions are the problematic ones in your workflow?
If they are "Start Approval Process" action, please check whether there are multiple approvers for tasks. If yes, the action will create one task for each approver.
Please create a new list and a new workflow based on the problematic workflow, test again, compare the result.
For reproducing your issue, I suggest you provide more information about your list and workflow (detailed actions or some screenshots). It will make others in the forum easy to find a solution for you.
Best Regards,
Wendy
Wendy Li
TechNet Community Support -
Hi,
We are in the process of consolidating several databases on a single instance of SQL Server 2012.
Databases are developed by outside vendors; they have to be able to install and support their databases, but they shouldn't be able to do anything with the other databases.
When I tried to migrate the first database, the vendor told me that on the former server he used the sa account in some batch.
On a previous thread
https://social.technet.microsoft.com/Forums/sqlserver/en-US/dc1f802f-d8de-4e2b-87e5-ccb289593fb7/security-for-multiple-applications-on-a-single-sql-server-2012-instance?forum=sqlsecurity
it was suggested to me that I create a login for each vendor and that this login should map a user in their respective databases.
To test, I simulated the process in a test database:
1 - I create the login and I scripted the command:
USE [master]
GO
/* For security reasons the login is created disabled and with a random password. */
/****** Object: Login [M02_Test] Script Date: 2014-12-02 16:23:58 ******/
CREATE LOGIN [M02_Test] WITH PASSWORD=N'ÈS^y¡¶=Å€"+y¤j|úªhÖféÎЕœEu
c', DEFAULT_DATABASE=[M02], DEFAULT_LANGUAGE=[us_english], CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF
GO
ALTER LOGIN [M02_Test] DISABLE
GO
2 - I create the user and scripted it
USE [M02]
GO
/****** Object: User [M02_Test] Script Date: 2014-12-02 16:29:41 ******/
CREATE USER [M02_Test] FOR LOGIN [M02_Test] WITH DEFAULT_SCHEMA=[dbo]
GO
Questions:
What should we do with the scrambled password: is it saved as it was entered, and should it be kept somewhere in a safe place?
Would that do the job for the vendor who used sa before?
Thanks for your advice.
I'm not sure why you would save the script. Why not create the login and, in one way or another, give the vendor the password? Keep the login disabled until it is needed.
I don't recall exactly what we said last time, but it occurs to me that the application may set things up at server level. Jobs are the prime thing that comes to mind, but there could be other things.
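To spot such server-level leftovers, a query along these lines can help (a sketch; msdb.dbo.sysjobs is the SQL Server Agent job table):

```sql
-- List SQL Server Agent jobs and their owners, to find
-- server-level objects a vendor application may have created.
SELECT j.name,
       j.enabled,
       SUSER_SNAME(j.owner_sid) AS job_owner
FROM msdb.dbo.sysjobs AS j
ORDER BY j.name;
```

Jobs owned by (or referencing) a vendor's login are a good starting point for the conversation about what their installer actually does outside the database.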
Now, here is an important observation. As long as the vendor's application was alone on the server, it was OK to give the vendor sysadmin rights. In this situation that is less acceptable, and as we said, you should only give the vendor db_owner in the database.
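Granting db_owner could look like this (a sketch, reusing the M02/M02_Test names from the question; ALTER ROLE ... ADD MEMBER is the SQL Server 2012 syntax):

```sql
USE [M02]
GO
-- Full control inside this one database; no server-level rights
ALTER ROLE [db_owner] ADD MEMBER [M02_Test];
GO
```

The vendor then has complete freedom within their own database but cannot touch the other consolidated databases or the server configuration.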
But the vendor will need to tell you what they do at server level. They should know this - unless they sell their app as an "alone-on-a-server" application. (And there are indeed such applications out there, even from Microsoft.) But there is a risk that they will bill you extra if you make their installation more difficult.
Maybe you will have to give some vendors sysadmin for the installation, but in that case you should ask them why they need it. If they don't need it, give them db_owner, and they will have to find out the hard way. (And you don't pay them for learning their own application.)
Erland Sommarskog, SQL Server MVP, [email protected]