Huge log.xml files are being generated in an 11.1.0.7 database.
Hi
We have an issue where the listener log file is growing very large and the listener keeps creating huge logs. We tried bouncing the listener and the database, but it still generates huge log files.
How big is the listener log file?
To save space, you can back up and truncate the existing listener log file with the steps below.
lsnrctl set log_status off (turn listener logging off)
gzip < listener.log > listener.log.07052011.gz (back up the listener log by compressing it)
Next, truncate the existing file: > listener.log (this empties the listener log while keeping the file in place)
lsnrctl set log_status on (turn listener logging on)
Again, I would like to know the listener.log file size.
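The backup-and-truncate sequence above can be sketched as a small script. The lsnrctl steps are shown as comments because they need a live listener, and the demo below runs against a scratch file rather than a real listener.log; the path and date format are examples only:

```shell
# Demo of the rotate-and-truncate steps on a scratch file.
# In real use, LOG would be the actual listener.log path.
LOG=$(mktemp)
echo "sample listener log entry" > "$LOG"

# Step 1: lsnrctl set log_status off   (stop the listener writing to the log)

# Step 2: compress a dated backup without touching the original file.
gzip -c "$LOG" > "$LOG.$(date +%m%d%Y).gz"

# Step 3: truncate the original in place. Because the same file (inode) is
# kept, no listener or database restart is needed.
: > "$LOG"

# Step 4: lsnrctl set log_status on    (resume logging)
```

Note that gzip -c writes to stdout, leaving the original file in place for the truncation step; plain gzip would replace it with a .gz file instead.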
Edited by: s_n_i_dba on Jul 5, 2011 11:37 AM
Similar Messages
-
Redo log files are not applied in DR of primary
Hi All,
I have a DR (standby) database of the primary on a QA server. The redo log files are not being properly applied to the DR database.
The Oracle version is 11.2.0.1. Some of the files get shipped and applied to the DR database automatically, but not all.
SQL> select status, error from v$archive_dest where dest_id=2; gives the following message
ERROR ORA-16086: Redo data cannot be written to the standby redo log
Please suggest.
Regards,
Shashi
Hi,
Sorry for the delay in responding. I am attaching the errors captured in the standby database.
Please advise
alert_abc.log
RFS[1780]: Identified database type as 'physical standby': Client is LGWR SYNC pid 21855
Primary database is in MAXIMUM AVAILABILITY mode
Standby controlfile consistent with primary
Standby controlfile consistent with primary
RFS[1780]: No standby redo logfiles of file size 94371840 AND block size 512 exist
Clearing online log 16 of thread 0 sequence number 0
Errors in file /oracle/diag/rdbms/abc_location11/abc/trace/abc_rfs_27994.trc:
ORA-00367: checksum error in log file header
ORA-00315: log 16 of thread 0, wrong thread # 1 in header
ORA-00312: online log 16 thread 0: '/oracle/abc/origlogB/log_g116m1.dbf'
Mon Nov 14 00:49:16 2011
Clearing online log 9 of thread 0 sequence number 0
Errors in file /oracle/diag/rdbms/abc_location11/abc/trace/abc_arc0_15653.trc:
/oracle/diag/rdbms/abc_location11/abc/trace/abc_rfs_27994.trc
2011-11-14 00:49:19.385
DDE rules only execution for: ORA 312
START Event Driven Actions Dump -
END Event Driven Actions Dump -
START DDE Actions Dump -
Executing SYNC actions
START DDE Action: 'DB_STRUCTURE_INTEGRITY_CHECK' (Async) -
DDE Action 'DB_STRUCTURE_INTEGRITY_CHECK' was flood controlled
END DDE Action: 'DB_STRUCTURE_INTEGRITY_CHECK' (FLOOD CONTROLLED, 1 csec) -
Executing ASYNC actions
END DDE Actions Dump (total 0 csec) -
ORA-00367: checksum error in log file header
ORA-00315: log 16 of thread 0, wrong thread # 1 in header
ORA-00312: online log 16 thread 0: '/oracle/abc/origlogB/log_g116m1.dbf'
DDE rules only execution for: ORA 312
START Event Driven Actions Dump -
END Event Driven Actions Dump -
START DDE Actions Dump -
Executing SYNC actions
START DDE Action: 'DB_STRUCTURE_INTEGRITY_CHECK' (Async) -
DDE Action 'DB_STRUCTURE_INTEGRITY_CHECK' was flood controlled
END DDE Action: 'DB_STRUCTURE_INTEGRITY_CHECK' (FLOOD CONTROLLED, -641 csec) -
Executing ASYNC actions
END DDE Actions Dump (total 0 csec) -
ORA-19527: physical standby redo log must be renamed
ORA-00312: online log 16 thread 0: '/oracle/abc/origlogB/log_g116m1.dbf'
Error 19527 clearing SRL 16
/oracle/diag/rdbms/abc_location11/abc/trace/abc_arc0_15653.trc
ORA-19527: physical standby redo log must be renamed
ORA-00312: online log 9 thread 0: '/oracle/abc/origlogA/log_g19m1.dbf'
Error 19527 clearing SRL 9
DDE rules only execution for: ORA 312
START Event Driven Actions Dump -
END Event Driven Actions Dump -
START DDE Actions Dump -
Executing SYNC actions -
Dear Oracle gurus,
I cannot understand the log file that is generated when I start the listener. I start the listener like this:
[root@rac1 admin]# su - oracle
[oracle@rac1 ~]$ lsnrctl start
LSNRCTL for Linux: Version 11.1.0.6.0 - Production on 07-OCT-2007 02:55:15
Copyright (c) 1991, 2007, Oracle. All rights reserved.
Starting /u01/app/oracle/product/11.1.0/db_1/bin/tnslsnr: please wait...
TNSLSNR for Linux: Version 11.1.0.6.0 - Production
System parameter file is /u01/app/oracle/product/11.1.0/db_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/diag/tnslsnr/rac1/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=rac1.westernsolution.co.uk)(PORT=1521)))
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1521)))
STATUS of the LISTENER
Alias LISTENER
Version TNSLSNR for Linux: Version 11.1.0.6.0 - Production
Start Date 07-OCT-2007 02:55:17
Uptime 0 days 0 hr. 0 min. 0 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/oracle/product/11.1.0/db_1/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/rac1/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=rac1.westernsolution.co.uk)(PORT=1521)))
The listener supports no services
The command completed successfully
[oracle@rac1 ~]$
This shows that the listener is connected, but in fact there is no listener, because when I check my log.xml it says:
<msg time='2007-10-07T02:41:05.792+01:00' org_id='oracle' comp_id='tnslsnr'
type= 'UNKNOWN' level='16' host_id='rac1.westernsolution.co.uk' <----------------
host_addr='192.168.122.1'>
<txt>07-OCT-2007 02:41:05 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=rac1.westernsolution.co.uk)(USER=oracle))(COMMAND=status)(ARGUMENTS=64)(SERVICE=listener)(VERSION=185599488)) * status * 0
</txt>
</msg>
I really do not understand why it says 'UNKNOWN'. I think I have made a mistake here.
To confirm that my listener is not working, I created a database link; the results are as follows:
[root@rac1 admin]# sqlplus
bash: sqlplus: command not found
[root@rac1 admin]# su - oracle
[oracle@rac1 ~]$ sqlplus
SQL*Plus: Release 11.1.0.6.0 - Production on Sun Oct 7 03:03:19 2007
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Enter user-name: system/manager
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> select * from emp@local;
select * from emp@local
ERROR at line 1:
ORA-12541: TNS:no listener
SQL>
Can anybody help me find out what went wrong with the listener by looking at the output I have pasted above?
Regards
sallil chudoory
Please close this TAR, I got the solution.
LOL -
This was not a TAR. This was a discussion with the volunteers (mostly non-Oracle employees at that) who use Oracle and are willing to help others.
If this was a TAR, you would have been paying someone for the solution. -
Reader 10.1 update fails, creates huge log files
Last night I saw the little icon in the system tray saying an update to Adobe Reader was ready to be installed.
I clicked it to allow the install.
Things seemed to go OK (on my Windows XP Pro system), although very slowly, and it finally got to copying files.
It seemed to still be doing something and was showing that it was copying file icudt40.dll. It still displayed the same thing ten minutes later.
I went to bed, and this morning it still showed that it was copying icudt40.dll.
There is no "Cancel" button, so this morning I had to stop the install through Task Manager.
Now, in my "Local Settings\TEMP" directory, I have a file called AdobeARM.log that is 2,350,686 KB in size and a file MSI38934.LOG that is 4,194,304 KB in size.
They are so big I can't even look at them to see what's in them. (Too big for Notepad. When I tried to open the smaller log file, AdobeARM.log, with Wordpad it was taking forever and showing only 1% loaded, so after five minutes, I terminated the Wordpad process so I could actually do something useful with my computer.)
You would think the installer would be smart enough to stop at some point when the log files begin to get enormous.
There doesn't seem to be much point to creating log files that are too big to be read.
The update did manage to remove the Adobe Reader X that was working on my machine, so now I can no longer read PDF files.
Maybe I should go back to Adobe Reader 9.
Reader X never worked very well.
Sometimes the menu bar showed up, sometimes it didn't.
PDF files at the physics e-print archive always loaded with page 2 displayed first. And if you forgot to disable the look-ahead capability, you could get banned from the e-print archive site altogether.
And I liked the user interface for the search function a lot better in version 9 anyway. Who wants to have to pop up a little box for your search phrase when you want to search? Searching is about the most important and routine activity one does, other than going from page to page and setting the zoom.
Hi Ankit,
Thank you for your e-mail.
Yesterday afternoon I deleted the > 2 GB AdobeARM.log file and the > 4.194 GB
MSI38934.LOG file.
So I can't upload them. I expect I would have had a hard time doing so
anyway.
It would be nice if the install program checked the size of the log files
before writing to them and gave up if the size was, say, three times larger
than some maximum expected size.
The install program must have some section that permits infinite retries or
some other way of getting into an endless loop. So another solution would be
to count the number of retries and terminate after some reasonable number of
attempts.
Something had clearly gone wrong and there was no way to stop it, except by
going into the Task Manager and terminating the process.
If the install program can't terminate when the log files get too big, or if
it can't get out of a loop some other way, there might at least be a "Cancel"
button so the poor user has an obvious way of stopping the process.
As it was, the install program kept on writing to the log files all night
long.
Immediately after deleting the two huge log files, I downloaded and installed
Adobe Reader 10.1 manually.
I was going to turn off Norton 360 during the install and expected there
would be some user input requested between the download and the install, but
there wasn't.
The window showed that the process was going automatically from download to
install.
When I noticed that it was installing, I did temporarily disable Norton 360
while the install continued.
The manual install went OK.
I don't know if temporarily disabling Norton 360 was what made the difference
or not.
I was happy to see that Reader 10.1 had kept my previous preference settings.
By the way, one of the default settings in "Web Browser Options" can be a
problem.
I think it is the "Allow speculative downloading in the background" setting.
When I upgraded from Reader 9 to Reader 10.0.x in April, I ran into a
problem.
I routinely read the physics e-prints at arXiv.org (maintained by the Cornell
University Library) and I got banned from the site because "speculative
downloading in the background" was on.
[One gets an "Access denied" HTTP response after being banned.]
I think the default value for "speculative downloading" should be unchecked
and users should be warned that one can lose the ability to access some sites
by turning it on.
I had to figure out why I was automatically banned from arXiv.org, change my
preference setting in Adobe Reader X, go to another machine and find out who
to contact at arXiv.org [I couldn't find out from my machine, since I was
banned], and then exchange e-mails with the site administrator to regain
access to the physics e-print archive.
The arXiv.org site has followed the standard for robot exclusion since 1994
(http://arxiv.org/help/robots), and I certainly didn't intend to violate the
rule against "rapid-fire requests," so it would be nice if the default
settings for Adobe Reader didn't result in an unintentional violation.
Richard Thomas -
7110 refuses to update and has huge logs
I have a 7110 which is running 2009.04.10.0.0,1-1.2
I have tried updating it to the latest release and got an error 255 saying that something generic went wrong.
The error reads "
An unanticipated system error occurred:
failed to update: job failed with status 255
This may be due to transient failure, or a software defect. If this problem persists, contact your service provider."
I also tried updating to 2009.04.10.2.1,1-1.15 and got the same error.
The audit log files are huge, with 7057801 entries. How do I clear these?
When I reboot, the system takes a very long time before the BUI is working; SSH times out and drops me to the normal Unix shell. I can see the ak process consuming over 25% of the CPU for a very long time. I always log out of this emergency shell without making any changes to the system.
I'm uploading a support bundle /cores/ak.b03120ea-64a7-69cc-ad72-b0854fd154b8.tar.gz in case someone at Sun has a look at this.
I am trying to upgrade this to the latest version as it is part of a VDI 3.1.1 setup, which is half upgraded from VDI 3.1 but waiting on the storage :)
Many thanks for any help.
Regards
Edited by: mlasham on Mar 27, 2010 9:27 AM
Edited by: mlasham on Mar 27, 2010 9:31 AM
alexandriap1975 wrote:
I have a Macbook Air that refuses to Update.
Update from what?
Update to what?
Which MacBook Air do you have (Apple icon from the menu bar > About This Mac > More Info)?
How are you attempting to update (e.g., use the App Store)?
alexandriap1975 wrote:
I can't get my Messages App as well as my Mailbox to quit.
Does "Mailbox" mean Apple's Mail app?
You could click the Apple logo in the menu bar, then "Force Quit" and exit both apps. -
Revision: 2716
Author: [email protected]
Date: 2008-08-04 01:18:12 -0700 (Mon, 04 Aug 2008)
Log Message:
SDK-15848 - Conditional compilation constants defined in flex-config.xml are never used if a single constant is specified on the command line
* There's a possibility this will break a conditional compilation test which disallows overwriting an existing definition -- I don't know if that will break the build, but the test should be removed either way.
* Using append syntax ("-define+=" on the command line or ant tasks, or append="true" in flex-config) and redefining a value works now if you use an already-defined namespace and name.
* So your flex-config may have -define=CONFIG::debug,false, and you may want -define+=CONFIG::debug,true from the commandline build, or FB build.
* Made the ASC ConfigVar fields final as a sanity check since overwriting is now allowed. It would be harder to track changes and subtle bugs if they were mutable. This means that you must build a new ConfigVar object if you need to make changes.
Bugs: SDK-15848
QA: Yes. Please read the updated javadocs in CompilerConfiguration. Tests need to be added to validate that overwriting is allowed, and happens correctly in different situations: I believe the order should be that flex-config is overwritten by a custom config (can we have more than one user config? is the order deterministic? I forget...), is overwritten by commandline or OEM. Did I miss any? (I didn't write code which changes this, it works however the existing configuration system allows overwriting and appending; if we have tests for that, maybe we don't need them duplicated for this feature.)
Doc: Yes. Please read the updated javadocs in CompilerConfiguration.
Reviewer: Pete
Ticket Links:
http://bugs.adobe.com/jira/browse/SDK-15848
http://bugs.adobe.com/jira/browse/SDK-15848
Modified Paths:
flex/sdk/trunk/modules/asc/src/java/macromedia/asc/embedding/ConfigVar.java
flex/sdk/trunk/modules/compiler/src/java/flex2/compiler/common/CompilerConfiguration.java
flex/sdk/trunk/modules/compiler/src/java/flex2/tools/oem/internal/OEMConfiguration.java
Please note: I AM USING:
JkOptions ForwardKeySize ForwardURICompat -ForwardDirectories
And that's what's supposed to fix this problem in the first place, right?? -
FMS - change directory where the log files are located?
I want to change the logs files directory from:
C:\Program Files (x86)\Adobe\Flash Media Server 3.5/logs
to:
D:\fmsLogs
Please help me to understand...
in adobe in:
Home / Flash Media Server 3.5 Configuration and Administration Guide / XML configuration files reference
it says:
in Logger.xml in Directory
Specifies the directory where the log files are located.
By default, the log files are located in the logs directory in the server installation directory.
Example:
<Directory>${LOGGER.LOGDIR}</Directory>
What does this mean: ${LOGGER.LOGDIR}?
in order to change the logs files directory from:
C:\Program Files (x86)\Adobe\Flash Media Server 3.5/logs
to:
D:\fmsLogs
do i need to write this:
<Directory>D:\fmsLogs</Directory>
Or what do I need to write?
It is totally not understandable from this example...
Big thanks for any help,
cheinan
You can change LOGGER.LOGDIR in fms.ini to your preferred location, i.e. D:\fmsLogs, and restart FMS.
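A minimal sketch of that fms.ini edit (the key is the same LOGGER.LOGDIR variable that Logger.xml substitutes via ${LOGGER.LOGDIR}; the D:\fmsLogs path is just the example from this thread):

```
LOGGER.LOGDIR = D:\fmsLogs
```

After restarting FMS, the ${LOGGER.LOGDIR} reference in Logger.xml should resolve to this directory.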
Now if you want to change individual logs, you can change them in Logger.xml; by default Logger.xml will use the value from fms.ini. -
Entries with Oracle's IP in the log.xml file.
This is about a couple of log entries I found in /dcm/logs/emd_logs/log.xml file.
While I have some idea about the message itself, what I don't understand is why the <HOST_NWADDR> element points to 148.87.12.57.
Can anyone shed some light on this.
Does R2 connect to Oracle in some way each time it is brought up? The IP 148.87.12.58 is the IP of the PortalStudio, and I figured its neighbour belongs to Oracle too.
Do I need to keep the box connected to the Internet all the time for it to work?
Could someone at Oracle tell me what my box is trying to do by connecting to 148.87.12.57?
Please find the entries below.
Thanx
Vinodh R.
These are the entries I found
i)
<MESSAGE>
<HEADER>
<TSTZ_ORIGINATING>2002-07-08T03:56:36.721-04:00</TSTZ_ORIGINATING>
<COMPONENT_ID>OC4J</COMPONENT_ID>
<MSG_TYPE TYPE="ERROR"></MSG_TYPE>
<MSG_GROUP>n/a</MSG_GROUP>
<MSG_LEVEL>1</MSG_LEVEL>
<HOST_ID>myportal</HOST_ID>
<HOST_NWADDR>148.87.12.57</HOST_NWADDR>
<MODULE_ID>iAS_dcm/oracle/defaultLogger/ExceptionLogger</MODULE_ID>
<PROCESS_ID>null-Thread[ApplicationServerThread,5,applicationServerThreadGroup]</PROCESS_ID>
<USER_ID>root</USER_ID>
</HEADER>
<PAYLOAD>
<MSG_TEXT>[ RM ] Exception in repository API new SchemaManager()</MSG_TEXT>
<SUPPL_DETAIL><![CDATA[oracle.ias.repository.schema.SchemaException: Password could not be retrieved
at oracle.ias.repository.IASSchema.init(IASSchema.java:152)
at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:243)
at com.evermind.util.ThreadPoolThread.run(ThreadPoolThread.java:64)
]]></SUPPL_DETAIL>
</PAYLOAD>
</MESSAGE>
ii)
<MESSAGE>
<HEADER>
<TSTZ_ORIGINATING>2002-07-08T06:41:27.339-04:00</TSTZ_ORIGINATING>
<COMPONENT_ID>OC4J</COMPONENT_ID>
<MSG_TYPE TYPE="ERROR"></MSG_TYPE>
<MSG_GROUP>n/a</MSG_GROUP>
<MSG_LEVEL>1</MSG_LEVEL>
<HOST_ID>myportal</HOST_ID>
<HOST_NWADDR>148.87.12.57</HOST_NWADDR>
<MODULE_ID>iAS_dcm/oracle/defaultLogger/ExceptionLogger</MODULE_ID>
<PROCESS_ID>null-Thread[ApplicationServerThread,5,applicationServerThreadGroup]</PROCESS_ID>
<USER_ID>oracle</USER_ID>
</HEADER>
<PAYLOAD>
<MSG_TEXT>[ RM ] Exception in repository API getDBConnect()</MSG_TEXT>
<SUPPL_DETAIL><![CDATA[oracle.ias.repository.schema.SchemaException: Unable to connect to Directory Server:javax.naming.CommunicationException: oracleportal.peesh.com:389 [Root exception is java.net.ConnectException: Connection refused]
at oracle.ias.repository.directory.DirectoryReader.connect(DirectoryReader.java:104)
at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:243)
at com.evermind.util.ThreadPoolThread.run(ThreadPoolThread.java:64)
]]></SUPPL_DETAIL>
</PAYLOAD>
</MESSAGE>
The <url-pattern> tells the application server which requests are to be handled by the FacesServlet. If the requested URL matches, then the FacesServlet handles the request.
This parameter is not specific to JSF. -
ESB_HOME error in log.xml after installation
What is this mysterious error: failed to get ESB_HOME: java.lang.NullPointerException
in the log.xml file after installing the SOA Suite package?
The ESB Console also shows an empty service list.
I have tried a couple of times to reinstall the Suite with the Basic install, but nothing helps.
Does anyone know this error?
Cheers.
Hello,
I've solved the problem.
In the stack.xml file there are multiple software components with the same name, here "SERVERCORE".
I had 3 entries, two with patch-level 0 and one with patch-level 1.
I deleted the two entries with patch-level 0, and then it worked.
Regards Christian -
ADF Logger - logging.xml file
Hi,
We are building an ADF application and created our logger, which is a wrapper around the ADFLogger. As expected, logging works just fine for the ADF applications deployed in WebLogic. However, there are parts of the application that could be standalone (pure Java) components deployed independently, and they leverage this wrapper logger too. For these applications we added the ADF libraries needed for the ADFLogger. Still, these apps are not able to log anything, and I think the main reason is that they cannot get a handle to the logging.xml file which defines the ODL handler. All other runtime parameters like
"-Djbo.debugoutput=adflogger -Djbo.adflogger.level=FINEST" are passed.
There is no error etc, however the logs are not coming.
Is there a way to specify the logging.xml to the ADFLogger, so that it doesn't look for it in the default location under '/<domain>/config/fmwconfig/servers/DefaultServer'?
In general (for pure ADF-based applications as well), is there a way to define the path of our logging.xml file instead of using the default one?
Would appreciate any help.
Thanks
Sachin
Hi,
While I appreciate your response, the question I have is: can we have our own logging.xml? If yes, how do we pass the location of our logging.xml to the ADFLogger instead of using the default logging.xml?
I know we can have our own logger handler defined within logging.xml, but that is not what we are looking for. The ask is to be able to define the path of logging.xml for the ADFLogger.
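One avenue worth testing (an assumption, not something confirmed in this thread): standalone JVMs in some Oracle Fusion Middleware releases accept a system property that points ODL at an explicit configuration file, which would let the wrapper use its own logging.xml; the path here is a placeholder:

```
-Doracle.core.ojdl.logging.config.file=/path/to/logging.xml
```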
Thanks
Sachin -
How to control listener log levels with log.xml
I'm running Oracle 11 and I am seeing tons of log.xml files being created, rotated, and renamed. Looking at the log.xml files, I'm seeing what appears to be rather excessive logging. It appears to be logging every single connection. Is there any way to control the logging levels for this file and/or decrease the number of log files kept in the rotation?
I don't believe it is possible to disable part of the ADR activity. But I would be interested in your definition of "excessive logging."
Are you seeing a specific performance impact or are you just seeing something different from 10gR2 (meaning the new ADR functionality)? -
Unable to dispatch JSP Page : Exception:java.io.FileNotFoundException: /soad/app/soadas/10gAS/j2ee/home/applications/orabpel/console/default/index.jsp
On top of this, we also have the error "Unable to connect to repository". Are these related, or two separate errors? What should I look into to solve them?
The log.xml file is where I retrieved these from.
I'm trying to get an ESB running; is an old BPEL project causing this error? How can I get rid of the BPEL project in whatever file is saying the BPEL project is failing?
Message was edited by:
vande
Please see if the following docs help.
Discovery of IBM Websphere 6.1.0.21 in Grid Control fails with SOAP Exception in javax.net.ssl.SSLHandshakeException [ID 882752.1]
"java.io.IOException: Invalid keystore format" and "java.security.NoSuchAlgorithmException" Seen During Weblogic Server Startup After Specifying a Keystore [ID 1453174.1]
java.net.SocketException Failure In SOA When SSL Is Turned On [ID 1487792.1]
Thanks,
Hussein -
Database log messages are not written to a custom OutputStream
I am using BDB 2.3.10 via Java API.
I've enabled BDB log messages and could see them output fine to the console, but I couldn't output these messages into a file.
Here is what I am doing:
EnvironmentConfig envConfig = EnvironmentConfig.DEFAULT;
envConfig.setErrorStream( new FileOutputStream ( errorLogFile ) );
envConfig.setMessageStream( new FileOutputStream ( messageLogFile ) );
XmlManager.setLogCategory( XmlManager.CATEGORY_ALL, true );
XmlManager.setLogLevel( XmlManager.LEVEL_DEBUG, true );
Environment env = new Environment(new File( dbLocation ), envConfig );
XmlManager xmlManager = new XmlManager(env, xmlManagerConfig );
All log messages are going to the console, instead of going to messageLogFile and errorLogFile.
What am I doing wrong?
Hi Basil,
Exception handling and debugging for DBXML applications is explained in chapter 2 in the guide here:
http://www.oracle.com/technology/documentation/berkeley-db/xml/gsg_xml/java/BerkeleyDBXML-JAVA-GSG.pdf
Did you check your code so that you catch any exception that may be thrown ? Is there any exception thrown at all ?
It may be more helpful if you post the small piece of code that demonstrates the issue. Put the code within pre tags surrounded by square brackets ([ pre ] <code here> [ /pre ] eliminate the spaces inside the brackets).
Regards,
Andrei -
Why are multiple log files created when using transactions in Berkeley DB?
We are using the Berkeley DB Java Base API. We have already read/written a CDR file of 9 lakh rows, with transactions and without transactions, implementing the secondary database concept. The issues we are getting are as follows:
With transactions: the size of the database environment is 1.63 GB, which is due to the number of log files created, each of 10 MB.
Without transactions: the size of the database environment is 588 MB, and here only one log file is created, which is 10 MB. So we want to know the concrete reason and conclusion:
how are log files created, what does it mean to use or not use transactions in a DB environment, and what are these db files (__db.001, __db.002, __db.003, __db.004, __db.005) and log files like log.0000000001...? Please reply soon.
If you are seeing __db.NNN files in your environment root directory, these are the environment's shared region files. And since you see these, you are using Berkeley DB Core (with the Java/JNI Base API), not Berkeley DB Java Edition.
with transaction ... without transaction ...
First of all, do you need transactions or not? Review the documentation section called "Why transactions?" in the Berkeley DB Programmer's Reference Guide.
without transaction-------size of database environment 588mb and here only one log file is created which is of 10mb.
There should be no logs created when transactions are not used. That single log file has likely remained there from the previous transactional run.
how log files are created and what is meant of using transaction and not using transaction in db environment and what are this db files db.001,db.002,_db.003,_db.004,__db.005 and log files like log.0000000001
Have you reviewed the basic documentation references for Berkeley DB Core?
- Berkeley DB Programmer's Reference Guide
in particular sections: The Berkeley DB products, Shared memory regions, Chapter 11. Berkeley DB Transactional Data Store Applications, Chapter 17. The Logging Subsystem.
- Getting Started with Berkeley DB (Java API Guide) and Getting Started with Berkeley DB Transaction Processing (Java API Guide).
If so, you would have had the answers to these questions; the __db.NNN files are the environment shared region files needed by the environment's subsystems (transaction, locking, logging, memory pool buffer, mutexes), and the log.MMMMMMMMMM are the log files needed for recoverability and created when running with transactions.
--Andrei -
I have one problem with Data Guard. My archive log files are not applied.
I have a problem with Data Guard: my archive log files are not applied. However, I have received all the archive log files on my physical standby database.
I have created a Physical Standby database on Oracle 10gR2 (Windows XP professional). Primary database is on another computer.
In Enterprise Manager on the primary database it looks OK; I get the message "Data Guard status: Normal".
But, as I wrote above, the archive log files are not applied.
After I created the physical standby database, I also did the following:
1. I connected to the Physical Standby database instance.
CONNECT SYS/SYS@luda AS SYSDBA
2. I started the Oracle instance at the Physical Standby database without mounting the database.
STARTUP NOMOUNT PFILE=C:\oracle\product\10.2.0\db_1\database\initluda.ora
3. I mounted the Physical Standby database:
ALTER DATABASE MOUNT STANDBY DATABASE
4. I started redo apply on Physical Standby database
alter database recover managed standby database disconnect from session
5. I switched the log files on Physical Standby database
alter system switch logfile
6. I verified the redo data was received and archived on Physical Standby database
select sequence#, first_time, next_time from v$archived_log order by sequence#
SEQUENCE# FIRST_TIME NEXT_TIME
3 2006-06-27 2006-06-27
4 2006-06-27 2006-06-27
5 2006-06-27 2006-06-27
6 2006-06-27 2006-06-27
7 2006-06-27 2006-06-27
8 2006-06-27 2006-06-27
7. I verified the archived redo log files were applied on Physical Standby database
select sequence#,applied from v$archived_log;
SEQUENCE# APP
4 NO
3 NO
5 NO
6 NO
7 NO
8 NO
8. on Physical Standby database
select * from v$archive_gap;
No rows
9. on Physical Standby database
SELECT MESSAGE FROM V$DATAGUARD_STATUS;
MESSAGE
ARC0: Archival started
ARC1: Archival started
ARC2: Archival started
ARC3: Archival started
ARC4: Archival started
ARC5: Archival started
ARC6: Archival started
ARC7: Archival started
ARC8: Archival started
ARC9: Archival started
ARCa: Archival started
ARCb: Archival started
ARCc: Archival started
ARCd: Archival started
ARCe: Archival started
ARCf: Archival started
ARCg: Archival started
ARCh: Archival started
ARCi: Archival started
ARCj: Archival started
ARCk: Archival started
ARCl: Archival started
ARCm: Archival started
ARCn: Archival started
ARCo: Archival started
ARCp: Archival started
ARCq: Archival started
ARCr: Archival started
ARCs: Archival started
ARCt: Archival started
ARC0: Becoming the 'no FAL' ARCH
ARC0: Becoming the 'no SRL' ARCH
ARC1: Becoming the heartbeat ARCH
Attempt to start background Managed Standby Recovery process
MRP0: Background Managed Standby Recovery process started
Managed Standby Recovery not using Real Time Apply
MRP0: Background Media Recovery terminated with error 1110
MRP0: Background Media Recovery process shutdown
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[1]: Assigned to RFS process 2148
RFS[1]: Identified database type as 'physical standby'
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[2]: Assigned to RFS process 2384
RFS[2]: Identified database type as 'physical standby'
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[3]: Assigned to RFS process 3188
RFS[3]: Identified database type as 'physical standby'
Primary database is in MAXIMUM PERFORMANCE mode
Primary database is in MAXIMUM PERFORMANCE mode
RFS[3]: No standby redo logfiles created
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[4]: Assigned to RFS process 3168
RFS[4]: Identified database type as 'physical standby'
RFS[4]: No standby redo logfiles created
Primary database is in MAXIMUM PERFORMANCE mode
RFS[3]: No standby redo logfiles created
10. on Physical Standby database
SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
PROCESS STATUS THREAD# SEQUENCE# BLOCK# BLOCKS
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
RFS IDLE 0 0 0 0
RFS IDLE 0 0 0 0
RFS IDLE 1 9 13664 2
RFS IDLE 0 0 0 0
10) on Primary database:
select message from v$dataguard_status;
MESSAGE
ARC0: Archival started
ARC1: Archival started
ARC2: Archival started
ARC3: Archival started
ARC4: Archival started
ARC5: Archival started
ARC6: Archival started
ARC7: Archival started
ARC8: Archival started
ARC9: Archival started
ARCa: Archival started
ARCb: Archival started
ARCc: Archival started
ARCd: Archival started
ARCe: Archival started
ARCf: Archival started
ARCg: Archival started
ARCh: Archival started
ARCi: Archival started
ARCj: Archival started
ARCk: Archival started
ARCl: Archival started
ARCm: Archival started
ARCn: Archival started
ARCo: Archival started
ARCp: Archival started
ARCq: Archival started
ARCr: Archival started
ARCs: Archival started
ARCt: Archival started
ARCm: Becoming the 'no FAL' ARCH
ARCm: Becoming the 'no SRL' ARCH
ARCd: Becoming the heartbeat ARCH
Error 1034 received logging on to the standby
Error 1034 received logging on to the standby
LGWR: Error 1034 creating archivelog file 'luda'
LNS: Failed to archive log 3 thread 1 sequence 7 (1034)
FAL[server, ARCh]: Error 1034 creating remote archivelog file 'luda'
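Error 1034 (ORA-01034: ORACLE not available) when creating the remote archivelog usually means the service 'luda' either does not resolve from the primary or connects to a standby instance that is not started/mounted. A sketch of the checks (the password placeholder is, of course, hypothetical):

```sql
-- Sketch: from the primary host's OS prompt, confirm the standby
-- service resolves and accepts a SYSDBA connection:
--   tnsping luda
--   sqlplus sys/<password>@luda as sysdba
--
-- Then, on the standby, confirm the instance is at least mounted:
SELECT STATUS FROM V$INSTANCE;   -- expect MOUNTED for a physical standby
```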
11) on primary db:
select name,sequence#,applied from v$archived_log;
NAME SEQUENCE# APP
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00003_0594204176.001 3 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00004_0594204176.001 4 NO
Luda 4 NO
Luda 3 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00005_0594204176.001 5 NO
Luda 5 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00006_0594204176.001 6 NO
Luda 6 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00007_0594204176.001 7 NO
Luda 7 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00008_0594204176.001 8 NO
Luda 8 NO
12) on standby db
select name,sequence#,applied from v$archived_log;
NAME SEQUENCE# APP
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00004_0594204176.001 4 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00003_0594204176.001 3 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00005_0594204176.001 5 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00006_0594204176.001 6 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00007_0594204176.001 7 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00008_0594204176.001 8 NO
13) my init.ora files
On standby db
irina.__db_cache_size=79691776
irina.__java_pool_size=4194304
irina.__large_pool_size=4194304
irina.__shared_pool_size=75497472
irina.__streams_pool_size=0
*.audit_file_dest='C:\oracle\product\10.2.0\admin\luda\adump'
*.background_dump_dest='C:\oracle\product\10.2.0\admin\luda\bdump'
*.compatible='10.2.0.1.0'
*.control_files='C:\oracle\product\10.2.0\oradata\luda\luda.ctl'
*.core_dump_dest='C:\oracle\product\10.2.0\admin\luda\cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_file_name_convert='luda','irina'
*.db_name='irina'
*.db_unique_name='luda'
*.db_recovery_file_dest='C:\oracle\product\10.2.0\flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
*.fal_client='luda'
*.fal_server='irina'
*.job_queue_processes=10
*.log_archive_config='DG_CONFIG=(irina,luda)'
*.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/luda/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=luda'
*.log_archive_dest_2='SERVICE=irina LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=irina'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_max_processes=30
*.log_file_name_convert='C:/oracle/product/10.2.0/oradata/irina/','C:/oracle/product/10.2.0/oradata/luda/'
*.open_cursors=300
*.pga_aggregate_target=16777216
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=167772160
*.standby_file_management='AUTO'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='C:\oracle\product\10.2.0\admin\luda\udump'
On primary db
irina.__db_cache_size=79691776
irina.__java_pool_size=4194304
irina.__large_pool_size=4194304
irina.__shared_pool_size=75497472
irina.__streams_pool_size=0
*.audit_file_dest='C:\oracle\product\10.2.0/admin/irina/adump'
*.background_dump_dest='C:\oracle\product\10.2.0/admin/irina/bdump'
*.compatible='10.2.0.1.0'
*.control_files='C:\oracle\product\10.2.0\oradata\irina\control01.ctl','C:\oracle\product\10.2.0\oradata\irina\control02.ctl','C:\oracle\product\10.2.0\oradata\irina\control03.ctl'
*.core_dump_dest='C:\oracle\product\10.2.0/admin/irina/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_file_name_convert='luda','irina'
*.db_name='irina'
*.db_recovery_file_dest='C:\oracle\product\10.2.0/flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
*.fal_client='irina'
*.fal_server='luda'
*.job_queue_processes=10
*.log_archive_config='DG_CONFIG=(irina,luda)'
*.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/irina/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=irina'
*.log_archive_dest_2='SERVICE=luda LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=luda'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_max_processes=30
*.log_file_name_convert='C:/oracle/product/10.2.0/oradata/luda/','C:/oracle/product/10.2.0/oradata/irina/'
*.open_cursors=300
*.pga_aggregate_target=16777216
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=167772160
*.standby_file_management='AUTO'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='C:\oracle\product\10.2.0/admin/irina/udump'
Please help me!!!!

Hi,
After several tries, my redo logs are now being applied. I think in my case it had to do with the tnsnames.ora. At the moment I have both databases in both tnsnames.ora files using the SID rather than the SERVICE_NAME.
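For anyone hitting the same thing, a sketch of the tnsnames.ora entry shape being described, using SID in CONNECT_DATA instead of SERVICE_NAME (the host name is a placeholder; only the SID 'luda' comes from this thread):

```
LUDA =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby-host)(PORT = 1521))
    (CONNECT_DATA =
      (SID = luda)
    )
  )
```

The same entry would go in the tnsnames.ora on both hosts so that FAL_CLIENT/FAL_SERVER and log_archive_dest_2 all resolve consistently.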
Now I want to use DGMGRL. Adding a configuration and a standby database works fine, but when I try to enable the configuration, DGMGRL gives no feedback and appears to hang. The log, however, says that it succeeded.
In another session 'show configuration' results in the following, confirming that the enable succeeded.
DGMGRL> show configuration
Configuration
Name: avhtest
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
avhtest - Primary database
avhtestls53 - Physical standby database
Current status for "avhtest":
Warning: ORA-16610: command 'ENABLE CONFIGURATION' in progress
Is there anybody who has experienced the same problem and/or knows the solution to this?
With kind regards,
Martin Schaap