Logging FWSM context logs to SYSLOG servers in two different zones
Hello Sat Shri Akal,
Can anyone help me with logging FWSM context logs to SYSLOG servers and a SYSLOG collector in two different zones
in CSM 3.2.2? I am able to get logs from the admin context but not from my other FWSM context. That context is otherwise sending syslogs to ONE syslog server in a similar VLAN, so why is that particular context not able to log to the CSM syslog collector, which is receiving logs from the admin context? Please help me with this case.
regards
Pradeep,
All contexts should be able to reach the CSM server's IP address just like the admin context.
The individual contexts should be configured to send logs to the CSM server's IP address.
From CSM go under each context and add management IP address for the particular context.
Once the above is done you will see logs from all the contexts under CSM.
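As a rough per-context sketch (the IP address and interface name are placeholders, and exact syntax can vary by FWSM release), each customer context needs its own logging configuration pointing at the CSM collector:

```
FWSM/customerA(config)# logging enable
FWSM/customerA(config)# logging trap informational
! send syslogs to the CSM syslog collector, which must be reachable from this context
FWSM/customerA(config)# logging host inside 192.0.2.50
```

The key point is that logging is configured inside each context, not inherited from the admin context.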
-Kureli
Similar Messages
-
How to log successful logins to a syslog server in NX-OS
Does anyone know how to do this in NX-OS? I do it in IOS with the following commands:
login on-failure log
login on-success log
logging x.x.x.x
With that I get a syslog message that I can then log to a file to track who has logged into which device and when. But I can't find the syntax to do the same thing in the Nexus switches that we have. Does anyone know what the equivalent commands are?
Thanks,
Ben
Hi Ben,
By default, failed logins are logged.
You can check the log using:
show logging logfile | last 15
and for every failed login (by default) you will get something like this:
2012 Dec 18 14:51:08 Nexus5010-B %AUTHPRIV-3-SYSTEM_MSG: pam_aaa:Authentication
failed for user en from 2.2.2.1 - login
To get successful logins to show up in the logs we need to increase the authpriv logging level to 5 (it is 3 by default); doing this will add a new log entry for both failed and successful connections.
Use the following command:
Nexus5010-A(config)# logging level authpriv 5
You can check logging levels by using:
#show logging level
After you change the logging level you will see something like this in the log when a successful login takes place:
2005 Jan 6 03:29:48 Nexus5010-A %AUTHPRIV-5-SYSTEM_MSG: admin :TTY=unknown
; PWD=/var/sysmgr/vsh ; USER=root ; COMMAND=/usr/bin/strings/proc/18340/environ
- sudo
Now for a failed login, after increasing the authpriv level, you will see the following logs:
2005 Jan 6 03:31:36 Nexus5010-A %AUTHPRIV-4-SYSTEM_MSG: pam_unix(aaa:auth):check pass; user unknown - aaad
2005 Jan 6 03:31:36 Nexus5010-A %AUTHPRIV-5-SYSTEM_MSG: pam_unix(aaa:auth):
authentication failure; logname= uid=0 euid=0 tty= ruser= rhost= - aaad
For the available logging options:
Nexus7018(config)# logging ?
console Set console logging
event Interface events
ip IP configuration
level Facility parameter for syslog messages
logfile Set File logging
message Interface events
module Set module(linecard) logging
monitor Set terminal line(monitor) logging level
origin-id Enable origin information for Remote Syslog Server
server Enable forwarding to Remote Syslog Server
source-interface Enable Source-Interface for Remote Syslog Server
timestamp Set logging timestamp granularity
You can use logging source-interface ....
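For example, a sketch of pointing a Nexus at a remote collector (the address, severity, VRF and interface are placeholders to adapt to your environment):

```
Nexus7018(config)# logging server 192.0.2.50 5 use-vrf management
Nexus7018(config)# logging source-interface loopback 0
Nexus7018(config)# logging timestamp milliseconds
```

The severity number after the server address sets the maximum level forwarded to that server, so with the authpriv level raised to 5 the successful-login messages should reach the collector as well.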
Thanks-
Afroz
*** Ratings Encourage Contributors *** -
Best way to log/notify when someone logs into ASDM
What is the best practice to get a notification and a log entry when someone logs in and uses ASDM? I have a syslog server set up, plus SMTP and SNMP traps, but I'm not sure where to tell it to notify me when someone logs into ASDM... any suggestions?
thanks in advance
There are a few ways to accomplish this (if I understand you correctly). You can simply drag the captured clips from the Finder into the new project, or you can have multiple projects open at the same time and drag the clips between the bins in the browser. If you're moving clips between computers you might have to reconnect the clips if you've moved them to another drive. Pretty simple: select multiple clips in the browser, control-click and choose reconnect. Explore the options in the file requester and you should be able to figure this out.
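On the ASDM question itself: ASDM sessions arrive over HTTPS, and the ASA reports management logins with specific syslog message IDs, so one approach is to filter the collector's feed for those IDs. A small Python sketch (the message IDs are assumptions to verify against the syslog message guide for your ASA release):

```python
# Hedged sketch: pick management/ASDM login lines out of a syslog stream.
# The message IDs below are assumptions to check against your ASA release's
# syslog message guide before relying on them.
LOGIN_IDS = ("%ASA-6-605005", "%ASA-6-611101", "%ASA-6-113012")

def management_login_events(lines):
    """Return syslog lines that look like management/ASDM logins."""
    return [line for line in lines if any(mid in line for mid in LOGIN_IDS)]

sample = [
    '%ASA-6-605005: Login permitted from 10.1.1.5/52100 to inside:10.1.1.1/https for user "admin"',
    "%ASA-6-302013: Built outbound TCP connection 12345 ...",
]
print(management_login_events(sample))
```

A script like this could run on the syslog server and trigger the SMTP notification you already have configured.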
-
Fetch from UWL to WD view: UWL exception "Logged in users context or session"
Hi,
We have a requirement in which we need to fetch all UWL items and display them in a Web Dynpro view.
We tried based on this link [Custom UWL|http://searchsap.techtarget.com/tip/0,289483,sid21_gci1240907,00.html].
The code I have tried is:
IWDClientUser user1 = WDClientUser.getLoggedInClientUser();
IUser epUser1 =user1.getSAPUser();
IPortalRuntimeResources runtimeResources = PortalRuntime.getRuntimeResources();
wdComponentAPI.getMessageManager().reportSuccess("Version "+PortalRuntime.getVersion());
IUWLService uwlService = (IUWLService) runtimeResources.getService(IUWLService.ALIAS_KEY);
uwlService = (IUWLService) runtimeResources.getService(IUWLService.ALIAS_KEY);
wdComponentAPI.getMessageManager().reportSuccess("6");
UWLContext uwlContext = new UWLContext();
uwlContext.setUser(epUser1);
wdComponentAPI.getMessageManager().reportSuccess(" UML Context"+uwlContext.getUserId());
wdComponentAPI.getMessageManager().reportSuccess("9");
IUWLSession uwlSess=uwlService.beginSession(uwlContext, 6000);
uwlContext.setSession(uwlSess);
wdComponentAPI.getMessageManager().reportSuccess(" UML Session"+uwlSess.getUser().getUniqueID());
IUWLItemManager itemManager = uwlService.getItemManager(uwlContext);
wdComponentAPI.getMessageManager().reportSuccess("Item manager"+itemManager.getItems(uwlContext,null,null));
QueryResult result = itemManager.getItems(uwlContext,null,null);
wdComponentAPI.getMessageManager().reportSuccess("12");
int size = result.getTotalNumberOfItems();
ItemCollection collection = result.getItems();
java.util.List list = collection.list();
Item item = null;
Date date = null;
String subject = null;
for (int i = 0; i < 5; i++) {
    if (!(i > (size - 1))) {
        item = collection.get(i);
        date = item.getDueDate();
        subject = item.getSubject();
        wdComponentAPI.getMessageManager().reportSuccess("item "+item);
        wdComponentAPI.getMessageManager().reportSuccess("date "+date);
        wdComponentAPI.getMessageManager().reportSuccess("subject "+subject);
    }
}
But I am getting the Exception
Exp: com.sap.netweaver.bc.uwl.UWLException: Wed May 14 11:20:29 IST 2008 (Default) Logged in users context or session doesnt exist
in the line
QueryResult result = itemManager.getItems(uwlContext,null,null);
But according to the link above, the user is obtained through the portal request object. In Web Dynpro we were not able to get the portal request object, so we are getting the current user from the UME.
That is why we are getting the exception.
Can anybody help me?
Thanks and Regards
Smitha
check this link
Re: UWL breaks behind TAM -
Hi,
I noticed that some of our FWSM traffic logs are not appearing in the syslog server. In the show logging queue output it reports discarded messages like below:
fwsm# sh logg que
Logging Queue length limit : 512 msg(s), 921036 msg(s) discarded.
Current 512 msg on queue, 512 msgs most on queue
fwsm# sh logg que
Logging Queue length limit : 512 msg(s), 921654 msg(s) discarded.
Current 512 msg on queue, 512 msgs most on queue
Does anyone know what this means? I have tried to increase the queue to 2048, but after a few minutes it starts to fill up and the discard counter increases. I tried to set it to unlimited but the current message counter keeps incrementing without going down. Does this mean the firewall is dropping messages, or is the syslog server too slow to keep up?
Any comments or ideas are welcome
Thanks
Justin Vo
Hi,
Disable local logging.
Test with logging trap 4 first and then with logging trap 7.
Check your logging server, and check with a repeated ping that the connection is not dropping.
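If the collector genuinely can't keep up, you can also enlarge the queue and raise the trap threshold so fewer messages are generated in the first place; a sketch (the values and the rate-limited message ID are examples to tune for your traffic):

```
fwsm(config)# logging queue 8192
! send only warnings and above to the remote server
fwsm(config)# logging trap warnings
! rate-limit a chatty connection-built message if needed
fwsm(config)# logging rate-limit 100 1 message 302013
```

Discards at the queue mean messages are being generated faster than they can be drained, so either reduce the volume or speed up the delivery path.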
If you find this post useful, please don't forget to rate it.
#Iwan Hoogendoorn -
If I want logging for the "internet" facing context on an ASA, do I have to configure logging on that context, or will the logging on the admin or system context also send logs for the other context?
Logging must be configured separately in each "customer" (non-system or -admin) context that you want to receive syslog messages from regarding its activity.
The admin context can send syslog messages related to its own and the system context status.
These items and more are covered in this Configuration Guide section. -
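A minimal sketch of what would go inside the internet-facing context (the server address and interface name are placeholders):

```
ciscoasa/internet(config)# logging enable
ciscoasa/internet(config)# logging trap informational
ciscoasa/internet(config)# logging host inside 192.0.2.50
! tag messages with the context name so the collector can tell contexts apart
ciscoasa/internet(config)# logging device-id context-name
```

The admin context's logging configuration covers only its own activity plus system-level messages; it does not relay traffic logs for the other contexts.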
I am trying to connect to my university wifi, and when I click on the wifi it directs me to a log-in page. However, the two boxes to enter the username and password do not appear. Can anyone help me with this?
You didn't say what you have already tried, but if you haven't already done so, power-cycle your router (unplug it for 15 seconds then plug it back in), then on your phone go to Settings>General>Reset and tap Reset Network Settings, then try joining your wifi again.
-
I just installed Lion, and when I get to the log-in screen it shows the two users, but when I try to select either of them nothing happens. How can I fix this so I can log in?
Can you select them with "Arrow left" or "arrow right" key on your keyboard?
-
Servlet context logging in 8.1
Amazed that I was not able to find any documentation on how to configure servlet context logging, I have to post this rather basic question here:
So where and how can I configure the servlet context logging of my web app? (I am using the ServletContext.log method.)
I saw how it can be done in wls 9.0 :
http://e-docs.bea.com/wls/docs90/i18n/writing.html#1178936
but not 8.1
Cheers
I've found the documentation lacking at times, but in general, if you didn't find documentation for a feature in a particular release, it's pretty likely the feature didn't exist in that release.
It appears that feature is new in 9.0. ServletContext.log() in 8.1 will just write to the standard server log unconditionally. -
Why do I get these messages "logged in but not to a specific secure zone"?
Could anyone please explain why this message would appear on the dashboard:
'username' logged in but not to a specific secure zone
instead of
'username' logged in to Member Only Area secure zone
On this website, customers can only log on to a secure zone, and there is only one secure zone.
Thanks
Hi Mario,
Thank you for your response.
There are only two places users could log in, and neither has ZoneID=-1.
I tested both options with one of my own (non-admin) logins, and neither produced the above message.
action="/ZoneProcess.aspx?ZoneID=51&OID=5459507&OTYPE=1"
action="/ZoneProcess.aspx?ZoneID=51&OID=5459507&OTYPE=1"
both ZoneID's are the same
What is weird, though, is that only one specific user account is generating that message.
At this stage the user has not complained about being unable to access his/her data.
My only concern was that this could be a possible security risk.
Thanks -
ORA-16191: Primary log shipping client not logged on standby.
Hi,
Please help me with the following scenario. I have two nodes, ASM1 and ASM2, running RHEL4 U5. On node ASM1 there is a database ORCL using ASM diskgroups DATA and RECOVER, with the archive location '+RECOVER/orcl/'. On node ASM2 I have to configure the STDBYORCL (standby) database using ASM. I have taken a copy of database ORCL via RMAN, as per the maximum availability architecture.
Then I ftp'd everything to ASM2 and put it on the filesystem /u01/oradata. I made all the necessary changes in the primary and standby database pfiles and then performed RMAN duplicate database for standby in order to put the db files in the desired diskgroups. I have mounted the standby database, but unfortunately the log transport service is not working and archives are not getting shipped to the standby host.
Here are all configuration details.
Primary database ORCL pfile:
[oracle@asm dbs]$ more initorcl.ora
stdbyorcl.__db_cache_size=251658240
orcl.__db_cache_size=226492416
stdbyorcl.__java_pool_size=4194304
orcl.__java_pool_size=4194304
stdbyorcl.__large_pool_size=4194304
orcl.__large_pool_size=4194304
stdbyorcl.__shared_pool_size=100663296
orcl.__shared_pool_size=125829120
stdbyorcl.__streams_pool_size=0
orcl.__streams_pool_size=0
*.audit_file_dest='/opt/oracle/admin/orcl/adump'
*.background_dump_dest='/opt/oracle/admin/orcl/bdump'
*.compatible='10.2.0.1.0'
*.control_files='+DATA/orcl/controlfile/current.270.665007729','+RECOVER/orcl/controlfile/current.262.665007731'
*.core_dump_dest='/opt/oracle/admin/orcl/cdump'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='orcl'
*.db_recovery_file_dest='+RECOVER'
*.db_recovery_file_dest_size=3163553792
*.db_unique_name=orcl
*.fal_client=orcl
*.fal_server=stdbyorcl
*.instance_name='orcl'
*.job_queue_processes=10
*.log_archive_config='dg_config=(orcl,stdbyorcl)'
*.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
*.log_archive_dest_2='SERVICE=stdbyorcl'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_format='%t_%s_%r.dbf'
*.open_cursors=300
*.pga_aggregate_target=121634816
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=364904448
*.standby_file_management='AUTO'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS'
*.user_dump_dest='/opt/oracle/admin/orcl/udump'
Standby database STDBYORCL pfile:
[oracle@asm2 dbs]$ more initstdbyorcl.ora
stdbyorcl.__db_cache_size=251658240
stdbyorcl.__java_pool_size=4194304
stdbyorcl.__large_pool_size=4194304
stdbyorcl.__shared_pool_size=100663296
stdbyorcl.__streams_pool_size=0
*.audit_file_dest='/opt/oracle/admin/stdbyorcl/adump'
*.background_dump_dest='/opt/oracle/admin/stdbyorcl/bdump'
*.compatible='10.2.0.1.0'
*.control_files='u01/oradata/stdbyorcl_control01.ctl'#Restore Controlfile
*.core_dump_dest='/opt/oracle/admin/stdbyorcl/cdump'
*.db_block_size=8192
*.db_create_file_dest='/u01/oradata'
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='orcl'
*.db_recovery_file_dest='+RECOVER'
*.db_recovery_file_dest_size=3163553792
*.db_unique_name=stdbyorcl
*.fal_client=stdbyorcl
*.fal_server=orcl
*.instance_name='stdbyorcl'
*.job_queue_processes=10
*.log_archive_config='dg_config=(orcl,stdbyorcl)'
*.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
*.log_archive_dest_2='SERVICE=orcl'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_format='%t_%s_%r.dbf'
*.log_archive_start=TRUE
*.open_cursors=300
*.pga_aggregate_target=121634816
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=364904448
*.standby_archive_dest='LOCATION=USE_DB_RECOVERY_FILE_DEST'
*.standby_file_management='AUTO'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS'
*.user_dump_dest='/opt/oracle/admin/stdbyorcl/udump'
db_file_name_convert=('+DATA/ORCL/DATAFILE','/u01/oradata','+RECOVER/ORCL/DATAFILE','/u01/oradata')
log_file_name_convert=('+DATA/ORCL/ONLINELOG','/u01/oradata','+RECOVER/ORCL/ONLINELOG','/u01/oradata')
Have configured the tns service on both the hosts and its working absolutely fine.
ASM1
=====
[oracle@asm dbs]$ tnsping stdbyorcl
TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 19-SEP-2008 18:49:00
Copyright (c) 1997, 2005, Oracle. All rights reserved.
Used parameter files:
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.20.20)(PORT = 1521))) (CONNECT_DATA = (SID = stdbyorcl) (SERVER = DEDICATED)))
OK (30 msec)
ASM2
=====
[oracle@asm2 archive]$ tnsping orcl
TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 19-SEP-2008 18:48:39
Copyright (c) 1997, 2005, Oracle. All rights reserved.
Used parameter files:
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.20.10)(PORT = 1521))) (CONNECT_DATA = (SID = orcl) (SERVER = DEDICATED)))
OK (30 msec)
Please guide me on where I am going wrong. Thanking you in anticipation.
Regards,
Ravish Garg
Following are the errors I am receiving as per the alert log.
ORCL alert log:
Thu Sep 25 17:49:14 2008
ARCH: Possible network disconnect with primary database
Thu Sep 25 17:49:14 2008
Error 1031 received logging on to the standby
Thu Sep 25 17:49:14 2008
Errors in file /opt/oracle/admin/orcl/bdump/orcl_arc1_4825.trc:
ORA-01031: insufficient privileges
FAL[server, ARC1]: Error 1031 creating remote archivelog file 'STDBYORCL'
FAL[server, ARC1]: FAL archive failed, see trace file.
Thu Sep 25 17:49:14 2008
Errors in file /opt/oracle/admin/orcl/bdump/orcl_arc1_4825.trc:
ORA-16055: FAL request rejected
ARCH: FAL archive failed. Archiver continuing
Thu Sep 25 17:49:14 2008
ORACLE Instance orcl - Archival Error. Archiver continuing.
Thu Sep 25 17:49:44 2008
FAL[server]: Fail to queue the whole FAL gap
GAP - thread 1 sequence 40-40
DBID 1192788465 branch 665007733
Thu Sep 25 17:49:46 2008
Thread 1 advanced to log sequence 48
Current log# 2 seq# 48 mem# 0: +DATA/orcl/onlinelog/group_2.272.665007735
Current log# 2 seq# 48 mem# 1: +RECOVER/orcl/onlinelog/group_2.264.665007737
Thu Sep 25 17:55:43 2008
Shutting down archive processes
Thu Sep 25 17:55:48 2008
ARCH shutting down
ARC2: Archival stopped
STDBYORCL alert log:
==============
Thu Sep 25 17:49:27 2008
Errors in file /opt/oracle/admin/stdbyorcl/bdump/stdbyorcl_arc0_4813.trc:
ORA-01017: invalid username/password; logon denied
Thu Sep 25 17:49:27 2008
Error 1017 received logging on to the standby
Check that the primary and standby are using a password file
and remote_login_passwordfile is set to SHARED or EXCLUSIVE,
and that the SYS password is same in the password files.
returning error ORA-16191
It may be necessary to define the DB_ALLOWED_LOGON_VERSION
initialization parameter to the value "10". Check the
manual for information on this initialization parameter.
Thu Sep 25 17:49:27 2008
Errors in file /opt/oracle/admin/stdbyorcl/bdump/stdbyorcl_arc0_4813.trc:
ORA-16191: Primary log shipping client not logged on standby
PING[ARC0]: Heartbeat failed to connect to standby 'orcl'. Error is 16191.
Thu Sep 25 17:51:38 2008
FAL[client]: Failed to request gap sequence
GAP - thread 1 sequence 40-40
DBID 1192788465 branch 665007733
FAL[client]: All defined FAL servers have been attempted.
Check that the CONTROL_FILE_RECORD_KEEP_TIME initialization
parameter is defined to a value that is sufficiently large
enough to maintain adequate log switch information to resolve
archivelog gaps.
Thu Sep 25 17:55:16 2008
Errors in file /opt/oracle/admin/stdbyorcl/bdump/stdbyorcl_arc0_4813.trc:
ORA-01017: invalid username/password; logon denied
Thu Sep 25 17:55:16 2008
Error 1017 received logging on to the standby
Check that the primary and standby are using a password file
and remote_login_passwordfile is set to SHARED or EXCLUSIVE,
and that the SYS password is same in the password files.
returning error ORA-16191
It may be necessary to define the DB_ALLOWED_LOGON_VERSION
initialization parameter to the value "10". Check the
manual for information on this initialization parameter.
Thu Sep 25 17:55:16 2008
Errors in file /opt/oracle/admin/stdbyorcl/bdump/stdbyorcl_arc0_4813.trc:
ORA-16191: Primary log shipping client not logged on standby
PING[ARC0]: Heartbeat failed to connect to standby 'orcl'. Error is 16191.
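The repeated ORA-01017/ORA-16191 pairs above point at what the log itself suggests: the SYS passwords in the two password files do not match. A sketch of recreating them with orapwd (the password is a placeholder, and the file names assume the default $ORACLE_HOME/dbs location with each instance's name in the suffix):

```
# on the primary host (instance orcl)
orapwd file=$ORACLE_HOME/dbs/orapworcl password=MySysPwd force=y

# on the standby host (instance stdbyorcl) -- same SYS password
orapwd file=$ORACLE_HOME/dbs/orapwstdbyorcl password=MySysPwd force=y

# then re-enable the remote destination on the primary, e.g.:
#   SQL> alter system set log_archive_dest_state_2=enable;
```

Both pfiles already have remote_login_passwordfile='EXCLUSIVE', so matching password files should let the ARC processes authenticate.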
Please suggest where I am going wrong.
Regards,
Ravish Garg -
Audit/Log GPO changes and Logging of new addition of Domain Controllers in the Event Log
Hi all,
We are trying to log the following items in the event log for Windows 2012. This applies to a domain controller.
1) Audit any changes made to the Group Policy
2) Log the addition of new domain controllers added to the system.
We need the Windows event log to record the above events for security purposes. Can anyone advise if this is doable? If yes, what are the steps?
Thank you
Hi,
>>1) Audit any changes made to the Group Policy
We can enable audit for directory service object access and configure specific SACL for group policy files to do this.
Regarding step-by-step guides for auditing changes to group policy, the following two blogs can be referred to for more information.
Monitoring Group Policy Changes with Windows Auditing
http://blogs.msdn.com/b/ericfitz/archive/2005/08/04/447951.aspx
Auditing Group Policy changes
http://blogs.msdn.com/b/canberrapfe/archive/2012/05/02/auditing-group-policy-changes.aspx
>>2) Log the addition of new domain controllers added to the system.
Based on my knowledge, when a server is successfully promoted to be domain controller, event ID 29223 will be logged in the System log.
Regarding this point, the following thread can be referred to for more information.
Is an Event ID for a completed Domain Controller promotion logged on the PDC?
https://social.technet.microsoft.com/Forums/windowsserver/en-US/11b18816-7db0-49e2-9a65-3de0e7a9645e/is-an-event-id-for-a-completed-domain-controller-promotion-logged-on-the-pdc?forum=winserverDS
Best regards,
Frank Shen -
Wait Events "log file parallel write" / "log file sync" during CREATE INDEX
Hello guys,
at my current project I am performing some performance tests for Oracle Data Guard. The question is: how does an LGWR SYNC transfer influence system performance?
To get some performance values that I can compare, I first built up a normal Oracle database.
Now I am performing different tests, like creating "large" indexes and massive parallel inserts/commits, to establish the benchmark.
My database is an oracle 10.2.0.4 with multiplexed redo log files on AIX.
I am creating an index on a "normal" table. I execute dbms_workload_repository.create_snapshot() before and after the CREATE INDEX to get an equivalent timeframe for the AWR report.
After the index is built (roughly 9 GB) I run awrrpt.sql to get the AWR report.
And now take a look at these values from the AWR report:
Event                    Waits   %Time-outs  Total Wait Time (s)  Avg wait (ms)  Waits/txn
log file parallel write  10,019  .0          132                  13             33.5
log file sync            293     .7          4                    15             1.0
How can this be possible?
According to the documentation:
-> log file sync: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#sthref3120
Wait Time: The wait time includes the writing of the log buffer and the post.
-> log file parallel write: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#sthref3104
Wait Time: Time it takes for the I/Os to complete. Even though redo records are written in parallel, the parallel write is not complete until the last I/O is on disk.
This was also my understanding: the "log file sync" wait time should be higher than the "log file parallel write" wait time, because it includes the I/O and the response time to the user session.
I could accept it if the values were close to each other (maybe about 1 second in total), but the difference between 132 seconds and 4 seconds is too noticeable.
Is the behavior of log file sync/write different when performing a DDL like CREATE INDEX (maybe async, like you can influence with the initialization parameter COMMIT_WRITE)?
Do you have any idea how these values come about?
Any thoughts/ideas are welcome.
Thanks and Regards
Surachart Opun (HunterX) wrote:
Thank you for the nice idea.
In this case, how can we reduce the "log file parallel write" and "log file sync" wait time?
CREATE INDEX with NOLOGGING
A NOLOGGING can help, can't it?
Yes - if you create the index nologging then you wouldn't be generating that 10GB of redo log, so the waits would disappear.
Two points on nologging, though:
<ul>
it's "only" an index, so you could always rebuild it in the event of media corruption; but if you had lots of indexes created nologging this might cause an unreasonable delay before the system was usable again - so you should decide on a fallback option, such as taking a new backup of the tablespace as soon as all the nologging operations have completed.
If the database, or that tablespace, is in +"force logging"+ mode, the nologging will not work.
</ul>
Don't get too alarmed by the waits, though. My guess is that the +"log file sync"+ waits are mostly from other sessions, and since there aren't many of them the other sessions are probably not seeing a performance issue. The +"log file parallel write"+ waits are caused by your create index, but they are happening to lgwr in the background, which runs concurrently with your session - so your session is not (directly) affected by them and may not be seeing a performance issue.
The other sessions are seeing relatively high sync times because their log file syncs have to wait for one of the large writes that you have triggered to complete, and then the logwriter includes their (little) writes with your next (large) write.
There may be a performance impact, though, from the pure volume of I/O. Apart from the I/O to write the index, you have LGWR writing (N copies of) the redo for the index, and ARCH is reading and writing the completed log files caused by the index build. So the 9GB of index could easily be responsible for vastly more I/O than the initial 9GB.
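A sketch of the nologging approach described above (the object names are placeholders, and it is worth checking force logging first, since it silently overrides NOLOGGING):

```sql
-- is the database in force logging mode? NOLOGGING is ignored if YES
SELECT force_logging FROM v$database;

-- build the index without generating full redo
CREATE INDEX big_tab_ix ON big_tab (col1) NOLOGGING;

-- optionally switch the index back to logging for future maintenance
ALTER INDEX big_tab_ix LOGGING;
```

Remember the fallback point made above: a nologging index cannot be recovered from the redo stream, so plan a fresh tablespace backup after the build.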
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
"Science is more than a body of knowledge; it is a way of thinking"
Carl Sagan -
EM Application Log and Web Access Log growing too large on Redwood Server
Hi,
We have a storage space issue on our Redwood SAP CPS Oracle servers and have found that the two log files above are the main culprits. These files are continually updated and I need to know what they are and whether they can be purged or reduced in size.
They have existed since the system was installed; I have tried to open them but they are too large. I have also tried taking the cluster group offline to see if the files stop being updated, but they continue to be updated.
Please could anyone shed any light on this and what can be done to resolve it?
Thanks in advance for any help.
Jason
Hi David,
The file names are:
em-application.log and web access.log
The File path is:
D:\oracle\product\10.2.0\db_1\oc4j\j2ee\OC4J_DBConsole_brsapprdbmp01.britvic.BSDDRINKS.NET_SAPCPSPR\log
Redwood/CPS version is 6.0.2.7
Thanks for your help.
Kind Regards,
Jason -
High redo, log.xml and alert log generation with streams
Hi,
We have a setup where Streams and Messaging Gateway are implemented on Oracle 11.1.0.7 to replicate changes.
Until recently there was no issue with the setup, but for the last few days there has been an excessive amount of redo, log.xml and alert log generation, taking up about 50 GB for the archive logs and 20 GB for the rest of the files.
For now we have disabled the streams.
Please suggest the possible reasons for this issue.
Regards,
Ankit
Obviously, as no one here has access to the two files with the error messages, log.xml and the alert log, the resolution starts with looking into those files,
and you should have posted this question only after doing so.
As it stands, no help is possible.
Sybrand Bakker
Senior Oracle DBA