Search a log file archive by using findevent
Hi @all,
we are using several IronPort C series systems. All our log files are stored via scp on a central log file server running under Linux. The log files are stored in subfolders for each system.
Now it has become necessary to search for emails from last year. I did it using the grep command, and it was very complicated to find all the information (MID, ICID, DCID).
Does someone know a way to use the findevent command on a Linux-based system, or does someone have a shell script that does the same work as the findevent command?
Regards, Thomas
There is a tool on the Support Portal that emulates the AsyncOS findevent command. The tool is written in Python, so it should work on your Linux system, assuming Python is available on it.
Find Event Tool
Python
This is the core code to the CLI findevent command which will dump log information based on MID or regular expression searches on "To", "From" and "Subject". The help command description for findevent is "Find events in mail log files".
1. Log onto the support portal (http://www.ironport.com/support/login.html).
2. After you log in, click on "Appliance Documentation > Tools" on the left side and go down near the bottom of the page.
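If the portal tool is not available, the gist of what findevent does can be sketched in a few lines of Python. To be clear, this is not IronPort's code: the log file naming, the "MID nnn" line format, and the search logic below are assumptions based on typical AsyncOS mail_logs, so adjust the regex and paths to your own archive.

```python
#!/usr/bin/env python
"""Rough stand-in for AsyncOS `findevent`: given a search term
(an address, a subject fragment, etc.), find the MIDs whose log
lines contain it, then print every line mentioning those MIDs.
Paths and line formats are assumptions, not IronPort's code."""
import re
import sys
from pathlib import Path

MID_RE = re.compile(r"\bMID (\d+)")

def find_mids(term, lines):
    """Return the set of MIDs whose log lines contain `term`."""
    mids = set()
    for line in lines:
        m = MID_RE.search(line)
        if m and term.lower() in line.lower():
            mids.add(m.group(1))
    return mids

def events_for_mids(mids, lines):
    """Return every line that mentions any of the given MIDs."""
    out = []
    for line in lines:
        m = MID_RE.search(line)
        if m and m.group(1) in mids:
            out.append(line)
    return out

if __name__ == "__main__":
    term = sys.argv[1]                       # e.g. "user@example.com"
    logdir = Path(sys.argv[2])               # e.g. /logs/ironport/c360-1
    lines = []
    for f in sorted(logdir.glob("mail.*")):  # assumed archive naming
        lines.extend(f.read_text(errors="replace").splitlines())
    for event in events_for_mids(find_mids(term, lines), lines):
        print(event)
```

From the MID lines you can then pick out the ICID/DCID values by eye or extend the script to chase them the same way.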
good luck
Similar Messages
-
SOA Server Diagnostic log files Archive Directory
Hello all,
we know that the soa server logs would be there under servers/soa_server1/logs/
So, as the file (soa_server1-diagnostic.log) grows, the old segments should be kept somewhere in an archive, right? Where should I set the directory for the archive files, and how do I set the size of the archive files? Can anyone please tell me how to do this? Documentation on that would be very helpful for me.
Thanks,
N
Hi Naresh,
If I understand correctly, you want to rotate the log files based on size in SOA 11g.
If so, log into EM console. Right click on soa-infra->Logs ->Log configuration.
Select the Log Files tab, choose odl-handler, and edit the configuration. You can configure rotation of the log files based on size as well as on time.
The log path would be $user_projects/domains/domain_name/servers/soa_server1/logs/soa_server1-diagnostic.log.
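The same rotation settings end up in the managed server's logging.xml, so they can also be set there by hand. A hedged sketch only: the handler class and property names below follow the usual ODL handler configuration, but the byte sizes are examples, so verify against your own file before editing:

```xml
<!-- sketch of an ODL handler entry in logging.xml; sizes are examples -->
<log_handler name='odl-handler'
             class='oracle.core.ojdl.logging.ODLHandlerFactory'>
  <!-- rotate the current diagnostic log at ~10 MB -->
  <property name='maxFileSize' value='10485760'/>
  <!-- keep at most ~100 MB of rotated segments in total -->
  <property name='maxLogSize' value='104857600'/>
  <property name='path'
            value='${domain.home}/servers/${weblogic.Name}/logs/${weblogic.Name}-diagnostic.log'/>
</log_handler>
```

Editing through EM as described above is the safer route, since it writes this file for you.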
Hope this helps -
Online Redo log file/ Archive Files
hi guyz,
Is there any tool available to view the contents of a redo log file or an archive log file (what's inside)?
I hope I've explained what I want.
regards
neo -
Accessing Informatica File Archive Service using its native JDBC driver
Good morning,
Using the Informatica ILM suite, we've retired a number of database applications (which ran on Oracle) into the so-called Optimized File Archive, or OFA.
The information stored in there can be accessed using their File Archive Service, or FAS.
To allow access for certain tools, they've also got an ODBC and JDBC driver (from the original company that created the archive bit, RainStor), to be able to get to tables in archives.
I've been able to set it up for Aquafold's Aqua Data Studio (ADS), but given that that is a commercial product, I've only got an evaluation version, expiring in 2 weeks.
Since SQLdeveloper, like ADS, is a Java-based tool, and it also allows for third party JDBC drivers, I'm inclined to think that somehow SQLdeveloper should also be able to connect to those archives. Unfortunately, I always get an error message: "Invalid connection information specified. Verify the URL format for the specified driver"
There are some slight differences between ADS and SQLdev in setting up such a JDBC connection.
In ADS, apart from providing user/pw, I have to specify following:
- name of the JDBC driver ("com.simba.client.core.SimbaJDBCDriver")
- location of the JDBC driver (e.g. "C:\Ora\RainStorJDBC-3.0.1.2.jar")
- JDBC URL of format "jdbc:simba://<host>:<port>;Archive=<archive>;ConnectionTimeout=3600;Parser=ORACLE"
In SQLdev I can import the JDBC driver (menu > Tools > Preferences, then under Database > Third Party JDBC Drivers), and in the Database Connection Screen create a new connection, provide the user/pw of the so-called Service Manager (from FAS), and choose Connection Type "Advanced" to then specify the "Custom JDBC URL", which would have to be the same as above. If I test the connection, I get an instant error:
"Status: Failure -Test failed: Invalid connection information specified. Verify the URL format for the specified driver".
Informatica's Global Customer Support are saying that SQLdev can't do it, but I'm just wondering whether some really bright lights in the Oracle community have found an alternative route to be able to connect to such FAS/OFA archives. Given the similarities between ADS and SQLdev, to me it seems it - somehow - has to be possible as well...
Thanks in advance for thinking about this problem/challenge.
Cheers!
Edited by: exapat on Dec 7, 2012 10:43 AM
Did some further investigations, on other third party drivers. Those ones (e.g. for SQL Server) - when loaded - create an extra tab in the connection screen.
The RainStor JDBC driver does not do that, and I can imagine that that's where it falls short.
If that is indeed the case, what could be done to overcome it?
Hi,
Did some further investigations, on other third party drivers ... If indeed the case, what could be done to overcome this?
That is the case. Currently, browsing and migration support for third-party databases is limited to the following:
http://www.oracle.com/technetwork/developer-tools/sql-developer/supportedmigplatforms-086703.html
See section 1.1 for currently supported databases and section 1.3 for those planned for a future release.
To see an example of some (all?) of the extra bits needed to support browsing of a third-party database, find the extensions directory in your SQL Developer installation, look for
oracle.sqldeveloper.thirdparty.browsers.jar
then browse it to see the various class, properties, and XML files necessary for supporting connections to databases like MySQL or Sybase. You may conclude that adding browsing support is not all that complicated. Migration support is an entirely different matter, however. Oracle is the owner of the thirdparty.browsers extension. Support for more third-party databases would normally be added by the SQL Developer group based on a database's general popularity or a feature request on the SQL Developer Exchange -- obviously a long-term proposition.
If you look at the [Extensions Exchange|http://www.oracle.com/technetwork/developer-tools/sql-developer/extensions-083825.html] maybe you can get some ideas how best to proceed if you have the resources to build your own extension to support RainStor.
Regards,
Gary
SQL Developer Team -
Calling the Log file (Operating System) using PL/SQL
Hi to everybody
I am loading legacy data into Oracle Apps tables through SQL*Loader,
and now I want to know how many data records are in my legacy file.
We can get this through the SQL*Loader log file;
so my question is: how do I read the log file through a PL/SQL script?
Please solve my question.
You can define an external table on it, and read it with SQL commands.
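As a lighter-weight alternative to an external table, the record counts can simply be scraped out of the SQL*Loader log with a small script. A hedged sketch in Python: the "Total logical records ..." phrasing below matches typical SQL*Loader log output, but verify it against your own log first.

```python
import re

def sqlldr_counts(log_text):
    """Extract the 'Total logical records ...' counters from a
    SQL*Loader log. Returns a dict like {'read': 5000, 'rejected': 0}."""
    counts = {}
    for kind, value in re.findall(
            r"Total logical records (\w+):\s+(\d+)", log_text):
        counts[kind] = int(value)
    return counts

# usage (the path is hypothetical):
# print(sqlldr_counts(open("/tmp/load.log").read()))
```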
See External Table Example -
Getting the log files from client using java program
hi
this is Lalita, and I am doing a project in networking. I am new to socket programming. I have established the socket connection between the client and server, with this site's members' help. Now I have to get the log files of the client system from the server, via the created socket. I need it by tomorrow, i.e. April 12th, as I have to show it to my guide.
I just need a core Java program that will get the log information of the client from the server.
Can anybody please help me in this regard? It would be of great help to me and my group.
Anxiously awaiting your replies.
Thanking you and regards...
Lalita.
Simple.
The server listens on a specific port for connections from the clients.
Connect the client to the server on that port.
Open streams on both sides of the connection and run each in a separate thread.
Define a protocol for communication between client and server.
For example: after the connection is made, the server sends a text message to the client ("send log"); the client first sends the log file name and size to the server, and then sends the file itself; the server saves the file.
Then disconnect the client, or define other commands to fetch another file or perform other tasks -
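The protocol described in that reply can be sketched in a few dozen lines. The thread asks for Java, but the message flow (server says "send log", client replies with name, size, then bytes) is the same in any language; here is a hedged Python version where the framing (newline-terminated name and size) is an invented convention, not part of any standard:

```python
import socket
import threading

def serve_once(host="127.0.0.1", port=0, out=None):
    """Accept one client, prompt with 'send log', then receive
    'name\\n', 'size\\n', and `size` raw bytes; store them in `out`.
    Returns (bound_port, server_thread)."""
    srv = socket.socket()
    srv.bind((host, port))
    srv.listen(1)
    bound = srv.getsockname()[1]

    def run():
        conn, _ = srv.accept()
        conn.sendall(b"send log\n")          # step 1: server prompts
        f = conn.makefile("rb")
        name = f.readline().decode().strip() # step 2: file name
        size = int(f.readline().decode())    # step 3: file size
        out[name] = f.read(size)             # step 4: the bytes
        conn.close()
        srv.close()

    t = threading.Thread(target=run)
    t.start()
    return bound, t

def send_log(port, name, data, host="127.0.0.1"):
    """Client side: wait for the 'send log' prompt, then send the
    file name, its size, and the raw bytes."""
    c = socket.socket()
    c.connect((host, port))
    f = c.makefile("rb")
    assert f.readline().strip() == b"send log"
    c.sendall(name.encode() + b"\n")
    c.sendall(str(len(data)).encode() + b"\n")
    c.sendall(data)
    c.close()
```

A Java translation maps directly: ServerSocket/Socket, BufferedReader for the header lines, and the raw InputStream for the payload.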
Firefighter Log File Archiving - Leading Practice
Hi All,
I am wondering if there is any leading practice by SAP or other on how often to archive data from Firefighter Logs. How big do these get? I know it depends on how much FF IDs get used, but is this a weekly, monthly, yearly type activity?
Thanks so much,
Grace Rae
Hi,
SAP recommends keeping at least one year of data in the system for Firefighter logs. You can check SAP Note 1041912.
Thanks
Sunny -
Hi,
our OS is AIX and DB is DB2.
there are a number of files present in the sapmnt/<SID>/global/<client no.>JOBLG directory that are more than 1 week old.
Could you please help me archive these files, either at the SAP level or at the OS level?
Thanks,
Mohan
Hi,
I agree with Juan on this point: if they are older than a week and you want to keep them, just make a backup of them and delete them from the system.
Just schedule the reorganization job: RSBTCDEL2 should delete old batch job logs. SAP recommends not scheduling RSBTCDEL, as it deletes all the log files; use the program RSBTCDEL2 instead.
Thanks & Regards
Gopi -
Archive log file size is varying in RAC 10g database.
---- Environment oracle 10g rac 9 node cluster database, with 3 log groups for each node with 500 mb size for each redo log file.
The question is: why does the archive log file size vary? I know that whenever there is a log file switch the redo log is archived, so as our redo log file size is 500 MB,
shouldn't the archive log file size also be 500 MB?
Instead, we are seeing archive log files varying from 20 MB to 500 MB. Does this mean the redo log file is not using the entire 500 MB of space? What would be causing this, and how can we resolve it?
Some init parameter values.(just for information)
fast_start_mttr_target ----- 400
log_checkpoint_timeout ----- 0
log_checkpoint_interval ----- 0
fast_start_io_target ----- 0
There was a similar discussion a few days back:
log file switch before it filled up
The poster later claimed it was because of their log_buffer size. It remains a mystery to me still. (For what it's worth, a log switch can happen before the file is full: when forced with ALTER SYSTEM SWITCH LOGFILE, when ARCHIVE_LAG_TARGET is set, or, in RAC, when a switch is forced on a relatively idle instance; any of these produces an archive log smaller than the online redo log.)
Retrive User & Cookie Information Using Apache Access Log Files
Hi All
The following information is not showing up in the Apache access log files used with Oracle Application Server (10g 10.1.2), as given below:
1) User information
2) Cookie information
We are using the directives below in the httpd.conf file, as specified in the documentation link given below:
http://download-west.oracle.com/docs/cd/B31017_01/web.1013/q20201/logs.html#accesslog
LogFormat "%h %l %u %t \"%r\" %>s %b %v \"%{Referer}i\" \"%{User-Agent}i\" \"%{cookie}n\"" combined
Can anyone please tell me what has to be specified in the httpd.conf file to retrieve cookie and user information?
Thanks to all
Sona
Thanks for your reply.
Can you please check the link below for the cookie flag information:
http://download-west.oracle.com/docs/cd/B31017_01/web.1013/q20201/mod/mod_usertrack.html
For your information, I have logged in already.
Our sample output is given below:
151.146.191.186 - - [28/Dec/2006:10:13:05 +0530] "GET /Tab_files/lowerbox.gif HT
TP/1.1" 200 150 - "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows)"
We are using the below command format
LogFormat "%h %l %u %t \"%r\" %>s %b %{cookie}n \"%{Referer}i\" \"%{User-Agent}i\"" combined
But the user and cookie information is not displayed.
What steps should I follow?
Looking forward to a favourable reply,
Thanks -
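For what it's worth, the %{cookie}n item in that LogFormat logs a note that is only set by mod_usertrack, and only when cookie tracking is switched on; without that, it prints "-". A hedged httpd.conf sketch (the module path and cookie name are examples, and the module must actually be compiled into or loaded by your Oracle HTTP Server build):

```apache
# mod_usertrack must be loaded for the %{cookie}n note to exist
LoadModule usertrack_module modules/mod_usertrack.so

# switch on the tracking cookie (name is an example)
CookieTracking on
CookieName Apache

# %u is only filled in for authenticated requests (e.g. Basic auth);
# for anonymous requests it will stay "-"
LogFormat "%h %l %u %t \"%r\" %>s %b %{cookie}n \"%{Referer}i\" \"%{User-Agent}i\"" combined
CustomLog logs/access_log combined
```

That would explain the sample output above: with no authentication and no tracking cookie set, both %u and %{cookie}n come out as "-".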
Hi all,
how can I display a log file from the server using a Java applet?
rgds
geetha
Build a tail daemon using netcat or inetd, then have a socket from the applet connect to the service thus created, and display the output using a simple java.awt.Canvas or some other text displayer in AWT or Swing. Voilà. Just bear in mind that everyone else can then see your log file as well, so why not just remove that firewall?
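If netcat/inetd isn't available, the same tail daemon can be sketched in Python, with the same caveat that anyone who can reach the port can read the log. Host, port, and path below are made-up examples:

```python
import socket
import threading

def tail_bytes(path, lines=20):
    """Return the last `lines` lines of the file as bytes."""
    with open(path, "rb") as f:
        return b"".join(f.readlines()[-lines:])

def serve_tail(path, host="127.0.0.1", port=0, once=False):
    """Serve the tail of `path` to each client that connects; the
    applet side just reads until EOF and displays the text.
    Returns (bound_port, server_thread)."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    bound = srv.getsockname()[1]

    def loop():
        while True:
            conn, _ = srv.accept()
            conn.sendall(tail_bytes(path))
            conn.close()
            if once:          # single-shot mode, used for testing
                break
        srv.close()

    t = threading.Thread(target=loop, daemon=True)
    t.start()
    return bound, t
```

The applet then only needs a Socket, a loop reading until EOF, and a TextArea to show the result.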
-
Hi,
Can anyone tell me what's the real difference between a backup file and a redo log file/archived redo log file and the scenarios (examples) when each of them can be used (for example, "......") ? Both are used for database/instance recovery. I have read the concepts of there 2 terms, but I need some additional information about the difference.
Thanks!
Roger25 wrote:
What I still don't understand is how the redo mechanism works. I know redo logs record changes made to the database and are used to re-apply those changes in case of a system failure or crash, for example. OK, let's say I have a huge update, but when the system crash occurs, the transaction is neither committed nor rolled back. Then how is this redo useful? If a system crash occurs, aren't all those updates rolled back automatically, so that the database state is as it was before executing that huge update? So in this case, 'what is there to redo'?
No. With the system's crash, the transaction never gets a chance to be committed (or even rolled back), because a commit only happens when an explicit or implicit commit statement is issued. As for the redo: when a commit happens, a commit marker is written at the end of that transaction's redo stream, denoting that it is finally over, and the transaction's status is cleared from the transaction table stored in the undo segment that holds that transaction's undo information. If you have issued a huge update, the redo is very much required, because dirty buffers are not written to the data files all the time, unlike the redo log entries, which are written out at least every 3 seconds. This means there is a fair chance that many changes have not yet been propagated to the data files even though their change vectors are already in the redo log files. Now, for the sake of clarity, assume that the transaction actually was committed: without those changes being applied to the data files, how could Oracle show the committed results when the database is next opened? For this purpose, the redo change vectors are required.
As for uncommitted transactions: crash recovery first rolls forward by applying the redo change vectors, which brings the data files' checkpoint numbers in sync with the control file and redo log files (without which the database could never be opened), and then rolls the uncommitted changes back using the undo information.
HTH
Aman.... -
Question about how Oracle manages Redo Log Files
Good morning,
Assuming a configuration that consists of 2 redo log groups (Group A and B), each group consisting of 2 disks (Disks A1 & A2 for Group A and Disks B1 and B2 for group B). Further, let's assume that each redo log file resides by itself in a disk storage device and that the device is dedicated to it. Therefore in the above scenario, there are 4 disks, one for each redo log file and, each disk contains nothing else other than a redo log file. Furthermore, let's assume that the database is in ARCHIVELOG mode and that the archive files are stored on yet another different set of devices.
sort of graphically:
GROUP A    GROUP B
  A1         B1
  A2         B2
The question is: When the disks that comprise Group A are filled and Oracle switches to the disks in Group B, can the disks in Group A be taken offline, maybe even physically removed from the system if necessary, without affecting the proper operation of the database? Can the Archiver process be temporarily delayed until the disks (that were removed) are brought back online, or is the DBA forced to wait until the Archiver process has finished creating a copy of the redo log file in the archive?
Thank you for your help,
John.
Hello,
Dropping Log Groups
To drop an online redo log group, you must have the ALTER DATABASE system privilege. Before dropping an online redo log group, consider the following restrictions and precautions:
* An instance requires at least two groups of online redo log files, regardless of the number of members in the groups. (A group is one or more members.)
* You can drop an online redo log group only if it is inactive. If you need to drop the current group, first force a log switch to occur.
* Make sure an online redo log group is archived (if archiving is enabled) before dropping it. To see whether this has happened, use the V$LOG view.
SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;
GROUP# ARC STATUS
1      YES ACTIVE
2      NO  CURRENT
3      YES INACTIVE
4      YES INACTIVE
Drop an online redo log group with the SQL statement ALTER DATABASE with the DROP LOGFILE clause.
The following statement drops redo log group number 3:
ALTER DATABASE DROP LOGFILE GROUP 3;
When an online redo log group is dropped from the database, and you are not using the Oracle Managed Files feature, the operating system files are not deleted from disk. Rather, the control files of the associated database are updated to drop the members of the group from the database structure. After dropping an online redo log group, make sure that the drop completed successfully, and then use the appropriate operating system command to delete the dropped online redo log files.
When using Oracle-managed files, the cleanup of operating systems files is done automatically for you.
Your database won't be affected as long as at least two redo log groups remain: the minimum number of redo log groups required in a database is two, because the LGWR (log writer) process writes to the redo log files in a circular manner. Since you have only two groups, dropping one would leave the instance unable to continue. If you want to take one offline, first add a third group, force a log switch so that the new group becomes current, and then drop the one you want to take offline.
Please refer to:
http://download.oracle.com/docs/cd/B10500_01/server.920/a96521/onlineredo.htm#7438
Kind regards
Mohamed
Oracle DBA -
Monitoringhost.exe is locking a circular log file
Hi Team,
We have a circular log file which was monitored using rule.
The log file is archived once it reaches 10 MB. What we are finding recently is that the log file gets locked by MonitoringHost.exe and the new log file is not created.
When we try to move the file manually, we get a message saying it is locked. The only resolution is to stop the Health Service, move the file, and start the service again.
Can you please suggest whether there is any resolution for this?
We are on SCOM 2012 SP1 Ur2
RajKumar
Hi,
It is suggested to create a script to automatically stop the service, move the file, and start the service, since we should not make any changes to the locked file.
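A minimal sketch of such a script in Python. The service name, the paths, and the use of net stop/net start are assumptions (the SCOM agent service is typically called "HealthService", but verify on your systems); the `run` parameter is only there so the command sequence can be tested without touching real services:

```python
import shutil
import subprocess

SERVICE = "HealthService"        # assumed SCOM agent service name

def rotate_locked_log(src, dst, service=SERVICE, run=subprocess.check_call):
    """Stop the service that holds the lock, move the log file,
    then restart the service. `run` is injectable for testing."""
    run(["net", "stop", service])
    try:
        shutil.move(src, dst)
    finally:
        # always try to bring the service back up, even if the move fails
        run(["net", "start", service])

# usage (paths hypothetical, requires admin rights):
# rotate_locked_log(r"C:\logs\app.log", r"D:\archive\app.log")
```

Scheduled via Task Scheduler, this reproduces the manual stop/move/start workaround automatically.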
Regards,
Yan Li
-
Moving the log file of a publisher database SQL Server 2008
There are many threads on this here. Most of them not at all helpful and some of them wrong. Thus a fresh post.
This post regards SQL Server 2008 (10.0.5841)
The PUBLISHER database primary log file which is currently of 3 blocks and not extendable,
must be moved as the LUN is going away.
The database has several TB of data and a large number of push transactional replications as well as a couple of bi-directional replications.
While the primary log file is active, it is almost never (if ever) used due to its small fixed size.
We are in the 20,000 TPS range at peak (according to perfmon). This is a non-trivial installation.
This means that
backup/restore is not even a remotely viable option (it never is in the real world)
downtime minimization is critical - measured in minutes or less.
dismantling and recreating the replications is doable, but I have to say, I have zero trust in the script writer to generate accurate scripts. Many of these replications were originally set up in older versions of SQL Server and have come along for the
ride as upgrades have occurred. I consider scripting everything and dismantling the whole lot pretty high risk. In any case, I do not want to have to reinitialize any replications as this takes, effectively, an eternity.
Possible solution:
The only option I can think of is to wind down everything, such that there are zero outstanding uncommitted transactions and detach the database, delete the offending log file and reattach using the CREATE DATABASE xyz ATTACH_REBUILD_LOG option.
This should, if I have understood things correctly, cause SQL Server to recreate the default log file in the same directory as the .mdf file. I am not sure what will happen to the secondary log file which is not moving anywhere at this point.
The hard bit is insuring that every transaction in the active log files have been replicated before shutdown. This is probably doable. I do not know how to manually flush any left over transactions to replication. I expect if I shut down all "real"
activity and wait for a certain amount of time, eventually all the replications will show "No replicated transactions are available" and then I would be good to go.
Hillary, if you happen to be there, comments appreciated.
Hi Philip,
You should try the approach suggested long ago of stopping replication and then restoring the database and renaming it, or detaching and reattaching it:
http://social.msdn.microsoft.com/Forums/sqlserver/en-US/6731803b-3efa-4820-a303-4ffb7edf154a/detaching-a-replicated-database?forum=sqlreplication
Thanks
Saurabh Sinha
http://saurabhsinhainblogs.blogspot.in/
Please click the Mark as answer button and vote as helpful
if this reply solves your problem
I do not wish to be rude, but which part of the OP didn't you understand?
Specifically the bit about 20,000 transactions a second and database size of several TB. Do you have any concept whatsoever of what this means? I will answer for you, "no, you are clueless" as your answer clearly shows.
Stop wasting bandwidth by proposing pointless and wrong solutions which indicate that you did not read the OP, or do you just do this to generate points?
Also, you clearly failed to notice that I was on the thread to which you referred, and I had some pointed comments to make. This thread was an attempt to garner some input for an alternative proposal.