Zone log management
Hi all,
What would be a good best practice to manage zone logs?
I have thought about directing all zone logs to a central database, like MySQL. I'm not sure exactly how yet, but syslogging through an SSL tunnel might work. I don't want to store the logs in each zone; I want them sent directly to another log-zone. On Linux I can use capabilities to set the append-only flag on the logs, to keep them from being tampered with.
Anyone have a working setup, or a setup that could work?
I also need recommendations on GUI tools to analyze logs, preferably web based, and syslog to MySQL projects.
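On the Linux side, the append-only protection mentioned above is usually done with the filesystem attribute rather than capabilities directly; a sketch, with a hypothetical log path:

```
# Mark the log append-only: it can be written to but not truncated or edited.
chattr +a /var/log/zone-logs/zonea.log
# Confirm the attribute is set:
lsattr /var/log/zone-logs/zonea.log
```

Note that chattr +a needs root (CAP_LINUX_IMMUTABLE) and an attribute-capable filesystem such as ext3/ext4.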
Personally I'd use the remote loghost options which are available by default on the syslog daemon which is shipped with Solaris. The only thing you'll need to keep in mind is setting up the network, which could be a little tricky.
As for logfile scanning, I'm wondering if you wouldn't be better off setting up auditing in your global zone so it could audit your non-global zones.
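For the default Solaris syslogd route, a minimal sketch: in each non-global zone, forward messages to a loghost. The loghost name "logzone" and the severity selection are assumptions; adjust to taste.

```
# /etc/syslog.conf in each non-global zone -- fields must be tab-separated
*.info						@logzone
```

"logzone" must resolve (e.g. an /etc/hosts entry), and the receiving zone's syslogd must accept remote messages (on Solaris 10, LOG_FROM_REMOTE=YES in /etc/default/syslogd). Plain syslog is UDP and unencrypted, so the SSL-tunnel idea above would need something like stunnel, or a syslog replacement such as syslog-ng.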
Similar Messages
-
Hi
I would like to know what the enterprise solution is for Windows and application event log management and analysis.
I have recently done some research and found two applications that seem professional: 1. ManageEngine EventLog Analyzer, 2. SolarWinds LEM (Log & Event Manager).
I want to hear Microsoft experts' points of view and their experience and solutions.
Thanks in advance.
Consider MS System Center 2012.
Rgds -
I cannot get my Audit Log Management job to run. It fails because "Invalid audit archive path. Please check your setting." Where do I check the setting? I am running 5.0.3 with Collab 4.0. Thanks.
Just an FYI, with ZDM 6.5 HP3 the file name changed from AuditLog.txt to
ZRMAudit.txt still located under system32 on Windows XP.
Jim Webb
>>> On 5/22/2006 at 3:27 PM, in message
<[email protected]>,
Jim Webb<[email protected]> wrote:
> Well I found out the ZDM 6.5 HP2 fixes the problem of the log file not
> being
> created.
>
> Jim Webb
>
>>>> On 5/19/2006 at 8:37 AM, in message
> <[email protected]>,
> Jim Webb<[email protected]> wrote:
>> Well, it does show up in the event log but not in the inventory. If I
>> disable inventory the log file won't be deleted, correct?
>>
>> Jim Webb
>>
>>>>> On 5/18/2006 at 10:03 AM, in message
>> <[email protected]>, Marcus
>> Breiden<[email protected]> wrote:
>>> Jim Webb wrote:
>>>
>>>> I did a search on a machine I am remote controlling, no log file. What
>>>> next?
>>> good question... does the session show up in the eventlog? -
In the development and production environments, log management has been enabled for many of the functionalities and for most of the users. As a result, a lot of log files have been created (and are still being created), ending up occupying huge amounts of space. I could not find any relevant information in the PIM user or implementation guides, or in any Knowledge Base articles on the Oracle tech support side. Hence I need your inputs on the following:
• What is the standard log management practice in Oracle PIM?
• What is the standard process, procedure for archival, deletion of log files?
• How often is archival of log files done?
Regards,
Ram
+358 451172788
Please see the following MOS Docs.
R12 Product Information Management (PIM) Training [Video] (Doc ID 1498058.1)
Information Center - Oracle Fusion Product Information Management ( PIM ) (Doc ID 1353460.2)
Information Center - Troubleshooting Fusion Product Information Management (PIM) Applications. (Doc ID 1380507.2)
Information Center: Product Information Management (PIM) (Doc ID 1310505.2)
Guidelines and Product Definition Methodology for Oracle MDM Product Hub Integration (Doc ID 1086492.1)
Oracle Product Hub for Communications Readme Document, Release 12.1.1 (Doc ID 885359.1)
Thanks,
Hussein -
Security Audit Log SM19 and Log Management external tool
Hi all,
we are connecting an SAP ECC system to a third-party product for log management.
Our SAP system is composed of many application servers.
We have connected the external tool to the SAP central system.
The external product gathers data from the SAP Security Audit Log (SM19/SM20).
The problem is that, in the external tool, we only see the data available on the central system.
The mandatory parameters have been activated and the system has been restarted.
The strategy of the SAP Security Audit Log is to create a separate audit log file on each application server. Probably, only when SM20 is started are all the audit files from all application servers read and collected.
In our scenario, we do not use SM20, since we want to read the collected data in the external tool.
Is there a job to be scheduled (or something else) so that all Security Audit Log data (from all application servers) is available on the central instance?
Thanks in advance.
Andrea Cavalleri
I am always amazed at these questions...
For one, SAP provides an example report (RSAU_READ_AUDITLOG_EXTERNAL) that uses BAPIs for alerts from the audit log, yet 3rd-party solutions seem to be allergic to using APIs for some reason.
However, mainly I do not understand why people don't use the CCMS (tcode RZ20) security templates and monitor the log centrally from SolMan. You can do a million cool things in SolMan... but no...
Cheers,
Julius -
Warning Log Management
The date field is probably invalid on /usr/web/serveurs_web/bea_prod/wlserver6.1/config/workflow/logs/weblogic.log
line 14. Message ignored during search.
Hi Abhishek,
As I mentioned earlier, the alert resolution says the same points.
Can you give details on the below?
Is there really a log named "Dhcpadminevents" in the MS's Event Viewer?
Did you recently configure any new alert where you mentioned "Dhcpadminevents" as an event log location?
If yes, then what target did you select for the rule / monitor there?
Can you post the results for analysis?
Gautam.75801 -
Casting variables in Sentinel Log Manager
Hello! I have been trying to make a collector work to pick up events
from MySQL tables and when I run it on SIEM-Sentinel (classic Sentinel
6.1 on SQL Server and Sentinel-RD ), it works fine. However, I tried
using the same collector on Sentinel Log Manager 1.2, and it does not
pick up the events!
After some debugging, I found the culprit lines of code. I do a lot of
casting to make sure the variables are treated as integers or strings
(mostly when I need to manipulate it as a number, since String is the
default data type, I think) so I have lines that look like this in the
pre-parsing stage of the collector:
this.anInteger = Number(this.RXmap.col_anInteger);
this.aString = String(this.RXmap.col_aString);
But when debugging, while SIEM-Sentinel populates the variable as expected, in SLM I get "NaN" as the output value. I can see the values I want in RXBufferString, but the problem occurs when I cast them.
Maybe this isn't the "Sentinel" way of doing it? While I can do without
casting "String", how can I cast the value as an integer?
Any help is appreciated. Cheers!
Jean-Paul_GM
Jean-Paul_GM's Profile: http://forums.novell.com/member.php?userid=12809
View this thread: http://forums.novell.com/showthread.php?t=447842
Compare your Sentinel and Log Manager systems' inputs on that line of
code. I'm going to guess that the value in those variables which you
are trying to cast are not the same, causing the problem. NaN shouldn't
be the output if the input can be cast, but I'd also probably avoid
casting the value in the first place since I'm pretty sure that Sentinel
and Log Manager will handle that casting as necessary later (untested,
just assuming from a lack of having ever needed to do that myself in the
past, though I am not an expert collector writer). ECMAscript is not
strongly typed and while I've had issues in one cases where the type was
incorrect it was not a simple string/int type but rather a Date object
that the send() function expected to be present but was in fact not a
valid Date and so the sending failed while everything else (parsing,
etc.) seemed to function properly.
Also whenever you get [Object Object] in the debugger you should be able
to move over to the left-hand side where either 'this' or 'locals' will
show you something that allows you to drill down to the object and then
you can expand it. I do not think I have ever seen a case where the
object existed but could not be accessed as something listed directly
as, or as a child of, something present on the left-hand side section of
the debugger.
Good luck.
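On the casting point, a small defensive ECMAScript sketch; the field name (col_anInteger) is hypothetical, mirroring the question, and this is only one way to avoid propagating NaN into the event:

```javascript
// Defensive numeric cast: fall back to a default instead of propagating NaN.
// Number(undefined) yields NaN, while Number("") yields 0, so the inputs
// matter; an explicit fallback makes the collector's behavior predictable.
function toInt(value, fallback) {
  var n = Number(value);
  return isNaN(n) ? fallback : n;
}

toInt("42", 0);      // 42
toInt(" 42 ", 0);    // 42 (leading/trailing whitespace is tolerated)
toInt(undefined, 0); // 0 instead of NaN
```

If RXBufferString shows the raw value but Number() still yields NaN in Log Manager only, logging typeof and the raw value of the field in both systems should show where the inputs diverge.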
-
Log manager throughput considerations - No more than 3840K in-flight
Hi Experts,
As per the documentation about Log Manager
Log manager throughput considerations
Outstanding log writes: 32-bit=Limit of 8, 64-bit=Limit of 32
No more than 3840K “in-flight”
Individual write size varies
Up to 60KB in size
On 64-bit we can have 32 outstanding log writes, and one log write is at most 60 KB, so the maximum in-flight size should be 32 * 60 KB = 1920 KB. But the documentation says 3840 KB ["No more than 3840K in-flight"], double 1920 KB.
This is contradictory, I think.
Manish
Thanks for the help.
http://blogs.msdn.com/b/sqlcat/archive/2013/09/10/diagnosing-transaction-log-performance-issues-and-limits-of-the-log-manager.aspx
Limits of the Log Manager
Within the SQL Server engine there are a couple limits related to the amount of I/O that can be "in-flight" at any given time; "in-flight" meaning log data for which the Log Manager has issued a write and not yet received an acknowledgement that the write has
completed. Once these limits are reached the Log Manager will wait for outstanding I/O’s to be acknowledged before issuing any more I/O to the log. These are hard limits and cannot be adjusted by a DBA. The limits imposed by the log manager are based on conscious
design decisions founded in providing a balance between data integrity and performance.
There are two specific limits, both of which are per database.
1. Amount of "outstanding log I/O" Limit.
a. SQL Server 2008: limit of 3840K at any given time
2. Amount of Outstanding I/O limit.
a. SQL Server 2005 SP1 or later (including SQL Server 2008 ):
i. 64-bit: Limit of 32 outstanding I/O’s
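The two limits quoted above are independent caps, not derived from one another, which resolves the apparent contradiction in the question; a small sketch (the throttle function is illustrative only, not SQL Server internals):

```javascript
// Per-database limits quoted from the blog post above (SQL Server 2008, 64-bit).
const MAX_INFLIGHT_KB = 3840;   // cap on total un-acknowledged log bytes
const MAX_OUTSTANDING_IOS = 32; // cap on the number of un-acknowledged writes
const MAX_WRITE_KB = 60;        // largest single log write

// Illustrative throttle: a new write may be issued only if BOTH caps hold.
function canIssueWrite(inflightKB, inflightCount, nextWriteKB) {
  return inflightCount < MAX_OUTSTANDING_IOS &&
         inflightKB + nextWriteKB <= MAX_INFLIGHT_KB;
}

// With maximum-size writes, 32 * 60 KB = 1920 KB, so the count cap (32)
// binds first; the 3840 KB byte cap can only matter for other write patterns.
```

In other words, 3840 KB is not 32 writes times some write size; it is a separate byte budget that happens never to be reached when every write is the 60 KB maximum.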
Regards
Manish -
PU19 - Tax Reporter - W2 Simulation runs deletion from Log Manager
Hi,
We have used program rpctrdu0 to delete W2 simulation runs from PCL4 clusters. Now the question from the end user is why are the W2 simulation runs still showing up in the Log Manager in PU19? I am not even sure if we can delete the runs or records showing up in Log Manager.
Any help would be appreciated.
W-2 data (prod or simulation) is stored in cluster PCL4, whereas Log Manager info is stored in TemSe. I guess the Log Manager data is not meant to be stored permanently (i.e., it can be deleted regularly while the tax info remains in the cluster).
When you run RPCTRDU0 to delete W-2 info, only the data in the cluster is deleted; the TemSe remains untouched (as SAP probably assumes TemSe will be cleaned up routinely).
For HR TemSe, the lifetime is configured in feature TEMSE. If the TemSe deletion job is run routinely (we don't have it set up), it will delete all TemSe objects that have an expired date. The problem is that if you run this, it will not just purge the logs for the W-2 simulations you deleted, but also all others that already have an expired date.
What you can do is to identify the Temse files you want to delete, and manually delete through PU12. -
UCCX 8: Log Manager partial service
Hi all,
I'm running UCCX v8.02 SU3 in HA mode. On my master server, the Log Manager service always shows as partial service (even after a UCCX Serviceability restart), but it's working fine (In Service) on my slave server.
What does this impact when it's in partial service? Which logs should I be analysing to determine the cause?
Thanks!
-JT-
I have the same issue, except my master is fine and the secondary server shows Log Manager in partial service. TAC told me that was OK; I would like a better answer. My master server is on an MCS server and the secondary is on the UCS VM platform. Any help would be appreciated.
Thanks,
RQ -
Firewall Log Management Software
Can anyone recommend any proven firewall log management software?
Adam,
I suggest you to try ManageEngine Firewall Analyzer.
The product supports almost all the leading vendors in the industry. The product is segregated into three categories:
1.Traffic
2.Security
3.Management
1. Traffic Statistics:
This will give you complete information on the bandwidth transacted throughout the network, with multiple drill-down analyses such as Source, Destination, Protocol, Hits, Bytes Sent, Bytes Received, etc. You can even do capacity planning and forecasting with the product.
2. Security Statistics:
Security Statistics (Reports) will display all malicious events in your network. It will help you to know the various threats and attacks to the company from outside to inside and vice versa.
3. Management Statistics:
This will help you do audits and security configuration analysis, including change management and compliance reports. This will point out the loopholes in the network and assist you in fixing them.
Why Firewall Analyzer?
*Support for Firewall and security devices from multiple vendors
*Real-time bandwidth monitoring
*Employee internet usage with URL monitoring
*Real-time alerting
*Firewall Change Management reports
*Security Audit & Configuration Analysis reports
*Diagnose live connections
*Capability to view traffic trends and usage patterns (Capacity Planning)
*Powerful search for forensic and security analysis
*Multi-level drill down into top hosts, protocols, web sites and more
*Network security reports
*Firewall compliance reports
*Flexible and secured log data archiving
*Rebranding, User based views and dashboard for MSSP Support
and more
http://www.manageengine.com/products/firewall/features.html
I recommend you evaluate the fully functional 30-day evaluation copy and check whether it helps you achieve your use case.
Regards,
Vignesh.K
Firewall Analyzer -
I want to send FLEXCUBE logs on Splunk log management server. does anyone know how ?
On your Mac: Messages>Preferences>Accounts...adjust here.
On your iPad: Settings>Messages>Send/Receive...adjust here. -
Call log Manager..?
Hi All,
I am looking for mobile phone software (for Symbian, obviously) that can log all incoming and outgoing phone calls and transmit the log to a file server/mobile phone in real time (or with a small delay).
Ps: I found one but it's for SonyEricsson which is called Call Log Manager...
Thanks
Serhan
Hi parden,
You can also run: set logging enable
in CUCM CLI mode; this at least allows you to enable CLI admin logs.
And to check all the login details, run: show logins
Regards
Spooster -
Centralized event log management
Hi Experts,
Our requirement is configure the DCs and Servers to do a centralized event log management. Is there a default way of doing it? Is mapping the shared network drive and configuring the events to log in the shared network drive a suggested method?
I need your expert opinion.
Regards,
MPC
Hi,
You can use a centralized event-log management system as Meinolf mentioned. You can also use MMC (Microsoft Management Console) with several Event Viewer snap-ins, each with its focus set on one of the servers you need. Please refer to the following information:
1. Go to Start-> Run and type mmc
2. Click File-> Add/Remove Snap-In, then select the Add button.
3. In the window of available snap-ins select Event Viewer and then click Add.
4. In the Select Computer window, select the computer from which to get events and click finish.
5. Repeat this process for each server you want added to the MMC.
When finished, you should save the console so that the next time you open it, it keeps all the changes we made. To save the console, once you have added the server events, go to the File menu, select Save As, and enter a console name.
Also, the Event Comb tool (Eventcombmt.exe) will be helpful. It is a multi-threaded tool that can be used to gather specific events from the Event Viewer logs of different
computers at the same time. For more information, you can refer to the following link:
http://support.microsoft.com/kb/308471
Thanks.
Nina
This posting is provided "AS IS" with no warranties, and confers no rights. -
Is there anyway to limit the CPU usage of a zone reguardlesss of what the rest of the system is doing?
I've just now started looking at the resource management capabilities and I see that I can reallocate
CPU shares, but I'd like to say "ZoneA only gets 50% of the CPU" and even if the entire system is idle,
ZoneA still can use no more than 50%. Ordinarily I know that doesn't make much sense (if you've got CPU, use it), but I'm worried about heat. I'm debugging a problem where MySQL is running at 100% constantly... it's not choking the system at all, but I'd like to cut the maximum usable CPU down to about 10% and ease off some of the utilization of the CPUs.
Is this possible, or is adjusting the scheduling priorities all that I can do? As I understand it, I can set cpu-shares to 10, but if the system is idle and the process wants to run at 100, it can... the limits only kick in when there is contention from other higher-priority processes/projects.
Thanx
I would think this functionality would be required to obtain lower costs from ISVs who base their pricing on the number of CPUs available to their applications. This is one of the reasons my company...
One bit of functionality which was introduced in the Solaris Express 8/04 build, and can be downloaded now, is that when resource pools are enabled, a zone will only show the CPU elements that are part of the pool the zone is bound to. This means that programs like psrinfo(1M) and all of the various *stat(1M) commands, as well as APIs such as sysconf(3C) and getloadavg(3C), will return information based on the "virtualized" view of the pool.
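For a hard cap that holds even when the rest of the system is idle (unlike FSS cpu-shares), later Solaris releases added a capped-cpu resource to zonecfg; a sketch, assuming a release that supports it (Solaris 10 8/07 or newer) and a hypothetical zone named "zonea":

```
# Cap "zonea" at half of one CPU, even when the system is otherwise idle.
zonecfg -z zonea
zonecfg:zonea> add capped-cpu
zonecfg:zonea:capped-cpu> set ncpus=0.5
zonecfg:zonea:capped-cpu> end
zonecfg:zonea> exit
# Or apply on a running zone without reboot (value is percent of one CPU):
prctl -n zone.cpu-cap -v 50 -t privileged -r -i zone zonea
```

Because the cap is enforced regardless of competing load, it matches the heat-management goal above in a way that scheduling shares cannot.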