Alert & Audit Log Purging sample script

Hi Experts,
Can somebody point me to sample scripts for
1. alert and audit log purging?
2. listener log rotation?
I am sorry if these questions look too naive; I am new to DBA activities. Please let me know if more details are required.
As of now, the script needs to be independent of versions/platforms.
Regards,

34MCA2K2 wrote:
Thanks a lot for your reply!
If auditing is enabled in Oracle, does it generate an audit log file, or does it insert into a SYS-owned table?
Well, what do your "audit" initialization parameters show?
For the listener log "rotation", just rename listener.log to something else (there is an OS command for that), then bounce the listener.
You don't want to purge the alert log, you want to "rotate" it as well.  Just rename the existing file to something else. (there is an OS command for that)
So this has to be handled at the operating system level instead of with a utility. Also, if that is the case, does all this have to be done when the database is shut down?
No, the database does not have to be shut down to rotate the listener log.  The database doesn't give a flying fig about the listener log.
No, the database does not have to be shut down to rotate the alert log.  If the alert log isn't there when it needs to write to it, it will just start a new one.  BTW, beginning with 11g, there are two alert logs .. the old familiar one, now located at $ORACLE_BASE/diag/rdbms/$ORACLE_SID/$ORACLE_SID/trace, and the xml file used by adrci.  There are adrci commands and configurations to manage the latter.
Again, I leave the details as an exercise for the student to practice his research skills.
Please confirm my understanding.
Thanks in advance!
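
For anyone landing here later, here is a minimal shell sketch of both rotations. This is an illustration, not something from the thread: the paths assume common Unix defaults and the 11g diag layout, so adjust for your environment and test on a sandbox first.
#!/bin/sh
STAMP=`date +%Y%m%d`
# Rotate the listener log: pause logging, rename the file, resume. No listener bounce needed.
LSNR_LOG=$ORACLE_HOME/network/log/listener.log
lsnrctl set log_status off
mv $LSNR_LOG $LSNR_LOG.$STAMP
lsnrctl set log_status on
# Rotate the text alert log; the instance simply starts a new file on its next write.
ALERT=$ORACLE_BASE/diag/rdbms/$ORACLE_SID/$ORACLE_SID/trace/alert_$ORACLE_SID.log
mv $ALERT $ALERT.$STAMP
# Purge XML alert entries older than 7 days (10080 minutes) from the ADR (11g and later).
adrci exec="set home diag/rdbms/$ORACLE_SID/$ORACLE_SID; purge -age 10080 -type ALERT"
Fully version/platform-independent this is not (nothing is); on pre-11g homes the alert log lives under background_dump_dest instead.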

Similar Messages

  • Audit Log Trimming Timer Job stuck at "pausing" status

    Hi,
    We have a SharePoint 2010 farm and our Audit table is growing rapidly. I checked our "Audit log Trimming" timer job and it has been stuck at "pausing" status for more than a month. Any advice to resolve this issue would be great.
    Thanks,
    norasampang
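
    One thing worth trying before digging deeper: locate the job and force a run from PowerShell. A minimal sketch (the display-name match is an assumption; check what your farm calls the job):
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
    # Find the trimming job and queue an immediate run; RunNow bypasses the schedule.
    Get-SPTimerJob | Where-Object { $_.DisplayName -like "*Audit Log Trimming*" } |
        ForEach-Object { $_.RunNow() }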

    Hi Trevor,
    Do you think the reason the timer job is failing is that the audit log table is big and the audit timer job times out? I saw your reply at this post, where you mentioned:
    "It may be timing out. Have you executed it manually to see if it runs without errors?"
    Can you please explain in more detail what you meant by that? I was thinking of trying to trim the audit log in small batches using this script. Can you please let me know if this script looks right?
    $site = Get-SPSite -Identity http://sharepointsite.com
    $date = Get-Date
    $date = $date.AddDays(-1021)
    $site.Audit.DeleteEntries($date) 
    First I would like to delete all data older than 1021 days, and eventually get rid of the other logs in smaller chunks. Any advice and suggestions would be highly appreciated.
    Thanks,
    norasampang
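
    A follow-up on the batching idea: SPAudit.DeleteEntries removes everything up to the date you pass it, so you can walk the cutoff forward in slices. A sketch (the 1200-day starting point is an assumption about your oldest entries):
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
    $site = Get-SPSite -Identity http://sharepointsite.com
    # Delete in 30-day slices, oldest first, until only entries newer than 1021 days remain.
    for ($days = 1200; $days -ge 1021; $days -= 30) {
        $cutoff = (Get-Date).AddDays(-$days)
        $deleted = $site.Audit.DeleteEntries($cutoff)   # returns the number of entries removed
        Write-Host "Deleted $deleted entries older than $cutoff"
    }
    $site.Dispose()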

  • I got a warning in the audit log but I don't get any alerts...

    Hello,
    I have a sender File Adapter with FCC.
    I was curious how FCC would behave when an invalid file is used as the source file, so I deleted much of the content in the source file and let the File Adapter with FCC read it.
    In communication channel monitoring in RWB, I got this message in the audit log:
    "Empty document found - proceed without sending message".
    However, I cannot find this message anywhere else, either in Message Monitoring (RWB) or SXMB_MONI (SAP GUI).
    Also, I don't get any alerts.
    How do I get alerts when a file is not processed by the file adapter?
    Also, how do I view the alert view for this kind of error?
    Thank you.
    -Won

    Hi Won,
    I suppose that you have already checked that your alerts are set up correctly and that rules are defined for them.
    First, you might set the log level of the file adapter to debug - you should see the issue in the NWA log.
    Should PI try to generate an alert but fail, you would also see that here and could locate your problem.
    But in my opinion your chances of getting an alert in such a case are not very good. SAP decided for which issues they raise an alert and for which they do not.
    When you have an empty file in the file adapter, it seems that SAP thinks this is not too critical, and therefore they don't issue an alert.
    The reason why you don't see it anywhere else is that the adapter generates a message ID only when there is something to build a message from. An empty file means no message, so you cannot see it in the monitoring tools.
    best regards,
    Markus

  • Powershell script to get Audit log settings for all site collections.

    Hi all,
    I am facing an issue getting audit log details for all site collections across the farm with the script below. Could someone help me fix it?
    Function AuditValue($url) {
        $site = Get-SPSite $url
        $auditMask = $site.Audit.AuditFlags
        return $auditMask
    }
    Get-SPSite -Limit All | Get-SPWeb -Limit All |
        Select-Object Title, Url, @{Name="AuditValue"; Expression={AuditValue $_.Site.Url}} |
        Export-Csv "D:\scripts\Test\AuditDetails.csv" -NoTypeInformation
    Thanks Basva

    What errors are you getting, if any?
    Scrap that, I see a few.
    I have not had time to fix it fully, as I am now done at work, but this will help you on your way. At the moment it gets back only the audit flag value.
    Function AuditValue {
        $webs = Get-SPWeb "http://server" -Limit All
        foreach ($i in $webs) {
            $auditMask = $i.Audit
            $auditMask | Select-Object AuditFlags
        }
    }
    AuditValue | Out-File "C:\temp\AuditDetails.csv"
    EDIT::
    Function AuditValue {
        $webs = Get-SPWeb "http://SERVER" -Limit All
        foreach ($i in $webs) {
            $auditMask = $i.Audit
            $auditMask | Select-Object @{Name="URL"; Expression={$i.Url}}, AuditFlags
        }
    }
    AuditValue | Out-File "C:\temp\AuditDetails.csv"
    The above will also include the URL.
    If this is helpful please mark it so. Also, if this solved your problem, mark it as the answer.
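    For the farm-wide CSV the original post was after, a shorter sketch without the helper function (run from a shell with farm-level rights; Export-Csv produces proper CSV, unlike Out-File):
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
    # One row per site collection: URL plus its audit flag mask.
    Get-SPSite -Limit All |
        Select-Object Url, @{Name="AuditFlags"; Expression={$_.Audit.AuditFlags}} |
        Export-Csv "D:\scripts\Test\AuditDetails.csv" -NoTypeInformation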

  • Generating Audit log report using PowerShell script

    Hi All,
    I have a requirement to generate the audit log report for a document library / custom list, e.g.:
    1) Who downloaded what, and when, for the site
    2) The respective username and date/time
    3) The URL of the document / subsite name, etc.
    If possible, how do I automate the process on a weekly basis?
    I know it can be done through the OOB audit log reports.
    Can anyone help with this?
    Below is the URL I had for reference: http://social.technet.microsoft.com/wiki/contents/articles/23900.get-audits-for-a-sharepoint-document-using-powershell.aspx
    Vijaivel

    Hi Peter,
    Thanks for your reply. The URLs are good, but I have limited access, i.e. I am not a site collection administrator, so I will not have access to the Site Collection Administration section. The only option I have is the Site Collection Web Analytics report. Is it possible to achieve this with that option, or is there another workaround?
    Please suggest any other option for an automated notification process.
    Thanks
    Vijaivel
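
    For reference, if someone with site-collection admin rights can run it, a minimal sketch of pulling audit entries for one library through the object model (the site URL, subsite, and library name are placeholders):
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
    $site = Get-SPSite "http://sharepointsite.com"
    $web  = $site.OpenWeb("subsite")
    $list = $web.Lists["Policies"]
    # Restrict the audit query to the one library, then dump who did what, and when.
    $query = New-Object Microsoft.SharePoint.SPAuditQuery($site)
    $query.RestrictToList($list)
    $site.Audit.GetEntries($query) |
        Select-Object Occurred, UserId, Event, DocLocation |
        Export-Csv "D:\AuditReport.csv" -NoTypeInformation
    $web.Dispose(); $site.Dispose()
    Scheduled with Windows Task Scheduler, this would also cover the weekly automation requirement.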

  • Sample scripts for streams setup: source 9i --> destination 10g

    I need to set up Streams from 9i to 10g (both on Windows).
    I successfully tried setting it up 9i-->9i (using OEM, following Oracle's sample) and
    10g-->10g (http://www.oracle.com/technology/obe/obe10gdb/integrate/streams/streams.htm#t6, which uses scripts).
    I need to implement Streams from 9i to 10g. The problem is that the packages used in the 10g demo are not available in 9i.
    Do we have a sample script to implement Streams across 9i-->10g?

    Thanks Arvind, that would be really great. I am trying to build a demo, running the demo scripts on the dept table; I have been trying for a month. I moved my 9.2.0.1.0 source to 9.2.0.7, then applied patch set 3 for 9.2.0.7, as I learned there was a bug with Streams between 9i and 10g:
    bug no: 4285404 - PROPROGATION FROM 9.2 AND 10.1 TO 10.2
    Note: I executed the same script with step 4.2.2 and not 4.2.1 (it is optional), because when I tried export then import, and then tried to delete the supplemental log group from the target, it said "trying to drop non existent group".
    Also, when I query the capture process it shows LCRs getting queued, and propagation shows data is propagated from the source, but apply has no errors and shows 0 for transactions assigned as well as applied.
    It looks like the destination queue is not getting populated even though propagation from the source is successful.
    Please find
    1. scripts
    2. init parameters of 9i (source)
    3. init parameters of 10g (target)
    SCRIPT:
    2.1 Create Streams Administrator :
    connect SYS/password as SYSDBA
    create user STRMADMIN identified by STRMADMIN;
    2.2 Grant the necessary privileges to the Streams Administrator :
    GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE to STRMADMIN;
    GRANT SELECT ANY DICTIONARY TO STRMADMIN;
    GRANT EXECUTE ON DBMS_AQ TO STRMADMIN;
    GRANT EXECUTE ON DBMS_AQADM TO STRMADMIN;
    GRANT EXECUTE ON DBMS_FLASHBACK TO STRMADMIN;
    GRANT EXECUTE ON DBMS_STREAMS_ADM TO STRMADMIN;
    GRANT EXECUTE ON DBMS_CAPTURE_ADM TO STRMADMIN;
    GRANT EXECUTE ON DBMS_APPLY_ADM TO STRMADMIN;
    GRANT EXECUTE ON DBMS_RULE_ADM TO STRMADMIN;
    GRANT EXECUTE ON DBMS_PROPAGATION_ADM TO STRMADMIN;
    BEGIN
    DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => 'ENQUEUE_ANY',
    grantee => 'STRMADMIN',
    admin_option => FALSE);
    END;
    BEGIN
    DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => 'DEQUEUE_ANY',
    grantee => 'STRMADMIN',
    admin_option => FALSE);
    END;
    BEGIN
    DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => 'MANAGE_ANY',
    grantee => 'STRMADMIN',
    admin_option => TRUE);
    END;
    BEGIN
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT_OBJ,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_RULE_OBJ,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    END;
    BEGIN
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_ANY_RULE_SET,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.ALTER_ANY_RULE_SET,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.EXECUTE_ANY_RULE_SET,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_ANY_RULE,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.ALTER_ANY_RULE,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.EXECUTE_ANY_RULE,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    END;
    BEGIN
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.EXECUTE_ANY_EVALUATION_CONTEXT,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    END;
    2.3 Create streams queue :
    connect STRMADMIN/STRMADMIN
    BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'STREAMS_QUEUE_TABLE',
    queue_name => 'STREAMS_QUEUE',
    queue_user => 'STRMADMIN');
    END;
    2.4 Add apply rules for the table at the destination database :
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name => 'SCOTT.DEPT',
    streams_type => 'APPLY',
    streams_name => 'STRMADMIN_APPLY',
    queue_name => 'STRMADMIN.STREAMS_QUEUE',
    include_dml => true,
    include_ddl => true,
    source_database => 'str1');
    END;
    2.5 Specify an 'APPLY USER' at the destination database:
    This is the user who would apply all DML statements and DDL statements.
    The user specified in the APPLY_USER parameter must have the necessary
    privileges to perform DML and DDL changes on the apply objects.
    BEGIN
    DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name => 'STRMADMIN_APPLY',
    apply_user => 'SCOTT');
    END;
    2.6 If you do not wish the apply process to abort for every error that it
    encounters, you can set the parameter below.
    The default value is 'Y' which means that apply process would abort due to
    any error.
    When set to 'N', the apply process will not abort for any error that it
    encounters, but the error details would be logged in DBA_APPLY_ERROR.
    BEGIN
    DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'STRMADMIN_APPLY',
    parameter => 'DISABLE_ON_ERROR',
    value => 'N' );
    END;
    2.7 Start the Apply process :
    BEGIN
    DBMS_APPLY_ADM.START_APPLY(apply_name => 'STRMADMIN_APPLY');
    END;
    Section 3
    Steps to be carried out at the Source Database (V920.IDC.ORACLE.COM)
    3.1 Move LogMiner tables from SYSTEM tablespace:
    By default, all LogMiner tables are created in the SYSTEM tablespace.
    It is a good practice to create an alternate tablespace for the LogMiner
    tables.
    CREATE TABLESPACE LOGMNRTS DATAFILE 'logmnrts.dbf' SIZE 25M AUTOEXTEND ON
    MAXSIZE UNLIMITED;
    BEGIN
    DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS');
    END;
    3.2 Turn on supplemental logging for DEPT table :
    connect SYS/password as SYSDBA
    ALTER TABLE scott.dept ADD SUPPLEMENTAL LOG GROUP dept_pk
    (deptno) ALWAYS;
    3.3 Create Streams Administrator and Grant the necessary privileges :
    Repeat steps 2.1 and 2.2 for creating the user and granting the required
    privileges.
    3.4 Create a database link to the destination database :
    connect STRMADMIN/STRMADMIN
    CREATE DATABASE LINK str2 connect to
    STRMADMIN identified by STRMADMIN using 'str2' ;
    -- The DB link is working fine; I tested it.
    3.5 Create streams queue:
    BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_name => 'STREAMS_QUEUE',
    queue_table =>'STREAMS_QUEUE_TABLE',
    queue_user => 'STRMADMIN');
    END;
    3.6 Add capture rules for the table at the source database:
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name => 'SCOTT.DEPT',
    streams_type => 'CAPTURE',
    streams_name => 'STRMADMIN_CAPTURE',
    queue_name => 'STRMADMIN.STREAMS_QUEUE',
    include_dml => true,
    include_ddl => true,
    source_database => 'str1');
    END;
    3.7 Add propagation rules for the table at the source database.
    This step will also create a propagation job to the destination database.
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name => 'SCOTT.DEPT',
    streams_name => 'STRMADMIN_PROPAGATE',
    source_queue_name => 'STRMADMIN.STREAMS_QUEUE',
    destination_queue_name => 'STRMADMIN.STREAMS_QUEUE@str2',
    include_dml => true,
    include_ddl => true,
    source_database => 'str1');
    END;
    Section 4
    Export, import and instantiation of tables from Source to Destination Database
    4.1 If the objects are not present in the destination database, perform an
    export of the objects from the source database and import them into the
    destination database
    Export from the Source Database:
    Specify the OBJECT_CONSISTENT=Y clause on the export command.
    By doing this, an export is performed that is consistent for each
    individual object at a particular system change number (SCN).
    exp USERID=SYSTEM@str1 TABLES=SCOTT.DEPT FILE=tables.dmp
    GRANTS=Y ROWS=Y LOG=exportTables.log OBJECT_CONSISTENT=Y
    INDEXES=Y STATISTICS = NONE
    Import into the Destination Database:
    Specify STREAMS_INSTANTIATION=Y clause in the import command.
    By doing this, the streams metadata is updated with the appropriate
    information in the destination database corresponding to the SCN that
    is recorded in the export file.
    imp USERID=SYSTEM@str2 FULL=Y CONSTRAINTS=Y
    FILE=tables.dmp IGNORE=Y GRANTS=Y ROWS=Y COMMIT=Y LOG=importTables.log
    STREAMS_INSTANTIATION=Y
    4.2 If the objects are already present in the destination database, there are
    2 ways of instantiating the objects at the destination site.
    1. By means of metadata-only export/import:
    Export from the source database by specifying ROWS=N
    exp USERID=SYSTEM@str1 TABLES=SCOTT.DEPT FILE=tables.dmp
    ROWS=N LOG=exportTables.log OBJECT_CONSISTENT=Y
    Import into the destination database using IGNORE=Y
    imp USERID=SYSTEM@str2 FULL=Y FILE=tables.dmp IGNORE=Y
    LOG=importTables.log STREAMS_INSTANTIATION=Y
    2. By manually instantiating the objects
    Get the Instantiation SCN at the source database:
    connect STRMADMIN/STRMADMIN@source
    set serveroutput on
    DECLARE
    iscn NUMBER; -- Variable to hold instantiation SCN value
    BEGIN
    iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
    DBMS_OUTPUT.PUT_LINE ('Instantiation SCN is: ' || iscn);
    END;
    Instantiate the objects at the destination database with this SCN value.
    The SET_TABLE_INSTANTIATION_SCN procedure controls which LCRs for a table
    are to be applied by the apply process.
    If the commit SCN of an LCR from the source database is less than or
    equal to this instantiation SCN, the apply process discards the LCR;
    otherwise, the apply process applies it.
    connect STRMADMIN/STRMADMIN@destination
    BEGIN
    DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name => 'SCOTT.DEPT',
    source_database_name => 'str1',
    instantiation_scn => &iscn);
    END;
    Enter value for iscn:
    <Provide the value of SCN that you got from the source database>
    Finally start the Capture Process:
    connect STRMADMIN/STRMADMIN@source
    BEGIN
    DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'STRMADMIN_CAPTURE');
    END;
    INIT.ora at 9i
    # Copyright (c) 1991, 2001, 2002 by Oracle Corporation
    # Archive
    log_archive_dest_1='LOCATION=D:\oracle\oradata\str1\archive'
    log_archive_format=%t_%s.dbf
    log_archive_start=true
    # Cache and I/O
    db_block_size=8192
    db_cache_size=25165824
    db_file_multiblock_read_count=16
    # Cursors and Library Cache
    open_cursors=300
    # Database Identification
    db_domain=""
    db_name=str1
    # Diagnostics and Statistics
    background_dump_dest=D:\oracle\admin\str1\bdump
    core_dump_dest=D:\oracle\admin\str1\cdump
    timed_statistics=TRUE
    user_dump_dest=D:\oracle\admin\str1\udump
    # File Configuration
    control_files=("D:\oracle\oradata\str1\CONTROL01.CTL", "D:\oracle\oradata\str1\CONTROL02.CTL", "D:\oracle\oradata\str1\CONTROL03.CTL")
    # Instance Identification
    instance_name=str1
    # Job Queues
    job_queue_processes=10
    # MTS
    dispatchers="(PROTOCOL=TCP) (SERVICE=str1XDB)"
    # Miscellaneous
    aq_tm_processes=1
    compatible=9.2.0.0.0
    # Optimizer
    hash_join_enabled=TRUE
    query_rewrite_enabled=FALSE
    star_transformation_enabled=FALSE
    # Pools
    java_pool_size=33554432
    large_pool_size=8388608
    shared_pool_size=100663296
    # Processes and Sessions
    processes=150
    # Redo Log and Recovery
    fast_start_mttr_target=300
    # Security and Auditing
    remote_login_passwordfile=EXCLUSIVE
    # Sort, Hash Joins, Bitmap Indexes
    pga_aggregate_target=25165824
    sort_area_size=524288
    # System Managed Undo and Rollback Segments
    undo_management=AUTO
    undo_retention=10800
    undo_tablespace=UNDOTBS1
    firstspare_parameter=50
    jobqueue_interval=1
    aq_tm_processes=1
    transaction_auditing=TRUE
    global_names=TRUE
    logmnr_max_persistent_sessions=5
    log_parallelism=1
    parallel_max_servers=2
    open_links=5
    INIT.ora at 10g (target)
    # Copyright (c) 1991, 2001, 2002 by Oracle Corporation
    # Archive
    log_archive_format=ARC%S_%R.%T
    # Cache and I/O
    db_block_size=8192
    db_cache_size=25165824
    db_file_multiblock_read_count=16
    # Cursors and Library Cache
    open_cursors=300
    # Database Identification
    db_domain=""
    db_name=str2
    # Diagnostics and Statistics
    background_dump_dest=D:\oracle\product\10.1.0\admin\str2\bdump
    core_dump_dest=D:\oracle\product\10.1.0\admin\str2\cdump
    user_dump_dest=D:\oracle\product\10.1.0\admin\str2\udump
    # File Configuration
    control_files=("D:\oracle\product\10.1.0\oradata\str2\control01.ctl", "D:\oracle\product\10.1.0\oradata\str2\control02.ctl", "D:\oracle\product\10.1.0\oradata\str2\control03.ctl")
    db_recovery_file_dest=D:\oracle\product\10.1.0\flash_recovery_area
    db_recovery_file_dest_size=2147483648
    # Job Queues
    job_queue_processes=10
    # Miscellaneous
    compatible=10.1.0.2.0
    # Pools
    java_pool_size=50331648
    large_pool_size=8388608
    shared_pool_size=83886080
    # Processes and Sessions
    processes=150
    sessions=4
    # Security and Auditing
    remote_login_passwordfile=EXCLUSIVE
    # Shared Server
    dispatchers="(PROTOCOL=TCP) (SERVICE=str2XDB)"
    # Sort, Hash Joins, Bitmap Indexes
    pga_aggregate_target=25165824
    sort_area_size=65536
    # System Managed Undo and Rollback Segments
    undo_management=AUTO
    undo_tablespace=UNDOTBS1
    sga_target=600000000
    parallel_max_servers=2
    global_names=TRUE
    open_links=4
    logmnr_max_persistent_sessions=4
    REMOTE_ARCHIVE_ENABLE=TRUE
    streams_pool_size=300000000
    undo_retention=1000
    thanks a lot...
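    Given the symptom described above (capture and propagation look fine, apply shows 0 transactions), a few dictionary queries may help localize where the LCRs stop. A sketch, run as STRMADMIN:
    -- On the 9i source: does the propagation schedule report failures?
    SELECT qname, destination, failures, last_error_msg
    FROM dba_queue_schedules;
    -- On the 10g target: apply status and any logged errors.
    SELECT apply_name, status FROM dba_apply;
    SELECT apply_name, error_message FROM dba_apply_error;
    -- Rough check of what is actually sitting in the target queue table.
    SELECT count(*) FROM strmadmin.streams_queue_table;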

  • Audit Log query

    I am trying to figure out why a query of the OID audit logs is taking so long....
    the search filter is:
    (&(orcleventtime>=20070426)(orcleventtime<=20070427)(orcleventtype=User login))
    it takes 97 seconds to return 1622 entries.
    When I run a query with this filter...
    (&(orcleventtime>=20070426)(orcleventtype=User login))
    it takes 0.2 seconds
    any ideas?

    Purging the AUD$ table is a good idea after taking an export....
    Yeah... it could be an even better idea to audit the things that the application skips...
    I was just getting calls from the finance and operations departments complaining that their ERP applications were hanging and taking a long time (around 20 to 30 minutes) to execute day-end procedures and reports. I recalled that my last deployment on live had been enabling auditing; as soon as I executed noaudit all and noaudit select, update, delete, insert on erp, the users got their day-end procedures and reports done in less than 1 minute...
    Can anybody explain: does auditing degrade performance..?
    Regards
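
    To make the purge-after-export idea concrete, a minimal sketch (on 9i the AUD$ timestamp column is TIMESTAMP#; NTIMESTAMP# replaced it in 10g, so check your version before running):
    -- Archive first, e.g.: exp ... tables=SYS.AUD$ file=aud.dmp
    -- Then purge everything older than 90 days.
    DELETE FROM sys.aud$ WHERE ntimestamp# < SYSTIMESTAMP - INTERVAL '90' DAY;
    COMMIT;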

  • Audit log for document library

    Hi All,
    I have a requirement to generate a report for a document library which contains confidential “Policies” documents; this library exists under a subsite. Now my client wants a log report, which should give information like who accessed/downloaded/modified each document.
    Please guide me.
    MercuryMan

    You need to first enable the audit log reports menu in SharePoint 2010. Please follow the steps given below.
    1. Site Actions > Site Settings
    2. Site collection features
    3. Search for the Reporting feature
    4. Click on the Activate button
    Otherwise, you could try an automated solution named Lepide Auditor for SharePoint (http://www.lepide.com/lepideauditor/sharepoint.html), which helps generate reports and provides the auditing data in real time. It raises real-time alerts on detecting changes to users, groups, lists, libraries, folders, permissions, etc.
    Lepide - Simplifying IT Management

  • Customizing Audit Log Report - Adding/Removing Columns from Display

    Hi All -
    Has anyone tried adding/removing the columns of the out-of-box Audit Log report with minor customizations to the code/configuration files? Right now, when the Audit Log report is executed, there are a number of columns that appear on the report (Server, Client IP, etc.) which are too technical for the client, and the requirement is to remove some of these and add some more for the attributes that we are audit logging through the Audit workflow service. If you have done something similar in the past, please provide me with some inputs. Any sample code or examples will be highly appreciated.

    Hello Gurus,
    I also have the same kind of requirement. We have to send a monthly report to the customer with the number of users created and deleted.
    It's urgent. Please help.
    Thanks in advance

  • Security Audit Log SM19 and Log Management external tool

    Hi all,
    we are connecting an SAP ECC system with a third-party product for log management.
    Our SAP system is composed of many application servers.
    We have connected the external tool to the SAP central system.
    The external product gathers data from the SAP Security Audit Log (SM19/SM20).
    The problem is that in the external tool we only see the data available in the central system.
    The mandatory parameters have been activated and the system has been restarted.
    The strategy of the SAP Security Audit Log is to create separate audit log files for each application server. Probably, only when SM20 is started are all audit files from all application servers read and collected.
    In our scenario, we do not use SM20, since we want to read the collected data in the external tool.
    Is there a job to be scheduled (or something else) in order to have all Security Audit Logs (from all application servers) available in the central instance?
    Thanks in advance.
    Andrea Cavalleri

    I am always amazed at these questions...
    For one, SAP provides an example report (RSAU_READ_AUDITLOG_EXTERNAL) that uses BAPIs for alerts from the audit log, yet 3rd-party solutions seem to be allergic to using APIs for some reason.
    However, mainly I do not understand why people don't use the CCMS (tcode RZ20) security templates and monitor the log centrally from SolMan. You can do a million cool things in SolMan... but no...
    Cheers,
    Julius

  • Abrupt increase in alert<SID>.log file size

    The alert<SID>.log file is abruptly increasing in size, which in the process fills up the disk space, after which no further DB logins are possible.
    I shut down the database, took a backup of the alert log file, nullified the alert log (using cat /dev/null > alert.log) and started up the database.
    As of now it's okay, but can I nullify this alert log file while the database is up and running..???

    It is better to write a simple shell script to housekeep the alert.log; this can run while the database is up, since redirecting /dev/null truncates the file in place and the instance just keeps appending to it.
    Below is an example:
    # Rotate the alert log once it grows beyond ~2.5 MB; keep 10 days of gzipped copies.
    if [ `ls -l $ALERTLOG | awk '{print $5}'` -gt 2500000 ]
    then
    cp -p $ALERTLOG $ALERTLOG.`date +%d%m%y`
    cat /dev/null > $ALERTLOG
    gzip $ALERTLOG.`date +%d%m%y`
    find $ALERTLOGFOLDER -name "*.gz" -mtime +10 -print -exec rm {} \;
    fi
    Also, you need to housekeep the adump, bdump, cdump ... etc. folders.

  • Adding entries in Audit Log Tab in Component Monitoring under Runtime Workbench

    Hello Experts,
    I am trying to add my own audit log entries to the Audit Log tab under Runtime Workbench -> Component Monitoring. I found this SAP help link (http://help.sap.com/saphelp_nwpi71/helpdata/en/3b/6fe540b1278631e10000000a1550b0/frameset.htm). I am not sure if I am going in the right direction or not, but when I tried to use the code in my user-defined function in message mapping, it gave me a Java error on PublicAPIAccess.
    Can anyone please let me know what I am doing wrong, or whether I am going in a totally wrong direction to achieve my goal.
    I am using PI 7.1 without EP1 and my example uses the File adapter.
    Thanks!!

    Hi,
    If you are trying to add custom audit log entries for system monitoring in RWB component monitoring, then I think it is not feasible. For that, you can ask your Basis team to configure CCMS in your XI system to receive alerts for your system.
    If you are trying to add custom audit log messages for your message processing, then you should develop a custom J2EE adapter module and add your audit log entries in the process method of your adapter module.
    Regards,
    Rajeev Gupta
    Edited by: RAJEEV GUPTA on May 6, 2009 7:12 AM

  • Oblix audit logs to track last login time in Sun DS

    Hi,
    I would like to use oblix audit logs to track last login time in Sun DS.
    Is there a straightforward procedure to do that, other than parsing the logs and using a custom script to update Sun DS?
    Please advise.
    Thanks.

    Hi,
    In OAM you can define your own plugins to run during authentication (you include them in the relevant authentication schemes) - you could write one that updates the user profile of the logged-in user. You would be pretty much on your own, though; all that OAM would give you is the DN of the logged-in user. You would need to include libraries that connect to LDAP (or maybe the plugin could send a web service call) and perform the necessary attribute updates. Authn plugins are documented in the Developer Guide: http://docs.oracle.com/cd/E15217_01/doc.1014/e12491/authnapi.htm#BABDABCG (actually that's for 10.1.4.3).
    Regards,
    Colin

  • Audit logs for read operation on tables

    I have a requirement to implement audit logs for tables on read/select operations, in addition to insert, update and delete operations. Is there any way to achieve this, since triggers exist only for insert, update and delete?
    thanks in advance

    Hi,
    Yes, there are many ways you can audit the source database according to your requirements. As you need to audit select, insert, etc., you can audit in several ways:
    1) By implementing policies, i.e. FGA, or a statement policy on a given table or a given user.
    2) You can also do the required task by implementing alerts on specific conditions, like a select on a specific table, etc.
    You can use these utilities from the AV console.
    Regards.
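
    For the SELECT case specifically, fine-grained auditing is the standard tool. A sketch with DBMS_FGA (schema and table names are placeholders):
    BEGIN
      DBMS_FGA.ADD_POLICY(
        object_schema   => 'SCOTT',
        object_name     => 'EMP',
        policy_name     => 'EMP_SELECT_AUDIT',
        audit_condition => NULL,      -- NULL audits every statement
        audit_column    => NULL,      -- NULL audits all columns
        statement_types => 'SELECT');
    END;
    /
    -- Audited reads then appear in the FGA trail:
    SELECT db_user, timestamp, sql_text FROM dba_fga_audit_trail;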

  • ODSI 10Gr3 audit logs common/time question

    Hi
    With ODSI 10gR3, we are investigating delays in the processing of some DB2 inserts.
    Inserts occur daily, but the problem happens maybe once a week.
    A review of the audit log during the problem occurrence shows the following:
    common/time is taking 33 seconds
    common/time {
    timestamp: Mon Mar 01 10:21:36 PST 2010
    duration: 33323 }
    with compile time being ~14 sec and insert time being ~4 seconds.
    Is it possible that things such as a full GC occurring can impact this time?
    We increased the tx timeout to 120 seconds to avoid the timeout, but would like to investigate this further.
    Thanks Much for any info
    Best
    ####<Mar 1, 2010 10:22:09 AM PST> <Info> <ODSI> <qa-sc-eibapp02.corp.test.com> <ds_ms2> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <service.uateibsads> <> <> <1267467729490> <BEA-000000> <ClientDataspace> <DSPAuditEvent timestamp: Mon Mar 01 10:21:36 PST 2010 severity: FAILURE id: ClientDataspace:48:Mon Mar 01 10:21:36 PST 2010 {
    common/application {
    user: service.uateibsads
    name: ClientDataspace
    eventkind: update
    server: ds_ms2
    query/performance {
    compiletime: 14869
    update/relational {
    source: CBS_DB2_DS
    sql: INSERT INTO "S109935E"."C@CR538"."C@CUNEW01" ("C@STATUS", "C@SEQNBR", "CUBK", "CUNBR", "CUSTAT",
    "CUALT", "CUNA1", "CUNA2", "CUNA3", "CUNA4", "CUNA5", "CUNA6", "CUZIP", "CUZIP2", "CUZIP3",
    "CUZIP4", "CUSHRT", "CUSSNR@", "CUSSTY", "CUCLPH", "CUHMPH", "CUBUPH", "CUPOFF", "CUSOFF",
    "CUPOF1", "CUPOF2", "CUOPDT", "CUTYPE", "CUTYP", "CUSIC", "CUSEX", "CURACE", "CUOWN", "CUYREM",
    "CUINC", "CUSRIN", "CUBDTE", "CUDEP", "CUCTC", "CUCTCT", "CUCIRA", "CUMNBR", "CUNTID", "CUUSR1",
    "CUCLNK", "CUUSR3", "CUCDCH", "CUCDCN", "CUCDCD", "CUCMCH", "CUCMNR", "CUCMCD", "CUCVSH",
    "CUCVCN", "CUCVCD", "CUCATH", "CUCATN", "CUCATD", "CUCLNG", "CUCCCD", "CULGLR", "CUCWHP",
    "CUCPSP", "CUCTXN", "CUCPRF", "CUSHKY", "CUITLD", "CUPSTL", "CUACOM", "CUBRCH", "CUMIDT@",
    "CUMRTS", "CUMAIL", "CUSOLI", "CUSOCI", "CUCPNA", "CUBPNA", "CUPERS", "CUSALU", "CUFAX",
    "CUTELX", "CUTXAN", "CUDOCF", "CUDCDT", "CUTINU", "CUTADT@", "CUWPRT", "CUCECD", "CUCELM",
    "CUEXTF", "CUMTND", "CUCNCD", "CUMARK", "CUEMPL", "CUINQ", "CUMNT", "CUCENS", "CUCODT", "CUDEDT",
    "CUACCD", "CUBYR1", "CUBYR2", "CUPREF", "CUJBDT", "CUJDDT", "CUEMA1", "CUEMA2", "CUOPT",
    "CUOPTD", "CUSPFG", "CUOENTTYP", "CUCENTTYP", "CUAMDT", "CUESDT", "CUENA1", "CUENA2", "CUENA3",
    "CUENA4", "CUENA5", "CUENA6", "CUEPST", "CUEZIP", "CUEZIS", "CUEZP3", "CUEZP4", "CUAAPL",
    "CUAAKY", "CUAREC", "CUASTA", "CUANA1", "CUANA2", "CUANA3", "CUANA4", "CUANA5", "CUANA6",
    "CUAZP1", "CUAZP2", "CUAZP3", "CUAZP4", "CUAPSD", "CUASTR", "CUASTP", "CUASTS", "CUAFLG",
    "CUARFG", "CUCLS", "CURISK", "CURDT1", "CURSK2", "CURDT2", "CUCRLN", "CUCRDT", "CUCRFR",
    "CUCRND", "CUCRPR", "CUFSDT", "CUFSFR", "CUFSND", "CUSALE", "CUCSTS", "CUNETI", "CUPRJI",
    "CUASST", "CUCURA", "CUCASH", "CUACCR", "CUMKTS", "CUREAL", "CULIFE", "CUINVN", "CUFIXA",
    "CULIAB", "CUCURL", "CULTRM", "CUNETW", "CUDIRL", "CUINDL", "CUDIRT", "CUINDT", "CUREDB",
    "CULCRO", "CUOTHA", "CUOTHL", "CU5WHP", "CUIWHY", "CUWHEX", "CUFILL", "CUREC1", "CUSTAD",
    "CUFRN1", "CUCHIB", "CUCHID", "CUCLOB", "CUCLOD", "CUCCDD", "CUCDD1", "CUCDD2", "CUCHD1",
    "CUCHD2", "CUCHP1", "CUCHP2", "CUCHP3", "CUCHP4", "CUCOL1", "CUCOL2", "CUCOL3", "CUCOL4",
    "CUCCDT", "CUCCYD", "CUCTYD", "CUCIDB", "CUCDIR", "CUCIND", "CUCSEC", "CUCUNS", "CUCILD",
    "CUCOPN", "CUCTOD", "CUCNON", "CUCHGO", "CUCRNB", "CUCQAG", "CUCQHU", "CUCQHI", "CUCQLO",
    "CUCQDD", "CUCQDT", "CUDPD1", "CUDPD2", "CUDPD3", "CUDPDP", "CUDPD", "CUIPD", "CUUPD", "CUCAGY",
    "CUCIAM", "CUCIAY", "CUCPAM", "CUCPAY", "CUCLTC", "CUCLSO", "CUUCMO", "CULCCA", "CUDRSD",
    "CUFACS", "CUINDS", "CUIPDS", "CUSECS", "CUUNSS", "CUDPDS", "CUUCMS", "CUBDIR", "CUBCMO",
    "CUTHR1", "CUTHR2", "CUTHR3", "CUTHR4", "CUTHR5", "CUFIV1", "CUFIV2", "CUFIV3", "CUFIV4",
    "CUFIV5", "CUTEN1", "CUTEN2", "CUTEN3", "CUTEN4", "CUTEN5", "CUTWN1", "CUTWN2", "CUTWN3",
    "CUTWN4", "CUTWN5", "CUTHI1", "CUTHI2", "CUTHI3", "CUTHI4", "CUTHI5", "CUSEV1@", "CUSEV2",
    "CUSEV3", "CUSEV4", "CUSEV5")
    VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?,
    rowsmodified: 1
    time: 3655
    common/application {
    exception: Transaction timed out after 29 seconds
    BEA1-000021E917F59C34B15A
    update/service {
    procedure: InsertNewCIF
    arity: 1
    dataservice: ld:CoreBankingSystem/LogicalServices/CreateNewCIF.ds
    script: declare namespace ns0="ld:CoreBankingSystem/LogicalServices/CreateNewCIF";
    declare namespace ns1="http://www.test.com/schemas/client/cbs/logical";
    declare variable $__fparam0 as element(ns1:NewCIF)* external;
    { return value ns0:InsertNewCIF($__fparam0); }
    common/time {
    timestamp: Mon Mar 01 10:21:36 PST 2010
    duration: 33323 }

    1) Is it possible that there is a database lock preventing the insert from being committed?
    2) What does the audit look like for a successful update?
    3) Notice that the "compile time" is non-zero. This indicates that the plan was not cached, likely because it was the first time this was executed after the server was started. So not only do you have the extra query compilation time, there would also be time for loading classes and other initialization (but 12 seconds of loading and initialization seems like a lot). Given that increasing the tx time to 120 seconds solves the problem (it does solve the problem, doesn't it?), I would say that this is the issue.
    4) Given that you just started the server (right? see (3)), it's not likely this is due to GC. But you could enable verbose gc to see.
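    On point (4), verbose GC can be enabled with the standard HotSpot flags in the server start arguments; a sketch (the log path is a placeholder):
    # Append to JAVA_OPTIONS in the WebLogic start script; GC activity goes to the named file.
    JAVA_OPTIONS="$JAVA_OPTIONS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/tmp/ds_ms2_gc.log"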
