Audit data in tables

I need to audit the data for a few tables and save the audit records into a table called app_audit, where I would save the primary key of those tables, which record the user is updating, inserting, or deleting, plus the username, sysdate, etc.
When a user performs any DML on these tables, I need to generate the DML statement into the app_audit table, into a column called vstring.
How do I generate dynamic DML statements into this vstring when an insert or update is done on the table?
I know I can write a trigger, but I would have to do this on many tables; instead I am looking for a simpler solution.
Can I write a procedure that fires when DML occurs and automatically generates the DML statement into the app_audit table's vstring column? If so, could you give an example please?
Thanks a bunch.

If all you want to know is what table was changed and what kind of DML statement was issued, the basic Oracle AUDIT command will give you that and a lot more, such as the Oracle and OS user IDs and when the change was made.
On the other hand, if you want to capture which columns were changed and what the old and new values were, or to capture the entire before or after image of the row, then a table-level trigger is a good option that you as a DBA can control. That is, you can turn it off if you need to run an export/truncate/import and then turn it back on, and triggers do not fire for alter table move commands.
I would not use dynamic SQL; instead, I suggest you write a trigger code generator that you feed the table_name, and that uses the data dictionary to generate the trigger code to record the kind of information you need, in the format you want. A minimal sketch of such a generator follows below.
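For illustration, here is that sketch in PL/SQL. It assumes the app_audit table from the question has columns vstring, audit_user, and audit_date (the last two names are placeholders, as is the aud_ trigger naming; primary-key capture is omitted for brevity):

create or replace procedure gen_audit_trigger (p_table in varchar2) as
begin
  -- emit the trigger header
  dbms_output.put_line('create or replace trigger aud_' || lower(p_table));
  dbms_output.put_line('after insert or update or delete on ' || lower(p_table));
  dbms_output.put_line('for each row');
  dbms_output.put_line('declare');
  dbms_output.put_line('  v_str varchar2(4000);');
  dbms_output.put_line('begin');
  dbms_output.put_line('  v_str := case when inserting then ''INSERT''');
  dbms_output.put_line('                when updating  then ''UPDATE''');
  dbms_output.put_line('                else ''DELETE'' end;');
  -- one old->new pair per column, read from the data dictionary
  -- (LONG/LOB columns need special handling and are skipped here)
  for c in (select column_name
              from user_tab_columns
             where table_name = upper(p_table)
               and data_type not in ('LONG', 'CLOB', 'BLOB')
             order by column_id) loop
    dbms_output.put_line('  v_str := v_str || '' ' || c.column_name ||
                         '=['' || :old.' || lower(c.column_name) ||
                         ' || ''->'' || :new.' || lower(c.column_name) ||
                         ' || '']'';');
  end loop;
  dbms_output.put_line('  insert into app_audit (vstring, audit_user, audit_date)');
  dbms_output.put_line('  values (substr(v_str, 1, 4000), user, sysdate);');
  dbms_output.put_line('end;');
  dbms_output.put_line('/');
end;
/

Run it with set serveroutput on and exec gen_audit_trigger('EMP'), review the printed DDL, then execute it; repeat once per table.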
Just capturing the table_name and DML statement type seems pretty useless from an audit point of view. What is the purpose of capturing this data?
IMHO -- Mark D Powell --

Similar Messages

  • RE: Customized Auditing of Database Tables

    Dear experts
    I really need your help.
    We have a requirement to do auditing on customized database tables. I know there is a tick in the table to keep a log, and you can then use SCU3 to check it, but this uses a lot of resources that we cannot afford. We need to design our own customized table to record any data changes to any of our customized tables.
    Now we need this table to show us the user, the date and time, the old value of the field, the new value of the field, the field name, and the table name. There will also be a report to check who changed, edited, added, or deleted which entries on any of the customized tables. My problem is that when updating my customized table that holds the data, the logical flow of data does not make sense. Please see the example below;
    Z_SELLING_PRICE Table (the client does not want to use standard pricing, which is why we use customized tables)
    MANDT         MATNR        PRICE         VALID_TO       VALID_FROM          CURRENCY
    100               TYRES         200             20100201        20100228               ZAR   - (user changes price to 100)
    100               TYRES         250             20100301        20100331               ZAR   - (user changes valid_to to 20100302)
    100               RIMS            150             20100301        20100331               ZAR
    Z_AUDIT Table
    MANDT       TABLE                      FIELD           OLDVALUE              NEWVALUE          CHANGE_TYPE   USER       DATE         TIME
    100             Z_SELLING_PRICE   PRICE           200                           100                       Modified               PETER     20100202   121216
    100             Z_SELLING_PRICE   VALID_TO   20100301                  20100302            Modified               JANE       20100301   154553
    My problem here is: how will my report know that the price (for example) that was changed was the price on Tyres as opposed to Rims?
    Maybe my logic is too complicated. And if I save all the fields, regardless of whether they were changed, so that my report on this table (Z_AUDIT) can make logical sense, how will I know which field names combine to make up the record that was changed?
    Please help.
    Kind regards

    Hey Thomas
    Thanks for your quick response. Yes, the resources (in my opinion) would probably be the same, but unfortunately we have a couple of Basis consultants who convinced the business otherwise. I get the idea that they are wary that, if they open the system to this function and someone adds a log to a standard table, the system might not have sufficient memory.
    So business decided that they will not take this "risk" and asked the ABAP team to design our own updates.
    Another option that was presented was to add USER, DATE and TIME fields to every customized table that needs logging. But in some cases we cannot allow duplicate keys, and if I make these fields keys, the logic behind that will be bypassed, as the same user can then enter a "duplicate key" the next day; it is not seen as a duplicate entry in the DB because the date differs.
    So by adding a section to the user exit behind SM30 and calling the same function module (that still has to be written) we will be able to do this "the ABAP way". I was just wondering if there is (hopefully) someone who has had a similar situation, so that we can maybe duplicate the logic?
    Kind regards

  • Audit - Not loading audit data into audit tables

    Hi ,
    Audit data is not loading into the audit tables in my XI 3.1 SP4 environment.
    The audit database is SQL Server 2008 R2.
    The audit database is configured and audit events are enabled properly,
    and the log files are being created without any errors.
    Please help me.

    I am getting the error below:
    bsystem_impl.cpp:2651: TraceLog message 1
    2011/11/09 03:22:47.071|>>|A| |13664|9368| |||||||||||||||assert failure: (.\auditsubsystem_impl.cpp:2651). (false : Next event id value should be greater than the current one, check the auditee packing events code).

  • Audit data Tables

    Hi All,
    Can you tell me the names of the tables where audit data is stored on the Microsoft platform?
    I think there are two tables, i.e. the Audit Activity table and the Audit Data table.
    Thank you
    Kind Regards
    Abhinav Sona

    Hi Sona,
    these are the table names:
    AuditDtsLog
    AuditActivityDetail<Appset>
    AuditActivityDetail<Appset>Archive
    AuditActivityHdr<Appset>
    AuditActivityHdr<Appset>Archive
    for each application:
    AuditActivityDetail<Application>
    AuditActivityDetail<Application>Archive
    AuditActivityHdr<Application>
    AuditActivityHdr<Application>Archive
    AuditData<Application>
    AuditData<Application>Archive
    AuditDataTmp<Application>
    AuditHdr<Application>
    AuditHdr<Application>Archive
    Comment<Application>Archive
    Kind regards
    Roberto

  • Audit database. config auditing data source (DB2)

    Hello expert,
    I want to enable the audit database for BusinessObjects and I have followed the admin guide, but it does not work.
    I installed BusinessObjects Enterprise XI 3.1 on AIX 5.3.
    When I installed BO, I chose 'Use an existing database' and chose DB2 (the same database as the SAP BW database).
    And when the installation required the information for the CMS and Audit database, I filled in the same db alias as the SAP BW database.
    So now I have SAP BW, CMS and Audit data in the same database.
    After installation I saw the CMS tables in the "DB2<aliasName>" schema,
    but I cannot find the Audit tables.
    Will the audit tables be created after installation?
    Then I tried to enable the audit database using cmsdbsetup.sh: I chose 'selectaudit' and filled in the information it requires.
    It finished with no error:
    "Select auditing data source was successful. Auditing Data Source setup finished."
    But I still cannot find any audit table in the database.
    I ran serverconfig.sh and I can't see the Enable Audit option when I choose 'modify a server'.
    Any idea?
    Thanks in advance.
    Chai

    Hello,
    Thanks for your reply.
    It is not a BO cluster.
    And the log detail from selecting the audit data source is shown below:
    Wed Nov 17 2010 10:17:02 GMT+0700 (THAIST) /bodev/bobje/setup/jscripts//ccmunix.js started at Wed Nov 17 2010 10:17:02 GMT+0700 (THAIST)
    Wed Nov 17 2010 10:17:02 GMT+0700 (THAIST) About to process commandline parameters...
    Wed Nov 17 2010 10:17:02 GMT+0700 (THAIST) Finished processing commandline parameters.
    Wed Nov 17 2010 10:17:02 GMT+0700 (THAIST) Warning: No password supplied, setting password to default.
    Wed Nov 17 2010 10:17:02 GMT+0700 (THAIST) Warning: No username supplied, setting username and password to default.
    Wed Nov 17 2010 10:17:02 GMT+0700 (THAIST) Warning: No authentication type supplied, setting authentication type to default.
    Wed Nov 17 2010 10:17:02 GMT+0700 (THAIST) Warning: No CMS name supplied, setting CMS name to machine name.
    Wed Nov 17 2010 10:17:02 GMT+0700 (THAIST) Select auditing data source was successful.
    Wed Nov 17 2010 10:25:22 GMT+0700 (THAIST) /bodev/bobje/setup/jscripts//ccmunix.js started at Wed Nov 17 2010 10:25:22 GMT+0700 (THAIST)
    Wed Nov 17 2010 10:25:22 GMT+0700 (THAIST) About to process commandline parameters...
    Wed Nov 17 2010 10:25:22 GMT+0700 (THAIST) Finished processing commandline parameters.
    Wed Nov 17 2010 10:25:22 GMT+0700 (THAIST) Warning: No password supplied, setting password to default.
    Wed Nov 17 2010 10:25:22 GMT+0700 (THAIST) Warning: No username supplied, setting username and password to default.
    Wed Nov 17 2010 10:25:22 GMT+0700 (THAIST) Warning: No authentication type supplied, setting authentication type to default.
    Wed Nov 17 2010 10:25:22 GMT+0700 (THAIST) Warning: No CMS name supplied, setting CMS name to machine name.
    Wed Nov 17 2010 10:25:22 GMT+0700 (THAIST) Select auditing data source was successful.
    And the CMS log file did not show any error.
    Additional detail:
    - My BW and BO are in the same server.
    - I have already granted all the rights to the user related to the audit database.
    - My BW and  BO are in the same database.
    - No audit tables appear in the database.
    - No Fix pack installed.
    I wonder why the BO audit connection did not see my database.
    (In the case of DB2, I think the db2 alias name is by default the same as the database name.
    So if my database name is BWD then the database alias name should be BWD, am I right?)
    Any idea?
    Thanks in advance.
    Chai

  • Analysing Task Audit, Data Audit and Process Flow History

    Hi,
    The Internal Audit dept has requested a bunch of information that we need to compile from the Task Audit, Data Audit and Process Flow History logs. We do have all the info available, however not in a format that allows proper "reporting" of log information. What is the best way to handle HFM logs so that we can quickly filter and export the required audit information?
    We do have housekeeping in place, so the logs are partly "live" db tables and partly purged tables that were exported to Excel to archive the historical log info.
    Many Thanks.

    I thought I posted this Friday, but I just noticed I never hit the 'Post Message' button, ha ha.
    The info below will help you translate some of the information in the tables, etc. You could report on it from the audit tables directly or move them to another appropriate data table for analysis later. The consensus, though I disagree, is that you will suffer performance issues if your audit tables get too big, so you want to move them periodically. You can do this using a scheduled task, a manual process, etc.
    I personally just dump it to another table and report on it from there. As mentioned above, you'll need to translate some of the information, as it is not 'human readable' in the database.
    For instance, if I wanted to pull Metadata Load, Rules Load, and Member List Load, I could run a query like this. (NOTE: strAppName should be equal to the name of your application.)
    The main tricks to know, at least for the task audit table, are figuring out how to convert the times and determining which activity code corresponds to which user-friendly name.
    -- Declare working variables --
    declare @dtStartDate as nvarchar(20)
    declare @dtEndDate as nvarchar(20)
    declare @strAppName as nvarchar(20)
    declare @strSQL as nvarchar(4000)
    -- Initialize working variables --
    set @dtStartDate = '1/1/2012'
    set @dtEndDate = '8/31/2012'
    set @strAppName = 'YourAppNameHere'
    --Get Rules Load, Metadata, Member List
    set @strSQL = '
    select sUserName as "User", ''Rules Load'' as Activity, cast(StartTime-2 as smalldatetime) as "Time Start",
          cast(EndTime-2 as smalldatetime) as ''Time End'', ServerName, strDescription, strModuleName
       from ' + @strAppName + '_task_audit ta, hsv_activity_users au
       where au.lUserID = ta.ActivityUserID and activitycode in (1)
            and cast(StartTime-2 as smalldatetime) between ''' + @dtStartDate + ''' and ''' + @dtEndDate + '''
    union all
    select sUserName as "User", ''Metadata Load'' as Activity, cast(StartTime-2 as smalldatetime) as "Time Start",
          cast(EndTime-2 as smalldatetime) as ''Time End'', ServerName, strDescription, strModuleName
       from ' + @strAppName + '_task_audit ta, hsv_activity_users au
       where au.lUserID = ta.ActivityUserID and activitycode in (21)
            and cast(StartTime-2 as smalldatetime) between ''' + @dtStartDate + ''' and ''' + @dtEndDate + '''
    union all
    select sUserName as "User", ''Memberlist Load'' as Activity, cast(StartTime-2 as smalldatetime) as "Time Start",
          cast(EndTime-2 as smalldatetime) as ''Time End'', ServerName, strDescription, strModuleName
       from ' + @strAppName + '_task_audit ta, hsv_activity_users au
       where au.lUserID = ta.ActivityUserID and activitycode in (23)
            and cast(StartTime-2 as smalldatetime) between ''' + @dtStartDate + ''' and ''' + @dtEndDate + ''''
    exec sp_executesql @strSQL
    In regards to activity codes, here's a quick breakdown of those ....
    ActivityID     ActivityName
    0     Idle
    1     Rules Load
    2     Rules Scan
    3     Rules Extract
    4     Consolidation
    5     Chart Logic
    6     Translation
    7     Custom Logic
    8     Allocate
    9     Data Load
    10     Data Extract
    11     Data Extract via HAL
    12     Data Entry
    13     Data Retrieval
    14     Data Clear
    15     Data Copy
    16     Journal Entry
    17     Journal Retrieval
    18     Journal Posting
    19     Journal Unposting
    20     Journal Template Entry
    21     Metadata Load
    22     Metadata Extract
    23     Member List Load
    24     Member List Scan
    25     Member List Extract
    26     Security Load
    27     Security Scan
    28     Security Extract
    29     Logon
    30     Logon Failure
    31     Logoff
    32     External
    33     Metadata Scan
    34     Data Scan
    35     Extended Analytics Export
    36     Extended Analytics Schema Delete
    37     Transactions Load
    38     Transactions Extract
    39     Document Attachments
    40     Document Detachments
    41     Create Transactions
    42     Edit Transactions
    43     Delete Transactions
    44     Post Transactions
    45     Unpost Transactions
    46     Delete Invalid Records
    47     Data Audit Purged
    48     Task Audit Purged
    49     Post All Transactions
    50     Unpost All Transactions
    51     Delete All Transactions
    52     Unmatch All Transactions
    53     Auto Match by ID
    54     Auto Match by Account
    55     Intercompany Matching Report by ID
    56     Intercompany Matching Report by Acct
    57     Intercompany Transaction Report
    58     Manual Match
    59     Unmatch Selected
    60     Manage IC Periods
    61     Lock/Unlock IC Entities
    62     Manage IC Reason Codes
    63     Null
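    If you report on these often, one option (a sketch; the hfm_activity_codes table is my own, not part of the HFM schema) is to load this list into a lookup table and join to it instead of hardcoding activitycode values:
    -- one-time setup in the reporting database
    create table hfm_activity_codes (
        ActivityID   int primary key,
        ActivityName nvarchar(50)
    );
    insert into hfm_activity_codes values (1,  'Rules Load');
    insert into hfm_activity_codes values (21, 'Metadata Load');
    insert into hfm_activity_codes values (23, 'Member List Load');
    -- ...load the rest of the list above the same way
    -- the audit query then becomes a simple join
    select au.sUserName as [User], c.ActivityName as Activity,
           cast(ta.StartTime - 2 as smalldatetime) as [Time Start],
           cast(ta.EndTime - 2 as smalldatetime)   as [Time End],
           ta.ServerName, ta.strDescription, ta.strModuleName
      from YourAppNameHere_task_audit ta
      join hsv_activity_users au on au.lUserID = ta.ActivityUserID
      join hfm_activity_codes c  on c.ActivityID = ta.activitycode;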

  • Creating Audit Data with Triggers

    I want to create an audit table like:
    create table AuditTable (
      FieldName varchar2(40),
      OldValue  varchar2(100),
      NewValue  varchar2(100),
      UserName  varchar2(20),  -- USER is a reserved word in Oracle, so renamed here
      UpdtDate  date
    );
    Whenever table X is updated, the trigger should capture the changes and create a row for each field in the update statement.
    I don't want to turn on Audit as the DB is very big and high transaction oriented. I just want to watch one particular table.
    I need a little guidance on parsing the update statement to get the field names, their old values and new values.

    Well, you could certainly audit a single table - you don't have to audit every table.
    In any case, the trigger would look something like:
    create or replace trigger t_audit
    after update on t
    for each row
    begin
      if updating('column_a') then
        insert into audittable values ('column_a', :old.column_a, :new.column_a, user, sysdate);
      end if;
      ... repeat for all columns
      if updating('column_z') then
        insert into audittable values ('column_z', :old.column_z, :new.column_z, user, sysdate);
      end if;
    end;
    You can use the data dictionary (user_tab_columns) to automate the writing of that trigger if there are many columns; a generator query is sketched below.
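    For illustration, a sketch of that generator: each row returned is one IF block ready to paste into the trigger body (shown for the table T and the AuditTable above):
    select 'if updating(''' || lower(column_name) || ''') then' || chr(10)
        || '  insert into audittable values (''' || lower(column_name)
        || ''', :old.' || lower(column_name)
        || ', :new.'   || lower(column_name) || ', user, sysdate);' || chr(10)
        || 'end if;' as trigger_block
      from user_tab_columns
     where table_name = 'T'
     order by column_id;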

  • What is the best way to audit data

    What is the best way to audit actual changes in the data, that is, to be able to see each insert, update, delete on a given row, when it happened, who did it, and what the row looked like before and after the change?
    Currently, we have implemented our own auditing infrastructure where we generate standard triggers and an audit table to store OLD (values at the beginning of the Before Row timing point) and NEW (values at the beginning of the After Row timing point) values for every change.
    I'm questioning this strategy because of the performance impact it has (significant, to say the least) and because it's something that a developer (confession: I'm the developer) came up with, rather than something a database administrator came up with. I've looked into Oracle Auditing, but it doesn't seem like we'd be able to go back and see what a row looked like at a given point in time. I've also looked at Flashback, but this seems like it would require a monumental amount of storage just to be able to go back a week, much less the years we currently keep this data.
    Thanks,
    Matt Knowles

    You can either:
    1. Implement your own custom auditing (as you currently do)
    2. Flashback Data Archive (11g). Requires license (a sketch follows below this list).
    3. Version enable your tables with Workspace Manager.
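    For option 2, a minimal sketch of Flashback Data Archive, assuming 11g, a tablespace named users, and the usual emp demo table (all names are placeholders):
    -- keep one year of row history for tracked tables
    create flashback archive audit_fda tablespace users retention 1 year;
    -- start tracking; Oracle maintains the history automatically from here
    alter table emp flashback archive audit_fda;
    -- later: see what a row looked like a day ago
    select * from emp as of timestamp systimestamp - interval '1' day
     where empno = 7839;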
    Unfortunately, auditing data always takes lots of space. You must also consider performance, as custom triggers and Workspace Manager will perform much slower than FDA if there is heavy DML on the table.

  • XI R2 Auditing data retention period

    Hi,
    Using XI R2 SP2 FP5 with a SQL Server database; we want to limit the amount of audit data held, ideally by date, i.e. only 6 months of data.
    I would expect a setting somewhere to say keep x days of data, but I can't find one, and I can't find any reference to it in the documentation or on these forums.
    Any help much appreciated.
    John

    Hello,
    There is no way to restrict/purge audit data out of the box. You could, however, purge data in your db as described in SAP Note 1198638 - How to purge the BO AUDIT tables leaving only the last 6 months data. I.e.:
    To purge the BO AUDIT tables, leaving only the last 6 months of data, delete from AUDIT_DETAIL with a join to AUDIT_EVENT, selecting dates older than 6 months. Then delete the same period from AUDIT_EVENT.
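    For illustration, a hedged sketch of that purge; the column names Event_ID and Start_Timestamp are assumptions, so verify them against your actual audit schema (and back the tables up first):
    -- delete detail rows whose parent event is older than the cutoff
    delete from AUDIT_DETAIL
     where Event_ID in (select Event_ID
                          from AUDIT_EVENT
                         where Start_Timestamp < '2011-06-01');
    -- then delete the events themselves for the same period
    delete from AUDIT_EVENT
     where Start_Timestamp < '2011-06-01';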
    Note that this is apparently not supported; see SAP Note 1406372 - Is it possible to purge Audit database entries?
    Best,
    Srinivas

  • Collector goes down and audit data don't appear in OAV Console

    Hi,
    I have OAV 10.2.3.2 with a DBAUD collector; the collection agent is on Windows. When I run this collector it starts, but after a while it stops again. I can retrieve policies, create policies and provision them. Audit data are stored in AUD$ but I can't see them in the Audit Vault console.
    In C:/oracle/product/10.2.3.2/av_agent_1/av/log/DBAUD_Collector_av_db2_2 is the following:
         ***** Started logging for 'AUD$ Audit Collector' *****
    INFO @ '21/04/2010 12:04:45 02:00':
    ***** Collector Name = DBAUD_Collector
    INFO @ '21/04/2010 12:04:45 02:00':
    ***** Source Name = av_db2
    INFO @ '21/04/2010 12:04:45 02:00':
    ***** Av Name = AV
    INFO @ '21/04/2010 12:04:45 02:00':
    ***** Initialization done OK
    INFO @ '21/04/2010 12:04:45 02:00':
    ***** Starting CB
    INFO @ '21/04/2010 12:04:46 02:00':
    Getting parameter |AUDAUDIT_DELAY_TIME|, got |20|
    INFO @ '21/04/2010 12:04:46 02:00':
    Getting parameter |AUDAUDIT_SLEEP_TIME|, got |5000|
    INFO @ '21/04/2010 12:04:46 02:00':
    Getting parameter |AUDAUDIT_ACTIVE_SLEEP_TIME|, got |1000|
    INFO @ '21/04/2010 12:04:46 02:00':
    Getting parameter |AUDAUDIT_MAX_PROCESS_RECORDS|, got |1000|
    INFO @ '21/04/2010 12:04:46 02:00':
    ***** CSDK inited OK + 1
    INFO @ '21/04/2010 12:04:46 02:00':
    ***** Src alias = SRCDB2
    INFO @ '21/04/2010 12:04:46 02:00':
    ***** SRC connected OK
    INFO @ '21/04/2010 12:04:46 02:00':
    ***** SRC data retrieved OK
    INFO @ '21/04/2010 12:04:46 02:00':
    ***** Recovery done OK
    ERROR @ '21/04/2010 12:05:07 02:00':
    On line 1287; OAV-46599: internal error ORA-1882 count(DWFACT_P20100420_TMP) =
    ORA-06512: at "AVSYS.DBMS_AUDIT_VAULT", line 6
    ORA-06512: at "AVSYS.AV$DW", line 1022
    ORA-01882: timezone region not found
    ORA-06512: at "AVSYS.AV$DW", line 1290
    ORA-06512: at line 1
    ERROR @ '21/04/2010 12:05:08 02:00':
    Collecting thread died with status 46821
    INFO @ '21/04/2010 12:06:01 02:00':
    Could not call Listener, NS error 12541, 12560, 511, 2, 0
    INFO @ '21/04/2010 12:07:01 02:00':
    Could not call Listener, NS error 12541, 12560, 511, 2, 0
    INFO @ '21/04/2010 12:08:01 02:00':
    Could not call Listener, NS error 12541, 12560, 511, 2, 0
    INFO @ '21/04/2010 12:09:01 02:00':
    Could not call Listener, NS error 12541, 12560, 511, 2, 0
    INFO @ '21/04/2010 12:10:01 02:00':
    Could not call Listener, NS error 12541, 12560, 511, 2, 0
    INFO @ '21/04/2010 12:11:01 02:00':
    Could not call Listener, NS error 12541, 12560, 511, 2, 0
    INFO @ '21/04/2010 12:12:01 02:00':
    Could not call Listener, NS error 12541, 12560, 511, 2, 0
    INFO @ '21/04/2010 12:13:01 02:00':
    Could not call Listener, NS error 12541, 12560, 511, 2, 0
    INFO @ '21/04/2010 12:13:41 02:00':
    Could not call Listener, NS error 12541, 12560, 511, 2, 0
    INFO @ '21/04/2010 12:13:41 02:00':
         ***** Started logging for 'AUD$ Audit Collector' *****
    INFO @ '21/04/2010 12:13:41 02:00':
    ***** Collector Name = DBAUD_Collector
    INFO @ '21/04/2010 12:13:41 02:00':
    ***** Source Name = av_db2
    INFO @ '21/04/2010 12:13:41 02:00':
    ***** Av Name = AV
    INFO @ '21/04/2010 12:13:41 02:00':
    ***** Initialization done OK
    INFO @ '21/04/2010 12:13:41 02:00':
    ***** Starting CB
    INFO @ '21/04/2010 12:13:42 02:00':
    Getting parameter |AUDAUDIT_DELAY_TIME|, got |20|
    INFO @ '21/04/2010 12:13:42 02:00':
    Getting parameter |AUDAUDIT_SLEEP_TIME|, got |5000|
    INFO @ '21/04/2010 12:13:42 02:00':
    Getting parameter |AUDAUDIT_ACTIVE_SLEEP_TIME|, got |1000|
    INFO @ '21/04/2010 12:13:42 02:00':
    Getting parameter |AUDAUDIT_MAX_PROCESS_RECORDS|, got |1000|
    INFO @ '21/04/2010 12:13:42 02:00':
    ***** CSDK inited OK + 1
    INFO @ '21/04/2010 12:13:42 02:00':
    ***** Src alias = SRCDB2
    INFO @ '21/04/2010 12:13:42 02:00':
    ***** SRC connected OK
    INFO @ '21/04/2010 12:13:42 02:00':
    ***** SRC data retrieved OK
    INFO @ '21/04/2010 12:13:42 02:00':
    ***** Recovery done OK
    ERROR @ '21/04/2010 12:14:03 02:00':
    On line 1287; OAV-46599: internal error ORA-1882 count(DWFACT_P20100420_TMP) =
    ORA-06512: at "AVSYS.DBMS_AUDIT_VAULT", line 6
    ORA-06512: at "AVSYS.AV$DW", line 1022
    ORA-01882: timezone region not found
    ORA-06512: at "AVSYS.AV$DW", line 1290
    ORA-06512: at line 1
    ERROR @ '21/04/2010 12:14:05 02:00':
    Error on get metric callback: 46821
    ERROR @ '21/04/2010 12:14:05 02:00':
    Collecting thread died with status 46821
    ERROR @ '21/04/2010 12:14:06 02:00':
    Receive error. NS error 12537
    ERROR @ '21/04/2010 12:14:06 02:00':
    Timeout for getting metrics reply!
    INFO @ '21/04/2010 12:15:01 02:00':
    Could not call Listener, NS error 12541, 12560, 511, 2, 0
    INFO @ '21/04/2010 12:16:01 02:00':
    Could not call Listener, NS error 12541, 12560, 511, 2, 0
    INFO @ '21/04/2010 12:17:01 02:00':
    Could not call Listener, NS error 12541, 12560, 511, 2, 0
    I have already followed the help in the administrator guide for "Problem: Cannot start the DBAUD collector and the log file shows an error". It says that if I can run the command
    $ sqlplus /@SRCDB1
    successfully, my source database is set up correctly. I run this command successfully, but my problem is not solved.
    Any advice, please?

    I reinstalled the AVS and AV collection agent and again patchset 10.2.3.2, again added the source database, agent, collectors,... but I still have the same problem: the collector starts successfully, but after a while it is stopped, and in the AV console there is no data for reports (I created some audit policies and alerts).
    I tested the connection with sqlplus /@SRCDB1 and it is OK; the log for the collector is:
    May 11, 2010 8:11:15 AM Thread-43 FINE: return cached metric , name=RECORDS_PER_SEC value=0.00
    May 11, 2010 8:11:38 AM Thread-44 FINE: timer task interval calculated = 60000
    May 11, 2010 8:11:38 AM Thread-44 FINE: Going to start collector, m_finalCommand=/bin/sh -c $ORACLE_HOME/bin/a
    vaudcoll hostname="avtest.zcu.cz" sourcename="stag_db" collectorname="DBAUD_Collector" avname="AV" loglevel="I
    NFO" command=START
    May 11, 2010 8:11:46 AM Thread-44 FINE: collector started, exitval=0
    May 11, 2010 8:11:49 AM Thread-44 FINE: return cached metric , name=IS_ALIVE value=true
    May 11, 2010 8:11:49 AM Thread-44 FINE: return cached metric , name=BYTES_PER_SEC value=0.0000
    May 11, 2010 8:11:49 AM Thread-44 FINE: return cached metric , name=RECORDS_PER_SEC value=0.0000
    May 11, 2010 8:11:49 AM Thread-44 FINE: timer task going to be started...
    May 11, 2010 8:12:15 AM Thread-43 FINE: return cached metric , name=IS_ALIVE value=false
    May 11, 2010 8:12:15 AM Thread-43 FINE: return cached metric , name=BYTES_PER_SEC value=0.00
    May 11, 2010 8:12:15 AM Thread-43 FINE: return cached metric , name=RECORDS_PER_SEC value=0.00
    May 11, 2010 8:13:15 AM Thread-43 FINE: return cached metric , name=IS_ALIVE value=false
    May 11, 2010 8:13:15 AM Thread-43 FINE: return cached metric , name=BYTES_PER_SEC value=0.00
    May 11, 2010 8:13:15 AM Thread-43 FINE: return cached metric , name=RECORDS_PER_SEC value=0.00
    May 11, 2010 8:14:15 AM Thread-43 FINE: return cached metric , name=IS_ALIVE value=false
    May 11, 2010 8:14:15 AM Thread-43 FINE: return cached metric , name=BYTES_PER_SEC value=0.00
    May 11, 2010 8:14:15 AM Thread-43 FINE: return cached metric , name=RECORDS_PER_SEC value=0.00
    May 11, 2010 8:15:15 AM Thread-43 FINE: return cached metric , name=IS_ALIVE value=false
    May 11, 2010 8:15:15 AM Thread-43 FINE: return cached metric , name=BYTES_PER_SEC value=0.00
    May 11, 2010 8:15:15 AM Thread-43 FINE: return cached metric , name=RECORDS_PER_SEC value=0.00
    May 11, 2010 8:16:15 AM Thread-43 FINE: return cached metric , name=IS_ALIVE value=false
    May 11, 2010 8:16:15 AM Thread-43 FINE: return cached metric , name=BYTES_PER_SEC value=0.00
    May 11, 2010 8:16:15 AM Thread-43 FINE: return cached metric , name=RECORDS_PER_SEC value=0.00
    May 11, 2010 8:17:15 AM Thread-43 FINE: return cached metric , name=IS_ALIVE value=false
    May 11, 2010 8:17:15 AM Thread-43 FINE: return cached metric , name=BYTES_PER_SEC value=0.00
    May 11, 2010 8:17:15 AM Thread-43 FINE: return cached metric , name=RECORDS_PER_SEC value=0.00
    May 11, 2010 8:18:15 AM Thread-43 FINE: return cached metric , name=IS_ALIVE value=false
    May 11, 2010 8:18:15 AM Thread-43 FINE: return cached metric , name=BYTES_PER_SEC value=0.00
    May 11, 2010 8:18:15 AM Thread-43 FINE: return cached metric , name=RECORDS_PER_SEC value=0.00
    May 11, 2010 8:19:15 AM Thread-43 FINE: return cached metric , name=IS_ALIVE value=false
    May 11, 2010 8:19:15 AM Thread-43 FINE: return cached metric , name=BYTES_PER_SEC value=0.00
    May 11, 2010 8:19:15 AM Thread-43 FINE: return cached metric , name=RECORDS_PER_SEC value=0.00
    May 11, 2010 8:20:15 AM Thread-43 FINE: return cached metric , name=IS_ALIVE value=false
    May 11, 2010 8:20:15 AM Thread-43 FINE: return cached metric , name=BYTES_PER_SEC value=0.00
    May 11, 2010 8:20:15 AM Thread-43 FINE: return cached metric , name=RECORDS_PER_SEC value=0.00
    May 11, 2010 8:21:15 AM Thread-43 FINE: return cached metric , name=IS_ALIVE value=false
    May 11, 2010 8:21:15 AM Thread-43 FINE: return cached metric , name=BYTES_PER_SEC value=0.00
    May 11, 2010 8:21:15 AM Thread-43 FINE: return cached metric , name=RECORDS_PER_SEC value=0.00
    May 11, 2010 8:21:15 AM Thread-43 FINE: Stop caching since it is NOT alive for 10 times of query
    And audit data are being collected in the table av$rads_flat...
    Please, does anyone know where the problem could be?

  • Options for auditing data changes

    Hi Friends,
    I thought I would get some input on my following implementation. The requirement is to audit some data changes within the system (Oracle 10.2 on RHEL 4.7).
    The audit is required in the sense that the before images of the data, and information on who changed the data and when, are required. I have looked at options like Oracle Auditing and FGA, but these cannot give me an audit of the data changes with the when and who.
    The first thing that comes to mind is using triggers. Another option is using LogMiner. I have successfully tested both approaches. The environment is like:
    1 ) For some critical tables for which audit is required, triggers were written (ours is an OLTP application).
    2 ) For some non-critical tables, LogMiner, called by a stored procedure that runs periodically, is used (see the sketch after this list).
    3 ) Audit data is stored in a different schema, with the same table names as in the base schema.
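    For approach 2, a minimal sketch of the LogMiner calls such a procedure would wrap (the archived log name, the APP schema filter, and the target table audit_schema.audit_rows are placeholders):
    -- register an archived log and start LogMiner with the online catalog
    execute dbms_logmnr.add_logfile( -
      logfilename => '/arch/1_1234_567890.arc', options => dbms_logmnr.new);
    execute dbms_logmnr.start_logmnr( -
      options => dbms_logmnr.dict_from_online_catalog);
    -- harvest who/when/what into the audit schema
    insert into audit_schema.audit_rows
    select seg_owner, table_name, username, timestamp, sql_redo, sql_undo
      from v$logmnr_contents
     where seg_owner = 'APP'
       and operation in ('INSERT', 'UPDATE', 'DELETE');
    execute dbms_logmnr.end_logmnr;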
    I would like to know your thoughts on this.
    Thank You,
    SSN

    The delay with LogMiner is acceptable for some less critical audit tables and, as you said, triggers can be implemented for some critical tables.
    One bottleneck with using LogMiner is that it depends on the availability of archived redo logs: the backup mechanism, if any is implemented, should make sure that no archived logs are removed from the specified locations until the periodically running audit program has processed them.
    Wondering if there is any other recommended approach for this.
    Thanks
    SSN

  • Logminer Issue : Not getting the proper auditing data

    Hi,
    I am using the LogMiner utility on my test database to view the contents of archived logs which I copied from my prod database. I have set everything up and started trying to view SQL_REDO from v$logmnr_contents.
    The output looks a bit strange, as it has picked up the object_id and some hexadecimal values for the columns.
    Here it is -
    SQL> set lines 120
    SQL> set pages 300
    SQL> select sql_redo FROM v$logmnr_contents;
    set transaction read write;
    update "UNKNOWN"."OBJ# 979115" set "COL 1" = HEXTORAW('c40b072041') where "COL 1" = HEXTORAW('3b5b5f462666') and ROWID =
    'AADvCrACCAAAAQ2AAA';
    delete from "UNKNOWN"."OBJ# 291676" where "COL 1" = HEXTORAW('c4504d3a42') and "COL 2" = HEXTORAW('4220') and "COL 3" =
    HEXTORAW('c3064c4f1d') and "COL 4" = HEXTORAW('80') and "COL 5" = HEXTORAW('80') and "COL 6" = HEXTORAW('80') and "COL 7
    " = HEXTORAW('80') and "COL 8" = HEXTORAW('80') and "COL 9" = HEXTORAW('80') and "COL 10" = HEXTORAW('c3091a0a45') and "
    COL 11" = HEXTORAW('c3091a0a45') and "COL 12" = HEXTORAW('80') and "COL 13" = HEXTORAW('80') and "COL 14" = HEXTORAW('80
    ') and "COL 15" = HEXTORAW('c4504d3a42') and "COL 16" = HEXTORAW('786d08150e080a') and "COL 17" = HEXTORAW('786d08150e08
    0a') and "COL 18" = HEXTORAW('534554544c455f55534552') and "COL 19" = HEXTORAW('534554544c455f55534552') and ROWID = 'AA
    BHNcAA9AABX7KABy';
    (I guess the dictionary information is missing.)
    However, when I queried dba_objects on my prod database, I do get the object_name with its type:
    OWNER OBJECT_NAME OBJECT_TYPE CREATED
    SSCHEMA SCHEMATABLE TABLE 09-FEB-08
    Please suggest: is there any way I can retrieve the proper auditing data?

    Thanks for reply!
    Another thing I missed mentioning here: our test database is in NOARCHIVELOG mode. Per your suggestion, it gives me the error below:
    SQL> EXECUTE DBMS_LOGMNR_D.BUILD( -
    OPTIONS=> DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
    BEGIN DBMS_LOGMNR_D.BUILD( OPTIONS=> DBMS_LOGMNR_D.STORE_IN_REDO_LOGS); END;*
    ERROR at line 1:
    ORA-01325: archive log mode must be enabled to build into the logstream
    ORA-06512: at "SYS.DBMS_LOGMNR_INTERNAL", line 3172
    ORA-00258: manual archiving in NOARCHIVELOG mode must identify log
    ORA-06512: at "SYS.DBMS_LOGMNR_INTERNAL", line 5786
    ORA-06512: at "SYS.DBMS_LOGMNR_INTERNAL", line 5884
    ORA-06512: at "SYS.DBMS_LOGMNR_D", line 12
    ORA-06512: at line 1
    Please suggest.
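    Since the test database is in NOARCHIVELOG mode, STORE_IN_REDO_LOGS will not work there. A hedged alternative sketch: extract the dictionary to a flat file on the production database (this assumes UTL_FILE_DIR, or an equivalent directory setup, is configured there; paths are placeholders) and point LogMiner at that file on test, so the OBJ#/HEXTORAW output resolves to real names:
    -- on prod: write the LogMiner dictionary to a flat file
    execute dbms_logmnr_d.build( -
      dictionary_filename => 'dictionary.ora', -
      dictionary_location => '/tmp', -
      options => dbms_logmnr_d.store_in_flat_file);
    -- on test: mine the copied archive logs against that dictionary file
    execute dbms_logmnr.add_logfile( -
      logfilename => '/tmp/prod_arch_1.arc', options => dbms_logmnr.new);
    execute dbms_logmnr.start_logmnr(dictfilename => '/tmp/dictionary.ora');
    select seg_owner, table_name, sql_redo from v$logmnr_contents;
    execute dbms_logmnr.end_logmnr;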

  • How to import Filenames and File creation Date into Table..

    Hi Folks,
    I am importing different Excel files into a table. My requirement is that after the import completes I need to insert all the filenames and file creation dates into a table (for auditing).
    Can you please give me any ideas or reference links? Thanks in advance.

    You can also build a dir command and then execute it by using XP_CMDSHELL. This gives you file name, modified date, and creation date information (see the sketch below the links).
    Please refer:
    https://sqljourney.wordpress.com/2010/06/08/get-list-of-files-from-a-windows-directory-to-sql-server/
    https://hernandezpaul.wordpress.com/2013/02/15/store-file-names-and-modified-dates-in-a-table-without-a-foreach-loop-task-sql-server-ssis/
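    As a rough sketch of that approach (assumes xp_cmdshell is enabled and the path is yours; dir /b returns bare names only, so drop /b and use dir /T:C, parsing the lines, if you also need creation dates):
    -- capture the listing into a staging table
    create table #files (file_name nvarchar(260));
    insert into #files
    exec xp_cmdshell 'dir /b "C:\Import\*.xlsx"';
    -- xp_cmdshell appends a NULL row; remove it
    delete from #files where file_name is null;
    -- record the names with a load timestamp for auditing
    select file_name, getdate() as load_date from #files;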
    Cheers,
    Vaibhav Chaudhari
    [MCP],
    [MCTS], [MCSA-SQL2012]

  • BPC Audit data

    Auditing has been enabled in BPC, and since then a lot of audit data has piled up, which is slowing down the reporting.
    How do I delete the BPC audit data that has piled up over the period?
    I want to completely delete the data.
    Appreciate your inputs..

    Hi,
    there are 2 standard process chains for archiving audit data:
    /CPMB/ARCHIVE_ACTIVITY
    /CPMB/ARCHIVE_DATA
    tables for audit:
    activity tables: UJU_AUDACTHDR, UJU_AUDACTDET, UJU_AUDACTHDR_A, UJU_AUDACTDET_A
    data audit: UJU_AUDDATAHDR, /1CPMB/KIOMBAD, UJU_AUDDATAHDR_A, /1CPMB/KIOMBAD_A
    regards
    D

  • HFM audit data export utility availability in version 11

    Hi Experts,
    We have a client with an HFM environment where the audit & task logs grow very large very quickly.
    They need to be able to archive and clear the logs. The logs are too large for EPM Maestro to handle, and they don't want to schedule clearing as a regular event.
    I am concerned because I am sure that these large log tables are impacting performance.
    They want to know if the old System 9 utility they used is still available in the latest version. It was called the HFM audit data export utility. Does anyone know?
    Thanks in advance and kind regards
    Jean

    I know this is a reasonably old post but I found it through Google. To help those in the future, this utility is available via Oracle Support. It is HFM Service Fix 11.1.1.2.05 but it is compatible up to 11.1.1.3.
    Here is the Oracle Support KB Article:
    How To Extract the Data Audit and Task Audit records of an HFM application to a File [ID 1067055.1]
    Modified 23-MAR-2010 Type HOWTO Status PUBLISHED
    Applies to:
    Hyperion Financial Management - Version: 4.1.0.0.00 to 11.1.1.3.00 - Release: 4.1 to 11.1
    Information in this document applies to any platform.
    Goal
    Some system administrators of Financial Management desire a method to archive / extract the information from the DATA_AUDIT and TASK_AUDIT database tables of an HFM application before truncating those tables.
    Solution
    Oracle provides a standalone utility called HFMAuditExtractUtility.exe to accomplish this task. As well as extracting the records of the two log tables, the utility can also be used to truncate the tables at the same time.
    The utility comes with a Readme file which should be consulted for more detailed instructions on how it should be used.
    The latest version of the utility which is compatible with all versions of HFM up to 11.1.1.3 is available as Service Fix 11.1.1.2.05 (Oracle Patch 8439656).
