My audit database getting too large

Post Author: amr_foci
CA Forum: Administration
My audit database is getting too large. How do I reset it?

Post Author: jsanzone
CA Forum: Administration
Amr,
As best I can determine, there is no official documentation from BusinessObjects on a method to "trim" the Auditor database.  Based on previous discussions, I seem to remember that you are on XI R2; if I'm wrong, then these notes will not apply to you.  Here is the scoop:
Auditor uses six tables:
1) APPLICATION_TYPE (initialized with 13 rows, does not "grow")
2) AUDIT_DETAIL (tracks activity at a granular level, grows)
3) AUDIT_EVENT (tracks activity at a granular level, grows)
4) DETAIL_TYPE (initialized with 28 rows, does not "grow")
5) EVENT_TYPE (initialized with 41 rows, does not "grow")
6) SERVER_PROCESS (initialized with 11 rows, does not "grow")
If you simply want to remove all audit data and start over, then truncate AUDIT_EVENT and AUDIT_DETAIL.
If you only want to remove rows for a given period, consider that AUDIT_DETAIL and AUDIT_EVENT are transactional in nature, and AUDIT_DETAIL is a child of the parent table AUDIT_EVENT. You therefore want to remove rows from AUDIT_DETAIL, based on their link to AUDIT_EVENT, before removing rows from AUDIT_EVENT. Otherwise, rows in AUDIT_DETAIL will be "orphaned": they will never be of any use to you, and worse, you will not readily know how to delete them later.
Here are the SQL statements (note the subquery can return many rows, so use IN rather than =):

delete from AUDIT_DETAIL
where EVENT_ID in
  (select EVENT_ID from AUDIT_EVENT
   where Start_Timestamp between '1/1/2006' and '12/31/2006')
go
delete from AUDIT_EVENT
where Start_Timestamp between '1/1/2006' and '12/31/2006'
go
One word of caution: shut down your BOE application before doing this maintenance work. Otherwise Auditor may be busy writing new rows to the database while you are busy deleting rows, and you might encounter an unwanted table lock, either on the work you're doing or on the work BOE is trying to perform.
Good luck!
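The child-before-parent delete order above can be mocked end-to-end. This sketch uses Python with an in-memory SQLite database and invented two-column versions of the tables, purely for illustration; it is not the production XI R2 schema or procedure:

```python
import sqlite3

# Illustrative mock of the two growing Auditor tables (the real tables have more columns).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE AUDIT_EVENT  (EVENT_ID INTEGER PRIMARY KEY, START_TIMESTAMP TEXT);
    CREATE TABLE AUDIT_DETAIL (DETAIL_ID INTEGER PRIMARY KEY, EVENT_ID INTEGER);
    INSERT INTO AUDIT_EVENT  VALUES (1, '2006-06-15'), (2, '2007-03-01');
    INSERT INTO AUDIT_DETAIL VALUES (10, 1), (11, 1), (12, 2);
""")

# Child rows first, keyed off the parent's date window...
conn.execute("""
    DELETE FROM AUDIT_DETAIL WHERE EVENT_ID IN
        (SELECT EVENT_ID FROM AUDIT_EVENT
         WHERE START_TIMESTAMP BETWEEN '2006-01-01' AND '2006-12-31')
""")
# ...then the parent rows themselves.
conn.execute("""
    DELETE FROM AUDIT_EVENT
    WHERE START_TIMESTAMP BETWEEN '2006-01-01' AND '2006-12-31'
""")

# No AUDIT_DETAIL row should now reference a missing AUDIT_EVENT row.
orphans = conn.execute("""
    SELECT COUNT(*) FROM AUDIT_DETAIL d
    WHERE NOT EXISTS (SELECT 1 FROM AUDIT_EVENT e WHERE e.EVENT_ID = d.EVENT_ID)
""").fetchone()[0]
print(orphans)  # 0
```

Running the orphan count after the purge is a cheap sanity check that the deletes ran in the right order.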

Similar Messages

  • Ics50deletelog.db size getting too large

    I've been watching this file (ics50deletelog.db) on our Sun calendar. It is currently over 1,200,000 KB - way too big for my taste. What is this file storing? Is there any way to reduce the size?
    Please help

    navvith wrote:
    I can see that entries are being removed, but at a painfully slow pace. Is there any way to completely clear this log database?
    If your users are making use of the Outlook Connector I wouldn't recommend completely clearing the database.
    navvith wrote:
    I have a clean install of calendar server on another system with an empty delete log database. Could I simply replace the oversized one with this and restart the calendar server?
    If the objective is to shrink the size of the database, you could first try dumping/reloading the database, e.g.
    1. Stop the calendar instance
    cd /opt/SUNWics5/cal/sbin
    ./stop-cal
    2. Verify the current database, keep a record of the output for comparison
    ./csdb check
    3. Dump the database to a txt file.
    cd /opt/SUNWics5/cal/tools/unsupported/bin
    export DB_HOME=/var/opt/SUNWics5/csdb/
    ./db_checkpoint -1
    ./db_archive
    ./db_dump -r /var/opt/SUNWics5/csdb/ics50deletelog.db > /var/tmp/ics50deletelog.db.txt
    4. Reload the database to a temporary db file.
    ./cs_dbload /var/tmp/ics50deletelog.db < /var/tmp/ics50deletelog.db.txt
    5. Keep a backup of the old deletelog database and move the new db into place
    cp -p /var/opt/SUNWics5/csdb/ics50deletelog.db /var/tmp/ics50deletelog.db.orig
    cp /var/tmp/ics50deletelog.db /var/opt/SUNWics5/csdb/
    6. Verify the database, compare the output with the previous ./csdb check to ensure they are the same
    ./csdb check
    7. Restart the calendar server
    I would recommend running through these steps in your test environment first to ensure you are comfortable with them prior to trying them in production.
    Regards,
    Shane.

  • Content Database Growing too large

    We seem to be experiencing some slowness on our SharePoint farm and noticed that one of our databases (we have two) is now at 170 Gb. Best practice seems to be to keep the database from going over 100Gb.
    We have hundreds of sites within one database and need to split these up to save space on our databases.
    So I  would like to create some new databases and move some of the sites from the old database over to the new databases.
    Can anyone tell me if I am on the right track here and if so how to safely move these sites to another Content Database?
    dfrancis

    I would not recommend using RBS. Microsoft's RBS is really just meant to let you exceed the 4GB/10GB MDF file-size limit in SQL Express. RBS space counts against database size, and backup/restore becomes a more complex task.
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • How do I create a new mailbox file, not folder. My primary mailbox file is getting too large and I want to split it into multiple physical files.

    I have done this multiple times in the past (I have several files - not folders - that contain mail). However, due to the fact that I was somewhat brain damaged several years ago, I can no longer remember (or figure out) how to do it. For instance, I have a "MoreJunque" mailbox file that is (from the "Properties" dialog) at mailbox:///C:/Users/Daniel Mathews/AppData/Roaming/Thunderbird/Profiles/bxk6ngnt.default/Mail/pop.att.yahoo.com/MoreJunque. However, my inbox is in mailbox:///C:/Users/Daniel Mathews/AppData/Roaming/Thunderbird/Profiles/bxk6ngnt.default/Mail/pop.att.yahoo.com/Inbox. These are unique files that contain folders.

    right click on your account on the left and select new folder.

  • CMS/Audit  Database Sizing

    Hello,
    We are planning to deploy BOE 3.1 for about 300,000 users; the OS is Windows and the CMS database would be SQL Server 2008.
    Any help would be appreciated in arriving at an estimate of the size of the CMS and Audit databases for this many users.
    Thanks
    Ranjit Krishnan

    Hi Ranjit, it really depends on how many metrics you are auditing (in other words, the items in the CMC that are checked for auditing). We have 20,000+ users in our system and I am auditing about 90% of the metrics. Our audit schema grows to 8-10 GB in about 3 or 4 months.
    Frankly, in this day and age, disk drives are cheap, so storage is not your main concern. Your concern should be performance: when the audit schema gets too large, query performance will naturally decrease. Therefore, I have our DBA archive off the audit schema every 6 months to reduce the size, and you need to back up the log files as well. In other words, we only keep a 6-month history. If someone needs older information, we can bring it back from the archive for a one-off query.
    This is something you must sit down with your DBA about and come up with a strategy that makes sense for your organization. Every company has different needs, so there is no cookie-cutter formula to handle this.
    Hope this helps.
    If you are using the BusinessObjects tools, you should join ASUG (www.asug.com).
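The archive-and-purge cycle described above can be sketched in miniature. This Python/SQLite mock uses a single simplified AUDIT_EVENT table and a hypothetical AUDIT_EVENT_ARCHIVE table for illustration; the real BOE audit schema, cutoff logic, and archive destination would be worked out with your DBA:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE AUDIT_EVENT         (EVENT_ID INTEGER PRIMARY KEY, START_TIMESTAMP TEXT);
    CREATE TABLE AUDIT_EVENT_ARCHIVE (EVENT_ID INTEGER PRIMARY KEY, START_TIMESTAMP TEXT);
    INSERT INTO AUDIT_EVENT VALUES (1, '2009-01-10'), (2, '2009-08-20');
""")

CUTOFF = '2009-07-01'  # stands in for "everything older than six months"

# Copy old rows into the archive table, then purge them from the live table.
conn.execute(
    "INSERT INTO AUDIT_EVENT_ARCHIVE SELECT * FROM AUDIT_EVENT WHERE START_TIMESTAMP < ?",
    (CUTOFF,))
conn.execute("DELETE FROM AUDIT_EVENT WHERE START_TIMESTAMP < ?", (CUTOFF,))

print(conn.execute("SELECT COUNT(*) FROM AUDIT_EVENT").fetchone()[0])          # 1
print(conn.execute("SELECT COUNT(*) FROM AUDIT_EVENT_ARCHIVE").fetchone()[0])  # 1
```

Older rows stay queryable in the archive table for the occasional one-off request, while the live table stays small.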

  • Display unaccountably gets too big for the screen

    This has happened maybe a dozen times, and I don't know what I did to make it happen. Suddenly, the desktop display gets too large for the screen, and I have to scroll around to find the dock or the menu bar. Whatever else I'm working on, e-mail, etc., gets outsized, too. Does anyone know how to correct this? So far, restarting has worked, but that's a pretty time-consuming way to do it.

    Cindy,
    Hold down the CNTL key and scroll down with the mouse or trackpad. That should change it back to size. This is called screen zoom.
    Cheers
    You can also deactivate it in System Preferences for the Mouse and/or trackpad.

  • I am getting error "ORA-12899: value too large for column".

    I am getting the error "ORA-12899: value too large for column" after upgrading to 10.2.0.4.0.
    The field is updated only through a trigger, with a hard-coded value.
    This happens randomly, not every time.
    select * from v$version
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
    PL/SQL Release 10.2.0.4.0 - Production
    CORE     10.2.0.4.0     Production
    TNS for Linux: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    Table Structure
    desc customer
    Name Null? Type
    CTRY_CODE NOT NULL CHAR(3 Byte)
    CO_CODE NOT NULL CHAR(3 Byte)
    CUST_NBR NOT NULL NUMBER(10)
    CUST_NAME CHAR(40 Byte)
    RECORD_STATUS CHAR(1 Byte)
    Trigger on the table
    CREATE OR REPLACE TRIGGER CUST_INSUPD
    BEFORE INSERT OR UPDATE
    ON CUSTOMER FOR EACH ROW
    BEGIN
    IF INSERTING THEN
    :NEW.RECORD_STATUS := 'I';
    ELSIF UPDATING THEN
    :NEW.RECORD_STATUS := 'U';
    END IF;
    END;
    ERROR at line 1:
    ORA-01001: invalid cursor
    ORA-06512: at "UPDATE_CUSTOMER", line 1320
    ORA-12899: value too large for column "CUSTOMER"."RECORD_STATUS" (actual: 3,
    maximum: 1)
    ORA-06512: at line 1

    SQL> create table customer(
      2  CTRY_CODE  CHAR(3 Byte) not null,
      3  CO_CODE  CHAR(3 Byte) not null,
      4  CUST_NBR NUMBER(10) not null,
      5  CUST_NAME CHAR(40 Byte) ,
      6  RECORD_STATUS CHAR(1 Byte)
      7  );
    Table created.
    SQL> CREATE OR REPLACE TRIGGER CUST_INSUPD
      2  BEFORE INSERT OR UPDATE
      3  ON CUSTOMER FOR EACH ROW
      4  BEGIN
      5  IF INSERTING THEN
      6  :NEW.RECORD_STATUS := 'I';
      7  ELSIF UPDATING THEN
      8  :NEW.RECORD_STATUS := 'U';
      9  END IF;
    10  END;
    11  /
    Trigger created.
    SQL> insert into customer(CTRY_CODE,CO_CODE,CUST_NBR,CUST_NAME,RECORD_STATUS)
      2                values('12','13','1','Mahesh Kaila','UPD');
                  values('12','13','1','Mahesh Kaila','UPD')
    ERROR at line 2:
    ORA-12899: value too large for column "HPVPPM"."CUSTOMER"."RECORD_STATUS"
    (actual: 3, maximum: 1)
    SQL> insert into customer(CTRY_CODE,CO_CODE,CUST_NBR,CUST_NAME)
      2                values('12','13','1','Mahesh Kaila');
    1 row created.
    SQL> set linesize 200
    SQL> select * from customer;
    CTR CO_   CUST_NBR CUST_NAME                                R
    12  13           1 Mahesh Kaila                             I
    SQL> update customer set cust_name='tst';
    1 row updated.
    SQL> select * from customer;
    CTR CO_   CUST_NBR CUST_NAME                                R
    12  13           1 tst                                      U
    Recheck your code once again; somewhere you are using the record_status column for an insert or update.
    Ravi Kumar

  • Get Info folder size is incorrect/too large

    I am getting an incorrect report from the Get Info command: one of the folders in my 8TB array is topping 15TB! In fact, this folder only contains data (Red movie files) amounting to about 800GB total. This problem, in turn, is preventing me from copying that directory to a backup drive, as it is seen as being too large, which it is not.
    Of course this is impossible, as my array, as mentioned above, is only 8TB. And when I do a Get Info on the array drive icon itself, it reads correctly, listing the Capacity as 8TB, and the Available space at 1.8TB. So, it's just the folders that are being read incorrectly.
    I have not enabled or set up Time Machine, so I do not think that is the issue.
    I am new to Macs and really have no clue how to remedy this situation.
    ANY help would be appreciated.

    Thank you for the response, Eric. I've already tried that. Didn't make a difference.
    Since I can't afford to wait for this to be solved, and because I don't know how to solve this issue myself, I've gone ahead and deleted the offending folder, created a new folder with a different name, and re-populated it with output files from Davinci Resolve. In effect, I rendered my project out again from scratch. Unfortunately the same **** thing is happening! I'm creating a populated folder that Get Info reports as being 16 Terabytes, on an 8 Terabyte array!
    (*sigh*)

  • My mac will not copy more than one file at a time and gets locked up if the file is too large, my mac will not copy more than one file at a time and gets locked up if the file is too large

    my mac will not copy more than one file at a time and gets locked up if the file is too large, my mac will not copy more than one file at a time and gets locked up if the file is too large

    So now that you have repeated the same thing three times that doesn't make things any clearer at all.
    You are copying files from where to where?
    How are you attempting to copy files, software or click and drag?
    Any other detail would be helpful.
    Allan

  • How do I get my web pages back to original size? My fonts are all too large and keep reverting after I change them back!

    How do I get my web pages back to original size? My fonts are all too large and keep reverting after I change them back!

    That has not worked. I do it and still it reverts. Isn't there a way to tell Firefox to just go back to default settings?

  • Burning a disc but getting a "too large for currently selected disc media" message

    Trying to burn a project onto a DVD (4.7 GB disc), but I keep getting a message that says "Compiled project is 2835.4 MB too large for currently selected disc media." Does anyone have any ideas on this? Isn't 2835.4 MB around 2.8 GB? I'm confused! Shouldn't this fit? Any help would be appreciated.

    Hi
    To fit that length on a single-layer DVD you must use Compressor to encode your video and audio before importing into DVDSP. You can take the standard DVD High Quality 120 min preset and use the MPEG2 and AC3 (Dolby 2) settings, then use those files in DVDSP.
    If you are currently using DVDSP as the encoder, you'll get uncompressed AIFF audio, which takes a lot of disc space and bitrate.
    Hope that helps !
      Alberto

  • While doing F-32 I am getting "The difference is too large for clearing"

    Dear Gurus,
    I am getting the below message while clearing customer open items in F-32:
    "The difference is too large for clearing."
    Regards,
    Prasad

    When you are clearing debits and credits in F-32, you need to check the following:
    1. Whether the debits and credits match, i.e. whether the negatives and positives balance.
    2. Whether any discount line items are being generated automatically.
    3. Whether foreign-exchange transactions are involved.
    4. Then check the entire posting again.
    Thanks

  • Getting error ORA-01401: inserted value too large for column

    Hello,
    I have configured the scenario IDoc to JDBC. In SXMB_MONI I am getting the success message, but in the Adapter Monitor I am getting the error
    ORA-01401: inserted value too large for column, and the entries are also not inserted into the table. I suspect this is because of the date format: in the Oracle table the date field is defined in the format '01-JAN-2005', and I am passing the date fields (INVOICE_DATE and INVOICE_DUE_DATE) in the same format. Please see the target structure.
    <?xml version="1.0" encoding="UTF-8" ?>
    - <ns:INVOICE_INFO_MT xmlns:ns="http://sap.com/xi/InvoiceIDoc_Test">
    - <Statement>
    - <INVOICE_INFO action="INSERT">
    - <access>
      <INVOICE_ID>0090000303</INVOICE_ID>
      <INVOICE_DATE>01-Dec-2005</INVOICE_DATE>
      <INVOICE_DUE_DATE>01-Jan-2005</INVOICE_DUE_DATE>
      <ORDER_ID>0000000000011852</ORDER_ID>
      <ORDER_LINE_NUM>000010</ORDER_LINE_NUM>
      <INVOICE_TYPE>LR</INVOICE_TYPE>
      <INVOICE_ORGINAL_AMT>10000</INVOICE_ORGINAL_AMT>
      <INVOICE_OUTSTANDING_AMT>1000</INVOICE_OUTSTANDING_AMT>
      <INTERNAL_USE_FLG>X</INTERNAL_USE_FLG>
      <BILLTO>0004000012</BILLTO>
      <SHIPTO>40000006</SHIPTO>
      <STATUS_ID>O</STATUS_ID>
      </access>
      </INVOICE_INFO>
      </Statement>
      </ns:INVOICE_INFO_MT>
    Please let me know all the possible solutions to fix the error and get the entries inserted into the table.
    Thanks in Advance!

    Hi muthu,
    // inserted value too large for column
    When your Oracle insertion throws this error, it implies that some value you are trying to insert into the table is larger than the allocated size.
    Just check the format of your table and the respective size of each field on your Oracle client by using the command
    DESCRIBE <tablename>
    and then verify it against the input. I don't think the problem is with the DATE format, because if it were not a valid date format you would have got an error like
    String Literal does not match type
    Hope this helps,
    Regards,
    Bhavesh
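The verification Bhavesh describes can also be automated: compare each value against the widths DESCRIBE reports before attempting the insert. A minimal Python sketch; the column names are taken from the target structure above, but the widths here are invented example values, not the actual table definition:

```python
# Maximum character widths as reported by DESCRIBE for the target table
# (example values only -- substitute the real output of DESCRIBE).
COLUMN_WIDTHS = {"INVOICE_ID": 10, "INVOICE_TYPE": 2, "STATUS_ID": 1}

def oversized_fields(row):
    """Return the names of columns whose values exceed the declared width."""
    return [col for col, width in COLUMN_WIDTHS.items()
            if col in row and len(str(row[col])) > width]

row = {"INVOICE_ID": "0090000303", "INVOICE_TYPE": "LR", "STATUS_ID": "OPEN"}
print(oversized_fields(row))  # ['STATUS_ID'] -- 'OPEN' is 4 chars, column allows 1
```

Running such a check on the mapped payload before it reaches the JDBC adapter pinpoints exactly which field trips ORA-01401.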

  • Alter mount database failing: Intel SVR4 UNIX Error: 79: Value too large for defined data type

    Hi there,
    I am having a kind of weird issue with my Oracle enterprise DB, which had been working perfectly since 2009. After having some trouble with my network switch (we replaced the switch), the whole network came back and all subnet devices are functioning perfectly.
    This server uses NFS for the Oracle DB backup, and Oracle is not starting in mount/alter etc.
    Here the details of my server:
    - SunOS 5.10 Generic_141445-09 i86pc i386 i86pc
    - Oracle Database 10g Enterprise Edition Release 10.2.0.2.0
    - 38TB disk space (plenty free)
    - 4GB RAM
    And when I attempt to start the db, here the logs:
    Starting up ORACLE RDBMS Version: 10.2.0.2.0.
    System parameters with non-default values:
      processes                = 150
      shared_pool_size         = 209715200
      control_files            = /opt/oracle/oradata/CATL/control01.ctl, /opt/oracle/oradata/CATL/control02.ctl, /opt/oracle/oradata/CATL/control03.ctl
      db_cache_size            = 104857600
      compatible               = 10.2.0
      log_archive_dest         = /opt/oracle/oradata/CATL/archive
      log_buffer               = 2867200
      db_files                 = 80
      db_file_multiblock_read_count= 32
      undo_management          = AUTO
      global_names             = TRUE
      instance_name            = CATL
      parallel_max_servers     = 5
      background_dump_dest     = /opt/oracle/admin/CATL/bdump
      user_dump_dest           = /opt/oracle/admin/CATL/udump
      max_dump_file_size       = 10240
      core_dump_dest           = /opt/oracle/admin/CATL/cdump
      db_name                  = CATL
      open_cursors             = 300
    PMON started with pid=2, OS id=10751
    PSP0 started with pid=3, OS id=10753
    MMAN started with pid=4, OS id=10755
    DBW0 started with pid=5, OS id=10757
    LGWR started with pid=6, OS id=10759
    CKPT started with pid=7, OS id=10761
    SMON started with pid=8, OS id=10763
    RECO started with pid=9, OS id=10765
    MMON started with pid=10, OS id=10767
    MMNL started with pid=11, OS id=10769
    Thu Nov 28 05:49:02 2013
    ALTER DATABASE   MOUNT
    Thu Nov 28 05:49:02 2013
    ORA-00202: control file: '/opt/oracle/oradata/CATL/control01.ctl'
    ORA-27037: unable to obtain file status
    Intel SVR4 UNIX Error: 79: Value too large for defined data type
    Additional information: 45
    Trying to start db without mount it starts without issues:
    SQL> startup nomount
    ORACLE instance started.
    Total System Global Area  343932928 bytes
    Fixed Size                  1280132 bytes
    Variable Size             234882940 bytes
    Database Buffers          104857600 bytes
    Redo Buffers                2912256 bytes
    SQL>
    But when I try to mount or alter db:
    SQL> alter database mount;
    alter database mount
    ERROR at line 1:
    ORA-00205: error in identifying control file, check alert log for more info
    SQL>
    From the logs again:
    alter database mount
    Thu Nov 28 06:00:20 2013
    ORA-00202: control file: '/opt/oracle/oradata/CATL/control01.ctl'
    ORA-27037: unable to obtain file status
    Intel SVR4 UNIX Error: 79: Value too large for defined data type
    Additional information: 45
    Thu Nov 28 06:00:20 2013
    ORA-205 signalled during: alter database mount
    We have already checked everywhere in the system and engaged Oracle support as well, without success. The control files are in place and were checked with strings; they appear correct.
    Can somebody give a clue please?
    Maybe somebody had similar issue here....
    Thanks in advance.

    I did the touch to update the date, but no joy either.
    These are further logs, so maybe can give a clue:
    Wed Nov 20 05:58:27 2013
    Errors in file /opt/oracle/admin/CATL/bdump/catl_j000_7304.trc:
    ORA-12012: error on auto execute of job 5324
    ORA-27468: "SYS.PURGE_LOG" is locked by another process
    Sun Nov 24 20:13:40 2013
    Starting ORACLE instance (normal)
    control_files = /opt/oracle/oradata/CATL/control01.ctl, /opt/oracle/oradata/CATL/control02.ctl, /opt/oracle/oradata/CATL/control03.ctl
    Sun Nov 24 20:15:42 2013
    alter database mount
    Sun Nov 24 20:15:42 2013
    ORA-00202: control file: '/opt/oracle/oradata/CATL/control01.ctl'
    ORA-27037: unable to obtain file status
    Intel SVR4 UNIX Error: 79: Value too large for defined data type
    Additional information: 45
    Sun Nov 24 20:15:42 2013
    ORA-205 signalled during: alter database mount
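For what it's worth, SVR4 error 79 is EOVERFLOW ("Value too large for defined data type"), which a 32-bit binary typically hits when a file attribute, usually the size or (especially over NFS) the inode number, no longer fits in the 32-bit stat fields it expects. A quick diagnostic sketch in Python; the threshold values are the usual 32-bit limits, and the control-file path is the one from the alert log above:

```python
import os

def fits_32bit_stat(path):
    """Heuristic: would this file's size and inode number fit the 32-bit
    stat fields that an old 32-bit executable expects? EOVERFLOW (error 79)
    is the classic symptom when they do not."""
    st = os.stat(path)
    return st.st_size < 2**31 and st.st_ino < 2**32

# Example: run against a control file named in the alert log, e.g.
# fits_32bit_stat('/opt/oracle/oradata/CATL/control01.ctl')
```

If this returns False for a control file on the NFS mount, remounting with options that keep inode numbers small, or relocating the files, would be the direction to investigate.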

  • When editing a wiki page, get a 'request entity too large' error message.

    https://stbeehive.oracle.com/teamcollab/wiki/Sales+Playbooks:Demonstrating+Differentiators — I'm trying to edit one of my wiki pages that has been static for about 5 months now, and when I try and save the page, I get the following message (note I cropped some because of formatting issues when posting):
    *413 Request Entity Too Large*
    HTTP/1.1 413 Request Entity Too Large Date: Tue, 18 Oct 2011 15:35:41 GMT Server: Oracle-Application-Server-10g Connection: close Transfer-Encoding: chunked Content-Type: text/html; charset=iso-8859-1
    Request Entity Too Large
    The requested resource
    /teamcollab/wiki/<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><meta content="text/html; charset=utf-8" http-equiv="Content-Type" /><script type="text/javascript"> var U = "undefined"; var gHttpRelativeWebRoot = "/ocom/"; var SSContributor = false; var SSForceContributor = false; var SSHideContributorUI = false; var ssUrlPrefix = "/splash/"; var ssUrlType = "2"; var g_navNode_Path = new Array(); g_navNode_Path[0] = '1790'; g_navNode_Path[1] = 'splash_collabsuite'; var g_ssSourceNodeId = "splash_collabsuite"; var g_ssSourceSiteId = "splash";</script><script id="SSNavigationFunctionsScript" type="text/javascript" src="/ocom/websites/splash/sitenavigationfunctions.js"></script><script id="SSNavigationScript" type="text/javascript" src="/ocom/websites/splash/sitenavigation.js"></script><script type="text/javascript">var g_strLanguageId = "en";</script><script type="text/javascript" src="/ocom/resources/wcm/sitestudio/wcm.toggle.js"></script><script type="text/javascript" src="/ocom/resources/sitestudio/ssajax/ssajax.js"></script> <script id="ssInfo" type="text/xml" warning="DO NOT MODIFY!"> <ssinfo> <fragmentinstance id="fragment1" fragmentid="universal-metatag" library="server:UNIVERSAL-FRAGMENTS"> </fragmentinstance> <fragmentinstance id="fragment2" fragmentid="ExternalSiteCatalystFragment" library="server:EXTERNALSCFRAGMENTLIB"></fragmentinstance> </ssinfo> </script> <meta name="GENERATOR" content="MSHTML 8.00.6001.18904" /><!--SS_BEGIN_SNIPPET(fragment1,head_tags)--><title>Collabsuite Outage</title><meta name="Title" content="Collabsuite Outage"><meta name="Description" content="Collabsuite Outage"><meta name="Keywords" content="Collabsuite Outage"><meta name="robots" content="NOINDEX, NOFOLLOW"><meta name="country" content=""><meta name="Language" content="en"><meta name="Updated Date" content="4/12/11 10:38 
AM"><!--SS_END_SNIPPET(fragment1,head_tags)--> </head><body> <!--SS_BEGIN_SNIPPET(fragment1,code)...
    does not allow request data with GET requests, or the amount of data provided in the request exceeds the capacity limit.
    Additionally, a 413 Request Entity Too Large error was encountered while trying to use an ErrorDocument to handle the request.
    The page, should you wish to eyeball it, is at:
    https://stbeehive.oracle.com/teamcollab/wiki/Sales+Playbooks:Demonstrating+Differentiators

    Duane,
    This looks like the URL has the content of a wiki page attached to it, which is blowing up the GET request. Can you go to the earlier version? The history should allow you to backtrack changes. If you access that earlier version and change something small, does it save OK? If so, then maybe the change you made is the problem.
    I cannot access the workspace without being given explicit access so this is a guess.
    Phil
