AUD$ table export

Hi All,
Please help me to export sys.aud$, and also explain how to find the size of the aud$ table. Please suggest the queries to execute for these...
thanks

SQL> select sum(bytes)/(1024*1024) "Table size(MB)" from dba_segments where segment_name='AUD$' and owner='SYS';
Table size(MB)
             6
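A side note on the size query, not raised in the thread: from 10g onward AUD$ stores SQL text and bind values in LOB columns whose segments are not named AUD$, so matching segment_name='AUD$' alone can undercount. A hedged variant that includes the LOB segments:

SELECT SUM(s.bytes)/1024/1024 AS "Total size (MB)"
  FROM dba_segments s
 WHERE s.owner = 'SYS'
   AND (s.segment_name = 'AUD$'
        OR s.segment_name IN (SELECT l.segment_name
                                FROM dba_lobs l
                               WHERE l.owner = 'SYS'
                                 AND l.table_name = 'AUD$'));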
C:\>exp 'sys/oracle as sysdba' tables=aud$ file=d:/aud$.dmp
Export: Release 11.2.0.1.0 - Production on Fri Jun 8 21:43:18 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
About to export specified tables via Conventional Path ...
. . exporting table                           AUD$         64 rows exported
Export terminated successfully without warnings.
But may I know why you want to export this table?

Similar Messages

  • Is there a way to include the sys.aud$ table in a full database dp export?

    I am doing an export using the following parfile information:
    userid=/
    directory=datapump_nightly_export
    dumpfile=test_expdp.dmp
    logfile=test_expdp.log
    full=y
    content=all
    However, when I run this I do not see sys.aud$ in the log file. I know I can do a separate export to specifically get the sys.aud$ table, but is there any way to include it in my full export?
    Thanks in advance for any suggestion.

    Here's more background information... I have some audits set up on my database for one of my users. Every quarter an automated job runs that creates a usage/statistics report for this person using data in aud$. At the end of the job I export the aud$ table and truncate it. However, last quarter I found that there was a mistake in my report and my export did not run properly, so my audit data was gone. I also have full Data Pump exports that run daily, but found that aud$ was not there. That is why I thought I'd like to include sys.aud$ in the full Data Pump exports.
    I understand why other SYS tables would be left out of a full export, but aud$ data cannot be reproduced, so to me it makes sense to include it in a full export.
    Don't worry, we run our true backups using RMAN, which is eventually how I got the aud$ data back, by creating a copy of my database up to the time of the truncate. However, this was quite time consuming.
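    For background (not stated in the thread): Data Pump full exports skip SYS-owned objects, which is why aud$ never shows up in the log. A hedged workaround is to snapshot the audit trail into an ordinary schema that full=y does include; AUDIT_ARCH is a hypothetical schema created for this purpose:
    -- refresh a copy of the audit trail before the nightly full export
    DROP TABLE audit_arch.aud_snapshot;      -- ignore ORA-00942 on the first run
    CREATE TABLE audit_arch.aud_snapshot AS
      SELECT * FROM sys.aud$;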

  • Export "sys.aud$" table as the system user using Data Pump

    Friends,
    I want to export (using Data Pump 'expdp') the SYS user's AUD$ table (sys.aud$) as the SYSTEM
    user, but it shows the following error:
    bash-3.00$ expdp system/sys123@onlinete directory=test_dir TABLES=sys.AUD$ DUMPFILE=sysaud.$Date.dmp logfile=audit.$date.log
    Export: Release 10.2.0.1.0 - 64bit Production on Wednesday, 14 January, 2009 13:30:56
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    Starting "SYSTEM"."SYS_EXPORT_TABLE_01": system/********@onlinete directory=test_dir TABLES=sys.AUD$ DUMPFILE=sysaud..dmp logfile=audit..log
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 0 KB
    ORA-39165: Schema SYS was not found.
    ORA-39166: Object AUD$ was not found.
    ORA-31655: no data or metadata objects selected for job
    Job "SYSTEM"."SYS_EXPORT_TABLE_01" completed with 3 error(s) at 13:31:01
    It also shows an error when I run it as the SYS user:
    bash-3.00$ expdp sys/sys123@onlinete directory=test_dir TABLES=sys.AUD$ DUMPFILE=sysaud.$Date.dmp logfile=audit.$date.log
    Export: Release 10.2.0.1.0 - 64bit Production on Wednesday, 14 January, 2009 13:35:19
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    UDE-00008: operation generated ORACLE error 28009
    ORA-28009: connection as SYS should be as SYSDBA or SYSOPER
    Username: sys/sys123 as sysdba
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    Starting "SYS"."SYS_EXPORT_TABLE_01": sys/******** AS SYSDBA directory=test_dir TABLES=sys.AUD$ DUMPFILE=sysaud..dmp logfile=audit..log
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 0 KB
    ORA-39165: Schema SYS was not found.
    ORA-39166: Object AUD$ was not found.
    ORA-31655: no data or metadata objects selected for job
    Job "SYS"."SYS_EXPORT_TABLE_01" completed with 3 error(s) at 13:35:29
    I don't understand why it is not working. Need advice please...

    But that's not fair..
    Imagine the situation where I figure out that some data was edited a year ago, but I don't know by whom. Audit was enabled at that time, I was exporting the AUD$ table (using the regular exp) during the year, everything is good. BUT two months ago I upgraded my DB to 11g, hence I cannot use imp to restore the table and see what was going on a year ago. Does that mean that I always have to be able to create a 10g database in order to use my AUD$ exports?
    Is there any other way of backing up this table? So far I was doing exp+truncate, but since the 11g release, where exp/imp are no longer supported, I am trying to think of another way of dealing with the audit...
    does anybody have ideas about it?
    thanks,
    M
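    Not an answer given in the thread, but since the poster is now on 11g: the DBMS_AUDIT_MGMT package is the documented way to archive and purge the audit trail without exp/imp. A minimal sketch, assuming the one-time INIT_CLEANUP has not yet been run (note that it may relocate AUD$ to SYSAUX) and an arbitrary 90-day retention:
    BEGIN
      -- one-time initialization of managed cleanup for the standard audit trail
      DBMS_AUDIT_MGMT.INIT_CLEANUP(
        audit_trail_type         => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
        default_cleanup_interval => 24);
      -- declare how far the trail has been archived, then purge up to that point
      DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
        audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
        last_archive_time => SYSTIMESTAMP - INTERVAL '90' DAY);
      DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
        audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
        use_last_arch_timestamp => TRUE);
    END;
    /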

  • Audit log AUD$ table and Query...

    We have enabled auditing ('DB, EXTENDED') for DB user ERP. We restored our ERP schema the day before yesterday, and today when I checked the log file its size was 4.3 GB; before that the file size was 60 MB.
    When I queried to check the total rows with sysdate-1, there were 6,660,756 rows; before the ERP schema restoration there were around 30,000 rows.
    I have not changed the query.
    The AUD$ table size is increasing rapidly; it has grown to over 10 GB in two days.
    Please help me... what should I do?
    Regards,
    Rakesh Soni,
    http://rakeshocp.blogspot.com/

    Purging the AUD$ table is a good idea after taking the export....
    Yeah... it could be a better idea to audit only the things that the application skips...
    I was getting calls from the finance and operations departments complaining that their ERP applications were hanging, taking a long time (around 20 to 30 minutes) to execute day-end procedures and reports. I recalled that my last deployment on live was the enabling of auditing. As soon as I executed noaudit all and noaudit select, update, delete, insert on erp, the users got their day-end procedures and reports executed in less than 1 minute...
    Can anybody explain... does auditing degrade performance?
    Regards
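    To the last question above (an aside, not from the thread): yes, auditing adds a write to AUD$ for every audited statement, and 'DB, EXTENDED' also captures SQL text and bind values, which is why a busy ERP feels it. A hedged sketch of narrowing the audit scope rather than disabling it entirely (the audited table name is hypothetical):
    NOAUDIT ALL;
    AUDIT SESSION;                                              -- keep logon auditing
    AUDIT INSERT, UPDATE, DELETE ON erp.gl_postings BY ACCESS;  -- one hot table only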

  • Slow split table export (R3load and WHERE clause)

    For our split table exports, we used custom coded WHERE clauses. (Basically adding additional columns to the R3ta default column to take advantage of existing indexes).
    The results have been good so far. Full tablescans have been eliminated and export times have gone down, in some cases, tables export times have improved by 50%.
    However, our biggest table, CE1OC01 (120 GB), continues to be a bottleneck. Initially, after using the new WHERE clause, it looked like performance gains were dramatic, with export times for the first 5 packages dropping from 25-30 hours down to 1 1/2 hours.
    However, after 2 hours, the remaining CE1OC01 split packages had shown no improvement. This is very odd: part of the table exports very fast, but other parts run very slowly, and we are trying to determine why.
    Before the custom WHERE clauses, the export server had run into issues with SORTHEAP being exhausted, so we thought that might be the culprit. But that does not seem to be an issue now, since the improved WHERE clauses have reduced or eliminated excessive sorting.
    I checked the access path of all the CE1OC01 packages, through EXPLAIN, and they all access the same index to return results. The execution time in EXPLAIN returns similar times for each of the packages:
    CE1OC01-11: select * from CE1OC01  WHERE MANDT='212'
    AND ("BELNR" > '0124727994') AND ("BELNR" <= '0131810250')
    CE1OC01-19: select * from CE1OC01 WHERE MANDT='212'
    AND ("BELNR" > '0181387534') AND ("BELNR" <= '0188469413')
          0 SELECT STATEMENT ( Estimated Costs =  8.448E+06 [timerons] )
      |
      ---      1 RETURN
          |
          ---      2 FETCH CE1OC01
              |
              ------   3 IXSCAN CE1OC01~4 #key columns:  2
    query execution time [millisec]            |       333
    uow elapsed time [microsec]                |   429,907
    total user CPU time [microsec]             |         0
    total system cpu time [microsec]           |         0
    Both queries utilize an index that has fields MANDT and BELNR. However, during R3load, CE1OC01-19 finishes in an hour and a half, whereas CE1OC01-11 can take 25-30 hours.
    I am wondering if there is anything else to check on the DB2 access path side of things or if I need to start digging deeper into other aggregate load/infrastructure issues. Other tables don't seem to exhibit this behavior. There is some discrepancy between other tables' run times (for example, 2-4 hours), but those are not as dramatic as this particular table.
    Another idea to test is to try and export only 5 parts of the table at a time, perhaps there is a throughput or logical limitation when all 20 of the exports are running at the same time. Or create a single column index on BELNR (default R3ta column) and see if that shows any improvement.
    Anyone have any ideas on why some of the table moves fast but the rest of it moves slow?
    We also notice that the "fast" parts of the table are at the very end of the table. We are wondering if perhaps the index is less fragmented in that range, a REORG or recreation of the index may do this table some good. We were hoping to squeeze as many improvements out of our export process as possible before running a full REORG on the database. This particular index (there are 5 indexes on this table) has a Cluster Ratio of 54%, so, perhaps for purposes of the export, it may make sense to REORG the table and cluster it around this particular index. By contrast, the primary key index has a Cluster Ratio of 86%.
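    Not from the thread, but a sketch of that REORG idea in DB2 CLP syntax; the SAP schema name (sapr3) is an assumption, and a REORG of a 120 GB table needs a sizeable maintenance window:
    # cluster the table around the export index, then refresh the statistics
    db2 "REORG TABLE sapr3.CE1OC01 INDEX sapr3.\"CE1OC01~4\""
    db2 "RUNSTATS ON TABLE sapr3.CE1OC01 WITH DISTRIBUTION AND DETAILED INDEXES ALL"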
    Here is the output from our current run. The "slow" parts of the table have not completed, but they average a throughput of 0.18 MB/min, versus the "fast" parts, which average 5 MB/min, a pretty dramatic difference.
    package     time      start date        end date          size MB  MB/min
    CE1OC01-16  10:20:37  2008-11-25 20:47  2008-11-26 07:08   417.62    0.67
    CE1OC01-18   1:26:58  2008-11-25 20:47  2008-11-25 22:14   429.41    4.94
    CE1OC01-17   1:26:04  2008-11-25 20:47  2008-11-25 22:13   416.38    4.84
    CE1OC01-19   1:24:46  2008-11-25 20:47  2008-11-25 22:12   437.98    5.17
    CE1OC01-20   1:20:51  2008-11-25 20:48  2008-11-25 22:09   435.87    5.39
    CE1OC01-1    0:00:00  2008-11-25 20:48                       0.00
    CE1OC01-10   0:00:00  2008-11-25 20:48                     152.25
    CE1OC01-11   0:00:00  2008-11-25 20:48                     143.55
    CE1OC01-12   0:00:00  2008-11-25 20:48                     145.11
    CE1OC01-13   0:00:00  2008-11-25 20:48                     146.92
    CE1OC01-14   0:00:00  2008-11-25 20:48                     140.00
    CE1OC01-15   0:00:00  2008-11-25 20:48                     145.52
    CE1OC01-2    0:00:00  2008-11-25 20:48                     184.33
    CE1OC01-3    0:00:00  2008-11-25 20:48                     183.34
    CE1OC01-4    0:00:00  2008-11-25 20:48                     158.62
    CE1OC01-5    0:00:00  2008-11-25 20:48                     157.09
    CE1OC01-6    0:00:00  2008-11-25 20:48                     150.41
    CE1OC01-7    0:00:00  2008-11-25 20:48                     175.29
    CE1OC01-8    0:00:00  2008-11-25 20:48                     150.55
    CE1OC01-9    0:00:00  2008-11-25 20:48                     154.84

    Hi all, thanks for the quick and extremely helpful answers.
    Beck,
    Thanks for the health check. We are exporting the entire table in parallel, so all the exports begin at the same time. Regarding the SORTHEAP, we initially thought that might be our problem, because we were running out of SORTHEAP on the source database server. Looks like for this run, and the previous run, SORTHEAP has remained available and has not overrun. That's what was so confusing, because this looked like a buffer overrun.
    Ralph,
    The WHERE technique you provided worked perfectly. Our export times have improved dramatically by switching to the forced full tablescan. Being always trained to eliminate full tablescans, it seems counterintuitive at first, but, given the nature of the export query, combined with the unsorted export, it now makes total sense why the tablescan works so much better.
    Looks like you were right, in this case, the index adds too much additional overhead, and especially since our Cluster Ratio was terrible (in the 50% range), so the index was definitely working against us, by bouncing all over the place to pull the data out.
    We're going to look at some of our other long running tables and see if this technique improves runtimes on them as well.
    Thanks so much, that helped us out tremendously. We will verify the data from source to target matches up 1 for 1 by running a consistency check.
    Look at the throughput difference between the previous run and the current run:
    package     time       start date        end date          size MB  MB/min
    CE1OC01-11   40:14:47  2008-11-20 19:43  2008-11-22 11:58   437.27    0.18
    CE1OC01-14   39:59:51  2008-11-20 19:43  2008-11-22 11:43   427.60    0.18
    CE1OC01-12   39:58:37  2008-11-20 19:43  2008-11-22 11:42   430.66    0.18
    CE1OC01-13   39:51:27  2008-11-20 19:43  2008-11-22 11:35   421.09    0.18
    CE1OC01-15   39:49:50  2008-11-20 19:43  2008-11-22 11:33   426.54    0.18
    CE1OC01-10   39:33:57  2008-11-20 19:43  2008-11-22 11:17   429.44    0.18
    CE1OC01-8    39:27:58  2008-11-20 19:43  2008-11-22 11:11   417.62    0.18
    CE1OC01-6    39:02:18  2008-11-20 19:43  2008-11-22 10:45   416.35    0.18
    CE1OC01-5    38:53:09  2008-11-20 19:43  2008-11-22 10:36   413.29    0.18
    CE1OC01-4    38:52:34  2008-11-20 19:43  2008-11-22 10:36   424.06    0.18
    CE1OC01-9    38:48:09  2008-11-20 19:43  2008-11-22 10:31   416.89    0.18
    CE1OC01-3    38:21:51  2008-11-20 19:43  2008-11-22 10:05   428.16    0.19
    CE1OC01-2    36:02:27  2008-11-20 19:43  2008-11-22 07:46   409.05    0.19
    CE1OC01-7    33:35:42  2008-11-20 19:43  2008-11-22 05:19   414.24    0.21
    CE1OC01-16    9:33:14  2008-11-20 19:43  2008-11-21 05:16   417.62    0.73
    CE1OC01-17    1:20:01  2008-11-20 19:43  2008-11-20 21:03   416.38    5.20
    CE1OC01-18    1:19:29  2008-11-20 19:43  2008-11-20 21:03   429.41    5.40
    CE1OC01-19    1:16:13  2008-11-20 19:44  2008-11-20 21:00   437.98    5.75
    CE1OC01-20    1:14:06  2008-11-20 19:49  2008-11-20 21:03   435.87    5.88
    PLPO          0:52:14  2008-11-20 19:43  2008-11-20 20:35    92.70    1.77
    BCST_SR       0:05:12  2008-11-20 19:43  2008-11-20 19:48    29.39    5.65
    CE1OC01-1     0:00:00  2008-11-20 19:43                       0.00
                558:13:06  2008-11-20 19:43  2008-11-22 11:58  8171.62
    package     time      start date        end date          size MB   MB/min
    CE1OC01-9    9:11:58  2008-12-01 20:14  2008-12-02 05:26   1172.12    2.12
    CE1OC01-5    9:11:48  2008-12-01 20:14  2008-12-02 05:25   1174.64    2.13
    CE1OC01-4    9:11:32  2008-12-01 20:14  2008-12-02 05:25   1174.51    2.13
    CE1OC01-8    9:09:24  2008-12-01 20:14  2008-12-02 05:23   1172.49    2.13
    CE1OC01-1    9:05:55  2008-12-01 20:14  2008-12-02 05:20   1188.43    2.18
    CE1OC01-2    9:00:47  2008-12-01 20:14  2008-12-02 05:14   1184.52    2.19
    CE1OC01-7    8:54:06  2008-12-01 20:14  2008-12-02 05:08   1173.23    2.20
    CE1OC01-3    8:52:22  2008-12-01 20:14  2008-12-02 05:06   1179.91    2.22
    CE1OC01-10   8:45:09  2008-12-01 20:14  2008-12-02 04:59   1171.90    2.23
    CE1OC01-6    8:28:10  2008-12-01 20:14  2008-12-02 04:42   1172.46    2.31
    PLPO         0:25:16  2008-12-01 20:14  2008-12-01 20:39     92.70    3.67
                90:16:27  2008-12-01 20:14  2008-12-02 05:26  11856.91

  • Delete from AUD$ table does not free space

    Hi,
    We have an Oracle 10g database (10.2.0.4) with the Oracle auditing feature on.
    Our problem is that I delete the AUD$ rows on a regular basis, but the system does not free the space in the tablespace. The space is freed only after a truncate.
    The actual situation is:
    SQL> select segment_name,bytes/1024/1024 as Mb from dba_segments where bytes/1024/1024 > 1000 and TABLESPACE_NAME='SYSTEM';
    SEGMENT_NAME                                                                              MB
    AUD$                                                                                             10161
    SQL> select count(*) from sys.aud$;
    COUNT(*)
    1073
    Can someone help me with this?
    The delete is done with a simple script:
    select count(*) from sys.aud$;
    delete from sys.aud$ where NTIMESTAMP# < sysdate -(1/24*6);
    select count(*) from sys.aud$;
    commit;
    exit;
    Thanks
    Nunzio

    Nunzio Cafarelli wrote:
    Hi, thanks for your answer. The shrink and coalesce features do not work:
    SQL> alter table SYS.AUD$ enable row movement;
    Table altered.
    SQL> alter table SYS.AUD$ shrink space;
    ERROR at line 1:
    ORA-10635: Invalid segment or tablespace type
    SQL> alter table SYS.AUD$ coalesce;
    ERROR at line 1:
    ORA-01735: invalid ALTER TABLE option
    The only way I have is to move the table to a different tablespace and, if needed, move it back. Do you know if that is a known bug?
    EdStevens wrote:
    No, it is not a bug. Why are you concerned about the space being released? If it were released, you'd just have to re-acquire it as new rows are added to the table. As rows are added, when an extent is filled a new extent is allocated. If rows are deleted, those extents remain allocated to the table and are re-used as needed. The aud$ table is a perfect example of a table which most definitely will need to reuse that space, so it is a total waste of effort to try to reclaim it.
    Nunzio Cafarelli wrote:
    Hi, that is the real point: it seems that the table doesn't reuse the allocated space. It constantly grows and claims new space. That is the real problem for me.
    EdStevens wrote:
    You haven't demonstrated that. You only showed us a one-time snapshot of the space used and the row count. You need to track this over several days / iterations of deleting rows. Show us that over a period of time the row count is fairly stable and yet the space used continues to increase.
    If, today, the rowcount is 'x' and the space allocated (measured in blocks or extents) is 'y', and you delete a bunch (or even all) of the rows, the space allocated should remain 'y'. Then tomorrow you should have near 'x' rows again (assuming a fairly level amount of audited activity), and the space allocated should still be 'y', maybe allowing for a slight increase in 'x' triggering one more extent. But if you are regularly pruning the row count, the number of extents should stabilize. If not, then you do have an issue, but I haven't yet seen proof of that happening.
    Nunzio Cafarelli wrote:
    Can this create problems, since AUD$ is a system table?
    EdStevens wrote:
    No, but you don't want a very active table like that in your SYSTEM tablespace. Oracle puts it there by default, but there have been well-known procedures on the net for YEARS on how to move it to another tablespace. With 11g, they've even introduced the DBMS_AUDIT_MGMT package to help with these tasks.
    Nunzio Cafarelli wrote:
    I know; I hope my customer will authorize the upgrade soon. Thanks for your suggestion to upgrade, sybrand_b, but as you can understand, sometimes an upgrade is not possible until the software vendor authorizes it, so we need to work on version 10.2.0.4 for some more months.
    P.S. I think that there are problems: after moving the table to a different tablespace and moving it back to SYSTEM, the used space was around 2 GB, but I was not able to resize the datafile.
    EdStevens wrote:
    Don't move it back. Get it out of SYSTEM and leave it there.
    Nunzio Cafarelli wrote:
    I think that I'll do this, at least to avoid risking corruption of the SYSTEM tablespace. Thanks again for your answers.
    Regards,
    Nunzio
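    Not stated in the thread, but for completeness: on 10g the usual route is the MOS scripts referenced elsewhere in this digest (Docs 1019377.6 and 72460.1); the bare minimum is a table move, sketched here under the assumption that a dedicated tablespace AUDIT_TS already exists (test on a non-production system first):
    -- hedged sketch: relocate the audit trail out of SYSTEM
    ALTER TABLE sys.aud$ MOVE TABLESPACE audit_ts;
    -- if any indexes have been added to aud$, rebuild them afterwards, e.g.:
    -- ALTER INDEX <index_name> REBUILD TABLESPACE audit_ts;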

  • Sys.aud$ table

    Hi,
    I need to get the auditing information for the last seven days from the auditing table, for users other than APPS. I tried the query below; please correct me if it is wrong. The query takes
    a long time to execute and no output is displayed. Please tell me whether the query is correct and whether I need to make any modifications.
    select userid,userhost,terminal,action#,obj$name, NTIMESTAMP# from sys.aud$ where action#=3 and timestamp# >=(sysdate-7) and userid not in ('APPS') order by ntimestamp# desc;
    Regards
    Aram

    You are SELECTing and ORDERing using column NTIMESTAMP#, but the WHERE clause is using TIMESTAMP# (a different column), most likely leading to a poor execution plan. Is the statement you posted syntactically correct?
    MOS Doc 1025314.6 - Descriptions of Action Code and Privileges Used in Fields in SYS.AUD$ Table
    HTH
    Srini
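    A hedged rewrite of the original query along the lines of Srini's observation, using NTIMESTAMP# consistently (10g audit-trail columns; action code 3 per the MOS note above):
    SELECT userid, userhost, terminal, action#, obj$name, ntimestamp#
      FROM sys.aud$
     WHERE action# = 3
       AND ntimestamp# >= SYSTIMESTAMP - INTERVAL '7' DAY
       AND userid <> 'APPS'
     ORDER BY ntimestamp# DESC;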

  • [SQL SERVER 2000] Generic table exporter

    Hello everybody.
    First of all, sorry for my bad English, but I'm French ;-)
    My internship consists of making a generic table exporter (driven by a table list) that exports into CSV files.
    I have tried 2 solutions:
    1 - Create a DTS package with a Dynamic Properties Task. The problem is I can easily change the destination file, but when I change the source table I don't know how to remap the transformations between source and destination (do you see what I mean?). Any ideas?
    2 - Use the bcp command. Very simple, but what to do when tables contain the separator character? For example: if a table row is "toto" | "I am , very happy" --> the csv file will look like this: toto, I am , very happy --> problems getting the data back (too many commas).
    Does someone have a solution?
    The last point is how to export the table structure. For the moment, using the table structure, I generate a SQL query which creates the table (I write this query to a file). Isn't there any "cleaner" solution?
    Thanks in advance and have a nice day, all

    Answers:
    1. Use ActiveX script to transform. Refer
    http://technet.microsoft.com/en-us/library/aa933459(v=sql.80).aspx
    2. Replace the pipe delimiter with a comma first, if it is a single column, and then use the bcp command. Refer
    http://technet.microsoft.com/en-us/library/aa174646(v=sql.80).aspx
    3. Regarding generating script refer
    http://stackoverflow.com/questions/4058977/exporting-tables-and-indexes-from-one-sql-server-2000-database-to-another
    Regards, RSingh
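    A hedged sketch of answer 2's idea from the command line: bcp accepts multi-character terminators (up to 10 characters), so an unlikely string avoids the embedded-comma problem; the server, database, and table names are placeholders:
    rem export in character mode with "|~|" as the field terminator
    bcp "mydb.dbo.mytable" out "mytable.csv" -c -t "|~|" -r "\n" -S myserver -T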

  • Sys.aud$ table not accessible from PL/SQL?

    I am trying to do the following.
    When you enable auditing with a specific command like AUDIT
    SESSION; it will produce many, many rows in the sys.aud$ table.
    This is the reason why we need to maintain the data that then
    accumulates in this table.
    I tried to do this with a separate user called AUDITER.
    From the SYS user I gave it the following permissions:
    GRANT select, delete, update, insert ON sys.aud$ TO AUDITER;
    If I now select from SYS.AUD$, it works as a
    standalone select statement like:
    SELECT * FROM SYS.AUD$;
    But if I create a PROCEDURE like the following:
    CREATE OR REPLACE PROCEDURE proceed_audit AS
      CURSOR audtab IS SELECT * FROM sys.aud$;
    BEGIN
      NULL;  -- maintenance logic would go here
    END;
    Oracle generates the message:
    PLS-00201: identifier 'SYS.AUD$' must be declared
    I don't understand this message, because the object exists and in
    plain SQL I can use it.
    Can anyone help me?
    Thanks
    P.S. It's Oracle version 8.1.7.

    Are you sure the user that is executing the PL/SQL block has
    direct grants to the tables it references, i.e. NOT
    through a role? PL/SQL requires the user to have direct grants
    to the objects it references. Granting DBA to the user won't have
    any effect on the execution of the PL/SQL.
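    A hedged illustration of that point; AUDITER is the user from the question:
    -- a grant received through a role is visible to plain SQL but not to
    -- definer's-rights PL/SQL, so the cursor raises PLS-00201:
    CREATE ROLE aud_reader;
    GRANT SELECT ON sys.aud$ TO aud_reader;
    GRANT aud_reader TO auditer;
    -- a direct grant makes the procedure compile:
    GRANT SELECT, DELETE, UPDATE, INSERT ON sys.aud$ TO auditer;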

  • Strange issue on deleting some rows on SYS.AUD$ table

    I just found out about this strange thing that happened on my 10gR2 database. I created a user called AUDIT_LOG and ran GRANT DELETE, REFERENCES, SELECT ON SYS.AUD$ TO AUDIT_LOG while logged on as SYSDBA.
    (1) Then I logged on as AUDIT_LOG user, tested the following statements:
    SELECT count(*) from sys.aud$ where ntimestamp# < TRUNC (SYSDATE-14);
    COUNT(*)
    2
    DELETE from sys.aud$ where ntimestamp# < TRUNC(SYSDATE-14);
    0 rows deleted
    (2) When I logged on as SYS account, SYS deleted them all,
    DELETE from sys.aud$ where ntimestamp# < TRUNC(SYSDATE-14);
    2 rows deleted
    I don't understand why the AUDIT_LOG user can't delete those two rows.
    Thanks for your help!
    lixidon

    Apologies for misreading the first time. I am wondering if the rows in question were related to audit actions on sys.aud$ itself as those rows should not be deleted by the AUDIT_LOG user (even if the user has been granted delete).
    Here's an excerpt from the Security Guide under the "Protecting the Standard Audit Trail" section:
    Audit records generated as a result of object audit options set for the SYS.AUD$ table can only be deleted from the audit trail by someone connected with administrator privileges, which itself has protection against unauthorized use.
    Here's a quick example illustrating this:
    SQL> connect / as sysdba
    Connected.
    SQL> grant delete, references, select on sys.aud$ to scott;
    Grant succeeded.
    SQL> connect scott/tiger
    Connected.
    SQL> select count(*) from sys.aud$ where sessionid = 30002;
      COUNT(*)
             2
    1 row selected.
    SQL> delete from sys.aud$ where sessionid = 30002;
    2 rows deleted.
    SQL> commit;
    -- now try to delete the sys.aud$ rows related to the above delete
    -- this will not succeed as user scott even though delete has been granted
    -- the session that performed the delete is 422426
    SQL> select count(*) from sys.aud$ where obj$name = 'AUD$' and action# = 7 and sessionid = 422426;
      COUNT(*)
             2
    1 row selected.
    SQL> delete from sys.aud$ where obj$name = 'AUD$' and action# = 7 and sessionid = 422426;
    0 rows deleted.
    Regards,
    Mark

  • During the Unicode conversion, cluster table export takes too much time

    Dear All
    during the Unicode conversion, the cluster table export took too much time, approximately 24 hours for 6 tables. Could you please advise how we can reduce this time?
    thanks
    Jainnedra

    Hello,
    Use the latest R3load from the SAP Service Marketplace.
    Also refer to
    Note 1019362 - Very long run times during SPUMG scans
    Regards,
    Nitin Salunkhe

  • Is it safe to purge / delete older records from AUD$ table in SYS schema

    Hi,
    Can we purge / delete older records from the AUD$ table in the SYS schema?
    Please advise.
    Thanks
    Naveen

    Pl see MOS Doc 73408.1 (How to Truncate, Delete, or Purge Rows from the Audit Trail Table SYS.AUD$) for details on how to do so.
    HTH
    Srini
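    A hedged sketch in the spirit of that MOS note, run as SYS (on 10g the timestamp column is NTIMESTAMP#; on 9i and earlier it is TIMESTAMP#); the 90-day retention is an arbitrary example:
    DELETE FROM sys.aud$
     WHERE ntimestamp# < SYSTIMESTAMP - INTERVAL '90' DAY;
    COMMIT;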

  • What is sessionid field in SYS.AUD$ table

    Hi,
    Can anyone say what the sessionid field in the sys.aud$ table is? It seems different from the sessions.

    Look at the sessionid values in the sys.aud$ table.
    SQL> select sessionid from sys.aud$ where rownum<10;
    SESSIONID
    459521060
    459521607
    459521661
    459521901
    459521954
    459522004
    459522052
    459522262
    459522424
    It seems that it's not a session ID. Mostly a session ID is 3-4 digits long.
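    Not stated in the thread, but for what it's worth: SESSIONID in SYS.AUD$ is the auditing session identifier (AUDSID), which corresponds to V$SESSION.AUDSID and USERENV('SESSIONID'), not to V$SESSION.SID; that is why the values are so large. A hedged cross-check for a still-connected session:
    SELECT sid, serial#, username
      FROM v$session
     WHERE audsid = 459521060;  -- value taken from the output above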

  • How to export a large number of tables, table by table, in one go

    Product: ORACLE SERVER
    Date written: 2002-04-12
    How to export a large number of tables, table by table, in one go
    =====================================================
    Purpose
    When exports must be taken table by table and there are so many tables that
    it is impossible to list them all in the tables option, here is an easier
    way to do the job.
    Explanation
    1. Log in with sqlplus scott/tiger
    SQL> set heading off
    SQL> set pagesize 5000 (at least the number of tables the user owns)
    SQL> spool scott.out
    SQL> select tname from tab;
    SQL> exit
    2. This stores all of the scott user's tables in scott.out
    $ vi scott.out
         SQL> select tname from tab;
         BONUS
         DEPT
         DUMMY
         EMP
         SALGRADE
         SQL> exit
    In the vi editor, delete the unneeded first and last two lines, then remove
    the trailing whitespace after each table name.
    < Removing the trailing whitespace and building the export file >
    After opening the file:
    1) :g/ /s///g      <--- remove the trailing whitespace after each table name
    2) :1
    3) Append a comma after the BONUS table
    4) Type :map @ j$. and press Enter <--- a macro to repeat step 3 on the next line
    5) Hold Shift+2 (@) <--- appends a comma to the end of each following line
    6) The last line needs no comma
    Split the out file into chunks of 100 lines (fewer if the table names are
    long) and save each chunk under its own file name.
    e.g., put lines 1-100 in scott1.out, lines 101-200 in scott2.out, ..., and
    remove the comma on the last line of each file.
    Compile script4exp.c below to build the shell scripts for the export.
    (If necessary, modify the export options inside the script before compiling.)
    After compiling, run:
    $ script4exp scott1.out scott1.sh scott tiger scott1.dmp scott1.log
    $ script4exp scott2.out scott2.sh scott tiger scott2.dmp scott2.log
    This produces scott1.sh, scott2.sh, ...; change their file modes and run
    them as background jobs.
    Caution: 1. Check the file size of each *.sh after the job finishes.
    2. Where possible, pull large tables out of the outfile and export them
    separately.
    ====script4exp.c=================
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #define EXPCMD "exp %s/%s buffer=52428800 file=%s log=%s tables="
    int main(int argc, char **argv)
    {
        FILE *ifp, *ofp;
        char buff[256], *pt;
        if (argc != 7) {
            printf("\nUSAGE :\n");
            printf("$ script4exp infile.out outfile.sh username passwd dmpfile.dmp logfile.log\n\n");
            exit(0);
        }
        if ((ifp = fopen(argv[1], "r")) == NULL) {
            printf("%s file open fail !!\n", argv[1]);
            exit(0);
        }
        if ((ofp = fopen(argv[2], "w")) == NULL) {
            printf("%s file open fail !!\n", argv[2]);
            exit(0);
        }
        /* write the exp command, then append the comma-separated table list */
        fprintf(ofp, EXPCMD, argv[3], argv[4], argv[5], argv[6]);
        while (fgets(buff, 80, ifp) != NULL) {
            if ((pt = strchr(buff, '\n')) != NULL) *pt = '\0';  /* strip newline */
            fprintf(ofp, "%s", buff);
            memset(buff, 0, sizeof(buff));
        }
        fprintf(ofp, "\n");
        fclose(ifp);
        fclose(ofp);
        return 0;
    }

  • Aud$ table in system schema????

    Dear all,
    Facts: Oracle 9.2.0.4 Enterprise, Data Guard configuration.
    OS: AIX 5.3
    Some weeks ago I started to manage a database with the facts described above. While reviewing the configuration I saw something strange...
    My AUD$ table is in the SYSTEM schema.
    There is no AUD$ table in the SYS schema.
    A synonym sys.aud$ exists for system.aud$.
    I knew that this table exists in the SYS schema and that from time to time we can move the records into another schema for storage and performance. But in the SYSTEM schema?
    Or is there a parameter in 9i that I can configure for that?
    Thanks a lot!!!!
    P.S. Apologies, my English is not very good! =)

    It is quite possible that the AUD$ table was moved from the SYS schema into the SYSTEM schema. See these MOS Docs
    1019377.6 - Script to move SYS.AUD$ table out of SYSTEM tablespace
    72460.1 - Moving AUD$ to Another Tablespace and Adding Triggers to AUD$
    HTH
    Srini
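    As a quick check of the configuration described above, the data dictionary will show both where AUD$ lives and what the synonym points at; a hedged sketch:
    SELECT owner, object_type
      FROM dba_objects
     WHERE object_name = 'AUD$';
    SELECT table_owner, table_name
      FROM dba_synonyms
     WHERE owner = 'SYS' AND synonym_name = 'AUD$';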
