Control Center performance improvement for large (runtime) repositories

Hi all,
I'm currently working on a large OWB project. We have serious performance issues with the Control Center (version 10.2.0.3). I investigated what happens when you start the Control Center (using SQL traces) and implemented the following:
-- slow query 1
-- also added parent_audit_object_id to this index to help with slow query 3
CREATE INDEX colin_wb_rt_audit_objects_1 ON
  wb_rt_audit_objects(audit_object_id, parent_audit_object_id, audit_unit_id, number_of_script_errors, number_of_script_warnings);
-- slow query 2
CREATE INDEX colin_wb_rt_audit_scripts_1 ON
  wb_rt_audit_scripts(audit_object_id, audit_script_id);
-- slow query 3
CREATE INDEX colin_wb_rt_audit_objects_2 ON
  wb_rt_audit_objects(object_uoid, audit_object_id DESC, audit_unit_id);
CREATE INDEX colin_wb_rt_audit_objects_3 ON
  wb_rt_audit_objects(parent_audit_object_id, audit_object_id);
The reason this helps is that the indexes now contain all the data needed by the slow queries, obviating the need to go to the tables, which in these cases is particularly expensive because the tables contain large LOB columns.
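To confirm a covering index is actually being picked up, you can compare plans before and after. A minimal sketch (the query below is a stand-in shaped like the slow Control Center SQL, not the exact statement -- substitute the real one captured from your trace):

-- Stand-in query; replace with the actual slow query from the trace file
EXPLAIN PLAN FOR
  SELECT audit_object_id, number_of_script_errors, number_of_script_warnings
    FROM wb_rt_audit_objects
   WHERE audit_object_id = :b1;

-- With the covering index in place the plan should show an INDEX RANGE SCAN
-- and no TABLE ACCESS BY INDEX ROWID step (no visit to the LOB-bearing table)
SELECT * FROM TABLE(dbms_xplan.display);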
It is very interesting to see that two of the indexes can replace existing indexes that were added in a recent patch!
For the wb_rt_warehouse_objects table I've implemented an alternative solution, which is to move the storage of two of the three CLOBs out of row:
ALTER TABLE wb_rt_warehouse_objects MOVE
  TABLESPACE <xxx>
  LOB (creation_script)
  STORE AS (
    TABLESPACE <xxx>
    DISABLE STORAGE IN ROW
  )
  LOB (client_info)
  STORE AS (
    TABLESPACE <xxx>
    DISABLE STORAGE IN ROW
  );
where you should replace <xxx> with the tablespaces of your choice.
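One caveat worth adding: ALTER TABLE ... MOVE marks the table's existing indexes UNUSABLE, so they need rebuilding afterwards. A sketch of the housekeeping, using only the standard dictionary views (the index names come from your own repository):

-- Find indexes invalidated by the MOVE
SELECT index_name FROM user_indexes
 WHERE table_name = 'WB_RT_WAREHOUSE_OBJECTS' AND status = 'UNUSABLE';

-- Rebuild each one reported above, e.g.:
ALTER INDEX <index_name> REBUILD;

-- Verify the CLOBs are now stored out of row (IN_ROW should be NO)
SELECT column_name, in_row FROM user_lobs
 WHERE table_name = 'WB_RT_WAREHOUSE_OBJECTS';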
I hope this will help some of you with large repositories.
Cheers,
Colin

Hi David,
I hope these improvements can be implemented! ;-)
We have a runtime repository with some 2300 deployed tables and an equivalent number of mappings.
The total number of rows in wb_rt_warehouse_objects is more than 40,000.
I used an SQL trace and tkprof to identify some high-cost queries -- individually these queries perform quite reasonably, but when executed many times (as happens when starting the Control Center) they can cost many tens of extra seconds.
If you're interested I can send you
* traces
* tkprofs
* the slow queries
and then you can see the before and after explain plans and why this works.
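For completeness, the tracing itself is standard 10g fare; a minimal sketch (sid and serial# are placeholders you look up in v$session for the Control Center session):

-- Enable extended SQL trace for the Control Center session
EXEC dbms_monitor.session_trace_enable(session_id => :sid, serial_num => :serial, waits => TRUE, binds => FALSE);

-- ... start the Control Center and let it finish loading ...

EXEC dbms_monitor.session_trace_disable(session_id => :sid, serial_num => :serial);

-- Then format the trace file on the server, sorted by elapsed time:
--   tkprof <tracefile>.trc report.txt sort=prsela,exeela,fchela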
Please contact me at colinthart at the Google mail service :-)
Cheers,
Colin

Similar Messages

  • DMA Performance Improvements for TIO-based Devices

    Hello!
    DMA Performance Improvements for TIO-based Devices
    http://digital.ni.com/public.nsf/websearch/1B64310FAE9007C086256A1D006D9BBF
    Can I apply the procedure to NI-DAQmx 9? These ini files don't seem to exist anymore in the newer version.
    Best, Viktor

    Hi Viktor,
    this page is 7 years old and doesn't apply to DAQmx.
    Regards, Stephan

  • Status control table - 'Perform update for status control table for cube '

    Hello Experts,
    While loading data to a DSO, I am getting this message: 'Perform update for status control table for cube Z3MLQUA'.
    What is the status control table? Can I have some inputs on how to solve this issue?
    Regards,
    arjun

    Hi,
    This message is not an issue in itself. It is just an SAP message which means that the system is checking/updating its status control tables to determine whether the operation can be carried out.
    If your loads are failing at this step or after this message, there can be many reasons; check some of the possible ones below:
    Performing check and potential update for status control table
    Update from PSA error in Process Chain
    problem in deleting request
    Reporting not available, even if data is successfully loaded into DT
    Hope this helps,
    Kush kashyap

  • iOS 7 control center isn't working for me

    I'm swiping up, but the Control Center isn't coming up. I swipe down and get the Notification Center and search bar, but the Control Center will not work. In Settings everything is on green. I have the iPhone 5. I do have a screen protector, but why would the Notification Center work? Help please.

    Yup, the **** Otterbox kept me from getting low enough. I didn't really think of it because the swipe down for the Notification Center works just fine, and the case is just as bulky on top. I really have to shove my finger down at the bottom of the screen to get it to work. I hope they adjust it just a tad to make it easier for people with bulky cases. I've dropped this phone more than a few times and it's still in great condition thanks to the case.

  • Performance improvement for ALE/IDOC processing

    Dear experts,
    Could you let me know any information to tune performance?
    In our system (SAP R/3 Enterprise 6.20), material master data is distributed from one client to other clients in the same instance. This is because the material master is maintained centrally and distributed to the other clients for member companies.
    During the night batch, distributing the master data takes more than 2 hours. Although the distribution usually finishes without errors, we would like to explore ways to improve processing performance. In particular, program RBDMIDOC runs a long time creating IDocs for MATMAS, even when 0 master IDocs are created in the end.
    OSS notes listing OS/DB/SAP parameters related to ALE/IDOC, tips on organizing the batch jobs, etc. will be much appreciated.
    Many Thanks,
    Nori

    I'd recommend profiling the program at least once to see where there's possible room for improvement, i.e. an ABAP runtime analysis and SQL trace would be good for a start. You can check out the links in the thread "Please Read before Posting in the Performance and Tuning Forum", which will give you some references if you need them. Once you have more details, you either know what you need to do or you can provide them and ask a more specific question (and thus you will receive much better answers).
    As a general remark, though, I've often seen poor performance initially on the change pointer selection, because there are too many entries in the table (due to the system generating unnecessary change pointers, which should be disabled, and/or a lack of proper reorganization, i.e. deletion of old/processed change pointers via BD22). This sounds like the most likely cause of your problem, because otherwise it's hard to explain long run times without generating any master IDocs. You can check the number of change pointers easily via view BDCPV or BDCP2 -- it depends how you store your change pointers (most likely you will find them via view BDCPV in a 6.20 system, unless somebody switched it to the newer BDCP2).
    If you still have them in BDCPV (or the underlying tables, to be more precise), check out OSS note 305462 - MOD: Migration change pointer to table BDCP2 (https://service.sap.com/sap/support/notes/305462), which will give you a general hint on how to do that (and thus also improve the performance). However, if you're currently not deleting any old change pointers, you should ensure that a regular job runs for transaction BD22 (program RBDCPCLR). You'll easily find other (possibly relevant) OSS notes by doing a search yourself...
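    To get a quick feel for the backlog, a count over the change pointer view is enough. A minimal sketch (run it against BDCP2 instead if your system already stores pointers there; the PROCESS flag comes from the BDCPS part of the view -- verify the field name in your release before relying on it):
    -- Overall change pointer backlog
    SELECT COUNT(*) FROM bdcpv;
    -- Unprocessed pointers are what RBDMIDOC has to scan on every run
    SELECT COUNT(*) FROM bdcpv WHERE process = ' ';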

  • Oracle performance, slow for larger and more complex results.

    Hello Oracle forum,
    At the moment I have an Oracle database running, and I'm specifically interested in the efficiency of the spatial extension for webmaps and GIS.
    I've been testing the database with large shape files (400 MB - 1 GB), loading them into the database with shp2sdo -> SQL*Loader.
    Using Benchmark Factory I've tested the speed of transactions, and it drops relatively quickly. I started with a simple query:
    SELECT id FROM map WHERE id = 3
    When I increase the number of ids to 3-10000, the performance decreases drastically, so:
    SELECT id FROM map WHERE id >= 3 AND id <= 10000
    The explain plan below is for the second query; both queries use the index.
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 9828 | 49140 | 22 (0)| 00:00:01 |
    |* 1 | INDEX RANGE SCAN| SYS_C009650 | 9828 | 49140 | 22 (0)| 00:00:01 |
    Statistics
    0 recursive calls
    0 db block gets
    675 consistent gets
    0 physical reads
    0 redo size
    134248 bytes sent via SQL*Net to client
    7599 bytes received via SQL*Net from client
    655 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    9796 rows processed
    The statistics do not show anything very weird, but maybe I'm wrong. Nothing changed in the explain plan except for the range scan instead of a unique scan.
    The query returns lots of results, and I think this is the reason why the measured time of the query is large. The time it takes to return a large number of rows increases quickly with the row count. Can this be solved? The table was analyzed before running the query.
    The parameters of the database are mostly unchanged from the defaults; I increased the amount of memory used by Oracle 11g to 1+ GB and let the database itself decide how to use this memory.
    The system specs and db parameters are:
    Oracle 11g
    Memory: 1.99 GB
    CPU: Intel(R) Core(TM)2 CPU 6600 @ 2.40GHz (2 CPUs)
    OS: Microsoft Windows XP 5.2600
    0=Oracle decides which value will be given
    cursor_sharing EXACT
    cursor_space_for_time FALSE
    db_block_size 8192
    db_recovery_file_dest_size 2147483648
    diagnostic_dest C:\DBBENCHMARK\ORACLE
    dispatchers (PROTOCOL=TCP) (SERVICE=gistestXDB)
    hash_area_size 131072
    log_buffer 5656576
    memory_max_target 1115684864
    memory_target 1048576000
    open_cursors 300
    parallel_max_servers 20
    pga_aggregate_target 0
    processes 150
    resumable_timeout 2162688
    sort_area_size 65536
    Sga=632mb
    PGA=368mb
    javapool=16mb
    largepool=8mb
    other=8mb
    So I indexed and analyzed the data; what did I forget? I can speed it up with soft parsing, but the problem remains. Hopefully this is enough information for some analysis. Has anyone experienced the same problems? I tested the speed with SQL Developer and it shows the same speed as Benchmark Factory. What could be wrong with the parameters?
    Thanks,
    Jan Martijn

    Sand wrote:
    "select count(id) resulted in 3,669,015 counted ids. The database counted 18,345,075 rows per second without bind variables, which is ten times slower than your result. This can be possible because of hardware, but my question is specifically about the number of rows returned, thus a large amount of results."
    The idea was not to compare the speed of select count(*) statements, but to illustrate that even when dealing with a huge number of rows, one can decrease the amount of I/O that needs to be performed to deal with that number of rows.
    "Select id from map where id <= 1: 4000 rows per second are selected."
    Rows/sec is a meaningless measurement, due to physical I/O (PIO) versus logical I/O (LIO). You can select 100 rows that require PIO, resulting in an elapsed time of 1 sec. You can select 1000 rows that require only LIO, with an elapsed time of 0.5 sec.
    Is the 2nd method better or faster? No. It simply needed less time to be spent on I/O, as the data blocks were in the buffer cache (memory) and did not require very slow and expensive disk access.
    "Another database I tested returns 6 times 25425 rows back per second for the same query (100 ids). What could be a parameter that limits the output speed of multiple rows in a query?"
    Every single row that needs to be read/processed by a SQL statement has a cost associated with it. This cost is not consistent! It differs depending on how that row can be reached: what I/O paths are available to find that row? Does the full table need to be scanned? Does an index need to be scanned? Is there a unique index that can be used? Is the table partitioned, and can partition pruning be applied and local partition indexes used? Are there user functions that need to be applied to the row's data? Etc. etc.
    All these together determine how fast the client gets a row from the cursor executing that SQL.
    The more rows you want to process, the bigger the increase in the cost/expense -- specifically more I/O, as I/O is the biggest expense (slowest in terms of elapsed time).
    So you want to do as little I/O as possible and read as little data as possible. For example, instead of a full table scan, a fast full index scan. For example, instead of reading the complete contents of a 10GB table, reading the complete contents of a 12MB index for that table.
    I suggest that you read the Oracle Performance Guide to familiarise yourself with basic performance concepts. Use http://tahiti.oracle.com to find the guide for your applicable Oracle version.
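    One concrete observation from the posted statistics: 655 SQL*Net round trips for ~9800 rows means roughly 15 rows per fetch, so a larger fetch size alone will cut client-side elapsed time. A minimal SQL*Plus illustration (map is the poster's table; 500 is an arbitrary value to experiment with, not a recommendation):
    -- Fetch more rows per SQL*Net round trip
    SET ARRAYSIZE 500
    SET AUTOTRACE TRACEONLY STATISTICS
    SELECT id FROM map WHERE id >= 3 AND id <= 10000;
    -- Compare "SQL*Net roundtrips to/from client" with the earlier run:
    -- it should drop from ~655 to ~20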

  • Using IMAQ Image Display control vs IMAQ WindDraw for large image files

    Hello All;
    I am designing an application that currently uses the IMAQ Image Display control to view large images (5K x 3K and larger). My problem is that these images take 10-20 seconds to load and display, whereas if I use IMAQ WindDraw to display my image in a separate window, it only takes a couple of seconds. My application makes use of the subpanels in LabVIEW 8.0, and to make it pleasant for the user, the line profile, histograph and image viewer displays are contained within the same GUI (panel).
    I read the National Instruments application note on displaying large images, and it did not seem to make a difference. For example, I replaced the 'modern' IMAQ Image Display control with the classic Image Display control, since the 'classic' does not contain any of the 3D rendering tools which might slow the process down.
    Why is there such a huge difference in loading times if I am trying to do exactly the same thing with both methods? How can I bring the IMAQ Image Display control up to the same speed as the IMAQ WindDraw tool?
    I am currently using LabVIEW 8.0 with the latest IMAQ/NI Vision package from NI (IMAQ v7.1?). Thanks.
    DJH

    Use a property node and select 16 bit image mapping. You can create a control for this or whatever you need. If you select the individual elements, you can get enumerated controls.
    Bruce
    Bruce Ammons
    Ammons Engineering

  • Performance Improvement for CTE

    I have the below query, which is used in a view that checks for duplicate rows and selects them for further processing:
    with MyCTETable (DefId, CustId, MyId, TacticId, cnt) as
    (
        select BaseTable.DefId,
               BaseTable.MyCustId,
               WL.Id,
               BaseTable.TacticId,
               COUNT(*)
        from WorkList WL
        inner join WorkListMeta WLMeta on WL.WorkListMetaId = WLMeta.Id
            and WLMeta.MyAnswer = '1'
        inner join BaseTable BaseTable on WL.Id = BaseTable.DefId
            and BaseTable.Org = '101'
            and BaseTable.State <> 'd'
        group by BaseTable.DefId, WL.Id, BaseTable.MyCustId, BaseTable.TacticId
        having COUNT(*) > 1
    )
    select list.Id
    from MyCTETable, BaseTable as list
    where list.DefId = MyCTETable.DefId
      and list.MyCustId = MyCTETable.CustId
      and list.MyCustId in
          (select distinct MyCustId
           from BaseTable
           where DefId = MyCTETable.DefId
             and Done <> '0'
             and Status <> 'd')
    The above query returns 532,084 records in around 15 minutes.
    Just need your opinion: can any changes/optimizations be made to the query to improve performance?
    Note: Indexes and statistics are all maintained.
    -Vaibhav Chaudhari

    Here is a method that can (depending on some other factors) sometimes eliminate one, maybe both, of your self joins, if you're using them just to identify duplicates. It uses the OVER() clause to give you your COUNT(*) as part of your primary result set.
    This is just an example of the possibility; I didn't look closely enough at your code to say for sure it'll help in your case, but on the surface, I bet you could eliminate one of those self joins.
    ;with X (letter) as
        (select 'a' union all select 'b' union all select 'b' union all select 'c'),
    Y as
        (select *, count(letter) over (partition by letter) as count_this_letter from X)
    select * from Y where count_this_letter > 1
    You don't have to use the WHERE clause as I did; that was just to show a way to refer to cases where a duplicate existed.
    EDIT/Clarification: My mocked-up example is just an example. You would tweak your OVER clause, probably to something like Count(*) Over(Partition by ObDefListPKey, BpaPKey, JobDefid), to count and group by those fields while still presenting the full result set.

  • Performance improvement for af:table

    My page consists of a table and a button. The button displays a popup containing several tabs with trees inside that allow the user to filter the data. Clicking OK in the popup runs the query and refreshes the table. The table is configured as follows:
    autoHeightRows="1000"
    fetchSize="The number of rows returned by the query"
    contentDelivery="immediate"
    immediate="true"
    value="call a method returning a List<MyLineBean> from managed bean"
    One requirement is to display the table with no scrollbar.
    The first issue is that displaying a table with 1000 rows is slow to render, and it also makes the browser slow (Chrome in my case). The corresponding JS file is about 11 MB; I can understand that processing an 11 MB JS file can be slow, especially with DOM creation.
    The other issue I noticed is that the speed of displaying the popup depends on the size of the table. With a 1000-row table, I click the button and the first server request happens after 3 s; the JS size is about 20 KB and network latency is low. Closing the popup with no processing is also slow (~2 s). If I do the same experiment with a table of 13 rows (180 KB of JS), the popup displays and closes instantaneously.
    My priority is to improve the speed of displaying the popup. Is there any reason why this speed depends on the size of the table?
    ADF 11gR1 + WebCenter Portal

    Hi user,
    Follow this link for better table performance: GEBS | ADF View Object Performance Tunning Analysis.
    Thank You.

  • PERFORMANCE IMPROVEMENT for a DB view

    Hi,
    There are around 300,000 entries in MDBS and we are seeing very slow access and low performance.
    The following is the query; the ima61v internal table has only a single entry in a sample run:
    SELECT wemng menge wepos elikz umrez
                 umren matnr werks lgort pstyp retpo           
            FROM  mdbs
            INTO (mdbs-wemng, mdbs-menge, mdbs-wepos, mdbs-elikz,
                  mdbs-umrez, mdbs-umren, mdbs-matnr, mdbs-werks,
                  mdbs-lgort, mdbs-pstyp, mdbs-retpo)         
            WHERE matnr  EQ ima61v-matnr
              AND werks  EQ ima61v-werks                       
              AND loekz  EQ space
              AND elikz  EQ space
              AND bstyp  IN ('F', 'L').
    The following is the ST05 analysis:
    Executions - 1
    Identical  - 0
    Duration   - 21,766,348
    Records    - 0
    Time/exec  - 21,766,348
    Rec/exec.  - 0
    AvgTime/R. - 21,766,348
    MinTime/R. - 21,766,348
    Obj. type  - MDBS
    The SQL explain is as follows:
    SELECT STATEMENT ( Estimated Costs = 7 , Estimated #Rows = 1 )
      6 TABLE ACCESS BY INDEX ROWID EKET ( Estim. Costs = 3 , Estim. #Rows = 1 )
        5 NESTED LOOPS ( Estim. Costs = 7 , Estim. #Rows = 1 )
          3 INLIST ITERATOR
            2 TABLE ACCESS BY INDEX ROWID EKPO ( Estim. Costs = 4 , Estim. #Rows = 1 )
              1 INDEX RANGE SCAN EKPO~1 ( Estim. Costs = 3 , Estim. #Rows = 1 ) Search Columns: 6
          4 INDEX RANGE SCAN EKET~0 ( Estim. Costs = 2 , Estim. #Rows = 1 ) Search Columns: 3
    1. The tables are not going for a full scan.
    2. DB stats are up to date.
    3. All indexes shown in the SQL explain are available in the DB.
    Apart from all this, what else can we check to identify the problem? If we change the variant to multiple materials and run it in background, it takes more than 30 minutes to execute.
    Please also let me know how to resolve the issue.
    Thanks in Advance.
    Praneeth

    3 simple points:
    I am quite sure that you did not run the statement before you ran the trace; please repeat and show the result of the second or third execution. I guess that is the only point -- the explain plan is so simple that it cannot take very long.
    And ... there is no record coming back. I know that there are many executions where no record comes back, but is that really a good basis for discussing a performance problem? Is this statement never successful?
    Number of records:
    The view returns no records; you must check the two underlying tables, EKPO and EKET (the tables in the explain plan), to see how many records they contain.
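    A trivial sketch of that check (plain SQL against the base tables; in an SAP system you would normally look via SE16 instead):
    -- How many rows feed the MDBS view? (EKPO = PO items, EKET = schedule lines)
    SELECT COUNT(*) FROM ekpo;
    SELECT COUNT(*) FROM eket;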

  • Performance improvement for select query

    Hi all,
    I need to improve performance for the below select query, as it is taking a long time:
    SELECT vbeln pdstk
             FROM vbuk INTO TABLE it_vbuk1 FOR ALL ENTRIES IN it_likp
          WHERE vbeln = it_likp-vbeln       AND
                wbstk = 'C'  AND "pdstk = ' ' AND
                vbtyp IN gr_delivery AND
                ( fkstk = 'A' OR fkstk = 'B' ) OR
                ( fkivk = 'A' OR fkivk = 'B' ).
    Regards,
    Kumar

    Hi,
        Check if it_likp is sorted on vbeln.
    SELECT vbeln pdstk
    FROM vbuk INTO TABLE it_vbuk1 FOR ALL ENTRIES IN it_likp
    WHERE vbeln = it_likp-vbeln AND
    wbstk = 'C' AND
    vbtyp IN gr_delivery AND
    ( ( fkstk = 'A' OR fkstk = 'B' ) OR      <-- check this condition , if ( ) is needed ...
      ( fkivk = 'A' OR fkivk = 'B' ) ) .
    Regards,
    Srini.

  • Performance Optimization for Cubes

    Hi All,
    In our project, we have a daily process chain which refreshes four reporting cubes; it takes 8-10 hours to complete the refresh. We suggested archiving the historical data to a new cube to improve the performance of the daily load.
    In UAT, the performance of the daily load did not improve after we performed the archiving.
    Kindly suggest performance improvements for the cubes.
    Regards
    Suresh Kumar

    Hi,
    Before loading the cube you need to delete the indexes, and once the load is complete, recreate them. For this, go to the manage screen of the InfoCube ----> Performance tab.
    Also create the DB statistics, again from the manage screen of the InfoCube ----> Performance tab. This will reduce the load time by a considerable amount.
    Also increase the maximum size of the data packet in the InfoPackage. For this, go to the InfoPackage --> Scheduler in the menu bar --> Data S. Default Data Transfer, and increase the size by a considerable amount (not very high). Also increase the number of data packets per info IDoc; this field is just after the maximum data packet size in the InfoPackage.
    Hope It Helps,
    Regards,
    Amit

  • Improvements for control center

    Please put a switch for enabling cellular data in the Control Center. I would use it a great deal to save as much battery as possible on my iPhone. Thanks.

    You really should post this at http://www.apple.com/feedback/iphone.html

  • Help for upgrading runtime to control center

    hi guys,
    I am upgrading a runtime repository (10g R1) to a Control Center (10g R2). While moving the audit data, I get the error message "Failed to Migrate Audit Data into OWBRTR. java.sql.SQLException: ORA-00001: unique constraint (OWBRTR.AMP_PK) violated".
    I checked the original runtime repository, and all these unique constraints are enabled, so I assume there should not be any duplicates there. How is it possible to have duplicates when doing the upgrade?
    Has anybody met this error before? How can I resolve it?
    Thanks so much

    Hello
    I was getting an almost similar problem. Once you solve it, please let us know.
    Mine is like this: the error occurred while moving audit data from an existing runtime repository to the Control Center using the OWB Control Center Upgrade Assistant. Here OWBRTR2TST is the runtime repository in the Control Center.
    Exception:
    Failed to Migrate Audit Data into OWBRTR2TST java.sql.SQLException: ORA-02298:
    cannot validate (OWBRTR2TST.co.FK_PARENT_CO) - parent keys not found
    ORA-06512: at line 16
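    When ORA-02298 appears, you can at least see which table, columns and parent key the failing constraint covers before hunting for the orphan rows. A generic sketch using only the standard dictionary views (nothing OWB-specific):
    -- Which table/columns does the failing FK cover, and which parent key does it reference?
    SELECT c.table_name, cc.column_name, c.r_constraint_name
      FROM all_constraints c
      JOIN all_cons_columns cc
        ON cc.owner = c.owner AND cc.constraint_name = c.constraint_name
     WHERE c.owner = 'OWBRTR2TST'
       AND c.constraint_name = 'FK_PARENT_CO';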

  • How to improve performance on Control Center?

    Hi all!
    Does anyone know how to improve performance on the Control Center? It's very slow when opening and refreshing.
    We are using OWB client 11.1.0.6.0 and OWB Repository 11.1.0.1.1.
    The performance improved when we cleaned up the deployment and execution repository, but it took almost 3 days to finish; the script used was purge_audit_template.sql.
    If anybody knows any other way to improve the performance, that would be great.
    Thanks Yuri

    Hi
    I also find that purge audit template script very slow. I've found another one that is much faster and speeds up the Control Center.
    Before running this script you must log in as the repository owner and run stop_service.sql (found in owb_home\rtp\sql). After the script, run start_service.sql.
    //Cheers
    REM sqlplus <RT_OWNER>/<RT_PASSWORD>@<RT_CONNECT> @truncate_audit_execution_tables.sql
    REM
    REM to truncate wb_rt_audit_executions and dependent audit tables in the runtime repository
    REM First run stop_service.sql in <OWB_HOME>/rtp/sql
    REM Then run this script
    REM Then run start_service.sql in <OWB_HOME>/rtp/sql
    set echo off
    set verify off
    rem 'truncate_audit_execution_tables : begin'
    alter table wb_rt_feedback disable constraint fk_rtfb_rta;
    truncate table wb_rt_feedback;
    rem 'wb_rt_feedback truncated'
    alter table wb_rt_error_sources disable constraint fk_rts_rta;
    truncate table wb_rt_error_sources;
    rem 'wb_rt_error_sources truncated'
    alter table wb_rt_error_rows disable constraint fk_rtr_rte;
    truncate table wb_rt_error_rows;
    rem 'wb_rt_error_rows truncated'
    alter table wb_rt_errors disable constraint fk_rter_rta;
    alter table wb_rt_errors disable constraint fk_rter_rtm;
    truncate table wb_rt_errors;
    rem 'wb_rt_errors truncated'
    alter table wb_rt_audit_struct disable constraint fk_rtt_rtd;
    truncate table wb_rt_audit_struct;
    rem 'wb_rt_audit_struct truncated'
    alter table wb_rt_audit_detail disable constraint fk_rtd_rta;
    truncate table wb_rt_audit_detail;
    rem 'wb_rt_audit_detail truncated'
    alter table wb_rt_audit_amounts disable constraint fk_rtam_rta;
    truncate table wb_rt_audit_amounts;
    rem 'wb_rt_audit_amounts truncated'
    alter table wb_rt_operator disable constraint fk_rto_rta;
    truncate table wb_rt_operator;
    rem 'wb_rt_operator truncated'
    alter table wb_rt_audit disable constraint fk_rta_rte;
    truncate table wb_rt_audit;
    rem 'wb_rt_audit truncated'
    alter table wb_rt_audit_parameters disable constraint ap_fk_ae;
    truncate table wb_rt_audit_parameters;
    rem 'wb_rt_audit_parameters truncated'
    alter table wb_rt_audit_messages disable constraint am_fk_ae;
    delete from wb_rt_audit_messages where audit_execution_id is not null;
    rem 'wb_rt_audit_messages deleted'
    rem 'wb_rt_audit_message_lines cascade deleted'
    rem 'wb_rt_audit_message_parameters cascade deleted'
    alter table wb_rt_audit_files disable constraint af_fk_ae;
    delete from wb_rt_audit_files where audit_execution_id is not null;
    rem 'wb_rt_audit_files deleted'
    truncate table wb_rt_audit_executions;
    rem 'wb_rt_audit_executions truncated'
    alter table wb_rt_feedback enable constraint fk_rtfb_rta;
    alter table wb_rt_error_sources enable constraint fk_rts_rta;
    alter table wb_rt_error_rows enable constraint fk_rtr_rte;
    alter table wb_rt_errors enable constraint fk_rter_rta;
    alter table wb_rt_errors enable constraint fk_rter_rtm;
    alter table wb_rt_audit_struct enable constraint fk_rtt_rtd;
    alter table wb_rt_audit_detail enable constraint fk_rtd_rta;
    alter table wb_rt_audit_amounts enable constraint fk_rtam_rta;
    alter table wb_rt_operator enable constraint fk_rto_rta;
    alter table wb_rt_audit enable constraint fk_rta_rte;
    alter table wb_rt_audit_parameters enable constraint ap_fk_ae;
    alter table wb_rt_audit_messages enable constraint am_fk_ae;
    alter table wb_rt_audit_files enable constraint af_fk_ae;
    rem 'truncate_audit_execution_tables : end'
    commit;
