Performance Tuning and Periodic Maintenance in an Oracle Streams Environment

We have set up bi-directional Oracle Streams replication between two remote sites, each a 2-node RAC database.
This is our environment summary:
Database: Oracle 10g R2, version 10.2.0.4.0
OS: Solaris[tm] OE (64-bit)
Currently Oracle Streams is working successfully and I monitor it daily.
As mentioned in the Master Note for Streams Performance Recommendations [ID 335516.1]:
Purging Streams Checkpoints
10.2: Alter the capture parameter CHECKPOINT_RETENTION_TIME from the default retention of 60 days to a realistic value for your database.
A typical setting might be to retain 7 days' worth of checkpoint metadata:
exec dbms_capture_adm.alter_capture(capture_name=>'your_capture', checkpoint_retention_time=> 7);
My query
===> Currently in my environment CHECKPOINT_RETENTION_TIME is at the default of 60 days.
I want to change it to checkpoint_retention_time => 7, so what should I take care of before executing the above command in my live Streams environment?
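For reference, a quick sanity check before lowering the retention is to note each capture process's current setting and checkpoint SCNs (run as the Streams administrator; the view and columns below are standard in 10.2):

select capture_name,
       checkpoint_retention_time,
       required_checkpoint_scn,
       first_scn
  from dba_capture;

After the ALTER_CAPTURE call, checkpoint metadata older than the new window is purged gradually in the background, so some extra activity on the LogMiner checkpoint tables at first is to be expected.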
Edited by: user8171787 on Apr 13, 2011 11:00 PM


Similar Messages

  • Resources for performance tuning and RAC needed

    Can you suggest the best knowledge resources for performance tuning and RAC,
    besides the Oracle docs?
    Thanks!

    Above all, I'm searching for web resources like the performance tuning material from Dizwell Informatics.
    Anyway thank you Eric for your suggestion!

  • ABAP Programs Performance Tuning and Web Services

    Hi,
    Can anyone give me a good material link or eBook on SAP ABAP program performance tuning? What are the things that need to be done for performance tuning, etc.?
    Also, any material or simple eBook on web services.
    my email is [email protected]
    Thanks a ton in advance.
    Swetha.

    Check this link: ABAP Development Performance Tuning
    https://www.sdn.sap.com/irj/sdn/wiki?path=/display/abap/performance%2btuning
    Check these threads.
    How do you take care of performance issues in your ABAP programs?
    http://help.sap.com/saphelp_nw04/helpdata/en/9f/db9bd335c111d1829f0000e829fbfe/frameset.htm

  • Query performance tuning and scope of improvement

    Hi All ,
    I am on Oracle 10g on Linux.
    I have the below query which I am trying to optimise:
    SELECT 'COMPANY' AS device_brand, mach_sn AS device_source_id,
                           'COMPANY' AS device_brand_raw,
                           CASE
                               WHEN fdi.serial_number IS NOT NULL THEN
                                fdi.serial_number
                               ELSE
                                mach_sn || model_no
                           END AS serial_number_raw,
                           gmd.generic_meter_name AS counter_id,
                           meter_name AS counter_id_raw,
                           meter_value AS counter_value,
                           meter_hist_tstamp AS device_timestamp,
                           rcvd_tstamp AS server_timestamp
                      FROM rdw.v_meter_hist vmh
                      JOIN rdw.generic_meter_def gmd
                        ON vmh.generic_meter_id = gmd.generic_meter_id
                      LEFT OUTER JOIN fdr.device_info fdi
                        ON vmh.mach_sn = fdi.clean_serial_number
                     WHERE meter_hist_id IN
                           (SELECT /*+ PUSH_SUBQ */ MAX(meter_hist_id)
                              FROM rdw.v_meter_hist
                             WHERE vmh.mach_sn IN
                                   ('URR893727')
                               AND vmh.meter_name IN
                                    ('TotalImpressions', 'TotalBlackImpressions', 'TotalColorImpressions')
                               AND vmh.meter_hist_tstamp >=to_date ('04/16/2011', 'mm/dd/yyyy')
                               AND vmh.meter_hist_tstamp <= to_date ('04/18/2011', 'mm/dd/yyyy')
                             GROUP BY mach_sn, vmh.meter_def_id)
                     ORDER BY device_source_id, vmh.meter_def_id, meter_hist_tstamp;
    Earlier, it was taking too much time, but it started to work faster when I added the
    /*+ PUSH_SUBQ */ hint to the select query.
    The explain plan generated for the same is :
    | Id  | Operation                | Name              | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |                   |    29M|  3804M|       |    15M  (4)| 53:14:08 |
    |   1 |  SORT ORDER BY           |                   |    29M|  3804M|  8272M|    15M  (4)| 53:14:08 |
    |*  2 |   FILTER                 |                   |       |       |       |            |          |
    |*  3 |    HASH JOIN             |                   |    29M|  3804M|       |  8451K  (2)| 28:10:19 |
    |   4 |     TABLE ACCESS FULL    | GENERIC_METER_DEF |    11 |   264 |       |     3   (0)| 00:00:01 |
    |*  5 |     HASH JOIN RIGHT OUTER|                   |    29M|  3137M|    19M|  8451K  (2)| 28:10:17 |
    |   6 |      TABLE ACCESS FULL   | DEVICE_INFO       |   589K|    12M|       |   799   (2)| 00:00:10 |
    |*  7 |      HASH JOIN           |                   |    29M|  2527M|  2348M|  8307K  (2)| 27:41:29 |
    |*  8 |       HASH JOIN          |                   |    28M|  2016M|       |  6331K  (2)| 21:06:19 |
    |*  9 |        TABLE ACCESS FULL | METER_DEF         |    33 |   990 |       |     4   (0)| 00:00:01 |
    |  10 |        TABLE ACCESS FULL | METER_HIST        |  3440M|   137G|       |  6308K  (2)| 21:01:44 |
    |  11 |       TABLE ACCESS FULL  | MACH_XFER_HIST    |   436M|  7501M|       |  1233K  (1)| 04:06:41 |
    |* 12 |    FILTER                |                   |       |       |       |            |          |
    |  13 |     HASH GROUP BY        |                   |     1 |    26 |       |  6631K  (7)| 22:06:15 |
    |* 14 |      FILTER              |                   |       |       |       |            |          |
    |  15 |       TABLE ACCESS FULL  | METER_HIST        |  3440M|    83G|       |  6304K  (2)| 21:00:49 |
     ------------------------------------------------------------------------------------------------------
    Is there any other way to optimise it more? Please suggest, since I am new to query tuning.
    Thanks and Regards
    KK

    Hi Dom ,
    Greetings. Sorry for the delayed response. I have read the How to Post document.
    I will provide all the required information here now :
    Version : 10.2.0.4
    OS : Linux
    The SQL query which is facing the performance issue:
    SELECT mh.meter_hist_id, mxh.mach_sn, mxh.collectiontag, mxh.rcvd_tstamp,
              mxh.mach_xfer_id, md.meter_def_id, md.meter_name, md.meter_type,
              md.meter_units, md.meter_desc, mh.meter_value, mh.meter_hist_tstamp,
              mh.max_value, md.generic_meter_id
         FROM meter_hist mh JOIN mach_xfer_hist mxh
              ON mxh.mach_xfer_id = mh.mach_xfer_id
               JOIN meter_def md ON md.meter_def_id = mh.meter_def_id;
    Explain plan for this query:
    Plan hash value: 1878059220
    | Id  | Operation           | Name           | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT    |                |  3424M|   497G|       |    17M  (1)| 56:42:49 |
    |*  1 |  HASH JOIN          |                |  3424M|   497G|       |    17M  (1)| 56:42:49 |
    |   2 |   TABLE ACCESS FULL | METER_DEF      |   423 | 27918 |       |     4    | 00:00:01   |
    |*  3 |   HASH JOIN         |                |  3424M|   287G|    26G|    16M  (1)| 56:38:16 |
    |   4 |    TABLE ACCESS FULL| MACH_XFER_HIST |   432M|    21G|       |  1233K  (1)| 04:06:40 |
    |   5 |    TABLE ACCESS FULL| METER_HIST     |  3438M|   115G|       |  6299K  (2)| 20:59:54 |
    Predicate Information (identified by operation id):
       1 - access("MD"."METER_DEF_ID"="MH"."METER_DEF_ID")
       3 - access("MH"."MACH_XFER_ID"="MXH"."MACH_XFER_ID")Parameters :
    show parameter optimizer
    NAME                                 TYPE                                         VALUE
    optimizer_dynamic_sampling           integer                                      2
    optimizer_features_enable            string                                       10.2.0.4
    optimizer_index_caching              integer                                      70
    optimizer_index_cost_adj             integer                                      50
    optimizer_mode                       string                                       ALL_ROWS
    optimizer_secure_view_merging        boolean                                      TRUE
    show parameter db_file_multi
    db_file_multiblock_read_count        integer                                      8
    show parameter cursor_sharing
    NAME                                 TYPE                                         VALUE
    cursor_sharing                       string                                       EXACT
    select  sname  , pname  , pval1  , pval2  from  sys.aux_stats$;
    SNAME                          PNAME                               PVAL1 PVAL2
    SYSSTATS_INFO                  STATUS                                    COMPLETED
    SYSSTATS_INFO                  DSTART                                    07-12-2011 09:22
    SYSSTATS_INFO                  DSTOP                                     07-12-2011 09:52
    SYSSTATS_INFO                  FLAGS                                   0
    SYSSTATS_MAIN                  CPUSPEEDNW                     1153.92254
    SYSSTATS_MAIN                  IOSEEKTIM                              10
    SYSSTATS_MAIN                  IOTFRSPEED                           4096
    SYSSTATS_MAIN                  SREADTIM                            4.398
    SYSSTATS_MAIN                  MREADTIM                            3.255
    SYSSTATS_MAIN                  CPUSPEED                              180
    SYSSTATS_MAIN                  MBRC                                    8
    SYSSTATS_MAIN                  MAXTHR                          244841472
    SYSSTATS_MAIN                  SLAVETHR                           933888
      show parameter db_block_size
    NAME                                 TYPE                                         VALUE
    db_block_size                        integer                                      8192
    Please let me know if any other information is needed. This query is currently taking almost one hour to run.
    Also, we have two indexes, on the columns xfer_id and meter_def_id, in both the tables, but they are not getting used without any filtering (WHERE clause).
    Would adding a hint to the query above be of some help?
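    One generic 10g technique I can try, to see what the optimizer actually does at run time rather than the estimated plan, is to run the statement once with the GATHER_PLAN_STATISTICS hint and then display the cursor's actual row counts:

    -- step 1: run the statement once with the hint added after SELECT
    -- SELECT /*+ GATHER_PLAN_STATISTICS */ mh.meter_hist_id, ... ;
    -- step 2: immediately afterwards, in the same session:
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));

    Comparing E-Rows against A-Rows in that output shows where the cardinality estimates go wrong.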
    Thanks and Regards
    KK

  • Oracle Streams and Dataguard

    Does anyone know how to configure Oracle Streams and Data Guard to work together? I've set up an environment which successfully captures and applies records in a Streams environment using
    log_archive_config = SEND, RECEIVE, NODG_CONFIG
    As soon as I try to introduce the Streams setup into a database which already has Data Guard set up, it gets convoluted: setting DG_CONFIG requires you to then set db_unique_name, and it seems that the Streams log mining will not work without being fully incorporated into the Data Guard configuration.
    Has anyone set up streams in a dataguard environment?
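    For context, the convoluted part is that once DG_CONFIG is set, it has to name every database that exchanges redo, including the Streams ones; a sketch of what I mean, with placeholder db_unique_name values:

    ALTER SYSTEM SET log_archive_config = 'DG_CONFIG=(prod,stdby,strm)' SCOPE=BOTH;

    So the question is really whether the Streams databases must be folded into the same DG_CONFIG list as the Data Guard standbys.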

    Let's see what's missing from your request:
    1. Oracle version number?
    2. Physical or logical Data Guard?
    3. Which Data Guard mode? Synch or Asynch?
    4. Which Data Guard protection level?
    5. ARCH or LGWR?
    6. Which streams mode? Synch or Asynch?
    7. Hotlog or Autolog?
    8. Streams on the production server or the standby server?
    I'm all out of guesses tonight. Please provide enough information for someone to help you.

  • No one wants my Performance Tuning experience?

    I worked as a DBA for 2 years in an IT major, where my work was confined to performance tuning & monitoring,
    and so I had no opportunity to gain experience in Backup & Recovery, RMAN, and other core DBA responsibilities.
    I had to leave in July last year due to another career interest. Unfortunately that didn't get fulfilled.
    So from January 15th I have decided to resume my career as a DBA.
    I went to 3 interviews, but every time my lack of experience in Backup & Recovery and other core DBA responsibilities is not making my case stronger.
    I am losing confidence in my chances of getting a job. Most (almost 95%) of the IT companies want a minimum of 3 years' experience, and that too with Backup & Recovery.
    I have not yet done the OCA (I passed only 1Z0-042). So I am planning to take 1Z0-007 so that I become an OCA.
    My questions to all:
    1. How much value does an OCA hold in the market along with 2 years' experience? (I am based in India, more specifically Pune.)
    2. How should I get practical experience of Backup & Recovery at home itself?
    3. I don't have the financial power to go for the TRAINING part of the OCP, so how should I make my resume look strong?
    4. Doesn't any company want a DBA who hasn't had experience in Backup & Recovery? Is performance tuning & monitoring experience such a waste?
    Cheers,
    Kunwar

    user12591638 wrote:
    I worked as a DBA for 2 years in an IT major, where my work was confined to performance tuning & monitoring,
    and so I had no opportunity to gain experience in Backup & Recovery, RMAN, and other core DBA responsibilities.
    I had to leave in July last year due to another career interest. Unfortunately that didn't get fulfilled.
    So from January 15th I have decided to resume my career as a DBA.
    I went to 3 interviews, but every time my lack of experience in Backup & Recovery and other core DBA responsibilities is not making my case stronger.
    I am losing confidence in my chances of getting a job. Most (almost 95%) of the IT companies want a minimum of 3 years' experience, and that too with Backup & Recovery.
    I have not yet done the OCA (I passed only 1Z0-042). So I am planning to take 1Z0-007 so that I become an OCA.
    OK. India has produced a lot of DBAs over the past few years, so I suspect the supply/demand situation is not in your favour, especially with the global economic climate.
    And because of the career break, employers will often have other candidates who look stronger. And they will have certifications.
    So you've been looking for a month and got 3 interviews .... that's actually not that bad.
    My questions to all:
    1. How much value does an OCA hold in the market along with 2 years' experience? (I am based in India, more specifically Pune.)
    In general, not a lot. However, the only thing stopping you getting a 10g DBA OCA is an SQL exam pass. Now if you can't pass that exam then you don't deserve the job. And given you've passed 1Z0-042, an employer will ask why you haven't passed the SQL exam (especially if you're into tuning and performance). People in India seem to love 1Z0-007 .... I prefer to see people who have taken 1Z0-051 or 1Z0-047. However, I suspect you cannot afford to spend the extra time on 1Z0-047, as you need to concentrate on backup/recovery.
    2. How should I get practical experience of Backup & Recovery at home itself?
    Hopefully you've got kit to practice on.
    3. I don't have the financial power to go for the TRAINING part of the OCP, so how should I make my resume look strong?
    Resumes are not my best area. However, in your case studying for and passing 1Z0-043 will help. 1Z0-043 is 50% backup and recovery, so that has synergy with your weak area. If finance is an issue, consider a WDP course if you can get one. Some eligible courses are cheaper than others, but even they may be beyond reach.
    4. Doesn't any company want a DBA who hasn't had experience in Backup & Recovery? Is performance tuning & monitoring experience such a waste?
    Obviously performance tuning and monitoring is not a waste ..... but remember Oracle is increasingly attempting to automate these in later versions, and different techniques are available in 11gR2 compared to 9i.
    Cheers,
    Kunwar
    You might find RMAN Recipes for Oracle Database 11g: A Problem-Solution Approach ... ISBN13: 978-1-59059-851-1 ... a little useful.
    Rgds bigdelboy. (I've been composing this over a 2-hour period with several distractions and have run out of time, so some bits might not make sense ... I may even have said something silly.)

  • Performance Tuning for ECC 6.0

    Hi All,
      I have an ECC 6.0 + EP 7.0 (ABAP+JAVA) system. My system is very slow. I have Oracle 10.2.0.1.
      Can you please guide me on how to do these steps in the system:
    1) Reorganization should be done at least for the top 10 huge tables
    and their indexes
    2) Unaccessed data can be taken out by SAP archiving
    3) Apply the relevant corrections for top SAP standard objects
    4) CBO update statistics must be current for all SAP and customer objects
    I have never done performance tuning and want to do it on this system.
    Regards,
    Jitender
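    For step 1, one way to identify the top 10 huge tables on the Oracle side is a generic DBA_SEGMENTS query (a sketch, not SAP-specific; run it in sqlplus as a suitably privileged user):

    SELECT * FROM (
      SELECT owner, segment_name, segment_type,
             ROUND(bytes/1024/1024/1024, 2) AS size_gb
        FROM dba_segments
       ORDER BY bytes DESC)
     WHERE ROWNUM <= 10;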

    Hi,
    Below are the details from ST06. Please suggest what I should do; the system performance is very bad.
    I require your inputs for performance tuning.
    CPU
    Utilization   user    %                3    Count                                2
                  system  %                3    Load average   1 min              0.11
                  idle    %                1                   5 min              0.21
                  io wait %               93                  15 min              0.22
    System calls/s                       982    Context switches/s                1752
    Interrupts/s                        4528
    Memory
    Physical mem avail  Kb           6291456    Physical mem free   Kb           93992
    Pages in/s                           473    Kb paged in/s                     3784
    Pages out/s                          211    Kb paged out/s                    1688
    Pool
    Configured swap     Kb          26869896    Maximum swap-space  Kb        26869896
    Free in swap-space  Kb          21631032    Actual swap-space   Kb        26869896
    Disk with highest response time
    Name                                 md3    Response time       ms              51
    Utilization                            2    Queue                                0
    Avg wait time       ms                0     Avg service time    ms              51
    Kb transfered/s                        2    Operations/s                         0
    Current parameters in the system
    System:        sapretail_RET_01          Profile Parameters for SAP Buffers
    Date and Time: 08.01.2009       13:27:54
    Buffer Name                    Comment
    Profile Parameter             Value      Unit  Comment
    Program buffer                 PXA
    abap/buffersize               450000     kB    Size of program buffer
    abap/pxa                      shared           Program buffer mode
    |
    CUA buffer                     CUA
    rsdb/cua/buffersize           3000       kB    Size of CUA buffer
    The number of max. buffered CUA objects is always: size / (2 kB)
                                                                                    |
    Screen buffer                  PRES
    zcsa/presentation_buffer_area 4400000    Byte  Size of screen buffer
    sap/bufdir_entries            2000             Max. number of buffered screens
    |
    Generic key table buffer       TABL
    zcsa/table_buffer_area        30000000   Byte  Size of generic key table buffer
    zcsa/db_max_buftab            5000             Max. number of buffered objects
    |
    Single record table buffer     TABLP
    rtbb/buffer_length            10000      kB    Size of single record table buffer
    rtbb/max_tables               500              Max. number of buffered tables
    |
    Export/import buffer           EIBUF
    rsdb/obj/buffersize           4096       kB    Size of export/import buffer
    rsdb/obj/max_objects          2000             Max. number of objects in the buffer
    rsdb/obj/large_object_size    8192       Bytes Estimation for the size of the largest object
    rsdb/obj/mutex_n              0                Number of mutexes in Export/Import buffer
    |
    OTR buffer                     OTR
    rsdb/otr/buffersize_kb        4096       kB    Size of OTR buffer
    rsdb/otr/max_objects          2000             Max. number of objects in the buffer
    rsdb/otr/mutex_n              0                Number of mutexes in OTR buffer
    |
    Exp/Imp SHM buffer             ESM
    rsdb/esm/buffersize_kb        4096       kB    Size of exp/imp SHM buffer
    rsdb/esm/max_objects          2000             Max. number of objects in the buffer
    rsdb/esm/large_object_size    8192       Bytes Estimation for the size of the largest object
    rsdb/esm/mutex_n              0                Number of mutexes in Exp/Imp SHM buffer
    |
    Table definition buffer        TTAB
    rsdb/ntab/entrycount          20000            Max. number of table definitions buffered
    The size of the TTAB is nearly 100 bytes * rsdb/ntab/entrycount
                                                                                    |
    Field description buffer       FTAB
    rsdb/ntab/ftabsize            30000      kB    Size of field description buffer
    rsdb/ntab/entrycount          20000            Max. number / 2 of table descriptions buffered
    FTAB needs about 700 bytes per used entry
                                                                                    |
    Initial record buffer          IRBD
    rsdb/ntab/irbdsize            6000       kB    Size of initial record buffer
    rsdb/ntab/entrycount          20000            Max. number / 2 of initial records buffered
    IRBD needs about 300 bytes per used entry
                                                                                    |
    Short nametab (NTAB)           SNTAB
    rsdb/ntab/sntabsize           3000       kB    Size of short nametab
    rsdb/ntab/entrycount          20000            Max. number / 2 of entries buffered
    SNTAB needs about 150 bytes per used entry
                                                                                    |
    Calendar buffer                CALE
    zcsa/calendar_area            500000     Byte  Size of calendar buffer
    zcsa/calendar_ids             200              Max. number of directory entries
    |
    Roll, extended and heap memory EXTM
    ztta/roll_area                3000000    Byte  Roll area per workprocess (total)
    ztta/roll_first               1          Byte  First amount of roll area used in a dialog WP
    ztta/short_area               3200000    Byte  Short area per workprocess
    rdisp/ROLL_SHM                16384      8 kB  Part of roll file in shared memory
    rdisp/PG_SHM                  8192       8 kB  Part of paging file in shared memory
    rdisp/PG_LOCAL                150        8 kB  Paging buffer per workprocess
    em/initial_size_MB            4092       MB    Initial size of extended memory
    em/blocksize_KB               4096       kB    Size of one extended memory block
    em/address_space_MB           4092       MB    Address space reserved for ext. mem. (NT only)
    ztta/roll_extension           2000000000 Byte  Max. extended mem. per session (external mode)
    abap/heap_area_dia            2000000000 Byte  Max. heap memory for dialog workprocesses
    abap/heap_area_nondia         2000000000 Byte  Max. heap memory for non-dialog workprocesses
    abap/heap_area_total          2000000000 Byte  Max. usable heap memory
    abap/heaplimit                40000000   Byte  Workprocess restart limit of heap memory
    abap/use_paging               0                Paging for flat tables used (1) or not (0)
    |
    Statistic parameters
    rsdb/staton                   1                Statistic turned on (1) or off (0)
    rsdb/stattime                 0                Times for statistic turned on (1) or off (0)
    Regards,
    Jitender

  • Looking for book about performance tuning 11g database

    Hi,
    I am looking for books about performance tuning for Oracle 11g. The documents from OTN (2 Day + Performance Tuning and the Performance Tuning Guide) are not really study material.
    Does someone have an idea?
    greeting,
    Max

    Hi,
    http://astore.amazon.com/oraclebooks-20/detail/1590599683 {Part IV Database Tuning }
    Greetings,
    Sim

  • Reg: Process Chain, query performance tuning steps

    Hi All,
    I came across a question like this: there is a process chain of 20 processes, of which 5 processes have completed; at the 6th step an error occurred and it cannot be rectified. I should start the chain again from the 7th step. If I go to a particular step I can run that particular step, but how can I start the entire chain again from step 7? I know that I need to use a function module, but I don't know the name of the FM. Please, somebody help me out.
    Please let me know the steps involved in query performance tuning and aggregate tuning.
    Thanks & Regards
    Omkar.K

    Hi,
    Process Chain
    Method 1 (when it fails in a step/request)
    /people/siegfried.szameitat/blog/2006/02/26/restarting-processchains
    How is it possible to restart a process chain at a failed step/request?
    Sometimes, it doesn't help to just set a request to green status in order to run the process chain from that step on to the end.
    You need to set the failed request/step to green in the database as well as you need to raise the event that will force the process chain to run to the end from the next request/step on.
    Therefore you need to open the messages of a failed step by right clicking on it and selecting 'display messages'.
    In the opened popup click on the tab 'Chain'.
    In a parallel session goto transaction se16 for table rspcprocesslog and display the entries with the following selections:
    1. copy the variant from the popup to the variante of table rspcprocesslog
    2. copy the instance from the popup to the instance of table rspcprocesslog
    3. copy the start date from the popup to the batchdate of table rspcprocesslog
    Press F8 to display the entries of table rspcprocesslog.
    Now open another session and goto transaction se37. Enter RSPC_PROCESS_FINISH as the name of the function module and run the fm in test mode.
    Now copy the entries of table rspcprocesslog to the input parameters of the function module like described as follows:
    1. rspcprocesslog-log_id -> i_logid
    2. rspcprocesslog-type -> i_type
    3. rspcprocesslog-variante -> i_variant
    4. rspcprocesslog-instance -> i_instance
    5. enter 'G' for parameter i_state (sets the status to green).
    Now press F8 to run the fm.
    Now the actual process will be set to green and the following process in the chain will be started and the chain can run to the end.
    Of course you can also set the state of a specific step in the chain to any other possible value like 'R' = ended with errors, 'F' = finished, 'X' = cancelled ....
    Check out the value help on field rspcprocesslog-state in transaction se16 for the possible values.
    Query performance tuning
    General tips
    Use aggregates and compression.
    Use fewer and less complex cell definitions where possible.
    1. Avoid using too many navigational attributes.
    2. Avoid RKFs and CKFs.
    3. Avoid too many characteristics in the rows.
    By using T-codes ST03 or ST03N
    Go to transaction ST03 > switch to expert mode > from the left-side menu, under system load history and distribution for a particular day, check the query execution time.
    /people/andreas.vogel/blog/2007/04/08/statistical-records-part-4-how-to-read-st03n-datasets-from-db-in-nw2004
    /people/andreas.vogel/blog/2007/03/16/how-to-read-st03n-datasets-from-db
    Try table rsddstats to get the statistics
    Using cache memory will decrease the loading time of the report.
    Run the reporting agent at night and send the results to email. This will ensure use of the OLAP cache, so later report executions will retrieve the result faster from the OLAP cache.
    Also try
    1. Use different parameters in ST03 to see the two important metrics: the aggregation ratio, and records transferred to the F/E versus records selected from the DB.
    2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
    Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
    It will show dimension vs. fact table sizes in percent. If you mean the speed of queries on a cube as the performance metric of the cube, measure query runtime.
    3. The +/- signs are the valuation of the aggregate design and usage. "++" means that its compression is good and access is also high (in effect, performance is good); if you check its compression ratio, it should be good. "--" means the compression ratio is not so good and access is also not so good (performance is not so good). The more plus signs, the more useful the aggregate and the more queries it satisfies; the more minus signs, the worse the evaluation of the aggregate.
    If the valuation is "-----" it is just overhead and the aggregate can potentially be deleted; "+++++" means the aggregate is potentially very useful.
    Refer.
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    4. Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
    Also, your query performance can depend on the selection criteria, and since you have given a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
    Check the query read mode in RSRT (whether it is A, X or H); the advisable read mode is X.
    5. In BI 7 statistics need to be activated for ST03 and BI admin cockpit to work.
    By implementing BW Statistics Business Content: you need to install it, feed it data, and then analyse it through the ready-made reports.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
    You can go to T-Code DB20 which gives you all the performance related information like
    Partitions
    Databases
    Schemas
    Buffer Pools
    Tablespaces etc
    Use the tool RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
    If aggregates contain incorrect data, you must regenerate them.
    Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
    Thanks,
    JituK

  • Performance tuning help

    hello friends,
    I am supposed to use ST05 and SE30 to do performance tuning, and also perform cost estimation.
    Can someone please help me understand? I am not that good at reading large paragraphs, so please do not give me links to help.sap.
    thank you.

    Hi
    SE30 is the runtime analysis.
    It will give you a detailed graph about your program:
    ABAP time, database time, application time, etc.
    ST05 is the SQL trace, where it shows you the individual select statements in your program
    and whether a proper index is being used or not.
    Other performance tips:
    Check for any select statements inside loops and replace them with read statements.
    Use binary search in read statements.
    Use a proper index in the where condition of the select statement.
    If you want to use ST05: go to ST05, turn the trace on, execute your program, come back to ST05, turn the trace off, and list the trace.
    You get summary details of the select queries, highlighted in pink or another color.
    Thanks

  • Complex Oracle Streams issue - Update conflicts

    This is for Oracle Streams replication on 11gR2.
    I am facing update conflicts in a table. The conflicts arise due to technical and business logic issues. The business logic conflicts will pass through the replication/apply process successfully, but we want to arrest and resolve them before replication, for our requirements. These are typically somewhat complex cases, and we are exploring the possibility of having both DML handlers and error handlers: the DML handlers would take care of the business logic conflicts, and the error handler of the technical issues, before Streams pushes them to the error queue. Based on our understanding and verification, we found a limitation: a procedure DML handler and an error handler cannot be configured together for the same table operation.
    Statement handlers cannot be used for our conflict scenarios.
    Following are my questions:
    1. Have anyone implemented or faced such a scenario in their real time application? If yes, can you please share some insights or inputs?
    2. Is there a custom way to handle this complex problem of configuring both DML and Error handler?
    3. Is there any alternative possible way to resolve this situation at Oracle streams environment with other handlers?
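    For concreteness, the registration call in question looks like this (a minimal sketch with illustrative object, handler, and apply names; the ERROR_HANDLER flag is what makes the two handler types mutually exclusive, since the same object/operation slot holds either a DML handler or an error handler):

    BEGIN
      DBMS_APPLY_ADM.SET_DML_HANDLER(
        object_name    => 'APP.ORDERS',                       -- illustrative
        object_type    => 'TABLE',
        operation_name => 'UPDATE',
        error_handler  => FALSE,   -- TRUE registers an error handler instead
        user_procedure => 'STRMADMIN.ORDERS_UPDATE_HANDLER',  -- illustrative
        apply_name     => 'APPLY_SITE1');                     -- illustrative
    END;
    /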

    Dear All
    I too have a similar requirement. Could anyone help with this one?
    We can handle the erroring transactions via error handler procedures.
    But we cannot configure a DML handler procedure for transactions that are successfully replicated; Streams does not allow us to configure a handler for this. Is there any other handler / procedure / hook in Streams where we can implement the desired functionality? This includes changing the values in the LCR before invoking lcr.execute(), and we should be able to discard the LCR as well if required.
    Regards
    Velmurugan
    Edited by: 982387 on Jan 16, 2013 11:25 PM
    Edited by: 982387 on Jan 16, 2013 11:27 PM

  • Register Oracle Streams with OID

    I'm trying to set up an Oracle Streams environment that registers queues with OID. I have the db registered, and set global_topic_enabled=true. The DB is in archivelog mode. When I try to set up a queue with:
    BEGIN
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table => 'STREAMS_QUEUE_TABLE',
        queue_name  => 'STREAMS_QUEUE',
        queue_user  => 'STRMADMIN');
    END;
    /
    I get
    Error report:
    ORA-00600: internal error code, arguments: [kcbgtcr_5], [52583], [4], [0], [], [], [], []
    ORA-06512: at "SYS.DBMS_STREAMS_ADM", line 739
    ORA-06512: at line 2
    00600. 00000 - "internal error code, arguments: [%s], [%s], [%s], [%s], [%s], [%s], [%s], [%s]"
    *Cause:    This is the generic internal error number for Oracle program
               exceptions. This indicates that a process has encountered an
               exceptional condition.
    *Action:   Report as a bug - the first argument is the internal error number
    Has anyone run into this? I searched Metalink but couldn't find anything. I'm running 10.2.0.1 on Windows 2K.
    Thanks in advance.

    I have never used Streams with OID, but check Bug 4996133 - OERI[kcbgtcr_5] updating an IOT in a RAC environment.
    I would consider upgrading the db to 10.2.0.3 - it is the first really stable release of 10gR2.
    Regards,
    Serge

  • Oracle Streams 10gR2 Schedule Propagation or Application

    Hi all,
    Is there a way to schedule the propagation and apply processes when configuring Oracle Streams? I mean, I don't want to execute online replication, because I have other objects outside Oracle that need to replicate along with the db to keep my application (read only) in sync.
    So, where can I do that?
    BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name => 'shm',
    streams_type => 'apply',
    streams_name => 'apply_from_db1',
    queue_name => 'strmadmin.from_db1',
    include_dml => true,
    include_ddl => true,
    source_database => 'db1.world',
    inclusion_rule => true);
    END;
    BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name => 'shm',
    streams_name => 'db1_to_db2',
    source_queue_name => 'strmadmin.captured_db1',
    destination_queue_name => '[email protected]',
    include_dml => true,
    include_ddl => true,
    source_database => 'db1.world',
    inclusion_rule => true,
    queue_to_queue => true);
    END;
    /
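    For reference, propagation does not have to run continuously; it can be paused and resumed around the external sync from the OS scheduler (a sketch using the propagation name from the block above):

    BEGIN
      -- pause Streams propagation while the external (non-Oracle) objects sync
      DBMS_PROPAGATION_ADM.STOP_PROPAGATION(propagation_name => 'db1_to_db2');
      -- ... external replication runs here ...
      DBMS_PROPAGATION_ADM.START_PROPAGATION(propagation_name => 'db1_to_db2');
    END;
    /

    DBMS_APPLY_ADM.STOP_APPLY and START_APPLY give the same control on the apply side, and DBMS_AQADM.ALTER_PROPAGATION_SCHEDULE can set a duration/latency window if a schedule rather than manual control is wanted.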

    Hello, am I not being clear with my question? Or maybe I'm doing everything the wrong way regarding the Streams propagation and apply processes?
    I looked into all the documentation available and I can't find how to schedule the processes ... what's the part I'm misunderstanding?
    As long as I have other non-Oracle objects (file system objects, say some ECM file system objects) to replicate with the data schema, I can't replicate (automatically) all the changes occurring at the source table's schema to the destination database schema. So, I'm replicating the ECM file system objects with another external tool, and it has Windows schedule integration ... but how can I schedule the propagation and/or apply processes in a 2-way Streams environment?
    Please, I need a hint from somebody.
    Thanks.

  • Oracle Streams - First Load

    Hi,
    I have an Oracle Streams environment working well. I replicate and transform the data.
    My problem is:
    Initially I have a source database with 3 million records and a destination database with no records.
    I have to equalize the source and destination databases before starting to synchronize them.
    Do you know how I can replicate (and transform) this data for my first load?
    It's not only to copy all the data; it's to copy and transform.
    Is it possible to use the same transformation process for this first load?
    If I didn't transform the data I would use the Data Pump tool (for example). But I have to transform the data for my destination database.
    Thanks
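    A common instantiation pattern (a sketch only; the schema, database name, and SCN below are placeholders) is to export as of a known SCN, transform during or after the import, and then tell apply to ignore changes at or below that SCN:

    -- OS level: expdp ... FLASHBACK_SCN=<scn>  (transform via remap or post-processing)
    BEGIN
      DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
        source_schema_name   => 'SHM',          -- placeholder
        source_database_name => 'DB1.WORLD',    -- placeholder
        instantiation_scn    => 123456789);     -- the FLASHBACK_SCN used for the export
    END;
    /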

    I am in DAC and trying to run the Informatica ETL for one of the prebuilt execution plans (HR - Oracle R12). I built the project plan and ran it. I got a failed status for all of the ETL steps (starting from the first one, 'Load Row into Run table'). I have attached the error log for Load Row into Run table below.
    I took a closer look at all the steps, and it seems like they all have this common "fail parent if this task fails" error message.
    Error log for
    pmcmd startworkflow -u Administrator -p **** -s SOBI:4006 -f SILOS -lpf C:\Informatica\PowerCenter8.1.1\server\infa_shared\SrcFiles\SILOS.SIL_InsertRowInRunTable.txt SIL_InsertRowInRunTable
    Status Desc : Failed
    WorkFlowMessage :
    =====================================
    STD OUTPUT
    =====================================
    Informatica(r) PMCMD, version [8.1.1 SP5], build [186.0822], Windows 32-bit
    Copyright (c) Informatica Corporation 1994 - 2008
    All Rights Reserved.
    Invoked at Thu May 07 14:46:04 2009
    Connected to Integration Service at [SOBI:4006]
    Folder: [SILOS]
    Workflow: [SIL_InsertRowInRunTable] version [1].
    Workflow run status: [Failed]
    Workflow run error code: [36331]
    Workflow run error message: [WARNING: Session task instance [SIL_InsertRowInRunTable] failed and its "fail parent if this task fails" setting is turned on. So, Workflow [SIL_InsertRowInRunTable] will be failed.]
    Start time: [Thu May 07 14:45:43 2009]
    End time: [Thu May 07 14:45:47 2009]
    Workflow log file: [C:\Informatica\PowerCenter8.1.1\server\infa_shared\WorkflowLogs\SIL_InsertRowInRunTable.log]
    Workflow run type: [User request]
    Run workflow as user: [Administrator]
    Integration Service: [Oracle_BI_DW_Base_Integration_Service]
    Disconnecting from Integration Service
    Completed at Thu May 07 14:46:04 2009
    =====================================
    ERROR OUTPUT
    =====================================
    Error Message : Unknown reason for error code 36331
    ErrorCode : 36331
    If you have any input on how to fix this issue, please let me know.

  • Support for the Oracle Streams feature on 12c

    The Oracle docs say that Oracle Streams is deprecated in 12c, but they don't say desupported. My question is: can I still use Oracle Streams on 12c and get tech support from Oracle?

    Yes. Deprecated means the feature is still shipped and supported in 12c; it is just no longer being enhanced and may be desupported in a future release, so plan a migration to GoldenGate.
