Best practices in Queue table maintenance

Hi Fellow AQ Users,
I am looking to hear from the community about best practices in queue table maintenance.
I have been mining MetaLink for the various Oracle recommendations and putting
together a set of recommendations as a starting point for my DBAs.
I am looking to answer questions like these:
How often (in relation to messaging load) would you coalesce and rebuild the indexes?
How often would you rebuild the table itself to get rid of high-water-mark issues,
and what procedure would you use to do that?
I would really love to learn from your experiences in this area. We are using a 9.2.0.7
64-bit DB and plan to go to 10g over the next year, so I am looking at 9i-related
material first and then 10g.
Thanks
Vijay

Hello,
In general you coalesce once per day, ideally during a quiet time to avoid ORA-54 errors, as per <Note:271855.1>. Some customers do it more often than that, but once per day is a good starting point.
In terms of shrinking the queue tables, you can use the procedure in <Note:304522.1> with a null third parameter. This is an offline procedure, so you can only run it during a maintenance window. From 10.2 onwards you can dynamically shrink the queue table and its IOTs. Again, how often you might need to do this depends on exactly what you are doing with your queue tables.
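For illustration, a minimal sketch of both operations, assuming a queue table called MY_QTAB (the AQ$_ segment names are hypothetical; check DBA_OBJECTS for the actual IOT names in your schema):

    -- 9i: coalesce the queue table's IOTs during a quiet period
    ALTER TABLE aq$_my_qtab_i COALESCE;
    ALTER TABLE aq$_my_qtab_t COALESCE;

    -- 10.2 onwards: online shrink (assumes the queue table is in an ASSM tablespace)
    ALTER TABLE my_qtab ENABLE ROW MOVEMENT;
    ALTER TABLE my_qtab SHRINK SPACE;
    ALTER TABLE my_qtab DISABLE ROW MOVEMENT;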
Thanks
Peter

Similar Messages

  • Best practices in internal table naming conventions: GT_, GS_, LT_, LS_?

    Hi Gurus,
         Are GT_, GS_, LT_, LS_ the best practices in internal table naming conventions?
         I have seen these naming conventions adhered to in standard programs.
         What does each of the following signify:
         GT_, GS_, LT_, LS_?
    Regards
    Jaman

    Hello
    I use the following naming conventions:
    - G = global variable
    - L = local variable
    - T = internal table
    - S = structure
    - D = field
    This is how the combinations look:
    - GT_ITAB  = global itab
    - GS_STRUC = global structure
    - GD_FIELD = global field
    - LT_ITAB  = local itab
    - LS_STRUC = local structure
    - LD_FIELD = local field
    Function module parameters have to stick to the following rules:
    - I = importing
    - E = exporting
    - [C = changing -> never used]
    - IT_ITAB  = imported table type (itab)
    - IS_STRUC = imported structure
    - ID_FIELD = imported field
    - ET_ITAB  = exported table type (itab)
    - ES_STRUC = exported structure
    - ED_FIELD = exported field
    Depending on their semantics TABLES parameters look like:
    - IT_ITAB = imported data
    - ET_ITAB = exported data
    - XT_ITAB = changing data (import & export)
    Here are the conventions for FORM routine parameters:
    - UT_ITAB = using itab (the data is usually treated as constant; changes are not transferred back to the calling program, although that would technically be possible)
    - CT_ITAB = changing itab (if it is semantically an exporting itab, then one of the very first statements in the routine is REFRESH ct_itab.)
    - US_STRUCT
    - UD_FIELD
    - CS_STRUCT
    - CD_FIELD
    Conventions for class/interface parameters:
    - IT_ITAB = importing table type
    - IS_STRUC = importing structure
    - ID_FIELD = importing field
    - ET_ITAB = exporting table type
    - ES_STRUC = exporting structure
    - ED_FIELD = exporting field
    - RT_ITAB = returning table type
    - RS_STRUC = returning structure
    - RD_FIELD = returning field
    Conventions for class/interface attributes:
    - MT_ITAB = table type
    - MS_STRUC = structure
    - MD_FIELD = field
    - MC_CONST = constant
    Question: Are there any advantages to such elaborate naming conventions?
    My answer to this question is: yes, definitely.
    I believe that the advantage of semantically differentiating TABLES parameters of function modules is quite obvious:
      CALL FUNCTION 'Z_BAD_NAMING'
        TABLES
           itab1 = ...
           itab2 = ...
           itab3 = ... .
      CALL FUNCTION 'Z_GOOD_NAMING'
        TABLES
           it_itab1 = ...
           et_itab2 = ...
           xt_itab3 = ... .
    I also believe that my naming conventions clearly enhance readability and maintainability of my programs.
    Regards
      Uwe

  • Best Practice to Copy Table to Different Schema?

    This is what I tried:
    1. Create a new user USER2 that uses TBS2 as its default tablespace
    2. Perform exp of the table (which resides on the TBS1 tablespace)
    3. Perform imp using the fromuser and touser parameters to USER2
    What I noticed is that the table imported to USER2 still resides on the same tablespace, when it was supposed to be on TBS2.
    Can someone help me perform these steps?
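    For what it's worth, two commonly used fixes, sketched with the names above (unverified here; note that the quota approach won't help if USER2 has the UNLIMITED TABLESPACE privilege):

    -- Option 1: remove USER2's quota on TBS1 before running imp, so the
    -- segments are created in the default tablespace TBS2 instead:
    ALTER USER user2 QUOTA 0 ON tbs1;
    ALTER USER user2 QUOTA UNLIMITED ON tbs2;

    -- Option 2: relocate the segments after the import:
    ALTER TABLE user2.test MOVE TABLESPACE tbs2;
    ALTER INDEX user2.tidx REBUILD TABLESPACE tbs2;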

    mrp wrote:
    will this option recreate the index?
    The best way to find the answers to such questions is to test it yourself.
    SQL> conn aman/aman
    Connected.
    SQL> create table test as select * from scott.dept;
    Table created.
    SQL> create index tidx on test(deptno);
    Index created.
    SQL> select object_name, object_type from user_objects;
    OBJECT_NAME
    --------------------------------------------------------------------------------
    OBJECT_TYPE
    -------------------
    TIDX
    INDEX
    TEST
    TABLE
    SQL> create table test2 as select * from test;
    Table created.
    SQL> column object_name format a40
    SQL> select object_name, object_type from user_objects;
    OBJECT_NAME                              OBJECT_TYPE
    TIDX                                     INDEX
    TEST                                     TABLE
    TEST2                                    TABLE
    SQL>
    So, did the index get created for the second table or not?
    Aman....

  • Best practice of metadata table in data warehouse environment ?

    Hi guru's,
    In datawarehouse, we have 1. Stage schema 2. DWH(Data warehouse reporting schema). In stageing we have about 300 source tables. In DWH schema, we are creating the tables which are only required from reporting prespective . some of the tables in stageing schema, have been created in DWH schema as well with different table name and column names. The naming convention for these same tables and columns in DWH schema is more based on business names.
    In order to keep track of these tables we are creating metadata table in DWH schema say for example
    Stage                DWH_schema
    Table_1             Table_A         
    Table_2             Table_b
    Table_3             Table_c
    Table_4              Table_DMy question is how do we handle the column names in each of these tables. The stage_1, stage_2 and stage_3 column names have been renamed in DWH_schema which are part of Table_A, Table_B, Table_c.
    As said earlier, we have about 300 tables in stage and may be around 200 tables in DWH schema. Lot of the column names have been renamed in DWH schema from stage tables. In some of the tables we have 200 column's
    so my concern is how do we handle the column names in metadata table ? Do we need to keep only table names in metadata table not column names ?
    Any idea will be greatly appriciated.
    Thanks!
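    For illustration only, a minimal sketch of a column-level mapping table (all names hypothetical):

    CREATE TABLE meta_column_map (
      stage_table  VARCHAR2(30) NOT NULL,
      stage_column VARCHAR2(30) NOT NULL,
      dwh_table    VARCHAR2(30) NOT NULL,
      dwh_column   VARCHAR2(30) NOT NULL,
      CONSTRAINT pk_meta_column_map PRIMARY KEY (stage_table, stage_column)
    );
    -- One row per mapped column; even 300 tables x 200 columns is a small table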

    Hi,
    this seems to be quite a buzzing question.
    In our project we designed a hub-and-spoke-like architecture.
    Thus we have three layers. L0 is the one closest to the source, and L0 table names are linked to the corresponding source names by means of a naming standard (like tabA, EXT_tabA, tabA_OK1 and so on, based on the implementation of the load procedures).
    At L1 we have the ODS, a normalized model; we use business names for tables there, and standard names for temporary structures and artifacts.
    Both L0 and L1 keep the source's column names as a general rule; new columns, such as calculated ones, are business driven, and metadata is standard driven.
    Data Modeler fits the L1 modelling purpose perfectly.
    L2 is the dimensional schema; business names take over for tables and columns, eventually rewritten at the presentation layer (the front-end tool).
    Hope this helps, D.

  • Best practice: self referencing table

    Hello,
    I want to load a self-referencing table into the enterprise area of my DWH, and I am not sure how to do this. What if a record references another record which is inserted later in the same ETL process? There is no way to sort, because record A may reference record B while record B references record A. Disabling the FK constraint is not a good idea, because it doesn't prevent invalid references from being loaded. I am thinking of building two mappings, one without the self-referencing column and the other with only that column, but that would take roughly twice the running time. Any other solutions?
    Regards,
    Torsten

    Mind sharing the solution? Would be interested to hear your solution (high level).
    Jean-Pierre
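    For reference, the standard trick for mutually referencing rows is a deferrable constraint (a sketch, not necessarily Torsten's solution; table and column names are made up):

    ALTER TABLE party ADD CONSTRAINT fk_party_parent
      FOREIGN KEY (parent_id) REFERENCES party (party_id)
      DEFERRABLE INITIALLY DEFERRED;

    -- Load the whole batch in a single transaction; the FK is validated only
    -- at COMMIT, so rows A and B may reference each other within the batch.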

  • Best Practice loading Dimension Table with Surrogate Keys for Levels

    Hi Experts,
    how would you load an Oracle dimension table with a hierarchy of at least 5 levels, with surrogate keys at each level and a unique dimension key for the dimension table?
    With OWB, using surrogate keys at every level of a hierarchy is an integrated feature. You don't have to care about the parent-child relation: the mapping's load process generates the right keys and takes care of the relation between parent and child inside the dimension key.
    I tried to use one interface per level and created a surrogate key with a native Oracle sequence.
    After that I put all the interfaces into one big interface with a union data set per level and added lookups for the right parent-child relation.
    I think making the interface like that is a bit too complicated.
    I would be more than happy for any suggestions. Thank you in advance!
    negib

    Hi,
    I do like the level-keys feature of OWB; it makes aggregate tables very easy to implement if you're sticking with a star schema.
    Sadly there is nothing off the shelf in ODI's built-in knowledge modules; it doesn't support creating dimension objects in the database by default. But there is nothing stopping you coding up your own knowledge module (maybe use flex fields on the datastore to tag column attributes as needed).
    Your approach is what I would have done; possibly use a view (if you don't mind having it external to ODI) to make the interface simpler.
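    A minimal sketch of that per-level pattern in plain SQL, i.e. a sequence for the surrogate key plus a lookup to the parent level (all object names hypothetical):

    INSERT INTO dim_product (product_sk, product_code, subgroup_sk)
    SELECT dim_product_seq.NEXTVAL,
           s.product_code,
           p.subgroup_sk                    -- parent level's surrogate key
    FROM   stg_product s
    JOIN   dim_subgroup p ON p.subgroup_code = s.subgroup_code;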

  • Best practice for long-term maintenance?

    Still busy with the long-term maintenance requirement.
    When maintaining a building, my client wants to register the following:
    - long-term maintenance (like 15 to 25 years)
    - short-term maintenance is already covered in preventive maintenance
    - the long-term maintenance must be related to specific activities which contain key figures (the key figures are obtained from third-party libraries and can be used to calculate the estimated cost of a certain activity when multiplied by, for instance, the surface area of the object)
    - for the cost planning, a report should calculate the estimated costs derived from the key figures
    - the long-term planning should be easy to manage, with activities easily changed over time
    Is there anything already at hand, or an example where long-term maintenance like this is implemented? I can't see how I can fit the above requirements into, for instance, preventive maintenance.
    Reward for useful info.
    kind regards
    arthur

    I tried to make a configurable task list, but somehow I can't enter values when I attach the task to a preventive maintenance plan.
    I created characteristics and a class of type 300,
    and made a config profile with CU41.
    I also can't enter values when I attach the general task to a service order.
    Do I still miss something here?
    Hmm, I made a separate thread for this since it has nothing to do with the initial problem.

  • Best practice guide for Mac maintenance

    Hi,
    I have always wondered what to do to keep your computer in its best shape.
    My first ideas are:
    1. Use Software Update often, with a few days' delay as updates arrive, to avoid surprises
    2. Repair permissions from time to time, above all after upgrading the OS
    3. Repair the hard drive with Disk Utility while booted from the OS disc
    4. Defragment? I have heard Unix-based systems do this automatically.
    5. ...
    Is that OK?
    What else?
    How often?

    Hi, Micha'l.
    You wrote: "I am surprised they favor antiviruses though." If you want to understand why, see my "Detecting and avoiding malware and spyware" FAQ for my recommendations, as well as a list of some recent Mac OS X security threats that have emerged, including Trojans, rootkits, and spyware. There's a bit more to it than simply not forwarding Windows viruses.
    The FAQ also addresses some of the usual arguments against installing an anti-virus solution that others may offer.
    Glad you like my site. If you think that's good, you should read my books.
    Good luck!
    Dr. Smoke
    Author: Troubleshooting Mac® OS X
    Note: The information provided in the link(s) above is freely available. However, because I own The X Lab™, a commercial Web site to which some of these links point, the Apple Discussions Terms of Use require I include the following disclosure statement with this post:
    I may receive some form of compensation, financial or otherwise, from my recommendation or link.

  • Azure table design best practice

    What's the best practice for designing tables in Azure Tables to optimize query performance?

    Hi Raj,
    When we design an Azure table, we need to consider its scalability,
    and selecting the PartitionKey is the most important factor for scalability.
    Basically, we have two options, each with advantages and disadvantages:
    Option one: use the same PartitionKey value for all entities, giving a single partition.
    Option two: use a unique PartitionKey value for every entity.
    For more information about how to get the most out of Windows Azure tables, please refer to the link below:
    http://blogs.msdn.com/b/windowsazurestorage/archive/2010/11/06/how-to-get-most-out-of-windows-azure-tables.aspx
    There is also a detailed article which explains how to design a scalable partitioning strategy for Windows Azure Storage; please refer to the link below:
    http://msdn.microsoft.com/en-us/library/hh508997.aspx
    Best Regards,
    Kevin Shen

  • Best Practice for module components based on API's

    Hi all,
    are there white papers or other documents which outline best practices for using table/module component APIs, and the best approaches for getting around restrictions?
    I would also appreciate war stories about using this method of Forms development, bearing in mind I would be using it as the foundation of a web-deployed application (intranet first, ultimately internet).
    Thanks
    Mark

    You cannot add agents to skills dynamically; however, you can queue the caller into more than one CSQ based on statistics. You would use the Get Reporting Statistics step and an If step within the Queued branch of your first Select Resource step. If the condition is met (e.g. Contacts Waiting >= 10), then execute a second Select Resource step within the Queued branch of the first. The contact would then be waiting in both CSQs for an agent.

  • Best Practices for SAP, connections to use without BW

    Hello,
    Could you help me to solve a problem with Best Practices 4.31?
    The SAP Integration Kit XI 3.1 SP3 has 4 connection options:
    • SAP InfoSets
    • SAP BW MDX Query
    • SAP BW Query
    • Table, cluster or function
    When I install the SAP Integration Kit XI 3.1 SP3 for the server, I get one additional service in my CCM, BW Publisher 12. This service comes from SAP BW.
    From the article /people/glen.spalding/blog/2010/08/04/the-full-montypart-13bobj-integration-kit-sp3-install-configure and the installation guide for the Integration Kit XI 3.1 SP3, I've learned that I need SAP BW to use InfoSets in Crystal Reports.
    In our installation we have:
    • an SAP ERP system without BW,
    • BOBJ Server Setup,
    • BOBJ Edge Integration Kit for SAP XI 3.1 SP3 server
    • BOBJ Client Setup,
    • BOBJ Edge Integration Kit for SAP XI 3.1 SP3 client
    • Crystal Reports 2008
    • Xcelsius 2008
    I've got the Best Practices reports working via the "Table, cluster or function" connection. There is no problem with this type of connection!
    I suppose that the "SAP InfoSets", "SAP BW MDX Query" and "SAP BW Query" connections should be used with SAP BW.
    Could you give me your opinion on this question?
    Thanks beforehand,
    Malika

    Hi,
    the InfoSets are a connection option for the classic InfoSets in an ERP system. I would suggest you take a look at the user guide for the SAP Integration Kit.
    Ingo

  • Best Practice for Designing Database Tables?

    Hi,
    I work at a company that sells tracking devices (GPS devices). Our SQL Server database is designed to have a table for each device we sell; currently there are 2,500 tables in our database, and they all have the same columns, differing only in table name. Each device sends about 4K records per day.
    Currently each table holds from 10K to 300K records.
    What is the best practice for designing a database in this situation?
    When accessing the database from a C# application, which is better to use, direct SQL commands or views?
    A detailed description of what is best to do in such a scenario would be great.
    Thanks in advance.
    Edit:
    The tables' columns are:
    [MessageID]
          ,[MessageUnit]
          ,[MessageLong]
          ,[MessageLat]
          ,[MessageSpeed]
          ,[MessageTime]
          ,[MessageDate]
          ,[MessageHeading]
          ,[MessageSatNumber]
          ,[MessageInput]
          ,[MessageCreationDate]
          ,[MessageInput2]
          ,[MessageInput3]
          ,[MessageIO]
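    For what it's worth, the usual recommendation for this pattern is one table keyed by device rather than 2,500 identical tables; a sketch using the columns above (the DeviceID column, the data types and the index are my assumptions):

    CREATE TABLE dbo.DeviceMessage (
        MessageID           BIGINT IDENTITY(1,1) NOT NULL PRIMARY KEY,
        DeviceID            INT      NOT NULL,  -- replaces one-table-per-device
        MessageUnit         INT      NULL,
        MessageLong         FLOAT    NULL,
        MessageLat          FLOAT    NULL,
        MessageSpeed        FLOAT    NULL,
        MessageTime         DATETIME NULL,
        MessageDate         DATETIME NULL,
        MessageHeading      FLOAT    NULL,
        MessageSatNumber    INT      NULL,
        MessageInput        INT      NULL,
        MessageCreationDate DATETIME NULL,
        MessageInput2       INT      NULL,
        MessageInput3       INT      NULL,
        MessageIO           INT      NULL
    );

    -- Most report queries filter on a device and a time range:
    CREATE INDEX IX_DeviceMessage_Device_Time
        ON dbo.DeviceMessage (DeviceID, MessageTime);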

    Hello Louis, thank you so much for your informative post. I'll describe in detail the situations I have come through in my 9 months at the company (working as a software engineer, but I am planning to take over database maintenance, since no one is maintaining it right now and there is nothing else I can do in the code to make it faster).
    At the end of every month our clients generate a report covering the previous month for all their cars; some clients have 100+ cars, some have a few. This is when the real issues start: they are pulling their data from our server over the internet while 2,000 units are sending data to our server, and they keep getting read timeouts, since SQL Server prioritizes the inserts and holds all the select commands. I solved it temporarily in the code by using "Read Uncommitted" when I initialize a connection through C#.
    The other issue is that generating reports for a month or two takes a lot of time when selecting 100+ units. That's what I want to solve. The problem is that whoever wrote the C# app used hard-coded SQL statements,
    AND
    the company is refusing to upgrade from SQL Server 2003 and Windows Server 2003.
    Now, talking about reports: there are summary reports, stop reports, zone reports, etc.; most of them usually depend on at least MessageTime, MessageDate, MessageSpeed, MessageIO and MessageSatNumber.
    So from your post I conclude that for now I need to set up snapshots so that select statements don't get kicked out in favor of insert commands, but does SQL Server automatically select from the snapshots or do I have to tell it to do so?
    Other than proper indexing, what else do I need? Tom Phillips suggested table partitioning, but I don't think it is needed in my case, since our database size is 78 GB.
    When I run code analysis on the app, Visual Studio tells me I had better use stored procedures and views rather than hard-coded select statements; what difference will this make in terms of performance?
    Thanks in advance.
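    On the snapshot question: it is not automatic; you switch it on per database, after which plain SELECTs read row versions instead of blocking behind the inserts, with no application change needed. A sketch, assuming SQL Server 2005 or later and a hypothetical database name:

    -- Needs a moment with no other active connections in the database to take effect
    ALTER DATABASE TrackingDB SET READ_COMMITTED_SNAPSHOT ON;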

  • Multi layer table view/navigation controller hierarchy best practice

    Hi,
    I am new to iPad/iPhone development and am wondering what the best practice for multiple layers of table views is. I understand the principle of a navigation controller providing the framework for moving up and down a list, but have not yet quite got my head around whether you should have one navigation controller for the whole tree or several navigation controllers.
    In my app I need to have the following:
    Main view -> window view showing some interactive elements (picker, buttons etc.)
    Setup view -> Hierarchy managed by nav controller/table views
    The setup view needs to manage the following hierarchy:
    - Level A:
      - Global app variables (one table view)
      - Level B items (table view showing the list of items belonging to Level B)
        - Level B item 1 (table view showing the list of Level C items belonging to Level B item 1)
          - Level C item 1 (table view showing the list of Level D items belonging to Level C item 1)
            - Level D item 1 (table view showing the list of Level E items belonging to Level D item 1)
              - Level E item (table view for the properties of an item at Level E)
            - Level D item n
          - Level C item n
        - Level B item n
    Each level in this has some properties and then a list of child items.
    What would be the best way of structuring this? I would assume that creating a class that extends a view controller for each level is a given, but what about the control of the navigation? Should this be handled by one navigation controller, or one per level? I think I know the right answer but have not seen a neat way of implementing it.
    I think I am also best off having each level in its own xib but, once again, am not 100% sure that this is the best design pattern.
    Many thanks in advance for any help/pointers!
    Cheers
    jez

    Hi Julian,
    I have struggled with the same questions you are addressing. On a previous project we tried to model based on packages, but during the course of the project we encountered some problems that grew over time. The main problems were:
    1. It is hard to enforce rules on package assignments.
    2. With multiple developers on the project and limited time, we didn't have time to review package assignments.
    3. Developers would click away warnings that an object was already part of another project and just continue.
    4. After go-live, the maintenance partner didn't care.
    So my experience is that it is a nice feature, but only from a high-level design point of view. In real life it will get messy and, above all, it doesn't add much value to the development. On my new assignment we are just working with packages based on functional area, and that works just fine.
    Roy

  • What is best practice for using Maintenance Optimizer to download SPS + EhP

    SAP indicates in note 1095233 and many other documents that the best practice is to implement an EhP along with an SP Stack in the same queue. Furthermore, the only way to download the components of an EhP is via Maintenance Optimizer. However, in step 2 (Calculate download files automatically) you must choose either Maintenance (SPS) or Enhancement Package Installation (EhP); there is no way to tell Maintenance Optimizer that you want to download both SPS and EhP.
    In developing the process for my team, I circumvented the problem by instructing them to select Maintenance first, to get the SPS into the Download Basket, and to follow that by selecting Enhancement Package Installation to get the EhP into the Download Basket. The plan would be to include all the downloads in the SPAM/SAINT queue, with the expectation that SPAM/SAINT will be able to determine what must be included.
    I'm wondering now whether that is a legitimate way to approach this and I am hoping others will share their process.
    Thanks,
    Terry McCann
    Monsanto Company
    St. Louis, MO

    Actually, I don't believe you will get the entire SP stack for ERP 6; you only get those pieces of the stack that are required for the Technical Usages you select for the EhP. I'm basing that on my observation that you get a different subset of the files making up the stack depending upon your TU selection. Also, note 1095233 specifically states:
    If now the corrections included in the enhancement package 3 correspond to ERP 6.0 SP level 11, SAP recommends to update also the parts of your system to SP level 11 which you do not want to update to enhancement package 3, to achieve a consistent correction level in all parts of your application: SAP recommends to update all software components to the correction status corresponding to SAP ERP 6.0 Support Package Stack 11 with the installation of SAP enhancement package 3 for SAP ERP 6.0.
    Thanks,
    Terry

  • Handling Error queue and Resolving issues - Best practices

    This is related to Oracle 11g R2 streams replication.
    I have table A at the source and table B at the destination. During replication by Oracle Streams, if there is an error in the apply process, the LCR will go to the error queue table.
    Please share best practices on the following:
    1. Maintaining the error table,
    2. Monitoring the errors in such a way that other LCRs are not affected,
    3. Reapplying the resolved LCRs from the error queue back to table B, and
    4. Retaining synchronization without degrading the performance of Streams.
    Please share some real-world insight into the error queue handling mechanism.
    Appreciate your help in advance.

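    As a starting point for items 1 and 3, the apply errors surface in DBA_APPLY_ERROR and can be re-executed with DBMS_APPLY_ADM once resolved; a sketch (the transaction id shown is hypothetical):

    SELECT apply_name, local_transaction_id, error_message
    FROM   dba_apply_error;

    BEGIN
      -- Re-execute one failed transaction after fixing the underlying problem
      DBMS_APPLY_ADM.EXECUTE_ERROR(local_transaction_id => '13.16.334',
                                   execute_as_user      => FALSE);
    END;
    /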

    I can't find any reference to the ACR-7 plug-in on the Adobe Labs site. When I select the 'Render using Lightroom' option,  the rendered image looks the same as the RAW image in LR4 Beta, that's good however you lose the Smart-Object capability, no m