Oracle best practice metrics

Aside from security hardening and ensuring there is an effective backup/restore process in operation, what other issues should a network admin want assurances on for a healthy Oracle database? What other metrics or risk areas would be of concern to IT/network management?

Osama_mustafa wrote:
You could start with a small test: back up and restore a few files to see that the system itself works, and check the contents of your backup archive. You also need a different approach to test the tape drive and make sure it is valid.

Sorry, maybe a bit of confusion on my part. I basically meant: if you take security and backup/restore out of the equation, what other metrics would DBAs look for when trying to determine whether a database server is healthy and well configured/managed?
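As a hedged illustration of the backup test suggested above (assuming RMAN is the backup tool in use; adapt to whatever you actually run), a validation pass reads the backups without actually restoring anything:

RMAN> backup validate check logical database;
RMAN> restore database validate;
RMAN> restore archivelog all validate;

A periodic test restore to a scratch host is still the only real proof, but validation catches unreadable or corrupt backup pieces early.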

Similar Messages

  • What Oracle best practices in mapping budgeting to be implemented at item

    Dear Consultants,
    I really need your valued consultancy.
    What is the Oracle best practice for mapping budgeting, to be implemented at the item category level or item level?
    I want to check funds against the encumbrance account at the item level.
    Case:
    I have three item categories:
    One is computer items
    Two is printer items
    The third is food items
    I want to implement my budget at the item category level.
    Example:
    I want my purchase budget for items of type printer not to exceed 30,000 USD,
    and for items of type food not to exceed 45,000 USD.
    How do I map this in Oracle Applications?
    The modules implemented on my site are
    (GL, AP, AR, INV, PURCHASING, OM)
    Please give me the Oracle best practice that handles this case.
    Thanks to all of you

    Hi,
    It is really difficult to have Budgetary Control on Inventory Items in an Average Costing environment, as you can have only one Inventory Account at the Inventory Organization level.
    You have to modify your PO / Requisition Account Generator to populate the Encumbrance Account in the PO / Requisition based upon item category. Moreover, the "Reverse Encumbrance" flag in your Inventory Org needs to be unchecked so that the encumbrances are not reversed when the goods are received.
    Gajendra
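    As a purely hypothetical sketch of the kind of logic such an Account Generator customization might delegate to (the function name, category codes and account IDs below are invented for illustration and do not come from the thread), a custom workflow function could map the PO line's item category to the encumbrance account:

    create or replace function xx_get_encumbrance_ccid (p_category_code in varchar2)
      return number
    is
    begin
      -- return the code combination id (CCID) of the encumbrance account
      -- for the given item category; the CCID values here are placeholders
      return case p_category_code
               when 'COMPUTERS' then 1001
               when 'PRINTERS'  then 1002
               when 'FOOD'      then 1003
             end;
    end xx_get_encumbrance_ccid;
    /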

  • Where does one find the Oracle Best Practice/recommendations for how to DR

    What is the Oracle Best Practice for install/deployment and configuration of ODI 11g for Disaster Recovery?
    We have a project that is using Oracle ODI 11g (11.1.1.5).
    We have configured all the other Oracle FMW components as per the Oracle DR EDG guides, basically using the host IP name/aliasing concept to 'trick' the secondary site into thinking it is primary and to keep working with minimal (or no) manual reconfiguration. But will this work for ODI? The FMW DR guide has sections for SOA, WebCenter and IdM, but nothing for ODI.
    Since ODI stores so much configuration information in the Master Repository, when this DB gets 'data guarded' to the secondary site and promoted to primary, ODI will still think it is at the 'other' site. Will this break the actual agents running the scenarios?
    Where does one find the Oracle Best Practice/recommendations for how to DR ODI properly?
    We are looking for a solution that will allow a graceful switchover/failover with minimal manual re-configuration.

    user8804554 wrote:
    Hi all,
    Hi all,
    I'm currently testing external components with Windows Server and I want to test Oracle 11g R2.
    The only resource I have is this website and the only binaries seem to be for Linux OS.
    Does anybody know how I can upgrade my Oracle 11.1.0.7 version to the R2 release?
    Thanks,
    Bertrand

    You have one other HUGE resource that, while it won't answer your current question, you'd better start getting familiar with if you are going to use Oracle: the complete and official documentation, found at tahiti.oracle.com

  • Oracle Best Practices for generating Transactions IDs in high OLTP systems

    We are in the process of designing a high OLTP system using Oracle 11g Database with the following NFRs:
    1) 1 million transactions per day
    2) 100,000 concurrent users
    There are about 160-180 entities in the database, and we want to know the best approach/practice for deriving the transaction IDs for the OLTP system. Our preferences are given below:
    1) Use Oracle Sequence starting with 1,000,000,000 (1 billion) - This is to make the TXN ID look meaningful when it starts with 1 billion instead of starting it with 1.
    2) Use timestamp and cast it to number instead of using Oracle sequence.
    Note: Transaction IDs must appear in sequence as they are inserted - be it sequence/timestamp
    I would like to know the pros/cons of the above methods and their impacts on performance. Also, I would appreciate it if you could share any best practices/methods that Oracle supports.
    Thanks in advance.
    Ken R

    Ken R wrote:
    I did a quick PoC using both Oracle Sequence & Timestamp for 1 million inserts in a Non-RAC environment. Code used is given below:
    create sequence testseq start with 1 cache 10000 order;
    create table test1 (txnid number, txndate timestamp(9));
    create table test2 (txnid number, txndate timestamp(9));

    begin
      for i in 1..1000000
      loop
        insert into test1 values (testseq.nextval, systimestamp);
      end loop;
      commit;
    end;
    /

    begin
      for i in 1..1000000
      loop
        insert into test2 values (to_number(to_char(systimestamp, 'yyyymmddhh24missff9')), systimestamp);
      end loop;
      commit;
    end;
    /
    Here are the results:
    select max(txndate)-min(txndate) from test1;
    Result >> 0 0:3:3.514891000
    select max(txndate)-min(txndate) from test2;
    Result >> 0 0:1:32.386923000
    It appears that Timestamp is faster than sequence... Any thought is highly appreciated...

    Interesting that your sequence timing is so slow. You say this was a non-RAC environment, but I wonder if you had Oracle linked in RAC mode even though you were running single instance - this would result in the ORDERed sequence running through RAC's "DFS Lock Handle" mechanism, which might account for the timing anomaly.
    Unfortunately your test is not particularly relevant. As DomBrooks points out there are lots of problems with sequence-based or time-based columns, especially in RAC, and most particularly if you think you want a "no-gap" sequence. On top of this, of course, your test doesn't include an index on the relevant column, and it's single user and doesn't test for any concurrency effects.
    Typical performance problems are: your RAC instances spend all their time negotiating who gets to use the next value; the index you use to enforce uniqueness suffers from massive contention on the "high-value" block unless you create a reverse-key index - at which point you have to be able to cache the entire index to minimise I/O overheads; you can hash partition the index to avoid using the reverse-key option - but that costs a lot of money if you don't already license the partitioning option.
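    A minimal sketch of the alternatives mentioned above (the sequence, table and index names are invented, the cache size and partition count are only illustrative, and the two indexes are alternatives to each other):

    -- large-cache NOORDER sequence to reduce contention, starting at 1 billion as preferred
    create sequence txn_seq start with 1000000000 cache 10000 noorder;

    -- option 1: reverse-key index to spread inserts away from the "high-value" block
    create unique index txn_pk_rev on txn_table (txnid) reverse;

    -- option 2: hash-partitioned global index (requires the Partitioning option)
    create unique index txn_pk_hash on txn_table (txnid)
      global partition by hash (txnid) partitions 16;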
    Regards
    Jonathan Lewis

  • Any Oracle best practice/standards for inter-DataCenter links for Oracle RAC

    Hello Oracle Experts,
    I am working for a customer to set up an Oracle RAC architecture hosting SAP/non-SAP applications per SLA levels (MC/BC/Standard). Currently my network team needs a calculation to decide whether we should go for one (1), two (2) or three (3) 10Gig links for the inter-DC (data-center) connectivity for Oracle RAC. Below is additional background:
    •     Porting all client SAP/Non-SAP Oracle databases to new 2 data-centers.
    •     There will be 10 blades (4x BL680s and 6x BL460s) in each DC (can scale-up/out later on).
    •     Clusters architecture to support Extended/Stretched RAC cluster feature
    •     Clusters 2-node each(1-datacenter1, 1-datacenter2) and nodes distributed across 2 x c7000 such that no cluster has more than one node in an enclosure.
    •     Each node will have - 4 NIC ports ( 2 x public and 2 x private) , 2 dual-port HBA
    •     Oracle ASM/ACFS (ASM Cluster File System), Voting Disk, OCR and Database files
    •     the versions are Oracle 11g RAC, Oracle 10g RAC and Oracle 9i (for DataGuard/Standby) on RHEL 6 on Proliant Blades (x86) + BladeMatrix
    My network colleagues are considering using DWDM across the 2 DCs (given the lower cost?). I am still looking for any Oracle/industry best practices around this, and for a calculation to support the decision.
    Many Thanks in advance..
    Regards,
    Abhijit

    Hi ,
    There is no specific set of steps / practices for batch loading content to UCM. It depends very much on how much content the user has to load to UCM and how well the server is configured in terms of performance.
    You can get more details from the following documentation link : http://docs.oracle.com/cd/E21043_01/doc.1111/e10792/c02_settings009.htm
    Thanks,
    Srinath

  • Require official Oracle Best Practices about PSU patches

    A customer complained about the following
    Your company statements are not clear...
    On your web page - http://www.oracle.com/security/critical-patch-update.html
    The following is stated!
    Critical Patch Update
    Fixes for security vulnerabilities are released in quarterly Critical Patch Updates (CPU), on dates announced a year in advance and published on the Oracle Technology Network. The patches address significant security vulnerabilities and include other fixes that are prerequisites for the security fixes included in the CPU.
    The major products patched are Oracle Database Server, Oracle Application Server, Oracle Enterprise Manager, Oracle Collaboration Suite, Oracle E-Business Suite, PeopleSoft Enterprise Tools, PeopleSoft CRM, JD Edwards EnterpriseOne, JD Edwards OneWorld XE, Oracle WebLogic Suite, Oracle Communications and Primavera Product Suite.
    Oracle recommends that CPUs be the primary means of applying security fixes to all affected products as they are released more frequently than patch sets and new product releases.
    BENEFITS
    * Maximum Security—Vulnerabilities are addressed through the CPU in order of severity. This process ensures that the most critical security holes are patched first, resulting in a better security posture for the organization.
    * Lower Administration Costs—Patch updates are cumulative for many Oracle products. This ensures that the application of the latest CPU resolves all previously addressed vulnerabilities.
    * Simplified Patch Management—A fixed CPU schedule takes the guesswork out of patch management. The schedule is also designed to avoid typical "blackout dates" during which customers cannot typically alter their production environments.
    PROGRAM FEATURES
    * Cumulative versus one-off patches—The Oracle Database Server, Oracle Application Server, Oracle Enterprise Manager, Oracle Collaboration Suite, Oracle Communications Suite and Oracle WebLogic Suite patches are cumulative; each Critical Patch Update contains the security fixes from all previous Critical Patch Updates. In practical terms, the latest Critical Patch Update is the only one that needs to be applied if you are solely using these products, as it contains all required fixes. Fixes for other products, including Oracle E-Business Suite, PeopleSoft Enterprise Tools, PeopleSoft CRM, JD Edwards EnterpriseOne, and JD Edwards OneWorld XE are released as one-off patches, so it is necessary to refer to previous Critical Patch Update advisories to find all patches that may need to be applied.
    * Prioritizing security fixes—Oracle fixes significant security vulnerabilities in severity order, regardless of who found the issue—whether the issue was found by a customer, a third party security researcher or by Oracle.
    * Sequence of security fixes—Security vulnerabilities are first fixed in the current code line. This is the code being developed for a future major release of the product. The fixes are scheduled for inclusion in a future Critical Patch Update. However, fixes may be backported for inclusion in future patch sets or product releases that are released before their inclusion in a future Critical Patch Update.
    * Communication policy for security fixes—Each Critical Patch Update includes an advisory. This advisory lists the products affected by the Critical Patch Update and contains a risk matrix for each affected product.
    * Security alerts—Security alerts provide a notification designed to address a single bug or a small number of bugs. Security Alerts have been replaced by scheduled CPUs since January 2005. Unique or dangerous threats can still generate Security Alert email notifications through MetaLink and the Oracle Technology Network.
    Nowhere in that statement is the Patch Set Update even mentioned. If Oracle intends to recommend to all customers that Patch Set Updates are the recommended means of Patching for Security and Functionality then it should be stated so here!
    Please clarify!
    Where can I find the current information, so that I can use the official Oracle statement as a reference for my Enterprise Practices and Standards document? The individual patch package references you are giving me do not state an Oracle-recommended Best Practice; they only speak to the specific patch package they describe. These do not help me in making an Enterprise statement of Practices and Standards.
    I need to close the process out to capture a window of availability for Practices and Standards approval.
    Do we have any Best Practice document about PSU patches available for customers?

    cnawrati wrote:
    A customer complained about the following
    Your company statements are not clear...
    On your web page - http://www.oracle.com/security/critical-patch-update.html
    Who is the "your" to which you are referring?
    <snip>
    Nowhere in that statement is the Patch Set Update even mentioned. If Oracle intends to recommend to all customers that Patch Set Updates are the recommended means of Patching for Security and Functionality then it should be stated so here!
    Um. OK.
    Please clarify!
    Of whom are you asking for a clarification?
    Where can I find the current information so that I can use the official Oracle statement as a reference for my Enterprise Practices and Standards document? The individual patch package references you are giving me ...
    Who is the "you" to which you refer?
    ... do not state Oracle recommended Best Practice, they only speak to the specific patch package they describe. These do not help me in making an Enterprise statement of Practices and Standards.
    I need to close the process out to capture a window of availability for Practices and Standards approval.
    Be our guest.
    Do we ...
    What do you mean "we", Kemosabi?
    ... have any Best Practice document about PSU patches available for customers?
    This is a very confusing posting, but overall it looks like you are under the impression that this forum is some kind of channel for communicating back to Oracle Corp anything that happens to be on your mind about their corporate web site and/or policies and practices. Please be advised that this forum is simply a platform provided BY Oracle Corp as a peer-operated user support group. No one here is responsible for anything on any Oracle web site. No one here is responsible for any content anywhere in the oracle.com domain, outside of their own personal postings on this forum. In other words, you can complain all you want about Oracle's policy, practice, and support, but "there's no one here but us chickens."

  • Oracle Best practices for changing  Byte to Char on Varchar2 columns

    Dear Team,
    Application Team wants to change Byte to Char on Varchar2 columns to accommodate multi-byte characters on a couple of production tables.
    We wanted to know: is it safe to have a mixture of BYTE and CHAR semantics in the same table? I have read in a couple of documents that it is good practice to avoid using a mixture of BYTE and CHAR semantics columns in the same table.
    What happens if we have a mixture of BYTE and CHAR semantics columns in the same table?
    Do we need to gather stats & rebuild indexes on the table after these column changes?
    Thanks in Advance !!!
    SK

    Application Team wants to change Byte to Char on Varchar2 columns to accommodate multi-byte characters on a couple of production tables.
    We wanted to know: is it safe to have a mixture of BYTE and CHAR semantics in the same table? I have read in a couple of documents that it is good practice to avoid using a mixture of BYTE and CHAR semantics columns in the same table.
    No change is needed to 'accommodate Multibyte characters'. That support has NOTHING to do with whether a column is specified using BYTE or CHAR.
    In 11g the limit for a VARCHAR2 column is 4000 bytes, period. If you specify CHAR and try to insert 1001 characters that each take 4 bytes you will get an exception since that would require 4004 bytes and the limit is 4000 bytes.
    In practice the use of CHAR is mostly a convenience to the developer when defining columns for multibyte characters. For example for a NAME column you might want to make sure Oracle will allocate room for 50 characters REGARDLESS of the actual length in bytes.
    If you provide a name of 50 one byte characters then only 50 bytes will be used. Provide a name of 50 four byte characters and 200 bytes will be used.
    So if  that NAME column was defined using BYTE how would you know what length to use for the column? Fifty BYTES will seldom be long enough and 200 bytes SEEMS large since the business user wants a limit of FIFTY characters.
    That is why such columns would typically use CHAR; so that the length (fifty) defined for the column matches the logical length of the number of characters.
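    A minimal sketch of that point (the table and column names are made up for illustration):

    create table customer_demo (
      name       varchar2(50 char),  -- room for 50 characters, whatever their byte length (up to the 4000-byte limit)
      legacy_ref varchar2(50 byte)   -- room for 50 bytes
    );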
    What happens if we have mixture of BYTE and CHAR semantics columns in the same table?
    Nothing happens - Oracle couldn't care less.
    Do we need to gather stats & rebuild indexes on the table after these column changes .
    No - not if by 'need' you mean simply because you made ONLY that change.
    But that begs the question: if the table already exists, has data, and has been in use without there being any problems, then why bother changing things now?
    In other words: if it ain't broke why try to fix it?
    So back to your question of 'best practices'
    Best practices is to set the length semantics at the database level when the database is first created and to then use that same setting (BYTE or CHAR) when you create new objects or make DDL changes.
    Best practices is also to not fix things that aren't broken.
    See the 'Length Semantics' section of the globalization support guide for more best practices
    http://docs.oracle.com/cd/E11882_01/server.112/e10729/ch2charset.htm#i1006683

  • Oracle Best Practices in 10g When Disabling NUMA

    We have both Linux and Solaris 10 DB servers running 10g on them with NUMA. What is Oracle's best practice for dealing with NUMA in hardware when it can be disabled?

    user10387007 wrote:
    We have both Linux and Solaris 10 DB servers running 10g on them with NUMA. What is Oracle's best practice for dealing with NUMA in hardware when it can be disabled?

    How are you using NUMA (Non-Uniform Memory Access)?
    NUMA can be implemented at CPU level - in which case CPU affinity becomes important. NUMA can be used across an Interconnect (e.g. SCSI over RDMA protocol).
    So it depends on what you mean by NUMA and how you are using it (and whether or not it is used by the Oracle s/w stack itself).

  • Oracle Best Practices / Guidelines regarding Cleaning TEMP files

    Hi folks,
    Can any one help me with a set of steps / guidelines or best practices to clean TEMP files from OBIEE servers (our PROD environment)?
    Does OBIEE perhaps take care of this for you automatically, and if so, how does that process work?
    Thanks a lot for your time and attention and hope to hear from you soon.

    TEMP files are deleted from the server once a user logs out. But there is a chance that a TEMP file does not get deleted automatically, for example when the user logs out of the system before the TEMP file has been generated completely. In that case the temp files remain on the server, and a bounce of the services cleans them up.
    The best practice would be to create a script to empty out the temp directory during the start up of the services.
    -Amith.

  • BOBJ on Oracle Best Practices - Schema vs New DB

    Hi all,
    My company is installing Business Objects 4.1 as part of an overall (new) SAP project.  I am the Oracle DBA for the project and new to the SAP/Business Objects world.
    My question is how do most people separate the different pieces of the overall Business Objects install on the database level?  I was thinking we would have one database with different schemas for Audit, CMS, BODS and Information Steward.  I've heard from our integrator that some companies will have a separate database for each. 
    Not knowing the system that well, adding separate databases seems unnecessary.  While I would have some additional hoops to jump through if a restore were needed for just one of the schemas, it still seems better than adding the additional DBs.
    Hopefully this makes sense.
    Thanks.

    Hi Michael,
    I agree with Mani's suggestion as the recommended approach. However, it varies from org to org.
    If you insist on continuing with one database, then you would have to scale the Oracle DB accordingly, considering the transactional nature of each service highlighted (CMS, AUDIT, BODS).
    Since these are transactional, you would need to tune the database during setup (DB buffer cache, undo segments, shared pool).
    I'm assuming you would be using 11g R2 on the Linux operating system (mostly preferred). Hence:
    how is the Oracle DB deployed (single-instance architecture or RAC) on which you would create the DB schemas for the above services?
    In either of the above cases you should set up the DB users for BODS and CMS to connect to the Oracle database in "Dedicated server connection" mode and not in "Shared server connection" mode. Hence there is little latency, provided the server platforms are on the same network or on the same server. Audit could connect in "Shared server connection" mode.
    In an Oracle RAC deployment I don't foresee any issues, as your transactions could benefit from the "Streams" feature in case of an Oracle instance failure.
    Starting with Oracle 11g, you may benefit from automatic PGA memory management, which can be implemented using:
    • PGA_AGGREGATE_TARGET initialization parameter for the PGA
    • MEMORY_TARGET initialization parameter for the combined total memory for SGA & PGA
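    A minimal sketch of setting these (the sizes are assumptions, not recommendations):

    -- automatic memory management for SGA and PGA combined (takes effect after restart)
    alter system set memory_target = 8G scope = spfile;
    -- or manage the PGA target separately
    alter system set pga_aggregate_target = 2G scope = both;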
    I could go on and on. Hope I was able to shed some light here. Please refer to Performance Optimization in BODS for more information regarding BODS that might also assist you.
    Regards,
    Sandeep

  • Oracle Best Practices Discussion Pros/Cons of using Synonyms

    Please share your experience given the Pros/Cons of Developing Enterprise Database Applications using public and private Synonyms.
    My recommendation to developers on my team is to avoid using Public Synonyms in their code and instead Fully Qualify the database object by schema owner.
    Pros of avoiding public synonyms: when you drop a schema, the public synonyms created for its objects are not dropped along with it (they are left dangling). Therefore, if you have to use a synonym, make it private and not public.
    Please share your experience!
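    A minimal sketch of the two approaches being compared (schema and object names are made up):

    -- fully qualify the object in application code
    select * from app_owner.customers;

    -- or create a private synonym in the application schema rather than a public one
    create synonym app_user.customers for app_owner.customers;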

    Fahd Mirza wrote:
    Well, I rarely use public synonyms, and then only in the case of db links. For example, I have a scenario in which I have hooked up a MS SQL Server database with an Oracle database through Heterogeneous Services. I am accessing the SQL Server table in real time, through HS, in Oracle, and then from this Oracle environment I have created db links to many other interested databases. In those interested databases, I have created public synonyms over those db links.
    It's so transparent for the interested databases. But I have documented this whole configuration in great detail for any upcoming DBA, just in case I leave, or expire, or anything.

    Sounds interesting - this might be worth 'cleaning' and publishing to OTN's Articles. If you are interested in pursuing this, you might want to contact Justin (Community Forum) or myself ([email protected])
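    A hedged sketch of that kind of setup (the link, schema and table names are invented for illustration):

    -- on one of the "interested" databases: a db link to the database that fronts SQL Server via HS
    create database link hs_hub connect to app_owner identified by secret using 'HS_HUB_TNS';

    -- public synonym so local code can reference the remote table transparently
    create public synonym sqlserver_orders for app_owner.orders@hs_hub;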

  • Best Practice: Configuring Windows Azure Management Services

    I have 3 Websites, 1 Blob Storage, and 1 SQL Server that I would like to configure for basic stability and performance monitoring. I know I can set up alerts through Management Services based on various metrics. My question is, can someone give me a recommended set of metrics that are good baselines?
    It is nice that Azure is so customizable, but frankly I have no idea how much CPU Time in milliseconds over a given evaluation window is appropriate. Or how many Http Server Errors? More than 0 seems bad, no? Wouldn't I want to know of any/all errors?
    So if anyone has some "best practice" metrics for me, that would be really helpful.
    Thanks.

    Hi,
      >> can someone give me a recommended set of metrics that are good baselines?
    Actually, many metrics depend on your scenario. For instance, if there are a lot of concurrent requests, or if a single request is expected to take some heavy computation, then high CPU usage is expected, so it is difficult to give you a specific number.
    In general, you may want the CPU usage of a web server to be as high as possible (idle CPU costs money but does not provide valuable results), but low enough that additional concurrent requests can still be served without too much delay. In Windows Azure, you may want to set up auto-scaling so that if CPU usage is high enough during a period, you create a new instance, and if CPU usage is low enough during a period, you remove an instance. You may also want to use response time in addition to CPU to decide whether you need to add/remove an instance.
      >> Or how many Http Server Errors? More than 0 seems bad, no? Wouldn't I want to know of any/all errors?
    As for server errors, in general you want to be notified of all errors (> 0), since they are unexpected and need to be investigated. But if in your scenario you expect a certain level of server errors, then it is fine to use a larger threshold.
    Best Regards,
    Ming Xu

  • Best Practice for Deploying ADF application

    I am tasked with developing a best or preferred practice for deploying a large ADF application. Background: we are in the process of redeveloping a UI for a large system. We have broken the system down into subsystems. Each of these subsystems' UI will be an ADF application(?). This is a move from a MS .Net front end. The backend (batch processes etc.) is being developed in Java. So my question is: if I have several ADF projects for each subsystem and common components that they all will use - what is the best practice to compile, package and deploy? The deployment will be to a WebLogic server or servers (cluster).
    We have a team of at least 40-50 developers worldwide, so we are looking for an automated build and deploy and would like to follow Oracle best practice. So far I have read Deploying ADF Applications (http://download.oracle.com/docs/cd/E15523_01/web.1111/e15470/deploy.htm#BGBJHGFH) and have followed the links. I have also looked at the ADF evangelist blogs - lots of chatter about ojdeploy. My concern about ojdeploy is that dependent files are also being compiled at the same time. I expected that we want shared dependent files compiled only once (is that a valid concern?).
    So then, when we build the source out of Subversion (ojdeploy? Ant?), what is the best practice to deploy to a WebLogic server (WLST, admin console) - again, we want it to be automated.
    Thank you in advance for replies.
    RK

    Rule 1: Never use the "Automatically Expose UI Componentes in a New Managed Bean" option, create your bindings manually;
    Rule 2: Rule 1 is always right;
    Rule 3: When in doubt, refer to rule 2.
    You may also want to check out:
    http://groups.google.com/group/adf-methodology
    And:
    http://www.oracle.com/technology/products/jdev/collateral/4gl/papers/Introduction_Best_Practices.pdf

  • Statspack Best Practices

    Hello Everyone:
    Common sense tells me that (within reason) statspack snapshots should be run fairly frequently. I have a set of users who are challenging that notion, saying that Statspack is spiking the system and slowing them down, and so they want me to take snapshots only every 12 hours.
    I remember seeing a document (I thought it was on MetaLink, but I dunno...) that spoke of best practices for Statspack snapshots. My customers want to limit me to one snapshot every 12 hours, and I contend that I might as well not run it with that window.
    Can someone point me to some best practice or other documentation that will support my contentions that:
    1) Statspack is NOT a resource hog, and
    2) twice-a-day is not going to provide meaningful data.
    Thanks,
    Mike
    P.S. If I'm all wet, and you know it, I'd like to see that documentation, too!

    Hi Mike,
    saying that Statspack is spiking the system and slowing them down
    I wrote both of the Oracle Press STATSPACK books and I've NEVER seen STATSPACK cause a burden. Remember, a "snapshot" is a simple dump of the X$ memory structures into tables, very fast . . .
    they want me to only take snapshots every 12 hours.
    Why bother? STATSPACK and AWR reports are elapsed-time reports, and long-term reports are seldom useful . . . .
    An important thing to remember is that even if statistics are gathered too frequently with STATSPACK, reporting can always be done on a larger time window. For example, if snapshots are at five-minute intervals and there is a report that takes 30 minutes to run, that report may or may not be slow during any given five-minute period.
    After looking at the five-minute windows, the DBA can decide to look at a 30-minute window and then run a report that spans six individual five-minute windows. The moral of the story is to err on the side of sampling too often rather than not often enough.
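    A minimal sketch of taking and scheduling snapshots (the 15-minute interval is only an example; Oracle's supplied spauto.sql script schedules the same job hourly):

    -- take a snapshot right now
    exec statspack.snap;

    -- schedule a snapshot every 15 minutes via DBMS_JOB
    variable jobno number
    begin
      dbms_job.submit(:jobno, 'statspack.snap;', sysdate, 'sysdate + 15/1440');
      commit;
    end;
    /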
    I have over 600 pages dedicated to STATSPACK and AWR analysis at the link below, if you want a super-detailed explaination:
    http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm
    I'm not as authoritative as the documentation, but even hourly snapshot durations can cause loss of performance details.
    Ah, this Oracle Best Practices document may help:
    http://www.oracle.com/technology/products/manageability/database/pdf/ow05/PS_S998_273998_106-1_FIN_v1.pdf
    "By default, every hour a snapshot of all workload and statistics information is taken and stored in the AWR. The data is retained for 7 days by default, and both snapshot interval and retention settings are user-configurable."
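    For AWR, the interval and retention mentioned in that quote can be changed through a supported API; a sketch (the values are examples only):

    -- 30-minute snapshots retained for 30 days (both values in minutes)
    begin
      dbms_workload_repository.modify_snapshot_settings(
        interval  => 30,
        retention => 30 * 24 * 60);
    end;
    /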
    Hope this helps. . .
    Donald K. Burleson
    Oracle Press author

  • OIM 11g - Best Practices

    Hello,
    A customer of ours wants to use one OIM instance as both the production and the quality environment. From my point of view this is a big mistake. I recommended building a separate environment.
    Is there any best-practices documentation from Oracle that describes separating the environments (development, test, quality, production, ...)?
    Thanks

    The enterprise deployment guide discusses an Oracle best-practices blueprint based on proven Oracle high-availability technologies and recommendations for Oracle Fusion Middleware.
    Oracle® Fusion Middleware Enterprise Deployment Guide for Oracle Identity Management 11g Release 2 (11.1.2.1) - …
