Oracle OLAP best practice and DB11g parameter suggestion

Hi All ,
We have a huge partitioned fact table with nearly 1 billion rows (15 GB export dump), range-partitioned by month and holding 24 months of data. Do you have any special recommendations for parameters (AWM etc.)?
Or any recommendations on cube creation strategies?
Also, any recommendations on cube partitioning and Database 11g parameter (init.ora) changes for an OLAP 11g cube?
Thanks in advance,
Debashis

There are recommended parameters and recommended strategies in the Oracle documentation.
For starters, I recommend these guides:
VLDB and Partitioning Guide
Data Warehousing Guide
OLAP User's Guide
All of which can be found at:
http://www.oracle.com/pls/db112/portal.all_books
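
As a very rough starting point while you read those, the memory and parallelism parameters below are the ones that most often matter for 11g cube builds. The values are illustrative assumptions only, not sized for your hardware, so validate them against the guides above:

    -- Hypothetical starting values; size these to your own server.
    ALTER SYSTEM SET sga_target = 8G SCOPE = SPFILE;
    ALTER SYSTEM SET pga_aggregate_target = 4G SCOPE = SPFILE;   -- cube builds are PGA-intensive
    ALTER SYSTEM SET parallel_max_servers = 16 SCOPE = SPFILE;   -- parallel build processes
    ALTER SYSTEM SET job_queue_processes = 16 SCOPE = SPFILE;    -- AWM submits builds as jobs

    -- A build can then be run in parallel; the cube name here is hypothetical:
    EXEC DBMS_CUBE.BUILD('MY_SALES_CUBE', parallelism => 4);

For the cube itself, the OLAP User's Guide covers partitioning the cube on a level of a dimension; partitioning on the month level of your time dimension would mirror the table partitioning and keep builds incremental.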

Similar Messages

  • Oracle BPM Best Practices

    Hi all,
    Anybody has any information on the Oracle BPM Best Practices?
    Any guide?

    All,
    I was trying to find a developer's guide for using Oracle BPM Suite (11g). I found the one at the following link; however, it looks like a pretty detailed one...
    http://download.oracle.com/docs/cd/B31017_01/integrate.1013/b28981/toc.htm
    Can someone help me find any other flavors of the developer's guide? I am looking for the following...
    1. Methods of work - Best Practices for design and development of BPM process models.
    2. Naming Conventions for Process Modeling - Best Practices
    3. Coding standards for Process Modeling (J Developer)
    4. Guide with FAQ's for connecting / Publishing Process Models to the MDS Database.
    5. Deployment Standards - best practices....
    6. Infrastructure - Recommendations for Scale out deployment in Linux v/s Windows OS.
    Regards,
    Dinesh Reddy

  • Oracle 10G Best practice Installation

    Hi all,
    Does somebody have a document on doing Oracle 10g tuning in Solaris 10?
    Thanks

    oops sorry, that's best practices and not tuning. But there may be some stuff in there.

  • Best Practices and Usage of Streamwork?

    Hi All -
    Is this Forum a good place to inquire about best practices and use of Streamwork? I am not a developer working with the APIs, but rather have setup a Streamwork Activity for my team to collaborate on our activities.
    We are thinking about creating a sort of FAQ on our team activity and I was thinking of using either a table or a collection for this. I want it to be easy for team members to enter the question and the answer (our team gets a lot of questions from many groups and over time I would like to build up a sort of knowledge base).
    Does anyone have any suggestions for such a concept in StreamWork? Has anyone done something like this and can share experiences?
    Please let me know if I should post this question in another place.
    Thanks and regards,
    Rob Stevenson

    Activities have a limit of 200 items that can be included.  If this is the venue you wish to use,  it might be better to use a table rather than individual notes/discussions.

  • Coherence Best Practices and Performance

    I'm starting to use Coherence and I'd like to know if someone could point me to some docs on best practices and performance optimization when using it.
    BTW, I haven't had the time to go through the entire Oracle documentation.
    Regards

    Hi
    If you are new to Coherence (or even if you are not that new), one of the best things you can do is read this book: http://www.packtpub.com/oracle-coherence-35/book (I know it says Coherence 3.5 and we are currently on 3.7, but it is still very relevant).
    You don't need to go through all the documentation, but at least read the introductions and try out some of the examples. You need to know the basics; otherwise it is harder for people to understand what you want or to give you detailed enough answers to your questions.
    Performance optimization depends a lot on your use cases and what you are doing; there are a number of things you can do with Coherence to help performance, but as with anything there are trade-offs. Coherence on the server side is a Java process, so when tuning or sorting out performance issues I spend a lot of time with the usual Java tools, such as VisualVM (or JConsole), GC tuning, thread dumps, and stack traces.
    Finally, there are plenty of people on these forums happy to answer your questions in return for a few forum points, so just ask.
    JK

  • Quick question regarding best practice and dedicating NICs for traffic separation

    Hi all,
    I have a quick question regarding best practice and dedicating NICs to separate traffic for FT, NFS, iSCSI, VM traffic, etc. I get that it's best practice to try to separate traffic where you can, especially for things like FT, but I wondered if there is a preferred method for achieving this. What I mean is...
    -     Is it OK to have everything on one switch but set each respective portgroup to have a primary and failover NIC, i.e. FT, iSCSI and all the others fail over (this would sort of give you a backup in situations where you have limited physical NICs)?
    -    Or should I always aim to separate things entirely, with their own respective NICs and their own respective switches?
    During the VCAP exam, for example (not knowing in advance how many physical NICs will be available to me), how would I know which traffic I should segregate on its own separate switch? Is there some sort of ranking order of priority/importance? FT, for example, I would rather not put on its own dedicated switch if I could only afford to give it a single NIC, since that seems like a failover risk.

    I know the answer to this probably depends on how many physical NICs you have at your disposal, but I wondered if there are any golden 100% rules; for example, must FT absolutely be on its own switch with its own NICs, even at the expense of reduced resiliency should the absolute worst happen? Obviously I know it's also best practice to separate NICs by vendor and hosts by chassis and switch, etc.

  • What is the best practice and Microsoft recommended procedure for placing FSMO roles on a Primary Domain Controller (PDC) and an Additional Domain Controller (ADC)?

    Hi,
    I have Windows Server 2008 Enterprise and two domain controllers in my company:
    Primary Domain Controller (PDC)
    Additional Domain Controller (ADC)
    My PDC went down due to hardware failure, but I got a chance to bring it back up and transferred the five FSMO roles from the PDC to the ADC.
    Now my PDC is rectified and up with the same configuration and settings (I did not install a new OS or domain controller on the existing PDC server).
    Finally, I want to move the FSMO roles back from the ADC to the PDC, so the PDC is up and operational as primary (before the disaster my PDC held all five FSMO roles).
    Here I want to know the best practice and the Microsoft recommended procedure for the placement of FSMO roles on the PDC and ADC.
    If the primary DC fails, the additional DC should take over automatically without any problem in the live environment.
    For example, should the FSMO role distribution between both servers be as follows?
    Primary Domain Controller (PDC) should contain:
    Schema Master
    Domain Naming Master
    Additional Domain Controller (ADC) should contain:
    RID Master
    PDC Emulator
    Infrastructure Master
    Please let me know the best practice and Microsoft recommended procedure for the placement of FSMO roles.
    I will be waiting for your valuable comments.
    Regards,
    Muhammad Daud

    > Here I want to know the best practice and the Microsoft recommended procedure for the placement of FSMO roles on the PDC and ADC.
    There is a good article I would like to share with you: http://oreilly.com/pub/a/windows/2004/06/15/fsmo.html
    For me, I do not really see a need to have FSMO roles on multiple servers in your case. I would recommend keeping it simple and having a single DC hold all the FSMO roles.
    > If the primary DC fails, the additional DC should take over automatically without any problem in the live environment.
    No, this is not true. Each FSMO role is unique, and if a DC fails, the FSMO roles are not transferred automatically.
    There are two approaches that can be followed when an FSMO role holder is down:
    1. If the DC can be recovered quickly, I recommend taking no action.
    2. If the DC will be down for a long time or cannot be recovered, I recommend that you seize the FSMO roles and do a metadata cleanup.
    Attention! For (2), the old FSMO holder should never be brought up and online again once the FSMO roles have been seized. Otherwise your AD may face huge impacts and side effects.

  • Best Practice For Database Parameter ARCHIVE_LAG_TARGET and DBWR Checkpoints

    Hi,
    For best practice, I need to know the recommendation or guideline concerning these two database parameters.
    I found that for ARCHIVE_LAG_TARGET, Oracle recommends setting it to 1800 sec (30 min).
    Maybe someone can guide me on these two parameters...
    Cheers

    Dear unsolaris,
    First of all, if you want to track full and incremental checkpoints, set the LOG_CHECKPOINTS_TO_ALERT parameter to TRUE. You will then see the checkpoint SCNs and completion times in the alert log.
    A full checkpoint is triggered when a log switch happens, and the checkpoint position in the controlfile is written to the datafile headers. For just a really tiny amount of time the database can be consistent even though it is open and in read/write mode.
    The ARCHIVE_LAG_TARGET parameter is disabled (set to 0) by default. Here is the definition of that parameter:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams009.htm
    If you want to set this parameter, Oracle recommends 1800 as you have said. This can vary from database to database, and it is better to verify it by testing in your own environment.
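    Both parameters are dynamic, so they can be changed online. A minimal sketch, assuming the 30-minute lag target discussed above:
      -- Log checkpoint begin/end messages to the alert log
      ALTER SYSTEM SET log_checkpoints_to_alert = TRUE SCOPE = BOTH;
      -- Force a log switch at least every 1800 seconds (30 minutes)
      ALTER SYSTEM SET archive_lag_target = 1800 SCOPE = BOTH;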
    Regards.
    Ogan

  • JSP Best Practices and Oracle Report

    Hello,
    I am writing an application that obtains information from the user through a JSP/HTML form and then submits it to a database. The JSP page is set up following JSP best practices, with the SQL statements, database connectivity information, and most of the Java source code in a Java bean/Java class. I want to use Oracle Reports to call this bean and generate a JSP page displaying the information the user requested from the database. Could you please offer me guidance on setting this up?
    Thank you,
    Michelle

    JSP Best Practices.
    More JSP Best Practices
    But the most important Best Practice has already been given in this thread: use JSP pages for presentation only.

  • Oracle BPEL standard, best practice and naming convention

    Hi, folks,
    Is there any standard or best practice associated with Oracle BPEL regarding development, performance, what to avoid, etc.? And is there any naming convention for process, variable, and partner link names, similar to the naming conventions for writing Java code?
    Thanks
    John

    Hi,
    Here is the best practice guide:
    http://download.oracle.com/technology/tech/soa/soa_best_practices_1013x_drop3.pdf
    Thanks & Regards,
    Dharmendra
    http://soa-howto.blogspot.com

  • Oracle Statistics - Best Practice?

    We run stats with brconnect weekly:
    brconnect -u / -c -f stats -t all
    I'm trying to understand how some of our stats are old or stale.  Where's my gap?  We are running Oracle 11g and have Table Monitoring set on every table.  My user_tab_modifications is tracking changes in just over 3,000 tables.  I believe that when those entries surpass 50% changed, then they will be flagged for the above brconnect to update their stats.  Correct?
    Plus, we have our DBSTATC entries.  A lot of those entries were last analyzed some 10 years ago.  Does the above brconnect consider DBSTATC at all?  Or do we need to regularly run the following, as well?
    brconnect -u / -c -f stats -t dbstatc_tab
    I've got tables that are flagged as stale, so something doesn't seem to be quite right in our best practice.
    SQL> select count(*) from dba_tab_statistics
      2  where owner = 'SAPR3' and stale_stats = 'YES';
      COUNT(*)
          1681
    I realize that stats last analyzed some ten years ago does not necessarily mean they are no longer good but I am curious if the weekly stats collection we are doing is sufficient.  Any best practices for me to consider?  Is there some kind of onetime scan I should do to check the health of all stats?

    Hi Richard,
    > We are running Oracle 11g and have Table Monitoring set on every table.
    The table monitoring attribute is not necessary anymore; or, better said, it is deprecated, because these metrics are controlled by STATISTICS_LEVEL nowadays. The table monitoring attribute was relevant for Oracle versions lower than 10g.
    > I believe that when those entries surpass 50% changed, then they will be flagged for the above brconnect to update their stats.  Correct?
    Correct, if BR*Tools parameter stats_change_threshold is set to its default. Brconnect reads the modifications (number of inserts, deletes and updates) from DBA_TAB_MODIFICATIONS and compares the sum of these changes to the total number of rows. It gathers statistics, if the amount of changes is larger than stats_change_threshold.
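    A rough SQL approximation of that check (the 50% default for stats_change_threshold is assumed here; yours may differ):
      SELECT m.table_name,
             m.inserts + m.updates + m.deletes AS changes,
             t.num_rows,
             ROUND(100 * (m.inserts + m.updates + m.deletes) / t.num_rows, 1) AS pct_changed
        FROM dba_tab_modifications m
        JOIN dba_tables t
          ON t.owner = m.table_owner
         AND t.table_name = m.table_name
       WHERE m.table_owner = 'SAPR3'
         AND t.num_rows > 0
         AND (m.inserts + m.updates + m.deletes) >= 0.5 * t.num_rows
       ORDER BY pct_changed DESC;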
    > Does the above brconnect consider DBSTATC at all?
    Yes, it does.
    > I've got tables that are flagged as stale, so something doesn't seem to be quite right in our best practice.
    The column STALE_STATS in view DBA_TAB_STATISTICS is calculated differently. This flag is used by the Oracle standard DBMS_STATS implementation which is not considered by SAP - for more details check the Oracle documentation "13.3.1.5 Determining Stale Statistics".
    The GATHER_DATABASE_STATS or GATHER_SCHEMA_STATS procedures gather new statistics for tables with stale statistics when the OPTIONS parameter is set to GATHER STALE or GATHER AUTO. If a monitored table has been modified more than 10%, then these statistics are considered stale and gathered again.
    STALE_PERCENT - Determines the percentage of rows in a table that have to change before the statistics on that table are deemed stale and should be regathered. The valid domain for stale_percent is non-negative numbers.The default value is 10%. Note that if you set stale_percent to zero the AUTO STATS gathering job will gather statistics for this table every time a row in the table is modified.
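    For illustration, the Oracle default mechanism described above is driven by DBMS_STATS calls roughly like these ('SOME_TABLE' is a hypothetical name; as the next paragraph notes, SAP/brconnect does not use this mechanism):
      -- Per-table staleness threshold (default is 10 percent)
      EXEC DBMS_STATS.SET_TABLE_PREFS('SAPR3', 'SOME_TABLE', 'STALE_PERCENT', '5');
      -- Regather statistics only for objects Oracle considers stale
      EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SAPR3', options => 'GATHER STALE');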
    SAP has its own automatism (like described with brconnect and stats_change_threshold) to identify stale statistics and how to collect statistics (percentage, histograms, etc.) and does not use / rely on the corresponding Oracle default mechanism.
    > Any best practices for me to consider?  Is there some kind of onetime scan I should do to check the health of all stats?
    No performance issue? No additional and unnecessary load on the system (e.g. dynamic sampling)? No brconnect runtime issue? Then you don't need to think about the brconnect implementation or special settings. Sometimes you need to tweak it (e.g. histograms, sample sizes, etc.), but then you have some specific issue that needs to be solved.
    Regards
    Stefan

  • Subversion best practices and assumptions?

    I am using SQL Developer 3.0.04, accessing Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production. I am taking over a project and setting up version control using Subversion. The project uses 4 schemas for its tables, PL/SQL objects, etc. When saving a PL/SQL object (a package specification, for example) in SQL Developer (using the File->Save As menu option), the default name is PACKAGE_NAME.sql. The schema name is not automatically made part of the file name, and in the SQL Developer preferences I do not see a way to change this.
    In viewing the version control OBE, which uses files from the HR schema, there is an implicit assumption that the files all affect the same schema. Thus the repository directory only contains files from that one schema. Is this the normative/best practice for using Subversion with Oracle and SQL Developer? I want to set up our version-control environment to minimize the likelihood of "user(programmer) error".
    Thus, in our environment, should I:
    1) set up Subversion subdirectories for each schema within my Subversion project (see the sketch after this list), given that each release (we are an Agile project, releasing every two weeks) may contain objects from multiple schemas?
    2) rename each object to include the schema name in the object?
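    For option 1, a sketch of what such a repository layout could look like (schema and file names here are hypothetical):
      trunk/
        SCHEMA_A/
          packages/
            my_package.pks
            my_package.pkb
          tables/
        SCHEMA_B/
          packages/
          tables/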
    Any advice would be gratefully appreciated.
    Vin Steele

    Hi
    It makes sense to have the HCM system in the same system as the rest of the components because:
    1) We can make use of the tight integration between the various components, most importantly Payroll - Finance.
    2) We can manage without tiresome ALE/interface development and management.
    3) Lower hardware cost (probably).
    It makes sense to have HCM in a different system because:
    1) Because of the different sequence of HRSPs/LCPs compared to other systems, we can have a separate strategy for HRSP application, independent of other components. We can save a lot of effort in regression testing, as only HR needs to be tested after patch application.
    2) In many countries there are strict data protection laws, and having HR in a separate system ensures that people from other functions do not have access to HR data even accidentally, as they will not have user IDs in the HR system.
    Hope this is enough to get you started.

  • Great new resources on OTN: best practices and OPM project polishing tips

    Two great new resources are now available on OTN.
    Oracle Policy Modeling Best Practice Guide
    A clearly laid out paper that walks through a series of valuable recommendations. It will help you design and model rules that maximize the advantages of Oracle Policy Automation's unique natural language approach, and it leverages more than 10 years of practical experience in designing and delivering enterprise policy models using OPA. Highly recommended reading for all skill levels.
    Tips for Polishing a Policy Modeling Project
    This presentation contains dozens of useful tips for delivering rich and natural-feeling interactive interviews and other decision-making experiences with OPA.
    See the links at the top of the New and Featured section on the OPA overview tab, and also at the top of the Learn more section.
    http://www.oracle.com/technetwork/apps-tech/policy-automation/overview/index.html
    Jasmine Lee has digested much of her 10 years experience into these fantastically useful new materials - and they're free!
    Davin Fifield

    Thanks, Davin, for posting this info!
    Thanks, Jasmine, these materials are very nice.

  • Oracle Cluster Best Practice

    Is there a "Best Practice" to follow concerning Oracle and clustering. Currently we are using VCS trying to cluster a box running one Oracle engine and multiple instances. This is not working well. Is it best to cluster a box running one Oracle engine and one instance?, or is the multi-instance thing ok? Also, is VCS the best solution? Please respond to my email below.
    TIA
    James Qualls
    [email protected]

    Is there a "Best Practice" to follow concerning Oracle and clustering. Currently we are using VCS trying to cluster a box running one Oracle engine and multiple instances. This is not working well. Is it best to cluster a box running one Oracle engine and one instance?, or is the multi-instance thing ok? Also, is VCS the best solution? Please respond to my email below.
    TIA
    James Qualls
    [email protected]

  • Music on Hold: Best Practice and site assignment

    Hi guys,
    I have a client with multiple sites, a large number of remote workers (on and off domain), and Lync Phone Edition devices.
    We want to deploy a custom music-on-hold file. What's the best way of doing this? I'm thinking of placing the file on a share on one of the Lync servers. However, this would mean (I assume) that clients will always try to contact the UNC path every time a call is placed on hold, which would result in site B connecting to site A for its MoH file. This is very inefficient and adds delay to placing a call on hold.
    If accessing the file from a central share is best practice, how could I do this per site? The site policies I've tried haven't worked very well. For example, if a file is on \\serverB\MoH\file.wma for a site called "London Site", what commands do I need to run to create a policy that will force clients located at that site to use that UNC path? Also, how do clients know what site they are in?
    Alternatively, I was thinking of pushing the WMA file out to local devices via a Group Policy, and then setting Lync globally to point to %systemdrive%\MoH\file.wma. Again, how do I go about doing this? Also, what would happen to LPE devices that wouldn't have the file (as they wouldn't get the GPO)?
    Any help with this would be appreciated, particularly around how users are assigned to sites and the syntax used to create a site policy for the first option. Any best practice guidance would be great!
    Thanks - Steve

    Hi StevehootMITS,
    If Lync Phone Edition or other devices don't provide endpoint MoH, you can use PSTN gateways to provide music on hold. For more information about Music on Hold, see
    http://windowspbx.blogspot.in/2011/07/questions-about-microsoft-lync-server.html
    Best regards,
    Eric
