Data Warehouse Database Optimization Parameters

Hi,
Does anyone have any standard Oracle parameter values for a small data warehouse environment?
Any suggestions are welcome.
thanks

As with any tuning problem, there is no "one size fits all" approach. The standard tuning methodology applies here as it does anywhere
- Figure out how quickly something needs to run
- If it isn't running quickly enough, figure out what is taking so much time
- Once you know what is taking so much time, figure out how to reduce the time required. That may involve a global configuration change, it may involve tuning SQL, etc.
In addition, specifying at least the Oracle version would be critical: there's a world of difference between an 8.1.7 database and an 11.1 database. If you are managing SGA and PGA separately, data warehouses generally allocate a larger fraction of the RAM to PGA than their OLTP cousins. They generally make greater use of parallelism, and they more commonly use compression, bitmap indexes, and partitioning. A sketch of what that kind of configuration can look like follows below.
Justin
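As an illustration of those generalities, here is a minimal sketch; the 16 GB server, the sizes, and the fact_sales table are all hypothetical, and the right values depend entirely on your hardware and workload:

-- A sketch only: a hypothetical 16 GB warehouse server on 10g/11g,
-- with SGA and PGA managed separately; sizes are purely illustrative.
ALTER SYSTEM SET sga_target = 6G SCOPE = SPFILE;
ALTER SYSTEM SET pga_aggregate_target = 8G SCOPE = SPFILE;
-- Warehouses also lean on parallelism; fact_sales is a made-up table.
ALTER TABLE fact_sales PARALLEL 8;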

Similar Messages

  • Support::Rebuild SCOM 2012 R2 Data Warehouse Database

    Hi Folks,
    We have 2 management servers, 1 OpsManager server, 1 data warehouse DB + reporting server, and 1 ACS server. Unfortunately, our data warehouse DB server took a hit and the database got corrupted. To add insult to injury, we don't have a valid database backup.
    We have tried to repair the database, and the repair fixed all the errors except the catalog errors, which keep sending the database into suspect mode. I would like to know if there is a way to rebuild the data warehouse database component of a SCOM 2012 R2 environment.
    Any assistance in this matter will be very helpful.
    Regards,
    nav
    Regards, Navdeep [a.k.a v-2nas]

    Hi There,
    1. Uninstall Report Server role.
    2. Blow away ReportServer and ReportServerTempDB and the Reporting Services website (or do an SSRS reset); see the sketch after these steps.
    3. Uninstall the Data Warehouse component (i.e., Delete the Data Warehouse database)
    4. Install the Data Warehouse (i.e., create the data warehouse database)
    5. Ensure SSRS is working in default state (you can get to http://localhost/reports without error). You'll need to use SSRS Configuration Tool.
    6. Install the Report Server role
    Refer : https://social.technet.microsoft.com/Forums/systemcenter/en-US/ca03b455-8c13-42a7-a810-8a63c913b527/scom-data-warehouse-database-uninstall-and-reinstall-procedures-in-production?forum=operationsmanagerreporting
    Gautam.75801
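    For step 2, a minimal sketch of the database cleanup from the SQL instance side, assuming the default SSRS database names and sysadmin rights; verify the names in your environment and stop the Reporting Services service first:
    -- Assumption: default SSRS database names; stop the Reporting
    -- Services service so no connections hold these databases open.
    DROP DATABASE ReportServer;
    DROP DATABASE ReportServerTempDB;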

  • Views or MVs in 11g for a data warehouse database

    Hello All,
    I would like to know which is the better option to go with in a DWH: views or materialized views. I have read that views should not be used. I need to know the pros and cons of views before I propose scrapping them to my team. Which is better in terms of performance?
    Thanks

    Views are not designed to improve query performance: http://download.oracle.com/docs/cd/E11882_01/server.112/e16508/schemaob.htm#i20690.
    Materialized views are designed to improve query performance: http://download.oracle.com/docs/cd/E11882_01/server.112/e16508/schemaob.htm#CFAIGHFC
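    To make the distinction concrete, here is a minimal sketch of a materialized view built for query rewrite; the sales table and its columns are hypothetical:
    -- Precomputes a monthly aggregate; with query rewrite enabled, the
    -- optimizer can transparently answer matching queries from it.
    CREATE MATERIALIZED VIEW mv_sales_by_month
      BUILD IMMEDIATE
      REFRESH COMPLETE ON DEMAND
      ENABLE QUERY REWRITE
    AS
    SELECT product_id,
           TRUNC(sale_date, 'MM') AS sale_month,
           SUM(amount)            AS total_amount
    FROM   sales
    GROUP BY product_id, TRUNC(sale_date, 'MM');
    A plain view, by contrast, stores only the query text, so it cannot reduce the work of the underlying query by itself.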

  • Data Warehouse Slow Down

    Hi,
    We run some mappings on our data warehousing system. When we shut down the database, it takes 2-3 hours to finish.
    My thinking is that when we shut down the database, all SQL needs to be compiled again. I don't know whether I am right or wrong.
    Can anyone shed light on this? It would be great if someone could suggest a possible solution.
    Platform: Oracle8i on HP UX
    Thanks
    Chetan

    Which shutdown option are you using?
    SHUTDOWN NORMAL is a system quiesce that waits for all sessions to log off - this can take a LONG time
    SHUTDOWN TRANSACTIONAL will wait for all transactions to complete. If you are doing a large load transaction, this can take a long time
    SHUTDOWN IMMEDIATE will terminate all user sessions, roll back any uncommitted transactions, and shut down. If there is a long-running transaction that needs to be rolled back, this can take a long time.
    SHUTDOWN ABORT will crash the instance, requiring instance recovery upon startup. This will shut down the database quickly.
    NORMAL and TRANSACTIONAL are the cleanest options, but may take too long.
    If your shutdown is taking too long, you can connect via another session, do a SHUTDOWN ABORT and then let the transactions roll back during the instance recovery on startup.
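    A minimal sketch of that last suggestion, from a second SQL*Plus session connected AS SYSDBA:
    SHUTDOWN ABORT
    STARTUP
    -- Crash recovery rolls forward from the online redo logs, then SMON
    -- rolls back uncommitted transactions after the database opens.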

  • Data warehouse question

    Hello Everyone,
    In data warehousing, which needs to be higher: the SGA or the PGA? I have read that it is supposed to be the PGA. Can anyone clarify this for me?
    Thanks in advance

    For any database, the amount of RAM allocated to SGA and to PGA will depend heavily on the database workload and the application(s) connecting to the database. There is no rule that states that the SGA should always be bigger than the PGA or vice versa.
    In general, data warehouses allocate a greater percentage of the available memory to PGA than OLTP databases would, because data warehouses are generally supporting queries that involve relatively large sorts, hash joins, etc., for relatively few concurrent users, and those sorts of operations require more PGA. Since data warehouses are commonly doing table scans, and blocks read by one query aren't commonly re-read by other queries, the benefit of a large SGA tends to be less than for an OLTP system.
    Whether these generalities mean that a particular warehouse allocates more RAM to PGA than to SGA in absolute terms rather than merely in relative terms compared to an OLTP database on the same hardware, though, depends entirely on the specifics of the system.
    Justin
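    A practical way to ground that decision is to ask the database itself; a minimal sketch, assuming a release new enough to expose the advice views (9i for PGA, 10g for SGA):
    -- Estimated effect of growing or shrinking the PGA:
    SELECT pga_target_for_estimate, estd_pga_cache_hit_percentage
    FROM   v$pga_target_advice;
    -- The SGA equivalent (10g and later):
    SELECT sga_size, estd_db_time
    FROM   v$sga_target_advice;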

  • Install a Data Warehouse on SUSE Linux 9.3 with Oracle 9i Release 2 (9.2.0.4)

    Dear All
    One of my customers needs a data warehouse on SUSE Linux, and I am a core DBA who has worked on production and development servers up to now. I want to know the things I need to keep in mind before configuring a data warehouse for the customer. Can you please suggest what I need to take care of?
    Any URLs or PDFs?
    Thanks in Advance
    Ravi

    But when I don't give the oracle user all rights, it isn't possible to proceed with the installation
    But if you give those rights, then it's a security hole. From your description, I guess you have environment settings similar to:
    ORACLE_BASE=/
    ORACLE_HOME=/<directory_name>
    Why are you not installing in a deeper directory such as /opt, or some directory of your own? For example:
    ORACLE_BASE=/myoracledir
    ORACLE_HOME=$ORACLE_BASE/<directory_name>
    Then chown -R oracle:dba /myoracledir.
    Then oracle will be the owner of just the /myoracledir directory and all its subdirectories.
    I just could look at the error details, but they didn't describe the error anyway.
    That's not quite true. You can find the error log in /tmp/OraInstallYYYY-MM-DD_HH_MI_SS..

  • Passing database command parameter to sub-report

    I'm trying to pass a runtime parameter from a main report to a database command parameter in a sub-report, and having some trouble.
    My main report has parameter fields (Vendor, Manufacturer, etc) that the user selects at runtime - the result set includes Item Code - that part is working fine.  Where I'm having trouble is with linking to the sub-report.  My sub-report has a stored command that takes a parameter (ItemCode) and counts the number of times it's shown up on an invoice.  When I created the command, it asked for a value for the parameter, and won't accept a blank entry.  Now, whenever I try to run the main report, it asks for the ItemCode parameter for the sub-report - which kind of defeats the purpose.  I've tried linking from ?Pm-OITM.ItemCode in the main report, but the Subreport Links window doesn't show the Command parameter to link to.
    How do I take a field from the main report and pass it to a sub-report to be used as a parameter in a stored database command? 
    I'm running CR 2008, and SAP B1 2007 PL 42 on MS SQL 2005, and I'm still pretty new at CR, so details help.
    Thanks.

    Go to Change Subreport Links and add Item Code from the main report; you can then see the subreport's stored procedure parameter in the Subreport Parameters list. Select the parameter and do not select any field from the subreport.
    You will be able to see the stored procedure parameter in the list only when Item Code and the parameter are of the same data type.
    Regards,
    Raghavendra

  • Modifying Memory Optimization parameter for BPEL process in SOA 11g

    Hello
    I have configured the memory optimization parameter for my BPEL process in composite.xml (11g).
    This is what I have in composite.xml:
    <property name="bpel.config.inMemoryOptimization">false</property>
    How do we modify this parameter in the EM console at runtime? I changed this property to "true" using the System MBean Browser, but it wasn't taking effect. I thought the SOA server had to be restarted (similar to what we used to do in 10g), but when I restarted the SOA server, the parameter went back to whatever value was in composite.xml, ignoring the change I made in the System MBean Browser.
    Please share your thoughts.
    Thanks in advance.
    Raja

    Deploying a newer version is not an option, as the endpoints could change (not sure if it would in 11g, but in 10g it does) and also, our service consumers will be pointing to the older version.
    As mentioned above, if clients are using the URL without a version, the call will be forwarded to the default version of the composite internally. No manual tweaking is required for this. Just make sure that while deploying the new version you mark it as the default.
    Besides, we report on service metrics and having multiple versions just complicates things.
    Not at all. If you are not using the versioning feature, you are really underutilizing Oracle SOA 11g. Remember that metrics can be collected for a single composite with the same effort, irrespective of the number of composite versions deployed. Only a few product tables refer to the version while storing the composite name; the rest use only the composite name without the version. I do not know how you are collecting service metrics, but we use DB jobs for the same and it works perfectly with any number of composites having multiple versions deployed.
    The idea is to do some debugging and collect audit trail in case there is a production issue by disabling the inMemoryOptimization parameter. This is a live production environment and deploying whenever we want is not even an option for us, unfortunately.
    Why not debug by increasing the log level? Diagnostic logs are the best option for debugging an issue, even in production. To get an audit trail you may reproduce the issue in lower environments. I think no organization will allow redeployments just for debugging an issue in production unless it is too critical an issue to handle.
    Is this not supported in 11g? If it isn't, it does seem like a bug to me.
    You may always go ahead and raise a case with support.
    Regards,
    Anuj

  • Where is the database initialization parameter file (init.ora?) located?

    I have E-Business Suite R12 installed on Windows XP.
    I want to change utl_file_dir path.
    I know that I have to change the database initialization parameter file (init.ora?).
    Does anybody know where the file is located?
    When I search, I find two files (init.ora and init.ora.txt).
    Which file do I need to make the changes in?
    Thanks in advance

    The initialization file for the database is located in the $ORACLE_HOME/dbs directory on Unix/Linux (%ORACLE_HOME%\database on Windows) and is called init<SID>.ora
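    A minimal sketch of the utl_file_dir change itself; the directory is an example only, and since utl_file_dir is a static parameter, the database must be restarted for it to take effect:
    -- In init<SID>.ora (directory is illustrative):
    utl_file_dir = 'C:\oracle\utl'
    -- Or, if the instance runs from an spfile:
    -- ALTER SYSTEM SET utl_file_dir='C:\oracle\utl' SCOPE=SPFILE;
    -- Then restart: SHUTDOWN IMMEDIATE; STARTUP;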

  • SharePoint 2013 in the cloud connecting to an in-house database

    I am in a company that recently implemented Office 365 so that we can use SharePoint 2013 in the cloud. I am trying to determine how to set up a web service that will work with SharePoint 2013 in the cloud.
    The issue is that the databases the web service will call are on premises, while the workflows are in the cloud on SharePoint 2013. I am assuming there will be firewall and/or security issues that will need to be dealt with.
    I am thinking of creating the web service by using the following link as a reference point:
    The link is called 'Working with Web Services in SharePoint 2013 Workflows using SharePoint Designer 2013':
    http://msdn.microsoft.com/en-us/library/office/dn567558(v=office.15)
    Thus I am wondering if you can tell me, or point me to a URL that shows, how to make the following work:
    a. Have a SharePoint 2013 cloud workflow call a web service,
    b. Have the web service obtain the data from an in-house database?

    Hi,
    According to your post, my understanding is that you want to use a web service with SharePoint Online.
    You can call a web service from a cloud workflow.
    You can obtain the data from an in-house database using a web service.
    For more information, you can refer to:
    Call Web Services From Workflows on SharePoint Online
    SharePoint Online and External Data using JSONP
    Best Regards,
    Linda Li
    TechNet Community Support

  • Data Warehouse Daily Maintenance + Up and Running 24/7

    Hello fellows
    I have started working in a data warehouse environment and need suggestions regarding daily maintenance and running the DB 24/7. What steps should I take to accomplish this? Good notes or references would also be welcome.

    You'll need to manage loads into the warehouse, so I would suggest some metric-capturing tables to assist there. Backups, and monitoring of the backups, will need to be in place. Statistics gathered on a predetermined schedule (see the sketch below). Aggregations calculated via materialized views. General monitoring of the environment (disk space, CPU, memory).
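    For the statistics piece, a minimal sketch using DBMS_SCHEDULER (10g and later); the job name, schema, and schedule are illustrative:
    BEGIN
      DBMS_SCHEDULER.create_job(
        job_name        => 'GATHER_DW_STATS',      -- hypothetical name
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'BEGIN DBMS_STATS.gather_schema_stats(''DW_OWNER''); END;',
        repeat_interval => 'FREQ=DAILY;BYHOUR=2',  -- nightly at 02:00
        enabled         => TRUE);
    END;
    /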

  • Data warehouse

    Hello all,
    I am looking to extract data from a Kalido data warehouse for reporting in BI 7.0.
    Could anyone help me with quick setup tools and steps for UD Connect, as well as any other method you could suggest specifically for Kalido?
    Points will be rewarded on merit.
    Thanks

    Make sure there is a JDBC driver available for Kalido. If the drivers are available, you can configure it like other UD Connect data sources.

  • Database Optimization

    Hi,
    I have some queries regarding database optimization in Hyperion 9.2.
    Can anyone explain the following properties, whether they are correct, and the reasons to set them as written below:
    a. Block size should be less than 15000
    b. In the General tab, set Aggregate missing values and Two-Pass calculation
    c. In the Caches tab, set the buffer size to 64000 or more
    d. In the Transactions tab, set commit blocks and commit rows = 0
    e. In the Storage tab, set the pending I/O mode to "Buffered I/O"
    Thanks

    I agree that optimization is more of an art.
    I can address certain specifics though, with the caveat that no one piece of information is ever "guaranteed" when it comes to optimization settings -- there are always exceptions to the rule.
    a. This number varies depending on who you ask. Most people use < 100K, others < 64K, but few would want to take it down to 15K as a standard value. The truth is that the smaller the block, the more blocks you can fit in your cache. However, if you have billions of blocks at 15K, the overhead of searching for the one block you need is much higher than if you take the size up a bit. Also, usage patterns dictate much more than size alone: if your most frequent queries now have to be split across thousands of blocks that used to be fielded by a few dozen, you went from good to bad, not the other way around. Hence, it's all about the overall picture. One thing I like to do is compare the index file size to the square of the block size; if it's more than an order of magnitude higher, then perhaps a larger block size with fewer index entries will be beneficial (keyword: perhaps; it's worth a check).
    b. Sound advice on the Aggregate missing values, when you can get away with it (which should be: ALWAYS). This is based on not having to check whether the parent has a stored value before going on and aggregating data into it. It's just faster, plain and simple, to roll everything up without regard to what might have been there before. If you store values at upper levels, though, you need to do two things: (1) slap yourself for doing it, and (2) create an input member to represent that higher-level input at level 0 (example: a "Quarter 1 Input" value sitting next to your rolled-up Month members). Two-pass calculation is often just a necessity so the dynamic calc members work as needed, typically to represent post-rolled-up values in blocks that are "backwards referenced" (a difficult concept to grasp or explain).
    c. This is dependent on resources, and I always recommend a "systemic" perspective. Don't treat each application as a separate factor, treat the entire server as a resource and allocate your caches by priority. The artform of optimization is really key here. Examining your application statistics to see if your retrieves are getting well cached takes time and effort, but it pays off (as opposed to just giving them all the same amounts or shoving more memory at the "worst case" application). With limited resources, you have to be stingy when you can, and generous when you must.
    d. I'm sure there are cases where this is not a valid condition, but in general, these settings will help.
    e. "Direct I/O" was announced a long time ago with fanfare as being the latest way to really make things fast. It didn't. It was even set as the default method for a while. People squawked. We're back to square one but now that we have a choice, I don't know ANYONE that benefits from Direct I/O in a Windows platform, and it's a rare exception in Unix (or was it AIX, I forget the one case where it was better than Buffered I/O). The theory was that bypassing the system's I/O scheme would make the buffers in Essbase more tightly tied to the hardware, as I understand it. The only problem with that logic is that virtually all file systems use an approach that gives the OS better access than applications, so Direct I/O can't compete with letting the OS handle it.
    So after writing so much on it, it all boils down to "there's no one perfect answer to optimization, it's an art form."

  • Query/Database Optimization - where to start?

    Hi,
    Being very new to Oracle dev, I am hoping someone can help me with a very slow running query.
    The code is as below:
    UPDATE TABLE1 t1
    SET Processing_Status = 0
    WHERE t1.ID NOT IN (SELECT ID FROM TABLE2);
    circa 500k rows in Table2 and circa 250k rows in Table1
    This statement is running a little (very) slowly, and I am hoping that someone can give me some clues as to how to improve it. (FYI, it takes hours for the query to run!)
    I have also tried
    UPDATE TABLE1
    SET processing_status = 0
    WHERE NOT EXISTS
    (SELECT ID
    FROM TABLE2
    WHERE TABLE1.ID = TABLE2.ID);
    But this does not seem to improve matters.
    So, I can't really improve the query, but where else should I look?
    Indexes? I have one on the ID column - anything I should really worry about when I create an index?
    I hope someone can guide this newbie :)
    Kind regards,
    B

    Thanks Nikolay, that post was very useful indeed!
    I followed the recommendations in the exact same order, and here are the results:
    select * from v$version;
    select * from product_component_version;
    Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
    PL/SQL Release 10.2.0.5.0 - Production
    "CORE     10.2.0.5.0     Production"
    TNS for Solaris: Version 10.2.0.5.0 - Production
    NLSRTL Version 10.2.0.5.0 - Production
    show parameter optimizer;
    optimizer_dynamic_sampling           integer  2
    optimizer_features_enable            string   10.2.0.5
    optimizer_index_caching              integer  0
    optimizer_index_cost_adj             integer  100
    optimizer_mode                       string   CHOOSE
    optimizer_secure_view_merging        boolean  TRUE
    show parameter db_file_multi;
    db_file_multiblock_read_count        integer  98
    show parameter db_block_size;
    db_block_size                        integer  8192
    show parameter cursor_sharing;
    cursor_sharing                       string   EXACT
    column sname format a20
    column pname format a20
    column pval2 format a20
    select
                sname
              , pname
              , pval1
              , pval2 from  sys.aux_stats$;
    SNAME                PNAME                     PVAL1 PVAL2              
    SYSSTATS_INFO        STATUS                          COMPLETED          
    SYSSTATS_INFO        DSTART                          07-04-2007 15:46   
    SYSSTATS_INFO        DSTOP                           07-04-2007 15:46   
    SYSSTATS_INFO        FLAGS                         1                    
    SYSSTATS_MAIN        CPUSPEEDNW           598.557692                    
    SYSSTATS_MAIN        IOSEEKTIM                    10                    
    SYSSTATS_MAIN        IOTFRSPEED                 4096                    
    SYSSTATS_MAIN        SREADTIM                                           
    SYSSTATS_MAIN        MREADTIM                                           
    SYSSTATS_MAIN        CPUSPEED                                           
    SYSSTATS_MAIN        MBRC                                               
    SYSSTATS_MAIN        MAXTHR                                             
    SYSSTATS_MAIN        SLAVETHR                                           
    explain plan for
        UPDATE IFSDEV.TBL_PORTFOLIO_AGG
            SET PROCESSING_STATUS = 0
        WHERE IP_ID NOT IN (SELECT IP_ID
                            FROM IFSDEV.TBL_PORTFOLIO_YB);
    Plan hash value: 3806999075
    | Id  | Operation           | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | UPDATE STATEMENT    |                   |   219K|  4500K|  1116M  (2)|999:59:59 |
    |   1 |  UPDATE             | TBL_PORTFOLIO_AGG |       |       |            |          |
    |*  2 |   FILTER            |                   |       |       |            |          |
    |   3 |    TABLE ACCESS FULL| TBL_PORTFOLIO_AGG |   219K|  4500K|   792   (3)| 00:00:10 |
    |*  4 |    TABLE ACCESS FULL| TBL_PORTFOLIO_YB  |     2 |    16 |  5262   (2)| 00:01:04 |
    Predicate Information (identified by operation id):
    2 - filter( NOT EXISTS (SELECT 0 FROM "IFSDEV"."TBL_PORTFOLIO_YB"
    "TBL_PORTFOLIO_YB" WHERE LNNVL("IP_ID"<>:B1)))
    4 - filter(LNNVL("IP_ID"<>:B1))
    The rest of the analysis cannot really be done because my statement does not complete (as of yet).
    FYI, it took roughly 3 minutes when using 10k rows for table 2 (YB table in my code) and 2.5K in Table 1 (AGG table in my code).
    I hope this provides you with the required information, again, your help is much appreciated!
    Kind regards,
    B
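    One thing worth testing against this plan: the FILTER/LNNVL predicates are the classic signature of a NOT IN against a nullable column, which forces Oracle to re-probe the subquery table for every row instead of using a hash anti-join. A minimal sketch of the workaround, assuming NULL IP_IDs either cannot occur or can safely be excluded:
    -- If the columns can be declared NOT NULL, the anti-join comes for free:
    ALTER TABLE ifsdev.tbl_portfolio_agg MODIFY (ip_id NOT NULL);
    ALTER TABLE ifsdev.tbl_portfolio_yb  MODIFY (ip_id NOT NULL);
    -- Otherwise, make the NULL handling explicit in the statement:
    UPDATE ifsdev.tbl_portfolio_agg a
    SET    a.processing_status = 0
    WHERE  a.ip_id IS NOT NULL
    AND    a.ip_id NOT IN (SELECT b.ip_id
                           FROM   ifsdev.tbl_portfolio_yb b
                           WHERE  b.ip_id IS NOT NULL);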

  • 11gr2 database compatible parameter

    I have an 11gR2 database and the compatible parameter was originally set to 11.2.0.0.0. I did an alter system set compatible='10.2.0.4' scope=spfile;
    Now when I try to start the database, it says control file version 11.2.0.0.0 incompatible with ORACLE version.
    When I do a startup nomount and try to alter system set compatible='11.2.0.0.0' scope=spfile, I get ORA-32012: SPFILE format is inconsistent with value of COMPATIBLE parameter.
    I also tried doing a create pfile from spfile and it gives me the same error. Is there any way to fix this?
    On another note, the reason I tried this was because I tried installing Grid Control 10.2.0.3 using an existing database, this 11gR2 database. It doesn't let me proceed, saying the database must be 10.2 or higher. It seems not to like my 11gR2 database version, so that's why I tried changing the compatible parameter. Any way to get around this?
    Thanks for your help.

    sb92075 wrote:
    alter system set compatible='11.2.0.0.0' scope=spfile
    I also tried doing a create pfile from spfile
    As for shutdown abort and create the spfile: why are you trying to CREATE SPFILE???
    it can't find the spfile in the dbs directory because I'm using ASM.
    Please make up your mind. Either the spfile exists or it does not.
    Are you having a problem with the ASM instance or the "normal" (OLTP?) instance?
    Too bad CUT & PASTE are broken for you!
    Why the hostility? It was a typo. I was creating a pfile, not an spfile.
    Let me clarify. I am using ASM for this "normal" 11gR2 instance, so my spfile is located in a diskgroup called +DATA. When you shut down the database and try to create a pfile from the spfile, Oracle looks for the spfile in the default directory under $ORACLE_HOME/dbs. Since I'm using ASM, my spfile is not in this directory and Oracle can't find the file.
    Anyway, after some googling I answered my own question. I had to use create pfile from spfile='+DATA/emrep/spfileemrep.ora';
    Thanks anyway...
    But any direction on installing Grid Control using an existing 11gR2 database for the repository is appreciated.
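    For anyone hitting the same wall, here is the recovery path described above consolidated into one sketch; the +DATA/emrep path is this poster's and will differ per environment:
    -- 1. Dump a text pfile from the spfile stored in ASM:
    CREATE PFILE='/tmp/initemrep.ora'
      FROM SPFILE='+DATA/emrep/spfileemrep.ora';
    -- 2. Edit /tmp/initemrep.ora so that: compatible='11.2.0.0.0'
    --    (COMPATIBLE can be raised but never lowered once the database has run.)
    -- 3. Start with the corrected pfile, then rebuild the spfile:
    STARTUP PFILE='/tmp/initemrep.ora';
    CREATE SPFILE='+DATA/emrep/spfileemrep.ora' FROM PFILE='/tmp/initemrep.ora';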
