Should we be using RAC for a data warehouse?

We have an Oracle 11.1 data warehouse system. We were having some performance issues with it, so we shut down one of the RAC nodes to see if that was causing the problem. The problem was slow updates on a table (all 30+ million rows in one table had to be fixed). Another performance problem is queries against large partitioned tables (even when the partition key is used). Both bulk collect and bulk inserts are very fast.
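(The bulk operations mentioned above are the PL/SQL BULK COLLECT/FORALL pattern; for illustration, a minimal sketch of a batched bulk update, with hypothetical table and column names:)

DECLARE
  CURSOR c IS
    SELECT rowid AS rid, amount
      FROM big_fact
     WHERE needs_fix = 'Y';
  TYPE t_rid_tab IS TABLE OF ROWID;
  TYPE t_amt_tab IS TABLE OF big_fact.amount%TYPE;
  l_rids t_rid_tab;
  l_amts t_amt_tab;
BEGIN
  OPEN c;
  LOOP
    -- fetch and update in batches of 10,000 rows
    FETCH c BULK COLLECT INTO l_rids, l_amts LIMIT 10000;
    EXIT WHEN l_rids.COUNT = 0;
    FORALL i IN 1 .. l_rids.COUNT
      UPDATE big_fact
         SET amount = l_amts(i) * 1.1   -- the actual "fix" is hypothetical
       WHERE rowid = l_rids(i);
    COMMIT;  -- per-batch commit keeps undo small (beware ORA-01555 on the open cursor)
  END LOOP;
  CLOSE c;
END;
/

(For a one-off fix of every row, a single set-based UPDATE, or CREATE TABLE AS SELECT into a corrected copy, is often faster still.)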
Question: for an 11.1 data warehouse system, should we use RAC? Why?
Thank you...

"... a school of thought that suggests RAC potentially decreases system availability, rather than increasing it."
RAC also has the potential of increasing availability. The potential "cuts both ways", so to speak.
I've worked with non-RAC and RAC databases on a variety of platforms. My experience doesn't show evidence that RAC decreases availability. Given that most servers, even in non-HA clusters, are generally very reliable, downtime is low in both non-RAC and RAC environments. However, RAC does provide an availability advantage -- protection against node outage. There are environments which do require the availability of RAC; not all applications do. RAC is oversold, not in terms of its advantages but in terms of the number of installations.
"... the increased complexity and the increased risk of both software and human related errors in a RAC environment ..."
I would say that a similar argument arises with DASD vs SAN. A SAN is more complex. Human error on a SAN carries a much higher cost, and human error does occur on SANs. However, no one rejects a SAN on these grounds alone.
RAC is complex to implement. It requires more skill to administer and diagnose. However, if it is set up well, it doesn't suffer outages, and the risk of an outage from human error is the same as in a non-RAC environment.
The issue isn't RAC. The issue is that too many customers buy RAC without seriously evaluating whether:
a. they need the additional, minute increase in availability;
b. their applications are "RAC-aware" (TAF is still misunderstood);
c. they have the skills.
RAC provides scalability. It also provides HA. Let me say that again: it also provides HA.
I've seen a high-end failover cluster environment where one of the "best" vendors in the world talked of a 10-30 minute outage for the failover.
Hemant K Chitale
http://hemantoracledba.blogspot.com

Similar Messages

  • What are Parameters? How are they different from Variables? Why can't we use variables for passing data from one sequence to another? What is the advantage of using Parameters instead of Variables?

    Hi All,
    I am new to TestStand. Still in the process of learning it.
    What are Parameters? How are they different from Variables? Why can't we use variables for passing data from one sequence to another? What is the advantage of using Parameters instead of Variables?
    Thanks in advance,
    LaVIEWan

    Hi,
    Using Parameters is the correct method to pass data into and out of a subsequence. You assign the data to be passed into or out of a sequence in the Edit Sequence Call dialog, in the Sequence Parameter list.
    Regards
    Ray Farmer

  • Use Firefox for sensitive data & use a virtual keyboard plus internet antivirus. The current version does not allow this, nor an extension for it. Can you make provision for this?

    Use Firefox for sensitive data & use a virtual keyboard plus internet antivirus. The current version does not allow this, nor an extension for it. Can you make provision for this?

  • LSMW used only for master data upload?

    Hi
    Can you please let me know whether LSMW is used only for master data uploads, or whether we can also use it for transaction data?

    Hi Christino.
    I have come across standard SDN threads which deal with uploading master data; refer to:
    SDN reference for uploading master data using LSMW: "How can we upload master data by using LSMW?"
    SDN reference for which upload tool is preferred: "Which one is better for uploading data, LSMW or eCATT?"
    Good Luck & Regards.
    HARSH

  • Advice on implementing Oracle Streams on a RAC 11.2 data warehouse database

    Hi,
    I would like a high-level overview of implementing one-way schema-level replication within the same database, using Oracle Streams on an 11.2 RAC data warehouse database.
    Are there any points that should be kept in mind before drafting the implementation plan?
    Please share your thoughts and experiences.
    Thanks in advance
    srh
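    (For context, a minimal sketch of the classic Streams building blocks for one-way, same-database schema replication: a queue, capture rules on the source schema, a declarative rename transformation, and apply rules. All schema and object names here are hypothetical, and instantiation SCN, supplemental logging, and starting the capture/apply processes are omitted:)

    DECLARE
      l_dml_rule VARCHAR2(64);
      l_ddl_rule VARCHAR2(64);
    BEGIN
      -- one queue shared by capture and apply
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table => 'strmadmin.schema_rep_qt',
        queue_name  => 'strmadmin.schema_rep_q');

      -- capture DML for the source schema
      DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
        schema_name  => 'SRC',
        streams_type => 'capture',
        streams_name => 'cap_src',
        queue_name   => 'strmadmin.schema_rep_q',
        include_dml  => TRUE,
        include_ddl  => FALSE);

      -- apply those changes, keeping the generated rule name
      DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
        schema_name   => 'SRC',
        streams_type  => 'apply',
        streams_name  => 'app_dst',
        queue_name    => 'strmadmin.schema_rep_q',
        include_dml   => TRUE,
        include_ddl   => FALSE,
        dml_rule_name => l_dml_rule,
        ddl_rule_name => l_ddl_rule);

      -- declarative transformation: rewrite SRC LCRs to the DST schema
      DBMS_STREAMS_ADM.RENAME_SCHEMA(
        rule_name        => l_dml_rule,
        from_schema_name => 'SRC',
        to_schema_name   => 'DST');
    END;
    /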


  • Where to find best practices for tuning data warehouse ETL queries?

    Hi Everybody,
    Where can I find some good educational material on tuning ETL procedures for a data warehouse environment?  Everything I've found on the web regarding query tuning seems to be geared only toward OLTP systems.  (For example, most of our ETL queries don't use a WHERE clause, so the vast majority of accesses are table scans and index scans, whereas most index-tuning sites are striving for index seeks.)
    I have read Microsoft's "Best Practices for Data Warehousing with SQL Server 2008R2," but I was only able to glean a few helpful hints that don't also apply to OLTP systems:
    often better to recompile stored procedure query plans in order to eliminate variances introduced by parameter sniffing (i.e., better to use the right plan than to save a few seconds and use a cached plan SOMETIMES);
    partition tables that are larger than 50 GB;
    use minimal logging to load data precisely where you want it as fast as possible;
    often better to disable non-clustered indexes before inserting a large number of rows and then rebuild them immediately afterward (sometimes even for clustered indexes, but test first);
    rebuild statistics after every load of a table (a sketch of the disable/rebuild pattern follows this list).
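    (For illustration, a minimal T-SQL sketch of that disable/load/rebuild pattern; the table, index, and staging names are hypothetical, and minimal logging additionally depends on the recovery model and other preconditions:)

    -- hypothetical names: dbo.FactSales loaded from stg.Sales
    ALTER INDEX IX_FactSales_CustomerKey ON dbo.FactSales DISABLE;

    -- TABLOCK allows a minimally logged load under the simple or
    -- bulk-logged recovery model (other preconditions apply)
    INSERT INTO dbo.FactSales WITH (TABLOCK) (SaleDateKey, CustomerKey, Amount)
    SELECT SaleDateKey, CustomerKey, Amount
    FROM stg.Sales;

    ALTER INDEX ALL ON dbo.FactSales REBUILD;  -- also re-enables the disabled index
    UPDATE STATISTICS dbo.FactSales WITH FULLSCAN;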
    But I still feel like I'm missing some very crucial concepts for performant ETL development.
    BTW, our office uses SSIS, but only as a glorified stored procedure execution manager, so I'm not looking for SSIS ETL best practices.  Except for a few packages that pull from source systems, the majority of our SSIS packages consist of numerous "Execute SQL" tasks.
    Thanks, and any best practices you could include here would be greatly appreciated.
    -Eric

    Online ETL solutions are really among the biggest challenges, and to tackle them efficiently you can read my blogs on online DWH solutions. They show how to configure an online DWH solution for ETL using the MERGE command of SQL Server 2008, and cover some important concepts relevant to any DWH solution, such as indexing, de-normalization, etc.
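    (For reference, a minimal sketch of the kind of MERGE-based upsert described; all table and column names are hypothetical:)

    MERGE INTO dw.DimCustomer AS tgt
    USING stg.Customer AS src
        ON tgt.CustomerKey = src.CustomerKey
    WHEN MATCHED THEN
        UPDATE SET tgt.Name = src.Name,
                   tgt.City = src.City
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (CustomerKey, Name, City)
        VALUES (src.CustomerKey, src.Name, src.City);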
    http://www.sqlserver-performance-tuning.com/apps/blog/show/12927061-data-warehousing-workshop-1-4-
    http://www.sqlserver-performance-tuning.com/apps/blog/show/12927103-data-warehousing-workshop-2-4-
    http://www.sqlserver-performance-tuning.com/apps/blog/show/12927173-data-warehousing-workshop-3-4-
    Kindly let me know if any further help is needed
    Shehap (DB Consultant/DB Architect) Think More deeply of DB Stress Stabilities

  • 2-3 things to watch out for when SAP is the source for a Data Warehouse

    Hello
    What are 2-3 things to watch out for when SAP is the source for a Data Warehouse (Informatica for ETL and Cognos for reporting)?
    Thanks
    G. Vijay

    Going through some or all of this might help:
    Empty Safari's cache (from the Safari menu), then close Safari.
    Go to Home/Library/Safari and delete the following files:
    form values
    download.plist
    Then go to Home/Library/Preferences and delete
    com.apple.Safari.plist
    Repair permissions (in Disk Utility).
    Start up Safari again, and things should have improved.
    If not, MacFixit have published a very detailed (very!) article on speeding up a slow Safari, here:
    http://www.macfixit.com/article.php?story=20070416000657464
    Many, including me, have also followed the advice given by others here to add DNS codes to their Network Settings, with good results in terms of speed-up:
    Open System Preferences/Network. Double click on your connection type, or select it in the drop-down menu. Click on TCP/IP and in the box marked 'DNS Servers' enter the following two numbers:
    208.67.222.220
    208.67.220.222
    Click on Apply Now and close the window.
    Restart Safari, and repair permissions.

  • Why should we use BRTOOLS for Oracle data backups?

    Hi there,
    Do any of you have any idea of the specific advantages of using BRTOOLS for backups over using Oracle RMAN alone?
    Are there any advantages to using brbackup for database backups which we do not get with RMAN alone?
    thanks
    Arlin

    I seriously doubt you want to use wsdl4j unless you are doing really advanced web service work. Assuming you are developing this web service from scratch, you basically want to use JAX-WS: define an appropriate interface and your value classes, and let JAX-WS do the rest. Metro is the JAX-WS implementation included in the Oracle JDK, and it has great tutorials and reference documentation online. I'd suggest you start here: http://metro.java.net/getting-started/

  • Use case for financial data

    Hi All,
    I have a question about a potential use case for Oracle Spatial. The data structures are as follows:
    Clients
    Account (has a balance, which can be zero or above zero)
    Client-to-account relationship
    E.g.
    Client C1 is a borrower to Account A1 (balance = 0)
    Client C1 is a co borrower to Account A2 (balance > 0)
    Client C2 is a co borrower to Account A1 (balance > 0)
    Client C3 is a co borrower to Account A3 (balance > 0)
    Currently, the database is modeled as a set of three tables, e.g.
    Client
    ID
    DATA
    Account
    ID
    DATA
    BALANCE
    CLIENT_TO_ACCOUNT
    CLIENT_ID
    RELATIONSHIP (E.g borrower)
    ACCOUNT_ID
    Business limitations:
    We are not interested in independent graphs for which all accounts have balance = 0 (let's call these inactive graphs); however, we might occasionally need to query them
    Users are interested in vertices/edges with accounts which have balance = 0 but are linked (up to level N) to an active account, for analysis purposes
    There is no well-defined root (e.g. there can be two or more clients who are co-borrowers on the same account)
    99% of queries will be against active graphs
    Graphs are mutable, e.g. new relationships (edges) may be created/deleted during the day
    Users are potentially interested in free navigation of a whole independent graph, starting from the root.
    The root is determined by a certain business rule
    We need to process active graphs daily in bulk
    Problems which I am trying to solve:
    Limit the amount of data which may need to be processed. Based on analysis of the current system, we only need 5% of the data plus some delta for 99% of the processing
    Make sure performance does not degrade over time as we accumulate more historical (processed) data. We cannot delete accounts with balance = 0, as new relationships may arrive involving new accounts with balance > 0
    The current solution that I am thinking of:
    Artificially partition the data universe into active and inactive graphs, with all indexes local to the two partitions (a DDL sketch follows the list of issues below).
    E.g.
    GROUP
    GROUP_ID PK
    ACTIVE_FLAG (partition key)
    CLIENT
    GROUP_ID (PARTITION BY FK TO GROUP)
    ACCOUNT
    GROUP_ID (PARTITION BY FK TO GROUP)
    CLIENT_TO_ACCOUNT
    GROUP_ID (PARTITION BY FK TO GROUP)
    The issues I am seeing right now:
    1. Graphs (groups) may potentially be unlimited in size, so I will need to limit the size artificially using some dividing algorithm, leading to:
    2. Graphs (groups) may need to be joined or divided
    3. Graphs (groups) will have to be activated/deactivated, i.e. moved to different partitions.
    4. The data loading and activation/deactivation algorithms are not simple
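    (For illustration, a minimal Oracle DDL sketch of the list/reference partitioning idea above; all names are hypothetical, and the parent table is called GRP only because GROUP is a reserved word:)

    CREATE TABLE grp (
      group_id    NUMBER  PRIMARY KEY,
      active_flag CHAR(1) NOT NULL
    )
    PARTITION BY LIST (active_flag) (
      PARTITION p_active   VALUES ('Y'),
      PARTITION p_inactive VALUES ('N')
    );

    -- children inherit the parent's partitioning via reference partitioning (11g);
    -- activating/deactivating a graph then means updating GRP.ACTIVE_FLAG
    -- (row movement must be enabled for rows to migrate between partitions)
    CREATE TABLE client (
      client_id NUMBER PRIMARY KEY,
      group_id  NUMBER NOT NULL,
      data      VARCHAR2(4000),
      CONSTRAINT fk_client_grp FOREIGN KEY (group_id) REFERENCES grp (group_id)
    )
    PARTITION BY REFERENCE (fk_client_grp);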
    So I am thinking about Oracle Spatial (Network) to model this problem.
    Questions:
    1) Can I model this problem using Oracle Spatial?
    2) Will I gain any performance improvement?
    3) Is there any explanation or white paper on how to do this for this particular type of problem?
    4) Will the solution based on Oracle Spatial solve the problems outlined above?
    5) Will my solution (without using Oracle Spatial) work at all? Or are there some fundamental issues?
    Thank you!

    Either add an LOV to the JobID attribute definition in the VO (if the JobID will be editable), or simply add the job description to the select statement (join to the job table) as a reference attribute.

  • Where Used List for Master Data Entries

    Hi folks,
    I am looking for an FM, method, etc. that gives me a list showing where a certain master data entry of an InfoObject is used. The BW system makes this check implicitly when trying to delete master data, but I couldn't get behind the logic yet.
    Anyone here with helpful hints?

    Hi Durgesh,
    Thank you for your answer!
    Unfortunately the two mentioned FMs are not helpful for me. I am looking for a where-used list of master data entries, not of InfoObjects. For example, I want to know in which InfoCubes the master data value 'pieces' of InfoObject '0UNIT' is used.

  • Time Machine: removing the backup drive and using it for other data

    I am trying to stop the Time Machine backups of my data and use my external HD for other data. I am new to the Mac world and I need to know how to do this. Essentially I want to shut Time Machine down and reformat the disk that was being used for the backups. Thanks for the help.

    The big switch in the Time Machine preference pane stops Time Machine from performing automatic backups. I would be inclined to recommend you switch that off but leave the disk assigned as the designated backup disk in the Change Disk panel, because if nothing is assigned there, Time Machine will ask you, every time a disk is attached, whether you wish to assign it as the backup disk.

  • Flex 3: How to use trace for printing data in the console

    Hi ,
    I heard that we can use trace to print data to the Flex Builder 3 console, but when I tried it I had no luck.
    Below is a simple program in which I was out of luck:
    public function callMe():void {
        trace("AAA");
    }
    <mx:Button id="Register" name="Register" label="Register" height="23" click="callMe()"/>
    In the above program, after clicking the Button, I can't see 'AAA' anywhere inside my Flex Builder console.
    Any help ??
    Thank you .

    Hi Kiran
    Make a breakpoint at the trace line and debug the application. There you can find the message you typed in the console. trace works only in debugging mode, not in development mode.
               Have a nice day
    Thanks
    Ram

  • Recommended throughput for Oracle data warehouse

    Hi, I know up front this is going to be a vague question... but I'm trying to determine the approximate I/O bandwidth needed for a data mart server. Right now we're hosting 3 or 4 different marts on it, but that number is going to increase.
    Oracle's DW "2 day" class recommends starting with either the maximum throughput from user queries, or basing it off of batch windows. Right now the server is barely used for end user queries, as we haven't yet implemented a BI tool to allow users easy access (that's underway right now), so I find it hard to base any numbers on that. However, it's on the way, and I'm in charge of the BI tool (OBIEE). I'm having nightmares that we get OBIEE deployed and our queries end up taking 5 minutes each to get answers... Right now, on the system basically by myself, if I do a simple "select sum(amount) from fact_ledger", where fact_ledger is a 1 GB table (with 40 million rows), it takes almost a full minute to run. It feels like I could add this up by hand and get an answer faster... and this certainly doesn't compare with other Oracle marts / DWs I've worked on in the past.
    From a batch window standpoint, all I can say is that it feels really, REALLY too slow to me. Right now, a job that starts with a 40 million row table, joins it to 6 or 7 other small tables (looking up surrogate keys), and writes to a non-logged, non-indexed output table takes over 2 1/2 hours to complete. To me this should be a 15 minute job.
    We've asked IT to do a "root cause analysis" of why performance is so bad - but as part of that, the architecture group wants something more concrete than "it just feels way too slow". So does anyone have some general guidelines they can provide? I guess our detailed info would be:
    - three marts, each of which has a fact table around the 30 - 60 million row level
    - simple "join 30 million row staging table to look up surrogate keys" and writing results is taking 2.5+ hours
    - we expect at some point to have maybe 50 - 100 users running queries concurrently (spread across the marts)
    - users will be performing both canned and ad-hoc analysis against it... and they are high-level business users who aren't going to be happy waiting 2 minutes for a simple answer
    My start was to swag this as requiring 6 CPUs or so, which would indicate (according to Oracle's best practice docs) needing somewhere between 1.2 GB/s and 2.4 GB/s of throughput. I'm assuming that if it takes almost a full minute to read a 1 GB table, our I/O is currently 60 to 120 times too slow. Does that make sense?
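    (Checking that arithmetic: 1 GB read in roughly 50-60 seconds is about 17-20 MB/s of scan throughput; 1.2 GB/s divided by 20 MB/s is 60, and 2.4 GB/s divided by 20 MB/s is 120, so the 60-120x figure is consistent.)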
    Thanks and sorry for the lack of details...we just don't know yet.
    Thx,
    Scott

    Why don't you start by taking an AWR report covering those two hours, so you can see what the bottleneck in your system is?
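    (For reference, assuming the Diagnostics Pack is licensed, an AWR report can be generated from SQL*Plus with the standard script, which prompts for the snapshot range and output format:)

    -- run as a DBA user in SQL*Plus
    @?/rdbms/admin/awrrpt.sql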

  • Is OBIEE used to create data warehouses dynamically?

    Management where I work wants to use OBIEE Administrator to source a 3NF normalized database and create a "virtual data warehouse" in the Business Model and Mapping layer of OBI Administrator, since a star schema model is required by the OBI business model layer. They claim they were told by an Oracle sales rep that the Administrator tool could do this.
    Is this possible? As OBI issues only SQL, not PL/SQL, how can one "create" dimensions, lookup tables and fact tables dynamically? And even if it could, the performance hit of recreating the virtual data warehouse each time a query is issued would be huge.
    Having used Prism Warehouse Builder and DataStage in the past to create data warehouses, I am aware that one needs a procedural programming language to create and maintain star schema tables (surrogate key maintenance, controlling workflows, maintaining slowly changing dimensions, intermediate lookup tables, etc.). SQL was not meant to do this heavy-lifting programming. After all, isn't this why Oracle Warehouse Builder (and previously Informatica) is shipped with the OBIEE suite -- because OBI is not an ETL tool for creating dimensional models? One uses an ETL tool to create the dimensional data model for OBI to access, and passes the metadata along to OBI Answers.
    So is it normal practice to use the Administrator's Business Model and Mapping layer to create virtual star schema logical tables from physical tables that are in 3NF? Or is the tool used to access already-denormalized tables in the physical layer that were created using Informatica, OWB or another ETL tool?

    I asked an "Expert" in OBIEE. Here are snippets of his response:
    "Be aware though that the transformation ability is fairly limited, and
    will only really work with data that is very close to a star schema, i.e.
    the data can be easily transformed through a couple of denormalizations and
    table joins. If your source data is very normalized and cannot easily be
    transformed into a star schema, you would need to use a tools such as
    Informatica, OWB or similar to extract data from your source systems, load
    and then transform it into a data warehouse or data mart and report of of
    that. The more that your data needs to be transformed (i.e. the closer it
    is to a 3NF model) the more likely it is that you'll need to use an ETL
    tool, and a data warehouse or data mart, to host your data."
    And in response to my noting the lack of documentation on how to remodel 3NF into a star schema, his response was:
    "No, you're right, the documentation doesn't really go into "how to" turn a 3NF model into a dimensional model. If you look back to when OBIEE was a Siebel product, the documentation was really aimed at either Siebel consultants or customers who had been on the training, they didn't want customers "off the street" to try and implement OBIEE as it would hit their services revenue. That's where the blog posts we do, things like the Oracle-by-example training courses on OTN and so on come in, otherwise as you say there's little out there on the best way to transform your model - it's mostly passed on "word of mouth" or is built up from experience on working on projects."

  • Oracle for a Data Warehouse & Data Mining Project

    Hello,
    I am working for a marketing company and we have a pretty big OLTP database, and my supervisor wants to make use of these data for decision making. The plan is to create a data mart (if not a warehouse) and use data mining tools on top of it.
    He does not want to buy one of those expensive tools, so I was wondering if we could handle such a project just by downloading OWB and Darwin from the Oracle site? None of us are data warehouse specialists, so it will be very tough for us. But I would like to learn. Actually, I was looking for some example warehouse + mining implementations to get the main ideas. I will appreciate any suggestions and comments.
    Thank you

    Go to
    http://www.oracle.com/ip/analyze/warehouse/datamining/
    for white papers, demos, etc. as a beginning.
    Also, Oracle University offers a course on Oracle Data Mining.

Maybe you are looking for

  • USB port does not work properly on Tecra M3

    I need help with my Tecra M3 USB problems. It has 2 USB ports. A while back the USB ports refused to recognise any non-driver USB memory sticks (I do not have any driver-required sticks to try). Recently I've had to re-install my Siemens ADSL modem, d

  • Move data from alv to another perform

    I have an ALV, and I make one field editable (fieldcatalog-edit = 'X'). My question: when I change the field, how do I pass it to another itab?

  • Ibook g4 cant turn on with charger, but works fine with battery

    I have an iBook G4 that can't turn on with the charger. Using the battery, the iBook works fine (so it's not a battery problem). The charger works fine on another iBook (so it's not a charger problem). The DC-in board works fine on another iBook (so it's not a DC-in problem). Reset

  • Reg : USMM Abap runtime Error

    Dear Friends, Greetings. I got an ABAP runtime error in tcode USMM after implementing the OSS note 1150840 correction. The error is SYNTAX_ERROR: Syntax error in program "RSUVM000"; this program unfortunately cannot be executed. The following syntax error o

  • Starting WebLogic under different user ID

    I have my WebLogic servers that were built in a Windows 2000 server environment. They were built with the user ID of "Administrator". The servers have since been rebooted and started under my user ID. I am an "Administrator user". It appears there may