Sync or Async for a Data Warehouse environment?

We have a 7 TB DW environment running under High Availability. Almost all of the data is bulk loaded nightly/weekly. We've been running this in Sync-commit mode, but lately the Transaction Log on our Primary DB has grown to over 1/2 TB waiting for the bulk loads to be committed on the Secondary server.
Would this scenario be mitigated if we were using Async-commit mode? Would the TLog, no longer waiting for the Secondary to synchronize, be able to shrink as it normally would (of course considering TLog backups, etc.)?
TIA, ChrisRDBA

Hi,
Whether you use Sync or Async commit (I guess you are talking about mirroring), the log generated by a transaction is the same; IMO the real problem here is the volume of log being generated. Of course, in sync mirroring a transaction has to wait for the commit to harden on the mirror before it commits on the principal, so in sync mode the commit is delayed, but the amount of transaction log produced is not affected.
From what you've described, I suggest breaking the bulk load into smaller, more quantised batches, i.e. decreasing the amount of data loaded per transaction, followed by (if possible) more frequent transaction log backups. Async would help speed up commits, but remember that the chance of data loss in a disaster increases with Async mirroring, plus you need Enterprise Edition to take advantage of Async mirroring.
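If you do decide to switch, a minimal sketch of checking the current mode and flipping mirroring to asynchronous (high-performance) mode; the database name YourDWDB is hypothetical, and async mode assumes Enterprise Edition:

    -- Check the current mirroring state and safety level for mirrored databases
    SELECT DB_NAME(database_id) AS db,
           mirroring_safety_level_desc,   -- FULL = synchronous, OFF = asynchronous
           mirroring_state_desc
    FROM sys.database_mirroring
    WHERE mirroring_guid IS NOT NULL;

    -- Switch to asynchronous (high-performance) mode, e.g. before the load window
    ALTER DATABASE YourDWDB SET PARTNER SAFETY OFF;

    -- Switch back to synchronous mode afterward if desired
    ALTER DATABASE YourDWDB SET PARTNER SAFETY FULL;

Note that the log can still only be truncated up to what has been sent to the mirror; async removes the commit-time wait, not the need for regular log backups.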

Similar Messages

  • Where to find best practices for tuning data warehouse ETL queries?

    Hi Everybody,
    Where can I find some good educational material on tuning ETL procedures for a data warehouse environment? Everything I've found on the web regarding query tuning seems to be geared only toward OLTP systems. (For example, most of our ETL queries don't use a WHERE clause, so the vast majority of accesses are table scans and index scans, whereas most index tuning sites are striving for index seeks.)
    I have read Microsoft's "Best Practices for Data Warehousing with SQL Server 2008 R2," but I was only able to glean a few helpful hints that don't also apply to OLTP systems:
    - it's often better to recompile stored procedure query plans in order to eliminate variances introduced by parameter sniffing (i.e., better to use the right plan than to save a few seconds by sometimes using a cached plan);
    - partition tables that are larger than 50 GB;
    - use minimal logging to load data precisely where you want it as fast as possible;
    - it's often better to disable non-clustered indexes before inserting a large number of rows and then rebuild them immediately afterward (sometimes even for clustered indexes, but test first; see the sketch after this list);
    - rebuild statistics after every load of a table.
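    A minimal T-SQL sketch of that disable/load/rebuild pattern (all table, index, and schema names hypothetical; minimal logging assumes SIMPLE or BULK_LOGGED recovery):

        -- Disable non-clustered indexes before the load
        ALTER INDEX IX_FactSales_DateKey ON dbo.FactSales DISABLE;

        -- Bulk load with TABLOCK so the insert can qualify for minimal logging
        INSERT INTO dbo.FactSales WITH (TABLOCK)
        SELECT * FROM dbo.StageSales;

        -- Rebuild the disabled indexes and refresh statistics afterward
        ALTER INDEX IX_FactSales_DateKey ON dbo.FactSales REBUILD;
        UPDATE STATISTICS dbo.FactSales WITH FULLSCAN;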
    But I still feel like I'm missing some very crucial concepts for performant ETL development.
    BTW, our office uses SSIS, but only as a glorified stored procedure execution manager, so I'm not looking for SSIS ETL best practices. Except for a few packages that pull from source systems, the majority of our SSIS packages consist of numerous "Execute SQL" tasks.
    Thanks, and any best practices you could include here would be greatly appreciated.
    -Eric

    Online ETL solutions are really one of the biggest challenges. To do it efficiently, you can read my blog posts on online DWH solutions, which explain how to configure an online DWH solution for ETL using the MERGE command of SQL Server 2008, and also cover some important concepts relevant to any DWH solution, such as indexing, de-normalization, etc.:
    http://www.sqlserver-performance-tuning.com/apps/blog/show/12927061-data-warehousing-workshop-1-4-
    http://www.sqlserver-performance-tuning.com/apps/blog/show/12927103-data-warehousing-workshop-2-4-
    http://www.sqlserver-performance-tuning.com/apps/blog/show/12927173-data-warehousing-workshop-3-4-
    Kindly let me know if any further help is needed.
    Shehap (DB Consultant/DB Architect) Think More deeply of DB Stress Stabilities

  • Are there any TimesTen installations for a data warehouse environment?

    Hi,
    I wonder if there is a way to install TimesTen as an in-memory database for a data warehouse environment?
    The DW today consists of a large Oracle database, and I wonder if and how a TimesTen implementation could be done.
    What kind of application changes would be involved in such an implementation, and so on?
    I know the answer is probably complex, but if anyone knows about such an implementation and has some information about it, it would be great to learn from that experience.
    Thanks,
    Adi

    Adi,
    It depends on what you want to do with the data in the TimesTen database. If you know the "hot" dataset that you want to cache in TimesTen, you can use Cache Connect to Oracle to cache a subset of your Oracle tables into TimesTen. The key is to figure out what queries you want to run and see if the queries are supported in TimesTen.
    Assuming you know the dataset you need to cache and you have control of your application code to change the connection to TimesTen (using ODBC or JDBC), you can give it a try. If you are using a third party tool, you need to see if the tool supports JDBC or ODBC access to the database and change the tool to point to your TimesTen database instead of the Oracle database.
    If you are using the TimesTen Cache Connect to Oracle product option, data synchronization between Oracle and TimesTen is handled automatically by the product.
    Without further details of what you'd like to do, it's difficult to provide a more detailed recommendation.
    -scheung
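    For illustration, a rough sketch of caching a "hot" Oracle table in TimesTen as a read-only cache group (all names hypothetical; the exact options depend on your Cache Connect version, so treat this as a starting point rather than a recipe):

        -- Cache a subset of an Oracle fact table in TimesTen,
        -- auto-refreshed from Oracle every 5 minutes
        CREATE READONLY CACHE GROUP sales_cg
        AUTOREFRESH INTERVAL 5 MINUTES
        FROM dw_owner.fact_sales (
          sale_id   NUMBER NOT NULL,
          store_id  NUMBER,
          amount    NUMBER,
          sale_date DATE,
          PRIMARY KEY (sale_id)
        );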

  • RMAN crosscheck and expire guidelines for data warehouse environment

    O/S: Windows Server 2008
    DB: Oracle 11gR2
    Are there any guidelines to how often one should do a RMAN crosscheck and set an expiration on archivelogs for data warehouse environments?
    It would seem once a day would be enough for the crosscheck in a data warehouse environment that gets refreshed nightly. Expiration I would expect no less than 1 week.
    Cheers!

    I agree with damorgan.
    Refer to the links below for best practices:
    http://www.oracle.com/technetwork/database/features/availability/311394-132335.pdf
    https://blogs.oracle.com/datawarehousing/entry/data_warehouse_in_archivelog_m
    Hope this helps,
    Regards
    http://www.oracleracepxert.com
    Understand the Power of Oracle RMAN
    http://www.oracleracexpert.com/2011/10/understand-power-of-oracle-rman.html
    Duplicating RAC database using RMAN
    http://www.oracleracexpert.com/2009/12/duplicate-rac-database-using-rman.html
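    As a concrete sketch of that housekeeping (the 7-day window mirrors the retention suggested above, not a universal rule):

        RMAN> CROSSCHECK ARCHIVELOG ALL;
        RMAN> DELETE NOPROMPT EXPIRED ARCHIVELOG ALL;
        RMAN> DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-7';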

  • 2-3 things to watch out for when SAP is the source for a Data Warehouse

    Hello
    What are 2-3 things to watch out for when SAP is the source for a Data Warehouse (Informatica for ETL and Cognos for reporting)?
    Thanks
    G. Vijay

    Going through some or all of this might help:
    Empty Safari's cache (from the Safari menu), then close Safari.
    Go to Home/Library/Safari and delete the following files:
    form values
    download.plist
    Then go to Home/Library/Preferences and delete
    com.apple.Safari.plist
    Repair permissions (in Disk Utility).
    Start up Safari again, and things should have improved.
    If not, MacFixit have published a very detailed (very!) article on speeding up a slow Safari, here:
    http://www.macfixit.com/article.php?story=20070416000657464
    Many, including me, have also followed the advice given by others here to add DNS codes to their Network Settings, with good results in terms of speed-up:
    Open System Preferences/Network. Double click on your connection type, or select it in the drop-down menu. Click on TCP/IP and in the box marked 'DNS Servers' enter the following two numbers:
    208.67.222.220
    208.67.220.222
    Click on Apply Now and close the window.
    Restart Safari, and repair permissions.

  • Oracle for a Data Warehouse & Data Mining Project

    Hello,
    I am working for a marketing company, and we have a pretty big OLTP database that my supervisor wants to make use of for decision making. The plan is to create a data mart (if not a warehouse) and use data mining tools on top of it.
    He does not want to buy one of those expensive tools, so I was wondering if we could handle such a project just by downloading OWB and Darwin from the Oracle site? None of us are data warehouse specialists, so it will be very tough for us. But I would like to learn. Actually, I was looking for some example warehouse + mining environment implementations to get the main ideas. I will appreciate any suggestions and comments.
    Thank you

    Go to
    http://www.oracle.com/ip/analyze/warehouse/datamining/
    for white papers, demos, etc. as a beginning.
    Also, Oracle University offers a course on Oracle Data Mining.

  • Should we be using RAC for a data warehouse?

    We have an Oracle 11.1 data warehouse system. We were having some performance issues with the system, so we shut down one of the RAC nodes to see if that was causing the problem. The problem was slow updates on a table (all 30+ million rows in one table had to be fixed). One other performance problem is queries against large partitioned tables (even if the partition key is used). Both bulk collects and bulk inserts are very fast.
    Question: for a 11.1 data warehouse system should we use RAC? Why?
    Thank you...

    Regarding "a school of thought that suggests RAC potentially decreases system availability, rather than increasing it": RAC also has the potential of increasing availability. The potential "cuts both ways", so to speak.
    I've worked with non-RAC and RAC databases on a variety of platforms. My experience doesn't show evidence that RAC decreases availability. Given that most servers, even in non-HA clusters, are generally very reliable, downtime is low in both non-RAC and RAC environments. However, RAC does provide an availability advantage: protection against node outage. And there are environments which do require the availability that RAC offers. Not all applications require it. RAC is often oversold, not in terms of advantages but in terms of installations.
    Regarding "the increased complexity and the increased risk of both software and human related errors in a RAC environment": I would say that a similar argument arises in DASD vs SAN. A SAN is more complex. Human error on a SAN carries a much higher cost. Human error does occur on a SAN. However, no one rejects a SAN on these grounds alone.
    RAC is complex to implement. It requires more skills to administer and diagnose. However, if it is set up well, it doesn't suffer outages. An outage from human error is the same as in a non-RAC environment.
    The issue isn't RAC. The issue is that too many customers buy RAC without seriously evaluating whether:
    a. they need the additional minute increase in availability
    b. their applications are "RAC-aware" (TAF is still misunderstood)
    c. they have the skills
    RAC provides scalability. It also provides HA. Let me say that again: it also provides HA.
    I've seen a high-end failover cluster environment where one of the "best" vendors in the world talked of a 10-30 minute outage for the failover.
    Hemant K Chitale
    http://hemantoracledba.blogspot.com

  • Best practice of metadata table in data warehouse environment ?

    Hi gurus,
    In our data warehouse we have 1. a Stage schema and 2. a DWH (data warehouse reporting) schema. In staging we have about 300 source tables. In the DWH schema we create only the tables required from a reporting perspective. Some of the tables in the Stage schema have been created in the DWH schema as well, with different table and column names; the naming convention for these tables and columns in the DWH schema is based more on business names.
    In order to keep track of these tables we are creating a metadata table in the DWH schema, for example:
        Stage        DWH schema
        Table_1      Table_A
        Table_2      Table_B
        Table_3      Table_C
        Table_4      Table_D
    My question is how we should handle the column names in each of these tables. The stage_1, stage_2 and stage_3 column names, which are part of Table_A, Table_B and Table_C, have been renamed in the DWH schema.
    As said earlier, we have about 300 tables in Stage and maybe around 200 tables in the DWH schema. A lot of the column names have been renamed in the DWH schema from the stage tables, and some of the tables have 200 columns.
    So my concern is: how do we handle the column names in the metadata table? Do we need to keep only table names in the metadata table, not column names?
    Any ideas will be greatly appreciated.
    Thanks!

    Hi,
    This seems like quite a buzzing question.
    In our project we designed a hub-and-spoke-like architecture.
    Thus we have 3 layers. L0 is the one closest to the source, and L0 table names are linked to the corresponding source names by means of a naming standard (like tabA, EXT_tabA, tabA_OK1 and so on, based on the implementation of the load procedures).
    At L1 we have the ODS, a normalized model; we use business names for tables there and standard names for temporary structures and artifacts.
    Both L0 and L1 keep the source's column names as a general rule; new columns, such as calculated ones, are business-driven, and metadata are standard-driven.
    Data Modeler fits perfectly for modelling the L1 layer.
    L2 is the dimensional schema; business names take over for tables and columns, eventually rewritten at the presentation layer (front-end tool).
    Hope this helps, D.
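    For illustration, a minimal sketch (Oracle DDL, all names hypothetical) of the column-level mapping table being discussed:

        CREATE TABLE meta_column_map (
          stage_table   VARCHAR2(30) NOT NULL,
          stage_column  VARCHAR2(30) NOT NULL,
          dwh_table     VARCHAR2(30) NOT NULL,
          dwh_column    VARCHAR2(30) NOT NULL,
          CONSTRAINT pk_meta_column_map
            PRIMARY KEY (stage_table, stage_column)
        );

    Keeping one row per renamed column (rather than table names only) is what makes the renames queryable; the cost is one metadata row per column, and a query against ALL_TAB_COLUMNS can help seed the rows.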

  • Recommended throughput for Oracle data warehouse

    Hi, I know up front this is going to be a vague question... but I'm trying to determine the approximate I/O bandwidth for a data mart server. Right now we're hosting 3 or 4 different marts on it, but that number is going to increase.
    Oracle's DW "2 day" class recommends starting with either maximum throughput from user queries, or basing it off of batch windows. Right now the server is barely used for end-user queries, as we haven't yet implemented a BI tool to allow users easy access (that's underway right now). So I find it hard to base any info on that. However, it's on the way, and I'm in charge of the BI tool (OBIEE). I'm having nightmares that we get OBIEE deployed and our queries end up taking 5 minutes each to get answers... Right now, on the system basically by myself, if I do a simple "select sum(amount) from fact_ledger", where fact_ledger is a 1 GB table (with 40 million rows), it takes almost a full minute to run. It feels like I could add this up by hand and get an answer faster... and this certainly doesn't compare with other Oracle marts / DWs I've worked on in the past.
    From a batch window standpoint, all I can say is that it feels really, REALLY too slow to me. Right now, a job that starts with a 40 million row table, joins it to 6 or 7 other small tables (looking up surrogate keys), and writes to a non-logged, non-indexed output table takes over 2 1/2 hours to complete. To me this should be a 15 minute job.
    We've asked IT to do a "root cause analysis" of why performance is so bad - but as part of that, the architecture group wants something more concrete than "it just feels way too slow". So does anyone have some general guidelines they can provide? I guess our detailed info would be:
    - three marts, each of which has a fact table around the 30 - 60 million row level
    - simple "join 30 million row staging table to look up surrogate keys" and writing results is taking 2.5+ hours
    - we expect at some point to have maybe 50-100 users running queries concurrently (spread across the marts)
    - users will be performing both canned and ad-hoc analysis against it... and they are high-level business users who aren't going to be happy waiting 2 minutes for a simple answer
    My start was to swag this as requiring 6 CPUs or so, which would indicate (according to Oracle's best practice docs) needing somewhere between 1.2 GB/s and 2.4 GB/s of throughput. I'm assuming that if it takes almost a full minute to read a 1 GB table, our I/O is currently 60 to 120 times too slow. Does that make sense?
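    Working that arithmetic through (the observed rate is inferred from the 1 GB / ~60 second scan described above):

        observed:   1 GB / 60 s              ~ 17 MB/s
        target:     1.2 GB/s to 2.4 GB/s     (6 CPUs x 200-400 MB/s per core)
        shortfall:  1200 / 17 ~ 70x   up to   2400 / 17 ~ 140x

    which lands in the same ballpark as the 60-120x estimate above.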
    Thanks and sorry for the lack of details...we just don't know yet.
    Thx,
    Scott

    Why don't you start by taking an AWR report covering those two hours, so you can see what the bottleneck in your system is?
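    For example, assuming AWR snapshots already bracket the load window, the standard report script can be run from SQL*Plus and pointed at the two snapshots on either side of it:

        SQL> @?/rdbms/admin/awrrpt.sql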

  • Create schemas for Reporting Data Warehouse with Oracle XE

    Is it possible to import the DW house and loader schemas with an Oracle XE database?
    I receive the error ORA-00439: feature not enabled: Advanced replication in the CIM log.
    I saw that our database does not have the 'Advanced replication' feature:

        SQL> SELECT * FROM v$option WHERE parameter = 'Advanced replication';

        PARAMETER               VALUE
        ----------------------  ------
        Advanced replication    FALSE

    CIM log:
    **** info Mon Feb 23 14:16:00 BRT 2015 1424711760686 atg.cim.database.dbsetup.CimDBJobManager Top level module list for Datasource Reporting DataWarehouse: DCS.DW,ARF.DW.base,ARF.DW.InternalUsers,Store.Storefront
    **** info Mon Feb 23 14:16:05 BRT 2015 1424711765012 atg.cim.database.dbsetup.CimDBJobManager 0 of 0 imports not previously run.
    **** info Mon Feb 23 14:16:05 BRT 2015 1424711765192 atg.cim.database.dbsetup.CimDBJobManager Top level module list for Datasource Reporting Loader: DafEar.Admin,DCS.DW,DCS.PublishingAgent,ARF.base,Store.EStore,Store.EStore.International
    **** info Mon Feb 23 14:16:05 BRT 2015 1424711765733 atg.cim.database.dbsetup.CimDBJobManager 1 of 1 imports not previously run.
    **** info Mon Feb 23 14:16:05 BRT 2015 1424711765953 atg.cim.database.dbsetup.CimDBJobManager Top level module list for Datasource Publishing: DCS-UI.Versioned,BIZUI,PubPortlet,DafEar.Admin,DCS-UI.SiteAdmin.Versioned,SiteAdmin.Versioned,DCS.Versioned,DCS-UI,Store.EStore.Versioned,Store.Storefront,DAF.Endeca.Index.Versioned,DCS.Endeca.Index.Versioned,ARF.base,DCS.Endeca.Index.SKUIndexing,Store.EStore.International.Versioned,Store.Mobile,Store.Mobile.Versioned,Store.Endeca.International,Store.KnowledgeBase.International,Portal.paf,Store.Storefront
    **** info Mon Feb 23 14:16:11 BRT 2015 1424711771561 atg.cim.database.dbsetup.CimDBJobManager 65 of 65 imports not previously run.
    **** info Mon Feb 23 14:16:11 BRT 2015 1424711771722 atg.cim.database.dbsetup.CimDBJobManager Top level module list for Datasource Production Core: Store.EStore.International,DafEar.Admin,DPS,DSS,DCS.PublishingAgent,DCS.AbandonedOrderServices,DAF.Endeca.Index,DCS.Endeca.Index,Store.Endeca.Index,DAF.Endeca.Assembler,ARF.base,PublishingAgent,DCS.Endeca.Index.SKUIndexing,Store.Storefront,Store.EStore.International,Store.Recommendations,Store.Mobile,Store.Endeca.International,Store.Fluoroscope,Store.KnowledgeBase.International,Store.Mobile.Recommendations,Store.Mobile.International,Store.EStore,Store.Recommendations.International
    **** info Mon Feb 23 14:16:12 BRT 2015 1424711772473 atg.cim.database.dbsetup.CimDBJobManager 30 of 30 imports not previously run.
    **** info Mon Feb 23 14:16:19 BRT 2015 1424711779573 atg.cim.database.dbsetup.CimDBJobManager Creating Schema for Datasource Reporting DataWarehouse
    **** info Mon Feb 23 14:16:19 BRT 2015 1424711779653 atg.cim.database.dbsetup.CimDBJobManager Top level module list for Datasource Reporting DataWarehouse: DCS.DW,ARF.DW.base,ARF.DW.InternalUsers,Store.Storefront
    **** info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module ARF.DW.base, sql/db_components/oracle/arf_ddl.sql
    **** info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module ARF.DW.base, sql/db_components/oracle/arf_view_ddl.sql
    **** info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module ARF.DW.base, sql/db_components/oracle/arf_init.sql
    **** info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module DCS.DW, sql/db_components/oracle/arf_dcs_ddl.sql
    **** info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module DCS.DW, sql/db_components/oracle/arf_dcs_view_ddl.sql
    **** info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module DCS.DW, sql/db_components/oracle/arf_dcs_init.sql
    **** info Mon Feb 23 14:16:21 BRT 2015 1424711781085 atg.cim.database.dbsetup.CimDBJobManager Found 2 of 6 previously unrun tasks for Datasource Reporting DataWarehouse
    **** info Mon Feb 23 14:16:21 BRT 2015 1424711781085 atg.cim.database.dbsetup.CimDBJobManager 1 ARF.DW.base : sql/db_components/oracle/arf_view_ddl.sql
    **** info Mon Feb 23 14:16:21 BRT 2015 1424711781085 atg.cim.database.dbsetup.CimDBJobManager 2 DCS.DW : sql/db_components/oracle/arf_dcs_view_ddl.sql
    **** info Mon Feb 23 14:16:21 BRT 2015 1424711781085 /atg/dynamo/dbsetup/job/DatabaseJobManager Starting database setup job 1424711781085.
    **** Error Mon Feb 23 14:16:21 BRT 2015 1424711781516 /atg/dynamo/dbsetup/database/DatabaseOperationManager --- java.sql.SQLException: ORA-00439: feature not enabled: Advanced replication
    is there a workaround?

    Hi,
    We haven't tested or certified Oracle Commerce against Oracle XE internally.
    You need to use Oracle Enterprise Edition to get Advanced Replication.
    What version of Oracle Commerce are you installing? I cannot tell from the log snippet you have posted.
    Thanks
    Gareth

  • OBIEE reverse engineering to go from SQL Server to a data warehouse

    Hi,
    I'm new to data modeling for warehouses. We currently have an OBIEE environment set up where the data source was SQL Server transactional tables. The SQL Server data is to be moved to a non-Oracle data warehouse, and I need to produce a logical data model for the warehouse folks at my company. Unfortunately, the SQL Server data was never modeled, so I'm basing the model on the logical and physical diagrams/relationships in OBIEE.
    My question is in regards to the validity of the following relationship to be used in a data warehouse based on what's currently in OBIEE. When I model this via Erwin, I'm wondering if I'm way off base in the relationships (modeling, not personal):
    Dimension 1 has a 0:M with Dimension 2
    Dimension 1 has a 0:M with Dimension 3
    Dimension 2 has a 0:M with Dimension 3
    Both Dimension 2 and Dimension 3 have a 0:M with Fact 1
    Through the use of aliases and such, this does work in OBIEE. Will this work as a data model for a data warehouse environment?
    Thanks!

    I think you started off on the wrong foot. I suggest you search Google for "Kimball methodology" and read a few articles. Your DWH model should not be based on your transactional tables. You should ask your business users what "questions" they want to answer in the DWH, then model your DWH based on that. You cannot model a DWH without knowing what questions you need to answer. For instance, if your business users want to know the sales per day and per branch, you will need a sales fact with a sales amount measure joining to two dimensions, branch and time. The number of facts will depend on the questions you need to answer, the type of data, and their granularity.
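    As a minimal sketch of that sales example as a star schema (all names hypothetical, generic SQL):

        CREATE TABLE dim_branch (
          branch_key   INT PRIMARY KEY,
          branch_name  VARCHAR(100)
        );

        CREATE TABLE dim_date (
          date_key     INT PRIMARY KEY,   -- e.g. 20240131
          calendar_dt  DATE
        );

        CREATE TABLE fact_sales (
          date_key     INT NOT NULL REFERENCES dim_date (date_key),
          branch_key   INT NOT NULL REFERENCES dim_branch (branch_key),
          sales_amount DECIMAL(18,2)      -- the measure the business asked about
        );

    Each further business question then either reuses these dimensions or motivates a new fact or dimension, which is the essence of the Kimball approach described above.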

  • Auto Rollbacks in Data Warehouse

    Hey Guys!
    Has anyone tried using Auto undo in a data warehouse environment?
    I wonder how that plays out for large jobs, which are normally assigned a bigger rollback segment when running in manual mode (loads for instance).
    Thanks,
    Rene

    Albert,
    Correct me if I'm wrong, but I guess you meant an OLTP system, for which in my view AUTO UNDO is recommended, especially when you want to take advantage of the new Flashback Query mechanism.
    I'm rather concerned for a Data Warehouse environment where a bigger rollback segment is normally assigned to a session prior to kicking off the load. Should the UNDO RETENTION parameter be increased prior to a load in that case? Any hint will be highly appreciated.
    Thanks
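    If the retention does turn out to need raising around the load window, a minimal sketch (the 3-hour value is purely illustrative; with auto undo, the size of the undo tablespace is the other key lever):

        -- Ask Oracle to retain roughly 3 hours of undo (value in seconds)
        ALTER SYSTEM SET UNDO_RETENTION = 10800;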

  • Syntax for WriterLoginName in Data Warehouse DB

    Hello
    I'm having a few issues with our management servers writing to the Data Warehouse DB. I've checked the 'Management Group' table and can see the WriterLoginName is set to DOMAIN\sv-scom-dw; however, I'm just querying whether that field should read sv-scom-dw.
    The account is in fact a domain account. It's listed as the 'Data Warehouse SQL Account' & 'Data Warehouse Action Account' (under Administration > Run As configuration > Accounts).
    We have two entries in the database security (rights over OperationsManagerDW): one as DOMAIN\sv-scom-dw and a local SQL login called sv-scom-dw. Both accounts have the following permissions: apm_datareader, apm_datawriter, db_datareader, db_owner, OpsMgrReader, OpsMgrWriter, public.
    We're a SCOM 2012 R2 environment. All servers are 2012 R2; SQL is also 2012 Standard.
    Anyone faced a similar issue before? I'm seeing a lot of alerts in the Monitoring section for the Data Warehouse. One in particular:
    Data Warehouse failed to discover performance standard data set. Failed to enumerate (discover) Data Warehouse objects and relationships among them. The operation will be retried.
    Exception 'SqlException': Management Group with id ''5F201AB2-4B10-7FCC-C716-B2361102248D'' is not allowed to access Data Warehouse under login ''sv-scom-dw''
    One or more workflows were affected by this.
    Workflow name: Microsoft.SystemCenter.DataWarehouse.Discovery.StandardDataSet
    Instance name: Performance data set
    Instance ID: {B81C47FB-A80D-0FE5-A8DB-DC4544FC8DA6}
    Management group: ******
    As you can see from the alert the account referenced is 'sv-scom-dw' and not 'DOMAIN\sv-scom-dw'. Which is why I originally asked, should the field in the management table be updated?
    Thanks, David.
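    For what it's worth, a quick way to re-check what is actually recorded (assuming the table is dbo.ManagementGroup in the OperationsManagerDW database, per the post):

        SELECT WriterLoginName
        FROM dbo.ManagementGroup;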

    Hi guys,
    Thanks for the responses, I shall provide an event ID shortly. In response to Mai, I've followed the link you posted and I'm now checking the 'data source and related settings', so I've gone to http://localhost/reports on the Warehouse server (which also hosts the reporting), and I've got the following error:
    The report server cannot decrypt the symmetric key that is used to access sensitive or encrypted data in a report server database. You must either restore a backup key or delete all encrypted content. (rsReportServerDisabled)
    Keyset does not exist (Exception from HRESULT: 0x80090016)
    Have you come across this before?

  • Is RAC suited for Data Warehouse

    Hi,
    I have to provide the best architecture for our data warehouse project. As I googled, I found documents about Oracle RAC for data warehouses and the features Oracle RAC provides for a warehouse environment, like performance, availability...
    But I have also seen documents that say Oracle RAC is not the best choice for data warehousing.
    I want a flexible design that won't need to change after a few years, but our resources are limited: we have two IBM Power7 servers with 32 cores and 62 GB RAM each, and we estimate about 1 to 2 TB of data yearly to be loaded into the warehouse. Since each server has just 62 GB of RAM, what should we do?
    Implement Oracle RAC, or just run the warehouse as a single-instance database on one server and increase that server's resources?

    I am not sure which documents you have read that gave you the impression that RAC is for DWH. RAC is part of Oracle's high-availability solution architecture. It's not designed to cater to any specific kind of workload, but to provide HA in the case of a node or instance crash. Also, RAC itself doesn't give any performance benefit: if you have a database which is performing poorly, it will remain the same, or may even become worse, after implementing RAC, because of the many extra pieces involved in the architecture, e.g. storage, network, etc.
    So if you do want to implement RAC, give it careful thought, do some testing, and only then commit to it. If you are determined to go for RAC and you are a pure DWH shop, I would suggest going for Exadata running RAC. Smart Scan and compression are just a few of the benefits that come with Exadata, along with RAC's HA, and that would serve your requirements better.
    HTH
    Aman....

  • How to convert number datatype to raw datatype for use in data warehouse?

    I am picking up the work of another grad student who assembled the initial data for a data warehouse, mapped out a dimensional DW, and then created the initial fact and dimension tables. I am using Oracle Enterprise 11gR2. The student was new to Oracle and used datatypes of NUMBER (without a length, defaulting to NUMBER(38)) for dimension keys. The DW has 1 fact table and about 20 dimension tables at this point.
    Before refining the DW further, I have to translate all these dimension tables and convert all columns of NUMBER and NUMBER(n) (where n = 1-38) to the RAW datatype with a length. The goal is to compact the size of the DW database significantly. With only a few exceptions, every NUMBER column is a dimension key or attribute.
    The entire DW database is now sitting in a Data Pump dump file. This has to be imported into the DB instance and then somehow converted so that all occurrences of a NUMBER datatype become RAW datatypes. BTW, there are other datatypes present, such as VARCHAR2 and DATE.
    I discovered that Data Pump cannot convert NUMBER to RAW during an import or export, so the instance tables, once loaded using impdp, will be the starting point.
    I found there is a UTL_RAW package delivered with Oracle to facilitate using the RAW datatype, which has a number-to-raw function. I have never used it and am unsure how to incorporate it in the table conversions. I also hope to use OWB capabilities at some point, but I have never used it and only know that it has a lot of analytical capabilities. As a preliminary step, I have done partial imports and determined the max length of every NUMBER column, so I can alter the present schema's NUMBER columns to an appropriate max length for each column in each table.
    Right now I am not sure what the next step is. Any suggestions for the data conversion steps would be appreciated.

    Hi there,
    The post about "Convert Numbers" might help in your case. You might also be interested in "Anydata cast" or transformations.
    Thanks,
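    As a hedged sketch of the UTL_RAW route (the number-to-raw routine is UTL_RAW.CAST_FROM_NUMBER, which returns the internal representation of a NUMBER as RAW; table and column names here are hypothetical, and the actual space savings should be tested before converting all 20 tables):

        -- An Oracle NUMBER occupies at most 22 bytes internally
        ALTER TABLE dim_customer ADD (customer_key_raw RAW(22));

        -- Populate the RAW column from the existing NUMBER key
        UPDATE dim_customer
        SET customer_key_raw = UTL_RAW.CAST_FROM_NUMBER(customer_key);

        -- UTL_RAW.CAST_TO_NUMBER reverses the conversion when reading back
        SELECT UTL_RAW.CAST_TO_NUMBER(customer_key_raw) FROM dim_customer;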
