Streaming OLTP to a Data Warehouse

I am currently working on a project that uses Streams to capture changes from a highly normalized OLTP database and propagate them to a separate database organized as a flat data warehouse design. Streams seems to offer great potential for making reporting from the warehouse near real-time and for avoiding the typical nightly ETL through staging tables. Conceptually, I'd like to have a capture process on the source, an apply process with a DML handler (which changes the table name and owner, deletes unnecessary columns, and adds columns of queried data) that re-enqueues the LCR at the source, a propagation that sends the user-defined LCR generated by my DML handler package, and finally an apply process at the destination site to populate the data.
I have several components of this process working, but I can't get it all to come together. The capture process and the apply with the DML handler are no problem. But once the message is re-enqueued, the trouble begins. It seems like the message is propagating, based on the events and bytes propagated shown in DBA_QUEUE_SCHEDULES, but I never see the LCR hit the destination queue. Is there something specific that needs to be created, since this is now a "user-defined" LCR? The apply process on the destination was created with the apply_captured parameter set to FALSE, so I thought that would be enough. Do I need to create a special agent to handle this, or create a subscriber for the queue? Any help would be greatly appreciated.
Thanks,
Bill

Thanks for suggesting where to look, Patricia. I don't have any errors in the error queue, I do have data being propagated as indicated by the DBA_QUEUE_SCHEDULES view, there are no errors associated with the propagation, and my apply process is running on the destination side. However, I'm not seeing any messages being read from the queue on the destination side. The V$STREAMS_APPLY_READER view shows a state of "DEQUEUE MESSAGES", but TOTAL_MESSAGES_DEQUEUED = 0. I guess this makes sense, since I never see any data being populated in the strmadmin.streams_queue_table in the destination database. I assume that if the data were propagating correctly I'd see an entry here, since my source apply process uses a DML handler to re-enqueue the LCR for propagation, thus making it a non-captured (user-enqueued) event?
Any suggestions of what to look for next?
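
For reference, here is a hedged sketch of the re-enqueue pattern described above. It assumes a 10g-style DBMS_STREAMS_MESSAGING call (on 9i you would enqueue through DBMS_AQ instead), an ANYDATA queue named strmadmin.streams_queue, and hypothetical owner, table, and column names; it also assumes the propagation and the destination apply already have rule sets that evaluate to TRUE for user-enqueued LCRs.

-- Hedged sketch only; owner/table/queue/column names are hypothetical.
-- DML handler on the source: reshape the row LCR and re-enqueue it so the
-- propagation can send it to the warehouse site as a user-enqueued LCR.
CREATE OR REPLACE PROCEDURE strmadmin.reenq_orders_lcr(in_any IN ANYDATA) IS
  lcr SYS.LCR$_ROW_RECORD;
  rc  PLS_INTEGER;
BEGIN
  rc := in_any.GETOBJECT(lcr);                      -- unwrap the row LCR
  lcr.SET_OBJECT_OWNER('DWADMIN');                  -- retarget the owner
  lcr.SET_OBJECT_NAME('ORDERS_FLAT');               -- retarget the table
  lcr.DELETE_COLUMN('INTERNAL_NOTES', '*');         -- drop an unneeded column
  lcr.ADD_COLUMN('NEW', 'LOAD_TS',                  -- add a derived column
                 ANYDATA.CONVERTTIMESTAMP(SYSTIMESTAMP));
  DBMS_STREAMS_MESSAGING.ENQUEUE(                   -- 10g+; on 9i use DBMS_AQ.ENQUEUE
    queue_name => 'strmadmin.streams_queue',
    payload    => ANYDATA.CONVERTOBJECT(lcr));
END;
/

-- Register the handler with the local (source-side) apply process.
BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name    => 'OLTP.ORDERS',
    object_type    => 'TABLE',
    operation_name => 'INSERT',
    error_handler  => FALSE,
    user_procedure => 'strmadmin.reenq_orders_lcr');
END;
/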

Similar Messages

  • Data Warehouse: general info

    Hi all! I'm new to data warehousing, so please don't laugh; I'm reading some general guides on data warehousing but I haven't understood what the method is for exporting data from an OLTP database to a data warehouse... Can someone help me?
    Thank you very much!
    Stefano.

    Hi,
    A data warehouse is not a fundamentally different thing from an OLTP environment.
    It's just the way the database is used that differs, so the configuration of your database needs to be different for a DW than for OLTP.
    In an OLTP environment, the users make hundreds or thousands of inserts/updates/deletes on the tables every hour or minute (heavy transactional activity).
    In a data warehouse, you will have some very big transactions, but few of them. And the DW will be updated (with data from production, for example) each night, at the weekend, etc.
    So the configuration of the database will not be the same...
    Fred

  • Advice on implementing oracle streams on RAC 11.2 data warehouse database

    Hi,
    I would like a high-level overview of implementing one-way, schema-level replication within the same database using Oracle Streams on a RAC 11.2 data warehouse database.
    Are there any points that should be kept in mind before drafting the implementation plan?
    Please share your thoughts and experiences.
    Thanks in advance
    srh

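    As a hedged illustration only of the moving parts involved, a minimal schema-level rule setup might look like the sketch below. Queue, schema, and process names are hypothetical, DDL and conflict handling are ignored, and on RAC the ownership and instance affinity of the queue table also need to be considered.

    -- Hedged sketch; names are hypothetical and error handling is omitted.
    BEGIN
      -- One ANYDATA queue shared by capture and apply (same database).
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table => 'strmadmin.streams_qt',
        queue_name  => 'strmadmin.streams_queue');

      -- Capture DML changes for the source schema.
      DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
        schema_name  => 'APPDATA',
        streams_type => 'capture',
        streams_name => 'dw_capture',
        queue_name   => 'strmadmin.streams_queue',
        include_dml  => TRUE,
        include_ddl  => FALSE);

      -- Apply those changes locally (one-way, same database); in practice this
      -- is combined with a rule-based transformation that renames the schema
      -- so the changes land in the reporting schema rather than the source.
      DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
        schema_name  => 'APPDATA',
        streams_type => 'apply',
        streams_name => 'dw_apply',
        queue_name   => 'strmadmin.streams_queue',
        include_dml  => TRUE,
        include_ddl  => FALSE);
    END;
    /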

  • Service manager console can't connect to Service manager data warehouse SQL reporting services

    When I start Service manager console, it gives this kind of error:
    The Service Manager data warehouse SQL Reporting Services server is currently unavailable. You will be unable to execute reports until this server is available. Please contact your system administrator. After the server becomes available please close your
    console and re-open to view reports.
    Also in EventViewer says:
    cannot connect to SQL Reporting Services Server. Message= An unexpected error occured while connecting to SQL Reporting Services server: System.Net.WebException: The request failed with HTTP status 401: Unauthorized.
    at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall)
    at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
    at Microsoft.EnterpriseManagement.Reporting.ReportingService.ReportingService2005.FindItems(String Folder, BooleanOperatorEnum BooleanOperator, SearchCondition[] Conditions)
    at Microsoft.EnterpriseManagement.Reporting.EnterpriseReporting.FindItems(String searchPath, IList`1 criteria, Boolean And)
    at Microsoft.EnterpriseManagement.Reporting.EnterpriseReporting.FindItems(String itemPath)
    at Microsoft.EnterpriseManagement.Reporting.EnterpriseReporting.FindItem(String itemPath, ItemTypeEnum[] desiredTypes)
    at Microsoft.EnterpriseManagement.Reporting.EnterpriseReporting.GetFolder(String path)
    at Microsoft.EnterpriseManagement.Reporting.EnterpriseReportingGroup.Initialize()
    at Microsoft.EnterpriseManagement.Reporting.ServiceManagerReportingGroup..ctor(DataWarehouseManagementGroup managementGroup, String reportingServerURL, String reportsFolderPath, NetworkCredential credentials)
    at Microsoft.EnterpriseManagement.Reporting.ServiceManagerReportingGroup..ctor(DataWarehouseManagementGroup managementGroup, String reportingServerURL, String reportsFolderPath)
    at Microsoft.EnterpriseManagement.UI.SdkDataAccess.ManagementGroupServerSession.TryConnectToReportingManagementGroup() Remediation = Please contact your Administrator.
    We have a four-server setup where SCSM, SCDW, and the SQL Servers for both are on different servers. I have also read that this could be an SPN problem, but this was worked on last week without the SPNs.

    On the computer where you get the "SQL Reporting Services server is currently unavailable" message, please open Internet Explorer and try to connect to the URL http://<NameOfReportingServer>/reports
    This should open the reporting website in IE. If this isn't working, you should check the proxy settings in IE. If the URL doesn't work in IE, it won't work in the SCSM console either (and vice versa).
    Andreas Baumgarten | H&D International Group
    Actually, I can't access the reporting website. It asks me for credentials three times and then returns a blank page. An error message also appears in the Event Viewer System log with ID 4 and source Security-Kerberos.
    The Kerberos client received a KRB_AP_ERR_MODIFIED error from the server "accountname".
    The target name used was HTTP/"reporting services fqn". This indicates that the target server failed to decrypt the ticket provided by the client.
    This can occur when the target server principal name (SPN) is registered on an account other than the account the target service is using.
    Ensure that the target SPN is only registered on the account used by the server.
    This error can also happen if the target service account password is different than what is configured on the Kerberos Key Distribution Center for that target service.
    Ensure that the service on the server and the KDC are both configured to use the same password.
    If the server name is not fully qualified, and the target domain (domain.com) is different from the client domain (domain.com), check if there are identically named server accounts in these two domains,
    or use the fully-qualified name to identify the server.
    I can access the website directly from the server which hosts Reporting Services.
    Also, I ran setspn -Q HTTP/"reporting services FQDN", with the result NO SUCH SPN FOUND.

  • Why do we need SSIS and star schema of Data Warehouse?

    If SSAS in MOLAP mode stores data, what is the application of SSIS and why do we need a Data Warehouse and the ETL process of SSIS?
    I have a SQL Server OLTP database. I am using SSIS to transfer my SQL Server data from OLTP database to a Data Warehouse database that contains fact and dimension tables.
    After that I want to create cubes using SSAS from the Data Warehouse data.
    I know that MOLAP stores data. Do I really need a Data Warehouse with fact and dimension tables?
    Isn't it better to avoid creating a Data Warehouse and create cubes directly from the OLTP database?

    Another thing to note is that data stored in a transactional system may not always be in an end-user-consumable format. For example, we may use bit fields/flags to represent some details in OLTP because the storage required is minimal, but presenting them as-is would not make any sense to users, since they would not know what each bit value represents. In such cases we apply transformations and convert the data into information that users can understand. This is also done in the warehouse, so that the information in the warehouse can be used directly for reporting. Also, in many cases a report will merge data from multiple source systems; merging it on the fly in the report would be tedious and would put extra load on the report server. In comparison, bringing the data onto a common layer (the warehouse) and pre-building aggregates is beneficial for report performance.
    I think (not sure) we join tables in SSAS queries and calculate aggregations there.
    I think SSAS stores these values and joined tables, so we do not need to evaluate those values again, and this behavior is like a Data Warehouse.
    Isn't it?
    So if I do not need historical data, can I avoid creating a Data Warehouse?
    On the backend, SSAS uses queries only to extract the data.
    By the way, I was not talking about SSAS; I was explaining what happens inside the data warehouse, which is a relational database by itself. SSAS is used to build cubes (OLAP structures) on top of the data warehouse. A star schema is easier for defining relationships and building aggregations inside SSAS, as it is simple and requires minimal lookups to be performed. Also, data is held at the lowest granularity level, which can easily be aggregated to the required levels inside OLAP cubes. Cube processing is very resource intensive, and using the OLTP system directly would have a huge impact on processing performance, as it is not denormalized, and doing transformations etc. on the fly adds complexity. Pre-creating a layer (the data warehouse) that holds the data in the required format makes cube processing easier and simpler, as it only has to join the tables and aggregate data based on the relationships defined and the level needed inside the cube.
    Visakh
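
    As a hedged illustration of the bit-flag point above, the following SQL sketch decodes OLTP flags into reporting-friendly values during a dimension load; all table and column names are hypothetical.

    -- Illustrative only: decode OLTP bit flags into readable values while
    -- loading a warehouse dimension (names are hypothetical).
    INSERT INTO dw.DimCustomer (CustomerKey, CustomerName, Status, MarketingOptIn)
    SELECT c.CustomerID,
           c.Name,
           CASE c.IsActive  WHEN 1 THEN 'Active' ELSE 'Inactive' END,
           CASE c.OptInFlag WHEN 1 THEN 'Yes'    ELSE 'No'       END
    FROM   oltp.Customer AS c;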

  • Auto Rollbacks in Data Warehouse

    Hey Guys!
    Has anyone tried using Auto undo in a data warehouse environment?
    I wonder how that plays out for large jobs, which are normally assigned a bigger rollback segment when running in manual mode (loads for instance).
    Thanks,
    Rene

    Albert,
    Correct me if I'm wrong, but I guess you meant an OLTP system, for which, in my view, automatic undo is recommended, especially when you want to take advantage of the new Flashback Query mechanism.
    I'm rather concerned about a data warehouse environment, where a bigger rollback segment is normally assigned to the session prior to kicking off the load. Should the UNDO_RETENTION parameter be increased prior to a load in that case? Any hint will be highly appreciated.
    Thanks
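
    As a hedged illustration of the kind of adjustment being asked about (the value is arbitrary, and sizing the undo tablespace matters at least as much as the parameter itself):

    -- Illustrative only: raise the undo retention target before a long-running
    -- load, assuming automatic undo management and an adequately sized undo
    -- tablespace. UNDO_RETENTION is dynamic, so no restart is needed.
    ALTER SYSTEM SET UNDO_RETENTION = 10800;  -- seconds (3 hours)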

  • Data warehouse modeling

    I am stuck on some points and have no clue what I should do. If you could check with someone who has already been through this, they should know something.
    1. What do you do with flag indicators and different code attributes in your entity tables? I mean, do you include them in your dimensions?
    2. How do you handle dependent or weak entities when transferring from an ERD to a dimensional star schema? For example, my Account table has dependents (Agreement, Suitability, Qualification, Name/Address) with one-to-many relationships. How do I handle them? Should I include these entities inside the Account dimension or relate them directly to the fact?
    3. My dimensions are User, Account, Account Activity, and Time. How do we identify which are slowly changing dimensions?
    Because when an account goes through its steps there are changes made to it. After being saved it can be put into the cycle again for more changes; I mean very frequent changes. Where do I put the start date and end date if it is indeed a slowly changing dimension?
    I would really appreciate your help.
    Please reply at [email protected]

    My 2 cents: I think Gopi is about right on the OLTP side, but I have to disagree with Gopi on some of the data warehouse points.
    A data warehouse is generally a broader concept than just an OLAP/multi-dimensional model; that would be regarded as just a component of most DWs.
    Data warehouses run SQL queries all the time. I would bet the overwhelming majority of BW queries are SQL queries, even those against OLAP cubes, although MDX is starting to be used more. Operationally, SAP uses SQL to perform a lot of the processes in BW: loading data, rollup, compression, etc.
    The majority of data warehouses are perhaps in the hundreds of GBs, although large enterprises can easily have TBs of data.
    BW can incorporate real-time data from R/3 with remote cubes.
    BW has transactional InfoCubes where users enter data for budgeting, forecasting, etc.
    You can Google these topics and find lots of info on data warehouse design.
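
    For the slowly-changing-dimension question above, a minimal Type 2 sketch (hypothetical names and columns) keeps the effective start and end dates on the dimension row itself and adds a new row whenever a tracked attribute changes:

    -- Illustrative Type 2 account dimension; names are hypothetical.
    CREATE TABLE dim_account (
      account_key     NUMBER       PRIMARY KEY,  -- surrogate key
      account_id      NUMBER       NOT NULL,     -- business key from the OLTP system
      account_status  VARCHAR2(30),
      suitability     VARCHAR2(30),
      effective_start DATE         NOT NULL,
      effective_end   DATE,                      -- NULL = current version
      is_current      CHAR(1)      DEFAULT 'Y'
    );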

  • Designer Vs. Oracle Data warehouse builder

    Dear all,
    Currently I'm responsible for building a data warehousing project using an Oracle database. I'm trying to decide on a tool for modelling my data warehouse. I have two options:
    1) Designer: we have some experience with this tool and we are using it for our main OLTP application.
    2) Oracle Data Warehouse builder: we are using this to design our ETL processes.
    I want to get some advice on whether OWB is capable of modelling my data warehouse and of doing a retrofit action. I am also trying to standardize on the tools used in the Data Warehouse department (currently we are using only OWB).
    I will appreciate any other advice to help in my selection process.
    Best Regards,
    Bilal

    Hi,
    In my experience this choice depends on the implementation of the datawarehouse. If you are building a "pure" Kimball style dimensional data warehouse you should be able to do this using OWB. I have architected such a DW in the past using only OWB, so I am speaking from experience.
    If on the other hand you are planning to implement an Inmon style CIF, if your requirements includes an operational data store (ODS), or if you for any other reason anticipate that you are going to be doing a lot of ER modeling, then I would not recommend using the current release of OWB for modelling. (Note however that there are significant improvements to the modelling capabilities in the Paris release of OWB, so this may change in the future)
    The advantage of improved maintainability when using a single tool needs to be weighed against the improved functionality if you choose a combination of the two. In the "two tool" scenario, strict development and deployment routines need to be enforced to avoid the model in Designer getting out of sync with the metadata in OWB. (Consider the effect of a developer making a change to a table definition in OWB and deploying it directly to the database without updating the model in Designer.)
    Hope this helps.
    Regards,
    Roald

  • Where to find best practices for tuning data warehouse ETL queries?

    Hi Everybody,
    Where can I find some good educational material on tuning ETL procedures for a data warehouse environment? Everything I've found on the web regarding query tuning seems to be geared only toward OLTP systems. (For example, most of our ETL queries don't use a WHERE clause, so the vast majority of accesses are table scans and index scans, whereas most index tuning sites are striving for index seeks.)
    I have read Microsoft's "Best Practices for Data Warehousing with SQL Server 2008R2," but I was only able to glean a few helpful hints that don't also apply to OLTP systems:
    often better to recompile stored procedure query plans in order to eliminate variances introduced by parameter sniffing (i.e., it is better to use the right plan every time than to save a few seconds by sometimes reusing a cached plan);
    partition tables that are larger than 50 GB;
    use minimal logging to load data precisely where you want it as fast as possible;
    often better to disable non-clustered indexes before inserting a large number of rows and then rebuild them immediately afterward (sometimes even for clustered indexes, but test first);
    rebuild statistics after every load of a table.
    But I still feel like I'm missing some very crucial concepts for performant ETL development.
    BTW, our office uses SSIS, but only as a glorified stored procedure execution manager, so I'm not looking for SSIS ETL best practices. Except for a few packages that pull from source systems, the majority of our SSIS packages consist of numerous "Execute SQL" tasks.
    Thanks, and any best practices you could include here would be greatly appreciated.
    -Eric
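
    As a hedged sketch of the disable/reload/rebuild pattern listed above (all schema, table, and index names are hypothetical):

    -- Illustrative only: disable a non-clustered index, bulk load, rebuild it,
    -- then refresh statistics on the freshly loaded table.
    ALTER INDEX IX_FactSales_DateKey ON dbo.FactSales DISABLE;

    INSERT INTO dbo.FactSales WITH (TABLOCK)   -- TABLOCK helps qualify for minimal logging
           (DateKey, CustomerKey, SalesAmount)
    SELECT DateKey, CustomerKey, SalesAmount
    FROM   staging.SalesLoad;

    ALTER INDEX IX_FactSales_DateKey ON dbo.FactSales REBUILD;
    UPDATE STATISTICS dbo.FactSales WITH FULLSCAN;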

    Online ETL solutions are really among the most challenging. To do that efficiently, you can read my blog posts on online DWH solutions to see how to configure an online DWH solution for ETL using the MERGE command of SQL Server 2008, and to learn some important concepts that apply to any DWH solution, such as indexing, de-normalization, etc.
    http://www.sqlserver-performance-tuning.com/apps/blog/show/12927061-data-warehousing-workshop-1-4-
    http://www.sqlserver-performance-tuning.com/apps/blog/show/12927103-data-warehousing-workshop-2-4-
    http://www.sqlserver-performance-tuning.com/apps/blog/show/12927173-data-warehousing-workshop-3-4-
    Kindly let me know if any further help is needed
    Shehap (DB Consultant/DB Architect)

  • Oracle for a Data Warehouse & Data Mining Project

    Hello,
    I am working for a marketing company and we have a pretty big OLTP database, and my supervisor wants to make use of this data for decision making. The plan is to create a data mart (if not a warehouse) and use data mining tools on top of it.
    He does not want to buy one of those expensive tools, so I was wondering if we could handle such a project just by downloading OWB and Darwin from the Oracle site. None of us are data warehouse specialists, so it will be very tough for us, but I would like to learn. Actually, I was looking for some example warehouse + mining environment implementations to get the main ideas. I will appreciate any suggestions and comments.
    Thank you

    Go to
    http://www.oracle.com/ip/analyze/warehouse/datamining/
    for white papers, demos, etc. as a beginning.
    Also, Oracle University offers a course on Oracle Data Mining.

  • Differences between operational systems data modeling and data warehouse data modeling

    Hello Everyone,
    Can anybody help me understand the differences between operational systems data modeling and data warehouse data modeling?
    Thanks

    Hello A S!
    Do you mean the difference between modelling in normal forms, as in operational systems (OLTP), e.g. 3NF, and modelling an InfoCube in a data warehouse (OLAP)?
    While in an OLTP system you want data tables free of redundancy and ready for transactions, meaning writing and reading a few records at a time, often, in an OLAP system you need to read a lot of data for every query you run against the database. Often in an OLAP system you aggregate these amounts of data.
    Therefore you use another principle for the database schema, called a star schema. This means that you have one central table (called the fact table) which holds the key figures and has keys to other tables with the characteristics. These other tables are called dimension tables; they hold combinations of the characteristics. Normally you design it so that your dimensions are small, so access to the data is more efficient.
    The star schema in SAP BI is a little more complex than explained here, but it follows the same concept.
    Best regards,
    Peter
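
    As a hedged illustration of the star schema described above (table and column names are hypothetical): one central fact table holds the key figures, and small dimension tables hold the characteristics.

    -- Illustrative star schema; names are hypothetical.
    CREATE TABLE dim_date (
      date_key      NUMBER PRIMARY KEY,
      calendar_date DATE,
      month_name    VARCHAR2(20)
    );

    CREATE TABLE dim_product (
      product_key   NUMBER PRIMARY KEY,
      product_name  VARCHAR2(100),
      category      VARCHAR2(50)
    );

    CREATE TABLE fact_sales (
      date_key      NUMBER REFERENCES dim_date (date_key),
      product_key   NUMBER REFERENCES dim_product (product_key),
      quantity      NUMBER,
      sales_amount  NUMBER(12,2)
    );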

  • Example database for data warehouse

    Hi!
    Does anybody know where an example database for a data warehouse can be found?
    - schema is good
    - schema with data is better
    Best regards

    slkLinuxUser wrote:
    Hi!
    Does anybody know where an example database for a data warehouse can be found?
    - schema is good
    - schema with data is better
    Best regards

    Just like an OLTP database, the schema design and its data are 100% dependent on the business needs and (if done properly) the result of a thorough data analysis. Any kind of pre-designed sample would be nearly worthless to any actual application.

  • TUNING DATA WAREHOUSE DATABASE INSTANCE

    Hi,
    I have to tune one of our data warehouse database instances.
    Any advice for tuning this instance?
    How different is tuning a data warehouse instance from tuning a normal instance?
    Regards

    First of all, touch nothing until you understand what your users are doing with the data warehouse, when they are doing it and what their expectations are.
    Secondly, remember that a data warehouse is, generally, much bigger than an OLTP database. This changes the laws of physics. Operations you might expect to take a few minutes might take days. This means you need to be completely certain about what you do in production before you do it.
    Thirdly, bear in mind that a lot of data warehouse tuning techniques involve physical design objects (different types of indexes, partitioning) rather than query tweaking. These things are easier to get right at the start than to retrofit onto large volumes of data.
    Good luck, APC
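
    As a hedged illustration of the physical-design levers mentioned above (names, columns, and partition boundaries are hypothetical): range partitioning of a fact table and a locally partitioned bitmap index on a low-cardinality column.

    -- Illustrative only; names, columns, and boundaries are hypothetical.
    CREATE TABLE sales_fact (
      sale_date  DATE,
      channel_id NUMBER,
      amount     NUMBER(12,2)
    )
    PARTITION BY RANGE (sale_date) (
      PARTITION p2023 VALUES LESS THAN (DATE '2024-01-01'),
      PARTITION p2024 VALUES LESS THAN (DATE '2025-01-01')
    );

    CREATE BITMAP INDEX sales_fact_chan_bix ON sales_fact (channel_id) LOCAL;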

  • How create oracle data Warehouse

    Hi master,
    Sir, I am using Oracle 9i. How do I create a data warehouse, and how do I use Oracle Portal for the web?
    Please give me an idea of how to start and which software to use.
    Thanks,
    aamir

    Do you want to create a data warehouse database, or do you want to learn how to design a data warehouse?
    Creating a data warehouse database is no different from creating an OLTP database; the main thing to keep in mind is that DWs are mostly read-only, so you should set initialization parameters suited to a DW.
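
    As a hedged illustration of the kind of parameters meant above (the values are arbitrary starting points, not recommendations, and on 9i some names and defaults differ):

    -- Illustrative only: a few warehouse-oriented initialization parameters.
    ALTER SYSTEM SET star_transformation_enabled = TRUE SCOPE = SPFILE;
    ALTER SYSTEM SET parallel_max_servers = 16 SCOPE = SPFILE;
    ALTER SYSTEM SET db_file_multiblock_read_count = 32 SCOPE = SPFILE;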

  • Building a data warehouse

    Good Morning Buddys,
    Actually I need to build a data warehouse and I have some doubts:
    1. In the OLTP I have these tables (AdventureWorks):
    Production.Location
    LocationID int primary key 
    Name nvarchar(50)
    CostRate smallmoney
    Availability decimal(8,2)
    ModifiedDate datetime
    LocationFK int -- example
    Production.ProductCategory
    ProductCategoryID int primary key
    Name nvarchar(50)
    rowguid uniqueidentifier
    ModifiedDate datetime
    ProductCategoryFK int -- example
    Production.ProductModel
    ProductModelID int primary key
    Name nvarchar(50)
    CatalogDescription xml
    Instructions xml
    rowguid uniqueidentifier
    ModifiedDate datetime
    ProductModelFK int -- example
    When I transfer to staging, do I need to include the foreign key columns? I mean as varchar, not int (foreign key), for example:
    STAGING_Production.Location
    LocationID varchar(20)
    Name nvarchar(50)
    CostRate varchar(20)
    Availability varchar(20)
    ModifiedDate varchar(20)
    LocationFK varchar(20) - is this correct?
    STAGING_Production.ProductCategory
    ProductCategoryID varchar(20)
    Name nvarchar(50)
    rowguid varchar(20)
    ModifiedDate varchar(20)
    ProductCategoryFK varchar(20) - is this correct?
    I have no idea whether it is necessary to transfer the foreign key columns to staging, but with another data type...
    Can someone help me, please?

    OK, to clarify:
    1. In the OLTP I have these tables (AdventureWorks):
    Production.Location
    LocationID int primary key 
    Name nvarchar(50)
    CostRate smallmoney
    Availability decimal(8,2)
    ModifiedDate datetime
    Production.ProductCategory
    ProductCategoryID int primary key
    Name nvarchar(50)
    rowguid uniqueidentifier
    ModifiedDate datetime
    LocationFK int - example
    Production.ProductModel
    ProductModelID int primary key
    Name nvarchar(50)
    CatalogDescription xml
    Instructions xml
    rowguid uniqueidentifier
    ModifiedDate datetime
    ProductCategoryIDFK int -- example
    When I transfer to staging, do I need to include the foreign key columns? I mean as varchar, not int (foreign key), for example:
    STAGING_Production.Location
    LocationID varchar(20)
    Name nvarchar(50)
    CostRate varchar(20)
    Availability varchar(20)
    ModifiedDate varchar(20)
    STAGING_Production.ProductCategory
    ProductCategoryID varchar(20)
    Name nvarchar(50)
    rowguid varchar(20)
    ModifiedDate varchar(20)
    LocationFK varchar(20) - which data type? - example
    Production.ProductModel
    ProductModelID int primary key
    Name nvarchar(50)
    CatalogDescription varchar(20)
    Instructions varchar(20)
    rowguid varchar(20)
    ModifiedDate varchar(20)
    ProductCategoryIDFK varchar(20) -- example
    What is my doubt?
    Do all the key columns from the OLTP (primary key and foreign key) also exist in my OLAP/staging tables, using data type varchar(20), in columns such as:
    ProductCategoryIDFK varchar(20)
    LocationFK varchar(20)
    Because I have no idea how I can load the fact table without the relationships...
    Do you understand?
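
    As a hedged sketch of how a fact load typically resolves those relationships (all schema, table, and column names are hypothetical): the staging tables only carry the source key columns as plain data, and the surrogate keys are looked up from the dimensions at load time, so no foreign key constraints are needed in staging.

    -- Illustrative only: resolve source business keys to dimension surrogate keys.
    INSERT INTO dw.FactProduct (ProductModelKey, ProductCategoryKey, LoadDate)
    SELECT pm.ProductModelKey,
           pc.ProductCategoryKey,
           GETDATE()
    FROM   staging.ProductModel    AS s
    JOIN   dw.DimProductModel      AS pm ON pm.ProductModelID    = s.ProductModelID
    JOIN   dw.DimProductCategory   AS pc ON pc.ProductCategoryID = s.ProductCategoryIDFK;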
