OCDM for OLTP database?

Hi all,
Can we use OCDM for a standard OLTP database (as a standard data model to standardize the OLTP schema, e.g. sales, customer, billing), or is OCDM only for data warehouse systems?
Rgds.
VP.

Dear VP,
Sorry for the delay in answering your note.
OCDM is normally planned as an Enterprise Data Warehouse that stores relevant transactions from OLTP systems (Billing, CRM, Customer Care, Trouble Ticket, Point of Sale, Network Management, or even Network Element statistics, ...). To use OCDM as a standard OLTP DB, you would in fact use only the foundation layer (atomic granularity) for the applications you want (while automatically benefiting from the analytics part).
I don't see any strong arguments against your idea, provided you accept some assumptions and maybe some "warnings":
- I assume you would develop your applications on top of OCDM. Leveraging an existing application would mean you would have to either adapt OCDM to fit the application or vice versa.
- The SID certification of OCDM, covering over 130 business entities (ABEs) in 6 domains, ensures that you could map most eTOM processes with your application.
- You can start in a small area and extend slowly as requirements grow.
- From a usage perspective, you should make sure the structure fits your expectations around history. A DWH does not like "update" (for performance reasons). It stores the object and each status change. Hence, you might keep more data than you would like, especially if you start updating events (like customer orders). So you must decide whether you will update or insert only, and plan your hardware accordingly (plus Information Lifecycle Management to drop old or useless data).
- From a data-loading perspective, you will face the same issues as with a standard DWH. OCDM expects clean data. Therefore, you must do some data quality checks and data cleansing in the staging area (and certainly adapt the default lookup values used by OCDM or provided by the data sources). A simple example: I can store raw CDRs in OCDM, but they will not load if the expected fields are not in the right place or if you have duplicated events, which can easily happen in raw CDRs!
- From a hardware perspective, depending on how big you plan your applications, Exadata could be ideal because it can support a mixed load of complex and regular queries, on top of benefiting from Hybrid Columnar Compression and other nice features.
- One advantage is that you could leverage the analytics part (segmentation, CLV, sentiment analysis, prepaid and postpaid churn, targeted campaigns, call center statistics, revenue and traffic forecast, ...) in your system by default, on top of having the tools to further develop your own analytics to support the process.
- You should also think upfront about dealing with the regular (at least once a year) OCDM version upgrade, and about support: we can support OCDM as the product that is sold, not your application on top of it.
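To make the history point above concrete, a minimal insert-only sketch (table and column names are invented for illustration, not OCDM's actual schema) stores each status change as a new row instead of updating in place:

```sql
-- Insert-only history: every status change of a customer order
-- becomes a new row; the current state is the latest row per order.
CREATE TABLE order_status_hist
( order_id    NUMBER       NOT NULL,
  status_cd   VARCHAR2(20) NOT NULL,
  valid_from  DATE         NOT NULL );

-- New status: INSERT, never UPDATE.
INSERT INTO order_status_hist VALUES (1001, 'CREATED', DATE '2012-01-01');
INSERT INTO order_status_hist VALUES (1001, 'SHIPPED', DATE '2012-01-05');

-- Current status = most recent row per order.
SELECT order_id, status_cd
FROM  (SELECT o.*,
              ROW_NUMBER() OVER (PARTITION BY order_id
                                 ORDER BY valid_from DESC) rn
       FROM order_status_hist o)
WHERE rn = 1;
```

The table grows with every status change, which is exactly why the hardware and ILM planning mentioned above matters.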
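For the staging-area cleansing point, one common dedup sketch (table and column names are invented, not OCDM's actual staging schema) keeps a single row per natural event key before loading:

```sql
-- Remove duplicated CDR events in staging before loading: keep one
-- row per (calling number, called number, call start time).
DELETE FROM stg_cdr
WHERE rowid IN (
  SELECT rid
  FROM  (SELECT rowid AS rid,
                ROW_NUMBER() OVER (
                  PARTITION BY calling_nbr, called_nbr, call_start_ts
                  ORDER BY load_ts) rn
         FROM stg_cdr)
  WHERE rn > 1);
```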
Feel free to contact the OCDM Product Management team directly (via your Oracle CGBU or tech sales rep) for a specific development agreement if you see further interest in such a development.
Best regards
Axel.

Similar Messages

  • Data archival and purging for OLTP database

    Hi All,
    Need your suggestions regarding a data archival and purging solution for an OLTP DB.
    Currently, we are planning to generate flat files from the tables before purging the inactive data, move them to tapes/disks for archiving, and then purge the data from the system. We have many retention requirements and conditions before archival, so partitioning alone is not sufficient.
    Is there any better approach for archival and purging than this flat-file approach?
    thank you.
    regards,
    vara

    FBDA (Flashback Data Archive) is the better option. Check the link below:
    http://www.oracle.com/pls/db111/search?remark=quick_search&word=flashback+data+archive
    Good luck
    --neeraj
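    For reference, enabling Flashback Data Archive on 11g looks roughly like this (the archive, tablespace, and table names below are illustrative, not from the thread):

    ```sql
    -- Create a flashback archive with a retention policy.
    CREATE FLASHBACK ARCHIVE orders_fba
      TABLESPACE fba_ts
      QUOTA 10G
      RETENTION 5 YEAR;

    -- Track history for a table; old row versions are kept for
    -- 5 years and purged automatically afterwards.
    ALTER TABLE orders FLASHBACK ARCHIVE orders_fba;

    -- Query the data as of a past point in time.
    SELECT * FROM orders
    AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '30' DAY);
    ```

    Unlike the flat-file approach, retention and purging are then handled by the database itself.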

  • What are the BEST books for Oracle database architect/designer?

    What concrete books would you recommend for an OLTP database developer (starting from scratch: data source analysis, logical and physical data modeling, indexes, tuning, maintenance)? It doesn't have to be a book specifically about Oracle, as long as it is suitable for it.
    I don't mean books for DBAs or general Oracle handbooks, and nothing for OLAP either.
    Thanks!

    For learning how to use Oracle database effectively, i would say
    Tom Kyte's both books:
    Effective Oracle by design
    & Expert Oracle Database Architecture
    Jonathan Lewis's
    Practical Oracle 8i
    They tell you all the stuff: what/how to do something, and most importantly, what/how not to do it.
    And their writing style is just awesome :)
    Amardeep Sidhu

  • Exadata for OLTP

    I was reading about Exadata and am confused about whether it provides anything for OLTP databases, where only minimal rows are retrieved using indexes, thereby making Exadata Smart Scan and storage indexes useless. The advantages I can think of are the high-speed flash cache and flash logging features.
    But can't these be obtained by using any other high-speed machine and high-speed disks, like SSDs used as database flash (an 11g feature)? Can you shed some light on this topic?
    Thanks
    sekar

    Hi,
    migrating to Exadata could be beneficial for an OLTP system: you could fit an entire database of up to 22 TB into the Exadata Smart Flash Cache, and you get other nice things like InfiniBand, smart scans (which can be useful for OLTP as well), HCC compression, etc.
    It's just that it won't be as beneficial as for DSS or mixed systems, and it would cost a lot. I think that if you don't have an analytic component on the top of your OLTP, and if you don't require things like High Availability etc. then you may be better off with a regular Oracle 12c database on SSD storage.
    But these are just very basic considerations, details depend on your requirements. You will need to sit down and calculate costs for different options, then compare them.
    I would also recommend reviewing the database thoroughly -- it may be possible to achieve the required performance by tuning rather than by hardware upgrades. You could save your company hundreds of thousands of dollars if you do that.
    Best regards,
      Nikolay

  • Configuring raid 0+1 for an oltp database of 1 terabyte size on centos 4.5

    Hi all,
    I have to configure RAID 0+1 for an OLTP database of 1 terabyte in size on CentOS 4.5.
    Can anyone please suggest a step-by-step configuration or a link?
    Thanks and Regards
    Edited by: DBA24by7 on Mar 15, 2009 2:20 PM

    > it is centos 4.5 which is almost like redhat linux.
    And thus completely unsupported by Oracle - which begs the question as to why anyone would bother to go to the expense of setting up a RAID configuration for an unsupported database?
    Anyway, you should be using RAID 1+0
    see here: http://www.acnc.com/04_01_10.html
    Paul... (lots of RAID questions today!)

  • Requirements for High-Load OLTP Database

    Hi guys!
    Need your Best Practise!
    I will install & configure a high-load OLTP database:
    5 million users
    500 transactions per second
    What requirements are needed?
    Do you have any papers or documents?

    Denis :) wrote:
    > 5 million users
    Concurrent users?
    > 500 transactions per second
    SQL> SELECT 500*60*60/5000000 FROM dual;

    500*60*60/5000000
    -----------------
                  .36

    So each user does roughly one transaction every three hours.
    How big is a single transaction?
    How much redo is generated every day?
    > What requirements are needed?
    More hardware is better!
    > Do you have any papers or documents?

  • UNDO Management - Config changes for OLTP and large background processes

    Hi,
    I am running 9i on a 2 node RAC cluster - Sun Solaris.
    I use Automatic Undo.
    The database is currently configured for OLTP and has 2 undo datafiles of 1 GB in size, with undo_retention set to the default (900 seconds).
    I am about to start scheduling a batch job to run each night that will delete approx 1 million records each time it runs.
    To get this to work without the dreaded ORA-30036: unable to extend segment by 8 in undo tablespace 'UNDO01', I have increased the undo datafiles to 5 GB and undo_retention to 4000.
    The question is do I need to worry about switching between these settings - using the smaller values for daytime OLTP and the nighttime settings for the heavy processing?
    What issues might I encounter if I just left the UNDO Management settings in place that I use for the large delete?

    I would say no; leave the settings at the highest level required to accomplish the work of the instance. Once the settings are correct for the instance, you should not have to change them around. They are really maximum settings for the heaviest load on your instance.
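    As a rough sketch, the peak undo generation rate can be read from V$UNDOSTAT to size the tablespace for the batch window (the block size and retention figures below are illustrative assumptions):

    ```sql
    -- Peak undo blocks generated per second across the sampled
    -- intervals (V$UNDOSTAT keeps 10-minute buckets). Multiply by
    -- the undo block size and the desired undo_retention to
    -- estimate the required undo tablespace size.
    SELECT MAX(undoblks / ((end_time - begin_time) * 86400))
           AS undo_blocks_per_sec
    FROM   v$undostat;

    -- Example sizing, assuming an 8 KB block size and 4000 s retention:
    --   required_bytes ~ undo_blocks_per_sec * 8192 * 4000
    ```

    Sizing for the nightly delete this way avoids having to switch settings between day and night.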

  • Resource estimation/Sizing (i.e CPU and Memory) for Oracle database servers

    Hi,
    I have come across a requirement for Oracle database server sizing in terms of CPU and memory. Does anybody have Metalink notes or a white paper with a basic estimation or calculation of resources (i.e. CPU and RAM) based on database size, number of concurrent connections/sessions, and/or number of transactions?
    I have searched a lot on Metalink but failed to find one; it would be a great help if anybody has an idea on this. I'm quite sure something must exist, because to start implementing IT infrastructure one has to estimate resources in line with the IT budget.
    Thanks in advance.
    Mehul.

    You could start the other way around, if you already have a server is it sufficient for the database you want to run on it? Is there sufficient memory? Is it solely a database server (not shared)? How fast are the disks - SAN/RAID/local disk? Does it have the networking capacity (100mbps, gigabit)? How many CPUs, will there be intensive SQL? How does Oracle licensing fit into it? What type of application that will run on the database - OLTP or OLAP?
    If you don't know if there is sufficient memory/CPU then profile the application based on what everyone expects, again, start with OLTP or OLAP and work your way down to the types of queries/jobs that will be run, number of concurrent users and what performance you expect/require. For an OLAP application you may want the fastest disks possible, multiple CPUs and a large SGA and PGA (2-4GB PGA?), pay a little extra for parallel server and partitioning in license fees.
    This is just the start of an investigation, then you can work out what fits into your budget.
    Edited by: Stellios on Sep 26, 2008 4:53 PM

  • Create Table with Compress for OLTP and error ORA-14464

    Hello,
    I have an Oracle 11.2 database and want to use Advanced Compression.
    I want to create a table:
    CREATE TABLE TD_GE_1990
    ( "name_id"  NUMBER(1,0),
      "name_txt" VARCHAR2(100 BYTE)
    ) COMPRESS FOR OLTP;
    But I get:
    SQL Error: ORA-14464: Compression Type not specified
    The "compatible" parameter is set to 11.1:
    SELECT value
    FROM gv$parameter
    WHERE name LIKE '%compatible%';
    11.1.0.0.0
    Do I have to change something in the database?
    Best regards
    Heidi

    14464, 00000, "Compression Type not specified"
    // *Cause: Compression Type was not specified in the Compression Clause.
    // *Action: specify Compression Type in the Compression Clause.

  • In Oracle RAC environment which is OLTP database? load balancing advantage.

    In an Oracle RAC environment with an OLTP database, what are the options for load balancing, along with their advantages?

    You can use a software load balancer.
    https://forums.oracle.com/forums/search.jspa?threadID=&q=Software+AND+Load+AND+Balancer&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
    Installing and Configuring Web Cache 10g and Oracle E-Business Suite 12 [ID 380486.1]
    Thanks,
    Hussein

  • Universe on an OLTP database

    Hi All,
    I am using BO XIR2.
    I wanted to know the best way/approach to create a universe on an OLTP database.
    The database is highly normalized.
    I pulled in the required tables and added joins. In order to resolve loops I included contexts and some aliases.
    Now when I create a test report, it results in mostly incompatible objects.
    I think the approach I am using to create the universe is that of a data warehouse (denormalized) database, and thus it results in a bad structure.
    Can anyone help me with the approach to follow to create a universe on a highly normalized database?
    Are there any restrictions in such a universe for reporting?
    Please help.
    Thanks in advance,
    Edited by: BO User on Sep 16, 2009 10:58 PM

    Hi,
    The objects in a context should be under the same class.
    Since you are using an OLTP database, you will not have a proper dimensional data model as you would in a data warehouse.
    Try to include the necessary joins for your reports and remove unnecessary joins, as they will lead to more loops.
    Try to use views wherever needed.
    Avoid using too many derived tables in your universe, as they will lead to performance issues. Use derived tables when you are not able to achieve something with the normal join approach and you need a quick solution.
    Regards,
    Santhosh

  • Is 10g RAC ready for OLTP

    To achieve high availability by eliminating single points of failure in the following areas, we are thinking of putting our OLTP DBs on a single 10g RAC cluster:
    i) OS/Firmware patches requiring reboots
    ii) Unplanned server failures
    iii) One-off Oracle patches
    We have migrated our DSS systems to 10g RAC (Windows x64). However, in the 9 months since we deployed, we have seen 2 issues: a single-node eviction and multiple node evictions. The single-node eviction is supposedly fixed with a patch that requires a cluster-wide shutdown.
    My baseline for OLTP is 8i, where I have taken downtime once in 2 years to apply Oracle patches, once a year for OS patches, and seen very rare server failures resulting in DB failover.
    Questions I have:
    a) Is 10g RAC really stable to be used for OLTP?
    b) How is this being designed elsewhere with a view to reducing planned/unplanned downtime?
    thanks,
    SM

    > a) Is 10g RAC really stable to be used for OLTP?
    Loaded question as you are implying that until now, RAC has not been stable and not robust enough for OLTP.
    Stability for any system is dependent on:
    - platform h/w
    - storage h/w
    - network h/w
    - o/s s/w
    - application s/w
    - administration
    What RAC buys you is having multiple database instances for a single physical database. This means that in the worst case, where you are forced to take a platform down for one of the above reasons, the remaining platforms in the cluster should still be available.. courtesy of the shared-everything approach.
    But RAC alone is not the answer.. there are numerous factors to consider. One of my longest-uptime databases is an Oracle SE server with a 12,000+ uptime. And it is used 24x7 as a data collection platform.
    It went down recently. The cause? Network errors and power failures that caused the rack cabinet housing this server to be reset.
    I have numerous examples of how unforeseen events caused disaster in a computer room, from dirty electrical power to an automated aircon switchover failing.
    RAC does not solve any of these. What happens when there is a power failure or h/w error in the switch used for the interconnect? Without the nodes being able to communicate with one another, all nodes will evict themselves from the cluster.
    Looking at RAC alone as The Solution to your H/A requirement is a bit naive IMO. Yes, RAC is an excellent and major cog in the wheel of H/A.. but there are others too.
    Q. Is 10g RAC really stable to be used for OLTP?
    A. As stable and as robust as you make it to be.

  • Purpose of a T-SQL HEAP structure in an OLTP database

    Hello,
    In an OLTP database when is a HEAP (table with no clustered indexes) to be preferred over a table with a clustered index?
    Same question but for OLAP database.
    TIA,
    edm2

    For an OLAP database I would add a clustered index to any table that needs to produce sorted results. This way, the data is already pre-sorted (by the clustered index key), saving a lot of time when the query is actually run. This becomes more important as huge numbers of rows are returned by your query.
    I have also seen people create indexed views in an OLAP database to speed up aggregation.
    Best Regards, Uri Dimant, SQL Server MVP
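    A minimal T-SQL sketch of the heap-versus-clustered-index trade-off (table and index names are invented for illustration):

    ```sql
    -- Heap: no clustered index. Good for append-heavy staging/loading
    -- in OLTP, since inserts need not maintain any physical order.
    CREATE TABLE dbo.StageEvents
    ( EventId  BIGINT       NOT NULL,
      Payload  VARCHAR(500) NOT NULL );

    -- Clustered index: rows are stored sorted by SaleDate, so range
    -- scans and ORDER BY SaleDate avoid a sort - the usual choice
    -- for OLAP fact tables queried by date ranges.
    CREATE TABLE dbo.FactSales
    ( SaleDate DATE  NOT NULL,
      Amount   MONEY NOT NULL );

    CREATE CLUSTERED INDEX CIX_FactSales_SaleDate
      ON dbo.FactSales (SaleDate);
    ```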

  • Compress for OLTP and ORA-14464

    Hello,
    I have an Oracle 11.2 database and want to use Advanced Compression.
    I want to create a table:
    CREATE TABLE TD_GE_1990
    ( "name_id"  NUMBER(1,0),
      "name_txt" VARCHAR2(100 BYTE)
    ) COMPRESS FOR OLTP;
    But I get:
    SQL Error: ORA-14464: Compression Type not specified
    The "compatible" parameter is set to 11.1:
    SELECT value
    FROM gv$parameter
    WHERE name LIKE '%compatible%';
    11.1.0.0.0
    Do I have to change something in the database?
    Best regards
    Heidi

    This post is related to the following thread and was resolved by changing the compatible parameter to 11.2.0:
    http://translate.google.co.in/translate?hl=en&sl=ko&u=http://kr.forums.oracle.com/forums/thread.jspa%3FthreadID%3D1594232%26tstart%3D345&ei=_u7QTKCwKIOfcbqxqdgL&sa=X&oi=translate&ct=result&resnum=2&ved=0CCMQ7gEwAQ&prev=/search%3Fq%3DORA-14464:%2BKompressionstyp%2Bnicht%2Bangegeben%26hl%3Den%26client%3Dsafari%26rls%3Den
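    A minimal sketch of that fix, assuming an spfile is in use (note that raising compatible requires a restart and cannot be undone):

    ```sql
    -- COMPRESS FOR OLTP needs compatible >= 11.2.0; with a lower
    -- setting the parser rejects the clause with ORA-14464.
    ALTER SYSTEM SET compatible = '11.2.0' SCOPE = SPFILE;

    -- Restart the instance (SHUTDOWN IMMEDIATE; STARTUP;), then the
    -- original DDL works:
    CREATE TABLE TD_GE_1990
    ( "name_id"  NUMBER(1,0),
      "name_txt" VARCHAR2(100 BYTE)
    ) COMPRESS FOR OLTP;
    ```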

  • OLTP Database and DW Database Creation

    Hi,
    Are there any specific parameters we need to take care of when creating a database instance on an Exadata machine for OLTP and DW types?
    Thanks,

    Hi Friend,
    Good query.
    Recommended for DWH:
    1. Real Application Clusters
    2. Partitioning
    Recommended for OLTP:
    1. Real Application Clusters
    2. Advanced Compression
    And you should evaluate all the database options to determine their value for your customer's specific situation.
    In terms of database options, IORM (I/O Resource Manager) + DBRM (DB Resource Manager) play very big roles in prioritizing activities in OLTP and DWH environments based on load.
    Hope it helps...
    Thanks
    LaserSoft
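    As an illustrative sketch of the DBRM side (plan, group, and percentage values are invented, not a recommendation), a simple plan giving OLTP work priority over reporting might look like:

    ```sql
    BEGIN
      DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.CREATE_PLAN(
        plan => 'MIXED_PLAN', comment => 'OLTP first, reports second');
      DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
        consumer_group => 'OLTP_GRP', comment => 'interactive work');
      DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
        consumer_group => 'REPORT_GRP', comment => 'batch reports');
      -- CPU shares at management level 1; OTHER_GROUPS is mandatory.
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan => 'MIXED_PLAN', group_or_subplan => 'OLTP_GRP',
        comment => 'OLTP gets 70%', mgmt_p1 => 70);
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan => 'MIXED_PLAN', group_or_subplan => 'REPORT_GRP',
        comment => 'reports get 20%', mgmt_p1 => 20);
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan => 'MIXED_PLAN', group_or_subplan => 'OTHER_GROUPS',
        comment => 'everything else', mgmt_p1 => 10);
      DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
    END;
    /
    ```

    On Exadata, an IORM plan on the storage cells can then map these database priorities to I/O priorities.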
