TimesTen Clusterware

There are 4 TimesTen nodes in our production setup, and it is going to hold 5 million records.
Will an Active, Standby and Subscriber setup give a better replication configuration?
Do I need to install Oracle Clusterware to manage the Active, Standby and Subscriber pair?
We plan to install TimesTen Release 11.2.1.8.0; is Clusterware supported with this release?
Are any additional nodes required for configuring Clusterware? The Clusterware document mentions 2 extra nodes apart from the active, standby and subscriber nodes.
Does this require an NFS mount to create shared storage for all nodes?

Hi,
Will an Active, Standby and Subscriber setup give a better replication configuration?
Better than what? What are your requirements with respect to performance, availability, data protection etc.? Is this an IMDB Cache configuration or a TimesTen database configuration? It's hard to advise when we do not know the requirements and objectives.
Do I need to install Oracle Clusterware to manage the Active, Standby and Subscriber pair?
Using Clusterware is not mandatory, but it provides a high level of automation. If you do not use Clusterware then you must either handle deployment, failover and recovery manually (slow and error-prone) or write your own scripts to do it (fairly complex).
We plan to install TimesTen Release 11.2.1.8.0; is Clusterware supported with this release?
Why would you go with such an old release? You should really be using the latest 11.2.2 release for any new deployments or, if you must use 11.2.1, then the latest 11.2.1 release. 11.2.1 is supported with Clusterware (Clusterware 11.1.0.7 and 11.2.0.2).
Are any additional nodes required for configuring Clusterware? The Clusterware document mentions 2 extra nodes apart from the active, standby and subscriber nodes.
Clusterware (or Grid Infrastructure as it is now known) requires a supported shared storage device to provide for voting disks and OCR storage. For true high availability this storage should be spread across at least three different storage units (no common point of failure) connected to the database nodes via a redundant network. So yes, there is potentially additional hardware needed depending on your current infrastructure.
Does this require an NFS mount to create shared storage for all nodes?
No, NFS is not required. TimesTen does not require any shared storage and anyway does not support use of NFS for software installation or database storage (for performance and reliability reasons). Clusterware does require shared storage but does not support regular NFS. NFS may be used if it is provided as part of a supported NAS unit for Clusterware 11.1.0.7. Clusterware 11.2 does not support NFS for new installations.
Chris

Similar Messages

  • Installing Oracle Clusterware and TimesTen

    I am attempting to set up TimesTen with Oracle Clusterware, and the page "Installing Oracle Clusterware and TimesTen" says that I should "Install Oracle Clusterware 11.2.0.2". However, when I go looking for it I find the page "Oracle Grid Infrastructure Downloads", and as you can see from that page it does not have version 11.2.0.2.
    Does anyone know where I can get Clusterware 11.2.0.2?
    Would version 12.1.0.1.0 work for me?
    Thanks,
    Victor

    Thanks for the info on 12.1.0.1.0 :-(
    If you look at eDelivery, 11.2.0.2 is not available. I guess I will have to go with setting up TimesTen without the Grid Infrastructure (i.e. use these instructions instead - "Active Standby Pair with TimesTen In-Memory Database")
    https://lh4.googleusercontent.com/Kg-lODsuooX9cl5-z2W2tuQ0KDQjLCuytwjf_jma1OfZf5IUiKiwQTvUtkRQvBSASQ=w1808-h804

  • Using TimesTen with Clusterware

    We are trying to develop a new system using Oracle products. Can you answer some questions: is it possible to use Oracle TimesTen with Clusterware? Does it make sense? I guess that in this pairing TimesTen can't have its advantages. Am I right or not?

    One little addition here, please: internally at Oracle we are trying to make Oracle Clusterware the standard cluster HA solution for all Oracle applications. One result of those efforts is the "Siebel CRM Applications protected by Oracle Clusterware" TWP that was published recently.
    Currently, we are working on more cases like this (each of them takes some testing effort). One of them is TimesTen. In conclusion: yes, there are scenarios in which Clusterware can make TimesTen more available, as mentioned before.
    However, I would be more interested in the scenarios you would like to cover. Having a cluster HA solution does not mean that every HA requirement is covered automatically. Therefore, could you provide more information about your environment and what you are trying to establish?
    Thanks.

  • Ask TimesTen 11.2: Using Oracle Clusterware to Manage Active Standby Pair

    Using Oracle Clusterware to Manage Active Standby Pairs
    http://download.oracle.com/docs/cd/E13085_01/doc/timesten.1121/e13072/cluster.htm#CCHCFAAD
    What does that mean?
    Do I have to use shared storage for... Oracle Clusterware?
    I think so, right?... or wrong?
    Can I have a VIP on the master node, and then if the master goes down the standby becomes active with the VIP, right?

    A TimesTen A/S pair is a replicated configuration where a pair of TimesTen datastores, usually located on different machines, act in many ways as a single unit. The active master can process both queries and DML (insert/update/delete) while the standby master can only process queries (think of this as a little like Active Data Guard). Applications that run on the same machine as one of the TT datastores can connect to it via the very high performance 'direct mode' while applications running on other machines can access the datastores in client/server mode. There is a clearly defined procedure for failing over if the active master fails (which involves promoting the standby master to active) and for recovering a failed store back to the correct role. Direct mode applications must fail over with the datastore whereas for client/server applications the connection(s) need to be failed over. The basic A/S pair replication mechanism has been available since TimesTen 6.0. The replication configuration is defined via special SQL syntax, and monitoring and management (failover control etc.) is performed by way of built-in procedure calls. However, in TT 6.0 and 7.0 the actual monitoring and management of the A/S pair, including failover, is left for the user to implement themselves or via some custom integration with an external cluster manager.
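    As a rough illustration of that SQL syntax and those built-in calls, here is a minimal, hypothetical sketch (the store names masterdsn1/masterdsn2 and hosts host1/host2 are invented for the example, not taken from this thread):
    CREATE ACTIVE STANDBY PAIR masterdsn1 ON "host1", masterdsn2 ON "host2";
    -- run on the store that is to become the active master:
    CALL ttRepStateSet('ACTIVE');
    CALL ttRepStart;      -- start the replication agent
    CALL ttRepStateGet;   -- reports ACTIVE, STANDBY or IDLE
    -- the standby copy is then created from the active with the ttRepAdmin
    -- -duplicate command-line utility, and on failover the surviving standby
    -- is promoted with another CALL ttRepStateSet('ACTIVE');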
    TimesTen 11g adds two major new capabilities:
    1. Automatic client connection failover for client/server connections. Think of this as very similar to Oracle DB TAF and FAN. This does not require VIPs or Clusterware since it is implemented completely within TimesTen. However, it works very well when used in conjunction with Clusterware.
    2. A deep integration between TimesTen and Oracle Clusterware. All aspects of Timesten A/S pair definition, deployment, management and recovery are handled by Clusterware. There is just a single configuration file and a single TimesTen utility (ttCWAdmin) involved. Of course you need a properly configured Clusterware setup first which will require some form of shared storage (for OCR and voting disks) but TT storage (checkpoint and log files) does not need to be on shared storage. Setting up TT for use with Clusterware is very quick and easy (maybe 30 minutes the first time you do it and much quicker thereafter once you know what you are doing). From then on Clusterware will manage all aspects of failover and recovery completely automatically. VIPs are not required but can be used if desired e.g. for application failover purposes. Clusterware can also manage the failover of direct mode applications. Also, you can define automated backup cycles and spare nodes that can be used if a node suffers some permanent failure. The Clusterware integration offers very rich functionality but also very fast failover (typically just a few seconds in my testing).
    Hope that helps clarify.
    Chris

  • TimesTen Cache Grid Setup Issues on Clusterware

    Dear Experts,
    I would like to set up two TimesTen A/S pairs on Clusterware. Here's the A/S pairs status after the initial setup. I was able to load cache groups into the pairs.
    sss6202/u01/app/TimesTen/tt1122/info> ttcwadmin -status
    TimesTen Cluster status report as of Wed Apr 25 21:48:35 2012
    ====================================================================
    TimesTen daemon monitors:
    Host:SSS6202 Status: online
    Host:SSS6203 Status: online
    ====================================================================
    ====================================================================
    TimesTen Cluster agents
    Host:SSS6202 Status: online
    Host:SSS6203 Status: online
    ====================================================================
    Status of Cluster related to DSN ST_0.0.0.1:
    ====================================================================
    1. Status of Cluster monitoring components:
    Monitor Process for Active datastore:RUNNING on Host sss6203
    Monitor Process for Standby datastore:RUNNING on Host sss6202
    Monitor Process for Master Datastore 1 on Host sss6202: RUNNING
    Monitor Process for Master Datastore 2 on Host sss6203: RUNNING
    Monitor for Application DM_TT_0.0.0.1: RUNNING on Host sss6203
    2.Status of Datastores comprising the cluster
    Master Datastore 1:
    Host:sss6202
    Status:AVAILABLE
    State:STANDBY
    Grid:AVAILABLE
    Master Datastore 2:
    Host:sss6203
    Status:AVAILABLE
    State:ACTIVE
    Grid:AVAILABLE
    ====================================================================
    The cluster containing the replicated DSN is online
    Status of Cluster related to DSN ST_0.1.0.1:
    ====================================================================
    1. Status of Cluster monitoring components:
    Monitor Process for Active datastore:RUNNING on Host sss6203
    Monitor Process for Standby datastore:RUNNING on Host sss6203
    Monitor Process for Master Datastore 1 on Host sss6202: RUNNING
    Monitor Process for Master Datastore 2 on Host sss6203: RUNNING
    Monitor for Application DM_TT_0.1.0.1: RUNNING on Host sss6203
    2.Status of Datastores comprising the cluster
    Master Datastore 1:
    Host:sss6202
    Status:AVAILABLE
    State:IDLE
    Grid:NO GRID
    Master Datastore 2:
    Host:sss6203
    Status:AVAILABLE
    State:ACTIVE
    Grid:AVAILABLE
    ====================================================================
    The cluster containing the replicated DSN is online
    When I attempt to start my application connecting to the TimesTen A/S pairs, it fails with the following errors. It seems the TT grid has just gone bad for some unknown reason.
    M Thu Apr 26 13:41:53 2012 sss6202 dm:10236 dm_search.c(106):3243 1:sss6202:cm:29186:1:0:1335469313:0
    in update size is 1000 (default)
    E Thu Apr 26 13:41:53 2012 sss6202 dm:10235 dm_subr.c(154):1904 1:sss6202:cm:29186:1:0:1335469313:0
    ORACLE error: do_sql_select: PINStmtExecute: code 57000, op 0
    =ORA-57000: TT3331: Failed to send a message to member UNKNOWN -- file "cacheGrid.c", lineno 28552, procedure "sbCGGridCompile"
    E Thu Apr 26 13:41:53 2012 sss6202 dm:10235 dm_ops.c(238):6823 1:sss6202:cm:29186:1:0:1335469313:0
    op_search_and_bulk_act: do_sql_select of search: "select template, flags from pin.search_t where poid_id0 = :1", x=1, id 500
    E Thu Apr 26 13:41:53 2012 sss6202 dm:10235 dm_back.c(31):1440 1:sss6202:cm:29186:1:0:1335469313:0
    DMbe #6: process_op: op 7(PCM_OP_SEARCH), err 43(PIN_ERR_STORAGE)
    E Thu Apr 26 13:41:53 2012 sss6202 dm:10236 dm_subr.c(154):1904 1:sss6202:cm:29186:1:0:1335469313:0
    ORACLE error: do_sql_select: PINStmtExecute: code 57000, op 0
    =ORA-57000: TT3331: Failed to send a message to member UNKNOWN -- file "cacheGrid.c", lineno 28552, procedure "sbCGGridCompile"
    E Thu Apr 26 13:41:53 2012 sss6202 dm:10236 dm_ops.c(238):6823 1:sss6202:cm:29186:1:0:1335469313:0
    op_search_and_bulk_act: do_sql_select of search: "select template, flags from pin.search_t where poid_id0 = :1", x=1, id 500
    E Thu Apr 26 13:41:53 2012 sss6202 dm:10236 dm_back.c(31):1440 1:sss6202:cm:29186:1:0:1335469313:0
    DMbe #7: process_op: op 7(PCM_OP_SEARCH), err 43(PIN_ERR_STORAGE)
    So I tried to drop and recreate the A/S pairs, but the call ttcwadmin -create -dsn st_0.0.0.1 now errs out with the following errors in ttcwerrors.log. It's the same process I ran previously to set up the pairs. Did I miss anything in the process?
    2012-04-26 20:06:04.22 Err : : 15926: (ttCRSdaemon:) ttctl.c(5801): TT16032: Call to send() failed. System Error: 134
    2012-04-26 20:07:01.28 Err : : 1685: (ttClusterAgent:) ttctl.c(7764): S1T00:[TimesTen][TimesTen 11.2.2.1.0 ODBC Driver]
    [TimesTen]TT6003: Lock request denied because of time-out Details: Tran 24.49 (pid 1690) wants IXn lock on table SYS.CACHE_
    GROUP. But tran 23.59 (pid 1690) has it in Un (request was Un). Holder SQL (call ttrepstateset('active')) -- file "cache.c"
    , lineno 7173, procedure "sbCacheGetDDLLocks()"
    2012-04-26 20:07:01.28 Err : : 1685: (ttClusterAgent:) crsagent.c 2086: Failed to set state of autorefresh cache groups
    to paused
    2012-04-26 20:07:01.29 Err : : 15935: (ttCWAdmin:) cwutils.c(1998): TT16032: Call to recv() failed. System Error: -1
    2012-04-26 20:07:01.29 Err : : 15935: (ttCWAdmin:) cwutils.c(1998): TT16032: Call to recv() failed. System Error: -1
    2012-04-26 20:07:01.29 Err : : 15935: (ttCWAdmin:) crsctl.c(19818): TT48013: Failed to create ACTIVE STANDBY PAIR scheme
    for DSN ST_0.0.0.1 on host sss6202.
    2012-04-26 20:07:38.40 Err : : 15935: (ttCWAdmin:) cwutils.c(1941): TT16032: Call to recv() failed. System Error: 131
    It's TimesTen 11.2.2 on Clusterware 11gR2.
    Many thanks for your advice / insights!

    Hi Gena,
    Thank you for your reply. My intention is to set up both active datastores on sss6202, and the standbys on sss6203.
    The reason I dropped them was the application error shown in the original post; ttcwadmin -status at that time showed NO GRID.
    Now I am facing a problem with recreating the A/S pairs. The errors in ttcwerrors.log are shown below. It seems to me the drop command did not drop the A/S pairs cleanly.
    2012-04-27 11:49:32.19 Err : : 3126: (ttClusterAgent:) ttctl.c(7764): S1T00:[TimesTen][TimesTen 11.2.2.1.0 ODBC Driver][TimesTen]TT6003: Lock request denied because of time-out Details: Tran 23.69 (pid 3131) wants IXn lock on table SYS.CACHE_GROUP. But tran 1.109139 (pid 3131) has it in Un (request was Un). Holder SQL (call ttrepstateset('active')) -- file "cache.c", lineno 7173, procedure "sbCacheGetDDLLocks()"
    2012-04-27 11:49:32.19 Err : : 3126: (ttClusterAgent:) crsagent.c 2086: Failed to set state of autorefresh cache groups to paused
    2012-04-27 11:49:32.19 Err : : 4559: (ttCWAdmin:) cwutils.c(1998): TT16032: Call to recv() failed. System Error: -1
    2012-04-27 11:49:32.19 Err : : 4559: (ttCWAdmin:) cwutils.c(1998): TT16032: Call to recv() failed. System Error: -1
    2012-04-27 11:49:32.19 Err : : 4559: (ttCWAdmin:) crsctl.c(19818): TT48013: Failed to create ACTIVE STANDBY PAIR scheme for DSN ST_0.0.0.1 on host sss6202.
    2012-04-27 11:50:25.26 Err : : 4559: (ttCWAdmin:) crsctl.c(19818): TT48013: Failed to create ACTIVE STANDBY PAIR scheme for DSN ST_0.0.0.1 on host sss6203.
    Here's crs_stat -t output
    sss6202/u01/app/TimesTen/tt1122/info> crs_stat -t
    Name Type Target State Host
    TT_A...0.0.0.1 application OFFLINE OFFLINE
    TT_A...SSS6202 application ONLINE ONLINE sss6202
    TT_A...SSS6203 application ONLINE ONLINE sss6203
    TT_A...1_DMTT1 application OFFLINE OFFLINE
    TT_D...SSS6202 application OFFLINE OFFLINE
    TT_D...SSS6203 application OFFLINE OFFLINE
    TT_M...0.0.1_0 application OFFLINE OFFLINE
    TT_M...0.0.1_1 application OFFLINE OFFLINE
    TT_S...0.0.0.1 application OFFLINE OFFLINE
    ora....ER.lsnr ora....er.type ONLINE ONLINE sss6202
    ora....N1.lsnr ora....er.type ONLINE ONLINE sss6202
    ora.asm ora.asm.type OFFLINE OFFLINE
    ora.cvu ora.cvu.type ONLINE ONLINE sss6202
    ora.gsd ora.gsd.type OFFLINE OFFLINE
    ora....network ora....rk.type ONLINE ONLINE sss6202
    ora.oc4j ora.oc4j.type ONLINE ONLINE sss6202
    ora.ons ora.ons.type ONLINE ONLINE sss6202
    ora....ry.acfs ora....fs.type OFFLINE OFFLINE
    ora.scan1.vip ora....ip.type ONLINE ONLINE sss6202
    ora....SM1.asm application OFFLINE OFFLINE
    ora....02.lsnr application ONLINE ONLINE sss6202
    ora....202.gsd application OFFLINE OFFLINE
    ora....202.ons application ONLINE ONLINE sss6202
    ora....202.vip ora....t1.type ONLINE ONLINE sss6202
    ora....SM2.asm application OFFLINE OFFLINE
    ora....03.lsnr application ONLINE ONLINE sss6203
    ora....203.gsd application OFFLINE OFFLINE
    ora....203.ons application ONLINE ONLINE sss6203
    ora....203.vip ora....t1.type ONLINE ONLINE sss6203
    My concern is that the grid status changed from AVAILABLE to NO GRID, causing the client application error and, subsequently, the error in recreating the A/S pair.
    Many thanks!
    Thomas Cong

  • Install TimesTen with Oracle Clusterware

    I have now installed Oracle Clusterware, TimesTen and Oracle 11g, with TimesTen caching Oracle; is that right?
    Before starting the TimesTen cache and replication in "Start the active standby pair",
    the main points I do not understand are the following:
    Cluster.oracle.ini
    AppName=reader
    AppType=Active
    AppStartCmd=/timesten/TimesTen/app_start.sh start
    AppStopCmd=/timesten/TimesTen/app_stop.sh stop
    AppCheckCmd=/timesten/TimesTen/app_check.sh
    checkCacheConnect=Y
    Is app_start.sh something I need to write myself?
    Can you give me an example?

    As far as I am aware, recent (11g onwards) releases of Oracle Heterogeneous Services do not work with TimesTen as they now require an ODBC 3.x driver and the TimesTen driver is currently 2.0. Even if they did work, this would not be a useful solution. It might allow Apex to access TimesTen after a fashion (though that is far from certain) but the performance would be very poor due to all the network hops and software layers between the application and TimesTen.
    If you put one or two tables in TimesTen then one problem from an Apex perspective is that it is now dealing with two databases: the TimesTen cache containing two tables and the Oracle database containing all the other tables. Is Apex designed to cope with this? Does it have the concept of data located in multiple databases where one of them is not the Oracle database? Also, do you need transactions or queries (joins) that span the TimesTen tables and the tables in the Oracle DB? If so then this also will not work, as that is not possible today.
    I have to say that as far as Apex goes I think this is likely a non-starter. However, if you do try it and have any success then please do post the results here as we'd be interested to hear about it.
    Chris

  • Oracle TimesTen geo-redundant architecture deployment queries

    Hi,
    We have 6 sites and each site is connected by a leased line.
    There are 5 different physical remote sites; we want to deploy an active/standby TimesTen database at each of the 5 sites.
    We want to create a READ ONLY LOCAL CACHE GROUP at each of the 5 sites, auto-refreshed from a single central database site (2-node RAC).
    Data modification always happens at the central site.
    The Oracle TimesTen active/standby pair is on the local LAN at each site, but each site connects through the WAN to the central site where the Oracle RAC database resides,
    and Cache Connect is configured with each site to auto-refresh the read-only local cache group at each site.
    ARCHITECTURE OVERVIEW
    Remote Site Locations 1-5 ==> (Read Only Local Cache Group) Oracle TT A/S <<== Auto Refresh <== CENTRAL LOCATION SITE (2-Node RAC Database)
    We have gone through the Oracle technical paper below:
    http://www.oracle.com/technetwork/database/performance/wp-imdb-cache-130299.pdf
    Please see Figure 7 ==> Incremental Autorefresh of Read-Only Local Cache Groups.
    Please give us your advice and suggestions on the above architecture deployment.
    Regards
    Hitgon

    There is a great deal you need to understand and bear in mind when implementing any complex system, and this is no exception. You should start by studying the TimesTen documentation. In particular you should familiarise yourself with the information in the Cache User's Guide and the Replication Guide. These contain lots of very important information regarding the setup/deployment and operation of A/S pairs and cache groups, configuring cache connect for RAC etc. You should also read through the Troubleshooting Guide to familiarise yourself with what to look at if things do not seem to work as expected.
    If you are not already very familiar with TimesTen I would also strongly recommend that you take the time to read the rest of the documentation. TimesTen is not Oracle DB, and while it is very compatible with Oracle in many areas there are also a lot of significant differences which you need to take into account when developing applications, managing the system etc. If you are able to take an OU training course on TimesTen then I would recommend that, but if not then reading the documentation is a good second best.
    We do not 'recommend' operating systems specifically but 64-bit Linux is certainly a good choice. You might like to consider Oracle Enterprise Linux instead of Red Hat; it has some advantages.
    Within each site both nodes in the TimesTen active/standby pair should be on the same LAN. I would recommend Gigabit Ethernet as a minimum.
    While it is possible to write your own scripts to handle deployment, monitoring, failover and recovery of A/S pairs it is much, much easier (and much more robust) if you deploy Oracle Clusterware at each site to provide fully automated management of the A/S pairs. That is our very strong recommendation and is also best practice.
    From a TimesTen perspective, system clock synchronisation is only needed within each site (i.e. the system clocks on both nodes in an A/S pair need to be closely aligned). However, it may be desirable to have all the nodes in all the sites have their clocks aligned for other reasons.
    You need to ensure that the bandwidth and latency of the WAN connections is adequate for the amount of refresh traffic that you will have. There is no easy way to estimate/calculate this; you will need to determine this empirically.
    Those are probably the most important things. As you progress then you can of course ask questions in this forum and use Oracle Support.
    Chris

  • Timesten Queries

    Dear Forum,
    Background: We have a single database instance and 22 application instances (RAC is not an allowed solution, as per higher management decision).
    If we implement IMDB Cache:
    1. Will there be any overhead associated with reads and writes (synchronous)? If yes, how do we quantify it?
    2. The application demands the use of the global cache (as transactional data). Will there be latency issues owing to multiple nodes? How does Oracle manage the nodes in the cache grid?
    3. What is the optimum grid size to propose? Is there any guideline (e.g. 8 grid members, 16, or all 22)?
    4. What is the minimum infrastructure required?
    5. The application uses XML files in the form of BLOBs/CLOBs. Is TimesTen okay with handling BLOBs/CLOBs?
    Sorry for flooding with the queries. But any pointer/advise/guidance would be greatly appreciated.
    Thanks
    ORA_IMPL

    Thanks for your detailed answer. This really helps and gives me an insight into TT. I'd appreciate it if you could answer the following cross queries, tagged as 'DM' below.
    Thanks for sharing your experience; your contribution is very helpful for junior colleagues like us.
    Hi,
    These questions are not easily answered in isolation. A decision to adopt TimesTen, and especially TimesTen grid, needs to take into account many factors. For example, adding TimesTen into the picture will not be transparent to the application. Application code changes, possibly significant ones, will be needed. Also, the behaviour and characteristics of cache grid impose constraints on the kind of SQL that is grid capable. Also, application data access needs to be (a) mostly localised to the grid node where the data currently resides and (b) have a strong temporal locality of reference if good performance is to be achieved. So, there is a lot to consider and investigate before you can make an informed decision as to whether TT cache grid is the correct solution for you. Having said that, I will try to answer your questions as best I can:
    1. Will there be any overhead associated with reads and writes (Synchronous) ? If yes, how do we quantify the same.
    CJ>> I don't really understand this question. Reads would normally be satisfied from the IMDB Cache and so will be much, much faster than reading from Oracle. For grid capable queries that reference data not present in the local grid node, the data will be retrieved from another grid node (preferentially) or, if the data is not in any grid node, it will be retrieved from Oracle. Clearly there is significant overhead in this case compared to the case where the data is already present in the local grid node. There are many factors that can affect the relative performance but let us say that typically a local access will take a few microseconds, a retrieval from another grid member a millisecond or two and a retrieval from Oracle may take several milliseconds. The only way to know is to test your queries with your schema on your typical hardware and measure it. For write operations, all changes are propagated asynchronously to Oracle (there is no synchronous option) and so the overhead is very small and updates are very fast (much faster than in Oracle). However, if a query/DML tries to access a row located in a different grid member for which there are committed, unpropagated updates then that access will block until those updates have reached Oracle (to ensure data consistency). In this case there could be a large overhead. Again, this depends very much on your specific setup and data access patterns and so you need to measure it.
    DM: If there are unpropagated changes on node 1 and node 2 tries to read the same data, which data will node 2 read? (The old data? Or will node 2's read be blocked? Or will node 1's unpropagated changes be committed first and node 2's read then be allowed?)
    2. Application demands the use of the global cache (as transactional data). Will there be latency issues owing to multiple nodes. How oracle manages the node in the cache grid.
    CJ>> That question has a big answer! Best to read some of the information on Cache Grid in the documentation. Essentially cache grid implements the concept of data ownership and enforces that there will only ever be one copy of any piece of cached data within the grid at any one time. Only the grid node that 'owns' the data can read/write it. If a different grid node needs access to the data, the data itself is transferred to that grid node and the ownership updated. Depending on how data is distributed and the application access pattern there may be very little overhead or a great deal of overhead. Your application must be designed in a 'grid aware' fashion to gain full benefit from grid.
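    To make the data ownership idea a little more concrete, here is a rough, hypothetical sketch of a dynamic asynchronous writethrough global cache group (the cacheuser.customer table is invented for illustration, and it assumes the datastore is already a member of a cache grid):
    CREATE DYNAMIC ASYNCHRONOUS WRITETHROUGH GLOBAL CACHE GROUP cust_cg
    FROM cacheuser.customer (
      cust_id  NUMBER(9) NOT NULL,
      name     VARCHAR2(100),
      region   VARCHAR2(10),
      PRIMARY KEY (cust_id)
    );
    -- A grid-capable query such as the one below pulls the row (and its
    -- ownership) into the local grid member if another member currently owns it:
    SELECT name FROM cacheuser.customer WHERE cust_id = 42;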
    DM: The database contains around 60% semi-static reference data. This data is used dynamically by all the different applications to validate their work content. Essentially all of these applications are reading the same data. It is a prime requirement that any changes to the semi-static reference data are propagated in a timely fashion to all downstream processing.
    3. What is the optimum grid size to be proposed. Is there any guideline (for e.g. 8 grid, 16 or for all 22 grids)
    CJ>> There is no one optimum size. It depends on data volumes, the type of hardware and the workload to be supported.
    4. What is the minimum infrastructure required
    CJ>> Realistically you need to use replicated grid nodes and Oracle Clusterware for grid management. The very smallest grid that could be configured is therefore two physical machines (each acting as the replication standby for the other) running TimesTen and Clusterware. You also need, for Clusterware, some supported shared storage for the OCR and voting disks (e.g. NetApp filers or some kind of SAN).
    5. The application uses XML file in terms of BLOB/CLOB. Is Timesten okay to handle BLOB/CLOB.
    CJ>> TimesTen does not currently support XML nor CLOBs/BLOBs. You can cache Oracle CLOB/BLOB data in TimesTen VARCHAR2/VARBINARY columns. Access to this data in TimesTen will be via the SQL API and you must retrieve / update the entire CLOB/BLOB value. Any XML interpretation must be done by the application.
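    Purely as an illustration of the VARCHAR2 mapping just described (the cacheuser.msg_store table is hypothetical, with the Oracle-side payload column being a CLOB that holds an XML document):
    CREATE READONLY CACHE GROUP msg_cg
    AUTOREFRESH INTERVAL 10 SECONDS
    FROM cacheuser.msg_store (
      msg_id   NUMBER(9) NOT NULL,
      payload  VARCHAR2(4194304),  -- the CLOB is cached as a VARCHAR2 and read/written in full
      PRIMARY KEY (msg_id)
    );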
    DM: Currently all the applications' to-and-fro communication happens via CLOB/BLOB and XML. If we have to rely on the SQL API and update the whole CLOB/BLOB value, it seems that it will increase the processing time. I see this may be a constraining factor for using TT.
    Context: The application deals with XML files most of the time. It receives payments in XML, converts them to an internal format (again XML), applies business logic and converts them again to the outbound formats (XML) before sending them on. They are currently stored as BLOBs and CLOBs in the database. The size of the database is 4TB.
    There is one single database instance and 20 application instances. The application uses JDO as its ORM framework. With the growing volumes of data, JDO, along with multiple calls to the database during the processing of each transaction (mainly due to the BLOBs/CLOBs), is a known bottleneck for performance.
    I am trying to find an improvement solution. TT may not fit the bill. Is any other Oracle feature worth investigating?

  • Clusterware with cache groups

    Hi Gurus,
    I want to use Clusterware with an active standby pair with cache groups (AWT and read-only), but when I try to run ttCWAdmin -create the command fails because the user used is not the cache admin.
    I don't want to use the same user both to create the active standby pair and to administer the cache groups (separation of duties), and the cache admin user does not have the ADMIN privilege.
    Regards

    The requirements when running ttCWAdmin to roll out an A/S pair that includes cache groups are:
    The ttCWAdmin -create command prompts for the following:
    Prompts for the name of a TimesTen user with ADMIN privileges. If cache groups are being managed by Oracle Clusterware, enter the TimesTen cache manager user name.
    Prompts for the TimesTen password for the previously entered user name.
    If cache groups are being used, prompts for the password for the Oracle user that has the same name as the cache manager user. This password is provided in the OraclePWD connection attribute when the cache manager user connects.
    Prompts for a random string used to encrypt the above information.
    So if you are using cache groups then whatever user you have created and used as the 'cache manager' must also have ADMIN privilege. If this is not acceptable then I'm afraid you cannot use Clusterware to manage the A/S pair. You can of course raise an Enhancement Request asking to have this requirement lifted.
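    For illustration only, assuming a hypothetical user called cacheadm that will be given to ttCWAdmin at the -create prompt, the grants for such a combined user might look like this (run in ttIsql as the instance administrator):
    GRANT ADMIN TO cacheadm;                          -- required for ttCWAdmin -create to accept the user
    GRANT CREATE SESSION, CACHE_MANAGER TO cacheadm;  -- the usual cache manager privileges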
    Regards,
    Chris

  • Using TTClasses to connect to TimesTen

    Hi all,
    Please help me.
    I am using TTClasses to connect to the TimesTen database. When I use the sample in the quickstart it is okay (it makes and runs).
    But when I make a project in Eclipse or NetBeans and add the library:
    #include <ttclasses/TTStatus.h>
    #include <ttclasses/TTConnection.h>
    #include <ttclasses/TTConnectionPool.h>
    #include <ttclasses/TTCmd.h>
    #include <ttclasses/ttTime.h>
    After that I create a class (as in the TimesTen documentation).
    class OrderTTConn {
    public:
         OrderTTConn();
         virtual ~OrderTTConn();
         virtual void Connect(const char* connStr,
                   TTConnection::DRIVER_COMPLETION_ENUM driverCompletion);
         virtual void Disconnect();
         void create();
         void insert(char* p_OrderNO, char* p_FloorCode,
                   unsigned short int* p_MemberID, unsigned short int* p_StockID,
                   unsigned short int* p_OrderType, unsigned short int* p_OorB,
                   unsigned short int* p_NorP, unsigned short int* p_NorC,
                   unsigned short int* p_BorE, char* p_OrderTime, char* p_OrderDate,
                   unsigned short int* p_SettleType, unsigned short int* p_Dorf,
                   unsigned short int* p_OrderQtty, unsigned short int* p_OrderPrice,
                   char* p_AccountNo, unsigned short int* p_BrokerID,
                   unsigned short int* p_Aorc, char* p_SessionNo,
                   unsigned short int* p_AorI, unsigned short int* p_YieldMat);
    private:
         TTCmd dropTable;
         TTCmd createTable;
         TTCmd insertData;
         TTCmd queryData;
         TTStatus ttStatus;
         TTConnection ttConn;
    };
    And:
    OrderTTConn::OrderTTConn() {
         // TODO Auto-generated constructor stub
    }
    OrderTTConn::~OrderTTConn() {
         // TODO Auto-generated destructor stub
         Disconnect();
    }
    void OrderTTConn::Disconnect() {   // 'virtual' belongs only on the in-class declaration
         createTable.Drop(ttStatus);
         insertData.Drop(ttStatus);
         queryData.Drop(ttStatus);
         ttConn.Disconnect(ttStatus);
    }
    It shows an error in the function Disconnect(), on all of these code lines:
    createTable.Drop(ttStatus); or createTable.Drop();
         insertData.Drop(ttStatus); or insertData.Drop();
         queryData.Drop(ttStatus); or queryData.Drop();
    error: undefined reference to `TTCmd::Drop(TTStatus&)'
    So I checked that the function Drop exists in TTCmd, and I had added the include to the project.
    Can anybody help me?
    Thanks & best regards.

    Hi Chris,
    I set up two machines running a TimesTen active standby pair, not under Clusterware. On another machine I run an app that connects to TimesTen on the two machines (client/server); the app is configured with a failover callback.
    I configured failover in the ODBC settings (TTC_Server2, TTC_Server_DSN2, ...).
    While I am inserting data into a table in the TimesTen DB (about 200,000 rows), on the standby DB machine I run the command "call ttrepstateset('active')"; that machine becomes the active TimesTen DB, but the old active DB does not become the standby (it becomes IDLE).
    I checked the log on the active DB machine (the new active DB machine):
    *16:09:44.42 Warn: REP: 10758: SAMPLEDB_TEST1:transmitter.c(5908): TT16060: Failed to read data from the network. select() timed out*
    *16:09:47.74 Warn: REP: 10758: SAMPLEDB_TEST1:receiver.c(2728): TT16060: Failed to read data from the network. select() timed out*
    *16:09:48.09 Warn: REP: 10758: SAMPLEDB_TEST1:transmitter.c(5908): TT16060: Failed to read data from the network. select() timed out*
    And the tterrors.log on the old active DB (which has now become the IDLE DB):
    *16:13:13.92 Err : REP: 18440: SAMPLEDB_TEST2:receiver.c(3224): TT16227: Standby store has replicated transactions not present on the active*
    *16:13:13.94 Warn: REP: 18440: SAMPLEDB_TEST2:transmitter.c(3050): TT16999: Neither Standby nor Active: Cannot deal with this locally generated transaction txn nowtxn->ctn = 1301993038.2611 txn->fctn = 0.0*
    I must drop the active standby pair and re-create it to fix that error.
    Can you give me any suggestions? Can I test that script while the app continues to insert data into the TimesTen DB as I change the DB state?
    Thanks very much.

  • Query in TimesTen taking more time than query in Oracle database

    Hi,
    Can anyone please explain to me why a query in TimesTen takes more time
    than a query in the Oracle database?
    Below I describe in detail, step by step, what my settings are and what
    I have done.
    1. This is the table I created in the Oracle database
    (Oracle Database 10g Enterprise Edition Release 10.2.0.1.0)...
    CREATE TABLE student (
    id NUMBER(9) primary keY ,
    first_name VARCHAR2(10),
    last_name VARCHAR2(10)
    );
    2.THIS IS THE ANONYMOUS BLOCK I USE TO
    POPULATE THE STUDENT TABLE(TOTAL 2599999 ROWS)...
    declare
    firstname varchar2(12);
    lastname varchar2(12);
    catt number(9);
    begin
    for cntr in 1..2599999 loop
    firstname:=(cntr+8)||'f';
    lastname:=(cntr+2)||'l';
    if cntr like '%9999' then
    dbms_output.put_line(cntr);
    end if;
    insert into student values(cntr,firstname, lastname);
    end loop;
    end;
    3. MY DSN IS SET THE FOLLWING WAY..
    DATA STORE PATH- G:\dipesh3repo\db
    LOG DIRECTORY- G:\dipesh3repo\log
    PERM DATA SIZE-1000
    TEMP DATA SIZE-1000
    MY TIMESTEN VERSION-
    C:\Documents and Settings\dipesh>ttversion
    TimesTen Release 7.0.3.0.0 (32 bit NT) (tt70_32:17000) 2007-09-19T16:04:16Z
    Instance admin: dipesh
    Instance home directory: G:\TimestTen\TT70_32
    Daemon home directory: G:\TimestTen\TT70_32\srv\info
    THEN I CONNECT TO THE TIMESTEN DATABASE
    C:\Documents and Settings\dipesh> ttisql
    command>connect "dsn=dipesh3;oraclepwd=tiger";
    4. THEN I START THE AGENT
    call ttCacheUidPwdSet('SCOTT','TIGER');
    Command> CALL ttCacheStart();
    5.THEN I CREATE THE READ ONLY CACHE GROUP AND LOAD IT
    create readonly cache group rc_student autorefresh
    interval 5 seconds from student
    (id int not null primary key, first_name varchar2(10), last_name varchar2(10));
    load cache group rc_student commit every 100 rows;
    6.NOW I CAN ACCESS THE TABLES FROM TIMESTEN AND PERFORM THE QUERY
    I SET THE TIMING..
    command>TIMING 1;
    consider this query now..
    Command> select * from student where first_name='2155666f';
    < 2155658, 2155666f, 2155660l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.668822 seconds.
    another query-
    Command> SELECT * FROM STUDENTS WHERE FIRST_NAME='2340009f';
    2206: Table SCOTT.STUDENTS not found
    Execution time (SQLPrepare) = 0.074964 seconds.
    The command failed.
    Command> SELECT * FROM STUDENT where first_name='2093434f';
    < 2093426, 2093434f, 2093428l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.585897 seconds.
    Command>
    7.NOW I PERFORM THE SIMILAR QUERIES FROM SQLPLUS...
    SQL> SELECT * FROM STUDENT WHERE FIRST_NAME='1498671f';
    ID FIRST_NAME LAST_NAME
    1498663 1498671f 1498665l
    Elapsed: 00:00:00.15
    Can anyone please explain to me why the query in TimesTen takes more time
    than the query in the Oracle database?

    TimesTen
    Hardware: Windows Server 2003 R2 Enterprise x64; 8 x Dual-core AMD 8216 2.41GHz processors; 32 GB RAM
    Version: 7.0.4.0.0 64 bit
    Schema:
    create usermanaged cache group factCache from
    MV_US_DATAMART (
    ORDER_DATE               DATE,
    IF_SYSTEM               VARCHAR2(32) NOT NULL,
    GROUPING_ID                TT_BIGINT,
    TIME_DIM_ID               TT_INTEGER NOT NULL,
    BUSINESS_DIM_ID          TT_INTEGER NOT NULL,
    ACCOUNT_DIM_ID               TT_INTEGER NOT NULL,
    ORDERTYPE_DIM_ID          TT_INTEGER NOT NULL,
    INSTR_DIM_ID               TT_INTEGER NOT NULL,
    EXECUTION_DIM_ID          TT_INTEGER NOT NULL,
    EXEC_EXCHANGE_DIM_ID TT_INTEGER NOT NULL,
    NO_ORDERS               TT_BIGINT,
    FILLED_QUANTITY          TT_BIGINT,
    CNT_FILLED_QUANTITY          TT_BIGINT,
    QUANTITY               TT_BIGINT,
    CNT_QUANTITY               TT_BIGINT,
    COMMISSION               BINARY_FLOAT,
    CNT_COMMISSION               TT_BIGINT,
    FILLS_NUMBER               TT_BIGINT,
    CNT_FILLS_NUMBER          TT_BIGINT,
    AGGRESSIVE_FILLS          TT_BIGINT,
    CNT_AGGRESSIVE_FILLS          TT_BIGINT,
    NOTIONAL               BINARY_FLOAT,
    CNT_NOTIONAL               TT_BIGINT,
    TOTAL_PRICE               BINARY_FLOAT,
    CNT_TOTAL_PRICE          TT_BIGINT,
    CANCELLED_ORDERS_COUNT          TT_BIGINT,
    CNT_CANCELLED_ORDERS_COUNT     TT_BIGINT,
    ROUTED_ORDERS_NO          TT_BIGINT,
    CNT_ROUTED_ORDERS_NO          TT_BIGINT,
    ROUTED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ROUTED_LIQUIDITY_QTY     TT_BIGINT,
    REMOVED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_REMOVED_LIQUIDITY_QTY     TT_BIGINT,
    ADDED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ADDED_LIQUIDITY_QTY     TT_BIGINT,
    AGENT_CHARGES               BINARY_FLOAT,
    CNT_AGENT_CHARGES          TT_BIGINT,
    CLEARING_CHARGES          BINARY_FLOAT,
    CNT_CLEARING_CHARGES          TT_BIGINT,
    EXECUTION_CHARGES          BINARY_FLOAT,
    CNT_EXECUTION_CHARGES          TT_BIGINT,
    TRANSACTION_CHARGES          BINARY_FLOAT,
    CNT_TRANSACTION_CHARGES     TT_BIGINT,
    ORDER_MANAGEMENT          BINARY_FLOAT,
    CNT_ORDER_MANAGEMENT          TT_BIGINT,
    SETTLEMENT_CHARGES          BINARY_FLOAT,
    CNT_SETTLEMENT_CHARGES          TT_BIGINT,
    RECOVERED_AGENT          BINARY_FLOAT,
    CNT_RECOVERED_AGENT          TT_BIGINT,
    RECOVERED_CLEARING          BINARY_FLOAT,
    CNT_RECOVERED_CLEARING          TT_BIGINT,
    RECOVERED_EXECUTION          BINARY_FLOAT,
    CNT_RECOVERED_EXECUTION     TT_BIGINT,
    RECOVERED_TRANSACTION          BINARY_FLOAT,
    CNT_RECOVERED_TRANSACTION     TT_BIGINT,
    RECOVERED_ORD_MGT          BINARY_FLOAT,
    CNT_RECOVERED_ORD_MGT          TT_BIGINT,
    RECOVERED_SETTLEMENT          BINARY_FLOAT,
    CNT_RECOVERED_SETTLEMENT     TT_BIGINT,
    CLIENT_AGENT               BINARY_FLOAT,
    CNT_CLIENT_AGENT          TT_BIGINT,
    CLIENT_ORDER_MGT          BINARY_FLOAT,
    CNT_CLIENT_ORDER_MGT          TT_BIGINT,
    CLIENT_EXEC               BINARY_FLOAT,
    CNT_CLIENT_EXEC          TT_BIGINT,
    CLIENT_TRANS               BINARY_FLOAT,
    CNT_CLIENT_TRANS          TT_BIGINT,
    CLIENT_CLEARING          BINARY_FLOAT,
    CNT_CLIENT_CLEARING          TT_BIGINT,
    CLIENT_SETTLE               BINARY_FLOAT,
    CNT_CLIENT_SETTLE          TT_BIGINT,
    CHARGEABLE_TAXES          BINARY_FLOAT,
    CNT_CHARGEABLE_TAXES          TT_BIGINT,
    VENDOR_CHARGE               BINARY_FLOAT,
    CNT_VENDOR_CHARGE          TT_BIGINT,
    ROUTING_CHARGES          BINARY_FLOAT,
    CNT_ROUTING_CHARGES          TT_BIGINT,
    RECOVERED_ROUTING          BINARY_FLOAT,
    CNT_RECOVERED_ROUTING          TT_BIGINT,
    CLIENT_ROUTING               BINARY_FLOAT,
    CNT_CLIENT_ROUTING          TT_BIGINT,
    TICKET_CHARGES               BINARY_FLOAT,
    CNT_TICKET_CHARGES          TT_BIGINT,
    RECOVERED_TICKET_CHARGES     BINARY_FLOAT,
    CNT_RECOVERED_TICKET_CHARGES     TT_BIGINT,
    PRIMARY KEY(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID, INSTR_DIM_ID, EXECUTION_DIM_ID,EXEC_EXCHANGE_DIM_ID),
    READONLY);
    No of rows: 2228558
    Config:
    < CkptFrequency, 600 >
    < CkptLogVolume, 0 >
    < CkptRate, 0 >
    < ConnectionCharacterSet, US7ASCII >
    < ConnectionName, tt_us_dma >
    < Connections, 64 >
    < DataBaseCharacterSet, AL32UTF8 >
    < DataStore, e:\andrew\datacache\usDMA >
    < DurableCommits, 0 >
    < GroupRestrict, <NULL> >
    < LockLevel, 0 >
    < LockWait, 10 >
    < LogBuffSize, 65536 >
    < LogDir, e:\andrew\datacache\ >
    < LogFileSize, 64 >
    < LogFlushMethod, 1 >
    < LogPurge, 0 >
    < Logging, 1 >
    < MemoryLock, 0 >
    < NLS_LENGTH_SEMANTICS, BYTE >
    < NLS_NCHAR_CONV_EXCP, 0 >
    < NLS_SORT, BINARY >
    < OracleID, NYCATP1 >
    < PassThrough, 0 >
    < PermSize, 4000 >
    < PermWarnThreshold, 90 >
    < PrivateCommands, 0 >
    < Preallocate, 0 >
    < QueryThreshold, 0 >
    < RACCallback, 0 >
    < SQLQueryTimeout, 0 >
    < TempSize, 514 >
    < TempWarnThreshold, 90 >
    < Temporary, 1 >
    < TransparentLoad, 0 >
    < TypeMode, 0 >
    < UID, OS_OWNER >
    ORACLE:
    Hardware: Sunos 5.10; 24x1.8Ghz (unsure of type); 82 GB RAM
    Version 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    Schema:
    CREATE MATERIALIZED VIEW OS_OWNER.MV_US_DATAMART
    TABLESPACE TS_OS
    PARTITION BY RANGE (ORDER_DATE)
    PARTITION MV_US_DATAMART_MINVAL VALUES LESS THAN (TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D1 VALUES LESS THAN (TO_DATE(' 2007-11-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D2 VALUES LESS THAN (TO_DATE(' 2007-11-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D3 VALUES LESS THAN (TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D1 VALUES LESS THAN (TO_DATE(' 2007-12-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D2 VALUES LESS THAN (TO_DATE(' 2007-12-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D3 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D1 VALUES LESS THAN (TO_DATE(' 2008-01-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D2 VALUES LESS THAN (TO_DATE(' 2008-01-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D3 VALUES LESS THAN (TO_DATE(' 2008-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_MAXVAL VALUES LESS THAN (MAXVALUE)
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS
    NOCACHE
    NOCOMPRESS
    NOPARALLEL
    BUILD DEFERRED
    USING INDEX
    TABLESPACE TS_OS_INDEX
    REFRESH FAST ON DEMAND
    WITH PRIMARY KEY
    ENABLE QUERY REWRITE
    AS
    SELECT order_date, if_system,
    GROUPING_ID (order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id
    ) GROUPING_ID,
    /* ============ DIMENSIONS ============ */
    time_dim_id, business_dim_id, account_dim_id, ordertype_dim_id,
    instr_dim_id, execution_dim_id, exec_exchange_dim_id,
    /* ============ MEASURES ============ */
    -- o.FX_RATE /* FX_RATE */,
    COUNT (*) no_orders,
    -- SUM(NO_ORDERS) NO_ORDERS,
    -- COUNT(NO_ORDERS) CNT_NO_ORDERS,
    SUM (filled_quantity) filled_quantity,
    COUNT (filled_quantity) cnt_filled_quantity, SUM (quantity) quantity,
    COUNT (quantity) cnt_quantity, SUM (commission) commission,
    COUNT (commission) cnt_commission, SUM (fills_number) fills_number,
    COUNT (fills_number) cnt_fills_number,
    SUM (aggressive_fills) aggressive_fills,
    COUNT (aggressive_fills) cnt_aggressive_fills,
    SUM (fx_rate * filled_quantity * average_price) notional,
    COUNT (fx_rate * filled_quantity * average_price) cnt_notional,
    SUM (fx_rate * fills_number * average_price) total_price,
    COUNT (fx_rate * fills_number * average_price) cnt_total_price,
    SUM (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END) cancelled_orders_count,
    COUNT (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END
    ) cnt_cancelled_orders_count,
    -- SUM(t.FX_RATE*t.NO_FILLS*t.AVG_PRICE) AVERAGE_PRICE,
    -- SUM(FILLS_NUMBER*AVERAGE_PRICE) STAGING_AVERAGE_PRICE,
    -- COUNT(FILLS_NUMBER*AVERAGE_PRICE) CNT_STAGING_AVERAGE_PRICE,
    SUM (routed_orders_no) routed_orders_no,
    COUNT (routed_orders_no) cnt_routed_orders_no,
    SUM (routed_liquidity_qty) routed_liquidity_qty,
    COUNT (routed_liquidity_qty) cnt_routed_liquidity_qty,
    SUM (removed_liquidity_qty) removed_liquidity_qty,
    COUNT (removed_liquidity_qty) cnt_removed_liquidity_qty,
    SUM (added_liquidity_qty) added_liquidity_qty,
    COUNT (added_liquidity_qty) cnt_added_liquidity_qty,
    SUM (agent_charges) agent_charges,
    COUNT (agent_charges) cnt_agent_charges,
    SUM (clearing_charges) clearing_charges,
    COUNT (clearing_charges) cnt_clearing_charges,
    SUM (execution_charges) execution_charges,
    COUNT (execution_charges) cnt_execution_charges,
    SUM (transaction_charges) transaction_charges,
    COUNT (transaction_charges) cnt_transaction_charges,
    SUM (order_management) order_management,
    COUNT (order_management) cnt_order_management,
    SUM (settlement_charges) settlement_charges,
    COUNT (settlement_charges) cnt_settlement_charges,
    SUM (recovered_agent) recovered_agent,
    COUNT (recovered_agent) cnt_recovered_agent,
    SUM (recovered_clearing) recovered_clearing,
    COUNT (recovered_clearing) cnt_recovered_clearing,
    SUM (recovered_execution) recovered_execution,
    COUNT (recovered_execution) cnt_recovered_execution,
    SUM (recovered_transaction) recovered_transaction,
    COUNT (recovered_transaction) cnt_recovered_transaction,
    SUM (recovered_ord_mgt) recovered_ord_mgt,
    COUNT (recovered_ord_mgt) cnt_recovered_ord_mgt,
    SUM (recovered_settlement) recovered_settlement,
    COUNT (recovered_settlement) cnt_recovered_settlement,
    SUM (client_agent) client_agent,
    COUNT (client_agent) cnt_client_agent,
    SUM (client_order_mgt) client_order_mgt,
    COUNT (client_order_mgt) cnt_client_order_mgt,
    SUM (client_exec) client_exec, COUNT (client_exec) cnt_client_exec,
    SUM (client_trans) client_trans,
    COUNT (client_trans) cnt_client_trans,
    SUM (client_clearing) client_clearing,
    COUNT (client_clearing) cnt_client_clearing,
    SUM (client_settle) client_settle,
    COUNT (client_settle) cnt_client_settle,
    SUM (chargeable_taxes) chargeable_taxes,
    COUNT (chargeable_taxes) cnt_chargeable_taxes,
    SUM (vendor_charge) vendor_charge,
    COUNT (vendor_charge) cnt_vendor_charge,
    SUM (routing_charges) routing_charges,
    COUNT (routing_charges) cnt_routing_charges,
    SUM (recovered_routing) recovered_routing,
    COUNT (recovered_routing) cnt_recovered_routing,
    SUM (client_routing) client_routing,
    COUNT (client_routing) cnt_client_routing,
    SUM (ticket_charges) ticket_charges,
    COUNT (ticket_charges) cnt_ticket_charges,
    SUM (recovered_ticket_charges) recovered_ticket_charges,
    COUNT (recovered_ticket_charges) cnt_recovered_ticket_charges
    FROM us_datamart_raw
    GROUP BY order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id;
    -- Note: Index I_SNAP$_MV_US_DATAMART will be created automatically
    -- by Oracle with the associated materialized view.
    CREATE UNIQUE INDEX OS_OWNER.MV_US_DATAMART_UDX ON OS_OWNER.MV_US_DATAMART
    (ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID,
    INSTR_DIM_ID, EXECUTION_DIM_ID, EXEC_EXCHANGE_DIM_ID)
    NOLOGGING
    NOPARALLEL
    COMPRESS 7;
    No of rows: 2228558
    The query (taken Mondrian) I run against each of them is:
    select sum("MV_US_DATAMART"."NOTIONAL") as "m0"
    --, sum("MV_US_DATAMART"."FILLED_QUANTITY") as "m1"
    --, sum("MV_US_DATAMART"."AGENT_CHARGES") as "m2"
    --, sum("MV_US_DATAMART"."CLEARING_CHARGES") as "m3"
    --, sum("MV_US_DATAMART"."EXECUTION_CHARGES") as "m4"
    --, sum("MV_US_DATAMART"."TRANSACTION_CHARGES") as "m5"
    --, sum("MV_US_DATAMART"."ROUTING_CHARGES") as "m6"
    --, sum("MV_US_DATAMART"."ORDER_MANAGEMENT") as "m7"
    --, sum("MV_US_DATAMART"."SETTLEMENT_CHARGES") as "m8"
    --, sum("MV_US_DATAMART"."COMMISSION") as "m9"
    --, sum("MV_US_DATAMART"."RECOVERED_AGENT") as "m10"
    --, sum("MV_US_DATAMART"."RECOVERED_CLEARING") as "m11"
    --,sum("MV_US_DATAMART"."RECOVERED_EXECUTION") as "m12"
    --,sum("MV_US_DATAMART"."RECOVERED_TRANSACTION") as "m13"
    --, sum("MV_US_DATAMART"."RECOVERED_ROUTING") as "m14"
    --, sum("MV_US_DATAMART"."RECOVERED_ORD_MGT") as "m15"
    --, sum("MV_US_DATAMART"."RECOVERED_SETTLEMENT") as "m16"
    --, sum("MV_US_DATAMART"."RECOVERED_TICKET_CHARGES") as "m17"
    --,sum("MV_US_DATAMART"."TICKET_CHARGES") as "m18"
    --, sum("MV_US_DATAMART"."VENDOR_CHARGE") as "m19"
              from "OS_OWNER"."MV_US_DATAMART" "MV_US_DATAMART"
    In this query I uncomment one column at a time and rerun. I improved the TimesTen results since my first post by retyping the NUMBER columns as BINARY_FLOAT. The results I got were:
    No. of Columns     ORACLE     TimesTen
    1     1.05     0.94     
    2     1.07     1.47     
    3     2.04     1.8     
    4     2.06     2.08     
    5     2.09     2.4     
    6     3.01     2.67     
    7     4.02     3.06     
    8     4.03     3.37     
    9     4.04     3.62     
    10     4.06     4.02     
    11     4.08     4.31     
    12     4.09     4.61     
    13     5.01     4.76     
    14     5.02     5.06     
    15     5.04     5.25     
    16     5.05     5.48     
    17     5.08     5.84     
    18     6     6.21     
    19     6.02     6.34     
    20     6.04     6.75

  • Can I configure TimesTen not to hold all data in memory?

    In our production environment we use a server with 32GB of memory to run TimesTen, but the developer machine only has 2GB of memory. Now we are going to fix an issue that requires duplicating the production data to the developer machine. Can we configure TimesTen not to hold all data in memory, so that it is possible to duplicate the production data to the developer machine? Does TimesTen support something like cached tables in HSQL?
    http://hsqldb.sourceforge.net/web/hsqlFAQ.html#BIGRESULTS

    TimesTen is an in-memory database. All the data that is managed directly by TimesTen must be 'in memory' and the machine hosting the datastore must have enough physical memory or system performance will be severely degraded.
    The only way to have a combined in-memory and on-disk solution is to introduce a backend Oracle database (10g or 11g) and use TimesTen cache connect. Of course, this then becomes a very different configuration and depending on what 'issue' it is that you need to investigate and fix the change to a Cache Connect configuration may hinder that investigation.
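    As a hedged illustration of what such a cache connect configuration can look like in recent TimesTen releases, a dynamic cache group with LRU aging loads rows from Oracle on demand and ages cold rows out, so only part of the data has to fit in memory (the cacheuser.orders table is invented for the example):
    CREATE DYNAMIC READONLY CACHE GROUP orders_cg
    AUTOREFRESH INTERVAL 30 SECONDS
    FROM cacheuser.orders (
      order_id  NUMBER(12) NOT NULL,
      status    VARCHAR2(10),
      amount    NUMBER(12,2),
      PRIMARY KEY (order_id)
    )
    AGING LRU ON;
    -- only rows actually referenced by queries are loaded into TimesTen,
    -- and LRU aging evicts them again when space is needed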
    Chris

  • Clusterware install on HP-UX 11.23 on a 2-node cluster (HP Integrity Itanium)

    We are trying to configure a two-node cluster to host an Oracle 10gR2 RAC environment. The servers are HP Integrity BL-86C servers, each with two Itanium CPUs and HP-UX 11.23. There are four network ports available, which are connected to a Gigabit virtual connect. One of them is used as the public interface for RAC and a second one is used as the private interconnect for RAC.
    We have an EVA3000 SAN. We configured two 1GB LUNs and presented both of them to both servers. These will be used as shared raw storage for the Oracle Clusterware files. On each server we can see these raw devices under /dev/rdsk/c*. We assigned root as the owner of the first LUN (which is used for the cluster registry), and oracle as the owner of the second LUN (which is used for the voting disk). We did this on both servers.
    We set up ssh user equivalency on both machines so that they can talk to each other using SSH and the oracle userid without prompting for a password, as per the Oracle install manual.
    When we start the install, it apparently finishes successfully on the first server, but it fails on the second one saying "WARNING: clssnmDiskPMT: long disk latency (25031 ms) to voting disk (2//dev/rdsk/c1t1d0)".
    Oracle is pointing to this error and saying that it is a disk issue. We are stuck at this point and need to figure out whether the way we are sharing the LUNs is correct. Can we present a LUN as a raw disk to two servers at the same time, as a shared raw device?
    Also, how can we test the disk latency? I suspect that it is probably not a disk issue but a shared storage issue, where the first server holds the raw disks and the second one cannot see them.
    Our UNIX administrator talked to the HP EVA specialist to figure out the 'disk latency' problem, and HP does not think it is an issue. But if there is anything Oracle wants checked from the OS or hardware side, I can check it.
    My question is: if it is a voting disk issue, why does the first node successfully start all the daemons and format the CRS and voting disks? We can see that in the root.sh output and in the ocssd.log file on the first node.
    Is there any utility from Oracle which can check whether the shared disks for the OCR and voting disks are configured and accessible correctly? The runcluvfy.sh distributed with the software DVD is buggy and does not work, and Oracle tech support was not able to provide any suggestions to make it work.
    Please let me know if anybody has done a similar install successfully and can shed some light on this problem.
    Your help will be greatly appreciated.
    Thanks

    I should add that the root.sh finishes successfully on the first node.
    It fails on the second node.
    Following is the output of root.sh on the first node.
    WARNING: directory '/u01/oracle10/product/10gR2' is not owned by root
    WARNING: directory '/u01/oracle10/product' is not owned by root
    WARNING: directory '/u01/oracle10' is not owned by root
    Checking to see if Oracle CRS stack is already configured
    Setting the permissions on OCR backup directory
    Setting up NS directories
    Oracle Cluster Registry configuration upgraded successfully
    WARNING: directory '/u01/oracle10/product/10gR2' is not owned by root
    WARNING: directory '/u01/oracle10/product' is not owned by root
    WARNING: directory '/u01/oracle10' is not owned by root
    Successfully accumulated necessary OCR keys.
    Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
    node <nodenumber>: <nodename> <private interconnect name> <hostname>
    node 1: dhiehr08 dhiehr08-priv dhiehr08
    node 2: dhiehr14 dhiehr14-priv dhiehr14
    Creating OCR keys for user 'root', privgrp 'sys'..
    Operation successful.
    Now formatting voting device: /dev/rdsk/c5t0d4
    Format of 1 voting devices complete.
    Startup will be queued to init within 30 seconds.
    Adding daemons to inittab
    Expecting the CRS daemons to be up within 600 seconds.
    CSS is active on these nodes.
    dhiehr14
    CSS is inactive on these nodes.
    dhiehr08
    Local node checking complete.
    Run root.sh on remaining nodes to start CRS daemons.
    On the second node, the output is as follows...
    WARNING: directory '/u01/oracle10/product/10gR2' is not owned by root
    WARNING: directory '/u01/oracle10/product' is not owned by root
    WARNING: directory '/u01/oracle10' is not owned by root
    Checking to see if Oracle CRS stack is already configured
    Setting the permissions on OCR backup directory
    Setting up NS directories
    Oracle Cluster Registry configuration upgraded successfully
    WARNING: directory '/u01/oracle10/product/10gR2' is not owned by root
    WARNING: directory '/u01/oracle10/product' is not owned by root
    WARNING: directory '/u01/oracle10' is not owned by root
    clscfg: EXISTING configuration version 3 detected.
    clscfg: version 3 is 10G Release 2.
    Successfully accumulated necessary OCR keys.
    Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
    node <nodenumber>: <nodename> <private interconnect name> <hostname>
    node 1: dhiehr08 dhiehr08-priv dhiehr08
    node 2: dhiehr14 dhiehr14-priv dhiehr14
    clscfg: Arguments check out successfully.
    NO KEYS WERE WRITTEN. Supply -force parameter to override.
    -force is destructive and will destroy any previous cluster
    configuration.
    Oracle Cluster Registry for cluster has already been initialized
    Startup will be queued to init within 30 seconds.
    Adding daemons to inittab
    Expecting the CRS daemons to be up within 600 seconds.
    Failure at final check of Oracle CRS stack.
    10
    The error message I mentioned in my previous post is from ocssd.log on the failed node.
    Your help will be greatly appreciated. Thanks

  • Oracle 10g to 11g Upgrade - Oracle Clusterware problem

    Oracle 10g RAC (2 nodes), RHEL 4 64-bit
    Hi All,
    I have begun the procedure for upgrading Oracle Clusterware to 11g. However, during the install the OUI was giving me error messages such as being unable to transfer the OUI install logs to node 2. I kept pushing through the install, and after I ran the last rootupgrade script on node 2 it gave me the following error:
    Checking the existence of nodeapps on this node
    Exception in thread "main" java.lang.UnsupportedClassVersionError: oracle/ops/opsctl/OPSCTLDriver (Unsupported major.minor version 49.0)
    at java.lang.ClassLoader.defineClass0(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:539)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:123)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:251)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:55)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:194)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:187)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:289)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:274)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:235)
    at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:302)
    Creating '/crs/home/install/paramfile.crs' with data used for CRS configuration
    Exception in thread "main" java.lang.UnsupportedClassVersionError: oracle/ops/opsctl/OPSCTLDriver (Unsupported major.minor version 49.0)
    at java.lang.ClassLoader.defineClass0(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:539)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:123)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:251)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:55)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:194)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:187)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:289)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:274)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:235)
    at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:302)
    Failed to retrieve VIP details
    Exception in thread "main" java.lang.UnsupportedClassVersionError: oracle/ops/opsctl/OPSCTLDriver (Unsupported major.minor version 49.0)
    at java.lang.ClassLoader.defineClass0(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:539)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:123)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:251)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:55)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:194)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:187)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:289)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:274)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:235)
    at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:302)
    Failed to retrieve VIP details
    Setting CRS configuration values in /crs/home/install/paramfile.crs
    So I ignored the error for the time being, and afterwards I checked the status of CRS, which gave me the following:
    [oracle@vtl-rac2 crsd]$ crsstatus
    HA Resource Target State
    ora.VMRACDEV.VMRACDEV1.inst ONLINE ONLINE on vtl-rac1
    ora.VMRACDEV.VMRACDEV2.inst ONLINE ONLINE on vtl-rac2
    ora.VMRACDEV.db ONLINE ONLINE on vtl-rac1
    ora.vtl-rac1.ASM1.asm ONLINE ONLINE on vtl-rac1
    ora.vtl-rac1.LISTENER_VTL-RAC1.lsnr ONLINE ONLINE on vtl-rac1
    ora.vtl-rac1.gsd ONLINE ONLINE on vtl-rac1
    ora.vtl-rac1.ons ONLINE OFFLINE
    ora.vtl-rac1.vip ONLINE ONLINE on vtl-rac1
    ora.vtl-rac2.ASM2.asm ONLINE ONLINE on vtl-rac2
    ora.vtl-rac2.LISTENER_VTL-RAC2.lsnr ONLINE ONLINE on vtl-rac2
    ora.vtl-rac2.gsd ONLINE ONLINE on vtl-rac2
    ora.vtl-rac2.ons ONLINE OFFLINE
    ora.vtl-rac2.vip ONLINE ONLINE on vtl-rac2
    So I tried to bring it back up with crs_stop -all and crs_start -all, and it gave me the following error:
    vtl-rac1 : CRS-1019: Resource ora.vtl-rac2.ons (application) cannot run on vtl-rac1
    Start of `ora.vtl-rac1.ons` on member `vtl-rac1` failed.
    vtl-rac2 : CRS-1019: Resource ora.vtl-rac1.ons (application) cannot run on vtl-rac2
    CRS-0223: Resource 'ora.VMRACDEV.db' has placement error.
    CRS-0215: Could not start resource 'ora.vtl-rac1.ons'.
    CRS-0215: Could not start resource 'ora.vtl-rac2.ons'.
    I am thinking that all of this was caused by the various issues I had during the install. If I am wrong, please let me know. If it is the case, I would like to know whether there are any 11g docs on how to clean up a failed Clusterware upgrade. Any advice on any of these situations would be greatly appreciated.
    Thank you

    Hi Chandra,
    "Did CVU report any problems before the upgrade?" No, there were no errors reported by CVU before the upgrade.
    "I don't think there is a note out there for cleaning up an 11g CRS install... and I think you can very well use the 10g CRS note - 239998.1." Yeah, I might have to go that way.
    "I have both the 11g CRS install and the upgrade from 10g CRS to 11g CRS at http://chandradba.blogspot.com/2007/08/oracle-11g-rac-install-on-red-hat-50.html and http://chandradba.blogspot.com/2008/02/oracle-10g-crs-upgrade-to-11g-crs.html - see if it helps." Yup, your guide is very simple, clear and error-proof :) That's pretty much how mine went, except that at around 75% I started getting those strange errors about files not being able to be transferred to node 2. Anyway, it shouldn't be a problem, as none of those errors were configuration related... or else I would have a messed up cluster.
    Well, I actually rebooted both machines and now the whole CRS stack is up!! So I guess I am ok. We'll just have to wait and see.
    Thanks for your help Chandra...I always appreciate it.

  • TimesTen DB Installation problem on Windows 7 OS

    Hi,
    I tried installing TimesTen DB version 7.0 on the Windows 7 OS. During installation I am getting an 'unable to create Registry Key' error.
    Because of this error I am not able to install the DB. I need help to resolve this issue.
    I also want to know whether TimesTen supports the Windows 7 OS.

    Windows 7 is not currently officially supported, but I routinely use TimesTen 7.0 and 11.2.1, both 32-bit and 64-bit, on Windows 7 (64-bit) with no issues.
    What version of Windows 7 are you using, and is it 32-bit or 64-bit? Are you trying to install 32-bit or 64-bit TimesTen 7.0? Also, why are you not using TimesTen 11.2.1?
    Did the error give any more information (such as which registry key)? Are you running the install as an Administrator? Have you disabled any anti-virus software or other security software prior to running the install?
    Chris
