Parallelism for Live Database Servers.


Hi All,
We are planning to increase "cost threshold for parallelism" from 5 to 35 and "maximum degree of parallelism" from 4 to 8. If we change these settings, I believe we may face the problems mentioned below:
#1. Queries that currently get a parallel plan, roughly in the 5-20 cost range, may take more time than they do now until we optimize them. As you know, there are many applications running on TDS Live that are accessed by multiple users, and those users might be impacted because such queries will now run serially and take longer.
#2. When analysing the LIVE environment we found that there are two instances running in parallel on the same machine; changing maximum degree of parallelism from 4 to 8 on one instance might cause queries on the other instance to fail, and vice versa.
We need suggestions from you all to help us understand how parallelism will work when there are 2 instances running on the same machine.
Shivraj Patil.

Is it an OLTP or OLAP server/application? Generally speaking, if it is an OLTP server (many small, short-running queries), you would probably benefit from setting MAXDOP = 1 at the server level and increasing "cost threshold for parallelism" to 50 as a starting point...
But do search on the subject - you will find great blog posts about it, from Adam Machanic for example...
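A minimal T-SQL sketch of that server-level change (the values are just the suggested starting point above, not a universal recommendation - test before applying this on LIVE):

-- Make both options visible to sp_configure.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Suggested OLTP starting point: raise the cost threshold, cap DOP at 1.
EXEC sp_configure 'cost threshold for parallelism', 50;
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;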
Best Regards,
Uri Dimant, SQL Server MVP
http://sqlblog.com/blogs/uri_dimant/

Similar Messages

  • How to know the optimal Degree of Parallelism for my database?

    I have an important application on my database (Oracle 10.2.0) and the box has 4 CPUs. None of the tables are partitioned. Should I set the degree of parallelism myself?
    How do I know the optimal degree of parallelism for my database?

    As far as I am concerned, there is no optimal degree of parallelism at the database level. The optimal value varies by query based on the plan in use, and it may change over time.
    It is not that difficult to overuse the PQO (parallel query option) and end up harming overall database performance. PQO is a brute-force methodology and should be applied carefully; otherwise you end up with inconsistent results.
    You can let Oracle manage it, or you can manage it at the statement level via hints. I do not like specifying degrees of parallelism at the object level: no two queries are exactly alike, and what is right for one query against a table may not be right for another query against the same table.
    If in doubt, set up the system to let Oracle manage it. If what you are really asking is how many PQO sessions to allocate, then look at your Statspack or AWR reports and judge your system load. Monitor v$px_session and v$pq_slave to see how much activity these views show; see the sketch below.
    IMHO -- Mark D Powell --
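    A minimal SQL sketch of the monitoring and of a statement-level hint as mentioned above (the table in the hinted query, SALES, is hypothetical):

    -- Parallel query slaves currently attached to each query coordinator.
    SELECT qcsid, COUNT(*) AS active_slaves
    FROM   v$px_session
    WHERE  sid <> qcsid
    GROUP  BY qcsid;

    -- Overall state of the parallel query slave pool.
    SELECT status, COUNT(*) AS slaves
    FROM   v$pq_slave
    GROUP  BY status;

    -- Degree of parallelism requested per statement via a hint,
    -- rather than being set on the table.
    SELECT /*+ PARALLEL(s, 4) */ COUNT(*)
    FROM   sales s;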

  • Installing one APEX for multiple databases servers.

    Hello, everyone.
    I'm new to the world of Oracle solutions, and I would very much appreciate your help.
    Here in my company I have five database servers, and I want to install Oracle APEX in such a way that I only need to access one web page and can work with all the databases from there.
    For instance: imagine I have five machines, each with an instance of one database. I would install APEX on each machine, but I would like to open APEX in my browser only once and be able to work with all the databases - as if APEX had a selector where I could choose which database to work with.
    Is that possible? If it is, could it be done with the XML DB server? What is the best HTTP server to use in this case?
    Thanks for your attention.
    Sorry if I wasn't clear, I'm new and I'm learning.
    Regards, Leandro R. de Freitas.

    Hi,
    As you describe it, this is not possible. An APEX instance is installed inside an Oracle database instance and is therefore tied very closely to that instance.
    That being said, it is still possible to access data in different database instances from a single database by using database links, though this can be a little complicated and is not always desirable from a performance point of view. Whether it is desirable in your case depends on the architecture of your applications.
    For example, if your 5 different instances are part of one distributed application, it may be desirable to install APEX in one instance that could be termed the master instance and have it access data from the other instances via database links, which should already exist in a distributed application.
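    A minimal sketch of that database link approach (the link name, credentials, TNS alias and the remote EMP table are all hypothetical):

    -- Created in the "master" instance that hosts APEX.
    CREATE DATABASE LINK remote_site2
      CONNECT TO app_user IDENTIFIED BY app_password
      USING 'SITE2_TNS_ALIAS';

    -- APEX pages in the master instance can then read the remote data.
    SELECT empno, ename FROM emp@remote_site2;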
    On the other hand, if your applications are quite distinct, then I would see it as desirable to have separate installations of APEX in each instance. This would enable different release cycles and versioning if required, lead to better performance and avoid a single point of failure. If you need a unified front end to these applications, you could perhaps create a gateway application on one of the APEX instances, though how you deal with sign-on would depend on your architecture, e.g. Single Sign-On vs. LDAP etc. You could also simply create an HTML page somewhere on your intranet that points to these applications and let each application handle sign-on itself.
    I hope this helps.
    Andre

  • Resource estimation/Sizing (i.e CPU and Memory) for Oracle database servers

    Hi,
    I have come across a requirement for Oracle database server sizing in terms of CPU and memory. Does anybody have Metalink notes or a white paper giving a basic estimation or calculation of resources (i.e. CPU and RAM) based on database size, number of concurrent connections/sessions and/or number of transactions?
    I have searched a lot on Metalink but failed to find anything; it would be a great help if anybody has ideas on this. I am sure something must exist, because to start an IT infrastructure implementation one has to estimate resources in line with the IT budget.
    Thanks in advance.
    Mehul.

    You could start the other way around: if you already have a server, is it sufficient for the database you want to run on it? Is there sufficient memory? Is it solely a database server (not shared)? How fast are the disks - SAN/RAID/local disk? Does it have the networking capacity (100 Mbps, gigabit)? How many CPUs? Will there be intensive SQL? How does Oracle licensing fit into it? What type of application will run on the database - OLTP or OLAP?
    If you don't know whether there is sufficient memory/CPU, then profile the application based on what everyone expects: again, start with OLTP or OLAP and work your way down to the types of queries/jobs that will be run, the number of concurrent users and the performance you expect/require. For an OLAP application you may want the fastest disks possible, multiple CPUs and a large SGA and PGA (2-4 GB PGA?), and pay a little extra for parallel server and partitioning in license fees.
    This is just the start of an investigation, then you can work out what fits into your budget.
    Edited by: Stellios on Sep 26, 2008 4:53 PM

  • Sun T2 chips vs SPARC64 VI/VII chips for Oracle database servers

    We have recently found poor performance on Sun T5240 servers: compared with previous V-series servers with similar gross I/O capability on the disk volume, the timings for the recovery of the database and the subsequent rebuild of the tables and materialised views range from slightly worse to atrocious. Given that the clock speed is similar and all significant Oracle database parameters are the same, we couldn't see why these processes should take longer on the T5240. The statistics report from the database showed no I/O bottleneck on the new machine; instead it showed raised CPU times for some of the key slow statements.
    We found the attached Oracle Metalink document (ref 781763.1) <<Sun_T2_slow_for Oracle_DB.html>>
    It seems that if you can't control the content of large, long-running SQL statements so as to parallelise them, don't buy a T-series box, as the multithreading capability will work against you.
    Therefore we are looking at new servers to run other similarly large Oracle apps and appear to have a number of choices within a similar price range,
    Notably
    M3000 - 1 x SPARC VII CPU (quad-core processor)
    M4000 - 2 x SPARC VI CPU (dual core processor)
    as opposed to the
    T5240 - 2 x T2 CPU (8 core processor)
    The question is which of these CPU/core combinations works best... are there performance figures? What difference do the number of CPUs and the number of cores make?
    Any benchmarks to help us decide? Anyone experienced similar or tried these different combos for themselves?

    We tested a T5210 as a database server (running Ingres, not Oracle) and our batch performance was roughly equivalent to a dual CPU UltraSPARC III V480. Our guess is that the performance hit came from having a relatively small cache shared across so many threads. Like you, we're looking at using the M3000 or M4000 in preference.

  • How to configure email Alerts in OEM Cloud 12c for Database Servers up/down

    Hi everybody,
    How to configure email Alerts in OEM Cloud 12c for Database Servers up/down status?
    Regards,
    Miguel Vega

    Hi Miguel Vega,
    Information regarding the notifications:
    ==============================
    Configuring notification rules in 12c is different from earlier releases.
    The concept and function of notification rules has been replaced with a two-tier system consisting of Incident Rules and Incident Rule Sets:
    1. Incident Rules: Operate at the lowest level of granularity (on discrete events) and perform the same role as notification rules from earlier releases.
    By using incident rules, you can automate the response to incoming incidents and their updates.
    A rule contains a set of automated actions to be taken on specific events, incidents or problems.
    The actions taken are, for example: sending e-mails, creating incidents, updating incidents, and creating tickets.
    2. Incident Rule Set: A rule set is a collection of rules that applies to a common set of objects, for example targets, jobs, and templates.
    To help you achieve the notification rules configuration, refer to these notes:
    How To Configure Notification Rules in 12c Enterprise Manager Cloud Control? (Doc ID 1368036.1)
    EM12c: How to Add and Configure Email Addresses to EM Administrators and Update the Notification Schedule (Doc ID 1368262.1)
    EM12c: How to Subscribe or Unsubscribe for Email Notification for an Incident Rule Set (Doc ID 1389460.1)
    EM 12c: How to Configure Notifications for Job Executions (Doc ID 1386816.1)
    Best Regards,
    Venkat

  • Failed to install Add-On For 2 Different Database Servers

    Hi All,
    We are using one application server and 2 database servers (development - DEV01 and production - PRD01). All SAP Business One applications (server, client, license, add-on, etc.) are installed on the application server.
    Now we are having a problem where we cannot install an add-on on one of the database servers if we have already assigned it to the other database. For example: if we have already assigned and installed it on DEV01, then we cannot assign and install it on PRD01.
    The question is, can we use one add-on pointing to 2 different database servers? If yes, would you please advise how to achieve it?
    Thanks and Regards,
    Lailus

    Hi,
    Please check SAP Note 871572 - SDK license mechanism.
    Thanks & Regards,
    Nagarajan

  • Distribution Monitor for 2 different servers from 2 different sites

    Hello all,
    We are trying to use Distribution Monitor during a parallel Unicode Conversion on a SAP 4.7 system.
    The source system and the target system are 2 different servers located at 2 different sites (more than 500 km apart).
    Questions:
    1. Can we use Distribution Monitor with 1 source server dedicated for the Export and 1 target server dedicated for the import of a package?
    2. If it is not possible, what are the constraints in fact?
    3. Can we have a scenario where Distribution Monitor is used on the source system in order to use the parallelism benefit and Migration Monitor used on the target system?
    Thanks for your help & feedback,
    Chris

    Hi Chris,
    1. Can we use Distribution Monitor with 1 source server dedicated to the export and 1 target server dedicated to the import of a package? The answer is no.
    In order to use Distribution Monitor, you need at least two application servers on the source system and, correspondingly, at least two application servers on the target system.
    For example, say you have Application Server A and Application Server B on the source system and Application Server C and Application Server D on the target system.
    Then configure the Distribution Monitor properties to include two application servers as source systems and two application servers as target systems. When you execute the Distribution Monitor preparation, it first scans the database servers in the source and target systems and then scans the CI servers in the source and target systems. The packages will then be distributed across the two application servers A and B.
    Run the export from Application Server A for the first fifty packages and, at the same time, import these first fifty packages on Application Server C.
    Run the export from Application Server B for the remaining packages and, at the same time, import the remaining packages on Application Server D.
    (That is a one-to-one correspondence.)
    2. If it is not possible, what are the constraints in fact? - There are no constraints as such; however, the Distribution Monitor preparation and checking is quite time consuming.
    3. Can we have a scenario where Distribution Monitor is used on the source system in order to benefit from parallelism and Migration Monitor is used on the target system? - The answer is no.
    You cannot mix the Distribution Monitor tool on the source system with the Migration Monitor tool on the target system.
    You have to use one tool or the other, depending on the size of the database.
    If your database is very large, then I recommend using Distribution Monitor, where you can have multiple R3load jobs on each application server (say Application Server A uses 20 R3load jobs and Application Server B uses 15 R3load jobs).
    Thanks
    APR

  • How to Integrate real time data between 2 database servers

    I have a scenario where the database (DB2/400) is maintained by an AS/400 application, and my new J2EE-based website application accesses the same database, but the performance is very low. So we have thought of introducing a new Oracle database that will be accessed by the J2EE application, and all the data from the DB2/400 database will be replicated to the Oracle database. In that scenario the only problem is real-time data exchange between the 2 databases. How do we achieve that, considering that both the AS/400 application and the J2EE website application run in parallel and access the same information lying in the DB2/400 database? We also have to look at transaction management.
    Thanks
    Panky
    DrClap replied (May 31, 2006):
    You certainly wouldn't use XML for this.
    The process you're looking for is called "replication". Ask your database experts about it.
    I predict that after you spend all the money to install Oracle and hire consultants to make it replicate the DB2/400 database, your performance problem will be worse.
    panks replied (May 31, 2006):
    Yeah, I know that it's not an XML solution.
    Replication is one option, but the AS/400 application which uses the DB2/400 DB is heavily loaded, and the proposed website also uses the same database for retrieval and updates. All the inventory is maintained in the DB2/400 database, so I have thought of introducing a new Oracle database which will be accessed by the new website and will have all the relevant table structures along with the data from the DB2/400 application. Now, whenever an order is placed from the new website, it should first update the Oracle database and then this data should also migrate to the DB2/400 application in real time, so that the main inventory lying on DB2/400 is updated on a real-time basis, because order placement is also possible from the AS/400 application. The users of the AS/400 application should not get wrong data.
    Is it possible to use MQ products??
    -Panky

    Hi,
    the answer to your question is not easy. Synchronizing, integrating or replicating data between 2 (or more) database servers is a very complicated task, even though it doesn't look like it.
    Firstly, I would recommend creating a good analysis of the data flow.
    The important things are:
    1) Which is the primary side for data creation? In other words, on which side - DB2 or Oracle - are the primary data (created there) and on which side are the secondary data (just copies)?
    2) On which side are data changed - only on the DB2 side, only on the Oracle side, or on both sides?
    3) Are there data which are changed on both sides concurrently? If so, how should conflicts be resolved?
    4) What does "real time" mean? Is it up to 1 ms, 1 s, 1 min or 1 hour?
    5) What should be done when replication does not work (a replication crash etc.)?
    BTW, the word "change" above means INSERT, UPDATE and DELETE commands.
    The analysis should be done for every column in every table. When the analysis is ready, you can select the best system for your solution (Oracle replication, Sybase Replication Server, MQ, EJB or your own proprietary solution). Without such an analysis it would, IMHO, be a shot in the dark.
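    If the analysis points toward a proprietary solution, one common building block is trigger-based change capture on the primary side, so the other side can poll the changes. A minimal Oracle-flavoured sketch, purely illustrative (the ORDERS table, the log table and the sequence are hypothetical names):

    -- Hypothetical change log that a replication job on the other side can poll.
    CREATE SEQUENCE orders_chg_seq;

    CREATE TABLE orders_chg_log (
      log_id      NUMBER PRIMARY KEY,
      order_id    NUMBER      NOT NULL,
      change_type VARCHAR2(1) NOT NULL,              -- 'I', 'U' or 'D'
      changed_at  DATE        DEFAULT SYSDATE NOT NULL
    );

    CREATE OR REPLACE TRIGGER trg_orders_chg
    AFTER INSERT OR UPDATE OR DELETE ON orders
    FOR EACH ROW
    BEGIN
      INSERT INTO orders_chg_log (log_id, order_id, change_type)
      VALUES (orders_chg_seq.NEXTVAL,
              NVL(:NEW.order_id, :OLD.order_id),
              CASE WHEN INSERTING THEN 'I'
                   WHEN UPDATING  THEN 'U'
                   ELSE 'D' END);
    END;
    /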

  • AIX 5L's HACMP on Database Servers

    Hello everyone,
    I've been looking around for information on a 4-node SAP ERP/Oracle RAC architecture for AIX 5L, and I'm constantly being led to the conclusion that the vendors are responsible for the clustering requirements.
    I'm trying to set up a 4-server ERP architecture. Based on documentation provided in the SAP Service Marketplace, IBM's General Parallel File System (GPFS) and PowerHA/HACMP are needed.
    The following is our Production architecture...
    2 Application Servers for SAP ERP Applications (clustered active-active)
    - and -
    2 Database Servers + External Storage mounted identically onto both (RAC Clustered)
    I'm not sure how GPFS and PowerHA/HACMP will be distributed onto the 4 servers.
    I would like to humbly ask the SAP implementation experts to please confirm this setup I've planned.
    2 App Servers = PowerHA/HACMP
    2 DB Servers = GPFS + Oracle Clusterware
    Is it a requirement to implement PowerHA/HACMP and GPFS on all 4 of my servers?
    Help would be greatly appreciated.
    Many Thanks,
    Brian

    Duplicate post
    Read and Follow the Forum Rules

  • Best Policy to run open queries on LIVE Database

    Hi Experts,
    There are users who have read-only access to the Live database, and they run open (ad hoc) queries against it. Sometimes we have faced issues on the Live database due to these open queries, and we cannot stop them from running either. So I would like to get your suggestions
    on the best solution.
    Shivraj Patil.

    What issue do you have? One option is tuning the query - that means creating proper indexes to speed it up (see the index sketch after the example below).
    But a user may run a heavy query without even being aware of how many resources the system consumes. For example:
    --A typical parallel query...
    SELECT
        p.ProductId,
        p.ProductNumber,
        p.ReorderPoint,
        th.TransactionId,
        RANK() OVER (
            PARTITION BY
                p.ProductId
            ORDER BY
                th.ActualCost DESC
        ) AS LineTotalRank,
        RANK() OVER (
            PARTITION BY
                p.ProductId
            ORDER BY
                th.Quantity DESC
        ) AS OrderQtyRank
    FROM bigProduct AS p
    INNER JOIN bigTransactionHistory AS th ON
        th.ProductId = p.ProductId
    WHERE
        p.ProductId BETWEEN 1001 AND 5001
    GO
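    For the index-tuning option mentioned above, a minimal sketch against the example tables (whether this exact index helps depends on the actual plan and workload, so treat it as an illustration only):

    -- Hypothetical covering index for the example query above.
    CREATE NONCLUSTERED INDEX IX_bigTransactionHistory_ProductId
    ON bigTransactionHistory (ProductId)
    INCLUDE (TransactionId, ActualCost, Quantity);
    GO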
    Best Regards,
    Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/

  • Hardware requirements for SQL database used with TestStand

    We want to set up a SQL server to store the data from TestStand.
    How do we determine the hardware requirements for this server? It will be used with 10-30 machines running tests and logging data and another 5-10 machines running queries to pull the data back out for analysis. The result data size will range from 50-25,000 results per run (run times are 1 minute for 50 result tests and 5 hours for the 25,000 result tests).

    Hi,
    database design and hardware requirements are never easy. There are a lot of scientific papers on workload tests and requirement assumptions. I cannot give a short answer as to which machine to use - just some ideas and starting points.
    The most important parts of a database system with large data sets are network bandwidth, RAM and storage bandwidth. With smaller data sets and more complex transactions the CPU becomes more important.
    In this TestStand case the data sets are usually rather small. If the queries are not too complex, the requirements do not seem to be too high.
    Database performance is usually measured in transactions per minute (tpm). Specialised database servers can perform several thousand tpm and have costs starting at about $15 per tpm. See Microsoft's benchmark page for a good starting point: http://www.microsoft.com/sql/evaluation/compare/benchmarks.asp
    You may also visit the TPC.org homepage.
    To be more specific:
    I'd choose a modern Intel-based system like a P4 3 GHz (which presents virtual multiprocessors) with at least 512 MB RAM and a RAID 5 hard disk storage system (not necessarily SCSI) with at least 3 single HDs. Use at least a 100 Mbit LAN connection, ideally switched. Don't forget backup!
    Check also the pages of your preferred database provider.
    I am at a starting point here too. We have chosen MySQL, which runs (at least for now) on the very same machine where TS & LV are running. We plan to test this setup with increasing load to get a practical idea of the hardware requirements. The planned final setup will have up to 5 test stations and 5 query stations. We'll run about 50 rather complex tests of about 4 hours each that operate in parallel on the test stations.
    HTH and
    Greetings from Germany
    Uwe

  • Use of current time for polling Database Adapter query

    I am writing a simple BPEL process with a polling Database Adapter and a Receive. The idea is that we are polling an XE database for any entries in a TRIP table whose expiration date/time field has passed.
    The adapter was built using JDeveloper 10.1.3.2 (with Oracle Application Server patched to 10.1.3.3.0) as a "Poll for New or Changed Records in a Table" operation type, with a STATUS field (0 for live, 1 for expired) as the logical delete field.
    I was unable to find a way to generate a SELECT query expression with the wizard that would allow me to use current/system time as an attribute, so I finished the wizard and edited the Toplink Descriptor to use a custom SQL expression for the query. This resulted in the following code in the toplink_mappings.xml file:
    <?xml version="1.0" encoding="UTF-8"?>
    <toplink:object-persistence version="Oracle TopLink - 10g Release 3 (10.1.3.1.0) (Build 061004)"
    xmlns:opm="http://xmlns.oracle.com/ias/xsds/opm" xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:toplink="http://xmlns.oracle.com/ias/xsds/toplink"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <opm:name>ExpiredTripPoller</opm:name>
    <opm:class-mapping-descriptors>
    <opm:class-mapping-descriptor xsi:type="toplink:relational-class-mapping-descriptor">
    <opm:class>ExpiredTripPoller.Trip</opm:class>
    <opm:alias>Trip</opm:alias>
    <opm:primary-key>
    <opm:field table="TRIP" name="ID" xsi:type="opm:column"/>
    </opm:primary-key>
    <opm:events xsi:type="toplink:event-policy"/>
    <opm:querying xsi:type="toplink:query-policy">
    <opm:queries>
    <opm:query name="ExpiredTripPoller" xsi:type="toplink:read-all-query">
    <toplink:timeout>0</toplink:timeout>
    <toplink:call xsi:type="toplink:sql-call">
    <toplink:sql>SELECT ID, LPN, START_TIME, EXPIRY_TIME, STATUS FROM TRIP WHERE ((STATUS = '0') AND (EXPIRY_TIME < SYSDATE)) ORDER BY EXPIRY_TIME ASC</toplink:sql>
    </toplink:call>
    <toplink:reference-class>ExpiredTripPoller.Trip</toplink:reference-class>
    <toplink:cache-usage>primary-key</toplink:cache-usage>
    <toplink:lock-mode>none</toplink:lock-mode>
    <toplink:container xsi:type="toplink:list-container-policy">
    <toplink:collection-type>java.util.Vector</toplink:collection-type>
    </toplink:container>
    </opm:query>
    <opm:query name="findAllTrip" xsi:type="toplink:read-all-query">
    <toplink:timeout>0</toplink:timeout>
    <toplink:reference-class>ExpiredTripPoller.Trip</toplink:reference-class>
    <toplink:cache-usage>primary-key</toplink:cache-usage>
    <toplink:lock-mode>none</toplink:lock-mode>
    <toplink:container xsi:type="toplink:list-container-policy">
    <toplink:collection-type>java.util.Vector</toplink:collection-type>
    </toplink:container>
    </opm:query>
    </opm:queries>
    <toplink:does-exist-query xsi:type="toplink:does-exist-query">
    <toplink:existence-check>check-database</toplink:existence-check>
    </toplink:does-exist-query>
    <toplink:read-all-query xsi:type="toplink:read-all-query">
    <toplink:reference-class>ExpiredTripPoller.Trip</toplink:reference-class>
    <toplink:container xsi:type="toplink:list-container-policy">
    <toplink:collection-type>java.util.Vector</toplink:collection-type>
    </toplink:container>
    </toplink:read-all-query>
    </opm:querying>
    <opm:attribute-mappings>
    <opm:attribute-mapping xsi:type="toplink:direct-mapping">
    <opm:attribute-name>id</opm:attribute-name>
    <opm:field table="TRIP" name="ID" xsi:type="opm:column"/>
    <opm:attribute-classification>java.math.BigDecimal</opm:attribute-classification>
    </opm:attribute-mapping>
    <opm:attribute-mapping xsi:type="toplink:direct-mapping">
    <opm:attribute-name>lpn</opm:attribute-name>
    <opm:field table="TRIP" name="LPN" xsi:type="opm:column"/>
    <opm:attribute-classification>java.lang.String</opm:attribute-classification>
    </opm:attribute-mapping>
    <opm:attribute-mapping xsi:type="toplink:direct-mapping">
    <opm:attribute-name>startTime</opm:attribute-name>
    <opm:field table="TRIP" name="START_TIME" xsi:type="opm:column"/>
    <opm:attribute-classification>java.sql.Timestamp</opm:attribute-classification>
    </opm:attribute-mapping>
    <opm:attribute-mapping xsi:type="toplink:direct-mapping">
    <opm:attribute-name>expiryTime</opm:attribute-name>
    <opm:field table="TRIP" name="EXPIRY_TIME" xsi:type="opm:column"/>
    <opm:attribute-classification>java.sql.Timestamp</opm:attribute-classification>
    </opm:attribute-mapping>
    <opm:attribute-mapping xsi:type="toplink:direct-mapping">
    <opm:attribute-name>status</opm:attribute-name>
    <opm:field table="TRIP" name="STATUS" xsi:type="opm:column"/>
    <opm:attribute-classification>java.math.BigDecimal</opm:attribute-classification>
    </opm:attribute-mapping>
    </opm:attribute-mappings>
    <toplink:descriptor-type>independent</toplink:descriptor-type>
    <toplink:caching>
    <toplink:cache-type>weak-reference</toplink:cache-type>
    <toplink:always-refresh>true</toplink:always-refresh>
    </toplink:caching>
    <toplink:remote-caching>
    <toplink:cache-type>weak-reference</toplink:cache-type>
    </toplink:remote-caching>
    <toplink:instantiation/>
    <toplink:copying xsi:type="toplink:instantiation-copy-policy"/>
    <toplink:change-policy xsi:type="toplink:deferred-detection-change-policy"/>
    <toplink:tables>
    <toplink:table name="TRIP"/>
    </toplink:tables>
    </opm:class-mapping-descriptor>
    </opm:class-mapping-descriptors>
    </toplink:object-persistence>
    To test I used the above custom SQL at the command line and it filtered the records by EXPIRY_TIME as expected.
    When deployed, the polling process updates the STATUS field of table entries, but it updates all entries with status 0 regardless of EXPIRY_TIME. My modification appears to be ignored. I was unsure whether the query was being determined in some other way, so I modified the descriptor (with the TopLink expression editor) to compare against a literal time value, producing the following modified toplink_mappings.xml:
    <?xml version="1.0" encoding="UTF-8"?>
    <toplink:object-persistence version="Oracle TopLink - 10g Release 3 (10.1.3.1.0) (Build 061004)"
    xmlns:opm="http://xmlns.oracle.com/ias/xsds/opm" xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:toplink="http://xmlns.oracle.com/ias/xsds/toplink"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <opm:name>ExpiredTripPoller</opm:name>
    <opm:class-mapping-descriptors>
    <opm:class-mapping-descriptor xsi:type="toplink:relational-class-mapping-descriptor">
    <opm:class>ExpiredTripPoller.Trip</opm:class>
    <opm:alias>Trip</opm:alias>
    <opm:primary-key>
    <opm:field table="TRIP" name="ID" xsi:type="opm:column"/>
    </opm:primary-key>
    <opm:events xsi:type="toplink:event-policy"/>
    <opm:querying xsi:type="toplink:query-policy">
    <opm:queries>
    <opm:query name="ExpiredTripPoller" xsi:type="toplink:read-all-query">
    <opm:criteria operator="lessThan" xsi:type="toplink:relation-expression">
    <toplink:left name="expiryTime" xsi:type="toplink:query-key-expression">
    <toplink:base xsi:type="toplink:base-expression"/>
    </toplink:left>
    <toplink:right xsi:type="toplink:constant-expression">
    <toplink:value xsi:type="xsd:date">2007-07-30</toplink:value>
    </toplink:right>
    </opm:criteria>
    <toplink:timeout>0</toplink:timeout>
    <toplink:reference-class>ExpiredTripPoller.Trip</toplink:reference-class>
    <toplink:cache-usage>primary-key</toplink:cache-usage>
    <toplink:lock-mode>none</toplink:lock-mode>
    <toplink:container xsi:type="toplink:list-container-policy">
    <toplink:collection-type>java.util.Vector</toplink:collection-type>
    </toplink:container>
    </opm:query>
    <opm:query name="findAllTrip" xsi:type="toplink:read-all-query">
    <toplink:timeout>0</toplink:timeout>
    <toplink:reference-class>ExpiredTripPoller.Trip</toplink:reference-class>
    <toplink:cache-usage>primary-key</toplink:cache-usage>
    <toplink:lock-mode>none</toplink:lock-mode>
    <toplink:container xsi:type="toplink:list-container-policy">
    <toplink:collection-type>java.util.Vector</toplink:collection-type>
    </toplink:container>
    </opm:query>
    </opm:queries>
    <toplink:does-exist-query xsi:type="toplink:does-exist-query">
    <toplink:existence-check>check-database</toplink:existence-check>
    </toplink:does-exist-query>
    <toplink:read-all-query xsi:type="toplink:read-all-query">
    <toplink:reference-class>ExpiredTripPoller.Trip</toplink:reference-class>
    <toplink:container xsi:type="toplink:list-container-policy">
    <toplink:collection-type>java.util.Vector</toplink:collection-type>
    </toplink:container>
    </toplink:read-all-query>
    </opm:querying>
    <opm:attribute-mappings>
    <opm:attribute-mapping xsi:type="toplink:direct-mapping">
    <opm:attribute-name>id</opm:attribute-name>
    <opm:field table="TRIP" name="ID" xsi:type="opm:column"/>
    <opm:attribute-classification>java.math.BigDecimal</opm:attribute-classification>
    </opm:attribute-mapping>
    <opm:attribute-mapping xsi:type="toplink:direct-mapping">
    <opm:attribute-name>lpn</opm:attribute-name>
    <opm:field table="TRIP" name="LPN" xsi:type="opm:column"/>
    <opm:attribute-classification>java.lang.String</opm:attribute-classification>
    </opm:attribute-mapping>
    <opm:attribute-mapping xsi:type="toplink:direct-mapping">
    <opm:attribute-name>startTime</opm:attribute-name>
    <opm:field table="TRIP" name="START_TIME" xsi:type="opm:column"/>
    <opm:attribute-classification>java.sql.Timestamp</opm:attribute-classification>
    </opm:attribute-mapping>
    <opm:attribute-mapping xsi:type="toplink:direct-mapping">
    <opm:attribute-name>expiryTime</opm:attribute-name>
    <opm:field table="TRIP" name="EXPIRY_TIME" xsi:type="opm:column"/>
    <opm:attribute-classification>java.sql.Timestamp</opm:attribute-classification>
    </opm:attribute-mapping>
    <opm:attribute-mapping xsi:type="toplink:direct-mapping">
    <opm:attribute-name>status</opm:attribute-name>
    <opm:field table="TRIP" name="STATUS" xsi:type="opm:column"/>
    <opm:attribute-classification>java.math.BigDecimal</opm:attribute-classification>
    </opm:attribute-mapping>
    </opm:attribute-mappings>
    <toplink:descriptor-type>independent</toplink:descriptor-type>
    <toplink:caching>
    <toplink:cache-type>weak-reference</toplink:cache-type>
    <toplink:always-refresh>true</toplink:always-refresh>
    </toplink:caching>
    <toplink:remote-caching>
    <toplink:cache-type>weak-reference</toplink:cache-type>
    </toplink:remote-caching>
    <toplink:instantiation/>
    <toplink:copying xsi:type="toplink:instantiation-copy-policy"/>
    <toplink:change-policy xsi:type="toplink:deferred-detection-change-policy"/>
    <toplink:tables>
    <toplink:table name="TRIP"/>
    </toplink:tables>
    </opm:class-mapping-descriptor>
    </opm:class-mapping-descriptors>
    </toplink:object-persistence>
    On deployment, this version of the BPEL process behaved as expected - only modifying the records with EXPIRY_TIME less than the literal time specified. (Also, I can't really pass in the time as a parameter, as this is a polling model.)
    Can anyone shed any light on what is happening or suggest how I might go about polling in the desired way?
    Cheers

    Please take a look at this article, which states that SYSDATE is not supported in a where clause:
    http://www.oracle.com/technology/pub/articles/bpel_cookbook/qualcomm-bpel.html
    Excerpt from the article:
    Here are three important things you should do in implementing the above design:
    Have the status of the record being processed stored in the database. The status includes the process state, next process attempt time, and processing attempt count.
    Create an updatable view that exposes only records that are ready to be processed. A view is needed because the database adapter cannot handle a where clause that compares against SYSDATE. (See the sketch after this excerpt.)
    Design logic that determines if a process instance that has faulted should be retried and when the retry should occur. This information will be updated in the database by use of a stored procedure. This can also be done with an update partner link and additional logic in BPEL.
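    A minimal sketch of the view-based workaround (column names follow the TRIP table from the post above; the view name is illustrative):

    -- The adapter polls this view instead of TRIP, so the SYSDATE
    -- comparison lives in the view definition rather than the adapter query.
    CREATE OR REPLACE VIEW expired_trip_v AS
    SELECT id, lpn, start_time, expiry_time, status
    FROM   trip
    WHERE  status = '0'
      AND  expiry_time < SYSDATE;

    Because it is a simple single-table view it remains updatable, so the adapter's logical-delete update of STATUS can still go through the view.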
    mahalo,
    a iii

  • Multiple Database Servers Question

    Hi,
    Please forgive this ignorant question, but can someone tell me how one would go about using multiple database servers?
    Just as there comes a time when one needs more than one web server and has to use a load-balancing solution, what happens when one needs more than one database server?
    How does one go about implementing that solution?
    Is there some way to have two database servers carrying the same information with some kind of load-balancing solution in front of them, or does one place some tables on one database server and other tables on the other database server?
    Thanks in advance,
    Joe

    Microsoft SQL Server allows for clustering of SQL databases: for instance, two servers connected to shared storage with a virtual address shared across the two physical boxes. Database connections are made to the virtual address, which is then handled by the active node. Because a database is ultimately a file (or files) on disk, it can only be attached to one node at a time, so you end up with an active/passive cluster.

  • Current Patch Set for Oracle Database Release 11.2

    Hi
    I am upgrading Oracle 10g to 11gR2 in an AIX 5.3 environment with SAP ECC 6. I have just installed Oracle 11.2.0.1.0 according to the "Database Upgrade Guide - Upgrade to Oracle Database 11g Release 2 (11.2): UNIX". After the database software installation, the upgrade manual says in section 3.4.4 that you need to install the current patch set for Oracle Database 11g R2, referencing SAP Note 1431799.
    I have just read Note 1431799, which says the current patch set for Oracle Database Release 11.2 is not generally available for SAP customers until December 2010; I also cannot find Note 1522330, which is mentioned in 1431799.
    Please let me know how I can get the current patch set to apply on top of Oracle 11.2.0.1.0 (is it on the SAP Service Marketplace?).
    I also want to upgrade my installation to Oracle 11.2.0.2 - please let me know where I can get the patch set to upgrade from 11.2.0.1.0 to 11.2.0.2,
    or
    do I have to live with 11.2.0.1.0 and apply the SAP Bundle Patch (SBP)?

    Abu Al MAmun,
    I understand from your posts that you recently upgraded to 11.2.0.2.
    I am having the same difficulty finding the current patch set for 11.2. Could you please let me know how you worked around it?
    Thanks,
    Siri
    Hello Everyone,
    I did read the post about how to install the current patch set for 11.2.
    But honestly, it pretty much flew over my head.
    I am trying to upgrade our SAP AIX 64-bit test system from Oracle 10.2.0.4 to 11.2.0.2.
    So far I have downloaded all the required software for the upgrade, except for the current 11.2 patch set.
    This is what I have right now:
    1) Oracle 11.2 AIX 64-bit upgrade software - 51038805_part 1 to 51038805_part 7 and 51039800.
    2) Database Patches --> Oracle 11.2.0.2 -->
    All they have here is:
    a. Database RDBMS - SAP_112020_201012_AIX.zip and SAP_112020_201101_AIX.zip
    b. OPatch - OPatch_11201_Generic_v3.zip and mopatch-2_1_6.zip
    c. Database Vault - DV -> Generic - p9656644_112020_Generic.zip
    If I am not wrong, the Database RDBMS files are nothing but the SAP Bundle Patches that have to be installed after installing the current patch set 11.2.0.2 using MOPatch.
    But I did not find the current patch set 11.2.0.2 itself.
    Could someone please explain the process to me in detail? This being my first time, I am finding it a bit hard to catch up with some of the stuff.
    Thank You!
    Siri
