Patching Strategy - poll

We have been hotfix-averse in our 5.3 environment and endeavor to do better in 6. While realizing the burden that testing and regression testing puts on everyone - not to mention the 'Oops, we didn't test for that' PRD incidents that happen from time to time - I was wondering if others could share their patching strategy for Tidal.
I would be interested to know:
Do you actively look at the monthly releases and do you ALWAYS apply the monthly released hotfixes?
If not always, how often do you routinely patch (not including critical hotfixes)?
Does Tidal fall under some compliance guidelines that require you to be updated on patches?
How long does the new hotfix sit in DEV before you bless it for PRD? Do you have a formal regression...or any testing that users sign off on for each hotfix before it can be applied to PRD, or do you just let it sit in DEV for a month or two and apply to PRD if no issues are reported? We have 15 or so teams that use Tidal, so it's a challenge since they use the product differently.
If things go awry in PRD, do you perform a backout, or do you engage support but not back out hotfix?
Do you automate any of your testing (ha!)? I'm really wanting to get some basic testing criteria for patches, in addition to the info from the readme that you care about testing.
Do you bundle your hotfix with the DB and OS patches in one go, or do you deploy them separately?
We hadn't planned on patching our agents when we upgrade to 6 (too many moving parts). Any tips on how you handle agent patching - do you do it in one go or roll out a few at a time? Is it automated?
Thanks again - this forum is really useful for learning how other companies use, manage, and maintain this tool. It is one of our most critical applications, so we really tread lightly on any changes to PRD.

Do you actively look at the monthly releases and do you ALWAYS apply the monthly released hotfixes?
5.3.1: Yes, we review the readme and decide on a case-by-case basis.
We were bitten by the change in quoting behavior.
6.1: We will be applying patches more frequently and applying each one, with a 1-month lag.
If not always, how often do you routinely patch (not including critical hotfixes)?
5.3.1: We do not routinely patch (current prod rev 5.3.1.318).
6.1: 1-month lag (honestly, I feel more comfortable now that they are publishing monthly).
Does Tidal fall under some compliance guidelines that require you to be updated on patches?
Not for our organization; we have business and change control processes but no software patching requirement.
How long does the new hotfix sit in DEV before you bless it for PRD? Do you have a formal regression...or any testing that users sign off on for each hotfix before it can be applied to PRD, or do you just let it sit in DEV for a month or two and apply to PRD if no issues are reported? We have 15 or so teams that use Tidal, so it's a challenge since they use the product differently.
No formal timetable for testing in DEV, but we then also apply to UAT and then to PRD. In general we run for several months in DEV and 1 month in UAT. We time PRD with a Windows patch weekend. We also leverage a vCloud/VMware lab environment and look at the install via a DEMO license... which we have found very useful.
Generally our internal customers are either application support or developers, who can pick up on root cause more easily than regular users.
If things go awry in PRD, do you perform a back out, or do you engage support but not back out hotfix?
It all depends on severity of impact, ability to reach support, and time to apply fix.
We wouldn't be afraid either to back out or to fix and move forward, but would likely back out to the "working" version.
Do you automate any of your testing (ha!)? I'm really wanting to get some basic testing criteria for patches, in addition to the info from the readme that you care about testing.
We have a series of jobs that test some basic functionality, but it could always be expanded and improved. We insert them ad hoc to test, but plan on automating a "test suite" of jobs with more frequent patching - a sketch of the kind of harness we have in mind follows.
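Something like this, where the check names and commands are hypothetical placeholders rather than real Tidal syntax - swap in whatever jobs or commands exercise the functionality your teams rely on:

#!/usr/bin/env python3
# Minimal post-patch smoke-test harness (sketch). Each entry runs a shell
# command that should exercise one piece of scheduler functionality; the
# commands below are hypothetical placeholders, not real Tidal CLI syntax.
import subprocess
import sys

SMOKE_TESTS = [
    ("master reachable", ["ping", "-c", "1", "tidal-master.example.com"]),
    ("agent service up", ["systemctl", "is-active", "tagent"]),
    # add: submit a trivial job and poll it to completion, check that alert
    # actions fire, verify calendar/date logic, etc.
]

def main() -> int:
    failures = 0
    for name, cmd in SMOKE_TESTS:
        result = subprocess.run(cmd, capture_output=True, text=True)
        status = "PASS" if result.returncode == 0 else "FAIL"
        failures += result.returncode != 0
        print(f"[{status}] {name}")
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main())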
Do you bundle your hotfix with the DB and OS patches in one go, or do you deploy them separately?
We try to limit the changes where possible. From an O/S perspective we patch PRD every other month and DEV every month… so we just time it properly.
Depending on the Cisco patch, we might run on a non-PRD O/S patch weekend.
We hadn't planned on patching our agents when we upgrade to 6 (too many moving parts). Any tips on how you handle agent patching - do you do it in one go or roll out a few at a time? Is it automated?
I like your thinking. As long as agents are at a 6.x-compatible level, I would recommend that as a follow-up change after you are confirmed stable.
Typically we roll them all out simultaneously; currently not automated, but we are looking to automate for 6.1.x - a sketch of the rollout shape we're considering follows.
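Roughly this shape: patch a small wave of agents, health-check them, and only then continue. patch_agent and agent_healthy are hypothetical stand-ins for however you push the installer and verify the agent reconnects to the master:

#!/usr/bin/env python3
# Phased agent-patch rollout (sketch; hosts, paths, and scripts are hypothetical).
import subprocess
import sys

WAVE_SIZE = 5  # patch a few agents at a time rather than all in one go

def patch_agent(host: str) -> bool:
    # Hypothetical: run the agent patch installer on the remote host.
    cmd = ["ssh", host, "/opt/tidal/agent/apply_patch.sh"]
    return subprocess.run(cmd).returncode == 0

def agent_healthy(host: str) -> bool:
    # Hypothetical health check: is the agent process back up?
    cmd = ["ssh", host, "pgrep", "-f", "tagent"]
    return subprocess.run(cmd, capture_output=True).returncode == 0

def main(hosts: list[str]) -> int:
    for i in range(0, len(hosts), WAVE_SIZE):
        wave = hosts[i:i + WAVE_SIZE]
        for host in wave:
            if not (patch_agent(host) and agent_healthy(host)):
                print(f"stopping rollout at {host}", file=sys.stderr)
                return 1
        print(f"wave ok: {', '.join(wave)}")
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))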

Similar Messages

  • Patching strategy for Production environment

    Hi all,
    Can anyone share their patch strategy, especially for a production environment? We are planning to patch our databases using PSU patches with no downtime. Our production environment has a standby, so one option is to make the standby the primary. Are there any other options we can use? Also, we are not using Grid Control.
    Thanks

    Hi there,
    From 11gR2 onwards it is possible to patch without downtime using a physical standby database (it converts to a transient logical standby). Please look at the document below. There is a script on Metalink as well which makes this all much easier.
    http://www.oracle.com/technetwork/database/features/availability/maa-wp-11g-transientlogicalrollingu-1-131927.pdf

  • Patch strategy for ECC6 in SAP NW2004s environment

    Hi Experts,
    Our SAP NW2004s landscape consists of ECC6, BI 7.0, EP 7.0, SRM, XI, and Solution Manager systems. We are drafting a patch upgrade strategy for the ECC6 system and have 2 options:
    Option 1: Apply patches only in ECC6 & not in other components.
    Option 2: Patch ECC6 together with other components like SRM, BI, EP etc.
    Option 2 will make things complicated, but for Option 1 there are 2 concerns/questions:
    1. Option 1 will lead to doing UAT twice for business processes that are tightly integrated with ECC6; for example, UAT for data extraction needs to be done after applying patches in both the ECC6 and BI systems. This will mean arranging more user resources.
    2. Due to the tight integration between ECC6 and the other components, applying patches only in ECC6 might leave systems like BI, EP, SRM, XI, etc. at a lower level, and they might not be able to connect to or work properly with ECC6.
    Can somebody share how they manage ECC6 patch upgrades in a big NW2004s landscape with other components present? Are the ECC6 patches done standalone, with systems like EP and BI following their own patching schedules, or are ECC6, BI, and EP recommended to be done together?
    Are there any best-practice recommendations from SAP on this? Can somebody share the blueprint or any strategy document you are using?
    Thanks,
    Abhishek

    Hi Abhishek,
    For the reasons you outline in 2, it is recommended to patch entire business scenarios (i.e., business processes that are integrated across systems) together. Rather than go into a full discussion here, I recommend you see my paper [How To Design a SAP NetWeaver - Based System Landscape |https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/50a9952d-15cc-2a10-84a9-fd9184f35366].
    Best Regards,
    Matt

  • Patching Strategy for CRS and ASM homes

    I'm fairly new to RAC/ASM and haven't performed any patch set upgrades yet. Back in the simple days when I wanted to apply a patch set to a database, say from 10.2.0.4 to 10.2.0.5, I would create a brand new Oracle home ahead of time and apply the patch set to it. I'd name my homes like this:
    /opt/oracle/product/10.2.0.4/db1
    /opt/oracle/product/10.2.0.5/db1
    During the maintenance window I would change /etc/oratab to point the database to the new 10.2.0.5 home (scripted in the sketch after this post) and run the database upgrade scripts. The advantages of this strategy:
    1 - Less risk installing software, as nothing uses the new home yet. If something goes wrong in the install, no big deal: research the problem and try again without being under the stress of a defined maintenance window.
    2 - No need to backup old home for back-out purposes.
    3 - Less time required for database to be down during actual patch window since Oracle Installer does not need to run.
    Now with CRS and ASM, is there a way to pre-stage a new home for those, but not have them "active" to the node until later during the maintenance window?
    For ASM, it seems like it would be possible to treat it the same way as the database and simply update the ASM SID in /etc/oratab
    +ASM1:/opt/oracle/product/10.2.0.5/asm1
    but I'm not totally confident in that as I'm afraid the CRS home may already have references to the ASM home in the cluster registry.
    For CRS, it seems like the home is pretty well hard-wired into the node startup scripts and installing a brand new CRS home will probably disrupt the running CRS home.
    Any thoughts about this?
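    The oratab flip itself is easy to script for the database side. A minimal sketch, assuming the standard SID:ORACLE_HOME:STARTUP_FLAG format of /etc/oratab (the SID and homes are the ones from my example above):

    #!/usr/bin/env python3
    # Point a SID at a pre-staged Oracle home in /etc/oratab (sketch).
    ORATAB = "/etc/oratab"

    def switch_home(sid: str, new_home: str, path: str = ORATAB) -> None:
        lines = []
        with open(path) as f:
            for line in f:
                fields = line.strip().split(":")
                # Rewrite only the matching SID entry; keep comments and blanks as-is.
                if not line.lstrip().startswith("#") and len(fields) >= 3 and fields[0] == sid:
                    line = f"{sid}:{new_home}:{fields[2]}\n"
                lines.append(line)
        with open(path, "w") as f:
            f.writelines(lines)

    if __name__ == "__main__":
        # e.g. move db1 from the 10.2.0.4 home to the pre-staged 10.2.0.5 home
        switch_home("db1", "/opt/oracle/product/10.2.0.5/db1")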

    Hi,
    user5448593 wrote:
    I'm fairly new to RAC/ASM and haven't performed any patch set upgrades yet. Back in the simple days when I wanted to apply a patch set to a database, say from 10.2.0.4 to 10.2.0.5, I would create a brand new Oracle home ahead of time and apply the patch set to it.
    Now with CRS and ASM, is there a way to pre-stage a new home for those, but not have them "active" to the node until later during the maintenance window?
    Although you have not mentioned the version you are actually on, it is a quite up-to-date question and dilemma.
    Starting with 11.2, only "out-of-place" patch set upgrades are supported for Grid Infrastructure.
    >
    For ASM, it seems like it would be possible to treat it the same way as the database and simply update the ASM SID in /etc/oratab
    +ASM1:/opt/oracle/product/10.2.0.5/asm1
    but I'm not totally confident in that as I'm afraid the CRS home may already have references to the ASM home in the cluster registry.
    For CRS, it seems like the home is pretty well hard-wired into the node startup scripts and installing a brand new CRS home will probably disrupt the running CRS home.
    Any thoughts about this?
    As of 11gR2, ASM is part of the Grid Infrastructure; it therefore runs from the same home, and separating them is not recommended (although you can do that).
    By the way, what is your upgrade path? It would be easier to answer your questions if we knew that, as there have been quite a few enhancements and changes in the upgrade/patching process from 10g to 11g (even between 11gR1 and 11gR2).
    Regards,
    Jozsef

  • Problem: installing Recommended Patch Cluster: patch 140900-10 fails

    I don't know if this is the correct location for this thread (admin/moderator, move as required).
    We have a number of Solaris x86 hosts (4 of them) and I'm trying to get them patched and up to date.
    I obtained the latest Recommended Patch Cluster (dated 28Sep2010).
    I installed it on host A yesterday, 2 hours 9 mins, no problems.
    I tried to install it on host B today, and that was just a critical failure.
    Host B failed to install 140900-01 because the non-global zones rejected the patch: dependency 138218-01 wasn't installed in the non-global zones.
    Looking closer, it appears that 138218-01 is for global-zone installation only, so it makes sense that the non-global zones rejected it.
    However, both hosts were in single-user mode when installing the cluster, and both hosts have non-global zones.
    140900-01 installed on host A with log:
    | This appears to be an attempt to install the same architecture and
    | version of a package which is already installed. This installation
    | will attempt to overwrite this package.
    |
    | Dryrun complete.
    | No changes were made to the system.
    |
    | This appears to be an attempt to install the same architecture and
    | version of a package which is already installed. This installation
    | will attempt to overwrite this package.
    |
    |
    | Installation of <SUNWcsu> was successful.
    140900-1 failed on host B with this log:
    | Validating patches...
    |
    | Loading patches installed on the system...
    |
    | Done!
    |
    | Loading patches requested to install.
    |
    | Done!
    |
    | Checking patches that you specified for installation.
    |
    | Done!
    |
    |
    | Approved patches will be installed in this order:
    |
    | 140900-01
    |
    |
    | Preparing checklist for non-global zone check...
    |
    | Checking non-global zones...
    |
    | Checking non-global zones...
    |
    | Restoring state for non-global zone X...
    | Restoring state for non-global zone Y...
    | Restoring state for non-global zone Z...
    |
    | This patch passes the non-global zone check.
    |
    | None.
    |
    |
    | Summary for zones:
    |
    | Zone X
    |
    | Rejected patches:
    | 140900-01
    | Patches that passed the dependency check:
    | None.
    |
    |
    | Zone Y
    |
    | Rejected patches:
    | 140900-01
    | Patches that passed the dependency check:
    | None.
    |
    |
    | Zone Z
    |
    | Rejected patches:
    | 140900-01
    | Patches that passed the dependency check:
    | None.
    |
    | Fatal failure occurred - impossible to install any patches.
    | X: For patch 140900-01, required patch 138218-01 does not exist.
    | Y: For patch 140900-01, required patch 138218-01 does not exist.
    | Z: For patch 140900-01, required patch 138218-01 does not exist.
    | Fatal failure occurred - impossible to install any patches.
    | ----
    | patchadd exit code : 1
    | application of 140900-01 failed : unhandled subprocess exit status '1' (exit 1 branch)
    | finish time : 2010.10.02 09:47:18
    | FINISHED : application of 140900-01
    It looks as if the patch strategy is completely different for the two hosts; both are Solaris x86 (A is a V40z; B is a V20z).
    Using "-G" with patchadd, it complains that the patch is for global zones only and stops (i.e., it's almost as if it thinks installation is being attempted in a non-global zone).
    Without "-G", same behavior as the patch cluster install attempt (no surprise there).
    I have inherited these machines; I am told each of them was upgraded to Solaris 10u6 in June 2009.
    Other than the 4-5 months it took Oracle to get our update entitlement to us, we've had no problems.
    I'm at a loss here; I don't see anyone else having this type of issue.
    Can someone point the way...?
    What am I missing...?
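    Not sure it helps, but one way to see how far apart the zones really are is to diff the patch lists the global and non-global zones report. A rough sketch using the stock Solaris 10 zoneadm, zlogin, and showrev -p commands (run as root in the global zone):

    #!/usr/bin/env python3
    # Compare installed patches between the global zone and each non-global zone (sketch).
    import subprocess

    def patches(prefix=()):
        # `showrev -p` prints one "Patch: <id> ..." line per installed patch.
        out = subprocess.run([*prefix, "showrev", "-p"],
                             capture_output=True, text=True).stdout
        return {line.split()[1] for line in out.splitlines() if line.startswith("Patch:")}

    global_patches = patches()
    zones = subprocess.run(["zoneadm", "list"], capture_output=True, text=True).stdout.split()
    for zone in (z for z in zones if z != "global"):
        missing = sorted(global_patches - patches(("zlogin", zone)))
        print(f"{zone}: {len(missing)} patches installed globally but not in the zone")
        if "138218-01" in missing:
            print(f"  -> 138218-01 is missing in {zone}, matching the patchadd error")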

    Well, I guess what you see isn't what you get.
    Guess I'm used to the fact USENET just left things formatted the way they were :-O

  • Oracle 10.2.0.4 Patching Process - CPU vs Interim Patches

    Hi everyone,
    We are getting ready to upgrade our AIX 5.2 64-bit Oracle 10.2.0.2 systems to Oracle 10.2.0.4.
    I noted that there was also a single CPU patch available (CPUOct2008_p7375644_10204_AIX5L.zip), as well as a large list of interim patches for 10.2.0.4.
    I was wondering what other people do regarding their patching strategy for 10.2.0.4.
    e.g., after you complete your 10.2.0.4 upgrade, do you:
    1. Apply the CPU patch first and then all the interim patches
    2. Apply all the interim patches first and then the CPU patch
    3. Apply the CPU patch only
    4. Apply the interim patches only
    5. It's all too hard, so I don't apply any CPU or interim patches ... I hope for the best.
    6. Migrate to MaxDB ... it's much easier to patch.
    Interested in your feedback.
    Cheers
    Shaun
    Edited by: Shaun on Nov 28, 2008 3:34 AM

    Hi Fidel,
    Fidel Vales wrote:
    Oracle changed the CPU patches in 10.2.0.4.
    Now they are more "granular": each CPU is divided into "molecules",
    so that it is possible to install only one "part" of the CPU if there is a conflict.
    I was aware of Oracle's new approach of using the molecule patch concept for 10.2.0.4. I still feel uneasy with it given their past track record of patch conflicts.
    For that reason, I'd install all the patches first and then the CPU.
    Have you done this? Has anyone done this? I wouldn't mind hearing from someone who has.
    Regarding MaxDB, sorry I never used it
    You're missing out on something wonderful. Great database.
    How's this ... I'll apply all of the interim patches, and then the CPU patch, on one of our project boxes early next week, and report back if there were any conflicts or issues.
    Cheers
    Shaun
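    Before that run, it may be worth letting OPatch flag conflicts up front via its prereq check (opatch prereq CheckConflictAgainstOHWithDetail). A rough wrapper sketch - the ./patches staging layout is a made-up example, and it assumes opatch exits non-zero when the check fails:

    #!/usr/bin/env python3
    # Run OPatch's conflict check over every staged interim patch (sketch).
    import os
    import subprocess
    import sys

    ORACLE_HOME = os.environ["ORACLE_HOME"]
    OPATCH = os.path.join(ORACLE_HOME, "OPatch", "opatch")

    def conflict_free(patch_dir: str) -> bool:
        # Assumption: opatch returns non-zero when the prereq check fails.
        rc = subprocess.run([OPATCH, "prereq", "CheckConflictAgainstOHWithDetail",
                             "-ph", patch_dir]).returncode
        print(f"{patch_dir}: {'ok' if rc == 0 else 'CONFLICT or error'}")
        return rc == 0

    if __name__ == "__main__":
        # Hypothetical layout: each interim patch unzipped under ./patches/<number>.
        staged = [os.path.join("patches", d) for d in sorted(os.listdir("patches"))]
        results = [conflict_free(d) for d in staged]
        sys.exit(0 if all(results) else 1)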

  • 12.0.6/10g Upgrade Strategy

    Hi Steven,
    We have a client on 12.0.6/10g and are seriously considering an upgrade now. Can you please help with the following questions?
    - What are the upcoming patch set levels that we need to be aware of?
    - R12 patching strategy (12.1.1, 12.1.2, etc.): what is the most stable version that Oracle recommends as of today? Any specific prerequisites for these patch set levels?
    - Oracle's recommended timelines for testing and implementing patch set levels: how does one approach this topic?
    - Oracle's internal patching strategy: are we already on 12.1.2?
    - When do we usually recommend customers move to a particular patch set, based on supportability and any other considerations?
    - Are beta versions of major patch sets rolled out to customers, and how are they typically supported?
    - What is the high-level roadmap for 12.1.3 and Fusion? Will 12.1.3 be the last R12 version before Fusion?
    - When is 11gR2 expected to be certified on 12.1.2?
    We need to get back to the client this week - appreciate your help with this.
    Thanks & Regards
    Kumar

    Kumar,
    Have you checked the latest post on Steven Chan's blog?
    Blog on hiatus
    http://blogs.oracle.com/stevenChan/2010/02/blog_on_hiatus.html
    If you would like to hear from us, please let us know. Or log an SR as mentioned in the link.
    Regards,
    Hussein

  • Oracle istore

    Is iStore part of Oracle 11i, or where can we download it?
    How do you integrate iStore with Oracle 11i?

    Oracle iStore is part of Oracle Sales and integrates seamlessly with other Sales Applications, including Oracle Sales and Oracle Partner Management.
    Oracle iStore
    http://www.oracle.com/applications/sales/istore.html
    Note: 205103.1 - Oracle iStore General Setup Test
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=205103.1
    Note: 211165.1 - 11i iStore Patching Strategy and Patch List
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=211165.1
    Note: 294033.1 - Oracle iStore 11.5.10 Documentation
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=294033.1

  • Upgrading LiveCache Version

    We have been asked by SAP to patch our version of LiveCache/Maxdb due to a bug in our current version.
    Are we able to just patch the Livecache database without updating the SCM components in the ABAP stack?
    Or are they linked?
    Thanks
    Matt

    Hello,
    The LCA/liveCache for SCM 7.0 is downward compatible; please review the SAP note:
    1278897 - Patch strategy LC/LCA as of SCM 7.0
    You can update the LCA/liveCache for SCM 7.0 to a higher version/build released for SCM 7.0 without updating the SCM SP stack.
    Please review the "Matrix of released LCA builds for SAP SCM 7.0" at SAP liveCache technology -> SAP SCM 7.0
    Best regards, Natalia Khlopina

  • How can I update to latest Solaris NTP 4  package?

    I am facing a difficult challenge with this.
    There is a requirement to run NTP version 4; I understand the package involved is SUNWntp4S. But with all the patch bundles installed, we have not been able to upgrade the current version, i.e., v3.
    I tried to see if we could get an application to serve the version 4 protocol, but there is a concern about how to implement the free version on a production system.
    Are there any other methods that you can suggest?
    Thanks in advance.

    You may want to review the patch cluster strategy document on OTN:
    http://www.oracle.com/technetwork/articles/servers-storage-admin/solaris-patching-strategy-257476.pdf
    The main information to be aware of is mentioned on page 10:
    Because the Oracle Solaris Update Patch Bundles don't contain new packages introduced in an
    Oracle Solaris Update Release, new functionality that depends upon such new packages is not
    available in the Oracle Solaris Update Patch Bundles.
    This means that you must upgrade your system to get the new NTP4 functionality. Patches can only be applied to existing packages. New packages must be installed by an upgrade (LiveUpgrade is recommended).
    -- Alan

  • Auth Failure Traps

    After I changed the SNMP strings on our network devices, I see a list of devices with Auth Failure Traps on the syslog server.
    I've checked the SNMP credential strings in CiscoWorks for each device and they're correct.
    This is the error message on my syslog server:
    mm-dd-yyyy    11:23:16    Local0.Info    10.1.1.1    10.1.1.2.150 4 0  Authentication failure 10.1.1.254(CiscoWorks) 1 10.1.1.254(CiscoWorks)
    This message wasn't there before I renewed the SNMP community string. After I changed the SNMP string on my routers and switches, I see a lot of traps on my syslog server.
    How can I stop this?
    Thank you for your help
    Thanks

    Hi Joe,
    The root cause of the authentication failure messages was dfmserver. When I stopped it, the messages disappeared.
    Process: DfmServer
    Path: C:\PROGRA~1\CSCOpx\objects\smarts\bin\CS_sm_server.exe
    Flags:
    Startup: Started automatically at boot.
    Dependencies: DfmBroker
    Before applying the patch, when I shut down dfmserver, I could still see the polling. After applying the patch, the polling stopped.
    There are only 2 patches for DFM. I have also applied fix CSCta56151.
    Patches installed:
    Patch Name      Version   Installed Date
    CSCtb87449-0    0         02 Mar 2010, 11:28:07 WST
    CSCta56151-0    0         04 Mar 2010, 14:18:46 WST
    Any more tips Joe?

  • Rollback in database adapter with delete polling strategy

    Hi All,
    We have designed a database adapter with the "Delete the Rows That Were Read" after-read strategy, with auto-retry attempts set to 5. In the BPEL process where we receive the DB records, we throw a rollback fault in case of any fault.
    The database adapter polling is retried 5 times in case of faults, but the data is deleted from the tables after the 5 retries. Is this the expected behavior of the DB adapter? Doesn't the rollback fault roll back the complete transaction and leave the failed data in the tables?
    Can anyone provide more information on what this polling strategy does once the auto-retries are exhausted?
    Thanks.
    -Pavan

    You need to include your BPEL process in the same transaction as the DB adapter; otherwise the adapter's delete commits independently of your rollback fault. Use the following properties on the BPEL component ("required" joins the caller's transaction, whereas "requiresNew" starts a separate one):
    <property name="bpel.config.transaction" type="xs:string" many="false">required</property>
    <property name="bpel.config.oneWayDeliveryPolicy" type="xs:string" many="false">sync</property>
    Make sure the connection factory that you are using in the DB adapter is XA-transaction enabled.

  • Database Adapter using Logical Delete Polling Strategy not updating field

    I have an ESB database adapter defined against a table. The adapter is set to use the Logical Delete polling strategy, and the table has an extra column defined as the "delete" field. When I register the adapter to the ESB and add a record to the table, the adapter reads the contents of the record as expected. However, 15 seconds later (the polling interval) it reads it again, and again 15 seconds after that, ad infinitum.
    I defined the logical delete field as a single-character field and set the Read value to "T" (a record is normally inserted with that field null). Results were as outlined in the previous paragraph. I then redefined the field as a number and set the Read value to "1" (with records normally inserted with that field set to "0"). Same results.
    Apparently the update of the logical delete field to the Read value is not occurring. My question (obviously) is: why?
    Thanks for your time.
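    For reference, the behavior the Logical Delete strategy should produce boils down to the loop below - a self-contained sketch (Python/sqlite3) with a hypothetical LOG table mirroring the setup above (PROCESSED as the mark-read column, 'T' as the Read value). The key point is that the mark-read UPDATE lands in the same transaction as the read, so the next polling cycle skips the row:

    #!/usr/bin/env python3
    # Logical-delete polling pattern (sketch; hypothetical LOG table).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, payload TEXT, processed CHAR(1))")
    conn.execute("INSERT INTO log (payload, processed) VALUES ('hello', NULL)")
    conn.commit()

    def poll_once(conn):
        # Read only unprocessed rows (PROCESSED still NULL)...
        rows = conn.execute(
            "SELECT id, payload FROM log WHERE processed IS NULL ORDER BY id").fetchall()
        for row_id, payload in rows:
            print("processing:", payload)
            # ...and mark each one read in the same transaction, so the next
            # polling interval does not pick it up again.
            conn.execute("UPDATE log SET processed = 'T' WHERE id = ?", (row_id,))
        conn.commit()
        return rows

    poll_once(conn)               # reads and marks the row
    assert poll_once(conn) == []  # a second poll finds nothing to re-read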

    Here is the adapter WSDL:
    <?xml version="1.0" encoding="UTF-8"?>
    <definitions name="Poll_PM_LOG"
                 targetNamespace="http://xmlns.oracle.com/pcbpel/adapter/db/Poll_PM_LOG/"
                 xmlns:tns="http://xmlns.oracle.com/pcbpel/adapter/db/Poll_PM_LOG/"
                 xmlns:plt="http://schemas.xmlsoap.org/ws/2003/05/partner-link/"
                 xmlns:jca="http://xmlns.oracle.com/pcbpel/wsdl/jca/"
                 xmlns:pc="http://xmlns.oracle.com/pcbpel/"
                 xmlns:top="http://xmlns.oracle.com/pcbpel/adapter/db/top/PollPMLOG"
                 xmlns="http://schemas.xmlsoap.org/wsdl/">
      <types>
        <schema xmlns="http://www.w3.org/2001/XMLSchema">
          <import namespace="http://xmlns.oracle.com/pcbpel/adapter/db/top/PollPMLOG"
                  schemaLocation="PollPMLOG_table.xsd"/>
        </schema>
      </types>
      <message name="LogCollection_msg">
        <part name="LogCollection" element="top:LogCollection"/>
      </message>
      <portType name="Poll_PM_LOG_ptt">
        <operation name="receive">
          <input message="tns:LogCollection_msg"/>
        </operation>
      </portType>
      <binding name="Poll_PM_LOG_binding" type="tns:Poll_PM_LOG_ptt">
        <pc:inbound_binding/>
        <operation name="receive">
          <jca:operation
              ActivationSpec="oracle.tip.adapter.db.DBActivationSpec"
              DescriptorName="PollPMLOG.Log"
              QueryName="Poll_PM_LOG"
              PollingStrategyName="LogicalDeletePollingStrategy"
              MarkReadFieldName="PROCESSED"
              MarkReadValue="T"
              SequencingFieldName="ID"
              MaxRaiseSize="1"
              MaxTransactionSize="unlimited"
              PollingInterval="15"
              NumberOfThreads="1"
              UseBatchDestroy="false"
              ReturnSingleResultSet="false"
              MappingsMetaDataURL="PollPMLOG_toplink_mappings.xml" />
          <input/>
        </operation>
      </binding>
      <service name="Poll_PM_LOG">
        <port name="Poll_PM_LOG_pt" binding="tns:Poll_PM_LOG_binding">
          <jca:address location="eis/DB/PM"
                       UIConnectionName="PM"
                       ManagedConnectionFactory="oracle.tip.adapter.db.DBManagedConnectionFactory"
                       />
        </port>
      </service>
      <plt:partnerLinkType name="Poll_PM_LOG_plt">
        <plt:role name="Poll_PM_LOG_role">
          <plt:portType name="tns:Poll_PM_LOG_ptt"/>
        </plt:role>
      </plt:partnerLinkType>
    </definitions>
    The field PROCESSED is defined as CHAR(1), and is normally null when a record is inserted. The value of the field is to be set to 'T' when the record is read by the polling adapter.

  • DB Adapter Locking Database Rows in Distributed Delete Polling Strategy

    I am stuck with an issue. To explain it in simple steps:
    I am creating a database polling adapter with the Distributed Delete polling strategy in OSB, for running in a clustered environment.
    We use custom SQL so that the records are not deleted after they are fetched; only a Status column is updated.
    The polling query and delete SQL are as follows:
    <query name="ReqJCAAdapterSelect" xsi:type="read-all-query">
      <criteria operator="equal" xsi:type="relation-expression">
        <left name="Status" xsi:type="query-key-expression">
          <base xsi:type="base-expression"/>
        </left>
        <right xsi:type="constant-expression">
          <value>READY</value>
        </right>
      </criteria>
      <reference-class>ReqJCAAdapter.ItemTbl</reference-class>
      <refresh>true</refresh>
      <remote-refresh>true</remote-refresh>
      <lock-mode>lock-no-wait</lock-mode>
      <container xsi:type="list-container-policy">
        <collection-type>java.util.Vector</collection-type>
      </container>
    </query>
    <delete-query>
      <call xsi:type="sql-call">
        <sql>update ITEM_TBL
             set STATUS = 'IN_PROCESS'
             where ID = #ID</sql>
      </call>
    </delete-query>
    In case of any error, the Service Error handler updates the Status to ERROR.
    Now, the problem I am facing: in the request pipeline, if we want to do any update on the same record, we find that the row is locked and the update is not allowed, so the process cannot proceed.
    Likewise, if any error occurs in the request pipeline, the Service Error handler is supposed to update the Status to ERROR, but the same thing happens and the process cannot proceed.
    In the response pipeline, however, we can successfully update the status of the same record.
    We have tried both XA and non-XA data sources, but no luck.
    Any help with this is appreciated.
    Regards,
    Dilip


  • Duplicate processing by DBAdapter when using Distributed Polling with Logical Delete Strategy

    We have DBAdapter-based polling services in OSB running across two active-active clusters (20 managed servers in total across the 2 clusters),
    listening to the same database table (both clusters read from the same source DB). We want to ensure distributed polling without duplication,
    hence in the DBAdapter we have selected the Distributed Polling option, meaning we are using "Select For Update Skip Locked".
    But we see that sometimes the same rows are processed by two different nodes and transactions are processed twice.
    How do we ensure that only one managed server processes a particular row using SELECT FOR UPDATE? We do not want to use the MarkReservedValue option that was preferred in older versions of the DBAdapter.
    We are using the following values in the DBAdapter configuration; the JDev project for the DBAdapter and the OSB proxy using it are attached:
    LogicalDeletePolling Strategy
    MarkReadValue = Processed
    MarkUnreadValue = Initiate
    MarkReservedValue = <empty as we are using Skip Locking>
    PollingFrequency = 1 second
    maxRaiseSize = 1
    MaxTransactionSize = 10
    DistributionPolling = checked   (adds lock-n-wait in properties file and changes the SQL to SELECT FOR UPDATE SKIP LOCKED)
    Thanks and Regards
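    For what Skip Locked is supposed to guarantee, the pattern boils down to the sketch below (python-oracledb driver; the connection details are placeholders, and job_table/job_status mirror the reply that follows). Each node locks the rows it reads and marks them read before committing; competing nodes skip locked rows rather than blocking or re-reading, so duplicates should only appear if the mark-read update commits separately from the read:

    #!/usr/bin/env python3
    # Competing-consumer polling with FOR UPDATE SKIP LOCKED (sketch).
    import oracledb

    # Hypothetical connection details.
    conn = oracledb.connect(user="scott", password="tiger", dsn="dbhost/orclpdb1")

    def poll_batch() -> int:
        cur = conn.cursor()
        # Lock the unprocessed rows this node will handle; rows already locked
        # by another node are skipped instead of blocking or being re-read.
        cur.execute("""SELECT id FROM job_table
                        WHERE job_status = 'Initiate'
                          FOR UPDATE SKIP LOCKED""")
        ids = [row[0] for row in cur.fetchall()]
        for row_id in ids:
            # Mark read inside the same transaction that holds the lock.
            cur.execute("UPDATE job_table SET job_status = 'Processed' WHERE id = :id",
                        id=row_id)
        conn.commit()  # releases the locks; the rows are now marked Processed
        return len(ids)

    print("processed", poll_batch(), "rows")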

    Hi All,
    Actually, I'm also facing the same problem.
    Steps I followed:
    1) Created a job_table in the database:
    create table job_table(id, job_name, job_desc, job_status)
    2) Created a BPEL process to test the inbound distributed polling.
    3) Configured the DBAdapter for polling:
    a) update a field in the job_table with logical delete;
    b) select the field name from the drop-down;
    c) change the Read value to "InProgress" and the Unread value to "Ready";
    d) don't change the value for the Reserved value;
    e) select the check box for "distributed polling";
    f) the query will be appended with FOR UPDATE NOWAIT;
    g) click Next and then Finish.
    4) Then I followed the steps below.
    To enable pessimistic locking, run through the wizard once to create an inbound polling query. In the Applications Navigator window, expand Application Sources, then TopLink, and click TopLink Mappings. In the Structure window, click the table name. In Diagram View, click the following tabs: TopLink Mappings, Queries, Named Queries, Options; then the Advanced… button, and then Pessimistic Locking and Acquire Locks. You will see the message "Set Refresh Identity Map Results?" (if a query uses pessimistic locking, it must refresh the identity map results). Click OK when you see the message "Would you like us to set Refresh Identity Map Results and Refresh Remote Identity Map Results to true?" Run the wizard again to regenerate everything. In the new toplink_mappings.xml file, you should see something like this for the query: <lock-mode>1</lock-mode>.
    5) lock-mode is not changed to 1 in toplink_mappings.xml.
    Can we edit toplink_mappings.xml manually?
    If yes, what values do I need to change in toplink_mappings.xml so that it will not pick the same record multiple times in a clustered environment?
    Please help me out; this is urgent.
    Thanking you in advance.
