Queue monitoring, best practice

Hi,
I have had a go at reading the Programming guides supplied on the website; however, I am still unsure of the best approach...
          Situation:
          Server1, Weblogic Server 8.1 with a JMS Queue.
          Server2, Tomcat 6.0.
Server3, IIS hosting a web service.
Server2 has a simple daemon process that is intended to monitor a JMS queue on Server1 and forward (consume) any messages it receives to Server3.
          The daemon is basically constructed as follows:
INIT:
  Initialise JNDI context
  Lookup TopicConnectionFactory
  Create TopicConnection
  Set connection client ID
  Create TopicSession
  Lookup Topic
  Create TopicSubscriber (DurableSubscriber)
  Start connection
MAIN:
  Loop:
    Receive TextMessage from TopicSubscriber
    Forward consumed message to web service
    goto Loop // This is an endless loop
As you can see, I initialise all the connection details once at start-up and then hope to use them forever.
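In Java the daemon boils down to something like this (the JNDI names and provider URL are placeholders, and error handling is omitted):

    import java.util.Hashtable;
    import javax.jms.*;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    public class QueueMonitorDaemon {
        public static void main(String[] args) throws Exception {
            // INIT: initialise JNDI context
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
            env.put(Context.PROVIDER_URL, "t3://server1:7001");
            InitialContext ctx = new InitialContext(env);

            TopicConnectionFactory tcf = (TopicConnectionFactory) ctx.lookup("jms/MyTCF");
            TopicConnection con = tcf.createTopicConnection();
            con.setClientID("my_cid"); // must be unique across connections
            TopicSession session = con.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = (Topic) ctx.lookup("jms/MyTopic");
            TopicSubscriber sub = session.createDurableSubscriber(topic, "my_sub_id");
            con.start();

            // MAIN: endless receive-and-forward loop
            while (true) {
                TextMessage msg = (TextMessage) sub.receive(); // blocks until a message arrives
                forwardToWebService(msg.getText());            // POST the payload to Server3
            }
        }

        private static void forwardToWebService(String payload) {
            // HTTP call to the web service on Server3 goes here
        }
    }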
          Should I be closing the session (or connection) after each consumed message?
Will this be able to consume messages that are already in the queue, or will it only consume messages that are published while it is connected?
          This all seems to work for a while, but I occasionally get the following exception (which is the reason why I'm asking the above questions):
          weblogic.jms.common.InvalidClientIDException: Client id, my_cid, is in use
               at weblogic.rjvm.BasicOutboundRequest.sendReceive(BasicOutboundRequest.java:108)
               at weblogic.rmi.internal.BasicRemoteRef.invoke(BasicRemoteRef.java:137)
               at weblogic.jms.dispatcher.DispatcherImpl_813_WLStub.dispatchSyncFuture(Unknown Source)
               at weblogic.jms.dispatcher.DispatcherWrapperState.dispatchSync(DispatcherWrapperState.java:345)
               at weblogic.jms.client.JMSConnection.setClientID(JMSConnection.java:513)
which (I think) causes the following exception:
          weblogic.jms.common.JMSException: Connection clientID is null
               at weblogic.jms.client.JMSSession.createConsumer(JMSSession.java:1635)
               at weblogic.jms.client.JMSSession.createDurableSubscriber(JMSSession.java:1461)
               at weblogic.jms.client.JMSSession.createDurableSubscriber(JMSSession.java:1434)
          Thanks in advance for any and all responses to my questions.

-- Should I be closing the session (or connection) after each consumed message?
          No. Connection/session creation has a relatively high overhead.
-- Will this be able to consume messages that are already in the queue, or will it only consume messages that are published while it is connected?
I assume you're referring to topics (pub/sub), not queues. If you want to receive messages that were sent while the subscriber was disconnected, then you need to use "durable subscriptions" - a basic, standard concept in JMS. Furthermore, if you want the messages to survive even in the event of a server crash, the publisher must specify "PERSISTENT" as the delivery mode.
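On the publishing side that is a single setting on the publisher. A fragment (topic/session setup as usual; names are placeholders):

    // Publisher side: PERSISTENT delivery makes the server write each message
    // to its store, so messages survive a server crash (javax.jms.DeliveryMode).
    TopicPublisher publisher = session.createPublisher(topic);
    publisher.setDeliveryMode(DeliveryMode.PERSISTENT);
    publisher.publish(session.createTextMessage("hello"));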
          -- This all seems to work for a while, but I occasionally get the following exception (which is the reason why I'm asking the above questions):
As part of standard JMS for "durable subscribers", clients must specify a unique client-id when creating a connection. They must also specify a subscriber-id when creating the subscriber. The (client-id, subscriber-id) tuple uniquely identifies a durable topic subscription. If a connection using the specified client-id already exists when a new connection is created, the new connection create will fail (must fail) with a duplicate client-id exception.
The second exception is not actually a null-pointer exception; it follows directly from the first: setClientID failed, so the connection still has no client ID when createDurableSubscriber is called. As for the duplicate client-id itself, the usual suspects are a second instance of your daemon running somewhere, or the daemon reconnecting after an abnormal disconnect while the server still considers the old connection (and its client-id) alive.
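If the stale-connection case is what is biting you, one common workaround is to retry setClientID with a short delay until the server releases the old id. A rough sketch (the retry count and delay are arbitrary; tune for your environment):

    // Retry setClientID until the server forgets the stale connection's id.
    static TopicConnection connectWithRetry(TopicConnectionFactory tcf, String clientId)
            throws Exception {
        for (int attempt = 1; attempt <= 10; attempt++) {
            TopicConnection con = tcf.createTopicConnection();
            try {
                con.setClientID(clientId);
                return con;                  // success
            } catch (javax.jms.InvalidClientIDException dup) {
                con.close();                 // discard this attempt
                Thread.sleep(5000L);         // give the server time to clean up
            }
        }
        throw new Exception("Client id still in use after 10 attempts: " + clientId);
    }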
          Hope this helped,
          Tom

Similar Messages

  • BizTalk monitoring best practice(s)

I am looking for information on best practices to monitor a BizTalk environment (ideally using Tivoli monitoring tools). Specifically, I am looking for insight into what should be monitored and how one analyzes the performance profile. Thanks

While setting up monitoring agents/products for BizTalk Server (or for any server/application, for that matter), there are two ways to start:
If available, import/install application-specific monitoring packages, i.e. prebuilt monitoring rules, alerts and actions specific to the application.
Or create the rules, alerts and actions from scratch for the application.
For monitoring products like SCOM, management packs for BizTalk Server are available as prebuilt, ready-made packages. For a non-Microsoft product like Tivoli, check with
the vendor/IBM for any such prebuilt monitoring packages for BizTalk. If one is available, purchase it and install it in Tivoli. This is the best way to start, instead of spending time and resources building the rules, alerts and actions from scratch.
If a prebuilt monitoring package is not available, start by creating rules to monitor any errors or warnings in the event logs of the BizTalk and SQL servers. Gradually,
you can update and add more rules based on your needs.
Regarding analysing the performance profile, most monitoring products nowadays come with prebuilt alerts for monitoring server performance, CPU utilization,
etc. I'm sure a renowned product like Tivoli will have prebuilt alerts for monitoring server performance; these can be configured to monitor BizTalk's performance as well. Monitoring event log entries will also pick up any performance-related issues.
Moreover, Tivoli has a detailed user guide for setting up alerts for BizTalk Server. Check this
document here.
The best-practice links provided by MaheshKumar should also help you.
The key point to remember is that no monitoring product is perfect; you can't create foolproof monitoring alerts and actions on day one. It will mature over time in your
environment.
    If this answers your question please mark it accordingly. If this post is helpful, please vote as helpful.

  • Grid Control and SOA suite monitoring best practice

    Hi there,
    I’m trying to monitor a SOA implementation on Grid Control.
    Are there some best practices about it?
    Thanks,     
    Nisti

    If they use it to access and monitor the database without making any other changes, then it should be fine. But if they start scheduling stuff like oradba mentioned above, then that is where they will clash.
    You do not want a situation where different jobs are running on the same database from different setups by different team (cron, dbcontrol, dbms_job, grid control).
Just remember there will be additional resource usage on the database/server with both running, and the Grid Control repository cannot be in the same database as the DB Console repository.

  • Diagnostic Monitoring Best Practices

    Are there any general best practices for Azure Diagnostics?
    Does enabling all diagnostic monitoring and performance counters have an impact on the performance of the application?
    Does the sample rate on the performance counter have an impact on the performance?
    Does the transfer period have an impact on performance?
    Does the buffer size have an impact on performance, or is it just a matter of how much data you want to retain and store?
    I am looking for a good trade off where diagnostic monitoring does not impact performance while also logging as much information as possible.

    Hi,
Please have a look at the link below, which covers commonly reported issues while leveraging Windows Azure Diagnostics configurations, along with best practices.
    http://blogs.msdn.com/b/cie/archive/2013/04/16/commonly-reported-issues-while-leveraging-windows-azure-diagnostic-configurations-and-best-practices.aspx
    More information :
    http://msdn.microsoft.com/en-us/library/azure/hh771389.aspx
    http://blogs.msdn.com/b/davidhardin/archive/2011/02/26/windows-azure-diagnostics-series.aspx
    Hope this helps.
    Regards,
    Mekh.

  • Application Availability Monitoring -- Best Practice?

What are you using to monitor application availability? Things like making sure that Essbase, Planning, FR, Workspace, FDM, and HFM are up and responding to requests in a timely manner.
    Do Hyperion Management Packs exist for MS System Center Operations Manager?
Someone recommended Oracle Enterprise Manager, but I don't see any Hyperion functionality in that tool, only Oracle relational database stuff.
    Custom development is an option, but I hate reinventing wheels.
    Thx, Jeff

I'm headed down a semi-custom path: using SolarWinds ipMonitor as the back-end alerting tool, which calls custom scripts to check the health of the software.
    Essbase is easy, I'm using a MaxL script.
What about Planning, HFM, and FR? Does anyone have a way to script a health check of these applications? I was thinking about making calls to the APS, which can connect to all three of those apps.
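For the web-facing components, even a timed HTTP GET against each application's logon page catches most outages. A hypothetical sketch (the URLs and ports are placeholders for your environment):

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class HealthCheck {

        // Returns true if the page answers 200 within the timeout.
        static boolean ping(String url, int timeoutMs) {
            try {
                HttpURLConnection c = (HttpURLConnection) new URL(url).openConnection();
                c.setConnectTimeout(timeoutMs);
                c.setReadTimeout(timeoutMs);
                return c.getResponseCode() == 200;
            } catch (Exception e) {
                return false; // refused, timed out, DNS failure, etc.
            }
        }

        public static void main(String[] args) {
            String[] urls = {
                "http://wksp01:19000/workspace/index.jsp",      // Workspace (placeholder)
                "http://plan01:8300/HyperionPlanning/LogOn.jsp" // Planning (placeholder)
            };
            int exit = 0;
            for (String u : urls) {
                boolean ok = ping(u, 5000);
                System.out.println(u + " -> " + (ok ? "OK" : "ALERT"));
                if (!ok) exit = 1;
            }
            System.exit(exit); // non-zero exit lets ipMonitor raise the alert
        }
    }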

  • Delivery queue monitoring

    Hello,
Is there any way to monitor whether there are unsent emails in the delivery queue?
Or a way to monitor the number of undelivered emails still in the delivery queue?
I have read that this is impossible to monitor directly, but that it may be possible using scripts.
Has anyone done this already?
    It would be very helpful to me.
    BR,
    Ilir

    Please find the following KB articles to assist you in external monitoring:
    Article #1339: SNMP best practices
    Link: http://tools.cisco.com/squish/9CF91
    Article #57: How do I monitor the health of my IronPort appliance?
    Link: http://tools.cisco.com/squish/92502
    Article #744: Where can I obtain the SNMP MIB and SMI files for Cisco Email and WebSecurity Appliances?
    Link: http://tools.cisco.com/squish/49E5e
    For full information regarding XML/SNMP monitoring:
XML, which you asked for more information on, is used to monitor the statistics of email processed through the appliance.
    For full documentation regarding XML – please see the Daily Management Guide (page 315).
SNMP can be used to monitor overall system health, including the hardware.
    For full documentation regarding SNMP monitoring – please see the Daily Management Guide (page 306).
The Daily Management Guide can be found at the following link; downloading it from there is recommended due to its size:
    http://www.cisco.com/en/US/products/ps10154/products_user_guide_list.html
If you require further specifics on monitoring and thresholds, I would suggest opening a dialog with your Sales/Account team; they have tools in place that can assist with threshold monitoring and best practices, or they can put you in touch with Advanced Services to help implement them.
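As a concrete starting point, here is a hypothetical sketch of polling one value over SNMP from Java using the open-source SNMP4J library. The OID below is a placeholder - take the real queue OIDs from the AsyncOS MIB files referenced above:

    import org.snmp4j.CommunityTarget;
    import org.snmp4j.PDU;
    import org.snmp4j.Snmp;
    import org.snmp4j.event.ResponseEvent;
    import org.snmp4j.mp.SnmpConstants;
    import org.snmp4j.smi.*;
    import org.snmp4j.transport.DefaultUdpTransportMapping;

    public class QueuePoll {
        public static void main(String[] args) throws Exception {
            Snmp snmp = new Snmp(new DefaultUdpTransportMapping());
            snmp.listen();

            CommunityTarget target = new CommunityTarget();
            target.setCommunity(new OctetString("public"));              // your community string
            target.setAddress(GenericAddress.parse("udp:10.1.1.1/161")); // appliance IP (placeholder)
            target.setVersion(SnmpConstants.version2c);
            target.setRetries(2);
            target.setTimeout(1500);

            PDU pdu = new PDU();
            pdu.setType(PDU.GET);
            pdu.add(new VariableBinding(new OID("1.3.6.1.4.1.15497.1.1.1.1"))); // placeholder OID
            ResponseEvent resp = snmp.get(pdu, target);
            if (resp.getResponse() != null) {
                System.out.println(resp.getResponse().getVariableBindings());
            } else {
                System.out.println("SNMP timeout - appliance unreachable?");
            }
            snmp.close();
        }
    }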

  • Handling Error queue and Resolving issues - Best practices

    This is related to Oracle 11g R2 streams replication.
I have table A at the source and table B at the destination. During replication by Oracle Streams, if there is an error in the apply process, the LCR goes to the error queue table.
Please share best practices on the following:
1. Maintaining the error queue table,
2. Monitoring the errors in such a way that other LCRs are not affected,
3. Reapplying resolved LCRs from the error queue back to table B, and
4. Retaining synchronization without degrading the performance of Streams.
Please share some real-world insight into the error queue handling mechanism.
    Appreciate your help in advance.

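For point 3, the documented Oracle mechanism is DBMS_APPLY_ADM.EXECUTE_ERROR (or EXECUTE_ALL_ERRORS), driven off the rows listed in the DBA_APPLY_ERROR view. A minimal JDBC sketch, assuming placeholder connection details and transaction id:

    import java.sql.*;

    public class ReapplyErrors {
        public static void main(String[] args) throws SQLException {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "strmadmin", "password")) {
                // Inspect the error queue first.
                try (Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery(
                         "SELECT local_transaction_id, error_message FROM dba_apply_error")) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1) + " : " + rs.getString(2));
                    }
                }
                // After fixing the underlying issue, re-execute one transaction.
                try (CallableStatement cs = con.prepareCall(
                        "BEGIN DBMS_APPLY_ADM.EXECUTE_ERROR(?); END;")) {
                    cs.setString(1, "5.24.8924"); // placeholder local_transaction_id
                    cs.execute();
                }
            }
        }
    }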

  • Basic Strategy / Best Practices for System Monitoring with Solution Manager

I am very new to SAP and the Basis group at my company. I will be working on a project to identify best practices for system- and service-level monitoring using Solution Manager. I have read a good amount about SAP Solution Manager and the concept of monitoring, but need to begin mapping out a monitoring strategy.
We currently utilize the RZ20 transaction and basic CCMS monitors, such as watching for update errors, availability, short dumps, etc. What else should be monitored in order to proactively find possible issues? Are there any best practices you have found when implementing monitoring for new solutions added to the SAP landscape? What are common things we would want to monitor across, say, ERP, CRM, SRM, etc.?
    Thanks in advance for any comments or suggestions!

    Hi Mike,
    Did you try the following link ?
    If not, it may be useful to some extent:
    http://service.sap.com/bestpractices
    ---> Cross-Industry Packages ---> Best Practices for Solution Management
    You have quite a few documents there - those on BPM may also cover Solution Monitoring aspects.
    Best regards,
    Srini

  • Best practice on monitoring Endeca health / defining outage

    (This is a double post from the Endeca Experience Management forum)
I am looking for best practices on how to define an Endeca service outage and monitor the health of the system. I understand this depends on your user requirements and may vary from customer to customer. Specifically, what criteria do you use to notify your engineers that there is a problem? We have our load balancers pinging the dgraphs on an interval; however, the ping operation is not sufficient in our use case. We are also experimenting with running a "low cost" query against the dgraphs on an interval and using query latency thresholds to determine an outage. I want to hear from people in the field running large commercial web sites about your best practices for monitoring and notification of system health.
    Thanks.

The performance metrics should help you analyse queries and fine-tune the system.
Here are a few best practices:
    1. Reduce the number of components per page
    2. Avoid complex LQL queries
    3. Keep the LQL threshold small
    4. Display the minimum number of columns needed
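On the "low cost query plus latency threshold" approach from the original post, a hypothetical sketch of such a probe (the URL, threshold, and interval are placeholders to tune against your SLA):

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class DgraphProbe {
        public static void main(String[] args) throws Exception {
            String probeUrl = "http://dgraph01:15000/admin?op=ping"; // placeholder host/port
            long thresholdMs = 500; // latency budget
            int failures = 0;
            while (true) {
                long start = System.nanoTime();
                boolean ok;
                try {
                    HttpURLConnection c = (HttpURLConnection) new URL(probeUrl).openConnection();
                    c.setConnectTimeout(2000);
                    c.setReadTimeout(2000);
                    int code = c.getResponseCode();
                    long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                    ok = code == 200 && elapsedMs <= thresholdMs;
                } catch (Exception e) {
                    ok = false; // refused or timed out counts as a failure
                }
                // Require consecutive failures so one slow query doesn't page anyone.
                failures = ok ? 0 : failures + 1;
                if (failures >= 3) {
                    System.out.println("ALERT: dgraph slow or unreachable");
                }
                Thread.sleep(30_000); // probe interval
            }
        }
    }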

  • Best practice for data persistance for monitoring without BAM

    Greetings,
We are modeling a business process in a large organization using BPEL Process Manager. The key point is that business people need to monitor the execution of the business process at several key points of the process execution, and they also need to get report information about the process.
To model this in our project, we decided to create a new Oracle database schema that will hold the information about the business process execution (we decided this because, for this initial offering, the customer is not buying BAM). In this context, the BPEL process will send this key information to the repository so business people can view real-time information about the process execution, as well as historical information in the form of reports.
The important issue here is whether there is a best practice for sending the information to the database schema. Could it be done just using database adapters? Or maybe using sensors sending the data over topic connections?
    Any help will be highly appreciated.
    Thanks in advance.

Hi, yes, this suggestion is nice. First configure the sensors (activity or variable), then configure the sensor action as a JMS topic, which will in turn insert the data into a DB. Alternatively, when you configure the sensor action as a DB, the data goes to the Oracle Reports schema. Is there any chance of altering that, for example by changing config files, so that the data doesn't go to that Reports schema and instead goes to a custom schema created by a user? I don't know if it can be done. My problem is that when I configure the JMS topic for sensor actions, I see blank data coming through; for some reason or other the data is not getting posted. I have used an ESB routing service based on the schema which I am monitoring. Can anyone help?
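For the JMS-topic route, the consuming side can be as simple as a subscriber that inserts each sensor payload into the custom schema. A hedged sketch (JNDI names, topic, table, and credentials are all placeholders):

    import java.sql.*;
    import javax.jms.*;
    import javax.naming.InitialContext;

    public class SensorToDb implements MessageListener {
        private final Connection db;

        SensorToDb(Connection db) { this.db = db; }

        public void onMessage(Message m) {
            try {
                String payload = ((TextMessage) m).getText(); // sensor XML
                try (PreparedStatement ps = db.prepareStatement(
                        "INSERT INTO proc_monitor (received_at, payload) VALUES (SYSTIMESTAMP, ?)")) {
                    ps.setString(1, payload);
                    ps.executeUpdate();
                }
            } catch (Exception e) {
                e.printStackTrace(); // dead-letter handling omitted
            }
        }

        public static void main(String[] args) throws Exception {
            Connection db = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "monitor", "password");
            InitialContext ctx = new InitialContext(); // jndi.properties on the classpath
            TopicConnectionFactory tcf = (TopicConnectionFactory) ctx.lookup("jms/SensorTCF");
            TopicConnection tc = tcf.createTopicConnection();
            TopicSession session = tc.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = (Topic) ctx.lookup("jms/SensorTopic");
            session.createSubscriber(topic).setMessageListener(new SensorToDb(db));
            tc.start();
            Thread.currentThread().join(); // keep the subscriber alive
        }
    }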

  • Best Practice for monitoring database targets configured for Data Guard

    We are in the process of migrating our DB targets to 12c Cloud Control. 
    In our current 10g environment the Primary Targets are monitored and administered by OEM GC A, and the Standby Targets are monitored by OEM GC B.  Originally, I believe this was because of proximity and network speed, and over time it evolved to a Primary/Standby separation.  One of the greatest challenges in this configuration is keeping OEM jobs in sync on both sides (in case of switchover/failover).
    For our new OEM CC environment we are setting up CC A and CC B.  However, I would like to determine if it would be smarter to monitor all DB targets (Primary and Standby) from the same CC console.  In other words, monitor and administer DB Primary and Standby from the same OEM CC Console.   I am trying to determine the best practice.  I am not sure if administering a swichover from Cloud Control from Primary to Standby requires that both targets are monitored in the same environment or not.
    I am interested in feedback.   I am also interested in finding good reference materials (I have been looking at Oracle documentation and other documents online).   Thanks for your input and thoughts.  I am deliberately trying to keep this as concise as possible.

OMS is a tool; strictly speaking you are not required to monitor your primary and standby from the same one, which is what I meant by the comment.
The reason you want the same OMS to monitor both the primary and the standby is that the Data Guard administration screen will show both targets. You will also have the option of doing switchovers and failovers, as well as converting the primary or standby. One of the options is also to move all the jobs scheduled against the primary over to the standby during a switchover or failover.
There is no document that states you need to have all targets on one OMS, but that is the best method given the reason for having OMS in the first place: it is a tool for keeping all targets in a central repository. If you start having different OMS servers and repositories, you will need to log into separate OMS consoles to administer the targets.

  • Best Practice for ASA Route Monitoring Options?

We have a pair of Cisco ASA 5505s in different locations, with two point-to-point links between those locations: one primary link (static route with a low metric) and one backup (static route with a high metric). The tracked option is enabled to monitor the state of the primary route. The detailed parameters are as follows:
Frequency: 30 seconds
Data size: 28 bytes
Threshold: 3000 milliseconds
ToS: 0
Timeout: 3000 milliseconds
Number of packets: 8
    ------ show run------
    sla monitor 1
    type echo protocol ipIcmpEcho 10.200.200.2 interface Intersite_Traffic
    num-packets 8
    timeout 3000
    threshold 3000
    frequency 30
    sla monitor schedule 1 life forever start-time now
    ------ show run------
I'm not sure whether these settings are so sensitive that the secondary static route will kick in right away, even when brief link flaps occur.
What is the best practice for setting these parameters in a production environment? How can we choose reasonable monitoring options to fit our needs?
    Thank you for any idea.

    Hello,
Of course, settings that are too sensitive might cause failover when only a few packets get lost, but remember that the whole purpose of this is to give your network as little downtime as possible.
Now, if you tune these parameters, what happens is that failover will be triggered on a different time basis.
The following is taken from a Cisco document (with the SLA process tuned as it states, 3 packets are sent every 10 seconds, and those packets need to fail for the SLA to trigger). This Cisco configuration example looks good, but there are network engineers who would rather use tighter timers than that.
    sla monitor 123
    type echo protocol ipIcmpEcho 10.0.0.1 interface outside
    num-packets 3
    frequency 10
    Regards,
    Remember to rate all of the helpful posts ( If you need assistance knowing how to rate a post just let me know )

  • Best practice to monitor 10gR3 OSB performance using JMX API?

    Hi guys,
    I need some advice on the best practice to monitor 10gR3 OSB performance using JMX API.
Just to show I have done my homework: I managed to get the JMX sample code from
    http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/jmx_monitoring/example.html#wp1109828
    working.
    The following is the list of options I am think about:
* Setup: I have a cluster of 1 admin server with 2 managed servers, where each managed server runs an instance of OSB
    * What I try to achieve:
- use the JMX API to collect OSB stats data periodically, as in the sample code above, then save the data as records in a database table
    Options/ideas:
1. Simplest approach: run a modified version of the JMX sample on the Admin Server to save stats data to the database regularly. I can't see problems with this one ...
    2. Use WLI to schedule the Task of collecting stats data regularly. May be overkill if option 1 above is good for production
    3. Deploy a simple web app on Admin Server, say a simple servlet that displays a simple page to start/stop and configure
    data collection interval for the timer
    What approach would you experts recommend?
BTW, the caveats on using JMX at http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/jmx_monitoring/concepts.html#wp1095673
say:
         Oracle strongly discourages using this API in a concurrent manner with more than one thread or process. This is because a reset performed in
         one thread or process is not visible to another threads or processes. This caveat also applies to resets performed from the Monitoring Dashboard of
         the Oracle Service Bus Console, as such resets are not visible to this API.
    Under what scenario would I be breaking this rule? I am a little worried about its statement
         discourages using this API in a concurrent manner with more than one thread or process
    Thanks in advance,
    Sam

    Hi Manoj,
Thanks for getting back to me. I am afraid configuring the aggregation interval from the Dashboard doesn't solve the problem, as I need to collect stats data per endpoint URI on an hourly or daily basis, then output it to CSV files so line graphs can be drawn for chosen applications.
Just for those who may be interested: it's not possible to use SQL to query the database tables to extract OSB stats for a specified time period, say 9am - 5pm. I raised a support case already, and the response I got back was 'No'.
    That means using JMX API will be the way to go :)
    Has anyone actually done this kind of OSB stats report and care to give some pointers?
I am thinking of using 7 days or 1 day as the aggregation interval set in the Dashboard of the OSB admin console, then collecting stats data via JMX (as described in the previous link) hourly, using the WebLogic Server JMX Timer Service as described at
    http://download.oracle.com/docs/cd/E12840_01/wls/docs103/jmxinst/timer.html instead of Java's Timer class.
    Not sure if this is the best practice.
    Thanks,
    Regards,
    Sam
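For reference, the JMX plumbing the sample code relies on boils down to connecting to the domain runtime MBean server on the admin server. A hedged sketch (host, port, and credentials are placeholders; the WebLogic client jar must be on the classpath):

    import java.util.Hashtable;
    import javax.management.MBeanServerConnection;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;
    import javax.naming.Context;

    public class OsbStatsClient {
        public static void main(String[] args) throws Exception {
            JMXServiceURL url = new JMXServiceURL("t3", "adminhost", 7001,
                    "/jndi/weblogic.management.mbeanservers.domainruntime");
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.SECURITY_PRINCIPAL, "weblogic");   // placeholder
            env.put(Context.SECURITY_CREDENTIALS, "password"); // placeholder
            env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                    "weblogic.management.remote");
            JMXConnector connector = JMXConnectorFactory.connect(url, env);
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // From here, look up the ServiceDomainMBean exactly as the sample
            // does, pull the statistics, and write a row to the stats table.
            System.out.println(mbs.getMBeanCount() + " MBeans visible");
            connector.close();
        }
    }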

  • Best practices in Queue table maintenance

    Hi Fellow AQ Users,
    I am looking to hear from the community about best practices in queue table maintenance.
    I have been mining through metalink about various Oracle recommendations and putting
    together a set of recommendations as a starting point for my DBAs.
    I am looking to answer questions like these --
    How often (in relation to messaging load) would you coalesce and rebuild the indexes?
How often would you rebuild the table itself to get rid of high-water-mark issues,
and what procedure would you use to do that?
    Would really love to learn from your experiences in this area. We are using 9.2.0.7
    64 bit DB and have plans to go to 10g over the next year. So, I am looking at 9i related
    stuff and then 10g.
    Thanks
    Vijay

    Hello,
In general you coalesce once per day, ideally during a quiet time, to avoid ORA-54 errors as per <Note:271855.1>. Some customers do it more often than that, but once per day is a good starting point.
In terms of shrinking the queue tables, you can use the procedure in <Note:304522.1> with a null third parameter. This is an offline procedure, so you can only run it during a maintenance window. From 10.2 onwards you can dynamically shrink the queue table and IOTs. Again, how often you might need to do this depends on exactly what you are doing with your queue tables.
    Thanks
    Peter

  • Best practice using clusters to create queue/notifier/bundles ?

I have in a block diagram a queue, a notifier, and several bundle-cluster instances that all use the same data structure. There is a cluster typedef for the data structure.
    Of course, each of these objects (define queue, define notifier, bundle)
    need to know how the cluster is defined.
    What is considered best practice?
    1) create a dummy instance of the cluster everywhere the data structure
        definition is needed (and hide them all on the FP)
    2) create only one instance and wire it to all the places it is needed
       But there is no data flow on this wire -- it's only the cluster *definition*
       that is being used, so this would seem to clutter up the BD unnecessarily.
    3) Create only one control instance of the cluster and use local variables
everywhere else the cluster definition is needed. Its _value_ is never
        assigned or read so there's no issue with race conditions.
    4) Some other way?
    If you had to clean up someone else's code, how would you hope to
    see this handled?
    It occurred to me in the course of writing this that where I have
    "unbundle... code ... bundle" I could wire the original bundle to 
    both "unbundle" and "bundle" -- but would this be too confusing
and clutter the BD with unnecessary wires?
    Thanks and Best Regards,
    -- J.

    Discovered a rather unpleasant side-effect of using the typedef constant cluster this way. Since I'm using the typedef cluster constant only to communicate the definition of the cluster--not for dataflow--I resize it to the same size and shape as a control terminal.  This is shown in the first image, clean_cluster.jpg.   I was also careful to set "Autosizing->None" on all instances of the constant.   However, when I updated the typedef, all instances of the constant resized!  That's dozens of instances across many different VIs.   Why was "Autosizing->None" not respected?  Is there a better way to do this so that my BD doesn't get messed up if I have to update a typedef such as this one?
    Thanks and Best Regards,
    J. 
Attachments:
clean_cluster.png (2 KB)
Cluster_Updated.png (3 KB)

    Hi everyone, I'm currently facing a hard problem: I have to fetch a report (designed with CR 9) having an .xls file as datasource. While running the JSP, I continue getting this error (translated, as I receive it in French!! -don't know why): Unexpec